Geography of Lithuania Lithuania is a country in the Baltic region of Europe. The most populous of the Baltic states, Lithuania has a Baltic coastline consisting of the continental coast and the Curonian Spit coast. Lithuania's major warm-water port of Klaipėda (Memel) lies at the narrow mouth of the Curonian Lagoon, a shallow lagoon extending south to Kaliningrad and separated from the Baltic Sea by the Curonian Spit, where Kuršių Nerija National Park was established for its remarkable sand dunes. The Neman River and some of its tributaries are used for internal shipping (in 2000, 89 inland ships carried 900,000 tons of cargo, less than 1% of the total goods traffic). Situated between latitudes 53.53° and 56.27° N and longitudes 20.56° and 26.50° E, Lithuania is glacially flat, except for morainic hills in the western uplands and eastern highlands, none higher than 300 metres. The terrain is marked by numerous small lakes and swamps, and a mixed forest zone covers over 33% of the country. The growing season lasts 169 days in the east and 202 days in the west, with most farmland consisting of sandy- or clay-loam soils. Limestone, clay, sand, and gravel are Lithuania's primary natural resources, but the coastal shelf may hold commercially significant oil deposits, and the southeast could provide high yields of iron ore and granite. According to some geographers, the geographical midpoint of Europe lies just north of Lithuania's capital, Vilnius. Lithuania is situated on the eastern shore of the Baltic Sea. Its boundaries have changed several times since 1918, but they have been stable since 1945. Currently, Lithuania covers an area of about 65,300 square kilometres. About the size of West Virginia, it is larger than Belgium, Denmark, Latvia, the Netherlands, or Switzerland. Lithuania borders Latvia on the north, Belarus on the east and south, and Poland and the Kaliningrad region of Russia on the southwest. It is a country of gently rolling hills, many forests, rivers and streams, and lakes. Its principal natural resource is agricultural land. Lithuania's northern neighbor is Latvia; the two countries share a border that extends 453 kilometres. Lithuania's eastern border with Belarus is longer, stretching 502 kilometres. The border with Poland on the south is relatively short, only 91 kilometres, but it is very busy because of international traffic. Lithuania also has a 227-kilometre border with Russia. The Russian territory adjacent to Lithuania is Kaliningrad Oblast, the northern part of the former German East Prussia, including the city of Kaliningrad. Finally, Lithuania has 108 kilometres of Baltic seashore with an ice-free harbor at Klaipėda. The Baltic coast offers sandy beaches and pine forests and attracts thousands of vacationers. Lithuania lies at the edge of the North European Plain. Its landscape was shaped by the glaciers of the last Ice Age, which retreated about 25,000–22,000 years before present. Lithuania's terrain is an alternation of moderate lowlands and highlands. The highest elevation is 297.84 metres above sea level, found in the eastern part of the republic and separated from the uplands of the western region of Samogitia by the fertile plains of the southwestern and central regions. The landscape is punctuated by 2,833 lakes larger than 10,000 m² and 1,600 smaller ponds. The majority of the lakes are found in the eastern part of the country. Lithuania also has 758 rivers longer than ten kilometres.
The largest river is the Nemunas (total length 917 km), which originates in Belarus. Other large waterways are the Neris (510 km), Venta (346 km), and Šešupė (298 km) rivers. However, only 600 kilometres of Lithuania's rivers are navigable. Once a heavily forested land, Lithuania's territory today consists of only 32.8 percent woodlands, primarily pine, spruce, and birch forests. Ash and oak are very scarce. The forests are rich in mushrooms and berries, as well as a variety of plants. Lithuania has a humid continental climate ("Dfb" in the Köppen climate classification). Average January temperatures are milder on the coast than in Vilnius, while July averages are similar in both. Summer days are typically warm, with markedly cooler nights, and temperatures occasionally exceed 30 °C in summer. Winters when easterly flows from Siberia predominate, like 1941–42, 1955–56 and 1984–85, are very cold, whereas winters dominated by westerly maritime airflows, like 1924–25, 1960–61 and 1988–89, are mild, with temperatures above freezing a normal occurrence. Severe frost occurs almost every winter, and winter temperature extremes are less harsh at the coast than in the east of Lithuania. Average annual precipitation is highest in the Samogitian highlands and on the coast, and lowest in the eastern part of the country. Snow falls every year, from October to April; in some years sleet can fall in September or May. The growing season lasts 202 days in the western part of the country and 169 days in the eastern part. Severe storms are rare in the eastern part of Lithuania but common nearer the coast. The longest measured temperature records from the Baltic area cover about 250 years. The data show that there were warm periods during the latter half of the eighteenth century, and that the nineteenth century was a relatively cool period. An early twentieth-century warming culminated in the 1930s, followed by a smaller cooling that lasted until the 1960s. A warming trend has persisted since then. Lithuania experienced a drought in 2002, causing forest and peat bog fires. The country suffered along with the rest of Northwestern Europe during a heat wave in the summer of 2006. Concerned with environmental deterioration, Lithuanian governments have created several national parks and reservations. The country's flora and fauna have suffered, however, from an almost fanatical drainage of land for agricultural use. Environmental problems of a different nature were created by the development of environmentally unsafe industries. Air pollution problems exist mainly in the cities, such as Vilnius, Kaunas, Jonava, Mažeikiai, Elektrėnai, and Naujoji Akmenė, the sites of fertilizer and other chemical plants, an oil refinery, a power station, and a cement factory. Water quality has also been an issue. The city of Kaunas, with a population of about 400,000, had no water purification plant until 1999; sewage was sent directly into the Neman River. Tertiary wastewater treatment was scheduled to come on-line in 2007. River and lake pollution are other legacies of Soviet carelessness with the environment. The Courland Lagoon, for example, separated from the Baltic Sea by a strip of high dunes and pine forests, is about 85 percent contaminated. Beaches in the Baltic resorts, such as the well-known vacation area of Palanga, are frequently closed for swimming because of contamination.
Forests affected by acid rain are found in the vicinity of Jonava, Mažeikiai, and Elektrėnai, the country's chemical, oil, and power-generation centers. Lithuania was among the first former Soviet republics to introduce environmental regulations. However, because of Moscow's emphasis on increasing production, and because of numerous local violations, technological backwardness, and political apathy, serious environmental problems now exist. Natural hazards: hurricane-force storms, blizzards, droughts, floods. Environment (current issues): contamination of soil and groundwater with petroleum products and chemicals at former Soviet military bases. Environment (international agreements): Air Pollution, Air Pollution-Nitrogen Oxides, Air Pollution-Persistent Organic Pollutants, Air Pollution-Sulphur 85, Air Pollution-Sulphur 94, Air Pollution-Volatile Organic Compounds, Biodiversity, Climate Change, Climate Change-Kyoto Protocol, Desertification, Endangered Species, Environmental Modification, Hazardous Wastes, Law of the Sea, Ozone Layer Protection, Ship Pollution, Wetlands. Lithuania has an abundance of limestone, clay, quartz sand, gypsum sand, and dolomite, which are suitable for making high-quality cement, glass, and ceramics. There is also an ample supply of mineral water, but energy sources and industrial materials are all in short supply. Oil was discovered in Lithuania in the 1950s, but only a few wells operate, all of them in the western part of the country. It is estimated that the Baltic Sea shelf and the western region of Lithuania hold commercially viable amounts of oil, but if exploited this oil would satisfy only about 20 percent of Lithuania's annual need for petroleum products for the next twenty years. Lithuania has a large amount of thermal energy along the Baltic Sea coast which could be used to heat hundreds of thousands of homes, as is done in Iceland. In addition, iron ore deposits have been found in the southern region of Lithuania. But commercial exploitation of these deposits would probably require strip mining, which is environmentally unsound. Moreover, exploitation of these resources will depend on Lithuania's ability to attract capital and technology from abroad. Natural resources: peat, arable land, amber. Land use: irrigated land: 13.4 km² (2011); total renewable water resources: 24.9 km³ (2011). Coastline: the coastline consists of 20 kilometres from Klaipėda, 50 kilometres at Cape Nehrung, and 21 kilometres in the region of Palanga and the mouth of the Šventoji river. The Memelland occupies two-thirds of the Lithuanian coastline.
https://en.wikipedia.org/wiki?curid=17821
Demographics of Lithuania This article is about the demographic features of the population of Lithuania, including population density, ethnicity, level of education, health, economic status, and religious affiliations. The earliest evidence of inhabitants in present-day Lithuania dates back to 10,000 BC. Between 3000 and 2000 BC, the people of the Corded Ware culture spread over a vast region of eastern Europe, between the Baltic Sea and the Vistula River in the west and the Moscow–Kursk line in the east. Merging with the indigenous peoples, they gave rise to the Balts, a distinct Indo-European ethnic group whose descendants are the present-day Lithuanian and Latvian nations and the now-extinct Old Prussians. The name of Lithuania was first mentioned in 1009. Proposed etymologies include a derivation from the word "Lietava", the name of a small river, and a possible derivation from the word "leičiai"; the most probable, however, is a name for the union of Lithuanian ethnic tribes ("susilieti" and "lietis" mean to unite, and "lietuva" means something which has been united). The primary Lithuanian state, the Duchy of Lithuania, emerged in the territory of Lietuva, the ethnic homeland of Lithuanians. At the birth of the Grand Duchy of Lithuania (GDL), ethnic Lithuanians made up about 70% of the population. With the acquisition of new Ruthenian territories, this proportion decreased to 50% and later to 30%. By the time of the largest expansion towards the lands of Kievan Rus', at the end of the 13th and during the 14th century, the territory of the GDL was about 800,000 km², of which 10% was ethnically Lithuanian. The ethnic Lithuanian population is estimated to have been 420,000 out of 1.4 million in 1375 (when the territory was about 700,000 km²), and 550,000 out of 3.8 million in 1490 (territory: 850,000 km²). In these estimates, the Ruthenians comprise only the ancestors of present-day Ukrainians and the whole of Belarus including the Smolensk and Mozhaisk lands, while the Galindians are counted as being of Lithuanian ethnicity (belonging to the same Baltic family as the Prussians and Latvians). In addition to the Ruthenians and Lithuanians, other significant ethnic groups throughout the GDL were Jews and Tatars. The combined population of Poland and the GDL in 1493 is estimated at 7.5 million, of whom 3.25 million were Poles, 3.75 million Ruthenians and 0.5 million Lithuanians. With the Union of Lublin, the Lithuanian Grand Duchy lost a large part of its lands to the Polish Crown (see demographics of the Polish–Lithuanian Commonwealth). After the Union of Lublin the ethnic Lithuanian proportion in the GDL remained about one quarter until the partitions. There was much devastation and population loss throughout the GDL in the mid and late 17th century, including among the ethnic Lithuanian population in Vilnius voivodeship. Besides this devastation, the Ruthenian population declined proportionally after the territorial losses to the Russian Empire. In 1770 there were about 4.84 million inhabitants in the GDL, of whom the largest ethnic group were Ruthenians; about 1.39 million were Lithuanians. The voivodeships with a majority ethnic Lithuanian population were the Vilnius, Trakai and Samogitian voivodeships, and these three voivodeships comprised the political center of the state. In the southern corner of Trakai voivodeship and the south-eastern part of Vilnius voivodeship there were also many Belarusians; in some of the south-eastern areas they were the major linguistic group.
The Ruthenian population formed a majority in the GDL from the time of the GDL's expansion in the mid-14th century, and the adjective "Lithuanian", besides denoting ethnic Lithuanians, from early times denoted any inhabitant of the GDL, including Slavs and Jews. The Ruthenian language, corresponding to today's Belarusian and Ukrainian, was then called Russian and was used as one of the chancellery languages by Lithuanian monarchs; however, fewer documents written in this language survive from the time of Vytautas than documents written in Latin and German. Later, Ruthenian became the main language of documentation and writing. In the years that followed, it was the main language of government until the introduction of Polish as the chancellery language of the Polish–Lithuanian Commonwealth in 1697; even so, there are also examples of documents written in Ruthenian from the second half of the 18th century. The Lithuanian language was used orally in the Vilnius, Trakai and Samogitian voivodeships, and by small numbers of people elsewhere. At the court of Zygmunt August, the last Jagiellonian ruler of Poland and Lithuania, both Polish and Lithuanian were spoken. After the Third Partition of the Polish–Lithuanian Commonwealth on October 24, 1795, between the Russian Empire, the Kingdom of Prussia and the Habsburg Monarchy, the Commonwealth ceased to exist and Lithuania became a part of the Russian Empire. After the abolition of serfdom in 1861, the use of the Polish language noticeably increased in eastern Lithuania and western Belarus. Many Lithuanians living further east were unable to receive the Lithuanian printed books smuggled into Lithuania by knygnešiai during the ban on printing books in the Latin alphabet, and they switched to Polish. Although Polish also used the Latin alphabet, it was much less affected by the ban, because it was still used by the politically important class of the nobility, was used predominantly in the biggest towns of Lithuania, and was supported by the church. The Lithuanian National Revival had begun to intensify by the end of the 19th century, and the number of Lithuanian speakers and people identifying themselves as ethnic Lithuanians started to increase; at the same time, however, many Polish-speaking Lithuanians, especially the former "szlachta", cut themselves adrift from the Lithuanian nation. There were population losses due to several border changes, Soviet deportations, the Holocaust of the Lithuanian Jews, and German and Polish repatriations during and after World War II. After World War II, the ethnic Lithuanian share of the population remained high, rising from 79.3% in 1959 to 83.5% in 2002. Lithuania's citizenship law and the Constitution meet international and OSCE standards, guaranteeing universal human and civil rights. Lithuanians are neither Slavic nor Germanic, although the union with Poland and German and Russian colonization and settlement left cultural and religious influences. The Klaipėda Region was annexed from Germany in 1923 but was not included in the 1923 census; a separate census in the Klaipėda Region was held in 1925. Among the Baltic states, Lithuania has the most homogeneous population. According to the census conducted in 2001, 83.4% of the population identified themselves as Lithuanians, 6.7% as Poles, 6.3% as Russians, 1.2% as Belarusians, and 2.3% as members of other ethnic groups. Poles are concentrated in the Vilnius Region, the area controlled by Poland in the interwar period.
There are especially large Polish communities in Vilnius district municipality (52% of the population) and Šalčininkai district municipality (77.8%). The Electoral Action of Poles in Lithuania, an ethnic minority political party, has strong influence in these areas and has representation in the Seimas. The party is most active in local politics and controls several municipal councils. Russians, even though they are almost as numerous as Poles, are much more evenly scattered and lack strong political cohesion. The most prominent community lives in Visaginas (52%). Most of them are engineers who moved with their families from the Russian SFSR to work at the Ignalina Nuclear Power Plant. A number of ethnic Russians (mostly military) left Lithuania after the declaration of independence in 1990. Another major change in the ethnic composition of Lithuania was the extermination of the Jewish population during the Holocaust. Before World War II about 7.5% of the population was Jewish; they were concentrated in cities and towns and had a significant influence on crafts and business. They were called Litvaks and had a strong culture. The population of Vilnius, sometimes nicknamed the Northern Jerusalem, was about 30% Jewish. Almost all of these Jews were killed during the Nazi German occupation or later emigrated to the United States and Israel. Now there are only about 4,000 Jews living in Lithuania. Lithuania's membership in the European Union has made Lithuanian citizenship all the more appealing. Lithuanian citizenship is, in theory, easier to obtain than that of many other European countries (see the notes on the court ruling below): descent from only one great-grandparent is enough to qualify. Persons who held citizenship in the Republic of Lithuania prior to June 15, 1940, and their children, grandchildren, and great-grandchildren (provided that these persons did not repatriate) are eligible for Lithuanian citizenship. Lithuanian citizens are allowed to travel and work throughout the European Union without a visa or other restrictions. The Lithuanian Constitutional Court ruled in November 2006 that a number of provisions of the Law of the Republic of Lithuania on Citizenship are in conflict with the Lithuanian Constitution. In particular, the court ruled that a number of provisions of the Citizenship Law implicitly or explicitly allowing dual citizenship conflict with the Constitution; such provisions amounted to the unconstitutional practice of making dual citizenship a common phenomenon rather than a rare exception. The provisions of the Citizenship Law announced to be unconstitutional are no longer valid and applicable, to the extent stated by the Constitutional Court. The Lithuanian Parliament amended the Citizenship Law substantially as a result of this court ruling, allowing dual citizenship for children of at least one Lithuanian parent who are born abroad, but preventing Lithuanians from retaining their Lithuanian citizenship after obtaining the citizenship of another country. Some special cases still permit dual citizenship; see Lithuanian nationality law. The Lithuanian language is the country's sole official language countrywide. It is the first language of almost 85% of the population and is also spoken by 286,742 of the 443,514 non-Lithuanians. The Soviet era imposed the official use of Russian, so most adult Lithuanians speak Russian as a second language, while the Polish population generally speaks Polish.
Russians who immigrated after World War II speak Russian as their first language. The younger generation usually speaks English as its second language, and a substantial portion of the total population (37%) speaks at least two foreign languages. According to the 2011 census, 30% of the population can speak English. Approximately 14,800 pupils started the 2012 school year in schools where the curriculum is conducted in Russian (down from 76,000 in 1991), and about 12,300 enrolled in Polish schools (compared to 11,400 in 1991 and 21,700 in 2001). There are also schools teaching in Belarusian, as well as in English, German, and French. There are perhaps 50 speakers of Karaim, a Turkic language spoken by Karaite Jews, in Lithuania. Lithuanian Sign Language and Russian Sign Language are used by the deaf community. In the 2011 census, 77.2% of Lithuanians identified themselves as Roman Catholic. The Church has been the majority denomination since the Christianisation of Lithuania at the end of the 14th century. Some priests actively led the resistance against the Communist regime (symbolised by the Hill of Crosses). In the first half of the 20th century, the Lutheran Protestant church had around 200,000 members, 9% of the total population, mostly Protestant Lithuanians from the former Memel Territory and Germans, but it has declined since 1945. Small Protestant communities are dispersed throughout the northern and western parts of the country. Believers and clergy suffered greatly during the Soviet occupation, with many killed, tortured or deported to Siberia. Various Protestant churches have established missions in Lithuania since 1990. 4.1% of the population are Orthodox, 0.8% are Old Believers (both mainly among the Russian minority), 0.8% are Protestant and 6.1% have no religion. Lithuania was historically home to a significant Jewish community and was an important center of Jewish scholarship and culture from the 18th century until the community, numbering about 160,000 before World War II, was almost entirely annihilated during the Holocaust. By 2011, around 3,000 people in Lithuania identified themselves as Jews, while around 1,200 identified with the Judaic religious community. According to the 2005 Eurobarometer Poll, 12% of Lithuanian citizens said that "they do not believe there is any sort of spirit, god, or life force", 36% answered that "they believe there is some sort of spirit or life force", and 49% responded that "they believe there is a God". The following demographic statistics are from the CIA World Factbook, unless otherwise indicated. Age structure: 0–14 years: 14.2% (male 258,423/female 245,115); 15–64 years: 69.6% (male 1,214,743/female 1,261,413); 65 years and over: 16.2% (male 198,714/female 376,771) (2009 est.). Population growth rate: −0.28% (2009 est.). Net migration rate: −0.72 migrant(s)/1,000 population (2009 est.). Sex ratio: at birth: 1.06 male(s)/female; under 15 years: 1.05 male(s)/female; 15–64 years: 0.96 male(s)/female; 65 years and over: 0.53 male(s)/female; total population: 0.89 male(s)/female (2009 est.). Infant mortality rate: total: 6.47 deaths/1,000 live births; male: 7.73 deaths/1,000 live births; female: 5.13 deaths/1,000 live births (2009 est.). Life expectancy at birth: total population: 74.9 years; male: 69.98 years; female: 80.1 years (2009 est.).
Total fertility rate: 1.29 children born/woman (2014). Suicide rate: 31.5 suicides per 100,000 people (2009). Divorce rate: 2.8 divorces per 1,000 people (2009); in 2004 Lithuania had one of the highest divorce rates in the European Union. (The population figures for 1939 exclude the Klaipėda Region.) The following data are from the Official Statistics Portal. According to the 2011 census, only around 0.2% of the Lithuanian population aged 10 and over were illiterate, the majority of them in rural areas; the proportion is similar for males and females. The general education system in Lithuania consists of primary, basic, secondary and tertiary education. Primary, basic and secondary (or high school) education is free of charge to all residents and is compulsory for pupils under 16 years of age. Pre-primary education is also available free of charge to 5- and 6-year-old children but is not compulsory; it is attended by about 90% of pre-school age children in Lithuania. Primary, basic and secondary education in Lithuania is available to some ethnic minorities in their native languages, including Polish, Russian and Belarusian. Primary schooling is available to children who have reached age 7 (or younger, should the parents so desire) and lasts four years. Primary school students are not assessed through a grade system; they receive oral or written feedback instead. Students begin studying their first foreign language in their second year of primary school. Data from the 2011 census showed that 99.1% of the population aged 20 and older have attained at least primary education, while around 27,000 pupils started the first grade in 2012. Basic education covers grades 5 to 10. It is provided by basic, secondary, youth and vocational schools and gymnasiums. After completing the 10th grade, students must take the basic education achievement test in the Lithuanian language and in mathematics, and an elective basic education achievement test in their mother tongue (Belarusian, Polish, Russian or German). In 2011, 90.9% of the population of Lithuania aged 20 or older had attained the basic level of education. Secondary education in Lithuania is optional and available to students who have attained basic education. It covers two years (the 11th–12th grades in secondary schools and the 3rd–4th grades in gymnasiums). At this level, students have the opportunity to adapt their study plans (subjects and study level) to their individual preferences. Secondary education is completed upon passing the national "matura" examinations, which consist of as many as six separate examinations, of which two (Lithuanian Language and Literature and one elective subject) are required to attain the diploma. As of 2011, 78.2% of the population of Lithuania aged 20 or older had attained the secondary level of education, including secondary education provided by vocational schools. More than 60% of the graduates from secondary school every year choose to continue education at colleges and universities of the Lithuanian higher education system. As of 2013, there were 23 universities (including academies and business schools recognized as such) and 24 colleges operating in Lithuania. Vilnius University, founded in 1579, is the oldest and largest university in Lithuania. More than 48,000 students enrolled in all higher education programmes in Lithuania in 2011, including level I (professional bachelor and bachelor), level II (masters) and level III (doctorate) studies.
Higher education in Lithuania is partly state-funded, with free-of-charge access to higher education constitutionally guaranteed to students deemed "good". There are also scholarships available to the best students.
https://en.wikipedia.org/wiki?curid=17822
Politics of Lithuania Politics of Lithuania takes place in a framework of a unitary semi-presidential representative democratic republic, whereby the President of Lithuania is the head of state and the Prime Minister of Lithuania is the head of government, and of a multi-party system. Executive power is exercised by the President and the Government, which is headed by the Prime Minister. Legislative power is vested in both the Government and the unicameral Seimas (the Lithuanian parliament). Judicial power is vested in judges appointed by the President of Lithuania and is independent of the executive and legislative powers. The judiciary consists of the Constitutional Court, the Supreme Court, and the Court of Appeal, as well as separate administrative courts. The Constitution of the Republic of Lithuania established these powers upon its approval on 25 October 1992. Because Lithuania has a multi-party system, its government is not dominated by any single political party; rather, it consists of numerous parties that must work with each other to form coalition governments. Since Lithuania declared independence on 11 March 1990, it has maintained strong democratic traditions. Drawing from the interwar experience, politicians made many different proposals, ranging from strong parliamentarism to a presidential republic with checks and balances similar to the United States. Through compromise, a semi-presidential system was settled on. In a referendum on 25 October 1992, the first general vote of the people since the declaration of independence, 56.75% of the total number of voters supported the new constitution. All major political parties declared their support for Lithuania's membership in NATO and the European Union (EU). Lithuania joined NATO on 29 March 2004 and the EU on 1 May 2004. Since 1991, Lithuanian voters have shifted from right to left and back again, swinging between the Conservatives, led by Vytautas Landsbergis, and the (formerly Communist) Democratic Labour Party of Lithuania, led by president Algirdas Brazauskas. During this period, the prime minister was Gediminas Vagnorius. Valdas Adamkus was president for most of the period after 1998. His prime minister was Rolandas Paksas, whose government got off to a rocky start and collapsed within seven months. The alternation between left and right was broken in the October 2000 elections, when the Liberal Union and New Union parties won the most votes and were able to form a centrist ruling coalition with minor partners. President Adamkus played a key role in bringing the new centrist parties together. Artūras Paulauskas, the leader of the centre-left New Union (also known as the Social Liberal party), became the Chairman of the Seimas. In July 2001, the centre-left New Union party forged an alliance with the Social Democratic Party of Lithuania and formed a new cabinet under former president Algirdas Brazauskas. On 11 April 2006, Artūras Paulauskas was removed from his position and Viktoras Muntianas was elected Chairman of the Seimas. The cabinet of Algirdas Brazauskas resigned on 31 May 2006, after President Valdas Adamkus expressed no confidence in two of the ministers, formerly party colleagues of Brazauskas, over ethical principles. Brazauskas decided not to remain in office as acting Prime Minister and announced that he was retiring from politics for good. Even so, he led the ruling Social Democratic Party of Lithuania for one more year, until 19 May 2007, when he passed the reins to Gediminas Kirkilas.
On 27 November 2008, Andrius Kubilius was appointed Prime Minister. In 2012, Algirdas Butkevičius became Prime Minister. On 22 November 2016, Saulius Skvernelis became Prime Minister. Government in Lithuania is made up of the three branches originally envisioned by the Enlightenment philosopher Baron de Montesquieu: executive, legislative, and judicial. Each branch is separate and is set up to provide checks and balances on the other branches. The executive branch of the Lithuanian government consists of the President, the Prime Minister, and the Council of Ministers, and is in charge of running the government. The President of Lithuania is the head of state, elected directly for a five-year term, and can serve a maximum of two consecutive terms. Presidential elections take place in a modified version of the two-round system. If at least half of the registered voters participate, a candidate must win a majority of the total valid vote to win election in the first round. If fewer than half participate, a candidate can win outright with a plurality, provided that he or she wins at least one third of the total vote. If the first round does not produce a president, a runoff is held between the top two finishers in the first round, with a plurality sufficient to win. The President is responsible, with the approval of the Seimas, for appointing the Prime Minister. On the Prime Minister's recommendation, the President also appoints the Council of Ministers (heading the 13 ministries), as well as a number of other top civil servants and the judges of all courts. The President also serves as commander-in-chief, oversees foreign and security policy, addresses political problems of foreign and domestic affairs, proclaims states of emergency, considers the laws adopted by the Seimas, and performs other duties specified in the Constitution. Lithuanian presidents have somewhat greater power than their counterparts in Estonia and Latvia, and have more influence in foreign policy than in domestic policy. Dalia Grybauskaitė has served as president of Lithuania since July 2009, winning re-election in 2014. Grybauskaitė succeeded Valdas Adamkus, who had served a total of two non-consecutive terms. Former president Rolandas Paksas, who had defeated Adamkus in 2003, was impeached in April 2004 for leaking classified information. The Prime Minister of Lithuania is the head of government, appointed by the President and approved by the Seimas. The Prime Minister, within 15 days of being appointed, is responsible for choosing ministers for the President to approve for each of the 13 ministries. In general, the Prime Minister is in charge of the affairs of the country, maintains homeland security, carries out the laws and resolutions of the Seimas and the decrees of the President, maintains diplomatic relations with foreign countries and international organizations, and performs other duties specified in the Constitution. As in the cabinets of other nations, the Council of Ministers consists of 13 ministers chosen by the Prime Minister and appointed by the President. Each minister is responsible for his or her own ministry of the Lithuanian government and must report on its work when directed to. When the Prime Minister resigns or dies, the position is to be filled as soon as possible, and the new leader appoints a new Government. The parliament (Seimas) has 141 members, elected for a four-year term.
About half of the members (71) are elected in single-member districts, and the other half (70) are elected in a nationwide vote using proportional representation by party lists. A party must receive at least 5% of the national vote to be represented in the Seimas. The judges of the Constitutional Court of the Republic of Lithuania ("Lietuvos Respublikos Konstitucinis Teismas") are appointed by the Seimas for a single nine-year term from candidates presented by the President (three judges), the Chairman of the Seimas (three judges) and the chairman of the Supreme Court (three judges). Lithuania has a three-tier administrative division: the country is divided into 10 counties (Lithuanian: singular "apskritis", plural "apskritys") that are further subdivided into 60 municipalities (Lithuanian: singular "savivaldybė", plural "savivaldybės"), which consist of over 500 elderships (Lithuanian: singular "seniūnija", plural "seniūnijos"). The institution of county governor (Lithuanian: "apskrities viršininkas") and the county administrations were dissolved in 2010. Municipalities are the most important administrative unit. Some municipalities are historically called "district municipalities", often shortened to "district"; others are called "city municipalities", sometimes shortened to "city". Each municipality has its own elected government. In the past, the election of municipal councils occurred once every three years, but it now takes place every four years. The council appoints elders to govern the elderships. Mayors have been elected directly since 2015; before that, they were appointed by the council.
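The first-round rule for presidential elections described above is concrete enough to express as a short decision function. The sketch below is illustrative only: the function name, the turnout representation, and the reading of "total vote" as total valid votes cast are assumptions of this example, not official definitions.

from typing import Optional

def first_round_winner(votes: dict, turnout: float) -> Optional[str]:
    """Return the first-round winner, or None if a runoff is needed.

    votes: candidate name -> valid votes received (names illustrative).
    turnout: fraction of registered voters who participated.
    """
    total = sum(votes.values())
    leader = max(votes, key=votes.get)
    if turnout >= 0.5:
        # With at least half of voters participating, an outright win
        # requires a majority of the valid votes cast.
        if votes[leader] * 2 > total:
            return leader
    elif votes[leader] * 3 >= total:
        # With lower turnout, a plurality suffices, provided the leader
        # takes at least one third of the vote (assumed: of valid votes).
        return leader
    return None  # runoff between the top two finishers

# Example: 45% turnout; the leader holds 40% of valid votes and wins outright.
print(first_round_winner({"A": 400_000, "B": 350_000, "C": 250_000}, 0.45))

In the majority branch, the strict inequality (votes[leader] * 2 > total) reflects the requirement of more than half of the valid votes, while the low-turnout branch uses a non-strict one-third threshold.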
https://en.wikipedia.org/wiki?curid=17823
Telecommunications in Lithuania This article provides an overview of telecommunications in Lithuania, including radio, television, telephones, and the Internet. The Communications Regulatory Authority of the Republic of Lithuania (RRT) is Lithuania's independent communications-industry regulator. It was established under the Law on Telecommunications and the provisions of the European Union directives to ensure that the industry remains competitive. There are no government restrictions on access to the Internet, nor credible reports that the government monitors e-mail or Internet chat rooms without appropriate legal authority. Individuals and groups generally engage in the free expression of views via the Internet, including by e-mail, but authorities prosecute people for openly posting material on the Internet that the authorities consider to incite hatred. The constitution provides for freedom of speech and press, and the government generally respects these rights in practice. An independent press, an effective judiciary, and a functioning democratic political system combine to promote these freedoms. However, the constitutional definition of freedom of expression does not protect certain acts, such as incitement to national, racial, religious, or social hatred, violence, and discrimination, or slander and disinformation. It is a crime to deny or "grossly trivialize" Soviet or Nazi German crimes against Lithuania or its citizens, or to deny genocide, crimes against humanity, or war crimes. In the first 11 months of 2012, authorities initiated investigations into 259 allegations of incitement of hatred and six of incitement of discrimination, most of them over the Internet. Authorities forwarded 69 of those allegations to the courts for trial, closed 68, and suspended 113 for lack of evidence; the others remained under investigation. Most allegations of incitement of hatred involved racist or anti-Semitic expression, or hostility based on sexual orientation, gender identity, or nationality. It is a crime to disseminate information that is both untrue and damaging to an individual's honor and dignity. Libel is punishable by a fine or imprisonment of up to one year, or up to two years for libelous material disseminated through the mass media. While it is illegal to publish material "detrimental to minors' bodies or thought processes", or information promoting the sexual abuse and harassment of minors, promoting sexual relations among minors, or "sexual relations", the law is not often invoked, and there are no indications that it adversely affects freedom of the media. The constitution prohibits arbitrary interference in an individual's personal correspondence or private and family life, but there were reports that the government did not respect these prohibitions in practice. The law requires authorities to obtain judicial authorization before searching an individual's premises and prohibits the indiscriminate monitoring by the government or other parties of citizens' correspondence or communications. However, domestic human rights groups allege that the government does not properly enforce the law.
https://en.wikipedia.org/wiki?curid=17825
Lithuanian Armed Forces The Lithuanian Armed Forces consist of 20,565 active personnel. Conscription ended in September 2008 but was reintroduced in 2015 because of concerns about the geopolitical environment in light of Russia's military intervention in Ukraine. Lithuania's defence system is based on the concept of "total and unconditional defence" mandated by Lithuania's National Security Strategy. The goal of Lithuania's defence policy is to prepare society for general defence and to integrate Lithuania into Western security and defence structures. The defence ministry is responsible for combat forces, search and rescue, and intelligence operations. The 4,800 border guards fall under the Interior Ministry's supervision and are responsible for border protection and passport and customs duties, and share responsibility with the navy for the interdiction of smuggling and drug trafficking. A special security department handles VIP protection and communications security. In May 2015 the Lithuanian parliament voted to reintroduce conscription, and the conscripts started their training in August 2015. This followed the Crimean crisis, which heightened international tensions and ended the brief seven-year respite after Lithuania abolished conscription in 2008. The Lithuanian Armed Forces consist of the Lithuanian Land Force, Lithuanian Air Force, Lithuanian Naval Force, Lithuanian Special Operations Force and other units: Logistics Command, Training and Doctrine Command, Headquarters Battalion, and Military Police. The Special Operations Forces and Military Police are directly subordinate to the Chief of Defence. The Reserve Forces are under the command of the Lithuanian National Defence Volunteer Forces. The core of the Lithuanian Land Force structure is the "Iron Wolf" Mechanised Infantry Brigade (MIB "Iron Wolf"), consisting of three mechanised infantry battalions (the Grand Duke Kestutis, Grand Duke Algirdas and Grand Duke Vaidotas mechanised battalions) and an artillery battalion. Other units include the King Mindaugas Hussar Battalion, the Grand Duchess Birute Uhlan Battalion, the Grand Duke Butigeidis Dragoon Battalion, the Juozas Vitkus Engineer Battalion and the Juozas Luksa Land Force Training Center. The Lithuanian Land Forces are undertaking a major modernisation: new weapons and heavier armour are to be acquired. In 2007 the Land Forces bought the German Heckler & Koch G36 rifle to replace the older Swedish Ak-4 as the main weapon, and there are plans to buy new infantry fighting vehicles. Lithuania is determined to restructure the armed forces so that from the end of 2014, one tenth of the Land Forces could be deployed for international operations at any given time, while half of the Land Forces would be prepared to be deployed outside Lithuania's borders. The volunteers have already participated successfully in international operations in the Balkans, Afghanistan and Iraq. The National Defence Volunteer Forces consist of six territorial units. The Lithuanian Air Force (LAF) is an integral part of the Lithuanian Armed Forces. The LAF is formed from professional military servicemen and non-military personnel, with units located at various bases across Lithuania. The initial formation of the LAF was the 2nd transport squadron, created with the transfer of 20 An-2 aircraft from civilian to military use and initially based at the Barushai air base from 27 April 1992. These were joined by four L-39C Albatros aircraft purchased from Kazakhstan, part of an intended 16 to be used by the 1st fighter (training) squadron.
Mil Mi-8 helicopters were modernised by the LAF (extended fuel tanks and avionics). In 2008, two medium-range radars were acquired for the Air Force's Airspace Surveillance and Control Command. Airspace is patrolled by jet fighters from other NATO members, which are based out of the city of Šiauliai (Zokniai Airport, known as the Aviation Base; see Baltic Air Policing). The European Union's external border (with Kaliningrad and Belarus) is patrolled by the Aviation Unit of the Lithuanian State Border Guard Service, which has received new EC-120, EC-135 and EC-145 helicopters. The Navy has over 600 personnel. The Navy consists of the Warship Flotilla, the Sea Coastal Surveillance System, the Explosive Ordnance Disposal (EOD) Divers Team, the Naval Logistic Service, the Training Center and the Maritime Rescue Coordination Center. The flotilla is the core component of the Navy and consists of the Mine Countermeasures Squadron, the Patrol Ships Squadron, and the Harbour Boats Group. The current Commander in Chief of the Lithuanian Navy is Rear Admiral Kęstutis Macijauskas. The naval base and headquarters are located in the city of Klaipėda. The Navy uses patrol ships for coastal surveillance; the four newly acquired Flyvefisken-class patrol vessels replaced the older Storm-class patrol boats and Grisha-class corvettes. The Lithuanian Special Operations Force of the Lithuanian Armed Forces has been in operation "de facto" since 2002; it was established "de jure" on 3 April 2008, when amendments to the National Defence System organisation and military service law came into force. The Special Operations Force is formed from the Special Operations Unit. The Special Operations Force is responsible for special reconnaissance, direct action, and military support. It is also in charge of other tasks, e.g., the protection of VIPs in peacetime. Its core is based on the Special Purpose Service, the Vytautas the Great Jaeger Battalion and the Combat Divers Service. The Lithuanian Air Force Special Operations Element is subordinate to the unit at the level of operations management. Its structure is flexible, which makes it easy to form squadrons intended for concrete operations and missions from its elements. The Special Operations Force can be called upon inside the territory of Lithuania when law enforcement agencies lack the necessary capabilities to react to terrorist attacks. The capabilities of the special forces make them the main national response force responsible for counter-terrorism operations and operations to prevent violations of sovereignty. The Special Operations Force Squadron "Aitvaras" was deployed to Afghanistan on Operation Enduring Freedom, and from 2005 to 2006 its squadrons were on standby in the NATO Response Force. Soon after the restoration of independence, Lithuania applied for NATO membership in January 1994. Together with six other Central and Eastern European countries, Lithuania was invited to join the North Atlantic Treaty Organization at the 2002 Prague summit and became a member of the Alliance in March 2004. Lithuania entered NATO with full rights immediately after the procedures for joining the North Atlantic Treaty were completed, and acquired the right to participate in the political decision-making process of the Alliance. Integration into the military structures of NATO became a long-term task of the Lithuanian Armed Forces. The Mechanised Infantry Brigade "Iron Wolf" was affiliated with the Danish Division on the basis of agreements signed by Denmark and Lithuania in August 2006.
The Lithuanian Armed Forces then began boosting the Brigade's ability to cooperate with the forces of other NATO members. As Lithuania and the other Baltic states lack the capability to secure their own airspace, fighter jets of other NATO members have been deployed at Zokniai airport near the city of Šiauliai to provide cover for the Baltic states' airspace since Lithuania joined the Alliance. Lithuania also cooperates with the two other Baltic states, Latvia and Estonia, in several trilateral Baltic defence co-operation initiatives. In January 2011, the Baltic states were invited to join NORDEFCO, the defence framework of the Nordic countries. In November 2012, the three countries agreed to create a joint military staff in 2013. Future co-operation will include the sharing of national infrastructures for training purposes and the specialisation of training areas ("BALTTRAIN"), as well as the collective formation of battalion-sized contingents for use in the NATO rapid-response force. Lithuanian soldiers have taken part in international operations since 1993. Since the summer of 2005, Lithuania has been part of the International Security Assistance Force (ISAF) in Afghanistan, leading a Provincial Reconstruction Team (PRT) in the town of Chaghcharan in the province of Ghor. The PRT includes personnel from Denmark, Iceland and the US. Special operations forces units have also served in Afghanistan, placed in Kandahar Province. Since joining international operations in 1993, Lithuania has lost two soldiers: 1st Lt. Normundas Valteris fell in Bosnia (17 April 1996), and Sgt. Arūnas Jarmalavičius in Afghanistan (22 May 2008).
https://en.wikipedia.org/wiki?curid=17827
Armed Forces of Honduras The Armed Forces of Honduras consist of the Honduran Army, Honduran Navy and Honduran Air Force. During the twentieth century, Honduran military leaders frequently became presidents, either through elections or by coups d'état. General Tiburcio Carías Andino was elected in 1932; he later called a constituent assembly that allowed him to be reelected, and his rule became more authoritarian until an election in 1948. During the following decades, the military of Honduras carried out several coups d'état, starting in October 1955. General Oswaldo López Arellano carried out the next coup in October 1963 and a second in December 1972, followed by coups in 1975 by Juan Alberto Melgar Castro and in 1978 by Policarpo Paz García. Events during the 1980s in El Salvador and Nicaragua led Honduras – with US assistance – to expand its armed forces considerably, laying particular emphasis on its air force, which came to include a squadron of US-provided F-5s. The military unit Battalion 316 carried out political assassinations and the torture of suspected political opponents of the government during this same period. Battalion members received training and support from the United States Central Intelligence Agency, in Honduras, at U.S. military bases, and in Chile during the presidency of the dictator Augusto Pinochet. Amnesty International estimated that at least 184 people "disappeared" from 1980 to 1992 in Honduras, most likely due to actions of the Honduran military. The resolution of the civil wars in El Salvador and Nicaragua, and across-the-board budget cuts made in all ministries, have brought reduced funding for the Honduran armed forces. The abolition of the draft has created staffing gaps in the now all-volunteer armed forces. The military is now far below its authorized strength, and further reductions are expected. In January 1999, the Constitution was amended to abolish the position of military commander-in-chief of the armed forces, thus codifying civilian authority over the military. Since 2002, soldiers have been involved in crime prevention and law enforcement, patrolling the streets of the major cities alongside the national police. On 28 June 2009, in the context of a constitutional crisis, the military, acting on orders of the Supreme Court of Justice, arrested the president, Manuel Zelaya, after which they forcibly removed him from Honduras. See the article on the 2009 Honduran constitutional crisis regarding claims about the legitimacy and illegitimacy of the event, and the events preceding and following Zelaya's removal from Honduras. The military's chief lawyer, Colonel Herberth Bayardo Inestroza Membreño, made public statements regarding the removal of Zelaya. On June 30, he presented a detention order, apparently signed on June 26 by a Supreme Court judge, which ordered the armed forces to detain the president. Colonel Inestroza later stated that deporting Zelaya did not comply with the court order: "In the moment that we took him out of the country, in the way that he was taken out, there is a crime. Because of the circumstances of the moment this crime occurred, there is going to be a justification and cause for acquittal that will protect us." He said the decision was taken by the military leadership "in order to avoid bloodshed".
Following the 2009 ouster of the president, the Honduran military, together with other government security forces, was allegedly responsible for thousands of arbitrary detentions and for several forced disappearances and extrajudicial executions of opponents of the "de facto" government, including members of the Democratic Unification Party. However, evidence about these actions has yet to be provided, and there has been some questioning in local media about the actual perpetrators, suggesting that they could be related to disputes within the leftist organizations themselves. The FAH operates from four land air bases; with the exception of Soto Cano Air Base, all the other air bases operate as dual civil and military aviation facilities. Additionally, there are three air stations, and a radar station also operates. The navy is a small force dealing with coastal and riverine security; it has 31 patrol boats and landing craft, operates four naval bases, and maintains an additional unit and schools. According to a statement in July 2009 by a legal counsel of the Honduran military, Colonel Herberth Bayardo Inestroza, some of the elite Honduran military generals were opposed to President Manuel Zelaya, whom the military had removed from Honduras in a military coup d'état, because of his left-wing politics. Inestroza stated, "It would be difficult for us [the military], with our training, to have a relationship with a leftist government. That's impossible." The current head of the armed forces is Carlos Antonio Cuéllar, a graduate of the General Francisco Morazán Military Academy and the School of the Americas. In January 2011, General René Arnoldo Osorio Canales, former head of the Presidential Honor Guard, was appointed commander. As of 2012, the Honduran military has the highest military expenditures in all of Central America.
https://en.wikipedia.org/wiki?curid=13402
Hong Kong Hong Kong, officially the Hong Kong Special Administrative Region of the People's Republic of China (HKSAR), is a metropolitan area and special administrative region of the People's Republic of China on the eastern Pearl River Delta of the South China Sea. With over 7.5 million people of various nationalities in a compact territory, Hong Kong is one of the most densely populated places in the world. Hong Kong became a colony of the British Empire after the Qing Empire ceded Hong Kong Island at the end of the First Opium War in 1842. The colony expanded to the Kowloon Peninsula in 1860 after the Second Opium War, and was further extended when Britain obtained a 99-year lease of the New Territories in 1898. The whole territory was transferred to China in 1997. As a special administrative region, Hong Kong maintains governing and economic systems separate from those of mainland China under the principle of "one country, two systems". Originally a sparsely populated area of farming and fishing villages, the territory has become one of the world's most significant financial centres and commercial ports. It is the world's tenth-largest exporter and ninth-largest importer. Hong Kong has a major capitalist service economy characterised by low taxation and free trade, and its currency, the Hong Kong dollar, is the eighth most traded currency in the world. Hong Kong is home to the second-highest number of billionaires of any city in the world, the highest number of billionaires of any city in Asia, and the largest concentration of ultra high-net-worth individuals of any city in the world. Although the city has one of the highest per capita incomes in the world, severe income inequality exists among its residents. Hong Kong is a highly developed territory and ranks fourth on the UN Human Development Index. The city also has the largest number of skyscrapers of any city in the world, and its residents have some of the highest life expectancies in the world. The dense space has also led to a developed transportation network, with public transport rates exceeding 90 percent. Hong Kong is ranked sixth in the Global Financial Centres Index and fourth in Asia, after Tokyo, Shanghai and Singapore. The name of the territory, first romanised as "He-Ong-Kong" in 1780, originally referred to a small inlet located between Aberdeen Island and the southern coast of Hong Kong Island. Aberdeen was an initial point of contact between British sailors and local fishermen. Although the source of the romanised name is unknown, it is generally believed to be an early phonetic rendering of the Cantonese pronunciation "hēung góng". The name translates as "fragrant harbour" or "incense harbour". "Fragrant" may refer to the sweet taste of the harbour's freshwater influx from the Pearl River or to the odour from incense factories lining the coast of northern Kowloon. The incense was stored near Aberdeen Harbour for export before Victoria Harbour developed. Sir John Davis (the second colonial governor) offered an alternative origin; Davis said that the name derived from "Hoong-keang" ("red torrent"), reflecting the colour of the soil over which a waterfall on the island flowed. The simplified name "Hong Kong" was frequently used by 1810. The name was also commonly written as the single word "Hongkong" until 1926, when the government officially adopted the two-word name.
Some corporations founded during the early colonial era still keep this name, including Hongkong Land, Hongkong Electric Company, Hongkong and Shanghai Hotels and the Hongkong and Shanghai Banking Corporation (HSBC). The region is first known to have been occupied by humans during the Neolithic period, about 6,000 years ago. However, in 2003 stone tools were excavated at the Wong Tei Tung archaeological site, and optical luminescence testing showed them to date to between 35,000 and 39,000 years ago. Early Hong Kong settlers were a semi-coastal people who migrated from inland and brought knowledge of rice cultivation. The Qin dynasty incorporated the Hong Kong area into China for the first time in 214 BCE, after conquering the indigenous Baiyue. The region was consolidated under the Nanyue kingdom (a predecessor state of Vietnam) after the Qin collapse, and recaptured by China after the Han conquest. During the Mongol conquest of China in the 13th century, the Southern Song court was briefly located in modern-day Kowloon City (the Sung Wong Toi site) before its final defeat in the 1279 Battle of Yamen. By the end of the Yuan dynasty, seven large families had settled in the region and owned most of the land. Settlers from nearby provinces migrated to Kowloon throughout the Ming dynasty. The earliest European visitor was the Portuguese explorer Jorge Álvares, who arrived in 1513. Portuguese merchants established a trading post called Tamão in Hong Kong waters and began regular trade with southern China. Although the traders were expelled after military clashes in the 1520s, Portuguese-Chinese trade relations were re-established by 1549. Portugal acquired a permanent lease for Macau in 1557. After the Qing conquest, maritime trade was banned under the "Haijin" policies. The Kangxi Emperor lifted the prohibition, allowing foreigners to enter Chinese ports in 1684. Qing authorities established the Canton System in 1757 to regulate trade more strictly, restricting non-Russian ships to the port of Canton. Although European demand for Chinese commodities like tea, silk, and porcelain was high, Chinese interest in European manufactured goods was insignificant, so Chinese goods could only be bought with precious metals. To reduce the trade imbalance, the British sold large amounts of Indian opium to China. Faced with a drug crisis, Qing officials pursued ever more aggressive actions to halt the opium trade. In 1839, the Daoguang Emperor rejected proposals to legalise and tax opium and ordered Imperial Commissioner Lin Zexu to eradicate the opium trade. The commissioner destroyed opium stockpiles and halted all foreign trade, triggering a British military response and the First Opium War. The Qing surrendered early in the war and ceded Hong Kong Island in the Convention of Chuenpi. However, both countries were dissatisfied and did not ratify the agreement. After more than a year of further hostilities, Hong Kong Island was formally ceded to the United Kingdom in the 1842 Treaty of Nanking. Administrative infrastructure was quickly built by early 1842, but piracy, disease, and hostile Qing policies initially prevented the government from attracting commerce. Conditions on the island improved during the Taiping Rebellion in the 1850s, when many Chinese refugees, including wealthy merchants, fled mainland turbulence and settled in the colony. Further tensions between the British and the Qing over the opium trade escalated into the Second Opium War.
The Qing were again defeated and forced to give up the Kowloon Peninsula and Stonecutters Island in the Convention of Peking. By the end of this war, Hong Kong had evolved from a transient colonial outpost into a major entrepôt. Rapid economic improvement during the 1850s attracted foreign investment, as potential stakeholders became more confident in Hong Kong's future. The colony was further expanded in 1898, when Britain obtained a 99-year lease of the New Territories. The University of Hong Kong was established in 1911 as the territory's first institution of higher education. Kai Tak Airport began operation in 1924, and the colony avoided a prolonged economic downturn after the 1925–26 Canton–Hong Kong strike. At the start of the Second Sino-Japanese War in 1937, Governor Geoffry Northcote declared Hong Kong a neutral zone to safeguard its status as a free port. The colonial government prepared for a possible attack, evacuating all British women and children in 1940. The Imperial Japanese Army attacked Hong Kong on 8 December 1941, the same morning as its attack on Pearl Harbor. Hong Kong was occupied by Japan for almost four years before Britain resumed control on 30 August 1945. The territory's population rebounded quickly after the war, as skilled Chinese migrants fled the Chinese Civil War and more refugees crossed the border when the Communist Party took control of mainland China in 1949. Hong Kong became the first of the Four Asian Tiger economies to industrialise during the 1950s. With a rapidly increasing population, the colonial government began reforms to improve infrastructure and public services. The public-housing estate programme, Independent Commission Against Corruption (ICAC), and Mass Transit Railway were all established during the post-war decades to provide safer housing, integrity in the civil service, and more-reliable transportation. Although the territory's competitiveness in manufacturing gradually declined due to rising labour and property costs, it transitioned to a service-based economy. By the early 1990s, Hong Kong had established itself as a global financial centre and shipping hub. The colony faced an uncertain future as the end of the New Territories lease approached, and Governor Murray MacLehose raised the question of Hong Kong's status with Deng Xiaoping in 1979. Diplomatic negotiations with China resulted in the 1984 Sino-British Joint Declaration, in which the United Kingdom agreed to transfer the colony in 1997 and China would guarantee Hong Kong's economic and political systems for 50 years after the transfer. The impending transfer triggered a wave of mass emigration as residents feared an erosion of civil rights, the rule of law, and quality of life. Over half a million people left the territory during the peak migration period, from 1987 to 1996. Hong Kong was transferred to China on 1 July 1997, after 156 years of British rule. Immediately after the transfer, Hong Kong was severely affected by several crises. The government was forced to use substantial foreign-exchange reserves to maintain the Hong Kong dollar's currency peg during the 1997 Asian financial crisis, and the recovery from this was muted by an H5N1 avian-flu outbreak and a housing surplus. This was followed by the 2003 SARS epidemic, during which the territory experienced its most serious economic downturn.
Political debates after the transfer of sovereignty have centred on the region's democratic development and the central government's adherence to the "one country, two systems" principle. After the post-handover reversal of the last colonial-era Legislative Council democratic reforms, the regional government unsuccessfully attempted to enact national security legislation pursuant to Article 23 of the Basic Law. The central government's decision to implement nominee pre-screening before allowing Chief Executive elections triggered a series of protests in 2014 which became known as the Umbrella Revolution. Discrepancies in the electoral registry and disqualification of elected legislators after the 2016 Legislative Council elections, and enforcement of national law in the West Kowloon high-speed railway station, raised further concerns about the region's autonomy. In June 2019, large protests again erupted in response to a proposed extradition amendment bill permitting extradition of fugitives to mainland China. The protests continued into December 2019, possibly becoming the largest-scale political protest movement in Hong Kong history, with organisers claiming to have attracted more than one million Hong Kong residents. Hong Kong has been a special administrative region of China since 1997, with executive, legislative, and judicial powers devolved from the national government. The Sino-British Joint Declaration provided for economic and administrative continuity through the transfer of sovereignty, resulting in an executive-led governing system largely inherited from the territory's history as a British colony. Under these terms and the "one country, two systems" principle, the Basic Law of Hong Kong is the regional constitution. The regional government is composed of three branches: The Chief Executive is the head of government and serves for a maximum of two five-year terms. The State Council (led by the Premier of China) appoints the Chief Executive after nomination by the Election Committee, which is composed of 1,200 business, community, and government leaders. The Legislative Council has 70 members, each serving a four-year term. 35 are directly elected from geographical constituencies and 35 represent functional constituencies (FC). Thirty FC councillors are selected from limited electorates representing sectors of the economy or special interest groups, and the remaining five members are nominated from sitting District Council members and selected in region-wide double direct elections. All popularly elected members are chosen by proportional representation; one common allocation formula is sketched below. The 30 limited-electorate functional constituencies fill their seats using first-past-the-post or instant-runoff voting. Twenty-two political parties had representatives elected to the Legislative Council in the 2016 election. These parties have aligned themselves into three ideological groups: the pro-Beijing camp (the current government), the pro-democracy camp, and localist groups. The Communist Party does not have an official political presence in Hong Kong, and its members do not run in local elections. Hong Kong is represented in the National People's Congress by 36 deputies chosen through an electoral college, and 203 delegates in the Chinese People's Political Consultative Conference appointed by the central government. Chinese national law does not generally apply in the region, and Hong Kong is treated as a separate jurisdiction.
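The proportional allocation mentioned above can be made concrete with a minimal sketch. The largest-remainder method below is one widely used proportional formula; treating it as the method for the popularly elected seats, and the list names and vote totals, are illustrative assumptions rather than details drawn from this article.

```python
# Minimal sketch of largest-remainder proportional allocation.
# List names and vote totals are invented for illustration.

def largest_remainder(votes, seats):
    """Allocate `seats` among lists given a {list: vote count} mapping."""
    quota = sum(votes.values()) / seats  # Hare quota: votes per seat
    # Step 1: each list wins one seat per full quota of votes.
    allocation = {party: int(v // quota) for party, v in votes.items()}
    leftover = seats - sum(allocation.values())
    # Step 2: unfilled seats go to the lists with the largest remainders.
    by_remainder = sorted(votes, key=lambda p: votes[p] % quota, reverse=True)
    for party in by_remainder[:leftover]:
        allocation[party] += 1
    return allocation

# A hypothetical five-seat constituency:
print(largest_remainder(
    {"List A": 156_000, "List B": 104_000, "List C": 61_000, "List D": 29_000}, 5))
# -> {'List A': 2, 'List B': 2, 'List C': 1, 'List D': 0}
```

Note how the method favours smaller lists at the margin: List C wins a seat with less than one full quota of votes, which is consistent with the multi-party legislature described above.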
Hong Kong's judicial system is based on common law, continuing the legal tradition established during British rule. Local courts may refer to precedents set in English law and overseas jurisprudence. However, interpretative and amending power over the Basic Law and jurisdiction over acts of state lie with the central authority, making regional courts ultimately subordinate to the mainland's socialist civil law system. Decisions made by the Standing Committee of the National People's Congress override any territorial judicial process. Furthermore, in circumstances where the Standing Committee declares a state of emergency in Hong Kong, the State Council may enforce national law in the region. The territory's jurisdictional independence is most apparent in its immigration and taxation policies. The Immigration Department issues passports for permanent residents which differ from those of the mainland or Macau, and the region maintains a regulated border with the rest of the country. All travellers between Hong Kong and either the mainland or Macau must pass through border controls, regardless of nationality. Mainland Chinese citizens do not have right of abode in Hong Kong and are subject to immigration controls. Public finances are handled separately from the national government; taxes levied in Hong Kong do not fund the central authority. The Hong Kong Garrison of the People's Liberation Army is responsible for the region's defence. Although the Chairman of the Central Military Commission is supreme commander of the armed forces, the regional government may request assistance from the garrison. Hong Kong residents are not required to perform military service, and current law has no provision for local enlistment, so the garrison is composed entirely of non-Hongkongers. The central government and Ministry of Foreign Affairs handle diplomatic matters, but Hong Kong retains the ability to maintain separate economic and cultural relations with foreign nations. The territory actively participates in the World Trade Organization, the Asia-Pacific Economic Cooperation forum, the International Olympic Committee, and many United Nations agencies. The regional government maintains trade offices in Greater China and other nations. The territory is divided into 18 districts, each represented by a district council. These advise the government on local issues such as public facility provisioning, community programme maintenance, cultural promotion, and environmental policy. There are a total of 479 district council seats, 452 of which are directly elected. Rural committee chairmen, representing outlying villages and towns, fill the 27 non-elected seats. Hong Kong is governed by a hybrid regime that is not fully representative of the population. Legislative Council members elected by functional constituencies composed of professional and special interest groups are accountable to those narrow corporate electorates and not the general public. This electoral arrangement has guaranteed a pro-establishment majority in the legislature since the transfer of sovereignty. Similarly, the Chief Executive is selected by establishment politicians and corporate members of the Election Committee rather than directly elected. Although universal suffrage for the Chief Executive and all Legislative Council elections is a defined goal of Basic Law Articles 45 and 68, the legislature is only partially directly elected and the executive continues to be nominated by an unrepresentative body.
The government has been repeatedly petitioned to introduce direct elections for these positions. Ethnic minorities (except those of European ancestry) have marginal representation in government and often experience discrimination in housing, education, and employment. Employment vacancies and public service appointments frequently have language requirements which minority job seekers do not meet, and language education resources remain inadequate for Chinese learners. Foreign domestic helpers, predominantly women from the Philippines and Indonesia, have little protection under regional law. Although they live and work in Hong Kong, these workers are not treated as ordinary residents and are ineligible for right of abode in the territory. Sex trafficking in Hong Kong is an issue; Hongkonger and foreign women and girls are forced into prostitution in brothels, homes, and businesses in the city. The Joint Declaration guarantees the Basic Law for 50 years after the transfer of sovereignty. It does not specify how Hong Kong will be governed after 2047, and the central government's role in determining the territory's future system of government is the subject of political debate and speculation. Hong Kong's political and judicial systems may be reintegrated with China's at that time, or the territory may continue to be administered separately. Hong Kong is on China's southern coast, 60 km (37 mi) east of Macau, on the east side of the mouth of the Pearl River estuary. It is surrounded by the South China Sea on all sides except the north, which neighbours the Guangdong city of Shenzhen along the Sham Chun River. The territory's 2,755 km2 (1,064 sq mi) area consists of Hong Kong Island, the Kowloon Peninsula, the New Territories, Lantau Island, and over 200 other islands. Of the total area, 1,073 km2 (414 sq mi) is land and 35 km2 (14 sq mi) is water. The territory's highest point is Tai Mo Shan, 957 metres (3,140 ft) above sea level. Urban development is concentrated on the Kowloon Peninsula, Hong Kong Island, and in new towns throughout the New Territories. Much of this is built on reclaimed land, due to the lack of developable flat land; 70 km2 (27 sq mi) (six per cent of the total land, or about 25 per cent of developed space in the territory) is reclaimed from the sea. Undeveloped terrain is hilly to mountainous, with very little flat land, and consists mostly of grassland, woodland, shrubland, or farmland. About 40 per cent of the remaining land area is country parks and nature reserves. The territory has a diverse ecosystem; over 3,000 species of vascular plants occur in the region (300 of which are native to Hong Kong), along with thousands of insect, avian, and marine species. An envoy of Queen Victoria once called it a "barren rock". Hong Kong has a humid subtropical climate (Köppen "Cwa"), characteristic of southern China. Summer is hot and humid, with occasional showers and thunderstorms and warm air from the southwest. Typhoons occur most often then, sometimes resulting in floods or landslides. Winters are mild and usually sunny at the beginning, becoming cloudy towards February; an occasional cold front brings strong, cooling winds from the north. The most temperate seasons are spring (which can be changeable) and autumn, which is generally sunny and dry. When there is snowfall, which is extremely rare, it is usually at high elevations. Hong Kong averages 1,709 hours of sunshine per year; the highest and lowest temperatures on record at the Hong Kong Observatory were set on 22 August 2017 and 18 January 1893, respectively.
The highest and lowest temperatures recorded anywhere in Hong Kong were measured at Wetland Park on 22 August 2017 and at Tai Mo Shan on 24 January 2016, respectively. Hong Kong has the world's largest number of skyscrapers, with 317 towers meeting the skyscraper height threshold, and the third-largest number of high-rise buildings in the world. The lack of available space restricted development to high-density residential tenements and commercial complexes packed closely together on buildable land. Single-family detached homes are extremely rare and generally only found in outlying areas. The International Commerce Centre and Two International Finance Centre are the tallest buildings in Hong Kong and among the tallest in the Asia-Pacific region. Other distinctive buildings lining the Hong Kong Island skyline include the HSBC Main Building, the anemometer-topped triangular Central Plaza, the circular Hopewell Centre, and the sharp-edged Bank of China Tower. Demand for new construction has contributed to frequent demolition of older buildings, freeing space for modern high-rises. However, many examples of European and Lingnan architecture are still found throughout the territory. Older government buildings are examples of colonial architecture. The 1846 Flagstaff House, the former residence of the commanding British military officer, is the oldest Western-style building in Hong Kong. Some (including the Court of Final Appeal Building and the Hong Kong Observatory) retain their original function, and others have been adapted and reused; the Former Marine Police Headquarters was redeveloped into a commercial and retail complex, and Béthanie (built in 1875 as a sanatorium) houses the Hong Kong Academy for Performing Arts. The Tin Hau Temple, dedicated to the sea goddess Mazu (originally built in 1012 and rebuilt in 1266), is the territory's oldest existing structure. The Ping Shan Heritage Trail has architectural examples of several imperial Chinese dynasties, including the Tsui Sing Lau Pagoda (Hong Kong's only remaining pagoda). "Tong lau", mixed-use tenement buildings constructed during the colonial era, blended southern Chinese architectural styles with European influences. These were especially prolific during the immediate post-war period, when many were rapidly built to house large numbers of Chinese migrants. Examples include Lui Seng Chun, the Blue House in Wan Chai, and the Shanghai Street shophouses in Mong Kok. Mass-produced public-housing estates, built since the 1960s, are mainly constructed in modernist style. The Census and Statistics Department estimated Hong Kong's population at 7,482,500 in mid-2019. The overwhelming majority (92 per cent) is Han Chinese, most of whom are Taishanese, Teochew, Hakka, and a number of other Cantonese peoples. The remaining eight per cent are non-ethnic-Chinese minorities, primarily Filipinos, Indonesians, and South Asians. About half the population have some form of British nationality, a legacy of colonial rule; 3.4 million residents have British National (Overseas) status, and 260,000 British citizens live in the territory. The vast majority also hold Chinese nationality, automatically granted to all ethnic Chinese residents at the transfer of sovereignty. The headline population density of about 6,800 people per square kilometre does not reflect how crowded the city actually is: only 6.9 per cent of the land is residential, so the average density over residential land alone works out to roughly 100,000 people per square kilometre.
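That residential figure follows directly from the numbers above; here is a minimal back-of-the-envelope check, assuming the roughly 1,073 km2 of land cited in the geography section as the headline denominator:

```python
# Back-of-the-envelope check of the density figures cited above.
# Using the land area from the geography section as the headline
# denominator is an assumption made for this illustration.
population = 7_482_500     # mid-2019 estimate
land_km2 = 1_073           # land area cited in the geography section
residential_share = 0.069  # only 6.9 per cent of land is residential

headline = population / land_km2
residential = population / (land_km2 * residential_share)

print(f"headline density:    {headline:,.0f} people/km2")     # ~7,000
print(f"residential density: {residential:,.0f} people/km2")  # ~101,000
```

The two figures differ by a factor of roughly fifteen, the inverse of the residential land share, which is why headline density understates everyday crowding.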
The predominant language is Cantonese, a variety of Chinese originating in Guangdong. It is spoken by 94.6 per cent of the population, 88.9 per cent as a first language and 5.7 per cent as a second language. Slightly over half the population (53.2 per cent) speaks English, the other official language; 4.3 per cent are native speakers, and 48.9 per cent speak English as a second language. Code-switching, mixing English and Cantonese in informal conversation, is common among the bilingual population. Post-handover governments have promoted Mandarin, which is currently about as prevalent as English; 48.6 per cent of the population speaks Mandarin, with 1.9 per cent native speakers and 46.7 per cent speaking it as a second language. Traditional Chinese characters are used in writing, rather than the simplified characters used on the mainland. Among the religious population, the traditional "three teachings" of China, Buddhism, Confucianism, and Taoism, have the most adherents (20 per cent), and are followed by Christianity (12 per cent) and Islam (four per cent). Followers of other religions, including Sikhism, Hinduism, Judaism, and the Bahá'í Faith, generally originate from regions where their religion predominates. Life expectancy in Hong Kong was 82.2 years for males and 87.6 years for females in 2018, the sixth-highest in the world. Cancer, pneumonia, heart disease, cerebrovascular disease, and accidents are the territory's five leading causes of death. The universal public healthcare system is funded by general-tax revenue, and treatment is highly subsidised; on average, 95 per cent of healthcare costs are covered by the government. Income inequality has risen since the transfer of sovereignty, as the region's ageing population has gradually added to the number of nonworking people. Although median household income steadily increased during the decade to 2016, the wage gap remained high; the top tenth of earners receive 41 per cent of all income. The city has the most billionaires per capita, with one billionaire per 109,657 people. Despite government efforts to reduce the growing disparity, median income for the top 10 per cent of earners is 44 times that of the bottom 10 per cent. Hong Kong has a capitalist mixed service economy, characterised by low taxation, minimal government market intervention, and an established international financial market. It is the world's 35th-largest economy, with a nominal GDP of approximately US$373 billion. Although Hong Kong's economy has ranked at the top of the Heritage Foundation's economic freedom index since 1995, the territory has a relatively high level of income disparity. The Hong Kong Stock Exchange is the seventh-largest in the world, with a market capitalisation of HK$30.4 trillion (US$3.87 trillion). Hong Kong is the tenth-largest trading entity in exports and imports (2017), trading more goods in value than its gross domestic product. Over half of its cargo throughput consists of transshipments (goods travelling through Hong Kong). Products from mainland China account for about 40 per cent of that traffic. The city's location allowed it to establish a transportation and logistics infrastructure which includes the world's seventh-busiest container port and the busiest airport for international cargo. The territory's largest export markets are mainland China and the United States. It has little arable land and few natural resources, importing most of its food and raw materials. More than 90 per cent of Hong Kong's food is imported, including nearly all its meat and rice.
Agricultural activity accounts for 0.1% of GDP and consists of growing premium food and flower varieties. Although the territory had one of Asia's largest manufacturing economies during the latter half of the colonial era, Hong Kong's economy is now dominated by the service sector. The sector generates 92.7 per cent of economic output, with the public sector accounting for about 10 per cent. Between 1961 and 1997, Hong Kong's gross domestic product increased by a factor of 180, and per capita GDP increased by a factor of 87 (the annual growth rates these factors imply are sketched at the end of this section). The territory's GDP relative to mainland China's peaked at 27 per cent in 1993; it fell to less than three per cent in 2017, as the mainland developed and liberalised its economy. Economic and infrastructure integration with China has increased significantly since the 1978 start of market liberalisation on the mainland. Since the resumption of cross-boundary train service in 1979, many rail and road links have been improved and constructed, facilitating trade between the regions. The Closer Economic Partnership Arrangement formalised a policy of free trade between the two areas, with each jurisdiction pledging to remove remaining obstacles to trade and cross-boundary investment. A similar economic partnership with Macau details the liberalisation of trade between the special administrative regions. Chinese companies have expanded their economic presence in the territory since the transfer of sovereignty. Mainland firms represent over half of the Hang Seng Index value, up from five per cent in 1997. As the mainland liberalised its economy, Hong Kong's shipping industry faced intense competition from other Chinese ports. Fifty per cent of China's trade goods were routed through Hong Kong in 1997, dropping to about 13 per cent by 2015. The territory's minimal taxation, common law system, and civil service attract overseas corporations wishing to establish a presence in Asia. The city has the second-highest number of corporate headquarters in the Asia-Pacific region. Hong Kong is a gateway for foreign direct investment in China, giving investors open access to mainland Chinese markets through direct links with the Shanghai and Shenzhen stock exchanges. The territory was the first market outside mainland China for renminbi-denominated bonds, and is one of the largest hubs for offshore renminbi trading. The government has historically played a passive role in the economy. Colonial governments had little industrial policy and implemented almost no trade controls. Under the doctrine of "positive non-interventionism", post-war administrations deliberately avoided the direct allocation of resources; active intervention was considered detrimental to economic growth. While the economy transitioned to a service basis during the 1980s, late colonial governments introduced interventionist policies. Post-handover administrations continued and expanded these programmes, including export-credit guarantees, a compulsory pension scheme, a minimum wage, anti-discrimination laws, and a state mortgage backer. Tourism is a major part of the economy, accounting for five per cent of GDP. In 2016, 26.6 million visitors contributed HK$258 billion (US$32.9 billion) to the territory, making Hong Kong the 14th most popular destination for international tourists. It is the most popular Chinese city for tourists, receiving over 70 per cent more visitors than its closest competitor (Macau). The city is ranked as one of the most expensive cities for expatriates.
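The 1961–1997 growth factors quoted earlier in this section translate into compound annual rates through the standard conversion factor ** (1/years) - 1; the factors come from the text, the formula is standard arithmetic:

```python
# Convert the 1961-1997 growth factors quoted above into the
# compound annual growth rates they imply: factor ** (1/years) - 1.
years = 1997 - 1961        # 36 years
gdp_factor = 180           # total GDP grew 180-fold
per_capita_factor = 87     # per-capita GDP grew 87-fold

gdp_cagr = gdp_factor ** (1 / years) - 1
per_capita_cagr = per_capita_factor ** (1 / years) - 1

print(f"implied GDP growth:        {gdp_cagr:.1%} per year")         # ~15.5%
print(f"implied per-capita growth: {per_capita_cagr:.1%} per year")  # ~13.2%
```

The roughly two-point gap between the two rates reflects population growth over the same period, since per capita GDP is total GDP divided by population.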
Hong Kong has a highly developed, sophisticated transport network. Over 90 per cent of daily trips are made on public transport, the highest percentage in the world. The Octopus card, a contactless smart payment card, is widely accepted on railways, buses and ferries, and can be used for payment in most retail stores. The Mass Transit Railway (MTR) is an extensive passenger rail network, connecting 93 metro stations throughout the territory. With a daily ridership of over five million, the system serves 41 per cent of all public transit passengers in the city and has an on-time rate of 99.9 per cent. Cross-boundary train service to Shenzhen is offered by the East Rail line, and longer-distance inter-city trains to Guangzhou, Shanghai, and Beijing are operated from Hung Hom Station. Connecting service to the national high-speed rail system is provided at West Kowloon railway station. Although public transport systems handle most passenger traffic, there are over 500,000 private vehicles registered in Hong Kong. Automobiles drive on the left (unlike in mainland China), due to the historical influence of the British Empire. Vehicle traffic is extremely congested in urban areas, exacerbated by limited space to expand roads and an increasing number of vehicles. More than 18,000 taxicabs, easily identifiable by their bright colour, are licensed to carry riders in the territory. Bus services operate more than 700 routes across the territory, with smaller public light buses (also known as minibuses) serving areas standard buses do not reach as frequently or directly. Highways, organised with the Hong Kong Strategic Route and Exit Number System, connect all major areas of the territory. The Hong Kong–Zhuhai–Macau Bridge provides a direct route to the western side of the Pearl River estuary. Hong Kong International Airport is the territory's primary airport. Over 100 airlines operate flights from the airport, including locally based Cathay Pacific (the flag carrier), Hong Kong Airlines, regional carrier Cathay Dragon, low-cost airline HK Express and cargo airline Air Hong Kong. It is the eighth-busiest airport by passenger traffic and handles the most air-cargo traffic in the world. Most private recreational aviation traffic flies through Shek Kong Airfield, under the supervision of the Hong Kong Aviation Club. The Star Ferry operates two lines across Victoria Harbour for its 53,000 daily passengers. Ferries also serve outlying islands inaccessible by other means. Smaller kai-to boats serve the most remote coastal settlements. Ferry travel to Macau and mainland China is also available. Junks, once common in Hong Kong waters, are no longer widely available and are used privately and for tourism. The Peak Tram, Hong Kong's first public transport system, has provided funicular rail transport between Central and Victoria Peak since 1888. The Central and Western District has an extensive system of escalators and moving pavements, including the Mid-Levels escalator, the world's longest outdoor covered escalator system. Hong Kong Tramways covers a portion of Hong Kong Island. The MTR operates its Light Rail system, serving the northwestern New Territories. Hong Kong generates most of its electricity locally. The vast majority of this energy comes from fossil fuels, with 46 per cent from coal and 47 per cent from petroleum. The rest is from other imports, including nuclear energy generated on the mainland. Renewable sources account for a negligible amount of the energy generated for the territory.
Small-scale wind-power sources have been developed, and a small number of private homes have installed solar panels. With few natural lakes and rivers, high population density, inaccessible groundwater sources, and extremely seasonal rainfall, the territory does not have a reliable source of fresh water. The Dongjiang River in Guangdong supplies 70 per cent of the city's water, and the remaining demand is filled by harvesting rainwater. Toilets flush with seawater, greatly reducing freshwater use. Broadband Internet access is widely available, with 92.6 per cent of households connected. Connections over fibre-optic infrastructure are increasingly prevalent, contributing to the high regional average connection speed of 21.9 Mbit/s (the world's fourth-fastest). Mobile-phone use is ubiquitous; there are more than 18 million mobile-phone accounts, more than double the territory's population. Hong Kong is characterised as a hybrid of East and West. Traditional Chinese values emphasising family and education blend with Western ideals, including economic liberty and the rule of law. Although the vast majority of the population is ethnically Chinese, Hong Kong has developed a distinct identity. The territory diverged from the mainland due to its long period of colonial administration and a different pace of economic, social, and cultural development. Mainstream culture is derived from immigrants originating from various parts of China. This was influenced by British-style education, a separate political system, and the territory's rapid development during the late 20th century. Most migrants of that era fled poverty and war, which is reflected in the prevailing attitude toward wealth; Hongkongers tend to link self-image and decision-making to material benefits. Residents' sense of local identity has markedly increased post-handover: 53 per cent of the population identify as "Hongkongers", while 11 per cent describe themselves as "Chinese". The remainder report mixed identities, 23 per cent as "Hongkonger in China" and 12 per cent as "Chinese in Hong Kong". Traditional Chinese family values, including family honour, filial piety, and a preference for sons, are prevalent. Nuclear families are the most common households, although multi-generational and extended families are not unusual. Spiritual concepts such as "feng shui" are observed; large-scale construction projects often hire consultants to ensure proper building positioning and layout, and a business's success is believed to depend on the degree of its adherence to "feng shui". "Bagua" mirrors are regularly used to deflect evil spirits, and buildings often lack floor numbers containing a 4, because the number sounds similar to the word for "die" in Cantonese. Food in Hong Kong is primarily based on Cantonese cuisine, despite the territory's exposure to foreign influences and its residents' varied origins. Rice is the staple food, and is usually served plain with other dishes. Freshness of ingredients is emphasised. Poultry and seafood are commonly sold live at wet markets, and ingredients are used as quickly as possible. There are five daily meals: breakfast, lunch, afternoon tea, dinner, and "siu yeh". Dim sum, as part of "yum cha" (brunch), is a dining-out tradition with family and friends. Dishes include congee, "cha siu bao", "siu yuk", egg tarts, and mango pudding. Local versions of Western food are served at "cha chaan teng" (fast, casual restaurants).
Common "cha chaan teng" menu items include macaroni in soup, deep-fried French toast, and Hong Kong-style milk tea. Hong Kong developed into a filmmaking hub during the late 1940s as a wave of Shanghai filmmakers migrated to the territory, and these movie veterans helped rebuild the colony's entertainment industry over the next decade. By the 1960s, the city was well known to overseas audiences through films such as "The World of Suzie Wong". When Bruce Lee's "Way of the Dragon" was released in 1972, local productions became popular outside Hong Kong. During the 1980s, films such as "A Better Tomorrow", "As Tears Go By", and "Zu Warriors from the Magic Mountain" expanded global interest beyond martial arts films; locally made gangster films, romantic dramas, and supernatural fantasies became popular. Hong Kong cinema continued to be internationally successful over the following decade with critically acclaimed dramas such as "Farewell My Concubine", "To Live", and "Chungking Express". The city's martial arts film roots are evident in the roles of the most prolific Hong Kong actors. Jackie Chan, Donnie Yen, Jet Li, Chow Yun-fat, and Michelle Yeoh frequently play action-oriented roles in foreign films. At the height of the local movie industry in the early 1990s, over 400 films were produced each year; since then, industry momentum shifted to mainland China. The number of films produced annually has declined to about 60 in 2017. Cantopop is a genre of Cantonese popular music which emerged in Hong Kong during the 1970s. Evolving from Shanghai-style "shidaiqu", it is also influenced by Cantonese opera and Western pop. Local media featured songs by artists such as Sam Hui, Anita Mui, Leslie Cheung, and Alan Tam; during the 1980s, exported films and shows exposed Cantopop to a global audience. The genre's popularity peaked in the 1990s, when the Four Heavenly Kings dominated Asian record charts. Despite a general decline since late in the decade, Cantopop remains dominant in Hong Kong; contemporary artists such as Eason Chan, Joey Yung, and Twins are popular in and beyond the territory. Western classical music has historically had a strong presence in Hong Kong, and remains a large part of local musical education. The publicly funded Hong Kong Philharmonic Orchestra, the territory's oldest professional symphony orchestra, frequently host musicians and conductors from overseas. The Hong Kong Chinese Orchestra, composed of classical Chinese instruments, is the leading Chinese ensemble and plays a significant role in promoting traditional music in the community. Despite its small area, the territory is home to a variety of sports and recreational facilities. The city has hosted a number of major sporting events, including the 2009 East Asian Games, the 2008 Summer Olympics equestrian events, and the 2007 Premier League Asia Trophy. The territory regularly hosts the Hong Kong Sevens, Hong Kong Marathon, Hong Kong Tennis Classic and Lunar New Year Cup, and hosted the inaugural AFC Asian Cup and the 1995 Dynasty Cup. Hong Kong represents itself separately from mainland China, with its own sports teams in international competitions. The territory has participated in almost every Summer Olympics since 1952, and has earned three medals. Lee Lai-shan won the territory's first and only Olympic gold medal at the 1996 Atlanta Olympics. Hong Kong athletes have won 126 medals at the Paralympic Games and 17 at the Commonwealth Games. 
Hong Kong is no longer part of the Commonwealth of Nations, and the city's last appearance in the Commonwealth Games was in 1994. Dragon boat races originated as a religious ceremony conducted during the annual Tuen Ng Festival. The race was revived as a modern sport as part of the Tourism Board's efforts to promote Hong Kong's image abroad. The first modern competition was organised in 1976, and overseas teams began competing in the first international race in 1993. The Hong Kong Jockey Club, the territory's largest taxpayer, has a monopoly on gambling and provides over seven per cent of government revenue. Three forms of gambling are legal in Hong Kong: lotteries and betting on horse racing and football. Education in Hong Kong is largely modelled after that of the United Kingdom, particularly the English system. Children are required to attend school from the age of six until completion of secondary education, generally at age 18. At the end of secondary schooling, all students take a public examination and are awarded the Hong Kong Diploma of Secondary Education on successful completion. Of residents aged 15 and older, 81.3 per cent completed lower-secondary education, 66.4 per cent graduated from an upper secondary school, 31.6 per cent attended a non-degree tertiary programme, and 24 per cent earned a bachelor's degree or higher. Mandatory education has contributed to an adult literacy rate of 95.7 per cent. The rate, lower than that of other developed economies, is due to the influx of refugees from mainland China during the post-war colonial era; much of the elderly population was not formally educated because of war and poverty. Comprehensive schools fall under three categories: public schools, which are government-run; subsidised schools, including government aid-and-grant schools; and private schools, often run by religious organisations, which base admissions on academic merit. These schools are subject to curriculum guidelines provided by the Education Bureau. Private schools subsidised under the Direct Subsidy Scheme and international schools fall outside this system and may elect to use differing curricula and teach in other languages. The government maintains a policy of "mother tongue instruction": most schools use Cantonese as the medium of instruction, with written education in both Chinese and English. Secondary schools emphasise "bi-literacy and tri-lingualism", which has encouraged the proliferation of spoken Mandarin language education. Hong Kong has eleven universities. The University of Hong Kong was founded as the city's first institute of higher education during the early colonial period, in 1911. The Chinese University of Hong Kong was established in 1963 to fill the need for a university that taught using Chinese as its primary language of instruction. Along with the Hong Kong University of Science and Technology and City University of Hong Kong, these universities are ranked among the best in Asia. The Hong Kong Polytechnic University, Hong Kong Baptist University, Lingnan University, Education University of Hong Kong, Open University of Hong Kong, Hong Kong Shue Yan University and The Hang Seng University of Hong Kong were all established in subsequent years. Hong Kong's major English-language newspaper is the "South China Morning Post", with "The Standard" serving as a business-oriented alternative. A variety of Chinese-language newspapers are published daily; the most prominent are "Ming Pao", "Oriental Daily News", and "Apple Daily".
Local publications are often politically affiliated, with pro-Beijing or pro-democracy sympathies. The central government has a print-media presence in the territory through the state-owned "Ta Kung Pao" and "Wen Wei Po". Several international publications have regional operations in Hong Kong, including "The Wall Street Journal", "The Financial Times", "The New York Times International Edition", "USA Today", "Yomiuri Shimbun", and "The Nikkei". Three free-to-air television broadcasters operate in the territory: TVB, HKTVE, and Hong Kong Open TV, which together air three analogue and eight digital channels. TVB, Hong Kong's dominant television network, has an 80 per cent viewer share. Pay TV services operated by Cable TV Hong Kong and PCCW offer hundreds of additional channels and cater to a variety of audiences. RTHK is the public broadcaster, providing seven radio channels and three television channels. Ten non-domestic broadcasters air programming for the territory's foreign population. Access to media and information over the Internet is not subject to mainland Chinese regulations, including the Great Firewall.
https://en.wikipedia.org/wiki?curid=13404
Geography of Hong Kong Hong Kong, a Special Administrative Region of the People's Republic of China, can be geographically divided into three territories: Kowloon, Hong Kong Island, and the New Territories. Hong Kong is a coastal city and major port in Southern China, bordering Guangdong province through the city of Shenzhen to the north and the South China Sea to the west, east and south. Hong Kong and its 260 territorial islands and peninsulas are located at the mouth of the Pearl River Delta. The area of Hong Kong is distinct from Mainland China, but is considered part of "Greater China". Hong Kong has a total area of 1,108 km2 (428 sq mi), of which 3.16% is water. Some 60 islands are dispersed around Hong Kong, the largest of which by area is Lantau Island, located southwest of the main peninsula. Lantau Island and the majority of the remaining islands are part of the New Territories, an area that also encompasses the hilly terrain north of Kowloon. Hong Kong Island is separated from Kowloon by Victoria Harbour, a natural harbour. The Kowloon Peninsula to the south of Boundary Street and the New Territories to the north of Hong Kong Island were added to colonial Hong Kong in 1860 and 1898, respectively. Further from Victoria Harbour and the coast, the landscape of Hong Kong is fairly hilly to mountainous, with steep slopes. The highest point in the territory is Tai Mo Shan, at a height of 958 metres, in the New Territories. Lowlands exist in the northwestern part of the New Territories. Portions of land in the New Territories and on Hong Kong Island are reserved as country parks and nature reserves. With the fourth-highest population density among the world's countries and dependencies, at 6,300 people per square kilometre, Hong Kong is known for its shortage of residential space. Hong Kong has undertaken several land reclamation projects to provide more space for residential and economic purposes, increasing its land area. This has caused the distance between Hong Kong Island and Kowloon to decrease. Hong Kong International Airport is the sole public airport in the territory, and is mostly located on reclaimed land on the island of Chek Lap Kok. Politically, Hong Kong is divided into 18 districts, each having a district council. Nevertheless, most public services operate across the territory, and travel between the districts is not restricted. Sha Tin is the most populous district as of 2019. The name "Hong Kong", literally meaning "fragrant harbour", is derived from the area around present-day Aberdeen on Hong Kong Island, where fragrant wood products and incense were once traded. The narrow body of water separating Hong Kong Island and the Kowloon Peninsula, Victoria Harbour, is one of the deepest natural maritime ports in the world. Hong Kong is east of Macau, on the opposite side of the Pearl River estuary, and the two are connected by the Hong Kong–Zhuhai–Macau Bridge. Hong Kong's climate is subtropical and monsoonal, with cool, dry winters and hot, wet summers. About 80% of the annual rainfall falls between May and September. The territory is occasionally affected by tropical cyclones between May and November, most often from July to September. Mean monthly temperatures are lowest in January and February and highest in July and August. January and February are more cloudy, with occasional cold fronts followed by dry northerly winds, and it is not uncommon for urban temperatures to drop to uncomfortably low levels.
Sub-zero temperatures and frost occur at times on high ground and in the New Territories. March and April can be pleasant, although there are occasional spells of high humidity. Fog and drizzle are common on high ground which is exposed to the southeast. May to August are hot and humid, with occasional showers and thunderstorms. Afternoon temperatures are often very hot, whereas at night temperatures generally remain warm, with high humidity. In November and December there are pleasant breezes, plenty of sunshine and comfortable temperatures. Hong Kong is on China's southern coast, 60 km (37 mi) east of Macau, on the east side of the mouth of the Pearl River estuary. It is surrounded by the South China Sea on all sides except the north, which neighbours the Guangdong city of Shenzhen along the Sham Chun River. The territory's 2,755 km2 (1,064 sq mi) area consists of Hong Kong Island, the Kowloon Peninsula, the New Territories, Lantau Island, and over 200 other islands. Of the total area, 1,073 km2 (414 sq mi) is land and 35 km2 (14 sq mi) is water. The territory's highest point is Tai Mo Shan, 957 metres (3,140 ft) above sea level. Urban development is concentrated on the Kowloon Peninsula, Hong Kong Island, and in new towns throughout the New Territories. Much of this is built on reclaimed land, due to the lack of developable flat land; 70 km2 (27 sq mi) (six per cent of the total land, or about 25 per cent of developed space in the territory) is reclaimed from the sea. Undeveloped terrain is hilly to mountainous, with very little flat land, and consists mostly of grassland, woodland, shrubland, or farmland. About 40 per cent of the remaining land area is country parks and nature reserves. The territory has a diverse ecosystem; over 3,000 species of vascular plants occur in the region (300 of which are native to Hong Kong), along with thousands of insect, avian, and marine species. According to figures published by the United States Central Intelligence Agency, Hong Kong's only land boundary is with the Shenzhen Special Economic Zone, Guangdong Province. Hong Kong has 263 islands, including Hong Kong Island, Lantau Island, Cheung Chau, Lamma Island, Peng Chau and Tsing Yi Island. Hong Kong's terrain is hilly and mountainous with steep slopes. There are lowlands in the northern part of Hong Kong. A significant amount of land in Hong Kong, especially on Hong Kong Island and the Kowloon peninsula, is reclaimed. The lowest elevation in Hong Kong is at sea level in the South China Sea (0 m), while the highest elevation is at Tai Mo Shan in Tsuen Wan, the New Territories. Victoria Peak, the highest point on Hong Kong Island, is the 24th-highest peak in Hong Kong. The natural resources of Hong Kong fall into three main categories: mineral deposits, construction materials such as quarried rock and offshore sand, and forest and wildlife. Despite its small size, Hong Kong has a relatively large number of mineral occurrences, and some mineral deposits have been exploited commercially. Metalliferous mineral occurrences are grouped into four broad categories: tin-tungsten-molybdenum mineralisation, copper-lead-zinc mineralisation, iron mineralisation and placer deposits of tin and gold. Mesozoic igneous activity is largely responsible for this diversity of mineral deposits, and the mineral concentrations have been variably enhanced by hydrothermal activity associated with faulting. Concentrations of non-metalliferous minerals that have been commercially exploited include kaolin clay, feldspar, quartz, beryl and graphite.
For many years, granite and volcanic rocks have been quarried locally for road base metal, riprap, armour stone and asphalt, although the main purpose now is for concrete aggregates. At present, there are three quarries operating in Hong Kong. These are principally in granite and are located at Lam Tei, Shek O and Anderson Road. All the quarries are in the process of rehabilitation and have a life expectancy of between two and eight years. Offshore sand bodies have been dredged for aggregate sand and reclamation fill in Hong Kong as the rate of urban development has increased. Additional natural resources include forest and wildlife. According to 2012 estimates published by the United States Central Intelligence Agency, 2.95% of the land is arable, 0.95% is under permanent crops, and the remaining 96.10% is other land. Tropical cyclones are frequent in Hong Kong during the summer months between June and August. Landslides are common after rainstorms.
https://en.wikipedia.org/wiki?curid=13406
Demographics of Hong Kong This article is about the demographic features of the population of Hong Kong, including population density, ethnicity, education level, health of the populace, religious affiliations and other aspects of the population. Hong Kong is one of the most densely populated areas in the world, with an overall density of some 6,300 people per square kilometre. At the same time, Hong Kong has one of the world's lowest birth rates, 1.11 per woman of child-bearing age as of 2012, far below the replacement rate of 2.1. It is estimated that 26.8% of the population will be aged 65 or more in 2033, up from 12.1% in 2005. Hong Kong recorded 8.2 births per 1,000 people in 2005–2010. Ethnically, Hong Kong mainly consists of Han Chinese, who constitute approximately 92% of the population. Of these, many originate from various regions of Guangdong. There are also a number of descendants of immigrants from elsewhere in Southern China and around the world who arrived after the end of World War II. People from Hong Kong generally refer to themselves, in Cantonese, as "Hèung Góng Yàhn"; however, the term is not restricted to those of Chinese descent, owing to Hong Kong's roughly 160-year colonial history, which saw civil servants and traders of British, Indian, Russian and other ethnic origins stationed in Hong Kong. In English, the term "Hongkongers" (or sometimes "Hong Kongers") is also used to refer to people from Hong Kong, while "Hongkongese" is sometimes used as an adjective to describe people or things related to Hong Kong. Census data is available for Hong Kong for the years 1841 to 2011. In 2011, Hong Kong had a population of just over 7 million, with a density of approximately 6,300 people per square kilometre. This makes Hong Kong the fourth most densely populated region in the world, after Macau, Monaco, and Singapore. According to the 2016 by-census, 92% of the Hong Kong population is ethnic Chinese. The Hong Kong census does not categorise Han Chinese subgroups. However, the majority of Hongkongers of Chinese descent trace their ancestry to various parts of Southern China: the Guangzhou area, followed by Siyi (a region of four counties neighbouring Guangzhou), Chaoshan (a region of eastern Guangdong home to Teochew speakers), Fujian, and Shanghai. Some Cantonese people also originate from Hakka-speaking villages in the New Territories. Most Teochew-speaking migrants immigrated to Hong Kong between the late 1940s and early 1970s, while migrants from Fujian (previously Southern Min speakers, and increasingly more Central Min and Northern Min speakers) have constituted a growing number of migrants since 1978. Many Taishanese and Cantonese also migrated after 1949. Currently, the major Chinese groups include the Punti, Hakka, Cantonese (including Toishanese), Hoklo, and Tanka. The Punti and Tanka people in Hong Kong are largely descendants of the indigenous population, while the Hakka and Hoklo groups are composed of both indigenous groups and more recent migrants. 8% of the population of Hong Kong are categorised as "ethnic minorities", including a large number of Filipinos and Indonesians, who together make up approximately 4.6% of the population. As of about 2018, there were roughly 2,000 people of African origin in the territory, of whom about 800–1,000 lived in Yuen Long. Due to its history as a trading, business, and tourism hub, a large number of expatriates live in Hong Kong, representing 9.4% of the population.
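The sub-replacement fertility rate cited above has a simple mechanical interpretation; the following is a minimal sketch using the standard replacement benchmark (the rates come from the text, and the no-migration assumption is purely illustrative):

```python
# Rough illustration of sub-replacement fertility: with no migration,
# each generation is roughly tfr / replacement the size of the last.
tfr = 1.11         # births per woman, 2012 figure cited above
replacement = 2.1  # conventional replacement-level fertility

ratio = tfr / replacement
print(f"Each generation would be about {ratio:.0%} the size of the previous one")  # ~53%
```

In practice the territory's population has continued to grow, because the migration flows described throughout this article more than offset the fertility shortfall.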
According to United Nations estimates from 1 July 2013, the largest five-year age group in Hong Kong's population was 50–54 years, and the Hong Kong government estimated the territory's median age at 45.0 in mid-2013. As a former British colony, Hong Kong has two official languages: English and Chinese, although the specific variety of Chinese is not specified. The majority of the population uses Cantonese as their usual spoken language. However, due to Hong Kong's role as an international trade and finance hub, a wide variety of minority groups speaking dozens of languages are present in the territory, and a very large proportion of the population is able to communicate in multiple languages. The school system is separated into English-medium and Chinese-medium schools, both of which teach English and Mandarin. According to The World Factbook estimates in 2002, 93.5% of the population over the age of 15 had attended schooling, including 96.9% of males and 89.6% of females. At the end of the 20th century, Hong Kong had one of the lowest birth rates in the world. However, the number of births doubled in the decade between 2001 and 2011, largely due to an increase in the number of children born in Hong Kong to women with residence in Mainland China. In 2001 there were 7,810 births to Mainland women (16%) out of a total of 48,219 births; by 2011, this had increased to 37,253 births to Mainland women (45%) out of a total of 82,095 births. According to The World Factbook in 2013, the infant mortality rate in Hong Kong was 2.89 deaths per 1,000 live births, and the average life expectancy for the total population was 82.2 years: 79.47 years for males and 85.14 years for females. Hong Kong is the territory with the world's highest life expectancy according to the United Nations ("UN World Population Prospects"). According to The World Factbook in 2006, the average marriage age in Hong Kong was 30 years for males and 27 years for females. The World Factbook in 2013 reported that the fertility rate in Hong Kong was 1.11 children born per woman. Over half of all people (56.1% as of 2010) are not religious. Religious people in Hong Kong follow a diverse range of religions, among which Taoist and Buddhist (specifically Chinese Buddhist) faiths are common for people of Chinese descent. Confucian beliefs are popular in Hong Kong, but it is arguable whether Confucianism can be considered a religion, and it is therefore excluded from some studies. Christian beliefs (Protestantism and Catholicism together) are also common, as are non-organised Chinese folk religions, whose followers may state that they are not religious. Traditional religions including Chinese Buddhism were discouraged under British rule, which officially represented Christianity. The handover of sovereignty from Britain to China has led to a resurgence of Buddhist and Chinese religions.
https://en.wikipedia.org/wiki?curid=13407
Politics of Hong Kong The politics of Hong Kong takes place within the framework of a political system dominated by its quasi-constitutional document, the Hong Kong Basic Law, with its own legislature, a Chief Executive serving as head of government and of the Special Administrative Region, and a politically constrained multi-party system. The Government of the Hong Kong Special Administrative Region of the People's Republic of China is led by the Chief Executive, the head of government. On 1 July 1997, sovereignty of Hong Kong was transferred to China (PRC), ending over one and a half centuries of British rule. Hong Kong became a Special Administrative Region (SAR) of the PRC with a high degree of autonomy in all matters except foreign affairs and defence, which are responsibilities of the PRC government. According to the Sino-British Joint Declaration (1984) and the Basic Law, Hong Kong will retain its political, economic and judicial systems and unique way of life and continue to participate in international agreements and organisations as a dependent territory for at least 50 years after retrocession. For instance, the International Olympic Committee recognises Hong Kong as a participating dependency under the name "Hong Kong, China", separate from the delegation from the People's Republic of China. In accordance with Article 31 of the Constitution of the People's Republic of China, Hong Kong has Special Administrative Region status, which provides constitutional guarantees for implementing the policy of "one country, two systems". The Basic Law, Hong Kong's constitutional document, was approved in March 1990 by the National People's Congress of China and entered into force upon the transfer of sovereignty on 1 July 1997. The Hong Kong government is economically liberal, but currently universal suffrage is only granted in District Council elections and in elections for half of the Legislative Council. The head of the government (the Chief Executive of Hong Kong) is elected through an electoral college, with the majority of its members elected by a limited number of voters mainly within business and professional sectors. The Chief Executive (CE) is the head of the special administrative region, the highest-ranking official in the Government of the Hong Kong Special Administrative Region, and the head of the executive branch. The Chief Executive is elected by a 1,200-member Election Committee drawn mostly from the voters in the functional constituencies but also from religious organisations and municipal and central government bodies. The Executive Council, the top policy organ of the executive government that advises on policy matters, is entirely appointed by the Chief Executive. In accordance with Article 26 of the Basic Law of the Hong Kong Special Administrative Region, permanent residents of Hong Kong are eligible to vote in direct elections for the 35 seats representing geographical constituencies and the 35 seats from functional constituencies in the 70-seat, unicameral Legislative Council (LegCo). Within the functional constituencies, five seats are allocated to the District Council (Second) constituency, which effectively treats the entire city as a single electorate. The franchise for the other 30 seats is limited to about 230,000 voters in the other functional constituencies (mainly composed of business and professional sectors). The Judiciary consists of a series of courts, of which the court of final adjudication is the Court of Final Appeal.
While Hong Kong retains the common law system, the Standing Committee of the National People's Congress of China has the power of final interpretation of national laws affecting Hong Kong, including the Basic Law, and its opinions are therefore binding on Hong Kong courts on a prospective basis. On 29 January 1999, the Court of Final Appeal, the highest judicial authority in Hong Kong, interpreted several articles of the Basic Law in a way that the Government estimated would allow 1.6 million Mainland China immigrants to enter Hong Kong within ten years. This caused widespread public concern about the social and economic consequences. While some in the legal sector advocated that the National People's Congress (NPC) be asked to amend the relevant part of the Basic Law to redress the problem, the Government of Hong Kong (HKSAR) decided to seek an interpretation of, rather than an amendment to, the relevant Basic Law provisions from the Standing Committee of the National People's Congress (NPCSC). The NPCSC issued an interpretation in favour of the Hong Kong Government in June 1999, thereby overturning parts of the court decision. While the full power of the NPCSC to interpret the Basic Law is provided for in the Basic Law itself, some critics argue that this undermines judicial independence. The Hong Kong 1 July March is an annual protest rally held on HKSAR establishment day and led by the Civil Human Rights Front since the 1997 handover. It was only in 2003, however, that it drew wide public attention, by opposing the Article 23 legislation. It has since become an annual platform for demanding universal suffrage, calling for the observance and preservation of civil liberties such as free speech, venting dissatisfaction with the Hong Kong Government or the Chief Executive, and rallying against actions of the pro-Beijing camp. In 2003, the HKSAR Government proposed to implement Article 23 of the Basic Law by enacting a national security bill against acts such as treason, subversion, secession and sedition. However, there were concerns that the legislation would infringe human rights by introducing the mainland's concept of "national security" into the HKSAR. Together with general dissatisfaction with the Tung administration, this brought about 500,000 people onto the streets in protest, and the Article 23 enactment was "temporarily suspended". Towards the end of 2003, the focus of political controversy shifted to the dispute over how subsequent Chief Executives would be elected. Article 45 of the Basic Law stipulates that the ultimate goal is universal suffrage; when and how to achieve that goal, however, remains open and controversial. Under the Basic Law, the electoral law could be amended to allow for this as soon as 2007 (Hong Kong Basic Law Annex I, Section 7). Arguments over this issue appeared to be responsible for a series of Mainland Chinese newspaper commentaries in February 2004 which stated that power over Hong Kong was only fit for "patriots". The NPCSC's interpretation of Annexes I and II of the Basic Law, promulgated on 6 April 2004, made it clear that the National People's Congress's support is required for proposals to amend the electoral system under the Basic Law. On 26 April 2004, the Standing Committee of the National People's Congress denied the possibility of universal suffrage in 2007 (for the Chief Executive) and 2008 (for LegCo). 
The NPCSC interpretation and decision were regarded by the democratic camp as obstacles to the democratic development of Hong Kong, and were criticised for the lack of consultation with Hong Kong residents. The pro-government camp, on the other hand, considered them to be in compliance with the legislative intent of the Basic Law and in line with the "one country, two systems" principle, and hoped that they would put an end to the controversies over the development of Hong Kong's political structure. In 2007, Chief Executive Sir Donald Tsang asked Beijing to allow direct elections for the Chief Executive. He referred to a survey which said more than half of the citizens of Hong Kong wanted direct elections by 2012, but said that waiting until 2017 might be the best way to secure the support of two-thirds of the Legislative Council. Donald Tsang announced that the NPC said it planned to allow the 2017 Chief Executive election and the 2020 Legislative Council election to take place by universal suffrage. In 2013, public concern was sparked that the election process for the Chief Executive would involve a screening mechanism weeding out candidates deemed unsuitable for the position by Beijing, prompted by a comment made by a Deputy of the National People's Congress at an off-the-record gathering. On 12 March 2005, the Chief Executive, Tung Chee-hwa, resigned. Immediately after Tung's resignation, there was a dispute over the length of the term of the new Chief Executive. To most local legal professionals, the length was plainly five years, under whatever circumstances; the wording of the Basic Law on the term of the Chief Executive is also substantially different from the articles in the PRC constitution concerning the length of term of the president, premier, and other offices. Nonetheless, legal experts from the mainland said it was a convention that a successor serves only the remainder of the term if the position becomes vacant because the predecessor resigned. The Standing Committee of the National People's Congress exercised its right to interpret the Basic Law and affirmed that the successor would serve only the remainder of the term. Many in Hong Kong saw this as having an adverse impact on one country, two systems, with the Central People's Government interpreting the Basic Law to serve its needs, that is, a two-year probation for Tsang instead of a five-year term. On 4 December 2005, people in Hong Kong demonstrated against Sir Donald Tsang's proposed reform package ahead of a vote on 21 December. According to the organisers, an estimated 250,000 took to the streets; the police supplied a figure of 63,000, and Michael de Golyer of Baptist University estimated between 70,000 and 100,000. The march sent a strong message to hesitant pro-democracy legislators to follow public opinion. The pro-government camp claimed to have collected 700,000 signatures on a petition backing Mr Tsang's reform package; this number, however, was widely seen as too small to influence pro-democracy lawmakers. The reform package debate saw the return of key political figure and former Chief Secretary Anson Chan, raising speculation about a possible run in the 2007 Chief Executive election, though she denied any personal interest in standing. In an attempt to win last-minute votes from moderate pro-democracy lawmakers, the government amended its reform package on 19 December by proposing a gradual cut in the number of district council members appointed by the Chief Executive. 
Their number would be reduced from 102 to 68 by 2008, and it would then be decided in 2011 whether to scrap the remaining seats in 2012 or in 2016. The amendment was seen as a reluctant response by Sir Donald Tsang to the democratic demands made by the demonstrators of 4 December, and was dismissed as "too little, too late" by pan-democrats in general. On 21 December 2005, the political reform package was vetoed by the pro-democracy lawmakers. Chief Secretary Rafael Hui openly criticised the pro-democracy Martin Lee and Bishop Zen for blocking the proposed changes. The 24 non-civil-service positions under the political appointment system comprise 11 undersecretaries and 13 political assistants. The government named eight newly appointed undersecretaries on 20 May and nine political assistants on 22 May 2008. The posts were newly created, ostensibly to work closely with bureau secretaries and top civil servants in implementing the Chief Executive's policy blueprint and agenda in an executive-led government. Donald Tsang described the appointments as a milestone in the development of Hong Kong's political appointment system. Controversies arose over the disclosure of the appointees' foreign passports and salaries. Pressure for disclosure continued to mount despite government insistence on the individuals' right to privacy: on 10 June 2008, the newly appointed undersecretaries and political assistants, who had previously argued that they were contractually forbidden from disclosing their remuneration, revealed their salaries. The Government news release stated that the appointees had "voluntarily disclosed their salaries, given the sustained public interest in the issue." On 16 July 2008, Donald Tsang announced some "extraordinary measures for extraordinary times", giving a total of HK$11 billion in inflation relief to help families' finances. Among the measures, the Employees Retraining Levy on the employment of foreign domestic helpers would be temporarily waived, at an estimated cost of HK$2 billion. It was intended that the levy would be waived for a two-year period on all helpers' employment contracts signed on or after 1 September 2008, but would not apply to ongoing contracts. The Immigration Department said it would not reimburse levies, which are prepaid half-yearly or yearly in advance. The announcement resulted in chaos, confusion and uncertainty for the helpers, as some employers deferred contracts or dismissed helpers pending confirmation of the effective date, leaving helpers in limbo. On 20 July, Secretary for Labour and Welfare Matthew Cheung announced that the waiver commencement date would be brought forward by one month, and that the Immigration Department would relax its 14-day re-employment requirement for helpers whose contracts expired. On 30 July, the Executive Council approved the measures. After widespread criticism, the government also conceded that helpers renewing contracts early would not be required to leave Hong Kong, through discretion exercised by the Director of Immigration, and that employers would benefit from the waiver simply by renewing a contract within the two-year period, admitting that some employers could benefit from the waiver for up to four years. The administration's poor handling of the matter came in for heavy criticism; its credibility and competence were called into question by journals from all sides of the political spectrum, and by helpers and employers alike. 
In August 2008, the appointment of Leung Chin-man as deputy managing director and executive director of New World China Land, a subsidiary of New World Development (NWD), was greeted with uproar amidst widespread public suspicion that the job offer was a quid pro quo for favours he had allegedly granted to NWD. Leung was seen to have been involved in the 2004 sale of the Hung Hom Peninsula Home Ownership Scheme (HOS) public housing estate to NWD at undervalue. After a 12-month "sterilisation period" following his retirement, Leung submitted an application to the government on 9 May for approval to take up employment with New World China Land. The Secretary for the Civil Service, Denise Yue Chung-yee, signed off on the approval for him to take up the job after his request passed through the vetting committee. Controversy surrounded not only the suspicion of Leung's own conflict of interest, but also the insensitivity of the committee that recommended approval for him to take up his lucrative new job less than two years after his official retirement. New World argued that it had hired Leung in good faith after government clearance. On 15 August, the Civil Service Bureau issued the report requested by Donald Tsang, in which it admitted that it had neglected to consider Leung's role in the Hung Hom Peninsula affair. Donald Tsang asked the Secretary for the Civil Service to reassess the approval and submit a report to him. New World Development announced in the early hours of 16 August that Leung had resigned from his post, without compensation from either side or from the government for the termination. The next day, Donald Tsang confirmed that Denise Yue would not have to resign, saying he was satisfied with her apology and the explanations she had offered. Tsang ordered that a committee, of which Yue was to be a member, be set up to perform a sweeping review of the system for processing employment applications from former civil servants. In January 2010, five pan-democrats resigned from the Legislative Council of Hong Kong to trigger a by-election in response to the lack of progress towards universal suffrage, intending to use the by-election as a de facto referendum on universal suffrage and the abolition of the functional constituencies. The Umbrella Revolution erupted spontaneously in September 2014 in protest at a decision by China's Standing Committee of the National People's Congress (NPCSC) on proposed electoral reform. The restrictive package provoked mobilisation by students, and heavy-handed policing and government tactics amplified the movement into one involving hundreds of thousands of Hong Kong citizens. In February 2019, a bill was introduced into the Legislative Council to amend extradition arrangements between Hong Kong and other jurisdictions. The bill was prompted by an incident in which a Hong Kong citizen killed his pregnant girlfriend while on vacation in Taiwan; because Hong Kong has no extradition agreement with Taiwan, he could not be charged there. The bill proposed a mechanism for the transfer of fugitives not only to Taiwan, but also to Mainland China and Macau, which are not covered by the existing laws. A series of protests were held against the bill, such as those on 9 June and 16 June, estimated at one million and two million protesters respectively. 
Police brutality and further government suppression of the protesters led to even more demonstrations: on the anniversary of the handover on 1 July 2019, protesters stormed the Legislative Council Complex, and subsequent protests spread to different districts throughout the summer. On 15 June 2019, Chief Executive Carrie Lam decided to suspend the bill indefinitely in light of the protests, but also made it clear in her remarks that the bill was not withdrawn. On 4 September 2019, Chief Executive Carrie Lam announced that the government would "formally withdraw" the Fugitive Offenders Bill, as well as enact a number of other reforms. The 2019 Hong Kong District Council election was held on 24 November, the first poll since the beginning of the protests, and one that had been billed as a "referendum" on the government. More than 2.94 million votes were cast, for a turnout rate of 71.2%, up from 1.45 million votes and 47% in the previous election. This was the highest turnout in Hong Kong's history, in both absolute numbers and turnout rate. The results were a resounding landslide victory for the pro-democracy bloc, whose seat share increased from 30% to almost 88%, with a jump in vote share from 40% to 57%. The largest party before the election, the DAB, fell to third place, with its leader's vote share cut from a consistent 80% to 55% and its three vice-chairs losing their seats. Among candidates who were also legislators, the overwhelming majority of those who lost were from the pro-Beijing bloc. Commenting on the election results, the "New Statesman" declared it "the day Hong Kong's true 'silent majority' spoke". After the election, the protests gradually subsided amid the COVID-19 pandemic. All people of Chinese descent born in Hong Kong on or before 30 June 1997 had access only to British nationality. They are therefore British nationals by birth, though with a "second class citizen" designation carrying no right of abode in the UK. Chinese nationality was conferred on such British nationals involuntarily after 1 July 1997. Before and after the handover, the People's Republic of China has recognised ethnic Chinese people in Hong Kong as its citizens, and the PRC issues Home Return Permits for them to enter mainland China. Hong Kong issues the HKSAR passport through its Immigration Department to all PRC citizens who are permanent residents of Hong Kong and meet the right of abode requirement. The HKSAR passport is not the same as the ordinary PRC passport, which is issued to residents of mainland China; only permanent residents of Hong Kong who are PRC nationals are eligible to apply. To acquire permanent resident status, one must have "ordinarily resided" in Hong Kong for a period of seven years and have adopted Hong Kong as one's permanent home. The citizenship rights enjoyed by residents of mainland China and residents of Hong Kong are therefore differentiated, even though both hold the same citizenship. New immigrants to Hong Kong from mainland China (who still possess Chinese citizenship) are denied PRC passports by the mainland authorities and are not eligible to apply for an HKSAR passport. They usually hold the Document of Identity (DI) as their travel document until permanent resident status is obtained after seven years of residence. Naturalisation as a PRC citizen is common among ethnic Chinese people in Hong Kong who are not PRC citizens. 
Some who have surrendered their PRC citizenship, usually those who have emigrated to foreign countries while retaining permanent resident status, can apply to restore PRC citizenship at the Immigration Department, though they must renounce their other nationality in order to do so. Naturalisation of persons of non-Chinese ethnicity is rare, because China does not allow dual citizenship and becoming a Chinese citizen requires renouncing other passports. A notable example is Michael Rowse, a permanent resident of Hong Kong and the current Director-General of Investment Promotion of the Hong Kong Government, who naturalised as a PRC citizen because the offices of secretaries of the policy bureaux are open only to PRC citizens. In 2008, a row erupted over political appointees' nationality. Five newly appointed undersecretaries declared that they were in the process of renouncing foreign citizenship as at 4 June 2008, citing public opinion as an overriding factor, and one assistant had initiated the renunciation process. This was done despite there being no legal or constitutional barrier to officials at this level of government holding foreign nationality. Hong Kong residents born in Hong Kong in the British-administered era could acquire British Dependent Territories citizenship; Hong Kong residents not born in Hong Kong could also naturalise as British Dependent Territories Citizens (BDTCs) before the handover. To allow them to retain the status of British national while preventing a possible flood of immigrants from Hong Kong, the United Kingdom created a new nationality status, British National (Overseas), for which Hong Kong British Dependent Territories citizens could apply. Holders of the British National (Overseas) passport, or BN(O), have no right of abode in the United Kingdom (see British nationality law and Hong Kong for details). British National (Overseas) status was given effect by the Hong Kong (British Nationality) Order 1986. Article 4(1) of the Order provided that on and after 1 July 1987 there would be a new form of British nationality, whose holders would be known as British Nationals (Overseas). Article 4(2) provided that adults and minors with a connection to Hong Kong were entitled to apply to become British Nationals (Overseas) by registration. Becoming a British National (Overseas) was therefore not an automatic or involuntary process, and indeed many eligible people who had the requisite connection with Hong Kong never applied. Acquisition of the new status had to be voluntary and therefore a conscious act; to make it involuntary or automatic would have been contrary to the assurances given to the Chinese government, which led to the words "eligible to" being used in paragraph (a) of the United Kingdom Memorandum to the Sino-British Joint Declaration. The deadline for applications passed in 1997. Any person who failed to register as a British National (Overseas) by 1 July 1997 and was eligible to become a PRC citizen became solely a PRC citizen on that date. However, any person who would have been rendered stateless by the failure to register automatically became a British Overseas citizen under article 6(1) of the Hong Kong (British Nationality) Order 1986. After the Tiananmen Square protests of 1989, people urged the British Government to grant full British citizenship to all Hong Kong BDTCs, but this request was never accepted. 
However, it was considered necessary to devise a British Nationality Selection Scheme to enable part of the population to obtain British citizenship, and under the British Nationality Act (Hong Kong) 1990 the United Kingdom made provision to grant citizenship to 50,000 families whose presence was important to the future of Hong Kong. After reunification, all PRC citizens with the right of abode in Hong Kong (holding Hong Kong permanent identity cards) became eligible to apply for the HKSAR passport issued by the Hong Kong Immigration Department. As the visa-free destinations of the HKSAR passport are very similar to those of the BN(O) passport and the application fee for the former is much lower (see the articles HKSAR passport and British passport for comparison), the HKSAR passport has become more popular among residents of Hong Kong. Hong Kong residents who were not born in Hong Kong (and had not naturalised as BDTCs) could only apply to the colonial government for a Certificate of Identity (CI) as a travel document; CIs have not been issued by either the British or the Chinese authorities since the handover. Former CI holders who hold PRC citizenship (for example, those born in mainland China or Macau) and are permanent residents of Hong Kong are now eligible for HKSAR passports, adding to the passport's popularity. Recent changes to India's Citizenship Act, 1955 (see Indian nationality law) will also allow some children of Indian origin born in Hong Kong after 7 January 2004 who have a solely BN(O) parent to automatically acquire British Overseas citizenship at birth under the provisions for reducing statelessness in article 6(2) or 6(3) of the Hong Kong (British Nationality) Order 1986. If they acquire no other nationality after birth, they will be entitled subsequently to register for full British citizenship with right of abode in the UK. The main political parties, each holding a significant portion of LegCo seats, are as follows: thirteen members are registered as affiliated with the DAB, eight with the Democratic Party, five with the Civic Party, three with the Liberal Party and three with the League of Social Democrats. There are also many unofficial party members: politicians who belong to political parties but have not registered that status in their election applications. There are two major blocs: the pro-democracy camp (the opposition camp) and the pro-Beijing camp (the pro-establishment camp).
https://en.wikipedia.org/wiki?curid=13408
Economy of Hong Kong The economy of Hong Kong is a highly developed free-market economy characterised by low taxation, almost free port trade and a well-established international financial market. Its currency, the Hong Kong dollar, is legally issued by three major international commercial banks and is pegged to the US dollar. Interest rates are determined by the individual banks in Hong Kong, ensuring that they are market driven. There is no officially recognised central banking system, although the Hong Kong Monetary Authority functions as a financial regulatory authority. According to the Index of Economic Freedom, Hong Kong has had the highest degree of economic freedom in the world since the inception of the index in 1995. Its economy is governed under positive non-interventionism and is highly dependent on international trade and finance. For this reason it is regarded as among the most favourable places to start a company; indeed, a recent study shows that Hong Kong grew from 998 registered start-ups in 2014 to over 2,800 in 2018, with eCommerce (22%), Fintech (12%), Software (12%) and Advertising (11%) companies comprising the majority. The Economic Freedom of the World Index listed Hong Kong as the number one economy, with a score of 8.97, in 2015. Hong Kong's economic strengths include a sound banking system, virtually no public debt, a strong legal system, ample foreign exchange reserves of around US$408 billion as of mid-2017, rigorous anti-corruption measures and close ties with mainland China. The Hong Kong Stock Exchange is a favourable destination for international firms and firms from mainland China to list, owing to Hong Kong's highly internationalised and modernised financial industry and its capital market, whose size, regulations and available financial tools are comparable to those of London and New York. Hong Kong's gross domestic product grew 180-fold between 1961 and 1997, while GDP per capita rose 87-fold over the same period. Its economy is slightly larger than Israel's or Ireland's, and its GDP per capita at purchasing power parity was the sixth highest globally in 2011, higher than that of the United States and the Netherlands and slightly lower than that of Brunei. In 2009, Hong Kong's real economy contracted by 2.8% as a result of the global financial turmoil. By the late 20th century, Hong Kong was the seventh largest port in the world, behind only New York and Rotterdam in terms of container throughput. Hong Kong is a full member of the World Trade Organization. The Kwai Chung container complex was the largest in Asia, while Hong Kong shipowners were second only to those of Greece in terms of total tonnage holdings in the world. Hong Kong has also had an abundant supply of labour from the regions nearby. A skilled labour force, coupled with the adoption of modern British/Western business methods and technology, ensured that opportunities for external trade, investment and recruitment were maximised. Prices and wages in Hong Kong are relatively flexible, depending on the performance and stability of the economy. Owing to its low tax policy, Hong Kong raises revenue for its public finances from the sale and taxation of land and by attracting international businesses to provide capital. 
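The growth multiples quoted above imply an average compound rate that can be backed out directly. A quick illustrative check in Python (the 180-fold and 87-fold figures and the 1961–1997 span are from the text; the helper function is ours):

```python
# Back-of-the-envelope check of the growth figures quoted above: a
# 180-fold rise in GDP over 1961-1997 (36 years) implies a compound
# annual growth rate r satisfying (1 + r) ** 36 == 180.

def implied_cagr(multiple: float, years: int) -> float:
    """Compound annual growth rate implied by an overall growth multiple."""
    return multiple ** (1 / years) - 1

years = 1997 - 1961  # 36 years
print(f"GDP:            {implied_cagr(180, years):.1%} per year")  # ~15.5%
print(f"GDP per capita: {implied_cagr(87, years):.1%} per year")   # ~13.2%
```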
According to Healy Consultants, Hong Kong has the most attractive business environment within East Asia in terms of attracting foreign direct investment (FDI), and in 2013 Hong Kong was the third largest recipient of FDI in the world. Hong Kong ranked fourth on the Tax Justice Network's 2011 Financial Secrecy Index. The Hong Kong Government was the fourth-highest-ranked Asian government in the World Economic Forum's 2016 Network Readiness Index (NRI), a measure of how well a government uses information and communication technologies, and ranked 13th globally. The Hong Kong Stock Exchange is the sixth largest in the world, with a market capitalisation of about US$3.732 trillion as of mid-2017. In 2006, the value of initial public offerings (IPOs) conducted in Hong Kong was the second highest in the world after London. In 2009, Hong Kong raised 22 percent of worldwide IPO capital, making it the largest centre of IPOs in the world. The exchange is the world's 10th largest by turnover and the third largest in China. Since the 1997 handover, Hong Kong's economic future has become far more exposed to the challenges of economic globalisation and to direct competition from cities in mainland China. In particular, Shanghai claimed to have a geographical advantage, and its municipal government dreamt of turning the city into China's main economic centre by as early as 2010, with the further target of catching up with New York by 2040–2050. Hong Kong's economic policy has often been cited by economists such as Milton Friedman and the Cato Institute as an example of laissez-faire capitalism, attributing the city's success to the policy. However, others have argued that the economic strategy is not adequately characterised by the term "laissez-faire", pointing out that there are still many ways in which the government is involved in the economy, some of which exceed the degree of involvement in other capitalist countries. For example, the government is involved in public works projects, healthcare, education and social welfare spending. Further, although rates of taxation on personal and corporate income are low by international standards, Hong Kong's government, unlike those of most other countries, raises a significant portion of its revenues from land leases and land taxation. All land in Hong Kong is owned by the government and is leased to private developers and users on fixed terms, for fees which are paid to the state treasury. By restricting the sale of land leases, the Hong Kong government keeps the price of land at what some consider artificially high levels, and this allows the government to support public spending with a low tax rate on income and profit. Hong Kong has been ranked as the world's freest economy in the Index of Economic Freedom of The Heritage Foundation for 24 consecutive years, since the index's inception in 1995. The index measures restrictions on business, trade, investment, finance, property rights and labour, and considers the impact of corruption, government size and monetary controls in 183 economies. Hong Kong is the only economy to have scored 90 points or above on the 100-point scale, which it achieved in 2014 and 2018. The international poverty line is a monetary threshold below which an individual is considered to be living in poverty; it is calculated using purchasing power parity (PPP). 
According to the World Bank, the international poverty line was most recently updated in October 2015, when it was increased from $1.25 per day to $1.90 per day using the value of 2011 dollars. Raising this threshold helps account for changes in costs of living, which directly affect individuals' ability to obtain basic necessities across countries. With Hong Kong being one of the largest and most expensive cities in the world, it is no surprise that a portion of the population lives in poverty. Recent figures show that 1.37 million people live below the poverty line, struggling to survive on HK$4,000 (about US$510) per month for a one-person household, HK$9,800 for a two-person household, and HK$15,000 for a three-person household. The poverty rate in Hong Kong hit a high of 20.1%, but recent government programmes have lowered this figure to 14.7%. In December 2012, the Commission on Poverty (CoP) was reinstated to prevent and alleviate poverty, with three primary functions: to analyse the poverty situation, to assist policy formulation, and to assess policy effectiveness. Cash handouts have been credited with alleviating much of the poverty, but the extent to which poverty has actually been alleviated is still questionable: although cash handouts raise households above the poverty line, those households still struggle to meet certain standards as the cost of living in Hong Kong steadily increases. Coupled with these cash payments, the statutory minimum wage is set to increase for the second time in the past ten years. The Statutory Minimum Wage (SMW) came into existence on 1 May 2011, and the SMW rate had been HK$34.50 per hour since May 2017; the Legislative Council most recently approved a revision of the SMW rate to HK$37.50 per hour, effective 1 May 2019. Although the overall statistics for Hong Kong show declining poverty, child poverty has recently increased by 0.3 percentage points, to a total of 23.1%, as a result of larger households formed by children staying with their elderly parents. With economic growth projected to slow in the coming years, poverty is becoming an increasingly pressing issue. Beyond benefiting the younger generation through cash handouts and minimum wage increases, expanded elderly allowances have been implemented to increase the disposable incomes of the elderly population who can no longer work. As of 1 February 2019, the monthly allowance payable to the eligible elderly population became HK$1,385, in an effort to raise the incomes of households living with elderly members. Although Hong Kong has become one of the fastest-growing cities in the world, much of the population is struggling to keep up with the rising costs of living. One of the largest issues affecting low-income families is the availability of affordable housing. Over the past decade, residential Hong Kong property prices have increased by close to 242%, with growth finally starting to decelerate this year. Housing being a basic necessity, prices have continuously increased while disposable incomes have remained virtually unchanged. As the supply of affordable housing diminishes, it has become much harder for families to find homes in the city. Public housing programmes have been implemented by the government, but delayed construction and growing waiting lists have not helped to the extent planned. Recent results from a Hong Kong think tank show that by 2022 the average citizen could wait up to six years for public housing. 
Evidence shows that the availability of affordable housing has declined, forcing households to spend more on shelter and less on other necessities. These pressures can lead to worse living conditions and imbalanced diets, both of which pose problems beyond financial well-being.
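As a minimal illustration of the household poverty thresholds quoted earlier (HK$4,000, HK$9,800 and HK$15,000 per month for one-, two- and three-person households), the following Python sketch classifies a household's income against them; the thresholds come from the text, while the function and names are ours:

```python
# Illustrative sketch only: monthly poverty-line thresholds for
# one-, two- and three-person households, as quoted in the text.
POVERTY_LINE_HKD = {1: 4_000, 2: 9_800, 3: 15_000}

def below_poverty_line(household_size: int, monthly_income_hkd: float) -> bool:
    """Return True if the household falls below the quoted threshold.

    Only sizes 1-3 are quoted in the text, so other sizes are rejected.
    """
    try:
        threshold = POVERTY_LINE_HKD[household_size]
    except KeyError:
        raise ValueError("no threshold quoted for this household size")
    return monthly_income_hkd < threshold

# Example: a two-person household earning HK$9,000 a month
print(below_poverty_line(2, 9_000))  # True: below the HK$9,800 line
```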
https://en.wikipedia.org/wiki?curid=13409
Communications in Hong Kong Communications in Hong Kong include a wide-ranging and sophisticated network of radio, television, telephone, Internet and related online services, reflecting Hong Kong's thriving commerce and international importance. There are some 60 online newspapers (in various languages, but mostly in Traditional Chinese), and the number of online periodicals runs into the hundreds. The territory is, in addition, the East and Southeast Asian headquarters for most of the major international communications and media services. Broadcast media and news are provided by several television and radio companies, one of which is government-run. Television provides the major source of news and entertainment for the average family, and Chinese television programmes are produced for both local and overseas markets. Hong Kong also ranks as an important centre of publishing and printing: numerous books are published yearly for local consumption, several leading foreign publishers have their regional offices in Hong Kong, and many international magazines are printed in the territory. There are a total of nine terrestrial television channels in Hong Kong, owned by three television networks, one of which is a public broadcaster. Hong Kong's terrestrial commercial TV networks can also be seen in Macau via cable. Television Broadcasts Limited (TVB) operates TVB Jade, TVB Pearl, J2, iNews and J5, of which Jade and Pearl are available on analogue frequencies. TVB was the city's first commercial terrestrial television network (Asia Television (ATV) began as a subscription television network) and remains the city's predominant TV network. HK Television Entertainment operates ViuTV, a Cantonese general entertainment channel; the network is mandated by its service licence to launch a 17-hour English television channel on or before 31 March 2017. ViuTV is not broadcast on analogue frequencies. Public broadcaster RTHK operates three digital channels, two of which have been simulcast since 2 April 2016 on analogue frequencies formerly used by ATV. Paid cable and satellite television are also widespread, with Cable TV Hong Kong, Now TV, TVB Network Vision and HKBN bbTV being the more prominent providers. Hong Kong's soap dramas, comedy series and variety shows reach mass audiences throughout the Chinese-speaking world. Many international and pan-Asian broadcasters are based in Hong Kong, including News Corporation's STAR TV. The Hong Kong telecommunications industry was deregulated in 1995, and there are no foreign ownership restrictions. The Office of the Telecommunications Authority (OFTA) was the regulatory body responsible for the telecommunications industry; its successor, the Office of the Communications Authority (OFCA), is Hong Kong's current telecommunications regulator. Competition in this sector is fierce. Since 2008, one has been able to get 10 Mbit/s symmetric unlimited VDSL, telephone line rental, unlimited local calls and 100 minutes of international calls for US$25 per month; telephone line rental with unlimited local calls alone costs only US$3 per month. The penetration rate in Hong Kong was estimated at 240.8% of a population of over 7.325 million. As of April 2006, HKBN offers its customers Internet access at speeds from 10 Mbit/s up to 1,000 Mbit/s (1 Gbit/s) via fibre to the building and fibre to the home, although the speed to non-Hong Kong destinations is capped at 20 Mbit/s. As of November 2009, the company was offering a 100 Mbit/s service for HK$99 (about US$13) per month. 
Hong Kong is served by a number of major Internet Service Providers (ISPs). There is very little Internet censorship in Hong Kong beyond laws that criminalise the distribution of unlicensed copyrighted material and obscene images, particularly child pornography. Hong Kong law provides for freedom of speech and of the press, and the government generally respects these rights in practice; freedom of expression is well protected by the Hong Kong Bill of Rights. No websites, regardless of their political views, are blocked, and government licences are not required to operate a website. There is some monitoring of the Internet, and democratic activists claim that central government authorities closely monitor their e-mails and Internet use.
https://en.wikipedia.org/wiki?curid=13410
Transport in Hong Kong Hong Kong has a highly developed and sophisticated transport network, encompassing both public and private transport. Based on the Hong Kong Government's Travel Characteristics Survey, over 90% of daily journeys are made on public transport, the highest rate in the world. However, in 2014 the Transport Advisory Committee, which advises the Government on transport issues, issued a report on the much-worsened congestion problem in Hong Kong and pointed to the excessive growth of private cars over the previous 10–15 years. The Octopus card, a smart-card electronic money payment system, was introduced in September 1997 to provide an alternative to traditional banknotes and coins. Available for purchase at every station of the Mass Transit Railway system, the Octopus card is a contactless payment system which allows payment not only on public transport (such as trains, buses, trams, ferries and minibuses), but also at parking meters, convenience stores, supermarkets, fast-food restaurants and most vending machines. Hong Kong Island is dominated by steep, hilly terrain, which required the development of unusual methods of transport up and down the slopes. In the Central and Western District there is an extensive system of zero-fare escalators and moving pavements. The Mid-levels Escalator, the longest outdoor covered escalator system in the world, operates downhill until 10 am for commuters going to work and then uphill until midnight. It consists of twenty escalators and three moving pavements, is 800 metres long and climbs 135 vertical metres. Total travel time is approximately 25 minutes, but most people walk while the escalator moves to shorten the journey; because of the vertical climb, the same trip is equivalent to several miles of zigzagging roads if travelled by car. Daily traffic exceeds 35,000 people. The system has been operating since 1993 and cost HK$240,000,000 (US$30,000,000) to build. A second Mid-levels escalator system, the Centre Street Escalator Link, is planned in Sai Ying Pun. Hong Kong has an extensive railway network, and the Hong Kong Government has long established that the public transport system has "railway as its backbone". Public transport trains are operated by the MTR Corporation. The MTR operates the metro network within inner urban Hong Kong, the Kowloon Peninsula and the northern part of Hong Kong Island, along with the newly developed areas of Tsuen Wan, Tseung Kwan O and Tung Chung, Hong Kong Disneyland, Hong Kong International Airport, and the northeastern and northwestern parts of the New Territories. The Hong Kong Tramways operates a tram service exclusively on northern Hong Kong Island, and the Peak Tram connects Central, Hong Kong's central business district, with Victoria Peak. Opened in 1979, the MTR system now includes 218.2 km (135.6 mi) of rail with 161 stations, comprising 93 railway stations and 68 light rail stops. The railway lines include the East Rail, Kwun Tong, Tsuen Wan, Island, Tung Chung, Tseung Kwan O, West Rail, Ma On Shan, South Island, Airport Express and Disneyland Resort lines. Nine of the lines provide general metro services, whereas the Airport Express provides a direct link from Hong Kong International Airport into the city centre, and the Disneyland Resort Line exclusively carries passengers to and from Hong Kong Disneyland. 
The Light Rail possesses many characteristics of a tramway, including running at grade on streets with other traffic on some of its tracks, and serves the northwestern New Territories, including Tuen Mun and Yuen Long. All trains and most MTR stations are air-conditioned. The Hong Kong Tramways is a tram system run exclusively with double-deckers. An electric tram system was first proposed in 1881, but nobody was willing to invest in one at the time. In August 1901, the Second Tramway Bill was introduced and passed into law as the 1902 Tramway Ordinance, and Hong Kong Tramway Electric Company Limited, a British company, was authorised to take responsibility for construction and daily operation. The tram system entered service in 1904. It was soon taken over by another company, the Electric Traction Company of Hong Kong Limited, whose name was changed to Hong Kong Tramways Company Limited in 1910. The trams run together with other vehicles on the street, drawing 550 V direct current from overhead cables on 3'6" (1,067 mm) gauge tracks. They serve only parts of Hong Kong Island, running on a double track along the northern coast of the island from Kennedy Town to Shau Kei Wan, with a single clockwise-running loop around Happy Valley Racecourse. Hong Kong also has two funicular railway services. The Hong Kong International Airport Automated People Mover is a driverless people-mover system located within Hong Kong International Airport at Chek Lap Kok. It operates in two "segments": for departures, the train runs from Terminal 2 to the East Hall and on to the West Hall; for arrivals, the train runs only from the West Hall to the East Hall, where all passengers must disembark for immigration, customs and baggage claim. Operation of the first segment commenced in 1998, and of the second segment in early 2007. Another system runs between the terminals, and a travellator is also available. Inter-city train services crossing the Hong Kong-China boundary are known as Intercity Through Trains. They are jointly operated by Hong Kong's MTR Corporation and China Railway High-speed. Hung Hom Station (formerly called "Kowloon Station") and West Kowloon Terminus are the stations in Hong Kong where passengers can catch these trains; with the exception of the XRL, passengers must go through immigration and customs before boarding. There are currently four through train routes. Bus services have a long history in Hong Kong. As of 2015, five companies operate franchised public bus services, each granted ten-year exclusive operating rights to the set of routes it operates. Franchised buses altogether carry about one-third of the total daily public transport market of around 12,000,000 passengers, with KMB holding 67% of the franchised bus market share, Citybus 16% and New World First Bus 13%. There are also a variety of non-franchised public bus services, including feeder bus services to railway stations operated by the railway companies, and residents' services for residential estates (particularly those in the New Territories). Among the five franchised bus companies, the Kowloon Motor Bus Company (1933) Limited (KMB), founded in 1933, is one of the largest privately owned public bus operators in the world. 
KMB's fleet consists of about 3,900 buses on 400 routes, with a staff of over 12,000 people. In 1979, Citybus began its operations in Hong Kong with one double-decker, providing a shuttle service for the Hong Kong dockyard; it later expanded into operating a residential bus route between City One, Shatin and Kowloon Tong MTR station. New World First Bus Services Limited was established in 1998, taking over China Motor Bus's franchise to provide bus services on Hong Kong Island together with Citybus. NWFB's parent company later bought Citybus, but the two companies have largely operated independently. Public light buses (小巴), widely referred to as minibuses or sometimes "maxicabs" (a de facto share taxi), run the length and breadth of Hong Kong, through areas which the standard bus lines cannot or do not reach as frequently, quickly or directly. Minibuses carry a maximum of 16 passengers (19 on some routes since 2017); standing is not permitted. The Hong Kong Transport Department (HKTD) allows and licenses the operation of two types of public light buses: green minibuses, which run scheduled routes, and red minibuses, which do not. Red minibuses often provide more convenient transport for passengers not served by green minibuses or other public buses, and are thus quite popular. Whereas green minibus drivers are paid fixed wages to drive their routes, red minibus drivers often rely on their fares for a living and are therefore often seen as more aggressive drivers. The prevalence of aggressive driving led the Transport Department to make it mandatory for Hong Kong minibuses to be equipped with large read-out speedometers which allow passengers to track the speed at which drivers operate. Currently, if a minibus exceeds 80 km/h, the speedometer sounds an audible warning signal to the driver and passengers; if the minibus exceeds 100 km/h, the beeping turns into a sustained tone. After a series of minibus accidents, the Transport Department also required that all new minibuses brought into service after August 2005 have seat belts installed, and passengers must use seat belts where they are provided. There were 18,138 taxis in Hong Kong, operating in three distinct (but slightly overlapping) geographical areas and distinguished by their colour: 15,250 red urban taxis, 2,838 green New Territories taxis and 50 blue Lantau taxis, serving 1,100,000, 207,900 and 1,400 passengers per day respectively. Taxis carry an average of 1,000,000 passengers each day, accounting for about 12% of the daily patronage carried by all modes of public transport in Hong Kong. Most taxis in Hong Kong run on LPG (liquefied petroleum gas) to reduce emissions. In August 2000, a one-off cash grant was paid to taxi owners who replaced their diesel taxi with an LPG one, and since August 2001 all newly purchased taxis have run on LPG; by the end of 2003, over 99.8% of the taxi fleet in Hong Kong ran on LPG. Taxi fares are charged according to the taximeter, although additional charges on the fare table may apply, such as road tolls and luggage fees. Urban taxis are the most expensive, while Lantau taxis are the cheapest; the standard of service among the different kinds of taxis is mostly the same. The reason for having three types of taxis is to ensure service availability in less populated regions, as running in the urban centre is considered more profitable. As of May 2015, the Census and Statistics Department of Hong Kong reported 504,798 licensed vehicles in Hong Kong. 
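The minibus speed-warning behaviour described above amounts to a simple two-threshold rule. Here is a minimal illustrative sketch of that logic in Python; the 80 km/h and 100 km/h thresholds come from the text, while the function and state labels are ours, not any real on-board software:

```python
# Illustrative model of the minibus speed-display alarm described above:
# above 80 km/h the unit beeps intermittently; above 100 km/h the
# beeping becomes a sustained tone. Thresholds are from the text;
# everything else is a hypothetical sketch, not real on-board firmware.

WARNING_KMH = 80
SUSTAINED_KMH = 100

def speedometer_alarm(speed_kmh: float) -> str:
    """Return the alarm state for a given speed reading."""
    if speed_kmh > SUSTAINED_KMH:
        return "sustained tone"
    if speed_kmh > WARNING_KMH:
        return "intermittent beep"
    return "silent"

for speed in (60, 85, 105):
    print(speed, "km/h ->", speedometer_alarm(speed))
# 60 km/h -> silent, 85 km/h -> intermittent beep, 105 km/h -> sustained tone
```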
In terms of private car ownership, the number of cars per capita is half that of Singapore and one-third that of Taiwan. However, the Transport Advisory Committee, which advises the government on transport policy, issued a report stating that the growth of private cars was too fast and must be contained in order to alleviate Hong Kong's congestion problems. Private cars are most popular in newly developed areas such as the New Territories and Lantau and in areas near the border with mainland China, where there are fewer public transport options and more parking spaces than in other parts of Hong Kong. Most cars are right-hand-drive models from Japanese or European manufacturers. Almost all private vehicles in Hong Kong have dual airbags and are tested by JNCAP. Vehicles must also be maintained to a high standard, in contrast to mainland China regulations. Hong Kong does not allow left-hand-drive vehicles to be primarily registered in Hong Kong. However, Hong Kong-registered vehicles may apply for secondary mainland Chinese registration plates, and these can be driven across the border into mainland China; likewise, left-hand-drive cars seen in Hong Kong are usually primarily registered in mainland China and carry supplementary Hong Kong registration plates. Cars are subject to a first-time registration tax, which varies from 35% to over 100% based on the size and value of the car. The level of vehicle taxation was increased by a law passed on 2 June 1982 to discourage private car ownership and as an incentive to buy smaller, more efficient cars, which attract less tax: the first-time registration tax was doubled, annual licensing fees were increased by 300%, and a $0.70 duty was imposed on each litre of light oils. In addition to the heavy traffic at times, parking can be problematic. Because of high urban density there are not many filling stations, and petrol in Hong Kong averages around US$2.04 per litre, of which over half the cost is taxes. It has been suggested in the news that the government deliberately impeded the use of new environmentally friendly diesel engines by allowing only light goods vehicles to be fuelled by diesel. While it cannot be determined exactly why the government does not allow private cars to be fuelled by diesel, it has been pointed out that the government receives a tax equal to 150% of the actual fuel cost; this is mostly to discourage car ownership for environmental reasons. There is a waiting list for local driving tests, and a full (private car) driving licence, valid for ten years, costs around US$115. Residents of Hong Kong holding licences issued by other Chinese authorities or by some foreign countries can obtain a Hong Kong driving licence without a test if they can adequately show that they obtained their licence while residing in the place concerned (common proofs are school transcripts or an employer's documentation). Some private car owners, known as white card drivers, provide a taxi service for a nominal fee. Cycling is a popular means of transport in many parts of the New Territories, where new towns such as Shatin, Tai Po and Sheung Shui have significant cycle track networks. In the congested urban areas of Hong Kong Island and Kowloon, cycling is less common despite the relatively flat topography of the populated areas, in part because it is government policy not to support cycling as part of the transport system. In 2011, the MTR Corporation announced that bicycles were permitted on all MTR rail lines. 
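A first-registration tax of the kind described above is typically computed as a percentage of a car's taxable value. The sketch below shows one way a banded schedule in the quoted 35%-to-over-100% range could be applied; the value bands and rates here are entirely hypothetical, invented for illustration, and are not the actual Hong Kong schedule:

```python
# Hypothetical sketch of a banded first-registration tax schedule.
# The 35%-to-over-100% range is from the text; these value bands and
# rates are invented for illustration and are NOT the actual schedule.
HYPOTHETICAL_BANDS = [
    (150_000, 0.35),        # first HK$150,000 of taxable value at 35%
    (300_000, 0.65),        # next HK$150,000 at 65%
    (float("inf"), 1.00),   # remainder at 100%
]

def first_registration_tax(taxable_value_hkd: float) -> float:
    """Apply the hypothetical marginal bands to a car's taxable value."""
    tax, lower = 0.0, 0.0
    for upper, rate in HYPOTHETICAL_BANDS:
        if taxable_value_hkd > lower:
            tax += (min(taxable_value_hkd, upper) - lower) * rate
        lower = upper
    return tax

# Example: a HK$400,000 car under these invented bands
print(f"HK${first_registration_tax(400_000):,.0f}")  # HK$250,000
```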
Motorcycles owned by private users in Hong Kong's urban districts are not as popular as in Southeast Asian countries such as Vietnam; they are mostly used for commercial and business purposes. A large number of buses leave various parts of Hong Kong (usually from side streets and hotel entrances) for various cities in the Pearl River Delta, Shenzhen and Guangzhou. Most ferry services are provided by licensed ferry operators. There were 27 regular licensed passenger ferry services operated by 11 licensees, serving the outlying islands, new towns and inner Victoria Harbour; the two routes operated by the Star Ferry are franchised. Additionally, 78 "kai-to" ferries are licensed to serve remote coastal settlements. Companies operating ferry services in Hong Kong include the Star Ferry, New World First Ferry, Hong Kong & Kowloon Ferry, Chuen Kee Ferry, HKR International Limited, Park Island Transport Company Ltd., Fortune Ferry (富裕小輪), Coral Sea Ferry (珊瑚海船務) and Tsui Wah Ferry. In Hong Kong, there are three piers that provide ferry services to Macau and cities in southern China, with services operated by several different ferry companies. Fast-ferry hydrofoil and catamaran service is available at all times of the week between Hong Kong and Macau. TurboJet provides a 24-hour service connecting Central and Macau at a frequency of up to every 15 to 30 minutes, alongside other regular services. Cotai Water Jet provides roughly 18-hour services connecting Central with Taipa or the Outer Harbour, Macau, at a frequency of up to every 30 to 60 minutes, alongside other regular services. Chu Kong Passenger Transport (CKS) connects Hong Kong to cities in Guangdong province, including Zhuhai (Jiuzhou), Shenzhen (Shekou), Zhongshan (Zhongshan Kong), Lianhua Shan (Panyu), Jiangmen, Gongyi, Sanbu, Gaoming, Heshan, Humen, Nanhai, Shunde and Doumen. The average amount of time people spend commuting by public transport in Hong Kong, for example to and from work, on a weekday is 73 minutes, and 21% of public transport riders ride for more than 2 hours every day. The average wait at a stop or station is 14 minutes, while 19% of riders wait for over 20 minutes on average every day. The average distance people ride in a single trip is 11.2 km, while 31% travel over 12 km in a single direction. Hong Kong International Airport (HKG) is the territory's primary airport and has been located at Chek Lap Kok for 20 years. Over 100 airlines operate flights from the airport to international and Mainland China destinations; it is the main hub of flag carrier Cathay Pacific as well as Cathay Dragon, Air Hong Kong and Hong Kong Airlines. HKG is an important regional transhipment centre, passenger hub and gateway for destinations in mainland China and the rest of Asia, and it handles the most air cargo traffic in the world. With over 70 million passengers annually, it is the eighth busiest airport worldwide by passenger traffic. HKG was constructed on an artificial island north of Lantau Island to replace the overcrowded Kai Tak Airport in Kowloon Bay, and a third runway is being constructed for the airport. Ferry services link Hong Kong and Macau International Airport; an express service at the Hong Kong-Macau Ferry Terminal allows passengers to check in for flights at Macau Airport. 
Macau Airport has an "Express Link" service operating from the Hong Kong-Macau terminal, China Ferry Terminal, and Tuen Mun Ferry Terminal in which transiting passengers to Macau Airport are not processed through Macau customs. In addition there is a bus service between Hong Kong and Shenzhen Bao'an International Airport in Shenzhen, and people going to Shenzhen Airport may also board a ferry that goes to Fuyong Ferry Terminal at Shenzhen Airport. The majority of area private recreational aviation traffic, under the supervision of the Hong Kong Aviation Club (HKAC), goes in and out of Shek Kong Airfield in the New Territories. The HKAC sent most of its aircraft to Shek Kong in 1994 after the hours for general aviation at Kai Tak Airport were sharply reduced, to two hours per morning, as of July 1 that year. Usage of private aircraft at Shek Kong is restricted to weekends. Externally, frequent passenger helicopter flights to Macau are scheduled daily. There are also chartered services for the VIP and business community within Hong Kong. There are two cable car systems in Hong Kong: The port of Hong Kong has always been a key factor in the development and prosperity of the territory, which is strategically located on the Far East trade routes and is in the geographical centre of the fast-developing Asia-Pacific Basin. The sheltered harbour provides good access and a safe haven for vessels calling at the port from around the world. The Victoria Harbour is one of the busiest ports in the world. An average of 220,000 ships visit the harbour each year, including both oceanliners and river vessels, carrying both goods and passengers. The container port in Hong Kong is one of the busiest in the world. The Kwai Chung Terminal operates 24 hours a day. Together with other facilities in Victoria Harbour, they handled more than in 2005. Some 400 container liners serve Hong Kong weekly, connecting to over 500 destinations around the world. Hong Kong has a fully active international airport. The famous former Kai Tak International Airport retired in favour of the recently constructed Hong Kong International Airport, also known as Chek Lap Kok International Airport. The airport now serves as a transport hub for East Asia, and as the hub for Cathay Pacific, Dragonair, Hong Kong Express, Hong Kong Airlines (former CR Airways), and Air Hong Kong. Ferry services link the airport with several piers in Pearl River Delta, where immigrations and customs are exempted. Kai Tak airport was closed because of privacy reasons and also because of safety reasons; the aircraft came very close to the skyscrapers. Besides, the runway was surrounded by water. HKIA’s network to China is also expanded by the opening of SkyPier in late-September 2003, offering millions in the PRD direct access to the airport. Passengers coming to SkyPier by high-speed ferries can board buses for onward flights while arriving air passengers can board ferries at the pier for their journeys back to the PRD. Passengers travelling in both directions can bypass custom and immigration formalities, which reduces transit time. Four ports – Shekou, Shenzhen, Macau and Humen (Dongguan) – were initially served. As of August 2007, SkyPier serves Shenzhen's Shekou and Fuyong, Dongguan's Humen, Macau, Zhongshan and Zhuhai. Moreover, passengers travelling from Shekou and Macau piers can even complete airline check-in procedures with participating airlines before boarding the ferries and go straight to the boarding gate for the connecting flight at HKIA. 
The provision of cross-boundary coach and ferry services has transformed HKIA into an inter-modal transportation hub combining air, sea and land transport. The airport is the third busiest in the world for passenger traffic and the second busiest for cargo traffic. It is popular with travellers: from 2001 to 2005 and again in 2007–2008, Hong Kong International Airport was voted the World's Best Airport in an annual Skytrax survey of several million passengers worldwide. According to the Guinness World Records, the passenger terminal of HKIA was the world's largest airport terminal upon opening and is at present the world's third-largest airport terminal building, with a covered area of 550,000 m², recently increased to 570,000 m². The Airport Core Programme was the most expensive airport project in the world. Shek Kong Airfield, located near Yuen Long, is a military airfield for the People's Liberation Army with limited operating capability due to the surrounding terrain. The only aircraft operating on the airfield are the PLA's Z-9 helicopters, licence-built versions of the Eurocopter Dauphin. Hong Kong has three heliports. Shun Tak Heliport (ICAO: VHST) is located at the Hong Kong–Macau Ferry Terminal, by the Shun Tak Centre, in Sheung Wan, on Hong Kong Island. Another is located in southwest Kowloon, near Kowloon Station, and the third is inside Hong Kong International Airport. Heli Express operates a regular helicopter service between Macau Heliport (ICAO: VMMH) at the Macau Ferry Terminal in Macau and the Shun Tak Heliport, with around 16 flights daily; flights take approximately 20 minutes in the eight-seater aircraft. There are also a number of helipads across the territory, including the roof of the Peninsula Hotel (the only rooftop helipad in Kowloon and Hong Kong Island, excluding the rooftop heliport of the Shun Tak Centre and those at hospitals) and on Cheung Chau Island, between Tung Wan Beach and Kwun Yam Beach. There are a total of 1,831 km of paved highways in Hong Kong, built to British standards with a maximum of four lanes plus hard shoulders. Nine roads are classified as highways in Hong Kong; they were renumbered from 1 to 9 in 2004. Routes 1 to 3 run in a north–south direction (each crossing one of the cross-harbour tunnels), while the others run east–west. Route 6 is a proposed highway and is now under construction. There are 120 CCTV cameras monitoring traffic on these highways and connecting roads, available on demand (Now TV) and on the Transport Department's website. Highways in Hong Kong use two types of barrier system for divided carriageways: older roads use metal guard rails, while newer roads use the British concrete step barrier. All signage on highways and roads in Hong Kong is bilingual (traditional Chinese below, English above). Street signs use black text on a white background, while highway and directional signage uses white lettering on a blue or green background. There are 12 major vehicular tunnels in Hong Kong, comprising three cross-harbour tunnels and nine other road tunnels; further road tunnels and bridges are proposed or under construction. There are approximately 22 km of bus priority lanes and 298 bus termini in Hong Kong, along with extensive pedestrian infrastructure and a number of ports of entry (immigration control points).
https://en.wikipedia.org/wiki?curid=13411
Foreign relations of Hong Kong Under the Basic Law, the Hong Kong Special Administrative Region is exclusively in charge of its internal affairs and external relations, whilst the Government of the People's Republic of China is responsible for its foreign affairs and defence. As a separate customs territory, Hong Kong maintains and develops relations with foreign states and regions, and plays an active role in international organisations such as the World Trade Organization (WTO) and the Asia-Pacific Economic Cooperation (APEC) in its own right under the name "Hong Kong, China". Hong Kong participates in 16 projects under the United Nations Sustainable Development Goals. Hong Kong was under British rule before 1 July 1997. Prior to the implementation of the "Hong Kong Economic and Trade Office Act 1996" enacted by the British Parliament, Hong Kong represented its interests abroad through the Hong Kong Economic and Trade Offices (HKETOs) and via a special office in the British Embassies or High Commissions; the latter arrangement ceased after sovereignty over Hong Kong was transferred to the PRC and Hong Kong became a special administrative region (SAR) of the PRC in 1997. At present, the Government of the Hong Kong Special Administrative Region maintains Hong Kong Economic and Trade Offices in countries that are major trading partners of Hong Kong, including Japan, Canada, Australia, Singapore, Indonesia, the United Kingdom, Germany, the United States and the European Union, as well as an ETO in Geneva to represent the HKSAR Government in the WTO. These offices serve as the official representatives of the Government of the Hong Kong SAR in these countries and international organisations. Their major functions include facilitating trade negotiations, handling trade-related matters and intergovernmental relations with foreign governments, promoting investment in Hong Kong, and liaising with the media and business community. The Hong Kong Government has also set up the Hong Kong Tourism Board, with offices in other countries and regions, to promote tourism. The Hong Kong SAR Government also has an office in Beijing and three HKETOs in Guangzhou (Guangdong ETO), Shanghai and Chengdu; an HKETO will be set up in Wuhan in the future. The Central People's Government of the PRC maintains a liaison office in Hong Kong, and the Ministry of Foreign Affairs has a representative office there. Hong Kong makes strenuous law enforcement efforts but faces serious challenges in controlling the transit of heroin and methamphetamine to regional and world markets, a modern banking system that provides a conduit for money laundering, and rising domestic use of synthetic drugs, especially among young people. Hong Kong has its own immigration policy and administration. Permanent residents of Hong Kong with PRC nationality hold a different type of passport, the Hong Kong Special Administrative Region Passport, which is distinct from that issued to PRC citizens in mainland China. Hong Kong permanent residents and mainland Chinese need a passport-like document (the "Home Return Permit" for Hong Kong permanent residents and the Two-way Permit for mainland Chinese) to cross the Sino-Hong Kong border. Visitors from countries and regions not participating in the visa-waiver programme are required to apply for visas directly to the Hong Kong Immigration Department.
In accordance with Article 151 of the Basic Law, Hong Kong concluded over 20 agreements with foreign states in 2010 on matters such as economic and financial co-operation, maritime technical co-operation, postal co-operation and co-operation on wine-related businesses. With the authorisation of the Central People's Government of the PRC, Hong Kong also concluded 12 bilateral agreements with foreign states on air services, investment promotion and protection, mutual legal assistance and visa abolition during the year. The Chief Executive of Hong Kong and other senior officials often make duty visits to foreign countries. These visits usually aim to advance Hong Kong's economic and trade relations, and during them the Chief Executive meets with political and business leaders; usually, the head of state or head of government of the host country receives the Chief Executive. For example, former Chief Executive Tung Chee-hwa made three visits to the United States during his term, and on each occasion met the U.S. President in the Oval Office at the White House. Chief Executive Donald Tsang visited Japan, South Korea, Russia, the United Kingdom, the United States, Australia, New Zealand, Chile, Brazil, India, France and other countries during his term. In 2011, for instance, Tsang visited London and Edinburgh as part of a European tour to renew ties with the UK and promote Hong Kong as a gateway to Asia; he met Prime Minister David Cameron, Foreign Secretary William Hague and Chancellor of the Exchequer George Osborne. In June 2011, Tsang visited Australia to strengthen ties between Hong Kong and Australia, promote trade opportunities, and encourage more Australian companies, particularly resources companies, to list in Hong Kong. During that visit he held meetings with the Prime Minister, Julia Gillard; the Minister for Foreign Affairs, Kevin Rudd; the leader of the Opposition, Tony Abbott; and the Shadow Minister for Foreign Affairs, Julie Bishop. Many foreign dignitaries visit Hong Kong each year. The number of such visits has grown since 1997, as many dignitaries have included Hong Kong as a destination on their trips to China, while others have visited Hong Kong specifically to see "one country, two systems" in operation. The level of VIP visits has also been boosted by major international conferences held in Hong Kong in recent years. In 2009–2012, there were 11 official visits to Hong Kong, including those of the Prime Minister of Canada, the Secretary of State of the United States, the President of the Russian Federation, the President of the Republic of Indonesia and the President of the Republic of Korea. When Hong Kong was under British rule, most Commonwealth member states, unlike other countries, were represented in Hong Kong by Commissions; following the 1997 handover, these were all renamed Consulates-General. Owing to Hong Kong's economic importance and the large number of British passport holders, the British Consulate-General is the largest of its kind in the world, bigger than many British Embassies and High Commissions abroad. Most countries maintain Consulates-General or Consulates in Hong Kong. Despite their name, however, many Consulates-General are not subordinate to their country's embassy to the PRC in Beijing.
For example, the British Consulate-General is directly subordinate to the Foreign and Commonwealth Office of the UK rather than to the British embassy in the Chinese capital. The Consul-General of the United States likewise holds ambassadorial rank and reports to the Assistant Secretary of State for East Asian Affairs in the US Department of State. By contrast, the US Consuls-General posted to Chengdu, Guangzhou, Shanghai and Shenyang report to the Deputy Chief of Mission of the US Embassy in Beijing, who is directly subordinate to the US ambassador. Since 2010, the relationship between the territory and Taiwan has been managed through the Hong Kong–Taiwan Economic and Cultural Co-operation and Promotion Council (ECCPC) and the Taiwan–Hong Kong Economic and Cultural Co-operation Council (ECCC). Meanwhile, the Taipei Economic and Cultural Office is a "de facto" mission of the Republic of China (Taiwan) in Hong Kong.
https://en.wikipedia.org/wiki?curid=13413
Howland Island Howland Island () is an uninhabited coral island located just north of the equator in the central Pacific Ocean, about southwest of Honolulu. The island lies almost halfway between Hawaii and Australia and is an unincorporated, unorganized territory of the United States. Together with Baker Island it forms part of the Phoenix Islands. For statistical purposes, Howland is grouped as one of the United States Minor Outlying Islands. The island has an elongated banana shape on a north–south axis. Howland Island National Wildlife Refuge consists of the entire island and the surrounding submerged land. The island is managed by the U.S. Fish and Wildlife Service as an insular area under the U.S. Department of the Interior and is part of the Pacific Remote Islands Marine National Monument. The atoll has no economic activity. It is perhaps best known as the island Amelia Earhart was searching for but never reached when her airplane disappeared during her planned round-the-world flight in 1937. Airstrips constructed to accommodate her planned stopover were subsequently damaged, were not maintained, and gradually disappeared. There are no harbors or docks, and the fringing reefs may pose a maritime hazard. There is a boat landing area along the middle of the sandy beach on the west coast, as well as a crumbling day beacon. The island is visited every two years by the U.S. Fish and Wildlife Service. The climate is equatorial, with little rainfall and intense sunshine; temperatures are moderated somewhat by a constant wind from the east. The terrain is low-lying and sandy: a coral island surrounded by a narrow fringing reef with a slightly raised central area. The highest point is about six meters above sea level. There are no natural fresh water resources. The landscape features scattered grasses along with prostrate vines and low-growing pisonia trees and shrubs. A 1942 eyewitness description spoke of "a low grove of dead and decaying kou trees" on a very shallow hill at the island's center. In 2000, a visitor accompanying a scientific expedition reported seeing "a flat bulldozed plain of coral sand, without a single tree" and some traces of building ruins from the colonization and World War II construction efforts, though these were only wood and stone ruins gradually being overgrown by the island's flora and fauna. Howland is primarily a nesting, roosting and foraging habitat for seabirds, shorebirds and marine wildlife. The U.S. claims an Exclusive Economic Zone and a territorial sea around the island. Since Howland Island is uninhabited, no time zone is specified. It lies within a nautical time zone 12 hours behind UTC, named International Date Line West. Howland Island and Baker Island are the only places on Earth observing this time zone, which is also called AoE (Anywhere on Earth), a calendar designation indicating that a deadline expires once the date has passed everywhere on Earth. Sparse remnants of trails and other artifacts indicate a sporadic early Polynesian presence: a canoe, a blue bead, pieces of bamboo, and other relics of early settlers have been found. The island's prehistoric settlement may have begun about 1000 BC, when eastern Melanesians traveled north, and may have extended to Rawaki, Kanton, Manra and Orona of the Phoenix Islands, 500 to 700 km to the southeast. K.P. Emory, an ethnologist for Honolulu's Bernice P.
Bishop Museum, indicated that settlers on Manra Island were apparently of two distinct groups, one Polynesian and the other Micronesian; hence the same might have been true on Howland Island, though no proof of this has been found. The difficult life on these isolated islands, along with unreliable fresh water supplies, may have led to the abandonment or extinction of the settlements, much as other islands in the vast Pacific region (such as Kiritimati and Pitcairn) were abandoned. Captain George B. Worth of the Nantucket whaler "Oeno" sighted Howland around 1822 and called it Worth Island. Daniel MacKenzie of the American whaler "Minerva Smith" was unaware of Worth's sighting when he charted the island in 1828 and named it after his ship's owners. Howland Island was at last given its present name after a lookout who sighted it from the whaleship "Isabella" under Captain Geo. E. Netcher of New Bedford. Howland Island was uninhabited when the United States took possession of it under the Guano Islands Act of 1856. The island was a known navigation hazard for many decades and several ships were wrecked there. Its guano deposits were mined by American companies from about 1857 until October 1878, although not without controversy. Captain Geo. E. Netcher of the "Isabella" informed Captain Taylor of its discovery. As Taylor had discovered another guano island in the Indian Ocean, they agreed to share the benefits of the guano on the two islands. Taylor put Netcher in communication with Alfred G. Benson, president of the American Guano Company, which was incorporated in 1857. Other entrepreneurs were also approached, such as George and Matthew Howland, who later became members of the United States Guano Company and engaged Mr. Stetson to visit the island on the ship "Rousseau" under Captain Pope. Mr. Stetson arrived on the island in 1854 and described it as being occupied by birds and a plague of rats. The American Guano Company established claims in respect of Baker Island and Jarvis Island, which were recognised under the U.S. Guano Islands Act of 1856. Benson tried to interest the American Guano Company in the Howland Island deposits, but the company directors considered that they already had sufficient deposits. In October 1857 the American Guano Company sent Benson's son Arthur to Baker and Jarvis Islands to survey the guano deposits; he also visited Howland Island and took samples of the guano. Subsequently, Alfred G. Benson resigned from the American Guano Company and, together with Netcher, Taylor and George W. Benson, formed the United States Guano Company to exploit the guano on Howland Island, with this claim likewise recognised under the U.S. Guano Islands Act of 1856. However, when the United States Guano Company dispatched a vessel in 1859 to mine the guano, they found that Howland Island was already occupied by men sent there by the American Guano Company. The companies ended up in New York state court, with the American Guano Company arguing that the United States Guano Company had in effect abandoned the island, since the continual possession and actual occupation required for ownership under the Guano Islands Act had not occurred. The end result was that both companies were allowed to mine the guano deposits, which were substantially depleted by October 1878. In the late 19th century there were British claims on the island, as well as attempts at setting up mining. John T. Arundel and Company, a British firm using laborers from the Cook Islands and Niue, occupied the island from 1886 to 1891.
To clarify American sovereignty, Executive Order 7368 was issued. In 1935, colonists from the American Equatorial Islands Colonization Project arrived on the island to establish a permanent U.S. presence in the Central Pacific. The project began with a rotating group of four alumni and students from the Kamehameha School for Boys, a private school in Honolulu. Although the recruits had signed on as part of a scientific expedition and expected to spend their three-month assignment collecting botanical and biological samples, once out to sea they were told, "Your names will go down in history" and that the islands would become "famous air bases in a route that will connect Australia with California". The settlement was named Itascatown after the USCGC "Itasca", which brought the colonists to Howland and made regular cruises between the other equatorial islands during that era. Itascatown was a line of a half-dozen small wood-framed structures and tents near the beach on the island's western side. The fledgling colonists were given large stocks of canned food, water and other supplies, including a gasoline-powered refrigerator, radio equipment, medical kits and (characteristically of that era) vast quantities of cigarettes; fishing provided variety in their diet. Most of the colonists' endeavors involved making hourly weather observations and constructing rudimentary infrastructure on the island, including the clearing of a landing strip for airplanes. During this period the island was on Hawaii time, which was then 10.5 hours behind UTC. Similar colonization projects were started on nearby Baker Island, Jarvis Island and two other islands. Ground was cleared for a rudimentary aircraft landing area during the mid-1930s, in anticipation that the island might eventually become a stopover on commercial trans-Pacific air routes, and also to further U.S. territorial claims in the region against rival claims from Great Britain. Howland Island was designated as a scheduled refueling stop for American pilot Amelia Earhart and navigator Fred Noonan on their round-the-world flight in 1937. Works Progress Administration (WPA) funds were used by the Bureau of Air Commerce to construct three graded, unpaved runways meant to accommodate Earhart's twin-engined Lockheed Model 10 Electra. The facility was named "Kamakaiwi Field" after James Kamakaiwi, a young Hawaiian who arrived with the first group of four colonists; he was selected as the group's leader and spent more than three years on Howland, far longer than the average recruit. It has also been referred to as "WPA Howland Airport" (the WPA contributed about 20 percent of the $12,000 cost). Earhart and Noonan took off from Lae, New Guinea, and their radio transmissions were picked up near the island when their aircraft reached the vicinity, but they were never seen again. A Japanese air attack by 14 twin-engined Mitsubishi G3M "Nell" bombers of Chitose Kōkūtai, flying from the Kwajalein islands, killed colonists Richard "Dicky" Kanani Whaley and Joseph Kealoha Keliʻhananui. The raid came one day after the Japanese attack on Pearl Harbor and damaged the three airstrips of Kamakaiwi Field. Two days later, shelling from a Japanese submarine destroyed what was left of the colony's buildings, and a single bomber returned twice during the following weeks to drop more bombs on the rubble. The two survivors were finally evacuated by the USS "Helm", a U.S. Navy destroyer.
Thomas Bederman, one of the two survivors, later recounted his experience during the incident in an edition of LIFE. Howland was occupied by a battalion of the United States Marine Corps in September 1943 and was known as Howland Naval Air Station until May 1944. All attempts at habitation were abandoned after 1944, and the colonization projects on the other four islands, likewise disrupted by the war, were abandoned as well. No aircraft is known to have landed on the island, though nearby anchorages were used by float planes and flying boats during World War II. For example, a U.S. Navy Martin PBM-3-D Mariner flying boat (BuNo 48199), piloted by William Hines, had an engine fire and made a forced landing in the ocean off Howland. Hines beached the aircraft and, though it burned, the crew were unharmed; they were rescued by the same ship that later took the USCG's Construction Unit 211 and LORAN Unit 92 to Gardner Island, transferred to a sub chaser, and taken to Canton Island. On June 27, 1974, Secretary of the Interior Rogers Morton created Howland Island National Wildlife Refuge, which was expanded in 2009 to add the submerged lands surrounding the island; the refuge now includes the island itself and the surrounding waters. Along with six other islands, the island was administered by the U.S. Fish and Wildlife Service as part of the Pacific Remote Islands National Wildlife Refuge Complex. In January 2009, that entity was upgraded to the Pacific Remote Islands Marine National Monument by President George W. Bush. The island habitat has suffered from the presence of multiple invasive exotic species. Black rats were introduced in 1854 and eradicated in 1938 by feral cats introduced the year before; the cats proved destructive to bird species and were themselves eliminated by 1985. Pacific crabgrass continues to compete with local plants. Public entry to the island requires a special use permit from the U.S. Fish and Wildlife Service and is generally restricted to scientists and educators. Representatives of the agency visit the island on average once every two years, often coordinating transportation with amateur radio operators or the U.S. Coast Guard to defray the high cost of logistical support. Colonists sent to the island in the mid-1930s to establish possession by the United States built the Earhart Light, named after Amelia Earhart, as a day beacon or navigational landmark. It is shaped like a short lighthouse and was constructed of white sandstone with painted black bands and a black top, meant to be visible several miles out to sea during daylight hours. It is located near the boat landing at the middle of the west coast, near the site of Itascatown. The beacon was partially destroyed early in World War II by Japanese attacks but was rebuilt in the early 1960s by men from the U.S. Coast Guard ship "Blackhaw". By 2000, the beacon was reported to be crumbling, not having been repainted in decades. Ann Pellegreno overflew the island in 1967, and Linda Finch did so in 1997, during memorial circumnavigation flights commemorating Earhart's 1937 world flight. No landings were attempted, but both Pellegreno and Finch flew low enough to drop a wreath on the island.
https://en.wikipedia.org/wiki?curid=13414
Geography of Hungary Hungary is a landlocked country in East-Central Europe with a land area of 93,030 square km. It measures about 250 km from north to south and 524 km from east to west, and has 2,106 km of boundaries, shared with Austria to the west; Serbia, Croatia and Slovenia to the south and southwest; Romania to the southeast; Ukraine to the northeast; and Slovakia to the north. Hungary's modern borders were first established after World War I when, by the terms of the Treaty of Trianon in 1920, it lost more than 71% of what had formerly been the territory of the Kingdom of Hungary, 58.5% of its population, and 32% of its ethnic Hungarians. The country secured some boundary revisions from 1938 to 1941: in 1938 the First Vienna Award returned territory from Czechoslovakia, and in 1939 Hungary occupied Carpatho-Ukraine. In 1940 the Second Vienna Award returned Northern Transylvania, and finally Hungary occupied the Bácska and Muraköz regions during the invasion of Yugoslavia. However, Hungary lost these territories again with its defeat in World War II. After World War II, the Trianon boundaries were restored with a small revision that benefited Czechoslovakia. Most of the country has an elevation of less than 200 m. Although Hungary has several moderately high ranges of mountains, those reaching heights of 300 m or more cover less than 2% of the country. The highest point in the country is Kékes (1,014 m) in the Mátra Mountains northeast of Budapest. The lowest point is 77.6 m above sea level, located in the south of Hungary, near Szeged. The major rivers in the country are the Danube and the Tisza. The Danube is navigable within Hungary for 418 kilometers, and the Tisza for 444 km. Less important rivers include the Drava along the Croatian border, the Rába, the Szamos, the Sió, and the Ipoly along the Slovakian border. Hungary has three major lakes. Lake Balaton, the largest, is 78 km long and from 3 to 14 km wide, with an area of 600 square km; Hungarians often refer to it as the "Hungarian Sea". It is Central Europe's largest freshwater lake and an important recreation area. Its shallow waters offer good summer swimming, and in winter its frozen surface provides excellent opportunities for winter sports. Smaller bodies of water are Lake Velence (26 square km) in Fejér County, Lake Fertő (Neusiedler See, about 82 square km within Hungary), and the artificial Lake Tisza. Hungary has three major geographic regions (subdivided into seven smaller ones): the Great Alföld, lying east of the Danube River; Transdanubia, a hilly region lying west of the Danube and extending to the Austrian foothills of the Alps; and the North Hungarian Mountains, a mountainous and hilly region beyond the northern boundary of the Great Hungarian Plain. The country's best natural resource is fertile land, although soil quality varies greatly. About 70% of the country's total territory is suitable for agriculture; of this portion, 72% is arable land. Hungary lacks extensive domestic sources of the energy and raw materials needed for industrial development. The Little Alföld or Little Hungarian Plain is a plain (tectonic basin) of approximately 8,000 square km in northwestern Hungary, southwestern Slovakia and eastern Austria, along the lower course of the Rába River, with high-quality fertile soils. The Transdanubia region lies in the western part of the country, bounded by the Danube River, the Drava River, and the remainder of the country's border with Slovenia and Croatia.
It lies south and west of the course of the Danube and contains Lake Fertő and Lake Balaton. The region consists mostly of rolling hills. Transdanubia is primarily an agricultural area, with flourishing crops, livestock and viticulture; mineral deposits and oil are found in Zala County, close to the border with Croatia. The Great Alföld contains the basin of the Tisza River and its branches. It encompasses more than half of the country's territory. Bordered by mountains on all sides, it has a variety of terrains, including regions of fertile soil, sandy areas, wastelands and swampy areas. Hungarians have inhabited the Great Plain for at least a millennium. Here is found the puszta, a long, uncultivated expanse (the most famous such area still in existence is the Hortobágy National Park), with which much Hungarian folklore is associated. In earlier centuries, the Great Plain was unsuitable for farming because of frequent flooding; instead, it was home to massive herds of cattle and horses. In the last half of the 19th century, the government sponsored programs to control the riverways and expedite inland drainage in the Great Plain. With the danger of recurrent flooding largely eliminated, much of the land was placed under cultivation, and herding ceased to be a major contributor to the area's economy. Although the majority of the country has an elevation lower than 300 m, Hungary has several moderately high mountain ranges. They can be classified into four geographic regions, from west to east: Alpokalja, the Transdanubian Mountains, Mecsek, and the North Hungarian Mountains. Alpokalja (literally "the foothills of the Alps") is located along the Austrian border; its highest point is Írott-kő, with an elevation of 882 metres. The Transdanubian Mountains stretch from the western part of Lake Balaton to the Danube Bend near Budapest, where they meet the North Hungarian Mountains; their tallest peak is the 757 m high Pilis. Mecsek is the southernmost Hungarian mountain range, located north of Pécs; its highest point is Zengő, at 682 metres. The North Hungarian Mountains lie north of Budapest and run in a northeasterly direction south of the border with Slovakia. The higher ridges, which are mostly forested, have rich coal and iron deposits. Minerals are a major resource of the area and have long been the basis of the industrial economies of cities in the region. Viticulture is also important, producing the famous Tokaji wine. The highest peak of the range is Kékes, located in the Mátra mountain range. Hungary has a mainly continental climate, with cold winters and warm to hot summers. Distribution and frequency of rainfall are unpredictable: the western part of the country usually receives more rain than the eastern part, where severe droughts may occur in summertime. Weather conditions in the Great Plain can be especially harsh, with hot summers, cold winters and scant rainfall. By the 1980s, the countryside was beginning to show the effects of pollution, both from herbicides used in agriculture and from industrial pollutants. Most noticeable was the gradual contamination of the country's bodies of water, endangering fish and wildlife. Although concern was mounting over these disturbing threats to the environment, no major steps had yet been taken to arrest them.
Hungary, with its plains and hilly regions, is highly suitable for agriculture. One of Hungary's most important natural resources is arable land, which covers about 48.57% of the country, an outstandingly high proportion by world standards. The vast majority of the fertile soil is of good quality. The most important agricultural zones are the Little Hungarian Plain (which has the highest average quality of fertile soil), Transdanubia, and the Great Hungarian Plain. The last covers more than half of the country (some 52,000 square km), although its soil quality varies greatly; the territory even contains a small, grassy semi-desert, the so-called puszta (steppe), which is used for sheep and cattle raising. The most important Hungarian agricultural products include corn, wheat, barley, oats, sunflower, poppy, potato, millet, sugar beet, flax, and many other plants; there are also some newly naturalized crops, such as amaranth. Poppy seed is part of traditional Hungarian cuisine. The country is well known for producing high-quality peppers, which are often made into paprika. Numerous fruits are grown, including many subspecies of apple, pear, peach, grape, apricot, watermelon and cantaloupe. Hungary does not grow any GMO products, so these are mainly imported from the United States; they cannot, however, be distributed without a mark on the wrapping. Wine production has a long history in Hungary. There are two languages in Europe in which the word for "wine" does not derive from Latin: Greek and Hungarian (the Hungarian word is "bor"). Viticulture has been recorded in the territory of today's Hungary since Roman times; the Romans introduced the cultivation of wine grapes, and the arriving Hungarians took over the practice and have maintained it ever since. Today there are numerous wine regions in Hungary, producing both quality and inexpensive wines comparable to Western European ones. The majority of the country's wine regions are located in the mountains or hills, such as the Transdanubian Mountains, the North Hungarian Mountains and the Villány Mountains. Important regions include those of Eger, Hajós, Somló, Sopron, Villány, Szekszárd, and Tokaj-Hegyalja. 19% of the country is covered by forests, mainly in mountainous areas such as the North Hungarian and Transdanubian Mountains and the Alpokalja. The composition of the forests is varied, with trees such as fir, beech, oak, willow, acacia and plane. Hungary's current counties are largely based on the country's historic regions. The counties are subdivided into districts ("járás"), which are further divided into municipalities ("település"). Hungary has 19 counties, 174 districts (plus 23 districts in Budapest) and 2,722 municipalities. Hungary's natural resources include metal ores (aluminium ore, manganese, iron ore, uranium, and polymetallic ores of copper, zinc and lead)
as well as fossil fuels (coal, lignite, petroleum and natural gas). Natural hazards: occasional flooding. Hungary also maintains a number of national parks. Environment - current issues: the approximation of Hungary's standards in waste management, energy efficiency, and air, soil and water pollution with environmental requirements for EU accession will require large investments. Environment - international agreements: party to Air Pollution, Air Pollution-Nitrogen Oxides, Air Pollution-Sulphur 85, Air Pollution-Volatile Organic Compounds, Antarctic Treaty, Biodiversity, Climate Change, Desertification, Endangered Species, Environmental Modification, Hazardous Wastes, Law of the Sea, Marine Dumping, Nuclear Test Ban, Ozone Layer Protection, Ship Pollution, Wetlands; signed, but not ratified: Air Pollution-Persistent Organic Pollutants, Air Pollution-Sulphur 94, Antarctic-Environmental Protocol. Geography - note: landlocked; strategic location astride main land routes between Western Europe and the Balkan Peninsula as well as between Ukraine and the Mediterranean basin.
https://en.wikipedia.org/wiki?curid=13425
Demographics of Hungary This article is about the demographic features of the population of Hungary, including population density, ethnicity, education level, health of the populace, economic status, religious affiliations and other aspects of the population. Hungary's population has been declining since 1980. The population composition at the foundation of Hungary (895) depends on the size of the arriving Hungarian population and the size of the Slavic (and remaining Avar-Slavic) population at the time. One source mentions 200,000 Slavs and 400,000 Hungarians, while other sources often do not give estimates for both, making comparison difficult. The size of the Hungarian population around 895 is often estimated at between 120,000 and 600,000, with a number of estimates in the 400,000–600,000 range. Other sources mention only a fighting force of 25,000 Magyar warriors used in the attack, while declining to estimate the total population, including women, children and warriors not participating in the invasion. The largest early demographic shock was the Mongol invasion of Hungary; several plagues also took a toll on the country's population. According to demographers, about 80 percent of the population was made up of Hungarians before the Battle of Mohács; however, the Hungarian ethnic group became a minority in its own country in the 18th century owing to resettlement policies and continuous immigration from neighboring countries. Major territorial changes made Hungary ethnically homogeneous after World War I. Nowadays, more than nine-tenths of the population is ethnically Hungarian and speaks Hungarian as the mother tongue. Note: the data refer to the territory of the Kingdom of Hungary, not that of the present-day republic. The total fertility rate is the number of children born per woman. It is based on fairly good data for the entire period for present-day Hungary. Sources: Our World In Data and Gapminder Foundation. Unless otherwise indicated, vital statistics are from the Hungarian Statistical Office. The infant mortality rate (IMR) decreased considerably after World War II: in 1949 the IMR was 91.0; it decreased to 47.6 in 1960, 35.9 in 1970, 23.2 in 1980, 14.8 in 1990 and 9.2 in 2000, and reached an all-time low of 5.1 per 1,000 live births in 2009. There are large variations in birth rates as of 2016: Zala County has the lowest birth rate, with 7.5 births per thousand inhabitants, while Szabolcs-Szatmár-Bereg County has the highest, with 11.2 births per thousand inhabitants. Death rates also differ greatly, from as low as 11.3 deaths per thousand inhabitants in Pest County to as high as 15.7 deaths per thousand inhabitants in Békés County. Demographic statistics below are according to the World Population Review and, unless otherwise indicated, the CIA World Factbook. Religion: Roman Catholic 37.2%, Calvinist 11.6%, Lutheran 2.2%, Greek Catholic 1.8%, other 1.9%, none 18.2%, unspecified 27.2% (2011 est.). Languages: Hungarian (official) 99.6%, English 16%, German 11.2%, Russian 1.6%, Romanian 1.3%, French 1.2%, other 4.2%; note: shares sum to more than 100% because some respondents gave more than one answer on the census; Hungarian is the mother tongue of 98.9% of Hungarian speakers (2011 est.). Sex ratio: "at birth:" 1.06 male(s)/female; "under 15 years:" 1.06 male(s)/female; "15–64 years:" 0.97 male(s)/female; "65 years and over:" 0.57 male(s)/female; "total population:" 0.91 male(s)/female (2009 est.)
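For readers unfamiliar with the measure, the total fertility rate quoted above can be made precise with the standard demographic identity: it is the sum of age-specific fertility rates across the reproductive ages in a given year. The following is a minimal sketch in LaTeX, assuming the conventional 15–49 age range (the exact grouping used by the Hungarian Statistical Office is not specified in this article):

\[
\mathrm{TFR} = \sum_{x=15}^{49} f_x, \qquad
f_x = \frac{\text{live births to women aged } x}{\text{number of women aged } x}.
\]

When rates are published for five-year age groups rather than single years, the sum over the groups is multiplied by five.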
Hungary lost 64% of its total population as a consequence of the Treaty of Trianon, decreasing from 20.9 million to 7.6 million, and 31% (3.3 out of 10.7 million) of its ethnic Hungarians; it also lost five of its ten most populous cities. According to the census of 1910, the largest ethnic group in the Kingdom of Hungary were Hungarians, who made up 54.5% of the population of the Kingdom of Hungary, excluding Croatia-Slavonia. Although the territories of the former Kingdom of Hungary that were assigned by the treaty to neighbouring states had, in total, a majority of non-Hungarian population, they also included areas of Hungarian majority and significant Hungarian minorities, numbering 3,318,000 in total. The number of Hungarians in the different areas is based on census data of 1910 (this census recorded language rather than ethnicity, so the Hungarian figures also include others, mainly Jews, who declared Hungarian as their primary language); the present-day location of each area is given in parentheses. Slovaks, Romanians, Ruthenians, Serbs, Croats and Germans represented the majority of the populations of the above-mentioned territories. According to the 1920 census, 10.4% of the population spoke one of the minority languages as a mother language. The number of bilingual people was much higher: for example, 1,398,729 people spoke German (17%), 399,176 people spoke Slovak (5%), 179,928 people spoke Croatian (2.2%) and 88,828 people spoke Romanian (1.1%). Hungarian was spoken by 96% of the total population and was the mother language of 89%. The percentage and the absolute number of all non-Hungarian nationalities decreased in the following decades, although the total population of the country increased. Note: some 300,000 Hungarian refugees fled to Hungary from the territory of the successor states (Romania, Czechoslovakia and Yugoslavia) after World War I. Hungary expanded its borders and regained territories from Czechoslovakia, Romania and Yugoslavia at the outset of the war. These annexations were affirmed under the Munich Agreement (1938) and the two Vienna Awards (1938 and 1940); Carpathian Ruthenia and parts of Yugoslavia were occupied and annexed in 1939 and 1941, respectively. The population of Northern Transylvania, according to the Hungarian census of 1941, was 53.5% Hungarian and 39.1% Romanian. The territory of Bácska had 789,705 inhabitants, of whom 45.4% or 47.2% declared themselves Hungarian native speakers or ethnic Hungarians. The percentage of Hungarian speakers was 84% in southern Czechoslovakia and 25% in Sub-Carpathian Rus. After World War II, about 200,000 Germans were deported to Germany under the decree of the Potsdam Conference. Under the forced exchange of population between Czechoslovakia and Hungary, approximately 73,000 Slovaks left Hungary. After these population movements, Hungary became an ethnically almost homogeneous country, except for the rapidly growing number of Romani people in the second half of the 20th century. For historical reasons, significant Hungarian minority populations can be found in the surrounding countries, notably in Ukraine (in Transcarpathia), Slovakia, Romania (in Transylvania) and Serbia (in Vojvodina); Austria (Burgenland), Croatia and Slovenia (Prekmurje) are also host to a number of ethnic Hungarians. When the Hungarians invaded the Carpathian Basin, it was inhabited by West Slavic and Avar peoples.
Written sources from the 9th century also suggest that some groups of Onogurs and Bulgars occupied the valley of the river Mureş at the time of the Magyars' invasion. There is a dispute as to whether a Romanian population existed in Transylvania during that time. The first Romani groups arrived in Hungary in the fifteenth century from Turkey. Nowadays, the real number of Roma in Hungary is a disputed question: in the 2001 census only 190,046 people (2%) called themselves Roma, but experts and Roma organisations estimate that between 450,000 and 1,000,000 Roma live in Hungary. Since then, the size of the Roma population has increased rapidly; today every fifth or sixth newborn child belongs to the Roma minority. Based on current demographic trends, a 2006 estimate by Central European Management Intelligence claims that the proportion of the Roma population will double by 2050, reaching around 14–15% of the country's population. There are problems related to the Roma minority in Hungary, and the subject is a heated and disputed topic. Three Kabar tribes joined the Hungarians and participated in the Hungarian conquest of the Carpathian Basin; they settled mostly in Bihar County. The Muslim Böszörménys migrated to the Carpathian Basin in the course of the 10th–12th centuries and were composed of various ethnic groups; most of them must have arrived from Volga Bulgaria and Khwarezm. Communities of Pechenegs (Besenyő in Hungarian) lived in the Kingdom of Hungary from the 11th–12th centuries; they were most numerous in the county of Tolna. Smaller groups of Oghuz Turk settlers ("Úzok" or "Fekete Kunok"/Black Cumans in Hungarian) came to the Carpathian Basin from the middle of the 11th century and were settled mostly in Barcaság; the city of Ózd got its name from them. The Jassic (Jász in Hungarian) people were a nomadic tribe which settled, together with the Cumans, in the Kingdom of Hungary during the 13th century. Their name is almost certainly related to that of the Iazyges. Béla IV, king of Hungary, granted them asylum, and they became a privileged community with the right of self-government. Over the centuries they were fully assimilated into the Hungarian population and their language disappeared, but they preserved their Jassic identity and their regional autonomy until 1876. Over a dozen settlements in Central Hungary (e.g. Jászberény, Jászárokszállás, Jászfényszaru) still bear their name. During the Russian campaign, the Mongols drove some 200,000 Cumans, a nomadic tribe who had opposed them, west of the Carpathian Mountains, where the Cumans appealed to King Béla IV of Hungary for protection. In the Kingdom of Hungary, the Cumans created two regions named Cumania ("Kunság" in Hungarian): Greater Cumania ("Nagykunság") and Little Cumania ("Kiskunság"), both located in the Great Hungarian Plain. Here the Cumans maintained their autonomy, language and some ethnic customs well into the modern era; according to Pálóczi's estimate, originally 70,000–80,000 Cumans settled in Hungary. The oldest extant documents from Transylvania make reference to Vlachs too. Regardless of the question of Romanian presence or absence in Transylvania prior to the Hungarian conquest, the first chronicle to write of Vlachs in the intra-Carpathian regions is the "Gesta Hungarorum", while the first written Hungarian sources about Romanian settlements derive from the 13th century: a record from 1283 mentions the village of Olahteluk in Bihar County.
The 'land of Romanians', Terram Blacorum (1222, 1280), appeared in Fogaras, and this area was mentioned under a different name (Olachi) in 1285. The first appearance of a supposed Romanian name, 'Ola', in Hungary derives from a charter of 1258. Romanians were a significant population in Transylvania, Banat, Maramureș and Partium. There are different estimates of the number of Romanians in the Kingdom of Hungary. According to research based on place-names, 511 villages of Transylvania and Banat appear in documents at the end of the 13th century, yet only 3 of them bore Romanian names; around 1400 AD, Transylvania and Banat comprised 1,757 villages, though only 76 (4.3%) of them were Romanian. The same research suggests that the number of Romanians began to increase significantly from the early modern period, and that by 1700 the Romanian ethnic group made up 40 percent of the Transylvanian population, with their numbers rising even further in the 18th century. According to other estimates, however, the Romanian inhabitants, who were primarily peasants, made up more than 60 percent of the population in 1600. Jean W. Sedlar estimates that Vlachs (Romanians) constituted about two-thirds of Transylvania's population in 1241, on the eve of the Mongol invasion; according to other research, however, the Hungarian ethnic group in Transylvania held a clear majority before the Battle of Mohács and only lost its relative majority by the 17th century. Official censuses with information on Hungary's ethnic composition have been conducted since the 19th century. In 1881, Romanian-majority settlements projected onto the present-day territory of Hungary were Bedő, Csengerújfalu, Kétegyháza, Körösszakál, Magyarcsanád, Méhkerék, Mezőpeterd, Pusztaottlaka and Vekerd; important communities lived in Battonya, Elek, Gyula, Körösszegapáti, Létavértes, Nyíradony, Pocsaj, Sarkadkeresztúr and Zsáka. The Slovak people lived mainly in Upper Hungary, the northern part of the Kingdom of Hungary. Owing to post-Ottoman resettlements, the regions of Vojvodina and Banat and Békés County received larger Slovak communities in the 18th century. After World War II, a major population exchange with Czechoslovakia was carried out: about 73,000 Slovaks were transferred to Slovakia, replaced by a comparable number of Hungarians. From the 14th century, escaping the Ottoman threat, a large number of Serbs migrated to the Hungarian Kingdom. After the Battle of Mohács, most of the territory of Hungary came under Ottoman rule. During that period, especially in the 17th century, many Serbs and other South Slavic immigrants settled in Hungary; most of the Ottoman soldiers in the territory of present-day Hungary were South Slavs (Janissaries). After the Turkish withdrawal, the Kingdom of Hungary came under Habsburg rule, and a new wave of Serb refugees migrated to the area around 1690 as a consequence of the Habsburg–Ottoman war. In the first half of the 18th century, Serbs and South Slavs formed the ethnic majority in several cities in Hungary. Three waves of German migration can be distinguished in Hungary before the 20th century. The first two waves of settlers arrived in the Hungarian Kingdom in the Middle Ages (11th and 13th centuries), in Upper Hungary and in southern Transylvania (the Transylvanian Saxons). The third and largest wave of German-speaking immigrants arrived after the withdrawal of the Ottoman Empire from Hungarian territory, following the Treaty of Karlowitz.
Between 1711 and 1780, German-speaking settlers immigrated to the regions of southern Hungary, mostly the Banat region and Bács-Bodrog, Baranya and Tolna counties (as well as into present-day Romania and Yugoslavia), which had been depopulated by the Ottoman wars. At the end of the 18th century, the Kingdom of Hungary contained over one million German-speaking residents (collectively known as Danube Swabians). In 2011, 131,951 people in Hungary declared themselves German (1.6%). Rusyns lived mostly in Carpathian Ruthenia, in northeast Hungary; from the 18th century, however, a significant Rusyn population also appeared in Vojvodina. Croatia was in personal union with Hungary from 1102. Croat communities were spread mostly across the western and southern parts of the country and along the Danube, including Budapest. The Poles lived along the northern borders of the Kingdom of Hungary from the arrival of the Hungarians. The Slovenes ("Vendek" in Hungarian) lived in the western part of the Carpathian Basin before the Hungarian conquest; in the 11th and 12th centuries, the current linguistic and ethnic border between the Hungarian and Slovene peoples was established. Nowadays they live in Vendvidék ("Slovenska krajina" in Slovene), between the Mura and Rába rivers. In 2001, there were around 5,000 Slovenes in Hungary. The first historical document about the Jews of Hungary is a letter written about 960 to King Joseph of the Khazars by Hasdai ibn Shaprut, the Jewish statesman of Córdoba, in which he mentions Jews living in "the country of Hungarin"; there are also Jewish inscriptions on tombs and monuments in Pannonia (Roman Hungary) dated to the second or third century CE. The first Armenians came to Hungary from the Balkans in the 10th–11th centuries. Greeks migrated to the Kingdom of Hungary from the 15th and 16th centuries, but mass migrations did not occur until the 17th century, the largest waves being in 1718 and 1760–1770; these were primarily connected to the economic conditions of the period. It is estimated that 10,000 Greeks emigrated to Hungary in the second half of the 18th century. A number of Greek Communists escaped to Hungary after the Greek Civil War, notably to the "Greek" village of Beloiannisz. The town of Szentendre and the surrounding villages have been inhabited by Bulgarians since the Middle Ages; however, present-day Bulgarians are largely descended from gardeners who migrated to Hungary from the 18th century onwards. The majority of Hungarians became Christian in the 11th century. Hungary remained predominantly Catholic until the 16th century, when the Reformation took place and, as a result, first Lutheranism and then soon afterwards Calvinism became the religion of almost the entire population. In the second half of the 16th century, however, Jesuits led a successful campaign of counter-reformation among the Hungarians, although Protestantism survived as the faith of a significant minority, especially in the far east and northeast of the country. Orthodox Christianity in Hungary has mainly been the religion of some national minorities, notably Romanians, Rusyns, Ukrainians and Serbs. Faith Church, one of Europe's largest Pentecostal churches, is also located in Hungary. Hungary has historically been home to a significant Jewish community. According to 2011 census data, Christianity is the largest religion in Hungary, with around 5.2 million adherents (52.9%), while the largest denomination is Catholicism (38.9%: Roman Catholicism 37.1%, Greek Catholicism 1.8%).
There is a significant Calvinist minority (11.6% of the population) and smaller Lutheran (2.2%), Orthodox (0.1%) and Jewish (0.1%) minorities. However, these census figures reflect religious affiliation rather than attendance: around 12% of Hungarians attend religious services at least once a week and around 50% more than once a year, while 30% of Hungarians do not believe in God at all. The census also showed a large drop in the share of respondents declaring a religious affiliation, from 74.6% to 54.7% in ten years, the difference being made up of people who either did not wish to answer or do not follow a religion.
https://en.wikipedia.org/wiki?curid=13426
Politics of Hungary Politics of Hungary takes place in a framework of a parliamentary representative democratic republic. The Prime Minister is the head of government of a pluriform multi-party system, while the President is the head of state and holds a largely ceremonial position. Executive power is exercised by the government. Legislative power is vested in both the government and the parliament. The party system since the last elections has been dominated by the conservative Fidesz. The two larger opposition parties are the Hungarian Socialist Party (MSZP) and Jobbik; there are also opposition parties with no formal faction but with representation in parliament (e.g. Politics Can Be Different). The judiciary is independent of the executive and the legislature. Hungary is an independent, democratic and constitutional state, which has been a member of the European Union since 2004. Since 1989, Hungary has been a parliamentary republic. Legislative power is exercised by the unicameral National Assembly, which consists of 199 members elected for four years. The President of the Republic, elected by the National Assembly every five years, has a largely ceremonial role, but is nominally the Commander-in-Chief of the armed forces, and his powers include nominating the Prime Minister, who is then elected by a majority of the votes of the Members of Parliament. If the President dies, resigns or is otherwise unable to carry out his duties, the Speaker of the National Assembly becomes acting President. Under the Hungarian Constitution, which was modelled on the post-World War II Basic Law of the Federal Republic of Germany, the Prime Minister has a leading role in the executive branch, as he selects Cabinet ministers and has the exclusive right to dismiss them (similar to the competences of the German federal chancellor). Each cabinet nominee appears before one or more parliamentary committees in consultative open hearings, must survive a vote by Parliament, and must be formally approved by the President. In Communist Hungary, the executive branch of the People's Republic of Hungary was represented by the Council of Ministers. The unicameral, 199-member National Assembly ("Országgyűlés") is the highest organ of state authority and initiates and approves legislation sponsored by the Prime Minister; its members are elected for a four-year term. The election threshold is 5%, but it applies only to the multi-seat constituencies and the compensation seats, not to the single-seat constituencies. A fifteen-member Constitutional Court has the power to challenge legislation on grounds of unconstitutionality; this body was last filled in July 2010, and its members are elected for a term of twelve years. The President of the Supreme Court of Hungary and the Hungarian civil and penal legal system he leads are fully independent of the executive branch. The Attorney General or Chief Prosecutor of Hungary is currently fully independent of the executive branch, but his status is actively debated. Several ombudsman offices exist in Hungary to protect civil, minority, educational and ecological rights in non-judicial matters; they have held the authority to issue legally binding decisions since late 2003. The central bank, the Hungarian National Bank, was fully self-governing between 1990 and 2004, but new legislation in November 2004 gave certain appointment rights to the executive branch, a move disputed before the Constitutional Court.
Hungary is divided into 19 counties ("megyék", singular "megye"), 23 urban counties* ("megyei jogú városok", singular "megyei jogú város"), and 1 capital city** ("főváros"): Bács-Kiskun, Baranya, Békés, Békéscsaba*, Borsod-Abaúj-Zemplén, Budapest**, Csongrád, Debrecen*, Dunaújváros*, Eger*, Érd*, Fejér, Győr*, Győr-Moson-Sopron, Hajdú-Bihar, Heves, Hódmezővásárhely*, Jász-Nagykun-Szolnok, Kaposvár*, Kecskemét*, Komárom-Esztergom, Miskolc*, Nagykanizsa*, Nógrád, Nyíregyháza*, Pécs*, Pest, Salgótarján*, Somogy, Sopron*, Szabolcs-Szatmár-Bereg, Szeged*, Szekszárd*, Székesfehérvár*, Szolnok*, Szombathely*, Tatabánya*, Tolna, Vas, Veszprém, Veszprém*, Zala, Zalaegerszeg*. Hungary is a member of the ABEDA, Australia Group, BIS, CE, CEI, CERN, CEPI, EAPC, EBRD, ECE, EU (member since 1 May 2004), FAO, G-9, IAEA, IBRD, ICAO, ICC, ICRM, IDA, IEA, IFC, IFRCS, ILO, IMF, IMO, Inmarsat, Intelsat, Interpol, IOC, IOM, ISO, ITU, ITUC, NAM (guest), NATO, NEA, NSG, OAS (observer), OECD, OPCW, OSCE, PCA, PFP, SECI, UN, UNCTAD, UNESCO, UNFICYP, UNHCR, UNIDO, UNIKOM, UNMIBH, UNMIK, UNOMIG, UNU, UPU, WCO, WEU (associate), WFTU, the Visegrád Group, WHO, WIPO, WMO, WToO, WTrO, and the Zangger Committee. "Note: with restructuring and reorganization, this information may change even within a governmental period."
https://en.wikipedia.org/wiki?curid=13427
Economy of Hungary The economy of Hungary is a high-income mixed economy, ranked as the 10th most complex economy according to the Economic Complexity Index. Hungary is an OECD member with a very high human development index and a skilled labour force, and has the 13th lowest income inequality in the world. The Hungarian economy is the 57th-largest in the world (out of 188 countries measured by the IMF), with $265.037 billion of annual output, and ranks 49th in the world in GDP per capita measured by purchasing power parity. Hungary has an export-oriented market economy with a heavy emphasis on foreign trade and is the 35th-largest export economy in the world. The country had more than $100 billion of exports in 2015, with a high trade surplus of $9.003 billion, of which 79% went to the EU and 21% was extra-EU trade. Hungary's productive capacity is more than 80% privately owned, with 39.1% overall taxation, which funds the country's welfare economy. On the expenditure side, household consumption is the main component of GDP, accounting for 50% of the total, followed by gross fixed capital formation with 22% and government expenditure with 20%. In 2009 Hungary, owing to strong economic difficulties, had to request about €9 billion of assistance from the IMF. Hungary continues to be one of the leading nations in Central and Eastern Europe for attracting foreign direct investment: inward FDI was $119.8 billion in 2015, while Hungary has invested more than $50 billion abroad. As of 2015, the key trading partners of Hungary were Germany, Austria, Romania, Slovakia, France, Italy, Poland and the Czech Republic. Major industries include food processing, pharmaceuticals, motor vehicles, information technology, chemicals, metallurgy, machinery, electrical goods, and tourism (in 2014 Hungary welcomed 12.1 million international tourists). Hungary is the largest electronics producer in Central and Eastern Europe, and electronics manufacturing and research are among the main drivers of innovation and economic growth in the country. In the past 20 years Hungary has also grown into a major center for mobile technology, information security, and related hardware research. The employment rate was 68.7% in January 2017; the employment structure shows the characteristics of a post-industrial economy: 63.2% of the employed workforce works in the service sector, industry accounts for 29.7%, and agriculture employs 7.1%. The unemployment rate was 3.8% in September–November 2017, down from 11% during the financial crisis of 2007–08. Hungary is part of the European single market, which represents more than 508 million consumers. Several domestic commercial policies are determined by agreements among European Union members and by EU legislation. Large Hungarian companies are included in the BUX, the Hungarian stock market index listed on the Budapest Stock Exchange. Well-known companies include MOL Group, OTP Bank, Gedeon Richter Plc., Magyar Telekom, CIG Pannonia, FHB Bank and Zwack Unicum; Hungary also has a large number of specialised small and medium enterprises, including many automotive suppliers and technology start-ups. Budapest is the financial and business capital of Hungary. 
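The expenditure-side breakdown above can be sanity-checked against the standard GDP identity (GDP = C + I + G + NX). The short Python sketch below treats the unstated remainder as net exports plus inventory changes; that residual interpretation is an assumption, though it is consistent with the trade surplus mentioned above.

```python
# Sketch: expenditure-side GDP identity using the shares quoted in the text.
# Treating the residual as net exports plus inventory changes is an assumption.
shares = {
    "household_consumption": 50.0,          # % of GDP
    "gross_fixed_capital_formation": 22.0,  # % of GDP
    "government_expenditure": 20.0,         # % of GDP
}
residual = 100.0 - sum(shares.values())
print(f"Implied net exports and inventory changes: {residual:.1f}% of GDP")
# -> 8.0% of GDP, consistent with a positive trade balance
```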
The capital is a significant economic hub, classified as an Alpha- world city in the study by the Globalization and World Cities Research Network, and it is the second fastest-developing urban economy in Europe: per capita GDP in the city increased by 2.4% and employment by 4.7% compared with the previous year, 2014. On the national level, Budapest is the primary city of Hungary for business, accounting for 39% of the national income. The city had a gross metropolitan product of more than $100 billion in 2015, making it one of the largest regional economies in the European Union. Budapest is also among the top 100 GDP-performing cities in the world, as measured by PricewaterhouseCoopers. In a global city competitiveness ranking by the EIU, Budapest ranks above Tel Aviv, Lisbon, Moscow and Johannesburg, among others. Hungary maintains its own currency, the Hungarian forint (HUF). The economy fulfills the Maastricht criteria with the exception of public debt, though the ratio of public debt to GDP, at 66.4% in 2019, is significantly below the EU average. The Hungarian National Bank—founded in 1924, after the dissolution of the Austro-Hungarian Empire—currently focuses on price stability, with an inflation target of 3%. In the age of feudalism the key economic factor was land. The new economic and social order created private ownership of land, which existed in three forms: the royal, the ecclesiastical and the secular private estate. The royal estate of the Árpád dynasty evolved from the tribal lands. Secular private holdings originated in the tribal common estates of the conquest period, which increasingly passed into the private ownership of the emerging leaders of society; from the founding of the state, royal land grants further multiplied secular private property, and land granted by the king reverted to the crown if its holder died without a lineal heir. The law changed in 1351: the nobility's right of free disposal over its possessions was abolished, and nobles were forbidden to sell their inherited land. The Carpathian Basin was more suitable for agriculture than for large-scale livestock grazing, so the weight of the former grew steadily. In the 11th and 12th centuries natural farming coexisted with a field-rotation tillage system: animals grazed the land, and the fertilized soil was then cultivated until depleted. The most important agricultural tools were the plow and the ox. The Hungarian economy prior to World War II was primarily oriented toward agriculture and small-scale manufacturing. Hungary's strategic position in Europe and its relative lack of natural resources also dictated a traditional reliance on foreign trade. For instance, its largest car manufacturer, Magomobil (maker of the "Magosix"), produced a total of only a few thousand units. In the early 1920s the textile industry began to expand rapidly, and by 1928 it had become the most important industry in Hungary's foreign trade, exporting textile goods worth more than 60 million pengős in that year. Companies like MÁVAG exported locomotives to India and South America; its locomotive no. 601 was the largest and most powerful in Europe at the time. From the late 1940s, the Communist government started to nationalize the industry. 
At first, only factories with more than 100 workers were nationalized; later, this limit was reduced to only 10. In agriculture, the government started a program of collectivization. From the early 1950s, more and more new factories were built. This rapid and forced industrialization followed the standard Stalinist pattern in an effort to encourage a more self-sufficient economy. Most economic activity was conducted by state-owned enterprises or cooperatives and state farms. In 1968, Stalinist self-sufficiency was replaced by the "New Economic Mechanism", which reopened Hungary to foreign trade, gave limited freedom to the workings of the market, and allowed a limited number of small businesses to operate in the services sector. Although Hungary enjoyed one of the most liberal and economically advanced economies of the former Eastern Bloc, both agriculture and industry began to suffer from a lack of investment in the 1970s, and Hungary's net foreign debt rose significantly—from $1 billion in 1973 to $15 billion in 1993—due largely to consumer subsidies and unprofitable state enterprises. In the face of economic stagnation, Hungary opted to liberalize further by passing a joint venture law, instituting an income tax, and joining the International Monetary Fund (IMF) and the World Bank. By 1988, Hungary had developed a two-tier banking system and had enacted significant corporate legislation that paved the way for the ambitious market-oriented reforms of the post-communist years. After the fall of communism, the former Eastern Bloc had to transition from a one-party, centrally planned economy to a market economy with a multi-party political system. With the collapse of the Soviet Union, the Eastern Bloc countries suffered a significant loss of both markets for their goods and subsidies from the Soviet Union. Hungary, for example, "lost nearly 70% of its export markets in Eastern and Central Europe." The loss of external markets left Hungary with "800,000 unemployed people because all the unprofitable and unsalvageable factories had been closed." The end of Soviet subsidies also hit social welfare: lacking the subsidies and needing to reduce expenditures, Hungary had to cut many social programs in an attempt to lower spending. As a result, many people in Hungary suffered severe hardship during the transition to a market economy. Following privatization and tax reductions on Hungarian businesses, unemployment suddenly rose to 12% in 1991 (it had been 1.7% in 1990), gradually decreasing until 2001. Economic growth, after a fall to −11.9% in 1991, gradually recovered, averaging 4.2% annually by the end of the 1990s. With the stabilization of the new market economy, Hungary experienced growth in foreign investment, with "cumulative foreign direct investment totaling more than $60 billion since 1989." The Antall government of 1990–94 began market reforms with price and trade liberalization measures, a revamped tax system, and a nascent market-based banking system. By 1994, however, the costs of government overspending and hesitant privatization had become clearly visible. Cuts in consumer subsidies led to increases in the price of food, medicine, transportation services, and energy. Reduced exports to the former Soviet bloc and shrinking industrial output contributed to a sharp decline in GDP. Unemployment rose rapidly to about 12% in 1993. 
The external debt burden, one of the highest in Europe, reached 250% of annual export earnings, while the budget and current account deficits approached 10% of GDP. Devaluation of the currency (to support exports), without effective stabilization measures such as wage indexation, provoked an extremely high inflation rate, which reached 35% in 1991, decreased slightly until 1994, and grew again in 1995. In March 1995, the government of Prime Minister Gyula Horn implemented an austerity program, coupled with aggressive privatization of state-owned enterprises and an export-promoting exchange rate regime, to reduce indebtedness, cut the current account deficit, and shrink public spending. By the end of 1997 the consolidated public sector deficit had decreased to 4.6% of GDP—with public sector spending falling from 62% of GDP to below 50%—the current account deficit had been reduced to 2% of GDP, and government debt had been paid down to 94% of annual export earnings. The Government of Hungary no longer requires IMF financial assistance and has repaid all of its debt to the fund. Consequently, Hungary enjoys favorable borrowing terms. Hungary's sovereign foreign currency debt issuance carries investment-grade ratings from all major credit-rating agencies, although the country was recently downgraded by Moody's and S&P, and it remains on negative outlook at Fitch. In 1995 Hungary's currency, the forint (HUF), became convertible for all current account transactions and, following OECD membership in 1996, for almost all capital account transactions as well. From 1995 Hungary pegged the forint against a basket of currencies (in which the U.S. dollar was 30%), with the central rate against the basket devalued at a preannounced crawl, originally set at 0.8% per month; the forint is now an entirely free-floating currency. The government privatization program ended on schedule in 1998: 80% of GDP is now produced by the private sector, and foreign owners control 70% of financial institutions, 66% of industry, 90% of telecommunications, and 50% of the trading sector. After Hungary's GDP declined about 18% from 1990 to 1993 and grew only 1%–1.5% up to 1996, strong export performance propelled GDP growth to 4.4% in 1997, with other macroeconomic indicators improving similarly. These successes allowed the government to concentrate in 1996 and 1997 on major structural reforms such as the implementation of a fully funded pension system (partly modelled on Chile's pension system, with major modifications), reform of higher education, and the creation of a national treasury. Remaining economic challenges include reducing fiscal deficits and inflation, maintaining stable external balances, and completing structural reforms of the tax system, health care, and local government financing. More recently, the overriding goal of Hungarian economic policy has been to prepare the country for entry into the European Union, which it joined in May 2004. Prior to the change of regime in 1989, 65% of Hungary's trade was with Comecon countries. By the end of 1997, Hungary had shifted much of its trade to the West. Trade with EU countries and the OECD now comprises over 70% and 80% of the total, respectively. Germany is Hungary's single most important trading partner. The US has become Hungary's sixth-largest export market, while Hungary is ranked as the 72nd-largest export market for the U.S. Bilateral trade between the two countries increased 46% in 1997 to more than $1 billion. The U.S. 
has extended to Hungary most-favored-nation status, the Generalized System of Preferences, Overseas Private Investment Corporation insurance, and access to the Export-Import Bank. With about $18 billion in foreign direct investment (FDI) since 1989, Hungary has attracted over one-third of all FDI in Central and Eastern Europe, including the former Soviet Union. Of this, about $6 billion came from American companies. Foreign capital is attracted by skilled and relatively inexpensive labor, tax incentives, modern infrastructure, and a good telecommunications system. By 2006 Hungary's economic outlook had deteriorated. Wage growth had kept up with other nations in the region; however, this growth was largely driven by increased government spending, which caused the budget deficit to balloon to over 10% of GDP, with inflation predicted to exceed 6%. This prompted Nouriel Roubini, a White House economist in the Clinton administration, to state that "Hungary is an accident waiting to happen." In January 1990, the State Privatization Agency (SPA, "Állami Vagyonügynökség") was established to manage the first steps of privatization. Because of Hungary's $21.2 billion foreign debt, the government decided to sell state property instead of distributing it to the people for free. The SPA was attacked by populist groups because several companies' management had the right to find buyers and discuss sale terms with them, thus "stealing" the company. Another reason for discontent was that the state offered large tax subsidies and environmental investments, which sometimes cost more than the selling price of the company. Along with the acquisition of companies, foreign investors launched many "greenfield investments". The center-right Hungarian Democratic Forum government of 1990–1994 decided to dismantle agricultural co-operatives by splitting them up and giving machinery and land to their former members. The government also introduced a Recompensation Law, which offered vouchers to people who had owned land before it was nationalized in 1948. These people (or their descendants) could exchange their vouchers for land previously owned by agricultural co-operatives, which were forced to give up some of their land for this purpose. Small stores and retail businesses were privatized between 1990 and 1994; however, greenfield investments by foreign retail companies like Tesco, Cora and IKEA had a much bigger economic impact. Many public utilities, including the national telecommunications company Matáv, the national oil and gas conglomerate MOL Group, and electricity supply and production companies, were privatized as well. Though most banks were sold to foreign investors, the largest bank, National Savings Bank (OTP), remained Hungarian-owned: 20% of its shares were sold to foreign institutional investors and another 20% given to the social security organizations, 5% were bought by employees, and 8% were offered on the Budapest Stock Exchange. By 1995 Hungary's fiscal indicators had deteriorated: foreign investment fell, as did foreign analysts' assessments of the economic outlook. Owing to high demand for imported goods, Hungary also had a high trade deficit and budget gap, and it could not reach an agreement with the IMF either. After going without a minister of finance for more than a month, Prime Minister Gyula Horn appointed Lajos Bokros as Finance Minister on 1 March 1995. 
He introduced a string of austerity measures (the "Bokros Package") on 12 March 1995, with the following key points: a one-time 9% devaluation of the forint, the introduction of a constant crawling devaluation, an 8% additional customs duty on all goods except energy sources, limits on the growth of public-sector wages, and simplified, accelerated privatization. The package also included welfare cutbacks: the abolition of free higher education and dental service; family allowances, child-care benefits and maternity payments reduced and made dependent on income and wealth; lower subsidies on pharmaceuticals; and a higher retirement age. These reforms increased investor confidence and were supported by the IMF and the World Bank, but they were not widely welcomed by Hungarians; Bokros set a record low in popularity: only 9% of the population wanted to see him in an "important political position" and only 4% were convinced that the reforms would "improve the country's finances in a big way". In 1996, the Ministry of Finance introduced a new pension system in place of the fully state-backed one: private pension savings accounts were introduced in a mixed scheme, half social-security based and half funded. In 2006 Prime Minister Ferenc Gyurcsány was reelected on a platform promising economic "reform without austerity". However, after the elections in April 2006, the Socialist coalition under Gyurcsány unveiled a package of austerity measures designed to reduce the budget deficit to 3% of GDP by 2008. Because of the austerity program, the economy of Hungary slowed down in 2007. Declining exports, reduced domestic consumption and lower fixed asset accumulation hit Hungary hard during the financial crisis of 2008, pushing the country into a severe recession, with GDP contracting 6.4%, one of the worst economic contractions in its history. On 27 October 2008, Hungary reached an agreement with the IMF and the EU on a rescue package of US$25 billion, aiming to restore financial stability and investor confidence. Because of the uncertainty of the crisis, banks extended fewer loans, which led to a decrease in investment. This, along with price-consciousness and fear of bankruptcy, led to a fall in consumption, which in turn increased job losses and decreased consumption even further. Inflation did not rise significantly, but real wages decreased. The sharp rise of the euro and the Swiss franc against the forint affected a great many people: according to The Daily Telegraph, "statistics show that more than 60 percent of Hungarian mortgages and car loans are denominated in foreign currencies". After the election in 2010 of the new Fidesz government of Prime Minister Viktor Orbán, Hungarian banks were forced to allow the conversion of foreign-currency mortgages to forints. The new government also nationalised $13 billion of private pension-fund assets, which could then be used to support the government debt position. The economy showed signs of recovery in 2011, with decreasing tax rates and moderate GDP growth of 1.7 percent. From November 2011 to January 2012, all three major credit rating agencies downgraded Hungarian debt to non-investment speculative grade, commonly called "junk status", in part because of political changes creating doubts about the independence of the Hungarian National Bank. 
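The squeeze on foreign-currency borrowers described above is simple arithmetic: the installment is fixed in francs or euros, so a weaker forint raises the payment in forints one-for-one. The sketch below illustrates this with hypothetical numbers; the installment and both exchange rates are illustrative assumptions, not figures from the text.

```python
# Hypothetical illustration of foreign-currency mortgage exposure.
# All numbers below are assumed for the example, not actual loan terms.
monthly_payment_chf = 800.0   # installment fixed in Swiss francs
huf_per_chf_before = 145.0    # assumed pre-crisis HUF/CHF rate
huf_per_chf_after = 240.0     # assumed post-crisis HUF/CHF rate

before_huf = monthly_payment_chf * huf_per_chf_before
after_huf = monthly_payment_chf * huf_per_chf_after
print(f"Installment: {before_huf:,.0f} -> {after_huf:,.0f} HUF "
      f"(+{after_huf / before_huf - 1:.0%})")
# The forint installment rises exactly in proportion to the exchange rate.
```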
European Commission President José Manuel Barroso wrote to Prime Minister Viktor Orbán stating that new central bank regulations allowing political intervention would "seriously harm" Hungary's interests, postponing talks on a financial aid package. Orbán responded: "If we don't reach an agreement, we'll still stand on our own feet." The European Commission launched legal proceedings against Hungary on 17 January 2012, concerning Hungary's central bank law, the retirement age for judges and prosecutors, and the independence of the data protection office. One day later Orbán indicated in a letter his willingness to find solutions to the problems raised in the infringement proceedings. On 18 January he participated in a plenary session of the European Parliament which also dealt with the Hungarian case. He said "Hungary has been renewed and reorganised under European principles" and that the problems raised by the European Union could be resolved "easily, simply and very quickly", adding that none of the EC's objections affected Hungary's new constitution. Following the mild recession of 2012, GDP picked up again from 2014, and based on the Commission's Winter 2015 forecast it was projected to have accelerated to 3.3%. The more dynamic economic performance was attributed to moderately growing domestic demand and was supported by the growth of gross fixed capital formation. The surge (3.8% in the first half of 2014), however, was only achieved via temporary measures and factors, such as the stepped-up absorption of EU funds and the central bank's Funding for Growth Scheme, which subsidised loans for small and medium-sized enterprises. The fundamentals of growth did not change considerably in 2015 either: government-supported EU-fund transfers, along with the central bank's moderately successful revitalization loans, fueled fair GDP growth. Hungary's total land area is 93,030 km2, along with 690 km2 of water surface area, which altogether makes up 1% of Europe's area. Nearly 75% of Hungary's landscape consists of flat plains. A further 20% of the country's area consists of foothills whose altitude is 400 m at most; higher hills and water surfaces make up the remaining 5%. The two flat plains that take up three quarters of Hungary's area are the Great Hungarian Plain and the Little Hungarian Plain. Hungary's most significant natural resource is arable land. About 83% of the country's total territory is suitable for cultivation; of this portion, 75% (around 50% of the country's area) is covered by arable land, an outstanding ratio compared to other EU countries. Hungary lacks extensive domestic sources of energy and raw materials needed for further industrial development. 19% of the country is covered by forests, located mainly in the foothills such as the North Hungarian and Transdanubian Mountains and the Alpokalja. The composition of the forests varies: mostly oak and beech, with the rest including fir, willow, acacia and plane. In European terms, Hungary's underground water reserve is one of the largest. Hence the country is rich in brooks and hot springs as well as medicinal springs and spas; as of 2003, there are 1,250 springs that provide water warmer than 30 °C. About 90% of Hungary's drinking water is retrieved from such sources. The major rivers of Hungary are the Danube and the Tisza. The Danube also flows through parts of Germany, Austria, Slovakia, Serbia, and Romania. It is navigable within Hungary for 418 km. 
The Tisza River is navigable for 444 km in the country. Hungary has three major lakes. Lake Balaton, the largest, is 78 km long and from 3 to 14 km wide, with an area of 592 km2. Lake Balaton is Central Europe's largest lake and a prosperous tourist spot and recreation area. Its shallow waters offer summer bathing, and in winter its frozen surface provides facilities for winter sports. Smaller bodies of water include Lake Velence (26 km2) in Fejér County and Lake Fertő (82 km2 within Hungary). Hungary has 31,058 km of roads, including 1,118 km of motorways. The total length of motorways has doubled in the last ten years, with the most kilometres (106) built in 2006. Budapest is directly connected to the Austrian, Slovakian, Slovenian, Croatian, Romanian and Serbian borders via motorways. Due to its location and geographical features, several transport corridors cross Hungary: pan-European corridors no. IV, V, and X, and European routes no. E60, E71, E73, E75, and E77 pass through the country. Thanks to the radial road system, all of these routes touch Budapest. There are five international, four domestic, four military and several non-public airports in Hungary. The largest airport is Budapest Ferihegy International Airport (BUD), located at the southeastern border of Budapest. In 2008, the airport had 3,866,452 arriving and 3,970,951 departing passengers. In 2006, the Hungarian railroad system was 7,685 km long, of which 2,791 km were electrified. Electricity is available in every settlement in Hungary. Piped gas is available in 2,873 settlements, 91.1% of the total. To avoid gas shortages caused by Ukrainian pipeline shutdowns, like the one in January 2009, Hungary participates in both the Nabucco and the South Stream gas pipeline projects. Hungary also keeps strategic gas reserves: the latest reserve, of 1.2 billion cubic meters, was opened in October 2009. In 2008, 94.9% of households had running water. Though it is the responsibility of municipal governments to provide people with a healthy water supply, the Hungarian government and the European Union offer subsidies to those who wish to develop water supplies or sewage systems. Partly because of these subsidies, 71.3% of all dwellings are connected to the sewage system, up from 50.1% in 2000. Internet penetration has risen significantly over the past few years: the ratio of households with an internet connection rose from 22.1% (49% of which was broadband) in 2005 to 48.4% (87.3% of which was broadband) in 2008. The Ministry of Economy and Transport introduced the eHungary program in 2004, aiming to provide every person in Hungary with internet access by setting up "eHungary points" in public spaces such as libraries, schools and cultural centers. The program also includes "the introduction of the eCounsellor network – a service through which professionals provide assistance for citizens in the effective usage of electronic information, services and knowledge". Agriculture accounted for 4.3% of GDP in 2008 and, along with the food industry, occupied roughly 7.7% of the labor force. These two figures represent only primary agricultural production: together with related businesses, agriculture makes up about 13% of GDP. Hungarian agriculture is virtually self-sufficient and, for historical reasons, export-oriented: agriculture-related exports make up 20–25% of the total. About half of Hungary's total land area is agricultural area under cultivation, a ratio that stands out among EU members. 
This is due to the country's favorable conditions, including a continental climate and the plains that make up about half of Hungary's landscape. The most important crops are wheat, corn, sunflower, potato, sugar beet, canola and a wide variety of fruits (notably apple, peach, pear, grape, watermelon and plum). Hungary has several wine regions, producing among others the world-famous white dessert wine Tokaji and the red Bull's Blood. Another traditional world-famous alcoholic drink is the fruit brandy "pálinka". Mainly cattle, pigs, poultry and sheep are raised in the country. The livestock includes the Hungarian Grey Cattle, a major tourist attraction in the Hortobágy National Park. An important component of the country's gastronomic heritage is foie gras, with about 33,000 farmers engaged in the industry; Hungary is the world's second-largest producer and the biggest exporter of foie gras (exporting mainly to France). Another symbol of Hungarian agriculture and cuisine is "paprika" (both sweet and hot types). The country is one of the world's leading paprika producers, with Szeged and Kalocsa the centres of production. Hungary has a tax-funded universal healthcare system, organized by the state-owned National Healthcare Fund. Health insurance contributions are not directly paid by children, mothers or fathers caring for babies, students, pensioners, people from socially disadvantaged backgrounds, people with physical or mental disabilities, priests and other church employees. Health in Hungary is characterized by rapidly growing life expectancy and a very low infant mortality rate (4.9 per 1,000 live births in 2012). Hungary spent 7.4% of GDP on health care in 2009 (up from 7.0% in 2000), lower than the OECD average. Total health expenditure was US$1,511 per capita in 2009: US$1,053 government-funded (69.7%) and US$458 privately funded (30.3%). It has since risen to US$2,047 per capita (as per 2018 data), roughly a 33% increase in total, with the government funding US$1,439 (70.3%) versus private funding of US$608 (29.7%); this amount totals 6.6% of the country's GDP, roughly a one-percentage-point decrease overall. The main sectors of Hungarian industry are heavy industry (mining, metallurgy, machine and steel production), energy production, mechanical engineering, chemicals, the food industry and automobile production. Industry leans mainly on processing and (including construction) accounted for 29.32% of GDP in 2008. Because of its sparse energy and raw material resources, Hungary is forced to import most of these materials to satisfy the demands of its industry. Following the transition to a market economy, the industry underwent restructuring and remarkable modernization. The leading branch is machinery, followed by the chemical industry (plastics production, pharmaceuticals), while mining, metallurgy and the textile industry have been losing importance over the past two decades. In spite of a significant drop in the last decade, the food industry still contributes up to 14% of total industrial production and amounts to 7–8% of the country's exports. Nearly 50% of energy consumption depends on imported energy sources. Gas and oil, transported through pipelines from Russia, form 72% of the energy structure, while the Paks nuclear power station accounts for 53.6% of domestic electricity production. 
Hungary is a favoured destination for foreign automotive investors: General Motors (Szentgotthárd), Magyar Suzuki (Esztergom), Mercedes-Benz (Kecskemét) and Audi (Győr) all operate plants in the country. 17% of total Hungarian exports come from Audi, Opel and Suzuki. The sector employs about 90,000 people in more than 350 car component manufacturing companies. Audi has built the largest engine manufacturing plant in Europe (third largest in the world) in Győr, becoming Hungary's largest exporter, with total investments reaching over €3,300 million by 2007. Audi's workforce assembles the Audi TT, the Audi TT Roadster and the A3 Cabriolet in Hungary. The plant delivers engines to the carmakers Volkswagen, Škoda, SEAT and also to Lamborghini. Daimler-Benz invested €800 million ($1.2 billion) to create up to 2,500 jobs at a new assembly plant in Kecskemét, with capacity for producing 100,000 Mercedes-Benz compact cars a year. Opel produced 80,000 Astra and 4,000 Vectra cars from March 1992 until 1998 in Szentgotthárd; today, the plant produces about half a million engines and cylinder heads a year. The tertiary sector accounted for 64% of GDP in 2007, and its role in the Hungarian economy is steadily growing thanks to constant investment in transport and other services over the last 15 years. Located in the heart of Central Europe, Hungary's geostrategic position plays a significant role in the rise of the service sector, as the country's central location makes it suitable and rewarding for investment. The total value of imports was €68.62 billion and the value of exports €68.18 billion in 2007. The external trade deficit decreased by 12.5% from the previous year, easing from €2.4 billion to €308 million in 2007. In the same year, 79% of Hungary's exports and 70% of its imports were transacted inside the EU. Tourism employs nearly 150 thousand people, and total income from tourism was €4 billion in 2008. One of Hungary's top tourist destinations is Lake Balaton, the largest freshwater lake in Central Europe, with 1.2 million visitors in 2008. The most visited region is Budapest; the Hungarian capital attracted 3.61 million visitors in 2008. Hungary was the world's 24th most visited country in 2011. The Hungarian spa culture is world-famous, with thermal baths of all sorts and over 50 spa hotels located in many towns, each offering the opportunity of a pleasant, relaxing holiday and a wide range of quality medical and beauty treatments. The currency of Hungary has been the Hungarian forint (HUF, Ft) since 1 August 1946. A forint consists of 100 fillérs; however, since fillérs have not been in circulation since 1999, they are used only in accounting. There are six coins (5, 10, 20, 50, 100 and 200 forints) and six banknotes (500, 1,000, 2,000, 5,000, 10,000 and 20,000 forints). The 1 and 2 forint coins were withdrawn in 2008, yet prices remained the same, as stores follow the official rounding scheme for the final price. The 200 forint note was withdrawn on 16 November 2009. As a member of the European Union, the long-term aim of the Hungarian government is to replace the forint with the euro. "See also: Fiscal policy." Notes to the Maastricht convergence criteria: (1) current EU member states that have not yet adopted the euro, candidates and official potential candidates; (2) no more than 1.5% higher than the 3 best-performing EU member states; (3) no more than 2% higher than the 3 best-performing EU member states. 
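Because the smallest coins in circulation are 5-forint pieces, the rounding scheme mentioned above only affects the final cash total. Below is a minimal sketch assuming the commonly described nearest-5-forint rule; the exact legal wording of the scheme is not quoted in the text, so treat the rule itself as an assumption.

```python
# Minimal sketch of cash-total rounding after the withdrawal of the
# 1 and 2 forint coins; the nearest-5-forint rule is an assumption based
# on the commonly described scheme.
def round_cash_total(amount_huf: int) -> int:
    """Round a cash total to the nearest multiple of 5 forints."""
    return 5 * round(amount_huf / 5)

for total in (1992, 1993, 1997, 1998):
    print(f"{total} Ft -> {round_cash_total(total)} Ft")
# 1992 -> 1990, 1993 -> 1995, 1997 -> 1995, 1998 -> 2000
```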
(4) A formal obligation to adopt the euro in the country's EU Treaty of Accession or framework for membership negotiations; (5) values from the May 2008 report, updated each year. Education in Hungary is free and compulsory from the age of 5 to 16. The state provides free pre-primary schooling for all children, 8 years of general education and 4 years of upper-secondary general or vocational education. The higher education system follows the three-cycle structure and the credit system of the Bologna process. Governments aim to reach European standards and encourage international mobility by putting emphasis on digital literacy and enhancing foreign language studies: all secondary-level schools teach foreign languages, and at least one language certificate is needed to obtain a diploma. Over the past decade, this has resulted in a drastic increase in the number of people speaking at least one foreign language. Financial sources for education are mainly provided by the state (making up 5.1–5.3% of annual GDP). To improve the quality of higher education, the government encourages contributions by students and companies; another important contributor is the EU. The system has weaknesses, the most important being segregation and unequal access to quality education. The 2006 PISA report concluded that while students from comprehensive schools did better than the OECD average, pupils from vocational secondary schools did much worse. Another problem lies in higher education, whose response to regional and labour market needs is insufficient. Government plans include improving the career guidance system and establishing a national digital network to enable the tracking of jobs and facilitate integration into the labour market. Like most post-communist countries, Hungary's economy is affected by social stratification in terms of income and wealth, age, gender and racial inequalities. Hungary's Gini coefficient of 0.269 ranks 11th in the world, close in equality to the world leader, Denmark. The highest 10% of the population receives 22.2% of income, while the lowest 10% receives 4%. According to the business magazine "Napi Gazdaság", the owner of the biggest fortune, 300 billion HUF, is Sándor Demján. By the standard EU indicator (the percentage of the population living under 60% of the per capita median income), 13% of the Hungarian population is stricken by poverty. According to the Human Development Report, the country's HPI-1 value is 2.2% (3rd among 135 countries), and its HDI value is 0.879 (43rd out of 182). The fertility rate in Hungary, as in many European countries, is very low: 1.34 children per woman (205th in the world). Life expectancy at birth is 73.3 years, while the expected number of healthy years is 57.6 for females and 53.5 for males; the average life expectancy overall is 73.1 years. Hungary's GDI (gender-related development index) value of 0.879 is 100% of its HDI value (3rd best in the world). 55.5% of the female population (between 15 and 64) participates in the labour force, and the ratio of girls to boys in primary and secondary education is 99%. Ethnic inequality, which strikes primarily the Roma in Hungary, is a serious problem. 
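The decile shares quoted above can be turned into a simple top-to-bottom ratio; the computation below is merely an illustration of the quoted figures, not an official inequality statistic.

```python
# Ratio of income shares between the top and bottom deciles,
# using the figures quoted in the text.
top_decile_share = 22.2      # % of total income going to the highest 10%
bottom_decile_share = 4.0    # % of total income going to the lowest 10%
ratio = top_decile_share / bottom_decile_share
print(f"The top 10% receives about {ratio:.1f}x the income share "
      "of the bottom 10%")  # -> about 5.6x
```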
Although the definition of Roma identity is controversial, qualitative studies show that Roma employment decreased significantly after the fall of Communism: owing to the massive layoffs of unskilled workers during the transition years, more than one-third of the Roma were excluded from the labour market. This ethnic conflict is therefore inherently interconnected with the country's income inequalities: at least two-thirds of the poorest 300,000 people in Hungary are Roma. Furthermore, ethnic discrimination is strikingly high, with 32% of the Roma experiencing discrimination when looking for work. Consequently, new Roma entrants to the labour market are rarely able to find employment, which creates a motivation deficit and further reinforces segregation and unemployment. Twenty years after the change of regime, corruption remains a severe issue in Hungary. According to Transparency International Hungary, almost one-third of top managers claim they regularly bribe politicians. Most people (42%) in Hungary think that the sector most affected by bribery is the political party system. Bribery is common in the healthcare system in the form of gratitude payments: 92% of people think that some payment should be made to the head surgeon conducting a heart operation or to an obstetrician for a childbirth. Another problem is the administrative burden: in ease of doing business, Hungary ranks 47th out of 183 countries in the world. The five days required to start a new business rank 29th, while the country is 122nd in ease of paying taxes. In accordance with the theory of the separation of powers, the judicial system is independent of the legislative and executive branches; consequently, courts and prosecutions are not influenced by the government. However, the legal system is slow and overburdened, which makes proceedings and rulings lengthy and inefficient. Such a justice system is hardly capable of prosecuting corruption and protecting the country's financial interests. The organization responsible for controlling Hungary's monetary policy is the Hungarian National Bank (Hungarian: "Magyar Nemzeti Bank", MNB), the country's central bank. According to the Hungarian Law on the National Bank (Act LVIII of 2001 on the Hungarian National Bank), the primary objective of the MNB is to achieve and maintain price stability, in line with European and international practice. Price stability means achieving and maintaining a basically low but positive inflation rate. This level is around 2–2.5% according to international observations, while the European Central Bank "aims at inflation rates of below, but close to, 2% over the medium term". Since Hungary is in the process of catching up (the Balassa–Samuelson effect), the long-term objective is a slightly higher figure, around 2.3–3.2%; the medium-term inflation target of the Hungarian National Bank is therefore 3%. As for the exchange rate system, a floating exchange rate regime has been in use since 26 February 2008, under which the forint fluctuates with the market against the reference currency, the euro. Forint exchange rates against the British pound (GBP), the euro (EUR), the Swiss franc (CHF) and the U.S. dollar (USD) from June 2008 to September 2009 illustrate the crisis period. 
These rates indicate that the relatively strong forint weakened from the beginning of the financial crisis, and that its value later took an upward turn. Against the euro, the forint peaked on 18 June 2008, when 1,000 Ft was worth €4.36 and €1 cost 229.11 Ft; it was worth the least on 6 March 2009, when 1,000 Ft was worth €3.16 and €1 cost 316 Ft. Against the USD, the corresponding extremes fell on 22 June 2008 and 6 March 2009, with 1,000 HUF worth $6.94 and $4.01, respectively. On 24 March 2015 the euro stood at 299.1450 Ft and the U.S. dollar at 274.1650 Ft. In Hungary, state revenue makes up 44% and expenditure 45% of GDP, which is relatively high compared with other EU members. This can be traced back to historical causes, such as the socialist economic tradition, as well as to cultural characteristics that endorse paternalist behaviour on the state's part, meaning that people habitually call for state subsidies. Some economists dispute this point, claiming that expenditure only ran up to today's critical level from 2001 onward, during two left-wing government cycles. On joining the EU, the country also undertook the task of joining the Eurozone; the Maastricht criteria, which set the conditions for joining the Eurozone, therefore act as an authoritative guideline for Hungarian fiscal policy. Although there has been remarkable progress, recent years' statistics still show significant discrepancies between the criteria and the fiscal indices, and no target date for adopting the euro has been fixed. The general government deficit showed a drastic decline, to −3.4% (2008) from −9.2% (2006); according to an MNB forecast, however, until 2011 the deficit will fall short of the 3.0% criterion by a small margin. Another criterion found lacking is the ratio of gross government debt to GDP, which has exceeded the allowed 60% since 2005. According to an ESA95 figure, in 2008 this ratio increased from 65.67% to 72.61%, primarily as a result of drawing on an IMF-arranged financial assistance package. Hungary's balance of payments on its current account has been negative since 1995, running around 6–8% of GDP in the 2000s and reaching a negative peak of 8.5% in 2008. Still, the current account deficit is expected to decrease in the coming period, as imports diminish relative to exports as an effect of the financial crisis. In Hungary, the 1988 tax reform introduced a comprehensive tax system consisting mainly of central and local taxes, including a personal income tax, a corporate income tax and a value added tax. Among total tax income, the share of local taxes is only 5%, while the EU average is 30%. Until 2010, the taxation of individuals was progressive, with the tax rate determined by the individual's income: on earnings up to 1,900,000 forints a year the tax was 18%, while income above this limit was taxed at 36% from 1 July 2009. Under the new single-rate tax regime introduced in January 2011, the overall tax rate for all income bands has been 16%. According to the income-tax returns of 2008, 14.6% of taxpayers were charged for 64.5% of the total tax burden. Before the new corporate income tax regime, the corporate tax was fixed at 16% of the positive tax base, with an additional 4% "solidarity tax" calculated on the company's pre-tax result (the solidarity tax has been in use since September 2006); the actual tax base may differ between the two cases. 
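The two quotations used above (forints per euro, and euros per 1,000 forints) are reciprocals of one another, which makes the quoted extremes easy to cross-check; a small sketch:

```python
# Cross-check of the quoted forint/euro extremes: the EUR-per-1000-HUF
# figure should be the reciprocal of the HUF-per-EUR figure.
quotes_huf_per_eur = {
    "2008-06-18 (forint peak)": 229.11,
    "2009-03-06 (forint trough)": 316.00,
}
for date, huf_per_eur in quotes_huf_per_eur.items():
    eur_per_1000_huf = 1000.0 / huf_per_eur
    print(f"{date}: 1000 Ft = EUR {eur_per_1000_huf:.2f}")
# -> EUR 4.36 and EUR 3.16, matching the figures quoted in the text
```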
From January 2011, under the new corporate income tax regime, the tax rate was divided into two parts: (i) for corporations with pre-tax income below 500 million HUF (approx. US$2.5 million) the rate was lowered to 10%, and (ii) 16% remained for all other companies until 2013, after which the corporate income tax rate was to be unified at 10%, irrespective of the size of pre-tax income. In January 2017, corporate tax was unified at a rate of 9% — the lowest in the European Union. The rate of value added tax in Hungary has been 27%, the highest in Europe, since 1 January 2012. The main economic indicators for 1980–2018, together with telecommunication statistics (households with access to fixed and mobile telephony, broadband penetration rate, individuals using computers and the internet; source: Hungarian Statistical Office, 3Q 2011), are presented in tables not reproduced here. Hungary joined the European Union on 1 May 2004, after a successful referendum, as one of the EU-10. The EU's free trade system helps Hungary, which as a relatively small country depends on exports and imports. After accession to the EU, Hungarian workers could immediately take up work in Ireland, Sweden and the United Kingdom; other countries imposed restrictions. In 2007, 25% of all Hungarian exports were high-technology goods, the 5th-largest share in the European Union after Malta, Cyprus, Ireland and the Netherlands; the EU-10 average was 17.1% and the Eurozone average 16% in 2007.
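The personal and corporate tax rules described above lend themselves to a short worked sketch. The bracket figures are taken from the text; the assumption that the 36% rate applied only to the portion of income above the 1,900,000 Ft limit (marginal taxation) is mine, as is the reading that the 10% corporate rate applied to companies whose whole pre-tax income fell below 500 million Ft.

```python
# Sketch of the income tax rules described in the text; the marginal-bracket
# interpretation of the pre-2011 personal schedule is an assumption.
def personal_tax_pre_2011(income_huf: float) -> float:
    """18% up to 1,900,000 Ft a year, 36% on income above that limit."""
    limit = 1_900_000
    if income_huf <= limit:
        return income_huf * 0.18
    return limit * 0.18 + (income_huf - limit) * 0.36

def personal_tax_2011(income_huf: float) -> float:
    """Flat 16% rate introduced in January 2011."""
    return income_huf * 0.16

def corporate_tax_2011(pre_tax_income_huf: float) -> float:
    """Two-part regime from January 2011, as described in the text:
    10% below 500 million Ft of pre-tax income, 16% otherwise."""
    rate = 0.10 if pre_tax_income_huf < 500_000_000 else 0.16
    return pre_tax_income_huf * rate

income = 3_000_000
print(personal_tax_pre_2011(income))  # 738,000 Ft under the old schedule
print(personal_tax_2011(income))      # 480,000 Ft under the flat rate
```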
https://en.wikipedia.org/wiki?curid=13428
Transport in Hungary Transport in Hungary relies on several main modes: road, rail, air and water. Hungary has a total of … km of public roads, of which … km are paved (including 1,481 km of motorways, as of 2016) and … km are unpaved (2005). Hungarian motorways and expressways are part of the national road network; as of October 2016, there are … km of controlled-access highways: M1 | M3 | M4 | M5 | M6 | M7 | M8 | "M15" | M19 | M30 | M31 | M35 | M43 | M60 and M0 | M2 | M9 | M51 | M70 | M85 | M86. New motorway sections are being added to the existing network, which already connects many major economically important cities to the capital. Bus transport between municipalities was provided by the Volán companies, twenty-four bus companies founded in 1970 and named after the regions they served. They also provided local transport in cities and towns that did not have their own public transport company (all cities except Budapest, Miskolc, Pécs, Kaposvár and, after 2009, Debrecen), and operated bus lines in cities where the local company ran only tram and trolleybus lines (Szeged and Debrecen, the latter until 2009, when DKV took over the bus lines). In early 2015 the 24 companies were reorganized into seven regional companies. "Note:" Hungary and Austria jointly manage the cross-border standard-gauge railway between Győr–Sopron–Ebenfurt (GySEV/ROeEE), a distance of about 101 km in Hungary and 65 km in Austria. In Budapest, the three main railway stations are the Eastern (Keleti), the Western (Nyugati) and the Southern (Déli), with other outlying stations like Kelenföld. Of the three, the Southern is the most modern, but the Eastern and the Western are more decorative and architecturally interesting. Other important railway stations countrywide include Szolnok (the most important railway intersection outside Budapest), Tiszai Railway Station in Miskolc, and the stations of Pécs, Győr, Debrecen, Szeged and Székesfehérvár. The only city with an underground railway system is Budapest, with its Metro. Budapest also has a suburban rail service in and around the city, operated under the name HÉV. There are 43–45 airports in Hungary, including smaller, unpaved airfields. The five international airports are Budapest-Liszt Ferenc, Debrecen Airport, Hévíz–Balaton International Airport (previously "Sármellék", also called FlyBalaton for its proximity to Lake Balaton, Hungary's number one tourist attraction), Győr-Pér and Pécs-Pogány (as of 2015 there were no regular passenger flights from Győr-Pér and Pécs-Pogány). Malév Hungarian Airlines ceased operations in 2012. Airports with paved runways – total: 20 (1999 est.); with unpaved runways – total: 27 (1999 est.). (The full list of airports in Hungary, and the ranking of the largest airports for 2015, are given in tables not reproduced here.) Hungary has five heliports. Waterways: 1,373 km permanently navigable (1997). The most important port is Budapest, the capital; other important ports include Dunaújváros and Baja, with further ports on both the Danube and the Tisza. In the rest of the cities and towns, local transport is provided by Volánbusz companies, which also operate intercity bus lines. The Budapest Metro is the rapid transit system of the Hungarian capital. Its line 1 (opened in 1896) is the oldest electrified underground railway on the European continent. The second (red) line opened in 1970, the third (blue) line in 1976, and the newest, the fourth (green) line, in 2014. 
The busiest traditional city tram line in the world is still route 4/6 in Budapest, where 50-meter-long trams run at 120-to-180-second intervals at peak times and are usually packed with people. Part of this route follows the line where Hungary's first electric trams ran in 1887. Since the 2000s, the Budapest tram network has been improved by ordering new trams (Combino Supra and CAF Urbos 3) and by extending some lines (such as line 1 to Kelenföld railway station). In some towns, narrow-gauge railways served as tram or interurban lines (for example in Sárospatak, Sátoraljaújhely, Békéscsaba, Békés and Cegléd); these lines were closed in the 1970s. Trolleybuses can be found in three cities: Budapest, Debrecen and Szeged.
https://en.wikipedia.org/wiki?curid=13430
Hungarian Defence Forces The Hungarian Defence Forces ("Magyar Honvédség") is the national defence force of Hungary. Since 2007, the Hungarian Armed Forces have been under a unified command structure. The Ministry of Defence maintains political and civil control over the army, while a subordinate Joint Forces Command coordinates and commands the HDF corps. In 2018, the armed forces had 27,800 personnel on active duty. In 2019, military spending was set at $2.080 billion, about 1.21% of the country's GDP, well below the NATO target of 2%. In 2016, the government adopted a resolution pledging to increase defence spending to 2.0% of GDP and the number of active personnel to 37,650 by 2026. Military service is voluntary, though conscription may occur in wartime. In a significant modernization move, Hungary decided in 2001 to buy 14 JAS 39 Gripen fighter aircraft for about €800 million. Hungary also bought two used Airbus A319s and two Falcon 7X transport aircraft. Three C-17 Globemaster III transport aircraft operate from Pápa Air Base under Hungarian nationality marks but are maintained by the NATO Heavy Airlift Wing (HAW). In 2017 Hungary signed a contract to buy 20 new Airbus military helicopters and ground-attack munitions for the Gripens. The Hungarian National Cyber Security Center was reorganized in 2016. In 2016, the Hungarian military had about 700 troops stationed in foreign countries as part of international peacekeeping forces, including 100 HDF troops in the NATO-led ISAF force in Afghanistan, 210 Hungarian soldiers in Kosovo under KFOR command, and 160 troops in Bosnia and Herzegovina. Hungary sent a 300-strong logistics unit to Iraq to support the US occupation with armed transport convoys, though public opinion opposed the country's participation in the war; one soldier was killed in action by a roadside bomb in Iraq. During the Hungarian Revolution of 1848, the Honvédség drove Habsburg forces from the country in the Spring Campaign of 1849 but was defeated by an Austro-Russian offensive in the summer. The Royal Hungarian Honvéd was established in 1868. During World War I, of the eight million men mobilized by Austria-Hungary, over one million died. Conscription was introduced on a national basis in 1939, and the peacetime strength of the Royal Hungarian Army grew to 80,000 men organized into seven corps commands. During World War II the Hungarian Second Army was destroyed on the banks of the Don River in December 1942 in the Battle of Stalingrad. During the Socialist and Warsaw Pact era (1947–1989), the entire 200,000-strong Southern Group of Forces was garrisoned in Hungary, complete with artillery, tank regiments, air force and missile troops with nuclear weapons. As the 2016 Global Peace Index shows, Hungary is one of the world's most peaceful countries, placed 19th out of 163. The Hungarian tribes led by Árpád who came to settle in the Carpathian Basin were noted for their fearsome light cavalry, which conducted frequent raids throughout much of Western Europe (as far as present-day Spain), maintaining their military supremacy with long-range, rapid-firing reflex bows. Not until the introduction of well-regulated, plate-armored heavy knightly cavalry could the German emperors stop the Hungarian armies. Under the Árpád dynasty, the light-cavalry-based army was slowly transformed into a Western-style one: the light cavalry lost its privileged position, replaced by a feudal army formed mainly of heavy cavalry. 
The Hungarian field armies were drawn up in an articulated formation, as at the Battle of Przemyśl (1099), the Battle at the Leitha (1146) and the Battle of Morvamező (1278), usually in three main battle lines (1146, 1278, 1349). According to contemporary sources and later reconstructions, the first line was formed by light cavalry archers (as at the Battle of Oslava in 1116, and again in 1146, 1260 and 1278). They usually opened the battle, often followed by a planned retreat (1116, 1146, and at the Battle of Kressenbrunn in 1260). The decisive forces of the Hungarian army were placed in the second or third lines, consisting mainly of its most valuable troops, in general the heavy cavalry (1146, 1278, 1349). The commanders of the Hungarian Kingdom's army used different tactics, based on a recognition of their own and their enemies' (the Holy Roman Empire, Pechenegs, Uzes, Cumans, Mongols, Byzantine Empire) abilities and deficiencies. The Hungarian knightly army had its golden age under King Louis the Great, himself a famed warrior, who conducted successful campaigns in Italy over family matters (his younger brother had married Joanna I, Queen of Naples, who later had him murdered). King Matthias Corvinus maintained very modern mercenary-based royal troops, called the "Black Army". King Matthias favoured ancient artillery (catapults) over cannons, which had been the favourite of his father, Johannes Hunyadi, former Regent of Hungary. During the Ottoman invasion of Central Europe (between the late 14th century and circa 1700), Hungarian soldiers defended fortresses and launched light cavalry attacks against the Turks (see Hungarian Hussars). The northern fortress of Eger was famously defended in the autumn of 1552, during the 39-day Siege of Eger, against the combined force of two Ottoman armies numbering circa 120,000 men with 16 ultra-heavy siege guns. The victory was very important because the two much stronger forts of Szolnok and Temesvár had fallen quickly during the summer. Public opinion attributed Eger's success to its all-Hungarian garrison, as the two other forts had fallen through the treason of the foreign mercenaries manning them. In 1596, Eger fell to the Ottomans for the same reason. In the 1566 Battle of Szigetvár, Miklós Zrínyi defended Szigetvár for 30 days against the largest Ottoman army ever seen up to that day and died leading his remaining few soldiers in a final suicide charge, becoming one of the best-known national heroes. His great-grandson, also Miklós Zrínyi, poet and general, became one of the better-known strategists of the 1660s. In 1686, the capital city of Buda was freed from the Ottomans by an allied Christian army composed of Austrian, Hungarian and Western European troops, each roughly one-third of the army. The Habsburg empire then annexed Hungary. Under Habsburg rule, the Hungarian Hussars rose to international fame and served as a model for light cavalry in many European countries. During the 18th and 19th centuries, hundreds of thousands of forcibly enrolled Hungarian men each served 12 years or more as line infantry in the Austrian Imperial Army. Two wars of independence interrupted this era: that of Prince Francis II Rákóczi between 1703 and 1711, and that of Lajos Kossuth in 1848–1849. A law passed by the parliament in Budapest on 11 July 1848 called for the formation of an army, the "Honvédség", of 200,000 men, which would use the Magyar language of command. 
It was to be formed around already extant imperial units: twenty battalions of infantry, ten hussar regiments, and two regiments of Székely from the Transylvanian Military Frontier. They were further joined by eight companies of two Italian regiments stationed in Hungary and parts of the Fifth Bohemian Artillery Regiment. In 1848–1849 the Honvédség (mostly made up of enthusiastic patriots with no prior military training) achieved remarkable successes against better-trained and better-equipped Austrian forces, despite the Austrians' obvious advantage in numbers. The Winter Campaign of Józef Bem and the Spring Campaign of Artúr Görgey are to this day taught at prestigious military schools around the globe, including at the United States Military Academy at West Point. Having suffered initial setbacks, including the loss of Pest-Buda, the Honvéd took advantage of the Austrians' lack of initiative and re-formed around the Debrecen-based Kossuth government. The Hungarians advanced again, and by the end of spring 1849 Hungary was essentially cleared of foreign forces and would have achieved independence were it not for the Russian intervention. At the request of the Austrian emperor Franz Joseph, the Russians invaded with a force of 190,000 soldiers – against the Honvédség's 135,000 – and decisively defeated Bem's Second Army in Transylvania, opening the path into the heart of Hungary. The Austrian-Russian coalition thus outnumbered the Hungarian forces roughly 3:1, which led to Hungary's surrender at Világos on 13 August 1849. Sándor Petőfi, the great Hungarian poet, went missing in action in the Battle of Segesvár against the invading Russian forces. In April 1867, the Austro-Hungarian Empire was established. Franz Joseph, the head of the ancient Habsburg dynasty, was recognized as both Emperor of Austria and King of Hungary. Nevertheless, the issue of what form the Hungarian military would take remained a matter of serious contention between Hungarian patriots and Austrian leaders. As the impasse threatened the political union, Emperor Franz Joseph ordered a council of generals in November of the same year. Ultimately, the leaders resolved on the following solution: in addition to the joint (k.u.k.) army, Hungary would have its own defence force, whose members would swear their oath to the King of Hungary (who was also Emperor of Austria) and the national constitution, use the Hungarian language of command, and display their own flags and insignia. (Austria would also form its own parallel national defence force, the "Landwehr".) As a result of these negotiations, on 5 December 1868, the Royal Hungarian Landwehr ("Magyar Királyi Honvédség", or Defence Force) was established. The Honvédség was usually treated generously by the Diet in Budapest. By 1873 it already had over 2,800 officers and 158,000 men organized into eighty-six battalions and fifty-eight squadrons. In 1872, the Ludovika Academy officially began training cadets (and later staff officers). Honvédség units engaged in manoeuvres and were organized into seven divisions in seven military districts. While artillery was not allowed, the force did form batteries of Gatling guns in the 1870s. In the midst of trouble between the imperial government and the parliament in 1906, the Honvédség was further expanded and finally received its own artillery units. In this form, the force approached the coming world war in most respects as a truly "national" Hungarian army.
Hungarian soldiers "fought with distinction" on every front contested by Austria-Hungary in the First World War. Honvédség units (along with the Austrian Landwehr) were considered fit for front line combat service and equal to those of the joint k.u.k. army. They saw combat especially on the Eastern Front and at the Battles of the Isonzo on the Italian Front. Out of the eight million men mobilized by Austria-Hungary, over one million died. Hungarians as a national group were second only to German Austrians in their share of this burden, experiencing twenty-eight war deaths for every thousand persons. After the collapse of the Austro-Hungarian empire in late 1918, the Red Army of the Hungarian communist state (Hungarian Soviet Republic) conducted successful campaigns to protect the country's borders. However, in the Hungarian–Romanian War of 1919 Hungary came under occupation by the Romanian, Serbian, American, and French troops, as after four years of extensive fighting, the country lacked both the necessary manpower and equipment to fend off foreign invaders. In accordance with the Treaty of Bucharest, upon leaving, the Romanian army took substantial compensation for reparations. This included agricultural goods and industrial machinery as well as raw materials. The Trianon Treaty limited the Hungarian National Army to 35,000 men and forbade conscription. The army was forbidden to possess tanks, heavy armor, or an air force. On 9 August 1919, Admiral Miklós Horthy united various anti-communist military units into an 80,000-strong National Army ("Nemzeti Hadsereg"). On 1 January 1922, the National Army was once again redesignated the Royal Hungarian Army. During the 1930s and early 1940s, Hungary was preoccupied with the regaining the vast territories and huge amount of population lost in the Trianon peace treaty at Versailles in 1920. This required strong armed forces to defeat the neighbouring states and this was something Hungary could not afford. Instead, the Hungarian Regent, Admiral Miklós Horthy, made an alliance with German dictator Adolf Hitler's Third Reich. In exchange for this alliance and via the First and Second Vienna Awards, Hungary received back parts of its lost territories from Yugoslavia, Romania, and Czechoslovakia. Hungary was to pay dearly during and after World War II for these temporary gains. On 5 March 1938, Prime Minister Kálmán Darányi announced a rearmament program (the so-called "Győr Programme", named after the city where it was announced to the public). Starting 1 October, the armed forces established a five-year expansion plan with Huba I-III revised orders of battle. Conscription was introduced on a national basis in 1939. The peacetime strength of the Royal Hungarian Army grew to 80,000 men organized into seven corps commands. In March 1939, Hungary launched an invasion of the newly formed Slovak Republic. Both the Royal Hungarian Army and the Royal Hungarian Air Force fought in the brief Slovak-Hungarian War. This invasion was launched to reclaim a part of the Slovakian territory lost after World War I. On 1 March 1940, Hungary organized its ground forces into three field armies. The Royal Hungarian Army fielded the Hungarian First Army, the Hungarian Second Army, and the Hungarian Third Army. With the exception of the independent "Fast Moving Army Corps" ("Gyorshadtest"), all three Hungarian field armies were initially relegated to defensive and occupation duties within the regained Hungarian territories. 
In November 1940, Hungary signed the Tripartite Pact and became a member of the Axis with Nazi Germany and Fascist Italy. In April 1941, in order to regain territory and under German pressure, Hungary allowed the Wehrmacht to cross its territory to launch the invasion of Yugoslavia. The Hungarian prime minister, Pál Teleki, who wanted to maintain a pro-Allied neutral stance for Hungary, could no longer keep the country out of the war: the British Foreign Secretary Anthony Eden had threatened to break diplomatic relations with Hungary if it did not actively resist the passage of German troops across its territory, while General Henrik Werth, chief of the Hungarian General Staff, made a private arrangement, unsanctioned by the Hungarian government, with the German High Command for the transport of German troops across Hungary. Teleki, no longer able to stop the unfolding events, committed suicide on 3 April 1941, and Hungary joined the war on 11 April, after the proclamation of the Independent State of Croatia. After the controversial "Kassa attack", elements of the Royal Hungarian Army joined the German invasion of the Soviet Union, Operation Barbarossa, a week after the operation began. Despite arguments that Hungary (unlike Romania) had no territorial claims in the Soviet Union, the fateful decision was made to join the war in the East. In the late summer of 1941, the Hungarian "Rapid Corps" ("Gyorshadtest"), alongside German and Romanian army groups, scored a huge success against the Soviets at the Battle of Uman. A little more than a year later, contrasting sharply with the success at Uman, came the near-total devastation of the Hungarian Second Army on the banks of the Don River in December 1942 during the Battle of Stalingrad. During 1943, the Hungarian Second Army was rebuilt. In late 1944, as part of "Panzerarmee Fretter-Pico", it participated in the destruction of a Soviet mechanized group at the Battle of Debrecen. But this proved to be a Pyrrhic victory: unable to rebuild again, the Hungarian Second Army was disbanded towards the end of 1944. To keep Hungary as an ally, the Germans launched Operation Margarethe and occupied the country in March 1944. However, during the Warsaw Uprising, Hungarian troops refused to participate. On 15 October 1944, the Germans launched Operation Panzerfaust and forced Horthy to abdicate. The pro-Nazi Ferenc Szálasi was made Prime Minister by the Germans. On 28 December 1944, a provisional government under the control of the Soviet Union was formed in liberated Debrecen with Béla Miklós as its Prime Minister. Miklós had been the commander of the Hungarian First Army, but most of the First Army sided with the Germans, and most of what remained of it was destroyed about 200 kilometres north of Budapest between 1 January and 16 February. The pro-Communist government formed by Miklós competed with the pro-Nazi government of Ferenc Szálasi. The Germans, Szálasi, and pro-German Hungarian forces loyal to Szálasi fought on. On 20 January 1945, representatives of the provisional government of Béla Miklós signed an armistice in Moscow, but forces loyal to Szálasi continued to fight. The Red Army, with assistance from Romanian army units, completed the encirclement of Budapest on 29 December 1944, and the Siege of Budapest began. On 2 February 1945, the strength of the Royal Hungarian Army stood at 214,465 men, but about 50,000 of these had been formed into unarmed labor battalions.
The siege of Budapest ended with the surrender of the city on 13 February. But while the German forces in Hungary were generally in a state of defeat, the Germans had one more surprise for the Soviets. In early March 1945, they launched the Lake Balaton Offensive with support from the Hungarians. The offensive quickly stalled: by 19 March 1945, Soviet troops had recaptured all the territory lost during the 13-day German attack. After the failed offensive, the Germans in Hungary were defeated. Most of what remained of the Hungarian Third Army was destroyed about 50 kilometres west of Budapest between 16 and 25 March 1945. Officially, Soviet operations in Hungary ended on 4 April 1945, when the last German troops were expelled. Some pro-fascist Hungarians, like Szálasi, retreated with the Germans into Austria and Czechoslovakia. During the very last phase of the war, Fascist Hungarian forces fought in Vienna, Breslau, Küstrin, and along the Oder River. On 7 May 1945, General Alfred Jodl, the German Chief of Staff, signed the document of unconditional surrender for all German forces at a ceremony in Reims, France. On 8 May, in accordance with the wishes of the Soviet Union, the ceremony was repeated in Germany by Field Marshal Wilhelm Keitel. On 11 June, the Allies agreed to make 9 May 1945 the official "Victory in Europe" day. Szálasi and many other pro-fascist Hungarians were captured and ultimately returned to Hungary's provisional government for trial. During the Socialist and Warsaw Pact era (1947–1989), the entire 200,000-strong Southern Group of Forces was garrisoned in Hungary, complete with artillery, tank regiments, air force and missile troops (with nuclear weapons). It was, by all accounts, a very capable force that made little contact with the local population. Between 1949 and 1955 there was also a huge effort to build up a large Hungarian army. All procedures, disciplines, and equipment were exact copies of those of the Soviet Red Army in methods and materiel, but the huge costs had brought the economy to the point of collapse by 1956. During the autumn 1956 revolution, the army was divided. When the opening demonstrations on 23 October 1956 were fired upon by ÁVH secret policemen, Hungarian troops sent to crush the demonstrators instead handed their arms to the demonstrators or joined them outright. While most major military units in the capital were neutral during the fighting, thousands of rank-and-file soldiers went over to the Revolution or at least provided the revolutionaries with arms. Many significant military units went over to the uprising in full, such as the armored unit commanded by Colonel Pál Maléter, which joined forces with the insurgents at the Battle of the Corvin Passage. However, there were 71 recorded clashes between the people and the army between 24 and 29 October in fifty localities; depending on the commander, these typically involved either defending particular military targets from rebel attack or fighting the insurgents outright. When the Soviets crushed the Revolution on 4 November, the Army put up sporadic and disorganized resistance; lacking orders, many of its divisions were simply overpowered by the invading Soviets. After the Revolution was crushed in Budapest, the Soviets took away most of the Hungarian People's Army's equipment, including dismantling the entire Hungarian Air Force, because a sizable percentage of the Army had fought alongside the revolutionaries.
Three years later, in 1959, the Soviets began helping to rebuild the Hungarian People's Army, resupplying it with new arms and equipment and rebuilding the Hungarian Air Force. Satisfied that Hungary was stable and firmly committed once again to the Warsaw Pact, the Soviets offered the Hungarians a withdrawal of all Soviet troops from the country. The new Hungarian leader, János Kádár, asked for all 200,000 Soviet troops to stay, because their presence allowed the socialist Hungarian People's Republic to neglect its own draft-based armed forces, quickly leading to the deterioration of the military. Large sums of money were saved that way and spent on feel-good socialist measures for the population; Hungary could thus become "the happiest barrack" in the Soviet Bloc. Limited modernization, though, did take place from the mid-1970s onward to replace older stocks of military equipment with newer ones and enable the HPA, in a small way, to honor its Warsaw Pact commitments. The HPA was divided into the Ground Forces and the Air Forces, the latter headquartered at Veszprém. Training for conscripts was poor, and most of those drafted were actually used as a free labour force (especially for railway track construction and agricultural work) after just a few weeks of basic rifle training. Popular opinion grew very negative towards the Hungarian People's Army, and most young men tried to avoid the draft with bogus medical excuses. In 1997, Hungary spent about 123 billion HUF (560 million USD) on defence. Hungary became a member of NATO on 12 March 1999. It provided airbases and support for NATO's air campaign against Serbia and has provided military units to serve in Kosovo as part of the NATO-led KFOR operation. Hungary sent a 300-strong logistics unit to Iraq to help the US occupation with armed transport convoys, though public opinion opposed the country's participation in the war. One soldier was killed in action by a roadside bomb in Iraq. The parliament refused to extend the one-year mandate of the logistics unit, and all troops had returned from Iraq by mid-January 2005. As of early 2005, Hungarian troops remained in Afghanistan to assist in peacekeeping and de-Talibanization. Hungary will most probably replace its old UAZ 4x4 vehicles with modern Iveco LMV types. Hungarian forces deploy the Gepárd anti-materiel rifle, a heavy 12.7 mm portable gun also in use by the Turkish and Croatian armed forces, among others. New transport helicopter purchases are also planned, most probably before 2015. In a significant move for modernization, Hungary decided in 2001 to lease 14 JAS 39 Gripen fighter aircraft (the contract includes 2 dual-seaters and 12 single-seaters as well as ground maintenance facilities, a simulator, and training for pilots and ground crews) for 210 billion HUF (about 800 million EUR). Five Gripens (3 single-seaters and 2 two-seaters) arrived in Kecskemét on 21 March 2006 and were expected to be transferred to the Hungarian Air Force on 30 March; 10 or 14 more aircraft of this type might follow in the coming years. In early 2015, Hungary and Sweden extended the lease program for another 10 years, with a total of 32,000 flight hours (a 95% increase) for only a 45% increase in cost. In late 2019, Hungary signed a contract for 44 Leopard 2 A7+ tanks and 24 PzH 2000 howitzers for €300 million, to be delivered from 2021 to 2025.
https://en.wikipedia.org/wiki?curid=13431
Foreign relations of Hungary Hungary wields considerable influence in Central and Eastern Europe and is a middle power in international affairs. The foreign policy of Hungary is based on four basic commitments: to Atlantic co-operation, to European integration, to international development, and to international law. The Hungarian economy is fairly open and relies strongly on international trade. Hungary has been a member of the United Nations since December 1955 and is a member of the European Union, NATO, the OECD, the Visegrád Group, the WTO, the World Bank, the AIIB and the IMF. Hungary held the presidency of the Council of the European Union for half a year in 2011; its next turn will be in 2024. In 2015, Hungary was the fifth-largest OECD non-DAC donor of development aid in the world, giving 0.13% of its Gross National Income; in this regard Hungary ranks ahead of Spain, Israel and Russia. Budapest, Hungary's capital, is home to more than 100 embassies and representative bodies. Hungary also hosts the main or regional headquarters of many international organizations, including the European Institute of Innovation and Technology, the European Police College, the United Nations High Commissioner for Refugees, the Food and Agriculture Organization of the United Nations, the International Centre for Democratic Transition, the Institute of International Education, the International Labour Organization, the International Organization for Migration, the International Red Cross, the Regional Environmental Center for Central and Eastern Europe, the Danube Commission, and others. Since 1989, Hungary's top foreign policy goal has been achieving integration into Western economic and security organizations. Hungary joined the Partnership for Peace program in 1994 and has actively supported the IFOR and SFOR missions in Bosnia. Since 1989 Hungary has also improved its often frosty neighborly relations by signing basic treaties with Ukraine, Slovakia, and Romania. These renounce all outstanding territorial claims and lay the foundation for constructive relations. However, the issue of ethnic Hungarian minority rights in Romania, Slovakia and Ukraine periodically causes bilateral tensions to flare up. Hungary has signed all of the OSCE documents since 1989, and served as the OSCE's Chairman-in-Office in 1997. Hungary's record of implementing CSCE "Helsinki Final Act" provisions, including those on reunification of divided families, remains among the best in Central and Eastern Europe. Except for the short-lived neutrality declared by the anti-Soviet leader Imre Nagy in November 1956, Hungary's foreign policy generally followed the Soviet lead from 1947 to 1989. During the Communist period, Hungary maintained treaties of friendship, cooperation, and mutual assistance with the Soviet Union, Poland, Czechoslovakia, the German Democratic Republic, Romania, and Bulgaria. It was one of the founding members of the Soviet-led Warsaw Pact and Comecon, and it was the first central European country to withdraw from those now-defunct organizations. After 1989, Hungary oriented itself more towards the West, joining NATO in 1999 and the European Union in 2004. As with any country, Hungarian security attitudes are shaped largely by history and geography.
For Hungary, this is a history of more than 400 years of domination by great powers – the Ottomans, the Habsburg dynasty, the Germans during World War II, and the Soviets during the Cold War – and a geography of regional instability and separation from Hungarian minorities living in neighboring countries. Hungary's foreign policy priorities, largely consistent since 1990, represent a direct response to these factors. Since 1990, Hungary's top foreign policy goal has been achieving integration into Western economic and security organizations. Hungary joined the Partnership for Peace program in 1994 and has actively supported the IFOR and SFOR missions in Bosnia. The Horn government achieved Hungary's most important foreign policy successes of the post-communist era by securing invitations to join both NATO and the European Union in 1997. Hungary became a member of NATO in 1999 and a member of the EU in 2004. Hungary has also improved its often frosty neighborly relations by signing basic treaties with Romania, Slovakia, and Ukraine. These renounce all outstanding territorial claims and lay the foundation for constructive relations. However, the issue of ethnic Hungarian minority rights in Slovakia and Romania periodically causes bilateral tensions to flare up. Hungary was a signatory to the Helsinki Final Act in 1975, has signed all of the CSCE/OSCE follow-on documents since 1989, and served as the OSCE's Chairman-in-Office in 1997. Hungary's record of implementing CSCE Helsinki Final Act provisions, including those on reunification of divided families, remains among the best in eastern Europe. Hungary has been a member of the United Nations since December 1955. The Gabčíkovo–Nagymaros Dams dispute involves Hungary and Czechoslovakia (today Slovakia) and stems from a treaty agreed on September 16, 1977 (the "Budapest Treaty"). The treaty envisioned a cross-border barrage system between the towns of Gabčíkovo, Czechoslovakia, and Nagymaros, Hungary. After an intensive campaign, the project became widely hated as a symbol of the old communist regime, and in 1989 the Hungarian government decided to suspend it. In its judgment of September 1997, the International Court of Justice stated that both sides had breached their obligations and that the 1977 Budapest Treaty was still valid. In 1998 the Slovak government turned to the International Court, demanding that the Nagymaros part be built. The dispute remained unresolved as of 2008. On March 19, 2008, Hungary recognized Kosovo as an independent country. Disputes – international: the ongoing Gabčíkovo–Nagymaros Dams dispute with Slovakia. Illicit drugs: Hungary is a major trans-shipment point for Southwest Asian heroin and cannabis, a transit point for South American cocaine destined for Western Europe, and a limited producer of precursor chemicals, particularly for amphetamines and methamphetamines. Refugee protection: the Hungarian border barrier was built in 2015, and Hungary was criticized by other European countries for using tear gas and water cannons on refugees of the Syrian Civil War as they tried – illegally – to cross the country. Since 2017, Hungary–Ukraine relations have deteriorated rapidly over the issue of the Hungarian minority in Ukraine. A number of Hungarian anthropologists and linguists have long had an interest in the Turkic peoples, fueled by the eastern origin of the Hungarians' ancestors.
The Hungarian ethnomusicologist Bence Szabolcsi explained this motivation as follows: "Hungarians are the outermost branch leaning this way from the age-old tree of the great Asian musical culture rooted in the souls of a variety of peoples living from China through Central Asia to the Black Sea". In December 2010, the Fidesz government adopted a press and media law that threatens fines against media engaging in "unbalanced coverage". The law aroused criticism in the European Union as possibly "a direct threat to democracy". In 2013, the government adopted a new constitution that modified several aspects of the institutional and legal framework in Hungary. These changes have been criticized by the Council of Europe, the European Union and Human Rights Watch as possibly undermining the rule of law and human rights protection.
https://en.wikipedia.org/wiki?curid=13432
Henryk Sienkiewicz Henryk Adam Aleksander Pius Sienkiewicz (5 May 1846 – 15 November 1916), also known by the pseudonym Litwos, was a Polish journalist, novelist and Nobel Prize laureate. He is best remembered for his historical novels, especially for his internationally known best-seller "Quo Vadis" (1896). Born into an impoverished Polish noble family in Russian-ruled Congress Poland, in the late 1860s he began publishing journalistic and literary pieces. In the late 1870s he traveled to the United States, sending back travel essays that won him popularity with Polish readers. In the 1880s he began serializing novels that further increased his popularity. He soon became one of the most popular Polish writers of the turn of the 19th and 20th centuries, and numerous translations gained him international renown, culminating in his receipt of the 1905 Nobel Prize in Literature for his "outstanding merits as an epic writer." Many of his novels remain in print. In Poland he is best known for his "Trilogy" of historical novels – "With Fire and Sword", "The Deluge", and "Sir Michael" – set in the 17th-century Polish–Lithuanian Commonwealth; internationally he is best known for "Quo Vadis", set in Nero's Rome. "The Trilogy" and "Quo Vadis" have been filmed, the latter several times, with Hollywood's 1951 version receiving the most international recognition. Sienkiewicz was born on 5 May 1846 in Wola Okrzejska, now a village in the Lubelskie region of eastern Poland, then part of the Russian Empire. His family were impoverished Polish nobles, on his father's side deriving from Tatars who had settled in the Grand Duchy of Lithuania, and were entitled to use the Polish Oszyk coat of arms. His parents were Józef Sienkiewicz (1813–96) and Stefania Cieciszowska (1820–73). His mother descended from an old and affluent Podlachian family. He had five siblings: an older brother, Kazimierz (who died during the January Uprising), and four sisters: Aniela, Helena, Zofia and Maria. Wola Okrzejska belonged to the writer's maternal grandmother, Felicjana Cieciszowska. His family moved several times, and young Henryk spent his childhood on family estates in Grabowce Górne, Wężyczyn and Burzec. In September 1858 he began his education in Warsaw, where the family would finally settle in 1861, having bought a tenement house ("kamienica") in eastern Warsaw's Praga district. He received relatively poor school grades except in the humanities, notably Polish language and history. In hard financial times, the 19-year-old Sienkiewicz took a job as tutor to the Weyher family in Płońsk. It was probably in this period that he wrote his first novel, "Ofiara" (Sacrifice); he is thought to have destroyed the manuscript of the never-published novel. He also worked on his first novel to be published, "Na marne" (In Vain). He completed extramural secondary-school classes, and in 1866 he received his secondary-school diploma. He first tried to study medicine, then law, at the Imperial University of Warsaw, but he soon transferred to the university's Institute of Philology and History, where he acquired a thorough knowledge of literature and the Old Polish language. Little is known about this period of his life, other than that he moved out of his parents' home, tutored part-time, and lived in poverty. His situation improved somewhat in 1868 when he became tutor to the princely Woroniecki family.
In 1867 he wrote a rhymed piece, "Sielanka Młodości" ("Idyll of Youth"), which was rejected by "Tygodnik Ilustrowany" (The Illustrated Weekly). In 1869 he debuted as a journalist: "Przegląd Tygodniowy" (The Weekly Review, 1866–1904) ran his review of a play on 18 April 1869, and shortly afterward "The Illustrated Weekly" printed an essay of his about the late-Renaissance Polish poet Mikołaj Sęp Szarzyński. He completed his university studies in 1871, though he failed to receive a diploma because he did not pass the examination in Greek. Sienkiewicz also wrote for "Gazeta Polska" (The Polish Gazette) and the magazine "Niwa", under the pen name "Litwos". In 1873 he began writing a column, "Bez tytułu" ("Without a Title"), in "The Polish Gazette"; in 1874 a column, "Sprawy bieżące" ("Current Matters"), for "Niwa"; and in 1875 the column "Chwila obecna" ("The Present Moment"). He also collaborated on a Polish translation, published in 1874, of Victor Hugo's last novel, "Ninety-Three". In June that year he became co-owner of "Niwa" (in 1878 he would sell his share in the magazine). Meanwhile, in 1872, he had debuted as a fiction writer with his short novel "Na marne" (In Vain), published in the magazine "Wieniec" (Garland). This was followed by "Humoreski z teki Woroszyłły" (Humorous Sketches from Woroszyłła's Files, 1872), "Stary Sługa" (The Old Servant, 1875), "Hania" (1876) and "Selim Mirza" (1877). The last three are known as the "Little Trilogy". These publications made him a prominent figure in Warsaw's journalistic-literary world and a guest at popular dinner parties hosted by actress Helena Modrzejewska. In 1874 Sienkiewicz was briefly engaged to Maria Keller and traveled abroad to Brussels and Paris. Soon after he returned, his fiancée's parents cancelled the engagement. In 1876 Sienkiewicz went to the United States with Helena Modrzejewska (soon to become famous in the U.S. as actress Helena Modjeska) and her husband. He traveled via London to New York and then on to San Francisco, staying for some time in California. His travels were financed by "Gazeta Polska" (The Polish Gazette) in exchange for a series of travel essays: Sienkiewicz wrote "Listy z podróży" (Letters from a Journey) and "Listy Litwosa z Podróży" (Litwos' Letters from a Journey), which were published in "The Polish Gazette" in 1876–78 and republished as a book in 1880. Other articles by him also appeared in "Przegląd Tygodniowy" (The Weekly Review) and "Przewodnik Naukowy i Literacki" (The Learned and Literary Guide), discussing the situation of American Polonia. He briefly lived in the town of Anaheim, later in Anaheim Landing (now Seal Beach, California). He hunted, visited Native American camps, traveled in the nearby mountains (the Santa Ana, Sierra Madre, San Jacinto, and San Bernardino Mountains), and visited the Mojave Desert, Yosemite Valley, and the silver mines at Virginia City, Nevada. On 20 August 1877 he witnessed Modjeska's U.S. theatrical debut at San Francisco's California Theatre, which he reviewed for "The Polish Gazette"; and on 8 September he published in the Daily Evening Post an article, translated into English for him by Modjeska, on "Poland and Russia". In America he continued writing fiction, in 1877 publishing "Szkice węglem" (Charcoal Sketches) in "The Polish Gazette". He wrote a play, "Na przebój", soon retitled "Na jedną kartę" (On a Single Card), later staged in Lviv (1879) and, to a better reception, in Warsaw (1881).
He also wrote a play for Modjeska, aimed at an American public, "Z walki tutejszych partii" (Partisan Struggles), but it was never performed or published, and the manuscript appears to be lost. On 24 March 1878 Sienkiewicz left the U.S. for Europe. He first stayed in London, then for a year in Paris, delaying his return to Poland due to rumors of possible conscription into the Imperial Russian Army on the eve of a predicted new war with Turkey. In April 1879 Sienkiewicz returned to Polish soil. In Lviv (Lwów) he gave a lecture that was not well attended: "Z Nowego Jorku do Kalifornii" ("From New York to California"). Subsequent lectures in Szczawnica and Krynica in July–August that year, and in Warsaw and Poznań the following year, were much more successful. In late summer 1879 he went to Venice and Rome, which he toured for the next few weeks, returning to Warsaw on 7 November 1879. There he met Maria Szetkiewicz, whom he married on 18 August 1881. The marriage was reportedly a happy one, and the couple had two children, Henryk Józef (1882–1959) and Jadwiga Maria (1883–1969). It was a short-lived marriage, however, because on 18 August 1885 Maria died of tuberculosis. In 1879 the first collected edition of Sienkiewicz's works was published, in four volumes; the series would continue to 1917, ending with a total of 17 volumes. He also continued writing journalistic pieces, mainly in "The Polish Gazette" and "Niwa". In 1881 he published a favorable review of the first collected edition of works by Bolesław Prus. In 1880 Sienkiewicz wrote a historical novella, "Niewola tatarska" (Tartar Captivity). In late 1881 he became editor-in-chief of a new Warsaw newspaper, "Słowo" (The Word), which substantially improved his finances. The year 1882 saw him heavily engaged in the running of the newspaper, in which he published a number of columns and short stories. Soon, however, he lost interest in the journalistic aspect and decided to focus more on his literary work. He paid less and less attention to his post of editor-in-chief, resigning it in 1887 but remaining editor of the paper's literary section until 1892. From 1883 he increasingly shifted his focus from short pieces to historical novels, beginning work on the historical novel "Ogniem i mieczem" (With Fire and Sword). Initially titled "Wilcze gniazdo" (The Wolf's Lair), it appeared in serial installments in "The Word" from May 1883 to March 1884, running concurrently in the Kraków newspaper "Czas" (Time). Sienkiewicz soon began writing the second volume of his Trilogy, "Potop" (The Deluge), which ran in "The Word" from December 1884 to September 1886. Beginning in 1884, Sienkiewicz accompanied his wife Maria to foreign sanatoriums. After her death he kept traveling around Europe, leaving his children with his late wife's parents, though he often returned to Poland, from the 1890s staying for long periods in Warsaw and Kraków. After his return to Warsaw in 1887, the third volume of his Trilogy appeared – "Pan Wołodyjowski" (Sir Michael) – running in "The Word" from May 1887 to May 1888. The Trilogy established Sienkiewicz as the most popular contemporary Polish writer. He received 15,000 rubles, in recognition of his achievements, from an unknown admirer who signed himself "Michał Wołodyjowski" after the Trilogy character. Sienkiewicz used the money to set up a fund, named for his wife and supervised by the Academy of Learning, to aid artists threatened by tuberculosis. In 1886 he visited Istanbul; in 1888, Spain.
At the end of 1890 he went to Africa, which resulted in "Listy z Afryki" (Letters from Africa), published in "The Word" in 1891–92 and collected as a book in 1893. The turn of the 1880s and 1890s was a period of intensive work on several novels. In 1891 his novel "Bez dogmatu" (Without Dogma), previously serialized in 1889–90 in "The Word", was published in book form. In 1892 Sienkiewicz signed an agreement for another novel, "Rodzina Połanieckich" (Children of the Soil), which was serialized in "The Polish Gazette" from 1893 and came out in book form in 1894. Sienkiewicz had several romances, and in 1892 Maria Romanowska-Wołodkowicz, stepdaughter of a wealthy Odessan, entered his life. They became engaged in 1893 and married in Kraków on 11 November. Just two weeks later, however, his bride left him; Sienkiewicz blamed "in-law intrigues". On 13 December 1895 he obtained papal consent to the dissolution of the marriage. In 1904 he married his niece, Maria Babska. Sienkiewicz used his growing international fame to influence world opinion in favor of the Polish cause (throughout his life, as since the late 18th century, Poland remained partitioned by her neighbors: Russia, Austria and Prussia, later Germany). He often criticized German policies of Germanization of the Polish minority in Germany; in 1901 he expressed support for the Września schoolchildren who were protesting the banning of the Polish language. More cautiously, he called on Russia's government to introduce reforms in Russian-controlled Congress Poland. During the Revolution of 1905–07 in the Kingdom of Poland, he advocated broader Polish autonomy within the Russian Empire. Sienkiewicz maintained some ties with Polish right-wing National Democracy politicians and was critical of the socialists, but he was generally a moderate and declined to become a politician or a deputy to the Russian Duma. In the cultural sphere, he was involved in the creation of the Kraków and Warsaw monuments to Adam Mickiewicz. He supported educational endeavors and co-founded the Polska Macierz Szkolna organization. "Reasonably wealthy" by 1908 thanks to sales of his books, he often used his wealth to support struggling writers. He helped gather funds for social-welfare projects such as starvation relief, and for construction of a tuberculosis sanatorium at Zakopane. He was as prominent in philanthropy as in literature. In February 1895 he wrote the first chapters of "Quo Vadis". The novel was serialized, beginning in March 1895, in Warsaw's "Polish Gazette", Kraków's "Czas" (Time), and Poznań's "Dziennik Poznański" (Poznań Daily), and was finished by March 1896. The book edition appeared later the same year and soon gained international renown. In February 1897 he began serializing a new novel, "Krzyżacy" (The Teutonic Knights, or The Knights of the Cross); serialization finished in 1900, and the book edition appeared that year. In 1900, after a three-year delay caused by the approaching centenary of Mickiewicz's birth, Sienkiewicz celebrated the quarter-century of his writing career, begun in 1872. Special events were held in a number of Polish cities, including Kraków, Lwów and Poznań. A jubilee committee presented him with a gift from the Polish people: an estate at Oblęgorek, near Kielce, where he later opened a school for children. In 1905 he won the Nobel Prize in Literature for his lifetime achievements as an epic writer.
In his acceptance speech, he said this honor was of particular value to a son of Poland: "She was pronounced dead – yet here is proof that she lives on... She was pronounced defeated – and here is proof that she is victorious." His social and political activities resulted in a diminished literary output. He wrote a new historical novel, "Na polu chwały" (On the Field of Glory), meant as the beginning of a new trilogy; it was, however, criticized as a lesser version of his original Trilogy and was never continued. Similarly, his contemporary novel "Wiry" (Whirlpools, 1910), which sought to criticize some of Sienkiewicz's political opponents, received a mostly polemical and politicized response. His 1910 novel for young people, "W pustyni i w puszczy" (In Desert and Wilderness), serialized in "Kurier Warszawski" (The Warsaw Courier) and finished in 1911, was much better received and became widely popular among children and young adults. After the outbreak of World War I, Sienkiewicz was visited at Oblęgorek by a Polish Legions cavalry unit under Bolesław Wieniawa-Długoszowski. Soon after, he left for Switzerland. Together with Ignacy Paderewski and Erazm Piltz, he established an organization for Polish war relief. He also supported the work of the Red Cross. Otherwise he eschewed politics, though shortly before his death he endorsed the Act of 5 November 1916, a declaration by Emperor Wilhelm II of Germany and Emperor Franz Joseph of Austria (also King of Hungary) pledging the creation of a Kingdom of Poland, envisioned as a puppet state allied with, and controlled by, the Central Powers. Sienkiewicz died on 15 November 1916, at the Grand Hotel du Lac in Vevey, Switzerland, where he was buried on 22 November. The cause of death was ischemic heart disease. His funeral was attended by representatives of both the Central Powers and the Entente, and an address by Pope Benedict XV was read. In 1924, after Poland had regained her independence, Sienkiewicz's remains were repatriated to Warsaw and placed in the crypt of St. John's Cathedral. During the coffin's transit, solemn memorial ceremonies were held in a number of cities. Thousands accompanied the coffin to its Warsaw resting place, and Poland's President Stanisław Wojciechowski delivered a eulogy. Sienkiewicz's early works (e.g., the 1872 "Humoreski z teki Woroszyłły") show him to be a strong supporter of Polish Positivism, endorsing constructive, practical characters such as engineers. Polish "Positivism" advocated economic and social modernization and deprecated armed irredentist struggle. Unlike most other Polish Positivist writers, Sienkiewicz was a conservative. His Little Trilogy ("Stary Sługa", 1875; "Hania", 1876; "Selim Mirza", 1877) shows his interest in Polish history and his literary maturity, including a fine mastery of humor and drama. His early works focused on three themes: the oppression and poverty of the peasants ("Charcoal Sketches", 1877); criticism of the partitioning powers ("Z pamiętnika korepetytora", "Janko Muzykant" ["Janko the Musician"], 1879); and his voyage to the United States ("Za chlebem", "For Bread", 1880). His most common motif was the plight of the powerless: impoverished peasants, schoolchildren, emigrants. His "Latarnik" ("The Lighthouse Keeper", 1881) has been described as one of the best Polish short stories. His 1882 stories "Bartek Zwycięzca" ("Bart the Conqueror") and "Sachem" draw parallels between the tragic fates of their heroes and that of the occupied Polish nation.
His novel "With Fire and Sword" (1883–84) was enthusiastically received by readers (as were the next two volumes of The Trilogy), becoming an "instant classic", though critical reception was lukewarm. The Trilogy is set in 17th-century Poland. While critics generally praised its style, they noted that some historic facts are misrepresented or distorted. The Trilogy merged elements of the epic and the historical novel, infused with special features of Sienkiewicz's style. The Trilogy's patriotism worried the censors; Warsaw's Russian censor I. Jankul warned Sienkiewicz that he would not allow publication of any further works of his dealing with Polish history. Sienkiewicz's "Without dogma" ("Bez dogmatu", 1889–90) was a notable artistic experiment, a self-analytical novel written as a fictitious diary. His works of the period are critical of decadent and naturalistic philosophies. He had expressed his opinions on naturalism and writing, generally, early on in ""O naturaliźmie w powieści"" ("Naturalism in the Novel", 1881). A dozen years later, in 1893, he wrote that novels should strengthen and ennoble life, rather than undermining and debasing it. Later, in the early 1900s, he fell into mutual hostility with the Young Poland movement in Polish literature. These views informed his novel "Quo Vadis" (1896). This story of early Christianity in Rome, with protagonists struggling against the Emperor Nero's regime, draws parallels between repressed early Christians and contemporary Poles; and, due to its focus on Christianity, it became widely popular in the Christian West. The triumph of spiritual Christianity over materialist Rome was a critique of materialism and decadence, and also an allegory for the strength of the Polish spirit. His "Teutonic Knights" returned to Poland's history. Describing the Battle of Grunwald (1410), a Polish-Lithuanian victory over the Germans in the Polish-Lithuanian-Teutonic War, the book had a substantial contemporary political context in the ongoing Germanization efforts ("Kulturkampf") in German Poland. The book quickly became another Sienkiewicz bestseller in Poland, and was received by critics better than his Trilogy had been; it was also applauded by the Polish right-wing, anti-German National Democracy political movement, and became part of the Polish school curriculum after Poland regained independence in 1918. It is often incorrectly asserted that Sienkiewicz received his Nobel Prize for "Quo Vadis". While "Quo Vadis" is the novel that brought him international fame, the Nobel Prize does not name any particular novel, instead citing "his outstanding merits as an epic writer". Sienkiewicz often carried out substantial historic research for his novels, but he was selective in the findings that made it into the novels. Thus, for example, he prioritized Polish military victories over defeats. Sienkiewicz kept a diary, but it has been lost. About the turn of the 20th century, Sienkiewicz was the most popular writer in Poland, and one of the most popular in Germany, France, Russia, and the English-speaking world. The Trilogy went through many translations; "With Fire and Sword" saw at least 26 in his lifetime. "Quo Vadis" became extremely popular, in at least 40 different language translations, including English-language editions totaling a million copies. The American translator Jeremiah Curtin has been credited with helping popularize his works abroad. 
However, as Russia (of which Sienkiewicz was a citizen) was not a signatory to the Berne Convention, he rarely received any royalties from the translations. Already in his lifetime his works were adapted for theatrical, operatic and musical presentations and for the emerging film industry. Writers and poets devoted works to him, or used him or his works as inspiration. Painters created works inspired by Sienkiewicz's novels, and their works were gathered in Sienkiewicz-themed albums and exhibitions. The names of his characters were given to a variety of products. The popularity of "Quo Vadis" in France, where it was the best-selling book of 1900, is shown by the fact that horses competing in a Grand Prix de Paris event were named for characters in the book. In the United States, "Quo Vadis" sold 800,000 copies in eighteen months. To avoid intrusive journalists and fans, Sienkiewicz sometimes traveled incognito. He was inducted into many international organizations and societies, including the Polish Academy of Learning, the Russian Academy of Sciences, the Serbian Academy of Sciences and Arts, the Royal Czech Society of Sciences, and the Italian Academy of Arcadia. He received the French "Légion d'honneur" (1904), honorary doctorates from the Jagiellonian University (1900) and Lwów University (1911), and honorary citizenship of Lwów (1902). In 1905 he received the most prestigious award in the world of literature, the Nobel Prize, after being nominated that year by Hans Hildebrand, a member of the Swedish Academy. Numerous streets and squares in Poland are named for Sienkiewicz (the first street to bear his name was in Lwów, in 1907), as are Białystok's "Osiedle Sienkiewicza", city parks in Wrocław and Łódź, and over 70 schools across Poland. He has statues in a number of Polish cities, including in Warsaw's Łazienki Park (the first statue was erected at Zbaraż, now in Ukraine), and in Rome. A Sienkiewicz Mound stands at Okrzeja, near his birthplace, Wola Okrzejska. He has been featured on a number of postage stamps. There are three museums dedicated to him in Poland. The first, the Henryk Sienkiewicz Museum in Oblęgorek (his residence), opened in 1958. The second, founded in 1966, is in his birthplace: the Henryk Sienkiewicz Museum in Wola Okrzejska. The third opened in 1978 at Poznań. In Rome, in the small church of "Domine Quo Vadis", there is a bronze bust of Henryk Sienkiewicz; it is said that he was inspired to write "Quo Vadis" while sitting in this church. Outside Poland, Sienkiewicz's popularity declined beginning in the interwar period, except for "Quo Vadis", which retained relative fame thanks to several film adaptations, including a notable American one in 1951. In Poland his works are still widely read; he is seen as a classic author, and his works are often required reading in schools. They have also been adapted for Polish films and television series. The first critical analyses of his works were published in his lifetime. He has been the subject of a number of biographies. His works have received criticism, in his lifetime and since, as being simplistic: a view expressed notably by the 20th-century Polish novelist and dramatist Witold Gombrowicz, who described Sienkiewicz as a "first-rate second-rate writer". Vasily Rozanov described "Quo Vadis" as "not a work of art" but a "crude factory-made oleograph", while Anton Chekhov called Sienkiewicz's writing "sickeningly cloying and clumsy".
Nonetheless, the Polish historian of literature Henryk Markiewicz, writing the entry on Sienkiewicz in the "Polski słownik biograficzny" (Polish Biographical Dictionary, 1997), describes him as a master of Polish prose, as the foremost Polish writer of historical fiction, and as Poland's internationally best-known writer.
https://en.wikipedia.org/wiki?curid=13433
Hg Hg is the chemical symbol of the element mercury. Hg, hg, HG, and inHg may also refer to other topics.
https://en.wikipedia.org/wiki?curid=13434
Hydrology Hydrology (from Greek: ὕδωρ, "hýdōr" meaning "water" and λόγος, "lógos" meaning "study") is the scientific study of the movement, distribution and management of water on Earth and other planets, including the water cycle, water resources and environmental watershed sustainability. A practitioner of hydrology is called a hydrologist. Hydrologists are scientists with backgrounds in earth or environmental science, civil or environmental engineering, or physical geography. Using various analytical methods and scientific techniques, they collect and analyze data to help solve water-related problems such as environmental preservation, natural disasters, and water management. Hydrology subdivides into surface water hydrology, groundwater hydrology (hydrogeology), and marine hydrology. Domains of hydrology include hydrometeorology, surface hydrology, hydrogeology, drainage-basin management and water quality, where water plays the central role. Oceanography and meteorology are not included because water is only one of many important aspects within those fields. Hydrological research can inform environmental engineering, policy and planning. Hydrology has been a subject of investigation and engineering for millennia. For example, about 4000 BC the Nile was dammed to improve the agricultural productivity of previously barren lands. Mesopotamian towns were protected from flooding with high earthen walls. Aqueducts were built by the Greeks and Ancient Romans, while the Chinese built irrigation and flood-control works. The ancient Sinhalese used hydrology to build complex irrigation works in Sri Lanka; they are also known for the invention of the valve pit, which allowed the construction of large reservoirs, anicuts and canals that still function. Marcus Vitruvius, in the first century BC, described a philosophical theory of the hydrologic cycle, in which precipitation falling in the mountains infiltrated the Earth's surface and led to streams and springs in the lowlands. With the adoption of a more scientific approach, Leonardo da Vinci and Bernard Palissy independently reached an accurate representation of the hydrologic cycle. It was not until the 17th century that hydrologic variables began to be quantified. Pioneers of the modern science of hydrology include Pierre Perrault, Edme Mariotte and Edmund Halley. By measuring rainfall, runoff, and drainage area, Perrault showed that rainfall was sufficient to account for the flow of the Seine (a simple check of this kind is sketched below). Mariotte combined velocity and river cross-section measurements to obtain a discharge, again in the Seine. Halley showed that the evaporation from the Mediterranean Sea was sufficient to account for the outflow of rivers flowing into the sea. Advances in the 18th century included the Bernoulli piezometer and Bernoulli's equation, by Daniel Bernoulli, and the Pitot tube, by Henri Pitot. The 19th century saw developments in groundwater hydrology, including Darcy's law, the Dupuit-Thiem well formula, and the Hagen-Poiseuille capillary flow equation. Rational analyses began to replace empiricism in the 20th century, while governmental agencies began their own hydrological research programs. Of particular importance were Leroy Sherman's unit hydrograph, the infiltration theory of Robert E. Horton, and C.V. Theis' aquifer test/equation describing well hydraulics.
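Perrault's demonstration amounts to a water-balance check: compare the annual volume of rain falling on a drainage basin with the annual volume the river carries away. The minimal sketch below illustrates the arithmetic; the rainfall depth, basin area, and discharge are hypothetical values chosen for illustration, not measurements of the Seine.

```python
# Water-balance check in the spirit of Perrault's rainfall/runoff comparison.
# All numbers are hypothetical, for illustration only.

annual_rainfall_m = 0.6        # mean annual precipitation depth (m), assumed
basin_area_m2 = 1.2e10         # drainage basin area (m^2), assumed
mean_discharge_m3s = 80.0      # mean river discharge (m^3/s), assumed

rain_volume = annual_rainfall_m * basin_area_m2       # m^3 of rain per year
runoff_volume = mean_discharge_m3s * 365.25 * 86400   # m^3 leaving the basin per year

ratio = runoff_volume / rain_volume
print(f"River carries {ratio:.0%} of the rain falling on the basin")
# A ratio well below 100% shows rainfall alone can account for the flow.
```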
Since the 1950s, hydrology has been approached with a more theoretical basis than in the past, facilitated by advances in the physical understanding of hydrological processes and by the advent of computers and especially geographic information systems (GIS). (See also GIS and hydrology.) The central theme of hydrology is that water circulates throughout the Earth through different pathways and at different rates. The most vivid image of this is the evaporation of water from the ocean, which forms clouds. These clouds drift over the land and produce rain. The rainwater flows into lakes, rivers, or aquifers, and the water in lakes, rivers, and aquifers then either evaporates back to the atmosphere or eventually flows back to the ocean, completing the cycle. Water changes its state of being several times throughout this cycle. The areas of research within hydrology concern the movement of water between its various states, or within a given state, or simply quantifying the amounts in these states in a given region. Parts of hydrology concern developing methods for directly measuring these flows or amounts of water, while others concern modeling these processes, either for scientific knowledge or for making predictions in practical applications. Groundwater is water beneath Earth's surface, often pumped for drinking water. Groundwater hydrology (hydrogeology) considers quantifying groundwater flow and solute transport. Problems in describing the saturated zone include the characterization of aquifers in terms of flow direction, groundwater pressure and, by inference, groundwater depth (see: aquifer test). Measurements here can be made using a piezometer. Aquifers are also described in terms of hydraulic conductivity, storativity and transmissivity. There are a number of geophysical methods for characterizing aquifers, and there are also problems in characterizing the vadose (unsaturated) zone. Infiltration is the process by which water enters the soil. Some of the water is absorbed, and the rest percolates down to the water table. The infiltration capacity, the maximum rate at which the soil can absorb water, depends on several factors. The layer that is already saturated provides a resistance proportional to its thickness, while that thickness plus the depth of water above the soil provides the driving force (hydraulic head). Dry soil can allow rapid infiltration by capillary action; this force diminishes as the soil becomes wet. Compaction reduces the porosity and the pore sizes. Surface cover increases capacity by retarding runoff, reducing compaction and other processes. Higher temperatures reduce viscosity, increasing infiltration. (The resulting decay of infiltration capacity during a storm is sketched below.) Soil moisture can be measured in various ways: by capacitance probe, time-domain reflectometer or tensiometer. Other methods include solute sampling and geophysical methods. Hydrology also considers quantifying surface water flow and solute transport, although the treatment of flows in large rivers is sometimes considered a distinct topic of hydraulics or hydrodynamics. Surface water flow can include flow both in recognizable river channels and otherwise. Methods for measuring flow once the water has reached a river include the stream gauge (see: discharge) and tracer techniques. Other topics include chemical transport as part of surface water, sediment transport and erosion.
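The decay of infiltration capacity as a soil wets, described above, is classically summarized by Horton's equation, f(t) = f_c + (f_0 − f_c)·e^(−kt), from the infiltration theory mentioned earlier. Below is a minimal sketch; the parameter values are illustrative assumptions, not calibrated soil properties.

```python
import math

def horton_infiltration(t_hours, f0=60.0, fc=10.0, k=2.0):
    """Horton infiltration capacity f(t) = fc + (f0 - fc) * exp(-k * t).

    f0 -- initial (dry-soil) capacity, mm/h (illustrative)
    fc -- final (saturated) capacity, mm/h (illustrative)
    k  -- decay constant, 1/h (illustrative)
    """
    return fc + (f0 - fc) * math.exp(-k * t_hours)

# Capacity starts high on dry soil and decays toward the saturated rate.
for t in (0.0, 0.5, 1.0, 2.0):
    print(f"t = {t:.1f} h: capacity = {horton_infiltration(t):5.1f} mm/h")
```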
One of the important areas of hydrology is the interchange between rivers and aquifers. Groundwater/surface water interactions in streams and aquifers can be complex, and the direction of net water flux (into surface water or into the aquifer) may vary spatially along a stream channel and over time at any particular location, depending on the relationship between stream stage and groundwater levels. In some considerations, hydrology is thought of as starting at the land-atmosphere boundary, so it is important to have adequate knowledge of both precipitation and evaporation. Precipitation can be measured in various ways: by disdrometer for precipitation characteristics at a fine time scale; by radar for cloud properties, rain-rate estimation, and hail and snow detection; by rain gauge for routine accurate measurements of rain and snowfall; and by satellite for rainy-area identification, rain-rate estimation, land-cover/land-use, and soil moisture, for example. Evaporation is an important part of the water cycle. It is partly affected by humidity, which can be measured by a sling psychrometer. It is also affected by the presence of snow, hail, and ice and can relate to dew, mist and fog. Hydrology considers evaporation in its various forms: from water surfaces, and as transpiration from plant surfaces in natural and agronomic ecosystems. Direct measurement of evaporation can be obtained using Simon's evaporation pan. Detailed studies of evaporation involve boundary-layer considerations as well as momentum, heat flux, and energy budgets. Remote sensing of hydrologic processes can provide information for locations where "in situ" sensors are unavailable or sparse, and it enables observations over large spatial extents. Many of the variables constituting the terrestrial water balance, for example surface water storage, soil moisture, precipitation, evapotranspiration, and snow and ice, are measurable using remote sensing at various spatial-temporal resolutions and accuracies. Sources of remote sensing include land-based sensors, airborne sensors and satellite sensors, which can capture microwave, thermal and near-infrared data or use lidar, for example. In hydrology, studies of water quality concern organic and inorganic compounds, both dissolved and as sediment. In addition, water quality is affected by the interaction of dissolved oxygen with organic material and by various chemical transformations that may take place. Measurements of water quality may involve either in-situ methods, in which analyses take place on-site, often automatically, or laboratory-based analyses, and may include microbiological analysis. Observations of hydrologic processes are used to make predictions of the future behavior of hydrologic systems (water flow, water quality). One of the major current concerns in hydrologic research is "Prediction in Ungauged Basins" (PUB), i.e. prediction in basins where no or only very few data exist. By analyzing the statistical properties of hydrologic records, such as rainfall or river flow, hydrologists can estimate future hydrologic phenomena. When assessing how often relatively rare events will occur, analyses are made in terms of the return period of such events, as illustrated in the sketch below. Other quantities of interest include the average flow in a river, in a year or by season. These estimates are important for engineers and economists so that proper risk analysis can be performed to influence investment decisions in future infrastructure and to determine the yield-reliability characteristics of water supply systems.
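As a concrete illustration of return-period analysis, a common first step is to rank a series of annual maximum flows and assign each an empirical return period with a plotting-position formula such as Weibull's, T = (n + 1)/m, where n is the record length and m the rank. The sketch below uses hypothetical flows; a real study would use a long gauge record and usually fit a probability distribution rather than rely on ranks alone.

```python
# Empirical return periods from an annual-maximum flood series.
# The flows below are hypothetical values, not a real gauge record.
annual_maxima = [412, 380, 950, 510, 705, 330, 620, 870, 450, 540]  # m^3/s

n = len(annual_maxima)
ranked = sorted(annual_maxima, reverse=True)  # rank 1 = largest observed flood

for rank, flow in enumerate(ranked, start=1):
    T = (n + 1) / rank    # Weibull plotting position: return period in years
    p = 1.0 / T           # annual exceedance probability
    print(f"{flow:4d} m^3/s: T = {T:5.1f} yr, P(exceedance) = {p:.2f}")
```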
Statistical information is utilized to formulate operating rules for large dams forming part of systems which include agricultural, industrial and residential demands.

Hydrological models are simplified, conceptual representations of a part of the hydrologic cycle. They are primarily used for hydrological prediction and for understanding hydrological processes, within the general field of scientific modeling. Two major types of hydrological models can be distinguished: models based on data (empirical, "black-box" models) and models based on descriptions of the physical processes involved. Recent research in hydrological modeling takes a more global approach to understanding the behavior of hydrologic systems, to make better predictions and to face the major challenges in water resources management.

Water movement is a significant means by which other material, such as soil, gravel, boulders or pollutants, is transported from place to place. Initial input to receiving waters may arise from a point source discharge or from a line source or area source, such as surface runoff. Since the 1960s, rather complex mathematical models have been developed, facilitated by the availability of high-speed computers. The most common pollutant classes analyzed are nutrients, pesticides, total dissolved solids and sediment.
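As one hedged example of such a "simplified, conceptual representation", the sketch below implements a single linear reservoir, a common building block of lumped rainfall-runoff models; the recession constant and rainfall series are illustrative assumptions rather than values from the text.

```python
def linear_reservoir(rainfall_mm, k=0.3, storage0=0.0):
    """Toy conceptual rainfall-runoff model: catchment storage S receives
    rainfall each time step and drains as Q = k * S (a single linear
    reservoir), producing the characteristic exponential recession."""
    storage, runoff = storage0, []
    for p in rainfall_mm:
        storage += p          # rainfall adds to storage
        q = k * storage       # outflow proportional to current storage
        storage -= q
        runoff.append(q)
    return runoff

# Daily rainfall (mm) for a short storm followed by dry days:
print([round(q, 1) for q in linear_reservoir([0, 20, 35, 5, 0, 0, 0])])
```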
https://en.wikipedia.org/wiki?curid=13435
Heinrich Himmler Heinrich Luitpold Himmler (7 October 1900 – 23 May 1945) was "Reichsführer" of the "Schutzstaffel" (Protection Squadron; SS), and a leading member of the Nazi Party (NSDAP) of Germany. Himmler was one of the most powerful men in Nazi Germany and a main architect of the Holocaust. As a member of a reserve battalion during World War I, Himmler did not see active service. He studied agronomy at university, and joined the Nazi Party in 1923 and the SS in 1925. In 1929, he was appointed "Reichsführer-SS" by Adolf Hitler. Over the next 16 years, he developed the SS from a mere 290-man battalion into a million-strong paramilitary group, and set up and controlled the Nazi concentration camps. He was known for his good organisational skills and for selecting highly competent subordinates, such as Reinhard Heydrich in 1931. From 1943 onwards, he was both Chief of German Police and Minister of the Interior, overseeing all internal and external police and security forces, including the Gestapo (Secret State Police). He controlled the Waffen-SS, the military branch of the SS. Himmler had a lifelong interest in occultism, interpreting Germanic neopagan and "Völkisch" beliefs to promote the racial policy of Nazi Germany, and incorporating esoteric symbolism and rituals into the SS. Himmler formed the "Einsatzgruppen" and built extermination camps. As facilitator and overseer of the concentration camps, Himmler directed the killing of some six million Jews, between 200,000 and 500,000 Romani people, and other victims; the total number of civilians killed by the regime is estimated at eleven to fourteen million people, most of them Polish and Soviet citizens. Late in World War II, Hitler briefly appointed him a military commander and later Commander of the Replacement (Home) Army and General Plenipotentiary for the administration of the entire Third Reich ("Generalbevollmächtigter für die Verwaltung"). Specifically, he was given command of Army Group Upper Rhine and Army Group Vistula; he failed to achieve his assigned objectives and Hitler replaced him in these posts. Realising the war was lost, Himmler attempted to open peace talks with the western Allies without Hitler's knowledge, shortly before the end of the war. Hearing of this, Hitler dismissed him from all his posts in April 1945 and ordered his arrest. Himmler attempted to go into hiding, but was detained and then arrested by British forces once his identity became known. While in British custody, he committed suicide on 23 May 1945. Heinrich Luitpold Himmler was born in Munich on 7 October 1900 into a conservative middle-class Roman Catholic family. His father was Joseph Gebhard Himmler (17 May 1865 – 29 October 1936), a teacher, and his mother was Anna Maria Himmler (née Heyder; 16 January 1866 – 10 September 1941), a devout Roman Catholic. Heinrich had two brothers, Gebhard Ludwig (29 July 1898 – 22 June 1982) and Ernst Hermann (23 December 1905 – 2 May 1945). Himmler's first name, Heinrich, was that of his godfather, Prince Heinrich of Bavaria, a member of the royal family of Bavaria, who had been tutored by Gebhard Himmler. He attended a grammar school in Landshut, where his father was deputy principal. While he did well in his schoolwork, he struggled in athletics. He had poor health, suffering from lifelong stomach complaints and other ailments. In his youth he trained daily with weights and exercised to become stronger. Other boys at the school later remembered him as studious and awkward in social situations.
Himmler's diary, which he kept intermittently from the age of 10, shows that he took a keen interest in current events, dueling, and "the serious discussion of religion and sex". In 1915, he began training with the Landshut Cadet Corps. His father used his connections with the royal family to get Himmler accepted as an officer candidate, and he enlisted with the reserve battalion of the 11th Bavarian Regiment in December 1917. His brother, Gebhard, served on the western front and saw combat, receiving the Iron Cross and eventually being promoted to lieutenant. In November 1918, while Himmler was still in training, the war ended with Germany's defeat, denying him the opportunity to become an officer or see combat. After his discharge on 18 December, he returned to Landshut. After the war, Himmler completed his grammar-school education. From 1919 to 1922, he studied agronomy at the Munich "Technische Hochschule" (now Technical University Munich) following a brief apprenticeship on a farm and a subsequent illness. Although many regulations that discriminated against non-Christians—including Jews and other minority groups—had been eliminated during the unification of Germany in 1871, antisemitism continued to exist and thrive in Germany and other parts of Europe. Himmler was antisemitic by the time he went to university, but not exceptionally so; students at his school would avoid their Jewish classmates. He remained a devout Catholic while a student, and spent most of his leisure time with members of his fencing fraternity, the "League of Apollo", the president of which was Jewish. Himmler maintained a polite demeanor with him and with other Jewish members of the fraternity, in spite of his growing antisemitism. During his second year at university, Himmler redoubled his attempts to pursue a military career. Although he was not successful, he was able to extend his involvement in the paramilitary scene in Munich. It was at this time that he first met Ernst Röhm, an early member of the Nazi Party and co-founder of the "Sturmabteilung" ("Storm Battalion"; SA). Himmler admired Röhm because he was a decorated combat soldier, and at his suggestion Himmler joined his antisemitic nationalist group, the "Bund Reichskriegsflagge" (Imperial War Flag Society). In 1922, Himmler became more interested in the "Jewish question", with his diary entries containing an increasing number of antisemitic remarks and recording a number of discussions about Jews with his classmates. His reading lists, as recorded in his diary, were dominated by antisemitic pamphlets, German myths, and occult tracts. After the murder of Foreign Minister Walther Rathenau on 24 June, Himmler's political views veered towards the radical right, and he took part in demonstrations against the Treaty of Versailles. Hyperinflation was raging, and his parents could no longer afford to educate all three sons. Disappointed by his failure to make a career in the military and his parents' inability to finance his doctoral studies, he was forced to take a low-paying office job after obtaining his agricultural diploma. He remained in this position until September 1923. Himmler joined the Nazi Party (NSDAP) in August 1923, receiving Party number 14,303. As a member of Röhm's paramilitary unit, Himmler was involved in the Beer Hall Putsch—an unsuccessful attempt by Hitler and the NSDAP to seize power in Munich. This event would set Himmler on a life of politics.
He was questioned by the police about his role in the putsch, but was not charged because of insufficient evidence. However, he lost his job, was unable to find employment as an agronomist, and had to move in with his parents in Munich. Frustrated by these failures, he became ever more irritable, aggressive, and opinionated, alienating both friends and family members. In 1923–24, Himmler, while searching for a world view, came to abandon Catholicism and focused on the occult and antisemitism. Germanic mythology, reinforced by occult ideas, became a religion for him. Himmler found the NSDAP appealing because its political positions agreed with his own views. Initially, he was not swept up by Hitler's charisma or the cult of Führer worship. However, as he learned more about Hitler through his reading, he began to regard him as a useful face of the party, and he later admired and even worshipped him. To consolidate and advance his own position in the NSDAP, Himmler took advantage of the disarray in the party following Hitler's arrest in the wake of the Beer Hall Putsch. From mid-1924 he worked under Gregor Strasser as a party secretary and propaganda assistant. Travelling all over Bavaria agitating for the party, he gave speeches and distributed literature. Placed in charge of the party office in Lower Bavaria by Strasser from late 1924, he was responsible for integrating the area's membership with the NSDAP under Hitler when the party was re-founded in February 1925. That same year, he joined the "Schutzstaffel" (SS) as an SS-Führer (SS-Leader); his SS number was 168. The SS, initially part of the much larger SA, was formed in 1923 for Hitler's personal protection, and was re-formed in 1925 as an elite unit of the SA. Himmler's first leadership position in the SS was that of "SS-Gauführer" (district leader) in Lower Bavaria from 1926. Strasser appointed Himmler deputy propaganda chief in January 1927. As was typical in the NSDAP, he had considerable freedom of action in his post, which increased over time. He began to collect statistics on the number of Jews, Freemasons, and enemies of the party, and following his strong need for control, he developed an elaborate bureaucracy. In September 1927, Himmler told Hitler of his vision to transform the SS into a loyal, powerful, racially pure elite unit. Convinced that Himmler was the man for the job, Hitler appointed him Deputy "Reichsführer-SS", with the rank of "SS-Oberführer". Around this time, Himmler joined the Artaman League, a "Völkisch" youth group. There he met Rudolf Höss, who was later commandant of Auschwitz concentration camp, and Walther Darré, whose book, "The Peasantry as the Life Source of the Nordic Race", caught Hitler's attention, leading to his later appointment as Reich Minister of Food and Agriculture. Darré was a firm believer in the superiority of the Nordic race, and his philosophy was a major influence on Himmler. Upon the resignation of SS commander Erhard Heiden in January 1929, Himmler assumed the position of "Reichsführer-SS" with Hitler's approval; he still carried out his duties at propaganda headquarters. One of his first responsibilities was to organise SS participants at the Nuremberg Rally that September. Over the next year, Himmler grew the SS from a force of about 290 men to about 3,000. By 1930 Himmler had persuaded Hitler to run the SS as a separate organisation, although it was officially still subordinate to the SA.
To gain political power, the NSDAP took advantage of the economic downturn during the Great Depression. The coalition government of the Weimar Republic was unable to improve the economy, so many voters turned to the political extreme, which included the NSDAP. Hitler used populist rhetoric, including blaming scapegoats—particularly the Jews—for the economic hardships. In the 1932 election, the Nazis won 37.3 percent of the vote and 230 seats in the Reichstag. Hitler was appointed Chancellor of Germany by President Paul von Hindenburg on 30 January 1933, heading a short-lived coalition of his Nazis and the German National People's Party. The new cabinet initially included only three members of the NSDAP: Hitler, Hermann Göring as minister without portfolio and Minister of the Interior for Prussia, and Wilhelm Frick as Reich Interior Minister. Less than a month later, the Reichstag building was set on fire. Hitler took advantage of this event, forcing von Hindenburg to sign the Reichstag Fire Decree, which suspended basic rights and allowed detention without trial. The Enabling Act, passed by the Reichstag in 1933, gave the Cabinet—in practice, Hitler—full legislative powers, and the country became a de facto dictatorship. On 1 August 1934, Hitler's cabinet passed a law which stipulated that upon von Hindenburg's death, the office of president would be abolished and its powers merged with those of the chancellor. Von Hindenburg died the next morning, and Hitler became both head of state and head of government under the title "Führer und Reichskanzler" (leader and chancellor). The Nazi Party's rise to power provided Himmler and the SS with an unfettered opportunity to thrive. By 1933, the SS numbered 52,000 members. Strict membership requirements ensured that all members were of Hitler's Aryan "Herrenvolk" ("master race"). Applicants were vetted for Nordic qualities—in Himmler's words, "like a nursery gardener trying to reproduce a good old strain which has been adulterated and debased; we started from the principles of plant selection and then proceeded quite unashamedly to weed out the men whom we did not think we could use for the build-up of the SS." Few dared mention that, by his own standards, Himmler did not meet his own ideals. Himmler's organised, bookish intellect served him well as he began setting up different SS departments. In 1931 he appointed Reinhard Heydrich chief of the new Ic Service (intelligence service), which was renamed the "Sicherheitsdienst" (SD: Security Service) in 1932. He later officially appointed Heydrich his deputy. The two men had a good working relationship and a mutual respect. In 1933, they began to remove the SS from SA control. Along with Interior Minister Frick, they hoped to create a unified German police force. In March 1933, Reich Governor of Bavaria Franz Ritter von Epp appointed Himmler chief of the Munich Police. Himmler appointed Heydrich commander of Department IV, the political police. That same year, Hitler promoted Himmler to the rank of SS-"Obergruppenführer", equal in rank to the senior SA commanders. Thereafter, Himmler and Heydrich took over the political police of state after state; soon only Prussia was controlled by Göring. Himmler further established the SS Race and Settlement Main Office ("Rasse- und Siedlungshauptamt" or RuSHA). He appointed Darré as its first chief, with the rank of SS-"Gruppenführer". The department implemented racial policies and monitored the "racial integrity" of the SS membership.
SS men were carefully vetted for their racial background. On 31 December 1931, Himmler introduced the "marriage order", which required SS men wishing to marry to produce family trees proving that both families were of Aryan descent back to 1800. If any non-Aryan forebears were found in either family tree during the racial investigation, the person concerned was excluded from the SS. Each man was issued a "Sippenbuch", a genealogical record detailing his genetic history. Himmler expected that each SS marriage should produce at least four children, thus creating a pool of genetically superior prospective SS members. The programme had disappointing results; fewer than 40 per cent of SS men married, and those who did produced only about one child each. In March 1933, less than three months after the Nazis came to power, Himmler set up the first official concentration camp at Dachau. Hitler had stated that he did not want it to be just another prison or detention camp. Himmler appointed Theodor Eicke, a convicted felon and ardent Nazi, to run the camp in June 1933. Eicke devised a system that was used as a model for future camps throughout Germany. Its features included isolation of victims from the outside world, elaborate roll calls and work details, the use of force and executions to exact obedience, and a strict disciplinary code for the guards. Uniforms were issued for prisoners and guards alike; the guards' uniforms had a special "Totenkopf" insignia on their collars. By the end of 1934, Himmler had taken control of the camps under the aegis of the SS, creating a separate division, the "SS-Totenkopfverbände". Initially the camps housed political opponents; over time, undesirable members of German society—criminals, vagrants, deviants—were placed in the camps as well. A Hitler decree issued in December 1937 allowed for the incarceration of anyone deemed by the regime to be an undesirable member of society. This included Jews, Gypsies, communists, and those persons of any other cultural, racial, political, or religious affiliation deemed by the Nazis to be "Untermensch" (sub-human). Thus, the camps became a mechanism for social and racial engineering. By the outbreak of World War II in autumn 1939, there were six camps housing some 27,000 inmates. Death tolls were high. In early 1934, Hitler and other Nazi leaders became concerned that Röhm was planning a coup d'état. Röhm had socialist and populist views, and believed that the real revolution had not yet begun. He felt that the SA—now numbering some three million men, dwarfing the army—should become the sole arms-bearing corps of the state, and that the army should be absorbed into the SA under his leadership. Röhm lobbied Hitler to appoint him Minister of Defence, a position held by conservative General Werner von Blomberg. Göring had created a Prussian secret police force, the "Geheime Staatspolizei" or Gestapo, in 1933, and appointed Rudolf Diels as its head. Göring, concerned that Diels was not ruthless enough to use the Gestapo effectively to counteract the power of the SA, handed over its control to Himmler on 20 April 1934. Also on that date, Hitler appointed Himmler chief of all German police outside Prussia. This was a radical departure from long-standing German practice that law enforcement was a state and local matter. Heydrich, named chief of the Gestapo by Himmler on 22 April 1934, also continued as head of the SD. Hitler decided on 21 June that Röhm and the SA leadership had to be eliminated.
He sent Göring to Berlin on 29 June to meet with Himmler and Heydrich to plan the action. Hitler took charge in Munich, where Röhm was arrested; he gave Röhm the choice to commit suicide or be shot. When Röhm refused to kill himself, he was shot dead by two SS officers. Between 85 and 200 members of the SA leadership and other political adversaries, including Gregor Strasser, were killed between 30 June and 2 July 1934 in these actions, known as the Night of the Long Knives. With the SA thus neutralised, the SS became an independent organisation answerable only to Hitler on 20 July 1934. Himmler's title of "Reichsführer-SS" became the highest formal SS rank, equivalent to a field marshal in the army. The SA was converted into a sports and training organisation. On 15 September 1935, Hitler presented two laws—known as the Nuremberg Laws—to the Reichstag. The laws banned marriage between non-Jewish and Jewish Germans and forbade the employment of non-Jewish women under the age of 45 in Jewish households. The laws also deprived so-called "non-Aryans" of the benefits of German citizenship. These laws were among the first race-based measures instituted by the Third Reich. Himmler and Heydrich wanted to extend the power of the SS; thus, they urged Hitler to form a national police force overseen by the SS, to guard Nazi Germany against its many enemies at the time—real and imagined. Interior Minister Frick also wanted a national police force, but one controlled by him, with Kurt Daluege as his police chief. Hitler left it to Himmler and Heydrich to work out the arrangements with Frick. Himmler and Heydrich had greater bargaining power, as they were allied with Frick's old enemy, Göring. Heydrich drew up a set of proposals and Himmler sent him to meet with Frick. An angry Frick then consulted with Hitler, who told him to agree to the proposals. Frick acquiesced, and on 17 June 1936 Hitler decreed the unification of all police forces in the Reich, and named Himmler Chief of German Police. In this role, Himmler was still nominally subordinate to Frick. In practice, however, the police were now effectively a division of the SS, and hence independent of Frick's control. This move gave Himmler operational control over Germany's entire detective force. He also gained authority over all of Germany's uniformed law enforcement agencies, which were amalgamated into the new "Ordnungspolizei" (Orpo: "order police"), which became a branch of the SS under Daluege. Shortly thereafter, Himmler created the "Kriminalpolizei" (Kripo: criminal police) as the umbrella organisation for all criminal investigation agencies in Germany. The Kripo was merged with the Gestapo into the "Sicherheitspolizei" (SiPo: security police), under Heydrich's command. In September 1939, following the outbreak of World War II, Himmler formed the "SS-Reichssicherheitshauptamt" (RSHA: Reich Main Security Office) to bring the SiPo (which included the Gestapo and Kripo) and the SD together under one umbrella. He again placed Heydrich in command. Under Himmler's leadership, the SS developed its own military branch, the "SS-Verfügungstruppe" (SS-VT), which later evolved into the Waffen-SS. Nominally under the authority of Himmler, the Waffen-SS developed a fully militarised structure of command and operations. It grew from three regiments to over 38 divisions during World War II, serving alongside the "Heer" (army) but never being formally part of it.
In addition to his military ambitions, Himmler established the beginnings of a parallel economy under the umbrella of the SS. To this end, administrator Oswald Pohl set up the "Deutsche Wirtschaftsbetriebe" (German Economic Enterprise) in 1940. Under the auspices of the SS Economy and Administration Head Office, this holding company owned housing corporations, factories, and publishing houses. Pohl was unscrupulous and quickly exploited the companies for personal gain. In contrast, Himmler was honest in matters of money and business. In 1938, as part of his preparations for war, Hitler ended the German alliance with China, and entered into an agreement with the more modern Japan. That same year, Austria was unified with Nazi Germany in the Anschluss, and the Munich Agreement gave Nazi Germany control over the Sudetenland, part of Czechoslovakia. Hitler's primary motivations for war included obtaining additional "Lebensraum" ("living space") for the Germanic peoples, who were considered racially superior according to Nazi ideology. A second goal was the elimination of those considered racially inferior, particularly the Jews and Slavs, from territories controlled by the Reich. From 1933 to 1938, hundreds of thousands of Jews emigrated to the United States, Palestine, Great Britain, and other countries. Some converted to Christianity. Himmler believed that a major task of the SS should be "acting as the vanguard in overcoming Christianity and restoring a 'Germanic' way of living" as part of preparations for the coming conflict between "humans and subhumans". Himmler biographer Peter Longerich wrote that, while the Nazi movement as a whole launched itself against Jews and Communists, "by linking de-Christianisation with re-Germanization, Himmler had provided the SS with a goal and purpose all of its own". Himmler was vehemently opposed to Christian sexual morality and the "principle of Christian mercy", both of which he saw as dangerous obstacles to his planned battle with "subhumans". When Hitler and his army chiefs asked for a pretext for the invasion of Poland in 1939, Himmler, Heydrich, and Heinrich Müller masterminded and carried out a false flag project code-named Operation Himmler. German soldiers dressed in Polish uniforms undertook border skirmishes which deceptively suggested Polish aggression against Germany. The incidents were then used in Nazi propaganda to justify the invasion of Poland, the opening event of World War II. At the beginning of the war against Poland, Hitler authorised the killing of Polish civilians, including Jews and ethnic Poles. The "Einsatzgruppen" (SS task forces) had originally been formed by Heydrich to secure government papers and offices in areas taken over by Germany before World War II. Authorised by Hitler and under the direction of Himmler and Heydrich, the "Einsatzgruppen" units—now repurposed as death squads—followed the "Heer" (army) into Poland, and by the end of 1939 they had murdered some 65,000 intellectuals and other civilians. Militias and "Heer" units also took part in these killings. Under Himmler's orders via the RSHA, these squads were also tasked with rounding up Jews and others for placement in ghettos and concentration camps. Germany subsequently invaded Denmark and Norway, the Netherlands, and France, and began bombing Great Britain in preparation for Operation Sea Lion, the planned invasion of the United Kingdom.
On 21 June 1941, the day before the invasion of the Soviet Union, Himmler commissioned the preparation of the "Generalplan Ost" (General Plan for the East); the plan was finalised in July 1942. It called for the Baltic States, Poland, Western Ukraine, and Byelorussia to be conquered and resettled by ten million German citizens. The current residents—some 31 million people—would be expelled further east, starved, or used for forced labour. The plan would have extended the border of Germany one thousand kilometres (620 miles) to the east. Himmler expected that it would take twenty to thirty years to complete the plan, at a cost of 67 billion Reichsmarks. Himmler stated openly: "It is a question of existence, thus it will be a racial struggle of pitiless severity, in the course of which 20 to 30 million Slavs and Jews will perish through military actions and crises of food supply." Himmler declared that the war in the east was a pan-European crusade to defend the traditional values of old Europe from the "Godless Bolshevik hordes". Constantly struggling with the Wehrmacht for recruits, Himmler solved this problem through the creation of Waffen-SS units composed of Germanic folk groups taken from the Balkans and eastern Europe. Equally vital were recruits from among the peoples of northern and western Europe considered Germanic—from the Netherlands, Norway, Belgium, Denmark and Finland. Spain and Italy also provided men for Waffen-SS units. Among western countries, the number of volunteers varied from a high of 25,000 from the Netherlands to 300 each from Sweden and Switzerland. From the east, the highest number of men came from Lithuania (50,000) and the lowest from Bulgaria (600). After 1943 most men from the east were conscripts. The performance of the eastern "Waffen-SS" units was, as a whole, sub-standard. In late 1941, Hitler named Heydrich as Deputy Reich Protector of the newly established Protectorate of Bohemia and Moravia. Heydrich began to racially classify the Czechs, deporting many to concentration camps. Members of a swelling resistance were shot, earning Heydrich the nickname "the Butcher of Prague". This appointment strengthened the collaboration between Himmler and Heydrich, and Himmler was proud to have SS control over a state. Despite having direct access to Hitler, Heydrich's loyalty to Himmler remained firm. With Hitler's approval, Himmler re-established the "Einsatzgruppen" in the lead-up to the planned invasion of the Soviet Union. In March 1941, Hitler addressed his army leaders, detailing his intention to smash the Soviet Empire and destroy the Bolshevik intelligentsia and leadership. His special directive, the "Guidelines in Special Spheres re Directive No. 21 (Operation Barbarossa)", read: "In the operations area of the army, the "Reichsführer-SS" has been given special tasks on the orders of the "Führer", in order to prepare the political administration. These tasks arise from the forthcoming final struggle of two opposing political systems. Within the framework of these tasks, the "Reichsführer-SS" acts independently and on his own responsibility." Hitler thus intended to prevent internal friction like that occurring earlier in Poland in 1939, when several German Army generals had attempted to bring "Einsatzgruppen" leaders to trial for the murders they had committed. Following the army into the Soviet Union, the "Einsatzgruppen" rounded up and killed Jews and others deemed undesirable by the Nazi state. Hitler was sent frequent reports.
In addition, 2.8 million Soviet prisoners of war died of starvation, mistreatment or execution in just eight months of 1941–42. As many as 500,000 Soviet prisoners of war died or were executed in Nazi concentration camps over the course of the war; most of them were shot or gassed. By early 1941, following Himmler's orders, ten concentration camps had been constructed in which inmates were subjected to forced labour. Jews from all over Germany and the occupied territories were deported to the camps or confined to ghettos. As the Germans were pushed back from Moscow in December 1941, signalling that the expected quick defeat of the Soviet Union had failed to materialize, Hitler and other Nazi officials realised that mass deportations to the east would no longer be possible. As a result, instead of deportation, many Jews in Europe were destined for death. Nazi racial policies, including the notion that people who were racially inferior had no right to live, date back to the earliest days of the party; Hitler discusses this in "Mein Kampf". Around the time of the German declaration of war on the United States in December 1941, Hitler finally resolved that the Jews of Europe were to be "exterminated". Heydrich arranged a meeting, held on 20 January 1942 at Wannsee, a suburb of Berlin. Attended by top Nazi officials, it was used to outline the plans for the "final solution to the Jewish question". Heydrich detailed how those Jews able to work would be worked to death; those unable to work would be killed outright. Heydrich calculated the number of Jews to be killed at 11 million, and told the attendees that Hitler had placed Himmler in charge of the plan. In June 1942, Heydrich was assassinated in Prague in Operation Anthropoid, led by Jozef Gabčík and Jan Kubiš, members of Czechoslovakia's army-in-exile who had been trained by the British Special Operations Executive. During the two funeral services, Himmler—the chief mourner—took charge of Heydrich's two young sons, and he gave the eulogy in Berlin. On 9 June, after discussions with Himmler and Karl Hermann Frank, Hitler ordered brutal reprisals for Heydrich's death. Over 13,000 people were arrested, and the village of Lidice was razed to the ground; its male inhabitants and all adults in the village of Ležáky were murdered. At least 1,300 people were executed by firing squads. Himmler took over leadership of the RSHA and stepped up the pace of the killing of Jews in "Aktion Reinhard" (Operation Reinhard), named in Heydrich's honour. He ordered the "Aktion Reinhard" camps—three extermination camps—to be constructed at Bełżec, Sobibór, and Treblinka. Initially the victims were killed with gas vans or by firing squad, but these methods proved impracticable for an operation of this scale. Earlier, in August 1941, Himmler had attended the shooting of 100 Jews at Minsk. Nauseated and shaken by the experience, he was concerned about the impact such actions would have on the mental health of his SS men. He decided that alternate methods of killing should be found. On his orders, by early 1942 the camp at Auschwitz had been greatly expanded, including the addition of gas chambers, where victims were killed using the pesticide Zyklon B. Himmler visited the camp in person on 17 and 18 July 1942. He was given a demonstration of a mass killing using the gas chamber in Bunker 2 and toured the building site of the new IG Farben plant being constructed at the nearby town of Monowitz.
By the end of the war, at least 5.5 million Jews had been killed by the Nazi regime; most estimates range closer to six million. Himmler visited the camp at Sobibór in early 1943, by which time 250,000 people had been killed at that location alone. After witnessing a gassing, he gave 28 people promotions and ordered the operation of the camp to be wound down. In a revolt that October, prisoners killed most of the guards and SS personnel, and 300 prisoners escaped. Two hundred managed to get away; some joined partisan units operating in the area. The remainder were killed. The camp was dismantled by December 1943. The Nazis also targeted Romani (Gypsies) as "asocial" and "criminals". By 1935, they were confined in special camps away from ethnic Germans. In 1938, Himmler issued an order in which he said that the "Gypsy question" would be determined by "race". Himmler believed that the Romani were originally Aryan but had become a mixed race; only the "racially pure" were to be allowed to live. In 1939, Himmler ordered thousands of Gypsies to be sent to the Dachau concentration camp, and in 1942 he ordered all Romani to be sent to Auschwitz concentration camp. Himmler was one of the main architects of the Holocaust, using his deep belief in the racist Nazi ideology to justify the murder of millions of victims. Longerich surmises that Hitler, Himmler, and Heydrich designed the Holocaust during a period of intensive meetings and exchanges in April–May 1942. The Nazis planned to kill Polish intellectuals and restrict non-Germans in the General Government and conquered territories to a fourth-grade education. They further wanted to breed a master race of racially pure Nordic Aryans in Germany. As an agronomist and farmer, Himmler was acquainted with the principles of selective breeding, which he proposed to apply to humans. He believed that he could engineer the German populace, for example, through eugenics, to be Nordic in appearance within several decades of the end of the war. On 4 October 1943, during a secret meeting with top SS officials in the city of Poznań (Posen), and on 6 October 1943, in a speech to the party elite—the Gau and Reich leaders—Himmler referred explicitly to the "extermination" (German: "Ausrottung") of the Jewish people. Because the Allies had indicated that they were going to pursue criminal charges for German war crimes, Hitler tried to gain the loyalty and silence of his subordinates by making them all parties to the ongoing genocide. Hitler therefore authorised Himmler's speeches to ensure that all party leaders were complicit in the crimes, and could not later deny knowledge of the killings. As Reich Commissioner for the Consolidation of German Nationhood (RKFDV) with the incorporated VoMi, Himmler was deeply involved in the Germanization program for the East, particularly Poland. As laid out in the General Plan for the East, the aim was to enslave, expel or exterminate the native population and to make "Lebensraum" ("living space") for "Volksdeutsche" (ethnic Germans). He continued his plans to colonise the east, even when many Germans were reluctant to relocate there, and despite negative effects on the war effort. Himmler's racial groupings began with the "Volksliste", the classification of people deemed of German blood.
These included Germans who had collaborated with Germany before the war, as well as those who considered themselves German but had been neutral; those who were partially "Polonized" but "Germanizable"; and Germans who were of Polish nationality. Himmler ordered that those who refused to be classified as ethnic Germans should be deported to concentration camps, have their children taken away, or be assigned to forced labour. Himmler's belief that "it is in the nature of German blood to resist" led to his conclusion that Balts or Slavs who resisted Germanization were racially superior to more compliant ones. He declared that no drop of German blood would be lost or left behind to mingle with an "alien race". The plan also included the kidnapping of Eastern European children by Nazi Germany. Himmler urged that the "racially valuable" children be removed from all contact with Poles and raised as Germans, with German names. Himmler declared, "We have faith above all in this our own blood, which has flowed into a foreign nationality through the vicissitudes of German history. We are convinced that our own philosophy and ideals will reverberate in the spirit of these children who racially belong to us." The children were to be adopted by German families. Children who passed muster at first but were later rejected were taken to the Kinder KZ in the Łódź Ghetto, where most of them eventually died. By January 1943, Himmler reported that 629,000 ethnic Germans had been resettled; however, most resettled Germans did not live in the envisioned small farms, but in temporary camps or quarters in towns. Half a million residents of the annexed Polish territories, as well as residents of Slovenia, Alsace, Lorraine, and Luxembourg, were deported to the General Government or sent to Germany as slave labour. Himmler instructed that the German nation should view all foreign workers brought to Germany as a danger to their German blood. In accordance with German racial laws, sexual relations between Germans and foreigners were forbidden as "Rassenschande" (race defilement). On 20 July 1944, a group of German army officers led by Claus von Stauffenberg and including some of the highest-ranked members of the German armed forces attempted to assassinate Hitler, but failed to do so. The next day, Himmler formed a special commission that arrested over 5,000 suspected and known opponents of the regime. Hitler ordered brutal reprisals that resulted in the execution of more than 4,900 people. Though Himmler was embarrassed by his failure to uncover the plot, it led to an increase in his powers and authority. General Friedrich Fromm, commander-in-chief of the Reserve (or Replacement) Army ("Ersatzheer") and Stauffenberg's immediate superior, was one of those implicated in the conspiracy. Hitler removed Fromm from his post and named Himmler as his successor. Since the Reserve Army consisted of two million men, Himmler hoped to draw on these reserves to fill posts within the Waffen-SS. He appointed Hans Jüttner, director of the SS Leadership Main Office, as his deputy, and began to fill top Reserve Army posts with SS men. By November 1944, Himmler had merged the army officer recruitment department with that of the Waffen-SS and had successfully lobbied for an increase in the quotas for recruits to the SS. By this time, Hitler had appointed Himmler as Minister of the Interior and Plenipotentiary General for Administration ("Generalbevollmächtigter für die Verwaltung").
In August 1944, Hitler authorised him to restructure the organisation and administration of the Waffen-SS, the army, and the police services. As head of the Reserve Army, Himmler was now responsible for prisoners of war. He was also in charge of the Wehrmacht penal system, and controlled the development of Wehrmacht armaments until January 1945. On 6 June 1944 the Western Allied armies landed in northern France during Operation Overlord. In response, Army Group Upper Rhine ("Heeresgruppe Oberrhein") was formed to engage the advancing US 7th Army (under the command of General Alexander Patch) and French 1st Army (led by General Jean de Lattre de Tassigny) in the Alsace region along the west bank of the Rhine. In late 1944, Hitler appointed Himmler commander-in-chief of Army Group Upper Rhine. On 26 September 1944, Hitler ordered Himmler to create special army units, the "Volkssturm" ("People's Storm" or "People's Army"). All males aged sixteen to sixty were eligible for conscription into this militia, over the protests of Armaments Minister Albert Speer, who noted that irreplaceable skilled workers were being removed from armaments production. Hitler confidently believed six million men could be raised, and that the new units would "initiate a people's war against the invader". These hopes were wildly optimistic. In October 1944, children as young as fourteen were being enlisted. Because of severe shortages of weapons and equipment and lack of training, members of the "Volkssturm" were poorly prepared for combat, and about 175,000 of them lost their lives in the final months of the war. On 1 January 1945, Hitler and his generals launched Operation North Wind. The goal was to break through the lines of the US 7th Army and French 1st Army to support the southern thrust in the Ardennes offensive, the final major German offensive of the war. After limited initial gains by the Germans, the Americans halted the offensive. By 25 January, Operation North Wind had officially ended. On 25 January 1945, despite Himmler's lack of military experience, Hitler appointed him commander of the hastily formed Army Group Vistula ("Heeresgruppe Weichsel") to halt the Soviet Red Army's Vistula–Oder Offensive into Pomerania. Himmler established his command centre at Schneidemühl, using his special train, "Sonderzug Steiermark", as his headquarters. The train had only one telephone line, inadequate maps, and no signal detachment or radios with which to establish communication and relay military orders. Himmler seldom left the train, worked only about four hours per day, and insisted on a daily massage before commencing work and a lengthy nap after lunch. General Heinz Guderian talked to Himmler on 9 February and demanded that Operation Solstice, an attack from Pomerania against the northern flank of Marshal Georgy Zhukov's 1st Belarusian Front, be in progress by the 16th. Himmler argued that he was not ready to commit himself to a specific date. Given Himmler's lack of qualifications as an army group commander, Guderian convinced himself that Himmler was trying to conceal his incompetence. On 13 February Guderian met Hitler and demanded that General Walther Wenck be given a special mandate to command the offensive by Army Group Vistula. Hitler sent Wenck with a "special mandate", but without specifying Wenck's authority. The offensive was launched on 16 February 1945, but soon became bogged down in rain and mud, facing minefields and strong antitank defenses.
That night Wenck was severely injured in a car accident; it is doubtful that he could have salvaged the operation, despite what Guderian later claimed. On the 18th, Himmler halted the offensive with a "directive for regrouping". Hitler officially ended Operation Solstice on 21 February and ordered Himmler to transfer a corps headquarters and three divisions to Army Group Center. Himmler was unable to devise any viable plans for completion of his military objectives. Under pressure from Hitler over the worsening military situation, Himmler became anxious and unable to give him coherent reports. When the counter-attack failed to stop the Soviet advance, Hitler held Himmler personally liable and accused him of not following orders. Himmler's military command ended on 20 March, when Hitler replaced him with General Gotthard Heinrici as Commander-in-Chief of Army Group Vistula. By this time Himmler, who had been under the care of his doctor since 18 February, had fled to a sanatorium at Hohenlychen. Hitler sent Guderian on a forced medical leave of absence and reassigned his post as chief of staff to Hans Krebs on 29 March. Himmler's failure and Hitler's response marked a serious deterioration in the relationship between the two men. By that time, the inner circle of people whom Hitler trusted was rapidly shrinking. In early 1945, the German war effort was on the verge of collapse and Himmler's relationship with Hitler had deteriorated. Himmler considered independently negotiating a peace settlement. His masseur, Felix Kersten, who had moved to Sweden, acted as an intermediary in negotiations with Count Folke Bernadotte, head of the Swedish Red Cross. Letters were exchanged between the two men, and direct meetings were arranged by Walter Schellenberg of the RSHA. Himmler and Hitler met for the last time on 20 April 1945—Hitler's birthday—in Berlin, and Himmler swore unswerving loyalty to Hitler. At a military briefing on that day, Hitler stated that he would not leave Berlin, in spite of Soviet advances. Along with Göring, Himmler quickly left the city after the briefing. On 21 April, Himmler met with Norbert Masur, a Swedish representative of the World Jewish Congress, to discuss the release of Jewish concentration camp inmates. As a result of these negotiations, about 20,000 people were released in the White Buses operation. Himmler falsely claimed in the meeting that the crematoria at camps had been built to deal with the bodies of prisoners who had died in a typhus epidemic. He also claimed very high survival rates for the camps at Auschwitz and Bergen-Belsen, even as these sites were liberated and it became obvious that his figures were false. On 23 April, Himmler met directly with Bernadotte at the Swedish consulate in Lübeck. Representing himself as the provisional leader of Germany, he claimed that Hitler would be dead within the next few days. Hoping that the British and Americans would fight the Soviets alongside what remained of the Wehrmacht, Himmler asked Bernadotte to inform General Dwight Eisenhower that Germany wished to surrender to the West. Bernadotte asked Himmler to put his proposal in writing, and Himmler obliged. Meanwhile, a few hours earlier, Göring had sent a telegram asking Hitler for permission to assume leadership of the "Reich" in his capacity as Hitler's designated deputy—an act that Hitler, under the prodding of Martin Bormann, interpreted as a demand to step down or face a coup.
On 27 April, Himmler's SS representative at Hitler's HQ in Berlin, Hermann Fegelein, was caught in civilian clothes preparing to desert; he was arrested and brought back to the "Führerbunker". On the evening of 28 April, the BBC broadcast a Reuters news report about Himmler's attempted negotiations with the western Allies. Hitler had long considered Himmler to be second only to Joseph Goebbels in loyalty; he called Himmler "the loyal Heinrich" ("der treue Heinrich"). Hitler flew into a rage at this apparent betrayal, and told those still with him in the bunker complex that Himmler's secret negotiations were the worst treachery he had ever known. Hitler ordered Himmler's arrest, and Fegelein was court-martialed and shot. By this time, the Soviets had advanced to the Potsdamer Platz, only a short distance from the Reich Chancellery, and were preparing to storm the Chancellery. This report, combined with Himmler's treachery, prompted Hitler to write his last will and testament. In the testament, completed on 29 April—one day prior to his suicide—Hitler declared both Himmler and Göring to be traitors. He stripped Himmler of all of his party and state offices and expelled him from the Nazi Party. Hitler named Grand Admiral Karl Dönitz as his successor. Himmler met Dönitz in Flensburg and offered himself as second-in-command. He maintained that he was entitled to a position in Dönitz's interim government as "Reichsführer-SS", believing the SS would be in a good position to restore and maintain order after the war. Dönitz repeatedly rejected Himmler's overtures and initiated peace negotiations with the Allies. Dönitz wrote a letter on 6 May—two days before the German Instrument of Surrender—formally dismissing Himmler from all his posts. Rejected by his former comrades and hunted by the Allies, Himmler attempted to go into hiding. He had not made extensive preparations for this, but he carried a forged paybook under the name of Sergeant Heinrich Hitzinger. With a small band of companions, he headed south on 11 May to Friedrichskoog, without a final destination in mind. They continued on to Neuhaus, where the group split up. On 21 May, Himmler and two aides were stopped and detained at a checkpoint set up by former Soviet POWs. Over the following two days, he was moved around to several camps and was brought to the British 31st Civilian Interrogation Camp near Lüneburg on 23 May. The officials noticed that Himmler's identity papers bore a stamp which British military intelligence had seen being used by fleeing members of the SS. The duty officer, Captain Thomas Selvester, began a routine interrogation. Himmler admitted who he was, and Selvester had the prisoner searched. Himmler was taken to the headquarters of the Second British Army in Lüneburg, where a doctor conducted a medical exam on him. The doctor attempted to examine the inside of Himmler's mouth, but the prisoner was reluctant to open it and jerked his head away. Himmler then bit into a hidden potassium cyanide pill and collapsed onto the floor. He was dead within 15 minutes. Shortly afterward, Himmler's body was buried in an unmarked grave near Lüneburg. The grave's location remains unknown. Himmler was interested in mysticism and the occult from an early age. He tied this interest into his racist philosophy, looking for proof of Aryan and Nordic racial superiority from ancient times. He promoted a cult of ancestor worship, particularly among members of the SS, as a way to keep the race pure and provide immortality to the nation.
Viewing the SS as an "order" along the lines of the Teutonic Knights, he had them take over the Church of the Teutonic Order in Vienna in 1939. He began the process of replacing Christianity with a new moral code that rejected humanitarianism and challenged the Christian concept of marriage. The Ahnenerbe, a research society founded by Himmler in 1935, searched the globe for proof of the superiority and ancient origins of the Germanic race. All regalia and uniforms of Nazi Germany, particularly those of the SS, used symbolism in their designs. The stylised lightning bolt logo of the SS was chosen in 1932. The logo is a pair of runes from a set of 18 Armanen runes created by Guido von List in 1906. The ancient Sowilō rune originally symbolised the sun, but was renamed "Sig" (victory) in List's iconography. Himmler modified a variety of existing customs to emphasise the elitism and central role of the SS; an SS naming ceremony was to replace baptism, marriage ceremonies were to be altered, a separate SS funeral ceremony was to be held in addition to Christian ceremonies, and SS-centric celebrations of the summer and winter solstices were instituted. The "Totenkopf" (death's head) symbol, used by German military units for hundreds of years, had been chosen for the SS by Julius Schreck. Himmler placed particular importance on the death's-head rings; they were never to be sold, and were to be returned to him upon the death of the owner. He interpreted the death's-head symbol to mean solidarity with the cause and a commitment unto death. As second in command of the SS and then Reichsführer-SS, Himmler was in regular contact with Hitler to arrange for SS men as bodyguards; Himmler was not involved with Nazi Party policy-making decisions in the years leading up to the seizure of power. From the late 1930s, the SS was independent of the control of other state agencies or government departments, and Himmler reported only to Hitler. Hitler's leadership style was to give contradictory orders to subordinates and to place them into positions where their duties and responsibilities overlapped with those of others. In this way, Hitler fostered distrust, competition, and infighting among his subordinates to consolidate and maximise his own power. His cabinet never met after 1938, and he discouraged his ministers from meeting independently. Hitler typically did not issue written orders, but gave them orally at meetings or in phone conversations; he also had Bormann convey orders. Bormann used his position as Hitler's secretary to control the flow of information and access to Hitler. Hitler promoted and practised the "Führerprinzip". The principle required absolute obedience of all subordinates to their superiors; thus Hitler viewed the government structure as a pyramid, with himself—the infallible leader—at the apex. Accordingly, Himmler placed himself in a position of subservience to Hitler, and was unconditionally obedient to him. However, he—like other top Nazi officials—had aspirations to one day succeed Hitler as leader of the Reich. Himmler considered Speer to be an especially dangerous rival, both in the Reich administration and as a potential successor to Hitler. Speer refused to accept Himmler's offer of the high rank of SS-Oberst-Gruppenführer, as he felt that doing so would put him in Himmler's debt and obligate him to allow Himmler a say in armaments production. Hitler called Himmler's mystical and pseudoreligious interests "nonsense".
Himmler was not a member of Hitler's inner circle; the two men were not very close, and rarely saw each other socially. Himmler socialised almost exclusively with other members of the SS. His unconditional loyalty and efforts to please Hitler earned him the nickname of "der treue Heinrich" ("the faithful Heinrich"). In the last days of the war, when it became clear that Hitler planned to die in Berlin, Himmler left his long-time superior to try to save himself. Himmler met his future wife, Margarete Boden, in 1927. Seven years his senior, she was a nurse who shared his interest in herbal medicine and homoeopathy, and was part owner of a small private clinic. They were married in July 1928, and their only child, Gudrun, was born on 8 August 1929. The couple were also foster parents to a boy named Gerhard von Ahe, son of an SS officer who had died before the war. Margarete sold her share of the clinic and used the proceeds to buy a plot of land in Waldtrudering, near Munich, where they erected a prefabricated house. Himmler was constantly away on party business, so his wife took charge of their efforts—mostly unsuccessful—to raise livestock for sale. They had a dog, Töhle. After the Nazis came to power the family moved first to Möhlstrasse in Munich, and in 1934 to Lake Tegern, where they bought a house. Himmler also later obtained a large house in the Berlin suburb of Dahlem, free of charge, as an official residence. The couple saw little of each other as Himmler became totally absorbed by work. The relationship was strained. The couple did unite for social functions; they were frequent guests at the Heydrich home. Margarete saw it as her duty to invite the wives of the senior SS leaders over for afternoon coffee and tea on Wednesday afternoons. Hedwig Potthast, Himmler's young secretary starting in 1936, became his mistress by 1939. She left her job in 1941. He arranged accommodation for her, first in Mecklenburg and later at Berchtesgaden. He fathered two children with her: a son, Helge (born 15 February 1942) and a daughter, Nanette Dorothea (born 20 July 1944, Berchtesgaden). Margarete, by then living in Gmund with her daughter, learned of the relationship sometime in 1941; she and Himmler were already separated, and she decided to tolerate the relationship for the sake of her daughter. Working as a nurse for the German Red Cross during the war, Margarete was appointed supervisor in Military District III (Berlin-Brandenburg). Himmler was close to his first daughter, Gudrun, whom he nicknamed "Püppi" ("dolly"); he phoned her every few days and visited as often as he could. Margarete's diaries reveal that Gerhard had to leave the National Political Educational Institute in Berlin because of poor results. At the age of 16 he joined the SS in Brno and shortly afterwards went "into battle." He was captured by the Russians but later returned to Germany. Hedwig and Margarete both remained loyal to Himmler. Writing to Gebhard in February 1945, Margarete said, "How wonderful that he has been called to great tasks and is equal to them. The whole of Germany is looking to him." Hedwig expressed similar sentiments in a letter to Himmler in January. Margarete and Gudrun left Gmund as Allied troops advanced into the area. They were arrested by American troops in Bolzano, Italy, and held in various internment camps in Italy, France, and Germany. They were brought to Nuremberg to testify at the trials and were released in November 1946. 
Gudrun emerged from the experience embittered by her alleged mistreatment and remained devoted to her father's memory. She later worked for the West German spy agency "Bundesnachrichtendienst" (BND) from 1961 to 1963. Peter Longerich observes that Himmler's ability to consolidate his ever-increasing powers and responsibilities into a coherent system under the auspices of the SS led him to become one of the most powerful men in the Third Reich. Historian Wolfgang Sauer says that "although he was pedantic, dogmatic, and dull, Himmler emerged under Hitler as second in actual power. His strength lay in a combination of unusual shrewdness, burning ambition, and servile loyalty to Hitler." In 2008, the German news magazine "Der Spiegel" described Himmler as one of the most brutal mass murderers in history, and the architect of the Holocaust. Historian John Toland relates a story by Günter Syrup, a subordinate of Heydrich. Heydrich showed him a picture of Himmler and said, "The top half is the teacher but the lower half is the sadist." Historian Adrian Weale comments that Himmler and the SS followed Hitler's policies without question or ethical considerations. Himmler accepted Hitler and Nazi ideology, and saw the SS as a chivalric Teutonic order of new Germans. Himmler adopted the doctrine of "Auftragstaktik" ("mission command"), whereby orders were given as broad directives, with authority delegated downward to the appropriate level to carry them out in a timely and efficient manner. Weale states that the SS ideology gave the men a doctrinal framework, and the mission command tactics allowed the junior officers leeway to act on their own initiative to obtain the desired results.
https://en.wikipedia.org/wiki?curid=13436
Hypertext Transfer Protocol The Hypertext Transfer Protocol (HTTP) is an application protocol for distributed, collaborative, hypermedia information systems. HTTP is the foundation of data communication for the World Wide Web, where hypertext documents include hyperlinks to other resources that the user can easily access, for example by a mouse click or by tapping the screen in a web browser. Development of HTTP was initiated by Tim Berners-Lee at CERN in 1989. Development of early HTTP Requests for Comments (RFCs) was a coordinated effort by the Internet Engineering Task Force (IETF) and the World Wide Web Consortium (W3C), with work later moving to the IETF. HTTP/1.1 was first documented in RFC 2068 in 1997. That specification was obsoleted by RFC 2616 in 1999, which was likewise replaced by the RFC 7230 family of RFCs in 2014. HTTP/2 is a more efficient expression of HTTP's semantics "on the wire", and was published in 2015; it is now supported by virtually all web browsers and major web servers over Transport Layer Security (TLS) using an Application-Layer Protocol Negotiation (ALPN) extension where TLS 1.2 or newer is required. HTTP/3, the proposed successor to HTTP/2, is already in use on the web; it uses UDP instead of TCP for the underlying transport protocol. Like HTTP/2, it does not obsolete previous major versions of the protocol. Support for HTTP/3 was added to Cloudflare and Google Chrome in September 2019, and can be enabled in the stable versions of Chrome and Firefox. HTTP functions as a request–response protocol in the client–server computing model. A web browser, for example, may be the "client" and an application running on a computer hosting a website may be the "server". The client submits an HTTP "request" message to the server. The server, which provides "resources" such as HTML files and other content, or performs other functions on behalf of the client, returns a "response" message to the client. The response contains completion status information about the request and may also contain requested content in its message body. A web browser is an example of a "user agent" (UA). Other types of user agent include the indexing software used by search providers (web crawlers), voice browsers, mobile apps, and other software that accesses, consumes, or displays web content. HTTP is designed to permit intermediate network elements to improve or enable communications between clients and servers. High-traffic websites often benefit from web cache servers that deliver content on behalf of upstream servers to improve response time. Web browsers cache previously accessed web resources and reuse them, when possible, to reduce network traffic. HTTP proxy servers at private network boundaries can facilitate communication for clients without a globally routable address, by relaying messages with external servers. HTTP is an application layer protocol designed within the framework of the Internet protocol suite. Its definition presumes an underlying and reliable transport layer protocol, and Transmission Control Protocol (TCP) is commonly used. However, HTTP can be adapted to use unreliable protocols such as the User Datagram Protocol (UDP), for example in HTTPU and Simple Service Discovery Protocol (SSDP). HTTP resources are identified and located on the network by Uniform Resource Locators (URLs), using the Uniform Resource Identifier (URI) schemes "http" and "https". 
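To make the request–response exchange described above concrete, the following is a minimal Python sketch using the standard library's http.client module; the host name is the reserved documentation domain example.com, and the exact response naturally depends on the server.

    # Minimal sketch of one HTTP request-response exchange, using
    # Python's standard http.client; www.example.com is illustrative.
    import http.client

    conn = http.client.HTTPConnection("www.example.com", 80)
    conn.request("GET", "/")           # the client submits a request message
    resp = conn.getresponse()          # the server returns a response message
    print(resp.status, resp.reason)    # completion status, e.g. "200 OK"
    body = resp.read()                 # requested content, if any, from the message body
    conn.close()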
A URL including all optional components has the form "http://user:password@host:port/path?query#fragment". As defined in RFC 3986, URIs are encoded as hyperlinks in HTML documents, so as to form interlinked hypertext documents. HTTP/1.1 is a revision of the original HTTP (HTTP/1.0). In HTTP/1.0 a separate connection to the same server is made for every resource request. HTTP/1.1 can reuse a connection multiple times to download images, scripts, stylesheets, "etc." after the page has been delivered. HTTP/1.1 communications therefore experience less latency, as the establishment of TCP connections presents considerable overhead. The term hypertext was coined by Ted Nelson in 1965 in the Xanadu Project, which was in turn inspired by Vannevar Bush's 1930s vision of the microfilm-based information retrieval and management "memex" system described in his 1945 essay "As We May Think". Tim Berners-Lee and his team at CERN are credited with inventing the original HTTP, along with HTML and the associated technology for a web server and a text-based web browser. Berners-Lee first proposed the "WorldWideWeb" project in 1989—now known as the World Wide Web. The first version of the protocol had only one method, namely GET, which would request a page from a server. The response from the server was always an HTML page. The first documented version of HTTP was HTTP/0.9 (1991). Dave Raggett led the HTTP Working Group (HTTP WG) in 1995 and wanted to expand the protocol with extended operations, extended negotiation, and richer meta-information, tied to a security protocol and made more efficient by additional methods and header fields. RFC 1945 officially introduced and recognized HTTP/1.0 in 1996. The HTTP WG planned to publish new standards in December 1995, and support for pre-standard HTTP/1.1 based on the then-developing RFC 2068 (called HTTP-NG) was rapidly adopted by the major browser developers in early 1996. End-user adoption of the new browsers was rapid. In March 1996, one web hosting company reported that over 40% of browsers in use on the Internet were HTTP/1.1 compliant. That same web hosting company reported that by June 1996, 65% of all browsers accessing their servers were HTTP/1.1 compliant. The HTTP/1.1 standard as defined in RFC 2068 was officially released in January 1997. Improvements and updates to the HTTP/1.1 standard were released under RFC 2616 in June 1999. In 2007, the HTTPbis Working Group was formed, in part, to revise and clarify the HTTP/1.1 specification. In June 2014, the WG released an updated six-part specification obsoleting RFC 2616: RFC 7230 through RFC 7235. HTTP/2 was published as RFC 7540 in May 2015. An HTTP session is a sequence of network request-response transactions. An HTTP client initiates a request by establishing a Transmission Control Protocol (TCP) connection to a particular port on a server (typically port 80, occasionally port 8080; see List of TCP and UDP port numbers). An HTTP server listening on that port waits for a client's request message. Upon receiving the request, the server sends back a status line, such as "HTTP/1.1 200 OK", and a message of its own. The body of this message is typically the requested resource, although an error message or other information may also be returned. In HTTP/0.9 and 1.0, the connection is closed after a single request/response pair. In HTTP/1.1 a keep-alive mechanism was introduced, whereby a connection could be reused for more than one request. Such "persistent connections" reduce request latency perceptibly, because the client does not need to renegotiate the TCP three-way handshake after the first request has been sent. 
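As an illustration of the persistent connections just described, here is a minimal Python sketch, assuming a server that honours HTTP/1.1 keep-alive; the host and paths are illustrative. Both requests then travel over the same TCP connection, so the three-way handshake happens only once.

    # Sketch of HTTP/1.1 connection reuse: two requests, one TCP
    # connection. http.client keeps the connection open as long as
    # the server allows it and each response body is fully read.
    import http.client

    conn = http.client.HTTPConnection("www.example.com")
    for path in ("/", "/style.css"):       # the page, then a subresource
        conn.request("GET", path)
        resp = conn.getresponse()
        resp.read()                        # drain the body before reusing the connection
        print(path, resp.status, resp.reason)
    conn.close()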
Another positive side effect is that, in general, the connection becomes faster with time due to TCP's slow-start mechanism. Version 1.1 of the protocol also made bandwidth optimization improvements to HTTP/1.0. For example, HTTP/1.1 introduced chunked transfer encoding to allow content on persistent connections to be streamed rather than buffered. HTTP pipelining further reduces lag time, allowing clients to send multiple requests before waiting for each response. Another addition to the protocol was byte serving, where a server transmits just the portion of a resource explicitly requested by a client. HTTP is a stateless protocol. A stateless protocol does not require the HTTP server to retain information or status about each user for the duration of multiple requests. However, some web applications implement states or server side sessions using, for instance, HTTP cookies or hidden variables within web forms. HTTP provides multiple authentication schemes such as basic access authentication and digest access authentication which operate via a challenge-response mechanism whereby the server identifies and issues a challenge before serving the requested content. HTTP provides a general framework for access control and authentication, via an extensible set of challenge-response authentication schemes, which can be used by a server to challenge a client request and by a client to provide authentication information. The HTTP Authentication specification also provides an arbitrary, implementation-specific construct for further dividing resources common to a given root URI. The realm value string, if present, is combined with the canonical root URI to form the protection space component of the challenge. This in effect allows the server to define separate authentication scopes under one root URI. The client sends requests to the server and the server sends responses. The request message consists of the following: a request line, zero or more request header fields, an empty line, and an optional message body. The request line and other header fields must each end with <CR><LF> (that is, a carriage return character followed by a line feed character). The empty line must consist of only <CR><LF> and no other whitespace. In the HTTP/1.1 protocol, all header fields except "Host" are optional. A request line containing only the path name is accepted by servers to maintain compatibility with HTTP clients before the HTTP/1.0 specification in RFC 1945. HTTP defines methods (sometimes referred to as "verbs", but nowhere in the specification does it mention "verb", nor is OPTIONS or HEAD a verb) to indicate the desired action to be performed on the identified resource. What this resource represents, whether pre-existing data or data that is generated dynamically, depends on the implementation of the server. Often, the resource corresponds to a file or the output of an executable residing on the server. The HTTP/1.0 specification defined the GET, HEAD and POST methods and the HTTP/1.1 specification added five new methods: OPTIONS, PUT, DELETE, TRACE and CONNECT. By being specified in these documents, their semantics are well-known and can be depended on. Any client can use any method and the server can be configured to support any combination of methods. If a method is unknown to an intermediate, it will be treated as an unsafe and non-idempotent method. There is no limit to the number of methods that can be defined, and this allows for future methods to be specified without breaking existing infrastructure. For example, WebDAV defined seven new methods and RFC 5789 specified the PATCH method. Method names are case sensitive. 
This is in contrast to HTTP header field names which are case-insensitive. All general-purpose HTTP servers are required to implement at least the GET and HEAD methods, and all other methods are considered optional by the specification. Some of the methods (for example, GET, HEAD, OPTIONS and TRACE) are, by convention, defined as "safe", which means they are intended only for information retrieval and should not change the state of the server. In other words, they should not have side effects, beyond relatively harmless effects such as logging, web caching, the serving of banner advertisements or incrementing a web counter. Making arbitrary GET requests without regard to the context of the application's state should therefore be considered safe. However, this is not mandated by the standard, and it is explicitly acknowledged that it cannot be guaranteed. By contrast, methods such as POST, PUT, DELETE and PATCH are intended for actions that may cause side effects either on the server, or external side effects such as financial transactions or transmission of email. Such methods are therefore not usually used by conforming web robots or web crawlers; some that do not conform tend to make requests without regard to context or consequences. Despite the prescribed safety of "GET" requests, in practice their handling by the server is not technically limited in any way. Therefore, careless or deliberate programming can cause non-trivial changes on the server. This is discouraged, because it can cause problems for web caching, search engines and other automated agents, which can make unintended changes on the server. For example, a website might allow deletion of a resource through a URL such as "http://example.com/article/1234/delete", which, if arbitrarily fetched, even using "GET", would simply delete the article. One example of this occurring in practice was during the short-lived Google Web Accelerator beta, which prefetched arbitrary URLs on the page a user was viewing, causing records to be automatically altered or deleted "en masse". The beta was suspended only weeks after its first release, following widespread criticism. Methods PUT and DELETE are defined to be idempotent, meaning that multiple identical requests should have the same effect as a single request. Methods GET, HEAD, OPTIONS and TRACE, being prescribed as safe, should also be idempotent, as HTTP is a stateless protocol. In contrast, the POST method is not necessarily idempotent, and therefore sending an identical POST request multiple times may further affect state or cause further side effects (such as financial transactions). In some cases this may be desirable, but in other cases this could be due to an accident, such as when a user does not realize that their action will result in sending another request, or they did not receive adequate feedback that their first request was successful. While web browsers may show alert dialog boxes to warn users in some cases where reloading a page may re-submit a POST request, it is generally up to the web application to handle cases where a POST request should not be submitted more than once. Note that whether a method is idempotent is not enforced by the protocol or web server. It is perfectly possible to write a web application in which (for example) a database insert or other non-idempotent action is triggered by a GET or other request. 
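These safety and idempotency conventions are what let a client decide mechanically whether a failed request may be resent. A minimal Python sketch, assuming only the method classifications given in the text (the function name is illustrative):

    # Safe methods (read-only by convention) and idempotent methods
    # (repeating them should equal sending them once), per the text.
    SAFE = {"GET", "HEAD", "OPTIONS", "TRACE"}
    IDEMPOTENT = SAFE | {"PUT", "DELETE"}

    def may_auto_retry(method: str) -> bool:
        # POST is excluded: a duplicate POST may, for example,
        # repeat a financial transaction or send a second email.
        return method.upper() in IDEMPOTENT

    print(may_auto_retry("PUT"))    # True
    print(may_auto_retry("POST"))   # False

A user agent applying such logic is, of course, trusting servers to implement the methods as prescribed.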
Ignoring this recommendation, however, may result in undesirable consequences if a user agent assumes that repeating the same request is safe when it is not. The TRACE method can be used as part of a class of attacks known as cross-site tracing; for that reason, common security advice is for it to be disabled in the server configuration. Microsoft IIS supports a proprietary "TRACK" method, which behaves similarly, and which is likewise recommended to be disabled. The response message consists of the following: a status line, zero or more response header fields, an empty line, and an optional message body. The status line and other header fields must all end with <CR><LF>. The empty line must consist of only <CR><LF> and no other whitespace. This strict requirement for <CR><LF> is relaxed somewhat within message bodies for consistent use of other system line breaks such as <CR> or <LF> alone. In HTTP/1.0 and since, the first line of the HTTP response is called the "status line" and includes a numeric "status code" (such as "404") and a textual "reason phrase" (such as "Not Found"). The way the user agent handles the response depends primarily on the code, and secondarily on the other response header fields. Custom status codes can be used, since if the user agent encounters a code it does not recognize, it can use the first digit of the code to determine the general class of the response. The standard "reason phrases" are only recommendations, and can be replaced with "local equivalents" at the web developer's discretion. If the status code indicated a problem, the user agent might display the "reason phrase" to the user to provide further information about the nature of the problem. The standard also allows the user agent to attempt to interpret the "reason phrase", though this might be unwise since the standard explicitly specifies that status codes are machine-readable and "reason phrases" are human-readable. HTTP status codes are primarily divided into five groups: informational (1xx), successful (2xx), redirection (3xx), client error (4xx), and server error (5xx). The most popular way of establishing an encrypted HTTP connection is HTTPS. Two other methods for establishing an encrypted HTTP connection also exist: Secure Hypertext Transfer Protocol, and using the HTTP/1.1 Upgrade header to specify an upgrade to TLS. Browser support for these two is, however, nearly non-existent. Below is a sample conversation between an HTTP client and an HTTP server running on www.example.com, port 80. The client request, consisting in this case of the request line and only one header field, is followed by a blank line, so that the request ends with a double newline, each in the form of a carriage return followed by a line feed:

    GET / HTTP/1.1
    Host: www.example.com

The "Host" field distinguishes between various DNS names sharing a single IP address, allowing name-based virtual hosting. While optional in HTTP/1.0, it is mandatory in HTTP/1.1. (The "/" means /index.html if there is one.) The server replies with a status line followed by its response header fields:

    HTTP/1.1 200 OK
    Date: Mon, 23 May 2005 22:38:34 GMT
    Content-Type: text/html; charset=UTF-8
    Content-Length: 155
    Last-Modified: Wed, 08 Jan 2003 23:11:55 GMT
    Server: Apache/1.3.3.7 (Unix) (Red-Hat/Linux)
    ETag: "3f80f-1b6-3e1cb03b"
    Accept-Ranges: bytes
    Connection: close

The ETag (entity tag) header field is used to determine if a cached version of the requested resource is identical to the current version of the resource on the server. "Content-Type" specifies the Internet media type of the data conveyed by the HTTP message, while "Content-Length" indicates its length in bytes. 
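The same conversation can be reproduced one level down, at the socket layer. The Python sketch below sends the CRLF-terminated request shown above and prints the server's status line, using the first digit of the status code to recover the response class; the host is illustrative and the reply depends on the server.

    # Sketch of the sample conversation above at the socket level:
    # a CRLF-terminated request line and header fields, then an
    # empty line ending the header section.
    import socket

    request = (
        "GET / HTTP/1.1\r\n"
        "Host: www.example.com\r\n"   # mandatory in HTTP/1.1
        "Connection: close\r\n"       # ask the server to close, so recv() terminates
        "\r\n"                        # blank line ends the request head
    )
    with socket.create_connection(("www.example.com", 80)) as sock:
        sock.sendall(request.encode("ascii"))
        reply = b""
        while True:
            chunk = sock.recv(4096)
            if not chunk:
                break
            reply += chunk

    status_line = reply.split(b"\r\n", 1)[0].decode()   # e.g. "HTTP/1.1 200 OK"
    status_code = int(status_line.split()[1])
    print(status_line, "-> class", status_code // 100)  # first digit: 2 = successful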
The HTTP/1.1 webserver publishes its ability to respond to requests for certain byte ranges of the document by setting the field "Accept-Ranges: bytes". This is useful if the client needs to have only certain portions of a resource sent by the server, which is called byte serving. When "Connection: close" is sent, it means that the web server will close the TCP connection immediately after the transfer of this response. Most of the header lines are optional. When "Content-Length" is missing, the length is determined in other ways. Chunked transfer encoding uses a chunk size of 0 to mark the end of the content. "Identity" encoding without "Content-Length" reads content until the socket is closed. A "Content-Encoding" like "gzip" can be used to compress the transmitted data.
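Byte serving, as advertised by "Accept-Ranges: bytes" above, can be exercised from the client side with a Range header. A minimal Python sketch, with an illustrative host and range; a server that honours the range answers 206 Partial Content, while one that ignores it answers 200 with the full body.

    # Sketch of byte serving: request only the first 100 bytes.
    import http.client

    conn = http.client.HTTPConnection("www.example.com")
    conn.request("GET", "/", headers={"Range": "bytes=0-99"})
    resp = conn.getresponse()
    print(resp.status)                      # 206 if the range was honoured
    print(resp.getheader("Content-Range"))  # e.g. "bytes 0-99/155"
    part = resp.read()                      # the requested portion only
    conn.close()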
https://en.wikipedia.org/wiki?curid=13443
Heinrich Hertz Heinrich Rudolf Hertz (22 February 1857 – 1 January 1894) was a German physicist who first conclusively proved the existence of the electromagnetic waves predicted by James Clerk Maxwell's equations of electromagnetism. The unit of frequency, cycle per second, was named the "hertz" in his honor. Heinrich Rudolf Hertz was born in 1857 in Hamburg, then a sovereign state of the German Confederation, into a prosperous and cultured Hanseatic family. His father was Gustav Ferdinand Hertz. His mother was Anna Elisabeth Pfefferkorn. While studying at the Gelehrtenschule des Johanneums in Hamburg, Hertz showed an aptitude for sciences as well as languages, learning Arabic and Sanskrit. He studied sciences and engineering in the German cities of Dresden, Munich and Berlin, where he studied under Gustav R. Kirchhoff and Hermann von Helmholtz. In 1880, Hertz obtained his PhD from the University of Berlin, and for the next three years remained for post-doctoral study under Helmholtz, serving as his assistant. In 1883, Hertz took a post as a lecturer in theoretical physics at the University of Kiel. In 1885, Hertz became a full professor at the University of Karlsruhe. In 1886, Hertz married Elisabeth Doll, the daughter of Max Doll, a lecturer in geometry at Karlsruhe. They had two daughters: Johanna, born on 20 October 1887, and Mathilde, born on 14 January 1891, who went on to become a notable biologist. During this time Hertz conducted his landmark research into electromagnetic waves. Hertz took a position as Professor of Physics and Director of the Physics Institute in Bonn on 3 April 1889, a position he held until his death. During this time he worked on theoretical mechanics, with his work published in the book "Die Prinzipien der Mechanik in neuem Zusammenhange dargestellt" ("The Principles of Mechanics Presented in a New Form"), published posthumously in 1894. In 1892, Hertz was diagnosed with an infection (after a bout of severe migraines) and underwent operations to treat the illness. He died of granulomatosis with polyangiitis at the age of 36 in Bonn, Germany, in 1894, and was buried in the Ohlsdorf Cemetery in Hamburg. Hertz's wife, Elisabeth Hertz née Doll (1864–1941), did not remarry. Hertz left two daughters, Johanna (1887–1967) and Mathilde (1891–1975). Hertz's daughters never married and he has no descendants. In 1864 Scottish mathematical physicist James Clerk Maxwell proposed a comprehensive theory of electromagnetism, now called Maxwell's equations. Maxwell's theory predicted that coupled electric and magnetic fields could travel through space as an "electromagnetic wave". Maxwell proposed that light consisted of electromagnetic waves of short wavelength, but no one had been able to prove this, or generate or detect electromagnetic waves of other wavelengths. During Hertz's studies in 1879, Helmholtz suggested that Hertz's doctoral dissertation be on testing Maxwell's theory. Helmholtz had also proposed the "Berlin Prize" problem that year at the Prussian Academy of Sciences for anyone who could experimentally prove an electromagnetic effect in the polarization and depolarization of insulators, something predicted by Maxwell's theory. Helmholtz was sure Hertz was the most likely candidate to win it. Not seeing any way to build an apparatus to experimentally test this, Hertz thought it was too difficult, and worked on electromagnetic induction instead. 
Hertz did produce an analysis of Maxwell's equations during his time at Kiel, showing they did have more validity than the then prevalent "action at a distance" theories. After Hertz received his professorship at Karlsruhe, he was experimenting with a pair of Riess spirals in the autumn of 1886 when he noticed that discharging a Leyden jar into one of these coils would produce a spark in the other coil. With an idea of how to build an apparatus, Hertz now had a way to proceed with the "Berlin Prize" problem of 1879 on proving Maxwell's theory (although the actual prize had expired uncollected in 1882). He used a Ruhmkorff coil-driven spark gap and one-meter wire pair as a radiator. Capacity spheres were present at the ends for circuit resonance adjustments. His receiver was a loop antenna with a micrometer spark gap between the elements. This experiment produced and received what are now called radio waves in the very high frequency range. Between 1886 and 1889 Hertz conducted a series of experiments that would prove the effects he was observing were results of Maxwell's predicted electromagnetic waves. Starting in November 1887 with his paper "On Electromagnetic Effects Produced by Electrical Disturbances in Insulators", Hertz sent a series of papers to Helmholtz at the Berlin Academy, including papers in 1888 that showed transverse free space electromagnetic waves traveling at a finite speed over a distance. In the apparatus Hertz used, the electric and magnetic fields would radiate away from the wires as transverse waves. Hertz had positioned the oscillator about 12 meters from a zinc reflecting plate to produce standing waves. Each wave was about 4 meters long. Using the ring detector, he recorded how the wave's magnitude and component direction varied. Hertz measured Maxwell's waves and demonstrated that the velocity of these waves was equal to the velocity of light. The electric field intensity, polarization and reflection of the waves were also measured by Hertz. These experiments established that light and these waves were both a form of electromagnetic radiation obeying the Maxwell equations. Hertz did not realize the practical importance of his radio wave experiments; he regarded them simply as proof that Maxwell's theory was correct and, when asked about the applications of his discoveries, saw none. His proof of the existence of airborne electromagnetic waves nevertheless led to an explosion of experimentation with this new form of electromagnetic radiation, which was called "Hertzian waves" until around 1910 when the term "radio waves" became current. Within 10 years researchers such as Oliver Lodge, Ferdinand Braun, and Guglielmo Marconi employed radio waves in the first wireless telegraphy radio communication systems, leading to radio broadcasting, and later television. In 1909, Braun and Marconi received the Nobel Prize in Physics for their "contributions to the development of wireless telegraphy". Today radio is an essential technology in global telecommunication networks, and the transmission medium underlying modern wireless devices. In 1892, Hertz began experimenting and demonstrated that cathode rays could penetrate very thin metal foil (such as aluminium). Philipp Lenard, a student of Heinrich Hertz, further researched this "ray effect". He developed a version of the cathode-ray tube and studied the penetration by X-rays of various materials. Philipp Lenard, though, did not realize that he was producing X-rays. Hermann von Helmholtz formulated mathematical equations for X-rays. 
He postulated a dispersion theory before Röntgen made his discovery and announcement. It was formed on the basis of the electromagnetic theory of light ("Wiedemann's Annalen", Vol. XLVIII). However, he did not work with actual X-rays. Hertz helped establish the photoelectric effect (which was later explained by Albert Einstein) when he noticed that a charged object loses its charge more readily when illuminated by ultraviolet radiation (UV). In 1887, he made observations of the photoelectric effect and of the production and reception of electromagnetic (EM) waves, published in the journal Annalen der Physik. His receiver consisted of a coil with a spark gap, whereby a spark would be seen upon detection of EM waves. He placed the apparatus in a darkened box to see the spark better. He observed that the maximum spark length was reduced when in the box. A glass panel placed between the source of EM waves and the receiver absorbed UV that assisted the electrons in jumping across the gap. When removed, the spark length would increase. He observed no decrease in spark length when he substituted quartz for glass, as quartz does not absorb UV radiation. Hertz concluded his months of investigation and reported the results obtained. He did not further pursue investigation of this effect, nor did he make any attempt at explaining how the observed phenomenon was brought about. Between 1886 and 1889, Hertz published two articles on what was to become known as the field of contact mechanics, which proved to be an important basis for later theories in the field. Joseph Valentin Boussinesq published some critically important observations on Hertz's work, which nevertheless established this work on contact mechanics as being of immense importance. His work basically summarises how two axisymmetric objects placed in contact will behave under loading; he obtained results based upon the classical theory of elasticity and continuum mechanics. The most significant failure of his theory was the neglect of any nature of adhesion between the two solids, which proves to be important as the materials composing the solids start to assume high elasticity. It was natural to neglect adhesion in that age, as there were no experimental methods of testing for it. To develop his theory, Hertz used his observation of elliptical Newton's rings formed upon placing a glass sphere upon a lens as the basis of assuming that the pressure exerted by the sphere follows an elliptical distribution. He used the formation of Newton's rings again while validating his theory with experiments, calculating the displacement of the sphere into the lens. K. L. Johnson, K. Kendall and A. D. Roberts (JKR) used this theory as a basis while calculating the theoretical displacement or "indentation depth" in the presence of adhesion in 1971. Hertz's theory is recovered from their formulation if the adhesion of the materials is assumed to be zero. Similar to this theory, however using different assumptions, B. V. Derjaguin, V. M. Muller and Y. P. Toporov published another theory in 1975, which came to be known as the DMT theory in the research community, and which also recovered Hertz's formulations under the assumption of zero adhesion. This DMT theory proved to be rather premature and needed several revisions before it came to be accepted as another material contact theory in addition to the JKR theory. 
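Hertz's sphere-on-flat result can be stated compactly. The Python sketch below evaluates the classical small-strain formulas for the contact radius, indentation depth and peak pressure under the theory's assumptions (frictionless, adhesionless contact); the material values are illustrative and not taken from the article.

    # Classical Hertz contact of a sphere (radius R, normal load F) on a flat:
    #   1/E* = (1 - nu1^2)/E1 + (1 - nu2^2)/E2    (effective contact modulus)
    #   a    = (3*F*R / (4*E*))**(1/3)            (contact radius)
    #   d    = a**2 / R                           (indentation depth)
    #   p0   = 3*F / (2*pi*a**2)                  (peak contact pressure)
    from math import pi

    def hertz_sphere_on_flat(F, R, E1, nu1, E2, nu2):
        E_star = 1.0 / ((1 - nu1**2) / E1 + (1 - nu2**2) / E2)
        a = (3 * F * R / (4 * E_star)) ** (1 / 3)
        return a, a**2 / R, 3 * F / (2 * pi * a**2)

    # Illustrative numbers: a 5 mm steel sphere pressed on glass with 10 N.
    a, d, p0 = hertz_sphere_on_flat(F=10.0, R=5e-3,
                                    E1=210e9, nu1=0.30,   # steel
                                    E2=70e9,  nu2=0.22)   # glass
    print(f"a = {a*1e6:.0f} um, depth = {d*1e6:.2f} um, p0 = {p0/1e9:.2f} GPa")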
Both the DMT and the JKR theories form the basis of contact mechanics, upon which all transition contact models are based and which are used in material parameter prediction in nanoindentation and atomic force microscopy. So Hertz's research from his days as a lecturer, preceding his great work on electromagnetism, which he himself considered with his characteristic soberness to be trivial, has come down to the age of nanotechnology. Hertz also described the "Hertzian cone", a type of fracture mode in brittle solids caused by the transmission of stress waves. Hertz always had a deep interest in meteorology, probably derived from his contacts with Wilhelm von Bezold (who was his professor in a laboratory course at the Munich Polytechnic in the summer of 1878). As an assistant to Helmholtz in Berlin, he contributed a few minor articles in the field, including research on the evaporation of liquids, a new kind of hygrometer, and a graphical means of determining the properties of moist air when subjected to adiabatic changes. Heinrich Hertz was a Lutheran throughout his life and would not have considered himself Jewish, as his father's family had all converted to Lutheranism when his father was still in his childhood (aged seven) in 1834. Nevertheless, when the Nazi regime gained power decades after Hertz's death, his portrait was removed by them from its prominent position of honor in Hamburg's City Hall ("Rathaus") because of his partly Jewish ethnic ancestry. (The painting has since been returned to public display.) Hertz's widow and daughters left Germany in the 1930s and went to England. Heinrich Hertz's nephew Gustav Ludwig Hertz was a Nobel Prize winner, and Gustav's son Carl Helmut Hertz invented medical ultrasonography. His daughter Mathilde Carmen Hertz was a well-known biologist and comparative psychologist. Hertz's grandnephew Hermann Gerhard Hertz, professor at the University of Karlsruhe, was a pioneer of NMR spectroscopy and in 1995 published Hertz's laboratory notes. The SI unit "hertz" (Hz) was established in his honor by the International Electrotechnical Commission in 1930 for frequency, an expression of the number of times that a repeated event occurs per second. It was adopted by the CGPM (Conférence générale des poids et mesures) in 1960, officially replacing the previous name, "cycles per second" (cps). In 1928 the Heinrich-Hertz Institute for Oscillation Research was founded in Berlin; it is known today as the Fraunhofer Institute for Telecommunications, Heinrich Hertz Institute (HHI). In 1969, in East Germany, a Heinrich Hertz memorial medal was cast. The IEEE Heinrich Hertz Medal, established in 1987, is "for outstanding achievements in Hertzian waves [...] presented annually to an individual for achievements which are theoretical or experimental in nature". In 1980, a high school called "Istituto Tecnico Industriale Statale Heinrich Hertz" was founded in Rome, in the neighborhood of Cinecittà Est. A crater that lies on the far side of the Moon, just behind the eastern limb, is named in his honor. The Hertz market for radio electronics products in Nizhny Novgorod, Russia, is named after him. The Heinrich-Hertz-Turm radio telecommunication tower in Hamburg is named after the city's famous son. Hertz is honored by Japan with a membership in the Order of the Sacred Treasure, which has multiple layers of honor for prominent people, including scientists. 
Heinrich Hertz has been honored by a number of countries around the world in their postage issues, and in post-World War II times has appeared on various German stamp issues as well. On his birthday in 2012, Google honored Hertz with a Google doodle, inspired by his life's work, on its home page.
https://en.wikipedia.org/wiki?curid=13445
Hebrew alphabet The Hebrew alphabet, known variously by scholars as the Jewish script, square script and block script, is an abjad script used in the writing of the Hebrew language and other Jewish languages, most notably Yiddish, Judeo-Spanish, Judeo-Arabic and Judeo-Persian. Historically, two separate abjad scripts have been used to write Hebrew. The original, old Hebrew script, known as the paleo-Hebrew alphabet, has been largely preserved in a variant form as the Samaritan alphabet. The present "Jewish script" or "square script", on the contrary, is a stylized form of the Aramaic alphabet and was technically known by Jewish sages as Ashurit (lit. "Assyrian script"), since its origins were alleged to be from Assyria. Various "styles" (in current terms, "fonts") of representation of the Jewish script letters described in this article also exist, including a variety of cursive Hebrew styles. In the remainder of this article, the term "Hebrew alphabet" refers to the square script unless otherwise indicated. The Hebrew alphabet has 22 letters. It does not have case. Five letters have different forms when used at the end of a word. Hebrew is written from right to left. Originally, the alphabet was an abjad consisting only of consonants, but it is now considered an "impure abjad". As with other abjads, such as the Arabic alphabet, during its centuries-long use scribes devised means of indicating vowel sounds by separate vowel points, known in Hebrew as "niqqud". In both biblical and rabbinic Hebrew, the letters can also function as "matres lectionis", which is when certain consonants are used to indicate vowels. There is a trend in Modern Hebrew towards the use of "matres lectionis" to indicate vowels that have traditionally gone unwritten, a practice known as "full spelling". The Yiddish alphabet, a modified version of the Hebrew alphabet used to write Yiddish, is a true alphabet, with all vowels rendered in the spelling, except in the case of inherited Hebrew words, which typically retain their Hebrew spellings. The Arabic and Hebrew alphabets have similarities because they are both derived from the Aramaic alphabet, and both derive from the paleo-Hebrew or Phoenician alphabet. Phoenicia is the Greek term referring to Canaan or "kn'n". A distinct Hebrew variant, called the paleo-Hebrew alphabet by scholars, emerged around 800 BCE. Examples of related early inscriptions from the area include the tenth-century Gezer calendar, and the Siloam inscription (c. 700 BCE). The paleo-Hebrew alphabet was used in the ancient kingdoms of Israel and Judah. Following the exile of the Kingdom of Judah in the 6th century BCE during the Babylonian captivity, Jews began using a form of the Assyrian Aramaic alphabet, which was another offshoot of the same family of scripts. The Samaritans, who remained in the Land of Israel, continued to use the paleo-Hebrew alphabet. During the 3rd century BCE, Jews began to use a stylized, "square" form of the Aramaic alphabet that was used by the Persian Empire (and which in turn had been adopted from the Assyrians), while the Samaritans continued to use a form of the paleo-Hebrew script called the Samaritan alphabet. After the fall of the Persian Empire in 330 BCE, Jews used both scripts before settling on the square Assyrian form. The square Hebrew alphabet was later adapted and used for writing languages of the Jewish diaspora – such as Karaim, the Judeo-Arabic languages, Judaeo-Spanish, and Yiddish. 
The Hebrew alphabet continued in use for scholarly writing in Hebrew and came again into everyday use with the rebirth of the Hebrew language as a spoken language in the 18th and 19th centuries, especially in Israel. In the traditional form, the Hebrew alphabet is an abjad consisting only of consonants, written from right to left. It has 22 letters, five of which use different forms at the end of a word. In the traditional form, vowels are indicated by the weak consonants Aleph (א), He (ה), Waw/Vav (ו), or Yodh (י) serving as vowel letters, or "matres lectionis": the letter is combined with a previous vowel and becomes silent, or by imitation of such cases in the spelling of other forms. Also, a system of vowel points to indicate vowels (diacritics), called niqqud, was developed. In modern forms of the alphabet, as in the case of Yiddish and to some extent Modern Hebrew, vowels may be indicated. Today, the trend is toward full spelling with the weak letters acting as true vowels. When used to write Yiddish, vowels are indicated, using certain letters, either with niqqud diacritics or without, except for Hebrew words, which in Yiddish are written in their Hebrew spelling. To preserve the proper vowel sounds, scholars developed several different sets of vocalization and diacritical symbols called "nequdot" (literally "points"). One of these, the Tiberian system, eventually prevailed. Aaron ben Moses ben Asher, and his family for several generations, are credited with refining and maintaining the system. These points are normally used only for special purposes, such as Biblical books intended for study, in poetry or when teaching the language to children. The Tiberian system also includes a set of cantillation marks, called "trope" or "te'amim", used to indicate how scriptural passages should be chanted in synagogue recitations of scripture (although these marks do not appear in the scrolls). In everyday writing of modern Hebrew, "niqqud" are absent; however, patterns of how words are derived from Hebrew roots (called "shorashim" or "triliterals") allow Hebrew speakers to determine the vowel-structure of a given word from its consonants based on the word's context and part of speech. Unlike the Paleo-Hebrew writing script, the modern Ashuri script has five letters that have special final forms, called sofit (meaning in this context "final" or "ending"), used only at the end of a word, somewhat as in the Greek or in the Arabic and Mandaic alphabets. These are shown below the normal form in the following table (letter names are Unicode standard). Although Hebrew is read and written from right to left, the following table shows the letters in order from left to right. The descriptions that follow are based on the pronunciation of modern standard Israeli Hebrew. Note that dotless tav, ת, would be expected to be pronounced /θ/ (voiceless dental fricative), but this pronunciation was lost among most Jews due to its not existing in the countries where they lived (such as in nearly all of Eastern Europe). Yiddish modified this /θ/ to /s/ (cf. seseo in Spanish), but in modern Israeli Hebrew, it is simply pronounced /t/. "Shin" and "sin" are represented by the same letter, ש, but are two separate phonemes. When vowel diacritics are used, the two phonemes are differentiated with a "shin"-dot or "sin"-dot; the "shin"-dot is above the upper-right side of the letter, and the "sin"-dot is above the upper-left side of the letter. 
Historically, "left-dot-sin" corresponds to Proto-Semitic *, which in biblical-Judaic-Hebrew corresponded to the voiceless alveolar lateral fricative , as evidenced in the Greek transliteration of Hebrew words such as "balsam" () (the "ls" - 'שׂ') as is evident in the "Targum Onkelos". Historically, the consonants "beth", "gimel", "daleth", "kaf", "pe" and "tav" each had two sounds: one hard (plosive), and one soft (fricative), depending on the position of the letter and other factors. When vowel diacritics are used, the hard sounds are indicated by a central dot called "dagesh" (), while the soft sounds lack a "dagesh". In modern Hebrew, however, the "dagesh" only changes the pronunciation of "beth", "kaf", and "pe", and does not affect the name of the letter. The differences are as follows: In other dialects (mainly liturgical) there are variations from this pattern. The sounds , , , written ⟨⟩, ⟨⟩, ⟨⟩, and , non-standardly sometimes transliterated ⟨⟩, are often found in slang and loanwords that are part of the everyday Hebrew colloquial vocabulary. The apostrophe-looking symbol after the Hebrew letter modifies the pronunciation of the letter and is called a "geresh". The pronunciation of the following letters can also be modified with the geresh diacritic. The represented sounds are however foreign to Hebrew phonology, i.e., these symbols mainly represent sounds in foreign words or names when transliterated with the Hebrew alphabet, and not loanwords. A "geresh" is also used to denote acronyms pronounced as a string of letters, and to denote a Hebrew numeral. Geresh also is the name of one of the notes of cantillation in the reading of the Torah, but its appearance and function is different. In much of Israel's general population, especially where Ashkenazic pronunciation is prevalent, many letters have the same pronunciation. They are as follows: * Varyingly Some of the variations in sound mentioned above are due to a systematic feature of Ancient Hebrew. The six consonants were pronounced differently depending on their position. These letters were also called "BeGeD KeFeT" letters . The full details are very complex; this summary omits some points. They were pronounced as plosives at the beginning of a syllable, or when doubled. They were pronounced as fricatives when preceded by a vowel (commonly indicated with a macron, ḇ ḡ ḏ ḵ p̄ ṯ). The plosive and double pronunciations were indicated by the "dagesh". In Modern Hebrew the sounds ḏ and ḡ have reverted to and , respectively, and ṯ has become , so only the remaining three consonants show variation. "resh" may have also been a "doubled" letter, making the list "BeGeD KePoReT". (Sefer Yetzirah, 4:1) The following table contains the pronunciation of the Hebrew letters in reconstructed historical forms and dialects using the . The apostrophe-looking symbol after some letters is not a yud but a geresh. It is used for loanwords with non-native Hebrew sounds. The dot in the middle of some of the letters, called a "dagesh kal", also modifies the sounds of the letters ב, כ and פ in modern Hebrew (in some forms of Hebrew it modifies also the sounds of the letters ג, ד and/or ת; the "dagesh chazak" – orthographically indistinguishable from the "dagesh kal" – designates gemination, which today is realized only rarely – e.g. in biblical recitations or when using Arabic loanwords). "alef", ע "ayin", "waw/vav" and "yod" are letters that can sometimes indicate a vowel instead of a consonant (which would be, respectively, ). 
When they do, ו and י are considered to constitute part of the vowel designation in combination with a niqqud symbol – a vowel diacritic (whether or not the diacritic is marked), whereas א and ע are considered to be mute, their role being purely indicative of the non-marked vowel. "Niqqud" is the system of dots that help determine vowels and consonants. In Hebrew, all forms of "niqqud" are often omitted in writing, except for children's books, prayer books, poetry, foreign words, and words which would be ambiguous to pronounce. Israeli Hebrew has five vowel phonemes, /a/, /e/, /i/, /o/ and /u/, but many more written symbols for them. Note 1: The circle represents whatever Hebrew letter is used. Note 2: The pronunciation of "tsere" and sometimes "segol" – with or without the letter "yod" – is sometimes "ei" in Modern Hebrew. This is not correct in the normative pronunciation and not consistent in the spoken language. Note 3: The "dagesh", "mappiq", and "shuruk" have different functions, even though they look the same. Note 4: The letter ו ("waw/vav") is used since it can only be represented by that letter. By adding a vertical line (called "Meteg") underneath the letter and to the left of the vowel point, the vowel is made long. The "meteg" is only used in Biblical Hebrew, not Modern Hebrew. By adding two vertical dots (called "Sh'va") underneath the letter, the vowel is made very short. When sh'va is placed on the first letter of the word, mostly it is "è" (but in some instances, it makes the first letter silent without a vowel (vowel-less): e.g. וְ "wè" to "w"). The symbol ״ is called a gershayim and is a punctuation mark used in the Hebrew language to denote acronyms. It is written before the last letter in the acronym. Gershayim is also the name of a note of cantillation in the reading of the Torah, printed above the accented letter. The following table displays typographic and chirographic variants of each letter. For the five letters that have a different final form used at the end of words, the final forms are displayed beneath the regular form. The block (square, or "print" type) and cursive ("handwritten" type) are the only variants in widespread contemporary use. Rashi script is also used, for historical reasons, in a handful of standard texts. Following the adoption of Greek Hellenistic alphabetic numeration practice, Hebrew letters started being used to denote numbers in the late 2nd century BC, and performed this arithmetic function for about a thousand years. Nowadays alphanumeric notation is used only in specific contexts, e.g. denoting dates in the Hebrew calendar, denoting grades of school in Israel, other listings (e.g. שלב א׳, שלב ב׳ – "phase a, phase b"), commonly in Kabbalah (Jewish mysticism) in a practice known as gematria, and often in religious contexts. The numbers 500, 600, 700, 800 and 900 are commonly represented by the juxtapositions ת״ק, ת״ר, ת״ש, ת״ת, and תת״ק respectively. Adding a geresh ("׳") to a letter multiplies its value by one thousand; for example, the year 5778 is portrayed as ה׳תשע״ח, where ה׳ represents 5000, and תשע״ח represents 778. The following table lists transliterations and transcriptions of Hebrew letters used in Modern Hebrew. Clarifications: Note: SBL's transliteration system, recommended in its "Handbook of Style", differs slightly from the 2006 "precise" transliteration system of the Academy of the Hebrew Language; for "צ" SBL uses "ṣ" (≠ AHL "ẓ"), and for בג״ד כפ״ת with no dagesh, SBL uses the same symbols as for those with dagesh (i.e. "b", "g", "d", "k", "f", "t"). 
Note A: In transliterations of modern Israeli Hebrew, initial and final ע (in regular transliteration), silent or initial א, and silent ה are "not" transliterated. To the eye of readers orientating themselves on Latin (or similar) alphabets, these letters might seem to be transliterated as vowel letters; however, these are in fact transliterations of the vowel diacritics – niqqud (or are representations of the spoken vowels). E.g., in אִם ("if", /im/), אֵם ("mother", /em/) and אֹם ("nut", /om/), the letter א always represents the same consonant: [ʔ] (glottal stop), whereas the vowels /i/, /e/ and /o/ respectively represent the spoken vowel, whether it is orthographically denoted by diacritics or not. Since the Academy of the Hebrew Language ascertains that א in initial position is not transliterated, the symbol for the glottal stop ʾ is omitted from the transliteration, and only the subsequent vowels are transliterated (whether or not their corresponding vowel diacritics appeared in the text being transliterated), resulting in "im", "em" and "om", respectively. Note B: The diacritic geresh – "׳" – is used with some other letters as well (ד׳, ח׳, ט׳, ע׳, ר׳, ת׳), but only to transliterate "from" other languages "to" Hebrew – never to spell Hebrew words; therefore they were not included in this table (correctly translating a Hebrew text with these letters would require using the spelling in the language from which the transliteration to Hebrew was originally made). The non-standard "ו׳" and "וו" are sometimes used to represent [w], which like [d͡ʒ], [t͡ʃ] and [ʒ] appears in Hebrew slang and loanwords. Note C: The sound [χ] (as "ch" in loch) is often transcribed "ch", inconsistently with the guidelines specified by the Academy of the Hebrew Language: חם → "cham"; סכך → "schach". Note D: Although the Bible does include a single occurrence of a final pe with a dagesh (Book of Proverbs 30:6), in modern Hebrew [p] is always represented by pe in its regular, not final, form "פ", even when in final word position, which occurs with loanwords (e.g. שׁוֹפּ "shop"), foreign names (e.g. פִילִיפּ "Philip") and some slang (e.g. חָרַפּ "slept deeply"). 
Another book, the 13th-century Kabbalistic text Sefer HaTemunah, holds that a single letter of unknown pronunciation, held by some to be the four-pronged shin on one side of the tefillin box, is missing from the current alphabet. The world's flaws, the book teaches, are related to the absence of this letter, the eventual revelation of which will repair the universe. Another example of messianic significance attached to the letters is the teaching of Rabbi Eliezer that the five letters of the alphabet with final forms hold the "secret of redemption". In addition, the letters occasionally feature in aggadic portions of non-mystical rabbinic literature. In such aggada the letters are often given anthropomorphic qualities and depicted as speaking to God. Commonly their shapes are used in parables to illustrate points of ethics or theology. An example from the Babylonian Talmud is a parable intended to discourage speculation about the universe before creation. Extensive instructions about the proper methods of forming the letters are found in Mishnat Soferim, within Mishna Berura of Yisrael Meir Kagan. See aleph number, beth number and gimel function. In set theory, $\aleph_0$, pronounced aleph-naught or aleph-zero, is used to mark the cardinal number of an infinite countable set, such as $\mathbb{Z}$, the set of all integers. More generally, the $\aleph$ (aleph) notation marks the ordered sequence of all distinct infinite cardinal numbers. Less frequently used, the $\beth$ (beth) notation is used for the iterated power sets of $\aleph_0$. The second element, $\beth_1$, is the cardinality of the continuum. Very occasionally, gimel is used in cardinal notation. The Unicode Hebrew block extends from U+0590 to U+05FF and from U+FB1D to U+FB4F. It includes letters, ligatures, combining diacritical marks ("Niqqud" and cantillation marks) and punctuation. Numeric character references are included for HTML. These can be used in many markup languages, and they are often used in Wiki to create the Hebrew glyphs compatible with the majority of web browsers. Standard Hebrew keyboards have a 101-key layout. Like the standard QWERTY layout, the Hebrew layout was derived from the order of letters on Hebrew typewriters. Note a: "Alef-bet" is commonly written in Israeli Hebrew without the maqaf ("[Hebrew] hyphen"), as opposed to with the hyphen. Note b: The Arabic letters generally (as six of the primary letters can have only two variants) have four forms, according to their place in the word. The same goes with the Mandaic ones, except for three of the 22 letters, which have only one form. Note c: In forms of Hebrew older than Modern Hebrew, בי״ת, כ״ף and פ״א can only be read "b", "k" and "p", respectively, at the beginning of a word, while they will have the sole value of "v", "kh" and "f" in a "sofit" (final) position, with few exceptions. In medial positions, both pronunciations are possible. In Modern Hebrew this restriction is not absolute, e.g. פִיזִיקַאי (= "physicist") begins with [f] and never [p], and סְנוֹבּ (= "snob") ends with [b] and never [v]. A "dagesh" may be inserted to unambiguously denote the plosive variant: בּ = [b], כּ = [k], פּ = [p]; similarly (though today very rare in Hebrew and common only in Yiddish) a rafé placed above the letter unambiguously denotes the fricative variant: בֿ = [v], כֿ = [χ] and פֿ = [f]. In Modern Hebrew orthography, the sound [p] at the end of a word is denoted by the regular form "פ", as opposed to the final form "ף", which always denotes [f] (see table of transliterations and transcriptions, comment). 
Note d: However, וו (two separate vavs), used in Ktiv male, is to be distinguished from the "Yiddish ligature" װ (also two vavs but together as one character). Note e: The Academy of the Hebrew Language states that both [v] and [w] be indistinguishably represented in Hebrew using the letter Vav. Sometimes the Vav is indeed doubled, however not to denote [w] as opposed to [v] but rather, when spelling without niqqud, to denote the phoneme /v/ at a non-initial and non-final position in the word, whereas a single Vav at a non-initial and non-final position in the word in spelling without niqqud denotes one of the phonemes /u/ or /o/. To pronounce foreign words and loanwords containing the sound [w], Hebrew readers must therefore rely on former knowledge and context.
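The alphanumeric numeral scheme described earlier is easy to compute. Below is a minimal Python sketch, assuming the conventional letter values (units 1–9, tens 10–90, hundreds 100–400, with final forms counted the same as their regular counterparts) and ignoring the geresh and gershayim punctuation; the function name is my own.

    # Conventional numerical values of the Hebrew letters.
    VALUES = {}
    for i, ch in enumerate("אבגדהוזחטי"):      # alef..yod = 1..10
        VALUES[ch] = i + 1
    for i, ch in enumerate("כלמנסעפצ"):        # kaf..tsadi = 20..90
        VALUES[ch] = (i + 2) * 10
    for i, ch in enumerate("קרשת"):            # qof..tav = 100..400
        VALUES[ch] = (i + 1) * 100
    VALUES.update({"ך": 20, "ם": 40, "ן": 50, "ף": 80, "ץ": 90})  # final forms

    def letter_value_sum(word):
        # Sum letter values, skipping gershayim and any other character.
        return sum(VALUES.get(ch, 0) for ch in word)

    print(letter_value_sum("תשע״ח"))   # 778, matching the year example above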
https://en.wikipedia.org/wiki?curid=13446
Horace Walpole Horatio Walpole, 4th Earl of Orford (24 September 1717 – 2 March 1797), also known as Horace Walpole, was an English writer, art historian, man of letters, antiquarian and Whig politician. He had Strawberry Hill House built in Twickenham, south-west London, reviving the Gothic style some decades before his Victorian successors. His literary reputation rests on the first Gothic novel, "The Castle of Otranto" (1764), and his "Letters", which are of significant social and political interest. They have been published by Yale University Press in 48 volumes. He was the son of the first British Prime Minister, Sir Robert Walpole. As Horace Walpole was childless, on his death his barony of Walpole descended to his cousin of the same surname, who was created the new Earl of Orford. Walpole was born in London, the youngest son of British Prime Minister Sir Robert Walpole and his wife Catherine. Like his father, he received early education in Bexley, in part under Edward Weston. He was also educated at Eton College and King's College, Cambridge. Walpole's first friends were probably his cousins Francis and Henry Conway, to whom Walpole became strongly attached, especially Henry. At Eton he formed with Charles Lyttelton (later an antiquary and bishop) and George Montagu (later a member of parliament and Private Secretary to Lord North) the "Triumvirate", a schoolboy confederacy. More important were another group of friends dubbed the "Quadruple Alliance": Walpole, Thomas Gray, Richard West and Thomas Ashton. At Cambridge Walpole came under the influence of Conyers Middleton, an unorthodox theologian. Walpole came to accept, for the rest of his life, the sceptical nature of Middleton's attitude to some essential Christian doctrines, including a hatred of superstition and bigotry. Walpole ceased to reside at Cambridge at the end of 1738 and left without taking a degree. In 1737 Walpole's mother died. According to one biographer his love for his mother "was the most powerful emotion of his entire life ... the whole of his psychological history was dominated by it". Walpole did not have any serious relationships with women; he has been called "a natural celibate". Walpole's sexual orientation has been the subject of speculation. He never married, engaging in a succession of unconsummated flirtations with unmarriageable women, and counted among his close friends a number of women such as Anne Seymour Damer and Mary Berry named by a number of sources as lesbian. Many contemporaries described him as effeminate (one political opponent called him "a hermaphrodite horse"). Biographers such as Timothy Mowl explore his possible homosexuality, including a passionate but ultimately unhappy love affair with the 9th Earl of Lincoln. Some previous biographers such as Lewis, Fothergill, and Robert Wyndham Ketton-Cremer, however, have interpreted Walpole as asexual. Walpole's father secured for him three sinecures which afforded him an income: in 1737 he was appointed Inspector of the Imports and Exports in the Custom House, which he resigned to become Usher of the Exchequer, which gave him at first £3,900 per annum, but this increased over the years. Upon coming of age he became Comptroller of the Pipe and Clerk of the Estreats, which gave him an income of £300 per annum. Walpole decided to go travelling with Thomas Gray and wrote a will whereby he left Gray all his belongings. 
In 1744 Walpole wrote in a letter to Conway that these offices gave him nearly £2,000 per annum; after 1745, when he was appointed to the Collectorship of Customs, his total income from these offices was around £3,400 per annum. Walpole went on the Grand Tour with Gray, but as Walpole recalled in later life: "We had not got to Calais before Gray was dissatisfied, for I was a boy, and he, though infinitely more a man, was not enough to make allowances". They left Dover on 29 March and arrived at Calais later that day. They then travelled through Boulogne, Amiens and Saint-Denis, arriving at Paris on 4 April. Here they met many aristocratic Englishmen. In early June they left Paris for Rheims, and in September went on to Dijon, Lyons, Dauphiné, Savoy, Aix-les-Bains and Geneva, and then back to Lyons. In October they left for Italy, arriving in Turin in November, then going to Genoa, Piacenza, Parma, Reggio, Modena and Bologna, and in December arriving at Florence. Here he struck up a friendship with Horace Mann, an assistant to the British Minister at the Court of Tuscany. In Florence he also wrote "Epistle from Florence to Thomas Ashton, Esq., Tutor to the Earl of Plymouth", a mixture of Whig history and Middleton's teachings. In February 1740 Walpole and Gray left for Rome with the intention of witnessing the papal conclave upon the death of Pope Clement XII (which in the event they never saw). Walpole wanted to attend fashionable parties and Gray wanted to visit all the antiquities. At social occasions in Rome he saw the Old Pretender James Francis Edward Stuart and his two sons, Charles Edward Stuart and Henry Stuart, although there is no record of them conversing. Walpole and Gray returned to Florence in July. However, Gray disliked the idleness of Florence as compared to the educational pursuits in Rome, and an animosity grew between them that eventually ended their friendship. On their way back to England they had a furious argument, although it is unknown what it was about. Gray went to Venice, leaving Walpole at Reggio. In later life Walpole admitted that the fault lay primarily with himself. He then visited Venice, Genoa, Antibes, Toulon, Marseilles, Aix, Montpellier, Toulouse, Orléans and Paris. He returned to England on 12 September 1741, reaching London on the 14th. At the 1741 general election Walpole was elected Whig Member of Parliament for the rotten borough of Callington, Cornwall. He held this seat for thirteen years, although he never visited Callington. Walpole entered Parliament shortly before his father's fall from power: in December 1741 the Opposition won its first majority vote in the Commons for twenty years. In January 1742 his father's government was still struggling in Parliament, although by the end of the month Horace and other family members had successfully urged the Prime Minister to resign after a parliamentary defeat. Walpole's philosophy mirrored that of his contemporary Edmund Burke: he was a classical liberal on issues like imperialism, slavery, and the Americans' fight for independence. Walpole delivered his maiden speech on 19 March against the successful motion that a Secret Committee be set up to enquire into Sir Robert Walpole's last ten years as Prime Minister. For the next three years Walpole spent most of his time with his father at his country house, Houghton Hall in Norfolk.
His father died in 1745 and left Walpole the remainder of the lease of his house in Arlington Street, London; £5,000 in cash; and the office of Collector of the Customs (worth £1,000 per annum). His father had, however, died in debt, to the total of between £40,000 and £50,000. In late 1745 Walpole and Gray resumed their friendship. Also that year the Jacobite Rising began. Walpole's position was the fruit of his father's support for the Hanoverian dynasty, and he knew he was in danger, saying: "Now comes the Pretender's boy, and promises all my comfortable apartments in the Exchequer and Custom House to some forlorn Irish peer, who chooses to remove his pride and poverty out of some large old unfurnished gallery at St. Germain's. Why really, Mr. Montagu, this is not pleasant! I shall wonderfully dislike being a loyal sufferer in a threadbare coat, and shivering in an antechamber at Hanover, or reduced to teach Latin and English to the young princes at Copenhagen". Walpole's lasting architectural creation is Strawberry Hill, the home he built in Twickenham, south west of London, which at the time overlooked the Thames. Here he revived the Gothic style many decades before his Victorian successors. This fanciful neo-Gothic concoction began a new architectural trend. Walpole was a member of parliament for one of the many rotten boroughs, Castle Rising, consisting of underlying freeholds in four villages near King's Lynn, Norfolk, from 1754 until 1757. At his home he hung a copy of the warrant for the execution of Charles I with the inscription "Major Charta" and wrote of "the least bad of all murders, that of a King". In 1756 he wrote of his worry that while his fellow Whigs fought amongst themselves the Tories were gaining power, the end result of which would be England delivered to an unlimited, absolute monarchy, "that authority, that torrent which I should in vain extend a feeble arm to stem". In 1757 he wrote the anonymous pamphlet "A Letter from Xo Ho, a Chinese Philosopher at London, to his Friend Lien Chi at Peking", the first of his works to be widely reviewed. Early in 1757 the elder Horace Walpole of Wolterton died and was succeeded in the peerage by his son, who was then an MP for King's Lynn, thereby creating a vacancy. The electors of King's Lynn did not wish to be represented by a stranger and instead wanted someone with a connection to the Walpole family. The new Lord Walpole therefore wrote to his cousin requesting that he stand for the seat, saying his friends "were all unanimously of opinion that you were the only person who from your near affinity to my grandfather, whose name is still in the greatest veneration, and your own known personal abilities and qualifications, could stand in the gap on this occasion and prevent opposition and expence and perhaps disgrace to the family". In early 1757 Walpole was out of Parliament after vacating Castle Rising until his election that year to King's Lynn, a seat he would hold until his retirement from the Commons in 1768. Walpole was a prominent opponent of the decision to execute Admiral Byng. After his retirement, without a seat in Parliament, Walpole recognised his limitations as to political influence. He opposed the recent Catholic accommodative measures, writing to Mann in 1784: "You know I have ever been averse to toleration of an intolerant religion".
He wrote to Mann in 1785 that "as there are continually allusions to parliamentary speeches and events, they are often obscure to me till I get them explained; and besides, I do not know several of the satirized heroes even by sight". His political sympathies were with the Foxite Whigs, the successors of the Rockingham Whigs, who were themselves the successors of the Whig Party as revived by Walpole's father. He also wrote to William Mason expounding his political philosophy. Walpole was horrified by the French Revolution and commended Edmund Burke's "Reflections on the Revolution in France": "Every page shows how sincerely he is in earnest—a wondrous merit in a political pamphlet—All other party writers "act" zeal for the public, but it never seems to flow from the heart". He admired the purple passage in the book on Marie Antoinette: "I know the tirade on the Queen of France is condemned and yet I must avow I admire it much. It paints her exactly as she appeared to me the first time I saw her when Dauphiness. She...shot through the room like an aerial being, all brightness and grace and without seeming to touch earth". After he heard of the execution of King Louis XVI, he wrote to Lady Ossory on 29 January 1793. He was not impressed with Thomas Paine's reply to Burke, "Rights of Man", writing that it was "so coarse, that you would think he means to degrade the language as much as the government". His father was created Earl of Orford in 1742. Horace's elder brother, the 2nd Earl of Orford, passed the title on to his son, the 3rd Earl of Orford (1730–1791). When the 3rd Earl died unmarried, Horace Walpole became the 4th Earl of Orford, and the title died with him in 1797. The massive amount of correspondence he left behind has been published in many volumes, starting in 1798. Likewise, a large collection of his works, including historical writings, was published immediately after his death. Horace Walpole was buried in the same location as his father Sir Robert Walpole, at St Martin's Church in Houghton, Norfolk. After Walpole's death, Lady Louisa Stuart, in the introduction to the letters of her grandmother, Lady Mary Wortley Montagu (1837), wrote of rumours that Horace's biological father was not Sir Robert Walpole but Carr, Lord Hervey (1691–1723), elder half-brother of the more famous John Hervey. T.H. White writes: "Catherine Shorter, Sir Robert Walpole's first wife, had five children. Four of them were born in a sequence after the marriage; the fifth, Horace, was born eleven years later, at a time when she was known to be on bad terms with Sir Robert, and known to be on romantic terms with Carr, Lord Hervey." The lack of physical resemblance between Horace and Sir Robert, and his close resemblance to members of the Hervey family, encouraged these rumours, which Peter Cunningham also took up in his introduction to the letters of Horace Walpole (1857), vol. 1, p. x. The novelist Laetitia Matilda Hawkins, a younger contemporary of Walpole, likewise left a written account of him. In his old age, according to G.G. Cunningham, he "was afflicted with fits of an hereditary gout which a rigid temperance failed to remove." Strawberry Hill had its own printing press, the Strawberry Hill Press, which supported Horace Walpole's intensive literary activity. In 1764, not using his own press, he anonymously published his Gothic novel, "The Castle of Otranto", claiming on its title page that it was a translation "from the Original Italian of Onuphirio Muralto".
The second edition's preface, according to James Watt, "has often been regarded as a manifesto for the modern Gothic romance, stating that his work, now subtitled 'A Gothic Story', sought to restore the qualities of imagination and invention to contemporary fiction". However, there is a playfulness in the prefaces to both editions and in the narration within the text itself. The novel opens with the son of Manfred (the Prince of Otranto) being crushed under a massive helmet that appears as a result of supernatural causes. That moment, along with the rest of the unfolding plot, includes a mixture of both ridiculous and sublime supernatural elements. The plot finally reveals how Manfred's family is tainted, in a way that served as a model for successive Gothic plots. From 1762 on, Walpole published his "Anecdotes of Painting in England", based on George Vertue's manuscript notes. His memoirs of the Georgian social and political scene, though heavily biased, are a useful primary source for historians. Smith, noting that Walpole never did any work for his well-paid government sinecures, turns instead to the letters. Walpole's numerous letters are indeed often used as a historical resource. In one, dating from 28 January 1754, he coined the word serendipity, which he said was derived from a "silly fairy tale" he had read, "The Three Princes of Serendip". The oft-quoted epigram, "This world is a comedy to those that think, a tragedy to those that feel", is from a letter of Walpole's to Anne, Countess of Ossory, of 16 August 1776. The original, fuller version appeared in a letter to Sir Horace Mann on 31 December 1769: "I have often said, and oftener think, that this world is a comedy to those that think, a tragedy to those that feel – a solution of why Democritus laughed and Heraclitus wept." In "Historic Doubts on the Life and Reign of King Richard III" (1768), Walpole defended Richard III against the common belief that he murdered the Princes in the Tower. In this he has been followed by other writers, such as Josephine Tey and Valerie Anand. This work, according to Emile Legouis, shows that Walpole was "capable of critical initiative". However, Walpole later changed his views following the Terror and declared that Richard could have committed the crimes he was accused of. The Walpole Society was formed in 1911 to promote the study of the history of British art. Its headquarters is in the Department of Prints and Drawings at the British Museum and its director is Simon Swynfen Jervis, FSA.
https://en.wikipedia.org/wiki?curid=13447
Horace Engdahl Horace Oscar Axel Engdahl (born 30 December 1948) is a Swedish literary historian and critic, and has been a member of the Swedish Academy since 1997. He was the permanent secretary of the Swedish Academy from 1999 to June 2009, when he was succeeded by Swedish author and historian Peter Englund. Engdahl was born in Karlskrona, Blekinge, Sweden. He earned his B.A. at Stockholm University in 1970 and his doctoral degree (fil. dr.) in 1987, with a study on Swedish romanticism; he had meanwhile been active as a literary critic, translator and journal editor, and was one of the introducers of the continental tradition of literary scholarship in Sweden. He is adjunct professor of Scandinavian Literature at the University of Aarhus in Denmark. He speaks Swedish, English, German, French and Russian fluently. Engdahl was a member of the "Kris" editorial staff. On 16 October 1997, Engdahl became a member of the Swedish Academy, elected to seat number 17 vacated by the death of Johannes Edfelt; on 1 June 1999, he succeeded Sture Allén as the Academy's permanent secretary, i.e. its executive member and spokesperson. As such, he had the annual task of announcing the recipient of the Nobel prize in literature to the public. On 20 December 2008 it was announced that after ten years Engdahl would step down as the Academy's permanent secretary on 1 June 2009. Between 1989 and 2014 he was married to Ebba Witt-Brattström, professor of literature at Södertörn University outside Stockholm. They have three sons. In October 2008, Engdahl told the "Associated Press" that the United States is "too isolated, too insular" to challenge Europe as "the center of the literary world" and that "they don't translate enough and don't really participate in the big dialogue of literature ...That ignorance is restraining." At the time of the interview, no American author had received a Nobel Prize in Literature since 1993. His comments generated controversy across the Atlantic, with Harold Augenbraum, head of the U.S. National Book Foundation, offering to send him a reading list. Engdahl was reported "very surprised" that the American reaction was "so violent". He did not think that what he said was "that derogatory or sensational" and conceded his comments may have been "perhaps a bit too generalizing". In April 2018, the New York Times reported that Engdahl had railed against former Academy members who left following allegations of sexual abuse by Jean-Claude Arnault.
https://en.wikipedia.org/wiki?curid=13448
Hebrew language Hebrew is a Northwest Semitic language native to Israel. In 2013, Modern Hebrew was spoken by over nine million people worldwide. Historically, it is regarded as the language of the Israelites and their ancestors, although the language was not referred to by the name "Hebrew" in the Tanakh itself. The earliest examples of written Paleo-Hebrew date from the 10th century BCE. Hebrew belongs to the West Semitic branch of the Afroasiatic language family. Hebrew is the only Canaanite language still spoken and the only truly successful example of a revived dead language. Hebrew ceased to be an everyday spoken language somewhere between 200 and 400 CE, declining since the aftermath of the Bar Kokhba revolt. Aramaic and, to a lesser extent, Greek were already in use as international languages, especially among elites and immigrants. Hebrew survived into the medieval period as the language of Jewish liturgy, rabbinic literature, intra-Jewish commerce and poetry. With the rise of Zionism in the 19th century, it was revived as a spoken and literary language, becoming the main language of the Yishuv and subsequently of the State of Israel. According to "Ethnologue", in 1998, Hebrew was the language of five million people worldwide. After Israel, the United States has the second-largest Hebrew-speaking population, with about 220,000 fluent speakers, mostly from Israel. Modern Hebrew is the official language of the State of Israel, while premodern Hebrew is used for prayer or study in Jewish communities around the world today. The Samaritan dialect is also the liturgical tongue of the Samaritans, while modern Hebrew or Arabic is their vernacular. As a foreign language, it is studied mostly by Jews and students of Judaism and Israel and by archaeologists and linguists specializing in the Middle East and its civilizations, as well as by theologians in Christian seminaries. Nearly all of the Hebrew Bible is written in Biblical Hebrew, with much of its present form in the dialect that scholars believe flourished around the 6th century BCE, around the time of the Babylonian captivity. For this reason, Hebrew has been referred to by Jews as "Lashon Hakodesh", "the Holy Language", since ancient times. The modern English word "Hebrew" is derived from Old French "Ebrau", via Latin from the Greek "Ἑβραῖος" ("Hebraîos") and Aramaic "'ibrāy", all ultimately derived from Biblical Hebrew "Ivri", one of several names for the Israelite (Jewish and Samaritan) people (Hebrews). It is traditionally understood to be an adjective based on the name of Abraham's ancestor, Eber, mentioned in the Book of Genesis. The name is believed to be based on the Semitic root "ʕ-b-r" meaning "beyond", "other side", "across"; interpretations of the term "Hebrew" generally render its meaning as roughly "from the other side [of the river/desert]"—i.e., an exonym for the inhabitants of the land of Israel/Judah, perhaps from the perspective of Mesopotamia, Phoenicia or the Transjordan (with the river referenced perhaps the Euphrates, Jordan or Litani; or maybe the northern Arabian Desert between Babylonia and Canaan). Compare cognate Assyrian "ebru", of identical meaning. One of the earliest references to the language's name as "Ivrit" is found in the prologue to the Book of Ben Sira, from the 2nd century BCE. The Hebrew Bible does not use the term "Hebrew" in reference to the language of the Hebrew people; its later historiography, in the Book of Kings, refers to it as ‏יְהוּדִית‎ Yehudit 'Judahite (language)'.
Hebrew belongs to the Canaanite group of languages. Canaanite languages are a branch of the Northwest Semitic family of languages. According to Avraham Ben-Yosef, Hebrew flourished as a spoken language in the Kingdoms of Israel and Judah during the period from about 1200 to 586 BCE. Scholars debate the degree to which Hebrew was a spoken vernacular in ancient times following the Babylonian exile, when the predominant international language in the region was Old Aramaic. Hebrew was extinct as a colloquial language by Late Antiquity, but it continued to be used as a literary language and as the liturgical language of Judaism, evolving various dialects of literary Medieval Hebrew, until its revival as a spoken language in the late 19th century. In July 2008, Israeli archaeologist Yossi Garfinkel discovered a ceramic shard at Khirbet Qeiyafa that he claimed may be the earliest Hebrew writing yet discovered, dating from around 3,000 years ago. Hebrew University archaeologist Amihai Mazar said that the inscription was "proto-Canaanite" but cautioned that "The differentiation between the scripts, and between the languages themselves in that period, remains unclear," and suggested that calling the text Hebrew might be going too far. The Gezer calendar also dates back to the 10th century BCE at the beginning of the Monarchic Period, the traditional time of the reign of David and Solomon. Classified as Archaic Biblical Hebrew, the calendar presents a list of seasons and related agricultural activities. The Gezer calendar (named after the city near which it was found) is written in an old Semitic script, akin to the Phoenician one that, through the Greeks and Etruscans, later became the Roman script. The Gezer calendar is written without any vowels, and it does not use consonants to imply vowels even in the places in which later Hebrew spelling requires them. Numerous older tablets have been found in the region with similar scripts written in other Semitic languages, for example, Proto-Sinaitic. It is believed that the original shapes of the script go back to Egyptian hieroglyphs, though the phonetic values are instead inspired by the acrophonic principle. The common ancestor of Hebrew and Phoenician is called Canaanite, and was the first to use a Semitic alphabet distinct from that of Egyptian. One ancient document is the famous Moabite Stone, written in the Moabite dialect; the Siloam Inscription, found near Jerusalem, is an early example of Hebrew. Less ancient samples of Archaic Hebrew include the ostraca found near Lachish, which describe events preceding the final capture of Jerusalem by Nebuchadnezzar and the Babylonian captivity of 586 BCE. In its widest sense, Biblical Hebrew refers to the spoken language of ancient Israel flourishing between the 10th century BCE and the turn of the 4th century CE. It comprises several evolving and overlapping dialects. The phases of Classical Hebrew are often named after important literary works associated with them. Sometimes the above phases of spoken Classical Hebrew are simplified into "Biblical Hebrew" (including several dialects from the 10th century BCE to 2nd century BCE and extant in certain Dead Sea Scrolls) and "Mishnaic Hebrew" (including several dialects from the 3rd century BCE to the 3rd century CE and extant in certain other Dead Sea Scrolls).
However, today most Hebrew linguists classify Dead Sea Scroll Hebrew as a set of dialects evolving out of Late Biblical Hebrew and into Mishnaic Hebrew, thus including elements from both but remaining distinct from either. By the start of the Byzantine Period in the 4th century CE, Classical Hebrew had ceased to be a regularly spoken language, roughly a century after the publication of the Mishnah, apparently declining since the aftermath of the catastrophic Bar Kokhba War around 135 CE. In the early 6th century BCE, the Neo-Babylonian Empire conquered the ancient Kingdom of Judah, destroying much of Jerusalem and exiling its population far to the east, in Babylon. During the Babylonian captivity, many Israelites learned Aramaic, the closely related Semitic language of their captors. Thus for a significant period, the Jewish elite became influenced by Aramaic. After Cyrus the Great conquered Babylon, he allowed the Jewish people to return from captivity. As a result, a local version of Aramaic came to be spoken in Israel alongside Hebrew. By the beginning of the Common Era, Aramaic was the primary colloquial language of Samarian, Babylonian and Galilean Jews, and western and intellectual Jews spoke Greek, but a form of so-called Rabbinic Hebrew continued to be used as a vernacular in Judea until it was displaced by Aramaic, probably in the 3rd century CE. Certain Sadducee, Pharisee, Scribe, Hermit, Zealot and Priest classes maintained an insistence on Hebrew, and all Jews maintained their identity with Hebrew songs and simple quotations from Hebrew texts. While there is no doubt that at a certain point, Hebrew was displaced as the everyday spoken language of most Jews, and that its chief successor in the Middle East was the closely related Aramaic language, then Greek, scholarly opinion on the exact dating of that shift has varied considerably. In the first half of the 20th century, most scholars followed Geiger and Dalman in thinking that Aramaic became a spoken language in the land of Israel as early as the beginning of Israel's Hellenistic Period in the 4th century BCE, and that as a corollary Hebrew ceased to function as a spoken language around the same time. Segal, Klausner and Ben Yehuda are notable exceptions to this view. During the latter half of the 20th century, accumulating archaeological evidence and especially linguistic analysis of the Dead Sea Scrolls disproved that view. The Dead Sea Scrolls, uncovered in 1946–1948 near Qumran, revealed ancient Jewish texts overwhelmingly in Hebrew, not Aramaic. The Qumran scrolls indicate that Hebrew texts were readily understandable to the average Israelite, and that the language had evolved since Biblical times as spoken languages do. Recent scholarship recognizes that reports of Jews speaking in Aramaic indicate a multilingual society, not necessarily the primary language spoken. Alongside Aramaic, Hebrew co-existed within Israel as a spoken language. Most scholars now date the demise of Hebrew as a spoken language to the end of the Roman Period, or about 200 CE. It continued on as a literary language down through the Byzantine Period from the 4th century CE. The exact roles of Aramaic and Hebrew remain hotly debated. A trilingual scenario has been proposed for the land of Israel.
Hebrew functioned as the local mother tongue with powerful ties to Israel's history, origins and golden age and as the language of Israel's religion; Aramaic functioned as the international language with the rest of the Middle East; and eventually Greek functioned as another international language with the eastern areas of the Roman Empire. William Schniedewind argues that after waning in the Persian Period, the religious importance of Hebrew grew in the Hellenistic and Roman periods, and cites epigraphical evidence that Hebrew survived as a vernacular language — though both its grammar and its writing system had been substantially influenced by Aramaic. According to another summary, Greek was the language of government, Hebrew the language of prayer, study and religious texts, and Aramaic was the language of legal contracts and trade. There was also a geographic pattern: according to Spolsky, by the beginning of the Common Era, "Judeo-Aramaic was mainly used in Galilee in the north, Greek was concentrated in the former colonies and around governmental centers, and Hebrew monolingualism continued mainly in the southern villages of Judea." In other words, "in terms of dialect geography, at the time of the tannaim Palestine could be divided into the Aramaic-speaking regions of Galilee and Samaria and a smaller area, Judaea, in which Rabbinic Hebrew was used among the descendants of returning exiles." In addition, it has been surmised that Koine Greek was the primary vehicle of communication in coastal cities and among the upper class of Jerusalem, while Aramaic was prevalent in the lower class of Jerusalem, but not in the surrounding countryside. After the suppression of the Bar Kokhba revolt in the 2nd century CE, Judaeans were forced to disperse. Many relocated to Galilee, so most remaining native speakers of Hebrew at that last stage would have been found in the north. The Christian New Testament contains some Semitic place names and quotes. The language of such Semitic glosses (and in general the language spoken by Jews in scenes from the New Testament) is often referred to as "Hebrew" in the text, although this term is often re-interpreted as referring to Aramaic instead and is rendered accordingly in recent translations. Nonetheless, these glosses can be interpreted as Hebrew as well. It has been argued that Hebrew, rather than Aramaic or Koine Greek, lay behind the composition of the Gospel of Matthew. (See the Hebrew Gospel hypothesis or Language of Jesus for more details on Hebrew and Aramaic in the gospels.) The term "Mishnaic Hebrew" generally refers to the Hebrew dialects found in the Talmud, excepting quotations from the Hebrew Bible. The dialects organize into Mishnaic Hebrew (also called Tannaitic Hebrew, Early Rabbinic Hebrew, or Mishnaic Hebrew I), which was a spoken language, and Amoraic Hebrew (also called Late Rabbinic Hebrew or Mishnaic Hebrew II), which was a literary language. The earlier section of the Talmud is the Mishnah that was published around 200 CE, although many of the stories take place much earlier, and was written in the earlier Mishnaic dialect. The dialect is also found in certain Dead Sea Scrolls. Mishnaic Hebrew is considered to be one of the dialects of Classical Hebrew that functioned as a living language in the land of Israel. A transitional form of the language occurs in the other works of Tannaitic literature dating from the century beginning with the completion of the Mishnah. These include the halachic Midrashim (Sifra, Sifre, Mechilta etc.) 
and the expanded collection of Mishnah-related material known as the Tosefta. The Talmud contains excerpts from these works, as well as further Tannaitic material not attested elsewhere; the generic term for these passages is "Baraitot". The dialect of all these works is very similar to Mishnaic Hebrew. About a century after the publication of the Mishnah, Mishnaic Hebrew fell into disuse as a spoken language. The later section of the Talmud, the Gemara, generally comments on the Mishnah and Baraitot in two forms of Aramaic. Nevertheless, Hebrew survived as a liturgical and literary language in the form of later Amoraic Hebrew, which sometimes occurs in the text of the Gemara. Hebrew was always regarded as the language of Israel's religion, history and national pride, and after it faded as a spoken language, it continued to be used as a "lingua franca" among scholars and Jews traveling in foreign countries. After the 2nd century CE when the Roman Empire exiled most of the Jewish population of Jerusalem following the Bar Kokhba revolt, they adapted to the societies in which they found themselves, yet letters, contracts, commerce, science, philosophy, medicine, poetry and laws continued to be written mostly in Hebrew, which adapted by borrowing and inventing terms. After the Talmud, various regional literary dialects of Medieval Hebrew evolved. The most important is Tiberian Hebrew or Masoretic Hebrew, a local dialect of Tiberias in Galilee that became the standard for vocalizing the Hebrew Bible and thus still influences all other regional dialects of Hebrew. This Tiberian Hebrew from the 7th to 10th century CE is sometimes called "Biblical Hebrew" because it is used to pronounce the Hebrew Bible; however, properly it should be distinguished from the historical Biblical Hebrew of the 6th century BCE, whose original pronunciation must be reconstructed. Tiberian Hebrew incorporates the remarkable scholarship of the Masoretes (from "masoret" meaning "tradition"), who added vowel points and grammar points to the Hebrew letters to preserve much earlier features of Hebrew, for use in chanting the Hebrew Bible. The Masoretes inherited a biblical text whose letters were considered too sacred to be altered, so their markings were in the form of pointing in and around the letters. The Syriac alphabet, precursor to the Arabic alphabet, also developed vowel pointing systems around this time. The Aleppo Codex, a Hebrew Bible with the Masoretic pointing, was written in the 10th century, likely in Tiberias, and survives to this day. It is perhaps the most important Hebrew manuscript in existence. During the Golden age of Jewish culture in Spain, important work was done by grammarians in explaining the grammar and vocabulary of Biblical Hebrew; much of this was based on the work of the grammarians of Classical Arabic. Important Hebrew grammarians were Judah ben David Hayyuj, Jonah ibn Janah, Abraham ibn Ezra and later (in Provence), David Kimhi. A great deal of poetry was written, by poets such as Dunash ben Labrat, Solomon ibn Gabirol, Judah ha-Levi, Moses ibn Ezra and Abraham ibn Ezra, in a "purified" Hebrew based on the work of these grammarians, and in Arabic quantitative or strophic meters. This literary Hebrew was later used by Italian Jewish poets. 
The need to express scientific and philosophical concepts from Classical Greek and Medieval Arabic motivated Medieval Hebrew to borrow terminology and grammar from these other languages, or to coin equivalent terms from existing Hebrew roots, giving rise to a distinct style of philosophical Hebrew. This is used in the translations made by the Ibn Tibbon family. (Original Jewish philosophical works were usually written in Arabic.) Another important influence was Maimonides, who developed a simple style based on Mishnaic Hebrew for use in his law code, the Mishneh Torah. Subsequent rabbinic literature is written in a blend between this style and the Aramaized Rabbinic Hebrew of the Talmud. Hebrew persevered through the ages as the main written language of all Jewish communities around the world, for a large range of uses—not only liturgy, but also poetry, philosophy, science and medicine, commerce, daily correspondence and contracts. There have been many deviations from this generalization, such as Bar Kokhba's letters to his lieutenants, which were mostly in Aramaic, and Maimonides' writings, which were mostly in Arabic; but overall, Hebrew did not cease to be used for such purposes. For example, the first printing press in the Middle East, in Safed (modern Israel), produced a small number of books in Hebrew in 1577, which were then sold to the nearby Jewish world. This meant not only that well-educated Jews in all parts of the world could correspond in a mutually intelligible language, and that books and legal documents published or written in any part of the world could be read by Jews in all other parts, but also that an educated Jew could travel and converse with Jews in distant places, just as priests and other educated Christians could converse in Latin. For example, Rabbi Avraham Danzig wrote the "Chayei Adam" in Hebrew, as opposed to Yiddish, as a guide to "Halacha" for "the average 17-year-old" (Ibid. Introduction 1). Similarly, the purpose of the Chofetz Chaim, Rabbi Yisrael Meir Kagan, in writing the "Mishna Berurah" was to "produce a work that could be studied daily so that Jews might know the proper procedures to follow minute by minute". The work was nevertheless written in Talmudic Hebrew and Aramaic, since "the ordinary Jew [of Eastern Europe] of a century ago, was fluent enough in this idiom to be able to follow the Mishna Berurah without any trouble." Hebrew has been revived several times as a literary language, most significantly by the Haskalah (Enlightenment) movement of early and mid-19th-century Germany. In the early 19th century, a form of spoken Hebrew had emerged in the markets of Jerusalem between Jews of different linguistic backgrounds to communicate for commercial purposes. This Hebrew dialect was to a certain extent a pidgin. Near the end of that century the Jewish activist Eliezer Ben-Yehuda, owing to the ideology of the national revival ("Shivat Tziyon", later Zionism), began reviving Hebrew as a modern spoken language. Eventually, as a result of the local movement he created, but more significantly as a result of the new groups of immigrants known under the name of the Second Aliyah, it replaced a score of languages spoken by Jews at that time. Those languages were Jewish dialects of local languages, including Judaeo-Spanish (also called "Judezmo" and "Ladino"), Yiddish, Judeo-Arabic and Bukhori (Tajiki), or local languages spoken in the Jewish diaspora such as Russian, Persian and Arabic.
The major result of the literary work of the Hebrew intellectuals along the 19th century was a lexical modernization of Hebrew. New words and expressions were adapted as neologisms from the large corpus of Hebrew writings since the Hebrew Bible, or borrowed from Arabic (mainly by Eliezer Ben-Yehuda) and older Aramaic and Latin. Many new words were either borrowed from or coined after European languages, especially English, Russian, German, and French. Modern Hebrew became an official language in British-ruled Palestine in 1921 (along with English and Arabic), and then in 1948 became an official language of the newly declared State of Israel. Hebrew is the most widely spoken language in Israel today. In the Modern Period, from the 19th century onward, the literary Hebrew tradition revived as the spoken language of modern Israel, called variously "Israeli Hebrew", "Modern Israeli Hebrew", "Modern Hebrew", "New Hebrew", "Israeli Standard Hebrew", "Standard Hebrew" and so on. Israeli Hebrew exhibits some features of Sephardic Hebrew from its local Jerusalemite tradition but adapts it with numerous neologisms, borrowed terms (often technical) from European languages and adopted terms (often colloquial) from Arabic. The literary and narrative use of Hebrew was revived beginning with the Haskalah movement. The first secular periodical in Hebrew, "HaMe'assef" (The Gatherer), was published by maskilim in Königsberg (today's Kaliningrad) from 1783 onwards. In the mid-19th century, publications of several Eastern European Hebrew-language newspapers (e.g. "Hamagid", founded in Ełk in 1856) multiplied. Prominent poets were Hayim Nahman Bialik and Shaul Tchernichovsky; there were also novels written in the language. The revival of the Hebrew language as a mother tongue was initiated in the late 19th century by the efforts of Eliezer Ben-Yehuda. He joined the Jewish national movement and in 1881 immigrated to Palestine, then a part of the Ottoman Empire. Motivated by the surrounding ideals of renovation and rejection of the diaspora "shtetl" lifestyle, Ben-Yehuda set out to develop tools for making the literary and liturgical language into everyday spoken language. However, his brand of Hebrew followed norms that had been replaced in Eastern Europe by different grammar and style, in the writings of people like Ahad Ha'am and others. His organizational efforts and involvement with the establishment of schools and the writing of textbooks pushed the vernacularization activity into a gradually accepted movement. It was not, however, until the 1904–1914 Second Aliyah that Hebrew caught real momentum in Ottoman Palestine, with the more highly organized enterprises set forth by the new group of immigrants. When the British Mandate of Palestine recognized Hebrew as one of the country's three official languages (alongside English and Arabic) in 1922, its new formal status contributed to its diffusion. A constructed modern language with a truly Semitic vocabulary and written appearance, although often European in phonology, was to take its place among the current languages of the nations. While many saw his work as fanciful or even blasphemous (because Hebrew was the holy language of the Torah and therefore some thought that it should not be used to discuss everyday matters), many soon understood the need for a common language amongst Jews of the British Mandate who at the turn of the 20th century were arriving in large numbers from diverse countries and speaking different languages.
A Committee of the Hebrew Language was established; after the founding of the State of Israel, it became the Academy of the Hebrew Language. The results of Ben-Yehuda's lexicographical work were published in a dictionary ("The Complete Dictionary of Ancient and Modern Hebrew"). The seeds of Ben-Yehuda's work fell on fertile ground, and by the beginning of the 20th century, Hebrew was well on its way to becoming the main language of the Jewish population of both Ottoman and British Palestine. At the time, members of the Old Yishuv and a very few Hasidic sects, most notably those under the auspices of Satmar, refused to speak Hebrew and spoke only Yiddish. In the Soviet Union, the use of Hebrew, along with other Jewish cultural and religious activities, was suppressed. Soviet authorities considered the use of Hebrew "reactionary" since it was associated with Zionism, and the teaching of Hebrew at primary and secondary schools was officially banned by the People's Commissariat for Education as early as 1919, as part of an overall agenda aiming to secularize education (the language itself did not cease to be studied at universities for historical and linguistic purposes). The official ordinance stated that Yiddish, being the spoken language of the Russian Jews, should be treated as their only national language, while Hebrew was to be treated as a foreign language. Hebrew books and periodicals ceased to be published and were seized from the libraries, although liturgical texts were still published until the 1930s. Despite numerous protests, a policy of suppression of the teaching of Hebrew operated from the 1930s on. Later in the 1980s in the USSR, Hebrew studies reappeared due to people struggling for permission to go to Israel (refuseniks). Several of the teachers were imprisoned, e.g. Yosef Begun, Ephraim Kholmyansky, Yevgeny Korostyshevsky and others responsible for a Hebrew learning network connecting many cities of the USSR. Standard Hebrew, as developed by Eliezer Ben-Yehuda, was based on Mishnaic spelling and Sephardi Hebrew pronunciation. However, the earliest speakers of Modern Hebrew had Yiddish as their native language and often introduced calques from Yiddish and phono-semantic matchings of international words. Despite using Sephardic Hebrew pronunciation as its primary basis, modern Israeli Hebrew has adapted to Ashkenazi Hebrew phonology in some respects. The vocabulary of Israeli Hebrew is much larger than that of earlier periods. In Israel, Modern Hebrew is currently taught in institutions called Ulpanim (singular: Ulpan). There are government-owned, as well as private, Ulpanim offering online courses and face-to-face programs. Modern Hebrew is the primary official language of the State of Israel. There are about 9 million Hebrew speakers worldwide, of whom 7 million speak it fluently. Currently, 90% of Israeli Jews are proficient in Hebrew, and 70% are highly proficient. Some 60% of Israeli Arabs are also proficient in Hebrew, and 30% report having a higher proficiency in Hebrew than in Arabic. In total, about 53% of the Israeli population speaks Hebrew as a native language, while most of the rest speak it fluently. However, in 2013 Hebrew was the native language of only 49% of Israelis over the age of 20, with Russian, Arabic, French, English, Yiddish and Ladino being the native tongues of most of the rest. Some 26% of immigrants from the former Soviet Union and 12% of Arabs reported speaking Hebrew poorly or not at all.
Steps have been taken to keep Hebrew the primary language of use, and to prevent large-scale incorporation of English words into the Hebrew vocabulary. The Academy of the Hebrew Language of the Hebrew University of Jerusalem currently coins about 2,000 new Hebrew words each year for modern concepts by finding an original Hebrew word that captures the meaning, as an alternative to incorporating more English words into Hebrew vocabulary. The Haifa municipality has banned officials from using English words in official documents, and is fighting to stop businesses from using only English signs to market their services. In 2012, a Knesset bill for the preservation of the Hebrew language was proposed, which includes the stipulation that all signage in Israel must first and foremost be in Hebrew, as with all speeches by Israeli officials abroad. The bill's author, MK Akram Hasson, stated that the bill was proposed as a response to Hebrew "losing its prestige" and children incorporating more English words into their vocabulary. Hebrew is also an official national minority language in Poland, since 6 January 2005. Biblical Hebrew had a typical Semitic consonant inventory, with pharyngeal /ʕ ħ/, a series of "emphatic" consonants (possibly ejective, but this is debated), lateral fricative /ɬ/, and in its older stages also uvular /χ ʁ/. /χ ʁ/ merged into /ħ ʕ/ in later Biblical Hebrew, and /b ɡ d k p t/ underwent allophonic spirantization to [v ɣ ð x f θ] (known as begadkefat). The earliest Biblical Hebrew vowel system contained the Proto-Semitic vowels /a aː i iː u uː/ as well as /oː/, but this system changed dramatically over time. By the time of the Dead Sea Scrolls, /ɬ/ had shifted to /s/ in the Jewish traditions, though for the Samaritans it merged with /ʃ/ instead (Elisha Qimron, "Hebrew of the Dead Sea Scrolls", 1986, p. 29). The Tiberian reading tradition of the Middle Ages had the vowel system /a ɛ e i ɔ o u ă ɔ̆ ɛ̆/, though other Medieval reading traditions had fewer vowels. A number of reading traditions have been preserved in liturgical use. In Oriental (Sephardi and Mizrahi) Jewish reading traditions, the emphatic consonants are realized as pharyngealized, while the Ashkenazi (northern and eastern European) traditions have lost emphatics and pharyngeals (although according to Ashkenazi law, pharyngeal articulation is preferred over uvular or glottal articulation when representing the community in religious service such as prayer and Torah reading), and show the shift of /w/ to /v/. The Samaritan tradition has a complex vowel system that does not correspond closely to the Tiberian systems. Modern Hebrew pronunciation developed from a mixture of the different Jewish reading traditions, generally tending towards simplification. In line with Sephardi Hebrew pronunciation, emphatic consonants have shifted to their ordinary counterparts, /w/ to /v/, and [ɣ ð θ] are not present. Most Israelis today also merge /ʕ ħ/ with /ʔ χ/, do not have contrastive gemination, and pronounce /r/ as a uvular fricative [ʁ] or a voiced velar fricative [ɣ] rather than an alveolar trill, because of Ashkenazi Hebrew influences. The consonants /tʃ/ and /dʒ/ have become phonemic due to loan words, and /w/ has similarly been re-introduced.
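As a rough illustration of the begadkefat alternation mentioned above, the following sketch (an intentionally simplified assumption: the real conditioning also involves gemination, the shva, and morphological boundaries) maps the six stops to their post-vocalic fricative allophones:

```python
# Simplified begadkefat spirantization: /b g d k p t/ surface as
# [v ɣ ð x f θ] after a vowel (many real conditions are ignored here).

SPIRANTIZED = {"b": "v", "g": "ɣ", "d": "ð", "k": "x", "p": "f", "t": "θ"}
VOWELS = set("aeiou")

def spirantize(segments: list[str]) -> list[str]:
    """Replace a begadkefat stop with its fricative after a vowel."""
    out = []
    for i, seg in enumerate(segments):
        if seg in SPIRANTIZED and i > 0 and segments[i - 1] in VOWELS:
            out.append(SPIRANTIZED[seg])
        else:
            out.append(seg)
    return out

# Toy romanized forms, not attested transcriptions:
assert spirantize(list("katab")) == list("kaθav")  # stops weaken after vowels
assert spirantize(list("ba")) == list("ba")        # word-initial stop is kept
```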
Hebrew grammar is partly analytic, expressing such forms as dative, ablative and accusative using prepositional particles rather than grammatical cases. However, inflection plays a decisive role in the formation of verbs and nouns. For example, nouns have a construct state, called "smikhut", to denote the relationship of "belonging to": this is the converse of the genitive case of more inflected languages. Words in smikhut are often combined with hyphens. In modern speech, the use of the construct is sometimes interchangeable with the preposition "shel", meaning "of". There are many cases, however, where older declined forms are retained (especially in idiomatic expressions and the like), and "person"-enclitics are widely used to "decline" prepositions. Like all Semitic languages, the Hebrew language exhibits a pattern of stems consisting typically of "triliteral", or 3-consonant, consonantal roots, from which nouns, adjectives, and verbs are formed in various ways: e.g. by inserting vowels, doubling consonants, lengthening vowels and/or adding prefixes, suffixes or infixes. 4-consonant roots also exist and became more frequent in the modern language due to a process of coining verbs from nouns that are themselves constructed from 3-consonant verbs. Some triliteral roots lose one of their consonants in most forms and are called "Nehim" (Resting). Hebrew uses a number of one-letter prefixes that are added to words for various purposes. These are called inseparable prepositions or "Letters of Use". Such items include: the definite article "ha-" (="the"); the prepositions "be-" (="in"), "le-" (="to"; a shortened version of the preposition "el") and "mi-" (="from"; a shortened version of the preposition "min"); and the conjunctions "ve-" (="and"), "she-" (="that"; a shortened version of the Biblical conjunction "asher") and "ke-" (="as", "like"; a shortened version of the conjunction "kmo"). The vowel accompanying each of these letters may differ from those listed above, depending on the first letter or vowel following it. The rules governing these changes, hardly observed in colloquial speech as most speakers tend to employ the regular form, may be heard in more formal circumstances. For example, if a preposition is put before a word that begins with a moving Shva, then the preposition takes the vowel /i/ (and the initial consonant may be weakened): colloquial "be-kfar" (="in a village") corresponds to the more formal "bi-khfar". The definite article may be inserted between a preposition or a conjunction and the word it refers to, creating composite words like "mé-ha-kfar" (="from the village"). The latter also demonstrates the change in the vowel of "mi-". With "be", "le" and "ke", the definite article is assimilated into the prefix, which then becomes "ba", "la" or "ka". Thus *"be-ha-matos" becomes "ba-matos" (="in the plane"). Note that this does not happen to "mé" (the form of "min" or "mi-" used before the letter "he"), therefore "mé-ha-matos" is a valid form, which means "from the airplane". Like most other languages, the vocabulary of the Hebrew language is divided into verbs, nouns, adjectives and so on, and its sentence structure can be analyzed by terms like object, subject and so on.
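To make the root-and-pattern formation and the assimilating prefixes concrete, here is a small illustrative sketch in Python using romanized forms. The root k-t-v and its glosses are examples supplied here for illustration rather than taken from the text above, and real morphology involves many more alternations than this toy model captures:

```python
# Toy model of two mechanisms described above: interleaving a
# consonantal root with a pattern, and prefix + definite article fusion.

def apply_pattern(root: tuple[str, ...], pattern: str) -> str:
    """Fill each 'C' slot of a pattern with the next root consonant."""
    consonants = iter(root)
    return "".join(next(consonants) if ch == "C" else ch for ch in pattern)

# The triliteral root k-t-v ("writing") under two common patterns:
assert apply_pattern(("k", "t", "v"), "CoCeC") == "kotev"  # "(he) writes"
assert apply_pattern(("k", "t", "v"), "CaCuC") == "katuv"  # "written"

def prefix_definite(preposition: str, noun: str) -> str:
    """Attach an inseparable preposition to a definite noun.

    With be-, le- and ke- the article ha- assimilates into the prefix
    (be + ha-matos -> ba-matos); with mé- it does not (mé-ha-matos).
    """
    if preposition in ("be", "le", "ke"):
        return preposition[0] + "a-" + noun
    return preposition + "-ha-" + noun

assert prefix_definite("be", "matos") == "ba-matos"     # "in the plane"
assert prefix_definite("mé", "matos") == "mé-ha-matos"  # "from the airplane"
```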
Modern Hebrew is written from right to left using the Hebrew alphabet, which is an "impure" abjad, or consonant-only script, of 22 letters. The ancient paleo-Hebrew alphabet is similar to those used for Canaanite and Phoenician. Modern scripts are based on the "square" letter form, known as "Ashurit" (Assyrian), which was developed from the Aramaic script. A cursive Hebrew script is used in handwriting: the letters tend to be more circular in form when written in cursive, and sometimes vary markedly from their printed equivalents. The medieval version of the cursive script forms the basis of another style, known as Rashi script. When necessary, vowels are indicated by diacritic marks above or below the letter representing the syllabic onset, or by use of "matres lectionis", which are consonantal letters used as vowels. Further diacritics are used to indicate variations in the pronunciation of the consonants (e.g. "bet"/"vet", "shin"/"sin"); and, in some contexts, to indicate the punctuation, accentuation and musical rendition of Biblical texts (see Cantillation). Hebrew has always been used as the language of prayer and study, and the following pronunciation systems are found. Ashkenazi Hebrew, originating in Central and Eastern Europe, is still widely used in Ashkenazi Jewish religious services and studies in Israel and abroad, particularly in the Haredi and other Orthodox communities. It was influenced by the Yiddish language. Sephardi Hebrew is the traditional pronunciation of the Spanish and Portuguese Jews and of Sephardi Jews in the countries of the former Ottoman Empire, with the exception of Yemenite Hebrew. This pronunciation, in the form used by the Jerusalem Sephardic community, is the basis of the Hebrew phonology of Israeli native speakers. It was influenced by the Judezmo language. Mizrahi (Oriental) Hebrew is actually a collection of dialects spoken liturgically by Jews in various parts of the Arab and Islamic world. It was derived from the old Arabic language, and in some cases influenced by Sephardi Hebrew. The same claim is sometimes made for Yemenite Hebrew or "Temanit", which differs from other Mizrahi dialects by having a radically different vowel system and by distinguishing between diacritically marked consonants that are pronounced identically in other dialects (for example, gimel and ghimel). These pronunciations are still used in synagogue ritual and religious study, in Israel and elsewhere, mostly by people who are not native speakers of Hebrew, though some traditionalist Israelis use liturgical pronunciations in prayer. Many synagogues in the diaspora, even though Ashkenazi by rite and by ethnic composition, have adopted the "Sephardic" pronunciation in deference to Israeli Hebrew. However, in many British and American schools and synagogues, this pronunciation retains several elements of its Ashkenazi substrate, especially the distinction between tsere and segol.
https://en.wikipedia.org/wiki?curid=13450
Horror film A horror film is a film that seeks to elicit fear for entertainment purposes. Initially inspired by literature from authors such as Edgar Allan Poe, Bram Stoker, and Mary Shelley, horror has existed as a film genre for more than a century. The macabre and the supernatural are frequent themes. Horror may also overlap with the fantasy, supernatural fiction, and thriller genres. Horror films often aim to evoke viewers' nightmares, fears, revulsions and terror of the unknown. Plots within the horror genre often involve the intrusion of an evil force, event, or personage into the everyday world. Prevalent elements include ghosts, extraterrestrials, vampires, werewolves, demons, Satanism, evil clowns, gore, torture, vicious animals, evil witches, monsters, giant monsters, zombies, cannibalism, psychopaths, natural, ecological or man-made disasters, and serial killers. Some sub-genres of horror film include comedy horror, folk horror, body horror, found footage, holiday horror, psychological horror, science fiction horror, slasher, supernatural horror, gothic horror, natural horror, zombie film, and teen horror. The first depictions of the supernatural on screen appeared in several of the short silent films created by the French pioneer filmmaker Georges Méliès in the late 1890s. The best known of these early supernatural-based works is the two-and-a-half-minute short film "Le Manoir du Diable" (1896), known in English both as "The Haunted Castle" and as "The House of the Devil". The film is sometimes credited as being the first ever horror film. In "The Haunted Castle", a mischievous devil appears inside a medieval castle where he harasses the visitors. Méliès' other popular horror film is "La Caverne maudite" (1898), which translates literally as "the accursed cave". The film, also known by its English title "The Cave of the Demons", tells the story of a man stumbling upon a cave that is populated by the spirits and skeletons of people who died there. Méliès would also make other short films that historians now consider horror-comedies. "Une nuit terrible" (1896), which translates to "A Terrible Night", tells the story of a man who tries to get a good night's sleep but ends up wrestling a giant spider. His other film, "L'auberge ensorcelée" (1897), or "The Bewitched Inn", features a story of a hotel guest being pranked and tormented by an unseen presence. In 1897, the photographer turned director George Albert Smith created "The X-Ray Fiend" (1897), a horror-comedy trick film that came out a mere two years after X-rays were discovered. The film shows a couple of skeletons courting each other. An audience full of people unaccustomed to seeing moving skeletons on screen would have found it frightening and otherworldly. The next year, Smith created the short film "Photographing a Ghost" (1898), considered a precursor to the paranormal investigation subgenre. The film portrays three men attempting to photograph a ghost, only to fail time and again as the ghost eludes the men and throws chairs at them. Japan also made early forays into the horror genre. In 1898, a Japanese film company called Konishi Honten released two horror films, both written by Ejiro Hatta: "Shinin No Sosei" (Resurrection of a Corpse) and "Bake Jizo" (Jizo the Spook). The film "Shinin No Sosei" told the story of a dead man who comes back to life after having fallen from a coffin that two men were carrying.
The writer Hatta played the dead man, while the coffin-bearers were played by Konishi Honten employees. Though there are no records of the cast, crew, or plot of "Bake Jizo", it was likely based on the Japanese legend of Jizō statues, believed to provide safety and protection to children. In Japan, Jizō is a deity who is seen as the guardian of children, particularly children who have died before their parents. Jizō has been worshiped as the guardian of the souls of "mizuko", namely stillborn, miscarried, or aborted fetuses. The presence of the word bake—which can be translated to "spook", "ghost", or "phantom"—may imply a haunted or possessed statue. The Spanish filmmaker Segundo de Chomón was also one of the most significant directors of early silent filmmaking. He was popular for his frequent camera tricks and optical illusions, an innovation that contributed heavily to the popularity of trick films in the period. His famous works include "Satán se divierte" (1907), which translates to "Satan Having Fun" or "Satan at Play"; "La casa hechizada" (1908), or "The House of Ghosts", considered to be one of the earliest cinematic depictions of a haunted house premise; and "Le spectre rouge" (1907), or "The Red Spectre", a collaboration with the French director Ferdinand Zecca about a demonic magician who attempts to perform his act in a mysterious grotto. The Selig Polyscope Company in the United States produced one of the first film adaptations of a horror-based novel. In 1908, the company produced the film "Dr. Jekyll and Mr. Hyde", directed by Otis Turner and starring Hobart Bosworth in the lead role. The film is, however, now considered a lost film. The story was based on Robert Louis Stevenson's classic gothic novella "Strange Case of Dr Jekyll and Mr Hyde", published in 1886, about a man who transforms his personality between two contrasting personas. (The book tells the classic story of a man with an unpredictably dual nature: usually very good, but sometimes shockingly evil as well.) Georges Méliès also liked adapting the Faust legend for his films; in fact, the French filmmaker produced at least six variations of the German legend of the man who made a pact with the devil. His notable Faust films include "Faust aux enfers" (1903), known primarily by its English titles "The Damnation of Faust" and "Faust in Hell". It is the filmmaker's third film adaptation of the Faust legend. In it, Méliès took inspiration from Hector Berlioz's Faust opera, but the film pays less attention to the story and more to the special effects that represent a tour of hell. The film takes advantage of stage machinery techniques and features special effects such as pyrotechnics, substitution splices, superimpositions on black backgrounds, and dissolves. Méliès then made a sequel to that film called "Damnation du docteur Faust" (1904), released in the U.S. as "Faust and Marguerite". This time, the film was based on the opera by Charles Gounod. Méliès' other devil-inspired films in this period include "Les quat'cents farces du diable" (1906), known in English as "The Merry Frolics of Satan" or "The 400 Tricks of the Devil", a tale about an engineer who barters with the Devil for superhuman powers and is forced to face the consequences. Méliès would also make other horror-based short films not inspired by Faust, most notably the fantastical and unsettling "Le papillon fantastique" (1909), in which a magician turns a butterfly woman into a spider beast.
As the 19th century gave way to the 20th, artists and engineers alike were pushing the boundaries of film. Méliès himself first achieved fame as a magician. At the time, stage magicians entertained large crowds with illusions and magic tricks, decking out their stages with elaborate sets, costumes, and characters. While filmmakers like the Lumière brothers were tinkering with motion picture devices and shooting documentary-like films, Méliès, and to an extent Segundo de Chomón as well, were developing magic tricks on film. They created sophisticated sight gags and theatrical special effects to entertain or scare the audience. In his autobiography, Méliès recalled a day when he was capturing footage on a Paris street and his camera jammed. Frustrated, he fiddled with the hand crank, fixed the problem, and started shooting again. When he developed the film later and played it back, he discovered a new trick. The shot started with people walking, children skipping, and a horse-drawn omnibus trundling up the street. Then, in the blink of an eye, everything changed. Men turned into women, children were replaced by horses, and, spookiest of all, the omnibus full of workers changed into a hearse. Méliès had found a way to perform actual magic with editing: to fool an audience and pull off illusions he had never been able to achieve on stage. This was the birth of trick films. Most of the early films in cinema history consist of continuous shots of short skits or scenes from everyday life [e.g., "The Kiss" (1898) or "Train Pulling into a Station" (1896)]. Filmmakers making trick films attempted to do the impossible on screen: levitating heads, making people disappear, or turning them into skeletons. Trick films were silent films designed to showcase innovative special effects, a style of filmmaking developed by innovators such as Georges Méliès and Segundo de Chomón in their first cinematic experiments. In the first years of film, especially between 1898 and 1908, the trick film was one of the world's most popular film genres. Techniques explored in these trick films included slow motion and fast motion created by varying the camera cranking speed; the editing device called the substitution splice; and various in-camera effects, such as multiple exposure. Double exposures, especially, were used to show faded or ghostly images on screen. The spectacular nature of trick films lives on, especially in horror films. Trick films convey an energetic whimsy that makes impossible events seem to occur on screen; they are, in essence, films in which artists use camera techniques to create magic tricks or special effects that feel otherworldly. Other examples of trick films include "The Big Swallow" (1901), in which a man tries to swallow the audience, and "The Haunted Curiosity Shop" (1901), in which apparitions appear inside an antiques shop. In 1910, Edison Studios in the United States produced the first filmed version of Mary Shelley's 1818 classic Gothic novel "Frankenstein", the popular story of a scientist creating a hideous, sapient creature through a scientific experiment. Director J. Searle Dawley, who adapted the story to the screen for the first time, deliberately designed his "Frankenstein" (1910) to de-emphasize the horrific aspects of the story and focus on its mystical and psychological elements. Yet the macabre nature of its source material made the film synonymous with the horror film genre.
The United States continued producing films based on the 1886 Gothic novella "Strange Case of Dr Jekyll and Mr Hyde", a classic tale about a doctor whose evil persona emerges after he takes a transformative formula. The Thanhouser Film Corporation's one-reel "Dr. Jekyll and Mr. Hyde" (1912) was directed by Lucius Henderson and stars future director James Cruze in the title role. A year later, "Dr. Jekyll and Mr. Hyde" (1913) came out, this time independently produced by IMP (the future Universal Studios) and starring King Baggot as the doctor. In March 1911, the hour-long Italian silent film epic "L'Inferno" was screened at the Teatro Mercadante in Naples. The film was adapted from the first part of Dante Alighieri's "Divine Comedy" and took visual inspiration from Gustave Doré's haunting illustrations. It is regarded by many scholars as the finest film adaptation of any of Dante's works to date. The film became an international success and is arguably the first true blockbuster in all of cinema. "L'Inferno" was directed by three artists: Francesco Bertolini, Adolfo Padovan, and Giuseppe de Liguoro. Their film is well remembered for its stunning visualization of the nine circles of Hell and special effects that convey haunting visuals. The film presents a massive Lucifer with wings that stretch out behind him in front of a black void. He is seen devouring the Roman figures Brutus and Cassius in a display of double exposure and scale manipulation. According to critics, "L'Inferno" captures some of the manic, tortuous, and bizarre imagery and themes of Dante's complex masterwork. In the 1910s Georges Méliès continued producing his Faustian films, the most significant of this period being 1912's "Le Chevalier des Neiges" ("The Knight of the Snows"). It was Méliès' last film with Faustian themes and the last of many films in which the filmmaker appeared as the Devil. The film tells the story of a princess kidnapped by Satan and thrown into a dungeon. Her lover, the brave Knight of the Snows, must then go on a journey to rescue her. Special effects in the film were created with stage machinery, pyrotechnics, substitution splices, superimpositions, and dissolves. It is among the best examples of the trick films that Georges Méliès and Segundo de Chomón helped popularize. In 1912, French director Abel Gance released his short film "Le masque d'horreur" ("The Mask of Horror"). The film tells the story of a mad sculptor who searches for the perfect realization of "the mask of horror". He places himself in front of a mirror after smearing blood over himself with the glass of an oil lamp, then swallows a virulent poison to observe the effects of pain. In 1913, directors Stellan Rye and Paul Wegener made the German silent horror film "Der Student von Prag" ("The Student of Prague"), loosely based on a short story by Edgar Allan Poe. The film tells the story of a student who inadvertently makes a Faustian bargain. In the film, a student asks a stranger to turn him into a rich man. The stranger visits the student later in his dorm room and conjures up pieces of gold and a contract for him to sign. In return, the stranger is allowed to take anything he wants from the room. He chooses to take the student's mirror. Upon removing it from the wall, a doppelgänger steps out and causes trouble. (In Western culture, a doppelgänger is a supernatural or ghostly double or look-alike of a specific person.
It is usually seen as a harbinger of bad luck.) Cinematographer Guido Seeber utilized groundbreaking camera tricks to create the effect of the doppelgänger, using a mirror double to produce a seamless double exposure. The film was written by Hanns Heinz Ewers, a noted writer of horror and fantasy stories. His involvement with the screenplay lent a much-needed air of respectability to the fledgling art form of horror film and to German Expressionism. From November 1915 until June 1916, French writer-director Louis Feuillade released a weekly serial entitled "Les Vampires", in which he exploited the power of horror imagery to great effect. Consisting of ten parts, or episodes, and roughly seven hours long if combined, "Les Vampires" is considered to be one of the longest films ever made. The series tells the story of a criminal gang called the Vampires, who play upon their supernatural name and style to instill fear in the public and in the police who desperately want to put a stop to them. Regarded as Feuillade's legendary opus, "Les Vampires" is considered a precursor to movie thrillers and a close cousin to the surrealist movement. Paul Wegener followed up the success of "The Student of Prague" by adapting a story inspired by the ancient Jewish legend of the golem, an anthropomorphic being magically created entirely from clay or mud. Wegener teamed up with Henrik Galeen to create "Der Golem" (1915). The film, which is still partially lost, tells the story of an antiques dealer who finds a golem, a clay statue brought to life centuries before. The dealer resurrects the golem as a servant, but the golem falls in love with the antiques dealer's wife. As she does not return his love, the golem commits a series of murders. Wegener made a sequel two years later, this time teaming up with co-director Rochus Gliese on "Der Golem und die Tänzerin" (1917), or "The Golem and the Dancing Girl" as it is known in English. It is now considered a lost film. Wegener would make a third golem film three years after that to conclude his "Der Golem" trilogy. In 1919, Austrian director Richard Oswald released a German silent anthology horror film called "Unheimliche Geschichten", also known as "Eerie Tales" or "Uncanny Tales". In the film, a bookshop closes and the portraits of the Strumpet, Death, and the Devil come to life and amuse themselves by reading stories—about themselves, of course, in various guises and eras. The film is split into five stories: "The Apparition", "The Hand", "The Black Cat" (based on the Edgar Allan Poe short story), "The Suicide Club" (based on the Robert Louis Stevenson short story collection) and "Der Spuk" (which translates to "The Spectre" in English). The film has been described as the "critical link between the more conventional German mystery and detective films of the mid 1910s and the groundbreaking fantastic cinema of the early 1920s." Robert Wiene's 1920 "Das Cabinet des Dr. Caligari" ("The Cabinet of Dr. Caligari") became a worldwide success and had a lasting impact on the film world, particularly for horror. It was not so much the story as the style that distinguished it from other films: ""Dr. Caligari"'s settings, some simply painted on canvas backdrops, are weirdly distorted, with caricatures of narrow streets, misshapen walls, odd rhomboid windows, and leaning doorframes. Effects of light and shadow were rendered by painting black lines and patterns directly on the floors and walls of sets."
Critic Roger Ebert called it arguably "the first true horror film", and film reviewer Danny Peary called it cinema's first cult film and a precursor to arthouse films. Considered a classic, "The Cabinet of Dr. Caligari" helped draw worldwide attention to the artistic merit of German cinema and had a major influence on American films, particularly in the genres of horror and film noir, introducing techniques such as the twist ending and the unreliable narrator to the language of narrative film. Writing for the book "1001 Movies You Must See Before You Die", horror film critic Kim Newman called "The Cabinet of Dr. Caligari" "a major early entry in the horror genre, introducing images, themes, characters, and expressions that became fundamental to the likes of Tod Browning's "Dracula" and James Whale's "Frankenstein", both from 1931". "The Cabinet of Dr. Caligari" is also a leading example of what a German Expressionist film looks like. In October 1920, Paul Wegener teamed up with co-director Carl Boese to make the final Golem film, "Der Golem, wie er in die Welt kam", known in English as "The Golem: How He Came into the World". The final film in the "Der Golem" trilogy, "The Golem: How He Came into the World" (1920) is a prequel to the 1915 "Der Golem". In this film, Wegener stars as the golem, who frightens a young lady with whom he is infatuated. The film is the best known of the series, as it is the only one that is completely preserved. It is also a leading example of early German Expressionism. F. W. Murnau arguably made the first vampire-themed movie, "Nosferatu" (1922), an unauthorized adaptation of Bram Stoker's gothic horror novel "Dracula". In "Nosferatu", Murnau created some of cinema's most lasting and haunting imagery, famously involving the creeping shadow of Count Orlok, and helped popularize the Expressionist style in filmmaking. Many Expressionist works of this era emphasize a distorted reality that evokes the human psyche, and they have influenced the horror film genre. For most of the 1920s, German filmmakers like Wegener, Murnau, and Wiene would significantly influence later productions, not only in horror films but in filmmaking in general, becoming the leading innovators of the German Expressionist movement. The plots and stories of German Expressionist films often dealt with madness and insanity. Arthur Robison's "Schatten – Eine nächtliche Halluzination" (1923), literally "Shadows – a Nocturnal Hallucination", also known in English as "Warning Shadows", is another leading German Expressionist film. It tells the story of house guests at a manor who are given visions of what might happen if the manor's host, a count played by Fritz Kortner, remains jealous and the guests do not curb their advances toward his beautiful wife. Kortner's bulging eyes and twisted features are facets of a classic Expressionist performance style, as his unnatural feelings contort his face and body into something that appears other than human. In 1924, German filmmaker Paul Leni made another representative German Expressionist film with "Das Wachsfigurenkabinett", commonly known as "Waxworks". The horror film tells the story of a writer who accepts a job from a wax museum to write a series of stories on different controversial figures, including Ivan the Terrible and Jack the Ripper, in order to boost business.
Although "Waxworks" is often credited as a horror film, it is an anthology film that goes through several genres including a fantasy adventure, historical film, and horror film through its various episodes. "Waxworks" contain many elements present in a German Expressionist movie. The film features deep shadows, moving shapes, and warped staircases. The director said of the film, "I have tried to create sets so stylized that they evidence no idea of reality." "Waxworks" was director Paul Leni's last film in Germany before heading to Hollywood to make some of the most important horror films of the late silent era. According to "Wisecrack"'s episode on "How Horror Movies Changed", "the horror genre blossoms anywhere there was pain and national chaos. So it's more than fitting that the genre's real boom took place in the mega-depressing Post-World War I Germany. During the war (1914–1918), Germany banned all foreign films, inadvertently throwing all film nerds a boom. Combine that embargo with the general despair of the era, you'll see why German Expressionism took place." German Expressionism was a film genre that was "all about coping with economic and social fallout via dream-like horror films, filled with subjective shots, funky angles, high-contrast spooky lighting, and frequently, sympathetic monsters." Though the word "horror" to describe the film genre would not be used until the 1930s (when Universal Pictures began releasing their initial monster films), earlier American productions often relied on horror and gothic themes. Many of these early films were considered dark melodramas because of their stock characters and emotion-heavy plots that focused on romance, violence, suspense, and sentimentality. In 1923, Universal Pictures started producing movies based on Gothic Horror literature from authors like Victor Hugo and Edgar Allan Poe. This series of pictures from Universal Pictures have retroactively become the first phase of the studio's Universal Classic Monsters series that would continue for three more decades. Universal Pictures' classic monsters of the 1920s featured hideously deformed characters like Quasimodo, The Phantom, and Gwynplaine. The first film of the series was "The Hunchback of Notre Dame" (1923) starring Lon Chaney as the hunchback Quasimodo. The film was adapted from the classic French gothic novel of the same name written by Victor Hugo in 1833, about a horribly deformed bell ringer in the cathedral of Notre-Dame. The film elevated Chaney, already a well-known character actor, to full star status in Hollywood, and also helped set a standard for many later horror films. Two years later, Chaney stars as The Phantom who haunts the Paris Opera House in 1925's silent horror film, "The Phantom of the Opera", based on the mystery novel by Gaston Leroux published 15 years earlier. Roger Ebert said the film "creates beneath the opera one of the most grotesque places in the cinema, and Chaney's performance transforms an absurd character into a haunting one." Adrian Warren of PopMatters called the film "terrific: unsettling, beautifully shot and imbued with a dense and shadowy Gothic atmosphere". Included in the book "1001 Movies You Must See Before You Die", 1925's "The Phantom of the Opera" is lauded for Lon Chaney's masterful acting, Universal Pictures' incredible set design, and its many masterly moments including the unmasking of the tragic villain's disfigured skullface, so shocking that even the camera is terrified, going briefly out of focus. 
In 1927, German director Paul Leni directed the first of two films for Universal Pictures. His silent horror film "The Cat and the Canary" is the third film in the Universal Classic Monsters series and is considered "the cornerstone of Universal's school of horror." "The Cat and the Canary" is adapted from John Willard's black comedy play of the same name. The plot revolves around the death of a man and the reading of his will 20 years later. His family inherits his fortune, but when they spend the night in his haunted mansion they are stalked by a mysterious figure. Meanwhile, a lunatic known as "the Cat" escapes from an asylum and hides in the mansion. The film belongs to a genre of comedy horror films inspired by 1920s Broadway stage plays. Paul Leni's adaptation of Willard's play blended expressionism with humor, a style Leni was notable for and critics recognized as unique. Alfred Hitchcock cited this film as one of his influences, and Tony Rayns called it the "definitive haunted house movie." Paul Leni's second film for Universal Pictures was "The Man Who Laughs" (1928), an adaptation of another Victor Hugo novel. The film, starring Conrad Veidt, is known for the bleak, carnival-freak-like grin on the face of the character Gwynplaine. His exaggerated smile was the inspiration for DC Comics' the Joker. (A 2005 graphic novel exploring the origins of the Joker, "Batman: The Man Who Laughs", was titled in homage to this film.) Film critic Roger Ebert stated, "The Man Who Laughs is a melodrama, at times even a swashbuckler, but so steeped in Expressionist gloom that it plays like a horror film". The fifth and last film of the Universal Classic Monsters series in the 1920s is "The Last Performance" (1929), directed by Paul Fejos and starring Conrad Veidt and Mary Philbin. Veidt plays a middle-aged magician who is in love with his beautiful young assistant. She, on the other hand, is in love with the magician's young protege, who turns out to be a bum and a thief. The film received mixed reviews, though a 1929 New York Times article said that "Dr. Fejos has handled his scenes with no small degree of imagination." A Letterboxd reviewer called it a "backstage melodrama with eerie intimations of horror." The trend of inserting an element of the macabre into American pre-horror melodramas was popular in the 1920s. Directors known for relying on the macabre in their films during the decade were Maurice Tourneur, Rex Ingram, and Tod Browning. Ingram's "The Magician" (1926) contains one of the first examples of a "mad doctor" and is said to have had a large influence on James Whale's version of "Frankenstein". "The Unholy Three" (1925) is an example of Tod Browning's use of the macabre and his unique style of morbidity; he remade the film in 1930 as a talkie. In 1927, Tod Browning cast Lon Chaney in his horror film "The Unknown", with Chaney as a carnival knife thrower called Alonzo the Armless and Joan Crawford as the scantily clad carnival girl he hopes to marry. Chaney did collaborative scenes with a real-life armless double whose legs and feet were used to manipulate objects such as knives and cigarettes in frame with Chaney's upper body and face. 1928's "The Terror" by Warner Bros. Pictures was the first all-talking horror film, made using the Vitaphone sound-on-disc system. The film tells a simple story of guests at an old English manor being stalked by a mysterious killer known only as "The Terror".
The plot centered on sound, with much of the ghost's haunting conveyed via creepy organ music, creaky doors, and howling winds. The film was poorly received by audiences and critics. John MacCormac, reporting from London for The New York Times upon the film's UK premiere, wrote: "The universal opinion of London critics is that "The Terror" is so bad that it is almost suicidal. They claim that it is monotonous, slow, dragging, fatiguing and boring." Other European countries also contributed to the genre during this period. In Sweden, Victor Sjöström created "Körkarlen" ("The Phantom Carriage") in 1921. The Criterion Collection describes the film: "The last person to die on New Year's Eve before the clock strikes twelve is doomed to take the reins of Death's chariot and work tirelessly collecting fresh souls for the next year. So says the legend that drives "The Phantom Carriage" ("Körkarlen"), directed by the father of Swedish cinema, Victor Sjöström. The story, based on a novel by Nobel Prize winner Selma Lagerlöf, concerns an alcoholic, abusive ne'er-do-well (Sjöström himself) who is shown the error of his ways, and the pure-of-heart Salvation Army sister who believes in his redemption. This extraordinarily rich and innovative silent classic (which inspired Ingmar Bergman to make movies) is a Dickensian ghost story and a deeply moving morality tale, as well as a showcase for groundbreaking special effects." In 1922, Danish filmmaker Benjamin Christensen created the Swedish-Danish production "Häxan" (also known as "The Witches" or "Witchcraft Through the Ages"), a documentary-style silent horror film based partly on Christensen's study of the Malleus Maleficarum, a 15th-century German guide for inquisitors. "Häxan" is a study of how superstition and the misunderstanding of diseases and mental illness could lead to the hysteria of the witch-hunts. The film was made as a documentary but contains dramatized sequences that are comparable to horror films. To visualize his subject matter, Christensen fills the frame with every frightening image he can conjure out of the historical records, often freely blending fact and fantasy. There are shocking moments in which we witness a woman giving birth to two enormous demons, see a witches' sabbath, and endure tortures by inquisition judges. The film also features an endless parade of demons of all shapes and sizes, some of whom look more or less human, whereas others are almost fully animal—pigs, twisted birds, cats, and the like. French filmmaker Jean Epstein produced an influential film, "La Chute de la maison Usher" ("The Fall of the House of Usher"), in 1928, one of multiple films based on Edgar Allan Poe's Gothic short story "The Fall of the House of Usher". Future director Luis Buñuel co-wrote the screenplay with Epstein, his second film credit, having previously worked as assistant director on Epstein's "Mauprat" (1926). Roger Ebert included the film on his list of "Great Movies" in 2002, calling its great hall "one of the most haunting spaces in the movies". "Il mostro di Frankenstein" (1921), one of the few Italian horror films made before the late 1950s, is now considered lost. In the 1930s Universal Pictures continued producing films based on Gothic horror. The studio entered a golden age of monster movies in the decade, releasing a string of hit horror films.
In this decade, the studio assembled several of the most iconic monsters in motion picture history, including Dracula, Frankenstein's monster, the Mummy, and the Invisible Man. Each movie starring these monsters would spawn sequels, and the characters would cross over with one another in a cinematic shared universe. The films would retroactively be classified together as part of the Universal Classic Monsters series. Universal Pictures established a near-monopoly on the mainstream horror film, producing stars such as Bela Lugosi and Boris Karloff and grossing large sums of money at the box office in the process. Not only did Universal bring the subgenre of "creature features" into the limelight, it also gave them their golden years, now remembered as "the Monsters' Golden Era." In the 1920s the studio put out only five such features; in the 1930s, however, it produced about 21. In 1930, Universal Pictures released the mystery film "The Cat Creeps", a sound remake of the studio's "The Cat and the Canary" from three years earlier. Simultaneously, Universal also released a Spanish-language version of the film called "La Voluntad del Muerto" ("The Will of the Dead Man"), directed by George Melford, who would later direct the Spanish version of "Dracula". Both "The Cat Creeps" and "La Voluntad del Muerto" are considered lost films. On February 14, 1931, Universal Pictures premiered its first film adaptation of "Dracula", the popular story of an ancient vampire who arrives in England, where he preys upon a virtuous young girl. The film was based on the 1924 stage play by Hamilton Deane and John L. Balderston, which in turn was loosely based on the classic 1897 novel by Bram Stoker. February 1931's "Dracula" was an English-language vampire-horror film directed by Tod Browning and starring Bela Lugosi as Count Dracula, the actor's most iconic role. The film was generally well received by critics. "Variety" praised it for its "remarkably effective background of creepy atmosphere." "Film Daily" declared it "a fine melodrama" and also lauded Lugosi's performance, calling it "splendid" and remarking that he had created "one of the most unique and powerful roles of the screen". Kim Newman, writing for the book "1001 Movies You Must See Before You Die", said that "Dracula" signaled the "true beginning of the horror film as a distinct genre and the vampire movie as its most popular subgenre". Two months later, on April 24, 1931, Universal Pictures premiered the Spanish-language version of "Dracula", directed by George Melford. April 1931's "Drácula" was filmed at night on the same sets that were being used during the day for the English-language version. Of the cast, only Carlos Villarías (playing Count Dracula) was permitted to see rushes of the English-language film, and he was encouraged to imitate Bela Lugosi's performance. Some long shots of Lugosi as the Count and some alternative takes from the English version were used in this production. In recent years, this version has become more highly praised than Tod Browning's English-language version. The Spanish crew had the advantage of watching the English dailies when they came in for the evening, and they would devise better camera angles and more effective use of lighting in an attempt to improve upon the English version. In 2015, the Library of Congress selected the film for preservation in the National Film Registry, finding it "culturally, historically, or aesthetically significant".
On November 21, 1931, Universal Pictures released another hit film with "Frankenstein". The story is about a scientist and his assistant who dig up corpses in the hope of reanimating them with electricity. The experiment goes awry when Dr. Frankenstein's assistant accidentally gives the creature a murderer's abnormal brain. 1931's "Frankenstein" was based on a 1927 play by Peggy Webling, which in turn was based on Mary Shelley's classic 1818 Gothic novel. The film was directed by James Whale and stars Boris Karloff as Frankenstein's monster, one of his most iconic roles. A hit with both audiences and critics, the film was followed by multiple sequels and, along with the same year's "Dracula", has become one of the most famous horror films in history. "Universal's makeup genius Jack Pierce created the main look of the monster, devising the flattop, the neck terminals, the heavy eyelids, and the elongated scarred hands, while director James Whale outfitted the creature with a shabby suit." On February 21, 1932, Universal Pictures released "Murders in the Rue Morgue", starring Bela Lugosi as a lunatic scientist who abducts women and injects them with blood from his ill-tempered caged ape. The film was loosely based on an 1841 short story by Edgar Allan Poe; Universal Pictures would release two more Poe adaptations later in the decade. Later in 1932 came the James Whale-directed "The Old Dark House", a mystery horror story starring Boris Karloff in which five travelers are admitted to a large, foreboding old house that belongs to an extremely strange family. The story was based on a 1927 novel by J. B. Priestley. In December 1932, the studio released "The Mummy", starring Boris Karloff as the Egyptian monster. The film, based on an original screenplay, is about an ancient Egyptian mummy named Imhotep who is discovered by a team of archaeologists and inadvertently brought back to life through a magic scroll. Review aggregator website Rotten Tomatoes reports a 93% score, based on 27 reviews, with an average rating of 7.9/10. The site's consensus states: "Relying more on mood and atmosphere than the thrills typical of modern horror fare, Universal's The Mummy sets a masterful template for mummy-themed films to follow." The Mummy character was so popular that it spawned sequels and remakes over the following decades. Make-up artist Jack Pierce was responsible for the look of the Mummy. After studying photos of ancient mummies, Pierce came up with a look bearing a resemblance to the mummy of Ramesses III. Pierce began transforming Karloff at 11 a.m., applying cotton, collodion and spirit gum to his face; clay to his hair; and wrapping him in linen bandages treated with acid and burnt in an oven, finishing the job at 7 p.m. Karloff finished his scenes by 2 a.m., and another two hours were spent removing the make-up. Karloff found the removal of gum from his face painful, and overall found the day "the most trying ordeal I [had] ever endured". The image of Karloff wrapped in bandages has become one of the most iconic in the series. Jack Pierce would also design the Satanic make-up worn by Lugosi in the independently produced "White Zombie" (1932). In 1933, after the release of "The Mummy", Universal Pictures released two more pictures. The first, released in July, was a murder-mystery film called "The Secret of the Blue Room". According to legend, the "blue room" inside a mansion is cursed.
Everyone who has ever spent the night there has met an untimely end, and three men wager that each can survive a night in the forbidding room. In November, the studio premiered another iconic character, Dr. Jack Griffin, aka the Invisible Man, in the classic science fiction-horror film "The Invisible Man". The film was directed by James Whale and stars Claude Rains in the title role. The movie was based on the science fiction novel of the same name by H. G. Wells, published in 1897. The film has been described as a "nearly perfect translation of the spirit of the book". It spawned a number of sequels, plus many spinoffs using the idea of an "invisible man" that were largely unrelated to Wells' original story. "The Invisible Man" is known for its clever and groundbreaking visual effects by John P. Fulton, John J. Mescall and Frank D. Williams, whose work is often credited for the success of the film. When the Invisible Man had no clothes on, the effect was achieved through the use of wires. When he had some of his clothes on or was taking them off, the effect was achieved by shooting Claude Rains in a completely black velvet suit against a black velvet background and then combining this shot with a separate shot of the location using a matte process. Rains was claustrophobic, and breathing through the suit was difficult. Consequently, the work was especially hard on him, and a double, somewhat shorter than Rains, was sometimes used. In 1934, Universal Pictures released the successful psychological horror film "The Black Cat", starring both Boris Karloff and Bela Lugosi. It was the first of six movies in which Universal Pictures paired the two iconic actors. "The Black Cat" became Universal Pictures' biggest box office hit of the year and is considered by many to be the film that created and popularized the psychological horror subgenre, emphasizing atmosphere, eerie sounds, the darker side of the human psyche, and emotions like fear and guilt to deliver its scares, an approach not previously used in the horror genre. Although the film was credited as being based on Edgar Allan Poe's classic 1843 short story, it actually has little to do with Poe's tale. In the film, American honeymooners in Hungary become trapped in the home of a Satan-worshiping priest when the bride is taken there for medical help following a road accident. The film exploited a sudden public interest in psychiatry. Peter Ruric (better known as pulp writer Paul Cain) wrote the screenplay. In 1935, Universal Pictures released four pictures between February and July. The first was "The Mystery of Edwin Drood", a mystery drama film starring Claude Rains. The story revolves around an opium-addicted choirmaster who develops an obsession with a beautiful young girl and will not stop short of murder in order to have her. The film was based on Charles Dickens's final, unfinished novel, published in 1870. In April 1935, "Bride of Frankenstein" premiered. The science fiction-horror film was the first sequel to the 1931 hit "Frankenstein". It is widely regarded as one of the greatest sequels in cinematic history, with many fans and critics considering it an improvement on the original. As with the original, "Bride of Frankenstein" was directed by James Whale and stars Boris Karloff as the Monster. In the film, Dr. Frankenstein, goaded by an even madder scientist, builds his monster a mate, often referred to as the Monster's Bride.
Makeup artist Jack Pierce returned to create the makeup for the Monster and his Bride. Over the course of filming, Pierce modified the Monster's makeup to indicate that the Monster's injuries were healing as the film progressed. Pierce co-created the Bride's makeup with strong input from Whale, especially regarding the Bride's iconic hair style, which was based on the Egyptian queen Nefertiti. Actress Elsa Lanchester portrayed the Monster's Bride. The Bride's conical hairdo, with its white lightning-trace streaks on each side, has become an iconic symbol of both the character and the film. A month after the release of "Bride of Frankenstein", Universal Pictures premiered the influential "Werewolf of London", the first mainstream Hollywood movie to feature a werewolf, a creature of folklore who shape-shifts from a human into a wolf. The film stars Henry Hull in the title role as a botanist who is attacked by a strange animal; the bite causes him to turn into a bloodthirsty monster. Jack Pierce created the make-up for the creature. Screenwriter and journalist Frank Nugent, writing for "The New York Times", thought the film was "designed solely to amaze and horrify." He continued: ""Werewolf of London" goes about its task with commendable thoroughness, sparing no grisly detail and springing from scene to scene with even greater ease than that oft attributed to a daring young aerialist. Granting that the central idea has been used before, the picture still rates the attention of action-and-horror enthusiasts." Six years later, Universal Pictures would release a second werewolf picture, "The Wolf Man", which would have a far greater influence on Hollywood's depiction of the werewolf legend. In July 1935, Universal Pictures paired Bela Lugosi and Boris Karloff for a second time in the studio's third Edgar Allan Poe picture, "The Raven". The film was not a direct adaptation of the classic 1845 poem, but rather inspired by it. In the film, a brilliant surgeon, played by Bela Lugosi, is obsessed with the writer Edgar Allan Poe. He saves the life of a beautiful dancer but goes mad when he cannot have her. Meanwhile, Boris Karloff plays a fugitive murderer on the run from the police. 1935's "The Raven" contains themes of torture, disfigurement, and grisly revenge. The film did not do particularly well at the box office during its initial release and indirectly led to a temporary ban on horror films in England. At the time, it was beginning to look as if the horror genre was no longer economically viable; paired with the strict production code of the era, American filmmakers struggled to make creative works on screen, and horror eventually went out of vogue. This proved a devastating development for Lugosi, who found himself losing work and struggling to support his family. Universal Pictures changed ownership in 1936, and the new management was less interested in the macabre. Still, in 1936 Universal Pictures continued to make films for the series. In January, the studio premiered the science fiction melodrama "The Invisible Ray", which paired Bela Lugosi and Boris Karloff for a third time. In the film, a scientist creates a telescope-like device that captures light waves from the Andromeda Galaxy, giving him a way to view the distant past. He and several colleagues go to Africa to locate a large, unusual meteorite that the light waves showed fell there a billion years earlier.
After discovering that the meteorite is composed of a poisonous unknown element, "Radium X", he begins to glow in the dark, and his touch becomes deadly. These radiation effects also begin to slowly drive him mad. Critics noted the tone of the film to be somber, dignified, and tragic. "The Invisible Ray" is a morality play, particularly given the film's final lines of dialog, uttered nine years before the bombings of Hiroshima and Nagasaki, by Madame Rukh: "My son, you have broken the first law of science...Janos Rukh is dead, but part of him will go on to eternity, working for humanity". In May 1936, Universal Pictures released a sequel to 1931's "Dracula". The film, "Dracula's Daughter", stars Gloria Holden in the title role. "Dracula's Daughter" does not feature Bela Lugosi or his character, but instead tells the story of Countess Marya Zaleska, the daughter of Count Dracula and herself a vampire. Following Dracula's death, she believes that by destroying his body she will be free of his influence and live normally. When this fails, she turns to a psychiatrist, Dr. Jeffrey Garth, played by Otto Kruger. He, in turn, has a fiancée, Janet. The Countess kidnaps Janet and takes her to Transylvania, leading to a battle between Dr. Garth and the Countess. While not as successful as the original upon its release, the film was generally well reviewed, though in the intervening decades criticism has been deeply divided. Contemporary critics and scholars have noted the film's strong lesbian overtones, which Universal acknowledged from the start of production and exploited in some early advertising. Universal would complete its initial "Dracula" trilogy seven years later with "Son of Dracula". In 1937, Universal Pictures released only one film in the series: "Night Key", a science fiction crime thriller starring Boris Karloff. In "Night Key", Karloff plays an elderly inventor of a burglar alarm who attempts to get back at the man who stole the profits from his invention. His device is then subverted by gangsters who threaten him and use it to facilitate burglaries. Letterboxd users have called the film "a delightfully corny, old-fashioned thriller" and praised Karloff's performance. In 1938, Universal Pictures did not release any film related to horror, thriller, or science fiction, instead re-releasing its previous "Dracula" and "Frankenstein" films. It was only in January 1939, a full year and a half after the release of "Night Key", that the studio resumed putting out original horror movies. On January 7, 1939, Universal Pictures premiered its 12-part serial "The Phantom Creeps". It stars Bela Lugosi as a mad scientist who attempts to rule the world by creating various elaborate inventions, while foreign agents and G-Men (government men) dramatically try to seize the inventions for themselves. A 78-minute feature version, cut down from the serial's original 265 minutes, was released for television ten years later. "The Phantom Creeps" was Universal Pictures' 112th serial and its 44th with sound. The scrolling-text synopsis that opens each chapter anticipated the opening crawl later used in the "Star Wars" films. On January 13, 1939, Universal Pictures released "Son of Frankenstein", the third entry in the studio's "Frankenstein" series and the last to feature Boris Karloff as the Monster. It is also the first to feature Bela Lugosi as Ygor.
The film is the sequel to James Whale's "Bride of Frankenstein" and stars top-billed Basil Rathbone, Karloff, Lugosi, and Lionel Atwill. "Son of Frankenstein" was a reaction to the popular re-releases of "Dracula" and "Frankenstein" as double features in 1938. In the film, one of the sons of Frankenstein finds his father's monster in a coma and revives him, only to find that the creature is controlled by Ygor, who is bent on revenge. Universal's declining horror output was revitalized with the enormously successful "Son of Frankenstein", in which the studio cast the two stars (Lugosi and Karloff) together for the fourth time. In November 1939, Universal Pictures released its last horror film of the 1930s, the historical quasi-horror film "Tower of London". It stars Basil Rathbone as the future King Richard III of England and Boris Karloff as his fictitious club-footed executioner Mord. Vincent Price, in only his third film, appears as George, Duke of Clarence. "Tower of London" is based on the traditional depiction of Richard rising to become King of England in 1483 by eliminating everyone ahead of him. Each time Richard accomplishes a murder, he removes one figurine from a dollhouse resembling a throne room. Once he has completed his task, he must still defeat the exiled Henry Tudor to retain the throne. Other studios followed Universal's lead. MGM's controversial "Freaks" (1932) frightened audiences of the time with characters played by people who had real deformities. The studio ultimately disowned the film, and it remained banned in the United Kingdom for 30 years. Paramount Pictures' "Dr. Jekyll and Mr. Hyde" (1931) is remembered for its innovative use of photographic filters to create Jekyll's transformation before the camera, and RKO created the highly successful and influential monster movie "King Kong" (1933). With the progression of the genre, actors like Boris Karloff and Bela Lugosi were beginning to build entire careers in horror. Early in the decade, Danish director Carl Theodor Dreyer also created the horror fantasy film "Vampyr" (1932), based on elements from J. Sheridan Le Fanu's collection of supernatural stories "In a Glass Darkly". The German-produced sound film tells the story of Allan Gray, a student of the occult who enters a village under the curse of a vampire. According to the book "1001 Movies You Must See Before You Die", "Vampyr"'s "greatness derives partly from Dreyer's handling of the vampire theme in terms of sexuality and eroticism, and partly from its highly distinctive, dreamy look." Despite the success of "The Wolf Man", by the 1940s Universal's monster movie formula was growing stale, as evidenced by desperate sequels and ensemble films with multiple monsters. Eventually, the studio resorted to comedy-horror pairings, like "Abbott and Costello Meet Frankenstein", which met with some success. In the 1940s, Universal Pictures released 17 feature films, all of them sequels and reboots of its popular monster movies, mostly from the 1930s. In 1940, Universal Pictures released three movies. In January, the Vincent Price-starring "The Invisible Man Returns" premiered in theaters to commercial success despite a production plagued with problems. The film's special effects received an Academy Award nomination for Best Special Effects. In September, "The Mummy's Hand" was released.
Although it is sometimes described by fans as a sequel or follow-up to "The Mummy", it does not continue the 1932 film's storyline or feature any of the same characters. "The Mummy's Hand" was the first of a series of four films featuring the mummy Kharis, the sequels being "The Mummy's Tomb" (1942), "The Mummy's Ghost", and "The Mummy's Curse" (both 1944). Tom Tyler played Kharis in this film, but Lon Chaney, Jr. took over the role for the three sequels. At the film's release, critic Bosley Crowther wrote for "The New York Times", "It's the usual mumbo-jumbo of secret tombs in crumbling temples and salacious old high priests guarding them against the incursions of an archaeological expedition". In December, "The Invisible Woman" was released, the third film in the "Invisible Man" series. This film was more of a screwball comedy than the other entries and is thus considered a comedy more than a horror film. It stars Virginia Bruce in the lead role and the aging John Barrymore in a supporting role. Reviews from critics were mixed; Theodore Strauss of "The New York Times" called it "silly, banal and repetitious". Two more films from the "Invisible Man" series would be released in the decade: the 1942 propaganda war-horror film "Invisible Agent", which featured a mad scientist working in secret to aid the Third Reich, and 1944's "The Invisible Man's Revenge". Other notable sequels during this era include 1942's "The Ghost of Frankenstein", 1943's "Son of Dracula", 1944's "The Mummy's Curse", "She-Wolf of London" (1946), and the screwball comedy "Abbott and Costello Meet Frankenstein" (1948). In 1941, Universal Pictures released a reboot of sorts of the studio's 1935 werewolf picture "Werewolf of London", which had starred noted character actor Henry Hull in a quite different and more subtle werewolf makeup. 1941's "The Wolf Man", however, was more popular and influential. The character of Larry Talbot, aka the Wolf Man, is considered one of the best classic monsters in the series, and the title character has had a great deal of influence on Hollywood's depictions of the werewolf legend.
Head of state
A head of state (or chief of state) is the public persona who officially embodies a state in its unity and legitimacy. Depending on the country's form of government and separation of powers, the head of state may be a ceremonial figurehead or concurrently the head of government and more. In a parliamentary system, such as those of India and Pakistan, the head of state usually has mostly ceremonial powers, with a separate head of government. However, in some parliamentary systems, such as South Africa's, an executive president is both head of state and head of government. Likewise, in some parliamentary systems the head of state is not the head of government but still has significant powers, as in Morocco. In contrast, a semi-presidential system, such as that of France, has both a head of state and a head of government as the "de facto" leaders of the nation (in practice they divide the leadership of the nation between themselves). Meanwhile, in presidential systems such as that of the United States, the head of state is also the head of government. Former French president Charles de Gaulle, while developing the current Constitution of France (1958), said that the head of state should embody "l'esprit de la nation" ("the spirit of the nation"). Some academic writers discuss states and governments in terms of "models". An independent nation state normally has a head of state, and determines the extent of its head's executive powers of government or formal representational functions. In terms of protocol, the head of a sovereign, independent state is usually identified as the person who, according to that state's constitution, is the reigning monarch, in the case of a monarchy, or the president, in the case of a republic. Among the different state constitutions (fundamental laws) that establish different political systems, four major types of heads of state can be distinguished. In a federal constituent or a dependent territory, the same role is fulfilled by the holder of an office corresponding to that of a head of state. For example, in each Canadian province the role is fulfilled by the lieutenant governor, whereas in most British Overseas Territories the powers and duties are performed by the governor. The same applies to Australian states, Indian states, and so on. Hong Kong's constitutional document, the Basic Law, for example, specifies the chief executive as the head of the special administrative region, in addition to their role as the head of government. These non-sovereign-state heads nevertheless have limited or no role in diplomatic affairs, depending on the status and the norms and practices of the territories concerned. In parliamentary systems the head of state may be merely the nominal chief executive officer, heading the executive branch of the state and possessing limited executive power. In reality, however, following a process of constitutional evolution, powers are usually only exercised by direction of a cabinet, presided over by a head of government who is answerable to the legislature. This accountability and legitimacy requires that someone be chosen who has majority support in the legislature (or, at least, not majority opposition, a subtle but important difference). It also gives the legislature the right to vote down the head of government and their cabinet, forcing it either to resign or to seek a parliamentary dissolution.
The executive branch is thus said to be responsible (or answerable) to the legislature, with the head of government and cabinet in turn accepting constitutional responsibility for offering constitutional advice to the head of state. In parliamentary constitutional monarchies, the legitimacy of the unelected head of state typically derives from the tacit approval of the people via the elected representatives. Accordingly, at the time of the Glorious Revolution, the English parliament acted of its own authority to name a new king and queen (the joint monarchs Mary II and William III); likewise, Edward VIII's abdication required the approval of each of the six independent realms of which he was monarch. In monarchies with a written constitution, the position of monarch is a creature of the constitution and could quite properly be abolished through a democratic procedure of constitutional amendment, although there are often significant procedural hurdles imposed on such a procedure (as in the Constitution of Spain). In republics with a parliamentary system (such as India, Germany, Austria, Italy and Israel) the head of state is usually titled "president" and the principal functions of such presidents are mainly ceremonial and symbolic, as opposed to the presidents in a presidential or semi-presidential system. In reality, numerous variants exist to the position of a head of state within a parliamentary system. The older the constitution, the more constitutional leeway tends to exist for a head of state to exercise greater powers over government, as many older parliamentary system constitutions in fact give heads of state powers and functions akin to presidential or semi-presidential systems, in some cases without containing reference to modern democratic principles of accountability to parliament or even to modern governmental offices. Under such older constitutions, the king usually had the power to declare war without the prior consent of parliament. For example, under the Kingdom of Italy's 1848 constitution, the "Statuto Albertino", parliamentary approval of the government appointed by the king was customary but not required by law. So Italy had a de facto parliamentary system, but a de jure "presidential" one. Examples of heads of state in parliamentary systems using greater powers than usual, either because of ambiguous constitutions or unprecedented national emergencies, include the decision by King Leopold III of the Belgians to surrender on behalf of his state to the invading German army in 1940, against the will of his government. Judging that his responsibility to the nation by virtue of his coronation oath required him to act, he believed that his government's decision to fight rather than surrender was mistaken and would damage Belgium. (Leopold's decision proved highly controversial. After World War II, Belgium voted in a referendum to allow him to resume his monarchical powers and duties, but because of the ongoing controversy he ultimately abdicated.) The Belgian constitutional crisis of 1990, when the head of state refused to sign into law a bill permitting abortion, was resolved by the cabinet assuming the power to promulgate the law while he was treated as "unable to reign" for twenty-four hours. Some heads of state, by contrast, are excluded completely from the executive: they do not possess even theoretical executive powers or any role, even formal, within the government.
Hence their states' governments are not referred to by the traditional parliamentary model head of state styles of "His/Her Majesty's Government" or "His/Her Excellency's Government". Within this general category, variants in terms of powers and functions may exist. The Constitution of Japan was drawn up under the Allied occupation that followed World War II and was intended to replace the previous militaristic and quasi-absolute monarchy with a form of liberal parliamentary democracy. The constitution explicitly vests all executive power in the Cabinet, which is chaired by the prime minister (articles 65 and 66) and responsible to the Diet (articles 67 and 69). The emperor is defined in the constitution as "the symbol of the State and of the unity of the people" (article 1), and is generally recognised throughout the world as the Japanese head of state. Although the emperor formally appoints the prime minister to office, article 6 of the constitution requires him to appoint the candidate "as designated by the Diet", without any right to decline appointment. He is a ceremonial figurehead with no independent discretionary powers related to the governance of Japan. Since the passage in Sweden of the 1974 Instrument of Government, the Swedish monarch no longer has many of the standard parliamentary system head of state functions that had previously belonged to him or her under the preceding 1809 Instrument of Government. Today, the speaker of the Riksdag appoints the prime minister (following a vote in the Riksdag) and terminates his or her commission following a vote of no confidence or a voluntary resignation. Cabinet members are appointed and dismissed at the sole discretion of the prime minister. Laws and ordinances are promulgated by two Cabinet members in unison signing "On Behalf of the Government", and the government, not the monarch, is the high contracting party with respect to international treaties. The remaining official functions of the sovereign, by constitutional mandate or by unwritten convention, are to open the annual session of the Riksdag, receive foreign ambassadors and sign the letters of credence for Swedish ambassadors, chair the foreign advisory committee, preside at the special Cabinet council when a new prime minister takes office, and be kept informed by the prime minister on matters of state. In contrast, the only contact the president of Ireland has with the Irish government is through a formal briefing session given by the taoiseach (head of government). The president has no access to documentation, and all access to ministers goes through the Department of the Taoiseach. The president does, however, hold limited reserve powers, such as referring a bill to the Supreme Court to test its constitutionality, which are used at the president's discretion. The most extreme case of a non-executive republican head of state is the president of Israel, who holds no reserve powers whatsoever. The president's few non-ceremonial powers are to appoint the prime minister, to approve the dissolution of the Knesset made by the prime minister, and to pardon criminals or commute their sentences. Some parliamentary republics (like South Africa, Botswana and Myanmar) have fused the roles of the head of state with the head of government (as in a presidential system), while having the sole executive officer, often called a president, dependent on Parliament's confidence to rule (as in a parliamentary system).
While also being the leading symbol of the nation, the president in this system acts mostly as a prime minister, since the incumbent must be a member of the legislature at the time of the election, answer questions in parliamentary question sessions, avoid motions of no confidence, etc. Semi-presidential systems combine features of presidential and parliamentary systems, notably (in the president-parliamentary subtype) a requirement that the government be answerable to both the president and the legislature. The constitution of the Fifth French Republic provides for a prime minister who is chosen by the president, but who nevertheless must be able to gain support in the National Assembly. Should the president be of one side of the political spectrum and the opposition be in control of the legislature, the president is usually obliged to select someone from the opposition to become prime minister, a process known as cohabitation. President François Mitterrand, a Socialist, for example, was forced to cohabit with the neo-Gaullist (right-wing) Jacques Chirac, who became his prime minister from 1986 to 1988. In the French system, in the event of cohabitation, the president is often allowed to set the policy agenda in security and foreign affairs, while the prime minister runs the domestic and economic agenda. Other countries have evolved into something akin to a semi-presidential system or indeed a full presidential system. Weimar Germany, for example, provided in its constitution for a popularly elected president with theoretically dominant executive powers that were intended to be exercised only in emergencies, and a cabinet appointed by him from the Reichstag, which was expected, in normal circumstances, to be answerable to the Reichstag. Initially, the president was merely a symbolic figure with the Reichstag dominant; however, persistent political instability, in which governments often lasted only a few months, led to a change in the power structure of the republic, with the president's emergency powers called increasingly into use to prop up governments challenged by critical or even hostile Reichstag votes. By 1932, power had shifted to such an extent that the German president, Paul von Hindenburg, was able to dismiss a chancellor and select his own person for the job, even though the outgoing chancellor possessed the confidence of the Reichstag while the new chancellor did not. Subsequently, President von Hindenburg used his power to appoint Adolf Hitler as chancellor without consulting the Reichstag. (Note: the head of state in a "presidential" system may not actually hold the title of "president"; the name of the system refers to any head of state who actually governs and is not directly dependent on the legislature to remain in office.) Some constitutions or fundamental laws provide for a head of state who is not only in theory but in practice chief executive, operating separately from, and independently of, the legislature. This system is known as a "presidential system" and is sometimes called the "imperial model", because the executive officials of the government are answerable solely and exclusively to a presiding, acting head of state, and are selected by, and on occasion dismissed by, the head of state without reference to the legislature.
It is notable that some presidential systems, while not providing for collective executive accountability to the legislature, may require legislative approval for individuals prior to their assumption of cabinet office and empower the legislature to remove a president from office (for example, in the United States of America). In this case the debate centers on confirming them into office, not removing them from office, and does not involve the power to reject or approve proposed cabinet members "en bloc", so it is not accountability in the sense understood in a parliamentary system. Presidential systems are a notable feature of constitutions in the Americas, including those of Argentina, Brazil, Colombia, El Salvador, Mexico and Venezuela; this is generally attributed to the strong influence of the United States in the region, as the United States Constitution served as an inspiration and model for the Latin American wars of independence of the early 19th century. Most presidents in such countries are selected by democratic means (popular direct or indirect election); however, like all other systems, the presidential model also encompasses people who become head of state by other means, notably through military dictatorship or coup d'état, as often seen in Latin American, Middle Eastern and other presidential regimes. Some of the characteristics of a presidential system (i.e., a strong dominant political figure with an executive answerable to them, not the legislature) can also be found among absolute monarchies, parliamentary monarchies and single-party (e.g., Communist) regimes, but in most cases of dictatorship, their stated constitutional models are applied in name only and not in political theory or practice. In the 1870s in the United States, in the aftermath of the impeachment of President Andrew Johnson and his near-removal from office, it was speculated that the United States, too, would move from a presidential system to a semi-presidential or even parliamentary one, with the speaker of the House of Representatives becoming the real center of government as a quasi-prime minister. This did not happen, and the presidency, having been damaged by three late nineteenth and early twentieth century assassinations (Lincoln, Garfield and McKinley) and one impeachment (Johnson), reasserted its political dominance by the early twentieth century through such figures as Theodore Roosevelt and Woodrow Wilson. In certain states under Marxist constitutions of the constitutionally socialist state type inspired by the former Union of Soviet Socialist Republics (USSR) and its constitutive Soviet republics, real political power belonged to the sole legal party. In these states, there was no formal office of head of state, but rather the leader of the legislative branch was considered the closest common equivalent of a head of state as a natural person. In the Soviet Union this position carried such titles as "Chairman of the Central Executive Committee of the USSR"; "Chairman of the Presidium of the Supreme Soviet"; and in the case of Soviet Russia "Chairman of the Central Executive Committee of the All-Russian Congress of Soviets" (pre-1922) and "Chairman of the Bureau of the Central Committee of the Russian SFSR" (1956–1966). This position may or may not have been held by the de facto Soviet leader at the moment.
For example, Nikita Khrushchev never headed the Supreme Soviet but was First Secretary of the Central Committee of the Communist Party (party leader) and Chairman of the Council of Ministers (head of government). This can even lead to institutional variability, as in North Korea, where, after the presidency of party leader Kim Il-sung, the office was left vacant for years. The late president was granted the posthumous title (akin to some ancient Far Eastern traditions of giving posthumous names and titles to royalty) of "Eternal President". All substantive power passed to his son Kim Jong Il as party leader, though that post itself was not formally filled for four years. The post of president was formally replaced on 5 September 1998, for ceremonial purposes, by the office of chairman of the Presidium of the Supreme People's Assembly, while the party leader's post as chairman of the National Defense Commission was simultaneously declared "the highest post of the state", not unlike Deng Xiaoping earlier in the People's Republic of China. In China, under the country's current constitution, the Chinese president is a largely ceremonial office with limited power. However, since 1993, as a matter of convention, the presidency has been held simultaneously by the General Secretary of the Communist Party of China, the top leader in the one-party system. The presidency is officially regarded as an institution of the state rather than an administrative post; theoretically, the president serves at the pleasure of the National People's Congress, the legislature, and is not legally vested with the power to take executive action on his own prerogative. While clear categories do exist, it is sometimes difficult to choose to which category some individual heads of state belong. In reality, the category to which each head of state belongs is assessed not by theory but by practice. Constitutional change in Liechtenstein in 2003 gave its head of state, the Reigning Prince, constitutional powers that included a veto over legislation and the power to dismiss the head of government and cabinet. It could be argued that the strengthening of the Prince's powers, vis-à-vis the Landtag (legislature), has moved Liechtenstein into the semi-presidential category. Similarly, the original powers given to the Greek president under the 1974 Hellenic Republic constitution moved Greece closer to the French semi-presidential model. Another complication exists with South Africa, in which the president is in fact elected by the National Assembly (legislature) and is thus similar, in principle, to a head of government in a parliamentary system, but is also, in addition, recognised as the head of state. The offices of president of Nauru and president of Botswana are similar in this respect to the South African presidency. Panama, during the military dictatorships of Omar Torrijos and Manuel Noriega, was nominally a presidential republic. However, the elected civilian presidents were effectively figureheads, with real political power being exercised by the chief of the Panamanian Defense Forces. Historically, at the time of the League of Nations (1920–1946) and the founding of the United Nations (1945), India's head of state was the monarch of the United Kingdom, ruling directly or indirectly as Emperor of India through the Viceroy and Governor-General of India. The head of state is the highest-ranking constitutional position in a sovereign state.
A head of state has some or all of the roles listed below, often depending on the constitutional category (above), and does not necessarily regularly exercise the most power or influence over governance. There is usually a formal public ceremony when a person becomes head of state, or some time afterwards, such as the swearing-in at the inauguration of a president of a republic, or the coronation of a monarch. One of the most important roles of the modern head of state is being a living national symbol of the state; in hereditary monarchies this extends to the monarch being a symbol of the unbroken continuity of the state. For instance, the Canadian monarch is described by the government as the personification of the Canadian state and is described by the Department of Canadian Heritage as the "personal symbol of allegiance, unity and authority for all Canadians". In many countries, official portraits of the head of state can be found in government offices, courts of law, and other public buildings. The idea, sometimes regulated by law, is to use these portraits to make the public aware of the symbolic connection to the government, a practice that dates back to medieval times. Sometimes this practice is taken to excess, and the head of state becomes the principal symbol of the nation, resulting in the emergence of a personality cult where the image of the head of state is the only visual representation of the country, surpassing other symbols such as the flag. Other common representations are on coins, postage and other stamps, and banknotes, sometimes by no more than a mention or signature; and public places, streets, monuments and institutions such as schools are named for current or previous heads of state. In monarchies (e.g., Belgium) there can even be a practice of granting the adjective "royal" to institutions on request, based on their existence for a given number of years. However, such political techniques can also be used by leaders without the formal rank of head of state, even party leaders and other revolutionary leaders without a formal state mandate. Heads of state often greet important foreign visitors, particularly visiting heads of state. They assume a host role during a state visit, and the programme may feature the playing of the national anthems by a military band, inspection of military troops, an official exchange of gifts, and attending a state dinner at the official residence of the host. At home, heads of state are expected to render lustre to various occasions by their presence, such as by attending artistic or sports performances or competitions (often in a theatrical honour box, on a platform, on the front row, at the honours table), expositions, national day celebrations, dedication events, military parades and war remembrances, prominent funerals, visiting different parts of the country and people from different walks of life, and at times performing symbolic acts such as cutting a ribbon, groundbreaking, ship christening, or laying a first stone. Some parts of national life receive their regular attention, often on an annual basis, or even in the form of official patronage. The Olympic Charter (rule 55.3) of the International Olympic Committee states that the Olympic summer and winter games shall be opened by the head of state of the host nation, by uttering a single formulaic phrase as determined by the charter.
As such invitations may be very numerous, such duties are often in part delegated to such persons as a spouse, a head of government or a cabinet minister, or in other cases (possibly as a message, for instance, to distance themselves without rendering offence) just a military officer or civil servant. For non-executive heads of state there is often a degree of censorship by the politically responsible government (such as the head of government). This means that the government discreetly approves agenda and speeches, especially where the constitution (or customary law) assigns all political responsibility elsewhere by granting the crown inviolability (in fact also imposing political emasculation), as in the Kingdom of Belgium from its very beginning; in a monarchy this may even be extended to some degree to other members of the dynasty, especially the heir to the throne. In the majority of states, whether republics or monarchies, executive authority is vested, at least notionally, in the head of state. In presidential systems the head of state is the actual, de facto chief executive officer. Under parliamentary systems the executive authority is exercised by the head of state, but in practice on the advice of the cabinet of ministers. This produces such terms as "Her Majesty's Government" and "His Excellency's Government". Examples of parliamentary systems in which the head of state is the notional chief executive include Australia, Austria, Canada, Denmark, India, Italy, Norway, Spain and the United Kingdom. The few exceptions where the head of state is not even the nominal chief executive, and where supreme executive authority is according to the constitution explicitly vested in a cabinet, include the Czech Republic, Ireland, Israel, Japan and Sweden. The head of state usually appoints most or all of the key officials in the government, including the head of government and other cabinet ministers, key judicial figures, and all major office holders in the civil service, the foreign service and the commissioned officers in the military. In many parliamentary systems, the head of government is appointed with the consent (in practice often decisive) of the legislature, and other figures are appointed on the head of government's advice. In practice, these decisions are often a formality. The last time the prime minister of the United Kingdom was selected by the monarch's personal choice was in 1963, when Queen Elizabeth II appointed Alec Douglas-Home on the advice of outgoing Prime Minister Harold Macmillan. In presidential systems, such as that of the United States, nominations are made at the president's sole discretion, but the nomination is often subject to confirmation by the legislature; specifically in the US, the Senate has to approve senior executive branch and judicial appointments by a simple majority vote. The head of state may also dismiss office-holders. There are many variants on how this can be done. For example, members of the Irish Cabinet are dismissed by the president on the advice of the taoiseach; in other instances, the head of state may be able to dismiss an office holder unilaterally; other heads of state, or their representatives, have the theoretical power to dismiss any office-holder, though this power is exceptionally rarely used.
In France, while the president cannot force the prime minister to tender the resignation of the government, he can, in practice, request it if the prime minister is from his own majority. In presidential systems, the president often has the power to fire ministers at his sole discretion. In the United States, the unwritten convention calls for the heads of the executive departments to resign on their own initiative when called to do so. Some countries have alternative provisions for senior appointments: in Sweden, under the Instrument of Government of 1974, the speaker of the Riksdag has the role of formally appointing the prime minister, following a vote in the Riksdag, and the prime minister in turn appoints and dismisses cabinet ministers at his or her sole discretion. Although many constitutions, particularly from the 19th century and earlier, make no explicit mention of a head of state in the generic sense used in several present-day international treaties, the officeholders corresponding to this position are recognised as such by other countries. In a monarchy, the monarch is generally understood to be the head of state. The Vienna Convention on Diplomatic Relations, which codified longstanding custom, operates under the presumption that the head of a diplomatic mission (i.e., ambassador or nuncio) of the sending state is accredited to the head of state of the receiving state. The head of state accredits (i.e., formally validates) his or her country's ambassadors (or rarer equivalent chiefs of diplomatic mission, such as high commissioners or papal nuncios) by sending a formal Letter of Credence (and a Letter of Recall at the end of a tenure) to other heads of state and, conversely, receives the letters of their foreign counterparts. Without that accreditation, the chief of the diplomatic mission cannot take up the role or receive the highest diplomatic status. The role of a head of state in this regard is codified in the Vienna Convention on Diplomatic Relations of 1961, which (as of 2017) 191 sovereign states have ratified. However, there are provisions in the Vienna Convention under which a diplomatic agent of lesser rank, such as a chargé d'affaires, is accredited to the minister of foreign affairs (or equivalent). The head of state is often designated the high contracting party in international treaties on behalf of the state; he or she signs them either personally or has them signed in his or her name by ministers (government members or diplomats); subsequent ratification, when necessary, may rest with the legislature. The treaties constituting the European Union and the European Communities are noteworthy contemporary cases of multilateral treaties cast in this traditional format, as are the accession agreements of new member states. However, rather than being invariably concluded between two heads of state, it has become common for bilateral treaties to be cast in present times in an intergovernmental format, e.g., between the "Government of X and the Government of Y", rather than between "His Majesty the King of X and His Excellency the President of Y". In Canada, these head of state powers belong to the monarch as part of the royal prerogative, but the governor general has been permitted to exercise them since 1947 and has done so since the 1970s. A head of state is often, by virtue of holding the highest executive powers, explicitly designated as the commander-in-chief of that nation's armed forces, holding the highest office in all military chains of command.
In a constitutional monarchy or non-executive presidency, the head of state may de jure hold ultimate authority over the armed forces but will normally, as per either written law or unwritten convention, exercise this authority only on the advice of their responsible ministers: meaning that the de facto ultimate decision-making on military manoeuvres is made elsewhere. The head of state will, regardless of actual authority, perform ceremonial duties related to the country's armed forces, and will sometimes appear in military uniform for these purposes; particularly in monarchies, where the monarch's consort and other members of a royal family may also appear in military garb. This is generally the only time a head of state of a stable, democratic country will appear dressed in such a manner, as statesmen and the public are eager to assert the primacy of (civilian, elected) politics over the armed forces. In military dictatorships, or governments which have arisen from coups d'état, the position of commander-in-chief is obvious, as all authority in such a government derives from the application of military force; occasionally a power vacuum created by war is filled by a head of state stepping beyond his or her normal constitutional role, as King Albert I of Belgium did during World War I. In these and in revolutionary regimes, the head of state, and often executive ministers whose offices are legally civilian, will frequently appear in military uniform. Some countries with a parliamentary system vest command-in-chief powers in officials other than the head of state. The armed forces of Communist states are under the absolute control of the Communist party. It is usual that the head of state, particularly in parliamentary systems as part of the symbolic role, is the one who opens the annual sessions of the legislature, e.g. the annual State Opening of Parliament with the Speech from the Throne in Britain. Even in presidential systems the head of state often formally reports to the legislature on the present national status, e.g. the State of the Union address in the United States of America. Most countries require that all bills passed by the house or houses of the legislature be signed into law by the head of state. In some states, such as the United Kingdom, Belgium and Ireland, the head of state is, in fact, formally considered a tier of the legislature. However, in most parliamentary systems, the head of state cannot refuse to sign a bill, and, in granting a bill their assent, indicates that it was passed in accordance with the correct procedures. The signing of a bill into law is formally known as "promulgation". Some monarchical states call this procedure "royal assent". In some parliamentary systems, the head of state retains certain powers in relation to bills, to be exercised at his or her discretion: they may have authority to veto a bill until the houses of the legislature have reconsidered it and approved it a second time; to reserve a bill to be signed later, or suspend it indefinitely (generally in states with royal prerogative; this power is rarely used); to refer a bill to the courts to test its constitutionality; or to refer a bill to the people in a referendum. If he or she is also chief executive, he or she can thus politically control the necessary executive measures without which a proclaimed law can remain a dead letter, sometimes for years or even forever. A head of state is often empowered to summon and dissolve the country's legislature.
In most parliamentary systems, this is done on the advice of the head of government. In some parliamentary systems, and in some presidential systems, however, the head of state may do so on their own initiative. Some states have fixed-term legislatures, with no option of bringing forward elections (e.g., Article II, Section 3, of the U.S. Constitution). In other systems there are usually fixed terms, but the head of state retains authority to dissolve the legislature in certain circumstances. Where a head of government has lost support in the legislature, some heads of state may refuse a dissolution, where one is requested, thereby forcing the head of government's resignation. In a republic, the head of state nowadays usually bears the title of president, but some have, or have had, other titles. Titles commonly used by monarchs are King/Queen or Emperor/Empress, but there are many others, e.g., Grand Duke, Prince, Emir and Sultan. Though president and various monarchical titles are most commonly used for heads of state, in some nationalistic regimes the leader adopts, formally or de facto, a unique style simply meaning "leader" in the national language, e.g., Germany's single national socialist party chief and combined head of state and government, Adolf Hitler, as the "Führer" between 1934 and 1945. In 1959, when the former British crown colony of Singapore gained self-government, it adopted the Malay style "Yang di-Pertuan Negara" (literally "head of state" in Malay) for its governor (the actual head of state remained the British monarch). The second and last incumbent of the office, Yusof bin Ishak, kept the style through the unilateral declaration of independence of 31 August 1963 and after the accession to Malaysia on 16 September 1963 as a state (thus as a non-sovereign constituent part of the federation). After its expulsion from Malaysia on 9 August 1965, Singapore became a sovereign Commonwealth republic and installed Yusof bin Ishak as its first president. In 1959, after the resignation of Vice-President of Indonesia Mohammad Hatta, President Sukarno abolished the position and title of vice-president, assuming the positions of prime minister and head of cabinet. He also proclaimed himself president for life (Indonesian: "Presiden Seumur Hidup Panglima Tertinggi"; "panglima" meaning "commander or martial figurehead", "tertinggi" meaning "highest"; roughly translated to English as "Supreme Commander of the Revolution"). He was praised as "Paduka Yang Mulia", a Malay honorific originally given to kings; Sukarno awarded himself titles in that fashion due to his noble ancestry. There are also a few nations in which the exact title and definition of the office of head of state have been vague. During the Chinese Cultural Revolution, following the downfall of Liu Shaoqi, who was State Chairman (Chinese President), no successor was named, so the duties of the head of state were transferred collectively to the Standing Committee of the National People's Congress. This situation was later changed: the head of state of the PRC is now the President of the People's Republic of China. Although the presidency is a largely ceremonial office with limited power, the symbolic role of a head of state is now generally performed by Xi Jinping, who is also General Secretary of the Communist Party (Communist Party leader) and Chairman of the Central Military Commission (supreme military command), making him the most powerful person in China.
In North Korea, the late Kim Il-sung was named "Eternal President" four years after his death, and the presidency was abolished. As a result, some of the duties previously held by the president are constitutionally delegated to the chairman of the Presidium of the Supreme People's Assembly, who performs some of the roles of a head of state, such as accrediting foreign ambassadors and undertaking overseas visits. However, the symbolic role of a head of state is generally performed by Kim Jong-un, who, as the leader of the party and military, is the most powerful person in North Korea. There is debate as to whether Samoa was an elective monarchy or an aristocratic republic, given the comparative ambiguity of the title "O le Ao o le Malo" and the nature of the head of state's office. In some states the office of head of state is not expressed in a specific title reflecting that role, but is constitutionally awarded to a post of another formal nature. Thus in March 1979 Colonel Muammar Gaddafi, who kept absolute power (he was referred to as "Guide of the Revolution" until his overthrow in 2011), after ten years as combined head of state and head of government of the Libyan "Jamahiriya" ("state of the masses"), styled Chairman of the Revolutionary Command Council, formally transferred both qualities to the general secretary of the General People's Congress (comparable to a speaker) and to a prime minister, respectively; in political reality both were his creatures. Sometimes a head of state assumes office as a state becomes a legal and political reality, before a formal title for the highest office is determined; thus in Cameroon ("Cameroun", a former French colony independent since 1 January 1960), the first president, Ahmadou Babatoura Ahidjo, was at first not styled "président" but merely known as "chef d'état" (French for "head of state") until 5 May 1960. In Uganda, the military leader Idi Amin was formally styled "military head of state" after the coup of 25 January 1971, becoming regular (but unconstitutional, not elected) president only from 21 February 1971. In certain cases a special style is needed to accommodate imperfect statehood, e.g., the title "Sardar-i-Riyasat" was used in Kashmir after its accession to India, and the Palestine Liberation Organization leader, Yasser Arafat, was styled the first "President of the Palestinian National Authority" in 1994. In 2008, the same office was restyled as "President of the State of Palestine". In medieval Europe, it was universally accepted that the pope ranked first among all rulers and was followed by the Holy Roman Emperor. The pope also had the sole right to determine the precedence of all others. This principle was first challenged by a Protestant ruler, Gustavus Adolphus of Sweden, and was later maintained by his country at the Congress of Westphalia. Great Britain would later claim a break with the old principle for the Quadruple Alliance in 1718. However, it was not until the 1815 Congress of Vienna that it was decided (due to the abolition of the Holy Roman Empire in 1806 and the weak position of France and other Catholic states to assert themselves), as remains the case to this day, that all sovereign states are treated as equals, whether monarchies or republics.
On occasions when multiple heads of state or their representatives meet, precedence is usually determined by the host in alphabetical order (in whatever language the host determines, although French has for much of the 19th and 20th centuries been the "lingua franca" of diplomacy) or by date of accession. Contemporary international law on precedence, built upon the universally admitted principles since 1815, derives from the Vienna Convention on Diplomatic Relations (in particular, articles 13, 16.1 and Appendix iii). Niccolò Machiavelli used "Prince" (Italian: "principe") as a generic term for the ruler, similar to the contemporary usage of "head of state", in his classical treatise "The Prince", originally published in 1532; indeed, the particular literary genre it belongs to is known as mirrors for princes. Thomas Hobbes in his "Leviathan" (1651) used the term "Sovereign". In Europe the role of monarchs has gradually transitioned from that of a sovereign ruler (in the sense of the Divine Right of Kings as articulated by Jean Bodin, absolutism and "L'état c'est moi") to that of a constitutional monarch, in parallel with the conceptual evolution of sovereignty from the mere personal rule of a single person to Westphalian sovereignty (the Peace of Westphalia ended both the Thirty Years' War and the Eighty Years' War) and popular sovereignty as in the consent of the governed, as shown in the Glorious Revolution of 1688 in England and Scotland, the French Revolution of 1789, and the German Revolution of 1918–1919. The monarchies that survived through this era were those willing to subject themselves to constitutional limitations. Whenever a head of state is not available for any reason, constitutional provisions may allow the role to fall temporarily to an assigned person or collective body. In a republic, this is, depending on provisions outlined by the constitution (or improvised), a vice-president, the chief of government, the legislature, or its presiding officer. In a monarchy, this is usually a regent or a collegial regency (council). For example, in the United States the vice-president acts when the president is incapacitated, and in the United Kingdom the queen's powers may be delegated to counsellors of state when she is abroad or unavailable. Neither of the two co-princes of Andorra is resident in Andorra; each is represented in Andorra by a delegate, though these persons hold no formal title. There are also several methods of head of state succession in the event of the removal, disability or death of an incumbent head of state. In exceptional situations, such as war, occupation, revolution or a coup d'état, constitutional institutions, including the symbolically crucial head of state, may be reduced to a figurehead or be suspended in favour of an emergency office (such as the original Roman dictator) or eliminated by a new "provisionary" regime, such as a collective of the junta type, or removed by an occupying force, such as a military governor (an early example being the Spartan harmost). In early modern Europe, a single person was often monarch simultaneously of separate states. A composite monarchy is a retrospective label for those cases where the states were governed entirely separately. Of the contemporary terms, a personal union involved less governmental co-ordination than a real union. One of the two co-princes of Andorra is the president of France. The Commonwealth realms share a monarch, currently Elizabeth II.
In the realms other than the United Kingdom, a governor-general ("governor general" in Canada) is appointed by the sovereign, usually on the advice of the relevant prime minister (although sometimes it is based on the result of a vote in the relevant parliament, which is the case for Papua New Guinea and the Solomon Islands), as a representative and to exercise almost all the Royal Prerogative according to established constitutional authority. In Australia the present queen is generally assumed to be head of state, since the governor-general and the state governors are defined as her "representatives". However, since the governor-general performs almost all national regal functions, the governor-general has occasionally been referred to as head of state in political and media discussion. To a lesser extent, uncertainty has been expressed in Canada as to which officeholder—the monarch, the governor general, or both—can be considered the head of state. New Zealand, Papua New Guinea, and Tuvalu explicitly name the monarch as their head of state (though Tuvalu's constitution states that "references in any law to the Head of State shall be read as including a reference to the governor-general"). Governors-general are frequently treated as heads of state on state and official visits; at the United Nations, they are accorded the status of head of state in addition to the sovereign. An example of a governor-general departing from constitutional convention by acting unilaterally (that is, without direction from ministers, parliament, or the monarch) occurred in 1926, when Canada's governor general refused the head of government's formal advice requesting a dissolution of parliament and a general election. In a letter informing the monarch after the event, the Governor General said: "I have to await the verdict of history to prove my having adopted a wrong course, and this I do with an easy conscience that, right or wrong, I have acted in the interests of Canada and implicated no one else in my decision." Another example occurred when, in the 1975 Australian constitutional crisis, the governor-general unexpectedly dismissed the prime minister in order to break a stalemate between the House of Representatives and Senate over money bills. The governor-general issued a public statement saying he felt it was the only solution consistent with the constitution, his oath of office, and his responsibilities, authority, and duty as governor-general. A letter from the queen's private secretary at the time, Martin Charteris, confirmed that the only person competent to commission an Australian prime minister was the governor-general and it would not be proper for the monarch to personally intervene in matters that the Constitution Act so clearly places within the governor-general's jurisdiction. Other Commonwealth realms that are now constituted with a governor-general as the viceregal representative of Elizabeth II are: Antigua and Barbuda, the Bahamas, Belize, Grenada, Jamaica, New Zealand, Saint Kitts and Nevis, Saint Lucia, and Saint Vincent and the Grenadines. Since antiquity, various dynasties or individual rulers have claimed the right to rule by divine authority, such as the Mandate of Heaven and the divine right of kings. Some monarchs even claimed divine ancestry, such as Egyptian pharaohs and Sapa Incas, who claimed descent from their respective sun gods and often sought to maintain this bloodline by practising incestuous marriage. 
In Ancient Rome, during the Principate, the title "divus" ('divine') was conferred (notably posthumously) on the emperor, a symbolic, legitimating element in establishing a de facto dynasty. In Roman Catholicism, the pope was once sovereign pontiff and head of state, first of the politically important Papal States. After Italian unification, the pope remains head of state only of Vatican City. Furthermore, the bishop of Urgell is "ex officio" one of the two co-princes of Andorra. In the Church of England, the reigning monarch holds the title Defender of the Faith and acts as supreme governor of the Church of England, although this is a purely symbolic role. During the early period of Islam, caliphs were the spiritual and temporal absolute successors of the prophet Mohammed. Various political Muslim leaders since have styled themselves "Caliph" and served as dynastic heads of state, sometimes in addition to another title, such as the Ottoman sultan. Historically, some theocratic Islamic states known as "imamates" have been led by imams as head of state, such as in what is now Oman, Yemen, and Saudi Arabia. In the Islamic Republic of Iran, the Supreme Leader, at present Ali Khamenei, serves as head of state. The Aga Khans, a unique dynasty of temporal and religious leadership leading the Nizari offshoot of Shia Islam in Central and South Asia, once ranked among British India's princely states and continue to the present day. In Hinduism, certain dynasties adopted a title expressing their position as "servant" of a patron deity of the state, but in the sense of a viceroy under an absentee god-king, ruling "in the name of" the patron god(dess), such as Padmanabha Dasa (servant of Vishnu) in the case of the Maharaja of Travancore. From the time of the 5th Dalai Lama until the political retirement of the 14th Dalai Lama in 2011, Dalai Lamas were both political and spiritual leaders ("god-kings") of Tibet. Outer Mongolia, the former homeland of the imperial dynasty of Genghis Khan, was another lamaist theocracy from 1585, using various styles, such as tulku. The establishment of the Communist Mongolian People's Republic replaced this regime in 1924. A collective head of state can exist in republics, e.g., nominal triumvirates, the Directoire, the seven-member Swiss Federal Council (where each member acts in turn as president for one year), Bosnia and Herzegovina with a three-member presidency drawn from three different nations, and San Marino with two captains regent, maintaining the tradition of Italian medieval republics that had always had an even number of consuls. A diarchy, in which two rulers are the constitutional norm, may be distinguished from a coregency, in which a monarchy experiences an exceptional period of multiple rulers. In the Roman Republic there were two heads of state, styled consuls, both of whom alternated months of authority during their year in office; similarly, there was an even number of supreme magistrates in the Italic republics of antiquity. In the Athenian republic there were nine supreme magistrates, styled archons. In Carthage there were two supreme magistrates, styled kings or suffetes (judges). In ancient Sparta there were two hereditary kings, belonging to two different dynasties. In the Soviet Union the Central Executive Committee of the Congress of Soviets (between 1922 and 1938) and later the Presidium of the Supreme Soviet (between 1938 and 1989) served as the collective head of state.
After World War II the Soviet model was adopted by almost all the countries that belonged to its sphere of influence. Czechoslovakia remained the only country among them to retain the office of president as a single head of state throughout this period; Romania created such an office in 1974. A modern example of a collective head of state is the Sovereignty Council of Sudan, the interim ruling council of the Republic of Sudan. The Sovereignty Council comprises 11 members, who together have exercised all governmental functions for Sudan since the fall of President Omar Al-Bashir. Decisions are made either by consensus or by a supermajority vote (8 of the 11 members). Such arrangements are not to be confused with supranational entities, which are not states and are not defined by a common monarchy, but may (or may not) have a symbolic, essentially ceremonial, titled highest office, e.g., Head of the Commonwealth (held by the British crown, but not legally reserved for it) or "Head of the Arab Union" (14 February to 14 July 1958, held by the Hashemite King of Iraq during its short-lived federation with Jordan, its Hashemite sister realm). The National Government of the Republic of China, established in 1928, had a panel of about 40 people as collective head of state, though beginning that year a provisional constitution made the Kuomintang the sole governing party and bound the National Government to the instructions of the Central Executive Committee of that party. The position of head of state can be established in different ways, and with different sources of legitimacy. Power can come from force, but formal legitimacy is often established, even if only by fictitious claims of continuity (e.g., a forged claim of descent from a previous dynasty). There have been cases of sovereignty granted by deliberate act, even when accompanied by orders of succession (as may be the case in a dynastic split). Such grants of sovereignty are usually forced, as is common with self-determination granted after nationalist revolts. This occurred with the last Attalid king of Hellenistic Pergamon, who by testament left his realm to Rome to avoid a disastrous conquest. Under a theocracy, perceived divine status translated into earthly authority under divine law. This can take the form of a supreme divine authority above the state's, granting a tool for political influence to a priesthood. In this way, the Amun priesthood reversed the reforms of Pharaoh Akhenaten after his death. The division of theocratic power can be disputed, as happened between the pope and the Holy Roman Emperor in the investiture conflict, when the temporal power sought to control key clergy nominations in order to guarantee popular support, and thereby its own legitimacy, by incorporating the formal ceremony of unction during coronation. The notion of a social contract holds that the nation, either the whole people or the electorate, gives a mandate, through acclamation or election. Individual heads of state may also acquire their position by virtue of a constitution. An example is the Socialist Federal Republic of Yugoslavia: article 333 of the 1974 Yugoslav Constitution provided that the Federal Assembly could appoint Josip Broz Tito, named explicitly, as president of the republic without limitation of term. The position of a monarch is usually hereditary, but in constitutional monarchies there are usually restrictions on the incumbent's exercise of powers, and prohibitions on the possibility of choosing a successor by means other than birth.
In a hereditary monarchy, the position of monarch is inherited according to a statutory or customary order of succession, usually within one royal family tracing its origin through a historical dynasty or bloodline. This usually means that the heir to the throne is known well in advance of becoming monarch, to ensure a smooth succession. However, cases of uncertain succession in European history have often led to wars of succession. Primogeniture, in which the eldest child of the monarch is first in line to become monarch, is the most common system in hereditary monarchy. The order of succession is usually affected by rules on gender. Historically "agnatic primogeniture" or "patrilineal primogeniture" was favoured, that is, inheritance according to seniority of birth among the sons of a monarch or head of family, with sons and their male issue inheriting before brothers and their issue, and male-line males inheriting before females of the male line. This is the same as semi-Salic primogeniture. Complete exclusion of females from dynastic succession is commonly referred to as application of the Salic law (see "Terra salica"). Before primogeniture was enshrined in European law and tradition, kings would often secure the succession by having their successor (usually their eldest son) crowned during their own lifetime, so for a time there would be two kings in coregency: a senior king and a junior king. Examples include Henry the Young King of England and the early Direct Capetians in France. Sometimes, however, primogeniture can operate through the female line. In some systems a female may rule as monarch only when the male line dating back to a common ancestor is exhausted. In 1980, Sweden, by rewriting its 1810 Act of Succession, became the first European monarchy to declare equal (full cognatic) primogeniture, meaning that the eldest child of the monarch, whether female or male, ascends to the throne. Other European monarchies (such as the Netherlands in 1983, Norway in 1990 and Belgium in 1991) have since followed suit. Similar reforms were proposed in 2011 for the United Kingdom and the other Commonwealth realms, and came into effect in 2015 after having been approved by all of the affected nations. (The main variants of primogeniture are sketched in code further below.) Sometimes religion is a factor; under the Act of Settlement 1701 all Roman Catholics and all persons who have married Roman Catholics are ineligible to be the British monarch and are skipped in the order of succession. In some monarchies there may be liberty for the incumbent, or some body convening after his or her demise, to choose from eligible members of the ruling house, often limited to legitimate descendants of the dynasty's founder. Rules of succession may be further limited by state religion, residency, equal marriage or even permission from the legislature. Other hereditary systems of succession included tanistry, which is semi-elective and gives weight to merit, and agnatic seniority. In some monarchies, such as Saudi Arabia, succession to the throne usually first passes to the monarch's next eldest brother, and only after that to the monarch's children (agnatic seniority). Election is usually the constitutional way to choose the head of state of a republic, and of some monarchies, either directly through popular election, indirectly by members of the legislature or of a special college of electors (such as the Electoral College in the United States), or as an exclusive prerogative.
Exclusive prerogative allows the heads of state of constituent monarchies of a federation to choose the head of state for the federation from among themselves, as in the United Arab Emirates and Malaysia. The pope, head of state of Vatican City, is chosen by previously appointed cardinals under 80 years of age from among themselves in a papal conclave. A head of state can also be empowered to designate his successor, such as Lord Protector of the Commonwealth Oliver Cromwell, who was succeeded by his son Richard. A head of state may seize power by force or revolution. This is not the same as the use of force to "maintain" power, as practised by authoritarian or totalitarian rulers. Dictators often use democratic titles, though some proclaim themselves monarchs. Examples of the latter include Emperor Napoleon I of France and King Zog of Albania. In Spain, General Francisco Franco adopted the formal title "Jefe del Estado", or Chief of State, and established himself as regent for a vacant monarchy. Uganda's Idi Amin was one of several who named themselves president for life. A foreign power can also establish a branch of its own dynasty, or one friendly to its interests. This was the outcome of the Russo-Swedish War of 1741 to 1743, in which the Russian empress imposed, as a peace condition, her relative Adolf Frederick as heir to the Swedish throne, to succeed Frederick I, who lacked legitimate issue. Apart from violent overthrow, a head of state's position can be lost in several ways, including death, expiration of the constitutional term of office, abdication, or resignation. In some cases, an abdication cannot occur unilaterally, but comes into effect only when approved by an act of parliament, as in the case of British King Edward VIII. The post can also be abolished by constitutional change; in such cases, an incumbent may be allowed to finish his or her term. Of course, the position of head of state will cease to exist if the state itself does. Heads of state generally enjoy the widest inviolability, although some states allow impeachment, or a similar constitutional procedure by which the highest legislative or judicial authorities are empowered to revoke the head of state's mandate on exceptional grounds. This may be a common crime, a political sin, or an act by which he or she violates such provisions as an established religion mandatory for the monarch. By similar procedure, an original mandate may be declared invalid.
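The primogeniture rules described earlier are, in effect, tree-traversal algorithms over a royal family tree. The following minimal Python sketch (with hypothetical names; it deliberately ignores legitimacy, religion, regencies and the other statutory limits discussed above) shows how agnatic, male-preference and equal (absolute) primogeniture reorder the same line of succession:

```python
# Illustrative sketch only: real succession rules are set by each state's law,
# not by any standard algorithm. The family below is entirely hypothetical.
from dataclasses import dataclass, field

@dataclass
class Person:
    name: str
    male: bool
    children: list = field(default_factory=list)  # listed in order of birth

def succession(root, rule="absolute"):
    """Depth-first order of succession among a monarch's descendants.

    rule = "absolute":        eldest child first regardless of sex
                              (equal primogeniture, as Sweden adopted in 1980)
    rule = "male_preference": sons and their lines rank before daughters
    rule = "agnatic":         female lines excluded entirely (Salic law)
    """
    line = []
    kids = root.children
    if rule == "male_preference":
        kids = [c for c in kids if c.male] + [c for c in kids if not c.male]
    elif rule == "agnatic":
        kids = [c for c in kids if c.male]
    for child in kids:
        line.append(child.name)
        line.extend(succession(child, rule))  # each heir's own line follows
    return line

# A hypothetical monarch with a first-born daughter and a younger son.
monarch = Person("Monarch", True, [
    Person("Alice", False, [Person("Anna", False)]),
    Person("Bertram", True),
])
print(succession(monarch, "absolute"))         # ['Alice', 'Anna', 'Bertram']
print(succession(monarch, "male_preference"))  # ['Bertram', 'Alice', 'Anna']
print(succession(monarch, "agnatic"))          # ['Bertram']
```

The same traversal with a different ordering of children is why reforms such as Sweden's 1980 Act of Succession rewrite could reorder an existing line of succession without altering anyone's descent.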
Former presidents of the United States, while holding no political powers per se, sometimes continue to exert influence in national and world affairs. A monarch may retain his style and certain prerogatives after abdication, as did King Leopold III of Belgium, who left the throne to his son after winning a referendum which allowed him to retain a full royal household, but which deprived him of a constitutional or representative role. Napoleon transformed the Italian principality of Elba, where he was imprisoned, into a miniature version of his First Empire, with most trappings of a sovereign monarchy, until his "Cent Jours" escape and reseizure of power in France convinced his opponents, reconvening the Congress of Vienna in 1815, to revoke his gratuitous privileges and send him to die in exile on barren Saint Helena. By tradition, deposed monarchs who have not freely abdicated continue to use their monarchical titles as a courtesy for the rest of their lives. Hence, even after Constantine II ceased to be "King of the Hellenes", it is still common to refer to the deposed king and his family as if Constantine II were still on the throne, as many European royal courts and households do in guest lists at royal weddings, as in Sweden in 2010, Britain in 2011 and Luxembourg in 2012. The Hellenic Republic opposes the right of its deposed monarch and former royal family members to be referred to by their former titles or to bear a surname indicating royal status, and has enacted legislation hindering acquisition of Greek citizenship unless those terms are met. The former king brought this issue, along with property ownership issues, before the European Court of Human Rights for alleged violations of the European Convention on Human Rights, but lost with respect to the name issue. However, some other states have no problem with deposed monarchs being referred to by their former title, and even allow them to travel internationally on the state's diplomatic passport. The Italian constitution provides that a former president of the Republic takes the title President Emeritus of the Italian Republic; he or she is also a senator for life, and enjoys certain privileges, such as immunity, flight status and official residences.
Heredity Heredity, also called inheritance or biological inheritance, is the passing on of traits from parents to their offspring; either through asexual reproduction or sexual reproduction, the offspring cells or organisms acquire the genetic information of their parents. Through heredity, variations between individuals can accumulate and cause species to evolve by natural selection. The study of heredity in biology is genetics. In humans, eye color is an example of an inherited characteristic: an individual might inherit the "brown-eye trait" from one of the parents. Inherited traits are controlled by genes, and the complete set of genes within an organism's genome is called its genotype. The complete set of observable traits of the structure and behavior of an organism is called its phenotype. These traits arise from the interaction of its genotype with the environment. As a result, many aspects of an organism's phenotype are not inherited. For example, suntanned skin comes from the interaction between a person's genotype and sunlight; thus, suntans are not passed on to people's children. However, some people tan more easily than others, due to differences in their genotype: a striking example is people with the inherited trait of albinism, who do not tan at all and are very sensitive to sunburn. Heritable traits are known to be passed from one generation to the next via DNA, a molecule that encodes genetic information. DNA is a long polymer that incorporates four types of bases, which are interchangeable. The nucleic acid sequence (the sequence of bases along a particular DNA molecule) specifies the genetic information: this is comparable to a sequence of letters spelling out a passage of text. Before a cell divides through mitosis, the DNA is copied, so that each of the resulting two cells will inherit the DNA sequence. A portion of a DNA molecule that specifies a single functional unit is called a gene; different genes have different sequences of bases. Within cells, the long strands of DNA form condensed structures called chromosomes. Organisms inherit genetic material from their parents in the form of homologous chromosomes, containing a unique combination of DNA sequences that code for genes. The specific location of a DNA sequence within a chromosome is known as a locus. If the DNA sequence at a particular locus varies between individuals, the different forms of this sequence are called alleles. DNA sequences can change through mutations, producing new alleles. If a mutation occurs within a gene, the new allele may affect the trait that the gene controls, altering the phenotype of the organism. However, while this simple correspondence between an allele and a trait works in some cases, most traits are more complex and are controlled by multiple interacting genes within and among organisms. Developmental biologists suggest that complex interactions in genetic networks and communication among cells can lead to heritable variations that may underlie some of the mechanics in developmental plasticity and canalization. Recent findings have confirmed important examples of heritable changes that cannot be explained by direct agency of the DNA molecule. These phenomena are classed as epigenetic inheritance systems, which evolve either causally linked to, or independently of, genes.
Research into modes and mechanisms of epigenetic inheritance is still in its scientific infancy; however, this area of research has attracted much recent activity as it broadens the scope of heritability and evolutionary biology in general. DNA methylation marking chromatin, self-sustaining metabolic loops, gene silencing by RNA interference, and the three-dimensional conformation of proteins (such as prions) are areas where epigenetic inheritance systems have been discovered at the organismic level. Heritability may also occur at even larger scales. For example, ecological inheritance through the process of niche construction is defined by the regular and repeated activities of organisms in their environment. This generates a legacy of effects that modify and feed back into the selection regime of subsequent generations. Descendants inherit genes plus environmental characteristics generated by the ecological actions of ancestors. Other examples of heritability in evolution that are not under the direct control of genes include the inheritance of cultural traits, group heritability, and symbiogenesis. These examples of heritability that operate above the gene are covered broadly under the title of multilevel or hierarchical selection, which has been a subject of intense debate in the history of evolutionary science. When Charles Darwin proposed his theory of evolution in 1859, one of its major problems was the lack of an underlying mechanism for heredity. Darwin believed in a mix of blending inheritance and the inheritance of acquired traits (pangenesis). Blending inheritance would lead to uniformity across populations in only a few generations and would thus remove from a population the variation on which natural selection could act. This led to Darwin adopting some Lamarckian ideas in later editions of "On the Origin of Species" and his later biological works. Darwin's primary approach to heredity was to outline how it appeared to work (noticing that traits that were not expressed explicitly in the parent at the time of reproduction could be inherited, that certain traits could be sex-linked, etc.) rather than suggesting mechanisms. Darwin's initial model of heredity was adopted by, and then heavily modified by, his cousin Francis Galton, who laid the framework for the biometric school of heredity. Galton found no evidence to support the aspects of Darwin's pangenesis model, which relied on acquired traits. The inheritance of acquired traits was shown to have little basis in the 1880s, when August Weismann cut the tails off many generations of mice and found that their offspring continued to develop tails. Scientists in Antiquity had a variety of ideas about heredity: Theophrastus proposed that male flowers caused female flowers to ripen; Hippocrates speculated that "seeds" were produced by various body parts and transmitted to offspring at the time of conception; and Aristotle thought that male and female fluids mixed at conception. Aeschylus, in 458 BC, proposed the male as the parent, with the female as a "nurse for the young life sown within her". Ancient understandings of heredity transitioned to two debated doctrines in the 18th century. The Doctrine of Epigenesis and the Doctrine of Preformation were two distinct views of the understanding of heredity. The Doctrine of Epigenesis, originated by Aristotle, claimed that an embryo continually develops; the modifications of the parent's traits are passed to the embryo during its lifetime.
The foundation of this doctrine was based on the theory of the inheritance of acquired traits. In direct opposition, the Doctrine of Preformation claimed that "like generates like": the germ would evolve to yield offspring similar to the parents. The Preformationist view held that procreation was an act of revealing what had been created long before. However, this was disputed by the creation of the cell theory in the 19th century, where the fundamental unit of life is the cell, and not some preformed parts of an organism. Various hereditary mechanisms, including blending inheritance, were also envisaged without being properly tested or quantified, and were later disputed. Nevertheless, people were able to develop domestic breeds of animals as well as crops through artificial selection. The inheritance of acquired traits also formed a part of early Lamarckian ideas on evolution. In the late 17th century, the Dutch microscopist Antonie van Leeuwenhoek (1632–1723) discovered "animalcules" in the sperm of humans and other animals. Some scientists speculated they saw a "little man" (homunculus) inside each sperm. These scientists formed a school of thought known as the "spermists". They contended the only contributions of the female to the next generation were the womb in which the homunculus grew, and prenatal influences of the womb. An opposing school of thought, the ovists, believed that the future human was in the egg, and that sperm merely stimulated the growth of the egg. Ovists thought women carried eggs containing boy and girl children, and that the sex of the offspring was determined well before conception. The idea of the particulate inheritance of genes can be attributed to the Moravian monk Gregor Mendel, who published his work on pea plants in 1865. However, his work was not widely known and was rediscovered in 1901. It was initially assumed that Mendelian inheritance only accounted for large (qualitative) differences, such as those seen by Mendel in his pea plants; the idea of the additive effect of (quantitative) genes was not realised until R. A. Fisher's 1918 paper, "The Correlation Between Relatives on the Supposition of Mendelian Inheritance". Mendel's overall contribution gave scientists a useful overview that traits were inheritable. His pea plant demonstration became the foundation of the study of Mendelian traits, which can be traced to a single locus. In the 1930s, work by Fisher and others resulted in a combination of the Mendelian and biometric schools into the modern evolutionary synthesis. The modern synthesis bridged the gap between experimental geneticists and naturalists, and between both and palaeontologists. The idea that speciation occurs after populations are reproductively isolated has been much debated. In plants, polyploidy must be included in any view of speciation. Formulations such as 'evolution consists primarily of changes in the frequencies of alleles between one generation and another' were proposed rather later. The traditional view is that developmental biology ('evo-devo') played little part in the synthesis, but an account of Gavin de Beer's work by Stephen Jay Gould suggests he may be an exception. Almost all aspects of the synthesis have been challenged at times, with varying degrees of success. There is no doubt, however, that the synthesis was a great landmark in evolutionary biology. It cleared up many confusions, and was directly responsible for stimulating a great deal of research in the post-World War II era.
Trofim Lysenko, however, caused a backlash of what is now called Lysenkoism in the Soviet Union when he emphasised Lamarckian ideas on the inheritance of acquired traits. This movement affected agricultural research, led to food shortages in the 1960s, and seriously harmed the USSR. There is growing evidence that there is transgenerational inheritance of epigenetic changes in humans and other animals. The description of a mode of biological inheritance consists of three main categories, which are part of every exact description of a mode of inheritance; further specifications may be added. Determination and description of a mode of inheritance is achieved primarily through statistical analysis of pedigree data. If the involved loci are known, methods of molecular genetics can also be employed. An allele is said to be dominant if it is always expressed in the appearance of an organism (phenotype) provided that at least one copy of it is present. For example, in peas the allele for green pods, "G", is dominant to that for yellow pods, "g". Thus pea plants with the pair of alleles either "GG" (homozygote) or "Gg" (heterozygote) will have green pods. The allele for yellow pods is recessive. The effects of this allele are only seen when it is present on both chromosomes, "gg" (homozygote). This derives from zygosity, the degree to which both copies of a chromosome or gene have the same genetic sequence; in other words, the degree of similarity of the alleles in an organism.
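As a quick illustration of the pea example above, here is a minimal Python sketch (not from the original text) that enumerates a monohybrid Gg × Gg cross and tallies genotypes and phenotypes, assuming "G" (green pods) is fully dominant over "g" (yellow pods):

```python
from collections import Counter
from itertools import product

def cross(parent1: str, parent2: str) -> Counter:
    """Enumerate all gamete combinations of two single-locus genotypes."""
    offspring = Counter()
    for a, b in product(parent1, parent2):
        # Sort so that "gG" and "Gg" count as the same genotype.
        offspring["".join(sorted((a, b)))] += 1
    return offspring

genotypes = cross("Gg", "Gg")
print(genotypes)  # Counter({'Gg': 2, 'GG': 1, 'gg': 1})

# "G" is dominant: one copy suffices for green pods.
phenotypes = Counter(
    "green" if "G" in g else "yellow"
    for g, n in genotypes.items()
    for _ in range(n)
)
print(phenotypes)  # Counter({'green': 3, 'yellow': 1}) -- the classic 3:1 ratio
```

The 3:1 phenotype ratio recovered here is exactly the pattern Mendel observed in his pea crosses.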
https://en.wikipedia.org/wiki?curid=13457
List of historical drama films and series set in Near Eastern and Western civilization The historical drama or period drama is a film genre in which stories are based upon historical events and famous people. Some historical dramas are docudramas, which attempt an accurate portrayal of a historical event or biography, to the degree that the available historical research will allow. Other historical dramas are fictionalized tales that are based on an actual person and their deeds, such as "Braveheart", which is loosely based on the 13th-century knight William Wallace's fight for Scotland's independence. Due to the sheer volume of films included in this genre and in the interest of continuity, this list is primarily focused on films pertaining to the history of Near Eastern and Western civilization. For films pertaining to the history of East Asia, Central Asia, and South Asia, please refer also to the list of Asian historical drama films.
https://en.wikipedia.org/wiki?curid=13458
H. G. Wells Herbert George Wells (21 September 1866 – 13 August 1946) was an English writer. Prolific in many genres, he wrote dozens of novels, short stories, and works of social commentary, history, satire, biography and autobiography. His work also included two books on recreational war games. Wells is now best remembered for his science fiction novels and is often called the "father of science fiction", along with Jules Verne and the publisher Hugo Gernsback. During his own lifetime, however, he was most prominent as a forward-looking, even prophetic social critic who devoted his literary talents to the development of a progressive vision on a global scale. A futurist, he wrote a number of utopian works and foresaw the advent of aircraft, tanks, space travel, nuclear weapons, satellite television and something resembling the World Wide Web. His science fiction imagined time travel, alien invasion, invisibility, and biological engineering. Brian Aldiss referred to Wells as the "Shakespeare of science fiction". Wells rendered his works convincing by instilling commonplace detail alongside a single extraordinary assumption – dubbed “Wells's law” – leading Joseph Conrad to hail him in 1898 as "O Realist of the Fantastic!". His most notable science fiction works include "The Time Machine" (1895), "The Island of Doctor Moreau" (1896), "The Invisible Man" (1897), "The War of the Worlds" (1898) and the military science fiction "The War in the Air" (1907). Wells was nominated for the Nobel Prize in Literature four times. Wells's earliest specialised training was in biology, and his thinking on ethical matters took place in a specifically and fundamentally Darwinian context. He was also from an early date an outspoken socialist, often (but not always, as at the beginning of the First World War) sympathising with pacifist views. His later works became increasingly political and didactic, and he wrote little science fiction, while he sometimes indicated on official documents that his profession was that of journalist. Novels such as "Kipps" and "The History of Mr Polly", which describe lower-middle-class life, led to the suggestion that he was a worthy successor to Charles Dickens, but Wells described a range of social strata and even attempted, in "Tono-Bungay" (1909), a diagnosis of English society as a whole. Wells was a diabetic and co-founded the charity The Diabetic Association (known today as Diabetes UK) in 1934. Herbert George Wells was born at Atlas House, 162 High Street in Bromley, Kent, on 21 September 1866. Called "Bertie" by his family, he was the fourth and last child of Sarah Neal, a former domestic servant, and Joseph Wells, a former domestic gardener, and at the time a shopkeeper and professional cricketer. An inheritance had allowed the family to acquire a shop in which they sold china and sporting goods, although it failed to prosper: the stock was old and worn out, and the location was poor. Joseph Wells managed to earn a meagre income, but little of it came from the shop and he received an unsteady amount of money from playing professional cricket for the Kent county team. Payment for skilled bowlers and batsmen came from voluntary donations afterwards, or from small payments from the clubs where matches were played. A defining incident of young Wells's life was an accident in 1874 that left him bedridden with a broken leg. To pass the time he began to read books from the local library, brought to him by his father. 
He soon became devoted to the other worlds and lives to which books gave him access; they also stimulated his desire to write. Later that year he entered Thomas Morley's Commercial Academy, a private school founded in 1849, following the bankruptcy of Morley's earlier school. The teaching was erratic, the curriculum mostly focused, Wells later said, on producing copperplate handwriting and doing the sort of sums useful to tradesmen. Wells continued at Morley's Academy until 1880. In 1877, his father, Joseph Wells, suffered a fractured thigh. The accident effectively put an end to Joseph's career as a cricketer, and his subsequent earnings as a shopkeeper were not enough to compensate for the loss of the primary source of family income. No longer able to support themselves financially, the family instead sought to place their sons as apprentices in various occupations. From 1880 to 1883, Wells had an unhappy apprenticeship as a draper at the Southsea Drapery Emporium, Hyde's. His experiences at Hyde's, where he worked a thirteen-hour day and slept in a dormitory with other apprentices, later inspired his novels "The Wheels of Chance", "The History of Mr Polly", and "Kipps", which portray the life of a draper's apprentice as well as providing a critique of society's distribution of wealth. Wells's parents had a turbulent marriage, owing primarily to his mother's being a Protestant and his father's being a freethinker. When his mother returned to work as a lady's maid (at Uppark, a country house in Sussex), one of the conditions of work was that she would not be permitted to have living space for her husband and children. Thereafter, she and Joseph lived separate lives, though they never divorced and remained faithful to each other. As a consequence, Herbert's personal troubles increased as he subsequently failed as a draper and also, later, as a chemist's assistant. However, Uppark had a magnificent library in which he immersed himself, reading many classic works, including Plato's "Republic", Thomas More's "Utopia", and the works of Daniel Defoe. This was the beginning of Wells's venture into literature. In October 1879, Wells's mother arranged through a distant relative, Arthur Williams, for him to join the National School at Wookey in Somerset as a pupil–teacher, a senior pupil who acted as a teacher of younger children. In December that year, however, Williams was dismissed for irregularities in his qualifications and Wells was returned to Uppark. After a short apprenticeship at a chemist in nearby Midhurst and an even shorter stay as a boarder at Midhurst Grammar School, he signed his apprenticeship papers at Hyde's. In 1883, Wells persuaded his parents to release him from the apprenticeship, taking an opportunity offered by Midhurst Grammar School again to become a pupil–teacher; his proficiency in Latin and science during his earlier short stay had been remembered. The years he spent in Southsea had been the most miserable of his life to that point, but his good fortune at securing a position at Midhurst Grammar School meant that Wells could continue his self-education in earnest. The following year, Wells won a scholarship to the Normal School of Science (later the Royal College of Science in South Kensington, now part of Imperial College London) in London, studying biology under Thomas Henry Huxley. As an alumnus, he later helped to set up the Royal College of Science Association, of which he became the first president in 1909. 
Wells studied in his new school until 1887, with a weekly allowance of 21 shillings (a guinea) thanks to his scholarship. This ought to have been a comfortable sum of money (at the time many working class families had "round about a pound a week" as their entire household income) yet in his "Experiment in Autobiography", Wells speaks of constantly being hungry, and indeed photographs of him at the time show a youth who is very thin and malnourished. He soon entered the Debating Society of the school. These years mark the beginning of his interest in a possible reformation of society. At first approaching the subject through Plato's "Republic", he soon turned to contemporary ideas of socialism as expressed by the recently formed Fabian Society and free lectures delivered at Kelmscott House, the home of William Morris. He was also among the founders of "The Science School Journal", a school magazine that allowed him to express his views on literature and society, as well as trying his hand at fiction; a precursor to his novel "The Time Machine" was published in the journal under the title "The Chronic Argonauts". The school year 1886–87 was the last year of his studies. During 1888, Wells stayed in Stoke-on-Trent, living in Basford. The unique environment of The Potteries was certainly an inspiration. He wrote in a letter to a friend from the area that "the district made an immense impression on me." The inspiration for some of his descriptions in "The War of the Worlds" is thought to have come from his short time spent here, seeing the iron foundry furnaces burn over the city, shooting huge red light into the skies. His stay in The Potteries also resulted in the macabre short story "The Cone" (1895, contemporaneous with his famous "The Time Machine"), set in the north of the city. After teaching for some time, he was briefly on the staff of Holt Academy in Wales – Wells found it necessary to supplement his knowledge relating to educational principles and methodology and entered the College of Preceptors (College of Teachers). He later received his Licentiate and Fellowship FCP diplomas from the College. It was not until 1890 that Wells earned a Bachelor of Science degree in zoology from the University of London External Programme. In 1889–90, he managed to find a post as a teacher at Henley House School in London, where he taught A. A. Milne (whose father ran the school). His first published work was a "Text-Book of Biology" in two volumes (1893). Upon leaving the Normal School of Science, Wells was left without a source of income. His aunt Mary—his father's sister-in-law—invited him to stay with her for a while, which solved his immediate problem of accommodation. During his stay at his aunt's residence, he grew increasingly interested in her daughter, Isabel, whom he later courted. To earn money, he began writing short humorous articles for journals such as "The Pall Mall Gazette", later collecting these in volume form as "Select Conversations with an Uncle" (1895) and "Certain Personal Matters" (1897). So prolific did Wells become at this mode of journalism that many of his early pieces remain unidentified. According to David C Smith, "Most of Wells's occasional pieces have not been collected, and many have not even been identified as his. Wells did not automatically receive the byline his reputation demanded until after 1896 or so ... As a result, many of his early pieces are unknown. It is obvious that many early Wells items have been lost." 
His success with these shorter pieces encouraged him to write book-length work, and he published his first novel, "The Time Machine", in 1895. In 1891, Wells married his cousin Isabel Mary Wells (1865–1931; from 1902 Isabel Mary Smith). The couple agreed to separate in 1894, when he had fallen in love with one of his students, Amy Catherine Robbins (1872–1927; later known as Jane), with whom he moved to Woking, Surrey in May 1895. They lived in a rented house, 'Lynton' (now No. 141), Maybury Road in the town centre for just under 18 months and married at St Pancras register office in October 1895. His short period in Woking was perhaps the most creative and productive of his whole writing career, for while there he planned and wrote "The War of the Worlds" and "The Time Machine", completed "The Island of Doctor Moreau", wrote and published "The Wonderful Visit" and "The Wheels of Chance", and began writing two other early books, "When the Sleeper Wakes" and "Love and Mr Lewisham". In late summer 1896, Wells and Jane moved to a larger house in Worcester Park, near Kingston upon Thames, for two years; this lasted until his poor health took them to Sandgate, near Folkestone, where he constructed a large family home, Spade House, in 1901. He had two sons with Jane: George Philip (known as "Gip"; 1901–1985) and Frank Richard (1903–1982). Jane died on 6 October 1927, in Dunmow, at the age of 55. Wells had affairs with a significant number of women. In December 1909, he had a daughter, Anna-Jane, with the writer Amber Reeves, whose parents, William and Maud Pember Reeves, he had met through the Fabian Society. Amber had married the barrister G. R. Blanco White in July of that year, as co-arranged by Wells. After Beatrice Webb voiced disapproval of Wells' "sordid intrigue" with Amber, he responded by lampooning Beatrice Webb and her husband Sidney Webb in his 1911 novel "The New Machiavelli" as 'Altiora and Oscar Bailey', a pair of short-sighted, bourgeois manipulators. Between 1910 and 1913, the novelist Elizabeth von Arnim was one of his mistresses. In 1914, he had a son, Anthony West (1914–1987), by the novelist and feminist Rebecca West, 26 years his junior. In 1920–21, and intermittently until his death, he had a love affair with the American birth control activist Margaret Sanger. Between 1924 and 1933 he partnered with the Dutch adventurer and writer Odette Keun, 22 years his junior, with whom he lived in "Lou Pidou", a house they built together in Grasse, France. Wells dedicated his longest book to her ("The World of William Clissold", 1926). While visiting Maxim Gorky in Russia in 1920, he had slept with Gorky's mistress Moura Budberg, then still Countess Benckendorf and 27 years his junior. In 1933, when she left Gorky and emigrated to London, their relationship was renewed and she cared for him through his final illness. Wells asked her to marry him repeatedly, but Budberg strongly rejected his proposals. In "Experiment in Autobiography" (1934), Wells wrote: "I was never a great amorist, though I have loved several people very deeply". David Lodge's novel "A Man of Parts" (2011) – a 'narrative based on factual sources' (author's note) – gives a convincing and generally sympathetic account of Wells's relations with the women mentioned above, and others. Director Simon Wells (born 1961), the author's great-grandson, was a consultant on the future scenes in "Back to the Future Part II" (1989). One of the ways that Wells expressed himself was through his drawings and sketches.
One common location for these was the endpapers and title pages of his own diaries, and they covered a wide variety of topics, from political commentary to his feelings toward his literary contemporaries and his current romantic interests. During his marriage to Amy Catherine, whom he nicknamed Jane, he drew a considerable number of pictures, many of them being overt comments on their marriage. During this period, he called these pictures "picshuas". These picshuas have been the topic of study by Wells scholars for many years, and in 2006 a book was published on the subject. Some of his early novels, called "scientific romances", invented several themes now classic in science fiction, in such works as "The Time Machine", "The Island of Doctor Moreau", "The Invisible Man", "The War of the Worlds", "When the Sleeper Wakes", and "The First Men in the Moon". He also wrote realistic novels that received critical acclaim, including "Kipps" and a critique of English culture during the Edwardian period, "Tono-Bungay". Wells also wrote dozens of short stories and novellas, including "The Flowering of the Strange Orchid", which helped bring the full impact of Darwin's revolutionary botanical ideas to a wider public, and was followed by many later successes such as "The Country of the Blind" (1904). According to James Gunn, one of Wells's major contributions to the science fiction genre was his approach, which he referred to as his "new system of ideas". In his opinion, the author should always strive to make the story as credible as possible, even if both the writer and the reader know certain elements are impossible, allowing the reader to accept the ideas as something that could really happen, today referred to as "the plausible impossible" and "suspension of disbelief". While neither invisibility nor time travel was new in speculative fiction, Wells added a sense of realism to concepts with which readers were not yet familiar. He conceived the idea of using a vehicle that allows an operator to travel purposely and selectively forwards or backwards in time. The term "time machine", coined by Wells, is now almost universally used to refer to such a vehicle. He explained that while writing "The Time Machine", he realised that "the more impossible the story I had to tell, the more ordinary must be the setting, and the circumstances in which I now set the Time Traveller were all that I could imagine of solid upper-class comforts." Under "Wells's law", a science fiction story should contain only a single extraordinary assumption. Aware that the notion of magic as something real had disappeared from society, he therefore used scientific ideas and theories as a substitute for magic to justify the impossible. Wells's best-known statement of the "law" appears in his introduction to a collection of his works published in 1934: As soon as the magic trick has been done the whole business of the fantasy writer is to keep everything else human and real. Touches of prosaic detail are imperative and a rigorous adherence to the hypothesis. Any extra fantasy outside the cardinal assumption immediately gives a touch of irresponsible silliness to the invention. Dr. Griffin / The Invisible Man is a brilliant research scientist who discovers a method of invisibility, but finds himself unable to reverse the process. An enthusiast of random and irresponsible violence, Griffin has become an iconic character in horror fiction.
"The Island of Doctor Moreau" sees a shipwrecked man left on the island home of Doctor Moreau, a mad scientist who creates human-like hybrid beings from animals via vivisection. The earliest depiction of uplift, the novel deals with a number of philosophical themes, including pain and cruelty, moral responsibility, human identity, and human interference with nature. Though "Tono-Bungay" is not a science-fiction novel, radioactive decay plays a small but consequential role in it. Radioactive decay plays a much larger role in "The World Set Free" (1914). This book contains what is surely his biggest prophetic "hit", with the first description of a nuclear weapon. Scientists of the day were well aware that the natural decay of radium releases energy at a slow rate over thousands of years. The "rate" of release is too slow to have practical utility, but the "total amount" released is huge. Wells's novel revolves around an (unspecified) invention that accelerates the process of radioactive decay, producing bombs that explode with no more than the force of ordinary high explosives—but which "continue to explode" for days on end. "Nothing could have been more obvious to the people of the earlier twentieth century", he wrote, "than the rapidity with which war was becoming impossible ... [but] they did not see it until the atomic bombs burst in their fumbling hands". In 1932, the physicist and conceiver of nuclear chain reaction Leó Szilárd read "The World Set Free" (the same year Sir James Chadwick discovered the neutron), a book which he said made a great impression on him. Wells also wrote non-fiction. His first non-fiction bestseller was "Anticipations of the Reaction of Mechanical and Scientific Progress upon Human Life and Thought" (1901). When originally serialised in a magazine it was subtitled "An Experiment in Prophecy", and is considered his most explicitly futuristic work. It offered the immediate political message of the privileged sections of society continuing to bar capable men from other classes from advancement until war would force a need to employ those most able, rather than the traditional upper classes, as leaders. Anticipating what the world would be like in the year 2000, the book is interesting both for its hits (trains and cars resulting in the dispersion of populations from cities to suburbs; moral restrictions declining as men and women seek greater sexual freedom; the defeat of German militarism, and the existence of a European Union) and its misses (he did not expect successful aircraft before 1950, and averred that "my imagination refuses to see any sort of submarine doing anything but suffocate its crew and founder at sea"). His bestselling two-volume work, "The Outline of History" (1920), began a new era of popularised world history. It received a mixed critical response from professional historians. However, it was very popular amongst the general population and made Wells a rich man. Many other authors followed with "Outlines" of their own in other subjects. He reprised his "Outline" in 1922 with a much shorter popular work, "A Short History of the World", a history book praised by Albert Einstein, and two long efforts, "The Science of Life" (1930)—written with his son G. P. Wells and evolutionary biologist Julian Huxley, and "The Work, Wealth and Happiness of Mankind" (1931). 
The "Outlines" became sufficiently common for James Thurber to parody the trend in his humorous essay, "An Outline of Scientists"—indeed, Wells's "Outline of History" remains in print with a new 2005 edition, while "A Short History of the World" has been re-edited (2006). From quite early in Wells's career, he sought a better way to organise society and wrote a number of Utopian novels. The first of these was "A Modern Utopia" (1905), which shows a worldwide utopia with "no imports but meteorites, and no exports at all"; two travellers from our world fall into its alternate history. The others usually begin with the world rushing to catastrophe, until people realise a better way of living: whether by mysterious gases from a comet causing people to behave rationally and abandoning a European war ("In the Days of the Comet" (1906)), or a world council of scientists taking over, as in "The Shape of Things to Come" (1933, which he later adapted for the 1936 Alexander Korda film, "Things to Come"). This depicted, all too accurately, the impending World War, with cities being destroyed by aerial bombs. He also portrayed the rise of fascist dictators in "The Autocracy of Mr Parham" (1930) and "The Holy Terror" (1939). "Men Like Gods" (1923) is also a utopian novel. Wells in this period was regarded as an enormously influential figure; the critic Malcolm Cowley stated: "by the time he was forty, his influence was wider than any other living English writer". Wells contemplates the ideas of nature and nurture and questions humanity in books such as "The Island of Doctor Moreau". Not all his scientific romances ended in a Utopia, and Wells also wrote a dystopian novel, "When the Sleeper Wakes" (1899, rewritten as "The Sleeper Awakes", 1910), which pictures a future society where the classes have become more and more separated, leading to a revolt of the masses against the rulers. "The Island of Doctor Moreau" is even darker. The narrator, having been trapped on an island of animals vivisected (unsuccessfully) into human beings, eventually returns to England; like Gulliver on his return from the Houyhnhnms, he finds himself unable to shake off the perceptions of his fellow humans as barely civilised beasts, slowly reverting to their animal natures. Wells also wrote the preface for the first edition of W. N. P. Barbellion's diaries, "The Journal of a Disappointed Man", published in 1919. Since "Barbellion" was the real author's pen name, many reviewers believed Wells to have been the true author of the "Journal"; Wells always denied this, despite being full of praise for the diaries. In 1927, a Canadian teacher and writer Florence Deeks unsuccessfully sued Wells for infringement of copyright and breach of trust, claiming that much of "The Outline of History" had been plagiarised from her unpublished manuscript, "The Web of the World's Romance", which had spent nearly nine months in the hands of Wells's Canadian publisher, Macmillan Canada. However, it was sworn on oath at the trial that the manuscript remained in Toronto in the safekeeping of Macmillan, and that Wells did not even know it existed, let alone had seen it. The court found no proof of copying, and decided the similarities were due to the fact that the books had similar nature and both writers had access to the same sources. In 2000, A. B. McKillop, a professor of history at Carleton University, produced a book on the case, "The Spinster & The Prophet: Florence Deeks, H. G. Wells, and the Mystery of the Purloined Past". 
According to McKillop, the lawsuit was unsuccessful due to the prejudice against a woman suing a well-known and famous male author, and he paints a detailed story based on the circumstantial evidence of the case. In 2004, Denis N. Magnusson, Professor Emeritus of the Faculty of Law, Queen's University, Ontario, published an article on "Deeks v. Wells" which re-examines the case in relation to McKillop's book. While having some sympathy for Deeks, he argues that she had a weak case that was not well presented, and though she may have met with sexism from her lawyers, she received a fair trial, adding that the law applied is essentially the same law that would be applied to a similar case today (i.e., 2004). In 1933, Wells predicted in "The Shape of Things to Come" that the world war he feared would begin in January 1940, a prediction which ultimately came true four months early, in September 1939, with the outbreak of World War II. In 1936, before the Royal Institution, Wells called for the compilation of a constantly growing and changing World Encyclopaedia, to be reviewed by outstanding authorities and made accessible to every human being. In 1938, he published a collection of essays on the future organisation of knowledge and education, "World Brain", including the essay "The Idea of a Permanent World Encyclopaedia". Prior to 1933, Wells's books were widely read in Germany and Austria, and most of his science fiction works had been translated shortly after publication. By 1933, he had attracted the attention of German officials because of his criticism of the political situation in Germany, and on 10 May 1933, Wells's books were burned by the Nazi youth in Berlin's Opernplatz, and his works were banned from libraries and book stores. Wells, as president of PEN International (Poets, Essayists, Novelists), angered the Nazis by overseeing the expulsion of the German PEN club from the international body in 1934, following the German PEN's refusal to admit non-Aryan writers to its membership. At a PEN conference in Ragusa, Wells refused to yield to Nazi sympathisers who demanded that the exiled author Ernst Toller be prevented from speaking. Near the end of World War II, Allied forces discovered that the SS had compiled lists of people slated for immediate arrest during the invasion of Britain in the abandoned Operation Sea Lion, with Wells included in the alphabetical list of "The Black Book". Seeking a more structured way to play war games, Wells also wrote "Floor Games" (1911) followed by "Little Wars" (1913), which set out rules for fighting battles with toy soldiers (miniatures). "Little Wars" is recognised today as the first recreational war game, and Wells is regarded by gamers and hobbyists as "the Father of Miniature War Gaming". A pacifist prior to the First World War, Wells stated "how much better is this amiable miniature [war] than the real thing". According to Wells, the idea of the game developed from a visit by his friend Jerome K. Jerome. After dinner, Jerome began shooting down toy soldiers with a toy cannon and Wells joined in to compete. During August 1914, immediately after the outbreak of the First World War, Wells published a number of articles in London newspapers that subsequently appeared as a book entitled "The War That Will End War". Wells blamed the Central Powers for the coming of the war and argued that only the defeat of German militarism could bring about an end to war.
Wells used the shorter form of the phrase, "the war to end war", in "In the Fourth Year" (1918), in which he noted that the phrase had "got into circulation" in the second half of 1914. In fact, it had become one of the most common catchphrases of the war. In 1918, Wells worked for the British War Propaganda Bureau, also called Wellington House. Wells was also one of fifty-three leading British authors – a number that included Rudyard Kipling, Thomas Hardy and Sir Arthur Conan Doyle – who signed their names to the "Authors' Declaration", a manifesto declaring that the German invasion of Belgium had been a brutal crime, and that Britain "could not without dishonour have refused to take part in the present war." Wells visited Russia three times: in 1914, 1920 and 1934. During his second visit, he saw his old friend Maxim Gorky and, with Gorky's help, met Vladimir Lenin. In his book "Russia in the Shadows", Wells portrayed Russia as recovering from a total social collapse, "the completest that has ever happened to any modern social organisation." On 23 July 1934, after visiting U.S. President Franklin D. Roosevelt, Wells went to the Soviet Union and interviewed Joseph Stalin for three hours for the "New Statesman" magazine; such access was extremely rare at that time. He told Stalin how he had seen 'the happy faces of healthy people' in contrast with his previous visit to Moscow in 1920. However, he also criticised the lawlessness, class discrimination, state violence, and absence of free expression. Stalin enjoyed the conversation and replied accordingly. As the chairman of the London-based PEN Club, which protected the rights of authors to write without being intimidated, Wells hoped that by his trip to the USSR he could win Stalin over by force of argument. Before he left, he realised that no reform was to happen in the near future. Wells's literary reputation declined as he spent his later years promoting causes that were rejected by most of his contemporaries as well as by younger authors whom he had previously influenced. In this connection, George Orwell described Wells as "too sane to understand the modern world". G. K. Chesterton quipped: "Mr Wells is a born storyteller who has sold his birthright for a pot of message". Wells had diabetes, and was a co-founder in 1934 of The Diabetic Association (now Diabetes UK, the leading charity for people with diabetes in the UK). On 28 October 1940, on the radio station KTSA in San Antonio, Texas, Wells took part in a radio interview with Orson Welles, who two years previously had performed a famous radio adaptation of "The War of the Worlds". During the interview, conducted by Charles C. Shaw, a KTSA radio host, Wells admitted his surprise at the widespread panic that resulted from the broadcast but acknowledged his debt to Welles for increasing sales of one of his "more obscure" titles. Wells died of unspecified causes on 13 August 1946, aged 79, at his home at 13 Hanover Terrace, overlooking Regent's Park, London. In his preface to the 1941 edition of "The War in the Air", Wells had stated that his epitaph should be: "I told you so. You "damned" fools". Wells's body was cremated at Golders Green Crematorium on 16 August 1946; his ashes were subsequently scattered into the English Channel at Old Harry Rocks, near Swanage in Dorset. A commemorative blue plaque in his honour was installed by the Greater London Council at his home in Regent's Park in 1966.
A renowned futurist and "visionary", Wells foresaw the advent of aircraft, tanks, space travel, nuclear weapons, satellite television and something resembling the World Wide Web. Asserting that "Wells's visions of the future remain unsurpassed", John Higgs, author of "Stranger Than We Can Imagine: Making Sense of the Twentieth Century", states that in the late 19th century Wells "saw the coming century clearer than anyone else. He anticipated wars in the air, the sexual revolution, motorised transport causing the growth of suburbs and a proto-Wikipedia he called the 'world brain'. He foresaw world wars creating a federalised Europe. Britain, he thought, would not fit comfortably in this New Europe and would identify more with the US and other English-speaking countries. In his novel 'The World Set Free', he imagined an 'atomic bomb' of terrifying power that would be dropped from aeroplanes. This was an extraordinary insight for an author writing in 1913, and it made a deep impression on Winston Churchill." In a review of "The Time Machine" for the "New Yorker" magazine, Brad Leithauser writes, "At the base of Wells's great visionary exploit is this rational, ultimately scientific attempt to tease out the potential future consequences of present conditions—not as they might arise in a few years, or even decades, but millennia hence, epochs hence. He is world literature's Great Extrapolator. Like no other fiction writer before him, he embraced 'deep time'." Wells was a socialist and a member of the Fabian Society. Winston Churchill was an avid reader of Wells's books, and after they first met in 1902 they kept in touch until Wells died in 1946. As a junior minister, Churchill borrowed lines from Wells for one of his most famous early landmark speeches in 1906, and as Prime Minister the phrase "the gathering storm" – used by Churchill to describe the rise of Nazi Germany – had been written by Wells in "The War of the Worlds", which depicts an attack on Britain by Martians. Wells's extensive writings on equality and human rights, most notably his most influential work, "The Rights of Man" (1940), laid the groundwork for the 1948 Universal Declaration of Human Rights, which was adopted by the United Nations shortly after his death. His efforts regarding the League of Nations, a project on which he collaborated with Leonard Woolf through the booklets "The Idea of a League of Nations", "Prolegomena to the Study of World Organization", and "The Way of the League of Nations", became a disappointment, as the organisation turned out to be a weak one, unable to prevent the Second World War, which occurred towards the very end of his life and only increased the pessimistic side of his nature. In his last book, "Mind at the End of Its Tether" (1945), he suggested that humanity's being replaced by another species might not be a bad idea. He referred to the era between the two World Wars as "The Age of Frustration". Wells's views on God and religion changed over his lifetime. Early in his life he distanced himself from Christianity, and later from theism, and finally, late in life, he was essentially atheistic. Martin Gardner succinctly summarises this progression: [The younger Wells] ... did not object to using the word "God" provided it did not imply anything resembling human personality. In his middle years Wells went through a phase of defending the concept of a "finite God," similar to the god of such process theologians as Samuel Alexander, Edgar Brightman, and Charles Hartshorne.
(He even wrote a book about it called "God the Invisible King".) Later Wells decided he was really an atheist. In "God the Invisible King" (1917), Wells wrote that his idea of God did not draw upon the traditional religions of the world: This book sets out as forcibly and exactly as possible the religious belief of the writer. [Which] is a profound belief in a personal and intimate God. ... Putting the leading idea of this book very roughly, these two antagonistic typical conceptions of God may be best contrasted by speaking of one of them as God-as-Nature or the Creator, and of the other as God-as-Christ or the Redeemer. One is the great Outward God; the other is the Inmost God. The first idea was perhaps developed most highly and completely in the God of Spinoza. It is a conception of God tending to pantheism, to an idea of a comprehensive God as ruling with justice rather than affection, to a conception of aloofness and awestriking worshipfulness. The second idea, which is contradictory to this idea of an absolute God, is the God of the human heart. The writer suggested that the great outline of the theological struggles of that phase of civilisation and world unity which produced Christianity, was a persistent but unsuccessful attempt to get these two different ideas of God into one focus. Later in the work, he aligns himself with a "renascent or modern religion ... neither atheist nor Buddhist nor Mohammedan nor Christian ... [that] he has found growing up in himself". Of Christianity, he said: "it is not now true for me. ... Every believing Christian is, I am sure, my spiritual brother ... but if systemically I called myself a Christian I feel that to most men I should imply too much and so tell a lie". Of other world religions, he writes: "All these religions are true for me as Canterbury Cathedral is a true thing and as a Swiss chalet is a true thing. There they are, and they have served a purpose, they have worked. Only they are not true for me to live in them. ... They do not work for me". In "The Fate of Homo Sapiens" (1939), Wells criticised almost all world religions and philosophies, stating "there is no creed, no way of living left in the world at all, that really meets the needs of the time… When we come to look at them coolly and dispassionately, all the main religions, patriotic, moral and customary systems in which human beings are sheltering today, appear to be in a state of jostling and mutually destructive movement, like the houses and palaces and other buildings of some vast, sprawling city overtaken by a landslide." Wells's opposition to organised religion reached a fever pitch in 1943 with the publication of his book "Crux Ansata", subtitled "An Indictment of the Roman Catholic Church". The science fiction historian John Clute describes Wells as "the most important writer the genre has yet seen", and notes his work has been central to both British and American science fiction. The science fiction author and critic Algis Budrys said Wells "remains the outstanding expositor of both the hope, and the despair, which are embodied in the technology and which are the major facts of life in our world". He was nominated for the Nobel Prize in Literature in 1921, 1932, 1935, and 1946. Wells so influenced real exploration of Mars that an impact crater on the planet was named after him. In the United Kingdom, Wells's work was a key model for the British "scientific romance", and other writers in that mode, such as Olaf Stapledon, J. D. Beresford, S.
Fowler Wright, and Naomi Mitchison, all drew on Wells's example. Wells was also an important influence on British science fiction of the period after the Second World War, with Arthur C. Clarke and Brian Aldiss expressing strong admiration for Wells's work. Among contemporary British science fiction writers, Stephen Baxter, Christopher Priest and Adam Roberts have all acknowledged Wells's influence on their writing; all three are Vice-Presidents of the H. G. Wells Society. He also had a strong influence on the British scientist J. B. S. Haldane, who wrote "Daedalus; or, Science and the Future" (1924), "The Last Judgement" and "On Being the Right Size" from the essay collection "Possible Worlds" (1927), and "Biological Possibilities for the Human Species in the Next Ten Thousand Years" (1963), which are speculations about the future of human evolution and life on other planets. Haldane gave several lectures about these topics, which in turn influenced other science fiction writers. In the United States, Hugo Gernsback reprinted most of Wells's work in the pulp magazine "Amazing Stories", regarding Wells's work as "texts of central importance to the self-conscious new genre". Later American writers such as Ray Bradbury, Isaac Asimov, Frank Herbert and Ursula K. Le Guin all recalled being influenced by Wells's work. Sinclair Lewis's early novels were strongly influenced by Wells's realistic social novels, such as "The History of Mr Polly"; Lewis also named his first son Wells after the author. In an interview with "The Paris Review", Vladimir Nabokov described Wells as his favourite writer when he was a boy and "a great artist". He went on to cite "The Passionate Friends", "Ann Veronica", "The Time Machine", and "The Country of the Blind" as superior to anything else written by Wells's British contemporaries. In an apparent allusion to Wells's socialism and political themes, Nabokov said: "His sociological cogitations can be safely ignored, of course, but his romances and fantasies are superb." Jorge Luis Borges wrote many short pieces on Wells in which he demonstrates a deep familiarity with much of Wells's work. While Borges wrote several critical reviews, including a mostly negative review of Wells's film "Things to Come", he regularly treated Wells as a canonical figure of fantastic literature. Late in his life, Borges included "The Invisible Man" and "The Time Machine" in his "Prologue to a Personal Library", a curated list of 100 great works of literature that he undertook at the behest of the Argentine publishing house Emecé. The Canadian author Margaret Atwood read Wells's books, and he also inspired writers of European speculative fiction such as Karel Čapek and Yevgeny Zamyatin. In 1954, the University of Illinois at Urbana–Champaign purchased the H. G. Wells literary papers and correspondence collection. The University's Rare Book & Manuscript Library holds the largest collection of Wells manuscripts, correspondence, first editions and publications in the United States. Among these are unpublished material and the manuscripts of such works as "The War of the Worlds" and "The Time Machine". The collection includes first editions, revisions, and translations. The letters contain general family correspondence, communications from publishers, material regarding the Fabian Society, and letters from politicians and public figures, most notably George Bernard Shaw and Joseph Conrad.
https://en.wikipedia.org/wiki?curid=13459
Hypertext Hypertext is text displayed on a computer display or other electronic devices with references (hyperlinks) to other text that the reader can immediately access. Hypertext documents are interconnected by hyperlinks, which are typically activated by a mouse click, a set of keypresses, or a touch of the screen. Apart from text, the term "hypertext" is also sometimes used to describe tables, images, and other presentational content formats with integrated hyperlinks. Hypertext is one of the key underlying concepts of the World Wide Web, where Web pages are often written in the Hypertext Markup Language (HTML). As implemented on the Web, hypertext enables the easy-to-use publication of information over the Internet. The English prefix "hyper-" comes from the Greek prefix "ὑπερ-" and means "over" or "beyond"; it has a common origin with the prefix "super-", which comes from Latin. It signifies the overcoming of the previous linear constraints of written text. The term "hypertext" is often used where the term "hypermedia" might seem appropriate; author Ted Nelson, who coined both terms in 1963, drew this distinction in 1992. Hypertext documents can either be static (prepared and stored in advance) or dynamic (continually changing in response to user input, such as dynamic web pages). Static hypertext can be used to cross-reference collections of data in documents, software applications, or books on CDs. A well-constructed system can also incorporate other user-interface conventions, such as menus and command lines. Links used in a hypertext document usually replace the current piece of hypertext with the destination document. A lesser-known feature is StretchText, which expands or contracts the content in place, thereby giving the reader more control in determining the level of detail of the displayed document. Some implementations support transclusion, where text or other content is included by reference and automatically rendered in place. Hypertext can be used to support very complex and dynamic systems of linking and cross-referencing. The most famous implementation of hypertext is the World Wide Web, written in the final months of 1990 and released on the Internet in 1991. In 1941, Jorge Luis Borges published "The Garden of Forking Paths", a short story that is often considered an inspiration for the concept of hypertext. In 1945, Vannevar Bush wrote an article in "The Atlantic Monthly" called "As We May Think", about a futuristic proto-hypertext device he called a Memex. A Memex would hypothetically store – and record – content on reels of microfilm, using electric photocells to read coded symbols recorded next to individual microfilm frames while the reels spun at high speed, stopping on command. The coded symbols would enable the Memex to index, search, and link content to create and follow associative trails. Because the Memex was never implemented and could only link content in a relatively crude fashion – by creating chains of entire microfilm frames – it is now regarded not only as a proto-hypertext device but as fundamental to the history of hypertext, because it directly inspired the invention of hypertext by Ted Nelson and Douglas Engelbart. In 1963, Ted Nelson coined the terms 'hypertext' and 'hypermedia' as part of a model he developed for creating and using linked content (first published reference 1965). He later worked with Andries van Dam to develop the Hypertext Editing System (text editing) in 1967 at Brown University.
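Since the Web's hyperlinks are the most familiar concrete form of hypertext, here is a minimal Python sketch (the sample HTML is invented for illustration) that uses the standard library's html.parser to pull the link targets out of a small hypertext document:

```python
from html.parser import HTMLParser

# A tiny hypertext document: plain text plus hyperlink references.
SAMPLE = """
<p>See <a href="https://example.org/memex">Memex</a> and
<a href="https://example.org/xanadu">Project Xanadu</a>.</p>
"""

class LinkExtractor(HTMLParser):
    """Collect the href target of every anchor tag encountered."""
    def __init__(self):
        super().__init__()
        self.links = []

    def handle_starttag(self, tag, attrs):
        if tag == "a":
            self.links.extend(value for name, value in attrs if name == "href")

parser = LinkExtractor()
parser.feed(SAMPLE)
print(parser.links)
# ['https://example.org/memex', 'https://example.org/xanadu']
```

The anchor element shown here is the Web's realisation of the hyperlink: a reference embedded directly in the text that the reader (or a program) can follow immediately.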
It was implemented on an IBM 2250 terminal with a light pen, which was provided as a pointing device. By 1976, its successor FRESS was used in a poetry class in which students could browse a hyperlinked set of poems and discussion by experts, faculty and other students, in what was arguably the world's first online scholarly community, which van Dam says "foreshadowed wikis, blogs and communal documents of all kinds". Ted Nelson said that in the 1960s he began implementation of the hypertext system he had theorised, which was named Project Xanadu, but his first, incomplete public release was finished much later, in 1998. Douglas Engelbart independently began working on his NLS system in 1962 at Stanford Research Institute, although delays in obtaining funding, personnel, and equipment meant that its key features were not completed until 1968. In December of that year, Engelbart demonstrated a 'hypertext' (meaning editing) interface to the public for the first time, in what has come to be known as "The Mother of All Demos". The first hypermedia application is generally considered to be the Aspen Movie Map, implemented in 1978. The Movie Map allowed users to arbitrarily choose which way they wished to drive in a virtual cityscape, in two seasons (from actual photographs) as well as in 3-D polygons. In 1980, Tim Berners-Lee created ENQUIRE, an early hypertext database system somewhat like a wiki but without hypertext punctuation, which was not invented until 1987. The early 1980s also saw a number of experimental "hyperediting" functions in word processors and hypermedia programs, many of whose features and terminology were later analogous to those of the World Wide Web. Guide, the first significant hypertext system for personal computers, was developed by Peter J. Brown at UKC in 1982. In 1980, Roberto Busa, an Italian Jesuit priest and one of the pioneers in the usage of computers for linguistic and literary analysis, published the "Index Thomisticus" as a tool for performing text searches within the massive corpus of Aquinas's works. Sponsored by the founder of IBM, Thomas J. Watson, the project lasted about 30 years (1949–1980) and eventually produced the 56 printed volumes of the "Index Thomisticus", the first important hypertext work about the books of Saint Thomas Aquinas and of a few related authors. In 1983, Ben Shneiderman at the University of Maryland Human–Computer Interaction Lab led a group that developed the HyperTies system, which was commercialized by Cognetics Corporation. HyperTies was used to create the July 1988 issue of the Communications of the ACM as a hypertext document, and then the first commercial electronic book, Hypertext Hands-On! In August 1987, Apple Computer released HyperCard for the Macintosh line at the MacWorld convention. Its impact, combined with interest in Peter J. Brown's GUIDE (marketed by OWL and released earlier that year) and Brown University's Intermedia, led to broad interest in and enthusiasm for hypertext, hypermedia, databases, and new media in general. The first ACM Hypertext (hyperediting and databases) academic conference took place in November 1987 in Chapel Hill, NC, where many other applications, including the branched literature writing software Storyspace, were also demonstrated. Meanwhile, Nelson (who had been working on and advocating his Xanadu system for over two decades) convinced Autodesk to invest in his revolutionary ideas. The project continued at Autodesk for four years, but no product was released.
In 1989, Tim Berners-Lee, then a scientist at CERN, proposed and later prototyped a new hypertext project in response to a request for a simple, immediate, information-sharing facility, to be used among physicists working at CERN and other academic institutions. He called the project "WorldWideWeb". In 1992, Lynx appeared as an early Internet web browser. Its ability to follow hypertext links within documents to documents anywhere on the Internet helped drive the early growth of the Web. As new web browsers were released, traffic on the World Wide Web quickly exploded from only 500 known web servers in 1993 to over 10,000 in 1994. As a result, all previous hypertext systems were overshadowed by the success of the Web, even though it lacked many features of those earlier systems, such as integrated browsers/editors (a feature of the original WorldWideWeb browser, which was not carried over into most of the other early Web browsers). Besides the already mentioned Project Xanadu, Hypertext Editing System, NLS, HyperCard, and World Wide Web, there are other noteworthy early implementations of hypertext with different feature sets. Among the top academic conferences for new research in hypertext is the annual ACM Conference on Hypertext and Hypermedia. Although not exclusively about hypertext, the World Wide Web series of conferences, organized by IW3C2, includes many papers of interest, and a list on the Web links to all conferences in the series. Hypertext writing has developed its own style of fiction, coinciding with the growth and proliferation of hypertext development software and the emergence of electronic networks. Two software programs specifically designed for literary hypertext, Storyspace and Intermedia, became available in the 1990s. Concerning Italian production, the hypertext "s000t000d" by Filippo Rosso (2002) was intended to lead the reader (with the help of a three-dimensional map) through a web-page interface, and was written in HTML and PHP. An advantage of writing a narrative using hypertext technology is that the meaning of the story can be conveyed through a sense of spatiality and perspective that is arguably unique to digitally networked environments. An author's creative use of nodes, the self-contained units of meaning in a hypertextual narrative, can play with the reader's orientation and add meaning to the text. One of the most successful computer games, "Myst", was first written in HyperCard. The game was constructed as a series of Ages, each Age consisting of a separate HyperCard stack. The full stack of the game consists of over 2,500 cards. In some ways "Myst" redefined interactive fiction, using puzzles and exploration as a replacement for hypertextual narrative. Critics of hypertext claim that it inhibits the old, linear reader experience by creating several different tracks to read on, and that this in turn contributes to a postmodernist fragmentation of worlds. In some cases, hypertext may be detrimental to the development of appealing stories (as in hypertext gamebooks), where ease of linking fragments may lead to non-cohesive or incomprehensible narratives. However, such critics do see value in hypertext's ability to present several different views on the same subject in a simple way. This echoes the arguments of 'medium theorists' like Marshall McLuhan, who look at the social and psychological impacts of the media.
New media can become so dominant in public culture that they effectively create a "paradigm shift" as people shift their perceptions, their understanding of the world, and their ways of interacting with the world and each other in relation to new technologies and media. So hypertext signifies a change from linear, structured and hierarchical forms of representing and understanding the world into fractured, decentralized and changeable media based on the technological concept of hypertext links. In the 1990s, women and feminist artists took advantage of hypertext and produced dozens of works. Linda Dement's "Cyberflesh Girlmonster" is a hypertext CD-ROM that incorporates images of women's body parts and remixes them to create new monstrous yet beautiful shapes. Dr. Caitlin Fisher's award-winning online hypertext novella "These Waves of Girls" is set in three time periods of the protagonist's life, exploring polymorphous perversity enacted in her queer identity through memory. The story is written as a reflection diary of the interconnected memories of childhood, adolescence, and adulthood. It consists of a multi-modal collection of associated nodes that includes linked text, still and moving images, manipulable images, animations, and sound clips. There are various forms of hypertext, each of which is structured differently.
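As a concrete illustration of the node, link, and transclusion mechanics described earlier in this article, here is a minimal, hypothetical sketch in Python; the node names and the [[...]] and {{...}} markers are invented for this sketch and are not the syntax of any real hypertext system.

```python
# A toy model of static hypertext: named nodes whose bodies may contain
# link markers ([[target]]) and transclusion markers ({{target}}).
# All names and syntax here are invented for illustration.
import re

nodes = {
    "home":    "Welcome. See the [[history]] of hypertext. {{footer}}",
    "history": "Ted Nelson coined the term in 1963. {{footer}}",
    "footer":  "-- end of node --",
}

def render(name, depth=0):
    """Resolve transclusions in place; leave links as visible references."""
    if depth > 10:                       # guard against circular transclusion
        return "[transclusion too deep]"
    text = nodes[name]
    # Transclusion: the target's rendered text is included in place.
    text = re.sub(r"\{\{(\w+)\}\}", lambda m: render(m.group(1), depth + 1), text)
    # Link: keep a marker the reader could activate to jump to the target.
    text = re.sub(r"\[\[(\w+)\]\]", r"<link to \1>", text)
    return text

print(render("home"))
# Welcome. See the <link to history> of hypertext. -- end of node --
```

In this toy model, activating a link would replace the current node (as most Web browsers do), a StretchText-style reader could instead expand the target in place, and transclusion is resolved automatically at render time.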
https://en.wikipedia.org/wiki?curid=13460
Harald Tveit Alvestrand Harald Tveit Alvestrand (born 29 June 1959) is a Norwegian computer scientist. He was the chair of the Applications Area from 1995 until 1997, of the Operations and Management Area in 1998, and of the General Area, which implies being the chairman of the Internet Engineering Task Force (IETF), from 2001 until 2005. Alvestrand was born in Namsos, Norway, received his education from Bergen Cathedral School and the Norwegian Institute of Technology, and has worked for Norsk Data, UNINETT, EDB Maxware and Cisco Systems. He is an author of several important Requests for Comments (RFCs), many in the general area of internationalization and localization, most notably the documents required for interoperability between SMTP and X.400. Since the start of the use of OIDs, he has run a front end to the hierarchy of assignments according to X.208. At the end of 2007 Alvestrand was selected for the ICANN Board, where he remained until December 2010. In 2001 he became a member of the Unicode Board of Directors. He was a co-chair of the IETF EAI and USEFOR working groups. Harald Alvestrand was the executive director of the Linux Counter organization. He was a member of the Norid Board and of the RFC Independent Submissions Editorial Board.
https://en.wikipedia.org/wiki?curid=13461
Hans Gerhard Creutzfeldt Hans Gerhard Creutzfeldt (June 2, 1885 – December 30, 1964) was a German neurologist and neuropathologist. Although he is typically credited as the physician who first described Creutzfeldt–Jakob disease, this has been disputed. He was born in Harburg upon Elbe and died in Munich. Hans Gerhard Creutzfeldt was born into a medical family in Harburg, which was incorporated into Hamburg in 1937. In 1903, at the age of 18, he was drafted into the German army and spent his service stationed in Kiel. Afterwards, he attended the schools of medicine of the University of Jena and the University of Rostock, receiving his doctorate from the latter in 1909. Part of his practical training was undertaken at the "St. Georg" hospital in Hamburg. After qualification he sought adventure as a ship's surgeon, voyaging the Pacific Ocean and taking the opportunity to study local crafts, linguistics, and tropical plants. After returning to Germany in 1912, Creutzfeldt worked at the Neurological Institute in Frankfurt am Main, at the psychiatric-neurological clinics in Breslau, Kiel and Berlin, and at the "Deutsche Forschungsanstalt für Psychiatrie" in Munich. During the First World War, Creutzfeldt was deployed as a reserve medical officer and survived the sinking of the auxiliary cruiser SMS Greif, on which he was embarked. After being captured on February 29, 1916, he was repatriated as a doctor in May of that year and served in the Imperial Navy until the end of the war in 1918. Creutzfeldt was habilitated at Kiel in 1920, and in 1925 became "Extraordinarius" of psychiatry and neurology. In 1938 he was appointed professor and director of the university psychiatric and neurological division in Kiel. Together with Alfons Maria Jakob, he helped to recognize a neurodegenerative disease, Creutzfeldt–Jakob disease, in which the brain tissue develops holes and takes on a sponge-like texture. It is now known to be due to a type of infectious protein called a prion. Prions are misfolded proteins which replicate by converting their properly folded counterparts. Creutzfeldt was a Patron Member of Heinrich Himmler's SS from 1932 to 1933. Creutzfeldt was 54 years old when the Second World War broke out. He was unmoved by the Nazi regime and was able to save some people from death in concentration camps; he also managed to rescue almost all of his patients from being murdered under the Nazi Aktion T4 euthanasia program, an unusual feat since most mental patients identified by T4 personnel were gassed or poisoned at dedicated euthanasia clinics such as the Hadamar Euthanasia Centre. During the war, bombing raids destroyed his home and clinic. After the war he was director of the University of Kiel for six months, before being dismissed by the British occupation forces. His efforts to rebuild the university caused a series of conflicts with the British because he wanted to allow more former army officers to study there. Creutzfeldt resigned from his work at Kiel in 1953 in order to pursue life as professor emeritus in Munich. He was married to Clara Sombart, a daughter of Werner Sombart. They had five children, among them Otto Detlev Creutzfeldt and Werner Creutzfeldt (1924–2006), a renowned German internist. He died in 1964 in Munich.
https://en.wikipedia.org/wiki?curid=13464
Holmium Holmium is a chemical element with the symbol Ho and atomic number 67. Part of the lanthanide series, holmium is a rare-earth element. Holmium was discovered through isolation by Swedish chemist Per Teodor Cleve, and independently by Jacques-Louis Soret and Marc Delafontaine, who observed it spectroscopically in 1878. Its oxide was first isolated from rare-earth ores by Cleve in 1878. The element's name comes from "Holmia", the Latin name for the city of Stockholm. Elemental holmium is a relatively soft and malleable silvery-white metal. It is too reactive to be found uncombined in nature, but when isolated, is relatively stable in dry air at room temperature. However, it reacts with water and corrodes readily, and also burns in air when heated. Holmium is found in the minerals monazite and gadolinite and is usually commercially extracted from monazite using ion-exchange techniques. Its compounds in nature and in nearly all of its laboratory chemistry are trivalently oxidized, containing Ho(III) ions. Trivalent holmium ions have fluorescent properties similar to many other rare-earth ions (while yielding their own set of unique emission light lines), and thus are used in the same way as some other rare earths in certain laser and glass-colorant applications. Holmium has the highest magnetic permeability of any element and therefore is used for the pole pieces of the strongest static magnets. Because holmium strongly absorbs neutrons, it is also used as a burnable poison in nuclear reactors. Holmium is a relatively soft and malleable element that is fairly corrosion-resistant and stable in dry air at standard temperature and pressure. In moist air and at higher temperatures, however, it quickly oxidizes, forming a yellowish oxide. In pure form, holmium possesses a metallic, bright silvery luster. Holmium oxide has some fairly dramatic color changes depending on the lighting conditions. In daylight, it has a tannish yellow color. Under trichromatic light, it is fiery orange-red, almost indistinguishable from the appearance of erbium oxide under the same lighting conditions. The perceived color change is related to the sharp absorption bands of holmium interacting with a subset of the sharp emission bands of the trivalent ions of europium and terbium, acting as phosphors. Holmium has the highest magnetic moment (10.6 μB) of any naturally occurring element and possesses other unusual magnetic properties. When combined with yttrium, it forms highly magnetic compounds. Holmium is paramagnetic at ambient conditions, but is ferromagnetic at temperatures below 19 K. Holmium metal tarnishes slowly in air and burns readily to form holmium(III) oxide: 4 Ho + 3 O2 → 2 Ho2O3. Holmium is quite electropositive and is generally trivalent. It reacts slowly with cold water and quite quickly with hot water to form holmium hydroxide: 2 Ho + 6 H2O → 2 Ho(OH)3 + 3 H2. Holmium metal reacts with all the halogens: 2 Ho + 3 X2 → 2 HoX3 (X = F, Cl, Br, I). Holmium dissolves readily in dilute sulfuric acid to form solutions containing the yellow Ho(III) ions, which exist as [Ho(OH2)9]3+ complexes: 2 Ho + 3 H2SO4 → 2 Ho3+ + 3 SO42− + 3 H2. Holmium's most common oxidation state is +3. Holmium in solution is in the form of Ho3+ surrounded by nine molecules of water. Holmium dissolves in acids. Natural holmium contains one stable isotope, holmium-165. Some synthetic radioactive isotopes are known; the most stable one is holmium-163, with a half-life of 4570 years. All other radioisotopes have ground-state half-lives not greater than 1.117 days, and most have half-lives under 3 hours.
However, the metastable 166m1Ho has a half-life of around 1200 years because of its high spin. This fact, combined with a high excitation energy resulting in a particularly rich spectrum of decay gamma rays produced when the metastable state de-excites, makes this isotope useful in nuclear physics experiments as a means for calibrating energy responses and intrinsic efficiencies of gamma ray spectrometers. Holmium ("Holmia", the Latin name for Stockholm) was discovered by Jacques-Louis Soret and Marc Delafontaine in 1878, who noticed the aberrant spectrographic absorption bands of the then-unknown element (they called it "Element X"). Per Teodor Cleve independently discovered the element while he was working on erbia earth (erbium oxide), and was the first to isolate it. Using the method developed by Carl Gustaf Mosander, Cleve first removed all of the known contaminants from erbia. The result of that effort was two new materials, one brown and one green. He named the brown substance holmia (after the Latin name for Cleve's home town, Stockholm) and the green one thulia. Holmia was later found to be holmium oxide, and thulia was thulium oxide. In Henry Moseley's classic paper on atomic numbers, holmium was assigned an atomic number of 66. Evidently, the holmium preparation he had been given to investigate had been grossly impure, dominated by neighboring (and unplotted) dysprosium. He would have seen x-ray emission lines for both elements, but assumed that the dominant ones belonged to holmium, instead of the dysprosium impurity. Like all other rare earths, holmium is not naturally found as a free element. It does occur combined with other elements in gadolinite, monazite and other rare-earth minerals. No holmium-dominant mineral has yet been found. The main mining areas are China, the United States, Brazil, India, Sri Lanka, and Australia, with reserves of holmium estimated as 400,000 tonnes. Holmium makes up 1.4 parts per million of the Earth's crust by mass. This makes it the 56th most abundant element in the Earth's crust. Holmium makes up 1 part per million of soils, 400 parts per quadrillion of seawater, and almost none of Earth's atmosphere. Holmium is rare for a lanthanide: it makes up 500 parts per trillion of the universe by mass. It is commercially extracted by ion exchange from monazite sand (0.05% holmium), but is still difficult to separate from other rare earths. The element has been isolated through the reduction of its anhydrous chloride or fluoride with metallic calcium. Its estimated abundance in the Earth's crust is 1.3 mg/kg. Holmium obeys the Oddo–Harkins rule: as an odd-numbered element, it is less abundant than its immediate even-numbered neighbors, dysprosium and erbium. However, it is the most abundant of the odd-numbered heavy lanthanides. The principal current source is some of the ion-adsorption clays of southern China. Some of these have a rare-earth composition similar to that found in xenotime or gadolinite. Yttrium makes up about 2/3 of the total by mass; holmium is around 1.5%. The original ores themselves are very lean, maybe only 0.1% total lanthanide, but are easily extracted. Holmium is relatively inexpensive for a rare-earth metal, with a price of about 1,000 USD/kg.
Holmium has the highest magnetic strength of any element and therefore is used to create the strongest artificially generated magnetic fields when placed within high-strength magnets as a magnetic pole piece (also called a magnetic flux concentrator). Since it can absorb nuclear fission-bred neutrons, it is also used as a burnable poison to regulate nuclear reactors. Holmium-doped yttrium iron garnet (YIG) and yttrium lithium fluoride (YLF) have applications in solid-state lasers, and Ho-YIG has applications in optical isolators and in microwave equipment (e.g., YIG spheres). Holmium lasers emit at 2.1 micrometres. They are used in medical, dental, and fiber-optical applications. Holmium is one of the colorants used for cubic zirconia and glass, providing yellow or red coloring. Glass containing holmium oxide and holmium oxide solutions (usually in perchloric acid) have sharp optical absorption peaks in the spectral range 200–900 nm. They are therefore used as a calibration standard for optical spectrophotometers and are available commercially. The radioactive but long-lived 166m1Ho (see "Isotopes" above) is used in calibration of gamma-ray spectrometers. In March 2017, IBM announced that they had developed a technique to store one bit of data on a single holmium atom set on a bed of magnesium oxide. With sufficient quantum and classical control techniques, holmium could be a good candidate for building quantum computers. Holmium plays no biological role in humans, but its salts are able to stimulate metabolism. Humans typically consume about a milligram of holmium a year. Plants do not readily take up holmium from the soil. Some vegetables have had their holmium content measured, and it amounted to 100 parts per trillion. Large amounts of holmium salts can cause severe damage if inhaled, consumed orally, or injected. The biological effects of holmium over a long period of time are not known. Holmium has a low level of acute toxicity.
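As a numerical aside, the half-lives quoted above fix these isotopes' decay rates through the standard exponential-decay law. The sketch below uses only the half-lives given in the text; the elapsed times are arbitrary, chosen to show why the roughly 1200-year half-life of 166m1Ho makes it such a convenient, effectively constant calibration source.

```python
# Illustrative decay arithmetic for the holmium isotopes discussed above.
# Half-lives are the values quoted in the text; time spans are arbitrary.
import math

def fraction_remaining(half_life_years, elapsed_years):
    """Exponential decay: N(t)/N0 = exp(-ln(2) * t / t_half)."""
    decay_constant = math.log(2) / half_life_years
    return math.exp(-decay_constant * elapsed_years)

# 166m1Ho (half-life ~1200 years): essentially undiminished over a
# laboratory's lifetime, hence a stable gamma-ray calibration source.
print(fraction_remaining(1200, 50))    # ~0.97 remaining after 50 years

# 163Ho (half-life 4570 years) decays noticeably only on millennial scales.
print(fraction_remaining(4570, 1000))  # ~0.86 remaining after 1000 years
```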
https://en.wikipedia.org/wiki?curid=13465
Hafnium Hafnium is a chemical element with the symbol Hf and atomic number 72. A lustrous, silvery gray, tetravalent transition metal, hafnium chemically resembles zirconium and is found in many zirconium minerals. Its existence was predicted by Dmitri Mendeleev in 1869, though it was not identified until 1923, by Coster and Hevesy, making it the last stable element to be discovered. Hafnium is named after "Hafnia", the Latin name for Copenhagen, where it was discovered. Hafnium is used in filaments and electrodes. Some semiconductor fabrication processes use its oxide for integrated circuits at 45 nm and smaller feature lengths. Some superalloys used for special applications contain hafnium in combination with niobium, titanium, or tungsten. Hafnium's large neutron capture cross section makes it a good material for neutron absorption in control rods in nuclear power plants, but at the same time requires that it be removed from the neutron-transparent corrosion-resistant zirconium alloys used in nuclear reactors. Hafnium is a shiny, silvery, ductile metal that is corrosion-resistant and chemically similar to zirconium (due to its having the same number of valence electrons and being in the same group, but also to relativistic effects; the expected expansion of atomic radii from period 5 to 6 is almost exactly cancelled out by the lanthanide contraction). Hafnium changes from its alpha form, a hexagonal close-packed lattice, to its beta form, a body-centered cubic lattice, at 2388 K. The physical properties of hafnium metal samples are markedly affected by zirconium impurities, especially the nuclear properties, as these two elements are among the most difficult to separate because of their chemical similarity. A notable physical difference between these metals is their density, with zirconium having about one-half the density of hafnium. The most notable nuclear properties of hafnium are its high thermal neutron capture cross section and that the nuclei of several different hafnium isotopes readily absorb two or more neutrons apiece. In contrast with this, zirconium is practically transparent to thermal neutrons, and it is commonly used for the metal components of nuclear reactors – especially the cladding of their nuclear fuel rods. Hafnium reacts in air to form a protective film that inhibits further corrosion. The metal is not readily attacked by acids but can be oxidized with halogens or it can be burnt in air. Like its sister metal zirconium, finely divided hafnium can ignite spontaneously in air. The metal is resistant to concentrated alkalis. The chemistry of hafnium and zirconium is so similar that the two cannot be separated on the basis of differing chemical reactions. The melting points and boiling points of the compounds and the solubility in solvents are the major differences in the chemistry of these twin elements. At least 34 isotopes of hafnium have been observed, ranging in mass number from 153 to 186. The five stable isotopes have mass numbers in the range 176 to 180. The radioactive isotopes' half-lives range from only 400 ms for 153Hf to 2.0 petayears (10^15 years) for the most stable one, 174Hf. The nuclear isomer 178m2Hf was at the center of a controversy for several years regarding its potential use as a weapon. Hafnium is estimated to make up about 5.8 ppm of the Earth's upper crust by mass.
It does not exist as a free element on Earth, but is found combined in solid solution with zirconium in natural zirconium compounds such as zircon, ZrSiO4, which usually has about 1–4% of the Zr replaced by Hf. Rarely, the Hf/Zr ratio increases during crystallization to give the isostructural mineral hafnon (Hf,Zr)SiO4, with atomic Hf > Zr. An obsolete name for a variety of zircon containing unusually high Hf content is "alvite". A major source of zircon (and hence hafnium) ores is heavy mineral sands ore deposits, pegmatites, particularly in Brazil and Malawi, and carbonatite intrusions, particularly the Crown Polymetallic Deposit at Mount Weld, Western Australia. A potential source of hafnium is trachyte tuffs containing the rare zircon-hafnium silicates eudialyte or armstrongite, at Dubbo in New South Wales, Australia. Hafnium reserves have been infamously estimated by one source to last under 10 years if the world population increases and demand grows. In reality, since hafnium occurs with zirconium, hafnium can always be a byproduct of zirconium extraction to the extent that the low demand requires. The heavy mineral sands ore deposits of the titanium ores ilmenite and rutile yield most of the mined zirconium, and therefore also most of the hafnium. Zirconium is a good nuclear fuel-rod cladding metal, with the desirable properties of a very low neutron capture cross-section and good chemical stability at high temperatures. However, because of hafnium's neutron-absorbing properties, hafnium impurities in zirconium would make it far less useful for nuclear-reactor applications. Thus, a nearly complete separation of zirconium and hafnium is necessary for their use in nuclear power. The production of hafnium-free zirconium is the main source of hafnium. The chemical properties of hafnium and zirconium are nearly identical, which makes the two difficult to separate. The methods first used, fractional crystallization of ammonium fluoride salts and fractional distillation of the chloride, have not proven suitable for industrial-scale production. After zirconium was chosen as the material for nuclear reactor programs in the 1940s, a separation method had to be developed. Liquid-liquid extraction processes with a wide variety of solvents were developed and are still used for the production of hafnium. About half of all hafnium metal manufactured is produced as a by-product of zirconium refinement. The end product of the separation is hafnium(IV) chloride. The purified hafnium(IV) chloride is converted to the metal by reduction with magnesium or sodium, as in the Kroll process. Further purification is effected by the chemical transport reaction developed by van Arkel and de Boer: in a closed vessel, hafnium reacts with iodine at temperatures of 500 °C, forming hafnium(IV) iodide (Hf + 2 I2 → HfI4); at a tungsten filament of 1700 °C the reverse reaction (HfI4 → Hf + 2 I2) happens, and the iodine and hafnium are set free. The hafnium forms a solid coating on the tungsten filament, and the iodine can react with additional hafnium, resulting in a steady turnover. Due to the lanthanide contraction, the ionic radius of hafnium(IV) (0.78 Å) is almost the same as that of zirconium(IV) (0.79 Å). Consequently, compounds of hafnium(IV) and zirconium(IV) have very similar chemical and physical properties. Hafnium and zirconium tend to occur together in nature, and the similarity of their ionic radii makes their chemical separation rather difficult. Hafnium tends to form inorganic compounds in the oxidation state of +4.
Halogens react with it to form hafnium tetrahalides. At higher temperatures, hafnium reacts with oxygen, nitrogen, carbon, boron, sulfur, and silicon. Some compounds of hafnium in lower oxidation states are known. Hafnium(IV) chloride and hafnium(IV) iodide have some applications in the production and purification of hafnium metal. They are volatile solids with polymeric structures. The tetrachloride is a precursor to various organohafnium compounds such as hafnocene dichloride and tetrabenzylhafnium. The white hafnium oxide (HfO2), with a melting point of 2812 °C and a boiling point of roughly 5100 °C, is very similar to zirconia, but slightly more basic. Hafnium carbide is the most refractory binary compound known, with a melting point over 3890 °C, and hafnium nitride is the most refractory of all known metal nitrides, with a melting point of 3310 °C. This has led to proposals that hafnium or its carbides might be useful as construction materials that are subjected to very high temperatures. The mixed carbide tantalum hafnium carbide (Ta4HfC5) possesses the highest melting point of any currently known compound, 4215 K (3942 °C, 7128 °F). Recent supercomputer simulations suggest a hafnium alloy with a melting point of 4400 K. In his 1869 report on "The Periodic Law of the Chemical Elements", Dmitri Mendeleev had implicitly predicted the existence of a heavier analog of titanium and zirconium. At the time of his formulation in 1871, Mendeleev believed that the elements were ordered by their atomic masses and placed lanthanum (element 57) in the spot below zirconium. The exact placement of the elements and the location of missing elements was done by determining the specific weight of the elements and comparing the chemical and physical properties. The X-ray spectroscopy done by Henry Moseley in 1914 showed a direct dependency between spectral lines and effective nuclear charge. This led to the nuclear charge, or atomic number of an element, being used to ascertain its place within the periodic table. With this method, Moseley determined the number of lanthanides and showed the gaps in the atomic number sequence at numbers 43, 61, 72, and 75. The discovery of the gaps led to an extensive search for the missing elements. In 1914, several people claimed the discovery after Henry Moseley predicted the gap in the periodic table for the then-undiscovered element 72. Georges Urbain asserted that he had found element 72 in the rare earth elements in 1907 and published his results on "celtium" in 1911. Neither the spectra nor the chemical behavior he claimed matched with the element found later, and therefore his claim was turned down after a long-standing controversy. The controversy arose partly because the chemists favored the chemical techniques which had led to the discovery of "celtium", while the physicists relied on the use of the new X-ray spectroscopy method that proved that the substances discovered by Urbain did not contain element 72. By early 1923, several physicists and chemists such as Niels Bohr and Charles R. Bury suggested that element 72 should resemble zirconium and therefore was not part of the rare earth elements group. These suggestions were based on Bohr's theories of the atom, the X-ray spectroscopy of Moseley, and the chemical arguments of Friedrich Paneth.
Encouraged by these suggestions and by the reappearance in 1922 of Urbain's claims that element 72 was a rare earth element discovered in 1911, Dirk Coster and Georg von Hevesy were motivated to search for the new element in zirconium ores. Hafnium was discovered by the two in 1923 in Copenhagen, Denmark, validating the original 1869 prediction of Mendeleev. It was ultimately found in zircon from Norway through X-ray spectroscopy analysis. The place where the discovery took place led to the element being named for the Latin name for Copenhagen, "Hafnia", the home town of Niels Bohr. Today, the Faculty of Science of the University of Copenhagen uses in its seal a stylized image of the hafnium atom. Hafnium was separated from zirconium through repeated recrystallization of the double ammonium or potassium fluorides by Valdemar Thal Jantzen and von Hevesy. Anton Eduard van Arkel and Jan Hendrik de Boer were the first to prepare metallic hafnium, by passing hafnium tetraiodide vapor over a heated tungsten filament in 1924. This process for differential purification of zirconium and hafnium is still in use today. In 1923, four predicted elements were still missing from the periodic table: 43 (technetium) and 61 (promethium) are radioactive elements and are only present in trace amounts in the environment, making elements 75 (rhenium) and 72 (hafnium) the last two unknown non-radioactive elements. Since rhenium had effectively been found in 1908 (as Masataka Ogawa's "nipponium", later shown to be rhenium), hafnium was the last element with stable isotopes to be discovered. Most of the hafnium produced is used in the manufacture of control rods for nuclear reactors. Several details contribute to the fact that there are only a few technical uses for hafnium: first, the close similarity between hafnium and zirconium makes it possible to use zirconium for most applications; second, pure hafnium metal first became available only as a result of the nuclear industry's use of hafnium-free zirconium in the late 1950s. Furthermore, the low abundance and the difficult separation techniques necessary make it a scarce commodity. When the demand for zirconium dropped following the Fukushima disaster, the price of hafnium increased sharply from around $500–600/kg in 2014 to around $1000/kg in 2015. The nuclei of several hafnium isotopes can each absorb multiple neutrons. This makes hafnium a good material for use in the control rods for nuclear reactors. Its neutron-capture cross-section (capture resonance integral Io ≈ 2000 barns) is about 600 times that of zirconium (other elements that are good neutron-absorbers for control rods are cadmium and boron). Excellent mechanical properties and exceptional corrosion-resistance properties allow its use in the harsh environment of pressurized water reactors. The German research reactor FRM II uses hafnium as a neutron absorber. It is also common in military reactors, particularly in US naval reactors, but seldom found in civilian ones, the first core of the Shippingport Atomic Power Station (a conversion of a naval reactor) being a notable exception. Hafnium is used in alloys with iron, titanium, niobium, tantalum, and other metals. An alloy used for liquid rocket thruster nozzles, for example the main engine of the Apollo Lunar Modules, is C103, which consists of 89% niobium, 10% hafnium and 1% titanium. Small additions of hafnium increase the adherence of protective oxide scales on nickel-based alloys.
It thereby improves the corrosion resistance, especially under cyclic temperature conditions that tend to break oxide scales by inducing thermal stresses between the bulk material and the oxide layer. Hafnium-based compounds are employed in gate insulators in the 45 nm generation of integrated circuits from Intel, IBM and others. Hafnium oxide-based compounds are practical high-k dielectrics, allowing reduction of the gate leakage current, which improves performance at such scales. Isotopes of hafnium and lutetium (along with ytterbium) are also used in isotope geochemistry and geochronological applications, in lutetium-hafnium dating. It is often used as a tracer of the isotopic evolution of Earth's mantle through time. This is because 176Lu decays to 176Hf with a half-life of approximately 37 billion years. In most geologic materials, zircon is the dominant host of hafnium (>10,000 ppm) and is often the focus of hafnium studies in geology. Hafnium is readily substituted into the zircon crystal lattice and is therefore very resistant to hafnium mobility and contamination. Zircon also has an extremely low Lu/Hf ratio, making any correction for initial lutetium minimal. Although the Lu/Hf system can be used to calculate a "model age", i.e. the time at which the material was derived from a given isotopic reservoir such as the depleted mantle, these "ages" do not carry the same geologic significance as do other geochronological techniques, as the results often yield isotopic mixtures and thus provide an average age of the material from which the sample was derived. Garnet is another mineral that contains appreciable amounts of hafnium to act as a geochronometer. The high and variable Lu/Hf ratios found in garnet make it useful for dating metamorphic events. Due to its heat resistance and its affinity to oxygen and nitrogen, hafnium is a good scavenger for oxygen and nitrogen in gas-filled and incandescent lamps. Hafnium is also used as the electrode in plasma cutting because of its ability to shed electrons into air. The high energy content of 178m2Hf was the concern of a DARPA-funded program in the US. This program determined that the possibility of using a nuclear isomer of hafnium (the above-mentioned 178m2Hf) to construct high-yield weapons with X-ray triggering mechanisms, an application of induced gamma emission, was infeasible because of its expense. See "hafnium controversy". Hafnium metallocene compounds can be prepared from hafnium tetrachloride and various cyclopentadiene-type ligand species. Perhaps the simplest hafnium metallocene is hafnocene dichloride. Hafnium metallocenes are part of a large collection of Group 4 transition metal metallocene catalysts that are used worldwide in the production of polyolefin resins like polyethylene and polypropylene. Care needs to be taken when machining hafnium because it is pyrophoric: fine particles can spontaneously combust when exposed to air. Compounds that contain this metal are rarely encountered by most people. The pure metal is not considered toxic, but hafnium compounds should be handled as if they were toxic, because the ionic forms of metals are normally at greatest risk for toxicity, and limited animal testing has been done for hafnium compounds. People can be exposed to hafnium in the workplace by breathing it in, swallowing it, skin contact, and eye contact.
The Occupational Safety and Health Administration (OSHA) has set the legal limit (Permissible exposure limit) for exposure to hafnium and hafnium compounds in the workplace as TWA 0.5 mg/m3 over an 8-hour workday. The National Institute for Occupational Safety and Health (NIOSH) has set the same recommended exposure limit (REL). At levels of 50 mg/m3, hafnium is immediately dangerous to life and health.
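For context, a limit like OSHA's is compared against an 8-hour time-weighted average (TWA) of measured concentrations over a shift. The following is a small, hypothetical sketch of that standard calculation; the shift segments and concentrations are invented purely for illustration and are not measured values.

```python
# Hypothetical sketch of the 8-hour time-weighted average (TWA) against
# which a limit like OSHA's 0.5 mg/m3 for hafnium is compared.
# The exposure figures below are invented for illustration.
PEL_TWA = 0.5   # mg/m3, the permissible exposure limit quoted above

def twa_8h(intervals):
    """intervals: list of (hours, concentration in mg/m3) pairs.
    The TWA is the exposure-weighted mean over the full 8-hour shift."""
    total_exposure = sum(hours * conc for hours, conc in intervals)
    return total_exposure / 8.0

shift = [(2.0, 0.8), (4.0, 0.3), (2.0, 0.1)]   # 8 hours total (invented data)
print(twa_8h(shift))             # 0.375 mg/m3 -> under the 0.5 mg/m3 PEL
print(twa_8h(shift) <= PEL_TWA)  # True
```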
https://en.wikipedia.org/wiki?curid=13466
Hamburg Hamburg, officially the Free and Hanseatic City of Hamburg, is the second-largest city in Germany after Berlin and the 7th-largest city in the European Union, with a population of over 1.84 million. One of Germany's 16 federal states, it is surrounded by Schleswig-Holstein to the north and Lower Saxony to the south. The city's metropolitan region is home to more than five million people. Hamburg lies on the River Elbe and two of its tributaries, the River Alster and the River Bille. The official name reflects Hamburg's history as a member of the medieval Hanseatic League and a free imperial city of the Holy Roman Empire. Before the 1871 Unification of Germany, it was a fully sovereign city state, and before 1919 formed a civic republic headed constitutionally by a class of hereditary grand burghers or Hanseaten. Beset by disasters such as the Great Fire of Hamburg, the North Sea flood of 1962 and military conflicts including World War II bombing raids, the city has managed to recover and emerge wealthier after each catastrophe. Hamburg is Europe's third-largest port. The major regional broadcaster NDR, the printing and publishing firm Gruner + Jahr and the newspapers Der Spiegel and Die Zeit are based in the city. Hamburg is the seat of Germany's oldest stock exchange and the world's oldest merchant bank, Berenberg Bank. Media, commercial, logistical, and industrial firms with significant locations in the city include the multinationals Airbus, Blohm + Voss, Aurubis, Beiersdorf, and Unilever. Hamburg is also a major European science, research, and education hub, with several universities and institutions. The city enjoys a very high quality of living, being ranked 19th in the 2019 Mercer Quality of Living Survey. Hamburg hosts specialists in world economics and international law, including consular and diplomatic missions such as the International Tribunal for the Law of the Sea, the EU-LAC Foundation, and the UNESCO Institute for Lifelong Learning, as well as multipartite international political conferences and summits such as the G20. Both former German Chancellor Helmut Schmidt and Angela Merkel, German chancellor since 2005, were born in Hamburg. Hamburg is a major international and domestic tourist destination. The Speicherstadt and the Kontorhaus District with the Chilehaus were declared World Heritage Sites by UNESCO in 2015. Hamburg's rivers and canals are crossed by around 2,500 bridges, making it the city with the highest number of bridges in Europe. Aside from its rich architectural heritage, the city is also home to notable cultural venues such as the Elbphilharmonie and Laeiszhalle concert halls. It gave birth to movements like the Hamburger Schule and paved the way for bands including The Beatles. Hamburg is also known for several theatres and a variety of musical shows. St. Pauli's Reeperbahn is among the best-known European entertainment districts. Hamburg is at a sheltered natural harbour on the southern fanning-out of the Jutland Peninsula, between Continental Europe to the south and Scandinavia to the north, with the North Sea to the west and the Baltic Sea to the northeast. It is on the River Elbe at its confluence with the Alster and Bille. The city centre is around the Binnenalster ("Inner Alster") and Außenalster ("Outer Alster"), both formed by damming the River Alster to create lakes. The islands of Neuwerk, Scharhörn, and Nigehörn, in the Hamburg Wadden Sea National Park, are also part of the city of Hamburg. The neighborhoods of Neuenfelde, Cranz, Francop and Finkenwerder are part of the "Altes Land" (old land) region, the largest contiguous fruit-producing region in Central Europe.
Neugraben-Fischbek has Hamburg's highest elevation, the Hasselbrack, at 116 metres above mean sea level. Hamburg borders the states of Schleswig-Holstein and Lower Saxony. Hamburg has an oceanic climate (Köppen: "Cfb"), shaped by its proximity to the coast and by maritime air masses that originate over the Atlantic Ocean. The location in the north of Germany produces greater extremes than typical marine climates, though the city remains firmly in that category due to the prevailing westerlies. Nearby wetlands enjoy a maritime temperate climate. The amount of snowfall has varied greatly in recent decades. In the late 1970s and early 1980s, heavy snowfall sometimes occurred; the winters of recent years have been less cold, with snowfall on just a few days per year. The warmest months are June, July, and August; the coldest are December, January, and February. Claudius Ptolemy (2nd century AD) reported the first name for the vicinity as Treva. The name Hamburg comes from the first permanent building on the site, a castle which the Emperor Charlemagne ordered constructed in AD 808. It rose on rocky terrain in a marsh between the River Alster and the River Elbe as a defence against Slavic incursion, and acquired the name "Hammaburg", "burg" meaning castle or fort. The origin of the "Hamma" term remains uncertain, as does the exact location of the castle. In 834, Hamburg was designated as the seat of a bishopric. The first bishop, Ansgar, became known as the Apostle of the North. Two years later, Hamburg was united with Bremen as the Bishopric of Hamburg-Bremen. Hamburg was destroyed and occupied several times. In 845, 600 Viking ships sailed up the River Elbe and destroyed Hamburg, at that time a town of around 500 inhabitants. In 1030, King Mieszko II Lambert of Poland burned down the city. Valdemar II of Denmark raided and occupied Hamburg in 1201 and in 1214. The Black Death killed at least 60% of the population in 1350. Hamburg experienced several great fires in the medieval period. In 1189, by imperial charter, Frederick I "Barbarossa" granted Hamburg the status of a Free Imperial City and tax-free access (or free-trade zone) up the Lower Elbe into the North Sea. In 1265, an allegedly forged letter was presented to or by the Rath of Hamburg. This charter, along with Hamburg's proximity to the main trade routes of the North Sea and Baltic Sea, quickly made it a major port in Northern Europe. Its trade alliance with Lübeck in 1241 marks the origin and core of the powerful Hanseatic League of trading cities. On 8 November 1266, a contract between Henry III and Hamburg's traders allowed them to establish a "hanse" in London. This was the first time in history that the word "hanse" was used for the trading guild of the Hanseatic League. In 1270, the solicitor of the senate of Hamburg, Jordan von Boitzenburg, wrote the first description of civil, criminal and procedural law for a city in Germany in the German language, the "Ordeelbook" ("Ordeel": sentence). On 10 August 1410, civil unrest forced a compromise (German: "Rezeß", literally meaning: withdrawal). This is considered the first constitution of Hamburg. In 1529, the city embraced Lutheranism, and it received Reformed refugees from the Netherlands and France. When Jan van Valckenborgh introduced a second layer to the fortifications to protect against the Thirty Years' War in the seventeenth century, he extended Hamburg and created a "New Town" ("Neustadt"), whose street names still date from the grid system of roads he introduced.
Upon the dissolution of the Holy Roman Empire in 1806, the Free Imperial City of Hamburg was not incorporated into a larger administrative area while retaining special privileges (mediatised), but became a sovereign state with the official title of the "Free and Hanseatic City of Hamburg". Hamburg was briefly annexed by Napoleon I to the First French Empire (1804–1814/1815). Russian forces under General Bennigsen finally freed the city in 1814. Hamburg re-assumed its pre-1811 status as a city-state in 1814. The Vienna Congress of 1815 confirmed Hamburg's independence, and it became one of 39 sovereign states of the German Confederation (1815–1866). In 1842, about a quarter of the inner city was destroyed in the "Great Fire". The fire started on the night of 4 May and was not extinguished until 8 May. It destroyed three churches, the town hall, and many other buildings, killing 51 people and leaving an estimated 20,000 homeless. Reconstruction took more than 40 years. After periodic political unrest, particularly in 1848, Hamburg adopted a semidemocratic constitution in 1860 that provided for the election of the Senate, the governing body of the city-state, by adult taxpaying males. Other innovations included the separation of powers, the separation of Church and State, and freedom of the press, of assembly and of association. Hamburg became a member of the North German Confederation (1866–1871) and of the German Empire (1871–1918), and maintained its self-ruling status during the Weimar Republic (1919–1933). Hamburg acceded to the German Customs Union, or Zollverein, in 1888, the last (along with Bremen) of the German states to join. The city experienced its fastest growth during the second half of the 19th century, when its population more than quadrupled to 800,000 as the growth of the city's Atlantic trade helped make it Europe's second-largest port. The Hamburg-America Line, with Albert Ballin as its director, became the world's largest transatlantic shipping company around the start of the 20th century. Shipping companies sailing to South America, Africa, India and East Asia were based in the city. Hamburg was the departure port for many Germans and Eastern Europeans emigrating to the United States in the late 19th and early 20th centuries. Trading communities from all over the world established themselves there. A major outbreak of cholera in 1892 was badly handled by the city government, which retained an unusual degree of independence for a German city. About 8,600 died in the largest German epidemic of the late 19th century, which was also the last major cholera epidemic in a major city of the Western world. In Nazi Germany (1933–1945), Hamburg was a "Gau" from 1934 until 1945. During the Second World War, Hamburg suffered a series of Allied air raids which devastated much of the city and the harbour. On 23 July 1943, Royal Air Force (RAF) and United States Army Air Force (USAAF) firebombing created a firestorm which spread from the "Hauptbahnhof" (main railway station) and quickly moved south-east, completely destroying entire boroughs such as Hammerbrook, Billbrook and Hamm South. Thousands of people perished in these densely populated working-class boroughs. The raids, codenamed Operation Gomorrah by the RAF, killed at least 42,600 civilians; the precise number is not known. About one million civilians were evacuated in the aftermath of the raids.
While some of the boroughs destroyed were rebuilt as residential districts after the war, others such as Hammerbrook were entirely developed into office, retail and limited residential or industrial districts. The Hamburg Commonwealth War Graves Commission Cemetery is in the greater Ohlsdorf Cemetery in the north of Hamburg. At least 42,900 people are thought to have perished in the Neuengamme concentration camp (in the marshlands outside the city), mostly from epidemics and in the bombing of Kriegsmarine evacuation vessels by the RAF at the end of the war. Systematic deportations of Jewish Germans and Gentile Germans of Jewish descent started on 18 October 1941. These were all directed to ghettos in Nazi-occupied Europe or to concentration camps. Most deported persons perished in the Holocaust. By the end of 1942 the "Jüdischer Religionsverband in Hamburg" was dissolved as an independent legal entity, and its remaining assets and staff were assumed by the Reichsvereinigung der Juden in Deutschland (District Northwest). On 10 June 1943 the Reichssicherheitshauptamt dissolved the "Reichsvereinigung" by decree. The few remaining employees not protected by a mixed marriage were deported from Hamburg on 23 June to Theresienstadt, where most of them perished. Hamburg surrendered to British forces on 3 May 1945, three days after Adolf Hitler's death. After the Second World War, Hamburg formed part of the British Zone of Occupation; it became a state of the then Federal Republic of Germany in 1949. From 1960 to 1962, the Beatles launched their career by playing in various music clubs like the Star Club in the city. On 16 February 1962, a North Sea flood caused the Elbe to rise to an all-time high, inundating one-fifth of Hamburg and killing more than 300 people. The Inner German border, only some 50 kilometres east of Hamburg, separated the city from most of its hinterland and reduced Hamburg's global trade. Since German reunification in 1990, and the accession of several Central European and Baltic states into the European Union in 2004, the Port of Hamburg has restarted ambitions for regaining its position as the region's largest deep-sea port for container shipping and its major commercial and trading centre. On 31 December 2016, there were 1,860,759 people registered as living in Hamburg, in an area of about 755 square kilometres, giving a population density of roughly 2,500 inhabitants per square kilometre. The Hamburg Metropolitan Region is home to 5,107,429 people. There were 915,319 women and 945,440 men in Hamburg; for every 1,000 females, there were 1,033 males. In 2015, there were 19,768 births in Hamburg (of which 38.3% were to unmarried women), 6,422 marriages, 3,190 divorces, and 17,565 deaths. In the city, the population was spread out with 16.1% under the age of 18 and 18.3% 65 years of age or older. 356 people in Hamburg were over the age of 100. According to the Statistical Office for Hamburg and Schleswig-Holstein, the number of people with a migrant background stands at 34% (631,246). Immigrants come from 200 different countries, and 5,891 people acquired German citizenship in 2016. In 2016, there were 1,021,666 households, of which 17.8% had children under the age of 18; 54.4% of all households were made up of singles, and 25.6% were single-parent households. The average household size was 1.8.
Like elsewhere in Germany, Standard German is spoken in Hamburg, but, as is typical for northern Germany, the original language of Hamburg is Low German, usually referred to as "Hamborger Platt" (German "Hamburger Platt") or "Hamborgsch". Since large-scale standardization of the German language began in earnest in the 18th century, various Low German-colored dialects have developed (contact-varieties of German on Low Saxon substrates). Originally, there was a range of such Missingsch varieties, the best-known being the low-prestige ones of the working classes and the somewhat more bourgeois "Hanseatendeutsch" (Hanseatic German), although the latter term is today used in appreciation. All of these are now moribund due to the influence of Standard German in education and media. However, the former importance of Low German is indicated by several songs, such as the famous sea shanty "Hamborger Veermaster", written in the 19th century when Low German was used more frequently. Many toponyms and street names reflect Low Saxon vocabulary, partially even in Low Saxon spelling, which is not standardised, and partly in forms adapted to Standard German. Less than half of the residents of Hamburg are members of an organized religious group. In 2018, 24.9% of the population belonged to the North Elbian Evangelical Lutheran Church, the largest religious body, and 9.9% to the Roman Catholic Church; 65.2% of the population is not religious or adheres to other religions. According to the publication "Muslimisches Leben in Deutschland" ("Muslim life in Germany"), an estimated 141,900 Muslim migrants (from nearly 50 countries of origin) lived in Hamburg in 2008. About three years later (May 2011), calculations based on census data for 21 countries of origin put the number at about 143,200 Muslim migrants in Hamburg, making up 8.4% of the population. Hamburg is the seat of one of the three bishops of the Evangelical Lutheran Church in Northern Germany and the seat of the Roman Catholic Archdiocese of Hamburg. There are several mosques, including the Ahmadiyya-run Fazle Omar Mosque, which is the oldest in the city, and the Islamic Centre Hamburg, as well as a Jewish community. The city of Hamburg is one of Germany's 16 states, so the Mayor of Hamburg's office corresponds more to the role of a minister-president than to that of a city mayor. As a German state government, it is responsible for public education, correctional institutions and public safety; as a municipality, it is additionally responsible for libraries, recreational facilities, sanitation, water supply and welfare services. Since 1897, the seat of the government has been the Hamburg Rathaus (Hamburg City Hall), with the office of the mayor, the meeting room for the Senate and the floor of the Hamburg Parliament. From 2001 until 2010, the mayor of Hamburg was Ole von Beust, who governed in Germany's first statewide "black-green" coalition, consisting of the conservative CDU and the alternative GAL, Hamburg's regional wing of the Alliance 90/The Greens party. Von Beust was briefly succeeded by Christoph Ahlhaus in 2010, but the coalition broke apart on 28 November 2010. On 7 March 2011 Olaf Scholz (SPD) became mayor. After the 2015 election, the SPD and the Alliance 90/The Greens formed a coalition. Hamburg is made up of seven boroughs (German: "Bezirke") and subdivided into 104 quarters (German: "Stadtteile").
There are 181 localities (German: "Ortsteile"). The urban organization is regulated by the Constitution of Hamburg and several laws. Most of the quarters were formerly independent cities, towns or villages annexed into Hamburg proper. The last large annexation was made through the Greater Hamburg Act of 1937, when the cities of Altona, Harburg and Wandsbek were merged into the state of Hamburg. The "Act of the Constitution and Administration of Hanseatic city of Hamburg" established Hamburg as a state and a municipality. Some of the boroughs and quarters have been rearranged several times. Each borough is governed by a Borough Council (German: "Bezirksversammlung") and administered by a Municipal Administrator (German: "Bezirksamtsleiter"). The boroughs are not independent municipalities: their power is limited and subordinate to the Senate of Hamburg. The borough administrator is elected by the Borough Council and thereafter requires confirmation and appointment by Hamburg's Senate. The quarters have no governing bodies of their own. In 2008, the boroughs were Hamburg-Mitte, Altona, Eimsbüttel, Hamburg-Nord, Wandsbek, Bergedorf and Harburg. "Hamburg-Mitte" ("Hamburg Centre") covers mostly the urban centre of the city and consists of the quarters Billbrook, Billstedt, Borgfelde, Finkenwerder, HafenCity, Hamm, Hammerbrook, Horn, Kleiner Grasbrook, Neuwerk, Rothenburgsort, St. Georg, St. Pauli, Steinwerder, Veddel, Waltershof and Wilhelmsburg. The quarters Hamburg-Altstadt ("old town") and Neustadt ("new town") are the historical origin of Hamburg. "Altona" is the westernmost urban borough, on the right bank of the Elbe river. From 1640 to 1864, Altona was under the administration of the Danish monarchy. Altona was an independent city until 1937. Politically, the following quarters are part of Altona: Altona-Altstadt, Altona-Nord, Bahrenfeld, Ottensen, Othmarschen, Groß Flottbek, Osdorf, Lurup, Nienstedten, Blankenese, Iserbrook, Sülldorf, Rissen and Sternschanze. "Bergedorf" consists of the quarters Allermöhe, Altengamme, Bergedorf (the centre of the former independent town), Billwerder, Curslack, Kirchwerder, Lohbrügge, Moorfleet, Neuengamme, Neuallermöhe, Ochsenwerder, Reitbrook, Spadenland and Tatenberg. "Eimsbüttel" is split into nine quarters: Eidelstedt, Eimsbüttel, Harvestehude, Hoheluft-West, Lokstedt, Niendorf, Rotherbaum, Schnelsen and Stellingen. Located within this borough is the former Jewish neighbourhood of Grindel. "Hamburg-Nord" contains the quarters Alsterdorf, Barmbek-Nord, Barmbek-Süd, Dulsberg, Eppendorf, Fuhlsbüttel, Groß Borstel, Hoheluft-Ost, Hohenfelde, Langenhorn, Ohlsdorf with the Ohlsdorf cemetery, Uhlenhorst and Winterhude. "Harburg" lies on the southern shores of the river Elbe and covers parts of the port of Hamburg, residential and rural areas, and some research institutes. Its quarters are Altenwerder, Cranz, Eißendorf, Francop, Gut Moor, Harburg, Hausbruch, Heimfeld, Langenbek, Marmstorf, Moorburg, Neuenfelde, Neugraben-Fischbek, Neuland, Rönneburg, Sinstorf and Wilstorf. "Wandsbek" is divided into the quarters Bergstedt, Bramfeld, Duvenstedt, Eilbek, Farmsen-Berne, Hummelsbüttel, Jenfeld, Lemsahl-Mellingstedt, Marienthal, Poppenbüttel, Rahlstedt, Sasel, Steilshoop, Tonndorf, Volksdorf, Wandsbek, Wellingsbüttel and Wohldorf-Ohlstedt. Hamburg has architecturally significant buildings in a wide range of styles, and no skyscrapers (see List of tallest buildings in Hamburg).
Churches are important landmarks, such as St Nicholas', which for a short time in the 19th century was the world's tallest building. The skyline features the tall spires of the most important churches ("Hauptkirchen") St Michael's (nicknamed "Michel"), St Peter's, St James's ("St. Jacobi") and St. Catherine's, covered with copper plates, as well as the Heinrich-Hertz-Turm, the radio and television tower (no longer publicly accessible). The many streams, rivers and canals are crossed by some 2,500 bridges, more than London, Amsterdam and Venice put together. Hamburg has more bridges inside its city limits than any other city in the world. The Köhlbrandbrücke, the Freihafen Elbbrücken, and the Lombardsbrücke and Kennedybrücke, which divide the Binnenalster from the Aussenalster, are important roadways. The town hall is a richly decorated Neo-Renaissance building finished in 1897. Its tower is 112 metres high, and its façade, 111 metres long, depicts the emperors of the Holy Roman Empire, since Hamburg was, as a Free Imperial City, only under the sovereignty of the emperor. The Chilehaus, a brick expressionist office building built in 1922 and designed by architect Fritz Höger, is shaped like an ocean liner. Europe's largest urban development since 2008, the HafenCity, will house about 10,000 inhabitants and 15,000 workers. The plan includes designs by Rem Koolhaas and Renzo Piano. The Elbphilharmonie ("Elbe Philharmonic Hall"), opened in January 2017, houses concerts in a sail-shaped building on top of an old warehouse, designed by the architects Herzog & de Meuron. The many parks are distributed over the whole city, which makes Hamburg a very verdant city. The biggest parks are the "Stadtpark", the Ohlsdorf Cemetery and "Planten un Blomen". The "Stadtpark", Hamburg's "Central Park", has a great lawn and a huge water tower, which houses one of Europe's biggest planetaria. The park and its buildings were designed by Fritz Schumacher in the 1910s. The lavish and spacious "Planten un Blomen" park (Low German for "plants and flowers"), located in the centre of Hamburg, is the green heart of the city. Within the park are various thematic gardens, the biggest Japanese garden in Germany, and the "Alter Botanischer Garten Hamburg", a historic botanical garden that now consists primarily of greenhouses. The "Botanischer Garten Hamburg" is a modern botanical garden maintained by the University of Hamburg. Besides these, there are many more parks of various sizes. In 2014 Hamburg celebrated an anniversary of its park culture, for which many parks were reconstructed and cleaned up. Moreover, every year from May to early October the famous water-light concerts are held in the "Planten un Blomen" park. Hamburg has more than 40 theatres, 60 museums and 100 music venues and clubs. In 2005, more than 18 million people visited concerts, exhibitions, theatres, cinemas, museums, and cultural events. More than 8,552 taxable companies (average size 3.16 employees) were engaged in the culture sector, which includes music, performing arts and literature. There are five companies in the creative sector per thousand residents (as compared to three in Berlin and 37 in London). Hamburg has entered the European Green Capital Award scheme, and was awarded the title of European Green Capital for 2011. The state-owned "Deutsches Schauspielhaus", the Thalia Theatre, the Ohnsorg Theatre, "Schmidts Tivoli" and the "Kampnagel" are well-known theatres.
The English Theatre of Hamburg, near the U3 Mundsburg station, was established in 1976, is the oldest professional English-speaking theatre in Germany, and has exclusively native English-speaking actors in its company. Hamburg has several large museums and galleries showing classical and contemporary art, for example the Kunsthalle Hamburg with its contemporary art gallery ("Galerie der Gegenwart"), the Museum for Art and Industry ("Museum für Kunst und Gewerbe") and the Deichtorhallen/House of Photography. The Internationales Maritimes Museum Hamburg opened in the HafenCity quarter in 2008. There are various specialised museums in Hamburg, such as the Archaeological Museum Hamburg ("Archäologisches Museum Hamburg") in Hamburg-Harburg, the Hamburg Museum of Work ("Museum der Arbeit"), and several museums of local history, for example the Kiekeberg Open Air Museum ("Freilichtmuseum am Kiekeberg"). Two "museum ships" near Landungsbrücken bear witness to the freight ship era ("Cap San Diego") and the cargo sailing ship era ("Rickmer Rickmers"). The world's largest model railway museum, Miniatur Wunderland, with total railway length, is also situated near Landungsbrücken in a former warehouse. "BallinStadt (Emigration City)" is dedicated to the millions of Europeans who emigrated to North and South America between 1850 and 1939. Visitors descended from those overseas emigrants may search for their ancestors at computer terminals. Hamburg State Opera is a leading opera company. Its orchestra is the Philharmoniker Hamburg. The city's other well-known orchestra is the NDR Elbphilharmonie Orchestra. The main concert venue is the new concert hall Elbphilharmonie; before its opening, it was the Laeiszhalle ("Musikhalle Hamburg"). The Laeiszhalle also houses a third orchestra, the Hamburger Symphoniker. György Ligeti and Alfred Schnittke taught at the Hochschule für Musik und Theater Hamburg. Hamburg is the birthplace of Johannes Brahms, who spent his formative early years in the city, and the birthplace and home of the famous waltz composer Oscar Fetrás, who wrote the well-known "Mondnacht auf der Alster" waltz. Since the German premiere of "Cats" in 1986, there have always been musicals running, including "The Phantom of the Opera", "The Lion King", "Dirty Dancing" and "Dance of the Vampires". This density, the highest in Germany, is partly due to the major musical production company "Stage Entertainment" being based in the city. The city was a major centre for rock music in the early 1960s. Prior to the group's initial recording and widespread fame, Hamburg provided residency and performing venues for the Beatles from August 1960 to December 1962, where they proved popular and gained local acclaim. Hamburg has nurtured a number of other pop musicians. Identical twins Bill Kaulitz and Tom Kaulitz from the rock band Tokio Hotel live and maintain a recording studio in Hamburg, where they recorded their second and third albums, Zimmer 483 and Humanoid. Singer Nena also lives in Hamburg. There are German hip hop acts, such as Fünf Sterne deluxe, Samy Deluxe, Beginner and Fettes Brot. There is a substantial alternative and punk scene, which gathers around the Rote Flora, a squatted former theatre located in the Sternschanze. Hamburg is famous for an original kind of German alternative music called "Hamburger Schule" ("Hamburg School"), a term used for bands like Tocotronic, Blumfeld, Tomte or Kante. 
The city was a major centre for heavy metal music in the 1980s. Helloween, Gamma Ray, Running Wild and Grave Digger started in Hamburg. The industrial rock band KMFDM was also formed in Hamburg, initially as a performance art project. The influences of these and other bands from the area helped establish the subgenre of power metal. Hamburg has a vibrant psychedelic trance community, with record labels such as Spirit Zone. Hamburg is noted for several festivals and regular events. Some of them are street festivals, such as the gay pride "Hamburg Pride" festival or the Alster fair (German: "Alstervergnügen"), held at the "Binnenalster". The "Hamburger DOM" is northern Germany's biggest funfair, held three times a year. "Hafengeburtstag" is a funfair to honour the birthday of the port of Hamburg with a party and a ship parade. The annual bikers' service in Saint Michael's Church attracts tens of thousands of bikers. Christmas markets in December are held at the Hamburg Rathaus square, among other places. The "long night of museums" (German: "Lange Nacht der Museen") offers one entrance fee for about 40 museums until midnight. The sixth "Festival of Cultures" was held in September 2008, celebrating multi-cultural life. The Filmfest Hamburg, a film festival originating from the 1950s "Film Days" (German: "Film Tage"), presents a wide range of films. The "Hamburg Messe and Congress" offers a venue for trade shows, such as "hanseboot", an international boat show, or "Du und deine Welt", a large consumer products show. Regular sports events, some open to professional and amateur participants, are the cycling competition EuroEyes Cyclassics, the Hamburg Marathon (the biggest marathon in Germany after Berlin), the tennis tournament Hamburg Masters and equestrian events like the Deutsches Derby. Since 2007, Hamburg has hosted the Dockville music and art festival, which takes place every summer in Wilhelmsburg. Original Hamburg dishes include "Birnen, Bohnen und Speck" (green beans cooked with pears and bacon) and "Aalsuppe" (Hamburgisch "Oolsupp"), which is often mistaken to be German for "eel soup" ("Aal"/"Ool" translating as 'eel'), but the name probably comes from the Low Saxon "allns", meaning "all", "everything and the kitchen sink", not necessarily eel. Today eel is often included to meet the expectations of unsuspecting diners. There is "Bratkartoffeln" (pan-fried potato slices), "Finkenwerder Scholle" (Low Saxon "Finkwarder Scholl", pan-fried plaice), "Pannfisch" (pan-fried fish with mustard sauce), "Rote Grütze" (Low Saxon "Rode Grütt", related to Danish "rødgrød", a type of summer pudding made mostly from berries and usually served with cream, like Danish "rødgrød med fløde") and "Labskaus" (a mixture of corned beef, mashed potatoes and beetroot, a cousin of the Norwegian "lapskaus" and Liverpool's lobscouse, all offshoots of an old-time one-pot meal that used to be the main component of the common sailor's humdrum diet on the high seas). "Alsterwasser" (in reference to the city's river, the Alster) is the local name for a type of shandy, a concoction of equal parts of beer and carbonated lemonade ("Zitronenlimonade"), the lemonade being added to the beer. There is the curious regional dessert pastry called Franzbrötchen. Looking rather like a flattened croissant, it is similar in preparation but includes a cinnamon and sugar filling, often with raisins or brown sugar streusel. 
The name may also reflect the roll's croissant-like appearance: "franz" appears to be a shortening of "französisch", meaning "French", which would make a "Franzbrötchen" a "French roll". Ordinary bread rolls tend to be oval-shaped and of the French bread variety. The local name is "Schrippe" (scored lengthways) for the oval kind and, for the round kind, "Rundstück" ("round piece", rather than mainstream German "Brötchen", the diminutive form of "Brot", "bread"), a relative of Denmark's "rundstykke". In fact, while by no means identical, the cuisines of Hamburg and Denmark, especially of Copenhagen, have a lot in common. This also includes a predilection for open-faced sandwiches of all sorts, especially topped with cold-smoked or pickled fish. The American hamburger may have developed from Hamburg's "Frikadeller": a pan-fried patty (usually larger and thicker than its American counterpart) made from a mixture of ground beef, soaked stale bread, egg, chopped onion, salt and pepper, usually served with potatoes and vegetables like any other piece of meat, not usually on a bun. The Oxford Dictionary defined a "Hamburger steak" in 1802: a sometimes-smoked and -salted piece of meat that, according to some sources, came from Hamburg to America. The word "hamburger", along with the food itself, has entered all English-speaking countries, with derivative words in non-English-speaking countries. There are restaurants which offer most of these dishes, especially in the HafenCity. Hamburg has long been a centre of alternative music and counter-culture movements. The boroughs of St. Pauli, Sternschanze and Altona are known for being home to many radical left-wing and anarchist groups, culminating every year during the traditional May Day demonstrations. The Rote Flora is a former theatre, which was squatted in 1989 in the wake of redevelopment plans for that area. Since then, the Rote Flora has become one of the best-known strongholds against gentrification and a place for radical culture throughout Germany and Europe. Especially during the 33rd G8 summit in nearby Heiligendamm, the Rote Flora served as an important venue for organising the counter-protests taking place at the time. During the 2017 G20 summit, which took place in Hamburg on 7–8 July that year, protestors clashed violently with the police in the Sternschanze area and particularly around the Rote Flora. On 7 July, several cars were set on fire and street barricades were erected to prevent the police from entering the area. In response, the police made heavy use of water cannons and tear gas in order to scatter the protestors. However, this was met with strong resistance by protestors, resulting in a total of 160 injured police and 75 arrested participants in the protests. After the summit, however, the Rote Flora issued a statement in which it condemned the arbitrary acts of violence committed by some of the protestors, whilst generally defending the right to use violence as a means of self-defence against police oppression. In particular, the spokesperson of the Rote Flora said that the autonomous cultural centre had a traditionally good relationship with its neighbours and local residents, since they were united in their fight against gentrification in that neighbourhood. 
There are several English-speaking communities, such as the Caledonian Society of Hamburg, The British Club Hamburg, the British and Commonwealth Luncheon Club, the Anglo-German Club e.V., the Professional Women's Forum, The British Decorative and Fine Arts Society, The English Speaking Union of the Commonwealth, The Scottish Country Dancers of Hamburg, The Hamburg Players e.V. English Language Theatre Group, The Hamburg Exiles Rugby Club, several cricket clubs, and The Morris Minor Register of Hamburg. Furthermore, the Anglo-Hanseatic Lodge No. 850 within the Grand Lodge of British Freemasons of Germany under the United Grand Lodges of Germany works in Hamburg, and has a diverse expat membership. There is also a 400-year-old Anglican church community worshipping in the city. American and international English-speaking organisations include The American Club of Hamburg e.V., the American Women's Club of Hamburg, the English Speaking Union, the German-American Women's Club, and The International Women's Club of Hamburg e.V. "The American Chamber of Commerce" handles matters related to business affairs. The International School of Hamburg serves school children. William Wordsworth, Dorothy Wordsworth and Samuel Taylor Coleridge spent the last two weeks of September 1798 at Hamburg. Dorothy wrote a detailed journal of their stay, labelled "The Hamburg Journal" (1798) by the noted Wordsworth scholar Edward de Selincourt. A Hamburg saying, referring to its anglophile nature, is: "Wenn es in London anfängt zu regnen, spannen die Hamburger den Schirm auf." ... "When it starts raining in London, people in Hamburg open their umbrellas." A memorial for the successful English engineer William Lindley, who reorganized, beginning in 1842, the drinking water and sewage system and thus helped to fight against cholera, stands near Baumwall train station in Vorsetzen street. In 2009, more than 2,500 "stumbling blocks" ("Stolpersteine") were laid, engraved with the names of deported and murdered citizens. Inserted into the pavement in front of their former houses, the blocks draw attention to the victims of Nazi persecution. The gross domestic product (GDP) of Hamburg was 119.0 billion € in 2018, accounting for 3.6% of German economic output. GDP per capita adjusted for purchasing power was 59,600 € or 197% of the EU27 average in the same year. The GDP per employee was 132% of the EU average. The city has a relatively high employment rate, with 88 percent of the working-age population employed in over 160,000 businesses. The average income of employees in 2016 was €49,332. The unemployment rate stood at 6.1% in October 2018, higher than the German average. Hamburg has for centuries been a commercial centre of Northern Europe, and is the most important banking city of Northern Germany. The city is the seat of Germany's oldest bank, Berenberg Bank, as well as of M.M.Warburg & CO and HSH Nordbank. The Hamburg Stock Exchange is the oldest of its kind in Germany. The most significant economic unit is the Port of Hamburg, which ranks third in Europe after Rotterdam and Antwerp and 17th-largest worldwide, handling 138.2 million tons of goods in 2016. International trade is also the reason for the large number of consulates in the city. Although situated up the Elbe, the port is considered a sea port due to its ability to handle large ocean-going vessels. Heavy industry in Hamburg includes the making of steel, aluminium and copper, as well as various large shipyards such as Blohm + Voss.
https://en.wikipedia.org/wiki?curid=13467
Hedonism Hedonism is a school of thought that argues that seeking pleasure and avoiding suffering are the only components of well-being. Ethical hedonism is the view that combines hedonism with welfarist ethics, which claims that what we should do depends exclusively on how the well-being of individuals is affected. Ethical hedonists would defend either increasing pleasure and reducing suffering for all beings capable of experiencing them, or just reducing suffering in the case of negative consequentialism. According to negative utilitarianism, only the minimization of suffering would matter. Ethical hedonism is said to have been started by Aristippus of Cyrene, a student of Socrates. He held the idea that pleasure is the highest good. For its part, hedonistic ethical egoism is the idea that all people have the right to do everything in their power to achieve the greatest amount of pleasure possible to them. It is also the idea that every person's pleasure should far surpass their amount of pain. The name derives from the Greek word for "delight" ("hēdonismos", from "hēdonē" "pleasure", cognate via Proto-Indo-European swéh₂dus through Ancient Greek with English "sweet", plus the suffix -ισμός, "-ismos", "ism"). The opposite of hedonism is hedonophobia, an extremely strong aversion to pleasure. According to medical author William C. Shiel Jr., MD, FACP, FACR, hedonophobia is "an abnormal, excessive, and persistent fear of pleasure." The condition of being unable to experience pleasure is anhedonia. In the original Old Babylonian version of the Epic of Gilgamesh, which was written soon after the invention of writing, Siduri gave the following advice: "Fill your belly. Day and night make merry. Let days be full of joy. Dance and make music day and night [...] These things alone are the concern of men." This may represent the first recorded advocacy of a hedonistic philosophy. Scenes of a harper entertaining guests at a feast were common in ancient Egyptian tombs (see Harper's Songs), and sometimes contained hedonistic elements, calling guests to submit to pleasure because they cannot be sure that they will be rewarded for good with a blissful afterlife. One such song is attributed to the reign of one of the pharaohs around the time of the 12th dynasty, and its text was used in the eighteenth and nineteenth dynasties. Democritus seems to be the earliest philosopher on record to have categorically embraced a hedonistic philosophy; he called the supreme goal of life "contentment" or "cheerfulness", claiming that "joy and sorrow are the distinguishing mark of things beneficial and harmful" (DK 68 B 188). The Cyrenaics were an ultra-hedonist Greek school of philosophy founded in the 4th century BC, supposedly by Aristippus of Cyrene, although many of the principles of the school are believed to have been formalized by his grandson of the same name, Aristippus the Younger. The school was so called after Cyrene, the birthplace of Aristippus. It was one of the earliest Socratic schools. The Cyrenaics taught that the only intrinsic good is pleasure, which meant not just the absence of pain, but positively enjoyable momentary sensations. Of these, physical ones are stronger than those of anticipation or memory. They did, however, recognize the value of social obligation, and that pleasure could be gained from altruism. Theodorus the Atheist, a disciple of the younger Aristippus, was a later exponent of hedonism who became well known for expounding atheism. 
The school died out within a century and was replaced by Epicureanism. The Cyrenaics were known for their skeptical theory of knowledge. They reduced logic to a basic doctrine concerning the criterion of truth. They thought that we can know with certainty our immediate sense-experiences (for instance, that one is having a sweet sensation) but can know nothing about the nature of the objects that cause these sensations (for instance, that the honey is sweet). They also denied that we can have knowledge of what the experiences of other people are like. All knowledge is immediate sensation. These sensations are motions which are purely subjective, and are painful, indifferent or pleasant, according to whether they are violent, tranquil or gentle. Further, they are entirely individual and can in no way be described as constituting absolute objective knowledge. Feeling, therefore, is the only possible criterion of knowledge and of conduct. Our ways of being affected are alone knowable. Thus the sole aim for everyone should be pleasure. Cyrenaicism deduces a single, universal aim for all people, which is pleasure. Furthermore, all feeling is momentary and homogeneous. It follows that past and future pleasure have no real existence for us, and that among present pleasures there is no distinction of kind. Socrates had spoken of the higher pleasures of the intellect; the Cyrenaics denied the validity of this distinction and said that bodily pleasures, being more simple and more intense, were preferable. Momentary pleasure, preferably of a physical kind, is the only good for humans. However, some actions which give immediate pleasure can create more than their equivalent of pain. The wise person should be in control of pleasures rather than be enslaved to them, otherwise pain will result, and this requires judgement to evaluate the different pleasures of life. Regard should be paid to law and custom, because even though these things have no intrinsic value on their own, violating them will lead to unpleasant penalties being imposed by others. Likewise, friendship and justice are useful because of the pleasure they provide. Thus the Cyrenaics believed in the hedonistic value of social obligation and altruistic behaviour. Epicureanism is a system of philosophy based upon the teachings of Epicurus (c. 341 – c. 270 BC), founded around 307 BC. Epicurus was an atomic materialist, following in the steps of Democritus and Leucippus. His materialism led him to a general stance against superstition and the idea of divine intervention. Following Aristippus (about whom very little is known), Epicurus believed that the greatest good was to seek modest, sustainable "pleasure" in the form of a state of tranquility and freedom from fear (ataraxia) and absence of bodily pain (aponia) through knowledge of the workings of the world and the limits of our desires. The combination of these two states is supposed to constitute happiness in its highest form. Although Epicureanism is a form of hedonism, insofar as it declares pleasure the sole intrinsic good, its conception of absence of pain as the greatest pleasure and its advocacy of a simple life make it different from "hedonism" as it is commonly understood. In the Epicurean view, the highest pleasure (tranquility and freedom from fear) was obtained by knowledge, friendship and living a virtuous and temperate life. He lauded the enjoyment of simple pleasures, by which he meant abstaining from bodily desires, such as sex and appetites, verging on asceticism. 
He argued that when eating, one should not eat too richly, for it could lead to dissatisfaction later, such as the grim realization that one could not afford such delicacies in the future. Likewise, sex could lead to increased lust and dissatisfaction with the sexual partner. Epicurus did not articulate a broad system of social ethics that has survived, but he had a unique version of the Golden Rule: it is impossible to live a pleasant life without living wisely and well and justly (agreeing "neither to harm nor be harmed"), and it is impossible to live wisely and well and justly without living a pleasant life. Epicureanism was originally a challenge to Platonism, though later it became the main opponent of Stoicism. Epicurus and his followers shunned politics. After the death of Epicurus, his school was headed by Hermarchus; later many Epicurean societies flourished in the Late Hellenistic era and during the Roman era (such as those in Antiochia, Alexandria, Rhodes and Ercolano). The poet Lucretius is its best-known Roman proponent. By the end of the Roman Empire, having undergone Christian attack and repression, Epicureanism had all but died out; it would be resurrected in the 17th century by the atomist Pierre Gassendi, who adapted it to Christian doctrine. Some writings by Epicurus have survived. Some scholars consider the epic poem "On the Nature of Things" by Lucretius to present in one unified work the core arguments and theories of Epicureanism. Many of the papyrus scrolls unearthed at the Villa of the Papyri at Herculaneum are Epicurean texts. At least some are thought to have belonged to the Epicurean Philodemus. Yangism has been described as a form of psychological and ethical egoism. The Yangist philosophers believed in the importance of maintaining self-interest through "keeping one's nature intact, protecting one's uniqueness, and not letting the body be tied by other things". Disagreeing with the Confucian virtues of li (propriety), ren (humaneness), and yi (righteousness) and the Legalist virtue of fa (law), the Yangists saw wei wo, or "everything for myself," as the only virtue necessary for self-cultivation. Individual pleasure is considered desirable, as in hedonism, but not at the expense of the health of the individual. The Yangists saw individual well-being as the prime purpose of life, and considered anything that hindered that well-being immoral and unnecessary. The main focus of the Yangists was on the concept of xing, or human nature, a term later incorporated by Mencius into Confucianism. The xing, according to sinologist A. C. Graham, is a person's "proper course of development" in life. Individuals can only rationally care for their own xing, and should not naively have to support the xing of other people, even if it means opposing the emperor. In this sense, Yangism is a "direct attack" on Confucianism, by implying that the power of the emperor, defended in Confucianism, is baseless and destructive, and that state intervention is morally flawed. The Confucian philosopher Mencius depicts Yangism as the direct opposite of Mohism: while Mohism promotes the idea of universal love and impartial caring, the Yangists acted only "for themselves," rejecting the altruism of Mohism. He criticized the Yangists as selfish, ignoring the duty of serving the public and caring only for personal concerns. Mencius saw Confucianism as the "Middle Way" between Mohism and Yangism. 
Judaism believes that the world was created to serve God, and in order to do so properly, God in turn gives mankind the opportunity to experience pleasure in the process of serving Him (Talmud Kidushin 82:b). God placed Adam and Eve in the Garden of Eden, Eden being the Hebrew word for "pleasure". In recent years, Rabbi Noah Weinberg articulated five different levels of pleasure; connecting with God is the highest possible pleasure. The Book of Ecclesiastes in the Old Testament proclaims, "There is nothing better for a person than that he should eat and drink and find enjoyment in his toil. This also, I saw, is from the hand of God..." (Ecclesiastes 2:24) Ethical hedonism as part of Christian theology has also been a concept in some evangelical circles, particularly in those of the Reformed tradition. The term Christian Hedonism was first coined by Reformed Baptist theologian John Piper in his 1986 book "Desiring God": "My shortest summary of it is: God is most glorified in us when we are most satisfied in Him. Or: The chief end of man is to glorify God by enjoying Him forever. Does Christian Hedonism make a god out of pleasure? No. It says that we all make a god out of what we take most pleasure in." Piper states his term may describe the theology of Jonathan Edwards, who in 1812 referred to "a future enjoyment of Him [God] in heaven". Already in the 17th century, the atomist Pierre Gassendi had adapted Epicureanism to the Christian doctrine. The concept of hedonism is also found in Nastika (heterodox) philosophy such as the Charvaka school. However, hedonism is criticized by Astika (orthodox) schools of thought on the basis that it is inherently egoistic and therefore detrimental to spiritual liberation. Utilitarianism addresses problems with moral motivation neglected by Kantianism by giving a central role to happiness. It is an ethical theory holding that the proper course of action is the one that maximizes the overall good of the society. It is thus one form of consequentialism, meaning that the moral worth of an action is determined by its resulting outcome. The most influential contributors to this theory are considered to be the 18th and 19th-century British philosophers Jeremy Bentham and John Stuart Mill. Conjoining hedonism, as a view as to what is good for people, to utilitarianism has the result that all action should be directed toward achieving the greatest total amount of happiness (see Hedonic calculus). Though consistent in their pursuit of happiness, Bentham's and Mill's versions of hedonism differ. There are two somewhat basic schools of thought on hedonism. One is an extreme form of hedonism that views moral and sexual restraint as either unnecessary or harmful; famous proponents are the Marquis de Sade and John Wilmot, 2nd Earl of Rochester. Contemporary proponents of hedonism include the Swedish philosopher Torbjörn Tännsjö, Fred Feldman, and the Spanish ethics philosopher Esperanza Guisán (who published a "Hedonist manifesto" in 1990). A dedicated contemporary hedonist philosopher and writer on the history of hedonistic thought is the French philosopher Michel Onfray. He has written two books directly on the subject ("L'invention du plaisir : fragments cyréaniques" and "La puissance d'exister : Manifeste hédoniste"). He defines hedonism "as an introspective attitude to life based on taking pleasure yourself and pleasuring others, without harming yourself or anyone else". 
Onfray's philosophical project is to define an ethical hedonism, a joyous utilitarianism, and a generalized aesthetic of sensual materialism that explores how to use the brain's and the body's capacities to their fullest extent, while restoring philosophy to a useful role in art, politics, and everyday life and decisions. Onfray's works "have explored the philosophical resonances and components of (and challenges to) science, painting, gastronomy, sex and sensuality, bioethics, wine, and writing. His most ambitious project is his projected six-volume Counter-history of Philosophy," of which three have been published. For him, "in opposition to the ascetic ideal advocated by the dominant school of thought, hedonism suggests identifying the highest good with your own pleasure and that of others; the one must never be indulged at the expense of sacrificing the other. Obtaining this balance – my pleasure at the same time as the pleasure of others – presumes that we approach the subject from different angles – political, ethical, aesthetic, erotic, bioethical, pedagogical, historiographical...." For this he has "written books on each of these facets of the same world view". His philosophy aims for "micro-revolutions", or "revolutions of the individual and small groups of like-minded people who live by his hedonistic, libertarian values". The Abolitionist Society is a transhumanist group calling for the abolition of suffering in all sentient life through the use of advanced biotechnology. Its core philosophy is negative utilitarianism. David Pearce is a theorist of this perspective, and he believes and promotes the idea that there exists a strong ethical imperative for humans to work towards the abolition of suffering in all sentient life. His book-length internet manifesto "The Hedonistic Imperative" outlines how technologies such as genetic engineering, nanotechnology, pharmacology, and neurosurgery could potentially converge to eliminate all forms of unpleasant experience among human and non-human animals, replacing suffering with gradients of well-being, a project he refers to as "paradise engineering". A transhumanist and a vegan, Pearce believes that we (or our future posthuman descendants) have a responsibility not only to avoid cruelty to animals within human society but also to alleviate the suffering of animals in the wild. In a talk David Pearce gave at the Future of Humanity Institute and at the Charity International 'Happiness Conference', he said: Sadly, what "won't" abolish suffering, or at least not on its own, is socio-economic reform, or exponential economic growth, or technological progress in the usual sense, or any of the traditional panaceas for solving the world's ills. Improving the external environment is admirable and important; but such improvement can't recalibrate our hedonic treadmill above a genetically constrained ceiling. Twin studies confirm there is a [partially] heritable set-point of well-being - or ill-being - around which we all tend to fluctuate over the course of a lifetime. This set-point varies between individuals. It's possible to "lower" an individual's hedonic set-point by inflicting prolonged uncontrolled stress; but even this re-set is not as easy as it sounds: suicide-rates typically go down in wartime; and six months after a quadriplegia-inducing accident, studies suggest that we are typically neither more nor less unhappy than we were before the catastrophic event. 
Unfortunately, attempts to build an ideal society can't overcome this biological ceiling, whether utopias of the left or right, free-market or socialist, religious or secular, futuristic high-tech or simply cultivating one's garden. Even if "everything" that traditional futurists have asked for is delivered - eternal youth, unlimited material wealth, morphological freedom, superintelligence, immersive VR, molecular nanotechnology, etc - there is no evidence that our subjective quality of life would on average significantly surpass the quality of life of our hunter-gatherer ancestors - or a New Guinea tribesman today - in the absence of reward pathway enrichment. This claim is difficult to prove in the absence of sophisticated neuroscanning; but objective indices of psychological distress, e.g. suicide rates, bear it out. "Un"enhanced humans will still be prey to the spectrum of Darwinian emotions, ranging from terrible suffering to petty disappointments and frustrations - sadness, anxiety, jealousy, existential angst. Their biology is part of "what it means to be human". Subjectively unpleasant states of consciousness exist because they were genetically adaptive. Each of our core emotions had a distinct signalling role in our evolutionary past: they tended to promote behaviours that enhanced the inclusive fitness of our genes in the ancestral environment. The philosopher Daniel Haybron has distinguished between psychological, ethical, welfare and axiological hedonism. Russian physicist and philosopher Victor Argonov argues that hedonism is not only a philosophical but also a verifiable scientific hypothesis. In 2014, he suggested "postulates of the pleasure principle", the confirmation of which would lead to a new scientific discipline, hedodynamics. Hedodynamics would be able to forecast the distant future development of human civilization and even the probable structure and psychology of other rational beings within the universe. In order to build such a theory, science must discover the neural correlate of pleasure: a neurophysiological parameter unambiguously corresponding to the feeling of pleasure (hedonic tone). According to Argonov, posthumans will be able to reprogram their motivations in an arbitrary manner (to get pleasure from any programmed activity). And if the pleasure principle postulates are true, then the general direction of civilization's development is obvious: the maximization of integral happiness in posthuman life, the product of life span and average happiness (a minimal formalization of this quantity is sketched at the end of this article). Posthumans will avoid constant pleasure stimulation, because it is incompatible with the rational behavior required to prolong life. However, they can become on average much happier than modern humans. Many other aspects of posthuman society could be predicted by hedodynamics if the neural correlate of pleasure were discovered: for example, the optimal number of individuals, their optimal body size (whether it matters for happiness or not) and the degree of aggression. Critics of hedonism have objected to its exclusive concentration on pleasure as valuable. In particular, G. E. Moore offered a thought experiment in criticism of pleasure as the sole bearer of value: he imagined two worlds, one of exceeding beauty and the other a heap of filth. Neither of these worlds will be experienced by anyone. The question then is if it is better for the beautiful world to exist than the heap of filth. 
In this, Moore implied that states of affairs have value beyond conscious pleasure, which he said spoke against the validity of hedonism. Perhaps the most famous objection to hedonism is Robert Nozick's experience machine. Nozick asks us to hypothetically imagine a machine that will allow us to experience whatever we want; if we want to experience making friends, it will give this to us. Nozick claims that by hedonistic logic, we should remain in this machine for the rest of our lives. However, he gives three reasons why this is not a preferable scenario: firstly, because we want to "do" certain things, as opposed to merely experience them; secondly, because we want to be a certain kind of person, as opposed to an 'indeterminate blob'; and thirdly, because such a thing would limit our experiences to only what we can imagine. Peter Singer, a hedonistic utilitarian, and Katarzyna de Lazari-Radek have both argued against such an objection by saying that it only provides an answer to certain forms of hedonism, and ignores others. In Islam, one of the main duties of a Muslim is to conquer his nafs (his ego, self, passions, desires) and to be free from it. Certain joys of life are permissible provided they do not lead to excess or evildoing that may bring harm. It is understood that everyone takes their passion as their idol; Islam calls these tawaghit (idols) and Taghut (worship of other than Allah), so there has to be a means of controlling these nafs. 
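Argonov's "integral happiness", described above as the product of life span and average happiness, can be written out compactly. The following is a minimal sketch introduced here for illustration, not a formula taken from Argonov's papers; the symbols T (life span), h(t) (momentary hedonic tone) and \bar{h} (average happiness) are assumptions of this sketch.

```latex
% Illustrative formalization of "integral happiness" (assumed notation):
%   T       -- life span
%   h(t)    -- hedonic tone (momentary happiness) at time t
%   \bar{h} -- average happiness over the life span
H_{\mathrm{int}} = \int_{0}^{T} h(t)\,\mathrm{d}t = T \cdot \bar{h},
\qquad
\bar{h} = \frac{1}{T} \int_{0}^{T} h(t)\,\mathrm{d}t
```

On this reading, maximizing integral happiness is a trade-off between longevity and average hedonic tone, which matches Argonov's remark that posthumans would avoid constant pleasure stimulation whenever it raises average happiness at a greater proportional cost to life span.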
https://en.wikipedia.org/wiki?curid=13470
Holocene The Holocene is the current geological epoch. It began approximately 11,650 cal years before present (around 9,700 BCE), after the last glacial period, which concluded with the Holocene glacial retreat. The Holocene and the preceding Pleistocene together form the Quaternary period. The Holocene has been identified with the current warm period, known as MIS 1. It is considered by some to be an interglacial period within the Pleistocene Epoch, called the Flandrian interglacial. The Holocene corresponds with the rapid proliferation, growth and impacts of the human species worldwide, including all of its written history, technological revolutions, development of major civilizations, and overall significant transition towards urban living in the present. The human impact on modern-era Earth and its ecosystems may be considered of global significance for the future evolution of living species, including approximately synchronous lithospheric evidence, or more recently hydrospheric and atmospheric evidence of the human impact. In July 2018, the International Union of Geological Sciences split the Holocene epoch into three distinct subsections: the Greenlandian (11,700 years ago to 8,200 years ago), the Northgrippian (8,200 years ago to 4,200 years ago) and the Meghalayan (4,200 years ago to the present), as proposed by the International Commission on Stratigraphy. The boundary stratotype of the Meghalayan is a speleothem in Mawmluh cave in India, and the global auxiliary stratotype is an ice core from Mount Logan in Canada. The word is formed from two Ancient Greek words. "Holos" is the Greek word for "whole." "Cene" comes from the Greek word "kainos," meaning "new." The concept is that this epoch is "entirely new." The suffix '-cene' is used for all seven epochs of the Cenozoic Era. It is accepted by the International Commission on Stratigraphy that the Holocene started approximately 11,650 cal years BP. The Subcommission on Quaternary Stratigraphy quotes Gibbard and van Kolfschoten in Gradstein, Ogg and Smith in stating that the term 'Recent' as an alternative to Holocene is invalid and should not be used; it also observes that the term Flandrian, derived from marine transgression sediments on the Flanders coast of Belgium, has been used as a synonym for Holocene by authors who consider that the last 10,000 years should have the same stage status as previous interglacial events and thus be included in the Pleistocene. The International Commission on Stratigraphy, however, considers the Holocene an epoch following the Pleistocene and specifically the last glacial period. Local names for the last glacial period include the Wisconsinan in North America, the Weichselian in Europe, the Devensian in Britain, the Llanquihue in Chile and the Otiran in New Zealand. The Holocene can be subdivided into five time intervals, or chronozones, based on climatic fluctuations. The Blytt–Sernander classification of climatic periods, initially defined by plant remains in peat mosses, is currently being explored. Geologists working in different regions are studying sea levels, peat bogs and ice core samples by a variety of methods, with a view toward further verifying and refining the Blytt–Sernander sequence. They find a general correspondence across Eurasia and North America, though the method was once thought to be of no interest. The scheme was defined for Northern Europe, but the climate changes were claimed to occur more widely. 
The periods of the scheme include a few of the final pre-Holocene oscillations of the last glacial period and then classify climates of more recent prehistory. Paleontologists have not defined any faunal stages for the Holocene. If subdivision is necessary, periods of human technological development, such as the Mesolithic, Neolithic, and Bronze Age, are usually used. However, the time periods referenced by these terms vary with the emergence of those technologies in different parts of the world. Climatically, the Holocene may be divided evenly into the Hypsithermal period, with warmer temperatures on average in many regions, and the Neoglacial period. The boundary coincides with the start of the Bronze Age in Europe. According to some scholars, a third division, the Anthropocene, has now begun. This term is used to denote the present time interval in which many geologically significant conditions and processes have been profoundly altered by human activities. The 'Anthropocene' (a term coined by Paul Crutzen and Eugene Stoermer in 2000) is not a formally defined geological unit. The Subcommission on Quaternary Stratigraphy of the International Commission on Stratigraphy has a working group to determine whether it should be. In May 2019, members of the working group voted in favour of recognizing the Anthropocene as a formal chronostratigraphic unit, with stratigraphic signals around the mid-twentieth century CE as its base. The exact criteria have still to be decided upon, after which the recommendation also has to be approved by the working group's parent bodies (ultimately the International Union of Geological Sciences). Continental motions due to plate tectonics are less than a kilometre over a span of only 10,000 years, consistent with typical plate velocities of a few centimetres per year. However, ice melt caused world sea levels to rise about in the early part of the Holocene. In addition, many areas above about 40 degrees north latitude had been depressed by the weight of the Pleistocene glaciers and rose as much as due to post-glacial rebound over the late Pleistocene and Holocene, and are still rising today. The sea level rise and temporary land depression allowed temporary marine incursions into areas that are now far from the sea. Holocene marine fossils are known, for example, from Vermont and Michigan. Other than higher-latitude temporary marine incursions associated with glacial depression, Holocene fossils are found primarily in lakebed, floodplain, and cave deposits. Holocene marine deposits along low-latitude coastlines are rare because the rise in sea levels during the period exceeds any likely tectonic uplift of non-glacial origin. Post-glacial rebound in the Scandinavia region resulted in the formation of the Baltic Sea. Earthquakes are a leading cause of sediment deformation, leading to the creation and destruction of bodies of water. The region continues to rise, still causing weak earthquakes across Northern Europe. The equivalent event in North America was the rebound of Hudson Bay, as it shrank from its larger, immediate post-glacial Tyrrell Sea phase, to near its present boundaries. Climate has been fairly stable over the Holocene. Ice core records show that before the Holocene there was global warming after the end of the last ice age and cooling periods, but climate changes became more regional at the start of the Younger Dryas. 
During the transition from the last glacial to the Holocene, the Huelmo–Mascardi Cold Reversal in the Southern Hemisphere began before the Younger Dryas, and the maximum warmth flowed south to north from 11,000 to 7,000 years ago. It appears that this was influenced by the residual glacial ice remaining in the Northern Hemisphere until the later date. The Holocene climatic optimum (HCO) was a period in which the global climate became warmer. However, the warming was probably not uniform across the world. This period of warmth ended about 5,500 years ago with the descent into the Neoglacial and concomitant Neopluvial. At that time, the climate was not unlike today's, but there was a slightly warmer period from the 10th to the 14th centuries known as the Medieval Warm Period. This was followed by the Little Ice Age, from the 13th or 14th century to the mid-19th century. Compared to glacial conditions, habitable zones have expanded northwards, reaching their northernmost point during the HCO. Greater moisture in the polar regions has caused the disappearance of steppe-tundra. The temporal and spatial extent of Holocene climate change is an area of considerable uncertainty, with radiative forcing recently proposed to be the origin of cycles identified in the North Atlantic region. Climate cyclicity through the Holocene (Bond events) has been observed in or near marine settings and is strongly controlled by glacial input to the North Atlantic. Periodicities of ≈2500, ≈1500, and ≈1000 years are generally observed in the North Atlantic. At the same time, spectral analyses of the continental record, which is remote from oceanic influence, reveal persistent periodicities of 1,000 and 500 years that may correspond to solar activity variations during the Holocene epoch. A 1,500-year cycle corresponding to the North Atlantic oceanic circulation may have had widespread global distribution in the Late Holocene. Animal and plant life have not evolved much during the relatively short Holocene, but there have been major shifts in the distributions of plants and animals. A number of large animals including mammoths and mastodons, saber-toothed cats like "Smilodon" and "Homotherium", and giant sloths disappeared in the late Pleistocene and early Holocene, especially in North America, where animals that survived elsewhere (including horses and camels) became extinct. This extinction of American megafauna has been attributed to the arrival of the ancestors of Amerindians, though most scientists assert that climatic change also contributed. In addition, a controversial bolide impact over North America has been hypothesized to have triggered the Younger Dryas. Throughout the world, ecosystems in cooler climates that were previously regional have been isolated in higher-altitude ecological "islands". The "8.2 ka event", an abrupt cold spell recorded as a negative excursion in the record lasting 400 years, is the most prominent climatic event occurring in the Holocene epoch, and may have marked a resurgence of ice cover. It has been suggested that this event was caused by the final drainage of Lake Agassiz, which had been confined by the glaciers, disrupting the thermohaline circulation of the Atlantic. Subsequent research, however, suggested that the discharge was probably superimposed upon a longer episode of cooler climate lasting up to 600 years, and observed that the extent of the area affected was unclear. 
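Periodicities like the ones above are typically identified by spectral analysis of dated proxy records. The following is a minimal, illustrative sketch rather than any published workflow: it generates a synthetic, evenly sampled proxy series containing an assumed 1,500-year cycle plus noise, then recovers the dominant period with a simple periodogram; the sampling interval, amplitude and noise level are arbitrary assumptions.

```python
import numpy as np

# Synthetic proxy record: 12,000 years sampled every 10 years,
# with an assumed 1,500-year cycle buried in noise (illustrative only).
rng = np.random.default_rng(0)
dt = 10.0                                    # years per sample
t = np.arange(0.0, 12_000.0, dt)             # time axis in years
signal = np.sin(2.0 * np.pi * t / 1_500.0)   # hypothetical 1,500-year cycle
proxy = signal + 0.5 * rng.standard_normal(t.size)

# Periodogram via the real FFT; the peak frequency gives the dominant period.
spectrum = np.abs(np.fft.rfft(proxy - proxy.mean())) ** 2
freqs = np.fft.rfftfreq(t.size, d=dt)        # cycles per year
peak = freqs[1:][np.argmax(spectrum[1:])]    # skip the zero-frequency bin
print(f"dominant period = {1.0 / peak:.0f} years")  # prints 1500
```

Real proxy records are irregularly sampled and carry age-model uncertainty, so published analyses add interpolation or Lomb–Scargle methods and significance testing against red noise, all of which this sketch omits.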
The beginning of the Holocene corresponds with the beginning of the Mesolithic age in most of Europe, but in regions such as the Middle East and Anatolia, with a very early neolithisation, Epipaleolithic is preferred in place of Mesolithic. Cultures in this period include Hamburgian, Federmesser, and the Natufian culture, during which the oldest inhabited places still existing on Earth were first settled, such as Tell es-Sultan (Jericho) in the Middle East. There is also evolving archaeological evidence of proto-religion at locations such as Göbekli Tepe, as long ago as the 9th millennium BCE. Both are followed by the aceramic Neolithic (Pre-Pottery Neolithic A and Pre-Pottery Neolithic B) and the pottery Neolithic. The Late Holocene brought advancements such as the bow and arrow and saw new methods of warfare in North America. Spear throwers and their large points were replaced by the bow and arrow with its small, narrow points beginning in Oregon and Washington. Villages built on defensive bluffs indicate increased warfare, leading to food gathering in communal groups for protection rather than individual hunting. In Mesoamerica, transformations of natural environments have been a common feature at least since the mid-Holocene, mostly through the exploitation of wild plants and the establishment of crops.
https://en.wikipedia.org/wiki?curid=13471
Harbor A harbor (American English) or harbour (British English; see spelling differences) (synonym: haven) is a sheltered body of water where ships, boats, and barges can be docked. The term "harbor" is often used interchangeably with "port", which is a man-made facility built for loading and unloading vessels and dropping off and picking up passengers. Ports usually include one or more harbors. Alexandria Port in Egypt is an example of a port with two harbors. Harbors may be natural or artificial. An artificial harbor can have deliberately constructed breakwaters, sea walls, or jetties, or it can be constructed by dredging, which requires maintenance by further periodic dredging. An example of an artificial harbor is Long Beach Harbor, California, United States, which was an array of salt marshes and tidal flats too shallow for modern merchant ships before it was first dredged in the early 20th century. In contrast, a natural harbor is surrounded on several sides by prominences of land. Examples of natural harbors include Sydney Harbour, Australia and Trincomalee Harbour in Sri Lanka. Artificial harbors are frequently built for use as ports. The oldest artificial harbor known is the Ancient Egyptian site at Wadi al-Jarf, on the Red Sea coast, which is at least 4500 years old (ca. 2600-2550 BC, reign of King Khufu). The largest artificially created harbor is Jebel Ali in Dubai. The Ancient Carthaginians constructed fortified, artificial harbors called cothons. A natural harbor is a landform where a section of a body of water is protected and deep enough to allow anchorage. Many such harbors are rias. Natural harbors have long been of great strategic naval and economic importance, and many great cities of the world are located on them. Having a protected harbor reduces or eliminates the need for breakwaters, as it results in calmer waves inside the harbor. For harbors near the North and South poles, being ice-free is an important advantage, especially when it is year-round. The world's southernmost harbor, located at Antarctica's Winter Quarters Bay (77° 50′ South), is sometimes ice-free, depending on the summertime pack ice conditions. Although the world's busiest port is a contested title, in 2006 the world's busiest harbor by cargo tonnage was the Port of Shanghai.
https://en.wikipedia.org/wiki?curid=13475
H H or h is the eighth letter in the ISO basic Latin alphabet. Its name in English is "aitch" (plural "aitches"), or regionally "haitch". The original Semitic letter Heth most likely represented the voiceless pharyngeal fricative. The form of the letter probably stood for a fence or posts. The Greek eta 'Η' in Archaic Greek alphabets still represented /h/ (later on it came to represent a long vowel). In this context, the letter eta is also known as heta to underline this fact. Thus, in the Old Italic alphabets, the letter heta of the Euboean alphabet was adopted with its original sound value /h/. While Etruscan and Latin had /h/ as a phoneme, almost all Romance languages lost the sound; Romanian later re-borrowed the phoneme from its neighbouring Slavic languages, and Spanish developed a secondary /h/ from /f/, before losing it again; various Spanish dialects have developed [h] as an allophone of /s/ or /x/ in most Spanish-speaking countries, and various dialects of Portuguese use it as an allophone of /ʁ/. 'H' is also used in many spelling systems in digraphs and trigraphs, such as 'ch', which represents /tʃ/ in Spanish, Galician, Old Portuguese and English, /ʃ/ in French and modern Portuguese, /k/ in Italian, French and English, /x/ in German, Czech, Polish, Slovak, one native word of English and a few loanwords into English, and /ç/ in German. For most English speakers, the name for the letter is pronounced /eɪtʃ/ and spelled "aitch" or occasionally "eitch". The pronunciation /heɪtʃ/ and the associated spelling "haitch" is often considered to be h-adding and is considered nonstandard in England. It is, however, a feature of Hiberno-English, as well as of scattered varieties of Edinburgh, England, and Welsh English, and of Australian English. The perceived name of the letter affects the choice of indefinite article before initialisms beginning with H: for example "an H-bomb" or "a H-bomb". The pronunciation /heɪtʃ/ may be a hypercorrection formed by analogy with the names of the other letters of the alphabet, most of which include the sound they represent. The "haitch" pronunciation of "h" has spread in England, being used by approximately 24% of English people born since 1982, and polls continue to show this pronunciation becoming more common among younger native speakers. Despite this increasing number, the pronunciation without the /h/ sound is still considered to be standard in England, although the pronunciation with /h/ is also attested as a legitimate variant. Authorities disagree about the history of the letter's name. The "Oxford English Dictionary" derives it from an original Latin name that changed in Vulgar Latin, passed into English via Old French "ache", and by Middle English had acquired its modern pronunciation. "The American Heritage Dictionary of the English Language" derives it from French "hache", from Latin "haca" or "hic". Anatoly Liberman suggests a conflation of two obsolete orderings of the alphabet, one with "H" immediately followed by "K" and the other without any "K": reciting the former's "..., H, K, L..." as though it belonged to the latter's "..., H, L..." would imply a pronunciation for "H". In English, the letter occurs as a single-letter grapheme (being either silent or representing the voiceless glottal fricative /h/) and in various digraphs, such as 'ch', 'gh', 'ph', 'sh', 'th' and 'wh', each of which can stand for several sounds or be silent. 
The letter is silent in a syllable rime, as in "ah", "ohm", "dahlia", "cheetah", "pooh-poohed", as well as in certain other words (mostly of French origin) such as "hour", "honest", "herb" (in American but not British English) and "vehicle" (in certain varieties of English). Initial /h/ is often not pronounced in the weak form of some function words including "had", "has", "have", "he", "her", "him", "his", and in some varieties of English (including most regional dialects of England and Wales) it is often omitted in all words (see "h"-dropping). It was formerly common for "an" rather than "a" to be used as the indefinite article before a word beginning with /h/ in an unstressed syllable, as in "an historian", but use of "a" is now more usual. In English, the pronunciation of the letter as /h/ can be analyzed as a voiceless vowel: when the phoneme /h/ precedes a vowel, /h/ may be realized as a voiceless version of the subsequent vowel. For example, the word "hit", /hɪt/, is realized as [ɪ̥ɪt]. H is the eighth most frequently used letter in the English language (after S, N, I, O, A, T, and E), with a frequency of about 4.2% in words. In the German language, the name of the letter is pronounced "ha". Following a vowel, it often silently indicates that the vowel is long: in the word for 'heighten', the second "h" is mute for most speakers outside of Switzerland. In 1901, a spelling reform eliminated the silent "h" in nearly all instances of "th" in native German words such as "thun" ('to do') or "Thür" ('door'). It has been left unchanged in words derived from Greek, such as "Theater" ('theater') and "Thron" ('throne'), which continue to be spelled with "th" even after the last German spelling reform. In Spanish and Portuguese, the letter ("hache" in Spanish, "agá" in Portuguese) is silent, with no pronunciation of its own, as in "hijo" ('son') and "húngaro" ('Hungarian'). The spelling reflects an earlier pronunciation of the sound /h/. It is sometimes pronounced with the value [h] in some regions of Andalusia, Extremadura, Canarias, Cantabria and the Americas at the beginning of some words. The letter also appears in the digraph "ch", which represents /tʃ/ in Spanish and northern Portugal, and /ʃ/ in oral traditions that merged both sounds (the latter originally represented by "x" instead), e.g. in most of the Portuguese language and some Spanish-speaking places, prominently Chile, as well as in "nh" and "lh" in Portuguese, whose spelling is inherited from Occitan. In French, the name of the letter is written as "ache" and pronounced /aʃ/. The French orthography classifies words that begin with this letter in two ways, one of which can affect the pronunciation, even though it is a silent letter either way. The "H muet", or "mute" "h", is considered as though the letter were not there at all, so for example the singular definite article "le" or "la", which is elided to "l'" before a vowel, elides before an "H muet" followed by a vowel. For example, "le + hébergement" becomes "l'hébergement" ('the accommodation'). The other kind of "h" is called "h aspiré" ("aspirated h", though it is not normally aspirated phonetically), and does not allow elision or liaison. For example, in "le homard" ('the lobster') the article "le" remains unelided, and may be separated from the noun with a bit of a glottal stop. 
Most words that begin with an "H muet" come from Latin ("honneur", "homme") or from Greek through Latin ("hécatombe"), whereas most words beginning with an "H aspiré" come from Germanic ("harpe", "hareng") or non-Indo-European languages ("harem", "hamac", "haricot"); in some cases, an orthographic "h" was added to disambiguate the /v/ and semivowel pronunciations before the introduction of the distinction between the letters "u" and "v": "huit" (from "uit", ultimately from Latin "octo"), "huître" (from "uistre", ultimately from Greek through Latin "ostrea"). In Italian, "h" has no phonological value. Its most important uses are in the digraphs 'ch' /k/ and 'gh' /ɡ/, as well as to differentiate the spellings of certain short words that are homophones, for example some present tense forms of the verb "avere" ('to have') (such as "hanno", 'they have', vs. "anno", 'year'), and in short interjections ("oh", "ehi"). Some languages, including Czech, Slovak, Hungarian, and Finnish, use "h" as a breathy voiced glottal fricative /ɦ/, often as an allophone of otherwise voiceless /h/ in a voiced environment. In Hungarian, the letter has five independent pronunciations, perhaps more than in any other language, with an additional three uses as a productive and non-productive member of a digraph. H may represent /h/ as in the name of the Székely town Hargita; intervocalically it represents /ɦ/ as in "tehéz"; it represents /x/ in the word "doh"; it represents /ç/ in "ihlet"; and it is silent in "Cseh". As part of a digraph, it represents, in archaic spelling, /t͡ʃ/ with the letter C, as in the name "Széchényi"; it represents, again with the letter C, /x/ in "pech" (which is pronounced [pɛx]); in certain environments it breaks the palatalization of a consonant, as in the name "Horthy", which is pronounced [hɔrti] (without the intervening H, the name "Horty" would be pronounced [hɔrc]); and finally, it acts as a silent component of a digraph, as in the name "Vargha", pronounced [vɒrgɒ]. In Ukrainian and Belarusian, when written in the Latin alphabet, "h" is also commonly used for /ɦ/, which is otherwise written with the Cyrillic letter "г". In Irish, "h" is not considered an independent letter, except for a very few non-native words; however, "h" placed after a consonant is known as a "séimhiú" and indicates lenition of that consonant. "H" began to replace the original form of a séimhiú, a dot placed above the consonant, after the introduction of typewriters. In most dialects of Polish, both "h" and the digraph "ch" always represent /x/. In Basque, during the 20th century it was not used in the orthography of the Basque dialects in Spain but it marked an aspiration in the North-Eastern dialects. During the standardization of Basque in the 1970s, the compromise was reached that "h" would be accepted if it were the first consonant in a syllable. Hence, "herri" ("people") and "etorri" ("to come") were accepted instead of "erri" (Biscayan) and "ethorri" (Souletin). Speakers could pronounce the "h" or not. For the dialects lacking the aspiration, this meant a complication added to the standardized spelling. As a phonetic symbol in the International Phonetic Alphabet (IPA), it is used mainly for the so-called aspirations (fricatives or trills), and variations of the plain letter are used to represent two sounds: the lowercase form "h" represents the voiceless glottal fricative, and the small capital form "ʜ" represents the voiceless epiglottal fricative (or trill). With a bar, the minuscule "ħ" is used for a voiceless pharyngeal fricative. 
Specific to the IPA, a hooked "ɦ" is used for a voiced glottal fricative, and a superscript "ʰ" is used to represent aspiration. Both cases of the letter are encoded in Latin-1 and all encodings based on ASCII, including the DOS, Windows, ISO-8859 and Macintosh families of encodings.
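As a quick illustration of the encoding note above, the following Python snippet (a minimal sketch, not part of the original article) confirms that both cases of the letter occupy the same code points in ASCII, Latin-1, and Unicode:

```python
# Both cases of the letter H occupy code points U+0048 and U+0068,
# which are identical across ASCII, Latin-1, and Unicode.
for ch in "Hh":
    print(ch, hex(ord(ch)), ch.encode("ascii"), ch.encode("latin-1"))
# Output:
# H 0x48 b'H' b'H'
# h 0x68 b'h' b'h'
```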
https://en.wikipedia.org/wiki?curid=13478
Horseshoe A horseshoe is a fabricated product, normally made of metal, although sometimes made partially or wholly of modern synthetic materials, designed to protect a horse's hoof from wear. Shoes are attached on the palmar surface (ground side) of the hooves, usually nailed through the insensitive hoof wall that is anatomically akin to the human toenail, although much larger and thicker. However, there are also cases where shoes are glued. The fitting of horseshoes is a professional occupation, conducted by a farrier, who specializes in the preparation of feet, assessing potential lameness issues, and fitting appropriate shoes, including remedial features where required. In some countries, such as the U.K., horseshoeing is legally restricted to people with specific qualifications and experience. In others, such as the United States, where professional licensing is not legally required, professional organizations provide certification programs that publicly identify qualified individuals. Horseshoes are available in a wide variety of materials and styles, developed for different types of horse and for the work they do. The most common materials are steel and aluminium, but specialized shoes may include use of rubber, plastic, magnesium, titanium, or copper. Steel tends to be preferred in sports in which a strong, long-wearing shoe is needed, such as polo, eventing, show jumping, and western riding events. Aluminium shoes are lighter, making them common in horse racing, where a lighter shoe is desired; they also facilitate certain types of movement, and so are favored in the discipline of dressage. Some horseshoes have "caulkins", "caulks", or "calks": protrusions at the toe or heels of the shoe, or both, to provide additional traction. When kept as a talisman, a horseshoe is said to bring good luck. A stylized variation of the horseshoe is used for a popular throwing game, horseshoes. Since the early history of domestication of the horse, working animals were found to be exposed to many conditions that created breakage or excessive hoof wear. Ancient people recognized the need for the walls (and sometimes the sole) of domestic horses' hooves to have additional protection over and above any natural hardness. An early form of hoof protection was seen in ancient Asia, where horses' hooves were wrapped in rawhide, leather or other materials for both therapeutic purposes and protection from wear. From archaeological finds in Great Britain, the Romans appear to have attempted to protect their horses' feet with a strap-on, solid-bottomed "hipposandal" that bears a slight resemblance to the modern hoof boot. Historians differ on the origin of the horseshoe. Because iron was a valuable commodity, and any worn-out items were generally reforged and reused, it is difficult to locate clear archaeological evidence. Although some credit the Druids, there is no hard evidence to support this claim. In 1897, four bronze horseshoes with what are apparently nail holes were found in an Etruscan tomb dated around 400 BC. The assertion by some historians that the Romans invented the "mule shoes" sometime after 100 BC is supported by a reference by Catullus, who died in 54 BC. However, these references to the use of horseshoes and muleshoes in Rome may have been to the "hipposandal"—a leather boot reinforced by an iron plate—rather than to nailed horseshoes.
Existing references to the nailed shoe are relatively late, first known to have appeared around AD 900, but there may have been earlier uses given that some have been found in layers of dirt. There are no extant references to nailed horseshoes prior to the reign of Emperor Leo VI, and by 973 occasional references to them can be found. The earliest clear written record of iron horseshoes is a reference to "crescent figured irons and their nails" in AD 910. There is very little evidence of any sort that suggests the existence of nailed-on shoes prior to AD 500 or 600, though a horseshoe complete with nails, dated to the 5th century AD, was found in the tomb of the Frankish King Childeric I at Tournai, Belgium. Around 1000 AD, cast bronze horseshoes with nail holes became common in Europe. A design with a scalloped outer rim and six nail holes was common. According to Gordon Ward, the scalloped edges were created by double punching the nail holes, causing the edges to bulge. The 13th and 14th centuries brought the widespread manufacturing of iron horseshoes. By the time of the Crusades (1096–1270), horseshoes were widespread and frequently mentioned in various written sources. In that period, due to the value of iron, horseshoes were even accepted in lieu of coin to pay taxes. By the 13th century, shoes were forged in large quantities and could be bought ready-made. Hot shoeing, the process of shaping a heated horseshoe immediately before placing it on the horse, became common in the 16th century. From the need for horseshoes, the craft of blacksmithing became "one of the great staple crafts of medieval and modern times and contributed to the development of metallurgy." A treatise titled "No Foot, No Horse" was published in England in 1751. In 1835, the first U.S. patent for a horseshoe manufacturing machine capable of making up to 60 horseshoes per hour was issued to Henry Burden. In mid-19th-century Canada, marsh horseshoes kept horses from sinking into the soft intertidal mud during dike-building. In a common design, a metal horseshoe holds a flat wooden shoe in place. Many changes brought about by the domestication of the horse have led to a need for shoes for numerous reasons, mostly linked to management that results in horses' hooves hardening less and being more vulnerable to injury. In the wild, a horse may travel up to per day to obtain adequate forage. While horses in the wild cover large areas of terrain, they usually do so at relatively slow speeds, unless being chased by a predator. They also tend to live in arid steppe climates. The consequence of slow but nonstop travel in a dry climate is that horses' feet are naturally worn to a small, smooth, even and hard state. The continual stimulation of the sole of the foot keeps it thick and hard. However, in domestication, the ways horses are used differ from what they would encounter in their natural environment. Domesticated horses are brought to colder and wetter areas than their ancestral habitat. These softer and heavier soils soften the hooves and make them prone to splitting, making hoof protection necessary. Consequently, it was in northern Europe that the nailed horseshoe arose in its modern form. Domesticated horses are also subject to inconsistent movement between stabling and work; they must carry or pull additional weight, and in modern times, they are often kept and worked on very soft footing, such as irrigated land, arena footing, or stall bedding. In some cases, management is also inadequate.
The hooves of horses that are kept in stalls or small turnouts, even when cleaned adequately, are exposed to more moisture than would be encountered in the wild, as well as to ammonia from urine. The hoof capsule is mostly made from keratin, a protein, and is weakened by this exposure, becoming even more fragile and soft. Shoes do not prevent or reduce damage from moisture and ammonia exposure; rather, they protect already weakened hooves. Further, without the natural conditioning factors present in the wild, the feet of horses grow overly large and long unless trimmed regularly, and protection from rocks, pebbles, and hard, uneven surfaces is lacking. A balanced diet with proper nutrition is also a factor. Without these precautions, cracks in overgrown and overly brittle hoof walls are a danger, as is bruising of the soft tissues within the foot because of inadequately thick and hard sole material. Horseshoes have long been viewed as an aid to assist horses' hooves when subjected to the various unnatural conditions brought about by domestication, whether due to work conditions or stabling and management. Many generations of domestic horses bred for size, color, speed, and other traits with little regard for hoof quality and soundness make some breeds more dependent on horseshoes than feral horses such as mustangs, which develop strong hooves as a matter of natural selection. Nonetheless, domestic horses do not always require shoes. When possible, a "barefoot" hoof, at least for part of every year, is a healthy option for most horses. However, horseshoes have their place and can help prevent excess or abnormal hoof wear and injury to the foot. Many horses go without shoes year-round, some using temporary protection such as hoof boots for short-term use. Shoeing, when performed correctly, causes no pain to the animal. Farriers trim the insensitive part of the hoof, which is the same area into which they drive the nails. This is analogous to a manicure on a human fingernail, only on a much larger scale. Before beginning to shoe, the farrier removes the old shoe using pincers (shoe pullers) and trims the hoof wall to the desired length with nippers, a sharp pliers-like tool, and the sole and frog of the hoof with a hoof knife. Shoes do not allow the hoof to wear down as it naturally would in the wild, and the hoof can then become too long. The coffin bone inside the hoof should line up straight with both bones in the pastern. If the excess hoof is not trimmed, the bones will become misaligned, which would place stress on the legs of the animal. Shoes are then measured to the foot and bent to the correct shape using a hammer, anvil, and forge; other modifications, such as taps for shoe studs, are added. Farriers may either cold shoe, in which they bend the metal shoe without heating it, or hot shoe, in which they place the metal in a forge before bending it. Hot shoeing can be more time-consuming, and requires the farrier to have access to a forge; however, it usually provides a better fit, as the mark made on the hoof from the hot shoe can show how evenly it lies. It also allows the farrier to make more modifications to the shoe, such as drawing toe- and quarter-clips. The farrier must take care not to hold the hot shoe against the hoof too long, as the heat can damage the hoof. Hot shoes are placed in water to cool them off. The farrier then nails the shoes on, by driving the nails into the hoof wall at the white line of the hoof.
The nails are shaped in such a way that they bend outward as they are driven in, avoiding the sensitive inner part of the foot, so they emerge on the sides of the hoof. When the nail has been completely driven, the farrier cuts off the sharp points and uses a clincher (a form of tongs made especially for this purpose) or a clinching block with hammer to bend the rest of the nail so it is almost flush with the hoof wall. This prevents the nail from getting caught on anything, and also helps to hold the nail, and therefore the shoe, in place. The farrier then uses a rasp (large file) to smooth the edge where it meets the shoe and eliminate any sharp edges left from cutting off the nails. Mistakes are sometimes made even by a skilled farrier, especially if the horse does not stand still. This may sometimes result in a nail coming too close to the sensitive part of the hoof (putting pressure on it), or a nail that is driven slightly into the sensitive hoof, called quicking or nail pricking. This occurs when a nail penetrates the wall and hits the sensitive internal structures of the foot. Quicking results in bleeding and pain, and the horse may show signs of lameness or may become lame in the following days. Whenever it happens, the farrier must remove the offending nail. Usually a horse that is quicked will react immediately, though some cases where the nail is close to sensitive structures may not cause immediate problems. Such mistakes are made occasionally by anyone who shoes horses, and in most cases they are not an indication that the farrier is unskilled. Quicking happens most commonly when horses move around while being shod, but also may occur if the hoof wall is particularly thin (common in Thoroughbreds), or if the hoof wall is brittle or damaged. It may also occur with an inexperienced or unskilled horseshoer who misdrives a nail, uses a shoe that is too small, or has not fitted the shoe to the shape of the horse's hoof. Occasionally, manufacturing defects in nails or shoes may also cause a misdriven nail that quicks a horse. However, the term "farrier" implies a professional horseshoer with skill, education, and training. Some people who shoe horses are untrained or unskilled, and likely to do more harm than good for the horse. People who do not understand the horse's foot will not trim the hoof correctly. This can cause serious problems for the animal, resulting in chronic lameness and damage to the hoof wall. Poor trimming will usually place the hoof at an incorrect angle, leave the foot laterally unbalanced, and may cut too much off certain areas of the hoof wall, or trim too much of the frog or sole. Some horseshoers will rasp the hoof down to fit an improperly shaped or too-small size of shoe, which is damaging to the movement of the horse and can damage the hoof itself if trimmed or rasped too short. A poor horseshoer can also make mistakes in the shoeing process itself, not only quicking a horse, but also putting the shoe on crooked, using the wrong type of shoe for the job at hand, shaping the shoe improperly, or setting it on too far forward or back. Horseshoes have long been considered lucky. They were originally made of iron, a material that was believed to ward off evil spirits, and traditionally were held in place with seven nails, seven being the luckiest number. The superstition acquired a further Christian twist due to a legend surrounding the 10th-century saint Dunstan, who worked as a blacksmith before becoming Archbishop of Canterbury.
The legend recounts that, one day, the Devil walked into Dunstan's shop and asked him to shoe his horse. Dunstan pretended not to recognize him, and agreed to the request; but rather than nailing the shoe to the horse's hoof, he nailed it to the Devil's own foot, causing him great pain. Dunstan eventually agreed to remove the shoe, but only after extracting a promise that the Devil would never enter a household with a horseshoe nailed to the door. Opinion is divided as to which way up the horseshoe ought to be nailed. Some say the ends should point up, so that the horseshoe catches the luck, and that the ends pointing down allow the good luck to be lost; others say they should point down, so that the luck is poured upon those entering the home. Superstitious sailors believe that nailing a horseshoe to the mast will help their vessel avoid storms. In heraldry, horseshoes most often occur as canting charges, such as in the arms of families with names like Farrier, Marshall and Smith. A horseshoe (together with two hammers) also appears in the arms of Hammersmith and Fulham, a borough in London. The flag of Rutland, England's smallest historic county, consists of a golden horseshoe laid over a field scattered with acorns. This references an ancient tradition in which every noble visiting Oakham, Rutland's county town, presents a horseshoe to the Lord of the Manor, which is then nailed to the wall of Oakham Castle. Over the centuries, the Castle has amassed a vast collection of horseshoes, the oldest of which date from the 15th century. The sport of horseshoes involves a horseshoe being thrown as close as possible to a rod in order to score points. As far as it is known, the sport is as old as horseshoes themselves. While traditional horseshoes can still be used, most organized versions of the game use specialized sport horseshoes, which do not fit on horses' hooves.
https://en.wikipedia.org/wiki?curid=13480
Hemoglobin Hemoglobin (American English) or haemoglobin (British English), abbreviated Hb or Hgb, is the iron-containing oxygen-transport metalloprotein in the red blood cells (erythrocytes) of almost all vertebrates (the exception being the fish family Channichthyidae) as well as the tissues of some invertebrates. The name combines Greek αἷμα (haîma, 'blood') with Latin globus ('ball, sphere') and the suffix -in. Hemoglobin in blood carries oxygen from the lungs or gills to the rest of the body (i.e. the tissues). There it releases the oxygen to permit aerobic respiration to provide energy to power the functions of the organism in the process called metabolism. A healthy individual has 12 to 20 grams of hemoglobin in every 100 ml of blood. In mammals, the protein makes up about 96% of the red blood cells' dry content (by weight), and around 35% of the total content (including water). Hemoglobin has an oxygen-binding capacity of 1.34 mL O2 per gram, which increases the total blood oxygen capacity seventy-fold compared to dissolved oxygen in blood. The mammalian hemoglobin molecule can bind (carry) up to four oxygen molecules. Hemoglobin is involved in the transport of other gases: it carries some of the body's respiratory carbon dioxide (about 20–25% of the total) as carbaminohemoglobin, in which CO2 is bound to the globin protein. The molecule also carries the important regulatory molecule nitric oxide bound to a globin protein thiol group, releasing it at the same time as oxygen. Hemoglobin is also found outside red blood cells and their progenitor lines. Other cells that contain hemoglobin include the A9 dopaminergic neurons in the substantia nigra, macrophages, alveolar cells, lungs, retinal pigment epithelium, hepatocytes, mesangial cells in the kidney, endometrial cells, cervical cells and vaginal epithelial cells. In these tissues, hemoglobin has a non-oxygen-carrying function as an antioxidant and a regulator of iron metabolism. Excessive glucose in one's blood can attach to hemoglobin and raise the level of hemoglobin A1c. Hemoglobin and hemoglobin-like molecules are also found in many invertebrates, fungi, and plants. In these organisms, hemoglobins may carry oxygen, or they may act to transport and regulate other small molecules and ions such as carbon dioxide, nitric oxide, hydrogen sulfide and sulfide. A variant of the molecule, called leghemoglobin, is used to scavenge oxygen away from anaerobic systems, such as the nitrogen-fixing nodules of leguminous plants, lest the oxygen poison (deactivate) the system. Hemoglobinemia is a medical condition in which there is an excess of hemoglobin in the blood plasma. This is an effect of intravascular hemolysis, a form of anemia in which hemoglobin separates from red blood cells. In 1825, J. F. Engelhart discovered that the ratio of iron to protein is identical in the hemoglobins of several species. From the known atomic mass of iron he calculated the molecular mass of hemoglobin as "n" × 16000 ("n" = number of iron atoms per hemoglobin, now known to be 4), the first determination of a protein's molecular mass. This "hasty conclusion" drew a lot of ridicule at the time from scientists who could not believe that any molecule could be that big. Gilbert Smithson Adair confirmed Engelhart's results in 1925 by measuring the osmotic pressure of hemoglobin solutions. The oxygen-carrying property of hemoglobin was discovered by Hünefeld in 1840.
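Engelhart's iron-ratio argument and the oxygen-capacity figure above lend themselves to a quick back-of-the-envelope check. The sketch below is illustrative only; the iron mass fraction (~0.35%) and the hemoglobin concentration (15 g/dL) are assumed round numbers, not values taken from this article:

```python
# Engelhart-style estimate: minimal molecular mass per iron atom.
FE_ATOMIC_MASS = 55.85        # g/mol
iron_mass_fraction = 0.0035   # ~0.35% iron by mass (assumed)
print(FE_ATOMIC_MASS / iron_mass_fraction)   # ~15957 g/mol, i.e. n x 16000

# Oxygen capacity: 1.34 mL of O2 per gram of hemoglobin.
hb_concentration = 15.0                      # g/dL (assumed typical value)
print(1.34 * hb_concentration)               # ~20 mL O2 per 100 mL of blood,
# versus roughly 0.3 mL/dL dissolved in plasma -- the ~70-fold factor above.
```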
In 1851, German physiologist Otto Funke published a series of articles in which he described growing hemoglobin crystals by successively diluting red blood cells with a solvent such as pure water, alcohol or ether, followed by slow evaporation of the solvent from the resulting protein solution. Hemoglobin's reversible oxygenation was described a few years later by Felix Hoppe-Seyler. In 1959, Max Perutz determined the molecular structure of hemoglobin by X-ray crystallography. This work resulted in his sharing with John Kendrew the 1962 Nobel Prize in Chemistry for their studies of the structures of globular proteins. The role of hemoglobin in the blood was elucidated by French physiologist Claude Bernard. The name "hemoglobin" is derived from the words "heme" and "globin", reflecting the fact that each subunit of hemoglobin is a globular protein with an embedded heme group. Each heme group contains one iron atom that can bind one oxygen molecule through ion-induced dipole forces. The most common type of hemoglobin in mammals contains four such subunits. Hemoglobin consists of protein subunits (the "globin" molecules), and these proteins, in turn, are folded chains of a large number of different amino acids called polypeptides. The amino acid sequence of any polypeptide created by a cell is in turn determined by the stretches of DNA called genes. In all proteins, it is the amino acid sequence that determines the protein's chemical properties and function. There is more than one hemoglobin gene: in humans, hemoglobin A (the main form of hemoglobin present) is coded for by the genes "HBA1", "HBA2", and "HBB". The amino acid sequences of the globin proteins in hemoglobins usually differ between species, and these differences grow with evolutionary distance between species. For example, the most common hemoglobin sequences in humans, bonobos and chimpanzees are completely identical, without even a single amino acid difference in either the alpha or the beta globin protein chains, whereas human and gorilla hemoglobin differ in one amino acid in each of the alpha and beta chains. These differences grow larger between less closely related species. Even within a species, different variants of hemoglobin always exist, although one sequence is usually a "most common" one in each species. Mutations in the genes for the hemoglobin protein in a species result in hemoglobin variants. Many of these mutant forms of hemoglobin cause no disease. Some of these mutant forms of hemoglobin, however, cause a group of hereditary diseases termed the "hemoglobinopathies". The best known hemoglobinopathy is sickle-cell disease, which was the first human disease whose mechanism was understood at the molecular level. A (mostly) separate set of diseases called thalassemias involves underproduction of normal and sometimes abnormal hemoglobins, through problems and mutations in globin gene regulation. All these diseases produce anemia. Variations in hemoglobin amino acid sequences, as with other proteins, may be adaptive. For example, hemoglobin has been found to adapt in different ways to high altitudes. Organisms living at high elevations experience lower partial pressures of oxygen compared to those at sea level. This presents a challenge to the organisms that inhabit such environments because hemoglobin, which normally binds oxygen at high partial pressures of oxygen, must be able to bind oxygen when it is present at a lower pressure. Different organisms have adapted to such a challenge.
For example, recent studies have suggested genetic variants in deer mice that help explain how deer mice that live in the mountains are able to survive in the thin air that accompanies high altitudes. A researcher from the University of Nebraska-Lincoln found mutations in four different genes that can account for differences between deer mice that live in lowland prairies versus the mountains. After examining wild mice captured from both highlands and lowlands, it was found that the genes of the two breeds are "virtually identical—except for those that govern the oxygen-carrying capacity of their hemoglobin". "The genetic difference enables highland mice to make more efficient use of their oxygen", since less is available at higher altitudes, such as those in the mountains. Mammoth hemoglobin featured mutations that allowed for oxygen delivery at lower temperatures, thus enabling mammoths to migrate to higher latitudes during the Pleistocene. This was also found in hummingbirds that inhabit the Andes. Hummingbirds already expend a lot of energy and thus have high oxygen demands, and yet Andean hummingbirds have been found to thrive in high altitudes. Non-synonymous mutations in the hemoglobin gene of multiple species living at high elevations ("Oreotrochilus", "A. castelnaudii", "C. violifer", "P. gigas", and "A. viridicauda") have caused the protein to have less of an affinity for inositol hexaphosphate (IHP), a molecule found in birds that has a similar role to 2,3-BPG in humans; this results in the ability to bind oxygen at lower partial pressures. Birds' unique circulatory lungs also promote efficient use of oxygen at low partial pressures of O2. These two adaptations reinforce each other and account for birds' remarkable high-altitude performance. Hemoglobin adaptation extends to humans as well. There is a higher offspring survival rate among Tibetan women with high oxygen saturation genotypes residing at 4,000 m. Natural selection seems to be the main force working on this gene, because the mortality rate of offspring is significantly lower for women with higher hemoglobin-oxygen affinity when compared to the mortality rate of offspring from women with low hemoglobin-oxygen affinity. While the exact genotype and mechanism by which this occurs is not yet clear, selection is acting on these women's ability to bind oxygen in low partial pressures, which overall allows them to better sustain crucial metabolic processes. Hemoglobin (Hb) is synthesized in a complex series of steps. The heme part is synthesized in a series of steps in the mitochondria and the cytosol of immature red blood cells, while the globin protein parts are synthesized by ribosomes in the cytosol. Production of Hb continues in the cell throughout its early development from the proerythroblast to the reticulocyte in the bone marrow. At this point, the nucleus is lost in mammalian red blood cells, but not in birds and many other species. Even after the loss of the nucleus in mammals, residual ribosomal RNA allows further synthesis of Hb until the reticulocyte loses its RNA soon after entering the vasculature (this hemoglobin-synthetic RNA in fact gives the reticulocyte its reticulated appearance and name). Hemoglobin has a quaternary structure characteristic of many multi-subunit globular proteins. Most of the amino acids in hemoglobin form alpha helices, and these helices are connected by short non-helical segments.
Hydrogen bonds stabilize the helical sections inside this protein, causing attractions within the molecule, which then causes each polypeptide chain to fold into a specific shape. Hemoglobin's quaternary structure comes from its four subunits in roughly a tetrahedral arrangement. In most vertebrates, the hemoglobin molecule is an assembly of four globular protein subunits. Each subunit is composed of a protein chain tightly associated with a non-protein prosthetic heme group. Each protein chain arranges into a set of alpha-helix structural segments connected together in a globin fold arrangement, so named because this arrangement is the same folding motif used in other heme/globin proteins such as myoglobin. This folding pattern contains a pocket that strongly binds the heme group. A heme group consists of an iron (Fe) ion held in a heterocyclic ring, known as a porphyrin. This porphyrin ring consists of four pyrrole molecules cyclically linked together (by methine bridges) with the iron ion bound in the center. The iron ion, which is the site of oxygen binding, coordinates with the four nitrogen atoms in the center of the ring, which all lie in one plane. The iron is bound strongly (covalently) to the globular protein via the N atoms of the imidazole ring of the F8 histidine residue (also known as the proximal histidine) below the porphyrin ring. A sixth position can reversibly bind oxygen by a coordinate covalent bond, completing the octahedral group of six ligands. This reversible bonding with oxygen is why hemoglobin is so useful for transporting oxygen around the body. Oxygen binds in an "end-on bent" geometry where one oxygen atom binds to Fe and the other protrudes at an angle. When oxygen is not bound, a very weakly bonded water molecule fills the site, forming a distorted octahedron. Even though carbon dioxide is carried by hemoglobin, it does not compete with oxygen for the iron-binding positions but is bound to the amine groups of the protein chains attached to the heme groups. The iron ion may be either in the ferrous Fe2+ or in the ferric Fe3+ state, but ferrihemoglobin (methemoglobin) (Fe3+) cannot bind oxygen. In binding, oxygen temporarily and reversibly oxidizes (Fe2+) to (Fe3+) while oxygen temporarily turns into the superoxide ion; thus, iron must exist in the +2 oxidation state to bind oxygen. If the superoxide ion associated to Fe3+ is protonated, the hemoglobin iron will remain oxidized and incapable of binding oxygen. In such cases, the enzyme methemoglobin reductase will be able to eventually reactivate methemoglobin by reducing the iron center. In adult humans, the most common hemoglobin type is a tetramer (which contains four subunit proteins) called "hemoglobin A", consisting of two α and two β subunits non-covalently bound, each made of 141 and 146 amino acid residues, respectively. This is denoted as α2β2. The subunits are structurally similar and about the same size. Each subunit has a molecular weight of about 16,000 daltons, for a total molecular weight of the tetramer of about 64,000 daltons (64,458 g/mol). Thus, 1 g/dL = 0.1551 mmol/L. Hemoglobin A is the most intensively studied of the hemoglobin molecules. In human infants, the hemoglobin molecule is made up of 2 α chains and 2 γ chains. The gamma chains are gradually replaced by β chains as the infant grows. The four polypeptide chains are bound to each other by salt bridges, hydrogen bonds, and the hydrophobic effect.
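The conversion factor quoted above (1 g/dL = 0.1551 mmol/L) follows directly from the tetramer mass; a minimal arithmetic sketch, also showing the per-globin-unit variant that appears later in the article:

```python
# Converting a hemoglobin concentration from g/dL to mmol/L.
def g_dl_to_mmol_l(conc_g_dl, molar_mass_g_mol):
    grams_per_litre = conc_g_dl * 10.0            # 1 dL = 0.1 L
    return grams_per_litre / molar_mass_g_mol * 1000.0

print(g_dl_to_mmol_l(1.0, 64458.0))   # ~0.1551 mmol/L (alpha2-beta2 tetramer)
print(g_dl_to_mmol_l(1.0, 16000.0))   # ~0.625 mmol/L per globin unit
# (close to the 0.6206 factor quoted later in the article, which rests on a
#  slightly different per-globin mass)
```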
In general, hemoglobin can be saturated with oxygen molecules (oxyhemoglobin), or desaturated with oxygen molecules (deoxyhemoglobin). "Oxyhemoglobin" is formed during physiological respiration when oxygen binds to the heme component of the protein hemoglobin in red blood cells. This process occurs in the pulmonary capillaries adjacent to the alveoli of the lungs. The oxygen then travels through the blood stream to be dropped off at cells, where it is utilized as a terminal electron acceptor in the production of ATP by the process of oxidative phosphorylation. It does not, however, help to counteract a decrease in blood pH. Ventilation, or breathing, may reverse this condition by removal of carbon dioxide, thus causing a shift up in pH. Hemoglobin exists in two forms, a "taut (tense) form" (T) and a "relaxed form" (R). Various factors such as low pH, high CO2 and high 2,3-BPG at the level of the tissues favor the taut form, which has low oxygen affinity and releases oxygen in the tissues. Conversely, a high pH, low CO2, or low 2,3-BPG favors the relaxed form, which can better bind oxygen. The partial pressure of the system also affects O2 affinity: at high partial pressures of oxygen (such as those present in the alveoli), the relaxed (high affinity, R) state is favoured, while at low partial pressures (such as those present in respiring tissues), the tense (low affinity, T) state is favoured. Additionally, the binding of oxygen to the iron(II) heme pulls the iron into the plane of the porphyrin ring, causing a slight conformational shift. The shift encourages oxygen to bind to the three remaining heme units within hemoglobin (thus, oxygen binding is cooperative). Deoxygenated hemoglobin is the form of hemoglobin without the bound oxygen. The absorption spectra of oxyhemoglobin and deoxyhemoglobin differ: oxyhemoglobin has significantly lower absorption at the 660 nm wavelength than deoxyhemoglobin, while at 940 nm its absorption is slightly higher. This difference is used for the measurement of the amount of oxygen in a patient's blood by an instrument called a pulse oximeter. This difference also accounts for the presentation of cyanosis, the blue to purplish color that tissues develop during hypoxia. Deoxygenated hemoglobin is paramagnetic; it is weakly attracted to magnetic fields. In contrast, oxygenated hemoglobin exhibits diamagnetism, a weak repulsion from a magnetic field. Scientists agree that the event that separated myoglobin from hemoglobin occurred after lampreys diverged from jawed vertebrates. This separation of myoglobin and hemoglobin allowed for the different functions of the two molecules to arise and develop: myoglobin has more to do with oxygen storage while hemoglobin is tasked with oxygen transport. The α- and β-like globin genes encode the individual subunits of the protein. The predecessors of these genes arose through another duplication event, also after the gnathostome common ancestor derived from jawless fish, approximately 450–500 million years ago. The development of α and β genes created the potential for hemoglobin to be composed of multiple subunits, a physical composition central to hemoglobin's ability to transport oxygen. Having multiple subunits contributes to hemoglobin's ability to bind oxygen cooperatively as well as be regulated allosterically. Subsequently, the α gene also underwent a duplication event to form the "HBA1" and "HBA2" genes.
These further duplications and divergences have created a diverse range of α- and β-like globin genes that are regulated so that certain forms occur at different stages of development. Most ice fish of the family Channichthyidae have lost their hemoglobin genes as an adaptation to cold water. Assigning oxygenated hemoglobin's oxidation state is difficult because oxyhemoglobin (Hb-O2), by experimental measurement, is diamagnetic (no net unpaired electrons), yet the lowest-energy (ground-state) electron configurations in both oxygen and iron are paramagnetic (suggesting at least one unpaired electron in the complex). The lowest-energy form of oxygen, and the lowest-energy forms of the relevant oxidation states of iron, are all paramagnetic (have unpaired electrons), not diamagnetic. Thus, a non-intuitive distribution of electrons (e.g., a higher-energy state for at least one species) must exist in the combination of iron and oxygen, in order to explain the observed diamagnetism and lack of unpaired electrons. The two logical possibilities to produce diamagnetic (no net spin) Hb-O2 are (1) low-spin Fe2+ binding to singlet oxygen, and (2) low-spin Fe3+ binding to superoxide .O2−, with the two remaining unpaired electrons coupled antiferromagnetically. Another possible model, in which low-spin Fe4+ binds to peroxide, O22−, can be ruled out by itself, because the iron is paramagnetic (although the peroxide ion is diamagnetic). Here, the iron has been oxidized by two electrons, and the oxygen reduced by two electrons. Direct experimental data indicate that the nearest formal oxidation state of iron in Hb-O2 is the +3 state, with oxygen in the −1 state (as superoxide .O2−). The diamagnetism in this configuration arises from the single unpaired electron on superoxide aligning antiferromagnetically with the single unpaired electron on iron (in a low-spin d5 state), to give no net spin to the entire configuration, in accordance with diamagnetic oxyhemoglobin from experiment. That the second of the logical possibilities above was found correct by experiment is not surprising: singlet oxygen (possibility #1) is an unrealistically high-energy state. The third model, with Fe4+ and peroxide, leads to unfavorable separation of charge (and does not agree with the magnetic data), although it could make a minor contribution as a resonance form. Iron's shift to a higher oxidation state in Hb-O2 decreases the atom's size, and allows it into the plane of the porphyrin ring, pulling on the coordinated histidine residue and initiating the allosteric changes seen in the globulins. Early postulates by bio-inorganic chemists claimed that possibility #1 (above) was correct and that iron should exist in oxidation state II. This conclusion seemed likely, since the iron oxidation state III as methemoglobin, when "not" accompanied by superoxide .O2− to "hold" the oxidation electron, was known to render hemoglobin incapable of binding normal triplet O2 as it occurs in the air. It was thus assumed that iron remained as Fe(II) when oxygen gas was bound in the lungs. The iron chemistry in this previous classical model was elegant, but the required presence of the diamagnetic, high-energy, singlet oxygen molecule was never explained. It was classically argued that the binding of an oxygen molecule placed high-spin iron(II) in an octahedral field of strong-field ligands; this change in field would increase the crystal field splitting energy, causing iron's electrons to pair into the low-spin configuration, which would be diamagnetic in Fe(II). This forced low-spin pairing is indeed thought to happen in iron when oxygen binds, but is not enough to explain iron's change in size.
Extraction of an additional electron from iron by oxygen is required to explain both iron's smaller size and observed increased oxidation state, and oxygen's weaker bond. The assignment of a whole-number oxidation state is a formalism, as the covalent bonds are not required to have perfect bond orders involving whole electron transfer. Thus, all three models for paramagnetic Hb-O2 may contribute to some small degree (by resonance) to the actual electronic configuration of Hb-O2. However, the model of iron in Hb-O2 being Fe(III) is more correct than the classical idea that it remains Fe(II). When oxygen binds to the iron complex, it causes the iron atom to move back toward the center of the plane of the porphyrin ring. At the same time, the imidazole side-chain of the histidine residue interacting at the other pole of the iron is pulled toward the porphyrin ring. This interaction forces the plane of the ring sideways toward the outside of the tetramer, and also induces a strain in the protein helix containing the histidine as it moves nearer to the iron atom. This strain is transmitted to the remaining three monomers in the tetramer, where it induces a similar conformational change in the other heme sites such that binding of oxygen to these sites becomes easier. As oxygen binds to one monomer of hemoglobin, the tetramer's conformation shifts from the T (tense) state to the R (relaxed) state. This shift promotes the binding of oxygen to the remaining three monomers' heme groups, thus saturating the hemoglobin molecule with oxygen. In the tetrameric form of normal adult hemoglobin, the binding of oxygen is, thus, a cooperative process. The binding affinity of hemoglobin for oxygen is increased by the oxygen saturation of the molecule, with the first molecules of oxygen bound influencing the shape of the binding sites for the next ones, in a way favorable for binding. This positive cooperative binding is achieved through steric conformational changes of the hemoglobin protein complex as discussed above; i.e., when one subunit protein in hemoglobin becomes oxygenated, a conformational or structural change in the whole complex is initiated, causing the other subunits to gain an increased affinity for oxygen. As a consequence, the oxygen binding curve of hemoglobin is sigmoidal, or "S"-shaped, as opposed to the normal hyperbolic curve associated with noncooperative binding. The dynamic mechanism of the cooperativity in hemoglobin and its relation with low-frequency resonance has been discussed. Besides the oxygen ligand, which binds to hemoglobin in a cooperative manner, hemoglobin ligands also include competitive inhibitors such as carbon monoxide (CO) and allosteric ligands such as carbon dioxide (CO2) and nitric oxide (NO). The carbon dioxide is bound to amino groups of the globin proteins to form carbaminohemoglobin; this mechanism is thought to account for about 10% of carbon dioxide transport in mammals. Nitric oxide can also be transported by hemoglobin; it is bound to specific thiol groups in the globin protein to form an S-nitrosothiol, which dissociates into free nitric oxide and thiol again as the hemoglobin releases oxygen from its heme site. This nitric oxide transport to peripheral tissues is hypothesized to assist oxygen transport in tissues, by releasing vasodilatory nitric oxide to tissues in which oxygen levels are low.
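The cooperative (sigmoidal) binding described above is commonly summarized by the empirical Hill equation. The sketch below is not this article's own model; the P50 (~26 mmHg) and Hill coefficient (~2.8) are assumed textbook values for normal adult hemoglobin:

```python
# Hill-equation sketch of the sigmoidal oxygen saturation curve.
def saturation(po2, p50=26.0, n=2.8):
    """Fractional O2 saturation of hemoglobin at a given pO2 (mmHg)."""
    return po2**n / (p50**n + po2**n)

for po2 in (10, 26, 40, 100):        # tissue-level up to alveolar-level pO2
    print(po2, round(saturation(po2), 2))   # ~0.06, 0.5, 0.77, 0.98
# Setting n = 1 (no cooperativity) gives the hyperbolic curve mentioned above.
```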
The binding of oxygen is affected by molecules such as carbon monoxide (for example, from tobacco smoking, exhaust gas, and incomplete combustion in furnaces). CO competes with oxygen at the heme binding site. Hemoglobin's binding affinity for CO is 250 times greater than its affinity for oxygen, meaning that small amounts of CO dramatically reduce hemoglobin's ability to deliver oxygen to the target tissue. Since carbon monoxide is a colorless, odorless and tasteless gas, and poses a potentially fatal threat, carbon monoxide detectors have become commercially available to warn of dangerous levels in residences. When hemoglobin combines with CO, it forms a very bright red compound called carboxyhemoglobin, which may cause the skin of CO poisoning victims to appear pink in death, instead of white or blue. When inspired air contains CO levels as low as 0.02%, headache and nausea occur; if the CO concentration is increased to 0.1%, unconsciousness will follow. In heavy smokers, up to 20% of the oxygen-active sites can be blocked by CO. In similar fashion, hemoglobin also has competitive binding affinity for cyanide (CN−), sulfur monoxide (SO), and sulfide (S2−), including hydrogen sulfide (H2S). All of these bind to iron in heme without changing its oxidation state, but they nevertheless inhibit oxygen-binding, causing grave toxicity. The iron atom in the heme group must initially be in the ferrous (Fe2+) oxidation state to support oxygen and other gases' binding and transport (it temporarily switches to ferric during the time oxygen is bound, as explained above). Initial oxidation to the ferric (Fe3+) state without oxygen converts hemoglobin into "hemiglobin" or methemoglobin, which cannot bind oxygen. Hemoglobin in normal red blood cells is protected by a reduction system to keep this from happening. Nitric oxide is capable of converting a small fraction of hemoglobin to methemoglobin in red blood cells. The latter reaction is a remnant activity of the more ancient nitric oxide dioxygenase function of globins. Carbon "di"oxide occupies a different binding site on the hemoglobin. Carbon dioxide is more readily dissolved in deoxygenated blood, facilitating its removal from the body after the oxygen has been released to tissues undergoing metabolism. This increased affinity for carbon dioxide by the venous blood is known as the Haldane effect. Through the enzyme carbonic anhydrase, carbon dioxide reacts with water to give carbonic acid, which decomposes into bicarbonate and protons: CO2 + H2O → H2CO3 → HCO3− + H+. Hence, blood with high carbon dioxide levels is also lower in pH (more acidic). Hemoglobin can bind protons and carbon dioxide, which causes a conformational change in the protein and facilitates the release of oxygen. Protons bind at various places on the protein, while carbon dioxide binds at the α-amino group. Carbon dioxide binds to hemoglobin and forms carbaminohemoglobin. This decrease in hemoglobin's affinity for oxygen by the binding of carbon dioxide and acid is known as the Bohr effect. The Bohr effect favors the T state rather than the R state (it shifts the O2-saturation curve to the right). Conversely, when the carbon dioxide levels in the blood decrease (i.e., in the lung capillaries), carbon dioxide and protons are released from hemoglobin, increasing the oxygen affinity of the protein. A reduction in the total binding capacity of hemoglobin to oxygen (i.e. shifting the curve down, not just to the right) due to reduced pH is called the Root effect. This is seen in bony fish.
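The 250-fold affinity ratio quoted above makes the impact of even trace CO easy to estimate via the classic Haldane relationship, [HbCO]/[HbO2] = M × pCO/pO2. A rough sketch follows; the partial pressures used are illustrative assumptions, not figures from the article:

```python
# Haldane relationship: HbCO/HbO2 = M * pCO / pO2, with M ~ 250.
M = 250.0

def hbco_fraction(p_co, p_o2):
    ratio = M * p_co / p_o2          # HbCO relative to HbO2
    return ratio / (1.0 + ratio)     # fraction of binding sites occupied by CO

# ~0.2 mmHg of CO against ~100 mmHg of O2 (illustrative, alveolar-scale values):
print(round(hbco_fraction(0.2, 100.0), 2))   # ~0.33 -> a third of sites blocked
```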
It is necessary for hemoglobin to release the oxygen that it binds; if not, there is no point in binding it. The sigmoidal curve of hemoglobin makes it efficient in binding (taking up O2 in the lungs), and efficient in unloading (releasing O2 in the tissues). In people acclimated to high altitudes, the concentration of 2,3-bisphosphoglycerate (2,3-BPG) in the blood is increased, which allows these individuals to deliver a larger amount of oxygen to tissues under conditions of lower oxygen tension. This phenomenon, where molecule Y affects the binding of molecule X to a transport molecule Z, is called a "heterotropic" allosteric effect. Hemoglobin in organisms at high altitudes has also adapted such that it has less of an affinity for 2,3-BPG and so the protein will be shifted more towards its R state. In its R state, hemoglobin will bind oxygen more readily, thus allowing organisms to perform the necessary metabolic processes when oxygen is present at low partial pressures. Animals other than humans use different molecules to bind to hemoglobin and change its O2 affinity under unfavorable conditions. Fish use both ATP and GTP. These bind to a phosphate "pocket" on the fish hemoglobin molecule, which stabilizes the tense state and therefore decreases oxygen affinity. GTP reduces hemoglobin oxygen affinity much more than ATP, which is thought to be due to an extra hydrogen bond formed that further stabilizes the tense state. Under hypoxic conditions, the concentration of both ATP and GTP is reduced in fish red blood cells to increase oxygen affinity. A variant hemoglobin, called fetal hemoglobin (HbF, α2γ2), is found in the developing fetus, and binds oxygen with greater affinity than adult hemoglobin. This means that the oxygen binding curve for fetal hemoglobin is left-shifted (i.e., a higher percentage of hemoglobin has oxygen bound to it at lower oxygen tension), in comparison to that of adult hemoglobin. As a result, fetal blood in the placenta is able to take oxygen from maternal blood. Hemoglobin also carries nitric oxide (NO) in the globin part of the molecule. This improves oxygen delivery in the periphery and contributes to the control of respiration. NO binds reversibly to a specific cysteine residue in globin; the binding depends on the state (R or T) of the hemoglobin. The resulting S-nitrosylated hemoglobin influences various NO-related activities such as the control of vascular resistance, blood pressure and respiration. NO is not released in the cytoplasm of red blood cells but is transported out of them by an anion exchanger called AE1. Hemoglobin variants are a part of normal embryonic and fetal development. They may also be pathologic mutant forms of hemoglobin in a population, caused by variations in genetics. Some well-known hemoglobin variants, such as sickle-cell anemia, are responsible for diseases and are considered hemoglobinopathies. Other variants cause no detectable pathology, and are thus considered non-pathological variants. In the embryo, the variants include Gower 1 (ζ2ε2), Gower 2 (α2ε2), hemoglobin Portland I (ζ2γ2) and Portland II (ζ2β2); in the fetus, hemoglobin F (α2γ2); after birth, hemoglobin A (α2β2, the most common form), hemoglobin A2 (α2δ2) and a small residual amount of hemoglobin F. Variant forms that cause disease include hemoglobin S (the sickle-cell variant), hemoglobin C and hemoglobin E. When red blood cells reach the end of their life due to aging or defects, they are removed from the circulation by the phagocytic activity of macrophages in the spleen or the liver, or hemolyze within the circulation. Free hemoglobin is then cleared from the circulation via the hemoglobin transporter CD163, which is exclusively expressed on monocytes or macrophages. Within these cells the hemoglobin molecule is broken up, and the iron gets recycled.
This process also produces one molecule of carbon monoxide for every molecule of heme degraded. Heme degradation is one of the few natural sources of carbon monoxide in the human body, and is responsible for the normal blood levels of carbon monoxide even in people breathing pure air. The other major final product of heme degradation is bilirubin. Increased levels of this chemical are detected in the blood if red blood cells are being destroyed more rapidly than usual. Improperly degraded hemoglobin protein or hemoglobin that has been released from the blood cells too rapidly can clog small blood vessels, especially the delicate blood-filtering vessels of the kidneys, causing kidney damage. Iron is removed from heme and salvaged for later use; it is stored as hemosiderin or ferritin in tissues and transported in plasma by beta globulins as transferrins. When the porphyrin ring is broken up, the fragments are normally secreted as a yellow pigment called bilirubin, which is secreted into the intestines as bile. The intestines metabolise bilirubin into urobilinogen. Urobilinogen leaves the body in faeces, in a pigment called stercobilin. Globin is metabolised into amino acids that are then released into circulation. Hemoglobin deficiency can be caused either by a decreased amount of hemoglobin molecules, as in anemia, or by a decreased ability of each molecule to bind oxygen at the same partial pressure of oxygen. Hemoglobinopathies (genetic defects resulting in abnormal structure of the hemoglobin molecule) may cause both. In any case, hemoglobin deficiency decreases blood oxygen-carrying capacity. Hemoglobin deficiency is, in general, strictly distinguished from hypoxemia, defined as decreased partial pressure of oxygen in blood, although both are causes of hypoxia (insufficient oxygen supply to tissues). Other common causes of low hemoglobin include loss of blood, nutritional deficiency, bone marrow problems, chemotherapy, kidney failure, or abnormal hemoglobin (such as that of sickle-cell disease). The ability of each hemoglobin molecule to carry oxygen is normally modified by altered blood pH or CO2, causing an altered oxygen–hemoglobin dissociation curve. However, it can also be pathologically altered in, e.g., carbon monoxide poisoning. Decrease of hemoglobin, with or without an absolute decrease of red blood cells, leads to symptoms of anemia. Anemia has many different causes, although iron deficiency and its resultant iron deficiency anemia are the most common causes in the Western world. As absence of iron decreases heme synthesis, red blood cells in iron deficiency anemia are "hypochromic" (lacking the red hemoglobin pigment) and "microcytic" (smaller than normal). Other anemias are rarer. In hemolysis (accelerated breakdown of red blood cells), associated jaundice is caused by the hemoglobin metabolite bilirubin, and the circulating hemoglobin can cause kidney failure. Some mutations in the globin chain are associated with the hemoglobinopathies, such as sickle-cell disease and thalassemia. Other mutations, as discussed at the beginning of the article, are benign and are referred to merely as hemoglobin variants. There is a group of genetic disorders, known as the "porphyrias", that are characterized by errors in the metabolic pathways of heme synthesis. King George III of the United Kingdom was probably the most famous porphyria sufferer. To a small extent, hemoglobin A slowly combines with glucose at the terminal valine (an alpha amino acid) of each β chain.
The resulting molecule is often referred to as Hb A1c, a glycosylated hemoglobin. The binding of glucose to amino acids in hemoglobin takes place spontaneously (without the help of an enzyme) in many proteins, and is not known to serve a useful purpose. However, as the concentration of glucose in the blood increases, the percentage of Hb A that turns into Hb A1c increases. In diabetics whose glucose usually runs high, the percent Hb A1c also runs high. Because of the slow rate of Hb A combination with glucose, the Hb A1c percentage reflects a weighted average of blood glucose levels over the lifetime of red cells, which is approximately 120 days. The levels of glycosylated hemoglobin are therefore measured in order to monitor the long-term control of the chronic disease of type 2 diabetes mellitus (T2DM). Poor control of T2DM results in high levels of glycosylated hemoglobin in the red blood cells. The normal reference range is approximately 4.0–5.9%. Though difficult to obtain, values less than 7% are recommended for people with T2DM. Levels greater than 9% are associated with poor control of blood glucose, and levels greater than 12% with very poor control. Diabetics who keep their glycosylated hemoglobin levels close to 7% have a much better chance of avoiding the complications that may accompany diabetes than those whose levels are 8% or higher. In addition, increased glycosylation of hemoglobin increases its affinity for oxygen, therefore preventing its release at the tissues and inducing a level of hypoxia in extreme cases. Elevated levels of hemoglobin are associated with increased numbers or sizes of red blood cells, called polycythemia. This elevation may be caused by congenital heart disease, cor pulmonale, pulmonary fibrosis, too much erythropoietin, or polycythemia vera. High hemoglobin levels may also be caused by exposure to high altitudes, smoking, dehydration (artificially, by concentrating Hb), advanced lung disease and certain tumors. A recent study done in Pondicherry, India, shows its importance in coronary artery disease. Hemoglobin concentration measurement is among the most commonly performed blood tests, usually as part of a complete blood count. For example, it is typically tested before or after blood donation. Results are reported in g/L, g/dL or mol/L. 1 g/dL equals about 0.6206 mmol/L, although the latter units are not used as often due to uncertainty regarding the polymeric state of the molecule. This conversion factor, which uses the single globin unit molecular weight of 16,000 Da, is more common for hemoglobin concentration in blood. For MCHC (mean corpuscular hemoglobin concentration) the conversion factor 0.155, which uses the tetramer weight of 64,500 Da, is more common. Normal values of hemoglobin in the 1st and 3rd trimesters of pregnant women must be at least 11 g/dL, and at least 10.5 g/dL during the 2nd trimester. Dehydration or hyperhydration can greatly influence measured hemoglobin levels. Albumin can indicate hydration status. If the concentration is below normal, this is called anemia. Anemias are classified by the size of red blood cells, the cells that contain hemoglobin in vertebrates. The anemia is called "microcytic" if red cells are small, "macrocytic" if they are large, and "normocytic" otherwise. Hematocrit, the proportion of blood volume occupied by red blood cells, is typically about three times the hemoglobin concentration measured in g/dL.
For example, if the hemoglobin is measured at 17 g/dL, that compares with a hematocrit of 51%. Laboratory hemoglobin test methods require a blood sample (arterial, venous, or capillary) and analysis on a hematology analyzer and CO-oximeter. Additionally, a newer noninvasive hemoglobin (SpHb) test method called Pulse CO-Oximetry is also available, with accuracy comparable to invasive methods. Concentrations of oxy- and deoxyhemoglobin can be measured continuously, regionally and noninvasively using NIRS (near-infrared spectroscopy). NIRS can be used both on the head and on muscles. This technique is often used for research in, e.g., elite sports training, ergonomics, rehabilitation, patient monitoring, neonatal research, functional brain monitoring, brain–computer interfaces, urology (bladder contraction), neurology (neurovascular coupling) and more. Long-term control of blood sugar concentration can be measured by the concentration of Hb A1c. Measuring it directly would require many samples because blood sugar levels vary widely through the day. Hb A1c is the product of the irreversible reaction of hemoglobin A with glucose. A higher glucose concentration results in more Hb A1c. Because the reaction is slow, the Hb A1c proportion represents the glucose level in blood averaged over the half-life of red blood cells, which is typically 50–55 days. An Hb A1c proportion of 6.0% or less shows good long-term glucose control, while values above 7.0% are elevated. This test is especially useful for diabetics. The functional magnetic resonance imaging (fMRI) machine uses the signal from deoxyhemoglobin, which is sensitive to magnetic fields since it is paramagnetic. Combined measurement with NIRS shows good correlation with both the oxy- and deoxyhemoglobin signals compared to the BOLD signal. Hemoglobin can be tracked noninvasively, to build an individual data set tracking the hemoconcentration and hemodilution effects of daily activities for better understanding of sports performance and training. Athletes are often concerned about endurance and intensity of exercise. The sensor uses light-emitting diodes that emit red and infrared light through the tissue to a light detector, which then sends a signal to a processor to calculate the absorption of light by the hemoglobin protein. This sensor is similar to a pulse oximeter, which consists of a small sensing device that clips to the finger. A variety of oxygen-transport and -binding proteins exist in organisms throughout the animal and plant kingdoms. Organisms including bacteria, protozoans, and fungi all have hemoglobin-like proteins whose known and predicted roles include the reversible binding of gaseous ligands. Since many of these proteins contain globins and the heme moiety (iron in a flat porphyrin support), they are often called hemoglobins, even if their overall tertiary structure is very different from that of vertebrate hemoglobin. In particular, the distinction between "myoglobin" and hemoglobin in lower animals is often impossible, because some of these organisms do not contain muscles, or they may have a recognizable separate circulatory system but not one that deals with oxygen transport (for example, many insects and other arthropods). In all these groups, heme/globin-containing molecules (even monomeric globin ones) that deal with gas-binding are referred to as oxyhemoglobins. In addition to dealing with transport and sensing of oxygen, they may also deal with NO, CO2, sulfide compounds, and even O2 scavenging in environments that must be anaerobic.
They may even deal with detoxification of chlorinated materials in a way analogous to heme-containing P450 enzymes and peroxidases. The structure of hemoglobins varies across species. Hemoglobin occurs in all kingdoms of organisms, but not in all organisms. Primitive species such as bacteria, protozoa, algae, and plants often have single-globin hemoglobins. Many nematode worms, molluscs, and crustaceans contain very large multisubunit molecules, much larger than those in vertebrates. In particular, chimeric hemoglobins found in fungi and giant annelids may contain both globin and other types of proteins. One of the most striking occurrences and uses of hemoglobin in organisms is in the giant tube worm ("Riftia pachyptila", also called Vestimentifera), which can reach 2.4 meters in length and populates ocean volcanic vents. Instead of a digestive tract, these worms contain a population of bacteria constituting half the organism's weight. The bacteria oxidize H2S from the vent with O2 from the water to produce energy to make food from H2O and CO2. The worms' upper end is a deep-red fan-like structure ("plume"), which extends into the water and absorbs H2S and O2 for the bacteria, and CO2 for use as synthetic raw material similar to photosynthetic plants. The structures are bright red due to their content of several extraordinarily complex hemoglobins that have up to 144 globin chains, each including associated heme structures. These hemoglobins are remarkable for being able to carry oxygen in the presence of sulfide, and even to carry sulfide, without being completely "poisoned" or inhibited by it as hemoglobins in most other species are. Some nonerythroid cells (i.e., cells other than the red blood cell line) contain hemoglobin. In the brain, these include the A9 dopaminergic neurons in the substantia nigra, astrocytes in the cerebral cortex and hippocampus, and all mature oligodendrocytes. It has been suggested that brain hemoglobin in these cells may enable the "storage of oxygen to provide a homeostatic mechanism in anoxic conditions, which is especially important for A9 DA neurons that have an elevated metabolism with a high requirement for energy production". It has been noted further that "A9 dopaminergic neurons may be at particular risk since in addition to their high mitochondrial activity they are under intense oxidative stress caused by the production of hydrogen peroxide via autoxidation and/or monoamine oxidase (MAO)-mediated deamination of dopamine and the subsequent reaction of accessible ferrous iron to generate highly toxic hydroxyl radicals". This may explain the vulnerability of these cells to degeneration in Parkinson's disease. The post-mortem darkness of these cells (the origin of the Latin name, substantia "nigra") is due to neuromelanin rather than to hemoglobin-derived iron. Outside the brain, hemoglobin has non-oxygen-carrying functions as an antioxidant and a regulator of iron metabolism in macrophages, alveolar cells, and mesangial cells in the kidney. Historically, the color of blood was associated with rust in the linking of the planet Mars with the Roman god of war, since the planet is an orange-red that reminded the ancients of blood. Although the color of the planet is due to iron compounds in combination with oxygen in the Martian soil, it is a common misconception that the iron in hemoglobin and its oxides gives blood its red color.
The color is actually due to the porphyrin moiety of hemoglobin to which the iron is bound, not the iron itself, although the ligation and redox state of the iron can influence the pi to pi* or n to pi* electronic transitions of the porphyrin and hence its optical characteristics. Artist Julian Voss-Andreae created a sculpture called "Heart of Steel (Hemoglobin)" in 2005, based on the protein's backbone. The sculpture was made from glass and weathering steel. The intentional rusting of the initially shiny work of art mirrors hemoglobin's fundamental chemical reaction of oxygen binding to iron. Montreal artist Nicolas Baier created "Lustre (Hémoglobine)", a sculpture in stainless steel that shows the structure of the hemoglobin molecule. It is displayed in the atrium of McGill University Health Centre's research centre in Montreal. The sculpture measures about 10 metres × 10 metres × 10 metres.
https://en.wikipedia.org/wiki?curid=13483
Ivanhoe Ivanhoe: A Romance is a historical novel by Sir Walter Scott, first published in late 1819 in three volumes. At the time it was written it represented a shift by Scott away from fairly realistic novels set in Scotland in the comparatively recent past, to a somewhat fanciful depiction of medieval England. It has proved to be one of the best known and most influential of Scott's novels. "Ivanhoe" is set in 12th-century England with colourful descriptions of a tournament, outlaws, a witch trial and divisions between Jews and Christians. It has been credited with increasing interest in romance and medievalism; John Henry Newman claimed Scott "had first turned men's minds in the direction of the Middle Ages", while Thomas Carlyle and John Ruskin made similar assertions of Scott's overwhelming influence over the revival, based primarily on the publication of this novel. It has also had an important influence on popular perceptions of Richard the Lionheart, King John and Robin Hood. There have been several adaptations for stage, film and television. In June 1819, Scott was still suffering from the severe stomach pains that had forced him to dictate the last part of "The Bride of Lammermoor" and most of "A Legend of the Wars of Montrose", finishing at the end of May. But by the beginning of July at the latest he had started dictating his new novel "Ivanhoe", again with John Ballantyne and William Laidlaw as amanuenses. He was able to take up the pen himself for the second half of the novel and completed it in early November. For detailed information about the Middle Ages Scott drew on three works by the antiquarian Joseph Strutt: "Horda Angel-cynnan or a Compleat View of the Manners, Customs, Arms, Habits etc. of the Inhabitants of England" (1775–76), "Dress and Habits of the People of England" (1796–99), and "Sports and Pastimes of the People of England" (1801). Two historians gave him a solid grounding in the period: Robert Henry with his "The History of Great Britain" (1771–93), and Sharon Turner with "The History of the Anglo-Saxons from the Earliest Period to the Norman Conquest" (1799–1805). His clearest debt to an original medieval source involved the Templar Rule, reproduced in "The Theatre of Honour and Knight-Hood" (1623), translated from the French of André Favine. Scott was happy to introduce details from the later Middle Ages, and Chaucer was particularly helpful, as (in a different way) was the fourteenth-century romance "Richard Coeur de Lion". "Ivanhoe" was published by Archibald Constable in Edinburgh. All first editions carry the date of 1820, but it was released on 20 December 1819 and issued in London on the 29th. As with all of the Waverley novels before 1827, publication was anonymous. It is possible that Scott was involved in minor changes to the text during the early 1820s, but his main revision was carried out in 1829 for the 'Magnum' edition, in which the novel appeared as Volumes 16 and 17 in September and October 1830. The standard modern edition, by Graham Tulloch, appeared as Volume 8 of the Edinburgh Edition of the Waverley Novels in 1998: this is based on the first edition with emendations principally from Scott's manuscript in the second half of the work; the new Magnum material is included in Volume 25b. "Ivanhoe" is the story of one of the remaining Anglo-Saxon noble families at a time when the nobility in England was overwhelmingly Norman.
It follows the Saxon protagonist, Sir Wilfred of Ivanhoe, who is out of favour with his father for his allegiance to the Norman king Richard the Lionheart. The story is set in 1194, after the failure of the Third Crusade, when many of the Crusaders were still returning to their homes in Europe. King Richard, who had been captured by Leopold of Austria on his return journey to England, was believed to still be in captivity. Protagonist Wilfred of Ivanhoe is disinherited by his father Cedric of Rotherwood for supporting the Norman King Richard and for falling in love with the Lady Rowena, a ward of Cedric and descendant of the Saxon Kings of England. Cedric planned to have Rowena marry the powerful Lord Athelstane, a pretender to the Crown of England by his descent from the last Saxon King, Harold Godwinson. Ivanhoe accompanies King Richard on the Crusades, where he is said to have played a notable role in the Siege of Acre, and tends to Louis of Thuringia, who suffers from malaria. The book opens with a scene of Norman knights and prelates seeking the hospitality of Cedric. They are guided there by a pilgrim, known at that time as a palmer. Also returning from the Holy Land that same night, Isaac of York, a Jewish moneylender, seeks refuge at Rotherwood. Following the night's meal, the palmer observes one of the Normans, the Templar Brian de Bois-Guilbert, issue orders to his Saracen soldiers to capture Isaac. The palmer then assists in Isaac's escape from Rotherwood, with the additional aid of the swineherd Gurth. Isaac of York offers to repay his debt to the palmer with a suit of armour and a war horse to participate in the tournament at Ashby-de-la-Zouch Castle, on his inference that the palmer was secretly a knight. The palmer is taken by surprise, but accepts the offer. The tournament is presided over by Prince John. Also in attendance are Cedric, Athelstane, Lady Rowena, Isaac of York, his daughter Rebecca, Robin of Locksley and his men, Prince John's advisor Waldemar Fitzurse, and numerous Norman knights. On the first day of the tournament, in a bout of individual jousting, a mysterious knight, identifying himself only as "Desdichado" (described in the book as Spanish, taken by the Saxons to mean Disinherited), defeats Bois-Guilbert. The masked knight declines to reveal himself despite Prince John's request, but is nevertheless declared the champion of the day and is permitted to choose the Queen of the Tournament. He bestows this honour upon Lady Rowena. On the second day, at a melee, Desdichado is the leader of one party, opposed by his former adversaries. Desdichado's side is soon hard pressed and he himself beset by multiple foes until rescued by a knight nicknamed 'Le Noir Faineant' ("the Black Sluggard"), who thereafter departs in secret. When forced to unmask himself to receive his coronet (the sign of championship), Desdichado is identified as Wilfred of Ivanhoe, returned from the Crusades. This causes much consternation to Prince John and his court, who now fear the imminent return of King Richard. Ivanhoe is severely wounded in the competition, yet his father does not move quickly to tend to him. Instead, Rebecca, a skilled healer, tends to him while they are lodged near the tournament and then convinces her father to take Ivanhoe with them to their home in York when he is fit for that trip. The conclusion of the tournament includes feats of archery by Locksley, such as splitting a willow reed with his arrow. Prince John's dinner for the local Saxons ends in insults.
In the forests between Ashby and York, Isaac, Rebecca and the wounded Ivanhoe are abandoned by their guards, who fear bandits and take all of Isaac's horses. Cedric, Athelstane and the Lady Rowena meet them and agree to travel together. The party is captured by de Bracy and his companions and taken to Torquilstone, the castle of Front-de-Boeuf. The swineherd Gurth and Wamba the jester manage to escape, and then encounter Locksley, who plans a rescue. The Black Knight, having taken refuge for the night in the hut of a local friar, the Holy Clerk of Copmanhurst, volunteers his assistance on learning about the captives from Robin of Locksley. They then besiege the Castle of Torquilstone with Robin's own men, including the friar and assorted Saxon yeomen. Inside Torquilstone, de Bracy expresses his love for the Lady Rowena but is refused. Brian de Bois-Guilbert tries to seduce Rebecca and is rebuffed. Front-de-Boeuf tries to wring a hefty ransom from Isaac of York, but Isaac refuses to pay unless his daughter is freed. When the besiegers deliver a note demanding that the captives be yielded up, their Norman captors demand a priest to administer the Final Sacrament to Cedric; whereupon Cedric's jester Wamba slips in disguised as a priest and takes the place of Cedric, who escapes and brings the besiegers important information on the strength of the garrison and the castle's layout. The besiegers storm the castle. The castle is set aflame during the assault by Ulrica, the daughter of the original lord of the castle, Lord Torquilstone, as revenge for her father's death. Front-de-Boeuf is killed in the fire while de Bracy surrenders to the Black Knight, who identifies himself as King Richard and releases de Bracy. Bois-Guilbert escapes with Rebecca while Isaac is rescued by the Clerk of Copmanhurst. The Lady Rowena is saved by Cedric, while the still-wounded Ivanhoe is rescued from the burning castle by King Richard. In the fighting, Athelstane is wounded and presumed dead while attempting to rescue Rebecca, whom he mistakes for Rowena. Following the battle, Locksley plays host to King Richard. Word is conveyed by de Bracy to Prince John of the King's return and the fall of Torquilstone. In the meantime, Bois-Guilbert rushes with his captive to the nearest Templar Preceptory, where Lucas de Beaumanoir, the Grand Master of the Templars, takes umbrage at Bois-Guilbert's infatuation and subjects Rebecca to a trial for witchcraft. At Bois-Guilbert's secret request, she claims the right to trial by combat; and Bois-Guilbert, who had hoped to fight as her champion, is devastated when the Grand Master orders him to fight against Rebecca's champion. Rebecca then writes to her father to procure a champion for her. Cedric organises Athelstane's funeral at Coningsburgh, in the midst of which the Black Knight arrives with a companion. Cedric, who had not been present at Locksley's carousal, is ill-disposed towards the knight upon learning his true identity; but Richard calms Cedric and reconciles him with his son. During this conversation, Athelstane emerges – not dead, but laid in his coffin alive by monks desirous of the funeral money. Over Cedric's renewed protests, Athelstane pledges his homage to the Norman King Richard and urges Cedric to marry Rowena to Ivanhoe; to which Cedric finally agrees. Soon after this reconciliation, Ivanhoe receives word from Isaac beseeching him to fight on Rebecca's behalf.
Ivanhoe, riding day and night, arrives in time for the trial by combat, but horse and man are exhausted, with little chance of victory. The two knights make one charge at each other with lances, Bois-Guilbert appearing to have the advantage. However, Bois-Guilbert dies in the saddle before the combat can continue, the victim of his own contending passions. Fearing further persecution, Rebecca and her father plan to leave England for Granada. Before leaving, Rebecca comes to bid Rowena a fond farewell on her wedding day. Ivanhoe and Rowena marry and live a long and happy life together. Ivanhoe's military service ends with the death of King Richard. "(principal characters in bold)" Dedicatory Epistle: An imaginary letter to the Rev. Dr Dryasdust from Laurence Templeton, who has found the materials for the following tale mostly in the Anglo-Norman Wardour Manuscript. He wishes to provide an English counterpart to the preceding Waverley novels, in spite of various difficulties arising from the chronologically remote setting made necessary by the earlier progress of civilisation south of the Border. Ch. 1: Historical sketch. Gurth the swineherd and Wamba the jester discuss life under Norman rule. Ch. 2: Wamba and Gurth wilfully misdirect a group of horsemen headed by Prior Aymer and Brian de Bois-Guilbert seeking shelter at Cedric's Rotherwood. Aymer and Bois-Guilbert discuss the beauty of Cedric's ward Rowena and are redirected, this time correctly, by a palmer [Ivanhoe in disguise]. Ch. 3: Cedric anxiously awaits the return of Gurth and the pigs. Aymer and Bois-Guilbert arrive. Ch. 4: Bois-Guilbert admires Rowena as she enters for the evening feast. Ch. 5: During the feast: Isaac enters and is befriended by the palmer; Cedric laments the decay of the Saxon language; the palmer refutes Bois-Guilbert's assertion of Templar supremacy in a tournament in Palestine, where Ivanhoe defeated him; the palmer and Rowena give a pledge for a return match; and Isaac is thunderstruck by Bois-Guilbert's denial of his assertion of poverty. Ch. 6: On the road to Sheffield, the palmer tells Rowena that Ivanhoe will soon be home. In the morning he offers to protect Isaac from Bois-Guilbert, whom he has overheard giving instructions for his capture. Isaac mentions a source of horse and armour of which he guesses the palmer has need. Ch. 7: As the audience for a tournament at Ashby assembles, Prince John amuses himself by making fun of Athelstane and Isaac. Ch. 8: After a series of Saxon defeats in the tournament the 'Disinherited Knight' [Ivanhoe] triumphs over Bois-Guilbert. Ch. 9: The Disinherited Knight nominates Rowena as Queen of the Tournament. Ch. 10: The Disinherited Knight refuses to ransom Bois-Guilbert's armour, declaring that their business is not concluded. He instructs his attendant, Gurth in disguise, to convey money to Isaac to repay him for arranging the provision of his horse and armour. Gurth does so, but Rebecca secretly refunds the money. Ch. 11: Gurth is assailed by a band of outlaws, but they spare him on hearing his story and after he has defeated one of their number, a miller, at quarter-staves. Ch. 12: The Disinherited Knight's party triumph at the tournament, with the aid of a knight in black [Richard in disguise]; he is revealed as Ivanhoe and faints as a result of the wounds he has incurred. Ch. 13: John encourages De Bracy to court Rowena and receives a warning from France that Richard has escaped.
Locksley [Robin Hood] triumphs in an archery contest. Ch. 14: At the tournament banquet Cedric continues to disown his son (who has been associating with the Normans) but drinks to the health of Richard, rather than John, as the noblest of that race. Ch. 1 (15): De Bracy (disguised as a forester) tells Fitzurse of his plan to capture Rowena and then 'rescue' her in his own person. Ch. 2 (16): The Black Knight is entertained by a hermit [Friar Tuck] at Copmanhurst. Ch. 3 (17): The Black Knight and the hermit exchange songs. Ch. 4 (18): (Retrospect: Before going to the banquet Cedric learned that Ivanhoe had been removed by unknown carers; Gurth was recognised and captured by Cedric's cupbearer Oswald.) Cedric finds Athelstane unresponsive to his attempts to interest him in Rowena, who is herself only attracted by Ivanhoe. Ch. 5 (19): Rowena persuades Cedric to escort Isaac and Rebecca, who have been abandoned (along with a sick man [Ivanhoe] in their care) by their hired protectors. Wamba helps Gurth to escape again. De Bracy mounts his attack, during which Wamba escapes. He meets up with Gurth and they encounter Locksley who, after investigation, advises against a counter-attack, the captives not being in immediate danger. Ch. 6 (20): Locksley sends two of his men to watch De Bracy. At Copmanhurst he meets the Black Knight, who agrees to join in the rescue. Ch. 7 (21): De Bracy tells Bois-Guilbert he has decided to abandon his 'rescue' plan, mistrusting his companion, though the Templar says it is Rebecca he is interested in. On arrival at Torquilstone castle Cedric laments its decline. Ch. 8 (22): Under threat of torture Front-de-Bœuf forces Isaac to agree to pay him a thousand pounds, but only if Rebecca is released. Ch. 9 (23): De Bracy uses Ivanhoe's danger from Front-de-Bœuf to put pressure on Rowena, but he is moved by her resulting distress. The narrator refers the reader to historical instances of baronial oppression in medieval England. Ch. 10 (24): The hag Urfried [Ulrica] warns Rebecca of her forthcoming fate. Rebecca impresses Bois-Guilbert by her spirited resistance to his advances. Ch. 11 (25): Front-de-Bœuf rejects a written challenge from Gurth and Wamba. Wamba offers to spy out the castle posing as a confessor. Ch. 12 (26): Entering the castle, Wamba exchanges clothes with Cedric, who encounters Rebecca and Urfried. Ch. 13 (27): Urfried recognises Cedric as a Saxon and, revealing herself as Ulrica, tells her story: Front-de-Bœuf murdered his own father, who had killed her father and seven brothers when taking the castle and had made her his detested lover. She says she will give a signal when the time is ripe for storming the castle. Front-de-Bœuf sends the presumed friar with a message to summon reinforcements. Athelstane defies him, claiming that Rowena is his fiancée. The monk Ambrose arrives seeking help for Aymer, who has been captured by Locksley's men. Ch. 14 (28): (Retrospective chapter detailing Rebecca's care for Ivanhoe from the tournament to the assault on Torquilstone.) Ch. 15 (29): Rebecca describes the assault on Torquilstone to the wounded Ivanhoe, disagreeing with his exalted view of chivalry. Ch. 16 (30): Front-de-Bœuf and De Bracy discuss how best to repel the besiegers. Ulrica sets fire to the castle and Front-de-Bœuf dies in the flames. Ch. 1 (31): (The chapter opens with a retrospective account of the attackers' plans and the taking of the barbican.) The Black Knight defeats De Bracy, making himself known to him as Richard, and rescues Ivanhoe.
Bois-Guilbert rescues Rebecca, striking down Athelstane, who thinks she is Rowena. Ulrica perishes in the flames after singing a wild pagan hymn. Ch. 2 (32): Locksley supervises the orderly division of the spoil. Friar Tuck brings Isaac, whom he has rescued and made captive, and engages in good-natured buffeting with the Black Knight. Ch. 3 (33): Locksley arranges ransom terms for Isaac and Aymer. Ch. 4 (34): De Bracy informs John that Richard is in England. Together with Fitzurse he threatens to desert John, but the prince responds cunningly. Ch. 5 (35): At York, Nathan is horrified by Isaac's determination to seek Rebecca at Templestowe. At the priory Beaumanoir tells Mountfitchet that he intends to take a hard line with Templar irregularities. Arriving, Isaac shows him a letter from Aymer to Bois-Guilbert referring to Rebecca. Ch. 6 (36): Beaumanoir tells Albert Malvoisin of his outrage at Rebecca's presence in the preceptory. Albert insists to Bois-Guilbert that her trial for sorcery must proceed. Mountfichet says he will seek evidence against her. Ch. 7 (37): Rebecca is tried and found guilty. At Bois-Guilbert's secret prompting she demands that a champion defend her in trial by combat. Ch. 8 (38): Rebecca's demand is accepted, Bois-Guilbert being appointed champion for the prosecution. Bearing a message to her father, Higg meets him and Nathan on their way to the preceptory, and Isaac goes in search of Ivanhoe. Ch. 9 (39): Rebecca rejects Bois-Guilbert's offer to fail to appear for the combat in return for her love. Albert persuades him that it is in his interest to appear. Ch. 10 (40): The Black Knight leaves Ivanhoe to travel to Coningsburgh castle for Athelstane's funeral, and Ivanhoe follows him the next day. The Black Knight is rescued by Locksley from an attack carried out by Fitzurse on John's orders, and reveals his identity as Richard to his companions, prompting Locksley to identify himself as Robin Hood. Ch. 11 (41): Richard talks to Ivanhoe and dines with the outlaws before Robin arranges a false alarm to put an end to the delay. The party arrive at Coningsburgh. Ch. 12 (42): Richard procures Ivanhoe's pardon from his father. Athelstane appears, not dead, giving his allegiance to Richard and surrendering Rowena to Ivanhoe. Ch. 13 (43): Ivanhoe appears as Rebecca's champion, and Bois-Guilbert dies the victim of his contending passions. Ch. 14 (44): Beaumanoir and his Templars leave Richard defiantly. Cedric agrees to the marriage of Ivanhoe and Rowena. Rebecca takes her leave of Rowena before she and her father leave England to make a new life under the tolerant King of Granada. Critics of the novel have treated it as a romance intended mainly to entertain boys. "Ivanhoe" maintains many of the elements of the Romance genre, including the quest, a chivalric setting, and the overthrowing of a corrupt social order to bring on a time of happiness. Other critics assert that the novel creates a realistic and vibrant story, idealising neither the past nor its main character. Scott treats themes similar to those of some of his earlier novels, like "Rob Roy" and "The Heart of Midlothian", examining the conflict between heroic ideals and modern society. In the latter novels, industrial society becomes the centre of this conflict as the backward Scottish nationalists and the "advanced" English have to arise from chaos to create unity.
Similarly, the Normans in "Ivanhoe", who represent a more sophisticated culture, and the Saxons, who are poor, disenfranchised, and resentful of Norman rule, band together and begin to mould themselves into one people. The conflict between the Saxons and Normans focuses on the losses both groups must experience before they can be reconciled and thus forge a united England. The particular loss is in the extremes of their own cultural values, which must be disavowed in order for the society to function. For the Saxons, this value is the final admission of the hopelessness of the Saxon cause. The Normans must learn to overcome the materialism and violence in their own codes of chivalry. Ivanhoe and Richard represent the hope of reconciliation for a unified future. Ivanhoe, though of a more noble lineage than some of the other characters, represents a middling individual in the medieval class system who is not exceptionally outstanding in his abilities, as is expected of other quasi-historical fictional characters, such as the Greek heroes. Critic György Lukács points to middling main characters like Ivanhoe in Sir Walter Scott's other novels as one of the primary reasons Scott's historical novels depart from previous historical works and better explore social and cultural history. The location of the novel is centred upon southern Yorkshire, north-west Leicestershire and northern Nottinghamshire in England. Castles mentioned within the story include Ashby de la Zouch Castle (now a ruin in the care of English Heritage), York (though the mention of Clifford's Tower, likewise an extant English Heritage property, is anachronistic, it not having been called that until later after various rebuilds) and 'Coningsburgh', which is based upon Conisbrough Castle, in the ancient town of Conisbrough near Doncaster (the castle also being a popular English Heritage site). Reference is made within the story to York Minster, where the climactic wedding takes place, and to the Bishop of Sheffield, although the Diocese of Sheffield did not exist either at the time of the novel or at the time Scott wrote it, not being founded until 1914. Such references suggest that Robin Hood lived or travelled in the region. Conisbrough is so dedicated to the story of "Ivanhoe" that many of its streets, schools, and public buildings are named after characters from the book. The modern conception of Robin Hood as a cheerful, decent, patriotic rebel owes much to "Ivanhoe". "Locksley" becomes Robin Hood's title in the Scott novel, and it has been used ever since to refer to the legendary outlaw. Scott appears to have taken the name from an anonymous manuscript – written in 1600 – that employs "Locksley" as an epithet for Robin Hood. Owing to Scott's decision to make use of the manuscript, Robin Hood from Locksley has been transformed for all time into "Robin of Locksley", alias Robin Hood. (There is, incidentally, a village called Loxley in Yorkshire.) Scott makes the 12th-century Saxon-Norman conflict a major theme in his novel. The original medieval stories about Robin Hood did not mention any conflict between Saxons and Normans; it was Scott who introduced this theme into the legend. The characters in "Ivanhoe" refer to Prince John and King Richard I as "Normans"; contemporary medieval documents from this period do not refer to either of these two rulers as Normans. Recent re-tellings of the story retain Scott's emphasis on the Norman-Saxon conflict.
Scott also shunned the late 16th-century depiction of Robin as a dispossessed nobleman (the Earl of Huntingdon). This, however, has not prevented Scott from making an important contribution to the noble-hero strand of the legend, too, because some subsequent motion picture treatments of Robin Hood's adventures give Robin traits that are characteristic of Ivanhoe as well. The most notable Robin Hood films are the lavish Douglas Fairbanks 1922 silent film, the 1938 triple Academy Award-winning "The Adventures of Robin Hood" with Errol Flynn as Robin (which contemporary reviewer Frank Nugent links specifically with "Ivanhoe"), and the 1991 box-office success "Robin Hood: Prince of Thieves" with Kevin Costner. There is also the Mel Brooks spoof "Robin Hood: Men in Tights". Both Ivanhoe and Robin, for instance, are returning Crusaders. They have quarrelled with their respective fathers, they are proud to be Saxons, they display a highly evolved sense of justice, they support the rightful king even though he is of Norman-French ancestry, they are adept with weapons, and they each fall in love with a "fair maid" (Rowena and Marian, respectively). This particular time-frame was popularised by Scott. He borrowed it from the writings of the 16th-century chronicler John Mair or a 17th-century ballad, presumably to make the plot of his novel more gripping. Medieval balladeers had generally placed Robin about two centuries later, in the reign of Edward I, II or III. Robin's familiar feat of splitting his competitor's arrow in an archery contest appears for the first time in "Ivanhoe". The general political events depicted in the novel are relatively accurate; the novel tells of the period just after King Richard's imprisonment in Austria following the Crusade and of his return to England after a ransom is paid. Yet the story is also heavily fictionalised. Scott himself acknowledged that he had taken liberties with history in his "Dedicatory Epistle" to "Ivanhoe". Modern readers are cautioned to understand that Scott's aim was to create a compelling novel set in a historical period, not to provide a book of history. There has been criticism of Scott's portrayal of the bitter extent of the "enmity of Saxon and Norman, represented as persisting in the days of Richard" as "unsupported by the evidence of contemporary records that forms the basis of the story." Historian E. A. Freeman criticised Scott's novel, stating its depiction of a Saxon–Norman conflict in late twelfth-century England was unhistorical. Freeman cited medieval writer Walter Map, who claimed that tension between the Saxons and Normans had declined by the reign of Henry I. Freeman also cited the late twelfth-century book "Dialogus de Scaccario" by Richard FitzNeal. This book claimed that the Saxons and Normans had so merged together through intermarriage and cultural assimilation that (outside the aristocracy) it was impossible to tell "one from the other." Finally, Freeman ended his critique of Scott by saying that by the end of the twelfth century, the descendants of both Saxons and Normans in England referred to themselves as "English", not "Saxon" or "Norman".
However, Scott may have intended to suggest parallels between the Norman conquest of England, about 130 years previously, and the prevailing situation in Scott's native Scotland: Scotland's union with England had taken place in 1707, about the same length of time before Scott's writing, and his own time saw a resurgence of Scottish nationalism, evidenced by the cult of Robert Burns, the famous poet who deliberately chose to work in Scots vernacular though he was an educated man and spoke modern English eloquently. Indeed, some experts suggest that Scott deliberately used "Ivanhoe" to illustrate his own combination of Scottish patriotism and pro-British Unionism. The novel generated a new name in English – Cedric. The original Saxon name had been "Cerdic" but Sir Walter misspelled it – an example of metathesis. "It is not a name but a misspelling" said satirist H. H. Munro. In England in 1194, it would have been unlikely for Rebecca to face the threat of being burned at the stake on charges of witchcraft. It is thought that it was shortly afterwards, from the 1250s, that the Church began to undertake the finding and punishment of witches, and death did not become the usual penalty until the 15th century. Even then, the form of execution used for witches in England was hanging, burning being reserved for those also convicted of treason. There are various minor errors: the description of the tournament at Ashby owes more to the 14th century; most of the coins mentioned by Scott are exotic; William Rufus is said to have been John Lackland's grandfather, but he was actually his great-great-uncle; and Wamba (disguised as a monk) says "I am a poor brother of the Order of St Francis", but St. Francis of Assisi only began his preaching ten years after the death of Richard I. "For a writer whose early novels were prized for their historical accuracy, Scott was remarkably loose with the facts when he wrote "Ivanhoe"... But it is crucial to remember that "Ivanhoe", unlike the Waverly books, is entirely a romance. It is meant to please, not to instruct, and is more an act of imagination than one of research. Despite this fancifulness, however, "Ivanhoe" does make some prescient historical points. The novel is occasionally quite critical of King Richard, who seems to love adventure more than he loves the well-being of his subjects. This criticism did not match the typical idealised, romantic view of Richard the Lion-Hearted that was popular when Scott wrote the book, and yet it accurately echoes the way King Richard is often judged by historians today." Rebecca may be based on Rebecca Gratz, a Philadelphia teacher and philanthropist and the first Jewish female college student in America. Scott's attention had been drawn to Gratz's character by novelist Washington Irving, who was a close friend of the Gratz family. The assertion has been disputed, but it has been supported by "The Original of Rebecca in Ivanhoe", in "The Century Magazine" in 1882. The two Jewish characters, the moneylender Isaac of York and his beautiful daughter Rebecca, feature as main characters; the book was written and published during a period of increasing advancement and awareness for the emancipation of the Jews in England, and their position in society is well documented. Most of the original reviewers gave "Ivanhoe" an enthusiastic or broadly favourable reception. As usual, Scott's descriptive powers and his ability to present the matters of the past were generally praised. More than one reviewer found the work notably poetic.
Several of them found themselves transported imaginatively to the remote period of the novel, although some problems were recognised: the combining of features from the high and late Middle Ages; an awkwardly created language for the dialogue; and antiquarian overload. The author's excursion into England was generally judged a success, the forest outlaws and the creation of 'merry England' attracting particular praise. Rebecca was almost unanimously admired, especially in her farewell scene. The plot was either criticised for its weakness or regarded as of less importance than the scenes and characters. The scenes at Torquilstone were judged horrible by several critics, with special focus on Ulrica. Athelstane's resurrection found no favour, the kindest response being that of Francis Jeffrey in "The Edinburgh Review", who suggested (writing anonymously, like all the reviewers) that it was 'introduced out of the very wantonness of merriment'. The Eglinton Tournament of 1839, held by the 13th Earl of Eglinton at Eglinton Castle in Ayrshire, was inspired by and modelled on "Ivanhoe". On November 5, 2019, "BBC News" included "Ivanhoe" in its list of the 100 most influential novels. The novel has been the basis for several motion pictures, as well as many television adaptations. Victor Sieg's dramatic cantata "Ivanhoé" won the Prix de Rome in 1864 and premiered in Paris the same year. An operatic adaptation of the novel by Sir Arthur Sullivan (entitled "Ivanhoe") ran for over 150 consecutive performances in 1891. Other operas based on the novel have been composed by Gioachino Rossini ("Ivanhoé"), Thomas Sari ("Ivanhoé"), Bartolomeo Pisani ("Rebecca"), A. Castagnier ("Rébecca"), Otto Nicolai ("Il Templario"), and Heinrich Marschner ("Der Templer und die Jüdin"). Rossini's opera is a "pasticcio" (an opera in which the music for a new text is chosen from pre-existent music by one or more composers). Scott attended a performance of it and recorded in his journal, "It was an opera, and, of course, the story sadly mangled and the dialogue, in part nonsense." The railway running through Ashby-de-la-Zouch was known as the Ivanhoe line between 1993 and 2005, in reference to the book's setting in the locality.
https://en.wikipedia.org/wiki?curid=15055
Isoelectric point The isoelectric point (pI, pH(I), IEP) is the pH at which a molecule carries no net electrical charge or is electrically neutral in the statistical mean. The standard nomenclature to represent the isoelectric point is pH(I), although pI is also commonly seen, and is used in this article for brevity. The net charge on the molecule is affected by the pH of its surrounding environment and can become more positively or negatively charged due to the gain or loss, respectively, of protons (H+). Surfaces naturally charge to form a double layer. In the common case when the surface charge-determining ions are H+/OH−, the net surface charge is affected by the pH of the liquid in which the solid is submerged. The pI value can affect the solubility of a molecule at a given pH. Such molecules have minimum solubility in water or salt solutions at the pH that corresponds to their pI and often precipitate out of solution. Biological amphoteric molecules such as proteins contain both acidic and basic functional groups. Amino acids that make up proteins may be positive, negative, neutral, or polar in nature, and together give a protein its overall charge. At a pH below their pI, proteins carry a net positive charge; above their pI they carry a net negative charge. Proteins can, thus, be separated by net charge in a polyacrylamide gel using either preparative gel electrophoresis, which uses a constant pH to separate proteins, or isoelectric focusing, which uses a pH gradient to separate proteins. Isoelectric focusing is also the first step in 2-D polyacrylamide gel electrophoresis. Proteins can also be separated by ion exchange chromatography. Biological proteins are made up of zwitterionic amino acid compounds; the net charge of these proteins can be positive or negative depending on the pH of the environment. The specific pI of the target protein can be used to model the process, and the compound can then be purified from the rest of the mixture. Buffers of various pH can be used for this purification process to change the pH of the environment. When a mixture containing a target protein is loaded into an ion exchanger, the stationary matrix can be either positively charged (for mobile anions) or negatively charged (for mobile cations). At low pH values, the net charge of most proteins in the mixture is positive: in cation exchangers, these positively charged proteins bind to the negatively charged matrix. At high pH values, the net charge of most proteins is negative, and they bind to the positively charged matrix in anion exchangers. When the environment is at a pH value equal to the protein's pI, the net charge is zero; the protein is then not bound to either exchanger and can be eluted. For an amino acid with only one amine and one carboxyl group, the pI can be calculated from the mean of the pKas of this molecule. The pH of an electrophoretic gel is determined by the buffer used for that gel. If the pH of the buffer is above the pI of the protein being run, the protein will migrate to the positive pole (negative charge is attracted to a positive pole). If the pH of the buffer is below the pI of the protein being run, the protein will migrate to the negative pole of the gel (positive charge is attracted to the negative pole). If the protein is run with a buffer pH that is equal to the pI, it will not migrate at all. This is also true for individual amino acids.
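For the simplest case mentioned above, an amino acid with a single amine and a single carboxyl group, the calculation is just an average of the two pKa values. A minimal Python sketch follows, using commonly cited textbook pKa values for glycine (approximately 2.34 and 9.60); the function name is invented.

```python
def simple_pi(pka_carboxyl: float, pka_amine: float) -> float:
    """pI of an amino acid with one carboxyl and one amine group:
    the arithmetic mean of its two pKa values."""
    return (pka_carboxyl + pka_amine) / 2.0

# Glycine, with approximate textbook pKa values:
print(simple_pi(2.34, 9.60))  # ~5.97
```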
Glycine and adenosine monophosphate provide two contrasting examples. In glycine the pK values are separated by nearly 7 units, so the concentration of the neutral species, glycine (GlyH), is effectively 100% of the analytical glycine concentration at the isoelectric point. Glycine may exist as a zwitterion at the isoelectric point, but the equilibrium constant for the isomerization reaction in solution is not known. The other example, adenosine monophosphate, illustrates the fact that a third species may, in principle, be involved; in fact the concentration of (AMP)H32+ is negligible at the isoelectric point in this case. If the pI is greater than the pH, the molecule will have a positive charge. A number of algorithms for estimating the isoelectric points of peptides and proteins have been developed. Most of them use the Henderson–Hasselbalch equation with different pK values. For instance, within the model proposed by Bjellqvist and co-workers, the pK values were determined between closely related immobilines by focusing the same sample in overlapping pH gradients. Some improvements in the methodology (especially in the determination of the pK values for modified amino acids) have also been proposed. More advanced methods take into account the effect of adjacent amino acids up to ±3 residues away from a charged aspartic or glutamic acid, the effects on the free C terminus, and apply a correction term to the corresponding pK values using a genetic algorithm. Other recent approaches are based on a support vector machine algorithm and on pKa optimization against experimentally known protein/peptide isoelectric points. Moreover, experimentally measured isoelectric points of proteins have been aggregated into databases. Recently, a database of isoelectric points for all proteins, predicted using most of the available methods, has also been developed. The isoelectric points (IEP) of metal oxide ceramics are used extensively in material science in various aqueous processing steps (synthesis, modification, etc.). In the absence of chemisorbed or physisorbed species, particle surfaces in aqueous suspension are generally assumed to be covered with surface hydroxyl species, M-OH (where M is a metal such as Al, Si, etc.). At pH values above the IEP, the predominant surface species is M-O−, while at pH values below the IEP, M-OH2+ species predominate. Approximate isoelectric points at 25 °C in water have been tabulated for many common ceramics. The exact value can vary widely depending on material factors such as purity and phase, as well as physical parameters such as temperature. Moreover, the precise measurement of isoelectric points can be difficult, and many sources cite differing values for the isoelectric points of these materials. Mixed oxides may exhibit isoelectric point values that are intermediate to those of the corresponding pure oxides. For example, a synthetically prepared amorphous aluminosilicate (Al2O3-SiO2) was initially measured as having an IEP of 4.5 (the electrokinetic behavior of the surface was dominated by surface Si-OH species, thus explaining the relatively low IEP value). Significantly higher IEP values (pH 6 to 8) have been reported for 3Al2O3-2SiO2 by others. Similarly, the IEP of barium titanate (BaTiO3) was reported in the range 5–6, while others obtained a value of 3. Mixtures of titania (TiO2) and zirconia (ZrO2) have been studied and found to have an isoelectric point between 5.3 and 6.9, varying non-linearly with the ZrO2 fraction.
The surface charge of the mixed oxides was correlated with acidity. Greater titania content led to increased Lewis acidity, whereas zirconia-rich oxides displayed Brønsted acidity. The different types of acidities produced differences in ion adsorption rates and capacities. The terms isoelectric point (IEP) and point of zero charge (PZC) are often used interchangeably, although under certain circumstances, it may be productive to make the distinction. In systems in which H+/OH− are the interface potential-determining ions, the point of zero charge is given in terms of pH. The pH at which the surface exhibits a neutral net electrical charge is the point of zero charge at the surface. Electrokinetic phenomena generally measure zeta potential, and a zero zeta potential is interpreted as the point of zero net charge at the shear plane. This is termed the isoelectric point. Thus, the isoelectric point is the value of pH at which the colloidal particle remains stationary in an electrical field. The isoelectric point is expected to be somewhat different from the point of zero charge at the particle surface, but this difference is often ignored in practice for so-called pristine surfaces, i.e., surfaces with no specifically adsorbed positive or negative charges. In this context, specific adsorption is understood as adsorption occurring in a Stern layer or chemisorption. Thus, the point of zero charge at the surface is taken as equal to the isoelectric point in the absence of specific adsorption on that surface. According to Jolivet, in the absence of positive or negative charges, the surface is best described by the point of zero charge. If positive and negative charges are both present in equal amounts, then this is the isoelectric point. Thus, the PZC refers to the absence of any type of surface charge, while the IEP refers to a state of neutral net surface charge. The difference between the two, therefore, is the quantity of charged sites at the point of net zero charge. Jolivet uses the intrinsic surface equilibrium constants, pK− and pK+, to define the two conditions in terms of the relative number of charged sites: for large ΔpK (>4 according to Jolivet), the predominant species is MOH and there are relatively few charged species, so the PZC is relevant; for small values of ΔpK, there are many charged species in approximately equal numbers, so one speaks of the IEP.
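Returning to peptides and proteins, the pI algorithms described earlier share a common core: sum Henderson–Hasselbalch charge contributions for the termini and the ionizable side chains, then search for the pH at which the net charge crosses zero. The Python sketch below illustrates that core under stated assumptions: the pK table is one approximate, commonly used set (published predictors such as Bjellqvist's differ precisely in this choice), and all names are invented.

```python
# Assumed, approximate pK values for the termini and ionizable side chains.
PKA = {
    "N_term": 8.6, "C_term": 3.6,             # termini
    "D": 3.9, "E": 4.1, "C": 8.5, "Y": 10.1,  # acidic groups
    "H": 6.5, "K": 10.8, "R": 12.5,           # basic groups
}

def net_charge(seq: str, ph: float) -> float:
    """Henderson-Hasselbalch net charge of a one-letter-code sequence:
    each acidic group contributes -1/(1 + 10**(pK - pH)), each basic
    group contributes +1/(1 + 10**(pH - pK))."""
    acidic = ["C_term"] + [aa for aa in seq if aa in "DECY"]
    basic = ["N_term"] + [aa for aa in seq if aa in "HKR"]
    charge = sum(-1.0 / (1.0 + 10.0 ** (PKA[g] - ph)) for g in acidic)
    charge += sum(1.0 / (1.0 + 10.0 ** (ph - PKA[g])) for g in basic)
    return charge

def isoelectric_point(seq: str) -> float:
    """Bisect on pH: the net charge falls monotonically from positive
    (low pH) to negative (high pH), crossing zero at the pI."""
    lo, hi = 0.0, 14.0
    for _ in range(50):
        mid = (lo + hi) / 2.0
        if net_charge(seq, mid) > 0.0:
            lo = mid
        else:
            hi = mid
    return round((lo + hi) / 2.0, 2)

print(isoelectric_point("ACDKRH"))  # pI of a short illustrative peptide
```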
https://en.wikipedia.org/wiki?curid=15056
International reply coupon An international reply coupon (IRC) is a coupon that can be exchanged for one or more postage stamps representing the minimum postage for an unregistered priority airmail letter of up to twenty grams sent to another Universal Postal Union (UPU) member country. IRCs are accepted by all UPU member countries. UPU member postal services are obliged to exchange an IRC for postage, but are not obliged to sell them. The purpose of the IRC is to allow a person to send someone in another country a letter, along with the cost of postage for a reply. If the addressee is within the same country, there is no need for an IRC because a self-addressed stamped envelope (SASE) or return postcard will suffice; but if the addressee is in another country an IRC removes the necessity of acquiring foreign postage or sending appropriate currency. International reply coupons (in French, "Coupons-Réponse Internationaux") are printed in blue ink on paper that has the letters "UPU" in large characters in the watermark. The front of each coupon is printed in French. The reverse side of the coupon, which has text relating to its use, is printed in German, English, Arabic, Chinese, Spanish, and Russian. Under the Universal Postal Union's regulations, participating member countries are not required to place a control stamp or postmark on the international reply coupons that they sell. Therefore, some foreign-issue reply coupons that are tendered for redemption may bear the name of the issuing country (generally in French) rather than the optional control stamp or postmark. The Nairobi Model was an international reply coupon printed by the Universal Postal Union, approximately 3.75 inches by 6 inches, with an expiration date of December 31, 2013. This model was designed by Rob Van Goor, a graphic artist from the Luxembourg Post. It was selected from among 10 designs presented by Universal Postal Union member countries. Van Goor interpreted the theme of the contest – "The Postage Stamp: A Vehicle for Exchange" – by depicting the world being cradled by a hand and the perforated outline of a postage stamp. The Doha Model is named for the 25th UPU congress held in Doha, Qatar, in 2012. The Doha model, designed by Czech artist and graphic designer Michal Sindelar, shows cupped hands catching a stream of water, to celebrate the theme of Water for Life. It expired after December 31, 2017. The Istanbul Model was designed by graphic artist Nguyen Du and features a pair of hands and a dove against an Arctic backdrop to represent sustainable development in the postal sector. Ten countries participated in the competition, which was held on Oct. 7, 2016, during the UPU congress in Istanbul, Turkey. It expires after December 31, 2021. The IRC was introduced in 1906 at a Universal Postal Union congress in Rome. At the time an IRC could be exchanged for a single-rate, ordinary postage stamp for surface delivery to a foreign country, as this was before the introduction of airmail services. An IRC is exchangeable in a UPU member country for the minimum postage of a priority or unregistered airmail letter to a foreign country. The current IRC, which features the theme "the Post and sustainable development", was designed by Vietnamese artist Nguyen Du for 2017–2021 and was adopted in Istanbul in 2016; for this reason it is also known as the "Istanbul model". The previous design, "Water for Life" by Czech artist and graphic designer Michal Sindelar, was issued in 2013 and was valid until 31 December 2017.
IRCs are ordered from the UPU headquarters in Bern, Switzerland, by postal authorities. They are generally available at large post offices; in the U.S., they were requisitioned along with regular domestic stamps by any post office that had sufficient demand for them. Prices for IRCs vary by country. In the United States in November 2012, the purchase price was US$2.20; however, the US Postal Service discontinued sales of IRCs on 27 January 2013 due to declining demand. Britain's Royal Mail also stopped selling IRCs on 18 February 2012, citing minimal sales and claiming that the average post office sold less than one IRC per year. IRCs purchased in foreign countries may be used in the United States toward the purchase of postage stamps and embossed stamped envelopes at the current one-ounce First Class International rate (US$1.05 as of April 2012) per coupon. IRCs are often used by amateur radio operators sending QSL cards to each other; it has traditionally been considered good practice and common courtesy to include an IRC when writing to a foreign operator and expecting a reply by mail. If the operator's home country does not sell IRCs, then a foreign IRC may be used. Editions of the IRC from the "Beijing" model onward bear an expiration date, and a new IRC design is consequently issued every few years. International reply coupons are sold by Hongkong Post for 19 HKD as of 19 October 2018. International reply coupons are sold by the Swiss Post in packs of 10 for 25 CHF. The Royal Mail stopped selling IRCs on 31 December 2011 due to a lack of demand. The United States Postal Service stopped selling international reply coupons on January 27, 2013. Thailand Post sells IRCs for 53 THB as of 2020. In 1920, Charles Ponzi made use of the idea that profit could be made by taking advantage of the differing postal rates in different countries to buy IRCs cheaply in one country and exchange them for stamps of a higher value in another country. This idea subsequently became the basis of the fraudulent Ponzi scheme. In practice, the overhead on buying and selling large numbers of the very low-value IRCs precluded any profitability. The selling price and exchange value in stamps in each country have been adjusted to some extent to remove some of the potential for profit, but ongoing fluctuations in currency value and exchange rates make it impossible to achieve this completely, as long as stamps represent a specific currency value instead of acting as vouchers granting specific postal services, devoid of currency denomination.
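The arbitrage Ponzi advertised, and the reason it fails in practice, can be shown with a toy calculation. All of the numbers below are invented purely for illustration; they are not historical prices.

```python
# Toy model of the IRC arbitrage, with entirely invented numbers.
coupons = 1_000
buy_price = 0.01     # hypothetical purchase price abroad, per coupon (USD)
redeem_value = 0.05  # hypothetical value of postage received per coupon
overhead = 0.06      # hypothetical per-coupon cost of buying, shipping,
                     # redeeming, and reselling the stamps

gross_margin = (redeem_value - buy_price) * coupons
net = (redeem_value - buy_price - overhead) * coupons
print(f"Gross margin: ${gross_margin:.2f}")  # looks profitable on paper
print(f"Net after overhead: ${net:.2f}")     # negative: overhead wins
```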
https://en.wikipedia.org/wiki?curid=15058
Isaac Bonewits Phillip Emmons Isaac Bonewits (October 1, 1949 – August 12, 2010) was an American Neo-Druid who published a number of books on the subject of Neopaganism and magic. He was a public speaker, liturgist, singer and songwriter, and founder of the Neopagan organizations Ár nDraíocht Féin and the Aquarian Anti-Defamation League. Born in Royal Oak, Michigan, Bonewits had been heavily involved in occultism since the 1960s. Bonewits was born on October 1, 1949 in Royal Oak, Michigan, as the fourth of five children. His mother and father were Roman Catholics. Spending much of his childhood in Ferndale, he was moved at age 12 to San Clemente, California, where he spent a short time in a Catholic high school before he went back to public school to graduate from high school a year early. He enrolled at UC Berkeley in 1966 and graduated from the university in 1970 with a Bachelor of Arts in Magic, perhaps becoming the only person known to have received an academic degree in Magic from an accredited university. In 1966, while enrolled at UC Berkeley, Bonewits joined the Reformed Druids of North America (RDNA). Bonewits was ordained as a Neo-druid priest in 1969. During this period, the 18-year-old Bonewits was also recruited by the Church of Satan, but left due to political and philosophical conflicts with Anton LaVey. During his stint in the Church of Satan, Bonewits appeared in some scenes of the 1970 documentary "Satanis: The Devil's Mass". Bonewits, in his article "My Satanic Adventure", asserts that the rituals in "Satanis" were staged for the movie at the behest of the filmmakers and were not authentic ceremonies. His first book, "Real Magic", was published in 1972. Between 1973 and 1975 Bonewits was employed as the editor of "Gnostica" magazine in Minnesota (published by Llewellyn Publications). He established an offshoot group of the Reformed Druids of North America (RDNA) called the Schismatic Druids of North America, and helped create a group called the Hasidic Druids of North America (despite, in his words, his "lifelong status as a gentile"). He also founded the short-lived Aquarian Anti-Defamation League (AADL), an early Pagan civil rights group. In 1976, Bonewits moved back to Berkeley and rejoined his original grove there, now part of the New Reformed Druids of North America (NRDNA). He was later elected Archdruid of the Berkeley Grove. Throughout his life Bonewits had varying degrees of involvement with occult groups including Gardnerian Wicca and the New Reformed Orthodox Order of the Golden Dawn (a Wiccan organization not to be confused with the Hermetic Order of the Golden Dawn). Bonewits was a regular presenter at Neopagan conferences and festivals all over the US, as well as attending gaming conventions in the Bay Area. He promoted his book "Authentic Thaumaturgy" to gamers as a way of organizing Dungeons and Dragons games and of giving a magical background to role-playing games. In 1983, Bonewits founded Ár nDraíocht Féin (also known as "A Druid Fellowship" or ADF), which was incorporated in 1990 in the state of Delaware as a U.S. 501(c)(3) non-profit organization. Although illness curtailed many of his activities and travels for a time, he remained Archdruid of ADF until 1996. In that year, he resigned from the position of Archdruid but retained the lifelong title of ADF Archdruid Emeritus.
A songwriter, singer, and recording artist, he produced two CDs of pagan music and numerous recorded lectures and panel discussions, produced and distributed by the Association for Consciousness Exploration. He lived in Rockland County, New York, and was a member of the Covenant of Unitarian Universalist Pagans (CUUPS). Bonewits encouraged charity programs to help Neopagan seniors, and in January 2006 was the keynote speaker at the Conference On Current Pagan Studies at the Claremont Graduate University in Claremont, CA. Bonewits was married five times. He was married to Rusty Elliot from 1973 to 1976. His second wife was Selene Kumin Vega, followed by Sally Eaton (1980 to 1985). His fourth wife was author Deborah Lipp, from 1988 to 1998. On July 23, 2004, he was married in a handfasting ceremony to Phaedra Heyman Bonewits, a former vice-president of the Covenant of Unitarian Universalist Pagans. At the time of the handfasting, the marriage was not yet legal because he had not yet been legally divorced from Lipp, although they had been separated for several years. The legal formalities were completed on December 31, 2007, making them legally married. Bonewits' only child, Arthur Shaffrey Lipp-Bonewits, was born to Deborah Lipp in 1990. In 1990, Bonewits was diagnosed with eosinophilia-myalgia syndrome. The illness was a factor in his eventual resignation from the position of Archdruid of the ADF. On October 25, 2009, Bonewits was diagnosed with a rare form of colon cancer, for which he underwent treatment. He died at home, on August 12, 2010, surrounded by his family. In his book "Real Magic" (1971), Bonewits proposed his "Laws of Magic", synthesized from a multitude of belief systems from around the world to explain and categorize magical beliefs within a cohesive framework. Many interrelationships exist among them, and some belief systems are subsets of others. This work was chosen by Dennis Wheatley in the 1970s to be part of his publishing project "Library of the Occult". Bonewits also coined much of the modern terminology used to articulate the themes and issues that affect the North American Neopagan community.
https://en.wikipedia.org/wiki?curid=15059
Intel 8080 The Intel 8080 ("eighty-eighty") is the second 8-bit microprocessor designed and manufactured by Intel. It first appeared in April 1974 and is an extended and enhanced variant of the earlier 8008 design, although without binary compatibility. The initial specified clock rate or frequency limit was 2 MHz; with common instructions taking 4, 5, 7, 10, or 11 cycles, this meant that it operated at a typical speed of a few hundred thousand instructions per second. A faster variant, the 8080A-1 (sometimes called the 8080B), became available later with a clock frequency limit of up to 3.125 MHz. The 8080 needs two support chips to function in most applications: the i8224 clock generator/driver and the i8228 bus controller. It is implemented in N-type metal-oxide-semiconductor logic (NMOS) using non-saturated enhancement-mode transistors as loads, thus demanding a +12 V and a −5 V supply in addition to the main transistor–transistor logic (TTL) compatible +5 V. Although earlier microprocessors were used for calculators, cash registers, computer terminals, industrial robots, and other applications, the 8080 became one of the first widespread microprocessors. Several factors contributed to its popularity: its 40-pin package made it easier to interface than the 18-pin 8008, and also made its data bus more efficient; its NMOS implementation gave it faster transistors than those of the P-type metal-oxide-semiconductor logic (PMOS) 8008, while also simplifying interfacing by making it TTL-compatible; a wider variety of support chips was available; its instruction set was enhanced over the 8008; and its full 16-bit address bus (versus the 14-bit one of the 8008) enabled it to access 64 KB of memory, four times more than the 8008's range of 16 KB. It became the engine of the Altair 8800 and subsequent S-100 bus personal computers, until it was replaced by the Z80 in this role, and was the original target CPU for the CP/M operating system developed by Gary Kildall. The 8080 was successful enough that translation compatibility at the assembly language level became a design requirement for the Intel 8086 when its design began in 1976, and led to the 8080 directly influencing all later variants of the ubiquitous 32-bit and 64-bit x86 architectures. The Intel 8080 is the successor to the 8008. It uses the same basic instruction set and register model as the 8008 (developed by Computer Terminal Corporation), even though it is neither source-code compatible nor binary-code compatible with its predecessor. Every instruction in the 8008 has an equivalent instruction in the 8080 (even though the opcodes differ between the two CPUs). The 8080 also adds a few 16-bit operations to its instruction set. Whereas the 8008 required the use of the HL register pair to indirectly access its 14-bit memory space, the 8080 added addressing modes to allow direct access to its full 16-bit memory space. In addition, the internal 7-level push-down call stack of the 8008 was replaced by a dedicated 16-bit stack-pointer (SP) register. The 8080's large 40-pin DIP packaging permits it to provide a 16-bit address bus and an 8-bit data bus, allowing easy access to 64 KiB of memory. The processor has seven 8-bit registers (A, B, C, D, E, H, and L), where A is the primary 8-bit accumulator, and the other six registers can be used as either individual 8-bit registers or as three 16-bit register pairs (BC, DE, and HL, referred to as B, D and H in Intel documents) depending on the particular instruction. 
Some instructions also enable the HL register pair to be used as a (limited) 16-bit accumulator, and a pseudo-register, M, can be used almost anywhere that any other register can be used, referring to the memory address pointed to by the HL pair. It also has a 16-bit stack pointer to memory (replacing the 8008's internal stack), and a 16-bit program counter. The processor maintains internal flag bits (a status register), which indicate the results of arithmetic and logical instructions. Only certain instructions affect the flags. The flags are: sign, zero, auxiliary carry, parity, and carry. The carry bit can be set or complemented by specific instructions. Conditional-branch instructions test the various flag status bits. The flags can be copied as a group to the accumulator. The A accumulator and the flags together are called the PSW register, or program status word. As with many other 8-bit processors, all instructions are encoded in one byte (including register numbers, but excluding immediate data), for simplicity. Some of them are followed by one or two bytes of data, which can be an immediate operand, a memory address, or a port number. Like larger processors, it has automatic CALL and RET instructions for multi-level procedure calls and returns (which can even be conditionally executed, like jumps) and instructions to save and restore any 16-bit register pair on the machine stack. There are also eight one-byte call instructions (RST) for subroutines located at the fixed addresses 00h, 08h, 10h, ..., 38h. These are intended to be supplied by external hardware in order to invoke a corresponding interrupt service routine, but are also often employed as fast system calls. The most sophisticated instruction is XTHL, which exchanges the register pair HL with the value stored at the address indicated by the stack pointer. Most 8-bit operations can only be performed on the 8-bit accumulator (the A register). For 8-bit operations with two operands, the other operand can be either an immediate value, another 8-bit register, or a memory byte addressed by the 16-bit register pair HL. Direct copying is supported between any two 8-bit registers and between any 8-bit register and an HL-addressed memory byte. Due to the regular encoding of the MOV instruction (using a quarter of available opcode space), there are redundant codes to copy a register into itself (MOV B,B, for instance), which are of little use, except for delays. However, what would have been a copy from the HL-addressed cell into itself (i.e., MOV M,M) is instead used to encode the halt (HLT) instruction, halting execution until an external reset or interrupt occurs. Although the 8080 is generally an 8-bit processor, it also has limited abilities to perform 16-bit operations: any of the three 16-bit register pairs (BC, DE, or HL, referred to as B, D, H in Intel documents) or SP can be loaded with an immediate 16-bit value (using LXI), incremented or decremented (using INX and DCX), or added to HL (using DAD). The XCHG instruction exchanges the values of the HL and DE register pairs. By adding HL to itself, it is possible to achieve the same result as a 16-bit arithmetic left shift with one instruction. The only 16-bit instructions that affect any flag are the DAD instructions, which set the CY (carry) flag to allow for programmed 24-bit or 32-bit arithmetic (or larger), needed to implement floating-point arithmetic, for instance. The 8080 supports up to 256 input/output (I/O) ports, accessed via dedicated I/O instructions taking port addresses as operands. 
This I/O mapping scheme is regarded as an advantage, as it frees up the processor's limited address space. Many CPU architectures instead use so-called memory-mapped I/O (MMIO), in which a common address space is used for both RAM and peripheral chips. This removes the need for dedicated I/O instructions, although a drawback in such designs may be that special hardware must be used to insert wait states, as peripherals are often slower than memory. However, in some simple 8080 computers, I/O devices are indeed addressed as if they were memory cells ("memory-mapped"), leaving the dedicated I/O instructions unused. I/O addressing can also sometimes employ the fact that the processor outputs the same 8-bit port address to both the lower and the higher address byte (an access to port 05h, for example, puts the address 0505h on the 16-bit address bus). Similar I/O-port schemes are used in the backward-compatible Zilog Z80 and Intel 8085, and in the closely related x86 microprocessor families. One of the bits in the processor state word (see below) indicates that the processor is accessing data from the stack. Using this signal, it is possible to implement a separate stack memory space; however, this feature is seldom used. For more advanced systems, during one phase of its working loop, the processor sets its "internal state byte" on the data bus. This byte contains flags that determine whether memory or an I/O port is accessed and whether it is necessary to handle an interrupt. The interrupt system state (enabled or disabled) is also output on a separate pin. For simple systems where the interrupts are not used, it is possible to find cases where this pin is used as an additional single-bit output port (the popular Radio-86RK computer made in the Soviet Union, for instance). A classic 8080/8085 assembler example is a subroutine named memcpy that copies a block of data bytes of a given size from one location to another; the data block is copied one byte at a time, and the data movement and looping logic use 16-bit operations (a C rendering of this logic is sketched at the end of this article). The address bus has its own 16 pins, and the data bus has 8 pins that are usable without any multiplexing. Using the two additional pins (read and write signals), it is possible to assemble simple microprocessor devices very easily. Only the separate I/O space, interrupts, and DMA need additional chips to decode the processor pin signals. However, the processor's load capacity is limited, and even simple computers often contain bus amplifiers. The processor needs three power sources (−5, +5, and +12 V) and two non-overlapping high-amplitude synchronizing signals. However, at least the late Soviet version КР580ВМ80А was able to work with a single +5 V power source, the +12 V pin being connected to +5 V and the −5 V pin to ground. The processor consumes about 1.3 W of power. A key factor in the success of the 8080 was the broad range of support chips available, providing serial communications, counter/timing, input/output, direct memory access, and programmable interrupt control amongst other functions. The 8080 integrated circuit uses non-saturated enhancement-load nMOS gates, demanding extra voltages (for the load-gate bias). It was manufactured in a silicon-gate process with a minimum feature size of 6 µm. 
A single layer of metal is used to interconnect the approximately 6,000 transistors in the design, but the higher-resistance polysilicon layer, which required a higher voltage for some interconnects, is implemented with the transistor gates. The die size is approximately 20 mm². The 8080 is used in many early microcomputers, such as the MITS Altair 8800 Computer, Processor Technology SOL-20 Terminal Computer and IMSAI 8080 Microcomputer, forming the basis for machines running the CP/M operating system (the later, almost fully compatible and more capable Zilog Z80 processor would capitalize on this, with the Z80 and CP/M becoming the dominant CPU and OS combination of the period circa 1976 to 1983, much as the x86 and DOS did for the PC a decade later). Even in 1979, after the introduction of the Z80 and 8085 processors, five manufacturers of the 8080 were selling an estimated 500,000 units per month at a price of around $3 to $4 each. The first single-board microcomputers, such as the MYCRO-1 and the "dyna-micro" / MMD-1 (see: Single-board computer), were based on the Intel 8080. One early use of the 8080, in the late 1970s, was by Cubic-Western Data of San Diego, CA, in its Automated Fare Collection Systems custom-designed for mass transit systems around the world. An early industrial use of the 8080 was as the "brain" of the DatagraphiX Auto-COM (Computer Output Microfiche) line of products, which takes large amounts of user data from reel-to-reel tape and images it onto microfiche. The Auto-COM instruments also include an entire automated film cutting, processing, washing, and drying sub-system; quite a feat, then as now, to accomplish with only an 8-bit microprocessor running at a clock speed of less than 1 MHz with a 64 KB memory limit. Also, several early video arcade games were built around the 8080 microprocessor, including "Space Invaders", one of the most popular arcade games ever made. Shortly after the launch of the 8080, the competing Motorola 6800 design was introduced, and after that the MOS Technology 6502, a derivative of the 6800. Zilog introduced the Z80, which has a compatible machine-language instruction set and initially used the same assembly language as the 8080; but for legal reasons, Zilog developed a syntactically different (but code-compatible) alternative assembly language for the Z80. At Intel, the 8080 was followed by the compatible and electrically more elegant 8085. Later, Intel issued the assembly-language-compatible (but not binary-compatible) 16-bit 8086 and then the 8/16-bit 8088, which IBM selected for its new PC, launched in 1981. NEC later made the NEC V20 (an 8088 clone with Intel 80186 instruction-set compatibility), which also supports an 8080 emulation mode, as does NEC's V30 (a similarly enhanced 8086 clone). Thus, the 8080, via its instruction set architecture (ISA), made a lasting impact on computer history. A number of processors compatible with the Intel 8080A were manufactured in the Eastern Bloc: the KR580VM80A (initially marked as KP580ИK80) in the Soviet Union, the MCY7880 made by Unitra CEMI in Poland, the MHB8080A made by TESLA in Czechoslovakia, the 8080APC made by Tungsram / MEV in Hungary, and the MMN8080 made by Microelectronica Bucharest in Romania. The 8080 is still in production at Lansdale Semiconductors. The 8080 also changed how computers were created. 
When the 8080 was introduced, computer systems were usually created by computer manufacturers such as Digital Equipment Corporation, Hewlett Packard, or IBM. A manufacturer would produce the whole computer, including the processor, terminals, and system software such as compilers and the operating system. The 8080 was designed for almost any application "except" a complete computer system. Hewlett Packard developed the HP 2640 series of smart terminals around the 8080. The HP 2647 is a terminal which runs the programming language BASIC on the 8080. Microsoft's founding product, a BASIC interpreter, was the first popular programming language for the 8080, and the company would later acquire DOS for the IBM PC. The 8080 and 8085 gave rise to the 8086, which was designed as a source-code compatible (although not binary compatible) extension of the 8085. This design, in turn, later spawned the x86 family of chips, the basis for most CPUs in use today. Many of the 8080's core machine instructions and concepts, for example, registers named "A", "B", "C", and "D", and many of the flags used to control conditional jumps, are still in use in the widespread x86 platform. 8080 assembly code can still be directly translated into x86 instructions; all of its core elements are still present. Federico Faggin, the originator of the 8080 architecture in early 1972, proposed it to Intel's management and pushed for its implementation, finally getting permission to develop it six months later. Faggin hired Masatoshi Shima from Japan in November 1972; Shima did the detailed design under Faggin's direction, using the design methodology for random logic with silicon gate that Faggin had created for the 4000 family. Stanley Mazor contributed a couple of instructions to the instruction set. Shima finished the layout in August 1973. Once the NMOS fabrication process had been tuned, a prototype of the 8080 was completed in January 1974. It had a flaw: driving it with standard TTL devices raised the ground voltage, because high current flowed through a narrow ground line. However, Intel had already produced 40,000 units of the 8080 at the direction of the sales section before Shima characterized the prototype. It was released as requiring low-power Schottky TTL (LS TTL) devices; the 8080A fixed this flaw. Intel offered an instruction set simulator for the 8080 named INTERP/80. It was written by Gary Kildall while he worked as a consultant for Intel.
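As promised above, here is a C rendering of the memcpy routine's logic: a byte-at-a-time copy driven by 16-bit pointer and counter arithmetic, as on the 8080. This is an illustrative sketch, not the original assembly listing; the comments name the 8080 instructions that would typically implement each step, and the register assignments mentioned (BC count, DE source, HL destination) are the conventional ones, given here as an assumption.

    #include <stdint.h>

    /* Byte-at-a-time block copy with 16-bit arithmetic, as on the 8080.
       A real 8080 routine would keep the count, source, and destination
       in register pairs (conventionally BC, DE, and HL). */
    void memcpy80(uint8_t *dst, const uint8_t *src, uint16_t count)
    {
        while (count != 0) {   /* 8080: MOV A,B / ORA C to test for zero */
            *dst++ = *src++;   /* LDAX, then MOV M,A; INX both pointers  */
            count--;           /* DCX: 16-bit decrement, sets no flags,  */
        }                      /* hence the explicit zero test above     */
    }

The explicit zero test mirrors a real quirk of the chip: the 16-bit DCX instruction does not update the flags, so the loop cannot simply branch on the result of the decrement.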
https://en.wikipedia.org/wiki?curid=15062
Intel 8086 The 8086 (also called iAPX 86) is a 16-bit microprocessor chip designed by Intel between early 1976 and June 8, 1978, when it was released. The Intel 8088, released July 1, 1979, is a slightly modified chip with an external 8-bit data bus (allowing the use of cheaper and fewer supporting ICs), and is notable as the processor used in the original IBM PC design. The 8086 gave rise to the x86 architecture, which eventually became Intel's most successful line of processors. On June 5, 2018, Intel released a limited-edition CPU celebrating the 40th anniversary of the Intel 8086, called the Intel Core i7-8086K. In 1972, Intel launched the 8008, the first 8-bit microprocessor. It implemented an instruction set designed by Datapoint Corporation with programmable CRT terminals in mind, which also proved to be fairly general-purpose. The device needed several additional ICs to produce a functional computer, in part because it was packaged in a small 18-pin "memory package", which ruled out the use of a separate address bus (Intel was primarily a DRAM manufacturer at the time). Two years later, Intel launched the 8080, employing the new 40-pin DIL packages originally developed for calculator ICs to enable a separate address bus. It has an extended instruction set that is source-compatible (not binary-compatible) with the 8008 and also includes some 16-bit instructions to make programming easier. The 8080 device was eventually replaced by the depletion-load-based 8085 (1977), which made do with a single +5 V power supply instead of the three different operating voltages of earlier chips. Other well-known 8-bit microprocessors that emerged during these years are the Motorola 6800 (1974), General Instrument PIC16X (1975), MOS Technology 6502 (1975), Zilog Z80 (1976), and Motorola 6809 (1978). The 8086 project started in May 1976 and was originally intended as a temporary substitute for the ambitious and delayed iAPX 432 project. It was an attempt to draw attention from the less-delayed 16- and 32-bit processors of other manufacturers (such as Motorola, Zilog, and National Semiconductor) and at the same time to counter the threat from the Zilog Z80 (designed by former Intel employees), which became very successful. Both the architecture and the physical chip were therefore developed rather quickly by a small group of people, using the same basic microarchitecture elements and physical implementation techniques as employed for the slightly older 8085 (for which the 8086 also would function as a continuation). Marketed as source-compatible, the 8086 was designed to allow assembly language for the 8008, 8080, or 8085 to be automatically converted into equivalent (suboptimal) 8086 source code, with little or no hand-editing. The programming model and instruction set are (loosely) based on the 8080 in order to make this possible. However, the 8086 design was expanded to support full 16-bit processing, instead of the fairly limited 16-bit capabilities of the 8080 and 8085. New kinds of instructions were added as well; full support for signed integers, base+offset addressing, and self-repeating operations were akin to the Z80 design but were all made slightly more general in the 8086. Instructions directly supporting nested ALGOL-family languages such as Pascal and PL/M were also added. According to principal architect Stephen P. 
Morse, this was a result of a more software-centric approach than in the design of earlier Intel processors (the designers had experience working with compiler implementations). Other enhancements included microcoded multiply and divide instructions and a bus structure better adapted to future coprocessors (such as the 8087 and 8089) and multiprocessor systems. The first revision of the instruction set and high-level architecture was ready after about three months, and as almost no CAD tools were used, four engineers and 12 layout people worked on the chip simultaneously. The 8086 took a little more than two years from idea to working product, which was considered rather fast for a complex design in 1976–1978. The 8086 was sequenced using a mixture of random logic and microcode and was implemented using depletion-load nMOS circuitry with approximately 20,000 active transistors (29,000 counting all ROM and PLA sites). It was soon moved to a new refined nMOS manufacturing process called HMOS (for High-performance MOS) that Intel originally developed for manufacturing fast static RAM products. This was followed by HMOS-II and HMOS-III versions, and, eventually, a fully static CMOS version for battery-powered devices, manufactured using Intel's CHMOS processes. The original chip measured 33 mm², and its minimum feature size was 3.2 μm. The architecture was defined by Stephen P. Morse, with help from Bruce Ravenel (the architect of the 8087) in refining the final revisions. Logic designers Jim McKevitt and John Bayliss were the lead engineers of the hardware-level development team, and Bill Pohlman was the manager for the project. The legacy of the 8086 is enduring in the basic instruction set of today's personal computers and servers; the 8086 also lent its last two digits to later extended versions of the design, such as the Intel 286 and the Intel 386, all of which eventually became known as the x86 family. (Another reference is that the PCI Vendor ID for Intel devices is 8086h.) All internal registers, as well as internal and external data buses, are 16 bits wide, which firmly established the "16-bit microprocessor" identity of the 8086. A 20-bit external address bus provides a 1 MB physical address space (2^20 = 1,048,576). This address space is addressed by means of internal memory "segmentation". The data bus is multiplexed with the address bus in order to fit all of the control lines into a standard 40-pin dual in-line package. It provides a 16-bit I/O address bus, supporting 64 KB of separate I/O space. The maximum linear address space is limited to 64 KB, simply because internal address/index registers are only 16 bits wide. Programming over 64 KB memory boundaries involves adjusting the segment registers (see below); this difficulty existed until the 80386 architecture introduced wider (32-bit) registers (the memory management hardware in the 80286 did not help in this regard, as its registers are still only 16 bits wide). Some of the control pins, which carry essential signals for all external operations, have more than one function depending upon whether the device is operated in "min" or "max" mode. The former mode is intended for small single-processor systems, while the latter is for medium or large systems using more than one processor (a kind of multiprocessor mode). Maximum mode is required when using an 8087 or 8089 coprocessor. The voltage on pin 33 (MN/MX) determines the mode. 
Changing the state of pin 33 changes the function of certain other pins, most of which have to do with how the CPU handles the (local) bus. The mode is usually hardwired into the circuit and therefore cannot be changed by software. The workings of these modes are described in terms of timing diagrams in Intel datasheets and manuals. In minimum mode, all control signals are generated by the 8086 itself. The 8086 has eight more or less general 16-bit registers (including the stack pointer but excluding the instruction pointer, flag register and segment registers). Four of them, AX, BX, CX, and DX, can also be accessed as twice as many 8-bit registers, while the other four, SI, DI, BP, and SP, are 16-bit only. Due to a compact encoding inspired by 8-bit processors, most instructions are one-address or two-address operations, which means that the result is stored in one of the operands. At most one of the operands can be in memory, but this memory operand can also be the "destination", while the other operand, the "source", can be either "register" or "immediate". A single memory location can also often be used as both "source" and "destination", which, among other factors, further contributes to a code density comparable to (and often better than) most eight-bit machines at the time. The degree of generality of most registers is much greater than in the 8080 or 8085. However, 8086 registers were more specialized than in most contemporary minicomputers and are also used implicitly by some instructions. While perfectly sensible for the assembly programmer, this makes register allocation for compilers more complicated compared to more orthogonal 16-bit and 32-bit processors of the time, such as the PDP-11, VAX, 68000, and 32016. On the other hand, being more regular than the rather minimalistic but ubiquitous 8-bit microprocessors such as the 6502, 6800, 6809, 8085, MCS-48, 8051, and other contemporary accumulator-based machines, it is significantly easier to construct an efficient code generator for the 8086 architecture. Another factor for this is that the 8086 also introduced some new instructions (not present in the 8080 and 8085) to better support stack-based high-level programming languages such as Pascal and PL/M; some of the more useful instructions are push "mem-op" and ret "size", supporting the "Pascal calling convention" directly. (Several others, such as push "immed" and enter, were added in the subsequent 80186, 80286, and 80386 processors.) A 64 KB (one segment) stack growing towards lower addresses is supported in hardware; 16-bit words are pushed onto the stack, and the top of the stack is pointed to by SS:SP. There are 256 interrupts, which can be invoked by both hardware and software. The interrupts can cascade, using the stack to store the return addresses. The 8086 has 64 K of 8-bit (or alternatively 32 K of 16-bit word) I/O port space. The 8086 has a 16-bit flags register, of which nine condition code and control flags are active: the Carry flag (CF), Parity flag (PF), Auxiliary carry flag (AF), Zero flag (ZF), Sign flag (SF), Trap flag (TF), Interrupt flag (IF), Direction flag (DF), and Overflow flag (OF). Also referred to as the status word, the flags register places CF in bit 0, PF in bit 2, AF in bit 4, ZF in bit 6, SF in bit 7, TF in bit 8, IF in bit 9, DF in bit 10, and OF in bit 11, with the remaining bits reserved. There are also four 16-bit segment registers that allow the 8086 CPU to access one megabyte of memory in an unusual way. 
Rather than concatenating the segment register with the address register, as in most processors whose address space exceeds their register size, the 8086 shifts the 16-bit segment only four bits left before adding it to the 16-bit offset (16×segment + offset), therefore producing a 20-bit external (or effective or physical) address from the 32-bit segment:offset pair; this arithmetic is illustrated in the short C sketch at the end of this article. As a result, each external address can be referred to by 2^12 = 4096 different segment:offset pairs. Although considered complicated and cumbersome by many programmers, this scheme also has advantages; a small program (less than 64 KB) can be loaded starting at a fixed offset (such as 0000) in its own segment, avoiding the need for relocation, with at most 15 bytes of alignment waste. Compilers for the 8086 family commonly support two types of pointer, "near" and "far". Near pointers are 16-bit offsets implicitly associated with the program's code or data segment and so can be used only within parts of a program small enough to fit in one segment. Far pointers are 32-bit segment:offset pairs resolving to 20-bit external addresses. Some compilers also support "huge" pointers, which are like far pointers except that pointer arithmetic on a huge pointer treats it as a linear 20-bit pointer, while pointer arithmetic on a far pointer wraps around within its 16-bit offset without touching the segment part of the address. To avoid the need to specify "near" and "far" on numerous pointers, data structures, and functions, compilers also support "memory models" which specify default pointer sizes. The "tiny" (max 64K), "small" (max 128K), "compact" (data > 64K), "medium" (code > 64K), "large" (code, data > 64K), and "huge" (individual arrays > 64K) models cover practical combinations of near, far, and huge pointers for code and data. The "tiny" model means that code and data are shared in a single segment, just as in most 8-bit based processors, and can be used to build ".com" files, for instance. Precompiled libraries often come in several versions compiled for different memory models. According to Morse et al., the designers actually contemplated using an 8-bit shift (instead of 4-bit), in order to create a 16 MB physical address space. However, as this would have forced segments to begin on 256-byte boundaries, and 1 MB was considered very large for a microprocessor around 1976, the idea was dismissed. Also, there were not enough pins available on a low-cost 40-pin package for the additional four address bus pins. In principle, the address space of the x86 series "could" have been extended in later processors by increasing the shift value, as long as applications obtained their segments from the operating system and did not make assumptions about the equivalence of different segment:offset pairs. In practice the use of "huge" pointers and similar mechanisms was widespread, and the flat 32-bit addressing made possible with the 32-bit offset registers in the 80386 eventually extended the limited addressing range in a more general way. Intel could have decided to implement memory in 16-bit words (which would have eliminated the BHE signal along with much of the address bus complexity already described). This would mean that all instruction object codes and data would have to be accessed in 16-bit units. Users of the 8080 long ago realized, in hindsight, that the processor makes very efficient use of its memory. 
By having a large number of 8-bit object codes, the 8080 produces object code as compact as some of the most powerful minicomputers on the market at the time. If the 8086 is to retain 8-bit object codes and hence the efficient memory use of the 8080, then it cannot guarantee that (16-bit) opcodes and data will lie on an even-odd byte address boundary. The first 8-bit opcode will shift the next 8-bit instruction to an odd byte or a 16-bit instruction to an odd-even byte boundary. By implementing the BHE signal and the extra logic needed, the 8086 allows instructions to exist as 1-byte, 3-byte or any other odd-byte object codes. Simply put, this is a trade-off: if memory addressing is simplified so that memory is only accessed in 16-bit units, memory will be used less efficiently. Intel decided to make the logic more complicated, but memory use more efficient. This was at a time when memory was considerably smaller, and more expensive, than what users are accustomed to today. Small programs could ignore the segmentation and just use plain 16-bit addressing. This allows 8-bit software to be quite easily ported to the 8086. The authors of most DOS implementations took advantage of this by providing an Application Programming Interface very similar to CP/M, as well as including the simple ".com" executable file format, identical to CP/M's. This was important when the 8086 and MS-DOS were new, because it allowed many existing CP/M (and other) applications to be quickly made available, greatly easing acceptance of the new platform. The standard 8086/8088 assembler example is a subroutine named memcpy that copies a block of data bytes of a given size from one location to another, one byte at a time, with data movement and looping logic that use 16-bit operations. Such a routine uses the BP (base pointer) register to establish a call frame, an area on the stack that contains all of the parameters and local variables for the execution of the subroutine. This kind of calling convention supports reentrant and recursive code, and has been used by most ALGOL-like languages since the late 1950s. An explicit loop is, however, a rather cumbersome way to copy blocks of data; the 8086 provides dedicated instructions for copying strings of bytes. These instructions assume that the source data is stored at DS:SI, that the destination data is stored at ES:DI, and that the number of elements to copy is stored in CX. (The copy requires the source and the destination block to be in the same segment, so DS is copied to ES.) The loop body can be replaced by a single REP MOVSB instruction, which copies the block of data one byte at a time: the REP prefix causes the following MOVSB to repeat until CX is zero, automatically incrementing SI and DI and decrementing CX as it repeats. Alternatively the MOVSW instruction can be used to copy 16-bit words (double bytes) at a time (in which case CX counts the number of words copied instead of the number of bytes). Most assemblers will properly recognize REP if used as an in-line prefix to the MOVSB instruction, as in REP MOVSB. This routine will operate correctly if interrupted, because the program counter will continue to point to the REP instruction until the block copy is completed. The copy will therefore continue from where it left off when the interrupt service routine returns control. 
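The semantics of the REP MOVSB copy just described can be sketched in C. This emulation is illustrative only; it models the 1 MB physical address space as a byte array, assumes the direction flag is clear (so SI and DI increment), and the function name is invented for this example.

    #include <stdint.h>

    /* Sketch of REP MOVSB with DF = 0: copy one byte from DS:SI to ES:DI,
       incrementing SI and DI and decrementing CX until CX reaches zero. */
    void rep_movsb(uint8_t *mem,              /* 1 MB physical memory     */
                   uint16_t ds, uint16_t es,
                   uint16_t *si, uint16_t *di, uint16_t *cx)
    {
        while (*cx != 0) {
            uint32_t srce = (((uint32_t)ds << 4) + *si) & 0xFFFFF;
            uint32_t dest = (((uint32_t)es << 4) + *di) & 0xFFFFF;
            mem[dest] = mem[srce];
            (*si)++;    /* 16-bit registers wrap at 64 KB, as on the CPU */
            (*di)++;
            (*cx)--;
        }
    }

Interruptibility falls out of this structure: each iteration leaves SI, DI, and CX consistent, so the copy can be suspended and resumed at any byte boundary, which is what the IP-points-at-REP behaviour achieves in hardware.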
Although partly shadowed by other design choices in this particular chip, the multiplexed address and data buses limit performance slightly; transfers of 16-bit or 8-bit quantities are done in a four-clock memory access cycle, which is faster for 16-bit quantities, though slower for 8-bit quantities, compared to many contemporary 8-bit based CPUs. As instructions vary from one to six bytes, fetch and execution are made concurrent and decoupled into separate units (as remains the case in today's x86 processors): the "bus interface unit" feeds the instruction stream to the "execution unit" through a 6-byte prefetch queue (a form of loosely coupled pipelining), speeding up operations on registers and immediates, while memory operations became slower (four years later, this performance problem was fixed with the 80186 and 80286). However, the full (instead of partial) 16-bit architecture with a full-width ALU meant that 16-bit arithmetic instructions could now be performed with a single ALU cycle (instead of two, via internal carry, as in the 8080 and 8085), speeding up such instructions considerably. Combined with orthogonalizations of operations versus operand types and addressing modes, as well as other enhancements, this made the performance gain over the 8080 or 8085 fairly significant, despite cases where the older chips may be faster (see below). Operations on registers and immediates were fast (between 2 and 4 cycles), while memory-operand instructions and jumps were quite slow; jumps took more cycles than on the simple 8080 and 8085, and the 8088 (used in the IBM PC) was additionally hampered by its narrower bus. The reasons most memory-related instructions were slow were threefold: the loosely coupled execution and bus interface units, the lack of dedicated address-calculation hardware (effective addresses were computed in the main ALU), and the four-clock bus cycle. However, memory access performance was drastically enhanced with Intel's next generation of 8086-family CPUs: the 80186 and 80286 both had dedicated address-calculation hardware, saving many cycles, and the 80286 also had separate (non-multiplexed) address and data buses. The 8086/8088 could be connected to a mathematical coprocessor to add hardware/microcode-based floating-point performance. The Intel 8087 was the standard math coprocessor for the 8086 and 8088, operating on 80-bit numbers. Manufacturers like Cyrix (8087-compatible) and Weitek ("not" 8087-compatible) eventually came up with high-performance floating-point coprocessors that competed with the 8087. The clock frequency was originally limited to 5 MHz, but the last versions in HMOS were specified for 10 MHz. HMOS-III and CMOS versions were manufactured for a long time (at least a while into the 1990s) for embedded systems, although its successor, the 80186/80188 (which includes some on-chip peripherals), has been more popular for embedded use. The 80C86, the CMOS version of the 8086, was used in the GRiDPad, Toshiba T1200, HP 110, and finally the 1998–1999 Lunar Prospector. For packaging, the Intel 8086 was available both in ceramic and plastic DIP packages. Compatible—and, in many cases, enhanced—versions were manufactured by Fujitsu, Harris/Intersil, OKI, Siemens AG, Texas Instruments, NEC, Mitsubishi, and AMD. 
For example, the NEC V20 and NEC V30 pair were hardware-compatible with the 8088 and 8086 (NEC also made straight Intel clones, the μPD8088D and μPD8086D respectively), but incorporated the instruction set of the 80186 along with some (but not all) of the 80186 speed enhancements, providing a drop-in capability to upgrade both instruction set and processing speed without manufacturers having to modify their designs. Such relatively simple and low-power 8086-compatible processors in CMOS are still used in embedded systems. The electronics industry of the Soviet Union was able to replicate the 8086 through both industrial espionage and reverse engineering. The resulting chip, K1810VM86, was binary- and pin-compatible with the 8086. The i8086 and i8088 were, respectively, the cores of the Soviet-made PC-compatible EC1831 and EC1832 desktops. (EC1831 is the EC identification of the IZOT 1036C and EC1832 is the EC identification of the IZOT 1037C, developed and manufactured in Bulgaria. EC stands for Единая Система, "Unified System".) However, the EC1831 computer (IZOT 1036C) had significant hardware differences from the IBM PC prototype. The EC1831 was the first PC-compatible computer with dynamic bus sizing (US Pat. No 4,831,514). Later some of the EC1831 principles were adopted in the PS/2 (US Pat. No 5,548,786) and some other machines (UK Patent Application, Publication No. GB-A-2211325, Published June 28, 1989).
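The segment arithmetic referenced earlier can be made concrete with a short C sketch. The function names here are invented for illustration; the shift-and-add is, however, exactly the 16×segment + offset rule described above.

    #include <stdint.h>

    /* 20-bit physical address from a segment:offset pair. */
    uint32_t phys_addr(uint16_t seg, uint16_t off)
    {
        return (((uint32_t)seg << 4) + off) & 0xFFFFF;
    }

    /* "Huge" pointer arithmetic renormalizes the pair so linear steps
       work; "far" arithmetic would only wrap the 16-bit offset. */
    void huge_add(uint16_t *seg, uint16_t *off, uint32_t delta)
    {
        uint32_t lin = (phys_addr(*seg, *off) + delta) & 0xFFFFF;
        *seg = (uint16_t)(lin >> 4);   /* canonical form: offset in 0..15 */
        *off = (uint16_t)(lin & 0xF);
    }

For example, phys_addr(0x1234, 0x0005) and phys_addr(0x1000, 0x2345) both yield 0x12345, two of the 4,096 segment:offset aliases of that physical address.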
https://en.wikipedia.org/wiki?curid=15063
Intel 8088 The Intel 8088 ("eighty-eighty-eight", also called iAPX 88) microprocessor is a variant of the Intel 8086. Introduced on June 1, 1979, the 8088 had an eight-bit external data bus instead of the 16-bit bus of the 8086. The 16-bit registers and the one-megabyte address range were unchanged, however. In fact, according to the Intel documentation, the 8086 and 8088 have the same execution unit (EU)—only the bus interface unit (BIU) is different. The original IBM PC was based on the 8088, as were its clones. The 8088 was designed at Intel's laboratory in Haifa, Israel, as were a large number of Intel's processors. The 8088 was targeted at economical systems by allowing the use of an eight-bit data path and eight-bit support and peripheral chips; complex circuit boards were still fairly cumbersome and expensive when it was released. The prefetch queue of the 8088 was shortened to four bytes, from the 8086's six bytes, and the prefetch algorithm was slightly modified to adapt to the narrower bus. These modifications of the basic 8086 design were one of the first jobs assigned to Intel's then-new design office and laboratory in Haifa. Variants of the 8088 with more than 5 MHz maximum clock frequency include the 8088-2, which was fabricated using Intel's new enhanced nMOS process called HMOS and specified for a maximum frequency of 8 MHz. Later followed the 80C88, a fully static CHMOS design, which could operate at clock speeds from 0 to 8 MHz. There were also several other, more or less similar, variants from other manufacturers. For instance, the NEC V20 was a pin-compatible and slightly faster (at the same clock frequency) variant of the 8088, designed and manufactured by NEC. Later NEC 8088-compatible processors would run at up to 16 MHz. In 1984, Commodore International signed a deal to manufacture the 8088 for use in a licensed Dynalogic Hyperion clone, in a move that was regarded as signaling a major new direction for the company. When announced, the list price of the 8088 was US$124.80. The 8088 is architecturally very similar to the 8086. The main difference is that there are only eight data lines instead of the 8086's 16 lines. All of the other pins of the device perform the same function as they do with the 8086, with two exceptions. First, pin 34 is no longer BHE (this is the high-order byte select on the 8086; the 8088 does not have a high-order byte on its eight-bit data bus). Instead it outputs a bus status signal, SS0; combined with the IO/M and DT/R signals, the bus cycles can be decoded (it generally indicates when a write operation or an interrupt is in progress). The second change is that the pin signalling whether a memory access or an input/output access is being made has had its sense reversed: the pin on the 8088 is IO/M, while on the 8086 part it is M/IO. The reason for the reversal is that it makes the 8088 compatible with the 8085. Depending on the clock frequency, the number of memory wait states, and the characteristics of the particular application program, the "average" performance of the Intel 8088 ranged approximately from 0.33 to 1 million instructions per second. Meanwhile, the fastest instructions, taking two or three cycles, yielded an "absolute peak" performance of between 1/3 and 1/2 MIPS per MHz, that is, somewhere in the range of 3–5 MIPS at 10 MHz. 
The speed of the execution unit (EU) and the bus of the 8086 CPU was well balanced; with a typical instruction mix, an 8086 could execute instructions out of the prefetch queue a good bit of the time. Cutting the bus down to eight bits made it a serious bottleneck in the 8088. With the speed of instruction fetch reduced by 50% in the 8088 as compared to the 8086, a sequence of fast instructions can quickly drain the four-byte prefetch queue. When the queue is empty, instructions take as long to complete as they take to fetch. Both the 8086 and 8088 take four clock cycles to complete a bus cycle; whereas for the 8086 this means four clocks to transfer two bytes, on the 8088 it is four clocks per byte. Therefore, for example, a two-byte shift or rotate instruction, which takes the EU only two clock cycles to execute, actually takes eight clock cycles to complete if it is not in the prefetch queue. A sequence of such fast instructions prevents the queue from being filled as fast as it is drained, and in general, because so many basic instructions execute in fewer than four clocks per instruction byte (including almost all the ALU and data-movement instructions on register operands, and some of these on memory operands), it is practically impossible to avoid idling the EU in the 8088 at least a quarter of the time while executing useful real-world programs, and it is not hard to idle it half the time. In short, an 8088 typically runs about half as fast as an 8086 clocked at the same rate, because of the bus bottleneck (the only major difference). A side effect of the 8088 design, with the slow bus and the small prefetch queue, is that the speed of code execution can be very dependent on instruction order. When programming the 8088, for CPU efficiency, it is vital to interleave long-running instructions with short ones whenever possible. For example, a repeated string operation or a shift by three or more will take long enough to allow time for the four-byte prefetch queue to fill completely. If short instructions (i.e. ones totaling few bytes) are placed between slower instructions like these, the short ones can execute at full speed out of the queue. If, on the other hand, the slow instructions are executed sequentially, back to back, then after the first of them the bus unit is forced to idle because the queue is already full, with the consequence that more of the faster instructions that follow will suffer fetch delays that might have been avoided. As some instructions, such as single-bit-position shifts and rotates, take literally four times as long to fetch as to execute, the overall effect can be a slowdown by a factor of two or more. If those code segments are the bodies of loops, the difference in execution time may be very noticeable on the human timescale. The 8088 is also (like the 8086) slow at accessing memory. The same ALU that is used to execute arithmetic and logic instructions is also used to calculate effective addresses. There is a separate adder for adding a shifted segment register to the offset address, but the offset EA itself is always calculated entirely in the main ALU. Furthermore, the loose coupling of the EU and BIU (bus unit) inserts communication overhead between the units, and the four-clock-period bus transfer cycle is not particularly streamlined. Contrast this with the two-clock-period bus cycle of the 6502 CPU and the 80286's three-clock-period bus cycle with pipelining down to two cycles for most transfers. 
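The fetch bottleneck just described is simple arithmetic, as the back-of-the-envelope sketch below shows; the 2.5-byte average instruction length is an assumed round figure for illustration, not a measured value.

    #include <stdio.h>

    int main(void)
    {
        /* 8088: the bus delivers one byte per four clock cycles. */
        double clock_hz      = 4.77e6;            /* IBM PC clock     */
        double bytes_per_sec = clock_hz / 4.0;
        double avg_instr_len = 2.5;               /* assumed average  */

        printf("fetch-limited rate: ~%.2f million instructions/s\n",
               bytes_per_sec / avg_instr_len / 1e6);
        return 0;
    }

The result, roughly 0.48 million instructions per second, sits comfortably inside the 0.33 to 1 MIPS average quoted earlier, underscoring that on the 8088 the bus, not the EU, usually sets the pace.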
Most 8088 instructions that can operate on either registers or memory, including common ALU and data-movement operations, are at least four times slower for memory operands than for register-only operands. Therefore, efficient 8088 (and 8086) programs avoid repeated access of memory operands when possible, loading operands from memory into registers to work with them there and storing back only the finished results. The relatively large general register set of the 8088 compared to its contemporaries assists this strategy. When there are not enough registers for all the variables needed at once, saving registers by pushing them onto the stack and popping them back to restore them is the fastest way to use memory to augment the registers, as the stack PUSH and POP instructions are the fastest memory operations. The same is probably not true on the 80286 and later; they have dedicated address ALUs and perform memory accesses much faster than the 8088 and 8086. Finally, because calls, jumps, and interrupts reset the prefetch queue, and because loading the IP register requires communication between the EU and the BIU (since the IP register is in the BIU, not in the EU, where the general registers are), these operations are costly. All jumps and calls take at least 15 clock cycles. Any conditional jump requires four clock cycles if not taken, but if taken, it requires 16 cycles in addition to resetting the prefetch queue; therefore, conditional jumps should be arranged to be not taken most of the time, especially inside loops. In some cases, a sequence of logic and movement operations is faster than a conditional jump that skips over one or two instructions to achieve the same result. Intel datasheets for the 8086 and 8088 advertised the dedicated multiply and divide instructions (MUL, IMUL, DIV, and IDIV), but they are very slow, on the order of 100–200 clock cycles each. Many simple multiplications by small constants (besides powers of 2, for which shifts can be used) can be done much faster using dedicated short subroutines. The 80286 and 80386 each greatly increased the execution speed of these multiply and divide instructions. The original IBM PC was the most influential microcomputer to use the 8088. It used a clock frequency of 4.77 MHz (4/3 the NTSC colorburst frequency). Some of IBM's engineers and other employees wanted to use the IBM 801 processor, some would have preferred the new Motorola 68000, while others argued for a small and simple microprocessor, such as the MOS Technology 6502 or Zilog Z80, which had been used in earlier personal computers. However, IBM already had a history of using Intel chips in its products and had also acquired the rights to manufacture the 8086 family. IBM chose the 8088 over the 8086 because Intel offered a better price for the former and could supply more units. Another factor was that the 8088 allowed the computer to be based on a modified 8085 design, as it could easily interface with most nMOS chips with 8-bit data buses; that is, with existing, mature, and therefore economical components. This included ICs originally intended for support and peripheral functions around the 8085 and similar processors (not exclusively Intel's), which were already well known by many engineers, further reducing cost. The descendants of the 8088 include the 80188, 80186, 80286, 80386, 80486, and later software-compatible processors, which are in use today.
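To illustrate the point above about replacing the slow MUL with shifts and adds: a multiplication by 10 decomposes as x*10 = x*8 + x*2, two shifts and one add. The helper below is a sketch of the technique in C, not Intel code; on an 8086/8088 the same three operations would be a handful of register instructions totalling far fewer than the 100-plus cycles of MUL.

    #include <stdint.h>

    /* x * 10 = (x << 3) + (x << 1): two shifts and an add. */
    static inline uint16_t mul10(uint16_t x)
    {
        return (uint16_t)((x << 3) + (x << 1));
    }

Similar decompositions cover most small constants, which is why hand-written multiply helpers were common in 8086-era code.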
https://en.wikipedia.org/wiki?curid=15064
Insulator (electricity) An electrical insulator is a material in which electric current does not flow freely: its atoms have tightly bound electrons, so internal electric charges cannot move about, and very little current will flow through it under the influence of an electric field. This contrasts with other materials, semiconductors and conductors, which conduct electric current more easily. The property that distinguishes an insulator is its resistivity; insulators have higher resistivity than semiconductors or conductors. The most common examples are non-metals. A perfect insulator does not exist, because even insulators contain small numbers of mobile charges (charge carriers) which can carry current. In addition, all insulators become electrically conductive when a sufficiently large voltage is applied that the electric field tears electrons away from the atoms. This is known as the breakdown voltage of an insulator. Some materials such as glass, paper and Teflon, which have high resistivity, are very good electrical insulators. A much larger class of materials, even though they may have lower bulk resistivity, are still good enough to prevent significant current from flowing at normally used voltages, and thus are employed as insulation for electrical wiring and cables. Examples include rubber-like polymers and most plastics, which can be thermoset or thermoplastic in nature. Insulators are used in electrical equipment to support and separate electrical conductors without allowing current through themselves. An insulating material used in bulk to wrap electrical cables or other equipment is called "insulation". The term "insulator" is also used more specifically to refer to insulating supports used to attach electric power distribution or transmission lines to utility poles and transmission towers. They support the weight of the suspended wires without allowing the current to flow through the tower to ground. Electrical insulation is the absence of electrical conduction. Electronic band theory (a branch of physics) dictates that a charge flows only if states are available into which electrons can be excited. This allows electrons to gain energy and thereby move through a conductor such as a metal. If no such states are available, the material is an insulator. Most (though not all; see Mott insulator) insulators have a large band gap. This occurs because the "valence" band containing the highest-energy electrons is full, and a large energy gap separates this band from the next band above it. There is always some voltage (called the breakdown voltage) that gives electrons enough energy to be excited into this band. Once this voltage is exceeded, the material ceases being an insulator and charge begins to pass through it. However, breakdown is usually accompanied by physical or chemical changes that permanently degrade the material's insulating properties. Materials that lack electron conduction are insulators only if they lack other mobile charges as well. For example, if a liquid or gas contains ions, then the ions can be made to flow as an electric current, and the material is a conductor. Electrolytes and plasmas contain ions and act as conductors whether or not electron flow is involved. When subjected to a high enough voltage, insulators suffer from the phenomenon of electrical breakdown. 
When the electric field applied across an insulating substance exceeds, at any location, the threshold breakdown field for that substance, the insulator suddenly becomes a conductor, causing a large increase in current, an electric arc through the substance. Electrical breakdown occurs when the electric field in the material is strong enough to accelerate free charge carriers (electrons and ions, which are always present at low concentrations) to a velocity high enough to knock electrons from atoms when they strike them, ionizing the atoms. These freed electrons and ions are in turn accelerated and strike other atoms, creating more charge carriers, in a chain reaction. Rapidly the insulator becomes filled with mobile charge carriers, and its resistance drops to a low level. In a solid, the breakdown voltage is proportional to the band gap energy. When corona discharge occurs, the air in a region around a high-voltage conductor can break down and ionise without a catastrophic increase in current. However, if the region of air breakdown extends to another conductor at a different voltage, it creates a conductive path between them, and a large current flows through the air, creating an "electric arc". Even a vacuum can suffer a sort of breakdown, but in this case the breakdown or vacuum arc involves charges ejected from the surface of metal electrodes rather than produced by the vacuum itself. In addition, all insulators become conductors at very high temperatures, as the thermal energy of the valence electrons is sufficient to put them in the conduction band. In certain capacitors, shorts between electrodes formed due to dielectric breakdown can disappear when the applied electric field is reduced. A very flexible coating of an insulator is often applied to electric wire and cable; this is called "insulated wire". Wires sometimes do not use an insulating coating, just air, since a solid (e.g. plastic) coating may be impractical. However, wires that touch each other produce cross connections, short circuits, and fire hazards. In coaxial cable, the center conductor must be supported exactly in the middle of the hollow shield to prevent EM wave reflections. Finally, wires carrying voltages higher than 60 V can pose human shock and electrocution hazards. Insulating coatings help to prevent all of these problems. Some wires have a mechanical covering with no voltage rating, e.g. service-drop, welding, doorbell, and thermostat wire. An insulated wire or cable has a voltage rating and a maximum conductor temperature rating. It may not have an ampacity (current-carrying capacity) rating, since this is dependent upon the surrounding environment (e.g. ambient temperature). In electronic systems, printed circuit boards are made from epoxy plastic and fibreglass. The nonconductive boards support layers of copper foil conductors. In electronic devices, the tiny and delicate active components are embedded within nonconductive epoxy or phenolic plastics, or within baked glass or ceramic coatings. In microelectronic components such as transistors and ICs, the silicon material is normally a conductor because of doping, but it can easily be selectively transformed into a good insulator by the application of heat and oxygen. Oxidised silicon is quartz, i.e. silicon dioxide, the primary component of glass. In high-voltage systems containing transformers and capacitors, liquid insulator oil is the typical method used for preventing arcs. 
The oil replaces air in spaces that must support significant voltage without electrical breakdown. Other high voltage system insulation materials include ceramic or glass wire holders, gas, vacuum, and simply placing wires far enough apart to use air as insulation. Overhead conductors for high-voltage electric power transmission are bare, and are insulated by the surrounding air. Conductors for lower voltages in distribution may have some insulation but are often bare as well. Insulating supports called "insulators" are required at the points where they are supported by utility poles or transmission towers. Insulators are also required where the wire enters buildings or electrical devices, such as transformers or circuit breakers, to insulate the wire from the case. These hollow insulators with a conductor inside them are called bushings. Insulators used for high-voltage power transmission are made from glass, porcelain or composite polymer materials. Porcelain insulators are made from clay, quartz or alumina and feldspar, and are covered with a smooth glaze to shed water. Insulators made from porcelain rich in alumina are used where high mechanical strength is a criterion. Porcelain has a dielectric strength of about 4–10 kV/mm. Glass has a higher dielectric strength, but it attracts condensation, and the thick irregular shapes needed for insulators are difficult to cast without internal strains. Some insulator manufacturers stopped making glass insulators in the late 1960s, switching to ceramic materials. Recently, some electric utilities have begun converting to polymer composite materials for some types of insulators. These are typically composed of a central rod made of fibre-reinforced plastic and an outer weathershed made of silicone rubber or ethylene propylene diene monomer rubber (EPDM). Composite insulators are less costly, lighter in weight, and have excellent hydrophobic capability. This combination makes them ideal for service in polluted areas. However, these materials do not yet have the long-term proven service life of glass and porcelain. The electrical breakdown of an insulator due to excessive voltage can occur in one of two ways: a puncture arc through the body of the insulator, or a flashover arc along its surface. Most high voltage insulators are designed with a lower flashover voltage than puncture voltage, so they flash over before they puncture, to avoid damage. Dirt, pollution, salt, and particularly water on the surface of a high voltage insulator can create a conductive path across it, causing leakage currents and flashovers. The flashover voltage can be reduced by more than 50% when the insulator is wet. High voltage insulators for outdoor use are shaped to maximise the length of the leakage path along the surface from one end to the other, called the creepage length, to minimise these leakage currents. To accomplish this, the surface is moulded into a series of corrugations or concentric disc shapes. These usually include one or more "sheds": downward-facing cup-shaped surfaces that act as umbrellas to ensure that the part of the surface leakage path under the "cup" stays dry in wet weather. Minimum creepage distances are 20–25 mm/kV, but must be increased in high pollution or airborne sea-salt areas. The common classes of insulators include pin, post, suspension, and strain types. Pin-type insulators are unsuitable for voltages greater than about 69 kV line-to-line. Higher transmission voltages use suspension insulator strings, which can be made for any practical transmission voltage by adding insulator elements to the string.
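As a worked example of the creepage figure just quoted (a sketch only: it treats the mm/kV rating as applying to the nominal voltage and ignores the pollution correction factors mentioned above):

    # Minimum creepage length from the 20-25 mm/kV rule of thumb above.
    def min_creepage_mm(voltage_kv, mm_per_kv=20.0):
        """Minimum surface leakage path, in mm, for a given voltage."""
        return voltage_kv * mm_per_kv

    for kv in (11, 66, 132):
        low, high = min_creepage_mm(kv, 20.0), min_creepage_mm(kv, 25.0)
        print(f"{kv} kV: {low:.0f}-{high:.0f} mm of creepage")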
Higher voltage transmission lines usually use modular suspension insulator designs. The wires are suspended from a "string" of identical disc-shaped insulators that attach to each other with metal clevis pin or ball-and-socket links. The advantage of this design is that insulator strings with different breakdown voltages, for use with different line voltages, can be constructed by using different numbers of the basic units. Also, if one of the insulator units in the string breaks, it can be replaced without discarding the entire string. Each unit is constructed of a ceramic or glass disc with a metal cap and pin cemented to opposite sides. To make defective units obvious, glass units are designed so that an overvoltage causes a puncture arc through the glass instead of a flashover. The glass is heat-treated so it shatters, making the damaged unit visible. However, the mechanical strength of the unit is unchanged, so the insulator string stays together. Standard suspension disc insulator units can support a load of 80–120 kN (18–27 klbf), have a dry flashover voltage of about 72 kV, and are rated at an operating voltage of 10–12 kV. However, the flashover voltage of a string is less than the sum of its component discs, because the electric field is not distributed evenly across the string but is strongest at the disc nearest to the conductor, which flashes over first. Metal "grading rings" are sometimes added around the disc at the high voltage end, to reduce the electric field across that disc and improve flashover voltage. In very high voltage lines the insulator may be surrounded by corona rings. These typically consist of toruses of aluminium (most commonly) or copper tubing attached to the line. They are designed to reduce the electric field at the point where the insulator is attached to the line, to prevent corona discharge, which results in power losses. The first electrical systems to make use of insulators were telegraph lines; direct attachment of wires to wooden poles was found to give very poor results, especially during damp weather. The first glass insulators used in large quantities had an unthreaded pinhole. These pieces of glass were positioned on a tapered wooden pin extending vertically upwards from the pole's crossarm (commonly only two insulators to a pole, and maybe one on top of the pole itself). Natural contraction and expansion of the wires tied to these "threadless insulators" resulted in insulators unseating from their pins, requiring manual reseating. Amongst the first to produce ceramic insulators were companies in the United Kingdom, with Stiff and Doulton using stoneware from the mid-1840s, Joseph Bourne (later renamed Denby) producing them from around 1860, and Bullers from 1868. Utility patent number 48,906 was granted to Louis A. Cauvet on 25 July 1865 for a process to produce insulators with a threaded pinhole: pin-type insulators still have threaded pinholes. The invention of suspension-type insulators made high-voltage power transmission possible. As transmission line voltages reached and passed 60,000 volts, the insulators required became very large and heavy; insulators made for a safety margin of 88,000 volts were about the practical limit for manufacturing and installation. Suspension insulators, on the other hand, can be connected into strings as long as required for the line's voltage.
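The string-sizing principle described above can be sketched numerically. The following is a rough, illustrative calculation only, using the 10–12 kV per-disc operating rating quoted earlier and ignoring the pollution, altitude and uneven-field effects just discussed:

    import math

    def discs_needed(line_to_ground_kv, kv_per_disc=11.0):
        """Rough count of identical discs for a given line-to-ground voltage."""
        return math.ceil(line_to_ground_kv / kv_per_disc)

    for line_kv in (115, 230, 400):
        lg = line_kv / math.sqrt(3)  # line-to-line voltage -> line-to-ground
        print(f"{line_kv} kV line: about {discs_needed(lg)} discs per string")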
A large variety of telephone, telegraph and power insulators have been made; some people collect them, both for their historic interest and for the aesthetic quality of many insulator designs and finishes. One collectors' organisation is the US National Insulator Association, which has over 9,000 members. Often a broadcasting radio antenna is built as a mast radiator, which means that the entire mast structure is energised with high voltage and must be insulated from the ground. Steatite mountings are used for this purpose. They have to withstand not only the voltage of the mast radiator to ground, which can reach values up to 400 kV at some antennas, but also the weight of the mast construction and dynamic forces. Arcing horns and lightning arresters are necessary because lightning strikes to the mast are common. Guy wires supporting antenna masts usually have strain insulators inserted in the cable run, to keep the high voltages on the antenna from short-circuiting to ground or creating a shock hazard. Often guy cables have several insulators, placed to break up the cable into lengths that prevent unwanted electrical resonances in the guy. These insulators are usually ceramic and cylindrical or egg-shaped. This construction has the advantage that the ceramic is under compression rather than tension, so it can withstand greater load, and that if the insulator breaks, the cable ends are still linked. These insulators also have to be equipped with overvoltage protection equipment. For the dimensions of the guy insulation, static charges on guys have to be considered. For high masts, these can be much higher than the voltage caused by the transmitter, requiring the guys on the highest masts to be divided by insulators into multiple sections. In this case, guys which are grounded at the anchor basements via a coil (or, if possible, directly) are the better choice. Feedlines attaching antennas to radio equipment, particularly twin lead type, often must be kept at a distance from metal structures. The insulated supports used for this purpose are called "standoff insulators". The most important insulation material is air. A variety of solid, liquid, and gaseous insulators are also used in electrical apparatus. In smaller transformers, generators, and electric motors, insulation on the wire coils consists of up to four thin layers of polymer varnish film. Film-insulated magnet wire permits a manufacturer to obtain the maximum number of turns within the available space. Windings that use thicker conductors are often wrapped with supplemental fibreglass insulating tape. Windings may also be impregnated with insulating varnishes to prevent electrical corona and reduce magnetically induced wire vibration. Large power transformer windings are still mostly insulated with paper, wood, varnish, and mineral oil; although these materials have been used for more than 100 years, they still provide a good balance of economy and adequate performance. Busbars and circuit breakers in switchgear may be insulated with glass-reinforced plastic insulation, treated to have low flame spread and to prevent tracking of current across the material. In older apparatus made up to the early 1970s, boards made of compressed asbestos may be found; while this is an adequate insulator at power frequencies, handling or repairs to asbestos material can release dangerous fibres into the air and must be carried out cautiously. Wire insulated with felted asbestos was used in high-temperature and rugged applications from the 1920s.
Wire of this type was sold by General Electric under the trade name "Deltabeston". Live-front switchboards up to the early part of the 20th century were made of slate or marble. Some high voltage equipment is designed to operate within a high-pressure insulating gas such as sulfur hexafluoride. Insulation materials that perform well at power and low frequencies may be unsatisfactory at radio frequency, due to heating from excessive dielectric dissipation. Electrical wires may be insulated with polyethylene, crosslinked polyethylene (either through electron beam processing or chemical crosslinking), PVC, Kapton, rubber-like polymers, oil-impregnated paper, Teflon, silicone, or modified ethylene tetrafluoroethylene (ETFE). Larger power cables may use compressed inorganic powder, depending on the application. Flexible insulating materials such as PVC (polyvinyl chloride) are used to insulate conductors and prevent human contact with a "live" wire – one carrying a voltage of 600 volts or less. Alternative materials are likely to become increasingly used due to EU safety and environmental legislation making PVC less economic. All portable or hand-held electrical devices are insulated to protect their user from harmful shock. Class I insulation requires that the metal body and other exposed metal parts of the device be connected to earth via a "grounding wire" that is earthed at the main service panel, but only needs basic insulation on the conductors. This equipment needs an extra pin on the power plug for the grounding connection. Class II insulation means that the device is "double insulated". This is used on some appliances such as electric shavers, hair dryers and portable power tools. Double insulation requires that the devices have both basic and supplementary insulation, each of which is sufficient to prevent electric shock. All internal electrically energised components are totally enclosed within an insulated body that prevents any contact with "live" parts. In the EU, double-insulated appliances are all marked with a symbol of two squares, one inside the other.
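The quantitative difference between insulators and conductors that underlies all of the applications above is resistivity. A small illustrative sketch follows; the resistivity values are rough order-of-magnitude assumptions, not measured figures:

    # Leakage current through a uniform slab, via R = rho * L / A and
    # Ohm's law I = V / R. Resistivities are rough illustrative values.
    def leakage_current(voltage_v, resistivity_ohm_m, length_m, area_m2):
        """Current (A) through a slab of given resistivity and geometry."""
        resistance = resistivity_ohm_m * length_m / area_m2
        return voltage_v / resistance

    # A 1 cm cube at 230 V: glass (~1e12 ohm*m) passes picoamps, while
    # copper (~1.7e-8 ohm*m) would conduct essentially freely.
    for name, rho in (("glass", 1e12), ("copper", 1.7e-8)):
        print(f"{name}: {leakage_current(230.0, rho, 0.01, 0.0001):.2e} A")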
https://en.wikipedia.org/wiki?curid=15066
Internetworking Internetworking is the practice of interconnecting multiple computer networks, such that any pair of hosts in the connected networks can exchange messages irrespective of their hardware-level networking technology. The resulting system of interconnected networks is called an "internetwork", or simply an "internet". The most notable example of internetworking is the Internet, a network of networks based on many underlying hardware technologies. The Internet is defined by a unified global addressing system, packet format, and routing methods provided by the Internet Protocol. The term "internetworking" is a combination of the components "inter" ("between") and "networking". An earlier term for an internetwork is "catenet", a short form of "(con)catenating networks". Internetworking started as a way to connect disparate types of networking technology, but it became widespread through the developing need to connect two or more local area networks via some sort of wide area network. The first two interconnected networks were the ARPANET and the NPL network. The network elements used to connect individual networks in the ARPANET, the predecessor of the Internet, were originally called gateways, but the term has been deprecated in this context, because of possible confusion with functionally different devices. Today the interconnecting gateways are called routers. The definition of an internetwork today includes the connection of other types of computer networks such as personal area networks. To build an internetwork, the following are needed: a standardized scheme to address packets to any host on any participating network; a standardized protocol defining the format and handling of transmitted packets; and components interconnecting the participating networks by routing packets to their destinations based on standardized addresses. Another type of interconnection of networks often occurs within enterprises at the Link Layer of the networking model, i.e. at the hardware-centric layer below the level of the TCP/IP logical interfaces. Such interconnection is accomplished with network bridges and network switches. This is sometimes incorrectly termed internetworking, but the resulting system is simply a larger, single subnetwork, and no internetworking protocol, such as the Internet Protocol, is required to traverse these devices. However, a single computer network may be converted into an internetwork by dividing the network into segments, logically dividing the segment traffic with routers, and having an internetworking software layer that applications employ. The Internet Protocol is designed to provide an unreliable (not guaranteed) packet service across the network. The architecture avoids intermediate network elements maintaining any state of the network. Instead, this function is assigned to the endpoints of each communication session. To transfer data reliably, applications must utilize an appropriate Transport Layer protocol, such as Transmission Control Protocol (TCP), which provides a reliable stream. Some applications use a simpler, connectionless transport protocol, User Datagram Protocol (UDP), for tasks which do not require reliable delivery of data or that require real-time service, such as video streaming or voice chat (a short socket-level sketch of this choice appears at the end of this article). Catenet is an obsolete term for a system of packet-switched communication networks interconnected via gateways.
The term was coined by Louis Pouzin in October 1973 in a note circulated to the International Networking Working Group, later published in a 1974 paper, "A Proposal for Interconnecting Packet Switching Networks". Pouzin was a pioneer in packet-switching technology and founder of the CYCLADES network, at a time when "network" meant what is now called a local area network. Catenet was the concept of linking these networks into a "network of networks" with specifications for compatibility of addressing and routing. The term catenet was gradually displaced by the short form of the term internetwork, "internet" (lower-case "i"), when the Internet Protocol replaced earlier protocols on the ARPANET. Two architectural models are commonly used to describe the protocols and methods used in internetworking. The Open Systems Interconnection (OSI) reference model was developed under the auspices of the International Organization for Standardization (ISO) and provides a rigorous description for layering protocol functions from the underlying hardware to the software interface concepts in user applications. Internetworking is implemented in the Network Layer (Layer 3) of the model. The Internet Protocol Suite, also known as the TCP/IP model, was not designed to conform to the OSI model and does not refer to it in any of the normative specifications in Requests for Comments and Internet standards. Despite a similar appearance as a layered model, it has a much less rigorous, loosely defined architecture that concerns itself only with aspects of networking in its own historical provenance. It assumes the availability of any suitable hardware infrastructure, without discussing hardware-specific low-level interfaces, and assumes that a host has access to the local network to which it is connected via a Link Layer interface. For a period in the late 1980s and early 1990s, the network engineering community was polarized over the implementation of competing protocol suites, commonly known as the Protocol Wars. It was unclear which of the OSI model and the Internet protocol suite would result in the best and most robust computer networks.
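As noted earlier, IP itself provides only best-effort delivery, and applications pick a transport protocol to match their needs. A minimal sketch using Python's standard socket API (the address and port are placeholders from the TEST-NET documentation range, so the TCP connect would not actually succeed):

    import socket

    # TCP: connection-oriented, reliable, ordered byte stream.
    tcp = socket.socket(socket.AF_INET, socket.SOCK_STREAM)
    tcp.connect(("192.0.2.10", 8000))       # placeholder endpoint
    tcp.sendall(b"must arrive, in order")   # TCP retransmits and reorders
    tcp.close()

    # UDP: connectionless datagrams with no delivery guarantee.
    udp = socket.socket(socket.AF_INET, socket.SOCK_DGRAM)
    udp.sendto(b"best effort only", ("192.0.2.10", 8000))
    udp.close()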
https://en.wikipedia.org/wiki?curid=15067
Infantry Infantry is a military specialization that engages in ground combat on foot, distinguished from cavalry, artillery, and armoured forces. Also known as foot soldiers or infantrymen, infantry traditionally rely on moving on foot between combats as well, but may also use mounts, military vehicles, or other transport. Infantry make up a large portion of all armed forces in most nations, and typically bear the brunt of warfare, as measured by casualties, deprivation, or physical and psychological stress. The first military forces in history were infantry. In antiquity, infantry were armed with an early melee weapon such as a spear, axe or sword, or an early ranged weapon like a javelin, sling, or bow, with a few infantrymen having both a melee and a ranged weapon. With the development of gunpowder, infantry began converting primarily to firearms. By the time of Napoleonic warfare, infantry, cavalry, and artillery formed a basic triad of ground forces, though infantry usually remained the most numerous. With armoured warfare, armoured fighting vehicles have replaced the horses of cavalry, and airpower has added a new dimension to ground combat, but infantry remains pivotal to all modern combined arms operations. Infantry have much greater local situational awareness than other military forces, due to their inherent intimate contact with the battlefield ("boots on the ground"); this is vital for engaging and infiltrating enemy positions, holding and defending ground (any military objectives), securing battlefield victories, maintaining military area control and security both at and behind the front lines, capturing ordnance or materiel, taking prisoners, and military occupation. Infantry can more easily recognise, adapt and respond to local conditions, weather, and changing enemy weapons or tactics. They can operate in a wide range of terrain inaccessible to military vehicles, and can operate with a lower logistical burden. Infantry are the most easily delivered forces to ground combat areas, by simple and reliable marching, or by trucks, sea or air transport; they can also be inserted directly into combat by amphibious landing, or for air assault by parachute (airborne infantry) or helicopter (airmobile infantry). They can be augmented with a variety of crew-served weapons, armoured personnel carriers, and infantry fighting vehicles. In English, use of the term "infantry" began about the 1570s, describing soldiers who march and fight on foot. The word derives from Middle French "infanterie", from older Italian (also Spanish) "infanteria" (foot soldiers too inexperienced for cavalry), from Latin "īnfāns" (without speech, newborn, foolish), from which English also gets "infant". The individual-soldier term infantryman was not coined until 1837. In modern usage, foot soldiers of any era are now considered infantry and infantrymen. From the mid-18th century until 1881 the British Army named its infantry as numbered regiments "of Foot" to distinguish them from cavalry and dragoon regiments (see List of Regiments of Foot). Infantry equipped with special weapons were often named after that weapon, such as grenadiers for their grenades, or fusiliers for their "fusils". These names can persist long after the weapon speciality; examples of infantry units that retained such names are the Royal Irish Fusiliers and the Grenadier Guards.
More commonly in modern times, infantry with special tactics are named for their roles, such as commandos, rangers, snipers, and marines (who all have additional training), and militia (who have limited training); they are still infantry, because they are expected to fight as infantry when they enter combat. Dragoons were created as mounted infantry, with horses for travel between battles; they were still considered infantry since they dismounted before combat. However, if light cavalry was lacking in an army, any available dragoons might be assigned their duties; this practice increased over time, and dragoons eventually received all the weapons and training of both infantry and cavalry, and could be classified as both. Conversely, starting about the mid-19th century, regular cavalry were forced to spend more of their time dismounted in combat due to the ever-increasing effectiveness of enemy infantry firearms. Thus most cavalry transitioned to mounted infantry. As with grenadiers, the "dragoon" and "cavalry" designations can be retained long after the horses are gone, as in the Royal Dragoon Guards, Royal Lancers, and King's Royal Hussars. Similarly, motorised infantry have trucks and other unarmed vehicles for non-combat movement, but are still infantry since they leave their vehicles for any combat. Most modern infantry have vehicle transport, to the point where infantry being motorised is generally assumed, and the few exceptions might be identified as modern "light infantry", or colloquially "leg infantry". Mechanised infantry go beyond motorised, having transport vehicles with combat abilities, armoured personnel carriers (APCs), providing at least some options for combat without leaving their vehicles. In modern infantry, some APCs have evolved into infantry fighting vehicles (IFVs), which are transport vehicles with more substantial combat abilities, approaching those of light tanks. Some well-equipped mechanised infantry can be designated as "armoured infantry". Given that infantry forces typically also have some tanks, and given that most armoured forces have more mechanised infantry units than tank units in their organisation, the distinction between mechanised infantry and armour forces has blurred. The terms "infantry", "armour", and "cavalry" used in the official names for military units like divisions, brigades, or regiments might be better understood as a description of their expected balance of defensive, offensive, and mobility roles, rather than just use of vehicles. Some modern mechanised infantry units are termed "cavalry" or "armoured cavalry", even though they never had horses, to emphasise their combat mobility. In the modern US Army, about 15% of soldiers are officially "Infantry". The basic training for all new US Army soldiers includes use of infantry weapons and tactics, even for tank crews, artillery crews, and base and logistical personnel. The first warriors, using hunting weapons or improvised melee weapons before the existence of any organised military, likely fought as loose groups without any organisation or formation. But this changed sometime before recorded history; the first ancient empires (2500–1500 BC) are shown to have had some soldiers with standardised military equipment, and the training and discipline required for battlefield formations and manoeuvres: regular infantry.
Though they formed the main force of the army, such regular troops were usually kept small in number due to the cost of their training and upkeep, and might be supplemented by local short-term mass-conscript forces using the older irregular infantry weapons and tactics; this remained a common practice almost up to modern times. Before the adoption of the chariot to create the first mobile fighting forces, all armies were pure infantry. Even after, with a few exceptions like the Mongol Empire, infantry has been the largest component of most armies in history. In the Western world, from Classical Antiquity through the Middle Ages (8th century BC to 15th century AD), infantry are categorised as either heavy infantry or light infantry. Heavy infantry, such as Greek hoplites, Macedonian phalangites, and Roman legionaries, specialised in dense, solid formations driving into the main enemy lines, using weight of numbers to achieve a decisive victory, and were usually equipped with heavier weapons and armour to fit their role. Light infantry, such as Greek peltasts, Balearic slingers, and Roman velites, using open formations and greater manoeuvrability, took on most other combat roles: scouting, screening the army on the march, skirmishing to delay, disrupt, or weaken the enemy to prepare for the main forces' battlefield attack, protecting them from flanking manoeuvres, and then afterwards either pursuing the fleeing enemy or covering their army's retreat. After the fall of Rome, the quality of heavy infantry declined, and warfare was dominated by heavy cavalry, such as knights, forming small elite units for decisive shock combat, supported by peasant infantry militias and assorted light infantry from the lower classes. Towards the end of the Middle Ages, this began to change: more professional and better-trained light infantry could be effective against knights, as the English longbowmen showed in the Hundred Years' War. By the start of the Renaissance, the infantry began to return to dominance, with Swiss pikemen and German Landsknechts filling the role of heavy infantry again, using dense formations of pikes to drive off any cavalry. Dense formations are vulnerable to ranged weapons, however, and technological developments allowed the raising of large numbers of light infantry units armed with ranged weapons, without the years of training expected for traditional high-skilled archers and slingers. This started slowly, first with crossbowmen, then hand cannoneers and arquebusiers, each with increasing effectiveness, marking the beginning of early modern warfare, when firearms rendered the use of heavy infantry obsolete. The introduction of musketeers with bayonets in the mid-17th century began the replacement of the pike, with the infantry square replacing the pike square. To maximise their firepower, musketeer infantry were trained to fight in wide lines facing the enemy, creating line infantry. These fulfilled the central battlefield role of earlier heavy infantry, using ranged weapons instead of melee weapons. To support these lines, smaller infantry formations using dispersed skirmish lines were created, called light infantry, fulfilling the same multiple roles as earlier light infantry. Their arms were no lighter than those of line infantry; they were distinguished by their skirmish formation and flexible tactics. The modern rifleman infantry became the primary force for taking and holding ground on battlefields worldwide, a vital element of combined arms combat.
As firepower continued to increase, use of infantry lines diminished, until all infantry became light infantry in practice. Modern classifications of infantry have expanded to reflect modern equipment and tactics, such as motorised infantry, mechanised or armoured infantry, mountain infantry, marine infantry, and airborne infantry. An infantryman's equipment is of vital concern both for the man and the military. The infantryman's need to maintain fitness and effectiveness must be constantly balanced against the risk of being overburdened. While soldiers in other military branches can use their mount or vehicle for carrying equipment, and tend to operate together as crews serving their vehicle or ordnance, infantrymen must operate more independently, each infantryman usually having much more personal equipment to use and carry. This encourages the search for ingenious combinations of effective, rugged, serviceable and adaptable, yet light, compact, and handy infantry equipment. Beyond their main arms and armour, each infantryman's "military kit" includes combat boots, battledress or combat uniform, camping gear, heavy weather gear, survival gear, secondary weapons and ammunition, weapon service and repair kits, health and hygiene items, mess kit, rations, a filled water canteen, and all other consumables each infantryman needs for the expected duration of time operating away from their unit's base, plus any special mission-specific equipment. One of the most valuable pieces of gear is the entrenching tool – basically a folding spade – which can be employed not only to dig important defences, but also in a variety of other daily tasks, and even sometimes as a weapon. Infantry typically have shared equipment on top of this, like tents or heavy weapons, where the carrying burden is spread across several infantrymen. In all, this can add up to a considerable load for each soldier on the march. Such heavy infantry burdens have changed little over centuries of warfare; in the late Roman Republic, legionaries were nicknamed "Marius' mules" as their main activity seemed to be carrying the weight of their legion around on their backs. When combat is expected, infantry typically switch to "packing light", meaning reducing their equipment to weapons, ammunition, and bare essentials, and leaving the rest with their transport or baggage train, at camp or rally point, in temporary hidden caches, or even (in emergencies) discarding whatever may slow them down. Additional specialised equipment may be required, depending on the mission or the particular terrain or environment, including satchel charges, demolition tools, mines, and barbed wire, carried by the infantry or by attached specialists. Historically, infantry have suffered high casualty rates from disease, exposure, exhaustion and privation – often in excess of the casualties suffered from enemy attacks. Better infantry equipment to support their health and energy and to protect them from environmental factors greatly reduces these rates of loss and increases their level of effective action. Health, energy, and morale are greatly influenced by how the soldier is fed, so militaries have standardised field rations, from hardtack, to US K-rations, to modern MREs. Communications gear has become a necessity, as it allows effective command of infantry units over greater distances, and communication with artillery and other support units.
Modern infantry can have GPS, encrypted individual communications equipment, surveillance and night vision equipment, advanced intelligence, and other high-tech mission-unique aids. Armies have sought to improve and standardise infantry gear to reduce fatigue during extended carrying, and to increase freedom of movement, accessibility, and compatibility with other carried gear, as with the US All-purpose Lightweight Individual Carrying Equipment (ALICE). Infantrymen are defined by their primary arms – the personal weapons and body armour for their own individual use. The available technology, resources, history, and society can produce quite different weapons for each military and era, but common infantry weapons can be distinguished in a few basic categories. Infantrymen often carry secondary or back-up weapons, sometimes called a sidearm or ancillary weapons in modern terminology, either issued officially as an addition to the soldier's standard arms, or acquired unofficially by any other means as an individual preference. Such weapons are used when the primary weapon is no longer effective, such as when it becomes damaged, runs out of ammunition, or malfunctions, or when the tactical situation changes so that another weapon is preferred, for instance going from ranged to close combat. Infantry with ranged or pole weapons often carried a sword or dagger for possible hand-to-hand combat. The "pilum" was a javelin that Roman legionaries threw just before drawing their primary weapon, the "gladius" (short sword), and closing with the enemy line. Modern infantrymen now treat the bayonet as a backup weapon, but may also have handguns or pistols. They may also deploy anti-personnel mines, booby traps, and incendiary or explosive devices defensively before combat. Some non-weapon equipment is designed for close-combat shock effect, to gain a psychological edge before melee, such as battle flags, war drums, brilliant uniforms, fierce body paint or tattoos, and even battle cries. These have become mostly ceremonial since the decline of close-combat military tactics. Infantry have employed many different methods of protection from enemy attacks, including various kinds of armour and other gear, and tactical procedures. The most basic is personal armour. This includes shields, helmets and many types of armour – padded linen, leather, lamellar, mail, plate, and Kevlar. Initially, armour was used to defend against both ranged attacks and close combat; even a fairly light shield could help defend against most slings and javelins, though high-strength bows and crossbows might penetrate common armour at very close range. Infantry armour had to compromise between protection and coverage, as a full suit of attack-proof armour would be too heavy to wear in combat. As firearms improved, armour for ranged defence had to be thicker and stronger. With the introduction of the heavy arquebus, designed to pierce standard steel armour, it proved easier to make heavier firearms than heavier armour, and armour transitioned to close-combat purposes only. Pikemen's armour tended to be just steel helmets and breastplates, and gunners wore little or no armour. By the time of the musket, the dominance of firepower had shifted militaries away from close combat, and use of armour decreased, until infantry typically went without any armour. Helmets were added back during World War I as artillery began to dominate the battlefield, to protect against shell fragmentation and other blast effects beyond a direct hit.
Modern developments in bullet-proof composite materials like Kevlar have started a return to body armour for infantry, though the extra weight is a notable burden. In modern times, infantrymen must also often carry protective measures against chemical and biological attack, including gas masks, counter-agents, and protective suits. All of these protective measures add to the weight an infantryman must carry, and may decrease combat efficiency. Modern militaries are struggling to balance the value of personal body protection against the weight burden and the ability to function under such weight. Early crew-served weapons were siege weapons, like the ballista, trebuchet, and battering ram. Modern versions include machine guns, anti-tank missiles, and infantry mortars. Beginning with the development of the first regular military forces, close-combat regular infantry fought less as unorganised groups of individuals and more in coordinated units, maintaining a defined tactical formation during combat for increased battlefield effectiveness; such infantry formations and the arms they used developed together, starting with the spear and the shield. A spear has decent attack abilities, with the additional advantage of keeping opponents at a distance; this advantage can be increased by using longer spears, but a longer spear could allow the opponent to side-step its point and close for hand-to-hand combat, where the longer spear is nearly useless. This can be avoided when each spearman stays side by side with the others in close formation, each covering the ones next to him, presenting a solid wall of spears to the enemy that they cannot get around. Similarly, a shield has decent defence abilities, but is literally hit-or-miss; an attack from an unexpected angle can bypass it completely. Larger shields can cover more, but are also heavier and less manoeuvrable, making unexpected attacks even more of a problem. This can be avoided by having shield-armed soldiers stand close together, side by side, each protecting both themselves and their immediate comrades, presenting a solid shield wall to the enemy. The opponents of these first formations – the close-combat infantry of more tribal societies, or any military without regular infantry (so-called "barbarians") – used arms that focused on the individual: weapons using personal strength and force, such as larger swinging swords, axes, and clubs. These take more room and individual freedom to swing and wield, necessitating a looser organisation. While this may allow for a fierce running attack (an initial shock advantage), the tighter formation of the heavy spear-and-shield infantry gave them a local manpower advantage, where several might be able to fight each opponent. Thus tight formations heightened the advantages of heavy arms, and gave greater local numbers in melee. To increase their staying power, multiple rows of heavy infantrymen were added. This also increased their shock combat effect: individual opponents saw themselves literally lined up against several heavy infantrymen each, with seemingly no chance of defeating all of them. "Heavy infantry" developed into huge solid block formations, up to a hundred metres wide and a dozen rows deep. Maintaining the advantages of heavy infantry meant maintaining formation; this became even more important when two forces with heavy infantry met in battle, when the solidity of the formation became the deciding factor. Intense discipline and training became paramount. Empires formed around their militaries.
The organization of military forces into regular military units is first noted in Egyptian records of the Battle of Kadesh (c. 1274 BC). Soldiers were grouped into units of 50, which were in turn grouped into larger units of 250, then 1,000, and finally into units of up to 5,000 – the largest independent command. Several of these Egyptian "divisions" made up an army, but operated independently, both on the march and tactically, demonstrating sufficient military command and control organisation for basic battlefield manoeuvres. Similar hierarchical organizations have been noted in other ancient armies, typically with approximately 10 to 100 to 1,000 ratios (even where base 10 was not common), similar to modern sections (squads), companies, and regiments. The training of the infantry has differed drastically over time and from place to place. The cost of maintaining an army in fighting order and the seasonal nature of warfare precluded large permanent armies. Antiquity saw everything from the well-trained and motivated citizen armies of Greece and Rome, to tribal hosts assembled from farmers and hunters with only a passing acquaintance with warfare, to masses of lightly armed and ill-trained militia put up as a last-ditch effort. In medieval times the foot soldiers varied from peasant levies, to semi-permanent companies of mercenaries – foremost among them the Swiss, English, Aragonese and German – to men-at-arms who went into battle as well-armoured as knights, the latter of whom at times also fought on foot. The creation of standing armies – permanently assembled for war or defence – saw an increase in training and experience, driven by the increased use of firearms and the need for drill to handle them efficiently. The introduction of national and mass armies saw the establishment of minimum requirements and the introduction of special troops: first among them the engineers, going back to medieval times, but also different kinds of infantry adapted to specific terrain, and bicycle, motorcycle, motorised and mechanised troops, culminating in the introduction of highly trained special forces during the First and Second World Wars. As a branch of the armed forces, the role of the infantry in warfare is to engage, fight, and kill the enemy at close range – using either a firearm (rifle, pistol, machine gun), an edged weapon (knife, bayonet), or bare hands (close quarters combat) – as required by the mission at hand. Beginning with the Napoleonic Wars of the early 19th century, artillery has become an increasingly dominant force on the battlefield. Since World War I, combat aircraft and armoured vehicles have also become dominant. However, the most effective method for locating all enemy forces on a battlefield is still the infantry patrol, and it is the presence or absence of infantry that ultimately determines whether a particular piece of ground has been captured or held. In 20th and 21st century warfare, infantry functions most effectively as part of a combined arms team including artillery, armour, and combat aircraft. Studies have shown that, of all casualties, 50% or more were caused by artillery; about 10% were caused by machine guns; 2–5% by rifle fire; and 1% or less by hand grenades, bayonets, knives, and unarmed combat combined.
Several infantry divisions, both Allied and Axis, in the European theatre of WWII suffered higher than 100% combat and non-combat casualties, and some above 200%, meaning that the number of service personnel that became casualties was greater than the sum of the divisions' available service positions at full strength. Attack operations are the most basic role of the infantry, and along with defence, form the main stances of the infantry on the battlefield. Traditionally, in an open battle, or meeting engagement, two armies would manoeuvre to contact, at which point they would form up their infantry and other units opposite each other. Then one or both would advance and attempt to defeat the enemy force. The goal of an attack remains the same: to advance into an enemy-held "objective", most frequently a hill, river crossing, city or other dominant terrain feature, and dislodge the enemy, thereby establishing control of the objective. Attacks are often feared by the infantry conducting them because of the high number of casualties suffered while advancing to close with and destroy the enemy while under enemy fire. In mechanised infantry, the armoured personnel carrier (APC) is considered the assaulting position. These APCs can deliver infantrymen through the front lines to the battle and – in the case of infantry fighting vehicles – contribute supporting firepower to engage the enemy. Successful attacks rely on sufficient force, preparatory reconnaissance, and battlefield preparation with bomb assets. Retention of discipline and cohesion throughout the attack is paramount to success. A subcategory of attacks is the ambush, where infantrymen lie in wait for enemy forces before attacking at a vulnerable moment. This gives the ambushing infantrymen the combat advantage of surprise, concealment and superior firing positions, and causes confusion: the ambushed unit does not know what it is up against, or where the attack is coming from. Patrolling is the most common infantry mission. Full-scale attacks and defensive efforts are occasional, but patrols are constant. Patrols consist of small groups of infantry moving about in areas of possible enemy activity to locate the enemy and destroy them when found. Patrols are used not only on the front lines, but in rear areas where enemy infiltration or insurgencies are possible. Pursuit is a role that the infantry often assumes. The objective of pursuit operations is the destruction of withdrawing enemy forces which are not capable of effectively engaging friendly units, before they can build their strength to the point where they are effective. Infantry have traditionally been the main force to overrun these units, and in modern combat are used to pursue enemy forces in constricted terrain (urban areas in particular), where faster forces, such as armoured vehicles, are incapable of going or would be exposed to ambush. Defence operations are the natural counter to attacks, in which the mission is to hold an objective and defeat enemy forces attempting to dislodge the defender. Defensive posture offers many advantages to the infantry, including the ability to use terrain and constructed fortifications to advantage; these reduce exposure to enemy fire compared with advancing forces. Effective defence relies on minimising losses to enemy fire, breaking the enemy's cohesion before their advance is completed, and preventing enemy penetration of defensive positions.
Escorting consists of protecting support units from ambush, particularly from hostile infantry forces. Combat support units (a majority of the military) are not as well armed or trained as infantry units and have a different mission. Therefore, they need the protection of the infantry, particularly when on the move. This is one of the most important roles for the modern infantry, particularly when operating alongside armoured vehicles. In this capacity, infantry essentially conduct patrols on the move, scouring terrain which may hide enemy infantry waiting to ambush friendly vehicles, and identifying enemy strong points for attack by the heavier units. Infantry units are also tasked to protect certain areas such as command posts or airbases. Units assigned to this job usually have a large number of military police attached to them for control of checkpoints and prisons. Manoeuvring consumes much of an infantry unit's time. Infantry, like all combat arms units, are often manoeuvred to meet battlefield needs, and often must do so under enemy attack. The infantry must maintain their cohesion and readiness during the move to ensure their usefulness when they reach their objective. Traditionally, infantry have relied on their own legs for mobility, but mechanised or armoured infantry often use trucks and armoured vehicles for transport. These units can quickly disembark and transition to light infantry, without vehicles, to access terrain which armoured vehicles cannot effectively reach. Surveillance operations are often carried out with the employment of small recon units or sniper teams, which gather information about the enemy, reporting on characteristics such as size, activity, location, unit and equipment. These infantry units typically are known for their stealth and ability to operate for periods of time within close proximity of the enemy without being detected. They may engage high-profile targets, or be employed to hunt down terrorist cells and insurgents within a given area. These units may also entice the enemy to engage a located recon unit, disclosing the enemy's position so that it can be destroyed by more powerful friendly forces. Some assignments for infantry units involve deployment behind the front, although patrol and security operations are usually maintained in case of enemy infiltration. This is usually the best time for infantry units to integrate replacements into units and to maintain equipment. Additionally, soldiers can be rested and general readiness should improve. However, the unit must be ready for deployment at any point. Construction can be undertaken either in reserve or on the front, and consists of using infantry troops as labour for building field positions, roads, bridges, airfields, and all other manner of structures. The infantry is often given this assignment because of the large number of able-bodied soldiers within the unit, although it can lessen a unit's morale and limit the unit's ability to maintain readiness and perform other missions. More often, such jobs are given to specialist engineering corps. Infantry units are trained to mobilise, infiltrate, enter and neutralise threat forces quickly when appropriate combat intelligence indicates, in order to secure a location, or to rescue or capture high-profile targets. Urban combat poses unique challenges to the combat forces. It is one of the most complicated types of operations an infantry unit will undertake.
With many places for the enemy to hide and ambush from, infantry units must be trained in how to enter a city and systematically clear the buildings, which most likely will be booby-trapped, in order to kill or capture enemy personnel within the city. Care must be taken to differentiate innocent civilians, who often hide and may support the enemy, from non-uniformed armed enemy forces. Civilian and military casualties both are usually very high. Because of an infantryman's duties with firearms, explosives, physical and emotional stress, and physical violence, casualties and deaths are not uncommon in both war and peacetime training or operations. It is a highly dangerous and demanding combat service; in World War II, military doctors concluded that even physically unwounded soldiers were psychologically worn out after about 200 days of combat. The physical, mental, and environmental operating demands of the infantryman are high. All of the combat necessities such as ammunition, weapon systems, food, water, clothing, and shelter are carried on the backs of the infantrymen, at least in the light role as opposed to mounted/mechanised infantry. Combat loads of over 36 kg (80 lb) are standard, and greater loads in excess of 45 kg (100 lb) are very common. These heavy loads, combined with long foot patrols of over a day in any climate and temperature, require the infantryman to be in good physical and mental condition. Infantrymen live, fight and die outdoors in all types of brutal climates, often with no physical shelter. Poor climate conditions add misery to this already demanding existence. Disease epidemics, frostbite, heat stroke, trench foot, and insect and wild animal bites are common, along with stress disorders, and these have sometimes caused more casualties than enemy action. Infantrymen are expected to continue with their combat missions despite the death and injury of friends, fear, despair, fatigue, and bodily injury. Some infantry units are considered Special Forces. The earliest Special Forces commando units were more highly trained infantrymen, with special weapons, equipment, and missions. Special Forces units recruit heavily from regular infantry units to fill their ranks. Foreign and domestic militaries typically have a slang term for their infantrymen. In the U.S. military, the slang term among both Marine and Army infantrymen for themselves is "grunt"; in the British Army, they are "squaddies". The infantry is a small, close-knit community, and the slang names are terms of endearment that convey mutual respect and shared experiences. Naval infantry, commonly known as marines, are primarily a category of infantry that form part of the naval forces of states and perform roles on land and at sea, including amphibious operations, as well as other naval roles. They also perform other tasks, including land warfare, separate from naval operations. Air force infantry and base defence forces, such as the Royal Air Force Regiment, Royal Australian Air Force Airfield Defence Guards, and Indonesian Air Force Paskhas Corps, are used primarily for ground-based defence of air bases and other air force facilities. They also have a number of other specialist roles. These include, among others, Chemical, Biological, Radiological and Nuclear (CBRN) defence and training other airmen in basic ground defence tactics.
https://en.wikipedia.org/wiki?curid=15068
Identity function In mathematics, an identity function, also called an identity relation or identity map or identity transformation, is a function that always returns the same value that was used as its argument. That is, for f being the identity, the equality f(x) = x holds for all x. Formally, if X is a set, the identity function id_X on X is defined to be that function with domain and codomain X which satisfies id_X(x) = x for all elements x in X. In other words, the function value id_X(x) in X (that is, the codomain) is always the same as the input element x of X (now considered as the domain). The identity function on X is clearly an injective function as well as a surjective function, so it is also bijective. The identity function on X is often denoted by id_X. In set theory, where a function is defined as a particular kind of binary relation, the identity function is given by the identity relation, or "diagonal", of X: the set of all pairs (x, x) with x in X. If f : X → Y is any function, then we have f ∘ id_X = f = id_Y ∘ f (where "∘" denotes function composition). In particular, id_X is the identity element of the monoid of all functions from X to X. Since the identity element of a monoid is unique, one can alternately define the identity function on X to be this identity element. Such a definition generalizes to the concept of an identity morphism in category theory, where the endomorphisms of X need not be functions.
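The defining property id(x) = x and its role as the neutral element of composition can be sketched in a few lines of Python ("identity" and "compose" here are local definitions written for illustration, not standard library functions):

    def identity(x):
        return x

    def compose(f, g):
        """Return the function x -> f(g(x))."""
        return lambda x: f(g(x))

    square = lambda x: x * x

    # f o id == f == id o f: the identity is the neutral element of composition.
    assert compose(square, identity)(7) == square(7) == compose(identity, square)(7)
    assert identity("anything") == "anything"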
https://en.wikipedia.org/wiki?curid=15069
Intel 80386 The Intel 80386, also known as i386 or just 386, is a 32-bit microprocessor introduced in 1985. The first versions had 275,000 transistors and were the CPU of many workstations and high-end personal computers of the time. As the original implementation of the 32-bit extension of the 80286 architecture, the 80386 instruction set, programming model, and binary encodings are still the common denominator for all 32-bit x86 processors, which is termed the "i386 architecture", "x86", or "IA-32", depending on context. The 32-bit 80386 can correctly execute most code intended for the earlier 16-bit processors such as the 8086 and 80286 that were ubiquitous in early PCs. (Following the same tradition, modern 64-bit x86 processors are able to run most programs written for older x86 CPUs, all the way back to the original 16-bit 8086 of 1978.) Over the years, successively newer implementations of the same architecture have become several hundreds of times faster than the original 80386 (and thousands of times faster than the 8086). A 33 MHz 80386 was reportedly measured to operate at about 11.4 MIPS. The 80386 was introduced in October 1985, while manufacturing of the chips in significant quantities commenced in June 1986. Mainboards for 80386-based computer systems were cumbersome and expensive at first, but manufacturing was rationalized upon the 80386's mainstream adoption. The first personal computer to make use of the 80386 was designed and manufactured by Compaq, and marked the first time a fundamental component in the IBM PC compatible de facto standard was updated by a company other than IBM. In May 2006, Intel announced that 80386 production would stop at the end of September 2007. Although it had long been obsolete as a personal computer CPU, Intel and others had continued making the chip for embedded systems. Such systems using an 80386 or one of many derivatives are common in aerospace technology and electronic musical instruments, among others. Some mobile phones also used (later fully static CMOS variants of) the 80386 processor, such as the BlackBerry 950 and Nokia 9000 Communicator. Linux continued to support 80386 processors until December 11, 2012, when the kernel dropped 386-specific support in version 3.8. The processor was a significant evolution in the x86 architecture, and extended a long line of processors that stretched back to the Intel 8008. The predecessor of the 80386 was the Intel 80286, a 16-bit processor with a segment-based memory management and protection system. The 80386 added a three-stage instruction pipeline, extended the architecture from 16 bits to 32 bits, and added an on-chip memory management unit. This paging translation unit made it much easier to implement operating systems that used virtual memory. It also offered support for hardware debug registers. The 80386 featured three operating modes: real mode, protected mode, and virtual 8086 mode. Protected mode, which debuted in the 286, was extended to allow the 386 to address up to 4 GB of memory. The all-new virtual 8086 mode (or "VM86") made it possible to run one or more real mode programs in a protected environment, although some programs were not compatible. The ability to set up a 386 to act as if it had a flat memory model in protected mode, despite the fact that it uses a segmented memory model in all modes, was arguably the most important feature change for the x86 processor family until AMD released x86-64 in 2003.
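The addressing limits mentioned here and below are simple powers of two: a 32-bit address reaches 4 GB, while the 386SX's 24 external address lines (discussed below) reach only 16 MB. A trivial illustration in Python:

    # Address-space sizes for bus widths relevant to the 386 family.
    for bits in (16, 24, 32):
        space = 2 ** bits            # number of addressable bytes
        print(f"{bits}-bit addresses: {space / 2**20:g} MB")
    # 16-bit: 0.0625 MB (64 KB); 24-bit: 16 MB; 32-bit: 4096 MB (4 GB)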
Several new instructions were added in the 386: BSF, BSR, BT, BTS, BTR, BTC, CDQ, CWDE, LFS, LGS, LSS, MOVSX, MOVZX, SETcc, SHLD, SHRD. Two new segment registers were added (FS and GS) for general-purpose programs, and the single Machine Status Word of the 286 grew into eight control registers, CR0–CR7. Debug registers DR0–DR7 were added for hardware breakpoints. New forms of the MOV instruction are used to access them. The chief architect in the development of the 80386 was John H. Crawford. He was responsible for extending the 80286 architecture and instruction set to 32-bit, and then led the microprogram development for the 80386 chip. The 80486 and P5 Pentium line of processors were descendants of the 80386 design. A number of data types are directly supported and thus implemented by one or more 80386 machine instructions. The best-known assembly example for the chip is a subroutine named _strtolower that copies a null-terminated ASCIIZ character string from one location to another, converting all alphabetic characters to lower case (a C rendering of the routine appears below). The string is copied one byte (8-bit character) at a time. The example code uses the EBP (base pointer) register to establish a call frame, an area on the stack that contains all of the parameters and local variables for the execution of the subroutine. This kind of calling convention supports reentrant and recursive code and has been used by Algol-like languages since the late 1950s. A flat memory model is assumed; specifically, that the DS and ES segments address the same region of memory. In 1988, Intel introduced the 80386SX, most often referred to as the 386SX, a cut-down version of the 80386 with a 16-bit data bus, mainly intended for lower-cost PCs aimed at the home, educational, and small-business markets, while the 386DX remained the high-end variant used in workstations, servers, and other demanding tasks. The CPU remained fully 32-bit internally, but the 16-bit bus was intended to simplify circuit-board layout and reduce total cost. It did simplify designs, but it also hampered performance. Only 24 pins were connected to the address bus, limiting addressing to 16 MB, but this was not a critical constraint at the time. Performance differences were due not only to differing data-bus widths, but also to the performance-enhancing cache memories often employed on boards using the original chip. The original 80386 was subsequently renamed 80386DX to avoid confusion. However, Intel subsequently used the "DX" suffix to refer to the floating-point capability of the 80486DX. The 80387SX was an 80387 part that was compatible with the 386SX (i.e. with a 16-bit data bus). The 386SX was packaged in a surface-mount QFP and sometimes offered in a socket to allow for an upgrade. The i386SL was introduced as a power-efficient version for laptop computers. The processor offered several power-management options (e.g. SMM), as well as different "sleep" modes to conserve battery power. It also contained support for an external cache of 16 to 64 kB. The extra functions and circuit implementation techniques caused this variant to have over 3 times as many transistors as the i386DX. The i386SL was first available at a 20 MHz clock speed, with a 25 MHz model added later.
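The assembly listing itself does not survive in this copy, but the routine as described can be sketched in C. This is an illustrative reconstruction of the behaviour only, not the original 80386 code; the EBP call frame and the flat-memory assumption of the assembly version have no direct C equivalent:

    #include <ctype.h>

    /* Copy the null-terminated string at src to dst one byte at a
       time, converting alphabetic characters to lower case. The
       terminating zero byte is copied as well. */
    void strtolower(char *dst, const char *src) {
        do {
            *dst++ = (char)tolower((unsigned char)*src);
        } while (*src++ != '\0');
    }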
The first company to design and manufacture a PC based on the Intel 80386 was Compaq. By extending the 16/24-bit IBM PC/AT standard into a natively 32-bit computing environment, Compaq became the first third party to implement a major technical hardware advance on the PC platform. IBM was offered use of the 80386, but had manufacturing rights for the earlier 80286; it therefore chose to rely on that processor for a couple more years. The early success of the Compaq 386 PC played an important role in legitimizing the PC "clone" industry and in de-emphasizing IBM's role within it. Prior to the 386, the difficulty of manufacturing microchips and the uncertainty of reliable supply made it desirable that any mass-market semiconductor be multi-sourced, that is, made by two or more manufacturers, the second and subsequent companies manufacturing under license from the originating company. The 386 was for a time (4.7 years) available only from Intel, since Andy Grove, Intel's CEO at the time, made the decision not to encourage other manufacturers to produce the processor as second sources. This decision was ultimately crucial to Intel's success in the market. The 386 was the first significant microprocessor to be single-sourced. Single-sourcing the 386 allowed Intel greater control over its development and substantially greater profits in later years. AMD introduced its compatible Am386 processor in March 1991 after overcoming legal obstacles, thus ending Intel's 4.7-year monopoly on 386-compatible processors. From 1991 IBM also manufactured 386 chips under license for use only in IBM PCs and boards. Intel originally intended for the 80386 to debut at 16 MHz. However, due to poor yields, it was instead introduced at 12 MHz. Early in production, Intel discovered a marginal circuit that could cause a system to return incorrect results from 32-bit multiply operations. Not all of the processors already manufactured were affected, so Intel tested its inventory. Processors that were found to be bug-free were marked with a double sigma (ΣΣ), and affected processors were marked "16 BIT S/W ONLY". These latter processors were sold as good parts, since at the time 32-bit capability was not relevant for most users. Such chips are now extremely rare and have become collectible. The i387 math coprocessor was not ready in time for the introduction of the 80386, and so many of the early 80386 motherboards instead provided a socket and hardware logic to make use of an 80287. In this configuration the FPU operated asynchronously to the CPU, usually with a clock rate of 10 MHz. The original Compaq Deskpro 386 is an example of such a design. However, this was an annoyance to those who depended on floating-point performance, as the performance advantage of the 80387 over the 80287 was significant. Intel later offered a modified version of its 80486DX in 80386 packaging, branded as the Intel RapidCAD. This provided an upgrade path for users with 80386-compatible hardware. The upgrade was a pair of chips that replaced both the 80386 and 80387. Since the 80486DX design contained an FPU, the chip that replaced the 80386 contained the floating-point functionality, and the chip that replaced the 80387 served very little purpose. However, the latter chip was necessary in order to provide the FERR signal to the mainboard and appear to function as a normal floating-point unit. Third parties offered a wide range of upgrades, for both SX and DX systems.
The most popular ones were based on the Cyrix 486DLC/SLC core, which typically offered a substantial speed improvement due to its more efficient instruction pipeline and internal L1 SRAM cache. The cache was usually 1 kB, or sometimes 8 kB in the TI variant. Some of these upgrade chips (such as the 486DRx2/SRx2) were marketed by Cyrix themselves, but they were more commonly found in kits offered by upgrade specialists such as Kingston, Evergreen and Improve-It Technologies. Some of the fastest CPU upgrade modules featured the IBM SLC/DLC family (notable for its 16 kB L1 cache), or even the Intel 486 itself. Many 386 upgrade kits were advertised as being simple drop-in replacements, but often required complicated software to control the cache or clock doubling. Part of the problem was that on most 386 motherboards, the A20 line was controlled entirely by the motherboard with the CPU being unaware, which caused problems on CPUs with internal caches. Overall, it was very difficult to configure upgrades to produce the results advertised on the packaging, and upgrades were often neither very stable nor fully compatible. The principal variants were as follows. The 80386DX was the original version, released in October 1985. The Intel RapidCAD was a specially packaged Intel 486DX and a dummy floating point unit (FPU) designed as pin-compatible replacements for an Intel 80386 processor and 80387 FPU. The Intel 80376 was an embedded version of the 80386SX which did not support real mode or paging in the MMU. The Intel 80386EX added system and power management and built-in peripheral and support functions: two 82C59A interrupt controllers; a timer/counter (3 channels); asynchronous SIO (2 channels); synchronous SIO (1 channel); a watchdog timer (hardware/software); and PIO. It was usable with 80387SX or i387SL FPUs. Later embedded variants offered a transparent power management mode and an integrated MMU, with TTL-compatible inputs on some models (the 386SXSA only), and were usable with i387SX or i387SL FPUs.
https://en.wikipedia.org/wiki?curid=15070
INTERCAL The Compiler Language With No Pronounceable Acronym, abbreviated INTERCAL, is an esoteric programming language that was created as a parody by Don Woods and James M. Lyon, two Princeton University students, in 1972. It satirizes aspects of the various programming languages at the time, as well as the proliferation of proposed language constructs and notations in the 1960s. There are two maintained implementations of INTERCAL dialects: C-INTERCAL, maintained by Eric S. Raymond, and CLC-INTERCAL, maintained by Claudio Calvelli; both implementations have been available in the Debian software archive. According to the original manual by the authors, "The full name of the compiler is 'Compiler Language With No Pronounceable Acronym', which is, for obvious reasons, abbreviated 'INTERCAL'." The original Princeton implementation used punched cards and the EBCDIC character set. To allow INTERCAL to run on computers using ASCII, substitutions for two characters had to be made: $ was substituted for ¢ as the "mingle" operator, "represent[ing] the increasing cost of software in relation to hardware", and ? was substituted for ⊻ as the unary exclusive-or operator to "correctly express the average person's reaction on first encountering exclusive-or". In recent versions of C-INTERCAL, the older operators are supported as alternatives; INTERCAL programs may now be encoded in ASCII, Latin-1, or UTF-8. C-INTERCAL swaps the major and minor version numbers compared to tradition; its HISTORY file shows releases starting at version 0.3 and progressing to 0.31, but contains a 1.26 between 0.26 and 0.27. CLC-INTERCAL's version numbering scheme was traditional until version 0.06, when it changed to the scheme documented in its README file. INTERCAL was intended to be completely different from all other computer languages: common operations in other languages have cryptic and redundant syntax in INTERCAL. INTERCAL has many other features designed to make it even more aesthetically unpleasing to the programmer: it uses statements such as "READ OUT", "IGNORE", "FORGET", and modifiers such as "PLEASE". This last keyword provides two reasons for a program's rejection by the compiler: if "PLEASE" does not appear often enough, the program is considered insufficiently polite, and the error message says this; if it appears too often, the program could be rejected as excessively polite. Although this feature existed in the original INTERCAL compiler, it was undocumented. Despite the language's intentionally obtuse and wordy syntax, INTERCAL is nevertheless Turing-complete: given enough memory, INTERCAL can solve any problem that a universal Turing machine can solve. Most implementations of INTERCAL do this very slowly, however. A Sieve of Eratosthenes benchmark, computing all prime numbers less than 65536, was tested on a Sun SPARCstation 1: in C, it took less than half a second; the same program in INTERCAL took over seventeen hours. The INTERCAL Reference Manual contains many paradoxical, nonsensical, or otherwise humorous instructions. The manual also contains a "tonsil", as explained in this footnote: "4) Since all other reference manuals have Appendices, it was decided that the INTERCAL manual should contain some other type of removable organ." The INTERCAL manual gives unusual names to all non-alphanumeric ASCII characters: single and double quotes are "sparks" and "rabbit ears" respectively.
(The exception is the ampersand: as the Jargon File states, "what could be sillier?") The assignment operator, represented as an equals sign (INTERCAL's "half mesh") in many other programming languages, is in INTERCAL a left-arrow, <-, made up of an "angle" and a "worm", obviously read as "gets". Input (using the WRITE IN instruction) and output (using the READ OUT instruction) do not use the usual formats; in INTERCAL-72, WRITE IN inputs a number written out as digits in English (such as SIX FIVE FIVE THREE FIVE), and READ OUT outputs it in "butchered" Roman numerals. More recent versions have their own I/O systems. Comments can be achieved by using the inverted statement identifiers involving NOT or N'T; these cause lines to be initially ABSTAINed so that they have no effect. (A line can be ABSTAINed from even if it doesn't have valid syntax; syntax errors happen at runtime, and only then when the line is un-ABSTAINed.) INTERCAL-72 (the original version of INTERCAL) had only four data types: the 16-bit integer (represented with a spot, "."), the 32-bit integer (a twospot, ":"), the array of 16-bit integers (a tail, ","), and the array of 32-bit integers (a hybrid, ";"). There are 65535 available variables of each type, numbered from .1 to .65535 for 16-bit integers, for instance. However, each of these variables has its own stack on which it can be pushed and popped (STASHed and RETRIEVEd, in INTERCAL terminology), increasing the possible complexity of data structures. More modern versions of INTERCAL have by and large kept the same data structures, with appropriate modifications; TriINTERCAL, which modifies the radix with which numbers are represented, can use a 10-trit type rather than a 16-bit type, and CLC-INTERCAL implements many of its own data structures, such as "classes and lectures", by making the basic data types store more information rather than adding new types. Arrays are dimensioned by assigning to them as if they were a scalar variable. Constants can also be used, and are represented by a mesh ("#") followed by the constant itself, written as a decimal number; only integer constants from 0 to 65535 are supported. There are only five operators in INTERCAL-72. Implementations vary in which characters represent which operation, and many accept more than one character, so more than one possibility is given for many of the operators. Contrary to most other languages, AND, OR, and XOR are unary operators, which work on consecutive bits of their argument; the most significant bit of the result is the operator applied to the least significant and most significant bits of the input, the second-most-significant bit of the result is the operator applied to the most and second-most significant bits, the third-most-significant bit of the result is the operator applied to the second-most and third-most bits, and so on. The operator is placed between the punctuation mark specifying a variable name or constant and the number that specifies which variable it is, or just inside grouping marks (i.e. one character later than it would be in programming languages like C).
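The bitwise behaviour just described is equivalent to combining a value with itself rotated right by one bit position. A C sketch of the 16-bit case follows; the function names are illustrative, and the check values are the ones the INTERCAL manual gives for the operand 77 (#&77 = 4, #V77 = 32879, #?77 = 32875), cited here from memory:

    #include <assert.h>
    #include <stdint.h>

    /* Rotate a 16-bit value right by one position. */
    static uint16_t rotr16(uint16_t v) {
        return (uint16_t)((v >> 1) | (v << 15));
    }

    /* INTERCAL's unary operators apply an ordinary binary operator
       between each bit of the operand and the neighbouring bit,
       i.e. between the value and its right rotation. */
    static uint16_t unary_and(uint16_t v) { return v & rotr16(v); }
    static uint16_t unary_or(uint16_t v)  { return v | rotr16(v); }
    static uint16_t unary_xor(uint16_t v) { return v ^ rotr16(v); }

    int main(void) {
        assert(unary_and(77) == 4);      /* #&77 */
        assert(unary_or(77)  == 32879);  /* #V77 */
        assert(unary_xor(77) == 32875);  /* #?77 */
        return 0;
    }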
SELECT and INTERLEAVE (which is also known as MINGLE) are infix binary operators; SELECT takes the bits of its first operand that correspond to "1" bits of its second operand and removes the bits that correspond to "0" bits, shifting towards the least significant bit and padding with zeroes (so 51 (110011 in binary) SELECT 21 (10101 in binary) is 5 (101 in binary)); MINGLE alternates bits from its first and second operands (in such a way that the least significant bit of its second operand is the least significant bit of the result). There is no operator precedence; grouping marks must be used to disambiguate the precedence where it would otherwise be ambiguous (the grouping marks available are the spark ', which matches another spark, and the rabbit ears ", which matches another rabbit ears; the programmer is responsible for using these in such a way that they make the expression unambiguous). INTERCAL statements all start with a "statement identifier"; in INTERCAL-72, this can be DO, PLEASE, or PLEASE DO, all of which mean the same to the program (but using one of these too heavily causes the program to be rejected, an undocumented feature in INTERCAL-72 that was mentioned in the C-INTERCAL manual), or an inverted form (with NOT or N'T appended to the identifier). Backtracking INTERCAL, a modern variant, also allows variants using MAYBE (possibly combined with PLEASE or DO) as a statement identifier, which introduces a choice-point. Before the identifier, an optional line number (an integer enclosed in parentheses) can be given; after the identifier, a percent chance of the line executing can be given in the format %n, which defaults to 100%. In INTERCAL-72, the main control structures are NEXT, RESUME, and FORGET. DO (line) NEXT branches to the line specified, remembering the next line that would be executed if it weren't for the NEXT on a call stack (identifiers other than DO can be used on any statement; DO is given as an example); DO FORGET expression removes "expression" entries from the top of the call stack (this is useful to avoid the error that otherwise happens when there are more than 80 entries), and DO RESUME expression removes "expression" entries from the call stack and jumps to the last line remembered. C-INTERCAL also provides the COME FROM instruction, written DO COME FROM (line); CLC-INTERCAL and the most recent C-INTERCAL versions also provide computed COME FROM (DO COME FROM expression) and NEXT FROM, which is like COME FROM but also saves a return address on the NEXT stack. Alternative ways to affect program flow, originally available in INTERCAL-72, are to use the IGNORE and REMEMBER instructions on variables (which cause writes to the variable to be silently ignored and to take effect again, so that instructions can be disabled by causing them to have no effect), and the ABSTAIN and REINSTATE instructions on lines or on types of statement, causing the lines to have no effect or to have an effect again respectively. The traditional "Hello, world!" program demonstrates how different INTERCAL is from standard programming languages. In C, it could read as follows:

    #include <stdio.h>
    int main(void) {
        printf("Hello, world!\n");
        return 0;
    }

The equivalent program in C-INTERCAL is longer and harder to read; it begins with DO ,1 <- #13, which dimensions ,1 as a 13-element array to hold the character codes of the message. The authors of C-INTERCAL also created the TriINTERCAL variant, based on the ternary numeral system and generalizing INTERCAL's set of operators. A more recent variant is Threaded Intercal, which extends the functionality of COME FROM to support multithreading.
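The SELECT and MINGLE operations described earlier in this section can be modelled in C as follows; the function names are illustrative, and the first assertion is the manual's own example (51 SELECT 21 = 5):

    #include <assert.h>
    #include <stdint.h>

    /* SELECT: keep the bits of a that line up with 1 bits of b,
       packed toward the least significant end and zero-padded. */
    static uint32_t select_op(uint32_t a, uint32_t b) {
        uint32_t out = 0;
        int outbit = 0;
        for (int i = 0; i < 32; i++) {
            if (b & (1u << i)) {
                if (a & (1u << i))
                    out |= 1u << outbit;
                outbit++;
            }
        }
        return out;
    }

    /* MINGLE: interleave the bits of two 16-bit operands; the least
       significant bit of the second operand becomes the least
       significant bit of the 32-bit result. */
    static uint32_t mingle(uint16_t a, uint16_t b) {
        uint32_t out = 0;
        for (int i = 0; i < 16; i++) {
            out |= (uint32_t)((b >> i) & 1) << (2 * i);
            out |= (uint32_t)((a >> i) & 1) << (2 * i + 1);
        }
        return out;
    }

    int main(void) {
        assert(select_op(51, 21) == 5);          /* 110011 SELECT 10101 = 101 */
        assert(mingle(0, 65535) == 0x55555555u); /* alternating bit pattern   */
        return 0;
    }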
CLC-INTERCAL has a library called INTERNET for networking functionality, including acting as an INTERCAL server, and also includes features such as Quantum Intercal, which enables multi-value calculations in a way purportedly ready for the first quantum computers. In early 2017, a .NET implementation targeting the .NET Framework appeared on GitHub. This implementation supports the creation of standalone binary libraries and interop with other programming languages. In the article "A Box, Darkly: Obfuscation, Weird Languages, and Code Aesthetics", INTERCAL is described under the heading "Abandon all sanity, ye who enter here: INTERCAL", with the compiler and commenting strategy among the "weird" features described. In "Technomasochism", Lev Bratishenko characterizes the INTERCAL compiler as a dominatrix. The Nitrome Enjoyment System, a fictional video game console created by British indie game developer Nitrome, has games which are programmed in INTERCAL.
https://en.wikipedia.org/wiki?curid=15075
International Data Encryption Algorithm In cryptography, the International Data Encryption Algorithm (IDEA), originally called Improved Proposed Encryption Standard (IPES), is a symmetric-key block cipher designed by James Massey of ETH Zurich and Xuejia Lai and was first described in 1991. The algorithm was intended as a replacement for the Data Encryption Standard (DES). IDEA is a minor revision of an earlier cipher, the Proposed Encryption Standard (PES). The cipher was designed under a research contract with the Hasler Foundation, which became part of Ascom-Tech AG. The cipher was patented in a number of countries but was freely available for non-commercial use. The name "IDEA" is also a trademark. The last patents expired in 2012, and IDEA is now patent-free and thus completely free for all uses. IDEA was used in Pretty Good Privacy (PGP) v2.0 and was incorporated after the original cipher used in v1.0, BassOmatic, was found to be insecure. IDEA is an optional algorithm in the OpenPGP standard. IDEA operates on 64-bit blocks using a 128-bit key and consists of a series of 8 identical transformations (each called a round) and an output transformation (the half-round). The processes for encryption and decryption are similar. IDEA derives much of its security by interleaving operations from different groups — modular addition and multiplication, and bitwise eXclusive OR (XOR) — which are algebraically "incompatible" in some sense. In more detail, these operators, which all deal with 16-bit quantities, are: bitwise XOR; addition modulo 2^16; and multiplication modulo 2^16 + 1, where the all-zero word is interpreted as 2^16. After the 8 rounds comes a final "half-round", the output transformation (the swap of the middle two values cancels out the swap at the end of the last round, so that there is no net swap). The overall structure of IDEA follows the Lai–Massey scheme. XOR is used for both subtraction and addition. IDEA uses a key-dependent half-round function. To work with 16-bit words (meaning 4 inputs instead of 2 for the 64-bit block size), IDEA uses the Lai–Massey scheme twice in parallel, with the two parallel round functions being interwoven with each other. To ensure sufficient diffusion, two of the sub-blocks are swapped after each round. Each round uses 6 16-bit sub-keys, while the half-round uses 4, a total of 52 for 8.5 rounds. The first 8 sub-keys are extracted directly from the key, with K1 from the first round being the lower 16 bits; further groups of 8 keys are created by rotating the main key left 25 bits between each group of 8. This means that it is rotated less than once per round, on average, for a total of 6 rotations. Decryption works like encryption, but the order of the round keys is inverted, and the subkeys for the odd rounds are inverted. For instance, the values of subkeys K1–K4 are replaced by the inverses of K49–K52 for the respective group operation, and K5 and K6 of each group should be replaced by K47 and K48 for decryption. The designers analysed IDEA to measure its strength against differential cryptanalysis and concluded that it is immune under certain assumptions. No successful linear or algebraic attacks have been reported. The best attack applied to all keys could break IDEA reduced to 6 rounds (the full IDEA cipher uses 8.5 rounds). Note that a "break" is any attack that requires less than 2^128 operations; the 6-round attack requires 2^64 known plaintexts and 2^126.8 operations. Bruce Schneier thought highly of IDEA in 1996, writing: "In my opinion, it is the best and most secure block algorithm available to the public at this time."
("Applied Cryptography", 2nd ed.) However, by 1999 he was no longer recommending IDEA due to the availability of faster algorithms, some progress in its cryptanalysis, and the issue of patents. In 2011 full 8.5-round IDEA was broken using a meet-in-the-middle attack. Independently in 2012, full 8.5-round IDEA was broken using a narrow-bicliques attack, with a reduction of cryptographic strength of about 2 bits, similar to the effect of the previous bicliques attack on AES; however, this attack does not threaten the security of IDEA in practice. The very simple key schedule makes IDEA subject to a class of weak keys; some keys containing a large number of 0 bits produce weak encryption. These are of little concern in practice, being sufficiently rare that they are unnecessary to avoid explicitly when generating keys randomly. A simple fix was proposed: XORing each subkey with a 16-bit constant, such as 0x0DAE. Larger classes of weak keys were found in 2002. This is still of negligible probability to be a concern to a randomly chosen key, and some of the problems are fixed by the constant XOR proposed earlier, but the paper is not certain if all of them are. A more comprehensive redesign of the IDEA key schedule may be desirable. A patent application for IDEA was first filed in Switzerland (CH A 1690/90) on May 18, 1990, then an international patent application was filed under the Patent Cooperation Treaty on May 16, 1991. Patents were eventually granted in Austria, France, Germany, Italy, the Netherlands, Spain, Sweden, Switzerland, the United Kingdom, (, filed May 16, 1991, issued June 22, 1994 and expired May 16, 2011), the United States (, issued May 25, 1993 and expired January 7, 2012) and Japan (JP 3225440) (expired May 16, 2011). MediaCrypt AG is now offering a successor to IDEA and focuses on its new cipher (official release on May 2005) IDEA NXT, which was previously called FOX.
https://en.wikipedia.org/wiki?curid=15076
Indoor rower An indoor rower, or rowing machine, is a machine used to simulate the action of watercraft rowing for the purpose of exercise or training for rowing. Indoor rowing has become established as a sport in its own right, and the term "indoor rower" also refers to a participant in this sport. Modern indoor rowers are often known as ergometers (colloquially erg or ergo), which is technically incorrect, as an ergometer is a device which measures the amount of work performed; the indoor rower is calibrated to measure the amount of energy the rower is using through their use of the equipment. Typically the display of the ergometer will show the time it would take to row 500 m at the power of each stroke, called the split rate, or split. For exercise, one advantage of a rower compared to other types of machines is the high number of muscle groups worked, about a dozen. Chabrias, an Athenian admiral of the 4th century BC, introduced the first rowing machines as supplemental military training devices. "To train inexperienced oarsmen, Chabrias built wooden rowing frames on shore where beginners could learn technique and timing before they went on board ship." Early rowing machines are known to have existed from the mid-1800s, a US patent being issued to W.B. Curtis in 1872 for a particular hydraulic-based damper design. Machines using linear pneumatic resistance were common around 1900; one of the most popular was the Narragansett hydraulic rower, manufactured in Rhode Island from around 1900 to 1960. However, they neither simulated actual rowing very accurately nor measured power output. In the 1950s and 1960s, coaches in many countries began using specially made rowing machines for training and improved power measurement. One original design incorporated a large, heavy, solid iron flywheel with a mechanical friction brake, developed by John Harrison of Leichhardt Rowing Club in Sydney, later to become a professor of mechanical engineering at the University of New South Wales. Harrison, a dual Australian champion beach sprinter who went on to row in the coxless four at the 1956 Melbourne Olympics, had been introduced to rowing after a chance meeting with one of the fathers of modern athletic physiological training and testing, and the coach of the Leichhardt "Guinea Pigs", Professor Frank Cotton. Cotton had produced a rudimentary friction-based machine for evaluating potential rowers by exhausting them, without any pretence of accurately measuring power output. Harrison realised the importance of using a small braking area with a non-absorbent braking material, combined with a large flywheel. The advantage of this design (produced by Ted Curtain Engineering, Curtain being a fellow Guinea Pig) was the virtual elimination of factors able to interfere with accurate results, for instance ambient humidity or temperature. The Harrison-Cotton machine represents the very first piece of equipment able to accurately quantify human power output; power calculation to within an accuracy of less than 1%, as achieved by his machine, remains an impressive result today. The friction brake was adjusted according to a rower's weight to give an accurate appraisal of boat-moving ability (drag on a boat is proportional to weight). Inferior copies of Harrison's machine were produced in several countries utilising a smaller flywheel and leather straps; unfortunately, the leather straps were sensitive to humidity, and the relatively large braking area made results far less accurate than Harrison's machine.
The weight correction factor tended to make them unpopular among rowers of the time. Harrison, arguably the father of modern athletic power evaluation, died in February 2012. In the 1970s, the Gjessing-Nilson ergometer from Norway used a friction brake mechanism with industrial strapping applied over the broad rim of the flywheel. Weights hanging from the strap ensured that an adjustable and predictable friction could be calculated. The cord from the handle mechanism ran over a helical pulley with varying radius, thereby adjusting the gearing and speed of the handle in a similar way to the changing mechanical gearing of the oar through the stroke, derived from changes in oar angle and other factors. This machine was for many years the internationally accepted standard for measurement. The first air resistance ergometers were introduced around 1980 by Repco. In 1981, Peter and Richard Dreissigacker, and Jonathan Williams, filed for U.S. patent protection as joint inventors of a "Stationary Rowing Unit". The patent was granted in 1983 (US 4396188A). The first commercial embodiment of the Concept2 "rowing ergometer" (as it came to be known) was the Model A, a fixed-frame sliding-seat design using a bicycle wheel with fins attached for air resistance. The Model B, introduced in 1986, introduced a solid cast flywheel (now enclosed by a cage) and the first digital performance monitor, which proved revolutionary. This machine's capability of accurate calibration, combined with easy transportability, spawned the sport of competitive indoor rowing and revolutionised training and selection procedures for watercraft rowing. Later models were the C (1993) and D (2003). In 1995, Casper Rekers, a Dutch engineer, was granted a U.S. patent (US 5382210A) for a "Dynamically Balanced Rowing Simulator". This device differed from the prior art in that the flywheel and footrests are fixed to a carriage, the carriage being free to slide fore and aft on a rail or rails integral to the frame. The seat is also free to slide fore and aft on a rail or rails integral to the frame. From the patent abstract: "During exercise, the independent seat and energy dissipating unit move apart and then together in a coordinated manner as a function of the stroke cycle of the oarsman." All rowing-machine designs consist of an energy damper or braking mechanism connected to a chain, strap, belt and/or handle. Footrests are attached to the same mounting as the energy damper. Most include a rail which either the seat or the mechanism slide upon. Different machines have a variety of layouts and damping mechanisms, each of which has certain advantages and disadvantages. Currently available ergometer (flywheel-type) rowing machines use a spring or elastic cord to take up the pull chain/strap and return the handle. Advances in elastic cord and spring technology have contributed to the longevity and reliability of this strategy, but it still has disadvantages. With time and usage, an elastic element loses its strength and elasticity. Occasionally it will require adjustment, and eventually it will no longer take up the chain with sufficient vigour and will need to be replaced. The resilience of an elastic cord is also directly proportional to temperature. In an unheated space in a cold climate, an elastic-cord-equipped rowing ergometer is unusable because the chain take-up is too sluggish. Thus, as the result of several factors, the force required to stretch the elastic cord is a variable, not a constant.
This is of little consequence if the exercise device is used for general fitness, but it is an unacknowledged problem, the "dirty little secret", of indoor rowing competitions. The electronic monitor only measures the user's input to the flywheel. It does not measure the energy expended to stretch the elastic cord. A claim of a "level playing field" cannot be made when a resistance variable exists (that of the elastic cord) which is not measured or monitored in any way (see more on this in the "Competitions" section). In the patent record, means are disclosed whereby the chain/cable take-up and handle return are accomplished without the use of a spring or elastic cord, thereby avoiding the stated disadvantages and defects of this broadly used method. One example is the Gjessing-Nilson device described above. It utilizes a cable wrapped around a helical pulley on the flywheel shaft, the ends of this cable being connected to opposite ends of a long pole to which a handle is fixed. The obvious disadvantage of this system is the forward space requirement to accommodate the extension of the handle pole at the "catch" portion of the stroke. The advantage is that, except for small transmission losses, all of the user's energy output is imparted to the flywheel, where it can be accurately measured, not split between the flywheel and an elastic cord of variable, unmeasured resistance. If a similar system were installed on all rowing ergometers used in indoor rowing competitions, consistency between machines would be guaranteed because the variability factor of elastic cord resistance would be eliminated, and this would therefore ensure that the monitor displayed actual user energy input. In a 1988 US patent (US 4772013A), Elliot Tarlow discloses another non-elastic chain/cable take-up and handle return strategy. Described and depicted is a continuous chain/cable loop that passes around the flywheel sprocket and around and between fixed pulleys and sprockets positioned fore and aft on the device. The handle is secured in the middle of the exposed upper horizontal section of the chain/cable loop. Although somewhat lacking in aesthetics, the Tarlow device does eliminate the stated disadvantages and defects of the ubiquitous elastic cord handle return. Tarlow further argues that the disclosed method provides an improved replication of rowing because in actual rowing the rower is not assisted by the contraction of a spring or elastic cord during the "recovery" portion of the stroke; the rower must push the oar handle forward against wind and oarlock resistance in preparation for the next stroke. Tarlow asserts that the invention replicates that resistance. A third non-elastic handle return strategy is disclosed in the US patent "Gravity Return Rowing Exercise Device" (US9878200 B2, 2018), granted to Robert Edmondson. As stated in the patent document, the utilization of gravity (i.e. a weight) to take up the chain and return the handle eliminates the inevitable variability of handle return force associated with an elastic cord system and thereby ensures consistency between machines. Machines with a digital display calculate the user's power by measuring the speed of the flywheel during the stroke and then recording the rate at which it decelerates during the recovery. Using this and the known moment of inertia of the flywheel, the computer is able to calculate speed, power, distance and energy usage.
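A sketch of that calculation in C, assuming the usual model for a braked flywheel in which drag torque is proportional to the square of angular velocity; the symbol names and sample numbers here are illustrative assumptions, not any manufacturer's published firmware:

    #include <stdio.h>

    /* During the recovery the flywheel coasts, so I * dw/dt = -k * w^2.
       Integrating over the recovery interval gives the drag factor
       k = I * (1/w2 - 1/w1) / dt, where w1 and w2 are the angular
       velocities (rad/s) at the start and end of the recovery. */
    static double drag_factor(double inertia, double w1, double w2,
                              double dt) {
        return inertia * (1.0 / w2 - 1.0 / w1) / dt;
    }

    /* Power dissipated by the brake at angular velocity w. */
    static double dissipated_power(double k, double w) {
        return k * w * w * w;
    }

    int main(void) {
        double inertia = 0.1;   /* kg*m^2, an illustrative flywheel */
        double k = drag_factor(inertia, 80.0, 70.0, 1.0);
        printf("k = %.6f, P(75 rad/s) = %.1f W\n",
               k, dissipated_power(k, 75.0));
        return 0;
    }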
Some ergometers can be connected to a personal computer using software, and data on individual exercise sessions can be collected and analysed. In addition, some software packages allow users to connect multiple ergometers, either directly or over the internet, for virtual races and workouts. At the current state of the art, indoor rowers which utilize flywheel resistance can be categorized into two motion types. In both types, the rowing movement of the user causes the footrests and the seat to move further and closer apart in co-ordination with the user's stroke. The difference between the two types is in the movement, or absence of movement, of the footrests relative to the ground. The first type is characterized by the Dreissigacker/Williams device (referenced above). With this type, the flywheel and footrests are fixed to a stationary frame, and the seat is free to slide fore and aft on a rail or rails integral to the stationary frame. Therefore, during use, the seat moves relative to the footrests and also relative to the ground, while the flywheel and footrests remain stationary relative to the ground. The second type is characterized by the Rekers device (referenced above). With this type, both the seat and the footrests are free to slide fore and aft on a rail or rails integral to a stationary frame. Therefore, during use, the seat and the footrests move relative to each other, and both also move relative to the ground. Piston resistance comes from hydraulic cylinders that are attached to the handles of the rowing machine. The length of the rower handles on this class of rower is typically adjustable; however, during the row the handle length is fixed, which in turn fixes the trajectory that the hands must take on the stroke and return, making the stroke less accurate than is possible on the other types of resistance models, where the difference in hand height between the stroke and the return can be emulated. Furthermore, many models in this class have a fixed seat position that eliminates the leg drive, which is the foundation of competitive on-water rowing technique. Because of the compact size of the pistons and the mechanical simplicity of the design, these models are typically not as large or as expensive as the other types. Braked flywheel resistance models comprise magnetic, air and water resistance rowers. These machines are mechanically similar, since all three types use a handle connected to a flywheel by rope, chain, or strap to provide resistance to the user; the types differ only in braking mechanism. Because the handle is attached to the resistance source by rope or similarly flexible media, the trajectory of the hands in the vertical plane is free, making it possible for the rower to emulate the hand height difference between the stroke and the return. Most of these models have the characteristic sliding seat typical of competitive on-the-water boats. Magnetic resistance models control resistance by means of permanent magnets or electromagnets. A rotary plate, made of a non-magnetic, electrically conducting material such as aluminum or copper, and either integral with, or independent of, the flywheel, cuts through the magnetic field of the permanent magnet or the electromagnet, resulting in induced eddy currents which generate a retarding force that opposes the motion of the rotary plate. Resistance is adjusted in the permanent magnet system by changing the position of the permanent magnet relative to the rotary plate.
Resistance is adjusted in the electromagnetic system by varying the strength of the electromagnetic field through which the rotary plate moves. The magnetic braking system is quieter than the other braked flywheel types, and energy can be accurately measured on this type of rower. The drawback of this type of resistance mechanism is that the resistance is constant for any given setting. Rowers using air or water resistance more accurately simulate actual rowing, where the resistance increases the harder the handle is pulled. Some rowing machines incorporate both air and magnetic resistance. Air resistance models use vanes on the flywheel to provide the flywheel braking needed to generate resistance. As the flywheel is spun faster, the air resistance increases. An adjustable vent can be used to control the volume of air moved by the vanes of the rotating flywheel: a larger vent opening results in a higher resistance, and a smaller vent opening results in a lower resistance. The energy dissipated can be accurately calculated given the known moment of inertia of the flywheel and a tachometer to measure the deceleration of the flywheel. Air resistance rowing machines are most often used by sport rowers (particularly during the off season and in inclement weather) and competitive indoor rowers. Water resistance models consist of a paddle revolving in an enclosed tank of water. The mass and drag of the moving water creates the resistance. Proponents claim that this approach results in a more realistic action than is possible with air or magnetic type machines. WaterRower was the first company to manufacture this type of rowing machine. The company was formed in the 1980s by John Duke, a US National Team rower and inventor of the device (1989 US patent). At that time, in the patent record, there were a few prior art fluid resistance rowing machines, but they lacked the simplicity and elegance of the Duke design. From the 1989 patent abstract: "... rowing machine features a hollow container that holds a supply of water. Pulling on a drive cord during a pulling segment of a stroke rotates a paddle or like mechanism within the container to provide a momentum effect." An extremely efficient method of exercise, rowing uses 86% of the body's muscles when done with correct form. Its health benefits are often contrasted with those of spinning, since both are dually categorized as static and dynamic exercises. Indoor rowing primarily works the cardiovascular system, with typical workouts consisting of steady pieces of 20–40 minutes, although the standard trial distance for record attempts is 2000 m, which can take from five and a half minutes (the best elite rowers) to nine minutes or more. Like other forms of cardio-focused exercise, interval training is also commonly used in indoor rowing. While cardio-focused, rowing also stresses many muscle groups throughout the body anaerobically, thus rowing is often referred to as a strength-endurance sport. The standard measurement of speed on an ergometer is generally known as the "split", or the amount of time in minutes and seconds required to travel 500 metres at the current pace; a split of 2:00 represents a speed of two minutes per 500 metres, or about 15 km/h. Although ergometer tests are used by rowing coaches to evaluate rowers and are part of athlete selection for many senior and junior national rowing teams, "the data suggest that physiological and performance tests performed on a rowing ergometer are not good indicators of on water performance".
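As a worked example of the split, the cubic relation between pace and power that Concept2 publishes for its monitors (watts = 2.80 / pace^3, with pace in seconds per metre; treat the constant as an empirical approximation) can be expressed in C:

    #include <stdio.h>

    /* Convert a 500 m split, given in seconds, to approximate watts
       using the published cubic relation watts = 2.80 / pace^3. */
    static double split_to_watts(double split_seconds) {
        double pace = split_seconds / 500.0;   /* seconds per metre */
        return 2.80 / (pace * pace * pace);
    }

    int main(void) {
        /* A 2:00 split (120 s per 500 m) comes out near 203 W. */
        printf("%.0f W\n", split_to_watts(120.0));
        return 0;
    }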
Rowing technique on the erg broadly follows the same pattern as that of a normal rowing stroke on water, but with minor modifications: it is not necessary to "tap down" at the finish, since there are no blades to extract from the water, but many who also row on water do this anyway. Also, the rigid, single-piece handle enables neither a sweep nor a sculling stroke: the oar handle during a sweep stroke follows a long arc, while the oar handles during a sculling stroke follow two arcs, and the standard handle does neither. But regardless of this, to reduce the chance of injury, an exercise machine should enable a bio-mechanically correct movement of the user. The handle is the interface between the human and the machine, and should adapt to the natural movement of the user, not the user to the machine, as is now the case. During competitions an exaggerated finish is often used, whereby the hands are pulled further up the chest than would be possible on the water, resulting in a steep angulation of the wrists; but even with a normal stroke, stop-action images show wrist angulation at the finish, evidence that the standard rigid, single-piece handle does not allow the user to maintain a bio-mechanically correct alignment of hands, wrists, and forearms in the direction of applied force. On the forum of the Concept2 website, many regular users of the indoor rower have complained of chronic wrist pain. Some have rigged handgrips with flexible straps to enable their hands, wrists, and forearms to maintain proper alignment, and thereby reduce the possibility of repetitive strain injury. Rowing machine manufacturers have ignored this problem. Rowing on an ergometer requires four basic phases to complete one stroke: the catch, the drive, the finish and the recovery. The catch is the initial part of the stroke, the drive is where the power from the rower is generated, the finish is the final part of the stroke, and the recovery is the initial phase of beginning a new stroke. The phases repeat until a time duration or a distance is completed. At the catch, the knees are bent with the shins in a vertical position. The back should be roughly parallel to the thigh without hyperflexion (leaning forward too far). The arms and shoulders should be extended forward and relaxed, and the arms should be level. The drive is initiated by the extension of the legs; the body remains in the catch posture at this point of the drive, with the arms straight and parallel to the floor. As the legs continue to full extension, the rower engages the core to begin levering the body backward, adding to the work of the legs. When the legs are flat, the rower pulls the handle toward the chest with the arms. At the finish, the legs are at full extension and flat. The shoulders are slightly behind the pelvis, and the arms are in full contraction with the elbows bent and the hands against the chest below the nipples. The back of the rower is still maintained in an upright posture, and the wrists should be flat. The recovery is a slow slide back to the initial part of the stroke; it gives the rower time to recover from the previous stroke. During the recovery the actions are in reverse order of the drive. The arms are extended first until they are straight. The torso is then engaged to move forward back over the pelvis. Weight transfers from the back of the seat to the front of the seat at this time. When the hands come over the knees, the legs contract back towards the foot stretcher.
Slowly the back becomes more parallel to the thighs until the recovery becomes the catch. The first indoor rowing competition was held in Cambridge, MA in February 1982, with the participation of 96 on-water rowers who called themselves the "Charles River Association of Sculling Has-Beens"; thus the acronym "CRASH-B". A large number of indoor rowing competitions are now held worldwide, including the indoor rowing world championships (still known as the CRASH-B Sprints) held in Boston, Massachusetts, United States in February and the British Indoor Rowing Championships held in Birmingham, England in November, or in more recent years at the Lee Valley VeloPark, London in December; both are rowed on Concept2s. The core event for most competitions is the individual 2000 m; less common are the mile (e.g., Evesham) and the 2500 m (e.g., Basingstoke; also the original distance of the CRASH-B Sprints). Many competitions also include a sprint event (100–500 m) and sometimes team relay events. Most competitions are organized into categories based on sex, age, and weight class. While the fastest times are generally achieved by rowers between 20 and 40 years old, teenagers and rowers over 90 are common at competitions. There is a nexus between performance on water and performance on the ergometer, with open events at the world championships often being dominated by elite on-water rowers. Former men's Olympic single scull champions Pertti Karppinen and Rob Waddell and five-time gold medalist Sir Steven Redgrave have all won world championships or set world records in indoor rowing. The Briton Graham Benton and the Italian Emanuele Romoli are two of the most prominent "non-rowers" to have won several indoor rowing competitions. In addition to live venue competitions, many erg racers compete over the internet, either offline by posting scores to challenges, or in live online races facilitated by a computer connection. Online challenges sponsored by Concept2 include the annual ultra-rowing challenge, the Virtual Team Challenge.
https://en.wikipedia.org/wiki?curid=15077
Internetwork Packet Exchange Internetwork Packet Exchange (IPX) is the network layer protocol in the IPX/SPX protocol suite. IPX is derived from Xerox Network Systems' IDP. It may act as a transport layer protocol as well. The IPX/SPX protocol suite was very popular through the late 1980s into the mid-1990s because it was used by the Novell NetWare network operating system. Because of Novell NetWare's popularity, IPX became a prominent internetworking protocol. A big advantage of IPX was the small memory footprint of the IPX driver, which was vital for DOS and for Windows up to Windows 95 because of the limited size of conventional memory. Another advantage of IPX is the easy configuration of client computers. However, IPX does not scale well for large networks such as the Internet, and as such, IPX usage decreased as the boom of the Internet made TCP/IP nearly universal. Computers and networks can run multiple network protocols, so almost all IPX sites will be running TCP/IP as well to allow Internet connectivity. It is also possible to run later Novell products without IPX, NetWare version 5 having begun full support for both IPX and TCP/IP in late 1998. A big advantage of the IPX protocol is its little or no need for configuration. At a time when protocols for dynamic host configuration did not exist and the BOOTP protocol for centralized assignment of addresses was not common, the IPX network could be configured almost automatically. A client computer uses the MAC address of its network card as the node address and learns what it needs to know about the network topology from the servers or routers; routes are propagated by the Routing Information Protocol, services by the Service Advertising Protocol. The administrator of a small IPX network therefore had very little to take care of. Each IPX packet begins with a 30-byte header (sketched in the code below), whose Packet Type field identifies the payload protocol. An IPX address has the following structure: a 32-bit network number, a 48-bit node number, and a 16-bit socket number. The network number makes it possible to address (and communicate with) IPX nodes which do not belong to the same network or "cabling system". The cabling system is a network in which a data link layer protocol can be used for communication. To allow communication between different networks, they must be connected with IPX routers. A set of interconnected networks is called an internetwork. Any Novell NetWare server may serve as an IPX router. Novell also supplied stand-alone routers. Multiprotocol routers of other vendors often support IPX routing. Using different frame formats in one cabling system is possible, but it works similarly as if separate cabling systems were used (i.e. different network numbers must be used for different frame formats even in the same cabling system, and a router must be used to allow communication between nodes using different frame formats in the same cabling system). The node number is used to address an individual computer (or, more exactly, a network interface) in the network. Client stations use the MAC address of their network interface card as the node number. The value FF:FF:FF:FF:FF:FF may be used as a node number in a destination address to broadcast a packet to "all nodes in the current network". The socket number serves to select a process or application in the destination node. The presence of a socket number in the IPX address allows IPX to act as a transport layer protocol, comparable with the User Datagram Protocol (UDP) in the Internet protocol suite.
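The header table itself does not survive in this copy, but the standard 30-byte IPX header can be sketched as a C struct. This is illustrative only: the struct and field names are ours, a real implementation must force byte packing, and the multi-byte fields are big-endian on the wire:

    #include <stdint.h>

    /* The 30-byte IPX header. Common Packet Type values include
       1 (RIP), 4 (PEP, used by SAP), 5 (SPX) and 17 (NCP). */
    struct ipx_header {
        uint16_t checksum;       /* usually 0xFFFF: checksum unused   */
        uint16_t packet_length;  /* header plus payload, in bytes     */
        uint8_t  transport_ctl;  /* hop count, incremented by routers */
        uint8_t  packet_type;
        uint8_t  dest_network[4];
        uint8_t  dest_node[6];   /* usually the NIC's MAC address     */
        uint16_t dest_socket;
        uint8_t  src_network[4];
        uint8_t  src_node[6];
        uint16_t src_socket;
    };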
The IPX network number is conceptually identical to the network part of the IP address (the part with netmask bits set to 1); the node number then has the same meaning as the bits of the IP address with netmask bits set to 0. The difference is that the boundary between the network and node parts of the address is variable in IP, while in IPX it is fixed. As the node address is usually identical to the MAC address of the network adapter, the Address Resolution Protocol is not needed in IPX. For routing, the entries in the IPX routing table are similar to those in IP routing tables; routing is done by network address, and for each network address a network:node of the next router is specified, in a similar fashion to the way an IP address/netmask is specified in IP routing tables. There are three routing protocols available for IPX networks. In early IPX networks, a version of the Routing Information Protocol (RIP) was the only available protocol for exchanging routing information. Unlike RIP for IP, it uses delay time as the main metric, retaining the hop count as a secondary metric. Since NetWare 3, the NetWare Link Services Protocol (NLSP), based on IS-IS, has been available, which is more suitable for larger networks. Cisco routers implement an IPX version of the EIGRP protocol as well. IPX can be transmitted over Ethernet using one of the following four frame formats or encapsulation types: Ethernet II, raw IEEE 802.3 (Novell's original encapsulation, with no LLC header), IEEE 802.2 (802.3 with an LLC header), and SNAP (802.3 with LLC and SNAP headers). In non-Ethernet networks only the 802.2 and SNAP frame types are available.
https://en.wikipedia.org/wiki?curid=15078
International human rights instruments International human rights instruments are the treaties and other international texts that serve as legal sources for international human rights law and the protection of human rights in general. There are many varying types, but most can be classified into two broad categories: "declarations", adopted by bodies such as the United Nations General Assembly, which are by nature declaratory and so not legally binding, although they may be politically authoritative and well-respected soft law, and often express guiding principles; and "conventions", multi-party treaties that are designed to become legally binding, usually include prescriptive and very specific language, and usually are concluded by a long procedure that frequently requires ratification by each state's legislature. Lesser known are some "recommendations", which are similar to conventions in being multilaterally agreed, yet cannot be ratified, and serve to set common standards. There may also be administrative guidelines that are agreed multilaterally by states, as well as the statutes of tribunals or other institutions. A specific prescription or principle from any of these various international instruments can, over time, attain the status of customary international law whether it is specifically accepted by a state or not, simply because it is well-recognized and followed over a sufficiently long time. International human rights instruments can be divided further into "global instruments", to which any state in the world can be a party, and "regional instruments", which are restricted to states in a particular region of the world. Most conventions and recommendations (but few declarations) establish mechanisms for monitoring and establish bodies to oversee their implementation. In some cases these bodies may have relatively little political authority or legal means, and may be ignored by member states; in other cases these mechanisms have bodies with great political authority and their decisions are almost always implemented. A good example of the latter is the European Court of Human Rights. Monitoring mechanisms also vary as to the degree of individual access to expose cases of abuse and plead for remedies. Under some conventions or recommendations – e.g. the European Convention on Human Rights – individuals or states are permitted, subject to certain conditions, to take individual cases to a full-fledged tribunal at the international level. Sometimes, this can be done in national courts because of universal jurisdiction. The Universal Declaration of Human Rights, the International Covenant on Civil and Political Rights, and the International Covenant on Economic, Social and Cultural Rights together with other international human rights instruments are sometimes referred to as the "international bill of rights". International human rights instruments are identified by the OHCHR and most are referenced on the OHCHR website. According to the OHCHR, there are nine "core" international human rights instruments and several optional protocols, the core instruments being the conventions and covenants on racial discrimination (ICERD), civil and political rights (ICCPR), economic, social and cultural rights (ICESCR), discrimination against women (CEDAW), torture (CAT), the rights of the child (CRC), migrant workers (ICMW), enforced disappearance (CPED), and the rights of persons with disabilities (CRPD). Several more human rights instruments exist beyond these.
https://en.wikipedia.org/wiki?curid=15079
Indian removal Indian removal was a forced migration in the 19th century whereby Native Americans were forced by the United States government to leave their ancestral homelands in the eastern United States for lands west of the Mississippi River, specifically a designated Indian Territory (roughly, modern Oklahoma). The Indian Removal Act, the key law that forced the removal of the Indians, was signed by Andrew Jackson in 1830. Jackson took a hard line on Indian removal, but the law was put into effect primarily under the Martin Van Buren administration. Indian removal was a consequence of actions first by European settlers to North America in the colonial period, then by the United States government and its citizens until the mid-20th century. The policy traced its direct origins to the administration of James Monroe, though it addressed conflicts between European Americans and Native Americans that had been occurring since the 17th century and were escalating into the early 19th century as white settlers pushed continually westward. American leaders in the Revolutionary and Early National era debated whether the American Indians should be treated officially as individuals or as nations in their own right. Some of these views are summarized below. In a draft, "Proposed Articles of Confederation", presented to the Continental Congress on May 10, 1775, Benjamin Franklin called for a "perpetual Alliance" with the Indians for the nation about to take birth, especially with the Six Nations of the Iroquois Confederacy. In his Notes on the State of Virginia (1785), Thomas Jefferson defended American Indian culture and marveled at how the tribes of Virginia "never submitted themselves to any laws, any coercive power, any shadow of government" due to their "moral sense of right and wrong". He would later write to the Marquis de Chastellux in 1785, "I believe the Indian then to be in body and mind equal to the whiteman". His desire, as interpreted by Francis Paul Prucha, was for the Native Americans to intermix with European Americans and to become one people. To achieve that end, Jefferson would, as president, offer U.S. citizenship to some Indian nations, and propose offering credit to them to facilitate their trade. President George Washington, in his address to the Seneca nation in 1790, described the pre-Constitutional Indian land sale difficulties as "evils", asserted that the case was now entirely altered, and publicly pledged to uphold their "just rights". In March and April 1792, Washington met with 50 tribal chiefs in Philadelphia—including the Iroquois—to discuss closer friendship between them and the United States. Later that same year, in his Fourth Annual Message to Congress, Washington stressed the need for building peace, trust, and commerce with America's Indian neighbors. In 1795, in his Seventh Annual Message to Congress, Washington intimated that if the U.S. government wanted peace with the Indians, then it must give peace to them, and that if the U.S. wanted raids by Indians to stop, then raids by American "frontier inhabitants" must also stop. The Confederation Congress passed the Northwest Ordinance of 1787, which would serve broadly as a precedent for the manner in which the United States' territorial expansion would occur for years to come, calling for the protection of Indians' "property, rights, and liberty". The U.S. Constitution of 1787 (Article I, Section 8) makes Congress responsible for regulating commerce with the Indian tribes. In 1790, the new U.S.
Congress passed the Indian Nonintercourse Act (renewed and amended in 1793, 1796, 1799, 1802, and 1834) to protect and codify the land rights of recognized tribes. As president, Thomas Jefferson developed a far-reaching Indian policy that had two primary goals. First, the security of the new United States was paramount, so Jefferson wanted to ensure that the Native nations were tightly bound to the United States, and not to other foreign nations. Second, he wanted "to civilize" them into adopting an agricultural, rather than a hunter-gatherer, lifestyle. These goals would be achieved through the development of trade and the signing of treaties. Jefferson initially promoted an American policy that encouraged Native Americans to become assimilated, or "civilized". As president, Jefferson made sustained efforts to win the friendship and cooperation of many Native American tribes, repeatedly articulating his desire for a united nation of both whites and Indians, as in a letter to the Seneca spiritual leader Handsome Lake dated November 3, 1802. When a delegation from the Upper Towns of the Cherokee Nation lobbied Jefferson for the full and equal citizenship George Washington had promised to Indians living in American territory, his response indicated that he was willing to accommodate citizenship for those Indian nations that sought it. In his Eighth Annual Message to Congress on November 8, 1808, he presented to the nation a vision of white and Indian unity. As some of Jefferson's other writings illustrate, however, he was ambivalent about Indian assimilation, even going so far as to use the words "exterminate" and "extirpate" regarding tribes that resisted American expansion and were willing to fight to defend their lands. Jefferson's intention was to change Indian lifestyles from hunting and gathering to farming, largely through "the decrease of game rendering their subsistence by hunting insufficient". He expected that the switch to agriculture would make them dependent on white Americans for trade goods and therefore more likely to give up their land in exchange, or else be removed to lands west of the Mississippi. He outlined this strategy in a private 1803 letter to William Henry Harrison; elsewhere in the same letter, Jefferson spoke of protecting the Indians from injustices perpetrated by whites. By the terms of the treaty of February 27, 1819, the U.S. government would again offer citizenship to the Cherokees who lived east of the Mississippi River, along with 640 acres of land per family. Native American land was sometimes purchased, either via a treaty or under duress. The idea of land exchange, that is, that Native Americans would give up their land east of the Mississippi in exchange for a similar amount of territory west of the river, was first proposed by Jefferson in 1803 and was first incorporated into treaties in 1817, years after the Jefferson presidency. The Indian Removal Act of 1830 incorporated this concept. Under President James Monroe, Secretary of War John C. Calhoun devised the first plans for Indian removal. By late 1824, Monroe approved Calhoun's plans and, in a special message to the Senate on January 27, 1825, requested the creation of the Arkansaw Territory and Indian Territory. The Indians east of the Mississippi were to voluntarily exchange their lands for lands west of the river. The Senate accepted Monroe's request and asked Calhoun to draft a bill, which was killed in the House of Representatives by the Georgia delegation. 
President John Quincy Adams continued the Calhoun–Monroe policy and was determined to remove the Indians by non-forceful means, but Georgia refused to submit to Adams' request, forcing Adams to make a treaty with the Cherokees granting Georgia the Cherokee lands. On July 26, 1827, the Cherokee Nation adopted a written constitution modeled after that of the United States which declared they were an independent nation with jurisdiction over their own lands. Georgia contended that it would not countenance a sovereign state within its own territory, and proceeded to assert its authority over Cherokee territory. When Andrew Jackson became president in 1829 as the candidate of the newly organized Democratic Party, he agreed that the Indians should be forced to exchange their eastern lands for western lands and relocate to them, and his government took a hard line on the policy, enforcing Indian removal vigorously. Jackson abandoned his predecessors' practice of treating different Indian groups as separate nations. Instead, he aggressively pursued plans against all Indian tribes which claimed constitutional sovereignty and independence from state laws, and which were based east of the Mississippi River. They were to be removed to reservations in Indian Territory west of the Mississippi (now Oklahoma), where their laws could be sovereign without any state interference. At Jackson's request, the United States Congress opened a debate on an Indian Removal Bill. After fierce disagreements, the Senate passed the measure 28–19 and the House 102–97. Jackson signed the legislation into law on May 30, 1830. In 1830, the majority of the "Five Civilized Tribes" (the Chickasaw, Choctaw, Creek, Seminole, and Cherokee) were living east of the Mississippi. The Indian Removal Act of 1830 implemented the federal government's policy towards the Indian populations, which called for moving Native American tribes living east of the Mississippi River to lands west of the river. While it did not authorize the forced removal of the indigenous tribes, it authorized the president to negotiate land exchange treaties with tribes located in lands of the United States. On September 27, 1830, the Choctaw signed the Treaty of Dancing Rabbit Creek and, by concession, became the first Native American tribe to be removed. The agreement represented one of the largest transfers of land signed between the U.S. government and Native Americans without being instigated by warfare. By the treaty, the Choctaw signed away their remaining traditional homelands, opening them up for European-American settlement in Mississippi. When the Choctaw reached Little Rock, a Choctaw chief referred to the trek as a "trail of tears and death". In 1831, Alexis de Tocqueville, the French historian and political thinker, witnessed an exhausted group of Choctaw men, women and children emerging from the forest during an exceptionally cold winter near Memphis, Tennessee, on their way to the Mississippi to be loaded onto a steamboat, and recorded his impressions of the scene. While the Indian Removal Act made the removal of the tribes voluntary in principle, it was often abused by government officials. The best-known example is the Treaty of New Echota, which was negotiated and signed by a small faction of only twenty Cherokee tribal members, not the tribal leadership, on December 29, 1835. Most of the Cherokees later blamed the faction and the treaty for the forced relocation of the tribe in 1838. 
An estimated 4,000 Cherokees died in the march, now known as the Trail of Tears. Missionary organizer Jeremiah Evarts urged the Cherokee Nation to take their case to the U.S. Supreme Court. The Marshall court heard the case in "Cherokee Nation v. Georgia" (1831), but declined to rule on its merits, instead declaring that the Native American tribes were not sovereign nations and so lacked standing to "maintain an action" in the courts of the United States. In "Worcester v. Georgia" (1832), the court held, in an opinion written by Chief Justice Marshall, that individual states had no authority in American Indian affairs. Yet the state of Georgia defied the Supreme Court ruling, and the desire of white settlers and land speculators for Indian lands continued unabated. Some whites claimed that the Indian presence was a threat to peace and security; the Georgia legislature passed a law forbidding whites, after March 31, 1831, from living on Indian territory without a license from the state, in order to exclude white missionaries who opposed Indian removal. In 1835, the Seminole people refused to leave their lands in Florida, leading to the Second Seminole War. Osceola was a war leader of the Seminole in their fight against removal. Based in the Everglades of Florida, Osceola and his band used surprise attacks to defeat the U.S. Army in many battles. In 1837, Osceola was seized by deceit, upon the orders of U.S. General Thomas Jesup, when he came under a flag of truce to negotiate a peace near Fort Peyton. Osceola died in prison of illness. The war would result in over 1,500 U.S. deaths and cost the government $20 million. Some Seminole traveled deeper into the Everglades, while others moved west. Removal continued in the West, and numerous wars over land ensued. In the aftermath of the Treaty of Fort Jackson and the Treaty of Washington, the Muscogee were confined to a small strip of land in present-day east central Alabama. Following the Indian Removal Act, in 1832 the Creek National Council signed the Treaty of Cusseta, ceding their remaining lands east of the Mississippi to the U.S. and accepting relocation to the Indian Territory. Most Muscogee were removed to Indian Territory during the Trail of Tears in 1834, although some remained behind. Unlike other tribes, who exchanged land grants, the Chickasaw were to receive mostly financial compensation of $3 million from the United States for their lands east of the Mississippi River. In 1836, after a bitter five-year debate, the Chickasaw reached an agreement to purchase land from the previously removed Choctaw, paying them $530,000 for the westernmost part of the Choctaw land. Most of the Chickasaw moved in 1837–1838. The $3,000,000 that the U.S. owed the Chickasaw went unpaid for nearly 30 years. In the end, the Five Civilized Tribes were resettled in the new Indian Territory in modern-day Oklahoma. The Cherokee occupied the northeast corner of the Territory, as well as a strip of land seventy miles wide in Kansas along the border between Kansas and the Territory. Some indigenous nations resisted forced migration more strongly. The few who stayed behind eventually formed tribal groups, including the Eastern Band of Cherokee, based in North Carolina, the Mississippi Band of Choctaw Indians, the Seminole Tribe of Florida, and the Creeks in Alabama, including the Poarch Band. Tribes in the Old Northwest were far smaller and more fragmented than the Five Civilized Tribes, so the treaty and emigration process was more piecemeal. 
Bands of Shawnee, Ottawa, Potawatomi, Sauk, and Meskwaki (Fox) signed treaties and relocated to the Indian Territory. In 1832, a Sauk leader named Black Hawk led a band of Sauk and Fox back to their lands in Illinois; in the ensuing Black Hawk War, the U.S. Army and Illinois militia defeated Black Hawk and his warriors, resulting in the Sauk and Fox being relocated into what would become present-day Iowa. Tribes further to the east, such as the already displaced Lenape (or Delaware tribe), as well as the Kickapoo and Shawnee, were removed from Indiana, Michigan, and Ohio in the 1820s. The Potawatomi were forced out in late 1838 and resettled in Kansas Territory. Many Miami were resettled to Indian Territory in the 1840s. Communities in present-day Ohio were forced to move to Louisiana, which was then controlled by Spain. By the terms of the Second Treaty of Buffalo Creek (1838), the Senecas transferred all their land in New York, excepting one small reservation, in exchange for 200,000 acres of land in Indian Territory. The U.S. federal government would be responsible for the removal of those Senecas who opted to go west, while the Ogden Land Company would acquire their lands in New York. The lands were sold by government officials, however, and the money deposited in the U.S. Treasury. The Senecas asserted that they had been defrauded, and sued for redress in the U.S. Court of Claims. The case was not resolved until 1898, when the United States awarded $1,998,714.46 in compensation to "the New York Indians". In 1842 and 1857, the U.S. signed treaties with the Senecas and the Tonawanda Senecas, respectively. Under the treaty of 1857, the Tonawandas renounced all claim to lands west of the Mississippi in exchange for the right to buy back the lands of the Tonawanda reservation from the Ogden Land Company. Over a century later, the Senecas purchased a nine-acre plot (part of their original reservation) in downtown Buffalo to build the "Seneca Buffalo Creek Casino". Historical views of Indian removal have been re-evaluated since the era of the policy. Its widespread acceptance at the time, due in part to the general populace's embrace of the concept of Manifest Destiny, has since given way to considerably harsher assessments. Descriptions such as "paternalism", ethnic cleansing, and even genocide have been ascribed by historians past and present to the motivation behind the removals. Andrew Jackson's reputation has suffered because of his treatment of the Indians. Historians who admire Jackson's strong presidential leadership, such as Arthur Schlesinger, Jr., tended to skip over the Indian question with a footnote. Writing in 1969, Francis Paul Prucha argued that Jackson's removal of the Five Civilized Tribes from the very hostile white environment of the Old South to Oklahoma probably saved their very existence. In the 1970s, however, Jackson came under sharp attack from writers such as Michael Paul Rogin and Howard Zinn, chiefly on this issue. Zinn called him an "exterminator of Indians"; by contrast, Paul R. Bartrop and Steven Leonard Jacobs argue that Jackson's policies did not meet the criteria for genocide or cultural genocide.
https://en.wikipedia.org/wiki?curid=15080
Green Party (Ireland) The Green Party (An Comhaontas Glas, literally "Green Alliance") is a green political party that operates in Ireland, in both the Republic of Ireland and Northern Ireland. It was founded as the Ecology Party of Ireland in 1981 by Dublin teacher Christopher Fettes. The party became the Green Alliance in 1983 and adopted its current English name in 1987, while the Irish name was kept unchanged. Its leader is Eamon Ryan, its deputy leader is Catherine Martin and its chairperson is Hazel Chu. Green Party candidates have been elected to most levels of representation: local (in the Republic), Dáil Éireann, the Northern Ireland Assembly and the European Parliament. The Green Party first entered the Dáil in 1989. It has served in the Irish government twice, first from 2007 to 2011 as junior partner in a coalition with Fianna Fáil. The party suffered a wipeout in the February 2011 election, losing all six of its TDs. In the February 2016 election, it returned to the Dáil with two seats. Following this, Grace O'Sullivan was elected to the Seanad on 26 April 2016, and Joe O'Brien was elected to Dáil Éireann in the 2019 Dublin Fingal by-election. In the 2020 general election the party had its best result ever, securing 12 TDs and becoming the fourth-largest party in Ireland, before entering into government a second time in coalition with Fianna Fáil and Fine Gael. The Green Party began life as the Ecology Party, with Christopher Fettes serving as the party's first chairperson. The party's first public appearance, at which it announced that it would be contesting the November 1982 general election, was almost painfully humble: it was attended by the party's seven election candidates, 20 party supporters, and a single journalist. Fettes had opened the meeting by noting that the party did not expect to win any seats. Willy Clingan, the journalist present, recalled that "The Ecology Party introduced its seven election candidates at the nicest and most endearingly honest press conference of the whole campaign". The Ecology Party took 0.2% of the vote that year. Following a name change to the "Green Alliance", it contested the 1984 European elections, with founding member Roger Garland winning 1.9% in the Dublin constituency. The following year, it won its first seat when Marcus Counihan was elected to Killarney Urban District Council at the 1985 local elections, buoyed by winning 5,200 first-preference votes as a European candidate in Dublin the previous year. The party nationally ran 34 candidates and won 0.6% of the vote. The party continued to struggle until the 1989 general election, when the Green Party (as it was now named) won its first seat in Dáil Éireann, with Roger Garland elected in Dublin South. Garland lost his seat at the 1992 general election, while Trevor Sargent gained a seat in Dublin North. In the 1994 European election, Patricia McKenna topped the poll in the Dublin constituency and Nuala Ahern won a seat in Leinster. They retained their European Parliament seats in the 1999 European election, although the party lost five councillors in local elections held that year despite an increase in its vote. At the 1997 general election, the party gained a seat when John Gormley won a Dáil seat in Dublin South-East. At the 2002 general election the party made a breakthrough, getting six Teachtaí Dála (TDs) elected to the Dáil with 4% of the national vote. However, in the 2004 European election, the party lost both of its European Parliament seats. 
In the 2004 local elections, it increased its number of councillors at county level from 8 to 18 (out of 883) and at town council level from 5 to 14 (out of 744). The party gained its first representation in the Northern Ireland Assembly in 2007, the Green Party in Northern Ireland having become a regional branch of the party the previous year. The Green Party entered government for the first time after the 2007 general election, held on 24 May. Although its share of first-preference votes increased at the election, the party failed to increase the number of TDs returned. Mary White won a seat for the first time in Carlow–Kilkenny; however, Dan Boyle lost his seat in Cork South-Central. The party had approached the 2007 general election on an independent platform, ruling out no coalition partners while expressing its preference for an alternative to the outgoing coalition of Fianna Fáil and the Progressive Democrats. Neither the outgoing government nor an alternative of Fine Gael, Labour and the Green Party had sufficient seats to form a majority. Fine Gael ruled out a coalition arrangement with Sinn Féin, opening the way for Green Party negotiations with Fianna Fáil. Before the negotiations began, Ciarán Cuffe TD wrote on his blog that "a deal with Fianna Fáil would be a deal with the devil… and [the Green Party would be] decimated as a Party". The negotiations were undertaken by Donall Geoghegan (the party's general secretary), Dan Boyle and the then party Chair John Gormley. The Green Party walked out after six days; this, Geoghegan later said, was owing to there not being "enough in [the deal] to allow [the Green Party] to continue". The negotiations restarted on 11 June; a draft programme for government was agreed the next day, which under party rules needed 66% of members to endorse it at a special convention. On 13 June 2007, Green members at the Mansion House in Dublin voted 86% in favour (441 to 67, with 2 spoilt votes) of entering coalition with Fianna Fáil. The following day, the six Green Party TDs voted for the re-election of Bertie Ahern as Taoiseach. New party leader John Gormley was appointed as Minister for the Environment, Heritage and Local Government and Eamon Ryan was appointed as Minister for Communications, Energy and Natural Resources. Trevor Sargent was named Minister of State for Food and Horticulture. Before its entry into government, the Green Party had been a vocal supporter of the Shell to Sea movement, the campaign to reroute the M3 motorway away from Tara and (to a lesser extent) the campaign to end United States military use of Shannon Airport. After the party entered government there were no substantive changes in government policy on these issues, which meant that Eamon Ryan oversaw the Corrib gas project while he was in office. The Green Party had, at its last annual conference, made an inquiry into the irregularities surrounding the project (see Corrib gas controversy) a precondition of entering government, but changed its stance during post-election negotiations with Fianna Fáil. The 2008 budget, announced on 6 December 2007, did not include a carbon levy on fuels such as petrol, diesel and home heating oil, which the Green Party had sought before the election. A carbon levy was, however, introduced in the 2010 budget. 
The 2008 budget did include a separate carbon budget, announced by Gormley, which introduced a new energy-efficiency tax credit, a ban on incandescent bulbs from January 2009, a tax scheme incentivising commuters' purchases of bicycles, and a new scale of vehicle registration tax based on carbon emissions. At a special convention on 19 January 2008, the party voted 63.5% in favour of supporting the Treaty of Lisbon; this fell short of the party's two-thirds majority requirement for policy issues. As a result, the Green Party did not have an official campaign in the first Lisbon Treaty referendum, although individual members were involved on different sides. The referendum did not pass in 2008, and following the Irish government's negotiation with EU member states of additional legal guarantees and assurances, the Green Party held another special convention meeting in Dublin on 18 July 2009 to decide its position on the second Lisbon referendum. Precisely two-thirds of party members present voted to campaign for a 'Yes' in the referendum. This was the first time in the party's history that it had campaigned in favour of a European treaty. The government's response to the post-2008 banking crisis significantly affected the party's support, and it suffered at the 2009 local elections, returning with only three County Council seats in total and losing its entire traditional Dublin base, with the exception of a Town Council seat in Balbriggan. Déirdre de Búrca, one of two Green Senators nominated by Taoiseach Bertie Ahern in 2007, resigned from the party and her seat in 2010, in part owing to the party's inability to secure her a job in the European Commission. On 23 February 2010, Trevor Sargent resigned as Minister of State for Food and Horticulture owing to allegations that he had contacted the Gardaí about a criminal case involving a constituent. On 23 March 2010, Ciarán Cuffe was appointed as Minister of State for Horticulture, Sustainable Travel, Planning and Heritage, while the party gained a further junior ministerial position with Mary White appointed as Minister of State for Equality, Human Rights and Integration. The Green Party supported the passage of legislation for EC–ECB–IMF financial support for Ireland's bank bailout. On 19 January 2011, the party derailed Taoiseach Brian Cowen's plans to reshuffle his cabinet when it refused to endorse Cowen's intended replacement ministers, forcing Cowen to redistribute the vacant portfolios among incumbent ministers. The Greens were angered at not having been consulted about this effort, and went as far as to threaten to pull out of the coalition unless Cowen set a firm date for an election that spring. He ultimately set the date for 11 March. On 23 January 2011, the Green Party met with Cowen following his resignation as leader of senior coalition partner Fianna Fáil the previous afternoon. The Green Party then announced it was breaking off the coalition and going into opposition with immediate effect, with party leader John Gormley announcing the withdrawal at a press conference. The government ministerial posts of Gormley and Ryan were reassigned to Fianna Fáil ministers Éamon Ó Cuív and Pat Carey respectively. Green Ministers of State Ciarán Cuffe and Mary White also resigned from their roles. 
In almost four years in government, from 2007 to 2011, the Green Party contributed to the passage of civil partnership for same-sex couples, the introduction of major planning reform, a major increase in renewable energy output, progressive budgets, and a nationwide scheme of home insulation retrofitting. The party suffered a wipeout at the 2011 general election, with all six of its TDs, including former Ministers John Gormley and Eamon Ryan, losing their seats. Three of its six incumbent TDs lost their deposits. The party's share of the vote fell below 2%, meaning that it could not reclaim election expenses, and its lack of parliamentary representation led to the ending of state funding for the party. The party's candidates in the 2011 election to the Seanad were Dan Boyle and Niall Ó Brolcháin; neither was elected, and as a result, for the first time since 1989 the Green Party had no representatives in the Oireachtas. Eamon Ryan was elected as party leader on 27 May 2011, succeeding John Gormley. Catherine Martin was later appointed deputy leader, while Ciarán Cuffe and Mark Dearey were also placed on the party's front bench. In the 2014 European election the party received 4.9% of the vote nationally (an increase of 3% on the 2009 result), failing to return a candidate to the European Parliament. In the 2014 local elections the party received 1.6% of the vote nationally; 12 candidates were elected to County Councils, an increase of nine. At the 2016 general election the Green Party gained two seats, becoming the first Irish political party to lose all of its seats at one election and win seats back at the next. In the subsequent election to Seanad Éireann, Grace O'Sullivan became the first elected Green Party Senator, winning a seat on the Agricultural Panel. She established the Civil Engagement group with five Independent Senators. On 30 May 2016, the Green Party joined with the Social Democrats to form a technical group in the Dáil. In the 2019 local elections the Green Party saw significant gains, increasing its number of councillors from 12 to 49 and becoming the second-largest party on Dublin City Council. At the concurrent 2019 European Parliament election the party received 11.4% of the vote nationally (an increase of 6.5% on the 2014 result), the highest share it has won at any election to date. As a result, the Greens are represented in the European Parliament for the first time since 2004 by two MEPs: former TD Ciarán Cuffe in Dublin and Senator Grace O'Sullivan in the South constituency. On 1 November 2019, Pippa Hackett was elected to Seanad Éireann. She filled the seat left vacant by Grace O'Sullivan after the 2019 European Parliament election. Joe O'Brien was elected to Dáil Éireann on 29 November 2019 in the 2019 Dublin Fingal by-election. He became the party's first TD to win a by-election and the party's third TD in the 32nd Dáil. In the 2020 general election, the party had its best result ever, earning 7.1% of the first-preference votes and returning 12 TDs, up from three. It became the fourth-largest party in the Dáil and entered government in coalition with Fianna Fáil and Fine Gael. In the 2020 Seanad election the party returned two senators. A further two senators were nominated by the Taoiseach, Micheál Martin, bringing the party's total representation in the Oireachtas to 16. The Green Party has seven "founding principles". 
Broadly, these founding principles reflect the "Four Pillars" of Green Politics observed by the majority of Green Parties internationally: Ecological wisdom, Social justice, Grassroots democracy and Nonviolence. They also reflect the Six guiding principles of the Global Greens, which also include Respect for diversity as a principle. While strongly associated with environmentalist policies, the party also has policies covering all other key areas. These include: protection of the Irish language, lowering the voting age in Ireland to 16, a directly elected Seanad, support for universal healthcare, and a constitutional amendment which guarantees that the water of Ireland will never be privatised. The party also advocates that terminally ill people should have the right to legally choose assisted dying, on which subject it believes "provisions should apply only to those with a terminal illness which is likely to result in death within six months". It also states that "such a right would only apply where the person has a clear and settled intention to end their own life which is proved by making, and signing, a written declaration to that effect. Such a declaration must be countersigned by two qualified doctors". The National Executive Committee is the organising committee of the party. It comprises the party leader Eamon Ryan, the deputy leader Catherine Martin, the Chair Hazel Chu, the Young Greens representative, the Treasurer and ten members elected annually at the party convention. The party did not have a national leader until 2001. At a special "Leadership Convention" in Kilkenny on 6 October 2001, Trevor Sargent was elected the first official leader of the Green Party. He was re-elected to this position in 2003 and again in 2005. The party's constitution requires that a leadership election be held within six months of a general election. Sargent resigned the leadership in the wake of the 2007 general election to the 30th Dáil. During the campaign, Sargent had promised that he would not lead the party into government with Fianna Fáil. At the election the party retained six Dáil seats, making it the most likely partner for Fianna Fáil. Sargent and the party negotiated a coalition government; at the 12 June 2007 membership meeting to approve the agreement, he announced his resignation as leader. In the subsequent leadership election, John Gormley became the new leader on 17 July 2007, defeating Patricia McKenna by 478 votes to 263. Mary White was subsequently elected as deputy leader. Gormley served as Minister for the Environment, Heritage and Local Government from July 2007 until the Green Party's decision to exit government in December 2010. Following the election defeats of 2011, Gormley announced his intention not to seek another term as Green Party leader. Eamon Ryan was elected as the new party leader, over party colleagues Phil Kearney and Cllr Malcolm Noonan, in a postal ballot of party members in May 2011. Monaghan-based former councillor Catherine Martin defeated Down-based Dr John Barry and former Senator Mark Dearey for the post of deputy leader on 11 June 2011 during the party's annual convention. Roderic O'Gorman was elected party chairperson. The Green Party lost all its Dáil seats in the 2011 general election. Party Chairman Dan Boyle and Déirdre de Búrca were nominated by the Taoiseach to Seanad Éireann after the formation of the Fianna Fáil–Progressive Democrats–Green Party government in 2007, and Niall Ó Brolcháin was elected in December 2009. 
De Búrca resigned in February 2010, and was replaced by Mark Dearey. Neither Boyle nor Ó Brolcháin was re-elected to Seanad Éireann in the Seanad election of 2011, leaving the Green Party without Oireachtas representation until the 2016 general election, in which it regained two Dáil seats. The Green Party is organised throughout the island of Ireland, with regional structures in both the Republic of Ireland and Northern Ireland. The Green Party in Northern Ireland voted to become a regional partner of the Green Party in Ireland in 2005 at its annual convention, and again in a postal ballot in March 2006. Brian Wilson, formerly a councillor for the Alliance Party, won the Green Party's first seat in the Northern Ireland Assembly in the 2007 election. Steven Agnew held that seat in the 2011 election.
https://en.wikipedia.org/wiki?curid=15081
Iconoclasm Iconoclasm is the social belief in the importance of the destruction of icons and other images or monuments, most frequently for religious or political reasons. People who engage in or support iconoclasm are called iconoclasts, a term that has come to be figuratively applied to any individual who challenges "cherished beliefs or venerated institutions on the grounds that they are erroneous or pernicious". Conversely, one who reveres or venerates religious images is called an "iconolater" (by iconoclasts); in a Byzantine context, such a person is called an "iconodule" or "iconophile." The term does not generally encompass the destruction of the images of a specific ruler after his or her death or overthrow ("damnatio memoriae"). Iconoclasm may be carried out by adherents of a different religion, but it is more often the result of sectarian disputes between factions of the same religion. Within Christianity, iconoclasm has generally been motivated by those who adopt a strict interpretation of the Ten Commandments, which forbid the production and worship of "graven images or any likeness of anything". The later Church Fathers identified Jews, who were fundamentally iconoclasts, with heresy and saw deviations from orthodox Christianity and opposition to the veneration of images as heresies that were essentially "Jewish in spirit". Degrees of iconoclasm vary greatly among religions and their branches. Islam, in general, tends to be more iconoclastic than Christianity, with Sunni Islam being more iconoclastic than Shia Islam. In the Bronze Age, the most significant episode of iconoclasm occurred in Egypt during the Amarna Period, when Akhenaten, based in his new capital of Akhetaten, instituted a significant shift in Egyptian artistic styles alongside a campaign of intolerance towards the traditional gods and a new emphasis on a state monolatristic tradition focused on the god Aten, the Sun disk. Many temples and monuments were destroyed as a result: In rebellion against the old religion and the powerful priests of Amun, Akhenaten ordered the eradication of all of Egypt's traditional gods. He sent royal officials to chisel out and destroy every reference to Amun and the names of other deities on tombs, temple walls, and cartouches to instill in the people that the Aten was the one true god. Public references to Akhenaten were destroyed soon after his death. Comparing the ancient Egyptians with the Israelites, Jan Assmann writes: For Egypt, the greatest horror was the destruction or abduction of the cult images. In the eyes of the Israelites, the erection of images meant the destruction of divine presence; in the eyes of the Egyptians, this same effect was attained by the destruction of images. In Egypt, iconoclasm was the most terrible religious crime; in Israel, the most terrible religious crime was idolatry. In this respect Osarseph alias Akhenaten, the iconoclast, and the Golden Calf, the paragon of idolatry, correspond to each other inversely, and it is strange that Aaron could so easily avoid the role of the religious criminal. It is more than probable that these traditions evolved under mutual influence. In this respect, Moses and Akhenaten became, after all, closely related. Although widespread use of Christian iconography only began as Christianity increasingly spread among gentiles after the legalization of Christianity by Roman Emperor Constantine (c. 312 AD), scattered expressions of opposition to the use of images were reported (e.g. the Spanish Synod of Elvira). 
The period after the reign of Byzantine Emperor Justinian (527–565) evidently saw a huge increase in the use of images, both in volume and quality, and a gathering aniconic reaction. One notable change within the Byzantine Empire came in 695, when Justinian II's government added a full-face image of Christ on the obverse of imperial gold coins. The change caused the Caliph Abd al-Malik to stop his earlier adoption of Byzantine coin types. He started a purely Islamic coinage with lettering only. A letter by the Patriarch Germanus written before 726 to two Iconoclast bishops says that "now whole towns and multitudes of people are in considerable agitation over this matter", but there is little written evidence of the debate. Government-led iconoclasm began with Byzantine Emperor Leo III, who issued a series of edicts against the veneration of images between 726 and 730. The religious conflict created political and economic divisions in Byzantine society. Iconoclasm was generally supported by the Eastern, poorer, non-Greek peoples of the Empire, who had to deal frequently with raids from the new Muslim Empire. On the other hand, the wealthier Greeks of Constantinople and the peoples of the Balkan and Italian provinces strongly opposed it. During the Reformation, the first iconoclastic wave happened in Wittenberg in the early 1520s under the reformers Thomas Müntzer and Andreas Karlstadt. It prompted Martin Luther, then in hiding as "Junker Jörg", to intervene. Luther argued that the mental picturing of Christ when reading the Scriptures was similar in character to artistic renderings of Christ. In contrast to the Lutherans, who favoured sacred art in their churches and homes, the Reformed (Calvinist) leaders, in particular Andreas Karlstadt, Huldrych Zwingli and John Calvin, encouraged the removal of religious images by invoking the Decalogue's prohibition of idolatry and the manufacture of graven (sculpted) images of God. As a result, individuals attacked statues and images. In most cases, however, civil authorities removed images in an orderly manner in the newly Reformed Protestant cities and territories of Europe. Significant iconoclastic riots took place in Basel (1529), Zurich (1523), Copenhagen (1530), Münster (1534), Geneva (1535), Augsburg (1537), Scotland (1559), Rouen (1560) and Saintes and La Rochelle (1562). Calvinist iconoclasm in Europe "provoked reactive riots by Lutheran mobs" in Germany and "antagonized the neighbouring Eastern Orthodox" in the Baltic region. The Seventeen Provinces (now the Netherlands, Belgium and parts of Northern France) were disrupted by widespread Calvinist iconoclasm in the summer of 1566. This is called the "Beeldenstorm" and began with the destruction of the statuary of the Monastery of Saint Lawrence in Steenvoorde after a "Hagenpreek", or field sermon, by Sebastiaan Matte. Hundreds of other attacks included the sacking of the Monastery of Saint Anthony after a sermon by Jacob de Buysere. The "Beeldenstorm" marked the start of the revolution against the Spanish forces and the Catholic Church. Iconoclastic belief caused havoc throughout Europe. In 1523, under the influence of the Swiss reformer Huldrych Zwingli, a vast number of his followers came to view themselves as being involved in a spiritual community that in matters of faith should obey neither the visible Church nor lay authorities. 
The Reformation in England, which started during the reign of Henry VIII and was urged on by reformers such as Hugh Latimer and Thomas Cranmer, saw limited official action taken against religious images in churches in the late 1530s. Henry's young son, Edward VI, came to the throne in 1547 and, under Cranmer's guidance, issued Injunctions for Religious Reforms in the same year, followed in 1550 by an Act of Parliament "for the abolition and putting away of divers books and images". During the English Civil War, Bishop Joseph Hall of Norwich described the events of 1643, when troops and citizens, encouraged by a Parliamentary ordinance against superstition and idolatry, behaved thus: Lord what work was here! What clattering of glasses! What beating down of walls! What tearing up of monuments! What pulling down of seats! What wresting out of irons and brass from the windows! What defacing of arms! What demolishing of curious stonework! What tooting and piping upon organ pipes! And what a hideous triumph in the market-place before all the country, when all the mangled organ pipes, vestments, both copes and surplices, together with the leaden cross which had newly been sawn down from the Green-yard pulpit and the service-books and singing books that could be carried to the fire in the public market-place were heaped together. Protestant Christianity was not uniformly hostile to the use of religious images. Martin Luther taught the "importance of images as tools for instruction and aids to devotion", stating: "If it is not a sin but good to have the image of Christ in my heart, why should it be a sin to have it in my eyes?" Lutheran churches retained ornate church interiors with a prominent crucifix, reflecting their high view of the real presence of Christ in the Eucharist. As such, "Lutheran worship became a complex ritual choreography set in a richly furnished church interior." For Lutherans, "the Reformation renewed rather than removed the religious image", a point also made by the Lutheran scholar Jeremiah Ohl. The Ottoman Sultan Suleiman the Magnificent, who had pragmatic reasons to support the Dutch Revolt (the rebels, like himself, were fighting against Spain), also completely approved of their act of "destroying idols", which accorded well with Muslim teachings. A little later in Dutch history, in 1627, the artist Johannes van der Beeck was arrested and tortured, charged with being a religious non-conformist and a blasphemer, heretic, atheist, and Satanist. The 25 January 1628 judgment from five noted advocates of The Hague pronounced him guilty of "blasphemy against God and avowed atheism, at the same time as leading a frightful and pernicious lifestyle". At the court's order his paintings were burned, and only a few of them survive. In the history of Islam, the act of removing idols from the Ka'ba in Mecca has great symbolic and historic importance for all believers. In general, Muslim societies have avoided the depiction of living beings (animals and humans) within such sacred spaces as mosques and madrasahs. This opposition to figural representation is based not on the Qur'an, but on traditions contained within the Hadith. The prohibition of figuration has not always been extended to the secular sphere, and a robust tradition of figural representation exists within Muslim art. However, Western authors have tended to perceive "a long, culturally determined, and unchanging tradition of violent iconoclastic acts" within Islamic society. 
The first act of Muslim iconoclasm dates to the beginning of Islam, in 630, when the various statues of Arabian deities housed in the Kaaba in Mecca were destroyed. There is a tradition that Muhammad spared a fresco of Mary and Jesus. This act was intended to bring an end to the idolatry which, in the Muslim view, characterized Jahiliyya. The destruction of the idols of Mecca did not, however, determine the treatment of other religious communities living under Muslim rule after the expansion of the caliphate. Most Christians under Muslim rule, for example, continued to produce icons and to decorate their churches as they wished. A major exception to this pattern of tolerance in early Islamic history was the "Edict of Yazīd", issued by the Umayyad caliph Yazid II in 722–723. This edict ordered the destruction of crosses and Christian images within the territory of the caliphate. Researchers have discovered evidence that the order was followed, particularly in present-day Jordan, where archaeological findings show the removal of images from the mosaic floors of some, although not all, of the churches that stood at this time. But Yazīd's iconoclastic policies were not continued by his successors, and the Christian communities of the Levant continued to make icons without significant interruption from the sixth century to the ninth. Al-Maqrīzī, writing in the 15th century, attributes the missing nose on the Great Sphinx of Giza to iconoclasm by Muhammad Sa'im al-Dahr, a Sufi Muslim, in the mid-1300s. He was reportedly outraged by local Muslims making offerings to the Great Sphinx in the hope of controlling the flood cycle, and he was later executed for vandalism. However, whether this was actually the cause of the missing nose has been debated by historians. Mark Lehner, who performed an archaeological study, concluded that it was broken with instruments at an unknown earlier time between the 3rd and 10th centuries. Certain conquering Muslim armies have used local temples or houses of worship as mosques. An example is Hagia Sophia in Istanbul (formerly Constantinople), which was converted into a mosque in 1453. Most icons were desecrated and the rest were covered with plaster. In the 1930s, Hagia Sophia was converted into a museum, and the restoration of the mosaics was undertaken by the American Byzantine Institute beginning in 1932. Certain Muslim denominations continue to pursue iconoclastic agendas. There has been much controversy within Islam over the recent and apparently ongoing destruction of historic sites by Saudi Arabian authorities, prompted by the fear that they could become the subject of "idolatry". A recent act of iconoclasm was the 2001 destruction of the giant Buddhas of Bamyan by the Taliban government then ruling Afghanistan. The act generated worldwide protests and was not supported by other Muslim governments and organizations. It was widely perceived in the Western media as a result of the Muslim prohibition against figural decoration. Such an account overlooks "the coexistence between the Buddhas and the Muslim population that marveled at them for over a millennium" before their destruction. The Buddhas had twice before been attacked, by Nadir Shah and by Aurangzeb. According to the art historian F.B. Flood, analysis of the Taliban's statements regarding the Buddhas suggests that their destruction was motivated more by political than by theological concerns. Taliban spokespeople have given many different explanations of the motives for the destruction. 
During the Tuareg rebellion of 2012, the radical Islamist militia Ansar Dine destroyed various Sufi shrines from the 15th and 16th centuries in the city of Timbuktu, Mali. In 2016, the International Criminal Court (ICC) sentenced Ahmad al-Faqi al-Mahdi, a former member of Ansar Dine, to nine years in prison for this destruction of cultural world heritage. This was the first time that the ICC convicted a person for such a crime. The short-lived Islamic State of Iraq and the Levant carried out iconoclastic attacks such as the destruction of Shia mosques and shrines. Notable incidents include blowing up the Mosque of the Prophet Yunus (Jonah) and destroying the Shrine to Seth in Mosul. In early medieval India, there were numerous recorded instances of temple desecration by Indian kings against rival Indian kingdoms, involving conflicts between devotees of different Hindu deities as well as conflicts between Hindus, Buddhists and Jains. In 642, the Pallava king Narasimhavarman I looted a Ganesha temple in the Chalukyan capital of Vatapi. Around 692, Chalukya armies invaded northern India, where they looted temples of Ganga and Yamuna. In the 8th century, Bengali troops from the Buddhist Pala Empire desecrated temples of Vishnu Vaikuntha, the state deity of Lalitaditya's kingdom in Kashmir. In the early 9th century, Indian Hindu kings from Kanchipuram and the Pandyan king Srimara Srivallabha looted Buddhist temples in Sri Lanka. In the early 10th century, the Pratihara king Herambapala looted an image from a temple in the Sahi kingdom of Kangra; later in the 10th century the image was in turn looted by the Chandela king Yasovarman. In the early 11th century, the Chola king Rajendra I looted temples in a number of neighbouring kingdoms, including Durga and Ganesha temples in the Chalukya Kingdom; Bhairava, Bhairavi and Kali temples in the Kalinga kingdom; a Nandi temple in the Eastern Chalukya kingdom; and a Siva temple in Pala Bengal. In the mid-11th century, the Chola king Rajadhiraja plundered a temple in Kalyani. In the late 11th century, the Hindu king Harsha of Kashmir plundered temples as an institutionalised activity. In the late 12th to early 13th centuries, the Paramara dynasty attacked and plundered Jain temples in Gujarat. In the 1460s, Kapilendra, founder of the Suryavamshi Gajapati dynasty, sacked the Saiva and Vaishnava temples in the Cauvery delta in the course of wars of conquest in the Tamil country. The Vijayanagara king Krishnadevaraya looted a Balakrishna temple in Udayagiri in 1514, and a Vittala temple in Pandharpur in 1520. Perhaps the most notorious episode of iconoclasm in India was Mahmud of Ghazni's attack on the Somnath temple. In 1024, during the reign of Bhima I, the prominent Turkic Muslim ruler Mahmud of Ghazni raided Gujarat, plundering the Somnath temple and breaking its jyotirlinga despite the pleas of Brahmins to spare it. He took away a booty of 20 million dinars. The attack may have been inspired by the belief that an idol of the goddess Manat had been secretly transferred to the temple. According to the Ghaznavid court poet Farrukhi Sistani, who claimed to have accompanied Mahmud on his raid, Somnat (as rendered in Persian) was a garbled version of su-manat, referring to the goddess Manat. According to him, as well as the later Ghaznavid historian Abu Sa'id Gardezi, the images of the other goddesses were destroyed in Arabia, but the one of Manat was secretly sent away to Kathiawar (in modern Gujarat) for safekeeping. 
Since the idol of Manat was an aniconic image of black stone, it could easily have been confused with a lingam at Somnath. Mahmud is said to have broken the idol and taken away parts of it as loot, placing them so that people would walk on them. In his letters to the Caliphate, Mahmud exaggerated the size, wealth and religious significance of the Somnath temple, receiving grandiose titles from the Caliph in return. Historical records compiled by the Muslim historian Maulana Hakim Saiyid Abdul Hai attest to the religious violence during the reign of the Mamluk dynasty ruler Qutb-ud-din Aybak. The first mosque built in Delhi, the "Quwwat al-Islam", was built with the demolished parts of 20 Hindu and Jain temples. This pattern of iconoclasm was common during his reign. During the Delhi Sultanate, a Muslim army led by Malik Kafur, a general of Alauddin Khalji, pursued two violent campaigns into south India between 1309 and 1311, against the three Hindu kingdoms of Deogiri (Maharashtra), Warangal (Telangana) and Madurai (Tamil Nadu). Many temples were plundered, and the Hoysaleswara Temple was destroyed. In Kashmir, Sikandar Shah Miri began expanding his rule and unleashed religious violence that earned him the name "but-shikan", or idol-breaker, a sobriquet reflecting the sheer scale of his desecration and destruction of Hindu and Buddhist temples, shrines, ashrams, hermitages and other holy places in what is now known as Kashmir and its neighboring territories. He destroyed the vast majority of the Hindu and Buddhist temples within his reach in the Kashmir region (north and northwest India). The Hindu text Madala Panji and regional tradition state that Kalapahad attacked and damaged the Konark Sun Temple in 1568. Some of the most dramatic cases of iconoclasm by Muslims are found in parts of India where Hindu and Buddhist temples were razed and mosques erected in their place. Aurangzeb, the 6th Mughal Emperor, destroyed the famous Hindu temples at Varanasi and Mathura. In modern India, the most high-profile case of iconoclasm came in 1992, when Hindu extremists, led by the Vishva Hindu Parishad and Bajrang Dal, destroyed the 430-year-old Babri Mosque in Ayodhya. Revolutions and changes of regime, whether through uprising of the local population, foreign invasion, or a combination of both, are often accompanied by the public destruction of statues and monuments identified with the previous regime. This may also be known as "damnatio memoriae", the ancient Roman practice of official obliteration of the memory of a specific individual. Stricter definitions of "iconoclasm" exclude both types of action, reserving the term for religious or more widely cultural destruction. In many cases, such as Revolutionary Russia or Ancient Egypt, this distinction can be hard to make. Among Roman emperors and other political figures subject to decrees of "damnatio memoriae" were Sejanus, Publius Septimius Geta, and Domitian. Several emperors, such as Domitian and Commodus, had erected numerous statues of themselves during their reigns; these were pulled down and destroyed when they were overthrown. The perception that "damnatio memoriae" in the Classical world was an act of erasing memory has been challenged by scholars, who have argued that it "did not negate historical traces, but created gestures which served to "dishonor" the record of the person and so, in an oblique way, to confirm memory", and was in effect a spectacular display of "pantomime forgetfulness". 
Examining cases of political monument destruction in modern Irish history, Guy Beiner has demonstrated that iconoclastic vandalism often entails subtle expressions of ambiguous remembrance and that, rather than effacing memory, such acts of "decommemorating" effectively preserve memory in obscure forms. Throughout the radical phase of the French Revolution, iconoclasm was supported by members of the government as well as the citizenry. Numerous monuments, religious works, and other historically significant pieces were destroyed in an attempt to eradicate any memory of the Old Regime. At the same time, the republican government felt a responsibility to preserve these works for their historical, aesthetic, and cultural value. One way the republican government succeeded in its paradoxical mission of preserving and destroying symbols of the Old Regime was through the development of museums. During the Revolution, a statue of King Louis XV in the Paris square which until then bore his name was pulled down and destroyed. This was a prelude to the guillotining of his successor Louis XVI on the same site, renamed "Place de la Révolution" (at present Place de la Concorde). The statue of Napoleon on the column at Place Vendôme, Paris, was also the target of iconoclasm several times: destroyed after the Bourbon Restoration, restored by Louis-Philippe, destroyed during the Paris Commune and restored by Adolphe Thiers. The "Chach Nama" records the destruction of temples during the early eighth century, when the Umayyad governor of Damascus, al-Hajjaj ibn Yusuf, mobilized an expedition of 6,000 cavalry under Muhammad bin Qasim in 712. The historian Upendra Thakur has recorded the persecution of Hindus and Buddhists in this period. In 725, Junayad, the governor of Sind, sent his armies to destroy the second Somnath temple. In 1024, the temple was again destroyed by Mahmud of Ghazni, who raided it from across the Thar Desert. The wooden structure was replaced by Kumarapala (r. 1143–72), who rebuilt the temple out of stone. Sultan Sikandar Butshikan of Kashmir (1389–1413) ordered the breaking of all "golden and silver images". Firishta states, "After the emigration of the Bramins, Sikundur ordered all the temples in Kashmeer to be thrown down. Having broken all the images in Kashmeer, (Sikandar) acquired the title of 'Destroyer of Idols'". There have been a number of anti-Buddhist campaigns in Chinese history that led to the destruction of Buddhist temples and images. One of the most notable of these campaigns was the Great Anti-Buddhist Persecution of the Tang dynasty. During and after the Xinhai Revolution, there was widespread destruction of religious and secular images in China. During the Northern Expedition in Guangxi in 1926, Kuomintang General Bai Chongxi led his troops in destroying Buddhist temples and smashing Buddhist images, turning the temples into schools and Kuomintang party headquarters. It was reported that almost all of the viharas in Guangxi were destroyed and the monks were removed. Bai also led a wave of anti-foreignism in Guangxi, attacking Americans, Europeans, and other foreigners, and generally making the province unsafe for foreigners and missionaries. Westerners fled from the province, and some Chinese Christians were also attacked as imperialist agents. The three goals of the movement were anti-foreignism, anti-imperialism and anti-religion. Bai led the anti-religious movement against superstition. 
Huang Shaohong, also a Kuomintang member of the New Guangxi clique, supported Bai's campaign. The anti-religious campaign was agreed upon by all Guangxi Kuomintang members. There was extensive destruction of religious and secular imagery in Tibet after it was invaded and occupied by China. Many religious and secular images were destroyed during the Cultural Revolution of 1966–1976, ostensibly because they were a holdover from China's traditional past (which the Communist regime led by Mao Zedong reviled). The Cultural Revolution included widespread destruction of historic artworks in public places and private collections, whether religious or secular. Objects in state museums were mostly left intact. During and after the October Revolution, widespread destruction of religious and secular imagery took place, as well as the destruction of imagery related to the Imperial family. The Revolution was accompanied by the destruction of monuments to past tsars, as well as of imperial eagles at various locations throughout Russia. According to Christopher Wharton, "In front of a Moscow cathedral, crowds cheered as the enormous statue of Tsar Alexander III was bound with ropes and gradually beaten to the ground. After a considerable amount of time, the statue was decapitated and its remaining parts were broken into rubble". The Soviet Union actively destroyed religious sites, including Russian Orthodox churches and Jewish cemeteries, in order to discourage religious practice and curb the activities of religious groups. During the Hungarian Revolution of 1956 and during the Revolutions of 1989, protesters often attacked and took down sculptures and images of Joseph Stalin, such as the Stalin Monument in Budapest. The fall of Communism in 1989–1991 was also followed by the destruction or removal of statues of Vladimir Lenin and other Communist leaders in the former Soviet Union and in other Eastern Bloc countries. Particularly well known was the destruction of "Iron Felix", the statue of Felix Dzerzhinsky outside the KGB's headquarters. Another statue of Dzerzhinsky was destroyed in a Warsaw square that was named after him during communist rule, but which is now called Bank Square.
https://en.wikipedia.org/wiki?curid=15085
Isaiah Isaiah was the 8th-century BC Israelite prophet after whom the Book of Isaiah is named. Within the text of the Book of Isaiah, Isaiah himself is referred to as "the prophet", but the exact relationship between the Book of Isaiah and any such historical Isaiah is complicated. The traditional view is that all 66 chapters of the book of Isaiah were written by one man, Isaiah, possibly in two periods between 740 BC and c. 686 BC, separated by approximately 15 years, and that the book includes dramatic prophetic declarations about Cyrus the Great, who would act to restore the nation of Israel from Babylonian captivity. Another widely held view is that parts of the first half of the book (chapters 1–39) originated with the historical prophet, interspersed with prose commentaries written in the time of King Josiah a hundred years later, and that the remainder of the book dates from immediately before and immediately after the end of the exile in Babylon, almost two centuries after the time of the historical prophet. The first verse of the Book of Isaiah states that Isaiah prophesied during the reigns of Uzziah (or Azariah), Jotham, Ahaz, and Hezekiah, the kings of Judah. Uzziah's reign lasted 52 years in the middle of the 8th century BC, and Isaiah must have begun his ministry a few years before Uzziah's death, probably in the 740s BC. Isaiah lived until the fourteenth year of the reign of Hezekiah (who died 698 BC), and he may have been a contemporary of Manasseh for some years. Thus Isaiah may have prophesied for as long as 64 years. According to some modern interpretations, Isaiah's wife was called "the prophetess", either because she was endowed with the prophetic gift, like Deborah and Huldah, or simply because she was the "wife of the prophet". They had three sons, naming the eldest Shear-jashub, meaning "A remnant shall return", the next Immanuel, meaning "God with us", and the youngest Maher-Shalal-Hash-Baz, meaning "Spoil quickly, plunder speedily". Soon after this, Shalmaneser V determined to subdue the kingdom of Israel, taking over and destroying Samaria (722 BC). So long as Ahaz reigned, the kingdom of Judah was untouched by the Assyrian power. But when Hezekiah gained the throne, he was encouraged to rebel "against the king of Assyria" and entered into an alliance with the king of Egypt. The king of Assyria threatened the king of Judah and at length invaded the land. Sennacherib (701 BC) led a powerful army into Judah. Hezekiah was reduced to despair and submitted to the Assyrians. But after a brief interval, war broke out again. Again Sennacherib led an army into Judah, one detachment of which threatened Jerusalem. Isaiah on that occasion encouraged Hezekiah to resist the Assyrians, whereupon Sennacherib sent a threatening letter to Hezekiah, which he "spread before the LORD". According to the account in 2 Kings 19 (and its derivative account in 2 Chronicles 32), an angel of God fell on the Assyrian army and 185,000 of its men were killed in one night. "Like Xerxes in Greece, Sennacherib never recovered from the shock of the disaster in Judah. He made no more expeditions against either Southern Palestine or Egypt." The remaining years of Hezekiah's reign were peaceful. Isaiah probably lived to its close, and possibly into the reign of Manasseh. The time and manner of his death are not specified in either the Bible or other primary sources. 
The Talmud [Yevamot 49b] says that he suffered martyrdom by being sawn in two under the orders of Manasseh. According to rabbinic literature, Isaiah was the maternal grandfather of Manasseh. The book of Isaiah, along with the book of Jeremiah, is distinctive in the Hebrew Bible for its direct portrayal of the "wrath of the Lord", as presented, for example, in Isaiah 9:19: "Through the wrath of the Lord of hosts is the land darkened, and the people shall be as the fuel of the fire." The Ascension of Isaiah, a pseudepigraphical Christian text dated to sometime between the end of the 1st century and the beginning of the 3rd, gives a detailed story of Isaiah confronting an evil false prophet and ends with Isaiah being martyred – none of which is attested in the original Biblical account. Gregory of Nyssa (c. 335–395) believed that the Prophet Isaiah "knew more perfectly than all others the mystery of the religion of the Gospel". Jerome (c. 342–420) also lauds the Prophet Isaiah, saying, "He was more of an Evangelist than a Prophet, because he described all of the Mysteries of the Church of Christ so vividly that you would assume he was not prophesying about the future, but rather was composing a history of past events." Of specific note are the songs of the Suffering Servant, which Christians say are a direct prophetic revelation of the nature, purpose, and detail of the death of Jesus Christ. The Book of Isaiah is quoted many times by New Testament writers. Ten of those references are about the Suffering Servant: how he will suffer and die to save many from their sins, be buried in a rich man's tomb, and be a light to the Gentiles. The Gospel of John says that Isaiah "saw Jesus' glory and spoke about him." The Eastern Orthodox Church celebrates Saint Isaiah the Prophet on May 9. The Book of Mormon quotes Jesus Christ as stating that "great are the words of Isaiah", and that all things prophesied by Isaiah have been and will be fulfilled. The Book of Mormon and the Doctrine and Covenants also quote Isaiah more than any other prophet of the Old Testament. Additionally, members of The Church of Jesus Christ of Latter-day Saints consider the founding of the church by Joseph Smith in the 19th century to be a fulfillment of Isaiah 11, the translation of the Book of Mormon to be a fulfillment of Isaiah 29, and the building of Latter-day Saint temples to be a fulfillment of Isaiah 2:2. Isaiah, known by his Arabic name أشعياء (transliterated "Ashiʻyā"), is not mentioned by name in the Quran or the Hadith, but appears frequently as a prophet in Islamic sources such as Qisas Al-Anbiya and Tafsir. Tabari (310/923) provides the typical accounts for Islamic traditions regarding Isaiah. He is further mentioned and accepted as a prophet by other Islamic scholars such as Ibn Kathir, Al-Tha'labi and Kisa'i, and also by modern scholars such as Muhammad Asad and Abdullah Yusuf Ali. According to Muslim scholars, Isaiah predicted the coming of Jesus and Muhammad, although the reference to Muhammad is disputed by other religious scholars. Isaiah's narrative in Islamic literature can be divided into three sections. The first establishes Isaiah as a prophet of Israel during the reign of Hezekiah; the second relates Isaiah's actions during the siege of Jerusalem by Sennacherib; and the third warns the nation of coming doom. Paralleling the Hebrew Bible, Islamic tradition states that Hezekiah was king in Jerusalem during Isaiah's time. Hezekiah heard and obeyed Isaiah's advice, but could not quell the turbulence in Israel. 
This tradition maintains that Hezekiah was a righteous man and that the turbulence worsened after him. After the death of the king, Isaiah told the people not to forsake God, and warned Israel to cease from its persistent sin and disobedience. Muslim tradition maintains that the unrighteous of Israel in their anger sought to kill Isaiah. In a death that resembles that attributed to Isaiah in "Lives of the Prophets", Muslim exegesis recounts that Isaiah was martyred by the Israelites by being sawn in two. At the court of Al-Ma'mun, the seventh Abbasid caliph, Ali al-Ridha, a descendant of Muhammad and a prominent scholar (Imam) of his era, was challenged by the leading Jewish rabbi to prove through the Torah that both Jesus and Muhammad were prophets. Among his several proofs, the Imam references the Book of Isaiah, stating, "Sha'ya (Isaiah), the Prophet, said in the Torah concerning what you and your companions say: 'I have seen two riders to whom (He) illuminated earth. One of them was on a donkey and the other was on a camel. Who is the rider of the donkey, and who is the rider of the camel?'" The rabbi was unable to answer with certainty. Al-Ridha goes on to state that "As for the rider of the donkey, he is 'Isa (Jesus); and as for the rider of the camel, he is Muhammad, may Allah bless him and his family. Do you deny that this (statement) is in the Torah?" The rabbi responds, "No, I do not deny it." According to the rabbinic literature, Isaiah was a descendant of the royal house of Judah and Tamar (Sotah 10b). He was the son of Amoz (not to be confused with the prophet Amos), who was the brother of King Amaziah of Judah (Talmud tractate Megillah 15a). In February 2018, the archaeologist Eilat Mazar announced that she and her team had discovered a small seal impression reading "[belonging] to Isaiah nvy" (which could be reconstructed and read as "[belonging] to Isaiah the prophet") during the Ophel excavations, just south of the Temple Mount in Jerusalem. The tiny bulla was found "only 10 feet away" from where an intact bulla bearing the inscription "[belonging] to King Hezekiah of Judah" was discovered in 2015 by the same team. Although the name "Isaiah" in the Paleo-Hebrew alphabet is unmistakable, damage on the bottom left part of the seal makes it difficult to confirm the word "prophet" as opposed to the common Hebrew name "Navi", casting some doubt on whether this seal really belonged to the prophet Isaiah.
https://en.wikipedia.org/wiki?curid=15088
Interpreted language An interpreted language is a type of programming language for which most of its implementations execute instructions directly, without previously compiling the program into machine-language instructions. The interpreter executes the program directly, translating each statement into a sequence of one or more subroutines and then into another language (often machine code). The terms "interpreted language" and "compiled language" are not well defined because, in theory, any programming language can be either interpreted or compiled. In modern programming language implementation, it is increasingly popular for a platform to provide both options. Interpreted languages can also be contrasted with machine languages. Functionally, both execution and interpretation mean the same thing — fetching the next instruction/statement from the program and executing it. Although interpreted bytecode is identical to machine code in form and has an assembler representation, the term "interpreted" is sometimes reserved for languages processed in software (by a virtual machine or emulator) on top of the native (i.e. hardware) processor. In principle, programs in many languages may be compiled or interpreted, emulated or executed natively, so this designation is applied solely on the basis of common implementation practice, rather than representing an essential property of a language. Many languages have been implemented using both compilers and interpreters, including BASIC, C, Lisp, and Pascal. Java and C# are compiled into bytecode, a virtual-machine-friendly interpreted language. Lisp implementations can freely mix interpreted and compiled code. The distinction between a compiler and an interpreter is not always well defined, and many language processors do a combination of both. In the early days of computing, language design was heavily influenced by the decision to use compiling or interpreting as the mode of execution. For example, Smalltalk (1980), which was designed to be interpreted at run time, allows generic objects to interact with each other dynamically. Initially, interpreted languages were compiled line by line; that is, each line was compiled as it was about to be executed, and if a loop or subroutine caused certain lines to be executed multiple times, they would be recompiled every time. This has become much less common. Most so-called interpreted languages use an intermediate representation, which combines compiling and interpreting: the intermediate representation can be compiled once and for all (as in Java), each time before execution (as in Ruby), or each time a change in the source is detected before execution (as in Python); a sketch of this pipeline follows below. Interpreting a language gives implementations some additional flexibility over compiled implementations, and a number of features are often easier to implement in interpreters than in compilers. Furthermore, source code can be read and copied, giving users more freedom. The main disadvantage of interpreted languages is slower execution. Several criteria can be used to determine whether a particular language is likely to be called compiled or interpreted by its users, but none of them are definitive: compiled languages can have interpreter-like properties and vice versa. Many languages are first compiled to bytecode. Sometimes, bytecode can also be compiled to a native binary using an AOT compiler or executed natively by a hardware processor.
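The compile-then-interpret pipeline just described can be observed directly in CPython, which compiles source text to bytecode and then interprets that bytecode on its virtual machine. A minimal sketch (the source string and variable names here are illustrative only):

```python
# Minimal sketch of the compile-then-interpret pipeline in CPython:
# source text is compiled once into bytecode (the intermediate
# representation), which the virtual machine then interprets.
import dis

source = "total = sum(n * n for n in range(10))"

# Compile the source into a code object (the intermediate representation).
code = compile(source, "<example>", "exec")

# Show the bytecode instructions the CPython virtual machine will interpret.
dis.dis(code)

# Interpret (execute) the compiled code object.
namespace = {}
exec(code, namespace)
print(namespace["total"])  # 285
```

CPython also caches this intermediate representation on disk (as .pyc files) and recompiles only when the source changes, matching the behaviour described above.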
https://en.wikipedia.org/wiki?curid=15089
Ionosphere The ionosphere is the ionized part of Earth's upper atmosphere, from about to altitude, a region that includes the thermosphere and parts of the mesosphere and exosphere. The ionosphere is ionized by solar radiation. It plays an important role in atmospheric electricity and forms the inner edge of the magnetosphere. It has practical importance because, among other functions, it influences radio propagation to distant places on the Earth. The region below the ionosphere is called the neutral atmosphere, or neutrosphere. As early as 1839, the German mathematician and physicist Carl Friedrich Gauss postulated that an electrically conducting region of the atmosphere could account for observed variations of Earth's magnetic field. Sixty years later, Guglielmo Marconi received the first trans-Atlantic radio signal on December 12, 1901, in St. John's, Newfoundland (now in Canada), using a kite-supported antenna for reception. The transmitting station in Poldhu, Cornwall, used a spark-gap transmitter to produce a signal with a frequency of approximately 500 kHz and a power 100 times greater than any radio signal previously produced. The message received was three dits, the Morse code for the letter S. To reach Newfoundland the signal would have had to bounce off the ionosphere twice. Dr. Jack Belrose has contested this, however, based on theoretical and experimental work. Marconi did, however, achieve transatlantic wireless communication at Glace Bay, Nova Scotia, one year later. In 1902, Oliver Heaviside proposed the existence of the Kennelly–Heaviside layer of the ionosphere, which bears his name. Heaviside's proposal included means by which radio signals could be transmitted around the Earth's curvature. Also in 1902, Arthur Edwin Kennelly discovered some of the ionosphere's radio-electrical properties. In 1912, the U.S. Congress imposed the Radio Act of 1912 on amateur radio operators, limiting their operations to frequencies above 1.5 MHz (wavelengths of 200 meters or smaller). This led to the discovery of HF radio propagation via the ionosphere in 1923. In 1926, Scottish physicist Robert Watson-Watt introduced the term "ionosphere" in a letter that was published only in 1969 in "Nature". In the early 1930s, test transmissions of Radio Luxembourg inadvertently provided evidence of the first radio modification of the ionosphere; HAARP ran a series of experiments in 2017 using the eponymous Luxembourg effect. Edward V. Appleton was awarded a Nobel Prize in 1947 for his confirmation in 1927 of the existence of the ionosphere. Lloyd Berkner first measured the height and density of the ionosphere, which permitted the first complete theory of short-wave radio propagation. Maurice V. Wilkes and J. A. Ratcliffe researched the propagation of very long radio waves in the ionosphere. Vitaly Ginzburg developed a theory of electromagnetic wave propagation in plasmas such as the ionosphere. In 1962, the Canadian satellite Alouette 1 was launched to study the ionosphere. Following its success were Alouette 2 in 1965 and the two ISIS satellites in 1969 and 1971, and then AEROS-A and -B in 1972 and 1975, all for measuring the ionosphere. On July 26, 1963, the first operational geosynchronous satellite, Syncom 2, was launched. The on-board radio beacons on this satellite (and its successors) enabled – for the first time – the measurement of total electron content (TEC) variation along a radio beam from geostationary orbit to an earth receiver. 
(The rotation of the plane of polarization directly measures TEC along the path.) The Australian geophysicist Elizabeth Essex-Cohen used this technique from 1969 onwards to monitor the atmosphere above Australia and Antarctica. The ionosphere is a shell of electrons and electrically charged atoms and molecules that surrounds the Earth, stretching from a height of about to more than . It exists primarily due to ultraviolet radiation from the Sun. The lowest part of the Earth's atmosphere, the troposphere, extends from the surface to about . Above that is the stratosphere, followed by the mesosphere. In the stratosphere incoming solar radiation creates the ozone layer. At heights above , in the thermosphere, the atmosphere is so thin that free electrons can exist for short periods of time before they are captured by a nearby positive ion. The number of these free electrons is sufficient to affect radio propagation. This portion of the atmosphere is partially ionized and contains a plasma which is referred to as the ionosphere. Ultraviolet (UV), X-ray and shorter wavelengths of solar radiation are "ionizing", since photons at these frequencies contain sufficient energy to dislodge an electron from a neutral gas atom or molecule upon absorption. In this process the light electron obtains a high velocity, so that the temperature of the created electron gas is much higher (of the order of a thousand kelvin) than that of the ions and neutrals. The reverse process to ionization is recombination, in which a free electron is "captured" by a positive ion. Recombination occurs spontaneously and causes the emission of a photon carrying away the energy produced upon recombination. As gas density increases at lower altitudes, the recombination process prevails, since the gas molecules and ions are closer together. The balance between these two processes determines the quantity of ionization present. Ionization depends primarily on the Sun and its activity. The amount of ionization in the ionosphere varies greatly with the amount of radiation received from the Sun. Thus there is a diurnal (time of day) effect and a seasonal effect. The local winter hemisphere is tipped away from the Sun, so it receives less solar radiation. The activity of the Sun modulates following the solar cycle, with more radiation occurring with more sunspots, with a periodicity of around 11 years. Radiation received also varies with geographical location (polar, auroral zones, mid-latitudes, and equatorial regions). There are also mechanisms that disturb the ionosphere and decrease the ionization, such as solar flares and the associated release of charged particles into the solar wind, which reaches the Earth and interacts with its geomagnetic field. At night the F layer is the only layer of significant ionization present, while the ionization in the E and D layers is extremely low. During the day, the D and E layers become much more heavily ionized, as does the F layer, which develops an additional, weaker region of ionization known as the F1 layer. The F2 layer persists by day and night and is the main region responsible for the refraction and reflection of radio waves. The D layer is the innermost layer, to above the surface of the Earth. Ionization here is due to Lyman-alpha hydrogen radiation at a wavelength of 121.6 nanometres (nm) ionizing nitric oxide (NO). In addition, high solar activity can generate hard X-rays (wavelength ) that ionize N2 and O2. 
Recombination rates are high in the D layer, so there are many more neutral air molecules than ions. Medium frequency (MF) and lower high frequency (HF) radio waves are significantly attenuated within the D layer, as the passing radio waves cause electrons to move, which then collide with the neutral molecules, giving up their energy. Lower frequencies experience greater absorption because they move the electrons farther, leading to a greater chance of collisions. This is the main reason for absorption of HF radio waves, particularly at 10 MHz and below, with progressively less absorption at higher frequencies. This effect peaks around noon and is reduced at night due to a decrease in the D layer's thickness; only a small part remains due to cosmic rays. A common example of the D layer in action is the disappearance of distant AM broadcast band stations in the daytime. During solar proton events, ionization can reach unusually high levels in the D-region over high and polar latitudes. Such very rare events are known as Polar Cap Absorption (PCA) events, because the increased ionization significantly enhances the absorption of radio signals passing through the region. In fact, absorption levels can increase by many tens of dB during intense events, which is enough to absorb most (if not all) transpolar HF radio signal transmissions. Such events typically last less than 24 to 48 hours. The E layer is the middle layer, to above the surface of the Earth. Ionization is due to soft X-ray (1–10 nm) and far ultraviolet (UV) solar radiation ionizing molecular oxygen (O2). Normally, at oblique incidence, this layer can only reflect radio waves having frequencies lower than about 10 MHz, and it may contribute somewhat to absorption at frequencies above that. However, during intense sporadic E events, the Es layer can reflect frequencies up to 50 MHz and higher. The vertical structure of the E layer is primarily determined by the competing effects of ionization and recombination. At night the E layer weakens because the primary source of ionization is no longer present. After sunset, an increase in the height of the E layer maximum increases the range to which radio waves can travel by reflection from the layer. This region is also known as the Kennelly–Heaviside layer or simply the Heaviside layer. Its existence was predicted in 1902 independently and almost simultaneously by the American electrical engineer Arthur Edwin Kennelly (1861–1939) and the British physicist Oliver Heaviside (1850–1925). In 1924, its existence was detected by Edward V. Appleton and Miles Barnett. The Es layer (sporadic E layer) is characterized by small, thin clouds of intense ionization, which can support reflection of radio waves, rarely up to 225 MHz. Sporadic-E events may last for just a few minutes to several hours. Sporadic E propagation is prized by VHF-operating radio amateurs, as propagation paths that are generally unreachable can open up. There are multiple causes of sporadic-E that are still being investigated by researchers. This propagation occurs most frequently during the summer months, when high signal levels may be reached. The skip distances are generally around . Distances for one-hop propagation can be anywhere from to . Double-hop reception over is possible. The F layer or region, also known as the Appleton–Barnett layer, extends from about to more than above the surface of Earth. It is the layer with the highest electron density, which implies that signals penetrating this layer will escape into space. 
Electron production is dominated by extreme ultraviolet (UV, 10–100 nm) radiation ionizing atomic oxygen. The F layer consists of one layer (F2) at night, but during the day a secondary peak (labelled F1) often forms in the electron density profile. Because the F2 layer remains by day and night, it is responsible for most skywave propagation of radio waves and long-distance high frequency (HF, or shortwave) radio communications. Above the F layer, the number of oxygen ions decreases and lighter ions such as hydrogen and helium become dominant. This region above the F layer peak and below the plasmasphere is called the topside ionosphere. From 1972 to 1975, NASA launched the AEROS and AEROS B satellites to study the F region. An ionospheric model is a mathematical description of the ionosphere as a function of location, altitude, day of year, phase of the sunspot cycle and geomagnetic activity. Geophysically, the state of the ionospheric plasma may be described by four parameters: "electron density, electron and ion temperature" and, since several species of ions are present, "ionic composition". Radio propagation depends uniquely on electron density. Models are usually expressed as computer programs. A model may be based on the basic physics of the interactions of the ions and electrons with the neutral atmosphere and sunlight, it may be a statistical description based on a large number of observations, or it may combine physics and observations. One of the most widely used models is the International Reference Ionosphere (IRI), which is based on data and specifies the four parameters just mentioned. The IRI is an international project sponsored by the Committee on Space Research (COSPAR) and the International Union of Radio Science (URSI). The major data sources are the worldwide network of ionosondes, the powerful incoherent scatter radars (Jicamarca, Arecibo, Millstone Hill, Malvern, St Santin), the ISIS and Alouette topside sounders, and in situ instruments on several satellites and rockets. IRI is updated yearly. IRI is more accurate in describing the variation of the electron density from the bottom of the ionosphere to the altitude of maximum density than in describing the total electron content (TEC). Since 1999 this model has been the international standard for the terrestrial ionosphere (standard TS16457). Ionograms allow deducing, via computation, the true shape of the different layers. The nonhomogeneous structure of the electron/ion plasma produces rough echo traces, seen predominantly at night, at higher latitudes, and during disturbed conditions. At mid-latitudes, daytime ion production in the F2 layer is higher in the summer, as expected, since the Sun shines more directly on the Earth. However, there are seasonal changes in the molecular-to-atomic ratio of the neutral atmosphere that cause the summer ion loss rate to be even higher. The result is that the increase in the summertime loss overwhelms the increase in summertime production, and total F2 ionization is actually lower in the local summer months. This effect is known as the winter anomaly. The anomaly is always present in the northern hemisphere, but is usually absent in the southern hemisphere during periods of low solar activity. Within approximately ±20 degrees of the magnetic equator is the equatorial anomaly: a trough in the ionization in the F2 layer at the equator with crests at about 17 degrees in magnetic latitude. The Earth's magnetic field lines are horizontal at the magnetic equator. 
Solar heating and tidal oscillations in the lower ionosphere move plasma up and across the magnetic field lines. This sets up a sheet of electric current in the E region which, with the horizontal magnetic field, forces ionization up into the F layer, concentrating at ±20 degrees from the magnetic equator. This phenomenon is known as the equatorial fountain. The worldwide solar-driven wind results in the so-called Sq (solar quiet) current system in the E region of the Earth's ionosphere (the ionospheric dynamo region, at altitude). Resulting from this current is an electrostatic field directed west–east (dawn–dusk) in the equatorial day side of the ionosphere. At the magnetic dip equator, where the geomagnetic field is horizontal, this electric field results in an enhanced eastward current flow within ±3 degrees of the magnetic equator, known as the equatorial electrojet. When the Sun is active, strong solar flares can occur that hit the sunlit side of Earth with hard X-rays. The X-rays penetrate to the D-region, releasing electrons that rapidly increase absorption, causing a high frequency (3–30 MHz) radio blackout. During this time very low frequency (3–30 kHz) signals will be reflected by the D layer instead of the E layer, where the increased atmospheric density will usually increase the absorption of the wave and thus attenuate it. As soon as the X-rays end, the sudden ionospheric disturbance (SID), or radio blackout, ends as the electrons in the D-region recombine rapidly and signal strengths return to normal. Associated with solar flares is a release of high-energy protons. These particles can hit the Earth within 15 minutes to 2 hours of the solar flare. The protons spiral around and down the magnetic field lines of the Earth and penetrate into the atmosphere near the magnetic poles, increasing the ionization of the D and E layers. PCAs typically last anywhere from about an hour to several days, with an average of around 24 to 36 hours. Coronal mass ejections can also release energetic protons that enhance D-region absorption in the polar regions. A geomagnetic storm is a temporary, intense disturbance of the Earth's magnetosphere. Lightning can cause ionospheric perturbations in the D-region in one of two ways. The first is through VLF (very low frequency) radio waves launched into the magnetosphere. These so-called "whistler" mode waves can interact with radiation belt particles and cause them to precipitate onto the ionosphere, adding ionization to the D-region. These disturbances are called "lightning-induced electron precipitation" (LEP) events. Additional ionization can also occur from direct heating/ionization as a result of the huge motions of charge in lightning strikes. These events are called "early/fast" events. In 1925, C. T. R. Wilson proposed a mechanism by which electrical discharge from lightning storms could propagate upwards from clouds to the ionosphere. Around the same time, Robert Watson-Watt, working at the Radio Research Station in Slough, UK, suggested that the ionospheric sporadic E layer (Es) appeared to be enhanced as a result of lightning, but that more work was needed. In 2005, C. Davis and C. Johnson, working at the Rutherford Appleton Laboratory in Oxfordshire, UK, demonstrated that the Es layer was indeed enhanced as a result of lightning activity. Their subsequent research has focused on the mechanism by which this process can occur. 
Due to the ability of ionized atmospheric gases to refract high frequency (HF, or shortwave) radio waves, the ionosphere can reflect radio waves directed into the sky back toward the Earth. Radio waves directed at an angle into the sky can return to Earth beyond the horizon. This technique, called "skip" or "skywave" propagation, has been used since the 1920s to communicate at international or intercontinental distances. The returning radio waves can reflect off the Earth's surface into the sky again, allowing greater ranges to be achieved with multiple hops. This communication method is variable and unreliable, with reception over a given path depending on the time of day or night, the seasons, the weather, and the 11-year sunspot cycle. During the first half of the 20th century it was widely used for transoceanic telephone and telegraph service, and business and diplomatic communication. Due to its relative unreliability, shortwave radio communication has been mostly abandoned by the telecommunications industry, though it remains important for high-latitude communication where satellite-based radio communication is not possible. Some broadcasting stations and automated services still use shortwave radio frequencies, as do radio amateur hobbyists for private recreational contacts. When a radio wave reaches the ionosphere, the electric field in the wave forces the electrons in the ionosphere into oscillation at the same frequency as the radio wave. Some of the radio-frequency energy is given up to this resonant oscillation. The oscillating electrons will then either be lost to recombination or will re-radiate the original wave energy. Total refraction can occur when the collision frequency of the ionosphere is less than the radio frequency, and if the electron density in the ionosphere is great enough. A qualitative understanding of how an electromagnetic wave propagates through the ionosphere can be obtained by recalling geometric optics. Since the ionosphere is a plasma, it can be shown that its refractive index is less than unity. Hence, the electromagnetic "ray" is bent away from the normal rather than toward the normal, as would be the case if the refractive index were greater than unity. It can also be shown that the refractive index of a plasma, and hence of the ionosphere, is frequency-dependent; see Dispersion (optics). The critical frequency is the limiting frequency at or below which a radio wave is reflected by an ionospheric layer at vertical incidence. It is calculated as f_critical = 9 × √N, where N is the electron density per cubic metre and f_critical is in Hz. The Maximum Usable Frequency (MUF) is defined as the upper frequency limit that can be used for transmission between two points at a specified time: MUF = f_critical / sin(α), where α is the angle of attack, the angle of the wave relative to the horizon, and sin is the sine function. The cutoff frequency is the frequency below which a radio wave fails to penetrate a layer of the ionosphere at the incidence angle required for transmission between two specified points by refraction from the layer. The open-system electrodynamic tether, which uses the ionosphere, is being researched. The space tether uses plasma contactors and the ionosphere as parts of a circuit to extract energy from the Earth's magnetic field by electromagnetic induction. 
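The critical-frequency and MUF relations given above can be checked numerically. A minimal sketch (Python; the electron density is an illustrative value, not a measurement):

```python
# Numerical sketch of the two relations above:
#   f_critical = 9 * sqrt(N)              (N in electrons per cubic metre, f in Hz)
#   MUF        = f_critical / sin(alpha)  (alpha = angle of the wave above the horizon)
import math

def critical_frequency_hz(electron_density_per_m3: float) -> float:
    """Highest frequency reflected at vertical incidence."""
    return 9.0 * math.sqrt(electron_density_per_m3)

def maximum_usable_frequency_hz(f_critical_hz: float, attack_angle_deg: float) -> float:
    """Upper usable frequency for a wave launched at the given angle."""
    return f_critical_hz / math.sin(math.radians(attack_angle_deg))

n_e = 1.0e12                                  # illustrative daytime F2-peak density
f_c = critical_frequency_hz(n_e)              # 9.0 MHz
muf = maximum_usable_frequency_hz(f_c, 15.0)  # about 34.8 MHz at 15 degrees
print(f"f_critical = {f_c / 1e6:.1f} MHz, MUF = {muf / 1e6:.1f} MHz")
```

As the sketch shows, an oblique path at a shallow launch angle can use a frequency several times the vertical-incidence critical frequency, which is why skywave links operate well above f_critical.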
Scientists explore the structure of the ionosphere using a wide variety of methods, several of which are described below. A variety of experiments, such as HAARP (High Frequency Active Auroral Research Program), involve high-power radio transmitters used to modify the properties of the ionosphere. These investigations focus on studying the properties and behavior of ionospheric plasma, with particular emphasis on understanding it and using it to enhance communications and surveillance systems for both civilian and military purposes. HAARP was started in 1993 as a proposed twenty-year experiment and is currently active near Gakona, Alaska. The SuperDARN radar project researches the high and mid latitudes using coherent backscatter of radio waves in the 8 to 20 MHz range. Coherent backscatter is similar to Bragg scattering in crystals and involves the constructive interference of scattering from ionospheric density irregularities. The project involves more than 11 different countries and multiple radars in both hemispheres. Scientists also examine the ionosphere through the changes it imposes on radio waves from satellites and stars passing through it. The Arecibo radio telescope, located in Puerto Rico, was originally intended to study Earth's ionosphere. Ionograms show the virtual heights and critical frequencies of the ionospheric layers, which are measured by an ionosonde. An ionosonde sweeps a range of frequencies, usually from 0.1 to 30 MHz, transmitting at vertical incidence to the ionosphere. As the frequency increases, each wave is refracted less by the ionization in the layer, and so each penetrates further before it is reflected. Eventually, a frequency is reached that enables the wave to penetrate the layer without being reflected. For ordinary mode waves, this occurs when the transmitted frequency just exceeds the peak plasma, or critical, frequency of the layer. Tracings of the reflected high frequency radio pulses are known as ionograms. Reduction rules are given in the "URSI Handbook of Ionogram Interpretation and Reduction", edited by William Roy Piggott and Karl Rawer, Elsevier, Amsterdam, 1961 (translations into Chinese, French, Japanese and Russian are available). Incoherent scatter radars operate above the critical frequencies; therefore, unlike ionosondes, they can also probe the ionosphere above the electron density peaks. The thermal fluctuations of the electron density that scatter the transmitted signals lack coherence, which gave the technique its name. Their power spectrum contains information not only on the density, but also on the ion and electron temperatures, ion masses and drift velocities. Radio occultation is a remote-sensing technique in which a GNSS signal tangentially grazes the Earth, passing through the atmosphere, and is received by a Low Earth Orbit (LEO) satellite. As the signal passes through the atmosphere, it is refracted, curved and delayed. The LEO satellite samples the total electron content and bending angle of many such signal paths as it watches the GNSS satellite rise or set behind the Earth. Using an inverse Abel transform, a radial profile of refractivity at that tangent point on Earth can be reconstructed. Major GNSS radio occultation missions include GRACE, CHAMP, and COSMIC. In empirical models of the ionosphere such as NeQuick, indices are used as indirect indicators of the state of the ionosphere. F10.7 and R12 are two such indices commonly used in ionospheric modelling. 
Both are valuable for their long historical records, which cover multiple solar cycles. F10.7 is a measurement of the intensity of solar radio emissions at a frequency of 2800 MHz made using a ground radio telescope; R12 is a 12-month average of daily sunspot numbers. The two indices have been shown to be correlated with each other. However, both are only indirect indicators of solar ultraviolet and X-ray emissions, which are primarily responsible for causing ionization in the Earth's upper atmosphere. Data are now available from the GOES spacecraft, which measures the background X-ray flux from the Sun, a parameter more closely related to the ionization levels in the ionosphere. A number of models are used to understand the effects of the ionosphere on global navigation satellite systems; a first-order sketch of the signal delay these models correct for is given at the end of this article. The Klobuchar model is currently used to compensate for ionospheric effects in GPS. This model was developed at the US Air Force Geophysical Research Laboratory circa 1974 by John (Jack) Klobuchar. The Galileo navigation system uses the NeQuick model. Objects in the Solar System that have appreciable atmospheres (i.e., all of the major planets and many of the larger natural satellites) generally produce ionospheres. Planets known to have ionospheres include Venus, Mars, Jupiter, Saturn, Uranus, Neptune and Pluto. The atmosphere of Titan includes an ionosphere that ranges from about to in altitude and contains carbon compounds. Ionospheres have also been observed at Io, Europa, Ganymede, and Triton.
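As referenced above, the practical effect that GNSS ionospheric models correct for is, to first order, a group delay of approximately 40.3 × TEC / f² metres for a signal of frequency f (in Hz) and slant total electron content TEC (in electrons/m²). A minimal sketch of this standard relation (Python; the TEC value is illustrative):

```python
# First-order ionospheric group delay for a trans-ionospheric signal:
#   delay (metres) ~= 40.3 * TEC / f**2, TEC in electrons/m^2, f in Hz.
# One TEC unit (TECU) is 1e16 electrons/m^2; the value below is illustrative.

def ionospheric_delay_m(tec_electrons_per_m2: float, frequency_hz: float) -> float:
    """Approximate extra signal path length caused by the ionosphere."""
    return 40.3 * tec_electrons_per_m2 / frequency_hz ** 2

GPS_L1_HZ = 1575.42e6   # GPS L1 carrier frequency
tec = 50 * 1e16         # a moderate daytime TEC of 50 TECU

print(f"range error at L1: {ionospheric_delay_m(tec, GPS_L1_HZ):.1f} m")  # ~8.1 m
```

Because the delay scales as 1/f², dual-frequency receivers can estimate and remove it directly, while single-frequency receivers rely on broadcast models such as Klobuchar or NeQuick.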
https://en.wikipedia.org/wiki?curid=15097
Interlingua Interlingua (ISO 639 language codes "ia", "ina") is an Italic international auxiliary language (IAL), developed between 1937 and 1951 by the International Auxiliary Language Association (IALA). It ranks among the most widely used IALs and is the most widely used naturalistic IAL: in other words, its vocabulary, grammar and other characteristics are derived from natural languages, rather than being centrally planned. Interlingua was developed to combine a simple, mostly regular grammar with a vocabulary common to the widest possible range of western European languages, making it unusually easy to learn, at least for those whose native languages were sources of Interlingua's vocabulary and grammar. Conversely, it is used as a rapid introduction to many natural languages. Interlingua literature maintains that (written) Interlingua is comprehensible to the hundreds of millions of people who speak Romance languages, though it is actively spoken by only a few hundred. The name Interlingua comes from the Latin words "inter", meaning "between", and "lingua", meaning "tongue" or "language". These morphemes are identical in Interlingua; thus, "Interlingua" would mean "between language". The expansive movements of science, technology, trade, diplomacy, and the arts, combined with the historical dominance of the Greek and Latin languages, have resulted in a large common vocabulary among European languages. With Interlingua, an objective procedure is used to extract and standardize the most widespread word or words for a concept found in a set of primary control languages: English, French, Italian, Spanish and Portuguese, with German and Russian as secondary control languages. Words from any language are eligible for inclusion, so long as their internationality is shown by their presence in these control languages. Hence, Interlingua includes such diverse word forms as Japanese "geisha" and "samurai", Arabic "califa", Guugu Yimithirr "gangurru" (Interlingua: kanguru), and Finnish "sauna". Interlingua combines this pre-existing vocabulary with a minimal grammar based on the control languages. People with a good knowledge of a Romance language, or a smattering of a Romance language plus a good knowledge of the international scientific vocabulary, can frequently understand it immediately on reading or hearing it. The immediate comprehension of Interlingua, in turn, makes it unusually easy to learn. Speakers of other languages can also learn to speak and write Interlingua in a short time, thanks to its simple grammar and regular word formation using a small number of roots and affixes. Once learned, Interlingua can be used to learn other related languages quickly and easily, and in some studies, even to understand them immediately. Research with Swedish students has shown that, after learning Interlingua, they can translate elementary texts from Italian, Portuguese, and Spanish. In one 1974 study, an Interlingua class translated a Spanish text that students who had taken 150 hours of Spanish found too difficult to understand. Gopsill has suggested that Interlingua's freedom from irregularities allowed the students to grasp the mechanisms of language quickly. The American heiress Alice Vanderbilt Morris (1874–1950) became interested in linguistics and the international auxiliary language movement in the early 1920s, and in 1924, Morris and her husband, Dave Hennen Morris, established the non-profit International Auxiliary Language Association (IALA) in New York City. 
Their aim was to place the study of IALs on a scientific basis. Morris developed the research program of IALA in consultation with Edward Sapir, William Edward Collinson, and Otto Jespersen. The IALA became a major supporter of mainstream American linguistics. Numerous studies by Sapir, Collinson, and Morris Swadesh in the 1930s and 1940s, for example, were funded by IALA. Alice Morris edited several of these studies and provided much of IALA's financial support. IALA also received support from such prestigious groups as the Carnegie Corporation, the Ford Foundation, the Research Corporation, and the Rockefeller Foundation. In its early years, IALA concerned itself with three tasks: finding other organizations around the world with similar goals; building a library of books about languages and interlinguistics; and comparing extant IALs, including Esperanto, Esperanto II, Ido, Peano's Interlingua (Latino sine flexione), Novial, and Interlingue (Occidental). In pursuit of the last goal, it conducted parallel studies of these languages, with comparative studies of national languages, under the direction of scholars at American and European universities. It also arranged conferences with proponents of these IALs, who debated features and goals of their respective languages. With a "concession rule" that required participants to make a certain number of concessions, early debates at IALA sometimes grew from heated to explosive. At the Second International Interlanguage Congress, held in Geneva in 1931, IALA began to break new ground; 27 recognized linguists signed a testimonial of support for IALA's research program. An additional eight added their signatures at the third congress, convened in Rome in 1933. That same year, Herbert N. Shenton and Edward L. Thorndike became influential in IALA's work by authoring key studies in the interlinguistic field. The first steps towards the finalization of Interlingua were taken in 1937, when a committee of 24 eminent linguists from 19 universities published "Some Criteria for an International Language and Commentary". However, the outbreak of World War II in 1939 cut short the intended biannual meetings of the committee. Originally, the association had not set out to create its own language. Its goal was to identify which auxiliary language already available was best suited for international communication, and how to promote it most effectively. However, after ten years of research, more and more members of IALA concluded that none of the existing interlanguages were up to the task. By 1937, the members had made the decision to create a new language, to the surprise of the world's interlanguage community. To that point, much of the debate had been equivocal on the decision to use naturalistic (e.g., Peano's Interlingua, Novial and Occidental) or systematic (e.g., Esperanto and Ido) words. During the war years, proponents of a naturalistic interlanguage won out. The first factor was Thorndike's paper; the second was a concession by proponents of the systematic languages that thousands of words were already present in many, or even a majority, of the European languages. Their argument was that systematic derivation of words was a Procrustean bed, forcing the learner to unlearn and re-memorize a new derivation scheme when a usable vocabulary was already available. This finally convinced supporters of the systematic languages, and IALA from that point assumed the position that a naturalistic language would be best. 
IALA's research activities were based in Liverpool before relocating to New York at the outbreak of World War II, where E. Clark Stillman established a new research staff. Stillman, with the assistance of Alexander Gode, developed a "prototyping" technique – an objective methodology for selecting and standardizing vocabulary based on a comparison of "control languages". In 1943, Stillman left for war work and Gode became Acting Director of Research. IALA began to develop models of the proposed language, the first of which were presented in Morris's "General Report" in 1945. From 1946 to 1948, the French linguist André Martinet was Director of Research. During this period IALA continued to develop models and conducted polling to determine the optimal form of the final language. In 1946, IALA sent an extensive survey to more than 3,000 language teachers and related professionals on three continents, canvassing four models of the proposed language: two naturalistic models, P and M, and two more schematic ones, C and K. The results of the survey were striking. The two more schematic models were rejected – K overwhelmingly. Of the two naturalistic models, M received somewhat more support than P. IALA decided on a compromise between P and M, with certain elements of C. Martinet took up a position at Columbia University in 1948, and Gode took on the last phase of Interlingua's development. The vocabulary and grammar of Interlingua were first presented in 1951, when IALA published its finalized grammar and the 27,000-word "Interlingua–English Dictionary" (IED). In 1954, IALA published an introductory manual entitled "Interlingua a Prime Vista" ("Interlingua at First Sight"). Interlingua as presented by the IALA is very close to Peano's Interlingua (Latino sine flexione), both in its grammar and especially in its vocabulary. Accordingly, the very name "Interlingua" was kept, yet a distinct abbreviation was adopted: IA instead of IL. An early practical application of Interlingua was the scientific newsletter "Spectroscopia Molecular", published from 1952 to 1980. In 1954, Interlingua was used at the Second World Cardiological Congress in Washington, D.C., for both written summaries and oral interpretation. Within a few years, it found similar use at nine further medical congresses. Between the mid-1950s and the late 1970s, some thirty scientific and especially medical journals provided article summaries in Interlingua. Science Service, the publisher of "Science Newsletter" at the time, published a monthly column in Interlingua from the early 1950s until Gode's death in 1970. In 1967, the International Organization for Standardization, which normalizes terminology, voted almost unanimously to adopt Interlingua as the basis for its dictionaries. The IALA closed its doors in 1953 but was not formally dissolved until 1956 or later. Its role in promoting Interlingua was largely taken on by Science Service, which hired Gode as head of its newly formed Interlingua Division. Hugh E. Blair, Gode's close friend and colleague, became his assistant. A successor organization, the Interlingua Institute, was founded in 1970 to promote Interlingua in the US and Canada. The new institute supported the work of other linguistic organizations, made considerable scholarly contributions and produced Interlingua summaries for scholarly and medical publications. One of its largest achievements was two immense volumes on phytopathology produced by the American Phytopathological Society in 1976 and 1977. Interlingua had attracted many former adherents of other international-language projects, notably Occidental and Ido. 
The former Occidentalist Ric Berger founded the Union Mundial pro Interlingua (UMI) in 1955, and by the late 1950s, interest in Interlingua in Europe had already begun to overtake that in North America. Since the 1980s, UMI has held international conferences every two years (typical attendance at the earlier meetings was 50 to 100) and has run a publishing programme that eventually produced over 100 volumes. Other Interlingua-language works were published by university presses in Sweden and Italy, and in the 1990s, in Brazil and Switzerland. Several Scandinavian schools undertook projects that used Interlingua as a means of teaching the international scientific and intellectual vocabulary. In 2000, the Interlingua Institute was dissolved amid funding disputes with the UMI; the American Interlingua Society, established the following year, succeeded the institute and responded to new interest emerging in Mexico. Interlingua was spoken and promoted in the Soviet bloc, despite attempts to suppress the language. In East Germany, government officials confiscated the letters and magazines that the UMI sent to Walter Rädler, the Interlingua representative there. In Czechoslovakia, Július Tomin published his first article on Interlingua in the Slovak magazine "Príroda a spoločnosť" (Nature and Society) in 1971, after which he received several anonymous threatening letters. He went on to become the Czech Interlingua representative, teach Interlingua in the school system, and publish a series of articles and books. Today, interest in Interlingua has expanded from the scientific community to the general public. Individuals, governments, and private companies use Interlingua for learning and instruction, travel, online publishing, and communication across language barriers. Interlingua is promoted internationally by the Union Mundial pro Interlingua. Periodicals and books are produced by many national organizations, such as the Societate American pro Interlingua, the Svenska Sällskapet för Interlingua, and the Union Brazilian pro Interlingua. It is not certain how many people have an active knowledge of Interlingua. As noted above, Interlingua is claimed to be the most widely spoken naturalistic auxiliary language. Interlingua's greatest advantage is that it is the most widely "understood" international auxiliary language besides Interlingua (IL) de A.p.I., by virtue of its naturalistic (as opposed to schematic) grammar and vocabulary, allowing those familiar with a Romance language, and educated speakers of English, to read and understand it without prior study. Interlingua has active speakers on all continents, especially in South America and in Eastern and Northern Europe, most notably Scandinavia, as well as in Russia and Ukraine. There are copious Interlingua web pages, including editions of Wikipedia and Wiktionary, and a number of periodicals, including "Panorama in Interlingua" from the Union Mundial pro Interlingua (UMI) and the magazines of the national societies allied with it. There are several active mailing lists, and Interlingua is also in use in certain Usenet newsgroups, particularly in the europa.* hierarchy. Interlingua is presented on CDs, radio, and television. Interlingua is taught in many high schools and universities, sometimes as a means of teaching other languages quickly, presenting interlinguistics, or introducing the international vocabulary. The University of Granada in Spain, for example, offers an Interlingua course in collaboration with the Centro de Formación Continua. 
Every two years, the UMI organizes an international conference in a different country. In the intervening years, the Scandinavian Interlingua societies co-organize a conference in Sweden. National organizations such as the Union Brazilian pro Interlingua also organize regular conferences. Google Keyboard supports Interlingua. Interlingua has a largely phonemic orthography. Interlingua uses the 26 letters of the ISO basic Latin alphabet with no diacritics; each letter has an Interlingua name and a pronunciation given in the IPA. The book "Grammar of Interlingua" defines in §15 a "collateral orthography". Interlingua is primarily a written language, and the pronunciation is not entirely settled. For the most part, consonants are pronounced as in English, while the vowels are like those of Spanish. Written double consonants may be geminated as in Italian for extra clarity, or pronounced as single as in English or French. Interlingua has five falling diphthongs, although two of them are rare. The "general rule" is that stress falls on the vowel before the last consonant (e.g., "lingua", 'language', "esser", 'to be', "requirimento", 'requirement'), ignoring the final plural "-(e)s" (e.g. "linguas", the plural of "lingua", still has the same stress as the singular), and where that is not possible, on the first vowel ("via", 'way', "io crea", 'I create'). There are a few exceptions, and additional rules account for most of them; alternatively, speakers may pronounce all words according to the general rule. For example, "kilometro" may be stressed according to the general rule, although another stress is more common. Interlingua has no explicitly defined phonotactics. However, the prototyping procedure for determining Interlingua words, which strives for internationality, should in general lead naturally to words that are easy for most learners to pronounce. In the process of forming new words, an ending cannot always be added without a modification of some kind in between. A good example is the plural "-s", which is always preceded by a vowel to prevent the occurrence of a hard-to-pronounce consonant cluster at the end. If the singular does not end in a vowel, the final "-s" becomes "-es". Unassimilated foreign loanwords, or borrowed words, are spelled as in their language of origin. Their spelling may contain diacritics, or accent marks. If the diacritics do not affect pronunciation, they are removed. Words in Interlingua may be taken from any language, as long as their internationality is verified by their presence in seven "control" languages: Spanish, Portuguese, Italian, French, and English, with German and Russian acting as secondary controls. These are the most widely spoken Romance, Germanic, and Slavic languages, respectively. Because of their close relationship, Spanish and Portuguese are treated as one unit. The largest number of Interlingua words are of Latin origin, with the Greek and Germanic languages providing the second and third largest numbers. The remainder of the vocabulary originates in Slavic and non-Indo-European languages. A word, that is, a form with meaning, is eligible for the Interlingua vocabulary if it is verified by at least three of the four primary control languages. Either secondary control language can substitute for a primary language. Any word of Indo-European origin found in a control language can contribute to the eligibility of an international word. In some cases, the archaic or "potential" presence of a word can contribute to its eligibility. 
A word can be potentially present in a language when a derivative is present, but the word itself is not. English "proximity", for example, gives support to Interlingua "proxime", meaning 'near, close'. This counts as long as one or more control languages actually have this basic root word, which the Romance languages all do. Potentiality also occurs when a concept is represented as a compound or derivative in a control language, the morphemes that make it up are themselves international, and the combination adequately conveys the meaning of the larger word. An example is Italian "fiammifero" (lit. flamebearer), meaning "match, lucifer", which leads to Interlingua "flammifero", or "match". This word is thus said to be potentially present in the other languages although they may represent the meaning with a single morpheme. Words do not enter the Interlingua vocabulary solely because cognates exist in a sufficient number of languages. If their meanings have become different over time, they are considered different words for the purpose of Interlingua eligibility. If they still have one or more meanings in common, however, the word can enter Interlingua with this smaller set of meanings. If this procedure did not produce an international word, the word for a concept was originally taken from Latin (see below). This only occurred with a few grammatical particles. The form of an Interlingua word is considered an "international prototype" with respect to the other words. On the one hand, it should be neutral, free from characteristics peculiar to one language. On the other hand, it should maximally capture the characteristics common to all contributing languages. As a result, it can be transformed into any of the contributing variants using only these language-specific characteristics. If the word has any derivatives that occur in the source languages with appropriate parallel meanings, then their morphological connection must remain intact; for example, the Interlingua word for 'time' is spelled "tempore" and not "*tempus" or "*tempo" in order to match it with its derived adjectives, such as "temporal". The language-specific characteristics are closely related to the sound laws of the individual languages; the resulting words are often close or even identical to the most recent form common to the contributing words. This sometimes corresponds with that of Vulgar Latin. At other times, it is much more recent or even contemporary. It is never older than the classical period. The French "œil", Italian "occhio", Spanish "ojo", and Portuguese "olho" appear quite different, but they descend from a historical form "oculus". German "Auge", Dutch "oog" and English "eye" (cf. Czech and Polish "oko", Ukrainian "око" "(óko)") are related to this form in that all three descend from Proto-Indo-European "*okʷ". In addition, international derivatives like "ocular" and "oculista" occur in all of Interlingua's control languages. Each of these forms contributes to the eligibility of the Interlingua word. German and English base words do not influence the form of the Interlingua word, because their Indo-European connection is considered too remote. Instead, the remaining base words and especially the derivatives determine the form "oculo" found in Interlingua. Interlingua has been developed to omit any grammatical feature that is absent from any one primary control language. Thus, Interlingua has no noun–adjective agreement by gender, case, or number (cf. 
Spanish and Portuguese "gatas negras" or Italian "gatte nere", 'black female cats'), because this feature is absent from English, and it has no progressive verb tenses (English "I am reading"), because they are absent from French. Conversely, Interlingua distinguishes singular nouns from plural nouns because all the control languages do. With respect to the secondary control languages, Interlingua has articles, unlike Russian. The definite article "le" is invariable, as in English. Nouns have no grammatical gender. Plurals are formed by adding "-s", or "-es" after a final consonant. Personal pronouns take one form for the subject and one for the direct object and reflexive. In the third person, the reflexive is always "se". Most adverbs are derived regularly from adjectives by adding "-mente", or "-amente" after a "-c". An adverb can be formed from any adjective in this way. Verbs take the same form for all persons ("io vive, tu vive, illa vive", 'I live', 'you live', 'she lives'). The indicative ("pare", 'appear', 'appears') is the same as the imperative ("pare!" 'appear!'), and there is no subjunctive. Three common verbs usually take short forms in the present tense: "es" for 'is', 'am', 'are'; "ha" for 'has', 'have'; and "va" for 'go', 'goes'. A few irregular verb forms are available, but rarely used. There are four simple tenses (present, past, future, and conditional), three compound tenses (past, future, and conditional), and the passive voice. The compound structures employ an auxiliary plus the infinitive or the past participle (e.g., "Ille ha arrivate", 'He has arrived'). Simple and compound tenses can be combined in various ways to express more complex tenses (e.g., "Nos haberea morite", 'We would have died'). Word order is subject–verb–object, except that a direct object pronoun or reflexive pronoun comes before the verb ("Io les vide", 'I see them'). Adjectives may precede or follow the nouns they modify, but they most often follow them. The position of adverbs is flexible, though constrained by common sense. The grammar of Interlingua has been described as similar to that of the Romance languages, but greatly simplified, primarily under the influence of English. More recently, Interlingua's grammar has been likened to the simple grammars of Japanese and particularly Chinese. Critics argue that, being based on a few European languages, Interlingua is suitable only for speakers of European languages. Others contend that Interlingua has spelling irregularities that, while internationally recognizable in written form, increase the time needed to fully learn the language, especially for those unfamiliar with Indo-European languages. Proponents argue that Interlingua's source languages include not only Romance languages but English, German, and Russian as well. Moreover, the source languages are widely spoken, and large numbers of their words also appear in other languages – still more when derivative forms and loan translations are included. Tests have shown that if a larger number of source languages were used, the results would be about the same. As with Esperanto, there have been proposals for a flag of Interlingua; the proposal by Czech translator Karel Podrazil is recognized by multilingual sites. It consists of a white four-pointed star extending to the edges of the flag and dividing it into an upper blue and lower red half. 
The star is symbolic of the four cardinal directions, and the two halves symbolize Romance and non-Romance speakers of Interlingua who understand each other. Another symbol of Interlingua is the "Blue Marble" surrounded by twelve stars on a black or blue background, echoing the twelve stars of the Flag of Europe (because the source languages of Interlingua are purely European).
https://en.wikipedia.org/wiki?curid=15100
Isle of Wight The Isle of Wight () is a county and the largest and second-most populous island in England. It is in the English Channel, between 2 and 5 miles off the coast of Hampshire, from which it is separated by the Solent. The island has resorts that have been holiday destinations since Victorian times, and is known for its mild climate, coastal scenery, and verdant landscape of fields, downland and chines. The island is designated a UNESCO Biosphere Reserve. The island has been home to the poets Algernon Charles Swinburne and Alfred, Lord Tennyson and to Queen Victoria, who built her much-loved summer residence and final home Osborne House at East Cowes. It has a maritime and industrial tradition including boat-building, sail-making, the manufacture of flying boats, the hovercraft, and Britain's space rockets. The island hosts annual music festivals including the Isle of Wight Festival, which in 1970 was the largest rock music event ever held. It has well-conserved wildlife and some of the richest cliffs and quarries for dinosaur fossils in Europe. The isle was owned by a Norman family until 1293 and was earlier a kingdom in its own right, Wihtwara. As in the Crown dependencies, the British Crown was represented on the island by the Governor of the Isle of Wight until 1995. The island has played an important part in the defence of the ports of Southampton and Portsmouth, and been near the front line of conflicts through the ages, including the Spanish Armada and the Battle of Britain. Rural for most of its history, its Victorian fashionability and the growing affordability of holidays led to significant urban development during the late 19th and early 20th centuries. Historically part of Hampshire, the island became a separate administrative county in 1890. It continued to share the Lord Lieutenant of Hampshire until 1974, when it was made its own ceremonial county. Apart from a shared police force, and the island's Anglican churches belonging to the Diocese of Portsmouth (originally Winchester), there is now no administrative link with Hampshire; although a combined local authority with Portsmouth and Southampton was considered, this is now unlikely to proceed. The quickest public transport link to the mainland is the hovercraft from Ryde to Southsea; three vehicle ferry and two catamaran services cross the Solent to Southampton, Lymington and Portsmouth. During Pleistocene glacial periods, sea levels were lower and the present-day Solent was part of the valley of the Solent River. The river flowed eastward from Dorset, following the course of the modern Solent strait, before travelling south and southwest towards the major Channel River system. The earliest evidence of archaic human presence on what is now the Isle of Wight is found at Priory Bay, where more than 300 Acheulean handaxes have been recovered from the beach and cliff slopes, originating from a sequence of Pleistocene gravels dating approximately to MIS 11 (424,000–374,000 years ago). A Mousterian flint assemblage, consisting of 50 handaxes and debitage, has been recovered from Great Pan Farm near Newport. Possibly dating to MIS 7 (c. 240,000 years ago), these tools are associated with Neanderthal occupation. A submerged escarpment 11 m below sea level off Bouldnor Cliff on the island's northwest coastline is home to an internationally significant Mesolithic archaeological site. The site has yielded evidence of seasonal occupation by Mesolithic hunter-gatherers dating to c. 8,000 years BP. 
Finds include flint tools, burnt flint, worked timbers, wooden platforms and pits. The worked wood shows evidence of the splitting of large planks from oak trunks, interpreted as being intended for use in dug-out canoes. DNA analysis of sediments at the site yielded wheat DNA, not otherwise found in Britain until the Neolithic, 2,000 years after the occupation at Bouldnor Cliff. It has been suggested that this is evidence of wide-reaching trade in Mesolithic Europe; however, the contemporaneity of the wheat with the Mesolithic occupation has been contested. When hunter-gatherers used the site it was located on a river bank surrounded by wetland and woodland. As sea levels rose throughout the Holocene, the river valley slowly flooded, submerging the site. Evidence of Mesolithic occupation on the island is generally found along the river valleys, particularly along the north of the island, and in the former catchment of the western Yar. Further key sites are found at Newtown Creek, Werrar and Wootton-Quarr. Neolithic occupation on the Isle of Wight is primarily attested by flint tools and monuments. Unlike the earlier Mesolithic hunter-gatherer population, Neolithic communities on the Isle of Wight were based on farming and are linked to a wide-scale migration of Neolithic populations from France and northwest Europe to Britain c. 6,000 years ago. The Isle of Wight's most visible Neolithic site is the Longstone at Mottistone, the remains of a long barrow originally constructed with two standing stones at the entrance; only one stone remains standing today. A Neolithic mortuary enclosure has been identified on Tennyson Down near Freshwater. Bronze Age Britain had large reserves of tin in Cornwall and Devon, and tin is necessary to smelt bronze. At that time the sea level was much lower, and carts of tin were brought across the Solent at low tide for export, possibly on Ferriby boats. Anthony Snodgrass suggests that a shortage of tin, as a part of the Bronze Age Collapse and trade disruptions in the Mediterranean around 1300 BC, forced metalworkers to seek an alternative to bronze. During the Late Iron Age, the Isle of Wight appears to have been occupied by a Celtic tribe, the Durotriges, as attested by finds of their coins, for example the South Wight Hoard and the Shalfleet Hoard. South-eastern Britain experienced significant immigration that is reflected in the genetic makeup of the current residents. As the Iron Age began, the value of tin likely dropped sharply, greatly changing the economy of the Isle of Wight. Trade, however, continued, as evidenced by the remarkable local abundance of European Iron Age coins. Julius Caesar reported that the Belgae took the Isle of Wight in about 85 BC and recognised the culture of this general region as "Belgic", but he made no reference to Vectis. The Roman historian Suetonius mentions that the island was captured by the commander Vespasian. The Romans built no towns on the island, but the remains of at least seven Roman villas have been found, indicating the prosperity of local agriculture. First-century exports were principally hides, slaves, hunting dogs, grain, cattle, silver, gold, and iron. Ferriby boats and later Blackfriars ships were likely important to the local economy. Starting in AD 449 (according to the Anglo-Saxon Chronicle), the 5th and 6th centuries saw groups of Germanic-speaking peoples from northern Europe cross the English Channel and settle. 
Bede's "Historia ecclesiastica gentis Anglorum" (731) identifies three separate groups of invaders; of these, the Jutes from Denmark settled the Isle of Wight and Kent. From then onwards there are indications that the island had wide trading links, with a port at Bouldnor, evidence of Bronze Age tin trading, and finds of Late Iron Age coins. During the Dark Ages the island was settled by Jutes as the pagan kingdom of Wihtwara under King Arwald. In 685 it was invaded by Caedwalla, who tried to replace the inhabitants with his own followers. In 686 Arwald was defeated and the island became the last part of English lands to be converted to Christianity; it was added to Wessex and then became part of England under King Alfred the Great, included within the shire of Hampshire. It suffered especially from Viking raids, and was often used as a winter base by Viking raiders when they were unable to reach Normandy. Later, both Earl Tostig and his brother Harold Godwinson (who became King Harold II) held manors on the island. The Norman Conquest of 1066 created the position of Lord of the Isle of Wight; the island was given by William the Conqueror to his kinsman William FitzOsbern. Carisbrooke Priory and the fort of Carisbrooke Castle were then founded. Allegiance was sworn to FitzOsbern rather than the king; the Lordship was subsequently granted to the de Redvers family by Henry I, after his succession in 1100. For nearly 200 years the island was a semi-independent feudal fiefdom, with the de Redvers family ruling from Carisbrooke. The final private owner was the Countess Isabella de Fortibus, who, on her deathbed in 1293, was persuaded to sell it to Edward I. Thereafter the island was under the control of the English Crown and its Lordship a royal appointment. The island continued to be attacked from the continent: it was raided in 1374 by the fleet of Castile, and in 1377 by French raiders who burned several towns, including Newtown, and laid siege to Carisbrooke Castle before they were defeated. Under Henry VIII, who developed the Royal Navy and its Portsmouth base, the island was fortified at Yarmouth, Cowes, East Cowes, and Sandown. The French invasion on 21 July 1545 (famous for the sinking of the Mary Rose on the 19th) was repulsed by local militia. During the English Civil War, King Charles fled to the Isle of Wight, believing he would receive sympathy from the governor, Robert Hammond, but Hammond imprisoned the king in Carisbrooke Castle. During the Seven Years' War, the island was used as a staging post for British troops departing on expeditions against the French coast, such as the Raid on Rochefort. During 1759, with a planned French invasion imminent, a large force of soldiers was stationed there; the French called off their invasion following the Battle of Quiberon Bay. In the 1860s, in what remains in real terms the most expensive government spending project ever, fortifications were built on the island and in the Solent, as well as elsewhere along the south coast, including the Palmerston Forts, the Needles Batteries and Fort Victoria, because of fears about a possible French invasion. The future Queen Victoria spent childhood holidays on the island and became fond of it. As queen she made Osborne House her winter home, and the island thus became a fashionable holiday resort, including for Alfred, Lord Tennyson, Julia Margaret Cameron, and Charles Dickens (who wrote much of "David Copperfield" there), as well as the French painter Berthe Morisot and members of European royalty. 
Until the queen's example, the island had been rural, with most people employed in farming, fishing or boat-building. The boom in tourism, spurred by growing wealth and leisure time, and by Victoria's presence, led to significant urban development of the island's coastal resorts. As one report summarizes, "The Queen's regular presence on the island helped put the Isle of Wight 'on the map' as a Victorian holiday and wellness destination ... and her former residence Osborne House is now one of the most visited attractions on the island." While on the island, the queen used a bathing machine that could be wheeled into the water on Osborne Beach; inside the small wooden hut she could undress and then bathe, without being visible to others. Her machine had a changing room and a WC with plumbing. The refurbished machine is now displayed at the beach. On 14 January 1878, Alexander Graham Bell demonstrated an early version of the telephone to the queen, placing calls to Cowes, Southampton and London. These were the first publicly witnessed long-distance telephone calls in the UK. The queen tried the device and considered the process "quite extraordinary", although the sound was "rather faint". She later asked to buy the equipment that was used, but Bell offered to make "a set of telephones" specifically for her. The world's first radio station was set up by Marconi in 1897, during her reign, at the Needles Battery at the western tip of the island. A 168-foot-high mast was erected near the Royal Needles Hotel as part of an experiment in communicating with ships at sea; that location is now the site of the Marconi Monument. In 1898 the first paid wireless telegram (called a "Marconigram") was sent from this station, and the island was for some time the home of the National Wireless Museum, near Ryde. Queen Victoria died at Osborne House on 22 January 1901, aged 81. During the Second World War the island was frequently bombed. With its proximity to German-occupied France, the island hosted observation stations and transmitters, as well as the RAF radar station at Ventnor. It was the starting point for one of the earlier Operation Pluto pipelines to feed fuel to Europe after the Normandy landings. The Needles Battery was used to develop and test the Black Arrow and Black Knight space rockets, which were subsequently launched from Woomera, Australia. The Isle of Wight Festival was a very large rock festival that took place near Afton Down, West Wight, in 1970, following two smaller concerts in 1968 and 1969. The 1970 show was notable both as one of the last public performances by Jimi Hendrix and for the number of attendees, which by some estimates reached 600,000. The festival was revived in 2002 in a different format and is now an annual event. The oldest records that give a name for the Isle of Wight are from the Roman Empire: it was then called "Vectis" or "Vecta" in Latin, and "Iktis" or "Ouiktis" in Greek. From the Anglo-Saxon period the Latin "Vecta", Old English "Wiht" and Old Welsh forms "Gueid" and "Guith" are recorded. In Domesday Book it is "Wit"; the modern Welsh name is "Ynys Wyth" ("ynys" = island). These are all variant forms of the same name, possibly Celtic in origin. It may mean "place of the division", because the island divides the two arms of the Solent. The Isle of Wight is situated between the Solent and the English Channel, is roughly rhomboid in shape, and covers an area of . 
Slightly more than half, mainly in the west, is designated as the Isle of Wight Area of Outstanding Natural Beauty. The island has of farmland, of developed areas, and of coastline. Its landscapes are diverse, leading to its oft-quoted description as "England in miniature". In June 2019 the whole island was designated a UNESCO Biosphere Reserve, recognising the sustainable relationships between its residents and the local environment. West Wight is predominantly rural, with dramatic coastlines dominated by the chalk downland ridge, which runs across the whole island and ends in the Needles stacks. The southwestern quarter is commonly referred to as the Back of the Wight, and has a unique character. The highest point on the island is St Boniface Down in the south-east, which at is a Marilyn. The most notable habitats on the rest of the island are probably the soft cliffs and sea ledges, which are scenic features, important for wildlife, and internationally protected. The island has three principal rivers. The River Medina flows north into the Solent, the Eastern Yar flows roughly northeast to Bembridge Harbour, and the Western Yar flows the short distance from Freshwater Bay to a relatively large estuary at Yarmouth. Without human intervention the sea might well have split the island into three: at the west end, where a bank of pebbles separates Freshwater Bay from the marshy backwaters of the Western Yar east of Freshwater, and at the east end, where a thin strip of land separates Sandown Bay from the marshy Eastern Yar basin. The Undercliff between St Catherine's Point and Bonchurch is the largest area of landslip morphology in western Europe. The north coast is unusual in having four high tides each day, with a double high tide every twelve and a half hours. This arises because the western Solent is narrower than the eastern; the initial tide of water flowing from the west starts to ebb before the stronger flow around the south of the island returns through the eastern Solent to create a second high water. The Isle of Wight is made up of a variety of rock types dating from the early Cretaceous (around 127 million years ago) to the middle of the Palaeogene (around 30 million years ago). The geological structure is dominated by a large monocline which causes a marked change in the age of strata from the younger Tertiary beds of the north to the older Cretaceous beds of the south. This gives rise to a dip of almost 90 degrees in the chalk beds, seen best at the Needles. The northern half of the island is mainly composed of clays, with the southern half formed of the chalk of the central east–west downs, as well as Upper and Lower Greensands and Wealden strata. These strata continue west from the island across the Solent into Dorset, forming the basin of Poole Harbour (Tertiary) and the Isle of Purbeck (Cretaceous) respectively. The chalky ridges of Wight and Purbeck were a single formation before they were breached by waters from the River Frome during the last ice age, forming the Solent and turning Wight into an island. The Needles, along with Old Harry Rocks on Purbeck, represent the edges of this breach. All the rocks found on the island are sedimentary, such as limestones, mudstones and sandstones. They are rich in fossils; many can be seen exposed on beaches as the cliffs erode. Lignitic coal is present in small quantities within seams, and can be seen on the cliffs and shore at Whitecliff Bay. 
Fossilised molluscs have been found there, and also on the northern coast along with fossilised crocodiles, turtles and mammal bones; the youngest date back to around 30 million years ago. The island is one of the most important areas in Europe for dinosaur fossils. The eroding cliffs often reveal previously hidden remains, particularly along the Back of the Wight. Dinosaur bones and fossilised footprints can be seen in and on the rocks exposed around the island's beaches, especially at Yaverland and Compton Bay. As a result, the island has been nicknamed "Dinosaur Island", and the Dinosaur Isle museum was established in 2001. The area was affected by sea level changes during the repeated Quaternary glaciations. The island probably became separated from the mainland about 125,000 years ago, during the Ipswichian interglacial. Like the rest of the UK, the island has an oceanic climate, but it is somewhat milder and sunnier, which makes it a holiday destination. It also has a longer growing season. Lower Ventnor and the neighbouring Undercliff have a particular microclimate, because of their sheltered position south of the downs. The island enjoys 1,800–2,100 hours of sunshine a year. Some years have almost no snow in winter, and only a few days of hard frost. The island is in hardiness zone 9. The Isle of Wight is one of the few places in England where the red squirrel is still flourishing; no grey squirrels are to be found. There are occasional sightings of wild deer, and there is a colony of wild goats on Ventnor's downs. Protected species such as the dormouse and rare bats can be found. The Glanville fritillary butterfly's distribution in the United Kingdom is largely restricted to the edges of the island's crumbling cliffs. A competition in 2002 named the pyramidal orchid as the Isle of Wight's county flower. The island has a single Member of Parliament. The Isle of Wight constituency covers the entire island; with 138,300 permanent residents in 2011, it is one of the most populous constituencies in the United Kingdom (more than 50% above the English average). In 2011, following passage of the Parliamentary Voting System and Constituencies Act, the Sixth Periodic Review of Westminster constituencies was to have changed this, but this was deferred to no earlier than October 2018 by the Electoral Registration and Administration Act 2013. Thus the single constituency remained for the 2015 and 2017 general elections. However, two separate East and West constituencies are proposed for the island under the 2018 review now under way. The Isle of Wight is a ceremonial and non-metropolitan county. Since the abolition of its two borough councils and the restructuring of the Isle of Wight County Council into the new Isle of Wight Council in 1995, it has been administered by a single unitary authority. Elections in the constituency have traditionally been a battle between the Conservatives and the Liberal Democrats. Andrew Turner of the Conservative Party gained the seat from Peter Brand of the Lib Dems at the 2001 general election. From 2009 Turner was embroiled in controversy over his expenses, health, and relationships with colleagues, with local Conservatives having tried but failed to remove him in the run-up to the 2015 general election. He stood down prior to the 2017 snap general election, and the new Conservative Party candidate Bob Seely was elected with a majority of 21,069 votes. 
At the Isle of Wight Council election of 2013, the Conservatives lost the majority which they had held since 2005 to the Island Independents, with Island Independent councillors holding 16 of the 40 seats, and a further five councillors sitting as independents outside the group. The Conservatives regained control, winning 25 seats, at the 2017 local election. There have been small regionalist movements, the Vectis National Party and the Isle of Wight Party, but they have attracted little support at elections. The local accent is similar to the traditional dialect of Hampshire, featuring the dropping of some consonants and an emphasis on longer vowels. It is similar to the West Country dialects heard in South West England, but less pronounced. The island has its own local and regional words. Some, such as "nipper/nips" (a young male person), are still commonly used and are shared with neighbouring areas of the mainland. A few are unique to the island, for example "overner" and "caulkhead" (see below). Others are more obscure and now used mainly for comic emphasis, such as "mallishag" (meaning "caterpillar"), "gurt" (meaning "large"), "nammit" (a mid-morning snack) and "gallybagger" ("scarecrow", and now the name of a local cheese). There remains occasional confusion between the Isle of Wight as a county and its former position within Hampshire. The island was regarded and administered as a part of Hampshire until 1890, when its distinct identity was recognised with the formation of the Isle of Wight County Council (see also "Politics of the Isle of Wight"). However, it remained a part of Hampshire until the local government reforms of 1974, when it became a full ceremonial county with its own Lord Lieutenant. In January 2009, the first general flag for the county was accepted by the Flag Institute. Island residents are sometimes referred to as "Vectensians", "Vectians" or, if born on the island, "caulkheads". One theory is that this last comes from the once prevalent local industry of caulking or sealing wooden boats; the term became attached to islanders either because they were so employed, or as a derisory term for perceived unintelligent labourers from elsewhere. The term "overner" is used for island residents originating from the mainland (an abbreviated form of "overlander", which is an archaic term for "outsider" still found in parts of Australia). Residents refer to the island as "The Island", as did Jane Austen in "Mansfield Park", and sometimes to the UK mainland as "North Island". To promote the island's identity and culture, the High Sheriff Robin Courage founded an Isle of Wight Day; the first was held on Saturday 24 September 2016. The island is said to be the most haunted in the world, sometimes being referred to as "Ghost Island". Notable claimed hauntings include God's Providence House in Newport (now a tea room), Appuldurcombe House, and the remains of Knighton Gorges. The island is well known for its cycling, and it was included among the top ten cycling locations in Lonely Planet's "Best in Travel" guide (2010). The island also hosts events such as the Isle of Wight Randonnée and the Isle of Wight Cycling Festival each year. A popular cycling track is the Sunshine Trail, which starts in Newport and ends in Sandown. There are rowing clubs at Newport, Ryde and Shanklin, all members of the Hants and Dorset rowing association. There is a long tradition of rowing around the island dating back to the 1880s. 
In May 1999 a group of local women made history by becoming the first ladies' crew to row around the island, in ten hours and twenty minutes. Rowers from Ryde Rowing Club have rowed around the island several times since 1880. The fours record was set on 16 August 1995 at 7 hours 54 minutes. Two rowers from Southampton ARC (Chris Bennett and Roger Slaymaker) set the two-man record in July 2003 at 8 hours 34 minutes, and in 2005 Gus McKechnie of Coalporters Rowing Club became the first adaptive rower to row around the island, completing a clockwise row. The route around the island is about and is usually rowed anticlockwise. Even in good conditions, it includes a number of significant obstacles such as the Needles and the overfalls at St Catherine's Point. The traditional start and finish were at Ryde Rowing Club; however, other starts have been chosen in recent years to give a tidal advantage. Cowes is a centre for sailing, hosting several racing regattas. Cowes Week is the longest-running regular regatta in the world, with over 1,000 yachts and 8,500 competitors taking part in over 50 classes of racing. In 1851 the first America's Cup race was held around the island. Other major sailing events hosted in Cowes include the Fastnet race, the Round the Island Race, the Admiral's Cup, and the Commodore's Cup. There are two main trampoline clubs on the island, in Freshwater and Newport, competing at regional, national and international grades. The Isle of Wight Marathon is the United Kingdom's oldest continuously held marathon, having been run every year since 1957. Since 2013 the course has started and finished in Cowes, heading out to the west of the island and passing through Gurnard, Rew Street, Porchfield, Shalfleet, Yarmouth, Afton, Willmingham, Thorley, Wellow, Shalfleet, Porchfield, and Northwood. It is an undulating course with a total climb of . The island is home to the Wightlink Warriors speedway team, who compete in the sport's third division, the National League. Following an amalgamation of local hockey clubs in 2011, the Isle of Wight Hockey Club now runs two men's senior and two ladies' senior teams. These compete at a range of levels in the Hampshire open leagues. The now-disbanded Ryde Sports F.C., founded in 1888, was one of the eight founder members of the Hampshire League in 1896. There are several non-league clubs such as Newport (IOW) F.C. There is an Isle of Wight Saturday Football League, with two divisions and two reserve-team leagues, which feeds into the Hampshire League; there is also a rugby union club. The Isle of Wight is the 39th official county in English cricket, and the Isle of Wight Cricket Board organises a league of local clubs. Ventnor Cricket Club competes in the Southern Premier League, and has won the Second Division several times. Newclose County Cricket Ground near Newport opened officially in 2009, although its first match was held on 6 September 2008. The island has produced some notable cricketers, such as Danny Briggs, who plays county cricket for Sussex. The Isle of Wight competes in the biennial Island Games, which it hosted in 1993 and again in 2011. The annual Isle of Wight International Scooter Rally has met on the August Bank Holiday since 1980. It is now one of the biggest scooter rallies in the world, attracting between four and seven thousand participants. There are eight golf courses on the Isle of Wight. The island is home to the Isle of Wight Festival and, until 2016, Bestival, before the latter was relocated to the Lulworth Estate in Dorset. 
In 1970, the festival was headlined by Jimi Hendrix, attracting an audience of 600,000, some six times the local population at the time. The island is the home of the bands The Bees, Trixie's Big Red Motorbike and Level 42. The Office for National Statistics compiles figures for the regional gross value added (in millions of pounds) by the Isle of Wight economy, at current prices. According to the 2011 census, the island's population of 138,625 lives in 61,085 households, giving an average household size of 2.27 people. 41% of households own their home outright and a further 29% own with a mortgage, so in total 70% of households are owner-occupied (compared to 68% for South East England). Compared to South East England, the island has fewer children (19% aged 0–17 against 22% for the South East) and more elderly (24% aged 65+ against 16%), giving an average age of 44 years for an island resident compared to 40 in South East England. The largest industry is tourism, but the island also has a strong agricultural heritage, including sheep and dairy farming and arable crops. Traditional agricultural commodities are more difficult to market off the island because of transport costs, but local farmers have succeeded in exploiting some specialist markets, with the higher prices of such products absorbing the transport costs. One of the most successful agricultural sectors is now the growing of crops under cover, particularly salad crops including tomatoes and cucumbers. The island has a warmer climate and a longer growing season than much of the United Kingdom. Garlic has been successfully grown in Newchurch for many years, and is even exported to France. This has led to the establishment of an annual Garlic Festival at Newchurch, which is one of the largest events of the local calendar. A favourable climate supports two vineyards, including one of the oldest in the British Isles at Adgestone. Lavender is grown for its oil. The largest agricultural sector has been dairying, but due to low milk prices and strict legislation for UK milk producers, the dairy industry has been in decline: there were nearly 150 producers in the mid-1980s, but now just 24. Maritime industries, especially the making of sailcloth and boat building, have long been associated with the island, although this has diminished somewhat in recent years. GKN operates what began as the British Hovercraft Corporation, a subsidiary of (and known latterly as) Westland Aircraft, although they have reduced the extent of plant and workforce and sold the main site. Previously it had been the independent company Saunders-Roe, one of the island's most notable historic firms, which produced many flying boats and the world's first hovercraft. Another manufacturing activity is in composite materials, used by boat-builders and the wind turbine manufacturer Vestas, which has a wind turbine blade factory and testing facilities at West Medina Mills and East Cowes. Bembridge Airfield is the home of Britten-Norman, manufacturers of the Islander and Trislander aircraft. This is shortly to become the site of the European assembly line for Cirrus light aircraft. The Norman Aeroplane Company is a smaller aircraft manufacturing company operating in Sandown. There have been three other firms that built planes on the island. In 2005, Northern Petroleum began exploratory drilling for oil at its Sandhills-2 borehole at Porchfield, but ceased operations in October that year after failing to find significant reserves. There are three breweries on the island. 
Goddards Brewery in Ryde opened in 1993. David Yates, who was head brewer of the Island Brewery, started brewing as Yates Brewery at the Inn at St Lawrence in 2000. Ventnor Brewery, which closed in 2009, was the last incarnation of Burt's Brewery, which had brewed in Ventnor since the 1840s. Until the 1960s most pubs were owned by Mews Brewery, situated in Newport near the old railway station, but it closed and the pubs were taken over by Strong's, and then by Whitbread. By some accounts Mews beer was apt to be rather cloudy and dark. In the 19th century the brewery pioneered the use of screw-top cans for export to British India. The island's heritage is a major asset that has for many years supported its tourist economy. Holidays focused on natural heritage, including wildlife and geology, are becoming an alternative to the traditional British seaside holiday, which went into decline in the second half of the 20th century due to the increased affordability of foreign holidays. The island is still an important destination for coach tours from other parts of the United Kingdom. Tourism is still the largest industry, and most island towns and villages offer hotels, hostels and camping sites. In 1999, the island hosted 2.7 million visitors, with 1.5 million staying overnight and 1.2 million making day visits; only 150,000 of these were from abroad. Between 1993 and 2000, visits increased at an average rate of 3% per year. At the turn of the 20th century the island had ten pleasure piers, including two at Ryde and a "chain pier" at Seaview. The Victoria Pier in Cowes succeeded the earlier Royal Pier but was itself removed in 1960. The piers at Ryde, Seaview, Sandown, Shanklin and Ventnor originally served a coastal steamer service that operated from Southsea on the mainland. The piers at Seaview, Shanklin, Ventnor and Alum Bay were all destroyed by various storms during the 20th century; only the railway pier at Ryde and the piers at Sandown, Totland Bay (currently closed to the public) and Yarmouth survive. Blackgang Chine is the oldest theme park in Britain, opened in 1843. The skeleton of a whale that its founder Alexander Dabell found in 1844 is still on display. As well as its more traditional attractions, the island often hosts walking or cycling holidays through its attractive scenery. An annual walking festival has attracted considerable interest. The Isle of Wight Coastal Path follows the coastline as far as possible, deviating onto roads where the route along the coast is impassable. The tourist board for the island is Visit Isle of Wight, a not-for-profit company. It is the Destination Management Organisation for the Isle of Wight, a public and private sector partnership led by the private sector, and consists of over 1,200 companies, including the ferry operators, the local bus company, the rail operator and tourism providers working together to promote the island collectively. Its income is derived from the Wight BID, a business improvement district levy fund. A major contributor to the local economy is sailing and marine-related tourism. Summer Camp at Camp Beaumont is an attraction at the old Bembridge School site. The Isle of Wight has of roadway. It does not have a motorway, although there is a short stretch of dual carriageway towards the north of Newport near the hospital and prison. A comprehensive bus network operated by Southern Vectis links most settlements, with Newport as its central hub. Journeys away from the island involve a ferry journey. 
Car ferry and passenger catamaran services are run by Wightlink and Red Funnel, and a hovercraft passenger service (the only such remaining in the world) is run by Hovertravel. The island formerly had its own railway network of over , but only one line remains in regular use. The Island Line is part of the United Kingdom's National Rail network, running a little under from Ryde Pier Head to Shanklin, with a connecting ferry service from Ryde to Portsmouth Harbour station on the mainland network. The line was opened by the Isle of Wight Railway in 1864, and from 1996 to 2007 was run by the smallest train operating company on the network, Island Line Trains. It is notable for utilising old ex-London Underground rolling stock, due to the small size of its tunnels and unmodernised signalling. Branching off the Island Line at Smallbrook Junction is the heritage Isle of Wight Steam Railway, which runs for to the outskirts of Wootton on the former line to Newport. There are two airfields for general aviation: Isle of Wight Airport at Sandown and Bembridge Airport. The island has over of cycleways, many of which can be enjoyed off-road. The main local newspaper is the "Isle of Wight County Press", published most Fridays. The island hosts a news website, "Island Echo", which was launched in May 2012. The island has a local commercial radio station and a community radio station: commercial station Isle of Wight Radio has broadcast in the medium-wave band since 1990 and on 107.0 MHz FM (with three smaller transmitters on 102.0 MHz) since 1998, as well as streaming on the Internet. Community station Vectis Radio has broadcast online since 2010, and in 2017 started broadcasting on 104.6 FM. The station operates from the Riverside Centre in Newport. The island is also covered by a number of local stations on the mainland, including the BBC station BBC Radio Solent, broadcast from Southampton. The island's not-for-profit community radio station Angel Radio opened in 2007, broadcasting on 91.5 MHz from studios in Cowes and a transmitter near Newport. Other online news sources for the Isle of Wight include "On the Wight". The island has had community television stations in the past, first TV12 and then Solent TV, from 2002 until its closure on 24 May 2007. iWight.tv is a local internet video news channel. The Isle of Wight is part of the BBC South region and the ITV Meridian region. Important broadcasting infrastructure includes the Chillerton Down transmitting station, with a mast that is the tallest structure on the island, and the Rowridge transmitting station, which broadcasts the main television signal both locally and for most of Hampshire and parts of Dorset and West Sussex. The Isle of Wight is near the densely populated south of England, yet separated from the mainland. This position led to it hosting three prisons: Albany, Camp Hill and Parkhurst, all located outside Newport near the main road to Cowes. Albany and Parkhurst were among the few Category A prisons in the UK until they were downgraded in the 1990s. The downgrading of Parkhurst was precipitated by a major escape: three prisoners (two murderers and a blackmailer) escaped from the prison on 3 January 1995 and were at large for four days before being recaptured. Parkhurst enjoyed notoriety as one of the toughest jails in the United Kingdom, and housed many notable inmates including the Yorkshire Ripper Peter Sutcliffe, New Zealand drug lord Terry Clark and the Kray twins. 
Camp Hill is located adjacent to, but west of, Albany and Parkhurst, on the very edge of Parkhurst Forest, having been converted first to a borstal and later to a Category C prison. It was built on the site of an army camp (both Albany and Parkhurst were barracks); there is a small estate of tree-lined roads with the former officers' quarters (now privately owned) to the south and east. Camp Hill closed as a prison in March 2013. The management of all three prisons was merged into a single administration, HMP Isle of Wight, in April 2009. There are 69 local education authority-maintained schools on the Isle of Wight, and two independent schools. As the island is largely rural, many of these schools are small, with fewer pupils than schools in urban areas. The Isle of Wight College is located on the outskirts of Newport. From September 2010, there was a transition period from the three-tier system of primary, middle and high schools to the two-tier system that is usual in England. Some schools have now closed, such as Chale C.E. Primary. Others have become "federated", such as Brading C.E. Primary and St Helen's Primary. Christ the King College started as two middle schools, Trinity Middle School and Archbishop King Catholic Middle School, but has now been converted into a dual-faith secondary school and sixth form. Since September 2011 five new secondary schools, with an age range of 11 to 18 years, have replaced the island's high schools (part of the previous three-tier system). The Isle of Wight has given names to many parts of former colonies, most notably Isle of Wight County in Virginia, founded by settlers from the island in the 17th century; its county seat is a town named Isle of Wight.
https://en.wikipedia.org/wiki?curid=15102
Internet Control Message Protocol The Internet Control Message Protocol (ICMP) is a supporting protocol in the Internet protocol suite. It is used by network devices, including routers, to send error messages and operational information indicating success or failure when communicating with another IP address; for example, an error is indicated when a requested service is not available or when a host or router cannot be reached. ICMP differs from transport protocols such as TCP and UDP in that it is not typically used to exchange data between systems, nor is it regularly employed by end-user network applications (with the exception of some diagnostic tools like ping and traceroute). ICMP for IPv4 is defined in RFC 792. ICMP messages are typically used for diagnostic or control purposes or generated in response to errors in IP operations (as specified in RFC 1122). ICMP errors are directed to the source IP address of the originating packet. For example, every device (such as an intermediate router) forwarding an IP datagram first decrements the time to live (TTL) field in the IP header by one. If the resulting TTL is 0, the packet is discarded and an ICMP time exceeded in transit message is sent to the datagram's source address. Many commonly used network utilities are based on ICMP messages. The traceroute command can be implemented by transmitting IP datagrams with specially set IP TTL header fields, and looking for the ICMP time exceeded in transit and destination unreachable messages generated in response. The related ping utility is implemented using the ICMP echo request and echo reply messages. ICMP uses the basic support of IP as if it were a higher-level protocol; however, ICMP is actually an integral part of IP. Although ICMP messages are contained within standard IP packets, ICMP messages are usually processed as a special case, distinguished from normal IP processing. In many cases, it is necessary to inspect the contents of the ICMP message and deliver the appropriate error message to the application responsible for transmitting the IP packet that prompted the ICMP message to be sent. ICMP is a network-layer protocol. There is no TCP or UDP port number associated with ICMP packets, as these numbers are associated with the transport layer above. The ICMP packet is encapsulated in an IPv4 packet and consists of a header and a data section. The ICMP header starts after the IPv4 header and is identified by IP protocol number 1. All ICMP packets have an 8-byte header and a variable-sized data section. The first 4 bytes of the header have a fixed format, while the last 4 bytes depend on the type and code of the ICMP packet. ICMP error messages contain a data section that includes a copy of the entire IPv4 header, plus at least the first eight bytes of data from the IPv4 packet that caused the error message. The maximum length of an ICMP error message is 576 bytes. This data is used by the host to match the message to the appropriate process: if a higher-level protocol uses port numbers, they are assumed to be in the first eight bytes of the original datagram's data. The variable size of the ICMP packet data section has been exploited: in the "Ping of death", large or fragmented ICMP packets are used for denial-of-service attacks. ICMP data can also be used to create covert channels for communication, known as ICMP tunnels. Control messages are identified by the value in the "type" field. 
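Because the header layout just described is fixed, it is straightforward to illustrate in code. The sketch below builds the bytes of an ICMP echo request (type 8, code 0, as used by ping) in Python, computing the Internet checksum (the ones'-complement sum defined in RFC 1071) over the message; it only constructs the packet and does not send it, since sending ICMP requires a raw socket and elevated privileges. The function names are invented for the example.

```python
import struct

def internet_checksum(data: bytes) -> int:
    """Ones'-complement sum of 16-bit big-endian words (RFC 1071)."""
    if len(data) % 2:
        data += b"\x00"                      # pad to a 16-bit boundary
    total = sum(struct.unpack(f"!{len(data) // 2}H", data))
    while total >> 16:                       # fold carry bits back in
        total = (total & 0xFFFF) + (total >> 16)
    return ~total & 0xFFFF

def build_echo_request(ident: int, seq: int, payload: bytes = b"ping") -> bytes:
    """Build an ICMP echo request: type 8, code 0, checksum, then the
    identifier and sequence number that fill out the 8-byte header."""
    header = struct.pack("!BBHHH", 8, 0, 0, ident, seq)   # checksum field zeroed
    checksum = internet_checksum(header + payload)
    return struct.pack("!BBHHH", 8, 0, checksum, ident, seq) + payload

packet = build_echo_request(ident=0x1234, seq=1)
print(packet.hex())
```

A matching echo reply (type 0) carries back the same identifier, sequence number and payload, which is how ping pairs replies with its requests.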
The "code" field gives additional context information for the message. Some control messages have been deprecated since the protocol was first introduced. "Source Quench" requests that the sender decrease the rate of messages sent to a router or host. This message may be generated if a router or host does not have sufficient buffer space to process the request, or may occur if the router or host buffer is approaching its limit. Data is sent at a very high speed from a host or from several hosts at the same time to a particular router on a network. Although a router has buffering capabilities, the buffering is limited to within a specified range. The router cannot queue any more data than the capacity of the limited buffering space. Thus if the queue gets filled up, incoming data is discarded until the queue is no longer full. But as no acknowledgement mechanism is present in the network layer, the client does not know whether the data has reached the destination successfully. Hence some remedial measures should be taken by the network layer to avoid these kind of situations. These measures are referred to as source quench. In a source quench mechanism, the router sees that the incoming data rate is much faster than the outgoing data rate, and sends an ICMP message to the clients, informing them that they should slow down their data transfer speeds or wait for a certain amount of time before attempting to send more data. When a client receives this message, it will automatically slow down the outgoing data rate or wait for a sufficient amount of time, which enables the router to empty the queue. Thus the source quench ICMP message acts as flow control in the network layer. Since research suggested that "ICMP Source Quench [was] an ineffective (and unfair) antidote for congestion", routers' creation of source quench messages was deprecated in 1995 by RFC 1812. Furthermore, forwarding of and any kind of reaction to (flow control actions) source quench messages was deprecated from 2012 by RFC 6633. Where: "Redirect" requests data packets be sent on an alternative route. ICMP Redirect is a mechanism for routers to convey routing information to hosts. The message informs a host to update its routing information (to send packets on an alternative route). If a host tries to send data through a router (R1) and R1 sends the data on another router (R2) and a direct path from the host to R2 is available (that is, the host and R2 are on the same Ethernet segment), then R1 will send a redirect message to inform the host that the best route for the destination is via R2. The host should then send packets for the destination directly to R2. The router will still send the original datagram to the intended destination. However, if the datagram contains routing information, this message will not be sent even if a better route is available. RFC 1122 states that redirects should only be sent by gateways and should not be sent by Internet hosts. Where: "Time Exceeded" is generated by a gateway to inform the source of a discarded datagram due to the time to live field reaching zero. A time exceeded message may also be sent by a host if it fails to reassemble a fragmented datagram within its time limit. Time exceeded messages are used by the traceroute utility to identify gateways on the path between two hosts. Where: "Timestamp" is used for time synchronization. The originating timestamp is set to the time (in milliseconds since midnight) the sender last touched the packet. 
The receive and transmit timestamps are not used. "Timestamp Reply" replies to a "Timestamp" message. It consists of the originating timestamp sent by the sender of the "Timestamp", as well as a receive timestamp indicating when the "Timestamp" was received and a transmit timestamp indicating when the "Timestamp Reply" was sent. "Address mask request" is normally sent by a host to a router in order to obtain an appropriate subnet mask. Recipients should reply to this message with an "Address mask reply" message. ICMP Address Mask Request may be used as part of a reconnaissance attack to gather information on the target network, so ICMP Address Mask Reply is disabled by default on Cisco IOS. "Address mask reply" is used to reply to an address mask request message with an appropriate subnet mask. "Destination unreachable" is generated by the host or its inbound gateway to inform the client that the destination is unreachable for some reason. Reasons for this message may include: the physical connection to the host does not exist (distance is infinite); the indicated protocol or port is not active; or the data must be fragmented but the 'don't fragment' flag is on. Notably, unreachable TCP ports respond with TCP RST rather than a "destination unreachable" type 3 message as might be expected. "Destination unreachable" is never reported for IP multicast transmissions.
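To show how a host can match an ICMP error to the datagram that caused it, here is a minimal Python sketch (a hypothetical helper, not from any particular library) that parses a raw ICMP error message such as destination unreachable and recovers the addresses from the embedded IPv4 header together with the first eight bytes of the original payload, which for TCP and UDP contain the port numbers:

```python
def parse_icmp_error(icmp: bytes):
    """Parse an ICMP error message (e.g. destination unreachable, type 3).

    The 8-byte ICMP header is followed by a copy of the offending
    datagram's IPv4 header plus at least its first 8 payload bytes.
    The quoted header's IHL field is honoured so that IP options, if
    present, are skipped.  Illustrative sketch only.
    """
    icmp_type, code = icmp[0], icmp[1]
    quoted = icmp[8:]                    # past the 8-byte ICMP header
    ihl = (quoted[0] & 0x0F) * 4         # quoted IPv4 header length in bytes
    src = ".".join(str(b) for b in quoted[12:16])
    dst = ".".join(str(b) for b in quoted[16:20])
    first8 = quoted[ihl:ihl + 8]         # for TCP/UDP, the ports live here
    return icmp_type, code, src, dst, first8
```

For a type 3, code 3 message ("port unreachable"), the first two of those eight bytes would be the source port of the original datagram, letting the host deliver the error to the right process.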
https://en.wikipedia.org/wiki?curid=15107
Inverse limit In mathematics, the inverse limit (also called the projective limit) is a construction that allows one to "glue together" several related objects, the precise manner of the gluing process being specified by morphisms between the objects. Inverse limits can be defined in any category, and they are a special case of the concept of a limit in category theory. We start with the definition of an inverse system (or projective system) of groups and homomorphisms. Let ("I", ≤) be a directed poset (not all authors require "I" to be directed). Let ("A""i")"i"∈"I" be a family of groups and suppose we have a family of homomorphisms "f""ij": "A""j" → "A""i" for all "i" ≤ "j" (note the order), called bonding maps, with the following properties: "f""ii" is the identity on "A""i", and "f""ik" = "f""ij" ∘ "f""jk" for all "i" ≤ "j" ≤ "k". Then the pair (("A""i")"i"∈"I", ("f""ij")"i"≤ "j"∈"I") is called an inverse system of groups and morphisms over "I", and the morphisms "f""ij" are called the transition morphisms of the system. We define the inverse limit of the inverse system (("A""i")"i"∈"I", ("f""ij")"i"≤ "j"∈"I") as a particular subgroup of the direct product of the "A""i"'s: $A = \varprojlim_{i \in I} A_i = \{ (a_i)_{i \in I} \in \prod_{i \in I} A_i : a_i = f_{ij}(a_j) \text{ for all } i \le j \}$. The inverse limit "A" comes equipped with "natural projections" π"i": "A" → "A""i" which pick out the "i"th component of the direct product for each "i" in "I". The inverse limit and the natural projections satisfy a universal property described in the next section. This same construction may be carried out if the "A""i"'s are sets, semigroups, topological spaces, rings, modules (over a fixed ring), algebras (over a fixed ring), etc., and the homomorphisms are morphisms in the corresponding category. The inverse limit will also belong to that category. The inverse limit can be defined abstractly in an arbitrary category by means of a universal property. Let ("X""i", "f""ij") be an inverse system of objects and morphisms in a category "C" (same definition as above). The inverse limit of this system is an object "X" in "C" together with morphisms π"i": "X" → "X""i" (called "projections") satisfying π"i" = "f""ij" ∘ π"j" for all "i" ≤ "j". The pair ("X", π"i") must be universal in the sense that for any other such pair ("Y", ψ"i") (i.e. ψ"i": "Y" → "X""i" with ψ"i" = "f""ij" ∘ ψ"j" for all "i" ≤ "j") there exists a unique morphism "u": "Y" → "X" such that ψ"i" = π"i" ∘ "u" for all "i", which makes the corresponding diagram commute for all "i" ≤ "j". The inverse limit is often denoted $X = \varprojlim X_i$, with the inverse system ("X""i", "f""ij") being understood. In some categories, the inverse limit of certain inverse systems does not exist. If it does, however, it is unique in a strong sense: given any two inverse limits "X" and "X"′ of an inverse system, there exists a "unique" isomorphism "X"′ → "X" commuting with the projection maps. We note that an inverse system in a category "C" admits an alternative description in terms of functors. Any partially ordered set "I" can be considered as a small category where the morphisms consist of arrows "i" → "j" if and only if "i" ≤ "j". An inverse system is then just a contravariant functor "I" → "C", and the inverse limit functor $\varprojlim$ is a covariant functor. For an abelian category "C", the inverse limit functor is left exact. If "I" is ordered (not simply partially ordered) and countable, and "C" is the category Ab of abelian groups, the Mittag-Leffler condition is a condition on the transition morphisms "f""ij" that ensures the exactness of $\varprojlim$. 
Specifically, Eilenberg constructed a functor $\varprojlim^1$ (pronounced "lim one") such that if ("A""i", "f""ij"), ("B""i", "g""ij"), and ("C""i", "h""ij") are three inverse systems of abelian groups, and $0 \to A_i \to B_i \to C_i \to 0$ is a short exact sequence of inverse systems, then $0 \to \varprojlim A_i \to \varprojlim B_i \to \varprojlim C_i \to \varprojlim^1 A_i \to \varprojlim^1 B_i \to \varprojlim^1 C_i \to 0$ is an exact sequence in Ab. If the ranges of the morphisms of an inverse system of abelian groups ("A""i", "f""ij") are "stationary", that is, for every "k" there exists "j" ≥ "k" such that for all "i" ≥ "j": $f_{kj}(A_j) = f_{ki}(A_i)$, one says that the system satisfies the Mittag-Leffler condition. The name "Mittag-Leffler" for this condition was given by Bourbaki in their chapter on uniform structures for a similar result about inverse limits of complete Hausdorff uniform spaces. Mittag-Leffler used a similar argument in the proof of Mittag-Leffler's theorem. Among the situations in which the Mittag-Leffler condition is satisfied are systems whose transition morphisms are all surjective and systems of finite-dimensional vector spaces over a field (where the images are eventually stationary). An example where $\varprojlim^1$ is non-zero is obtained by taking "I" to be the non-negative integers, letting "A""i" = "p""i"Z, "B""i" = Z, and "C""i" = "B""i" / "A""i" = Z/"p""i"Z. Then $\varprojlim^1 A_i = \mathbf{Z}_p/\mathbf{Z}$, where Z"p" denotes the p-adic integers. More generally, if "C" is an arbitrary abelian category that has enough injectives, then so does "C""I", and the right derived functors of the inverse limit functor can thus be defined. The "n"th right derived functor is denoted $R^n\varprojlim$. In the case where "C" satisfies Grothendieck's axiom (AB4*), Jan-Erik Roos generalized the functor lim1 on Ab"I" to a series of functors limn such that $\varprojlim^n \cong R^n \varprojlim$. It was thought for almost 40 years that Roos had proved (in "Sur les foncteurs dérivés de lim. Applications.") that lim1 "A""i" = 0 for ("A""i", "f""ij") an inverse system with surjective transition morphisms and "I" the set of non-negative integers (such inverse systems are often called "Mittag-Leffler sequences"). However, in 2002, Amnon Neeman and Pierre Deligne constructed an example of such a system in a category satisfying (AB4) (in addition to (AB4*)) with lim1 "A""i" ≠ 0. Roos has since shown (in "Derived functors of inverse limits revisited") that his result is correct if "C" has a set of generators (in addition to satisfying (AB3) and (AB4*)). Barry Mitchell has shown (in "The cohomological dimension of a directed set") that if "I" has cardinality $\aleph_d$ (the "d"th infinite cardinal), then "R""n"lim is zero for all "n" ≥ "d" + 2. This applies to the "I"-indexed diagrams in the category of "R"-modules, with "R" a commutative ring; it is not necessarily true in an arbitrary abelian category (see Roos' "Derived functors of inverse limits revisited" for examples of abelian categories in which lim"n", on diagrams indexed by a countable set, is nonzero for "n" > 1). The categorical dual of an inverse limit is a direct limit (or inductive limit). More general concepts are the limits and colimits of category theory. The terminology is somewhat confusing: inverse limits are a class of limits, while direct limits are a class of colimits.
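As a concrete worked example of the construction, the p-adic integers mentioned above arise as the inverse limit, over the non-negative integers, of the rings Z/"p""n"Z with reduction maps as transition morphisms; a standard presentation (a sketch following the definition given earlier) is:

```latex
% The p-adic integers as an inverse limit over I = {0, 1, 2, ...}.
% Bonding maps: reduction modulo p^n, for n <= m.
\[
  f_{nm}\colon \mathbf{Z}/p^{m}\mathbf{Z} \longrightarrow \mathbf{Z}/p^{n}\mathbf{Z},
  \qquad x \bmod p^{m} \longmapsto x \bmod p^{n} \qquad (n \le m),
\]
\[
  \mathbf{Z}_{p} \;=\; \varprojlim_{n} \mathbf{Z}/p^{n}\mathbf{Z}
  \;=\; \Bigl\{ (a_{n})_{n} \in \prod_{n} \mathbf{Z}/p^{n}\mathbf{Z}
        \;:\; a_{n} \equiv a_{m} \ (\mathrm{mod}\ p^{n}) \text{ for all } n \le m \Bigr\}.
\]
```

The transition maps here are surjective, so this system satisfies the Mittag-Leffler condition discussed above.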
https://en.wikipedia.org/wiki?curid=15109
Interplanetary spaceflight Interplanetary spaceflight or interplanetary travel is travel between planets, usually within a single planetary system. In practice, spaceflights of this type are confined to travel between the planets of the Solar System. Remotely guided space probes have flown by all of the planets of the Solar System from Mercury to Neptune, with the "New Horizons" probe having flown by the dwarf planet Pluto and the "Dawn" spacecraft currently orbiting the dwarf planet Ceres. The most distant spacecraft, "Voyager 1" and "Voyager 2", have left the Solar System, while "Pioneer 10", "Pioneer 11", and "New Horizons" are on course to leave it. In general, planetary orbiters and landers return much more detailed and comprehensive information than fly-by missions. Space probes have been placed into orbit around all five planets known to the ancients: the first being Venus (Venera 7, 1970), followed by Mars (Mariner 9, 1971), Jupiter ("Galileo", 1995), Saturn ("Cassini/Huygens", 2004), and most recently Mercury ("MESSENGER", March 2011), and they have returned data about these bodies and their natural satellites. The NEAR Shoemaker mission in 2000 orbited the large near-Earth asteroid 433 Eros, and was even successfully landed there, though it had not been designed with this maneuver in mind. The Japanese ion-drive spacecraft "Hayabusa" in 2005 also orbited the small near-Earth asteroid 25143 Itokawa, landing on it briefly and returning grains of its surface material to Earth. Another powerful ion-drive mission, "Dawn", orbited the large asteroid Vesta (July 2011 – September 2012) and later moved on to the dwarf planet Ceres, arriving in March 2015. Remotely controlled landers such as Viking, Pathfinder and the two Mars Exploration Rovers have landed on the surface of Mars, and several Venera and Vega spacecraft have landed on the surface of Venus. The "Huygens" probe successfully landed on Saturn's moon, Titan. No crewed missions have been sent to any planet of the Solar System. NASA's Apollo program, however, landed twelve people on the Moon and returned them to Earth. The American Vision for Space Exploration, originally introduced by U.S. President George W. Bush and put into practice through the Constellation program, had as a long-term goal to eventually send human astronauts to Mars. However, on February 1, 2010, President Barack Obama proposed cancelling the program in Fiscal Year 2011. An earlier project which received some significant planning by NASA included a crewed fly-by of Venus in the Manned Venus Flyby mission, but it was cancelled when the Apollo Applications Program was terminated due to NASA budget cuts in the late 1960s. The costs and risk of interplanetary travel receive a lot of publicity—spectacular examples include the malfunctions or complete failures of probes without a human crew, such as Mars 96, Deep Space 2, and Beagle 2 (the article List of Solar System probes gives a full list). Many astronomers, geologists and biologists believe that exploration of the Solar System provides knowledge that could not be gained by observations from Earth's surface or from orbit around Earth. But they disagree about whether human-crewed missions make a useful scientific contribution—some think robotic probes are cheaper and safer, while others argue that either astronauts advised by Earth-based scientists, or spacefaring scientists advised by Earth-based scientists, can respond more flexibly and intelligently to new or unexpected features of the region they are exploring.
Those who pay for such missions (primarily in the public sector) are more likely to be interested in benefits for themselves or for the human race as a whole. So far the only benefits of this type have been "spin-off" technologies which were developed for space missions and then were found to be at least as useful in other activities (NASA publicizes spin-offs from its activities). Other practical motivations for interplanetary travel are more speculative, because our current technologies are not yet advanced enough to support test projects. But science fiction writers have a fairly good track record in predicting future technologies—for example geosynchronous communications satellites (Arthur C. Clarke) and many aspects of computer technology (Mack Reynolds). Many science fiction stories feature detailed descriptions of how people could extract minerals from asteroids and energy from sources including orbital solar panels (unhampered by clouds) and the very strong magnetic field of Jupiter. Some point out that such techniques may be the only way to provide rising standards of living without being stopped by pollution or by depletion of Earth's resources (for example peak oil). Finally, colonizing other parts of the Solar System would prevent the whole human species from being exterminated by any one of a number of possible events (see Human extinction). One of these possible events is an asteroid impact like the one which may have resulted in the Cretaceous–Paleogene extinction event. Although various Spaceguard projects monitor the Solar System for objects that might come dangerously close to Earth, current asteroid deflection strategies are crude and untested. To make the task more difficult, carbonaceous chondrites are rather sooty and therefore very hard to detect. Although carbonaceous chondrites are thought to be rare, some are very large and the suspected "dinosaur-killer" may have been a carbonaceous chondrite. Some scientists, including members of the Space Studies Institute, argue that the vast majority of mankind eventually will live in space and will benefit from doing this. One of the main challenges in interplanetary travel is producing the very large velocity changes necessary to travel from one body to another in the Solar System. Due to the Sun's gravitational pull, a spacecraft moving farther from the Sun will slow down, while a spacecraft moving closer will speed up. Also, since any two planets are at different distances from the Sun, the planet from which the spacecraft starts is moving around the Sun at a different speed than the planet to which the spacecraft is travelling (in accordance with Kepler's Third Law). Because of these facts, a spacecraft desiring to transfer to a planet closer to the Sun must decrease its speed with respect to the Sun by a large amount in order to intercept it, while a spacecraft traveling to a planet farther out from the Sun must increase its speed substantially. Then, if additionally the spacecraft wishes to enter into orbit around the destination planet (instead of just flying by it), it must match the planet's orbital speed around the Sun, usually requiring another large velocity change. Simply doing this by brute force – accelerating in the shortest route to the destination and then matching the planet's speed – would require an extremely large amount of fuel. 
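To make those differing orbital speeds concrete, here is a small illustrative sketch (mine, not from the original text) computing circular heliocentric orbit speeds from v = sqrt(mu/r); the radii are rounded mean distances.

```python
import math

# Circular-orbit speed around the Sun: v = sqrt(mu / r). The speeds differ from
# planet to planet, which is the gap a transfer trajectory must bridge.
MU_SUN = 1.32712440018e20   # Sun's gravitational parameter, m^3/s^2
AU = 1.495978707e11         # astronomical unit, m

for name, r_au in [("Mercury", 0.387), ("Earth", 1.0),
                   ("Mars", 1.524), ("Jupiter", 5.203)]:
    v = math.sqrt(MU_SUN / (r_au * AU))
    print(f"{name:8s} {v / 1000:5.1f} km/s")   # ~47.9, 29.8, 24.1, 13.1
```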
The fuel required for producing these velocity changes has to be launched along with the payload, and therefore even more fuel is needed to put both the spacecraft and the fuel required for its interplanetary journey into orbit. Thus, several techniques have been devised to reduce the fuel requirements of interplanetary travel. As an example of the velocity changes involved, a spacecraft travelling from low Earth orbit to Mars using a simple trajectory must first undergo a change in speed (also known as a delta-v), in this case an increase, of about 3.8 km/s. Then, after intercepting Mars, it must change its speed by another 2.3 km/s in order to match Mars' orbital speed around the Sun and enter an orbit around it. For comparison, launching a spacecraft into low Earth orbit requires a change in speed of about 9.5 km/s. For many years economical interplanetary travel meant using the Hohmann transfer orbit. Hohmann demonstrated that the lowest energy route between any two orbits is an elliptical "orbit" which forms a tangent to the starting and destination orbits. Once the spacecraft arrives, a second application of thrust will re-circularize the orbit at the new location. In the case of planetary transfers this means directing the spacecraft, originally in an orbit almost identical to Earth's, so that the aphelion of the transfer orbit is on the far side of the Sun near the orbit of the other planet. A spacecraft traveling from Earth to Mars via this method will arrive near Mars orbit in approximately 8.5 months, but because the orbital velocity is greater when closer to the center of mass (i.e. the Sun) and slower when farther from the center, the spacecraft will be traveling quite slowly, and only a small application of thrust is needed to put it into a circular orbit around Mars. If the maneuver is timed properly, Mars will be "arriving" under the spacecraft when this happens. The Hohmann transfer applies to any two orbits, not just those with planets involved. For instance it is the most common way to transfer satellites into geostationary orbit, after first being "parked" in low Earth orbit. However, the Hohmann transfer takes half the orbital period of the transfer ellipse, which for transfers to the outer planets amounts to many years – too long to wait. It is also based on the assumption that the points at both ends are massless, as in the case when transferring between two orbits around Earth for instance. With a planet at the destination end of the transfer, calculations become considerably more difficult.
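The Hohmann transfer just described can be worked out from the vis-viva equation; the sketch below is an illustration under the usual idealizations of circular, coplanar orbits (not a flight-accurate tool) and reproduces the roughly 8.5-month Earth–Mars transfer time. Note these are heliocentric burn sizes; the 3.8 km/s and 2.3 km/s figures quoted above additionally account for climbing out of and into the planets' own gravity wells.

```python
import math

MU_SUN = 1.32712440018e20   # Sun's gravitational parameter, m^3/s^2
AU = 1.495978707e11         # astronomical unit, m

def hohmann(r1, r2, mu=MU_SUN):
    """Delta-v at departure and arrival, plus transfer time, for a Hohmann
    transfer between circular coplanar orbits of radii r1 < r2."""
    a = (r1 + r2) / 2                          # semi-major axis of the ellipse
    v1, v2 = math.sqrt(mu / r1), math.sqrt(mu / r2)
    v_dep = math.sqrt(mu * (2 / r1 - 1 / a))   # vis-viva speed at perihelion
    v_arr = math.sqrt(mu * (2 / r2 - 1 / a))   # vis-viva speed at aphelion
    t = math.pi * math.sqrt(a**3 / mu)         # half the ellipse's period
    return v_dep - v1, v2 - v_arr, t

dv1, dv2, t = hohmann(1.0 * AU, 1.524 * AU)    # Earth -> Mars
print(f"{dv1/1000:.2f} km/s out, {dv2/1000:.2f} km/s in, {t/86400:.0f} days")
# ~2.94 km/s out, ~2.65 km/s in, ~259 days
```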
The gravitational slingshot technique uses the gravity of planets and moons to change the speed and direction of a spacecraft without using fuel. In a typical example, a spacecraft is sent to a distant planet on a path that is much faster than what the Hohmann transfer would call for. This would typically mean that it would arrive at the planet's orbit and continue past it. However, if there is a planet between the departure point and the target, it can be used to bend the path toward the target, and in many cases the overall travel time is greatly reduced. A prime example of this is the two spacecraft of the Voyager program, which used slingshot effects to change trajectories several times in the outer Solar System. It is difficult to use this method for journeys in the inner part of the Solar System, although it is possible to use other nearby planets such as Venus or even the Moon as slingshots in journeys to the outer planets. The slingshot maneuver can only change an object's velocity relative to a third, uninvolved object – possibly the centre of mass or the Sun; there is no change in the velocities of the two objects involved in the maneuver relative to each other. The Sun cannot be used in a gravitational slingshot because it is stationary compared to the rest of the Solar System, which orbits the Sun. It may be used to send a spaceship or probe into the galaxy, however, because the Sun revolves around the center of the Milky Way. A powered slingshot is the use of a rocket engine at or around closest approach to a body (periapsis). A burn at this point, where the craft is moving fastest, multiplies the effect of the delta-v (the Oberth effect) and gives a bigger change in orbital energy than the same burn made elsewhere. Computers did not exist when Hohmann transfer orbits were first proposed (1925) and were slow, expensive and unreliable when gravitational slingshots were developed (1959). Recent advances in computing have made it possible to exploit many more features of the gravity fields of astronomical bodies and thus calculate even lower-cost trajectories. Paths have been calculated which link the Lagrange points of the various planets into the so-called Interplanetary Transport Network. Such "fuzzy orbits" use significantly less energy than Hohmann transfers but are much, much slower. They are not practical for human crewed missions because they generally take years or decades, but may be useful for high-volume transport of low-value commodities if humanity develops a space-based economy. Aerobraking uses the atmosphere of the target planet to slow down. It was first used on the Apollo program, where the returning spacecraft did not enter Earth orbit but instead used an S-shaped vertical descent profile (starting with an initially steep descent, followed by a leveling out, followed by a slight climb, followed by a return to a positive rate of descent continuing to splash-down in the ocean) through Earth's atmosphere to reduce its speed until the parachute system could be deployed, enabling a safe landing. Aerobraking does not require a thick atmosphere – for example most Mars landers use the technique, and Mars' atmosphere is only about 1% as thick as Earth's. Aerobraking converts the spacecraft's kinetic energy into heat, so it requires a heatshield to prevent the craft from burning up. As a result, aerobraking is only helpful in cases where the fuel needed to transport the heatshield to the planet is less than the fuel that would be required to brake an unshielded craft by firing its engines. This can be addressed by creating heatshields from material available near the target. Several technologies have been proposed which both save fuel and provide significantly faster travel than the traditional methodology of using Hohmann transfers. Some are still just theoretical, but over time, several of the theoretical approaches have been tested on spaceflight missions. For example, the Deep Space 1 mission was a successful test of an ion drive. These improved technologies typically focus on one or more aspects of the propulsion problem. Besides making travel faster or cost less, such improvements could also allow greater design "safety margins" by reducing the imperative to make spacecraft lighter. All rocket concepts are limited by the rocket equation, which sets the characteristic velocity available as a function of exhaust velocity and the mass ratio of initial (M0, including fuel) to final (M1, fuel depleted) mass.
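As a worked illustration of the rocket equation (my own numbers, chosen only to show the scaling): delta-v = v_e ln(M0/M1), with effective exhaust velocity v_e = Isp · g0.

```python
import math

def delta_v(isp_s, m0, m1, g0=9.80665):
    """Tsiolkovsky rocket equation: dv = v_e * ln(m0/m1), with v_e = Isp * g0."""
    return isp_s * g0 * math.log(m0 / m1)

# A chemical stage (Isp ~ 450 s) with mass ratio 5 versus an ion drive
# (Isp ~ 3000 s) with a far smaller mass ratio of 1.5 -- hypothetical values.
print(f"{delta_v(450, 5.0, 1.0) / 1000:.1f} km/s")    # ~7.1 km/s
print(f"{delta_v(3000, 1.5, 1.0) / 1000:.1f} km/s")   # ~11.9 km/s
```

The logarithm is what makes mission velocities much beyond the exhaust velocity so costly: each additional multiple of v_e multiplies the required mass ratio.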
The main consequence of the rocket equation is that mission velocities of more than a few times the velocity of the rocket motor exhaust (with respect to the vehicle) rapidly become impractical. In a nuclear thermal rocket or solar thermal rocket a working fluid, usually hydrogen, is heated to a high temperature, and then expands through a rocket nozzle to create thrust. The energy replaces the chemical energy of the reactive chemicals in a traditional rocket engine. Due to the low molecular mass and hence high thermal velocity of hydrogen these engines are at least twice as fuel efficient as chemical engines, even after including the weight of the reactor. The US Atomic Energy Commission and NASA tested a few designs from 1959 to 1968. The NASA designs were conceived as replacements for the upper stages of the Saturn V launch vehicle, but the tests revealed reliability problems, mainly caused by the vibration and heating involved in running the engines at such high thrust levels. Political and environmental considerations make it unlikely such an engine will be used in the foreseeable future, since nuclear thermal rockets would be most useful at or near the Earth's surface and the consequences of a malfunction could be disastrous. Fission-based thermal rocket concepts produce lower exhaust velocities than the electric and plasma concepts described below, and are therefore less attractive solutions. For applications requiring a high thrust-to-weight ratio, such as planetary escape, nuclear thermal is potentially more attractive. Electric propulsion systems use an external source such as a nuclear reactor or solar cells to generate electricity, which is then used to accelerate a chemically inert propellant to speeds far higher than achieved in a chemical rocket. Such drives produce feeble thrust, and are therefore unsuitable for quick maneuvers or for launching from the surface of a planet. But they are so economical in their use of reaction mass that they can keep firing continuously for days or weeks, while chemical rockets use up reaction mass so quickly that they can only fire for seconds or minutes. Even a trip to the Moon is not long enough for an electric propulsion system to outrun a chemical rocket – the Apollo missions took 3 days in each direction. NASA's Deep Space One was a very successful test of a prototype ion drive, which fired for a total of 678 days and enabled the probe to catch up with Comet Borrelly, a feat which would have been impossible for a chemical rocket. "Dawn", the first NASA operational (i.e., non-technology demonstration) mission to use an ion drive for its primary propulsion, successfully orbited the large main-belt asteroids 1 Ceres and 4 Vesta. A more ambitious, nuclear-powered version was intended for a Jupiter mission without human crew, the Jupiter Icy Moons Orbiter (JIMO), originally planned for launch sometime in the next decade. Due to a shift in priorities at NASA that favored human crewed space missions, the project lost funding in 2005. A similar mission is currently under discussion as the US component of a joint NASA/ESA program for the exploration of Europa and Ganymede.
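The feeble-thrust/long-burn trade-off described above can be put in rough numbers; the comparison below is my own back-of-the-envelope illustration (the ~0.09 N figure is approximately the peak thrust of Deep Space 1's ion engine), ignoring the change in vehicle mass during the burn.

```python
# Time needed for a given delta-v at constant thrust: t = m * dv / F.
m_kg, dv_ms = 1000.0, 1000.0   # hypothetical 1-tonne craft, 1 km/s of delta-v

for name, thrust_n in [("chemical engine", 50_000.0), ("ion engine", 0.09)]:
    t = m_kg * dv_ms / thrust_n
    print(f"{name}: {t:,.0f} s (~{t / 86400:.1f} days)")
# chemical engine: 20 s; ion engine: ~129 days of continuous firing
```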
A NASA multi-center Technology Applications Assessment Team led from the Johnson Spaceflight Center has, as of January 2011, described "Nautilus-X", a concept study for a multi-mission space exploration vehicle useful for missions beyond low Earth orbit (LEO), of up to 24 months duration for a crew of up to six. Although Nautilus-X is adaptable to a variety of mission-specific propulsion units of various low-thrust, high specific impulse (Isp) designs, nuclear ion-electric drive is shown for illustrative purposes. It is intended for integration and checkout at the International Space Station (ISS), and would be suitable for deep-space missions from the ISS to and beyond the Moon, including Earth/Moon L1, Sun/Earth L2, near-Earth asteroidal, and Mars orbital destinations. It incorporates a reduced-g centrifuge providing artificial gravity for crew health to ameliorate the effects of long-term 0g exposure, and the capability to mitigate the space radiation environment. The electric propulsion missions already flown, or currently scheduled, have used solar electric power, limiting their capability to operate far from the Sun, and also limiting their peak acceleration due to the mass of the electric power source. Nuclear-electric or plasma engines, operating for long periods at low thrust and powered by fission reactors, can reach speeds much greater than chemically powered vehicles. Fusion rockets, powered by nuclear fusion reactions, would "burn" such light element fuels as deuterium, tritium, or 3He. Because fusion yields about 1% of the mass of the nuclear fuel as released energy, it is energetically more favorable than fission, which releases only about 0.1% of the fuel's mass-energy. However, either fission or fusion technologies can in principle achieve velocities far higher than needed for Solar System exploration, and fusion energy still awaits practical demonstration on Earth. One proposal using a fusion rocket was Project Daedalus. Another fairly detailed vehicle system, designed and optimized for crewed Solar System exploration, "Discovery II", based on the D3He reaction but using hydrogen as reaction mass, has been described by a team from NASA's Glenn Research Center. It achieves characteristic velocities of >300 km/s with an acceleration of about 1.7×10^-3 g, with a ship initial mass of about 1,700 metric tons and a payload fraction above 10%. See the spacecraft propulsion article for a discussion of a number of other technologies that could, in the medium to longer term, be the basis of interplanetary missions. Unlike the situation with interstellar travel, the barriers to fast interplanetary travel involve engineering and economics rather than any basic physics. Solar sails rely on the fact that light reflected from a surface exerts pressure on the surface. The radiation pressure is small and decreases with the square of the distance from the Sun, but unlike rockets, solar sails require no fuel. Although the thrust is small, it continues as long as the Sun shines and the sail is deployed. The original concept relied only on radiation from the Sun – for example in Arthur C. Clarke's 1965 story "Sunjammer". More recent light sail designs propose to boost the thrust by aiming ground-based lasers or masers at the sail. Ground-based lasers or masers can also help a light-sail spacecraft to "decelerate": the sail splits into an outer and inner section, the outer section is pushed forward and its shape is changed mechanically to focus reflected radiation on the inner portion, and the radiation focused on the inner section acts as a brake. Although most articles about light sails focus on interstellar travel, there have been several proposals for their use within the Solar System.
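For a sense of scale, the ideal-sail numbers work out as follows; this is my own illustrative estimate (perfect reflection at normal incidence, F = 2PA/c), not a figure from the text.

```python
C = 299_792_458     # speed of light, m/s
P_1AU = 1361.0      # solar irradiance at 1 AU, W/m^2

def sail_acceleration(area_m2, mass_kg, distance_au=1.0):
    """Ideal reflective sail at normal incidence: F = 2 * P * A / c,
    with irradiance P falling off with the square of solar distance."""
    p = P_1AU / distance_au**2
    return 2 * p * area_m2 / C / mass_kg

# e.g. a hypothetical 100 m x 100 m sail hauling a 50 kg craft at 1 AU:
print(sail_acceleration(10_000, 50))   # ~1.8e-3 m/s^2 -- tiny, but continuous
```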
Currently, the only spacecraft to use a solar sail as the main method of propulsion is IKAROS which was launched by JAXA on May 21, 2010. It has since been successfully deployed, and shown to be producing acceleration as expected. Many ordinary spacecraft and satellites also use solar collectors, temperature-control panels and Sun shades as light sails, to make minor corrections to their attitude and orbit without using fuel. A few have even had small purpose-built solar sails for this use (for example Eurostar E3000 geostationary communications satellites built by EADS Astrium). It is possible to put stations or spacecraft on orbits that cycle between different planets, for example a Mars cycler would synchronously cycle between Mars and Earth, with very little propellant usage to maintain the trajectory. Cyclers are conceptually a good idea, because massive radiation shields, life support and other equipment only need to be put onto the cycler trajectory once. A cycler could combine several roles: habitat (for example it could spin to produce an "artificial gravity" effect); mothership (providing life support for the crews of smaller spacecraft which hitch a ride on it). Cyclers could also possibly make excellent cargo ships for resupply of a colony. A space elevator is a theoretical structure that would transport material from a planet's surface into orbit. The idea is that, once the expensive job of building the elevator is complete, an indefinite number of loads can be transported into orbit at minimal cost. Even the simplest designs avoid the vicious circle of rocket launches from the surface, wherein the fuel needed to travel the last 10% of the distance into orbit must be lifted all the way from the surface, requiring even more fuel, and so on. More sophisticated space elevator designs reduce the energy cost per trip by using counterweights, and the most ambitious schemes aim to balance loads going up and down and thus make the energy cost close to zero. Space elevators have also sometimes been referred to as "beanstalks", "space bridges", "space lifts", "space ladders" and "orbital towers". A terrestrial space elevator is beyond our current technology, although a lunar space elevator could theoretically be built using existing materials. A skyhook is a theoretical class of orbiting tether propulsion intended to lift payloads to high altitudes and speeds. Proposals for skyhooks include designs that employ tethers spinning at hypersonic speed for catching high speed payloads or high altitude aircraft and placing them in orbit. In addition, it has been suggested that the rotating skyhook is "not engineeringly feasible using presently available materials". The SpaceX Starship, with maiden launch slated to be no earlier than 2020, is designed to be fully and rapidly reusable, making use of the SpaceX reusable technology that was developed during 2011–2018 for Falcon 9 and Falcon Heavy launch vehicles. SpaceX CEO Elon Musk estimates that the reusability capability alone, on both the launch vehicle and the spacecraft associated with the Starship will reduce overall system costs per tonne delivered to Mars by at least two orders of magnitude over what NASA had previously achieved. When launching interplanetary probes from the surface of Earth, carrying all energy needed for the long-duration mission, payload quantities are necessarily extremely limited, due to the basis mass limitations described theoretically by the rocket equation. 
One alternative for transporting more mass on interplanetary trajectories is to use up nearly all of the upper stage propellant on launch, and then refill propellants in Earth orbit before firing the rocket to escape velocity for a heliocentric trajectory. These propellants could be stored on orbit at a propellant depot, or carried to orbit in a propellant tanker to be directly transferred to the interplanetary spacecraft. For returning mass to Earth, a related option is to mine raw materials from a Solar System celestial object, then refine, process, and store the reaction products (propellant) on the Solar System body until such time as a vehicle needs to be loaded for launch. As of 2019, SpaceX is developing a system in which a reusable first stage vehicle would transport a crewed interplanetary spacecraft to Earth orbit, detach, and return to its launch pad, where a tanker spacecraft would be mounted atop it; both would then be fueled and launched again to rendezvous with the waiting crewed spacecraft. The tanker would then transfer its fuel to the human crewed spacecraft for use on its interplanetary voyage. The SpaceX Starship is a stainless steel-structure spacecraft propelled by six Raptor engines operating on densified methane/oxygen propellants. It is about 50 m long and about 9 m in diameter at its widest point, and is designed to transport up to about 100 tonnes of cargo and passengers per trip to Mars, with on-orbit propellant refill before the interplanetary part of the journey. As an example of a funded project currently under development, a key part of the system SpaceX has designed for Mars in order to radically decrease the cost of spaceflight to interplanetary destinations is the placement and operation of a physical plant on Mars to handle production and storage of the propellant components necessary to launch and fly the Starships back to Earth, or perhaps to increase the mass that can be transported onward to destinations in the outer Solar System. The first Starship to Mars will carry a small propellant plant as a part of its cargo load. The plant will be expanded over multiple synodic periods (the roughly 26-month cycle of Mars launch windows) as more equipment arrives, is installed, and is placed into mostly-autonomous production. The SpaceX propellant plant will take advantage of the large supplies of carbon dioxide and water resources on Mars, mining the water (H2O) from subsurface ice and collecting CO2 from the atmosphere. A chemical plant will process the raw materials by means of electrolysis and the Sabatier process to produce oxygen (O2) and methane (CH4), and then liquefy them to facilitate long-term storage and ultimate use.
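The plant chemistry described above balances neatly; the sketch below is my own stoichiometric check (not SpaceX data), combining electrolysis (2 H2O → 2 H2 + O2) and the Sabatier reaction (CO2 + 4 H2 → CH4 + 2 H2O), with the product water recycled, to give the net reaction 2 H2O + CO2 → CH4 + 2 O2.

```python
# Molar masses in g/mol; check mass balance and the O2:CH4 output ratio.
M = {"H2O": 18.015, "CO2": 44.009, "CH4": 16.043, "O2": 31.998}

inputs = 2 * M["H2O"] + M["CO2"]          # 2 H2O + CO2
outputs = M["CH4"] + 2 * M["O2"]          # CH4 + 2 O2
print(f"in {inputs:.2f} g, out {outputs:.2f} g per mol CH4")   # both ~80 g
print(f"{2 * M['O2'] / M['CH4']:.2f} kg O2 per kg CH4")        # ~3.99
```

The roughly 4:1 oxygen-to-methane mass output is in the same range as the oxidizer-to-fuel ratios used by methane/oxygen rocket engines, so the net reaction yields propellant in usable proportions.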
Current space vehicles attempt to launch with all the fuel (propellants and energy supplies) on board that they will need for their entire journey, and current space structures are lifted from the Earth's surface. Non-terrestrial sources of energy and materials are mostly a lot further away, but most would not require lifting out of a strong gravity field and therefore should be much cheaper to use in space in the long term. The most important non-terrestrial resource is energy, because it can be used to transform non-terrestrial materials into useful forms (some of which may also produce energy). At least two fundamental non-terrestrial energy sources have been proposed: solar-powered energy generation (unhampered by clouds), either directly by solar cells or indirectly by focusing solar radiation on boilers which produce steam to drive generators; and electrodynamic tethers which generate electricity from the powerful magnetic fields of some planets (Jupiter has a very powerful magnetic field). Water ice would be very useful and is widespread on the moons of Jupiter and Saturn. Oxygen is a common constituent of the Moon's crust, and is probably abundant in most other bodies in the Solar System. Non-terrestrial oxygen would be valuable as a source of water only if an adequate source of hydrogen can be found, and it would have many possible uses. Unfortunately hydrogen, along with other volatiles like carbon and nitrogen, is much less abundant than oxygen in the inner Solar System. Scientists expect to find a vast range of organic compounds in some of the planets, moons and comets of the outer Solar System, and the range of possible uses is even wider. For example, methane can be used as a fuel (burned with non-terrestrial oxygen), or as a feedstock for petrochemical processes such as making plastics. And ammonia could be a valuable feedstock for producing fertilizers to be used in the vegetable gardens of orbital and planetary bases, reducing the need to lift food to them from Earth. Even unprocessed rock may be useful as rocket propellant if mass drivers are employed. Life support systems must be capable of supporting human life for weeks, months or even years. A breathable atmosphere must be maintained, with adequate amounts of oxygen and nitrogen, and with controlled levels of carbon dioxide, trace gases and water vapor. In October 2015, the NASA Office of Inspector General issued a health hazards report related to human spaceflight, including a human mission to Mars. Once a vehicle leaves low Earth orbit and the protection of Earth's magnetosphere, it passes through the Van Allen radiation belt, a region of high radiation. Beyond it, the radiation drops to lower levels, with a constant background of high energy cosmic rays which pose a health threat. These are dangerous over periods of years to decades. Scientists of the Russian Academy of Sciences are searching for methods of reducing the risk of radiation-induced cancer in preparation for a mission to Mars. One option they are considering is a life support system generating drinking water with a low content of deuterium (a stable isotope of hydrogen) to be consumed by the crew members. Preliminary investigations have shown that deuterium-depleted water features certain anti-cancer effects. Hence, deuterium-depleted drinking water is considered to have the potential of lowering the risk of cancer caused by extreme radiation exposure of the Martian crew. In addition, coronal mass ejections from the Sun are highly dangerous, and are fatal within a very short timescale to humans unless they are protected by massive shielding. Any major failure to a spacecraft en route is likely to be fatal, and even a minor one could have dangerous results if not repaired quickly, something difficult to accomplish in open space. The crew of the Apollo 13 mission survived despite an explosion caused by a faulty oxygen tank (1970). For astrodynamic reasons, economic spacecraft travel to other planets is only practical within certain time windows. Outside these windows the planets are essentially inaccessible from Earth with current technology.
This constrains flights and limits rescue options in the case of an emergency.
https://en.wikipedia.org/wiki?curid=15111
Wave interference In physics, interference is a phenomenon in which two waves superpose to form a resultant wave of greater, lower, or the same amplitude. Constructive and destructive interference result from the interaction of waves that are correlated or coherent with each other, either because they come from the same source or because they have the same or nearly the same frequency. Interference effects can be observed with all types of waves, for example, light, radio, acoustic, surface water waves, gravity waves, or matter waves. The resulting images or graphs are called interferograms. The principle of superposition of waves states that when two or more propagating waves of the same type are incident on the same point, the resultant amplitude at that point is equal to the vector sum of the amplitudes of the individual waves. If a crest of a wave meets a crest of another wave of the same frequency at the same point, then the amplitude is the sum of the individual amplitudes—this is constructive interference. If a crest of one wave meets a trough of another wave, then the amplitude is equal to the difference in the individual amplitudes—this is known as destructive interference. Constructive interference occurs when the phase difference between the waves is an even multiple of π (180°), whereas destructive interference occurs when the difference is an odd multiple of π. If the difference between the phases is intermediate between these two extremes, then the magnitude of the displacement of the summed waves lies between the minimum and maximum values. Consider, for example, what happens when two identical stones are dropped into a still pool of water at different locations. Each stone generates a circular wave propagating outwards from the point where the stone was dropped. When the two waves overlap, the net displacement at a particular point is the sum of the displacements of the individual waves. At some points, these will be in phase, and will produce a maximum displacement. In other places, the waves will be in anti-phase, and there will be no net displacement at these points. Thus, parts of the surface will be stationary—these are seen in the figure above and to the right as stationary blue-green lines radiating from the centre. Interference of light is a common phenomenon that can be explained classically by the superposition of waves; however, a deeper understanding of light interference requires knowledge of the wave-particle duality of light, which is due to quantum mechanics. Prime examples of light interference are the famous double-slit experiment, laser speckle, anti-reflective coatings and interferometers. Traditionally the classical wave model is taught as a basis for understanding optical interference, based on the Huygens–Fresnel principle. The above can be demonstrated in one dimension by deriving the formula for the sum of two waves. The equation for the amplitude of a sinusoidal wave traveling to the right along the x-axis is W1(x, t) = A cos(kx − ωt), where A is the peak amplitude, k is the wavenumber and ω is the angular frequency of the wave. Suppose a second wave of the same frequency and amplitude but with a different phase is also traveling to the right: W2(x, t) = A cos(kx − ωt + φ), where φ is the phase difference between the waves in radians.
The two waves will superpose and add: the sum of the two waves is W1 + W2 = A[cos(kx − ωt) + cos(kx − ωt + φ)]. Using the trigonometric identity for the sum of two cosines, cos a + cos b = 2 cos((a − b)/2) cos((a + b)/2), this can be written W1 + W2 = 2A cos(φ/2) cos(kx − ωt + φ/2). This represents a wave at the original frequency, traveling to the right like the components, whose amplitude is proportional to the cosine of φ/2. A simple form of interference pattern is obtained if two plane waves of the same frequency intersect at an angle. Interference is essentially an energy redistribution process. The energy which is lost at the destructive interference is regained at the constructive interference. One wave is travelling horizontally, and the other is travelling downwards at an angle θ to the first wave. Assuming that the two waves are in phase at the point B, then the relative phase changes along the x-axis. The phase difference at a point A, a distance d from B along the x-axis, is given by Δφ = 2πd sin θ / λ. It can be seen that the two waves are in phase when d sin θ = 0, λ, 2λ, ..., and are half a cycle out of phase when d sin θ = λ/2, 3λ/2, ... Constructive interference occurs when the waves are in phase, and destructive interference when they are half a cycle out of phase. Thus, an interference fringe pattern is produced, where the separation of the maxima is d_f = λ / sin θ, known as the fringe spacing. The fringe spacing increases with increase in wavelength, and with decreasing angle θ. The fringes are observed wherever the two waves overlap and the fringe spacing is uniform throughout. A point source produces a spherical wave. If the light from two point sources overlaps, the interference pattern maps out the way in which the phase difference between the two waves varies in space. This depends on the wavelength and on the separation of the point sources. The figure to the right shows interference between two spherical waves. The wavelength increases from top to bottom, and the distance between the sources increases from left to right. When the plane of observation is far enough away, the fringe pattern will be a series of almost straight lines, since the waves will then be almost planar. Interference occurs when several waves are added together provided that the phase differences between them remain constant over the observation time. It is sometimes desirable for several waves of the same frequency and amplitude to sum to zero (that is, interfere destructively, cancel). This is the principle behind, for example, 3-phase power and the diffraction grating. In both of these cases, the result is achieved by uniform spacing of the phases. It is easy to see that a set of waves will cancel if they have the same amplitude and their phases are spaced equally in angle. Using phasors, each of N such waves can be represented as A e^{iφ_n} for n = 0 to N − 1, where φ_n = 2πn/N. That the sum S = Σ_{n=0}^{N−1} A e^{i2πn/N} vanishes can be seen by multiplying it by e^{i2π/N}: this merely permutes the terms, so S e^{i2π/N} = S, and since e^{i2π/N} ≠ 1 it follows that S = 0. The Fabry–Pérot interferometer uses interference between multiple reflections. A diffraction grating can be considered to be a multiple-beam interferometer, since the peaks which it produces are generated by interference between the light transmitted by each of the elements in the grating; see interference vs. diffraction for further discussion. Because the frequency of light waves (~10^14 Hz) is too high to be detected by currently available detectors, it is possible to observe only the intensity of an optical interference pattern. The intensity of the light at a given point is proportional to the square of the average amplitude of the wave. This can be expressed mathematically as follows.
The displacement of the two waves at a point r is U1(r, t) = A(r) cos(φ1(r) − ωt) and U2(r, t) = A(r) cos(φ2(r) − ωt), where A represents the magnitude of the displacement, φ represents the phase and ω represents the angular frequency. The displacement of the summed waves is U(r, t) = 2A(r) cos[(φ1 − φ2)/2] cos[(φ1 + φ2)/2 − ωt]. The intensity of the light at r is given by I(r) ∝ A²(r) cos²[(φ1 − φ2)/2]. This can be expressed in terms of the intensities of the individual waves as I(r) = I1 + I2 + 2√(I1 I2) cos(φ1 − φ2). Thus, the interference pattern maps out the difference in phase between the two waves, with maxima occurring when the phase difference is a multiple of 2π. If the two beams are of equal intensity, the maxima are four times as bright as the individual beams, and the minima have zero intensity. The two waves must have the same polarization to give rise to interference fringes since it is not possible for waves of different polarizations to cancel one another out or add together. Instead, when waves of different polarization are added together, they give rise to a wave of a different polarization state. The discussion above assumes that the waves which interfere with one another are monochromatic, i.e. have a single frequency—this requires that they are infinite in time. This is not, however, either practical or necessary. Two identical waves of finite duration whose frequency is fixed over that period will give rise to an interference pattern while they overlap. Two identical waves which consist of a narrow spectrum of frequency waves of finite duration will give a series of fringe patterns of slightly differing spacings, and provided the spread of spacings is significantly less than the average fringe spacing, a fringe pattern will again be observed during the time when the two waves overlap. Conventional light sources emit waves of differing frequencies and at different times from different points in the source. If the light is split into two waves and then re-combined, each individual light wave may generate an interference pattern with its other half, but the individual fringe patterns generated will have different phases and spacings, and normally no overall fringe pattern will be observable. However, single-element light sources, such as sodium- or mercury-vapor lamps, have emission lines with quite narrow frequency spectra. When these are spatially and colour filtered, and then split into two waves, they can be superimposed to generate interference fringes. All interferometry prior to the invention of the laser was done using such sources and had a wide range of successful applications. A laser beam generally approximates much more closely to a monochromatic source, and it is much more straightforward to generate interference fringes using a laser. The ease with which interference fringes can be observed with a laser beam can sometimes cause problems in that stray reflections may give spurious interference fringes which can result in errors. Normally, a single laser beam is used in interferometry, though interference has been observed using two independent lasers whose frequencies were sufficiently matched to satisfy the phase requirements. This has also been observed for widefield interference between two incoherent laser sources. It is also possible to observe interference fringes using white light. A white light fringe pattern can be considered to be made up of a 'spectrum' of fringe patterns each of slightly different spacing. If all the fringe patterns are in phase in the centre, then the fringes will increase in size as the wavelength increases, and the summed intensity will show three to four fringes of varying colour.
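A short numerical check (mine, not part of the article) of the two results derived above: the summed wave has amplitude 2A cos(φ/2), and two beams of equal intensity give maxima of four times the single-beam intensity and minima of zero.

```python
import numpy as np

# Verify the two-wave sum: A*cos(kx - wt) + A*cos(kx - wt + phi)
#   = 2*A*cos(phi/2) * cos(kx - wt + phi/2).
A, k, w, phi, t = 1.0, 2.0, 3.0, 1.2, 0.7
x = np.linspace(0.0, 10.0, 1000)

w1 = A * np.cos(k * x - w * t)
w2 = A * np.cos(k * x - w * t + phi)
predicted = 2 * A * np.cos(phi / 2) * np.cos(k * x - w * t + phi / 2)
print(np.allclose(w1 + w2, predicted))      # True: the trig identity holds

# Two-beam intensity: I = I1 + I2 + 2*sqrt(I1*I2)*cos(dphi).
I1 = I2 = 1.0
for dphi in (0.0, np.pi / 2, np.pi):
    print(round(I1 + I2 + 2 * np.sqrt(I1 * I2) * np.cos(dphi), 3))  # 4, 2, 0
```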
Thomas Young describes these white-light fringes very elegantly in his discussion of two-slit interference. Since white light fringes are obtained only when the two waves have travelled equal distances from the light source, they can be very useful in interferometry, as they allow the zero path difference fringe to be identified. To generate interference fringes, light from the source has to be divided into two waves which have then to be re-combined. Traditionally, interferometers have been classified as either amplitude-division or wavefront-division systems. In an amplitude-division system, a beam splitter is used to divide the light into two beams travelling in different directions, which are then superimposed to produce the interference pattern. The Michelson interferometer and the Mach–Zehnder interferometer are examples of amplitude-division systems. In wavefront-division systems, the wave is divided in space—examples are Young's double slit interferometer and Lloyd's mirror. Interference can also be seen in everyday phenomena such as iridescence and structural coloration. For example, the colours seen in a soap bubble arise from interference of light reflecting off the front and back surfaces of the thin soap film. Depending on the thickness of the film, different colours interfere constructively and destructively. Interferometry has played an important role in the advancement of physics, and also has a wide range of applications in physical and engineering measurement. Thomas Young's double slit interferometer in 1803 demonstrated interference fringes when two small holes were illuminated by light from another small hole which was illuminated by sunlight. Young was able to estimate the wavelength of different colours in the spectrum from the spacing of the fringes. The experiment played a major role in the general acceptance of the wave theory of light. In quantum mechanics, this experiment is considered to demonstrate the inseparability of the wave and particle natures of light and other quantum particles (wave–particle duality). Richard Feynman was fond of saying that all of quantum mechanics can be gleaned from carefully thinking through the implications of this single experiment. The results of the Michelson–Morley experiment are generally considered to be the first strong evidence against the theory of a luminiferous aether and in favor of special relativity. Interferometry has been used in defining and calibrating length standards. When the metre was defined as the distance between two marks on a platinum-iridium bar, Michelson and Benoît used interferometry to measure the wavelength of the red cadmium line in the new standard, and also showed that it could be used as a length standard. Sixty years later, in 1960, the metre in the new SI system was defined to be equal to 1,650,763.73 wavelengths of the orange-red emission line in the electromagnetic spectrum of the krypton-86 atom in a vacuum. This definition was replaced in 1983 by defining the metre as the distance travelled by light in vacuum during a specific time interval. Interferometry is still fundamental in establishing the calibration chain in length measurement. Interferometry is used in the calibration of slip gauges (called gauge blocks in the US) and in coordinate-measuring machines. It is also used in the testing of optical components.
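Young's wavelength estimate can be replayed with the standard small-angle double-slit relation, fringe spacing Δy = λL/d; the numbers below are hypothetical, chosen only to illustrate the inversion (they are not Young's actual measurements).

```python
# Double-slit fringe spacing on a distant screen: dy = wavelength * L / d.
wavelength = 550e-9   # green light, m
d = 0.25e-3           # slit separation, m
L = 1.0               # slit-to-screen distance, m

dy = wavelength * L / d
print(f"fringe spacing: {dy * 1e3:.2f} mm")        # ~2.20 mm

# Inverting a measured spacing recovers the wavelength, as Young did:
print(f"wavelength: {dy * d / L * 1e9:.0f} nm")    # 550 nm
```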
In 1946, a technique called astronomical interferometry was developed. Astronomical radio interferometers usually consist either of arrays of parabolic dishes or two-dimensional arrays of omni-directional antennas. All of the telescopes in the array are widely separated and are usually connected together using coaxial cable, waveguide, optical fiber, or another type of transmission line. Interferometry increases the total signal collected, but its primary purpose is to vastly increase the resolution through a process called aperture synthesis. This technique works by superposing (interfering) the signal waves from the different telescopes on the principle that waves that coincide with the same phase will add to each other while two waves that have opposite phases will cancel each other out. This creates a combined telescope that is equivalent in resolution (though not in sensitivity) to a single antenna whose diameter is equal to the spacing of the antennas furthest apart in the array. An acoustic interferometer is an instrument for measuring the physical characteristics of sound waves in a gas or liquid, such as velocity, wavelength, absorption, or impedance. A vibrating crystal creates ultrasonic waves that are radiated into the medium. The waves strike a reflector placed parallel to the crystal, are reflected back to the source, and measured. If a system is in state ψ, its wavefunction is described in Dirac or bra–ket notation as |ψ⟩ = Σ_i c_i |x_i⟩, where the basis states |x_i⟩ specify the different quantum "alternatives" available (technically, they form an eigenvector basis) and the c_i are the probability amplitude coefficients, which are complex numbers. The probability of observing the system making a transition or quantum leap from state ψ to a new state φ = Σ_i d_i |x_i⟩ is the square of the modulus of the scalar or inner product of the two states: P(ψ → φ) = |⟨ψ|φ⟩|² = |Σ_i c_i* d_i|², where the c_i (as defined above) are the coefficients of the initial state and similarly the d_i are the coefficients of the final state of the system. The asterisk denotes the complex conjugate, so that c_i* is the complex conjugate of c_i, etc. Now consider the situation classically and imagine that the system transited from ψ to φ via an intermediate state x_i. Then we would "classically" expect the probability of the two-step transition to be the sum over all the possible intermediate steps: P(ψ → φ) = Σ_i |c_i|² |d_i|². The classical and quantum derivations for the transition probability differ by the presence, in the quantum case, of the extra terms Σ_{i≠j} c_i* d_i c_j d_j*; these extra quantum terms represent "interference" between the different x_i intermediate "alternatives". These are consequently known as the "quantum interference terms", or "cross terms". This is a purely quantum effect and is a consequence of the non-additivity of the probabilities of quantum alternatives. The interference terms vanish, via the mechanism of quantum decoherence, if the intermediate state x_i is measured or coupled with its environment.
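A toy numerical illustration of the cross terms (my own, with made-up amplitudes): the quantum probability is the squared modulus of the summed path amplitudes, and it differs from the classical per-path sum by exactly the interference term.

```python
import numpy as np

# Hypothetical per-path amplitudes a_i = conj(c_i) * d_i for two alternatives.
amp = np.array([0.6 + 0.3j, -0.2 + 0.5j])

quantum = abs(amp.sum()) ** 2                    # |a1 + a2|^2
classical = (np.abs(amp) ** 2).sum()             # |a1|^2 + |a2|^2
cross = 2 * (amp[0] * amp[1].conjugate()).real   # the interference term

print(quantum, classical, classical + cross)     # ~0.8, 0.74, 0.8
```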
https://en.wikipedia.org/wiki?curid=15112
Inter Milan Football Club Internazionale Milano, commonly referred to as Internazionale or simply Inter, and known as Inter Milan outside Italy, is an Italian professional football club based in Milan, Lombardy. Inter is the only Italian club never to have been relegated from the top flight of Italian football. Founded in 1908 following a schism within the Milan Cricket and Football Club (now A.C. Milan), Inter won its first championship in 1910. Since its formation, the club has won 30 domestic trophies, including 18 league titles, 7 Coppa Italia and 5 Supercoppa Italiana. From 2006 to 2010, the club won five successive league titles, equalling the all-time record at that time. They have won the Champions League three times: two back-to-back in 1964 and 1965 and then another in 2010. Their latest win completed an unprecedented Italian seasonal treble, with Inter winning the Coppa Italia and the "Scudetto" the same year. The club has also won three UEFA Cups, two Intercontinental Cups and one FIFA Club World Cup. Inter's home games are played at the San Siro stadium, which they share with local rivals A.C. Milan. The stadium is the largest in Italian football with a capacity of 80,018. Matches between A.C. Milan and Inter, known as the Derby della Madonnina, are one of the most followed derbies in football. Inter has the highest home game attendance in Italy and the sixth highest attendance in Europe. The club is one of the most valuable in Italian and world football. The club was founded on 9 March 1908 as "Football Club Internazionale", following the schism with the Milan Cricket and Football Club (now A.C. Milan). The name of the club derives from the wish of its founding members to accept foreign players without limits, as well as Italians. The club won its very first championship in 1910 and its second in 1920. The captain and coach of the first championship-winning team was Virgilio Fossati, who was later killed in battle while serving in the Italian army during World War I. In 1922, Inter was at risk of relegation to the second division, but remained in the top league after winning two play-offs. Six years later, during the Fascist era, the club was forced to merge with the "Unione Sportiva Milanese" and was renamed "Società Sportiva Ambrosiana". During the 1928–29 season, the team wore white jerseys with a red cross emblazoned on them; the jersey's design was inspired by the flag and coat of arms of the city of Milan. In 1929, the new club chairman Oreste Simonotti changed the club's name to "Associazione Sportiva Ambrosiana" and restored the previous black-and-blue jerseys; however, supporters continued to call the team "Inter", and in 1931 new chairman Pozzani caved in to shareholder pressure and changed the name to "Associazione Sportiva Ambrosiana-Inter". Their first Coppa Italia (Italian Cup) was won in 1938–39, led by the iconic Giuseppe Meazza, after whom the San Siro stadium is officially named. A fifth championship followed in 1940, despite Meazza incurring an injury. After the end of World War II the club regained its original name, winning its sixth championship in 1953 and its seventh in 1954. In 1960, manager Helenio Herrera joined Inter from Barcelona, bringing with him his midfield general Luis Suárez, who won the European Footballer of the Year award in the same year for his role in Barcelona's La Liga/Fairs Cup double. He would transform Inter into one of the greatest teams in Europe.
He modified a 5–3–2 tactic known as the "Verrou" ("door bolt") which created greater flexibility for counterattacks. The "catenaccio" system was invented by an Austrian coach, Karl Rappan. Rappan's original system was implemented with four fixed defenders, playing a strict man-to-man marking system, plus a playmaker in the middle of the field who plays the ball together with two midfield wings. Herrera modified it by adding a fifth defender, the sweeper or libero, behind the two centre backs. The sweeper, or libero, acting as the free man, would deal with any attackers who went through the two centre backs. Inter finished third in Serie A in his first season, second the next year and first in his third season. Then followed back-to-back European Cup victories in 1964 and 1965, earning him the title "il Mago" ("the Wizard"). The core of Herrera's team were the attacking fullbacks Tarcisio Burgnich and Giacinto Facchetti, Armando Picchi the sweeper, Suárez the playmaker, Jair the winger, Mario Corso the left midfielder, and Sandro Mazzola, who played on the inside-right. In 1964, Inter reached the European Cup Final by beating Partizan in the quarter-final and Borussia Dortmund in the semi-final. In the final, they met Real Madrid, a team that had reached seven out of the nine finals to date. Mazzola scored two goals in a 3–1 victory, and then the team won the Intercontinental Cup against Independiente. A year later, Inter repeated the feat by beating two-time winner Benfica in the final held at home, with a goal from Jair, and then again beat Independiente in the Intercontinental Cup. In 1967, with Jair gone and Suárez injured, Inter lost the European Cup Final 2–1 to Celtic. During that year the club changed its name to "Football Club Internazionale Milano". Following the golden era of the 1960s, Inter managed to win their eleventh league title in 1971 and their twelfth in 1980. Inter were defeated for the second time in five years in the final of the European Cup, going down 0–2 to Johan Cruyff's Ajax in 1972. During the 1970s and the 1980s, Inter also added two trophies to its Coppa Italia tally, in 1977–78 and 1981–82. Led by the German duo of Andreas Brehme and Lothar Matthäus, and Argentine Ramón Díaz, Inter captured the 1989 Serie A championship. Inter were unable to defend their title despite adding fellow German Jürgen Klinsmann to the squad and winning their first Supercoppa Italiana at the start of the season. The 1990s was a period of disappointment. While their great rivals Milan and Juventus were achieving success both domestically and in Europe, Inter were left behind, with repeated mediocre results in the domestic league standings, their worst coming in 1993–94 when they finished just one point out of the relegation zone. Nevertheless, they achieved some European success with three UEFA Cup victories in 1991, 1994 and 1998. With Massimo Moratti's takeover from Ernesto Pellegrini in 1995, Inter twice broke the world record transfer fee in this period (£19.5 million for Ronaldo from Barcelona in 1997 and £31 million for Christian Vieri from Lazio two years later). However, the 1990s remained the only decade in Inter's history in which they did not win a single Serie A championship. For Inter fans, it was difficult to say who in particular was to blame for the troubled times, and this led to some icy relations between them and the chairman, the managers and even some individual players.
Moratti later became a target of the fans, especially when he sacked the much-loved coach Luigi Simoni after only a few games into the 1998–99 season; Simoni had received the Italian manager of the year award for 1998 just the day before his dismissal. That season, Inter failed to qualify for any European competition for the first time in almost ten years, finishing in eighth place. The following season, Moratti appointed former Juventus manager Marcello Lippi, and signed players such as Angelo Peruzzi and Laurent Blanc together with other former Juventus players Vieri and Vladimir Jugović. The team came close to their first domestic success since 1989 when they reached the Coppa Italia final, only to be defeated by Lazio. Inter's misfortunes continued the following season, losing the 2000 Supercoppa Italiana match against Lazio 4–3 after initially taking the lead through new signing Robbie Keane. They were also eliminated in the preliminary round of the Champions League by Swedish club Helsingborgs IF, with Álvaro Recoba missing a crucial late penalty. Lippi was sacked after only a single game of the new season following Inter's first ever Serie A defeat to Reggina. Marco Tardelli, chosen to replace Lippi, failed to improve results, and is remembered by Inter fans as the manager who lost 6–0 in the city derby against Milan. Other members of the Inter "family" who suffered during this period were the likes of Vieri and Fabio Cannavaro, both of whom had their restaurants in Milan vandalised after defeats to the "Rossoneri". In 2002, Inter not only made it to the UEFA Cup semi-finals, but were also only 45 minutes away from capturing the "Scudetto" when they needed to maintain their one-goal advantage away to Lazio. Inter were 2–1 up after only 24 minutes. Lazio equalised during first-half injury time and then scored two more goals in the second half to clinch a victory that eventually saw Juventus win the championship. The next season, Inter finished as league runners-up and also managed to make it to the 2002–03 Champions League semi-finals against Milan, losing on the away goals rule. On 8 July 2004, Inter appointed former Lazio coach Roberto Mancini as its new head coach. In his first season, the team collected 72 points from 18 wins, 18 draws and only two losses, as well as winning the Coppa Italia and later the Supercoppa Italiana. On 11 May 2006, Inter retained the Coppa Italia once again after defeating Roma with a 4–1 aggregate victory (a 1–1 scoreline in Rome and a 3–1 win at the San Siro). Inter were awarded the 2005–06 Serie A championship retrospectively after points were stripped from Juventus and Milan due to the match-fixing scandal that year. During the following season, Inter went on a record-breaking run of 17 consecutive victories in Serie A, starting on 25 September 2006 with a 4–1 home victory over Livorno, and ending on 28 February 2007, after a 1–1 draw at home to Udinese. On 22 April 2007, Inter won their second consecutive "Scudetto"—and first on the field since 1989—when they defeated Siena 2–1 at Stadio Artemio Franchi. Italian World Cup-winning defender Marco Materazzi scored both goals. Inter started the 2007–08 season with the goal of winning both Serie A and the Champions League. The team started well in the league, topping the table from the first round of matches, and also managed to qualify for the Champions League knockout stage.
However, a late collapse, culminating in a 2–0 defeat with ten men away to Liverpool in the Champions League on 19 February, threw manager Roberto Mancini's future at Inter into question, while domestic form took a sharp downturn as the team failed to win any of the three following Serie A games. After being eliminated by Liverpool in the Champions League, Mancini announced his intention to leave his job immediately, only to change his mind the following day. On the final day of the 2007–08 Serie A season, Inter played Parma away, and two goals from Zlatan Ibrahimović sealed their third consecutive championship. Mancini, however, was sacked soon after because of his earlier announcement that he would leave the club. On 2 June 2008, Inter appointed former Porto and Chelsea boss José Mourinho as their new head coach. In his first season, the "Nerazzurri" won the Supercoppa Italiana and a fourth consecutive title, though they fell in the Champions League first knockout round for a third straight year, losing to eventual finalists Manchester United. In winning the league title, Inter became the first club in 60 years to win it for a fourth consecutive time, joining Torino and Juventus as the only clubs to accomplish this feat and becoming the first club based outside Turin to do so. Inter enjoyed more success in the 2009–10 Champions League, defeating reigning champions Barcelona in the semi-final and then beating Bayern Munich 2–0 in the final with two goals from Diego Milito. Inter also won the 2009–10 Serie A title by two points over Roma, and the 2010 Coppa Italia by defeating the same side 1–0 in the final. This made Inter the first Italian team to win the Treble. At the end of the season, Mourinho left the club to manage Real Madrid; he was replaced by Rafael Benítez. On 21 August 2010, Inter defeated Roma 3–1 to win the 2010 Supercoppa Italiana, their fourth trophy of the year. In December 2010, they claimed the FIFA Club World Cup for the first time with a 3–0 win against TP Mazembe in the final. Despite this win, on 23 December 2010 the club sacked Benítez because of their declining Serie A form, replacing him with Leonardo the following day. Leonardo started with 30 points from 12 games, an average of 2.5 points per game, better than his predecessors Benítez and Mourinho. On 6 March 2011, Leonardo set a new Serie A record by collecting 33 points in 13 games; the previous record of 32 points in 13 games had been set by Fabio Capello in the 2004–05 season. Leonardo led the club to the Champions League quarter-finals, where they lost to Schalke 04, and to the Coppa Italia title. At the end of the season, however, he resigned, and was followed by new managers Gian Piero Gasperini, Claudio Ranieri and Andrea Stramaccioni, all hired during the following season. On 1 August 2012, the club announced that Moratti was to sell a minority interest in the club to a Chinese consortium led by Kenneth Huang. On the same day, Inter announced an agreement with China Railway Construction Corporation Limited for a new stadium project; the deal with the Chinese consortium, however, eventually collapsed. The 2012–13 season was the worst in recent club history, with Inter finishing ninth in Serie A and failing to qualify for any European competition. Walter Mazzarri, who had just ended his tenure at Napoli, was appointed on 24 May 2013 to replace Stramaccioni as manager for the 2013–14 season. He guided the club to fifth in Serie A and to qualification for the 2014–15 UEFA Europa League. 
On 15 October 2013, an Indonesian consortium (International Sports Capital HK Ltd.) led by Erick Thohir, Handy Soetedjo and Rosan Roeslani signed an agreement to acquire 70% of Inter's shares from Internazionale Holding S.r.l. Immediately after the deal, Moratti's Internazionale Holding S.r.l. still retained 29.5% of the shares of FC Internazionale Milano S.p.A. After the deal, Inter's shares were owned through a chain of holding companies, namely International Sports Capital S.p.A. of Italy (which held the 70% stake), and International Sports Capital HK Limited and Asian Sports Ventures HK Limited of Hong Kong. Asian Sports Ventures HK Limited, itself another intermediate holding company, was owned by Nusantara Sports Ventures HK Limited (60% stake, a company owned by Thohir), Alke Sports Investment HK Limited (20%) and Aksis Sports Capital HK Limited (20%). Thohir, who also co-owned Major League Soccer (MLS) club D.C. United and Indonesia Super League (ISL) club Persib Bandung, announced on 2 December 2013 that Inter and D.C. United had formed a strategic partnership. During the Thohir era the club began to shift its financial structure from one reliant on continual owner investment towards a more self-sustaining business model, although it still breached the UEFA Financial Fair Play Regulations in 2015. The club was fined and had its squad size reduced in UEFA competitions, with additional penalties suspended during a probation period. During this time, Roberto Mancini returned as club manager on 14 November 2014, with Inter finishing eighth. Inter finished the 2015–16 season fourth, failing to return to the Champions League. On 6 June 2016, Suning Holdings Group (via a Luxembourg-based subsidiary, Great Horizon S.à r.l.), a company owned by Zhang Jindong, co-founder and chairman of Suning Commerce Group, acquired a majority stake in Inter from Thohir's consortium International Sports Capital S.p.A. and from the Moratti family's remaining shares in Internazionale Holding S.r.l. According to various filings, Suning's total investment was €270 million. The deal was approved by an extraordinary general meeting on 28 June 2016, after which Suning Holdings Group held a 68.55% stake in the club. The first season under the new ownership, however, started with poor performances in pre-season friendlies. On 8 August 2016, Inter parted company with head coach Roberto Mancini by mutual consent over disagreements about the club's direction. He was replaced by Frank de Boer, who was sacked on 1 November 2016 after a record of four wins, two draws and five losses in 11 Serie A games in charge. His successor, Stefano Pioli, could not prevent the team from recording the worst group-stage result in UEFA competitions in the club's history. Despite an eight-game winning streak, he and the club parted ways before the season's end, when it became clear they would finish outside the league's top three for a sixth consecutive season. On 9 June 2017, former Roma coach Luciano Spalletti was appointed as Inter manager on a two-year contract, and eleven months later Inter clinched a UEFA Champions League group-stage spot—after six years without Champions League participation—thanks to a 3–2 victory against Lazio in the final game of the 2017–18 Serie A season. Following this success, in August the club extended Spalletti's contract to 2021. On 26 October 2018, Steven Zhang was appointed as the club's new president. 
On 25 January 2019, the club officially announced that LionRock Capital of Hong Kong had reached an agreement with International Sports Capital HK Limited to acquire its 31.05% stake in Inter and become the club's new minority shareholder. After the 2018–19 Serie A season, despite Inter finishing fourth, Spalletti was sacked. On 31 May 2019, Inter appointed former Juventus and Italy manager Antonio Conte as their new coach on a three-year deal. In September 2019, Steven Zhang was elected to the board of the European Club Association. One of the founders of Inter, a painter named Giorgio Muggiani, was responsible for the design of the first Inter logo in 1908. The first design incorporated the letters "FCIM" in the centre of a series of circles that formed the badge of the club. The basic elements of the design have remained constant even as finer details have been modified over the years. Starting with the 1999–2000 season, the original club crest was reduced in size to make room for the club's name and foundation year at the top and bottom of the logo respectively. In 2007, the logo was returned to its pre-1999–2000 form, given a more modern look with a smaller "Scudetto" star and a lighter colour scheme. This version was used until July 2014, when the club undertook a rebranding. The most significant difference between the current logo and its predecessor is the omission of the star from all media except match kits. Since its founding in 1908, Inter have almost always worn black and blue stripes, earning them the nickname "Nerazzurri". According to tradition, the colours were adopted to represent the nocturnal sky: the club was established on the night of 9 March, at 23:30, and blue was chosen by Giorgio Muggiani because he considered it the opposite colour to the red worn by their rivals, the Milan Cricket and Football Club. During the 1928–29 season, however, Inter were forced to abandon their black and blue uniforms. In 1928, Inter's name and philosophy made the ruling Fascist Party uneasy; as a result, that year the 20-year-old club was merged with "Unione Sportiva Milanese", and the new club was named "Società Sportiva Ambrosiana" after the patron saint of Milan. The flag of Milan (a red cross on a white background) replaced the traditional black and blue. In 1929 the black-and-blue jerseys were restored, and after World War II, when the Fascists had fallen from power, the club reverted to its original name. In 2008, Inter celebrated their centenary with a red cross on their away shirt. The cross is reminiscent of the flag of their city, and they continue to use the pattern on their third kit. In 2014, the club adopted a predominantly black home kit with thin blue pinstripes before returning to a more traditional design the following season. Animals are often used to represent football clubs in Italy; the grass snake, called the "Biscione", represents Inter. The snake is an important symbol for the city of Milan, appearing often in Milanese heraldry as a coiled viper with a man in its jaws. The symbol is present on the coat of arms of the House of Sforza (which ruled Milan during the Renaissance), the city of Milan, the historical Duchy of Milan (a state of the Holy Roman Empire for 400 years) and Insubria (a historical region within which Milan falls). For the 2010–11 season, Inter's away kit featured the serpent. 
The team's stadium is the 80,018-seat San Siro, officially known as the Stadio Giuseppe Meazza after the former player who represented both Milan and Inter. The more commonly used name, San Siro, is that of the district where it is located. San Siro has been the home of Milan since 1926, when it was privately built with funding from Milan's chairman at the time, Piero Pirelli. Construction was performed by 120 workers and took 13 and a half months to complete. The stadium was owned by the club until it was sold to the city in 1935, and it has been shared with Inter since 1947, when they were accepted as joint tenants. The first game played at the stadium, on 19 September 1926, saw Inter beat Milan 6–3 in a friendly match. Milan played their first league game at San Siro on 19 September 1926, losing 1–2 to Sampierdarenese. From an initial capacity of 35,000 spectators, the stadium has undergone several major renovations, most recently in preparation for the 1990 FIFA World Cup, when its capacity was set to 85,700, all covered with a polycarbonate roof. In the summer of 2008 its capacity was reduced to 80,018 to meet the new standards set by UEFA. Based on the English model for stadiums, San Siro is designed specifically for football matches, as opposed to the many multi-purpose stadiums used in Serie A, and it is therefore renowned in Italy for its atmosphere during matches, owing to the closeness of the stands to the pitch. The frequent use of flares by supporters contributes to the atmosphere, but the practice has occasionally caused problems. Inter are one of the most supported clubs in Italy, according to an August 2007 survey by the Italian newspaper "La Repubblica". Historically, Inter's supporters from the city of Milan came largely from the middle-class bourgeoisie, while Milan's fans were typically working-class. The traditional ultras group of Inter is "Boys San"; founded in 1969, they hold a significant place in the history of the ultras scene as one of its oldest groups. Politically, the ultras of Inter are usually considered right-wing, and they have good relationships with the Lazio ultras. Besides the main "Boys San" group, there are four other significant groups: "Viking", "Irriducibili", "Ultras" and "Brianza Alcoolica". Inter's most vocal fans are known to gather in the Curva Nord, the north curve of the San Siro. This longstanding tradition has made the Curva Nord synonymous with the club's most die-hard supporters, who unfurl banners and wave flags in support of their team. Inter have several rivalries, two of which are highly significant in Italian football. The first is the intra-city "Derby della Madonnina" with Milan; the rivalry has existed ever since Inter splintered off from Milan in 1908. The name of the derby refers to the Blessed Virgin Mary, whose statue atop Milan Cathedral is one of the city's main attractions. The match usually creates a lively atmosphere, with numerous (often humorous or offensive) banners unfolded before kick-off. Flares are commonly present, but they also led to the abandonment of the second leg of the 2004–05 Champions League quarter-final between Milan and Inter on 12 April 2005, after a flare thrown from the crowd by an Inter supporter struck Milan goalkeeper Dida on the shoulder. The other most significant rivalry is with Juventus; the two contest the "Derby d'Italia". 
Until the 2006 Italian football scandal, which saw Juventus relegated, the two were the only Italian clubs never to have played below Serie A. In recent years, post-"Calciopoli", Inter have developed a rivalry with Roma, who finished runners-up to Inter in all but one of Inter's five "Scudetto"-winning seasons between 2005 and 2010. The two sides have also contested five Coppa Italia finals and four Supercoppa Italiana finals since 2006. Other clubs, such as Atalanta and Napoli, are also considered among their rivals. Inter's supporters are collectively known as "Interisti" or "Nerazzurri". Inter have won 30 domestic trophies, including the league 18 times, the Coppa Italia seven times and the Supercoppa Italiana five times. From 2006 to 2010, the club won five successive league titles, equalling the all-time record, which stood until 2017, when Juventus won a sixth successive title. They have won the Champions League three times: back-to-back in 1964 and 1965, and again in 2010; the last completed an unprecedented Italian treble alongside the Coppa Italia and the "Scudetto". The club has also won three UEFA Cups, two Intercontinental Cups and one FIFA Club World Cup. Inter have never been relegated from the top flight of Italian football in their entire existence, and are the sole club to have competed in Serie A and its predecessors in every season. The "Nerazzurri" currently have the longest unbroken run in the top flight of any club on the Continent, 106 seasons; among European clubs, only five British clubs have longer current spells in the top flight. Javier Zanetti holds the records for both total appearances and Serie A appearances for Inter, with 858 official games played in total and 618 in Serie A. Giuseppe Meazza is Inter's all-time top goalscorer, with 284 goals in 408 games. Behind him, in second place, is Alessandro Altobelli, with 209 goals in 466 games, and in third place Roberto Boninsegna, with 171 goals in 281 games. Helenio Herrera had the longest reign as Inter coach, with nine years (eight of them consecutive) in charge, and is the most successful coach in Inter's history, with three "Scudetti", two European Cups and two Intercontinental Cups. José Mourinho, appointed on 2 June 2008, completed his first season in Italy by winning the Serie A title and the Supercoppa Italiana; in his second season, 2009–10, he won the first "treble" in Italian history: the Serie A title, the Coppa Italia and the UEFA Champions League. Two shirt numbers have been retired. 3 – Giacinto Facchetti, left back, 1960–1978 (posthumous honour); the number was retired on 8 September 2006, and the last player to wear the shirt, Argentine centre back Nicolás Burdisso, took the number 16 shirt for the rest of the season. 4 – Javier Zanetti, defensive midfielder, who played 858 games for Inter between 1995 and his retirement in the summer of 2014; club chairman Erick Thohir confirmed that Zanetti's number 4 would be retired out of respect. FC Internazionale Milano S.p.A. has been described as one of the financial "black holes" among Italian clubs, heavily dependent on financial contributions from the owner, Massimo Moratti. 
In June 2006, Pirelli, the club's shirt sponsor and a minority shareholder, sold a 15.26% stake in the club to the Moratti family for €13.5 million; the tyre manufacturer retained 4.2%. However, Inter's several subsequent capital increases—a reverse merger in 2006 with an intermediate holding company, Inter Capital S.r.l., which at the time held 89% of Inter's shares and €70 million in capital; new share issues of €70.8 million in June 2007, €99.9 million in December 2007, €86.6 million in 2008, €70 million in 2009, €40 million in each of 2010 and 2011, and €35 million in 2012; and Thohir's subscription of €75 million in new Inter shares in 2013—diluted Pirelli to the third-largest shareholder, with just 0.5%. Inter carried out yet another recapitalization, reserved for Suning Holdings Group, in 2016. In the prospectus for Pirelli's second IPO in 2017, the company revealed that the value of its remaining Inter shares had been written off to zero in the 2016 financial year. Inter also received direct capital contributions from shareholders to cover losses, separate from the share issues of the past. Right before Thohir's takeover, the consolidated balance sheet of Internazionale Holding S.r.l. at the end of the 2012–13 financial year showed that the group as a whole had bank debts of €157 million, including the debts of the subsidiary Inter Brand S.r.l. and the club's own €15.674 million debt to the Istituto per il Credito Sportivo (ICS). In 2006 Inter had sold its brand to a new subsidiary, Inter Brand S.r.l., a special-purpose entity with share capital of €40 million, for €158 million (the deal allowed Internazionale to report a net loss of just €31 million in its separate financial statements). At the same time the subsidiary secured a €120 million loan from Banca Antonveneta, to be repaid in installments until 30 June 2016; "La Repubblica" described the deal as "doping". In September 2011 Inter secured a €24.8 million loan from ICS by factoring Pirelli's sponsorship payments for the 2012–13 and 2013–14 seasons, at an interest rate of three-month Euribor plus a 1.95% spread. In June 2014 the new Inter group secured a €230 million loan from Goldman Sachs and UniCredit at an interest rate of three-month Euribor plus a 5.5% spread, setting up a new subsidiary, Inter Media and Communication S.r.l., as the debt carrier; €200 million of the loan was used to refinance the group's debt. Of the €230 million, €1 million (plus interest) was due on 30 June 2015, €45 million (plus interest) was to be repaid in 15 installments from 30 September 2015 to 31 March 2019, and €184 million (plus interest) was due on 30 June 2019. On the ownership side, the Hong Kong-based International Sports Capital HK Limited had in 2015 pledged the shares of the Italy-based International Sports Capital S.p.A. (Inter's direct holding company) to CPPIB Credit Investments for €170 million, at interest rates of 8% per annum (due March 2018) to 15% per annum (due March 2020). ISC repaid the notes on 1 July 2016 after selling part of its Inter shares to Suning Holdings Group. However, in late 2016 the shares of ISC S.p.A. were pledged again by ISC HK, this time to private-equity funds of OCP Asia, for US$80 million. In December 2017, the club also refinanced €300 million of its debt by issuing a corporate bond to the market, with Goldman Sachs as bookrunner, at an interest rate of 4.875% per annum. 
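The 2014 loan's repayment schedule is easy to sanity-check. The following minimal Python sketch uses only the tranche figures quoted above; interest is ignored, and the even split across the 15 installments is our assumption, made purely for illustration:

    # Sanity-check the 2014 Goldman Sachs/UniCredit loan schedule (figures in EUR million).
    principal = 230.0
    tranche_2015 = 1.0           # due 30 June 2015
    tranche_installments = 45.0  # 15 installments, 30 Sep 2015 to 31 Mar 2019
    tranche_2019 = 184.0         # bullet repayment due 30 June 2019

    # The three tranches must cover the principal exactly.
    assert tranche_2015 + tranche_installments + tranche_2019 == principal

    # Illustrative only: if the 15 installments were equal (an assumption,
    # not stated in the source), each would amount to EUR 3 million.
    per_installment = tranche_installments / 15
    print(f"Each installment: EUR {per_installment:.2f} million")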
Considering revenue alone, Inter surpassed their city rivals in the Deloitte Football Money League for the first time in the 2008–09 season, ranking ninth, one place behind Juventus in eighth (Milan were tenth). In the 2009–10 season Inter remained ninth, surpassing Juventus (tenth), but Milan retook the lead among the Italian clubs in seventh. Inter rose to eighth in 2010–11, still one place behind Milan. Inter then fell to 11th in 2011–12, 15th in 2012–13, 17th in 2013–14 and 19th in both 2014–15 and 2015–16, before ranking 15th in the 2016–17 Money League. In the 2010 Football Money League (covering the 2008–09 season), normalized revenue of €196.5 million was divided between matchday (14%, €28.2 million), broadcasting (59%, €115.7 million, up 7% or €8 million) and commercial (27%, €52.6 million, up 43%). Kit sponsors Nike and Pirelli contributed €18.1 million and €9.3 million respectively to commercial revenues, while broadcasting revenues were boosted by €1.6 million (6%) from the Champions League distribution. Deloitte suggested that issues in Italian football, particularly weak matchday revenue, were holding Inter back compared with other European giants, and that developing their own stadia would make Serie A clubs more competitive on the world stage. In the 2009–10 season Inter's revenue was boosted by the sale of Ibrahimović, the treble, and the release clause paid for coach José Mourinho. According to the normalized figures in Deloitte's 2011 Football Money League, revenue in 2009–10 increased by €28.3 million (14%) to €224.8 million; the ratio of matchday to broadcasting to commercial revenue in the adjusted figures was 17%:62%:21%. For the 2010–11 season, Serie A clubs began negotiating TV rights collectively rather than individually, which was predicted to lower broadcasting revenues for big clubs such as Juventus and Inter, with smaller clubs gaining from their loss; in the event, Inter's result also included extraordinary income of €13 million from RAI. In the 2012 Football Money League (2010–11 season), normalized revenue was €211.4 million, split 16%:58%:26% between matchday, broadcasting and commercial. However, combining revenue and costs, Inter recorded a net loss of €206 million in the 2006–07 season (of which €112 million was extraordinary, owing to the abolition of the non-standard accounting practice of the special amortization fund), followed by net losses of €148 million in 2007–08, €154 million in 2008–09, €69 million in 2009–10, €87 million in 2010–11, €77 million in 2011–12 and €80 million in 2012–13, and then a net profit of €33 million in 2013–14, due to special income from the establishment of the subsidiary Inter Media and Communication. All the aforementioned figures are from separate (stand-alone) financial statements. Figures from consolidated financial statements have been published since the 2014–15 season: net losses of €140.4 million (2014–15), €59.6 million (2015–16, before the 2017 restatement) and €24.6 million (2016–17). In 2015 Inter and Roma were the only two Italian clubs sanctioned by UEFA for breaching the Financial Fair Play Regulations; they were later followed by Milan, who were for a time barred from returning to European competition in 2018. 
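Deloitte's 2008–09 breakdown quoted above can be reconciled in the same spirit. This minimal Python sketch uses only the figures given in the text and assumes the stated percentages were rounded to whole numbers:

    # Reconcile Inter's 2008-09 Money League revenue split (figures in EUR million).
    total = 196.5
    streams = {
        "matchday": 28.2,       # stated as 14%
        "broadcasting": 115.7,  # stated as 59%
        "commercial": 52.6,     # stated as 27%
    }

    # The three streams should sum to the normalized total (allow float rounding).
    assert abs(sum(streams.values()) - total) < 0.1

    for name, amount in streams.items():
        share = 100 * amount / total
        print(f"{name}: EUR {amount} million ({share:.0f}%)")  # prints 14%, 59%, 27%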
As part of a settlement to avoid further sanctions, Inter agreed to achieve aggregate break-even over the three years from 2015 to 2018, with a maximum permitted net loss of €30 million in the 2015–16 season, followed by break-even in the 2016–17 season and onwards. Inter were also fined €6 million, with an additional €14 million suspended during the probation period. Inter also used an accounting manoeuvre in the transfer market in mid-2015: Stevan Jovetić and Miranda were signed on temporary deals with obligations to buy outright in 2017, reducing their cost during the loan period. Moreover, despite investing heavily in new signings, namely Geoffrey Kondogbia and Ivan Perišić, which potentially increased amortization costs, Inter also sold Mateo Kovačić for €29 million, booking a windfall profit. In November 2018, documents from Football Leaks further revealed that loan signings such as that of Xherdan Shaqiri in January 2015 had in fact carried conditions that made the outright purchase inevitable. On 21 April 2017, Inter announced that their FFP-adjusted net loss for the 2015–16 season was within the allowable limit of €30 million. However, on the same day UEFA announced that the reduction of Inter's squad size in European competitions would not yet be lifted, owing to only partial fulfilment of the targets in the settlement agreement; UEFA made the same announcement in June 2018, based on Inter's 2016–17 financial results. In February 2020, Inter sued MLS for trademark infringement, claiming that the term "Inter" is synonymous with their club and no one else's.
https://en.wikipedia.org/wiki?curid=15116
Interferon Interferons (IFNs) are a group of signaling proteins made and released by host cells in response to the presence of several viruses. In a typical scenario, a virus-infected cell will release interferons, causing nearby cells to heighten their anti-viral defenses. IFNs belong to the large class of proteins known as cytokines, molecules used for communication between cells to trigger the protective defenses of the immune system that help eradicate pathogens. Interferons are named for their ability to "interfere" with viral replication by protecting cells from virus infections. IFNs also have various other functions: they activate immune cells, such as natural killer cells and macrophages, and they increase host defenses by up-regulating antigen presentation through increased expression of major histocompatibility complex (MHC) antigens. Certain symptoms of infections, such as fever, muscle pain and "flu-like symptoms", are also caused by the production of IFNs and other cytokines. More than twenty distinct IFN genes and proteins have been identified in animals, including humans. They are typically divided among three classes: Type I IFN, Type II IFN, and Type III IFN. IFNs belonging to all three classes are important for fighting viral infections and for the regulation of the immune system. Human interferons have been classified into these three major types based on the type of receptor through which they signal. In general, type I and II interferons are responsible for regulating and activating the immune response. Expression of type I and III IFNs can be induced in virtually all cell types upon recognition of viral components, especially nucleic acids, by cytoplasmic and endosomal receptors, whereas type II interferon is induced by cytokines such as IL-12, and its expression is restricted to immune cells such as T cells and NK cells. All interferons share several common effects: they are antiviral agents and they modulate functions of the immune system. Administration of Type I IFN has been shown experimentally to inhibit tumor growth in animals, but the beneficial action in human tumors has not been widely documented. A virus-infected cell releases viral particles that can infect nearby cells. However, the infected cell can protect neighboring cells against a potential infection by releasing interferons. In response to interferon, cells produce large amounts of an enzyme known as protein kinase R (PKR). This enzyme phosphorylates a protein known as eIF-2 in response to new viral infections; the phosphorylated eIF-2 forms an inactive complex with another protein, called eIF2B, to reduce protein synthesis within the cell. Another cellular enzyme, RNase L—also induced by interferon action—destroys RNA within the cells to further reduce protein synthesis of both viral and host genes. Inhibited protein synthesis impairs both virus replication and the infected host cells themselves. In addition, interferons induce production of hundreds of other proteins—known collectively as interferon-stimulated genes (ISGs)—that have roles in combating viruses and in other actions produced by interferon. They also limit viral spread by increasing p53 activity, which kills virus-infected cells by promoting apoptosis. The effect of IFN on p53 is also linked to its protective role against certain cancers. Another function of interferons is to up-regulate major histocompatibility complex molecules, MHC I and MHC II, and to increase immunoproteasome activity. 
All interferons significantly enhance the presentation of MHC I-dependent antigens. Interferon gamma (IFN-gamma) also significantly stimulates the MHC II-dependent presentation of antigens. Higher MHC I expression increases presentation of viral and abnormal peptides from cancer cells to cytotoxic T cells, while the immunoproteasome processes these peptides for loading onto the MHC I molecule, thereby increasing the recognition and killing of infected or malignant cells. Higher MHC II expression increases presentation of these peptides to helper T cells; these cells release cytokines (such as more interferons and interleukins, among others) that signal to and coordinate the activity of other immune cells. Interferons can also suppress angiogenesis by down-regulating angiogenic stimuli derived from tumor cells. They also suppress the proliferation of endothelial cells. Such suppression causes a decrease in tumor angiogenesis and vascularization, and subsequent growth inhibition. Interferons, such as interferon gamma, directly activate other immune cells, such as macrophages and natural killer cells. Production of interferons occurs mainly in response to microbes, such as viruses and bacteria, and their products. Binding of molecules uniquely found in microbes—viral glycoproteins, viral RNA, bacterial endotoxin (lipopolysaccharide), bacterial flagella, CpG motifs—by pattern recognition receptors, such as membrane-bound Toll-like receptors or the cytoplasmic receptors RIG-I and MDA5, can trigger the release of IFNs. Toll-like receptor 3 (TLR3) is important for inducing interferons in response to the presence of double-stranded RNA viruses; its ligand is double-stranded RNA (dsRNA). After binding dsRNA, this receptor activates the transcription factors IRF3 and NF-κB, which are important for initiating synthesis of many inflammatory proteins. RNA interference technology tools such as siRNA or vector-based reagents can either silence or stimulate interferon pathways. Release of IFN from cells (specifically IFN-γ in lymphoid cells) is also induced by mitogens. Other cytokines, such as interleukin 1, interleukin 2, interleukin-12, tumor necrosis factor and colony-stimulating factor, can also enhance interferon production. By interacting with their specific receptors, IFNs activate signal transducer and activator of transcription (STAT) complexes; STATs are a family of transcription factors that regulate the expression of certain immune system genes. Some STATs are activated by both type I and type II IFNs. However, each IFN type can also activate unique STATs. STAT activation initiates the best-defined cell signaling pathway for all IFNs, the classical Janus kinase-STAT (JAK-STAT) signaling pathway. In this pathway, JAKs associate with IFN receptors and, following receptor engagement with IFN, phosphorylate both STAT1 and STAT2. As a result, an IFN-stimulated gene factor 3 (ISGF3) complex forms—containing STAT1, STAT2 and a third transcription factor called IRF9—and moves into the cell nucleus. Inside the nucleus, the ISGF3 complex binds to specific nucleotide sequences called IFN-stimulated response elements (ISREs) in the promoters of certain genes, known as IFN-stimulated genes (ISGs). Binding of ISGF3 and other transcriptional complexes activated by IFN signaling to these specific regulatory elements induces transcription of those genes. 
A collection of known ISGs is available on Interferome, a curated online database of ISGs (www.interferome.org). Additionally, STAT homodimers or heterodimers form from different combinations of STAT-1, -3, -4, -5, or -6 during IFN signaling; these dimers initiate gene transcription by binding to IFN-activated site (GAS) elements in gene promoters. Type I IFNs can induce expression of genes with either ISRE or GAS elements, but gene induction by type II IFN can occur only in the presence of a GAS element. In addition to the JAK-STAT pathway, IFNs can activate several other signaling cascades. For instance, both type I and type II IFNs activate a member of the CRK family of adaptor proteins called CRKL, a nuclear adaptor for STAT5 that also regulates signaling through the C3G/Rap1 pathway. Type I IFNs further activate p38 mitogen-activated protein kinase (MAP kinase) to induce gene transcription. Antiviral and antiproliferative effects specific to type I IFNs result from p38 MAP kinase signaling. The phosphatidylinositol 3-kinase (PI3K) signaling pathway is also regulated by both type I and type II IFNs. PI3K activates P70-S6 Kinase 1, an enzyme that increases protein synthesis and cell proliferation; phosphorylates ribosomal protein S6, which is involved in protein synthesis; and phosphorylates a translational repressor protein called eukaryotic translation-initiation factor 4E-binding protein 1 (EIF4EBP1) in order to deactivate it. Interferons can also disrupt signaling by other stimuli. For example, interferon alpha induces RIG-G, which disrupts the CSN5-containing COP9 signalosome (CSN), a highly conserved multiprotein complex implicated in protein deneddylation, deubiquitination, and phosphorylation. RIG-G has shown the capacity to inhibit NF-κB and STAT3 signaling in lung cancer cells, demonstrating the therapeutic potential of type I IFNs. Many viruses have evolved mechanisms to resist interferon activity. They circumvent the IFN response by blocking downstream signaling events that occur after the cytokine binds to its receptor, by preventing further IFN production, and by inhibiting the functions of proteins that are induced by IFN. Viruses that inhibit IFN signaling include Japanese encephalitis virus (JEV), dengue type 2 virus (DEN-2), and viruses of the herpesvirus family, such as human cytomegalovirus (HCMV) and Kaposi's sarcoma-associated herpesvirus (KSHV or HHV8). Viral proteins proven to affect IFN signaling include EBV nuclear antigen 1 (EBNA1) and EBV nuclear antigen 2 (EBNA-2) from Epstein-Barr virus, the large T antigen of Polyomavirus, the E7 protein of human papillomavirus (HPV), and the B18R protein of vaccinia virus. Reducing IFN-α activity may prevent signaling via STAT1, STAT2, or IRF9 (as with JEV infection) or through the JAK-STAT pathway (as with DEN-2 infection). Several poxviruses encode soluble IFN receptor homologs—like the B18R protein of the vaccinia virus—that bind to IFN and prevent it from interacting with its cellular receptor, impeding communication between this cytokine and its target cells. Some viruses encode proteins that bind to double-stranded RNA (dsRNA) to prevent the activity of RNA-dependent protein kinases; this is the mechanism that reovirus adopts, using its sigma 3 (σ3) protein, and that vaccinia virus employs, using p25, the product of its E3L gene. The ability of interferon to induce protein production from interferon-stimulated genes (ISGs) can also be affected. 
Production of protein kinase R, for example, can be disrupted in cells infected with JEV. Some viruses escape the anti-viral activities of interferons by gene (and thus protein) mutation. The H5N1 influenza virus, also known as bird flu, has a resistance to interferon and other anti-viral cytokines that is attributed to a single amino acid change in its non-structural protein 1 (NS1), although the precise mechanism by which this confers immunity is unclear. Interferon beta-1a and interferon beta-1b are used to treat and control multiple sclerosis, an autoimmune disorder. This treatment may help in reducing attacks in relapsing-remitting multiple sclerosis and in slowing disease progression and activity in secondary progressive multiple sclerosis. Interferon therapy is used (in combination with chemotherapy and radiation) as a treatment for some cancers. This treatment can be used in hematological malignancies, such as leukemias and lymphomas, including hairy cell leukemia, chronic myeloid leukemia, nodular lymphoma and cutaneous T-cell lymphoma. Patients with recurrent melanomas receive recombinant IFN-α2b. Both hepatitis B and hepatitis C are treated with IFN-α, often in combination with other antiviral drugs. Some of those treated with interferon have a sustained virological response and can eliminate the hepatitis virus. The most harmful strain—hepatitis C genotype I virus—can be treated with a 60–80% success rate with the current standard-of-care treatment of interferon-α, ribavirin, and recently approved protease inhibitors such as telaprevir (Incivek, approved May 2011) and boceprevir (Victrelis, May 2011), or the nucleotide analog polymerase inhibitor sofosbuvir (Sovaldi, December 2013). Biopsies of patients given the treatment show reductions in liver damage and cirrhosis. Some evidence shows that giving interferon immediately following infection can prevent chronic hepatitis C, although diagnosis early in infection is difficult since physical symptoms are sparse in early hepatitis C infection. Control of chronic hepatitis C by IFN is associated with reduced hepatocellular carcinoma. Unconfirmed results suggested that interferon eye drops may be an effective treatment for people who have herpes simplex virus epithelial keratitis, a type of eye infection. There is no clear evidence to suggest that removing the infected tissue (debridement) followed by interferon drops is an effective treatment approach for these types of eye infections. Unconfirmed results suggested that the combination of interferon and an antiviral agent may speed the healing process compared to antiviral therapy alone. When used in systemic therapy, IFNs are mostly administered by intramuscular injection. The injection of IFNs in the muscle or under the skin is generally well tolerated. The most frequent adverse effects are flu-like symptoms: increased body temperature, feeling ill, fatigue, headache, muscle pain, convulsion, dizziness, hair thinning, and depression. Erythema, pain, and hardness at the site of injection are also frequently observed. IFN therapy causes immunosuppression, in particular through neutropenia, and can result in some infections manifesting in unusual ways. Several different types of interferon are approved for use in humans; the first was approved for medical use in 1986. 
For example, in January 2001 the Food and Drug Administration (FDA) approved the use of PEGylated interferon-alpha in the USA; in this formulation, PEGylated interferon-alpha-2b (Pegintron), polyethylene glycol is linked to the interferon molecule to make the interferon last longer in the body. Approval for PEGylated interferon-alpha-2a (Pegasys) followed in October 2002. These PEGylated drugs are injected once weekly, rather than two or three times per week as is necessary for conventional interferon-alpha. When used with the antiviral drug ribavirin, PEGylated interferon is effective in the treatment of hepatitis C; at least 75% of people with hepatitis C genotypes 2 or 3 benefit from interferon treatment, although this is effective in less than 50% of people infected with genotype 1 (the more common form of hepatitis C virus in both the U.S. and Western Europe). Interferon-containing regimens may also include protease inhibitors such as boceprevir and telaprevir. There are also interferon-inducing drugs, notably tilorone, which has been shown to be effective against Ebola virus. Interferons were first described in 1957 by Alick Isaacs and Jean Lindenmann at the National Institute for Medical Research in London; the discovery was a result of their studies of viral interference. Viral interference refers to the inhibition of virus growth caused by previous exposure of cells to an active or a heat-inactivated virus. Isaacs and Lindenmann were working with a system that involved the inhibition of the growth of live influenza virus in chicken embryo chorioallantoic membranes by heat-inactivated influenza virus. Their experiments revealed that this interference was mediated by a protein released by cells in the heat-inactivated influenza virus-treated membranes. They published their results in 1957, naming the antiviral factor they had discovered "interferon". The findings of Isaacs and Lindenmann have been widely confirmed and corroborated in the literature. Furthermore, others may have made observations on interferons before the 1957 publication of Isaacs and Lindenmann. For example, during research to produce a more efficient vaccine for smallpox, Yasu-ichi Nagano and Yasuhiko Kojima—two Japanese virologists working at the Institute for Infectious Diseases at the University of Tokyo—noticed inhibition of viral growth in an area of rabbit skin or testis previously inoculated with UV-inactivated virus. They hypothesised that some "viral inhibitory factor" was present in the tissues infected with virus and attempted to isolate and characterize this factor from tissue homogenates. Independently, Monto Ho, in John Enders's lab, observed in 1957 that attenuated poliovirus conferred a species-specific anti-viral effect in human amniotic cell cultures. They described these observations in a 1959 publication, naming the responsible factor "viral inhibitory factor" (VIF). It took another fifteen to twenty years, using somatic cell genetics, to show that the interferon action gene and the interferon gene reside on different human chromosomes. The purification of human beta interferon did not occur until 1977. Y. H. Tan and his co-workers purified and produced biologically active, radio-labeled human beta interferon by superinducing the interferon gene in fibroblast cells, and showed that its active site contains tyrosine residues. Tan's laboratory isolated sufficient amounts of human beta interferon to perform the first amino acid, sugar composition and N-terminal analyses. 
They showed that human beta interferon was an unusually hydrophobic glycoprotein. This explained the large loss of interferon activity when preparations were transferred from test tube to test tube or from vessel to vessel during purification. The analyses provided chemical verification of the reality of interferon activity. The purification of human alpha interferon was not reported until 1978. A series of publications from the laboratories of Sidney Pestka and Alan Waldman between 1978 and 1981 describe the purification of the type I interferons IFN-α and IFN-β. By the early 1980s, genes for these interferons had been cloned, adding further definitive proof that interferons were responsible for interfering with viral replication. Gene cloning also confirmed that IFN-α is encoded by a family of many related genes. The type II IFN (IFN-γ) gene was also isolated around this time. Interferon was scarce and expensive until 1980, when the interferon gene was inserted into bacteria using recombinant DNA technology, allowing mass cultivation and purification from bacterial cultures, or derivation from yeasts. Interferon can also be produced by recombinant mammalian cells. In the early 1970s, large-scale production of human interferon was pioneered by Kari Cantell. He produced large amounts of human alpha interferon from large quantities of human white blood cells collected by the Finnish Blood Bank. Large amounts of human beta interferon were made by superinducing the beta interferon gene in human fibroblast cells. Cantell's and Tan's methods of making large amounts of natural interferon were critical for chemical characterisation, clinical trials and the preparation of small amounts of interferon messenger RNA to clone the human alpha and beta interferon genes. The superinduced human beta interferon messenger RNA was prepared by Tan's lab for Cetus Corp. to clone the human beta interferon gene in bacteria, and the recombinant interferon was developed as "Betaseron" and approved for the treatment of MS. Superinduction of the human beta interferon gene was also used by Israeli scientists to manufacture human beta interferon.
https://en.wikipedia.org/wiki?curid=15120
Israeli settlement Israeli settlements are civilian communities inhabited by Israeli citizens, almost exclusively of Jewish ethnicity, built on lands occupied by Israel in the 1967 Six-Day War. Israeli settlements currently exist in the Palestinian territory of the West Bank, including East Jerusalem, and in the Syrian territory of the Golan Heights, and previously existed within the Egyptian territory of the Sinai Peninsula and within the Palestinian territory of the Gaza Strip; however, Israel evacuated and dismantled the 18 Sinai settlements following the 1979 Egypt–Israel peace agreement, and all 21 settlements in the Gaza Strip, along with four in the West Bank, in 2005 as part of its unilateral disengagement from Gaza. The United Nations has repeatedly upheld the view that Israel's construction of settlements constitutes a violation of the Fourth Geneva Convention.
https://en.wikipedia.org/wiki?curid=15123
Irrealism (the arts) Irrealism is a term that has been used by various writers in the fields of philosophy, literature, and art to denote specific modes of unreality and/or the problems in concretely defining reality. While in philosophy the term specifically refers to a position put forward by the American philosopher Nelson Goodman, in literature and art it refers to a variety of writers and movements. If the term has nonetheless retained a certain consistency in its use across these fields and would-be movements, it perhaps reflects the word’s position in general English usage: though the standard dictionary definition of "irreal" gives it the same meaning as "unreal", "irreal" is very rarely used in comparison with "unreal". Thus, it has generally been used to describe something which, while unreal, is so in a very specific or unusual fashion, usually one emphasizing not just the "not real," but some form of estrangement from our generally accepted sense of reality. In literature, the term irrealism was first used extensively in the United States in the 1970s to describe the post-realist "new fiction" of writers such as Donald Barthelme or John Barth. More generally, it described the notion that all forms of writing could only "offer particular versions of reality rather than actual descriptions of it," and that a story need not offer a clear resolution at its end. John Gardner, in "The Art of Fiction", cites in this context the work of Barthelme and its "seemingly limitless ability to manipulate [literary] techniques as modes of apprehension [which] apprehend nothing." Though Barth, in a 1974 interview, stated, "irrealism—not antirealism or unrealism, but irrealism—is all that I would confidently predict is likely to characterize the prose fiction of the 1970s," this did not prove to be the case. Instead writing in the United States quickly returned to its realist orthodoxy and the term irrealism fell into disuse. In recent years, however, the term has been revived in an attempt to describe and categorize, in literary and philosophical terms, how it is that the work of an irrealist writer differs from the work of writers in other, non-realistic genres (e.g., the fantasy of J.R.R. Tolkien, the magical realism of Gabriel García Márquez) and what the significance of this difference is. This can be seen in Dean Swinford's essay "Defining irrealism: scientific development and allegorical possibility". Approaching the issue from a structuralist and narratological point of view, he has defined irrealism as a "peculiar mode of postmodern allegory" that has resulted from modernity’s fragmentation and dismantling of the well-ordered and coherent medieval system of symbol and allegory. Thus a lion, when presented in a given context in medieval literature, could only be interpreted in a single, approved way. Contemporary literary theory, however, denies the attribution of such fixed meanings. According to Swinford, this change can be attributed in part to the fact that "science and technical culture have changed perceptions of the natural world, have significantly changed the natural world itself, thereby altering the vocabulary of symbols applicable to epistemological and allegorical attempts to understand it." 
Thus irreal works such as Italo Calvino's "Cosmicomics" and Jorge Luis Borges' "Ficciones" can be seen as an attempt to find a new allegorical language to explain our changed perceptions of the world that have been brought about by our scientific and technical culture, especially concepts such as quantum physics or the theory of relativity. "The Irrealist work, then, operates within a given system," writes Swinford, "and attests to its plausibility, despite the fact that this system, and the world it represents, is often a mutation, an aberration." The online journal "The Cafe Irreal", on the other hand, has defined irrealism as a type of existentialist literature in which the means are continually and absurdly rebelling against the ends that we have determined for them. An example of this would be Franz Kafka's story "The Metamorphosis", in which the salesman Gregor Samsa's plans for supporting his family and rising up in rank by hard work and determination are suddenly thrown topsy-turvy by his sudden and inexplicable transformation into a man-sized insect. Such fiction is said to emphasize the fact that human consciousness, being finite in nature, can never make complete sense of, or successfully order, a universe that is infinite in its aspects and possibilities. Which is to say: as much as we might try to order our world with a certain set of norms and goals (which we consider our real world), the paradox of a finite consciousness in an infinite universe creates a zone of irreality ("that which is beyond the real") that offsets, opposes, or threatens the real world of the human subject. Irrealist writing often highlights this irreality, and our strange fascination with it, by combining the unease we feel because the real world doesn't conform to our desires with the narrative quality of the dream state (where reality is constantly and inexplicably being undermined); it is thus said to communicate directly, "by feeling rather than articulation, the uncertainties inherent in human existence or, to put it another way... the irreconcilability between human aspiration and human reality." If the irreal story can be considered an allegory, then, it would be an allegory that is "so many pointers to an unknown meaning," in which the meaning is felt more than it is articulated or systematically analyzed. Various writers have addressed the question of irrealism in art. Many salient observations on irrealism in art are found in Nelson Goodman's "Languages of Art". Goodman himself produced some multimedia shows, one of which was inspired by hockey and is entitled "Hockey Seen: A Nightmare in Three Periods and Sudden Death". Garret Rowlan, writing in "The Cafe Irreal", observes that the malaise present in the work of the Italian artist Giorgio de Chirico, "which recalls Kafka, has to do with the sense of another world lurking, hovering like the long shadows that dominate de Chirico's paintings, which frequently depict a landscape at twilight's uncertain hour. Malaise and mystery are all by-products of the interaction of the real and the unreal, the rub and contact of two worlds caught on irrealism's shimmering surface." The writer Dean Swinford, whose concept of irrealism was described at length above, wrote that the artist Remedios Varo, in her painting "The Juggler", "creates a personal allegorical system which relies on the predetermined symbols of Christian and classical iconography. 
But these are quickly refigured into a personal system informed by the scientific and organized like a machine... in the Irreal work, allegory operates according to an altered, but constant and orderly iconographic system." The artist Tristan Tondino claims, "There is no specific style to Irrealist Art. It is the result of awareness that every human act is the result of the limitations of the world of the actor." In Australia, the art journal "the art life" has recently detected the presence of a "New Irrealism" among the painters of that country, described as an "approach to painting that is decidedly low key, deploying its effects without histrionic showmanship, while creating an eerie other world of ghostly images and abstract washes." What exactly constituted the "old" irrealism, they do not say. Irrealist Art Edition is a publishing company created in the 1990s by the contemporary plastic artist Frédéric Iriarte. Together with the Estonian poet, writer and art critic Ilmar Laaban, he developed a concept of irrealism through several essays, exhibitions, projects, manifestos and a book, "Irréalisation". Some hardcore bands in Italy have claimed to be irrealist.
https://en.wikipedia.org/wiki?curid=15125
International Electrotechnical Commission The International Electrotechnical Commission (IEC; in French: "Commission électrotechnique internationale") is an international standards organization that prepares and publishes international standards for all electrical, electronic and related technologies – collectively known as "electrotechnology". IEC standards cover a vast range of technologies from power generation, transmission and distribution to home appliances and office equipment, semiconductors, fibre optics, batteries, solar energy, nanotechnology and marine energy, as well as many others. The IEC also manages four global conformity assessment systems that certify whether equipment, systems or components conform to its international standards. All electrotechnologies are covered by IEC Standards, including energy production and distribution, electronics, magnetics and electromagnetics, electroacoustics, multimedia, telecommunication and medical technology, as well as associated general disciplines such as terminology and symbols, electromagnetic compatibility, measurement and performance, dependability, design and development, safety and the environment. The first International Electrical Congress took place in 1881 at the International Exposition of Electricity, held in Paris. At that time the International System of Electrical and Magnetic Units was agreed to. The International Electrotechnical Commission held its inaugural meeting on 26 June 1906, following discussions among the British Institution of Electrical Engineers, the American Institute of Electrical Engineers, and others, which began at the 1900 Paris International Electrical Congress and continued with Colonel R. E. B. Crompton playing a key role. In 1906, Lord Kelvin was elected as the first President of the International Electrotechnical Commission. The IEC was instrumental in developing and distributing standards for units of measurement, particularly the gauss, hertz, and weber. It also first proposed a system of standards, the Giorgi System, which ultimately became the SI, or Système International d'unités (in English, the International System of Units). In 1938, it published a multilingual international vocabulary to unify terminology relating to electrical, electronic and related technologies. This effort continues, and the International Electrotechnical Vocabulary (the online version of which is known as the "Electropedia") remains an important work in the electrical and electronic industries. The CISPR ("Comité International Spécial des Perturbations Radioélectriques") – in English, the International Special Committee on Radio Interference – is one of the groups founded by the IEC. Currently, 86 countries are IEC members, while another 87 participate in the Affiliate Country Programme, which is not a form of membership but is designed to help industrializing countries get involved with the IEC. Originally located in London, the Commission moved to its current headquarters in Geneva in 1948. It has regional centres in Africa (Nairobi, Kenya), Asia-Pacific (Singapore), Latin America (São Paulo, Brazil) and North America (Boston, United States). Today, the IEC is the world's leading international organization in its field, and its standards are adopted as national standards by its members. The work is done by some 10,000 electrical and electronics experts from industry, government, academia, test labs and others with an interest in the subject. 
IEC standards have numbers in the range 60000–79999 and their titles take a form such as "IEC 60417: Graphical symbols for use on equipment". Following the Dresden Agreement with CENELEC, the numbers of older IEC standards were converted in 1997 by adding 60000; for example, IEC 27 became IEC 60027 (a conversion illustrated in the sketch at the end of this entry). Standards of the 60000 series are also found preceded by EN to indicate that the IEC standard has also been adopted by CENELEC as a European standard; for example, IEC 60034 is also available as EN 60034.

The IEC cooperates closely with the International Organization for Standardization (ISO) and the International Telecommunication Union (ITU). In addition, it works with several major standards development organizations, including the IEEE, with which it signed a cooperation agreement in 2002 that was amended in 2008 to include joint development work. Standards developed jointly with ISO, such as ISO/IEC 26300 ("Open Document Format for Office Applications (OpenDocument) v1.0"), ISO/IEC 27001 ("Information technology, Security techniques, Information security management systems, Requirements"), and the CASCO ISO/IEC 17000 series, carry the acronyms of both organizations. The use of the ISO/IEC prefix covers publications from ISO/IEC Joint Technical Committee 1 – Information Technology, as well as conformity assessment standards developed by ISO CASCO and the IEC CAB (Conformity Assessment Board). Other standards developed in cooperation between IEC and ISO are assigned numbers in the 80000 series, such as IEC 82045-1. IEC standards are also adopted by other certifying bodies such as BSI (United Kingdom), CSA (Canada), UL & ANSI/INCITS (United States), SABS (South Africa), Standards Australia, SPC/GB (China) and DIN (Germany). IEC standards adopted by other certifying bodies may have some noted differences from the original IEC standard.

The IEC is made up of members, called national committees (NCs), and each NC represents its nation's electrotechnical interests in the IEC. This includes manufacturers, providers, distributors and vendors, consumers and users, all levels of governmental agencies, professional societies and trade associations, as well as standards developers from national standards bodies. National committees are constituted in different ways: some NCs are public sector only, some are a combination of public and private sector, and some are private sector only. About 90% of those who prepare IEC standards work in industry. IEC member countries include:

In 2001, in response to calls from the WTO to open itself to more developing nations, the IEC launched the Affiliate Country Programme to encourage developing nations to become involved in the Commission's work or to use its International Standards. Countries signing a pledge to participate in the work and to encourage the use of IEC Standards in national standards and regulations are granted access to a limited number of technical committee documents for the purposes of commenting. In addition, they can select a limited number of IEC Standards for their national standards' library. Countries as of 2011 participating in the Affiliate Country Programme are:
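To make the Dresden Agreement renumbering concrete, here is a minimal sketch. The 60000 offset and the EN prefix convention come from the text above; the helper names themselves are hypothetical:

```python
def old_iec_to_new(old_number: int) -> str:
    """Convert a pre-1997 IEC standard number to its post-Dresden form.

    Per the Dresden Agreement, older IEC numbers were shifted into the
    60000 range by adding 60000 (e.g. IEC 27 -> IEC 60027).
    """
    return f"IEC {old_number + 60000}"

def as_european_norm(iec_number: int) -> str:
    """CENELEC adoptions keep the number but carry an EN prefix,
    e.g. IEC 60034 -> EN 60034."""
    return f"EN {iec_number}"

print(old_iec_to_new(27))        # IEC 60027
print(as_european_norm(60034))   # EN 60034
```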
https://en.wikipedia.org/wiki?curid=15144
ISO 9660 ISO 9660 is a file system for optical disc media. Published by the International Organization for Standardization (ISO), the file system is an international technical standard. Since the specification is available for anybody to purchase, implementations have been written for many operating systems.

ISO 9660 traces its roots to the High Sierra Format file system. High Sierra arranged file information in a dense, sequential layout to minimize nonsequential access by using a hierarchical (eight levels of directories deep) tree file system arrangement, similar to those of UNIX and FAT. To facilitate cross-platform compatibility, it defined a minimal set of common file attributes (directory or ordinary file, and time of recording) and name attributes (name, extension, and version), and used a separate system use area where future optional extensions for each file could be specified. High Sierra was adopted in December 1986 (with changes) as an international standard by Ecma International as ECMA-119 and submitted for fast tracking to the ISO, where it was eventually accepted as ISO 9660:1988. In 2013, ISO published Amendment 1 to the ISO 9660 standard, introducing new data structures and relaxed file name rules intended to "bring harmonization between ISO 9660 and widely used 'Joliet Specification'." In December 2017, a 3rd Edition of ECMA-119 was published that is technically identical to ISO 9660, Amendment 1. In 2020, ISO published Amendment 2, which adds some minor clarifying matter but does not add or correct any technical content of the standard.

The following is the rough overall structure of the ISO 9660 file system: The System Area, the first 32,768 data bytes of the disc (16 sectors of 2,048 bytes each), is unused by ISO 9660 and therefore available for other uses. For example, a CD-ROM may contain an alternative file system descriptor in this area; it is often used by hybrid CDs to offer classic Mac OS-specific and macOS-specific content. All multi-byte values are stored twice, in little-endian and big-endian format, either one after another in what the specification calls "both-byte orders", or in duplicated data structures such as the path table. Because the structures were designed with unaligned members, this "both endian" encoding does not, however, help implementers, as the data structures need to be read byte-wise to be converted into properly aligned data.

The data area begins with a set of one or more "volume descriptors", terminated with a "volume descriptor set terminator". Collectively, the "volume descriptor set" acts as a header for the data area, describing its content (similar to the BIOS parameter block used by FAT-, HPFS- and NTFS-formatted disks). The "volume descriptor set terminator" is simply a particular type of "volume descriptor" whose purpose is to mark the end of this set of structures. Each volume descriptor is 2048 bytes in size, fitting perfectly into a single Mode 1 or Mode 2 Form 1 sector. They have the following structure: The data field of a volume descriptor may be subdivided into several fields, with the exact content depending on the type. Standard volume descriptor types are the following: An ISO 9660 compliant disc contains at least one "Primary Volume Descriptor" describing the file system and a "Volume Descriptor Set Terminator" indicating the end of the descriptor sequence.
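As an illustrative sketch (not a full implementation), the volume descriptor set can be walked with ordinary file I/O. The sector-16 start, 2,048-byte descriptor size, "CD001" identifier and type codes follow the structure described above; the two field offsets used in the example (80 for the volume space size, 128 for the logical block size) are taken from the published Primary Volume Descriptor layout and should be verified against the standard, and "example.iso" is a placeholder filename:

```python
import struct

SECTOR_SIZE = 2048
SYSTEM_AREA_SECTORS = 16  # the first 32,768 bytes are unused by ISO 9660

def walk_volume_descriptors(path):
    """Yield (type_code, raw_sector) for each volume descriptor in an image.

    Descriptors start right after the System Area and end with the
    Volume Descriptor Set Terminator (type 255).
    """
    with open(path, "rb") as f:
        f.seek(SYSTEM_AREA_SECTORS * SECTOR_SIZE)
        while True:
            sector = f.read(SECTOR_SIZE)
            if len(sector) < SECTOR_SIZE:
                break                      # truncated image
            if sector[1:6] != b"CD001":    # standard identifier check
                break
            vd_type = sector[0]
            yield vd_type, sector
            if vd_type == 255:             # set terminator
                break

for vd_type, sector in walk_volume_descriptors("example.iso"):
    if vd_type == 1:  # Primary Volume Descriptor
        # "Both-byte order" fields store a little-endian copy followed by a
        # big-endian copy; reading the little-endian half suffices here.
        volume_blocks = struct.unpack_from("<I", sector, 80)[0]
        block_size = struct.unpack_from("<H", sector, 128)[0]
        print("volume:", volume_blocks, "blocks of", block_size, "bytes")
```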
The Primary Volume Descriptor provides information about the volume, its characteristics and metadata, including a root directory record that indicates in which sector the root directory is located. Other fields contain the description or name of the volume, and information about who created it and with which application. The size of the logical blocks which the file system uses to segment the volume is also stored in a field inside the primary volume descriptor, as is the amount of space occupied by the volume (measured in number of logical blocks).

In addition to the Primary Volume Descriptor(s), "Supplementary Volume Descriptors" or "Enhanced Volume Descriptors" may be present. Supplementary Volume Descriptors describe the same volume as the Primary Volume Descriptor does, and are normally used for providing additional code page support when the standard code tables are insufficient. The standard specifies that ISO 2022 is used for managing code sets wider than 8 bits, and that ISO 2375 escape sequences are used to identify each particular code page used. Consequently, ISO 9660 supports international single-byte and multi-byte character sets, provided they fit into the framework of the referenced standards. However, ISO 9660 does not specify any code pages that are guaranteed to be supported: all use of code tables other than those defined in the standard itself is subject to agreement between the originator and the recipient of the volume. Enhanced Volume Descriptors were introduced in ISO 9660, Amendment 1. They relax some of the requirements of the other volume descriptors and of the directory records referenced by them: for example, the directory depth may exceed eight levels, file identifiers need not contain a '.' or a file version number, and file and directory identifiers may be up to 207 characters long. Redundant copies of each volume descriptor can also be included in case the first copy of the descriptor becomes corrupt.

Directory entries are stored following the location of the root directory entry, where filename evaluation begins. Both directories and files are stored as extents, which are sequential series of sectors. Files and directories are differentiated only by a file attribute that indicates their nature (similar to Unix). The attributes of a file are stored in the directory entry that describes the file, and optionally in the extended attribute record. To locate a file, the directory names in the file's path can be checked sequentially, going to the location of each directory to obtain the location of the subsequent subdirectory. However, a file can also be located through the path table provided by the file system. This path table stores information about each directory: its identifier, its parent, and its location on disc. Since the path table is stored in a contiguous region, it can be searched much faster than jumping to the particular locations of each directory in the file's path, thus reducing seek time (a parsing sketch follows below).

The standard specifies three nested levels of interchange (paraphrased from section 10): Additional restrictions in the body of the standard: the depth of the directory hierarchy must not exceed 8 (the root directory being at level 1), and the path length of any file must not exceed 255 characters (section 6.8.2.1).
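A sketch of how entries in an L-type (little-endian) path table might be decoded. The fields are those named above (identifier, extent location, extended attribute length, parent index); the entry layout follows the standard's description, but the helper itself is hypothetical:

```python
import struct

def parse_l_path_table(buf: bytes):
    """Parse entries from an L-type (little-endian) path table buffer.

    Each entry: identifier length (1 byte), extended attribute record
    length (1), location of extent (4), parent directory number (2),
    then the identifier itself, plus one pad byte if its length is odd.
    """
    entries, offset = [], 0
    while offset < len(buf):
        id_len = buf[offset]
        if id_len == 0:
            break  # no further entries
        ext_attr_len = buf[offset + 1]
        extent, parent = struct.unpack_from("<IH", buf, offset + 2)
        ident = buf[offset + 8 : offset + 8 + id_len]
        entries.append((ident.decode("ascii", "replace"),
                        extent, parent, ext_attr_len))
        offset += 8 + id_len + (id_len & 1)  # skip pad byte if odd length
    return entries
```

Because each entry carries only a 16-bit parent index, a resolver walking this table can reconstruct any directory's full path by repeatedly following parent numbers back to entry 1 (the root), which is what makes the table faster to search than the directory extents themselves.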
The standard also specifies the following name restrictions (sections 7.5 and 7.6): Path tables summarize the directory structure of the relevant directory hierarchy, providing only the directory identifier, the location of the extent in which the directory is recorded, the length of any extended attributes associated with the directory, and the index of its parent directory's path table entry. The restrictions on filename length (eight characters plus a three-character extension at interchange level 1) and directory depth (eight levels, including the root directory) are among the more serious limitations of the ISO 9660 file system. The Rock Ridge extension works around the eight-level depth limit by folding paths. In practice, however, few drivers and operating systems care about the directory depth, so this rule is often ignored. In addition to the restrictions mentioned above, a CD-ROM producer may choose one of the lower Levels of Interchange specified in chapter 10 of the standard and further restrict file name length from 30 characters to only 8+3 in file identifiers, and to 8 characters in directory identifiers, in order to promote interchangeability with implementations that do not implement the full standard. (This is sometimes mistakenly interpreted as a restriction of the ISO 9660 standard itself.)

All numbers in ISO 9660 file systems, except the single-byte value used for the GMT offset, are unsigned numbers. As the length of a file's extent on disc is stored in a 32-bit value, it allows for a maximum length of just over 4.2 GB (more precisely, one byte less than 4 GiB). (Note: some older operating systems may handle such values incorrectly, i.e. as signed instead of unsigned, which would make it impossible to access files larger than 2 GB; the same holds for operating systems without large file support.) Based on this, it is often assumed that a file on an ISO 9660 formatted disc cannot be larger than 2³² − 1 bytes, as the file's size is stored in an unsigned 32-bit value, for which 2³² − 1 is the maximum. It is, however, possible to circumvent this limitation by using the multi-extent (fragmentation) feature of ISO 9660 Level 3 to create ISO 9660 file systems and single files up to 8 TiB. With this, files larger than 4 GiB can be split up into multiple extents (sequential series of sectors), each not exceeding the 4 GiB limit (see the sketch below). For example, free software such as InfraRecorder, ImgBurn and mkisofs, as well as Roxio Toast, can create ISO 9660 file systems that use multi-extent files to store files larger than 4 GiB on appropriate media such as recordable DVDs. Empirical tests with a 4.2 GB fragmented file on DVD media have shown that Microsoft Windows XP supports this, while Mac OS X (as of 10.4.8) does not handle this case properly. In the case of Mac OS X, the driver appears not to support file fragmentation at all (i.e. it only supports ISO 9660 Level 2, not Level 3). Linux supports multiple extents.

Another limitation is the number of directories. The ISO image has a structure called the "path table". For each directory in the image, the path table provides the number of its parent directory's entry. The problem is that the parent directory number is a 16-bit number, limiting its range to 1 through 65,535. The content of each directory is also written in a different place, making the path table redundant and suitable only for fast searching. Some operating systems (e.g. Windows) use the path table, while others (e.g. Linux) do not.
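To make the multi-extent arithmetic concrete, here is a sketch that splits a large file size into Level 3 extents. The per-extent ceiling used here (just under 4 GiB, rounded down to whole 2,048-byte sectors) is an assumption about what a mastering tool might choose, not a value mandated by the standard:

```python
SECTOR = 2048
# Assumed per-extent ceiling: largest whole-sector size below 2**32 bytes.
MAX_EXTENT = (2**32 - 1) // SECTOR * SECTOR  # 4,294,965,248 bytes

def split_into_extents(file_size: int):
    """Return the extent sizes needed to store file_size bytes at Level 3.

    Each extent's length must fit in the unsigned 32-bit size field,
    so files larger than that are recorded as several extents.
    """
    extents = []
    remaining = file_size
    while remaining > MAX_EXTENT:
        extents.append(MAX_EXTENT)
        remaining -= MAX_EXTENT
    extents.append(remaining)
    return extents

# A 10 GiB file becomes three extents, each under the 32-bit limit.
print(split_into_extents(10 * 2**30))
# [4294965248, 4294965248, 2147487744]
```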
If an ISO image or disc contains more than 65,535 directories, it will be readable in Linux, while in early Windows versions all files from the additional directories will be visible but show up as empty (zero length). Some software tools can have problems managing the path table if the directory limit is exceeded. mkisofs, a popular application for creating ISO images, aborts if there is a path table overflow. Nero Burning ROM (for Windows) and Pinnacle Instant CD/DVD do not check whether the problem occurs, and will produce an invalid ISO file or disc without warning.

There are several extensions to ISO 9660 that relax some of its limitations. For operating systems which do not support any extensions, a name translation file, TRANS.TBL, must be used; it should be located in every directory, including the root directory. This practice is now obsolete, since few such operating systems are in use today. The ISO 13490 standard is an extension to the ISO 9660 format that adds support for multiple sessions on a disc. Since ISO 9660 is by design a read-only, pre-mastered file system, all the data has to be written in one go, or "session", to the medium. Once written, there is no provision for altering the stored content. ISO 13490 was created to allow adding more files to a writeable disc such as a CD-R in multiple sessions.

JIS X 0606:1998, also known as ISO 9660:1999, is a Japanese Industrial Standard draft created by the Japanese National Body (JTC1 N4222) in order to make some improvements and remove some limitations of the original ISO 9660 standard. This draft was submitted in 1998, but it has not been ratified as an ISO standard. Its changes include the removal of some restrictions imposed by the original standard: extending the maximum file name length to 207 characters, removing the eight-level maximum directory nesting limit, and removing the special meaning of the dot character in filenames. Some operating systems allow these relaxations as well when reading optical discs. Several disc authoring tools (such as Nero Burning ROM, mkisofs and ImgBurn) support a so-called "ISO 9660:1999" mode (sometimes called "ISO 9660 v2" or "ISO 9660 Level 4" mode) that removes restrictions following the guidelines in the ISO 9660:1999 draft.

The ISO 13346/ECMA-167 standard was designed in conjunction with the ISO 13490 standard. This newer format addresses most of the shortcomings of ISO 9660, and a subset of it evolved into the Universal Disk Format (UDF), which was adopted for DVDs. The volume descriptor table retains the ISO 9660 layout, but the identifier has been updated.

ISO 9660 file system images (ISO images) are a common way to electronically transfer the contents of CD-ROMs. They often have the filename extension .iso (other extensions are less common but also in use) and are commonly referred to as "ISOs". Most operating systems support reading of ISO 9660 formatted discs, and most new versions support extensions such as Rock Ridge and Joliet. Operating systems that do not support the extensions usually show only the basic (non-extended) features of a plain ISO 9660 disc. Operating systems that support ISO 9660 and its extensions include the following:
https://en.wikipedia.org/wiki?curid=15145
Ice skating Ice skating is the self-propulsion of a person across a sheet of ice, using metal-bladed ice skates to glide on the ice surface. This activity can be carried out for various reasons, including recreation, sport, exercise, and travel. Ice skating may be performed on specially prepared ice surfaces (arenas, tracks, parks), both indoors and outdoors, as well as on naturally occurring bodies of frozen water, such as ponds, lakes and rivers.

Research suggests that the earliest ice skating happened in southern Finland more than 4,000 years ago, as a way to save energy during winter journeys. True skating emerged when a steel blade with sharpened edges was used: skates now cut into the ice instead of gliding on top of it. Adding edges to ice skates was invented by the Dutch in the 13th or 14th century. These ice skates were made of steel, with sharpened edges on the bottom to aid movement. The fundamental construction of modern ice skates has stayed largely the same since then, although the details differ greatly, particularly in the method of binding and in the shape and construction of the steel blades. In the Netherlands, ice skating was considered proper for all classes of people, as shown in many pictures by the Old Masters. Ice skating was also practiced in China during the Song dynasty, and became popular among the ruling family of the Qing dynasty.

Ice skating was brought to Britain from the Netherlands, where James II was briefly exiled in the 17th century. When he returned to England, this "new" sport was introduced to the British aristocracy, and was soon enjoyed by people from all walks of life. The first organised skating club was the Edinburgh Skating Club, formed in the 1740s (some claim the club was established as early as 1642). An early contemporary reference to the club appeared in the second edition (1783) of the Encyclopædia Britannica: From this description and others, it is apparent that the form of skating practiced by club members was indeed an early form of figure skating rather than speed skating. For admission to the club, candidates had to pass a skating test in which they performed a complete circle on either foot (e.g., a figure eight), and then jumped over first one hat, then two and three, placed on top of each other on the ice.

On the Continent, participation in ice skating was limited to members of the upper classes. Emperor Rudolf II of the Holy Roman Empire enjoyed ice skating so much that he had a large ice carnival constructed in his court in order to popularise the sport. King Louis XVI of France brought ice skating to Paris during his reign. Madame de Pompadour, Napoleon I, Napoleon III and the House of Stuart were, among others, royal and upper-class fans of ice skating.

The next skating club to be established, in London, was not founded until 1830. By the mid-19th century, ice skating was a popular pastime among the British upper and middle classes—Queen Victoria became acquainted with her future husband, Prince Albert, through a series of ice skating trips—and early attempts at the construction of artificial ice rinks were made during the "rink mania" of 1841–44. As the technology for the maintenance of natural ice did not exist, these early rinks used a substitute consisting of a mixture of hog's lard and various salts. An item in the 8 May 1844 issue of Littell's "Living Age", headed "The Glaciarium", reported that "This establishment, which has been removed to Grafton Street East, Tottenham Court Road, was opened on Monday afternoon.
The area of artificial ice is extremely convenient for such as may be desirous of engaging in the graceful and manly pastime of skating". Skating became popular as a recreation, a means of transport and a spectator sport in the Fens in England for people from all walks of life. Racing was the preserve of workers, most of them agricultural labourers. It is not known when the first skating matches were held, but by the early nineteenth century racing was well established and the results of matches were reported in the press. Skating as a sport developed on the lakes of Scotland and the canals of the Netherlands. In the 13th and 14th centuries wood was substituted for bone in skate blades, and in 1572 the first iron skates were manufactured.

When the waters froze, skating matches were held in towns and villages all over the Fens. In these local matches men (or sometimes women or children) would compete for prizes of money, clothing or food. The winners of local matches were invited to take part in the grand or championship matches, in which skaters from across the Fens would compete for cash prizes in front of crowds of thousands. The championship matches took the form of a Welsh main, or "last man standing", contest. The competitors, 16 or sometimes 32, were paired off in heats, and the winner of each heat went through to the next round. A course of 660 yards was measured out on the ice, and a barrel with a flag on it was placed at either end. For a one-and-a-half-mile race the skaters completed two rounds of the course, with three barrel turns (four 660-yard legs and three turns make 2,640 yards, exactly one and a half miles).

In the Fens skates were called pattens, fen runners, or Whittlesey runners. The footstock was made of beechwood. A screw at the back fastened into the heel of the boot, and three small spikes at the front kept the skate steady. There were holes in the footstock for leather straps to fasten it to the foot. The metal blades were slightly higher at the back than at the front. In the 1890s, fen skaters started to race on Norwegian-style skates.

On Saturday 1 February 1879, a number of professional ice skaters from Cambridgeshire and Huntingdonshire met in the Guildhall, Cambridge, to set up the National Skating Association, the first national ice skating body in the world. The founding committee consisted of several landowners, a vicar, a fellow of Trinity College, a magistrate, two Members of Parliament, the mayor of Cambridge, the Lord Lieutenant of Cambridge, journalist James Drake Digby, the president of Cambridge University Skating Club, and Neville Goodman, a graduate of Peterhouse, Cambridge (and son of Potto Brown's milling partner, Joseph Goodman). The newly formed Association held its first one-and-a-half-mile British professional championship at Thorney in December 1879.

The first instructional book concerning ice skating was published in London in 1772. The book, written by a British artillery lieutenant, Robert Jones, describes basic figure skating forms such as circles and figure eights. The book was written solely for men, as women did not normally ice skate in the late 18th century. It was with the publication of this manual that ice skating split into its two main disciplines, speed skating and figure skating. The founder of modern figure skating as it is known today was Jackson Haines, an American. He was the first skater to incorporate ballet and dance movements into his skating, as opposed to focusing on tracing patterns on the ice.
Haines also invented the sit spin and developed a shorter, curved blade for figure skating that allowed for easier turns. He was also the first to wear blades that were permanently attached to the boot. The International Skating Union, the first international ice skating organisation, was founded in Scheveningen, in the Netherlands, in 1892. The Union created the first codified set of figure skating rules and governed international competition in speed and figure skating. The first Championship, known as the Championship of the Internationale Eislauf-Vereinigung, was held in Saint Petersburg in 1896. The event had four competitors and was won by Gilbert Fuchs.

A skate can glide over ice because there is a layer of ice molecules on the surface that are not as tightly bound as the molecules of the mass of ice beneath. These molecules are in a semiliquid state, providing lubrication. The molecules in this "quasi-fluid" or "water-like" layer are less mobile than liquid water, but much more mobile than the molecules deeper in the ice. At very low temperatures the slippery layer is one molecule thick; as the temperature increases, the slippery layer becomes thicker.

It had long been believed that ice is slippery because the pressure of an object in contact with it causes a thin layer to melt. The hypothesis was that the blade of an ice skate, exerting pressure on the ice, melts a thin layer, providing lubrication between the ice and the blade. This explanation, called "pressure melting", originated in the 19th century. It cannot, however, account for skating on ice colder than −3.5 °C, although skaters often skate on lower-temperature ice. In the 20th century, an alternative explanation, called "friction melting", proposed by Lozowski, Szilder, Le Berre, Pomeau and others, showed that because of viscous frictional heating, a macroscopic layer of melted ice lies between the ice and the skate. With this they explained the low friction using nothing but macroscopic physics: the frictional heat generated between skate and ice melts a thin layer of ice. This is a self-stabilizing mechanism of skating: if by fluctuation the friction gets high, the layer grows in thickness and lowers the friction, and if it gets low, the layer decreases in thickness and increases the friction. The friction generated in the sheared layer of water between skate and ice grows as "√V", with "V" the velocity of the skater, so that for low velocities the friction is also low.

Whatever the origin of the water layer, skating is more destructive than simple gliding. A skater leaves a visible trail behind on virgin ice, and skating rinks have to be regularly resurfaced to improve the skating conditions. This means that the deformation caused by the skate is plastic rather than elastic. The skate ploughs through the ice, in particular because of its sharp edges. Thus another component has to be added to the friction: the "ploughing friction". The calculated frictions are of the same order as the measured frictions in real skating in a rink. The ploughing friction decreases with the velocity "V", since the pressure in the water layer increases with "V" and lifts the skate (aquaplaning). As a result, the sum of the water-layer friction and the ploughing friction increases only slightly with "V", making skating at high speeds (>90 km/h) possible (a toy numerical version of this model is sketched below). A person's ability to ice skate depends on the roughness of the ice, the design of the ice skate, and the skill and experience of the skater.
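As a purely illustrative sketch of the two competing friction terms described above: the coefficients below are invented for illustration (real values depend on blade geometry, ice temperature and skater weight), and the 1/V form of the ploughing term is an assumption standing in for "decreases with velocity":

```python
import math

# Invented, illustrative coefficients only.
A_SHEAR = 2.0    # water-layer (shear) friction term, grows as sqrt(V)
B_PLOUGH = 6.0   # ploughing friction term, assumed here to fall as 1/V

def total_friction(v: float) -> float:
    """Toy model: shear friction grows as sqrt(V); ploughing decays as 1/V."""
    return A_SHEAR * math.sqrt(v) + B_PLOUGH / v

for v in (2.0, 5.0, 10.0, 20.0):  # speeds in metres per second
    print(f"V = {v:5.1f} m/s -> friction ~ {total_friction(v):.2f} (arbitrary units)")
```

The point of the toy model is qualitative: at low speeds the ploughing term dominates, at high speeds the shear term does, and their sum varies much less with speed than either term alone.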
While serious injury is rare, a number of short-track speed skaters have been paralysed after heavy falls in which they collided with the boards. A fall can be fatal if a helmet is not worn to protect against severe head trauma. Accidents are rare, but there is a risk of injury from collisions, particularly during hockey games or in pair skating.

A significant danger when skating outdoors on a frozen body of water is falling through the ice into the freezing water underneath. Death can result from shock, hypothermia or drowning. It is often difficult or impossible for skaters to climb out of the water, owing to the weight of their ice skates and thick winter clothing, and to the ice repeatedly breaking as they struggle to get back onto the surface. Also, if a skater becomes disoriented under the water, they might not be able to find the hole in the ice through which they fell. Although this can prove fatal, the rapid cooling can also produce a condition in which a person can be revived even hours after falling into the water.

A number of recreational and sporting activities take place on ice. Broomball and curling are also played on ice, but the players are not required to wear ice skates.
https://en.wikipedia.org/wiki?curid=15146
International Olympic Committee The International Olympic Committee (IOC; French: Comité international olympique, CIO) is a non-governmental sports organisation based in Lausanne, Switzerland. Founded by Pierre de Coubertin and Demetrios Vikelas in 1894, it is the authority responsible for organising the modern Summer and Winter Olympic Games. The IOC is the governing body of the National Olympic Committees (NOCs), which are the national constituents of the worldwide Olympic Movement. As of 2016, there are 206 NOCs officially recognised by the IOC. The current president of the IOC is Thomas Bach of Germany, who succeeded Jacques Rogge of Belgium in September 2013.

The IOC was created by Pierre de Coubertin on 23 June 1894, with Demetrios Vikelas as its first president. As of April 2019, its membership consists of 95 active members, 44 honorary members, an honorary president (Jacques Rogge) and two honour members (Henry Kissinger and Youssoupha Ndiaye). The IOC is the supreme authority of the worldwide modern Olympic Movement. It organises the modern Olympic Games and Youth Olympic Games (YOG), held in summer and winter every four years. The first Summer Olympics were held in Athens, Greece, in 1896; the first Winter Olympics were in Chamonix, France, in 1924. The first Summer YOG were held in Singapore in 2010, and the first Winter YOG were held in Innsbruck in 2012. Until 1992, the Summer and Winter Olympics were held in the same year. After that year, however, the IOC shifted the Winter Olympics to the even years between Summer Games, to help space the planning of the two events apart and to improve the financial balance of the IOC, which receives a proportionally greater income in Olympic years.

In 2009, the UN General Assembly granted the IOC Permanent Observer status. The decision enables the IOC to be directly involved in the UN Agenda and to attend UN General Assembly meetings, where it can take the floor. In 1993, the General Assembly had approved a Resolution to further solidify IOC–UN cooperation by reviving the Olympic Truce. During each proclamation at the Olympics, announcers speak in different languages: French is always spoken first, followed by an English translation, and then the dominant language of the host nation (when this is neither English nor French).

The IOC received approval in November 2015 to construct a new headquarters in Vidy, Lausanne. The cost of the project was estimated at $156 million. The IOC announced on 11 February 2019 that "Olympic House" would be inaugurated on 23 June 2019 to coincide with its 125th anniversary. The Olympic Museum remains in Ouchy, Lausanne. The stated mission of the IOC is to promote the Olympics throughout the world and to lead the Olympic Movement: The IOC Session is the general meeting of the members of the IOC, held once a year, in which each member has one vote. It is the IOC's supreme organ, and its decisions are final. Extraordinary Sessions may be convened by the President or upon the written request of at least one third of the members. Among others, the powers of the Session are: In addition to the Olympic medals for competitors, the IOC awards a number of other honours.

For most of its existence, the IOC was controlled by members who were selected by other members. Countries that had hosted the Games were allowed two members. When named, members did not become representatives of their respective countries to the IOC, but rather the opposite: IOC members in their respective countries.
"Granted the honour of becoming a member of the International Olympic Committee and declaring myself aware of my responsibilities in such a capacity, I undertake to serve the Olympic Movement to the very best of my ability; to respect and ensure the respect of all the provisions of the Olympic Charter and the decisions of the International Olympic Committee which I consider as not the subject to appeal on my part; to comply with the code of ethics to keep myself free from any political or commercial influence and from any racial or religious consideration; to fight against all other forms of discrimination; and to promote in all circumstances the interests of the International Olympic Committee and those of the Olympic Movement." The membership of IOC members ceases in the following circumstances: There are currently 73 international sports federations (IFs) recognised by the IOC. These are: During the first half of the 20th century the IOC ran on a small budget. As president of the IOC from 1952 to 1972, Avery Brundage rejected all attempts to link the Olympics with commercial interest. Brundage believed the lobby of corporate interests would unduly impact the IOC's decision-making. Brundage's resistance to this revenue stream meant the IOC left organising committees to negotiate their own sponsorship contracts and use the Olympic symbols. When Brundage retired the IOC had US$2 million in assets; eight years later the IOC coffers had swelled to US$45 million. This was primarily due to a shift in ideology toward expansion of the Games through corporate sponsorship and the sale of television rights. When Juan Antonio Samaranch was elected IOC president in 1980 his desire was to make the IOC financially independent. Samaranch appointed Canadian IOC member Richard Pound to lead the initiative as Chairman of the "New Sources of Finance Commission". In 1982 the IOC drafted ISL Marketing, a Swiss sports marketing company, to develop a global marketing programme for the Olympic Movement. ISL successfully developed the programme but was replaced by Meridian Management, a company partly owned by the IOC in the early 1990s. In 1989, one of the staff members at ISL Marketing, Michael Payne, moved to the IOC and became the organisation's first marketing director. However ISL and subsequently Meridian, continued in the established role as the IOC's sales and marketing agents until 2002. In 2002 the IOC terminated the relationship with Meridian and took its marketing programme in-house under the Direction of Timo Lumme, the IOC's managing director of IOC Television and Marketing Services. During his 17 years with the IOC, in collaboration with ISL Marketing and subsequently Meridian Management, Payne made major contributions to the creation of a multibillion-dollar sponsorship marketing programme for the organisation which, along with improvements in TV marketing and improved financial management, helped to restore the IOC's financial viability. The Olympic Movement generates revenue through five major programmes. The OCOGs have responsibility for the domestic sponsorship, ticketing and licensing programmes, under the direction of the IOC. The Olympic Movement generated a total of more than US$4 billion (€2.5 billion) in revenue during the Olympic quadrennium from 2001 to 2004. The IOC distributes some of the Olympic marketing revenue to organisations throughout the Olympic Movement to support the staging of the Olympic Games and to promote the worldwide development of sport. 
The IOC retains approximately 10% of the Olympic marketing revenue for the operational and administrative costs of governing the Olympic Movement. The IOC provides TOP programme contributions and Olympic broadcast revenue to the OCOGs to support the staging of the Summer and Winter Olympic Games: The NOCs receive financial support for the training and development of Olympic teams, Olympic athletes and Olympic hopefuls. The IOC distributes TOP programme revenue to each of the NOCs throughout the world. The IOC also contributes Olympic broadcast revenue to Olympic Solidarity, an IOC organisation that provides financial support to the NOCs with the greatest need. The continued success of the TOP programme and of the Olympic broadcast agreements has enabled the IOC to provide increased support for the NOCs with each Olympic quadrennium. The IOC provided approximately US$318.5 million to NOCs for the 2001–2004 quadrennium.

The IOC is now the largest single revenue source for the majority of IFs, with its contributions of Olympic broadcast revenue assisting the IFs in the development of their respective sports worldwide. The IOC provides financial support from Olympic broadcast revenue to the 28 IFs of Olympic summer sports and the seven IFs of Olympic winter sports after the completion of the Summer Olympics and the Winter Olympics, respectively. The continually increasing value of Olympic broadcast partnerships has enabled the IOC to deliver substantially increased financial support to the IFs with each successive Games. The seven winter sports IFs shared US$85.8 million (€75 million) of Salt Lake 2002 broadcast revenue. The contribution to the 28 summer sports IFs from Athens 2004 broadcast revenue has not yet been determined, but it is expected to mark a significant increase over the US$190 million (€150 million) that the IOC provided to the summer IFs following Sydney 2000.

The IOC contributes Olympic marketing revenue to the programmes of various recognised international sports organisations, including the International Paralympic Committee (IPC) and the World Anti-Doping Agency (WADA). The Olympic Partner (TOP) sponsorship programme includes the following commercial sponsors of the Olympic Games. The IOC recognizes that the Olympic Games demand substantial environmental resources, activities, and construction projects that could be detrimental to a host city's environment.
https://en.wikipedia.org/wiki?curid=15147
Integrated circuit An integrated circuit or monolithic integrated circuit (also referred to as an IC, a chip, or a microchip) is a set of electronic circuits on one small flat piece (or "chip") of semiconductor material, normally silicon. The integration of large numbers of tiny MOS transistors into a small chip results in circuits that are orders of magnitude smaller, faster, and less expensive than those constructed of discrete electronic components. The IC's mass-production capability, reliability, and building-block approach to integrated circuit design have ensured the rapid adoption of standardized ICs in place of designs using discrete transistors. ICs are now used in virtually all electronic equipment and have revolutionized the world of electronics. Computers, mobile phones, and other digital home appliances are now inextricable parts of the structure of modern societies, made possible by the small size and low cost of ICs.

Integrated circuits were made practical by technological advancements in metal–oxide–silicon (MOS) semiconductor device fabrication. Since their origins in the 1960s, the size, speed, and capacity of chips have progressed enormously, driven by technical advances that fit more and more MOS transistors on chips of the same size – a modern chip may have many billions of MOS transistors in an area the size of a human fingernail. These advances, roughly following Moore's law, mean that today's computer chips possess millions of times the capacity and thousands of times the speed of the computer chips of the early 1970s.

ICs have two main advantages over discrete circuits: cost and performance. Cost is low because the chips, with all their components, are printed as a unit by photolithography rather than being constructed one transistor at a time. Furthermore, packaged ICs use much less material than discrete circuits. Performance is high because the IC's components switch quickly and consume comparatively little power because of their small size and proximity. The main disadvantage of ICs is the high cost of designing them and fabricating the required photomasks. This high initial cost means ICs are only commercially viable when high production volumes are anticipated.

An "integrated circuit" is defined as: A circuit in which all or some of the circuit elements are inseparably associated and electrically interconnected so that it is considered to be indivisible for the purposes of construction and commerce. Circuits meeting this definition can be constructed using many different technologies, including thin-film transistors, thick-film technologies, or hybrid integrated circuits. However, in general usage "integrated circuit" has come to refer to the single-piece circuit construction originally known as a "monolithic integrated circuit".

An early attempt at combining several components in one device (like modern ICs) was the Loewe 3NF vacuum tube from the 1920s. Unlike ICs, it was designed with the purpose of tax evasion: in Germany, radio receivers were taxed at a rate that depended on how many tube holders they contained, and the 3NF allowed radio receivers to get by with a single tube holder. Early concepts of an integrated circuit go back to 1949, when German engineer Werner Jacobi (Siemens AG) filed a patent for an integrated-circuit-like semiconductor amplifying device showing five transistors on a common substrate in a 3-stage amplifier arrangement. Jacobi disclosed small and cheap hearing aids as typical industrial applications of his patent.
An immediate commercial use of his patent has not been reported. Another early proponent of the concept was Geoffrey Dummer (1909–2002), a radar scientist working for the Royal Radar Establishment of the British Ministry of Defence. Dummer presented the idea to the public at the Symposium on Progress in Quality Electronic Components in Washington, D.C., on 7 May 1952. He promoted his ideas at many public symposia and unsuccessfully attempted to build such a circuit in 1956. Between 1953 and 1957, Sidney Darlington and Yasuro Tarui (Electrotechnical Laboratory) proposed similar chip designs in which several transistors could share a common active area, but there was no electrical isolation to separate them from each other.

The monolithic integrated circuit chip was enabled by the surface passivation process, which electrically stabilized silicon surfaces via thermal oxidation, making it possible to fabricate monolithic integrated circuit chips using silicon. The surface passivation process was developed by Mohamed M. Atalla at Bell Labs in 1957. This was the basis for the planar process, developed by Jean Hoerni at Fairchild Semiconductor in early 1959, which was critical to the invention of the monolithic integrated circuit chip. A key concept behind the monolithic IC is the principle of p–n junction isolation, which allows each transistor to operate independently despite being part of the same piece of silicon. Atalla's surface passivation process isolated individual diodes and transistors, which was extended to independent transistors on a single piece of silicon by Kurt Lehovec at Sprague Electric in 1959, and then independently by Robert Noyce at Fairchild later the same year.

A precursor idea to the IC was to create small ceramic substrates (so-called "micromodules"), each containing a single miniaturized component. Components could then be integrated and wired into a compact two- or three-dimensional grid. This idea, which seemed very promising in 1957, was proposed to the US Army by Jack Kilby and led to the short-lived Micromodule Program (similar to 1951's Project Tinkertoy). However, as the project was gaining momentum, Kilby came up with a new, revolutionary design: the IC. Newly employed by Texas Instruments, Kilby recorded his initial ideas concerning the integrated circuit in July 1958 and successfully demonstrated the first working example of an integrated circuit on 12 September 1958. In his patent application of 6 February 1959, Kilby described his new device as "a body of semiconductor material … wherein all the components of the electronic circuit are completely integrated." The first customer for the new invention was the US Air Force. Kilby won the 2000 Nobel Prize in Physics for his part in the invention of the integrated circuit.

However, Kilby's invention was a hybrid integrated circuit (hybrid IC), rather than a monolithic integrated circuit (monolithic IC) chip. Kilby's IC had external wire connections, which made it difficult to mass-produce. Half a year after Kilby, Robert Noyce at Fairchild Semiconductor invented the first true monolithic IC chip. It was a new variety of integrated circuit, more practical than Kilby's implementation. Noyce's design was made of silicon, whereas Kilby's chip was made of germanium. Noyce's monolithic IC put all components on a chip of silicon and connected them with aluminium lines deposited on the chip. Noyce's monolithic IC was fabricated using the planar process, developed in early 1959 by his colleague Jean Hoerni.
In turn, Hoerni's planar process was based on Mohamed Atalla's surface passivation process. Modern IC chips are based on Noyce's monolithic IC, rather than Kilby's hybrid IC. NASA's Apollo Program was the largest single consumer of integrated circuits between 1961 and 1965.

Transistor–transistor logic (TTL) was developed by James L. Buie in the early 1960s at TRW Inc. TTL became the dominant integrated circuit technology from the 1970s to the early 1980s. Dozens of TTL integrated circuits were a standard method of construction for the processors of minicomputers and mainframe computers. Computers such as IBM 360 mainframes, PDP-11 minicomputers and the desktop Datapoint 2200 were built from bipolar integrated circuits, either TTL or the even faster emitter-coupled logic (ECL).

Nearly all modern IC chips are metal–oxide–semiconductor (MOS) integrated circuits, built from MOSFETs (metal–oxide–semiconductor field-effect transistors). The MOSFET (also known as the MOS transistor), invented by Mohamed M. Atalla and Dawon Kahng at Bell Labs in 1959, made it possible to build high-density integrated circuits. Atalla first proposed the concept of the MOS integrated circuit (MOS IC) chip in 1960, noting that the MOSFET's ease of fabrication made it useful for integrated circuits. In contrast to bipolar transistors, which required a number of steps for the p–n junction isolation of transistors on a chip, MOSFETs required no such steps and could be easily isolated from each other. Their advantage for integrated circuits was reiterated by Dawon Kahng in 1961. The list of IEEE milestones includes the first integrated circuit by Kilby in 1958, Hoerni's planar process and Noyce's planar IC in 1959, and the MOSFET by Atalla and Kahng in 1959.

The earliest experimental MOS IC to be fabricated was a 16-transistor chip built by Fred Heiman and Steven Hofstein at RCA in 1962. General Microelectronics later introduced the first commercial MOS integrated circuit in 1964, a 120-transistor shift register developed by Robert Norman. By 1964, MOS chips had reached higher transistor density and lower manufacturing costs than bipolar chips. MOS chips further increased in complexity at a rate predicted by Moore's law, leading to large-scale integration (LSI) with hundreds of transistors on a single MOS chip by the late 1960s. Following the development of the self-aligned gate (silicon-gate) MOSFET by Robert Kerwin, Donald Klein and John Sarace at Bell Labs in 1967, the first silicon-gate MOS IC technology with self-aligned gates, the basis of all modern CMOS integrated circuits, was developed at Fairchild Semiconductor by Federico Faggin in 1968.

The application of MOS LSI chips to computing was the basis for the first microprocessors, as engineers began recognizing that a complete computer processor could be contained on a single MOS LSI chip. This led to the invention of the microprocessor and the microcontroller by the early 1970s. During the early 1970s, MOS integrated circuit technology enabled the very large-scale integration (VLSI) of more than 10,000 transistors on a single chip. At first, MOS-based computers only made sense when high density was required, such as in aerospace applications and pocket calculators. Computers built entirely from TTL, such as the 1970 Datapoint 2200, were much faster and more powerful than single-chip MOS microprocessors such as the 1972 Intel 8008 until the early 1980s.
Advances in IC technology, primarily smaller features and larger chips, have allowed the number of MOS transistors in an integrated circuit to double every two years, a trend known as Moore's law. Moore originally stated the number would double every year, but in 1975 he revised the claim to every two years. This increased capacity has been used to decrease cost and increase functionality. In general, as the feature size shrinks, almost every aspect of an IC's operation improves. The cost per transistor and the switching power consumption per transistor go down, while the memory capacity and speed go up, through the relationships defined by Dennard scaling (MOSFET scaling). Because speed, capacity, and power consumption gains are apparent to the end user, there is fierce competition among the manufacturers to use finer geometries. Over the years, transistor sizes have decreased from tens of microns in the early 1970s to 10 nanometers in 2017, with a corresponding million-fold increase in transistors per unit area (a quick numeric check of this claim appears in the sketch below). As of 2016, typical chip areas range from a few square millimeters to around 600 mm², with up to 25 million transistors per mm². The expected shrinking of feature sizes and the needed progress in related areas was forecast for many years by the International Technology Roadmap for Semiconductors (ITRS). The final ITRS was issued in 2016, and it is being replaced by the International Roadmap for Devices and Systems.

Initially, ICs were strictly electronic devices. The success of ICs has led to the integration of other technologies, in an attempt to obtain the same advantages of small size and low cost. These technologies include mechanical devices, optics, and sensors. Today, the vast majority of all transistors are MOSFETs fabricated in a single layer on one side of a chip of silicon in a flat two-dimensional planar process. Researchers have produced prototypes of several promising alternatives, such as stacked wire bonding and other methodologies. As it becomes more difficult to manufacture ever smaller transistors, companies are using multi-chip modules, three-dimensional integrated circuits, 3D NAND, package on package, and through-silicon vias to increase performance and reduce size, without having to reduce the size of the transistors.

The cost of designing and developing a complex integrated circuit is quite high, normally in the multiple tens of millions of dollars. Therefore, it only makes economic sense to produce integrated circuit products with high production volume, so the non-recurring engineering (NRE) costs are spread across typically millions of production units.

Modern semiconductor chips have billions of components and are too complex to be designed by hand. Software tools to help the designer are essential. Electronic Design Automation (EDA), also referred to as Electronic Computer-Aided Design (ECAD), is a category of software tools for designing electronic systems, including integrated circuits. The tools work together in a design flow that engineers use to design and analyze entire semiconductor chips.

Integrated circuits can be classified into analog, digital and mixed-signal, the last consisting of both analog and digital signaling on the same IC. Digital integrated circuits can contain anywhere from one to billions of logic gates, flip-flops, multiplexers, and other circuits in a few square millimeters. The small size of these circuits allows high speed, low power dissipation, and reduced manufacturing cost compared with board-level integration.
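A back-of-the-envelope check of the scaling claims above, under the stated doubling period of two years (a sketch, not a precise model of any particular process roadmap):

```python
def moore_factor(years: float, doubling_period: float = 2.0) -> float:
    """Growth factor after `years` if density doubles every `doubling_period` years."""
    return 2 ** (years / doubling_period)

# Early 1970s to 2017 is roughly 45 years: about a six-million-fold increase,
# consistent with the "million-fold increase in transistors per unit area".
print(f"{moore_factor(45):.2e}")  # ~5.93e+06
```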
These digital ICs, typically microprocessors, DSPs, and microcontrollers, work using Boolean algebra to process "one" and "zero" signals. Among the most advanced integrated circuits are the microprocessors or "cores", which control everything from personal computers and cellular phones to digital microwave ovens. Digital memory chips and application-specific integrated circuits (ASICs) are examples of other families of integrated circuits that are important to the modern information society.

In the 1980s, programmable logic devices were developed. These devices contain circuits whose logical function and connectivity can be programmed by the user, rather than being fixed by the integrated circuit manufacturer. This allows a single chip to be programmed to implement different LSI-type functions such as logic gates, adders and registers. Programmability comes in at least four forms: devices that can be programmed only once, devices that can be erased and then re-programmed using UV light, devices that can be (re)programmed using flash memory, and field-programmable gate arrays (FPGAs), which can be programmed at any time, including during operation. Current FPGAs can (as of 2016) implement the equivalent of millions of gates and operate at frequencies up to 1 GHz.

Analog ICs, such as sensors, power management circuits, and operational amplifiers (op-amps), work by processing continuous signals. They perform analog functions such as amplification, active filtering, demodulation, and mixing. Analog ICs ease the burden on circuit designers by making expertly designed analog circuits available, instead of requiring a difficult analog circuit to be designed and constructed from scratch. ICs can also combine analog and digital circuits on a single chip to create functions such as analog-to-digital converters and digital-to-analog converters. Such mixed-signal circuits offer smaller size and lower cost, but must carefully account for signal interference. Prior to the late 1990s, radios could not be fabricated in the same low-cost CMOS processes as microprocessors. But since 1998, a large number of radio chips have been developed using RF CMOS processes. Examples include Intel's DECT cordless phone chips and the 802.11 (Wi-Fi) chips created by Atheros and other companies. Modern usage often further sub-categorizes the huge variety of integrated circuits now available:

The semiconductors of the periodic table of the chemical elements were identified as the most likely materials for a "solid-state vacuum tube". Starting with copper oxide, proceeding to germanium, then silicon, the materials were systematically studied in the 1940s and 1950s. Today, monocrystalline silicon is the main substrate used for ICs, although some III-V compounds of the periodic table such as gallium arsenide are used for specialized applications like LEDs, lasers, solar cells and the highest-speed integrated circuits. It took decades to perfect methods of creating crystals with minimal defects in semiconducting materials' crystal structure.

Semiconductor ICs are fabricated in a planar process which includes three key process steps: photolithography, deposition (such as chemical vapor deposition), and etching. The main process steps are supplemented by doping and cleaning. More recent or high-performance ICs may use multi-gate FinFET or GAAFET transistors instead of planar ones. Mono-crystal silicon wafers are used in most applications (or, for special applications, other semiconductors such as gallium arsenide are used).
The wafer need not be entirely silicon. Photolithography is used to mark different areas of the substrate to be doped or to have polysilicon, insulator or metal (typically aluminium or copper) tracks deposited on them. Dopants are impurities intentionally introduced into a semiconductor to modulate its electronic properties; doping is the process of adding dopants to a semiconductor material.

Since a CMOS device only draws current on the "transition" between logic states, CMOS devices consume much less current than bipolar junction transistor devices. A random-access memory is the most regular type of integrated circuit; the highest-density devices are thus memories, but even a microprocessor will have memory on the chip. (See the regular array structure at the bottom of the first image.) Although the structures are intricate – with widths which have been shrinking for decades – the layers remain much thinner than the device widths. The layers of material are fabricated much like a photographic process, although light waves in the visible spectrum cannot be used to "expose" a layer of material, as they would be too large for the features. Thus photons of higher frequencies (typically ultraviolet) are used to create the patterns for each layer. Because each feature is so small, electron microscopes are essential tools for a process engineer who might be debugging a fabrication process.

Each device is tested before packaging using automated test equipment (ATE), in a process known as wafer testing, or wafer probing. The wafer is then cut into rectangular blocks, each of which is called a "die". Each good die (plural "dice", "dies", or "die") is then connected into a package using aluminium (or gold) bond wires, which are thermosonically bonded to "pads", usually found around the edge of the die. Thermosonic bonding was first introduced by A. Coucoulas, and provided a reliable means of forming these vital electrical connections to the outside world. After packaging, the devices go through final testing on the same or similar ATE used during wafer probing. Industrial CT scanning can also be used. Test cost can account for over 25% of the cost of fabrication on lower-cost products, but can be negligible on low-yielding, larger, or higher-cost devices.

Today, a fabrication facility (commonly known as a "semiconductor fab") can cost over US$8 billion to construct. The cost of a fabrication facility rises over time because of the increased complexity of new products; this is known as Rock's law. Today, the most advanced processes employ the following techniques: ICs can be manufactured either in-house by integrated device manufacturers (IDMs) or using the foundry model. IDMs are vertically integrated companies (like Intel and Samsung) that design, manufacture and sell their own ICs, and may offer design and/or manufacturing (foundry) services to other companies (the latter often to fabless companies). In the foundry model, fabless companies (like Nvidia) only design and sell ICs, and outsource all manufacturing to pure-play foundries such as TSMC. These foundries may also offer IC design services.
In the 1980s pin counts of VLSI circuits exceeded the practical limit for DIP packaging, leading to pin grid array (PGA) and leadless chip carrier (LCC) packages. Surface mount packaging appeared in the early 1980s and became popular in the late 1980s, using finer lead pitch with leads formed as either gull-wing or J-lead, as exemplified by the small-outline integrated circuit (SOIC) package – a carrier which occupies an area about 30–50% less than an equivalent DIP and is typically 70% thinner. This package has "gull wing" leads protruding from the two long sides and a lead spacing of 0.050 inches. In the late 1990s, plastic quad flat pack (PQFP) and thin small-outline package (TSOP) packages became the most common for high pin count devices, though PGA packages are still used for high-end microprocessors. Ball grid array (BGA) packages have existed since the 1970s. Flip-chip Ball Grid Array (FCBGA) packages, which allow for much higher pin count than other package types, were developed in the 1990s. In an FCBGA package the die is mounted upside-down (flipped) and connects to the package balls via a package substrate that is similar to a printed-circuit board rather than by wires. FCBGA packages allow an array of input-output signals (called Area-I/O) to be distributed over the entire die rather than being confined to the die periphery. BGA devices have the advantage of not needing a dedicated socket, but are much harder to replace in case of device failure. Intel transitioned away from PGA to land grid array (LGA) and BGA beginning in 2004, with the last PGA socket released in 2014 for mobile platforms. AMD uses PGA packages on mainstream desktop processors, BGA packages on mobile processors, and LGA packages on high-end desktop and server microprocessors. Electrical signals leaving the die must pass through the material electrically connecting the die to the package, through the conductive traces (paths) in the package, and through the leads connecting the package to the conductive traces on the printed circuit board. The materials and structures along this path have very different electrical properties from those along paths confined to the same die. As a result, signals leaving the die require special design techniques to ensure they are not corrupted, and consume much more electric power than signals confined to the die itself. When multiple dies are put in one package, the result is a system in package, abbreviated SiP. A multi-chip module (MCM) is created by combining multiple dies on a small substrate often made of ceramic. The distinction between a large MCM and a small printed circuit board is sometimes fuzzy. Packaged integrated circuits are usually large enough to include identifying information. Four common sections are the manufacturer's name or logo, the part number, a part production batch number and serial number, and a four-digit date-code to identify when the chip was manufactured. Extremely small surface-mount technology parts often bear only a number used in a manufacturer's lookup table to find the integrated circuit's characteristics. The manufacturing date is commonly represented as a two-digit year followed by a two-digit week code, such that a part bearing the code 8341 was manufactured in week 41 of 1983, or approximately in October 1983. 
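Decoding such a date code is mechanical. The following is a minimal sketch in Python, assuming the two-digit-year/two-digit-week form described above and a 1900s year window; actual marking schemes vary by manufacturer, so the helper name and the year window are illustrative only:

    import datetime

    def decode_date_code(code: str):
        """Decode a four-digit YYWW chip date code such as '8341'."""
        year = 1900 + int(code[:2])   # assumes a 20th-century part
        week = int(code[2:])
        # Monday of that ISO week gives an approximate manufacture date.
        monday = datetime.date.fromisocalendar(year, week, 1)
        return year, week, monday

    print(decode_date_code("8341"))   # (1983, 41, datetime.date(1983, 10, 10))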
The possibility of copying by photographing each layer of an integrated circuit and preparing photomasks for its production on the basis of the photographs obtained is a reason for the introduction of legislation for the protection of layout-designs. The Semiconductor Chip Protection Act of 1984 established intellectual property protection for photomasks used to produce integrated circuits. A diplomatic conference was held at Washington, D.C., in 1989, which adopted a Treaty on Intellectual Property in Respect of Integrated Circuits (IPIC Treaty). The Treaty on Intellectual Property in respect of Integrated Circuits, also called the Washington Treaty or IPIC Treaty (signed at Washington on 26 May 1989), is currently not in force, but was partially integrated into the TRIPS agreement. National laws protecting IC layout designs have been adopted in a number of countries, including Japan, the EC, the UK, Australia, and Korea. The UK enacted the Copyright, Designs and Patents Act, 1988, c. 48, § 213, after it initially took the position that its copyright law fully protected chip topographies. See British Leyland Motor Corp. v. Armstrong Patents Co. Criticisms of the inadequacy of the UK copyright approach, as perceived by the US chip industry, are summarized in accounts of further chip-rights developments. Australia passed the Circuit Layouts Act of 1989 as a "sui generis" form of chip protection. Korea passed the "Act Concerning the Layout-Design of Semiconductor Integrated Circuits". Future developments seem to follow the multi-core multi-microprocessor paradigm, already used by Intel and AMD multi-core processors. Rapport Inc. and IBM started shipping the KC256, a 256-core microprocessor, in 2006. Intel, as recently as February–August 2011, unveiled a prototype "not for commercial sale" chip that bears 80 cores. Each core is capable of handling its own task independently of the others. This is in response to the heat-versus-speed limit that is being approached with existing transistor technology (see: thermal design power). This design provides a new challenge to chip programming. Parallel programming languages such as the open-source X10 programming language are designed to assist with this task. In the early days of simple integrated circuits, the technology's large feature size limited each chip to only a few transistors, and the low degree of integration meant the design process was relatively simple. Manufacturing yields were also quite low by today's standards. As metal–oxide–semiconductor (MOS) technology progressed, millions and then billions of MOS transistors could be placed on one chip, and good designs required thorough planning, giving rise to the field of electronic design automation, or EDA. The first integrated circuits contained only a few transistors. Early digital circuits containing tens of transistors provided a few logic gates, and early linear ICs such as the Plessey SL201 or the Philips TAA320 had as few as two transistors. The number of transistors in an integrated circuit has increased dramatically since then. The term "large scale integration" (LSI) was first used by IBM scientist Rolf Landauer when describing the theoretical concept; that term gave rise to the terms "small-scale integration" (SSI), "medium-scale integration" (MSI), "very-large-scale integration" (VLSI), and "ultra-large-scale integration" (ULSI). The early integrated circuits were SSI. SSI circuits were crucial to early aerospace projects, and aerospace projects helped inspire development of the technology. 
Both the Minuteman missile and Apollo program needed lightweight digital computers for their inertial guidance systems. Although the Apollo guidance computer led and motivated integrated-circuit technology, it was the Minuteman missile that forced it into mass-production. The Minuteman missile program and various other United States Navy programs accounted for the total $4 million integrated circuit market in 1962, and by 1968, U.S. Government spending on space and defense still accounted for 37% of the $312 million total production. The demand by the U.S. Government supported the nascent integrated circuit market until costs fell enough to allow IC firms to penetrate the industrial market and eventually the consumer market. The average price per integrated circuit dropped from $50.00 in 1962 to $2.33 in 1968. Integrated circuits began to appear in consumer products around the turn of the 1970s. A typical application was FM inter-carrier sound processing in television receivers. The first MOS chips were small-scale integration (SSI) chips. Following Mohamed M. Atalla's proposal of the MOS integrated circuit chip in 1960, the earliest experimental MOS chip to be fabricated was a 16-transistor chip built by Fred Heiman and Steven Hofstein at RCA in 1962. The first practical application of MOS SSI chips was for NASA satellites. The next step in the development of integrated circuits introduced devices which contained hundreds of transistors on each chip, called "medium-scale integration" (MSI). MOSFET scaling technology made it possible to build high-density chips. By 1964, MOS chips had reached higher transistor density and lower manufacturing costs than bipolar chips. In 1964, Frank Wanlass demonstrated a single-chip 16-bit shift register he designed, with a then-incredible 120 MOS transistors. The same year, General Microelectronics introduced the first commercial MOS integrated circuit chip, consisting of 120 p-channel MOS transistors. It was a 20-bit shift register, developed by Robert Norman and Frank Wanlass. MOS chips further increased in complexity at a rate predicted by Moore's law, leading to chips with hundreds of MOSFETs on a chip by the late 1960s. Further development, driven by the same MOSFET scaling technology and economic factors, led to "large-scale integration" (LSI) by the mid-1970s, with tens of thousands of transistors per chip. The masks used to process and manufacture SSI, MSI and early LSI and VLSI devices (such as the microprocessors of the early 1970s) were mostly created by hand, often using Rubylith tape or similar. For large or complex ICs (such as memories or processors), this was often done by specially hired professionals in charge of circuit layout, placed under the supervision of a team of engineers, who would also, along with the circuit designers, inspect and verify the correctness and completeness of each mask. Integrated circuits such as 1K-bit RAMs, calculator chips, and the first microprocessors, which began to be manufactured in moderate quantities in the early 1970s, had under 4,000 transistors. True LSI circuits, approaching 10,000 transistors, began to be produced around 1974, for computer main memories and second-generation microprocessors. Some SSI and MSI chips, like discrete transistors, are still mass-produced, both to maintain old equipment and build new devices that require only a few gates. The 7400 series of TTL chips, for example, has become a de facto standard and remains in production. 
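The doubling cadence behind this progression is easy to illustrate numerically. A rough sketch, assuming a two-year doubling period and taking the 2,300-transistor Intel 4004 of 1971 as an illustrative baseline (the baseline figures are well known but are not drawn from the text above):

    # Stylized Moore's-law projection: transistor counts double every two years.
    base_year, base_count = 1971, 2_300      # Intel 4004, for illustration
    for year in (1971, 1975, 1985, 1995, 2005):
        count = base_count * 2 ** ((year - base_year) / 2)
        print(f"{year}: ~{count:,.0f} transistors per chip")

The projection is only an order-of-magnitude guide, and actual chips tracked the trend unevenly, but it shows how thousands of transistors in the early 1970s become millions by the 1990s.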
The final step in the development process, starting in the 1980s and continuing through the present, is "very-large-scale integration" (VLSI). The development started with hundreds of thousands of transistors in the early 1980s, and transistor counts have since grown beyond ten billion transistors per chip. Multiple developments were required to achieve this increased density. Manufacturers moved to smaller MOSFET design rules and cleaner fabrication facilities so that they could make chips with more transistors and maintain adequate yield. The path of process improvements was summarized by the International Technology Roadmap for Semiconductors (ITRS), which has since been succeeded by the International Roadmap for Devices and Systems (IRDS). Electronic design tools improved enough to make it practical to finish these designs in a reasonable time. The more energy-efficient CMOS replaced NMOS and PMOS, avoiding a prohibitive increase in power consumption. Modern VLSI devices contain so many transistors, layers, interconnections, and other features that it is no longer feasible to check the masks or do the original design by hand. Instead, engineers use tools to perform most functional verification work. In 1986 the first one-megabit random-access memory (RAM) chips were introduced, containing more than one million transistors. Microprocessor chips passed the million-transistor mark in 1989 and the billion-transistor mark in 2005. The trend continues largely unabated, with chips introduced in 2007 containing tens of billions of memory transistors. To reflect further growth of complexity, the term "ULSI" ("ultra-large-scale integration") was proposed for chips of more than 1 million transistors. Wafer-scale integration (WSI) is a means of building very large integrated circuits that uses an entire silicon wafer to produce a single "super-chip". Through a combination of large size and reduced packaging, WSI could lead to dramatically reduced costs for some systems, notably massively parallel supercomputers. The name is taken from the term Very-Large-Scale Integration, the current state of the art when WSI was being developed. A system-on-a-chip (SoC or SOC) is an integrated circuit in which all the components needed for a computer or other system are included on a single chip. The design of such a device can be complex and costly, and whilst performance benefits can be had from integrating all needed components on one die, the cost of licensing and developing a one-die machine can still outweigh that of having separate devices. With appropriate licensing, these drawbacks are offset by lower manufacturing and assembly costs and by a greatly reduced power budget: because signals among the components are kept on-die, much less power is required (see Packaging). Further, signal sources and destinations are physically closer on die, reducing the length of wiring and therefore latency, transmission power costs and waste heat from communication between modules on the same chip. This has led to an exploration of so-called Network-on-Chip (NoC) devices, which apply system-on-chip design methodologies to digital communication networks as opposed to traditional bus architectures. A three-dimensional integrated circuit (3D-IC) has two or more layers of active electronic components that are integrated both vertically and horizontally into a single circuit. Communication between layers uses on-die signaling, so power consumption is much lower than in equivalent separate circuits. 
Judicious use of short vertical wires can substantially reduce overall wire length for faster operation. To allow identification during production, most silicon chips will have a serial number in one corner. It is also common to add the manufacturer's logo. Ever since ICs were created, some chip designers have used the silicon surface area for surreptitious, non-functional images or words. These are sometimes referred to as chip art, silicon art, silicon graffiti or silicon doodling.
https://en.wikipedia.org/wiki?curid=15150
IBM 3270 The IBM 3270 is a class of block-oriented computer terminals (sometimes called "display devices") introduced by IBM in 1971, normally used to communicate with IBM mainframes. The 3270 was the successor to the IBM 2260 display terminal. Due to the text colour on the original models, these terminals are informally known as "green screen" terminals. Unlike a character-oriented terminal, the 3270 minimizes the number of I/O interrupts required by transferring large blocks of data known as data streams, and uses a high-speed proprietary communications interface over coaxial cable. IBM no longer manufactures 3270 terminals, but the IBM 3270 protocol is still commonly used via 3270 terminal emulation or web interfaces to access mainframe-based applications, which are sometimes referred to as "green screen applications". The 3270 series was designed to connect with mainframe computers, often at a remote location, using the technology then available in the early 1970s. The main goal of the system was to maximize the number of terminals that could be used on a single mainframe. To do this, the 3270 was designed to minimize the amount of data transmitted and the frequency of interrupts to the mainframe. By ensuring the CPU is not interrupted at every keystroke, a 1970s-era IBM 3033 mainframe fitted with only 16 MB of main memory was able to support up to 17,500 3270 terminals under CICS. 3270 devices are "clustered", with one or more displays or printers connected to a "control unit" (the 3275 and 3276 included an integrated control unit). Originally devices were connected to the control unit over coaxial cable; later token ring, twisted pair, or Ethernet connections were available. A "local" control unit attaches directly to the channel of a nearby mainframe. A "remote" control unit is connected to a communications line by a modem. Remote 3270 controllers are frequently "multi-dropped", with multiple control units on a line. In a data stream, both text and control (or formatting functions) are interspersed, allowing an entire screen to be "painted" as a single output operation. The concept of formatting in these devices allows the screen to be divided into fields (clusters of contiguous character cells) for which numerous field attributes (colour, highlighting, character set, protection from modification) can be set. A field attribute occupies a physical location on the screen that also determines the beginning and end of a field. Using a technique known as "read modified", a single transmission back to the mainframe can contain the changes from any number of formatted fields that have been modified, but without sending any unmodified fields or static data. This technique enhances the terminal throughput of the CPU, and minimizes the data transmitted. Some users familiar with character interrupt-driven terminal interfaces find this technique unusual. There is also a "read buffer" capability that transfers the entire content of the 3270 screen buffer, including field attributes. This is mainly used for debugging purposes to preserve the application program screen contents while replacing it, temporarily, with debugging information. Early 3270s offered three types of keyboards. The "typewriter keyboard" came in both a 66-key version, with no programmed function (PF) keys, and a 78-key version with twelve. Both versions had two "program attention" (PA) keys. The "data entry keyboard" had five PF keys and two PA keys. The "operator console keyboard" had twelve PF keys and two PA keys. 
Later 3270s had twenty-four PF keys and three PA keys. When one of these keys is pressed, it causes its control unit to generate an I/O interrupt to the host computer and present a special code identifying which key was pressed. Application program functions such as termination, page-up, page-down, or help can be invoked by a single key press, thereby reducing the load on very busy processors. A downside to this approach was that vi-like behaviour, responding to individual keystrokes, was not possible. For the same reason, a port of Lotus 1-2-3 to mainframes with 3279 screens did not meet with success, because its programmers were not able to properly adapt the spreadsheet's user interface to a "screen at a time" rather than "character at a time" device. But end-user responsiveness was arguably more predictable with 3270, something users appreciated. Following its introduction, the 3270 and compatibles were by far the most commonly used terminals on IBM System/370 and successor systems. IBM and third-party software that included an interactive component took for granted the presence of 3270 terminals and provided a set of ISPF panels and supporting programs. Conversational Monitor System (CMS) in VM/SP has support for the 3270. Time Sharing Option (TSO) in OS/360 and successors has line-mode command line support and also has facilities for full-screen applications, e.g., ISPF. Device Independent Display Operator Console Support (DIDOCS) in Multiple Console Support (MCS) provides 3270 support for operator consoles in OS/360 and successors. The SPF and "Program Development Facility" (ISPF/PDF) editors for MVS and the XEDIT editor for VM/SP make extensive use of 3270 features (ISPF/PDF was available for VM, but little used). Customer Information Control System (CICS) has support for 3270 panels. Various versions of Wylbur have support for 3270, including support for full-screen applications. The modified data tag is well suited to converting formatted, structured punched card input onto the 3270 display device. With the appropriate programming, any batch program that uses formatted, structured card input can be layered onto a 3270 terminal. IBM's OfficeVision office productivity software enjoyed great success with 3270 interaction because its design understood the screen-at-a-time nature of the device. And for many years the PROFS calendar was the most commonly displayed screen on office terminals around the world. A version of the WordPerfect word processor ported to System/370 was designed for the 3270 architecture. The 3270 and the Web (with HTTP) are similar in that both follow a thin-client client-server architecture in which the client is given primary responsibility for managing presentation and user input. This minimizes host interactions while still facilitating server-based information retrieval and processing. With the arrival of the web, application development has in many ways returned to the 3270 approach. In the 3270 era, all application functionality was provided centrally. With the advent of the PC, the idea was to invoke central systems only when absolutely unavoidable, and to do all application processing with local software on the personal computer. Now in the web era (and with wikis in particular), the application again is strongly centrally controlled, with only technical functionality distributed to the PC. In the early 1990s a popular solution to link PCs with the mainframes was the Irma board, an expansion card that plugged into a PC and connected to the controller through a coaxial cable. 
IRMA also allows file transfers between the PC and the mainframe. One of the first groups to write and provide operating system support for the 3270 and its early predecessors was the University of Michigan, which created the Michigan Terminal System (MTS) to make the hardware useful outside the manufacturer's own software. MTS was the default OS at Michigan for many years, and was still used at Michigan well into the 1990s. Many manufacturers, such as GTE, Hewlett Packard, Honeywell/Incoterm Div, Memorex, ITT Courier and Teletype/AT&T, created 3270-compatible terminals, or adapted ASCII terminals such as the HP 2640 series to have a similar block-mode capability that would transmit a screen at a time, with some form-validation capability. Modern applications are sometimes built upon legacy 3270 applications, using software utilities to capture (screen scraping) screens and transfer the data to web pages or GUI interfaces. The IBM 3270 display terminal subsystem consists of displays, printers and controllers. Optional features for the 3275 and 3277 are the "selector-pen" or light pen, an ASCII rather than EBCDIC character set, an audible alarm, and a keylock for the keyboard. A "keyboard numeric lock" was available and would lock the keyboard if the operator attempted to enter non-numeric data into a field defined as numeric. Later an "Operator Identification Card Reader" was added, which could read information encoded on a magnetic stripe card. Generally, 3277 models allow only upper-case input, except for the mixed EBCDIC/APL or "text" keyboards, which have lower case. Lower-case capability and dead keys, at first a simple RPQ ("Request Price Quotation", tailored on request at extra cost), were only added in the 3278 and 3279 models. A version of the IBM PC called the 3270 PC, released in October 1983, includes 3270 terminal emulation. Later, the 3270 PC/G (graphics) and 3270 PC/GX (extended graphics) followed. The IBM 3279, introduced in 1979, was IBM's first colour terminal. IBM initially announced four models, and later added a fifth model for use as a processor console. The 3279 was widely used as an IBM mainframe terminal before PCs became commonly used for the purpose. It was part of the 3270 series, using the 3270 data stream. Terminals could be connected to a 3274 controller, either channel connected to an IBM mainframe or linked via an SDLC (Synchronous Data Link Control) link. In the Systems Network Architecture (SNA) protocol these terminals were logical unit type 2 (LU2). The basic model 2 used red and green for input fields, and blue and white for output fields. However, there were other models with seven colours and different screen sizes, and one kind had a loadable character set that could be used to show graphics. The IBM 3279 with its graphics software support, Graphical Data Display Manager (GDDM), was designed at IBM's Hursley Development Laboratory, near Winchester, England. The 3180 was a monochrome display, introduced on March 20, 1984, that the user could configure for several different basic and extended display modes; all of the basic modes have a primary screen size of 24x80. Modes 2 and 2+ have a secondary size of 24x80, 3 and 3+ have a secondary size of 32x80, 4 and 4+ have a secondary size of 43x80, and 5 and 5+ have a secondary size of 27x132. An application can override the primary and alternate screen sizes for the extended mode. The 3180 also supported a single explicit partition that could be reconfigured under application control. 
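Those geometries translate directly into the buffer sizes a host application must address. A quick sketch tabulating the modes listed above (the mode labels and dimensions follow the text; the code itself is illustrative):

    # Character-buffer sizes implied by the 3180's alternate screen geometries.
    modes = {"2/2+": (24, 80), "3/3+": (32, 80),
             "4/4+": (43, 80), "5/5+": (27, 132)}
    for mode, (rows, cols) in modes.items():
        print(f"modes {mode}: {rows}x{cols} = {rows * cols} cells")
    # Only the 24x80 geometry matches the original 1,920-cell 3277 buffer;
    # the others need the larger buffer addressing discussed later.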
The original 3192 was a monochrome display with the same characteristics as the 3180. By 1994 the "3174 Establishment Controller" supported features such as attachment to multiple hosts via token ring, Ethernet, or X.25 in addition to the standard channel attach or SDLC, and terminal attachment via twisted pair, token ring or Ethernet in addition to co-ax. They also support attachment of asynchronous ASCII terminals, printers, and plotters alongside 3270 devices. There were also specialized models. As with the 3179 ("no 3179 terminals, other than the 3179-G, can show graphics"), the 3279 and 3472 had "G" (graphics) models. The IBM 3179-G, released in March 1984, is an IBM mainframe computer terminal providing 80×24 or 80×32 characters plus graphics. 3179-G terminals combine text and graphics as separate layers on the screen. Although the text and graphics appear combined on the screen, the text layer actually sits over the graphics layer. The text layer contains the usual 3270-style cells which display characters (letters, numbers, symbols, or invisible control characters). The graphics layer is an area of 720×384 pixels. 'All Points Addressable' or 'vector graphics' is used to paint each pixel in one of sixteen colors. As well as being separate layers on the screen, the text and graphics layers are sent to the display in separate data streams, making them completely independent. The G10 model is a standard 122-key typewriter keyboard, while the G20 model offers APL on the same layout. It is compatible with IBM System/370, the IBM 4300 series, 303x, 308x, the IBM 3090, and the IBM 9370. The 3279-G has a capability called "Extended Data Stream" (EDS). Documentation for the SAS software package says "The ability to do graphics on a 3270 terminal implies that it is an EDS device." The IBM 3472-G has Native Vector Graphics capability. The IBM 3270 display terminal subsystem was designed and developed by IBM's Kingston, New York, laboratory (which later closed in the mid-1990s). The printers were developed by the Endicott, New York, laboratory. As the subsystem expanded, the 3276 display-controller was developed by the Fujisawa laboratory, Japan, and later the Yamato laboratory; and the 3279 colour display and 3287 colour printer by the Hursley, UK, laboratory. The subsystem products were manufactured in Kingston (displays and controllers), Endicott (printers), and Greenock, Scotland, UK, (most products) and shipped to users in the U.S. and worldwide. 3278 terminals continued to be manufactured in Hortolândia, near Campinas, Brazil, until the late 1980s, with their internals redesigned by a local engineering team using modern CMOS technology while retaining the external look and feel. Telnet 3270, or tn3270, describes both the process of sending and receiving 3270 data streams using the telnet protocol and the software that emulates a 3270-class terminal communicating by that process. tn3270 allows a 3270 terminal emulator to communicate over a TCP/IP network instead of an SNA network. Telnet 3270 can be used for either terminal or print connections. Standard telnet clients cannot be used as a substitute for tn3270 clients, as they use fundamentally different techniques for exchanging data. The 3275/3277/3284/3286 character set for US English EBCDIC maps each character to an equivalent Unicode code point (optional character sets were available for US ASCII, and for UK, French, German, and Italian EBCDIC). On the 3275 and 3277 terminals without the text feature, lower-case characters display as upper case. 
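The character-set table itself is not reproduced here, but the flavour of an EBCDIC-to-Unicode mapping can be explored with a stock codec. A sketch assuming Python's built-in cp037 (EBCDIC US/Canada) codec, which approximates, but does not exactly match, the 3270 set:

    # Map a few EBCDIC code points to their Unicode equivalents via cp037.
    for ebcdic in (0x40, 0x4B, 0x5B, 0xC1, 0xD1, 0xE2, 0xF0):
        ch = bytes([ebcdic]).decode("cp037")
        print(f"EBCDIC X'{ebcdic:02X}' -> U+{ord(ch):04X} ({ch!r})")

Running this shows, for example, that X'40' is the space character and X'C1' is the letter A.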
NL, EM, DUP, and FM control characters display and print as 5, 9, *, and ; characters, respectively, except by the printer when WCC or CCC bits 2 and 3 = '00'b, in which case NL and EM serve their control function and do not print. Data sent to the 3270 consists of "commands" and "orders". Commands instruct the 3270 control unit to perform some action on a specified device, such as a read or write. Orders are sent as part of the data stream to control the format of the device buffer. The following description applies to the 3271, 3272, and 3275 control units; later models of the 3270 have additional capabilities. The data sent by Write or Erase/Write consists of the command code itself followed by a "Write Control Character" (WCC), optionally followed by a buffer containing orders or data (or both). The WCC controls the operation of the device. Bits may start printer operation and specify a print format. Other bit settings will sound the audible alarm if installed, unlock the keyboard to allow operator entry, or reset all the Modified Data Tags in the device buffer. Orders consist of the order code byte followed by zero to three bytes of variable information. The original 3277 and 3275 displays used an 8-bit field attribute byte of which five bits were defined. Later models include "base colour": "In base color mode, the protection and intensity bits are used in combination to select among four colors: normally white, red, blue, and green; the protection bits retain their protection functions as well as determining color." Still later models used "extended attributes" to add support for seven colours, blinking, reverse video, underscoring, field outlining, field validation, and programmed symbols. In addition, later models added character attributes, which could establish, e.g., color for individual characters without starting a new field or taking up a screen position. 3270 displays and printers have a buffer containing one byte for every screen position. For example, a 3277 model 2 featured a screen size of 24 rows of 80 columns for a buffer size of 1920 bytes. Bytes are addressed from zero to the screen size minus one, in this example 1919. "There is a fixed relationship between each ... buffer storage location and its position on the display screen." Most orders start operation at the "current" buffer address, and executing an order or writing data will update this address. The buffer address can be set directly using the "Set Buffer Address" (SBA) order, often followed by "Start Field". For a device with a 1920-character display a twelve-bit address is sufficient. Later 3270s with larger screen sizes use fourteen or sixteen bits. Addresses are encoded in orders in two bytes. For twelve-bit addresses the high-order two bits of each byte are normally set to form valid EBCDIC (or ASCII) characters. For example, address 0 is coded as X'4040', or space-space, and address 1919 is coded as X'5D7F', or ')"'. Programmers hand-coding panels usually keep the table of addresses from the 3270 Component Description or the 3270 Reference Card handy. For fourteen- and sixteen-bit addresses, the address uses contiguous bits in two bytes. The following data stream writes an attribute in row 24, column 1, writes the (protected) characters '> ' in row 24, columns 2 and 3, and creates an unprotected field on row 24 from columns 5-79. Because the buffer wraps around, an attribute is placed on row 24, column 80 to terminate the input field. 
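A minimal sketch of how that stream could be assembled in Python follows. The SBA (X'11') and Start Field (X'1D') order codes and the 64-entry address-translation table follow the 3270 data stream definition; the Erase/Write command byte, the WCC value, and the attribute bytes (X'60' protected, X'40' unprotected) are conventional illustrative choices rather than the only valid encoding:

    # 64-entry table: each 6-bit value maps to a byte whose high-order bits
    # make it a valid EBCDIC graphic character.
    ADDR_TABLE = bytes([
        0x40, 0xC1, 0xC2, 0xC3, 0xC4, 0xC5, 0xC6, 0xC7,
        0xC8, 0xC9, 0x4A, 0x4B, 0x4C, 0x4D, 0x4E, 0x4F,
        0x50, 0xD1, 0xD2, 0xD3, 0xD4, 0xD5, 0xD6, 0xD7,
        0xD8, 0xD9, 0x5A, 0x5B, 0x5C, 0x5D, 0x5E, 0x5F,
        0x60, 0x61, 0xE2, 0xE3, 0xE4, 0xE5, 0xE6, 0xE7,
        0xE8, 0xE9, 0x6A, 0x6B, 0x6C, 0x6D, 0x6E, 0x6F,
        0xF0, 0xF1, 0xF2, 0xF3, 0xF4, 0xF5, 0xF6, 0xF7,
        0xF8, 0xF9, 0x7A, 0x7B, 0x7C, 0x7D, 0x7E, 0x7F,
    ])

    def sba(address: int) -> bytes:
        """Set Buffer Address order for a twelve-bit address (0-4095)."""
        return bytes([0x11, ADDR_TABLE[(address >> 6) & 0x3F],
                      ADDR_TABLE[address & 0x3F]])

    PROT, UNPROT = 0x60, 0x40            # illustrative attribute bytes
    stream  = bytes([0xF5, 0xC3])        # Erase/Write + WCC (illustrative)
    stream += sba(23 * 80)               # row 24, column 1 (address 1840)
    stream += bytes([0x1D, PROT])        # Start Field: protected attribute
    stream += "> ".encode("cp037")       # '>' and ' ' in columns 2 and 3
    stream += bytes([0x1D, UNPROT])      # column 4: start the unprotected field
    stream += sba(1919)                  # row 24, column 80
    stream += bytes([0x1D, PROT])        # attribute terminating the input field
    assert sba(1919) == b"\x11\x5D\x7F"  # the X'5D7F' encoding noted above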
This data stream would normally be written using an Erase/Write command, which would set undefined positions on the screen to X'00'. Values are given in hexadecimal.
https://en.wikipedia.org/wiki?curid=15154
I. M. Pei Ieoh Ming Pei, FAIA, RIBA (26 April 1917 – 16 May 2019) was a Chinese-American architect. Born in Guangzhou but raised in Hong Kong and Shanghai, Pei drew inspiration at an early age from the garden villas at Suzhou, the traditional retreat of the scholar-gentry to which his family belonged. In 1935, he moved to the United States and enrolled in the University of Pennsylvania's architecture school, but he quickly transferred to the Massachusetts Institute of Technology. He was unhappy with the focus at both schools on Beaux-Arts architecture, and spent his free time researching emerging architects, especially Le Corbusier. After graduating, he joined the Harvard Graduate School of Design (GSD) and became a friend of the Bauhaus architects Walter Gropius and Marcel Breuer. In 1948, Pei was recruited by New York City real estate magnate William Zeckendorf, for whom he worked for seven years before establishing an independent design firm in 1955, I. M. Pei & Associates. In 1966 that became I. M. Pei & Partners, and in 1989 it became Pei Cobb Freed & Partners. Pei retired from full-time practice in 1990. In his retirement, he worked as an architectural consultant, primarily with his sons' architectural firm Pei Partnership Architects. Pei's first major recognition came with the Mesa Laboratory at the National Center for Atmospheric Research in Colorado (designed in 1961, and completed in 1967). His new stature led to his selection as chief architect for the John F. Kennedy Library in Massachusetts. He went on to design Dallas City Hall and the East Building of the National Gallery of Art. He returned to China for the first time in 1975 to design a hotel at Fragrant Hills, and fifteen years later designed the Bank of China Tower, a skyscraper in Hong Kong for the Bank of China. In the early 1980s, Pei was the focus of controversy when he designed a glass-and-steel pyramid for the Musée du Louvre in Paris. He later returned to the world of the arts by designing the Morton H. Meyerson Symphony Center in Dallas; the Miho Museum in Shigaraki, Japan, near Kyoto, and the chapel of the junior and high school MIHO Institute of Aesthetics; the Suzhou Museum in Suzhou; the Museum of Islamic Art in Qatar; and the Grand Duke Jean Museum of Modern Art (Mudam) in Luxembourg. Pei won a wide variety of prizes and awards in the field of architecture, including the AIA Gold Medal in 1979, the first Praemium Imperiale for Architecture in 1989, and the Lifetime Achievement Award from the Cooper-Hewitt, National Design Museum in 2003. In 1983, he won the Pritzker Prize, which is sometimes referred to as the Nobel Prize of architecture. Pei's ancestry traces back to the Ming dynasty, when his family moved from Anhui province to Suzhou. The family made their wealth in medicinal herbs, then proceeded to join the ranks of the scholar-gentry, a class which stressed the importance of helping the less fortunate. Ieoh Ming Pei was born on 26 April 1917 to Tsuyee and Lien Kwun, and the family moved to Hong Kong one year later. The family eventually included five children. As a boy, Pei was very close to his mother, a devout Buddhist who was recognized for her skills as a flautist. She invited him (and not his brothers or sisters) to join her on meditation retreats. His relationship with his father was less intimate. Their interactions were respectful but distant. 
Pei's ancestors' success meant that the family lived in the upper echelons of society, but Pei said his father was "not cultivated in the ways of the arts". The younger Pei, drawn more to music and other cultural forms than to his father's domain of banking, explored art on his own. "I have cultivated myself," he said later. When Pei was 10, his father received a promotion and relocated with his family to Shanghai. Pei attended St. John's Middle School, run by Anglican missionaries. Academic discipline was rigorous; students were allowed only one half-day each month for leisure. Pei enjoyed playing billiards and watching Hollywood movies, especially those of Buster Keaton and Charlie Chaplin. He also learned rudimentary English skills by reading the Bible and novels by Charles Dickens. Shanghai's many international elements gave it the name "Paris of the East". The city's global architectural flavors had a profound influence on Pei, from The Bund waterfront area to the Park Hotel, built in 1934. He was also impressed by the many gardens of Suzhou, where he spent the summers with extended family and regularly visited a nearby ancestral shrine. The Shizilin Garden, built in the 14th century by a Buddhist monk and owned by Pei's uncle Bei Runsheng, was especially influential. Its unusual rock formations, stone bridges, and waterfalls remained etched in Pei's memory for decades. He spoke later of his fondness for the garden's blending of natural and human-built structures. Soon after the move to Shanghai, Pei's mother developed cancer. She was prescribed opium as a pain reliever, and the task of preparing her pipe fell to Pei. She died shortly after his thirteenth birthday, and he was profoundly upset. The children were sent to live with extended family; their father became more consumed by his work and more physically distant. Pei said: "My father began living his own separate life pretty soon after that." His father later married a woman named Aileen, who moved to New York later in her life. As Pei neared the end of his secondary education, he decided to study at a university. He was accepted at a number of schools, but decided to enroll at the University of Pennsylvania. Pei's choice had two roots. While studying in Shanghai, he had closely examined the catalogs for various institutions of higher learning around the world. The architectural program at the University of Pennsylvania stood out to him. The other major factor was Hollywood. Pei was fascinated by the representations of college life in the films of Bing Crosby, which differed tremendously from the academic atmosphere in China. "College life in the U.S. seemed to me to be mostly fun and games", he said in 2000. "Since I was too young to be serious, I wanted to be part of it ... You could get a feeling for it in Bing Crosby's movies. College life in America seemed very exciting to me. It's not real, we know that. Nevertheless, at that time it was very attractive to me. I decided that was the country for me." Pei added that "Crosby's films in particular had a tremendous influence on my choosing the United States instead of England to pursue my education." In 1935 Pei boarded a boat and sailed to San Francisco, then traveled by train to Philadelphia. What he found once he arrived, however, differed vastly from his expectations. Professors at the University of Pennsylvania based their teaching in the Beaux-Arts style, rooted in the classical traditions of ancient Greece and Rome. 
Pei was more intrigued by modern architecture, and also felt intimidated by the high level of drafting proficiency shown by other students. He decided to abandon architecture and transferred to the engineering program at the Massachusetts Institute of Technology (MIT). Once he arrived, however, the dean of the architecture school commented on his eye for design and convinced Pei to return to his original major. MIT's architecture faculty was also focused on the Beaux-Arts school, and Pei found himself uninspired by the work. In the library he found three books by the Swiss-French architect Le Corbusier. Pei was inspired by the innovative designs of the new International style, characterized by simplified form and the use of glass and steel materials. Le Corbusier visited MIT in November 1935, an occasion which powerfully affected Pei: "The two days with Le Corbusier, or 'Corbu' as we used to call him, were probably the most important days in my architectural education." Pei was also influenced by the work of the U.S. architect Frank Lloyd Wright. In 1938 he drove to Spring Green, Wisconsin, to visit Wright's famous Taliesin building. After waiting for two hours, however, he left without meeting Wright. Although he disliked the Beaux-Arts emphasis at MIT, Pei excelled in his studies. "I certainly don't regret the time at MIT," he said later. "There I learned the science and technique of building, which is just as essential to architecture." Pei received his B.Arch. degree in 1940; his thesis was titled "Standardized Propaganda Units for War Time and Peace Time China." While visiting New York City in the late 1930s, Pei met a Wellesley College student named Eileen Loo. They began dating and married in the spring of 1942. She enrolled in the landscape architecture program at Harvard University, and Pei was thus introduced to members of the faculty at Harvard's Graduate School of Design (GSD). He was excited by the lively atmosphere and joined the GSD in December 1942. Less than a month later, Pei suspended his work at Harvard to join the National Defense Research Committee, which coordinated scientific research into U.S. weapons technology during World War II. Pei's background in architecture was seen as a considerable asset; one member of the committee told him: "If you know how to build you should also know how to destroy." The fight against Germany was ending, so he focused on the Pacific War. The U.S. realized that its bombs used against the stone buildings of Europe would be ineffective against Japanese cities, mostly constructed from wood and paper; Pei was assigned to work on incendiary bombs. Pei spent two and a half years with the NDRC, but revealed few details of his work. In 1945 Eileen gave birth to a son, T'ing Chung; she withdrew from the landscape architecture program in order to care for him. Pei returned to Harvard in the autumn of 1945, and received a position as assistant professor of design. The GSD was developing into a hub of resistance to the Beaux-Arts orthodoxy. At the center were members of the Bauhaus, a European architectural movement that had advanced the cause of modernist design. The Nazi regime had condemned the Bauhaus school, and its leaders left Germany. Two of these, Walter Gropius and Marcel Breuer, took positions at the Harvard GSD. Their iconoclastic focus on modern architecture appealed to Pei, and he worked closely with both men. One of Pei's design projects at the GSD was a plan for an art museum in Shanghai. 
He wanted to create a mood of Chinese authenticity in the architecture without using traditional materials or styles. The design was based on straight modernist structures, organized around a central courtyard garden, with other similar natural settings arranged nearby. It was very well received; Gropius, in fact, called it "the best thing done in [my] master class." Pei received his M.Arch. degree in 1946, and taught at Harvard for another two years. In the spring of 1948 Pei was recruited by New York real estate magnate William Zeckendorf to join a staff of architects for his firm of Webb and Knapp to design buildings around the country. Pei found Zeckendorf's personality the opposite of his own; his new boss was known for his loud speech and gruff demeanor. Nevertheless, they became good friends and Pei found the experience personally enriching. Zeckendorf was well connected politically, and Pei enjoyed learning about the social world of New York's city planners. His first project for Webb and Knapp was an apartment building with funding from the Housing Act of 1949. Pei's design was based on a circular tower with concentric rings. The areas closest to the supporting pillar handled utilities and circulation; the apartments themselves were located toward the outer edge. Zeckendorf loved the design and even showed it off to Le Corbusier when they met. The cost of such an unusual design was too high, however, and the building never moved beyond the model stage. Pei finally saw his architecture come to life in 1949, when he designed a two-story corporate building for Gulf Oil in Atlanta, Georgia. The building was demolished in February 2013, although the front facade was to be retained as part of an apartment development. His use of marble for the exterior curtain wall brought praise from the journal "Architectural Forum". In the beginning of his career, Pei's designs echoed the work of Mies van der Rohe, as shown in his own weekend house in Katonah, New York, built in 1952. Soon Pei was so inundated with projects that he asked Zeckendorf for assistants, whom he chose from among his associates at the GSD, including Henry N. Cobb and Ulrich Franzen. They set to work on a variety of proposals, including the Roosevelt Field Shopping Mall. The team also redesigned the Webb and Knapp office building, transforming Zeckendorf's office into a circular space with teak walls and a glass clerestory. They also installed a control panel into the desk that allowed their boss to control the lighting in his office. The project took one year and exceeded its budget, but Zeckendorf was delighted with the results. In 1952 Pei and his team began work on a series of projects in Denver, Colorado. The first of these was the Mile High Center, which compressed the core building into less than 25 percent of the total site; the rest is adorned with an exhibition hall and fountain-dotted plazas. One block away, Pei's team also redesigned Denver's Courthouse Square, which combined office spaces, commercial venues, and hotels. These projects helped Pei conceptualize architecture as part of the larger urban geography. "I learned the process of development," he said later, "and about the city as a living organism." These lessons, he said, became essential for later projects. Pei and his team also designed a united urban area for Washington, D.C., called L'Enfant Plaza (named for French-American architect Pierre Charles L'Enfant). 
Pei's associate Araldo Cossutta was the lead architect for the plaza's North Building (955 L'Enfant Plaza SW) and South Building (490 L'Enfant Plaza SW). Vlastimil Koubek was the architect for the East Building (L'Enfant Plaza Hotel, located at 480 L'Enfant Plaza SW), and for the Center Building (475 L'Enfant Plaza SW; now the United States Postal Service headquarters). The team set out with a broad vision that was praised by both "The Washington Post" and "Washington Star" (which rarely agreed on anything), but funding problems forced revisions and a significant reduction in scale. In 1955 Pei's group took a step toward institutional independence from Webb and Knapp by establishing a new firm called I. M. Pei & Associates. (The name changed later to I. M. Pei & Partners.) They gained the freedom to work with other companies, but continued working primarily with Zeckendorf. The new firm distinguished itself through the use of detailed architectural models. They took on the Kips Bay residential area on the east side of Manhattan, where Pei set up Kips Bay Towers, two long apartment towers with recessed windows (to provide shade and privacy) in a neat grid, adorned with rows of trees. Pei involved himself in the construction process at Kips Bay, even inspecting the bags of cement to check for consistency of color. The company continued its urban focus with the Society Hill project in central Philadelphia. Pei designed the Society Hill Towers, a three-building residential block injecting cubist design into the 18th-century milieu of the neighborhood. As with previous projects, abundant green spaces were central to Pei's vision, which also added traditional townhouses to aid the transition from classical to modern design. From 1958 to 1963 Pei and Ray Affleck developed a key downtown block of Montreal in a phased process that involved one of Pei's most admired structures in the Commonwealth, the cruciform tower known as the Royal Bank Plaza (Place Ville Marie). According to "The Canadian Encyclopedia", "its grand plaza and lower office buildings, designed by internationally famous US architect I. M. Pei, helped to set new standards for architecture in Canada in the 1960s ... The tower's smooth aluminum and glass surface and crisp unadorned geometric form demonstrate Pei's adherence to the mainstream of 20th-century modern design." Although these projects were satisfying, Pei wanted to establish an independent name for himself. In 1959 he was approached by MIT to design a building for its Earth science program. The Green Building continued the grid design of Kips Bay and Society Hill. The pedestrian walkway at the ground floor, however, was prone to sudden gusts of wind, which embarrassed Pei. "Here I was from MIT," he said, "and I didn't know about wind-tunnel effects." At the same time, he designed the Luce Memorial Chapel at Tunghai University in Taichung, Taiwan. The soaring structure, commissioned by the same organisation that had run his middle school in Shanghai, broke severely from the cubist grid patterns of his urban projects. The challenge of coordinating these projects took an artistic toll on Pei. He found himself responsible for acquiring new building contracts and supervising the plans for them. As a result, he felt disconnected from the actual creative work. "Design is something you have to put your hand to," he said. "While my people had the luxury of doing one job at a time, I had to keep track of the whole enterprise." 
Pei's dissatisfaction reached its peak at a time when financial problems began plaguing Zeckendorf's firm. I. M. Pei and Associates officially broke from Webb and Knapp in 1960, which benefited Pei creatively but pained him personally. He had developed a close friendship with Zeckendorf, and both men were sad to part ways. Pei was able to return to hands-on design when he was approached in 1961 by Walter Orr Roberts to design the new Mesa Laboratory for the National Center for Atmospheric Research outside Boulder, Colorado. The project differed from Pei's earlier urban work; it would rest in an open area in the foothills of the Rocky Mountains. He drove with his wife around the region, visiting assorted buildings and surveying the natural environs. He was impressed by the United States Air Force Academy in Colorado Springs, but felt it was "detached from nature". The conceptualization stages were important for Pei, presenting a need and an opportunity to break from the Bauhaus tradition. He later recalled the long periods of time he spent in the area: "I recalled the places I had seen with my mother when I was a little boy—the mountaintop Buddhist retreats. There in the Colorado mountains, I tried to listen to the silence again—just as my mother had taught me. The investigation of the place became a kind of religious experience for me." Pei also drew inspiration from the Mesa Verde cliff dwellings of the Ancestral Puebloans; he wanted the buildings to exist in harmony with their natural surroundings. To this end, he called for a rock-treatment process that could color the buildings to match the nearby mountains. He also set the complex back on the mesa overlooking the city, and designed the approaching road to be long, winding, and indirect. Roberts disliked Pei's initial designs, referring to them as "just a bunch of towers". Roberts intended his comments as typical of scientific experimentation, rather than artistic critique; still, Pei was frustrated. His second attempt, however, fit Roberts' vision perfectly: a spaced-out series of clustered buildings, joined by lower structures and complemented by two underground levels. The complex uses many elements of cubist design, and the walkways are arranged to increase the probability of casual encounters among colleagues. Once the laboratory was built, several problems with its construction became apparent. Leaks in the roof caused difficulties for researchers, and the shifting of clay soil beneath caused cracks in the buildings which were expensive to repair. Still, both architect and project manager were pleased with the final result. Pei referred to the NCAR complex as his "breakout building", and he remained a friend of Roberts until the scientist's death in 1990. The success of NCAR brought renewed attention to Pei's design acumen. He was recruited to work on a variety of projects, including the S. I. Newhouse School of Public Communications at Syracuse University, the Everson Museum of Art in Syracuse, New York, the Sundrome terminal at John F. Kennedy International Airport in New York City, and dormitories at New College of Florida. After President John F. Kennedy was assassinated in November 1963, his family and friends discussed how to construct a library that would serve as a fitting memorial. A committee was formed to advise Kennedy's widow Jacqueline, who would make the final decision. The group deliberated for months and considered many famous architects. Eventually, Kennedy chose Pei to design the library, based on two considerations. 
First, she appreciated the variety of ideas he had used for earlier projects. "He didn't seem to have just one way to solve a problem," she said. "He seemed to approach each commission thinking only of it and then develop a way to make something beautiful." Ultimately, however, Kennedy made her choice based on her personal connection with Pei. Calling it "really an emotional decision", she explained: "He was so full of promise, like Jack; they were born in the same year. I decided it would be fun to take a great leap with him." The project was plagued with problems from the outset. The first was scope. President Kennedy had begun considering the structure of his library soon after taking office, and he wanted to include archives from his administration, a museum of personal items, and a political science institute. After the assassination, the list expanded to include a fitting memorial tribute to the slain president. The variety of necessary inclusions complicated the design process and caused significant delays. Pei's first proposed design included a large glass pyramid that would fill the interior with sunlight, meant to represent the optimism and hope that Kennedy's administration had symbolized for so many in the United States. Mrs. Kennedy liked the design, but resistance began in Cambridge, the first proposed site for the building, as soon as the project was announced. Many community members worried that the library would become a tourist attraction, causing particular problems with traffic congestion. Others worried that the design would clash with the architectural feel of nearby Harvard Square. By the mid-1970s Pei had proposed a new design, but the library's opponents resisted every effort. These events pained Pei, who had sent all three of his sons to Harvard, and although he rarely discussed his frustration, it was evident to his wife. "I could tell how tired he was by the way he opened the door at the end of the day," she said. "His footsteps were dragging. It was very hard for I. M. to see that so many people didn't want the building." Finally the project moved to Columbia Point, near the University of Massachusetts Boston. The new site was less than ideal; it was located on an old landfill, and just over a large sewage pipe. Pei's architectural team added more fill to cover the pipe and developed an elaborate ventilation system to conquer the odor. A new design was unveiled, combining a large square glass-enclosed atrium with a triangular tower and a circular walkway. The John F. Kennedy Presidential Library and Museum was dedicated on 20 October 1979. Critics generally liked the finished building, but the architect himself was unsatisfied. The years of conflict and compromise had changed the nature of the design, and Pei felt that the final result lacked its original passion. "I wanted to give something very special to the memory of President Kennedy," he said in 2000. "It could and should have been a great project." Pei's work on the Kennedy project boosted his reputation as an architect of note. The Pei Plan was a failed urban redevelopment initiative designed for downtown Oklahoma City, Oklahoma, in 1964. The plan called for the demolition of hundreds of old downtown structures in favor of renewed parking, office building, and retail developments, in addition to public projects such as the Myriad Convention Center and the Myriad Botanical Gardens. It was the dominant template for downtown development in Oklahoma City from its inception through the 1970s. 
The plan generated mixed results and opinion, largely succeeding in re-developing office building and parking infrastructure but failing to attract its anticipated retail and residential development. Significant public resentment also developed as a result of the destruction of multiple historic structures. As a result, Oklahoma City's leadership avoided large-scale urban planning for downtown throughout the 1980s and early 1990s, until the passage of the Metropolitan Area Projects (MAPS) initiative in 1993. Another city which turned to Pei for urban renewal during this time was Providence, Rhode Island. In the late 1960s, Providence hired Pei to redesign Cathedral Square, a once-bustling civic center which had become neglected and empty, as part of an ambitious larger plan to redesign downtown. Pei's new plaza, modeled after the Greek Agora marketplace, opened in 1972. Unfortunately, the city ran out of money before Pei's vision could be fully realized. Also, recent construction of a low-income housing complex and Interstate 95 had changed the neighborhood's character permanently. In 1974, The Providence Evening Bulletin called Pei's new plaza a "conspicuous failure". By 2016, media reports characterized the plaza as a neglected, little-visited "hidden gem". In 1974, the city of Augusta, Georgia, turned to Pei and his firm for downtown revitalization. The Chamber of Commerce building and Bicentennial Park were completed from his plan. In 1976, Pei designed a distinctive modern penthouse that was added to the roof of architect William Lee Stoddart's historic Lamar Building, designed in 1916. The penthouse is a modern take on a pyramid, predating Pei's more famous Louvre Pyramid. It has been criticized by architectural critic James Howard Kunstler as an "Eyesore of the Month", comparing it to Darth Vader's helmet. In 1980, Pei and his company designed the Augusta Civic Center, now known as the James Brown Arena. Kennedy's assassination also led indirectly to another commission for Pei's firm. In 1964 the acting mayor of Dallas, Erik Jonsson, began working to change the community's image. Dallas was known and disliked as the city where the president had been killed, but Jonsson began a program designed to initiate a community renewal. One of the goals was a new city hall, which could be a "symbol of the people". Jonsson, a co-founder of Texas Instruments, learned about Pei from his associate Cecil Howard Green, who had recruited the architect for MIT's Earth Sciences building. Pei's approach to the new Dallas City Hall mirrored those of other projects; he surveyed the surrounding area and worked to make the building fit. In the case of Dallas, he spent days meeting with residents of the city and was impressed by their civic pride. He also found that the skyscrapers of the downtown business district dominated the skyline, and sought to create a building which could face the tall buildings and represent the importance of the public sector. He spoke of creating "a public-private dialogue with the commercial high-rises". Working with his associate Theodore Musho, Pei developed a design centered on a building with a top much wider than the bottom; the facade leans at an angle of 34 degrees, which shades the building from the Texas sun. A plaza stretches out before the building, and a series of support columns holds it up. It was influenced by Le Corbusier's High Court building in Chandigarh, India; Pei sought to use the significant overhang to unify the building and plaza. 
The project cost much more than initially expected, and took 11 years to complete. Revenue was secured in part by including a subterranean parking garage. The interior of the city hall is large and spacious; windows in the ceiling above the eighth floor fill the main space with light. The city of Dallas received the building well, and a local television news crew found unanimous approval of the new city hall when it officially opened to the public in 1978. Pei himself considered the project a success, even as he worried about the arrangement of its elements. He said: "It's perhaps stronger than I would have liked; it's got more strength than finesse." He felt that his relative lack of experience left him without the necessary design tools to refine his vision, but the community liked the city hall enough to invite him back. Over the years he went on to design five additional buildings in the Dallas area.

While Pei and Musho were coordinating the Dallas project, their associate Henry Cobb had taken the helm for a commission in Boston. John Hancock Insurance chairman Robert Slater hired I. M. Pei & Partners to design a building that could overshadow the Prudential Tower, erected by their rival. After the firm's first plan was discarded due to a need for more office space, Cobb developed a new plan around a towering parallelogram, slanted away from the Trinity Church and accented by a wedge cut into each narrow side. To minimize the visual impact, the building was covered in large reflective glass panels; Cobb said this would make the building a "background and foil" to the older structures around it. When the Hancock Tower was finished in 1976, it was the tallest building in New England.

Serious issues of execution became evident in the tower almost immediately. Many glass panels fractured in a windstorm during construction in 1973. Some detached and fell to the ground, causing no injuries but sparking concern among Boston residents. In response, the entire tower was reglazed with smaller panels. This significantly increased the cost of the project. Hancock sued the glass manufacturers, Libbey-Owens-Ford, as well as I. M. Pei & Partners, for submitting plans that were "not good and workmanlike". LOF countersued Hancock for defamation, accusing Pei's firm of poor use of their materials; I. M. Pei & Partners sued LOF in return. All three companies settled out of court in 1981. The project became an albatross for Pei's firm. Pei himself refused to discuss it for many years. The pace of new commissions slowed and the firm's architects began looking overseas for opportunities. Cobb worked in Australia and Pei took on jobs in Singapore, Iran, and Kuwait. Although it was a difficult time for everyone involved, Pei later reflected with patience on the experience. "Going through this trial toughened us," he said. "It helped to cement us as partners; we did not give up on each other."

In the mid-1960s, directors of the National Gallery of Art in Washington, D.C., declared the need for a new building. Paul Mellon, a primary benefactor of the gallery and a member of its building committee, set to work with his assistant J. Carter Brown (who became gallery director in 1969) to find an architect. The new structure would be located to the east of the original building, and tasked with two functions: offering a large space for public appreciation of various popular collections, and housing office space as well as archives for scholarship and research.
They likened the scope of the new facility to the Library of Alexandria. After inspecting Pei's work at the Des Moines Art Center in Iowa and the Johnson Museum at Cornell University, they offered him the commission. Pei took to the project with vigor, and set to work with two young architects he had recently recruited to the firm, William Pedersen and Yann Weymouth. Their first obstacle was the unusual shape of the building site, a trapezoid of land at the intersection of Constitution and Pennsylvania Avenues. Inspiration struck Pei in 1968, when he scrawled a rough diagram of two triangles on a scrap of paper. The larger building would be the public gallery; the smaller would house offices and archives. This triangular shape became a singular vision for the architect. As the date for groundbreaking approached, Pedersen suggested to his boss that a slightly different approach would make construction easier. Pei simply smiled and said: "No compromises."

The growing popularity of art museums presented unique challenges to the architecture. Mellon and Pei both expected large crowds of people to visit the new building, and they planned accordingly. To this end, Pei designed a large lobby roofed with enormous skylights. Individual galleries are located along the periphery, allowing visitors to return to the spacious main room after viewing each exhibit. A large mobile sculpture by American artist Alexander Calder was later added to the lobby. Pei hoped the lobby would be exciting to the public in the same way as the central room of the Guggenheim Museum in New York City. The modern museum, he said later, "must pay greater attention to its educational responsibility, especially to the young".

Materials for the building's exterior were chosen with careful precision. To match the look and texture of the original gallery's marble walls, builders re-opened the quarry in Knoxville, Tennessee, from which the first batch of stone had been harvested. The project even found and hired Malcolm Rice, a quarry supervisor who had overseen the original 1941 gallery project. The marble was cut into three-inch-thick blocks and arranged over the concrete foundation, with darker blocks at the bottom and lighter blocks on top.

The East Building was honored on 30 May 1978, two days before its public unveiling, with a black-tie party attended by celebrities, politicians, benefactors, and artists. When the building opened, popular opinion was enthusiastic. Large crowds visited the new museum, and critics generally voiced their approval. Ada Louise Huxtable wrote in "The New York Times" that Pei's building was "a palatial statement of the creative accommodation of contemporary art and architecture". The sharp angle of the smaller building has drawn particular praise from the public; over the years it has become stained and worn from the hands of visitors. Some critics disliked the unusual design, however, and criticized the reliance on triangles throughout the building. Others took issue with the large main lobby, particularly its attempt to lure casual visitors. In his review for "Artforum", critic Richard Hennessy described a "shocking fun-house atmosphere" and "aura of ancient Roman patronage". One of the earliest and most vocal critics, however, came to appreciate the new gallery once he saw it in person. Allan Greenberg had scorned the design when it was first unveiled, but wrote later to J. Carter Brown: "I am forced to admit that you are right and I was wrong! The building is a masterpiece."
After U.S. President Richard Nixon made his famous 1972 visit to China, a wave of exchanges took place between the two countries. One of these was a delegation of the American Institute of Architects in 1974, which Pei joined. It was his first trip back to China since leaving in 1935. He was favorably received and returned the welcome with positive comments, and a series of lectures ensued. In one lecture, Pei noted that since the 1950s Chinese architects had been content to imitate Western styles, and he urged his audience to search China's native traditions for inspiration.

In 1978, Pei was asked to initiate a project for his home country. After surveying a number of different locations, Pei fell in love with a valley that had once served as an imperial garden and hunting preserve known as Fragrant Hills. The site housed a decrepit hotel; Pei was invited to tear it down and build a new one. As usual, he approached the project by carefully considering the context and purpose. He considered modernist styles inappropriate for the setting. Thus, he said, it was necessary to find "a third way". After visiting his ancestral home in Suzhou, Pei created a design based on some simple but nuanced techniques he admired in traditional residential Chinese buildings. Among these were abundant gardens, integration with nature, and consideration of the relationship between enclosure and opening. Pei's design included a large central atrium covered by glass panels that functioned much like the large central space in his East Building of the National Gallery. Openings of various shapes in walls invited guests to view the natural scenery beyond. Younger Chinese who had hoped the building would exhibit some of the cubist flavor for which Pei had become known were disappointed, but the new hotel found more favor with government officials and architects.

The hotel, with 325 guest rooms and a four-story central atrium, was designed to fit perfectly into its natural habitat. The trees in the area were of special concern, and particular care was taken to cut down as few as possible. He worked with an expert from Suzhou to preserve and renovate a water maze from the original hotel, one of only five in the country. Pei was also meticulous about the arrangement of items in the garden behind the hotel; he even insisted on transporting rocks from a location in southwest China to suit the natural aesthetic. An associate of Pei's said later that he never saw the architect so involved in a project.

During construction, a series of mistakes collided with the nation's lack of technology to strain relations between architects and builders. Whereas 200 or so workers might have been used for a similar building in the US, the Fragrant Hill project employed over 3,000 workers. This was mostly because the construction company lacked the sophisticated machines used in other parts of the world. The problems continued for months, until Pei had an uncharacteristically emotional moment during a meeting with Chinese officials. He later explained that his actions included "shouting and pounding the table" in frustration. The design staff noticed a difference in the manner of work among the crew after the meeting. As the opening neared, however, Pei found the hotel still needed work. He began scrubbing floors with his wife and ordered his children to make beds and vacuum floors. The project's difficulties took an emotional and physical toll on the Pei family.
The Fragrant Hill Hotel opened on 17 October 1982 but quickly fell into disrepair. A member of Pei's staff returned for a visit several years later and confirmed the dilapidated condition of the hotel. He and Pei attributed this to the country's general unfamiliarity with deluxe buildings. The Chinese architectural community at the time gave the structure little attention, as their interest centered on the work of American postmodernists such as Michael Graves.

As the Fragrant Hill project neared completion, Pei began work on the Jacob K. Javits Convention Center in New York City, for which his associate James Freed served as lead designer. Hoping to create a vibrant community institution in what was then a run-down neighborhood on Manhattan's west side, Freed developed a glass-coated structure with an intricate space frame of interconnected metal rods and spheres. The convention center was plagued from the start by budget problems and construction blunders. City regulations forbade a general contractor from having final authority over the project, so architects and program manager Richard Kahan had to coordinate the wide array of builders, plumbers, electricians, and other workers. The forged steel globes to be used in the space frame came to the site with hairline cracks and other defects: 12,000 were rejected. These and other problems led to media comparisons with the disastrous Hancock Tower. One New York City official blamed Kahan for the difficulties, indicating that the building's architectural flourishes were responsible for delays and financial crises. The Javits Center opened on 3 April 1986, to a generally positive reception. During the inauguration ceremonies, however, neither Freed nor Pei was recognized for their role in the project.

When François Mitterrand was elected President of France in 1981, he laid out an ambitious plan for a variety of construction projects. One of these was the renovation of the Louvre Museum. Mitterrand appointed a civil servant named Émile Biasini to oversee it. After visiting museums in Europe and the United States, including the U.S. National Gallery, he asked Pei to join the team. The architect made three secretive trips to Paris to determine the feasibility of the project; only one museum employee knew why he was there. Pei finally agreed that a reconstruction project was not only possible, but necessary for the future of the museum. He thus became the first foreign architect to work on the Louvre.

The heart of the new design included not only a renovation of the "Cour Napoléon" in the midst of the buildings, but also a transformation of the interiors. Pei proposed a central entrance, not unlike the lobby of the National Gallery East Building, which would link the three major buildings. Below would be a complex of additional floors for research, storage, and maintenance purposes. At the center of the courtyard he designed a glass and steel pyramid, first proposed with the Kennedy Library, to serve as entrance and anteroom skylight. It was mirrored by another inverted pyramid underneath, to reflect sunlight into the room. These designs were partly an homage to the fastidious geometry of the famous French landscape architect André Le Nôtre (1613–1700). Pei also found the pyramid shape best suited for stable transparency, and considered it "most compatible with the architecture of the Louvre, especially with the faceted planes of its roofs".
Biasini and Mitterrand liked the plans, but the scope of the renovation displeased Louvre director André Chabaud. He resigned from his post, complaining that the project was "unfeasible" and posed "architectural risks". The public also reacted harshly to the design, mostly because of the proposed pyramid. One critic called it a "gigantic, ruinous gadget"; another charged Mitterrand with "despotism" for inflicting the "atrocity" on Paris. Pei estimated that 90 percent of Parisians opposed his design. "I received many angry glances in the streets of Paris," he said. Some condemnations carried nationalistic overtones. One opponent wrote: "I am surprised that one would go looking for a Chinese architect in America to deal with the historic heart of the capital of France."

Soon, however, Pei and his team won the support of several key cultural icons, including the conductor Pierre Boulez and Claude Pompidou, widow of former French President Georges Pompidou, after whom another controversial museum was named. In an attempt to soothe public ire, Pei took a suggestion from then-mayor of Paris Jacques Chirac and placed a full-sized cable model of the pyramid in the courtyard. During the four days of its exhibition, an estimated 60,000 people visited the site. Some critics eased their opposition after witnessing the proposed scale of the pyramid. To minimize the impact of the structure, Pei demanded a method of glass production that resulted in clear panes. The pyramid was constructed at the same time as the subterranean levels below, which caused difficulties during the building stages. As they worked, construction teams came upon an abandoned set of rooms containing 25,000 historical items; these were incorporated into the rest of the structure to add a new exhibition zone.

The new Louvre courtyard was opened to the public on 14 October 1988, and the Pyramid entrance was opened the following March. By this time, public opinion had softened on the new installation; a poll found a 56 percent approval rating for the pyramid, with 23 percent still opposed. The newspaper "Le Figaro" had vehemently criticized Pei's design, but later celebrated the tenth anniversary of its magazine supplement at the pyramid. Prince Charles of Britain surveyed the new site with curiosity, and declared it "marvelous, very exciting". A writer in "Le Quotidien de Paris" wrote: "The much-feared pyramid has become adorable." The experience was exhausting for Pei, but also rewarding. "After the Louvre," he said later, "I thought no project would be too difficult." The pyramid achieved further widespread international recognition for its central role in the plot at the denouement of "The Da Vinci Code" by Dan Brown and its appearance in the final scene of the subsequent screen adaptation. The "Louvre Pyramid" has become Pei's most famous structure.

The opening of the Louvre Pyramid coincided with four other projects on which Pei had been working, prompting architecture critic Paul Goldberger to declare 1989 "the year of Pei" in "The New York Times". It was also the year in which Pei's firm changed its name to Pei Cobb Freed & Partners, to reflect the increasing stature and prominence of his associates. At the age of 72, Pei had begun thinking about retirement, but continued working long hours to see his designs come to fruition. One of the projects took Pei back to Dallas, Texas, to design the Morton H. Meyerson Symphony Center.
The success of the city's performing artists, particularly the Dallas Symphony Orchestra, then led by conductor Eduardo Mata, prompted interest among city leaders in creating a modern center for musical arts that could rival the best halls in Europe. The organizing committee contacted 45 architects, but at first Pei did not respond, thinking that his work on the Dallas City Hall had left a negative impression. One of his colleagues from that project, however, insisted that he meet with the committee. He did and, although it would be his first concert hall, the committee voted unanimously to offer him the commission. As one member put it: "We were convinced that we would get the world's greatest architect putting his best foot forward."

The project presented a variety of specific challenges. Because its main purpose was the presentation of live music, the hall needed a design focused on acoustics first, then public access and exterior aesthetics. To this end, a professional acoustician was hired to design the interior. He proposed a shoebox auditorium, used in the acclaimed designs of top European symphony halls such as the Amsterdam Concertgebouw and the Vienna Musikverein. Pei drew inspiration for his adjustments from the designs of the German architect Johann Balthasar Neumann, especially the Basilica of the Fourteen Holy Helpers. He also sought to incorporate some of the panache of the Paris Opéra designed by Charles Garnier. Pei's design placed the rigid shoebox at an angle to the surrounding street grid, connected at the north end to a long rectangular office building, and cut through the middle with an assortment of circles and cones. The design attempted to reproduce with modern features the acoustic and visual functions of traditional elements like filigree.

The project was risky: its goals were ambitious and any unforeseen acoustic flaws would be virtually impossible to remedy after the hall's completion. Pei admitted that he did not completely know how everything would come together. "I can imagine only 60 percent of the space in this building," he said during the early stages. "The rest will be as surprising to me as to everyone else." As the project developed, costs rose steadily and some sponsors considered withdrawing their support. Billionaire tycoon Ross Perot made a donation of US$10 million, on the condition that it be named in honor of Morton H. Meyerson, the longtime patron of the arts in Dallas. The building opened and immediately garnered widespread praise, especially for its acoustics. After attending a week of performances in the hall, a music critic for "The New York Times" wrote an enthusiastic account of the experience and congratulated the architects. One of Pei's associates told him during a party before the opening that the symphony hall was "a very mature building"; he smiled and replied: "Ah, but did I have to wait this long?"

A new offer had arrived for Pei from the Chinese government in 1982. With an eye toward the transfer of sovereignty of Hong Kong from the British in 1997, authorities in China sought Pei's aid on a new tower for the local branch of the Bank of China. The Chinese government was preparing for a new wave of engagement with the outside world and sought a tower to represent modernity and economic strength. Given the elder Pei's history with the bank before the Communist takeover, government officials visited the 89-year-old man in New York to gain approval for his son's involvement. Pei then spoke with his father at length about the proposal.
Although the architect remained pained by his experience with Fragrant Hills, he agreed to accept the commission. The proposed site in Hong Kong's Central District was less than ideal; a tangle of highways lined it on three sides. The area had also been home to a headquarters for Japanese military police during World War II, and was notorious for prisoner torture. The small parcel of land made a tall tower necessary, and Pei had usually shied away from such projects; in Hong Kong especially, the skyscrapers lacked any real architectural character. Lacking inspiration and unsure of how to approach the building, Pei took a weekend vacation to the family home in Katonah, New York. There he found himself experimenting with a bundle of sticks until he happened upon a cascading sequence.

Pei felt that his design for the Bank of China Tower needed to reflect "the aspirations of the Chinese people". The design that he developed for the skyscraper was not only unique in appearance, but also sound enough to pass the city's rigorous standards for wind resistance. The building is composed of four triangular shafts rising up from a square base, supported by a visible truss structure that distributes stress to the four corners of the base. Using the reflective glass that had become something of a trademark for him, Pei organized the facade around diagonal bracing in a union of structure and form that reiterates the triangle motif established in the plan. At the top, he designed the roofs at sloping angles to match the rising aesthetic of the building. Some influential advocates of "feng shui" in Hong Kong and China criticized the design, and Pei and government officials responded with token adjustments.

As the tower neared completion, Pei was shocked to witness the government's massacre of unarmed civilians at the Tiananmen Square protests of 1989. He wrote an opinion piece for "The New York Times" titled "China Won't Ever Be the Same," in which he said that the killings "tore the heart out of a generation that carries the hope for the future of the country". The massacre deeply disturbed his entire family, and he wrote that "China is besmirched."

As the 1990s began, Pei transitioned into a role of decreased involvement with his firm. The staff had begun to shrink, and Pei wanted to dedicate himself to smaller projects allowing for more creativity. Before he made this change, however, he set to work on his last major project as active partner: the Rock and Roll Hall of Fame in Cleveland, Ohio. Considering his work on such bastions of high culture as the Louvre and the U.S. National Gallery, some critics were surprised by his association with what many considered a tribute to low culture. The sponsors of the hall, however, sought Pei specifically for this reason; they wanted the building to have an aura of respectability from the beginning. As in the past, Pei accepted the commission in part because of the unique challenge it presented. Using a glass wall for the entrance, similar in appearance to his Louvre pyramid, Pei coated the exterior of the main building in white metal, and placed a large cylinder on a narrow perch to serve as a performance space. The combination of off-centered wraparounds and angled walls was, Pei said, designed to provide "a sense of tumultuous youthful energy, rebelling, flailing about". The building opened in 1995, and was received with moderate praise. "The New York Times" called it "a fine building", but Pei was among those who felt disappointed with the results.
The museum's early beginnings in New York, combined with an unclear mission, left project leaders with a fuzzy understanding of precisely what was needed. Although the city of Cleveland benefited greatly from the new tourist attraction, Pei was unhappy with it.

At the same time, Pei designed a new museum for Luxembourg, the "Musée d'art moderne Grand-Duc Jean", commonly known as the Mudam. Drawing from the original shape of the Fort Thüngen walls where the museum was located, Pei planned to remove a portion of the original foundation. Public resistance to the historical loss forced a revision of his plan, however, and the project was nearly abandoned. The size of the building was halved, and it was set back from the original wall segments to preserve the foundation. Pei was disappointed with the alterations, but remained involved in the building process even during construction.

In 1995, Pei was hired to design an extension to the "Deutsches Historisches Museum", or German Historical Museum in Berlin. Returning to the challenge of the East Building of the U.S. National Gallery, Pei worked to combine a modernist approach with a classical main structure. He described the glass cylinder addition as a "beacon," and topped it with a glass roof to allow plentiful sunlight inside. Pei had difficulty working with German government officials on the project; their utilitarian approach clashed with his passion for aesthetics. "They thought I was nothing but trouble", he said.

Pei also worked at this time on two projects for a new Japanese religious movement called "Shinji Shumeikai". He was approached by the movement's spiritual leader, Kaishu Koyama, who impressed the architect with her sincerity and willingness to give him significant artistic freedom. One of the buildings was a bell tower, designed to resemble the "bachi" used when playing traditional instruments like the "shamisen". Pei was unfamiliar with the movement's beliefs, but explored them in order to represent something meaningful in the tower. As he said: "It was a search for the sort of expression that is not at all technical."

The experience was rewarding for Pei, and he agreed immediately to work with the group again. The new project was the Miho Museum, to display Koyama's collection of tea ceremony artifacts. Pei visited the site in Shiga Prefecture, and during their conversations convinced Koyama to expand her collection. She conducted a global search and acquired more than 300 items showcasing the history of the Silk Road. One major challenge was the approach to the museum. The Japanese team proposed a winding road up the mountain, not unlike the approach to the NCAR building in Colorado. Instead, Pei ordered a hole cut through a nearby mountain, connected to a major road via a bridge suspended from ninety-six steel cables and supported by a post set into the mountain. The museum itself was built into the mountain, with 80 percent of the building underground. When designing the exterior, Pei borrowed from the tradition of Japanese temples, particularly those found in nearby Kyoto. He created a concise space frame wrapped in French limestone and covered with a glass roof. Pei also oversaw specific decorative details, including a bench in the entrance lobby, carved from a 350-year-old "keyaki" tree. Because of Koyama's considerable wealth, money was rarely considered an obstacle; estimates at the time of completion put the cost of the project at US$350 million.
During the first decade of the 2000s, Pei designed a variety of buildings, including the Suzhou Museum near his childhood home. He also designed the Museum of Islamic Art in Doha, Qatar, at the request of the Al-Thani family. Although it was originally planned for the corniche road along Doha Bay, Pei convinced the project coordinators to build a new island to provide the needed space. He then spent six months touring the region and surveying mosques in Spain, Syria, and Tunisia. He was especially impressed with the elegant simplicity of the Mosque of Ibn Tulun in Cairo. Once again, Pei sought to combine new design elements with the classical aesthetic most appropriate for the location of the building. The sand-colored rectangular boxes rotate evenly to create a subtle movement, with small arched windows cut at regular intervals into the limestone exterior. Inside, galleries are arranged around a massive atrium, lit from above. The museum's coordinators were pleased with the project; its official website describes its "true splendour unveiled in the sunlight," and speaks of "the shades of colour and the interplay of shadows paying tribute to the essence of Islamic architecture".

The Macao Science Center in Macau was designed by Pei Partnership Architects in association with I. M. Pei. The project to build the science center was conceived in 2001 and construction started in 2006. The center was completed in 2009 and opened by Chinese President Hu Jintao. The main part of the building is a distinctive conical shape with a spiral walkway and large atrium inside, similar to that of the Solomon R. Guggenheim Museum in New York City. Galleries lead off the walkway, mainly consisting of interactive exhibits aimed at science education. The building is in a prominent position by the sea and is now a Macau landmark.

Pei's career ended with his death in May 2019, at 102 years of age. Pei's style was described as thoroughly modernist, with significant cubist themes. He was known for combining traditional architectural principles with progressive designs based on simple geometric patterns—circles, squares, and triangles are common elements of his work in both plan and elevation. As one critic wrote: "Pei has been aptly described as combining a classical sense of form with a contemporary mastery of method." In 2000, biographer Carter Wiseman called Pei "the most distinguished member of his Late-Modernist generation still in practice". At the same time, Pei himself rejected simple dichotomies of architectural trends. He once said: "The talk about modernism versus post-modernism is unimportant. It's a side issue. An individual building, the style in which it is going to be designed and built, is not that important. The important thing, really, is the community. How does it affect life?"

Pei's work is celebrated throughout the world of architecture. His colleague John Portman once told him: "Just once, I'd like to do something like the East Building." But this originality did not always bring large financial reward; as Pei replied to the successful architect: "Just once, I'd like to make the kind of money you do." His concepts, moreover, were too individualized and dependent on context to have given rise to a particular school of design. Pei referred to his own "analytical approach" when explaining the lack of a "Pei School".
"For me," he said, "the important distinction is between a stylistic approach to the design; and an analytical approach giving the process of due consideration to time, place, and purpose ... My analytical approach requires a full understanding of the three essential elements ... to arrive at an ideal balance among them." In the words of his biographer, Pei won "every award of any consequence in his art", including the Arnold Brunner Award from the National Institute of Arts and Letters (1963), the Gold Medal for Architecture from the American Academy of Arts and Letters (1979), the AIA Gold Medal (1979), the first "Praemium Imperiale" for Architecture from the Japan Art Association (1989), the Lifetime Achievement Award from the Cooper-Hewitt, National Design Museum, the 1998 Edward MacDowell Medal in the Arts, and the 2010 Royal Gold Medal from the Royal Institute of British Architects. In 1983 he was awarded the Pritzker Prize, sometimes referred to as the Nobel Prize of architecture. In its citation, the jury said: "Ieoh Ming Pei has given this century some of its most beautiful interior spaces and exterior forms ... His versatility and skill in the use of materials approach the level of poetry." The prize was accompanied by a US$100,000 award, which Pei used to create a scholarship for Chinese students to study architecture in the U.S., on the condition that they return to China to work. In 1986, he was one of twelve recipients of the Medal of Liberty. When he was awarded the 2003 Henry C. Turner Prize by the National Building Museum, museum board chair Carolyn Brody praised his impact on construction innovation: "His magnificent designs have challenged engineers to devise innovative structural solutions, and his exacting expectations for construction quality have encouraged contractors to achieve high standards." In December 1992, Pei was awarded the Presidential Medal of Freedom by President George H. W. Bush. In 1996, Pei became the first person to be elected a foreign member of the Chinese Academy of Engineering. Pei's wife of over 70 years, Eileen Loo, died on 20 June 2014. Together they had three sons, T'ing Chung (1945–2003), Chien Chung (b. 1946; known as Didi), and Li Chung (b. 1949; known as Sandi); and a daughter, Liane (b. 1960). T'ing Chung was an urban planner and alumnus of his father's "alma mater" MIT and Harvard. Chieng Chung and Li Chung, who are both Harvard College and Harvard Graduate School of Design alumni, founded and run Pei Partnership Architects. Liane is a lawyer. In 2015, Pei's home health aide, Eter Nikolaishvili, grabbed Pei's right forearm and twisted it, resulting in bruising and bleeding and hospital treatment. Pei alleges that the assault occurred when Pei threatened to call the police about Nikolaishvili. Nikolaishvili agreed to plead guilty in 2016. Pei celebrated his 100th birthday on 26 April 2017. He died peacefully in Manhattan on 16 May 2019 at the age of 102. He was survived by three of his children, seven grandchildren, and five great-grandchildren.
https://en.wikipedia.org/wiki?curid=15155
Intel 80486 The Intel 80486, also known as the i486 or 486, is a higher-performance follow-up to the Intel 80386 microprocessor. The 80486 was introduced in 1989 and was the first tightly pipelined x86 design as well as the first x86 chip to use more than a million transistors, due to a large on-chip cache and an integrated floating-point unit. It represents the fourth generation of binary-compatible CPUs since the original 8086 of 1978. A 50 MHz 80486 executes around 40 million instructions per second on average and is able to reach 50 MIPS peak performance. It is approximately twice as fast as the 80386 or 80286 per clock cycle, thanks to its five-stage pipeline with all stages bound to a single cycle. The enhanced on-chip FPU was also significantly faster than the 80387 per cycle.

The 80486 was announced at Spring Comdex in April 1989. At the announcement, Intel stated that samples would be available in the third quarter of 1989 and production quantities would ship in the fourth quarter of 1989. The first 80486-based PCs were announced in late 1989, but some advised that people wait until 1990 to purchase an 80486 PC because there were early reports of bugs and software incompatibilities.

The instruction set of the i486 is very similar to that of its predecessor, the Intel 80386, with the addition of only a few extra instructions, such as CMPXCHG, which implements a compare-and-swap atomic operation, and XADD, a fetch-and-add atomic operation returning the original value (unlike a standard ADD, which returns flags only).

From a performance point of view, the architecture of the i486 is a vast improvement over the 80386. It has an on-chip unified instruction and data cache, an on-chip floating-point unit (FPU) and an enhanced bus interface unit. Due to the tight pipelining, sequences of simple instructions (such as ALU reg,reg and ALU reg,imm) could sustain a single-clock-cycle throughput (one instruction completed every clock). These improvements yielded a rough doubling in integer ALU performance over the 386 at the same clock rate. A 16 MHz 80486 therefore had a performance similar to a 33 MHz 386, and the older design had to reach 50 MHz to be comparable with a 25 MHz 80486 part.

Just as in the 80386, a simple flat 4 GB memory model could be implemented by setting all "segment selector" registers to a neutral value in protected mode, or setting (the same) "segment registers" to zero in real mode, and using only the 32-bit "offset registers" (x86 terminology for general CPU registers used as address registers) as a linear 32-bit virtual address bypassing the segmentation logic. Virtual addresses were then normally mapped onto physical addresses by the paging system, except when it was disabled. ("Real" mode had no "virtual" addresses.) Just as with the 80386, circumventing memory segmentation could substantially improve performance in some operating systems and applications.

On a typical PC motherboard, either four matched 30-pin (8-bit) SIMMs or one 72-pin (32-bit) SIMM per bank were required to fit the 80486's 32-bit data bus. The address bus used 30 bits (A31..A2) complemented by four byte-select pins (instead of A0, A1) to allow for any 8/16/32-bit selection. This meant that the limit of directly addressable physical memory was 4 gigabytes as well (2³⁰ 32-bit words = 2³² 8-bit words).

There are several suffixes and variants; the specified maximal internal clock frequency (on Intel's versions) ranged from 16 to 100 MHz.
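The practical difference between the CMPXCHG and XADD additions described above is easy to demonstrate with compiler intrinsics. Below is a minimal C sketch, assuming GCC/Clang's `__atomic` builtins, which on x86 typically compile down to `LOCK CMPXCHG` and `LOCK XADD`; the function names are illustrative only, not any 486-era API.

```c
#include <stdbool.h>
#include <stdio.h>

/* Sketch of the operations CMPXCHG and XADD expose, written with
   GCC/Clang __atomic builtins (typically compiled to LOCK CMPXCHG
   and LOCK XADD on x86). */

/* Fetch-and-add: atomically add 'delta' and return the ORIGINAL value,
   something a plain ADD (which only sets flags) cannot provide. */
static int fetch_and_add(int *p, int delta) {
    return __atomic_fetch_add(p, delta, __ATOMIC_SEQ_CST);
}

/* Compare-and-swap: store 'desired' only if *p still equals 'expected';
   returns true when the swap happened. */
static bool compare_and_swap(int *p, int expected, int desired) {
    return __atomic_compare_exchange_n(p, &expected, desired, false,
                                       __ATOMIC_SEQ_CST, __ATOMIC_SEQ_CST);
}

int main(void) {
    int counter = 0;
    int old = fetch_and_add(&counter, 5);        /* old == 0, counter == 5  */
    bool ok = compare_and_swap(&counter, 5, 42); /* succeeds, counter == 42 */
    printf("old=%d swapped=%d counter=%d\n", old, (int)ok, counter);
    return 0;
}
```

Either primitive suffices to build locks and lock-free counters, which is why their arrival in the x86 instruction set mattered for multiprocessor operating systems.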
The 16 MHz i486SX model was used by Dell. One of the few 80486 models specified for a 50 MHz bus (486DX-50) initially had overheating problems and was moved to the 0.8-micrometre fabrication process. However, problems continued when the 486DX-50 was installed in local-bus systems due to the high bus speed, making it rather unpopular with mainstream consumers, as local-bus video was considered a requirement at the time, though it remained popular with users of EISA systems. The 486DX-50 was soon eclipsed by the clock-doubled i486DX2, which, although it ran the internal CPU logic at twice the external bus speed (50 MHz internally), was nevertheless slower because its external bus ran at only 25 MHz. The i486DX2 at 66 MHz (with a 33 MHz external bus) was faster than the 486DX-50 overall. More powerful 80486 iterations such as the OverDrive and DX4 were less popular (the latter available as an OEM part only), as they came out after Intel had released the next-generation P5 Pentium processor family. Certain steppings of the DX4 also officially supported 50 MHz bus operation, but it was a seldom-used feature.

80486-compatible processors have been produced by other companies such as IBM, Texas Instruments, AMD, Cyrix, UMC, and SGS Thomson. Some were clones (identical at the microarchitectural level); others were clean-room implementations of the Intel instruction set. (IBM's multiple-source requirement is one of the reasons behind its x86 manufacturing since the 80286.) The 80486 was, however, covered by many of Intel's patents, both on its new R&D and on that of the prior 80386. Intel and IBM have broad cross-licenses of these patents, and AMD was granted rights to the relevant patents in the 1995 settlement of a lawsuit between the companies.

AMD produced several clones of the 80486 using a 40 MHz bus (486DX-40, 486DX/2-80, and 486DX/4-120) which had no equivalent available from Intel, as well as a part specified for 90 MHz, using a 30 MHz external clock, that was sold only to OEMs. The fastest-running 80486 CPU, the Am5x86, ran at 133 MHz and was released by AMD in 1995. 150 MHz and 160 MHz parts were planned but never officially released.

Cyrix made a variety of 80486-compatible processors, positioned at the cost-sensitive desktop and low-power (laptop) markets. Unlike AMD's 80486 clones, the Cyrix processors were the result of clean-room reverse engineering. Cyrix's early offerings included the 486DLC and 486SLC, two hybrid chips which plugged into 386DX or SX sockets respectively, and offered 1 KB of cache (versus 8 KB for the then-current Intel/AMD parts). Cyrix also made "real" 80486 processors, which plugged into the i486's socket and offered 2 or 8 KB of cache. Clock for clock, the Cyrix-made chips were generally slower than their Intel/AMD equivalents, though later products with 8 KB caches were more competitive, if late to market.

The Motorola 68040, while not compatible with the 80486, was often positioned as the 80486's equivalent in features and performance. On a clock-for-clock basis, the Motorola 68040 could significantly outperform the Intel 80486. However, the 80486 could be clocked significantly faster without suffering from overheating problems, and the Motorola 68040's performance lagged behind later production 80486 systems.

Early 80486 machines were equipped with several ISA slots (using an emulated PC/AT-bus) and sometimes one or two 8-bit–only slots (compatible with the PC/XT-bus).
Many motherboards enabled overclocking of these up from the default 6 or 8 MHz to perhaps 16.7 or 20 MHz (half the i486 bus clock) in a number of steps, often from within the BIOS setup. Especially older peripheral cards normally worked well at such speeds as they often used standard MSI chips instead of slower (at the time) custom VLSI designs. This could give significant performance gains (such as for old video cards moved from a 386 or 286 computer, for example). However, operation beyond 8 or 10 MHz could sometimes lead to stability problems, at least in systems equipped with SCSI or sound cards.

Some motherboards came equipped with a 32-bit bus called EISA that was backward compatible with the ISA standard. EISA offered a number of attractive features such as increased bandwidth, extended addressing, IRQ sharing, and card configuration through software (rather than through jumpers, DIP switches, etc.). However, EISA cards were expensive and therefore mostly employed in servers and workstations. Consumer desktops often used the simpler but faster VESA Local Bus (VLB), unfortunately somewhat prone to electrical and timing-based instability; typical consumer desktops had ISA slots combined with a single VLB slot for a video card. VLB was gradually replaced by PCI during the final years of the 80486 period. Few Pentium-class motherboards had VLB support because VLB was based directly on the i486 bus, and it was no trivial matter adapting it to the quite different P5 Pentium bus. ISA persisted through the P5 Pentium generation and was not completely displaced by PCI until the Pentium III era.

Late 80486 boards were normally equipped with both PCI and ISA slots, and sometimes a single VLB slot as well. In this configuration VLB or PCI throughput suffered depending on how the buses were bridged. Initially, the VLB slot in these systems was usually fully compatible only with video cards (quite fitting, as "VESA" stands for "Video Electronics Standards Association"); VLB-IDE, multi-I/O, or SCSI cards could have problems on motherboards with PCI slots. The VL-Bus operated at the same clock speed as the i486 bus (basically being a local 80486 bus), while the PCI bus also usually depended on the i486 clock but sometimes had a divider setting available via the BIOS. This could be set to 1/1 or 1/2, sometimes even 2/3 (for 50 MHz CPU clocks). Some motherboards limited the PCI clock to the specified maximum of 33 MHz, and certain network cards depended on this frequency for correct bit rates. The ISA clock was typically generated by a divider of the CPU/VLB/PCI clock, as implied above; a small worked example of this divider arithmetic follows at the end of this section.

One of the earliest complete systems to use the 80486 chip was the Apricot VX FT, produced by British hardware manufacturer Apricot Computers. Even overseas in the United States it was popularized as "The World's First 80486" in the September 1989 issue of "Byte" magazine. Later 80486 boards also supported Plug and Play, a specification designed by Microsoft that began as a part of Windows 95 to make component installation easier for consumers.

The 486DX2 66 MHz processor was popular on home-oriented PCs during the early to mid 1990s, toward the end of the MS-DOS gaming era. It was often coupled with a VESA Local Bus video card. The introduction of 3D computer graphics spelled the end of the 80486's reign, because 3D graphics make heavy use of floating-point calculations and require a faster CPU cache and more memory bandwidth.
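To make the divider arithmetic concrete, here is a small worked sketch in C. The 50 MHz bus clock, the 2/3 PCI divider, and the divide-by-four ISA derivation are assumptions chosen for illustration; actual ratios were chipset- and BIOS-dependent, as noted above.

```c
#include <stdio.h>

/* Illustrative arithmetic only: derive PCI and ISA clocks from the
   CPU/VLB (i486 bus) clock using divider settings like those named
   above. Real boards configured these ratios in the BIOS. */
int main(void) {
    double bus_mhz = 50.0;                /* i486 bus clock == VLB clock  */
    double pci_mhz = bus_mhz * 2.0 / 3.0; /* 2/3 divider -> ~33.3 MHz PCI */
    double isa_mhz = pci_mhz / 4.0;       /* divide-by-4 -> ~8.3 MHz ISA  */
    printf("VLB %.1f MHz, PCI %.1f MHz, ISA %.2f MHz\n",
           bus_mhz, pci_mhz, isa_mhz);
    return 0;
}
```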
Developers began to target the P5 Pentium processor family almost exclusively with x86 assembly-language optimizations (e.g., "Quake"), which led to the use of terms like "Pentium-compatible processor" in software requirements. Many of these games required the speed of the P5 Pentium processor family's double-pipelined architecture. The AMD Am5x86, at up to 133 MHz, and the Cyrix Cx5x86, at up to 120 MHz, were the last 80486 processors; they were often used in late-generation 80486 motherboards with PCI slots and 72-pin SIMMs designed to run Windows 95, and also often used as upgrades for older 80486 motherboards. While the Cyrix Cx5x86 faded quite quickly when the Cyrix 6x86 took over, the AMD Am5x86 was important during the time when the AMD K5 was delayed.

80486-based machines remained popular through the late 1990s, serving as low-end processors for entry-level PCs. Production for traditional desktop and laptop systems ceased in 1998, when Intel introduced the Celeron brand as a modern replacement for the aging chip, though the 80486 continued to be produced for embedded systems through the late 2000s. In the general-purpose desktop computer role, 80486-based machines remained in use into the early 2000s, especially as Windows 95, Windows 98, and Windows NT 4.0 were the latest Microsoft operating systems to officially support installation on an 80486-based system. However, as Windows 95/98 and Windows NT 4.0 were eventually overtaken by newer operating systems, 80486 systems likewise fell out of use. Still, a number of 80486 machines remain in use today, mostly for backward compatibility with older programs (most notably games), especially since many of them have problems running on newer operating systems. However, DOSBox is also available for current operating systems and provides emulation of the 80486 instruction set, as well as full compatibility with most DOS-based programs.

Although the 80486 was eventually overtaken by the Pentium for personal computer applications, Intel continued production for use in embedded systems. In May 2006 Intel announced that production of the 80486 would stop at the end of September 2007.
https://en.wikipedia.org/wiki?curid=15161
Intel 80486SX Intel's i486SX was a modified Intel 486DX microprocessor with its floating-point unit (FPU) disabled. It was intended as a lower-cost CPU for use in low-end systems. Computer manufacturers that used these processors include Packard Bell, Compaq, ZEOS and IBM.

In the early 1990s, common applications did not need or benefit from an FPU. Among the rare exceptions were CAD applications, which could simulate floating-point operations in software but benefited immensely from a hardware floating-point unit. At the same time, AMD had begun manufacturing its 386DX clone, which was faster than Intel's. To respond to this new situation, Intel wanted to provide a lower-cost i486 CPU for system integrators, but without sacrificing the better profit margins of a "full" i486. This was accomplished through a debug feature called Disable Floating Point (DFP), by grounding a certain bond wire in the CPU package. The i486SX was introduced in mid-1991 at 20 MHz in a PGA package. Later (1992) versions of the i486SX had the FPU entirely removed for cost-cutting reasons and came in surface-mount packages as well.

Many systems allowed the user to upgrade the i486SX to a CPU with the FPU enabled. The upgrade was shipped as the i487, which was a full-blown i486DX chip with an extra pin. The extra pin prevents the chip from being installed incorrectly. The NC# pin, one of the standard 168 pins, was used to shut off the i486SX. Although i486SX devices were not used at all when the i487 was installed, they were hard to remove because the i486SX was typically installed in non-ZIF sockets or in a plastic package that was surface-mounted on the motherboard. Later OverDrive processors also plugged into the socket and offered performance enhancements as well.
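Software of this era typically probed for an FPU at startup before deciding between hardware floating point and a software-emulation path. The C sketch below shows the classic FNINIT/FNSTSW presence test in GCC inline-assembly syntax; the function name and sentinel value are illustrative assumptions, not code from an Intel datasheet.

```c
#include <stdint.h>
#include <stdio.h>

/* Sketch of the classic x86 FPU-presence probe: FNINIT resets the FPU
   if one is attached, and FNSTSW stores its status word to memory. On a
   part with no working FPU the store never happens, so the nonzero
   sentinel survives. GCC inline assembly, 32-bit x86 assumed. */
static int fpu_present(void) {
    volatile uint16_t status = 0x5A5A;   /* sentinel left intact if no FPU */
    __asm__ volatile ("fninit\n\t"
                      "fnstsw %0"
                      : "=m" (status));
    return status == 0;                  /* a working FPU clears the word */
}

int main(void) {
    printf(fpu_present() ? "hardware FPU found\n"
                         : "falling back to software emulation\n");
    return 0;
}
```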
https://en.wikipedia.org/wiki?curid=15164
Ivory Ivory is a hard, white material from the tusks (traditionally elephants') and teeth of animals that consists mainly of dentine, one of the physical structures of teeth and tusks. The chemical structure of the teeth and tusks of mammals is the same, regardless of the species of origin. The trade in certain teeth and tusks other than elephant is well established and widespread; therefore, "ivory" can correctly be used to describe any mammalian teeth or tusks of commercial interest which are large enough to be carved or scrimshawed. Ivory has been valued since ancient times in art or manufacturing for making a range of items from ivory carvings to false teeth, piano keys, fans, dominoes and joint tubes. Elephant ivory is the most important source, but ivory from mammoth, walrus, hippopotamus, sperm whale, killer whale, narwhal and warthog is used as well. Elk also have two ivory teeth, which are believed to be the remnants of tusks from their ancestors. The national and international trade in ivory of threatened species such as African and Asian elephants is illegal. The word "ivory" ultimately derives from the ancient Egyptian "âb, âbu" ("elephant"), through the Latin "ebor-" or "ebur".

Both the Greek and Roman civilizations practiced ivory carving to make large quantities of high-value works of art, precious religious objects, and decorative boxes for costly objects. Ivory was often used to form the whites of the eyes of statues. There is some evidence of either whale or walrus ivory used by the ancient Irish. Solinus, a Roman writer in the 3rd century, claimed that the Celtic peoples in Ireland would decorate their sword-hilts with the 'teeth of beasts that swim in the sea'. Adomnan of Iona wrote a story about St Columba giving a sword decorated with carved ivory as a gift that a penitent would bring to his master so he could redeem himself from slavery. The Syrian and North African elephant populations were reduced to extinction, probably due to the demand for ivory in the Classical world.

The Chinese have long valued ivory for both art and utilitarian objects. Early reference to the Chinese export of ivory is recorded after the Chinese explorer Zhang Qian ventured to the west to form alliances to enable the eventual free movement of Chinese goods to the west; as early as the first century BC, ivory was moved along the Northern Silk Road for consumption by western nations. Southeast Asian kingdoms included tusks of the Indian elephant in their annual tribute caravans to China. Chinese craftsmen carved ivory to make everything from images of deities to the pipe stems and end pieces of opium pipes.

The Buddhist cultures of Southeast Asia, including Myanmar, Thailand, Laos and Cambodia, traditionally harvested ivory from their domesticated elephants. Ivory was prized for containers due to its ability to keep an airtight seal. It was also commonly carved into elaborate seals utilized by officials to "sign" documents and decrees by stamping them with their unique official seal. In Southeast Asian countries where Muslim Malay peoples live, such as Malaysia, Indonesia and the Philippines, ivory was the material of choice for making the handles of kris daggers. In the Philippines, ivory was also used to craft the faces and hands of Catholic icons and images of saints prevalent in the Santero culture. Tooth and tusk ivory can be carved into a vast variety of shapes and objects.
Examples of modern carved ivory objects are okimono, netsukes, jewelry, flatware handles, furniture inlays, and piano keys. Additionally, warthog tusks and teeth from sperm whales, orcas and hippos can also be scrimshawed or superficially carved, thus retaining their morphologically recognizable shapes. Ivory usage in the last thirty years has moved towards mass production of souvenirs and jewelry. In Japan, the increase in wealth sparked consumption of solid ivory "hanko" – name seals – which before this time had been made of wood. These "hanko" can be carved out in a matter of seconds using machinery and were partly responsible for the massive African elephant decline in the 1980s, when the African elephant population went from 1.3 million to around 600,000 in ten years.

Prior to the introduction of plastics, ivory had many ornamental and practical uses, mainly because of the white color it presents when processed. It was formerly used to make cutlery handles, billiard balls, piano keys, Scottish bagpipes, buttons and a wide range of ornamental items. Synthetic substitutes for ivory for most of these uses have been developed since 1800: the billiard industry challenged inventors to come up with an alternative material that could be manufactured, and the piano industry abandoned ivory as a key-covering material in the 1970s.

Ivory can be taken from dead animals, but most ivory has come from elephants killed for their tusks. For example, acquiring 40 tons of ivory in 1930 required the killing of approximately 700 elephants. Other animals which are now endangered were also preyed upon; hippos, for example, have very hard white ivory prized for making artificial teeth. In the first half of the 20th century, Kenyan elephant herds were devastated because of demand for ivory to be used for piano keys. During the Art Deco era from 1912 to 1940, dozens (if not hundreds) of European artists used ivory in the production of chryselephantine statues. Two of the most frequent users of ivory in their sculptured artworks were Ferdinand Preiss and Claire Colinet.

Owing to the rapid decline in the populations of the animals that produce it, the importation and sale of ivory in many countries is banned or severely restricted. In the ten years preceding a decision in 1989 by CITES to ban international trade in African elephant ivory, the population of African elephants declined from 1.3 million to around 600,000. Investigators from the Environmental Investigation Agency (EIA) found that CITES sales of stockpiles from Singapore and Burundi (270 tonnes and 89.5 tonnes respectively) had created a system that increased the value of ivory on the international market, thus rewarding international smugglers and giving them the ability to control the trade and continue smuggling new ivory.

Since the ivory ban, some Southern African countries have claimed their elephant populations are stable or increasing, and argued that ivory sales would support their conservation efforts. Other African countries oppose this position, stating that renewed ivory trading puts their own elephant populations under greater threat from poachers reacting to demand. CITES allowed the sale of 49 tonnes of ivory from Zimbabwe, Namibia and Botswana in 1997 to Japan. In 2007, under pressure from the International Fund for Animal Welfare, eBay banned all international sales of elephant-ivory products.
The decision came after several mass slaughters of African elephants, most notably the 2006 Zakouma elephant slaughter in Chad. The IFAW found that up to 90% of the elephant-ivory transactions on eBay violated the site's own wildlife policies and could potentially be illegal. In October 2008, eBay expanded the ban, disallowing any sales of ivory on eBay.

In 2008, a further sale of 108 tonnes from the three countries and South Africa was made to Japan and China. The inclusion of China as an "approved" importing country created enormous controversy, despite being supported by CITES, the World Wide Fund for Nature, and TRAFFIC. They argued that China had controls in place and the sale might depress prices. However, the price of ivory in China has skyrocketed. Some believe this may be due to deliberate price fixing by those who bought the stockpile, echoing the warnings from the Japan Wildlife Conservation Society on price fixing after sales to Japan in 1997, and the monopoly given to traders who bought stockpiles from Burundi and Singapore in the 1980s.

A 2019 peer-reviewed study reported that the rate of African elephant poaching was in decline, with the annual poaching mortality rate peaking at over 10% in 2011 and falling to below 4% by 2017. The study found that the "annual poaching rates in 53 sites strongly correlate with proxies of ivory demand in the main Chinese markets, whereas between-country and between-site variation is strongly associated with indicators of corruption and poverty." Based on these findings, the study authors recommended action both to reduce demand for ivory in China and other main markets and to decrease corruption and poverty in Africa.

In 2006, 19 African countries signed the "Accra Declaration" calling for a total ivory trade ban, and 20 range states attended a meeting in Kenya calling for a 20-year moratorium in 2007. The use and trade of elephant ivory have become controversial because they have contributed to seriously declining elephant populations in many countries. It is estimated that consumption in Great Britain alone in 1831 amounted to the deaths of nearly 4,000 elephants. In 1975, the Asian elephant was placed on Appendix I of the Convention on International Trade in Endangered Species (CITES), which prevents international trade between member states of species that are threatened by trade. The African elephant was placed on Appendix I in January 1990. Since then, some southern African countries have had their populations of elephants "downlisted" to Appendix II, allowing the domestic trade of non-ivory items; there have also been two "one-off" sales of ivory stockpiles.

In June 2015, more than a ton of confiscated ivory was crushed in New York's Times Square by the Wildlife Conservation Society to send a message that the illegal trade will not be tolerated. The ivory, confiscated in New York and Philadelphia, was sent up a conveyor belt into a rock crusher. The Wildlife Conservation Society has pointed out that the global ivory trade leads to the slaughter of up to 35,000 elephants a year in Africa. In June 2018, Conservative MEPs' deputy leader Jacqueline Foster urged the EU to follow the UK's lead and introduce a tougher ivory ban across Europe.

China was the biggest market for poached ivory but announced that it would phase out the legal domestic manufacture and sale of ivory products in May 2015. In September of the same year, China and the U.S. announced they would "enact a nearly complete ban on the import and export of ivory."
The Chinese market has a high degree of influence on the elephant population. Trade in the ivory from the tusks of dead woolly mammoths frozen in the tundra has occurred for 300 years and continues to be legal. Mammoth ivory is used today to make handcrafted knives and similar implements. Mammoth ivory is rare and costly because mammoths have been extinct for millennia, and scientists are hesitant to sell museum-worthy specimens in pieces. Some estimates suggest that 10 million mammoths are still buried in Siberia. A species of hard nut is gaining popularity as a replacement for ivory, although its size limits its usability. It is sometimes called vegetable ivory, or tagua, and is the seed endosperm of the ivory nut palm commonly found in coastal rainforests of Ecuador, Peru, and Colombia. Fossil walrus ivory from animals that died before 1972 is legal to buy, sell, or possess in the United States, unlike many other types of ivory.
https://en.wikipedia.org/wiki?curid=15165
Infantry fighting vehicle An infantry fighting vehicle ("IFV"), also known as a mechanized infantry combat vehicle ("MICV"), is a type of armoured fighting vehicle used to carry infantry into battle and provide direct-fire support. The 1990 Treaty on Conventional Armed Forces in Europe defines an infantry fighting vehicle as "an armoured combat vehicle which is designed and equipped primarily to transport a combat infantry squad, and which is armed with an integral or organic cannon of at least 20 millimeters calibre and sometimes an antitank missile launcher". IFVs often serve both as the principal weapons system and as the mode of transport for a mechanized infantry unit. Infantry fighting vehicles are distinct from armored personnel carriers (APCs), which are transport vehicles armed only for self-defense and not specifically engineered to fight on their own. IFVs are designed to be more mobile than tanks and are equipped with a rapid-firing autocannon or a large conventional gun; they may include side ports for infantrymen to fire their personal weapons while on board. The IFV rapidly gained popularity with armies worldwide due to a demand for vehicles with high firepower that were less expensive and easier to maintain than tanks. Nevertheless, it did not supersede the APC concept altogether, due to the latter's continued usefulness in specialized roles. Some armies continue to maintain fleets of both IFVs and APCs. The infantry fighting vehicle (IFV) concept evolved directly out of that of the armored personnel carrier (APC). During the Cold War, there was an increasing trend towards fitting heavier and heavier weapons systems on an APC chassis to deliver suppressive covering fire as infantry debussed from the vehicle's troop compartment. With the growing mechanization of infantry units worldwide, some armies also came to believe that the embarked personnel should fire their weapons from inside the protection of the APC and only fight on foot as a last resort. These two trends led to the IFV, which had firing ports in the troop compartment and a crew-manned weapons system. The IFV established a new niche between combat vehicles which functioned primarily as armored weapons-carriers and APCs. During the 1950s, Soviet, US, and most Western European armies had adopted tracked APCs. In 1958, however, the newly organized Bundeswehr adopted the Schützenpanzer Lang HS.30 (also known simply as the "SPz 12-3"), which resembled a conventional tracked APC but carried a turret-mounted 20 mm autocannon that enabled it to engage other armored vehicles. The SPz 12-3 was the first purpose-built IFV. The Bundeswehr's doctrine called for mounted infantry to fight and maneuver alongside tank formations rather than simply being ferried to the edge of the battlefield before dismounting. Each SPz 12-3 could carry five troops in addition to a three-man crew. Despite this, it lacked firing ports, forcing the embarked infantry to expose themselves through open hatches to return fire. As the SPz 12-3 was being inducted into service, the French and Austrian armies adopted new APCs which possessed firing ports, allowing embarked infantry to observe and fire their weapons from inside the vehicle. These were known as the AMX-VCI and Saurer 4K, respectively. Austria subsequently introduced an IFV variant of the Saurer 4K which carried a 20 mm autocannon, making it the first vehicle of this class to possess both firing ports and a turreted weapons system. 
In the mid-1960s, the Swedish Army also adopted a variant of the Pansarbandvagn 302 APC which carried a 20 mm autocannon. Following the trend towards converting preexisting APCs into IFVs, the Dutch, US, and Belgian armies experimented with a variety of modified M113s during the late 1960s; these were collectively identified as the AIFV. The first US M113-based IFV appeared in 1969; known as the XM765, it had a sharply angled hull, ten vision blocks, and a cupola-mounted 20 mm autocannon. The XM765 design was rejected for service but later became the basis for the very similar Dutch YPR-765. The YPR-765 had five firing ports and a 25 mm autocannon with a co-axial machine gun. The Soviet Army had fielded its first tracked APC, the BTR-50, in 1957. Its first wheeled APC, the BTR-152, had been designed as early as the late 1940s. Early versions of both these lightly armored vehicles were open-topped and carried only general-purpose machine guns for armament. As Soviet strategists became more preoccupied with the possibility of a war involving weapons of mass destruction, they became convinced of the need to deliver mounted troops to a battlefield without exposing them to the radioactive fallout from an atomic weapon. The IFV concept was received favorably because it would enable a Soviet infantry squad to fight from inside their vehicles when operating in contaminated environments. Design work on a new tracked IFV began in the late 1950s, and the first prototype appeared as the "Obyekt 765" in 1961. After the Soviets had evaluated and rejected a number of other wheeled and tracked prototypes, the "Obyekt 765" was accepted for service; it entered serial production as the BMP-1 in 1966. In addition to being amphibious and superior in cross-country mobility to its predecessors, the BMP-1 carried a 73 mm smoothbore cannon, a co-axial PKT machine gun, and a launcher for 9M14 Malyutka anti-tank missiles. Its hull was also armored heavily enough to resist .50 caliber armor-piercing ammunition along its frontal arc. Eight firing ports and vision blocks allowed the embarked infantry squad to observe and engage targets with rifles or machine guns. The BMP-1 was so heavily armed and armored that it was widely regarded as having combined the qualities of a light tank with those of the traditional APC. Its use of a relatively large caliber main gun marked a notable departure from the Western trend of fitting IFVs with automatic cannon, which were more suitable for engaging low-flying aircraft, light armor, and dismounted personnel. About 20,000 BMP-1s were produced in the Soviet Union from 1966 to 1983, at which time it was regarded as the most ubiquitous IFV in the world. In Soviet service, the BMP-1 was ultimately superseded by the more sophisticated BMP-2 (in service from 1980) and the BMP-3 (in service from 1987). A similar vehicle known as the BMD-1 was designed to accompany Soviet airborne infantry and for a number of years was the world's only airborne IFV. In 1971 the Bundeswehr adopted the Marder, which became increasingly heavily armored through its successive marks and, like the BMP, was later fitted as standard with a launcher for anti-tank guided missiles. Between 1973 and 1975, the French and Yugoslav armies developed the AMX-10P and BVP M-80, respectively, which were the first amphibious IFVs to appear outside the Soviet Union. The Marder, AMX-10P, and M-80 were all armed with similar 20 mm autocannon and carried seven to eight passengers. 
They could also be armed with various anti-tank missile configurations. Wheeled IFVs did not begin appearing until 1976, when the Ratel was introduced in response to a South African Army specification for a wheeled combat vehicle suited to the demands of rapid offensives combining maximum firepower and strategic mobility. Unlike European IFVs, the Ratel was not designed to allow mounted infantrymen to fight in concert with tanks but rather to operate independently across vast distances. South African officials chose a very simple, economical design because it helped reduce the significant logistical commitment necessary to keep heavier combat vehicles operational in undeveloped areas. Excessive track wear was also an issue in the region's abrasive, sandy terrain, making the Ratel's wheeled configuration more attractive. The Ratel was typically armed with a 20 mm autocannon featuring what was then a unique twin-linked ammunition feed, allowing its gunner to rapidly switch between armor-piercing and high-explosive ammunition. Other variants were also fitted with mortars, a bank of anti-tank guided missiles, or a 90 mm cannon. Most notably, the Ratel was the first mine-protected IFV; it had a blastproof hull and was built to withstand the explosive force of the anti-tank mines favored by local insurgents. Like the BMP-1, the Ratel proved to be a major watershed in IFV development, albeit for different reasons: until its debut, wheeled IFV designs had been evaluated unfavorably, since they lacked the weight-carrying capacity and off-road mobility of tracked vehicles, and their wheels were more vulnerable to hostile fire. However, during the 1970s, improvements in power trains, suspension technology, and tires had increased the potential strategic mobility of wheeled vehicles. Reduced production, operation, and maintenance costs also helped make wheeled IFVs attractive to several nations. During the late 1960s and early 1970s, the US Army had gradually abandoned its attempts to utilize the M113 as an IFV and refocused on creating a dedicated IFV design able to match the BMP. Although considered reliable, the M113 chassis did not meet the necessary requirements for protection or stealth. The US also considered the M113 too heavy and slow to serve as an IFV capable of keeping pace with tanks. Its MICV-65 program produced a number of unique prototypes, none of which were accepted for service owing to concerns about speed, armor protection, and weight. US Army evaluation staff were sent to Europe to review the AMX-10P and the Marder, both of which were rejected due to high cost, insufficient armor, or lackluster amphibious capabilities. In 1973, the FMC Corporation developed and tested the XM723, a 21-ton tracked chassis that could accommodate three crew members and eight passengers. It initially carried a single 20 mm autocannon in a one-man turret, but in 1976 a two-man turret was introduced; this carried a 25 mm autocannon, a co-axial machine gun, and a TOW anti-tank missile launcher. The XM723 possessed amphibious capability, nine firing ports, and spaced laminate armor on its hull. It was accepted for service with the US Army in 1980 as the Bradley Fighting Vehicle. Successive variants have been retrofitted with improved missile systems, gas particulate filter systems, Kevlar spall liners, and increased stowage. The amount of space taken up by the hull and stowage modifications has reduced the number of passengers to six. 
By 1982, 30,000 IFVs had entered service worldwide, and the IFV concept appeared in the doctrines of 30 national armies. The popularity of the IFV was increased by the growing trend on the part of many nations to mechanize armies previously dominated by light infantry. However, contrary to expectation, the IFV did not render APCs obsolete. The US, Russian, French, and German armies have all retained large fleets of IFVs and APCs, finding the APC more suitable for multi-purpose or auxiliary roles. The British Army was one of the few Western armies which had neither recognized a niche for IFVs nor adopted a dedicated IFV design by the late 1970s. In 1980, it made the decision to adopt a new tracked armored vehicle, the FV510 Warrior. While normally classified as an IFV, the Warrior fills the role of an APC in British service, and infantrymen do not remain embarked during combat. The role of the IFV is closely linked to mechanized infantry doctrine. While some IFVs are armed with an organic direct-fire gun or anti-tank guided missiles for close infantry support, they are not intended to assault armored and mechanized forces on their own, whether their infantry are mounted or dismounted. Rather, the IFV's role is to give an infantry unit battlefield, tactical, and operational mobility during combined arms operations. Most IFVs complement tanks as part of an armored battalion, brigade, or division; others perform traditional infantry missions supported by tanks. Early development of IFVs in a number of Western nations was promoted primarily by armor officers who wanted to integrate tanks with supporting infantry in armored divisions. There were a few exceptions to the rule: for example, the Bundeswehr's decision to adopt the SPz 12-3 was largely due to the experiences of Wehrmacht panzergrenadiers who had been inappropriately ordered to undertake combat operations better suited for armor. Hence, the Bundeswehr concluded that infantry should only fight while mounted in their own armored vehicles, ideally supported by tanks. This doctrinal trend was later subsumed into the armies of other Western nations, including the US, leading to the widespread conclusion that IFVs should be confined largely to assisting the forward momentum of tanks. The Soviet Army granted more flexibility in this regard to its IFV doctrine, allowing for the mechanized infantry to occupy terrain that compromised an enemy defense, carry out flanking movements, or lure armor into ill-advised counterattacks. While they still performed an auxiliary role to tanks, the notion of using IFVs in these types of engagements dictated that they be heavily armed, which was reflected in the BMP-1 and its successors. Additionally, Soviet airborne doctrine made use of the BMD series of IFVs to operate in concert with paratroops rather than traditional mechanized or armored formations. IFVs assumed a new significance after the Yom Kippur War. In addition to heralding the combat debut of the BMP-1, that conflict demonstrated the newfound significance of anti-tank guided missiles and the obsolescence of independent armored attacks. More emphasis was placed on combined arms offensives, and the importance of mechanized infantry to support tanks reemerged. As a result of the Yom Kippur War, the Soviet Union attached more infantry to its armored formations and the US accelerated its long-delayed IFV development program. 
An IFV capable of accompanying tanks for the purpose of suppressing anti-tank weapons and the hostile infantry which operated them was seen as necessary to avoid the devastation wreaked on purely armored Israeli formations. The US Army defines all vehicles classed as IFVs as having three essential characteristics: they are armed with at least a medium-caliber cannon or automatic grenade launcher, they are protected at least against small-arms fire, and they possess off-road mobility. It also identifies all IFVs as having some characteristics of an APC and a light tank. The United Nations Register of Conventional Arms (UNROCA) simply defines an IFV as any armored vehicle "designed to fight with soldiers on board" and "to accompany tanks". UNROCA makes a clear distinction between IFVs and APCs, as the former's primary mission is combat rather than general transport. All IFVs possess armored hulls protected against rifle and machine gun fire, and some are equipped with active protection systems. Most have lighter armor than main battle tanks to ensure mobility. Armies have generally accepted the risk of reduced protection in order to capitalize on an IFV's mobility, weight, and speed. Their fully enclosed hulls offer protection from artillery fragments and residual environmental contaminants, as well as limiting the mounted infantry's exposure time during extended movements over open ground. Many IFVs also have sharply angled hulls that offer a relatively high degree of protection for their armor thickness. The BMP, Boragh, BVP M-80, and their respective variants all possess steel hulls with a distribution of armor and steep angling that protect them during frontal advances. The BMP-1 was vulnerable to heavy machine guns at close range on its flanks or rear, leading to a variety of more heavily armored marks appearing from 1979 onward. The Bradley possessed a lightweight aluminum alloy hull, which in most successive marks has been bolstered by the addition of explosive reactive and slat armor, spaced laminate belts, and steel track skirts. Throughout its life cycle, an IFV is expected to gain 30% more weight from armor additions. As asymmetric conflicts become more common, an increasing concern with regard to IFV protection has been adequate countermeasures against land mines and improvised explosive devices. During the Iraq War, inadequate mine protection in US Bradleys forced their crews to resort to makeshift strategies such as lining the hull floors with sandbags. A few IFVs, such as the Ratel, have been specifically engineered to resist mine explosions. IFVs are equipped with turrets carrying autocannons of calibers between 20 mm and 57 mm, low- or medium-velocity tank guns of 73 mm to 100 mm, anti-tank guided missiles, or automatic grenade launchers. With a few exceptions, such as the BMP-1 and the BMP-3, designs such as the Marder and the BMP-2 have set the trend of arming IFVs with an autocannon suitable for use against lightly armored vehicles, low-flying aircraft, and dismounted infantry. This reflected the growing inclination to view IFVs as auxiliaries of armored formations: a small or medium caliber autocannon was perceived as an ideal suppressive weapon to complement large caliber tank fire. IFVs armed with miniature tank guns did not prove popular because many of the roles they were expected to perform were better performed by accompanying tanks. 
The BMP-1, which was the first IFV to carry a relatively large cannon, came under criticism during the Yom Kippur War for its mediocre individual accuracy, due in part to the low velocities of its projectiles. During the Soviet–Afghan War, BMP-1 crews also complained that their armament lacked the elevation necessary to engage insurgents in mountainous terrain. The effectiveness of large caliber, low-velocity guns like the 2A28 Grom on the BMP-1 and BMD-1 was also much reduced by the appearance of Chobham armor on Western tanks. The Ratel, which included a variant armed with a 90 mm low-velocity gun, was utilized in South African combat operations against Angolan and Cuban armored formations during the South African Border War, with mixed results. Although the Ratels succeeded in destroying a large number of Angolan tanks and APCs, they were hampered by many of the same problems as the BMP-1: mediocre standoff ranges, inferior fire control, and the lack of a stabilized main gun. The Ratels' heavy armament also tempted South African commanders to utilize them as light tanks rather than in their intended role of infantry support. Another design feature of the BMP-1 did prove more successful in establishing a precedent for future IFVs: its inclusion of an anti-tank missile system. This consisted of a rail launcher firing 9M14 Malyutka missiles, which had to be reloaded manually from outside the BMP's turret. Crew members had to expose themselves to enemy fire to reload the missiles, and they could not guide them effectively from inside the confines of the turret space. The BMP-2 and later variants of the BMP-1 made use of missiles with semi-automatic guidance systems. In 1978, the Bundeswehr became the first Western army to embrace this trend when it retrofitted all its Marders with launchers for MILAN anti-tank missiles. The US Army added a launcher for TOW anti-tank missiles to its fleet of Bradleys, despite the fact that this greatly reduced the interior space available for seating the embarked infantry. This was justified on the basis that the Bradley needed not only to engage and destroy other IFVs, but also to support tanks in the destruction of other tanks during combined arms operations. IFVs are designed to have the strategic and tactical mobility necessary to keep pace with tanks during rapid maneuvers. Some, like the BMD series, have airborne and amphibious capabilities. IFVs may be either wheeled or tracked; tracked IFVs are usually more heavily armored and possess greater carrying capacity. Wheeled IFVs are cheaper and simpler to produce, maintain, and operate. From a logistical perspective, they are also ideal for an army without widespread access to transporters or a developed rail network to deploy its armor.
https://en.wikipedia.org/wiki?curid=15166
ICQ ICQ is a cross-platform messenger and VoIP client. The name ICQ derives from the English phrase "I Seek You". Originally developed by the Israeli company Mirabilis in 1996, the client was bought by AOL in 1998, and then by Mail.Ru Group in 2010. The ICQ client application and service were initially released in November 1996, freely available to download. ICQ was among the first stand-alone instant messaging (IM) clients — while real-time chat was not in itself new (Internet Relay Chat (IRC) being the most common platform at the time), the concept of a fully centralized service with individual user accounts focused on one-on-one conversations set the blueprint for later instant messaging services like AIM, and its influence is seen in modern social media applications. ICQ became the first widely adopted IM platform. At its peak around 2001, ICQ had more than 100 million accounts registered. At the time of the Mail.Ru acquisition in 2010, there were around 42 million daily users. Since 2013, ICQ has had around 11 million monthly users. In 2020, Mail.Ru Group, which owns ICQ, decided to launch its new ICQ New software, based on its own messenger. The updated messenger was presented to the general public on April 6, 2020. Private chats are conversations between two users. Thanks to cloud synchronization, a chat can be accessed from any device once the user logs into the account. A user can delete a sent message at any time, both from their own chat and from their conversation partner's, in which case a notification indicating that the message has been deleted is received instead. Any important messages from group or private chats, as well as an unlimited number and size of media content, can be sent to the conversation with oneself. Essentially, this chat acts as free cloud storage. Group chats are special chats that can hold up to 25,000 participants at the same time. Any user can create a group. A user can hide their phone number from other participants; there is an advanced polling feature; there is the possibility to see which group members have read a message; and notifications can be switched off for messages from specific group members. Channels are an alternative to blogs. Channel authors can publish posts as text messages and also attach media files. Once a post is published, subscribers receive a notification as they would from regular and group chats. The channel author can remain anonymous and does not have to show any information in the channel description. A special bot API is available and can be used by anyone to create a bot, i.e. a small program which performs specific actions and interacts with the user. Bots can be used in a variety of ways, ranging from entertainment to business services. Stickers (small images or photos expressing some form of emotion) are available to make communication via the application more emotive and personalized. Users can use the sticker library already available or upload their own. In addition, thanks to machine learning, the software will itself recommend stickers during a conversation. Masks are images that are superimposed onto the camera image in real time. They can be used during video calls, superimposed onto photos, and sent to other users. A nickname is a name made up by a user. It can replace a phone number when searching for and adding a contact. By using a nickname, users can share their contact details without providing a phone number. Smart answers are short phrases that appear above the message box and can be used to answer messages. 
ICQ New analyzes the contents of a conversation and suggests a few pre-set answers. ICQ New also makes it possible to send audio messages. However, for people who do not want to or cannot listen to the audio, the audio can be automatically transcribed into text: the user simply clicks the relevant button and sees the message in text form. Aside from text messaging, users can call each other as well as arrange audio or video calls for up to five people. During video calls, AR masks can be used. ICQ users are identified and distinguished from one another by UINs, or User Identification Numbers, distributed in sequential order. The UIN was invented by Mirabilis as the user name assigned to each user upon registration. Issued UINs started at 10,000 (5 digits), and every user receives a UIN when first registering with ICQ. As of ICQ 6, users are also able to log in using the specific e-mail address they associated with their UIN during the initial registration process. Unlike other instant messaging software or web applications, on ICQ the only permanent user information is the UIN, although it is possible to search for other users using their associated e-mail address or any other detail they have made public by updating it in their account's public profile. In addition, a user can change all of his or her personal information, including screen name and e-mail address, without having to re-register. Since 2000, ICQ and AIM users have been able to add each other to their contact lists without the need for any external clients. (The AIM service has since been discontinued.) As a response to UIN theft and the sale of attractive UINs, ICQ started to store the email addresses previously associated with a UIN; as such, stolen UINs can sometimes be reclaimed. This applies only if a valid primary email address was entered into the user profile (from 1999 onwards). The founding company of ICQ, Mirabilis, was established in June 1996 by five Israeli developers: Yair Goldfinger, Sefi Vigiser, Amnon Amir, Arik Vardi, and Arik's father Yossi Vardi. They recognized that many people were accessing the internet through non-UNIX operating systems, such as Microsoft Windows, and those users were unfamiliar with established chat technologies, e.g. IRC. The technology Mirabilis developed for ICQ was distributed free of charge. The technology's success encouraged AOL to acquire Mirabilis on June 8, 1998, for $287 million up front and $120 million in additional payments over three years based on performance levels. At the time, this was the highest price ever paid to purchase an Israeli technology company. In 2002, AOL successfully patented the technology. After the purchase, the product was initially managed by Ariel Yarnitsky and Avi Shechter. ICQ's management changed at the end of 2003. Under the leadership of the new CEO, Orey Gilliam, who also assumed responsibility for all of AOL's messaging business in 2007, ICQ resumed its growth; it was not only highly profitable but also one of AOL's most successful businesses. Eliav Moshe replaced Gilliam in 2009 and became ICQ's managing director. In April 2010, AOL sold ICQ to Digital Sky Technologies, headed by Alisher Usmanov, for $187.5 million. While ICQ was displaced by AOL Instant Messenger, Google Talk, and other competitors in the U.S. and many other countries over the 2000s, it remained the most popular instant messaging network in Russian-speaking countries and an important part of online culture. Popular UINs commanded prices of over 11,000 rubles in 2010. 
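The sequential UIN scheme described above amounts to a monotonically increasing identifier counter paired with a mutable profile store. The following minimal Python sketch illustrates the idea; the class and method names are hypothetical, and only the documented details (sequential issue, a starting value of 10,000, a permanent UIN with freely changeable profile fields, and a stored e-mail used to reclaim stolen UINs) come from the description above.

# Illustrative sketch of a sequential UIN registry -- hypothetical names,
# not ICQ's actual server implementation.
class UinRegistry:
    FIRST_UIN = 10_000  # issued UINs started at 10,000 (5 digits)

    def __init__(self):
        self._next_uin = self.FIRST_UIN
        self._profiles = {}  # UIN -> profile; only the UIN is permanent

    def register(self, nickname, email=None):
        """Create an account and return its permanent, sequential UIN."""
        uin = self._next_uin
        self._next_uin += 1
        # Storing the e-mail lets a stolen UIN be reclaimed later.
        self._profiles[uin] = {"nickname": nickname, "email": email}
        return uin

    def update_profile(self, uin, **fields):
        """Change personal details freely; the UIN itself never changes."""
        self._profiles[uin].update(fields)

registry = UinRegistry()
print(registry.register("alice", email="alice@example.com"))  # 10000
print(registry.register("bob"))                               # 10001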
In September 2010, Digital Sky Technologies changed its name to Mail.Ru Group. Since the acquisition, Mail.ru has invested in turning ICQ from a desktop client into a mobile messaging system. As of 2013, around half of ICQ's users were using its mobile apps, and in 2014, the number of users began growing for the first time since the purchase. In March 2016, the source code of the client was released under the Apache license on github.com. AOL pursued an aggressive policy regarding alternative ("unauthorized") ICQ clients. On icq.com, an "important message" for Russian-speaking ICQ users, titled "System Message" ("Системное сообщение"), stated: "ICQ supports only authorized versions of the programs: ICQ Lite and ICQ 6.5. From December 28, we will no longer support old versions of ICQ and other unofficial applications. To continue your conversations, you need to update your ICQ here: https://icq.com You can also use the web version here: https://web.icq.com With the new version of ICQ you can: edit and delete already sent messages; quote and forward messages to another chat; send stickers; search through chat history and view previously sent media in the chat gallery; create group chats; make voice and video calls." According to a Novaya Gazeta article published in May 2018, Russian intelligence agencies have access to online reading of ICQ users' correspondence. The article examined 34 sentences handed down by Russian courts in which the evidence of the defendants' guilt was obtained by reading correspondence on a PC or mobile devices. Of the fourteen cases in which ICQ was involved, in six the capture of information occurred before the seizure of the device. The occasion for the article was the blocking of the Telegram service and the recommendation by Herman Klimenko, Advisor to the President of the Russian Federation, to use ICQ instead. AOL's OSCAR network protocol used by ICQ is proprietary, and using a third-party client is a violation of ICQ's Terms of Service. Nevertheless, a number of third-party clients have been created by using reverse engineering and protocol descriptions. These clients include: AOL supported clients include:
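The bot API mentioned in the feature overview above follows the common pattern of chat-bot interfaces: a small program polls the service for new events and replies through an HTTP endpoint. The Python sketch below shows that general shape; the base URL, paths, parameter names, and response fields are illustrative assumptions for the sketch, not the documented ICQ API.

# Hypothetical echo bot illustrating the general shape of a chat-bot API.
# The base URL, paths, and field names below are assumptions, not the
# actual ICQ bot API.
import requests

API_BASE = "https://example.invalid/bot/v1"  # placeholder endpoint
TOKEN = "YOUR-BOT-TOKEN"                     # issued when the bot is created

def get_events(last_event_id):
    """Long-poll the service for new events (messages sent to the bot)."""
    resp = requests.get(f"{API_BASE}/events/get",
                        params={"token": TOKEN,
                                "lastEventId": last_event_id,
                                "pollTime": 30})
    resp.raise_for_status()
    return resp.json().get("events", [])

def send_text(chat_id, text):
    """Send a text message back into the chat."""
    requests.get(f"{API_BASE}/messages/sendText",
                 params={"token": TOKEN, "chatId": chat_id, "text": text})

def run():
    last_event_id = 0
    while True:
        for event in get_events(last_event_id):
            last_event_id = event["eventId"]
            if event.get("type") == "newMessage":
                payload = event["payload"]
                # Echo the incoming text back to the sender's chat.
                send_text(payload["chat"]["chatId"], payload["text"])

if __name__ == "__main__":
    run()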
https://en.wikipedia.org/wiki?curid=15167
Impressionism Impressionism is a 19th-century art movement characterized by relatively small, thin, yet visible brush strokes, open composition, emphasis on accurate depiction of light in its changing qualities (often accentuating the effects of the passage of time), ordinary subject matter, inclusion of "movement" as a crucial element of human perception and experience, and unusual visual angles. Impressionism originated with a group of Paris-based artists whose independent exhibitions brought them to prominence during the 1870s and 1880s. The Impressionists faced harsh opposition from the conventional art community in France. The name of the style derives from the title of a Claude Monet work, "Impression, soleil levant" ("Impression, Sunrise"), which provoked the critic Louis Leroy to coin the term in a satirical review published in the Parisian newspaper "Le Charivari". The development of Impressionism in the visual arts was soon followed by analogous styles in other media that became known as impressionist music and impressionist literature. Radicals in their time, early Impressionists violated the rules of academic painting. They constructed their pictures from freely brushed colours that took precedence over lines and contours, following the example of painters such as Eugène Delacroix and J. M. W. Turner. They also painted realistic scenes of modern life, and often painted outdoors. Previously, still lifes and portraits as well as landscapes were usually painted in a studio. The Impressionists found that they could capture the momentary and transient effects of sunlight by painting outdoors or "en plein air". They portrayed overall visual effects instead of details, and used short "broken" brush strokes of mixed and pure unmixed colour—not blended smoothly or shaded, as was customary—to achieve an effect of intense colour vibration. Impressionism emerged in France at the same time that a number of other painters, including the Italian artists known as the Macchiaioli, and Winslow Homer in the United States, were also exploring "plein-air" painting. The Impressionists, however, developed new techniques specific to the style. Encompassing what its adherents argued was a different way of seeing, it is an art of immediacy and movement, of candid poses and compositions, of the play of light expressed in a bright and varied use of colour. The public, at first hostile, gradually came to believe that the Impressionists had captured a fresh and original vision, even if the art critics and art establishment disapproved of the new style. By recreating the sensation in the eye that views the subject, rather than delineating the details of the subject, and by creating a welter of techniques and forms, Impressionism is a precursor of various painting styles, including Neo-Impressionism, Post-Impressionism, Fauvism, and Cubism. In the middle of the 19th century—a time of change, as Emperor Napoleon III rebuilt Paris and waged war—the Académie des Beaux-Arts dominated French art. The Académie was the preserver of traditional French painting standards of content and style. Historical subjects, religious themes, and portraits were valued; landscape and still life were not. The Académie preferred carefully finished images that looked realistic when examined closely. Paintings in this style were made up of precise brush strokes carefully blended to hide the artist's hand in the work. Colour was restrained and often toned down further by the application of a golden varnish. 
The Académie had an annual, juried art show, the Salon de Paris, and artists whose work was displayed in the show won prizes, garnered commissions, and enhanced their prestige. The standards of the juries represented the values of the Académie, represented by the works of such artists as Jean-Léon Gérôme and Alexandre Cabanel. In the early 1860s, four young painters—Claude Monet, Pierre-Auguste Renoir, Alfred Sisley, and Frédéric Bazille—met while studying under the academic artist Charles Gleyre. They discovered that they shared an interest in painting landscape and contemporary life rather than historical or mythological scenes. Following a practice that had become increasingly popular by mid-century, they often ventured into the countryside together to paint in the open air, but not for the purpose of making sketches to be developed into carefully finished works in the studio, as was the usual custom. By painting in sunlight directly from nature, and making bold use of the vivid synthetic pigments that had become available since the beginning of the century, they began to develop a lighter and brighter manner of painting that extended further the Realism of Gustave Courbet and the Barbizon school. A favourite meeting place for the artists was the Café Guerbois on Avenue de Clichy in Paris, where the discussions were often led by Édouard Manet, whom the younger artists greatly admired. They were soon joined by Camille Pissarro, Paul Cézanne, and Armand Guillaumin. During the 1860s, the Salon jury routinely rejected about half of the works submitted by Monet and his friends in favour of works by artists faithful to the approved style. In 1863, the Salon jury rejected Manet's "The Luncheon on the Grass" "(Le déjeuner sur l'herbe)" primarily because it depicted a nude woman with two clothed men at a picnic. While the Salon jury routinely accepted nudes in historical and allegorical paintings, they condemned Manet for placing a realistic nude in a contemporary setting. The jury's severely worded rejection of Manet's painting appalled his admirers, and the unusually large number of rejected works that year perturbed many French artists. After Emperor Napoleon III saw the rejected works of 1863, he decreed that the public be allowed to judge the work themselves, and the Salon des Refusés (Salon of the Refused) was organized. While many viewers came only to laugh, the Salon des Refusés drew attention to the existence of a new tendency in art and attracted more visitors than the regular Salon. Artists' petitions requesting a new Salon des Refusés in 1867, and again in 1872, were denied. In December 1873, Monet, Renoir, Pissarro, Sisley, Cézanne, Berthe Morisot, Edgar Degas and several other artists founded the "Société Anonyme Coopérative des Artistes Peintres, Sculpteurs, Graveurs" ("Cooperative and Anonymous Association of Painters, Sculptors, and Engravers") to exhibit their artworks independently. Members of the association were expected to forswear participation in the Salon. The organizers invited a number of other progressive artists to join them in their inaugural exhibition, including the older Eugène Boudin, whose example had first persuaded Monet to adopt "plein air" painting years before. Another painter who greatly influenced Monet and his friends, Johan Jongkind, declined to participate, as did Édouard Manet. In total, thirty artists participated in their first exhibition, held in April 1874 at the studio of the photographer Nadar. The critical response was mixed. 
Monet and Cézanne received the harshest attacks. Critic and humorist Louis Leroy wrote a scathing review in the newspaper "Le Charivari" in which, making wordplay with the title of Claude Monet's "Impression, Sunrise" "(Impression, soleil levant)", he gave the artists the name by which they became known. Derisively titling his article "The Exhibition of the Impressionists", Leroy declared that Monet's painting was at most a sketch and could hardly be termed a finished work. He wrote, in the form of a dialog between viewers. The term "Impressionist" quickly gained favour with the public. It was also accepted by the artists themselves, even though they were a diverse group in style and temperament, unified primarily by their spirit of independence and rebellion. They exhibited together—albeit with shifting membership—eight times between 1874 and 1886. The Impressionists' style, with its loose, spontaneous brushstrokes, would soon become synonymous with modern life. Monet, Sisley, Morisot, and Pissarro may be considered the "purest" Impressionists, in their consistent pursuit of an art of spontaneity, sunlight, and colour. Degas rejected much of this, as he believed in the primacy of drawing over colour and belittled the practice of painting outdoors. Renoir turned away from Impressionism for a time during the 1880s, and never entirely regained his commitment to its ideas. Édouard Manet, although regarded by the Impressionists as their leader, never abandoned his liberal use of black as a colour (while the Impressionists avoided its use and preferred to obtain darker colours by mixing), and never participated in the Impressionist exhibitions. He continued to submit his works to the Salon, where his painting "Spanish Singer" had won a second-class medal in 1861, and he urged the others to do likewise, arguing that "the Salon is the real field of battle" where a reputation could be made. Among the artists of the core group (minus Bazille, who had died in the Franco-Prussian War in 1870), defections occurred as Cézanne, followed later by Renoir, Sisley, and Monet, abstained from the group exhibitions so they could submit their works to the Salon. Disagreements arose from issues such as Guillaumin's membership in the group, championed by Pissarro and Cézanne against opposition from Monet and Degas, who thought him unworthy. Degas invited Mary Cassatt to display her work in the 1879 exhibition, but also insisted on the inclusion of Jean-François Raffaëlli, Ludovic Lepic, and other realists who did not represent Impressionist practices, causing Monet in 1880 to accuse the Impressionists of "opening doors to first-come daubers". The group divided over invitations to Paul Signac and Georges Seurat to exhibit with them in 1886. Pissarro was the only artist to show at all eight Impressionist exhibitions. The individual artists achieved few financial rewards from the Impressionist exhibitions, but their art gradually won a degree of public acceptance and support. Their dealer, Durand-Ruel, played a major role in this as he kept their work before the public and arranged shows for them in London and New York. Although Sisley died in poverty in 1899, Renoir had a great Salon success in 1879. Monet became secure financially during the early 1880s, and so did Pissarro by the early 1890s. By this time the methods of Impressionist painting, in a diluted form, had become commonplace in Salon art. 
French painters who prepared the way for Impressionism include the Romantic colourist Eugène Delacroix, the leader of the realists Gustave Courbet, and painters of the Barbizon school such as Théodore Rousseau. The Impressionists learned much from the work of Johan Barthold Jongkind, Jean-Baptiste-Camille Corot and Eugène Boudin, who painted from nature in a direct and spontaneous style that prefigured Impressionism, and who befriended and advised the younger artists. A number of identifiable techniques and working habits contributed to the innovative style of the Impressionists. Although these methods had been used by previous artists—and are often conspicuous in the work of artists such as Frans Hals, Diego Velázquez, Peter Paul Rubens, John Constable, and J. M. W. Turner—the Impressionists were the first to use them all together, and with such consistency. These techniques include: New technology played a role in the development of the style. Impressionists took advantage of the mid-century introduction of premixed paints in tin tubes (resembling modern toothpaste tubes), which allowed artists to work more spontaneously, both outdoors and indoors. Previously, painters made their own paints individually, by grinding and mixing dry pigment powders with linseed oil, which were then stored in animal bladders. Many vivid synthetic pigments became commercially available to artists for the first time during the 19th century. These included cobalt blue, viridian, cadmium yellow, and synthetic ultramarine blue, all of which were in use by the 1840s, before Impressionism. The Impressionists' manner of painting made bold use of these pigments, and of even newer colours such as cerulean blue, which became commercially available to artists in the 1860s. The Impressionists' progress toward a brighter style of painting was gradual. During the 1860s, Monet and Renoir sometimes painted on canvases prepared with the traditional red-brown or grey ground. By the 1870s, Monet, Renoir, and Pissarro usually chose to paint on grounds of a lighter grey or beige colour, which functioned as a middle tone in the finished painting. By the 1880s, some of the Impressionists had come to prefer white or slightly off-white grounds, and no longer allowed the ground colour a significant role in the finished painting. Prior to the Impressionists, other painters, notably such 17th-century Dutch painters as Jan Steen, had emphasized common subjects, but their methods of composition were traditional. They arranged their compositions so that the main subject commanded the viewer's attention. J. M. W. Turner, while an artist of the Romantic era, anticipated the style of Impressionism with his artwork. The Impressionists relaxed the boundary between subject and background so that the effect of an Impressionist painting often resembles a snapshot, a part of a larger reality captured as if by chance. Photography was gaining popularity, and as cameras became more portable, photographs became more candid. Photography inspired Impressionists to represent momentary action, not only in the fleeting lights of a landscape, but in the day-to-day lives of people. The development of Impressionism can be considered partly as a reaction by artists to the challenge presented by photography, which seemed to devalue the artist's skill in reproducing reality. Both portrait and landscape paintings were deemed somewhat deficient and lacking in truth as photography "produced lifelike images much more efficiently and reliably". 
In spite of this, photography actually inspired artists to pursue other means of creative expression, and rather than compete with photography to emulate reality, artists focused "on the one thing they could inevitably do better than the photograph—by further developing into an art form its very subjectivity in the conception of the image, the very subjectivity that photography eliminated". The Impressionists sought to express their perceptions of nature, rather than create exact representations. This allowed artists to depict subjectively what they saw with their "tacit imperatives of taste and conscience". Photography encouraged painters to exploit aspects of the painting medium, like colour, which photography then lacked: "The Impressionists were the first to consciously offer a subjective alternative to the photograph". Another major influence was Japanese ukiyo-e art prints (Japonism). The art of these prints contributed significantly to the "snapshot" angles and unconventional compositions that became characteristic of Impressionism. An example is Monet's "Jardin à Sainte-Adresse", 1867, with its bold blocks of colour and composition on a strong diagonal slant showing the influence of Japanese prints. Edgar Degas was both an avid photographer and a collector of Japanese prints. His "The Dance Class" "(La classe de danse)" of 1874 shows both influences in its asymmetrical composition. The dancers are seemingly caught off guard in various awkward poses, leaving an expanse of empty floor space in the lower right quadrant. He also captured his dancers in sculpture, such as the "Little Dancer of Fourteen Years". Impressionists, in varying degrees, were looking for ways to depict visual experience and contemporary subjects. Women Impressionists were interested in these same ideals but had many social and career limitations compared to male Impressionists. In particular, they were excluded from the imagery of the bourgeois social sphere of the boulevard, cafe, and dance hall. As well as imagery, women were excluded from the formative discussions that resulted in meetings in those places; that was where male Impressionists were able to form and share ideas about Impressionism. In the academic realm, women were believed to be incapable of handling complex subjects, which led teachers to restrict what they taught female students. It was also considered unladylike to excel in art, since women's true talents were then believed to center on homemaking and mothering. Yet several women were able to find success during their lifetime, even though their careers were affected by personal circumstances – Bracquemond, for example, had a husband who was resentful of her work, which caused her to give up painting. The four most well known, namely Mary Cassatt, Eva Gonzalès, Marie Bracquemond, and Berthe Morisot, are, and were, often referred to as the 'Women Impressionists'. Their participation in the series of eight Impressionist exhibitions that took place in Paris from 1874 to 1886 varied: Morisot participated in seven, Cassatt in four, Bracquemond in three, and Gonzalès did not participate. The critics of the time lumped these four together without regard to their personal styles, techniques, or subject matter. Critics viewing their works at the exhibitions often attempted to acknowledge the women artists' talents but circumscribed them within a limited notion of femininity. Arguing for the suitability of Impressionist technique to women's manner of perception, Parisian critic S.C. 
de Soissons wrote: "One can understand that women have no originality of thought, and that literature and music have no feminine character; but surely women know how to observe, and what they see is quite different from that which men see, and the art which they put in their gestures, in their toilet, in the decoration of their environment is sufficient to give us the idea of an instinctive, of a peculiar genius which resides in each one of them." While Impressionism legitimized the domestic social life as subject matter, of which women had intimate knowledge, it also tended to limit them to that subject matter. Portrayals of often-identifiable sitters in domestic settings (which could offer commissions) were dominant in the exhibitions. The subjects of the paintings were often women interacting with their environment by either their gaze or movement. Cassatt, in particular, was aware of her placement of subjects: she kept her predominantly female figures from objectification and cliché; when they are not reading, they converse, sew, drink tea, and when they are inactive, they seem lost in thought. The women Impressionists, like their male counterparts, were striving for "truth", for new ways of seeing and new painting techniques; each artist had an individual painting style. Women Impressionists (particularly Morisot and Cassatt) were conscious of the balance of power between women and objects in their paintings – the bourgeois women depicted are not defined by decorative objects, but instead interact with and dominate the things with which they live. There are many similarities in their depictions of women who seem both at ease and subtly confined. Gonzalès' "Box at the Italian Opera" depicts a woman staring into the distance, at ease in a social sphere but confined by the box and the man standing next to her. Cassatt's painting "Young Girl at a Window" is brighter in color but remains constrained by the canvas edge as she looks out the window. Despite their success in establishing careers, and despite Impressionism's demise being attributed to its allegedly feminine characteristics (its sensuality, dependence on sensation, physicality, and fluidity), the four women artists (and other, lesser-known women Impressionists) were largely omitted from art historical textbooks covering Impressionist artists until Tamar Garb's "Women Impressionists", published in 1986. For example, "Impressionism" by Jean Leymarie, published in 1955, included no information on any women Impressionists. The central figures in the development of Impressionism in France, listed alphabetically, were: Among the close associates of the Impressionists were several painters who adopted their methods to some degree. These include Jean-Louis Forain (who participated in Impressionist exhibitions in 1879, 1880, 1881 and 1886) and Giuseppe De Nittis, an Italian artist living in Paris who participated in the first Impressionist exhibit at the invitation of Degas, although the other Impressionists disparaged his work. Federico Zandomeneghi was another Italian friend of Degas who showed with the Impressionists. Eva Gonzalès was a follower of Manet who did not exhibit with the group. James Abbott McNeill Whistler was an American-born painter who played a part in Impressionism although he did not join the group and preferred grayed colours. Walter Sickert, an English artist, was initially a follower of Whistler, and later an important disciple of Degas; he did not exhibit with the Impressionists. 
In 1904 the artist and writer Wynford Dewhurst wrote the first important study of the French painters published in English, "Impressionist Painting: its genesis and development", which did much to popularize Impressionism in Great Britain. By the early 1880s, Impressionist methods were affecting, at least superficially, the art of the Salon. Fashionable painters such as Jean Béraud and Henri Gervex found critical and financial success by brightening their palettes while retaining the smooth finish expected of Salon art. Works by these artists are sometimes casually referred to as Impressionism, despite their remoteness from Impressionist practice. The influence of the French Impressionists lasted long after most of them had died. Artists like J.D. Kirszenbaum were borrowing Impressionist techniques throughout the twentieth century. As the influence of Impressionism spread beyond France, artists, too numerous to list, became identified as practitioners of the new style. Some of the more important examples are: The sculptor Auguste Rodin is sometimes called an Impressionist for the way he used roughly modeled surfaces to suggest transient light effects. Pictorialist photographers whose work is characterized by soft focus and atmospheric effects have also been called Impressionists. French Impressionist Cinema is a term applied to a loosely defined group of films and filmmakers in France from 1919 to 1929, although these years are debatable. French Impressionist filmmakers include Abel Gance, Jean Epstein, Germaine Dulac, Marcel L'Herbier, Louis Delluc, and Dmitry Kirsanoff. Musical Impressionism is the name given to a movement in European classical music that arose in the late 19th century and continued into the middle of the 20th century. Originating in France, musical Impressionism is characterized by suggestion and atmosphere, and eschews the emotional excesses of the Romantic era. Impressionist composers favoured short forms such as the nocturne, arabesque, and prelude, and often explored uncommon scales such as the whole tone scale. Perhaps the most notable innovations of Impressionist composers were the introduction of major 7th chords and the extension of chord structures in 3rds to five- and six-part harmonies. The influence of visual Impressionism on its musical counterpart is debatable. Claude Debussy and Maurice Ravel are generally considered the greatest Impressionist composers, but Debussy disavowed the term, calling it the invention of critics. Erik Satie was also considered in this category, though his approach was regarded as less serious and closer to musical novelty in nature. Paul Dukas is another French composer sometimes considered an Impressionist, but his style is perhaps more closely aligned to the late Romanticists. Musical Impressionism beyond France includes the work of such composers as Ottorino Respighi (Italy), Ralph Vaughan Williams, Cyril Scott, and John Ireland (England), Manuel de Falla and Isaac Albéniz (Spain), and Charles Griffes (America). The term Impressionism has also been used to describe works of literature in which a few select details suffice to convey the sensory impressions of an incident or scene. Impressionist literature is closely related to Symbolism, with its major exemplars being Baudelaire, Mallarmé, Rimbaud, and Verlaine. Authors such as Virginia Woolf, D.H. 
Lawrence, and Joseph Conrad have written works that are Impressionistic in the way that they describe, rather than interpret, the impressions, sensations and emotions that constitute a character's mental life. During the 1880s several artists began to develop different precepts for the use of colour, pattern, form, and line, derived from the Impressionist example: Vincent van Gogh, Paul Gauguin, Georges Seurat, and Henri de Toulouse-Lautrec. These artists were slightly younger than the Impressionists, and their work is known as post-Impressionism. Some of the original Impressionist artists also ventured into this new territory; Camille Pissarro briefly painted in a pointillist manner, and even Monet abandoned strict "plein air" painting. Paul Cézanne, who participated in the first and third Impressionist exhibitions, developed a highly individual vision emphasising pictorial structure, and he is more often called a post-Impressionist. Although these cases illustrate the difficulty of assigning labels, the work of the original Impressionist painters may, by definition, be categorised as Impressionism.
https://en.wikipedia.org/wiki?curid=15169
Internet slang Internet slang (also called Internet shorthand, cyber-slang, netspeak, or chatspeak) refers to various kinds of slang used by different people on the Internet. An example of Internet slang is "LOL", meaning "laugh out loud". It is difficult to provide a standardized definition of Internet slang due to the constant changes made to its nature. However, it can be understood to be any type of slang that Internet users have popularized and, in many cases, have coined. Such terms often originate with the purpose of saving keystrokes or to compensate for small character limits. Many people use the same abbreviations in texting, instant messaging, and social networking websites. Acronyms, keyboard symbols, and abbreviations are common types of Internet slang. New dialects of slang, such as leet or Lolspeak, develop as in-group Internet memes rather than time savers. Many people use Internet slang not only on the Internet but also face-to-face. Internet slang originated in the early days of the Internet, with some terms predating the Internet. Internet slang is used in chat rooms, social networking services, online games, video games, and in the online community. Since 1979, users of communications networks like Usenet have created their own shorthand. In Japanese, the term moe has come into common use among slang users to mean something "preciously cute" and appealing. Aside from the more frequent abbreviations, acronyms, and emoticons, Internet slang also uses archaic words or the lesser-known meanings of mainstream terms. Regular words can also be altered into something with a similar pronunciation but altogether different meaning, or attributed new meanings altogether. Phonetic transcriptions of foreign words, such as the transformation of "impossible" into "impossibru" in Japanese and then [the transliteration of that] back to [the character set used for] English, also occur. In places where logographic languages are used, such as China, a visual Internet slang exists, giving characters dual meanings, one direct and one implied. The primary motivation for using a slang unique to the Internet is to ease communication. However, while Internet slang shortcuts save time for the writer, they take twice as long for the reader to understand, according to a study by the University of Tasmania. On the other hand, similar to the use of slang in traditional face-to-face speech or written language, slang on the Internet is often a way of indicating group membership. Internet slang provides a channel which facilitates and constrains our ability to communicate in ways that are fundamentally different from those found in other semiotic situations. Many of the expectations and practices which we associate with spoken and written language are no longer applicable. The Internet itself is ideal for new slang to emerge because of the richness of the medium and the availability of information. Slang is thus also motivated by the "creation and sustenance of online communities". These communities, in turn, play a role in solidarity or identification, or in an exclusive or common cause. David Crystal distinguishes among five areas of the Internet where slang is used: the Web itself, email, asynchronous chat (for example, mailing lists), synchronous chat (for example, Internet Relay Chat), and virtual worlds. The electronic character of the channel has a fundamental influence on the language of the medium. 
Options for communication are constrained by the nature of the hardware needed in order to gain Internet access. Thus, productive linguistic capacity (the type of information that can be sent) is determined by the preassigned characters on a keyboard, and receptive linguistic capacity (the type of information that can be seen) is determined by the size and configuration of the screen. Additionally, both sender and receiver are constrained linguistically by the properties of the internet software, computer hardware, and networking hardware linking them. Electronic discourse refers to writing that "very often reads as if it were being spoken – that is, as if the sender were writing talking". Internet slang does not constitute a homogeneous language variety. Rather, it differs according to the user and type of Internet situation. However, within the language of Internet slang, there is still an element of prescriptivism, as seen in style guides, for example "Wired Style", which are specifically aimed at usage on the Internet. Even so, few users consciously heed these prescriptive recommendations on computer-mediated communication (CMC), but rather adapt their styles based on what they encounter online. Although it is difficult to produce a clear definition of Internet slang, several recurring types, such as the acronyms, keyboard symbols, and abbreviations noted above, may be observed; no such list is exhaustive. Debate about how the use of slang on the Internet influences language outside the digital sphere is ongoing. Even though the direct causal relationship between the Internet and language has yet to be proven by any scientific research, Internet slang has invited split views on its influence on the standard of language use in non-computer-mediated communications. Prescriptivists tend to hold the widespread belief that the Internet has a negative influence on the future of language, and that it will lead to a degradation of standards. Some would even attribute any decline of standard formal English to the increase in usage of electronic communication. It has also been suggested that the linguistic differences between Standard English and CMC can have implications for literacy education. This is illustrated by the widely reported example of a school essay submitted by a Scottish teenager, which contained many abbreviations and acronyms likened to SMS language. There was great condemnation of this style by the mass media as well as educationists, who argued that it showed diminishing literacy or linguistic ability. On the other hand, descriptivists have counter-argued that the Internet allows for better expression of a language. Linguistic choices sometimes reflect personal taste rather than established conventions. It has also been suggested that, as opposed to intentionally flouting language conventions, Internet slang is a result of a lack of motivation to monitor speech online. Hale and Scanlon describe the language of emails as derived from "writing the way people talk", and argue that there is no need to insist on 'Standard' English. English users, in particular, have an extensive tradition of etiquette guides, instead of traditional prescriptive treatises, that offer pointers on linguistic appropriateness. Using and spreading Internet slang also adds to the cultural currency of a language. It is important to the speakers of the language due to the foundation it provides for identifying within a group, and also for defining a person's individual linguistic and communicative competence. The result is a specialized subculture based on its use of slang.
In scholarly research, attention has, for example, been drawn to the effect of the use of Internet slang in ethnography, and more importantly to how conversational relationships online change structurally because slang is used. In German, there is already considerable controversy regarding the use of anglicisms outside of CMC. This situation is even more problematic within CMC, since the jargon of the medium is dominated by English terms. An extreme example of an anti-anglicism stance can be observed in the chatroom rules of a Christian site, which ban all anglicisms ("Using anglicisms is strictly prohibited!", in translation from the German original) and translate even fundamental terms into German equivalents. In April 2014, Gawker's editor-in-chief Max Read instituted new writing style guidelines banning internet slang for his writing staff. Internet slang has crossed from being mediated by the computer into other non-physical domains. Here, these domains are taken to refer to any domain of interaction where interlocutors need not be geographically proximate to one another, and where the Internet is not primarily used. Internet slang is now prevalent in telephony, mainly through short message service (SMS) communication. Abbreviations and interjections, especially, have been popularized in this medium, perhaps due to the limited character space for writing messages on mobile phones. Another possible reason for this spread is the convenience of transferring the existing mappings between expression and meaning into a similar space of interaction. At the same time, Internet slang has also taken a place as part of everyday offline language, among those with digital access. The nature and content of online conversation is brought forward to direct offline communication through the telephone and direct talking, as well as through written language, such as in writing notes or letters. Interjections, such as numerically based and abbreviated Internet slang, are not pronounced as they are written, nor replaced by any actual action. Rather, they become lexicalized and spoken like non-slang words in a "stage direction"-like fashion, where the actual action is not carried out but substituted with a verbal signal. The notions of flaming and trolling have also extended outside the computer, and are used in the same circumstances of deliberate or unintentional implicatures. The expansion of Internet slang has been furthered through codification and the promotion of digital literacy. The subsequent and growing popularity of such references among those online as well as offline has advanced Internet slang literacy and globalized it. Awareness and proficiency in manipulating Internet slang in both online and offline communication indicates digital literacy, and teaching materials have even been developed to further this knowledge. A South Korean publisher, for example, has published a textbook that details the meaning and context of use for common Internet slang instances and is targeted at young children who will soon be using the Internet. Similarly, Internet slang has been recommended as language teaching material in second language classrooms in order to raise communicative competence by imparting some of the cultural value attached to a language that is available only in slang. Meanwhile, well-known dictionaries such as the ODE and Merriam-Webster have been updated with a significant and growing body of slang jargon.
Besides common examples, lesser-known slang and slang with a non-English etymology have also found a place in standardized linguistic references. Along with these instances, entries in user-contributed dictionaries such as Urban Dictionary have also grown. Codification seems to be qualified through frequency of use, and novel creations are often not accepted by other users of slang. Although Internet slang began as a means of "opposition" to mainstream language, its popularity with today's globalized, digitally literate population has shifted it into a part of everyday language, where it also leaves a profound impact. Frequently used slang has also become conventionalised into memetic "unit[s] of cultural information". These memes in turn are further spread through their use on the Internet, prominently through websites. The Internet's role as an "information superhighway" is also catalysed by slang. The evolution of slang has also created a 'slang union' as part of a unique, specialised subculture. Such impacts are, however, limited and require further discussion, especially from the non-English-speaking world. This is because Internet slang is prevalent in languages more actively used on the Internet, like English, which is the Internet's lingua franca. The Internet has helped people from all over the world to become connected to one another, enabling "global" relationships to be formed. As such, it is important for the various types of slang used online to be recognizable for everyone. This matters all the more because other languages are quickly catching up with English on the Internet, following the increase in Internet usage in predominantly non-English-speaking countries. In fact, as of May 31, 2011, only approximately 27% of the online population is made up of English speakers. Different cultures tend to have different motivations behind their choice of slang, on top of the difference in language used. For example, in China, because of the tough Internet regulations imposed, users tend to use certain slang to talk about issues deemed sensitive by the government. These include using symbols to separate the characters of a word to avoid detection from manual or automated text pattern scanning and consequential censorship. An outstanding example is the use of the term river crab to denote censorship. River crab (hexie) is pronounced the same as "harmony"—the official term used to justify political discipline and censorship. As such, Chinese netizens reappropriate the official terms in a sarcastic way. Abbreviations are popular across different cultures, including countries like Japan, China, France, Portugal, etc., and are used according to the particular language the Internet users speak. Significantly, this same style of slang creation is also found in non-alphabetic languages, for example as a form of "e gao" or alternative political discourse. The difference in language often results in miscommunication, as seen in an onomatopoeic example, "555", which sounds like "crying" in Chinese, and "laughing" in Thai. A similar example is between the English "haha" and the Spanish "jaja", where both are onomatopoeic expressions of laughter, but the difference in language means a different consonant is used to produce the same sound. For more examples of how other languages express "laughing out loud", see also: LOL. In Chinese, the numerically based onomatopoeia "770880", which means 'kiss and hug you', is used.
This is comparable to "XOXO", which many Internet users use. In French, "pk" or "pq" is used in place of "pourquoi", which means 'why'. This is an example of a combination of onomatopoeia and shortening of the original word for convenience when writing online. In conclusion, every country has its own language background and cultural differences, and hence tends to have its own rules and motivations for its own Internet slang. However, at present, there is still a lack of research on the differences between countries. On the whole, the popular use of Internet slang has resulted in a unique online and offline community, as well as several sub-categories of "special internet slang which is different from other slang spread on the whole internet... similar to jargon... usually decided by the sharing community". It has also led to virtual communities marked by the specific slang they use, and to a more homogenized yet diverse online culture.
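As an illustrative aside, the evasion technique described above (separating the characters of a sensitive word so that simple pattern scanners miss it) can be demonstrated with a minimal, hypothetical sketch; the blocked term and the separator are invented placeholders, and this is not a description of any real filtering system:

```python
# Naive substring-based filter of the kind such separators are used to evade.
BLOCKED = ["sensitiveword"]  # placeholder term, not a real blocklist

def naive_filter(text: str) -> bool:
    """Return True if the text contains a blocked term verbatim."""
    return any(term in text for term in BLOCKED)

plain = "this mentions sensitiveword directly"
evasive = "this mentions s.e.n.s.i.t.i.v.e.w.o.r.d instead"

print(naive_filter(plain))    # True  -- caught by the literal pattern scan
print(naive_filter(evasive))  # False -- separators break the literal match
```

Real scanning systems are of course more sophisticated (normalising text before matching, for instance), which is one reason such slang keeps evolving.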
https://en.wikipedia.org/wiki?curid=15172
Impi Impi is a Zulu word meaning war or combat, and by association any body of men gathered for war, for example "impi ya mashosha" is a term denoting 'an army'. However, in English "impi" is often used to refer to a Zulu regiment, which is called an "ibutho" in Zulu. Its beginnings lie far back in historic tribal warfare customs, when groups of armed men called "impis" battled. They were systematised radically by the Zulu king Shaka, who was then only the exiled illegitimate son of king Senzangakhona kaJama, but already showing much prowess as a general in the army of Mthethwa king Dingiswayo in the Ndwandwe–Zulu War of 1817–1819. The Zulu impi is popularly identified with the ascent of Shaka, ruler of the relatively small Zulu tribe before its explosion across the landscape of southern Africa, but its earliest shape as an instrument of statecraft lies in the innovations of the Mthethwa chieftain Dingiswayo, according to some historians (Morris 1965). These innovations in turn drew upon existing tribal customs, such as the "iNtanga". This was an age grade tradition common among many of the Bantu peoples of the continent's southern region. Youths were organised into age groups, with each cohort responsible for certain duties and tribal ceremonies. Periodically, the older age grades were summoned to the kraals of sub-chieftains, or "inDunas", for consultations, assignments, and an induction ceremony that marked their transition from boys to full-fledged adults and warriors, the "ukuButwa". Kraal or settlement elders generally handled local disputes and issues. Above them were the inDunas, and above the inDunas stood the chief of a particular clan lineage or tribe. The inDunas handled administrative matters for their chiefs – ranging from the settlement of disputes to the collection of taxes. In time of war, the inDunas supervised the fighting men in their areas, forming the leadership of the military forces deployed for combat. The age grade "iNtangas", under the guidance of the inDunas, formed the basis for the systematic regimental organisation that would become known worldwide as the impi. Warfare among the Bantu was militarily mild prior to the rise of Shaka, though it occurred frequently. Objectives were typically limited to such matters as cattle raiding, avenging some personal insult, or resolving disputes over segments of grazing land. Generally a loose mob, called an "impi", participated in these melees. There were no campaigns of extermination against the defeated. They simply moved on to other open spaces on the veldt, and equilibrium was restored. The bow and arrow were known but seldom used. Warfare, like the hunt, depended on skilled spearmen and trackers. The primary weapon was a thin 6-foot throwing spear, the "assegai". Several were carried into combat. Defensive weapons included a small cowhide shield, which was later improved by King Shaka. Many battles were prearranged, with the clan warriors meeting at an assigned place and time, while women and children of the clan watched the festivities from some distance away. Ritualized taunts, single combats and tentative charges were the typical pattern. If the affair did not dissipate beforehand, one side might find enough courage to mount a sustained attack, driving off their enemies. Casualties were usually light. The defeated clan might pay in lands or cattle and have captives to be ransomed, but extermination and mass casualties were rare. Tactics were rudimentary.
Outside the ritual battles, the quick raid was the most frequent combat action, marked by burning kraals, seizure of captives, and the driving off of cattle. Pastoral herders and light agriculturalists, the Bantu did not usually build permanent fortifications to fend off enemies. A clan under threat simply packed their meager material possessions, rounded up their cattle and fled until the marauders were gone. If the marauders did not stay to permanently dispossess them of grazing areas, the fleeing clan might return to rebuild in a day or two. The genesis of the Zulu impi thus lies in tribal structures existing long before the coming of Europeans or the Shaka era. In the early 19th century, a combination of factors began to change the customary pattern. These included rising populations, the growth of white settlement and slaving that dispossessed native peoples both at the Cape and in Portuguese Mozambique, and the rise of ambitious "new men." One such man, a warrior called Dingiswayo ("the Troubled One") of the Mthethwa, rose to prominence. Historians such as Donald Morris hold that his political genius laid the basis for a relatively light hegemony. This was established through a combination of diplomacy and conquest, using not extermination or slavery, but strategic reconciliation and judicious force of arms. This hegemony reduced the frequent feuding and fighting among the small clans in the Mthethwa's orbit, transferring their energies to more centralised forces. Under Dingiswayo the age grades came to be regarded as military drafts, deployed more frequently to maintain the new order. It was from these small clans, including among them the eLangeni and the Zulu, that Shaka sprang. Shaka proved himself to be one of Dingiswayo's most able warriors after the military call-up of his age grade to serve in the Mthethwa forces. He fought with his iziCwe regiment wherever he was assigned during this early period, but from the beginning, Shaka's approach to battle did not fit the traditional mould. He began to implement his own individual methods and style, designing the famous short stabbing spear, the "iKlwa", and a larger, stronger shield, and discarding the oxhide sandals that he felt slowed him down. These methods proved effective on a small scale, but Shaka himself was restrained by his overlord. His conception of warfare was far more extreme than the reconciliatory methods of Dingiswayo. He sought to bring combat to a swift and bloody decision, as opposed to duels of individual champions, scattered raids, or limited skirmishes where casualties were comparatively light. While his mentor and overlord Dingiswayo lived, Shakan methods were reined in, but the removal of this check gave the Zulu chieftain much broader scope. It was under his rule that a much more rigorous mode of tribal warfare came into being. This newer, brutal focus demanded changes in weapons, organisation and tactics. Shaka is credited with introducing a new variant of the traditional weapon, demoting the long, spindly throwing spear in favour of a heavy-bladed, short-shafted stabbing spear. He is also said to have introduced a larger, heavier cowhide shield ("isihlangu"), and trained his forces to close with the enemy in more effective hand-to-hand combat. The throwing spear was not discarded, but standardised like the stabbing implement and carried as a missile weapon, typically discharged at the foe before close contact.
These weapons changes integrated with and facilitated an aggressive mobility and tactical organisation. As weapons, the Zulu warrior carried the "iklwa" stabbing spear (losing one could result in execution) and a club or cudgel fashioned from dense hardwood known in Zulu as the "iwisa", usually called the knobkerrie or knobkerry in English and knopkierie in Afrikaans, for beating an enemy in the manner of a mace. Zulu officers often carried the half-moon-shaped Zulu ax, but this weapon was more of a symbol to show their rank. The iklwa – so named because of the sucking sound it made when withdrawn from a human body – with its long and broad blade was an invention of Shaka that superseded the older thrown "ipapa" (so named because of the "pa-pa" sound it made as it flew through the air). It could theoretically be used both in melee and as a thrown weapon, but warriors were forbidden in Shaka's day from throwing it, which would disarm them and give their opponents something to throw back. Moreover, Shaka felt it discouraged warriors from closing into hand-to-hand combat. Shaka's brother and successor, Dingane kaSenzangakhona, reintroduced greater use of the throwing spear, perhaps as a counter to Boer firearms. As early as Shaka's reign, small numbers of firearms, often obsolete muskets and rifles, were obtained by the Zulus from Europeans by trade. In the aftermath of the defeat of the British Empire at the Battle of Isandlwana in 1879, many Martini–Henry rifles were captured by the Zulus together with considerable amounts of ammunition. The advantage of this capture is debatable due to the alleged tendency of Zulu warriors to close their eyes when firing such weapons. The possession of firearms did little to change Zulu tactics, which continued to rely on a swift approach to the enemy to bring him into close combat. All warriors carried a shield made of oxhide, which retained the hair, with a central stiffening shaft of wood, the "mgobo". Shields were the property of the king; they were stored in specialised structures raised off the ground for protection from vermin when not issued to the relevant regiment. The large "isihlangu" shield of Shaka's day was about five feet in length and was later partially replaced by the smaller "umbumbuluzo," a shield of identical manufacture but around three and a half feet in length. Close combat relied on co-ordinated use of the "iklwa" and shield. The warrior sought to get the edge of his shield behind the edge of his enemy's, so that he could pull the enemy's shield to the side, thus opening him to a thrust with the "iklwa" deep into the abdomen or chest. The fast-moving host, like all military formations, needed supplies. These were provided by young boys, who were attached to a force and carried rations, cooking pots, sleeping mats, extra weapons and other material. Cattle were sometimes driven on the hoof as a movable larder. Again, such arrangements in the local context were probably nothing unusual. What was different was the systematisation and organisation, a pattern yielding major benefits when the Zulu were dispatched on raiding missions. Age-grade groupings of various sorts were common in the Bantu tribal culture of the day, and indeed are still important in much of Africa. Age grades were responsible for a variety of activities, from guarding the camp, to cattle herding, to certain rituals and ceremonies.
It was customary in Zulu culture for young men to provide limited service to their local chiefs until they were married and recognised as official householders. Shaka manipulated this system, transferring the customary service period from the regional clan leaders to himself, strengthening his personal hegemony. Such groupings on the basis of age did not constitute a permanent, paid military in the modern Western sense; nevertheless, they did provide a stable basis for sustained armed mobilisation, much more so than ad hoc tribal levies or war parties. Shaka organised the various age grades into regiments, and quartered them in special military kraals, with each regiment having its own distinctive names and insignia. Some historians argue that the large military establishment was a drain on the Zulu economy and necessitated continual raiding and expansion. This may be true, since large numbers of the society's men were isolated from normal occupations, but whatever the resource impact, the regimental system clearly built on existing tribal cultural elements that could be adapted and shaped to fit an expansionist agenda. After their 20th birthdays, young men would be sorted into formal "ibutho" (plural "amabutho") or regiments. They would build their "ikhanda" (often referred to as a 'homestead', as it was basically a stockaded group of huts surrounding a corral for cattle), their gathering place when summoned for active service. Active service continued until a man married, a privilege only the king bestowed. The amabutho were recruited on the basis of age rather than regional or tribal origin. The reason for this was to enhance the centralised power of the Zulu king at the expense of clan and tribal leaders. They swore loyalty to the king of the Zulu nation. Shaka discarded sandals to enable his warriors to run faster. Initially the move was unpopular, but those who objected were simply killed, a practice that quickly concentrated the minds of remaining personnel. Zulu tradition indicates that Shaka hardened the feet of his troops by having them stamp thorny tree and bush branches flat. Shaka drilled his troops frequently, implementing forced marches covering more than fifty miles a day. He also drilled the troops to carry out encirclement tactics (see below). Such mobility gave the Zulu a significant impact in their local region and beyond. Upkeep of the regimental system and training seems to have continued after Shaka's death, although Zulu defeats by the Boers, and growing encroachment by British colonists, sharply curtailed raiding operations prior to the War of 1879. Morris (1965, 1982) records one such mission under King Mpande to give green warriors of the uThulwana regiment experience: a raid into Swaziland, dubbed "Fund' uThulwana" by the Zulu, or "Teach the uThulwana". Impi warriors were trained as early as age six, joining the army as "udibi" porters at first, being enrolled into same-age groups ("intanga"). Until they were "buta"'d, Zulu boys accompanied their fathers and brothers on campaign as servants. Eventually, they would go to the nearest "ikhanda" to "kleza" (literally, "to drink directly from the udder"), at which time the boys would become "inkwebane", cadets. They would spend their time training until they were formally enlisted by the king. They would challenge each other to stick fights, which had to be accepted on pain of dishonor.
In Shaka's day, warriors often wore elaborate plumes and cow tail regalia in battle, but by the Anglo-Zulu War of 1879, many warriors wore only a loin cloth and a minimal form of headdress. The later period Zulu soldier went into battle relatively simply dressed, painting his upper body and face with chalk and red ochre, despite the popular conception of elaborately panoplied warriors. Each "ibutho" had a singular arrangement of headdress and other adornments, so that the Zulu army could be said to have had regimental uniforms; latterly the 'full-dress' was only worn on festive occasions. The men of senior regiments would wear, in addition to their other headdress, the head-ring ("isicoco") denoting their married state. A gradation of shield colour was found, junior regiments having largely dark shields, the more senior ones having shields with more light colouring; Shaka's personal regiment "Fasimba" (The Haze) having white shields with only a small patch of darker colour. This shield uniformity was facilitated by the custom of separating the king's cattle into herds based on their coat colours. Certain adornments were awarded to individual warriors for conspicuous courage in action; these included a type of heavy brass arm-ring ("ingxotha") and an intricate necklace composed of interlocking wooden pegs ("iziqu"). The Zulu typically took the offensive, deploying in the well-known "buffalo horns" formation. It comprised three elements: the "chest", or main central force; the "horns", the flanking elements that raced out on either side to encircle the enemy; and the "loins", the reserves held to the rear. Encirclement tactics are not unique in warfare, and historians note that attempts to surround an enemy were not unknown even in the ritualised battles. The use of separate manoeuvre elements to support a stronger central group is also well known in pre-mechanised tribal warfare, as is the use of reserve echelons farther back. What was unique about the Zulu was the degree of organisation, the consistency with which they used these tactics, and the speed at which they executed them. Developments and refinements may have taken place after Shaka's death, as witnessed by the use of larger groupings of regiments by the Zulu against the British in 1879. Missions, available manpower and enemies varied, but whether facing native spear or European bullet, the impis generally fought in and adhered to the classical buffalo horns pattern. Regiments and corps. The Zulu forces were generally grouped into three levels: regiments, corps of several regiments, and "armies" or bigger formations, although the Zulu did not use these terms in the modern sense. Although size distinctions were taken account of, any grouping of men on a mission could collectively be called an impi, whether a raiding party of 100 or a horde of 10,000. Numbers were not uniform but dependent on a variety of factors, including assignments by the king, or the manpower mustered by various clan chiefs or localities. A regiment might be 400 or 4,000 men. These were grouped into corps that took their name from the military kraals where they were mustered, or sometimes the dominant regiment of that locality. There were four basic ranks: herdboy assistants, warriors, inDunas, and higher-ranked commanders appointed for a particular mission. Higher command and unit leadership. Leadership was not a complicated affair. An inDuna guided each regiment, and he in turn answered to senior izinduna who controlled the corps grouping. Overall guidance of the host was furnished by elder izinduna, usually with many years of experience.
One or more of these elder chiefs might accompany a big force on an important mission, but there was no single "field marshal" in supreme command of all Zulu forces. Regimental izinduna, like the non-coms of today's armies and the centurions of ancient Rome, were extremely important to morale and discipline. This was shown during the Battle of Isandhlwana. Blanketed by a hail of British bullets, rockets and artillery, the advance of the Zulu faltered. Echoing from the mountain, however, were the shouted cadences and fiery exhortations of their regimental izinduna, who reminded the warriors that their king did not send them to run away. Thus encouraged, the encircling regiments remained in place, maintaining continual pressure, until weakened British dispositions enabled the host to make a final surge forward. (See Morris ref below—"The Washing of the Spears".) As noted above, Shaka was not the originator of the impi, the age-grade structure, or the concept of a grouping bigger than the small clan system. His major innovations were to blend these traditional elements in a new way, to systematise the approach to battle, and to standardise organization, methods and weapons, particularly in his adoption of the "iklwa" – the Zulu thrusting spear – unique long-term regimental units, and the "buffalo horns" formation. Dingiswayo's approach was of a loose federation of allies under his hegemony, combining to fight, each with their own contingents, under their own leaders. Shaka dispensed with this, insisting instead on a standardised organisation and weapons package that swept away and replaced old clan allegiances with loyalty to himself. This uniform approach also encouraged the loyalty and identification of warriors with their own distinctive military regiments. In time, these warriors, from many conquered tribes and clans, came to regard themselves as one nation: the Zulu. The Marian reforms of Rome in the military sphere are referenced by some writers as similar. While other ancient powers such as the Carthaginians maintained a patchwork of force types, and the legions had retained such phalanx-style holdovers as the "triarii", Marius implemented one consistent standardised approach for all the infantry. This enabled more disciplined formations and efficient execution of tactics over time against a variety of enemies. The impi, in its Shakan form, is best known among Western readers from the Anglo-Zulu War of 1879, particularly the famous Zulu victory at Isandhlwana, but its development spanned some 60 years before that great clash. To understand the full scope of the impi's performance in battle, military historians of the Zulu typically look to its early operations against internal African enemies, not merely the British interlude. In terms of numbers, the operations of the impi would change, from the Western equivalent of small company- and battalion-size forces to manoeuvres in multi-divisional strength of between 10,000 and 40,000 men. The victory won by the Zulu king Cetshwayo at Ndondakusuka, for example, two decades before the British invasion, involved a deployment of 30,000 troops. These were sizeable formations in regional context but represented the bulk of prime Zulu fighting strength. Few impi-style formations were to routinely achieve this level of mobilisation for a single battle. (By comparison, at Cannae the Romans deployed 80,000 men, and could generally put tens of thousands more into smaller combat actions.)
The popular notion of countless attacking black spearmen is a distorted one. Manpower supplies on the continent were often limited. In the words of one historian: "The savage hordes of popular lore seldom materialized on African battlefields." This limited resource base would hurt the Zulu when they confronted technologically advanced world powers such as Britain. The advent of new weapons like firearms would also have a profound impact on the African battlefield, but as will be seen, the impi-style forces largely eschewed firearms, or used them in a minor way. Whether facing native spear or European bullet, impis largely fought as they had since the days of Shaka, from Zululand to Zimbabwe, and from Mozambique to Tanzania. Upon his accession to power, Shaka was confronted by two potent threats: the Ndwandwes under Zwide, and the Qwabes. Both clans were twice as large as the Zulu. The first key test of the "new model" Shakan impis would be against the Ndwandwe, and the battle offers insight into both Shaka as a commander and the performance of his reorganised combat team. The Zulu king deployed his troops in a strong position on top of Gqokli Hill, using a deep depression on the summit to hide a large central reserve, while grouping his other warriors forward in defensive formation. Shaka also made a decoy gambit, sending the Zulu cattle off with a small escort and luring Zwide into splitting his force. The battle began in the early morning as the Ndwandwe, under Zwide's son Nomahlanjana, made a series of frontal attacks up the steep hill. Slowed by the incline, and armed only with traditional throwing spears, they were badly mauled by Shaka's men in close quarters fighting. By mid-afternoon, the Ndwandwe were exhausted and their force weakened further by small groups of men going off in search of water. Shaka, however, had cunningly positioned himself so that his troops had access to a small stream nearby. In the late afternoon the Ndwandwe made a final attack. Leaving a part of their army surrounding the bottom of the hill, they pushed a huge column up to the top, hoping to drive the Zulu down into the blocking forces below. Shaka waited until the column was almost at the top, then ordered his fresh reserves to make a flanking "horn" attack, sprinting down both sides of the hill to encircle and liquidate the ascending Ndwandwe. The rest of the enemy force, which could not clearly see what was happening on the summit, was next attacked in another encircling manoeuvre that sent it fleeing. In its first major battle, the Shakan impi had pulled off a multiple envelopment. On the negative side, the Ndwandwe remnants had been able to withdraw intact, and all the Zulu cattle were captured. Shaka, furthermore, was eventually forced to recall the warriors and pull back to his kraal at kwaBulawayo. Nevertheless, the impi had badly beaten an enemy force over twice its size, killing five of Zwide's sons in the process and succeeding in its first major test. A period of rebuilding now commenced, and new recruits, gained either by conquest or alliance, were incorporated into the growing Shakan force. Among the newcomers was one Mzilikazi, a small-time chieftain of the Kumalo, and a grandson of Zwide whose father had nonetheless been killed by Zwide. Mzilikazi would eventually fall out with Shaka, and in fleeing, would extend the concept of the impi even further across the landscape of southern and eastern Africa.
In this period Shaka's power grew, defeating several powerful local rivals and creating a vast monolith that was the most powerful nation in its region. Shaka's success was to spawn several offshoots of the impi-style formation. Chief among these were the Matebele, under Mzilikazi, and the Shangaan, under the redoubtable Soshangane. The greatest expansion of the impi outside the Zululand/Zimbabwe area, however, was to come in East Africa, where bands of Ngoni fighting men conquered large swathes of territory, using the methods first laid down by Shaka. The impi clashed with another tactical system introduced by European settlers: the horse-gun system of the Boer Commando. This conflict is often popularly conceived of in terms of the well-known battles between Zulu King Dingane and the Boers, most notably at the Battle of Blood River. As will be seen, however, this tells only part of the story. The impi was to clash with the mobile commando on the open fields of the high veldt in a series of epic confrontations, in which each force both suffered defeat and enjoyed victory, and both sides acquitted themselves well. Nearly 35,000 strong, well motivated and supremely confident, the Zulu were a formidable force on their own home ground, despite the almost total lack of modern weaponry. Their greatest assets were their morale, unit leadership, mobility and numbers. Tactically the Zulu acquitted themselves well in at least three encounters: Isandhlwana, Hlobane and the smaller Intombi action. Their stealthy approach march, camouflage and noise discipline at Isandhlwana, while not perfect, put them within excellent striking distance of their opponents, where they were able to exploit weaknesses in the camp layout. At Hlobane they caught a British column on the move rather than in the usual fortified position, partially cutting off its retreat and forcing it to withdraw. Strategically (and perhaps understandably in their own traditional tribal context) they lacked any clear vision of fighting their most challenging war, aside from smashing the three British columns by the weight and speed of their regiments. Despite the Isandhlwana victory, tactically there were major problems as well. They rigidly and predictably applied their three-pronged "buffalo horns" attack, paradoxically their greatest strength, but also their greatest weakness when facing concentrated firepower. The Zulu failed to make use of their superior mobility by attacking British rear areas such as Natal or by interdicting vulnerable British supply lines. However, an important consideration, which King Cetshwayo appreciated, was that there was a clear difference between defending one's own territory and encroaching on another's, regardless of being at war with the holder of that land. The King realised that peace would be impossible if a real invasion of Natal was launched, and that it would only provoke a more concerted effort on the part of the British against them. The attack on Rorke's Drift, in Natal, was an opportunist raid, as opposed to a real invasion. When they did strike at supply targets, they achieved some success, such as the liquidation of a supply detachment at the Intombi River. A more expansive mobile strategy might have cut British communications and brought their lumbering advance to a halt, bottling up the redcoats in scattered strongpoints while the impis ran rampant between them. Just such a scenario developed with the No. 1 British column, which was penned up static and immobile in garrison for over two months at Eshowe.
The Zulu also allowed their opponents too much time to set up fortified strongpoints, assaulting well-defended camps and positions with painful losses. A policy of attacking the redcoats while they were strung out on the move, or crossing difficult obstacles like rivers, might have yielded more satisfactory results. For example, four miles past the Inyezane River, after the British had comfortably crossed, and after they had spent a day consolidating their advance, the Zulu finally launched a typical "buffalo horn" encirclement attack that was seen off with withering fire from not only breech-loading Martini-Henry rifles, but also 7-pounder artillery and Gatling guns. In fairness, the Zulu commanders could not conjure regiments out of thin air at the optimum time and place. They too needed time to marshal, supply and position their forces, and sort out final assignments to the three prongs of attack. Still, the Battle of Hlobane Mountain offers just a glimpse of an alternative mobile scenario, where the manoeuvring Zulu "horns" cut off and drove back Buller's column when it was dangerously strung out on the mountain. Command and control of the impis was problematic at times. Indeed, the Zulu attacks on the British strongpoints at Rorke's Drift and at Kambula (both bloody defeats) seem to have been carried out by over-enthusiastic leaders and warriors despite contrary orders of the Zulu King, Cetshwayo. Popular film re-enactments display a grizzled "inDuna" directing the host from a promontory with elegant sweeps of the hand. This might have happened during the initial marshaling of forces from a jump-off point, or the deployment of reserves, but once the great encircling sweep of frenzied warriors in the "horns" and "chest" was in motion, the izinduna could not generally exercise detailed control. Although the "loins" or reserves were theoretically on hand to correct or adjust an unfavorable situation, a shattered attack could make the reserves irrelevant. Against the Boers at Blood River, massed gunfire broke the back of the Zulu assault, and the Boers were later able to mount a cavalry sweep in counterattack that became a turkey shoot against fleeing Zulu remnants. Perhaps the Zulu threw everything forward and had little left. In similar manner, after exhausting themselves against British firepower at Kambula and Ulundi, few of the Zulu reserves were available to do anything constructive, although the tribal warriors still remained dangerous at the guerrilla level when scattered. At Isandhlwana, however, the "classical" Zulu system struck gold, and after liquidating the British position, it was a relatively fresh reserve force that swept down on Rorke's Drift. The Zulu had greater numbers than their opponents, but greater numbers massed together in compact arrays simply presented easy targets in the age of modern firearms and artillery. African tribes that fought in smaller guerrilla detachments typically held out against European invaders for a much longer time, as witnessed by the seven-year resistance of the Lobi against the French in West Africa, or the operations of the Berbers in Algeria against the French. When the Zulu did acquire firearms, most notably captured stocks after the great victory at Isandhlwana, they lacked training and used them ineffectively, consistently firing high to give the bullets "strength." Southern Africa, including the areas near Natal, was teeming with bands like the Griquas who had learned to use guns.
Indeed, one such group not only mastered the way of the gun but also became proficient horsemen, skills that helped build the Basotho tribe in what is now the nation of Lesotho. In addition, numerous European renegades or adventurers (both Boer and non-Boer) skilled in firearms were known to the Zulu. Some had even led detachments for the Zulu kings on military missions. The Zulu thus had clear scope and opportunity to master and adapt the new weaponry. They had also already experienced defeat against the Boers through concentrated firepower. They had had at least four decades to adjust their tactics to this new threat. A well-drilled corps of gunmen or grenadiers, or a battery of artillery operated by European mercenaries, for example, might have provided much needed covering fire as the regiments manoeuvred into position. No such adjustments were on hand when they faced the redcoats. Immensely proud of their system, and failing to learn from their earlier defeats, they persisted in "human wave" attacks against well-defended European positions where massed firepower devastated their ranks. The ministrations of an "isAngoma" (plural: "izAngoma") Zulu diviner or "witch doctor", and the bravery of individual regiments, were ultimately of little use against the volleys of modern rifles, Gatling guns and artillery at the Inyezane River, Rorke's Drift, Kambula, Gingindlovu and finally Ulundi. Undoubtedly, Cetshwayo and his war leaders faced a tough and extremely daunting task – overcoming the challenge of concentrated rifle, Gatling gun, and artillery fire on the battlefield. It was one that also taxed European military leaders, as the carnage of the American Civil War and the later Boer War attests. Nevertheless, Shaka's successors could argue that within the context of their experience and knowledge, they had done the best they could, following his classical template, which had advanced the Zulu from a small, obscure tribe to a respectable regional power known for its fierce warriors. The demise of the impi finally came about with the success of European colonisation of Africa: first in southern Africa by the British, and finally in German East Africa, where German colonial forces defeated the last of the impi-style formations under Mkwawa, chief of the Hehe of Tanzania. The Boers, another major challenger to the impi, also saw defeat by imperial forces in the Boer War, which ended in 1902. In its relatively brief history, the impi inspired both scorn (during the Anglo-Zulu War, British commander Lord Chelmsford complained that they did not 'fight fair') and admiration in its opponents, epitomised in Kipling's poem "Fuzzy-Wuzzy". Today the impi lives on in popular lore and culture, even in the West. While the term "impi" has become synonymous with the Zulu nation in international popular culture, it appears in several games in the "Civilization" series, including "Civilization III" and "Civilization VI", where the Impi is the unique unit for the Zulu faction with Shaka as their leader. 'Impi' is also the title of a famous South African song by Johnny Clegg and the band Juluka, which has become something of an unofficial national anthem, especially at major international sports events and especially when the opponent is England. Before stage seven of the 2013 Tour de France, the Orica-GreenEDGE cycling team played 'Impi' on their team bus in honour of teammate Daryl Impey, the first South African Tour de France leader.
https://en.wikipedia.org/wiki?curid=15174
Mean Streets Mean Streets is a 1973 American crime film directed by Martin Scorsese and co-written by Scorsese and Mardik Martin. The film stars Harvey Keitel and Robert De Niro. It was released by Warner Bros. on October 2, 1973. De Niro won the National Society of Film Critics award for Best Supporting Actor for his role as "Johnny Boy" Civello. In 1997, "Mean Streets" was selected for preservation in the United States National Film Registry by the Library of Congress as being "culturally, historically, or aesthetically significant". Charlie is a young Italian-American man in Little Italy, New York City. He is hampered by his feeling of responsibility towards his reckless younger friend Johnny Boy, a small-time gambler, hoodlum, and ne'er-do-well who refuses to work and owes money to many loan sharks. Charlie works for his uncle Giovanni, a powerful mafioso, loan shark, and political fixer, mostly collecting debts. He is also having a secret affair with Johnny Boy's cousin Teresa, who has epilepsy and is ostracized because of her condition—especially by Charlie's uncle. Charlie's uncle also wants Charlie not to be such close friends with Johnny, saying "Honorable men go with honorable men." Charlie is torn between his devout Catholicism and his illicit work for his mafioso uncle. Johnny becomes increasingly self-destructive and disrespectful of his creditors. Failing to receive redemption in the Church, Charlie seeks it through sacrificing himself on Johnny's behalf. At a bar, Michael, a small-time loan shark, comes looking for Johnny to "pay up". To his surprise, Johnny insults him. Michael lunges at Johnny, who pulls a gun. After a tense standoff, Michael walks away, and Charlie convinces Johnny that they should leave town for a brief period. Teresa insists on coming with them. Charlie borrows a car and they drive off, leaving the neighborhood without incident. A car that has been following them suddenly pulls up, Michael at the wheel and his henchman, Jimmy Shorts, in the backseat. Jimmy fires several shots at Charlie's car, hitting Johnny in the neck and Charlie in the hand, causing Charlie to crash the car into a fire hydrant. Johnny is seen in an alleyway staggering towards a white light which is revealed to be the police. Meanwhile, Charlie gets out of the crashed vehicle and kneels in the spurting water from the hydrant, dazed and bleeding. Paramedics take a surviving Teresa and Charlie away while the fate of Johnny remains unknown. Apart from his first actual feature, "Who's That Knocking at My Door", and a directing project given to him by early independent filmmaker Roger Corman, "Boxcar Bertha", this was Scorsese's first feature film of his own design. Director John Cassavetes told him after he completed "Boxcar Bertha": "You've just spent a year of your life making a piece of shit." This inspired Scorsese to make a film about his own experiences. Cassavetes told Scorsese he should do something like "Who's That Knocking at My Door", which Cassavetes had liked, and then came "Mean Streets", based on events Scorsese witnessed regularly while growing up in New York City's Little Italy. The screenplay for the movie initially began as a continuation of the characters in "Who's That Knocking". Scorsese changed the title from "Season of the Witch" to "Mean Streets", a reference to Raymond Chandler's essay "The Simple Art of Murder", where Chandler writes "But down these mean streets a man must go who is not himself mean, who is neither tarnished nor afraid."
Scorsese sent the script to Corman, who agreed to back the film if all the characters were black. Scorsese was anxious to make the film, so he considered this option, but actress Verna Bloom arranged a meeting with potential financial backer Jonathan Taplin, who was the road manager for the musical group The Band. Taplin liked the script and was willing to raise the $300,000 budget that Scorsese wanted if Corman promised, in writing, to distribute the film. The blaxploitation suggestion came to nothing when funding from Warner Bros. allowed him to make the film as he intended, with Italian-American characters. The film was well received by most critics; Pauline Kael was among the enthusiastic critics, calling it "a true original, and a triumph of personal filmmaking" and "dizzyingly sensual". Dave Kehr of the "Chicago Reader" wrote that "the acting and editing have such original, tumultuous force that the picture is completely gripping". Vincent Canby of "The New York Times" reflected that "no matter how bleak the milieu, no matter how heartbreaking the narrative, some films are so thoroughly, beautifully realized they have a kind of tonic effect that has no relation to the subject matter". "Time Out" magazine called it "one of the best American films of the decade". Retrospectively, Roger Ebert of the "Chicago Sun-Times" inducted "Mean Streets" into his Great Movies list and wrote: "In countless ways, right down to the detail of modern TV crime shows, "Mean Streets" is one of the source points of modern movies." In 2013, the staff of "Entertainment Weekly" voted the film the seventh greatest of all time. In 2015, it was ranked 93rd on the BBC's list of the 100 greatest American films. James Gandolfini, when asked on "Inside the Actors Studio" (season 11, episode two) which films most influenced him, cited "Mean Streets" among them, saying "I saw that 10 times in a row." The film holds a 97% "Certified Fresh" rating on Rotten Tomatoes, based on 60 reviews, with an average rating of 8.93/10 and the consensus: ""Mean Streets" is a powerful tale of urban sin and guilt that marks Scorsese's arrival as an important cinematic voice and features electrifying performances from Harvey Keitel and Robert De Niro." "Mean Streets" was released on VHS and Betamax in 1985. The film debuted as a letterboxed LaserDisc on October 7, 1991 in the US. It was released on Blu-ray for the first time on April 6, 2011 in France, and in America on July 17, 2012.
https://en.wikipedia.org/wiki?curid=18996
Myasthenia gravis Myasthenia gravis (MG) is a long-term neuromuscular disease that leads to varying degrees of skeletal muscle weakness. The most commonly affected muscles are those of the eyes, face, and swallowing. It can result in double vision, drooping eyelids, trouble talking, and trouble walking. Onset can be sudden. Those affected often have a large thymus or develop a thymoma. Myasthenia gravis is an autoimmune disease which results from antibodies that block or destroy nicotinic acetylcholine receptors at the junction between the nerve and muscle. This prevents nerve impulses from triggering muscle contractions. Rarely, an inherited genetic defect in the neuromuscular junction results in a similar condition known as congenital myasthenia. Babies of mothers with myasthenia may have symptoms during their first few months of life, known as neonatal myasthenia. Diagnosis can be supported by blood tests for specific antibodies, the edrophonium test, or a nerve conduction study. Myasthenia gravis is generally treated with medications known as acetylcholinesterase inhibitors such as neostigmine and pyridostigmine. Immunosuppressants, such as prednisone or azathioprine, may also be used. The surgical removal of the thymus may improve symptoms in certain cases. Plasmapheresis and high dose intravenous immunoglobulin may be used during sudden flares of the condition. If the breathing muscles become significantly weak, mechanical ventilation may be required. Once a person is intubated, acetylcholinesterase inhibitors may be temporarily withheld to reduce airway secretions. MG affects 50 to 200 per million people. It is newly diagnosed in three to 30 per million people each year. Diagnosis is becoming more common due to increased awareness. It most commonly occurs in women under the age of 40 and in men over the age of 60. It is uncommon in children. With treatment, most of those affected lead relatively normal lives and have a normal life expectancy. The word is from the Greek "mys" "muscle" and "astheneia" "weakness", and the Latin "gravis" "serious". The initial, main symptom in MG is painless weakness of specific muscles, not fatigue. The muscle weakness becomes progressively worse during periods of physical activity and improves after periods of rest. Typically, the weakness and fatigue are worse toward the end of the day. MG generally starts with ocular (eye) weakness; it might then progress to a more severe generalized form, characterized by weakness in the extremities or in muscles that govern basic life functions. In about two-thirds of individuals, the initial symptom of MG is related to the muscles around the eye. There may be eyelid drooping (ptosis due to weakness of levator palpebrae superioris) and double vision (diplopia, due to weakness of the extraocular muscles). Eye symptoms tend to get worse when watching television, reading, or driving, particularly in bright conditions. Consequently, some affected individuals choose to wear sunglasses. The term "ocular myasthenia gravis" describes a subtype of MG where muscle weakness is confined to the eyes, i.e. extraocular muscles, levator palpebrae superioris, and orbicularis oculi. This subtype typically evolves into generalized MG after a few years. The weakness of the muscles involved in swallowing may lead to swallowing difficulty (dysphagia).
Typically, this means that some food may be left in the mouth after an attempt to swallow, or food and liquids may regurgitate into the nose rather than go down the throat (velopharyngeal insufficiency). Weakness of the muscles that move the jaw (muscles of mastication) may cause difficulty chewing. In individuals with MG, chewing tends to become more tiring with tough, fibrous foods. Difficulty in swallowing, chewing, and speaking is the first symptom in about one-sixth of individuals. Weakness of the muscles involved in speaking may lead to dysarthria and hypophonia. Speech may be slow and slurred, or have a nasal quality. In some cases, a singing hobby or profession must be abandoned. Due to weakness of the muscles of facial expression and muscles of mastication, facial weakness may manifest as the inability to hold the mouth closed (the "hanging jaw sign") and as a snarling expression when attempting to smile. With drooping eyelids, facial weakness may make the individual appear sleepy or sad. Difficulty in holding the head upright may occur. The muscles that control breathing (weakness of which produces shortness of breath, or dyspnea) and limb movements can also be affected; rarely do these present as the first symptoms of MG, but they develop over months to years. In a myasthenic crisis, a paralysis of the respiratory muscles occurs, necessitating assisted ventilation to sustain life. Crises may be triggered by various biological stressors such as infection, fever, an adverse reaction to medication, or emotional stress. MG is an autoimmune synaptopathy. The disorder occurs when the immune system malfunctions and generates antibodies that attack the body's tissues. The antibodies in MG attack a normal human protein, the nicotinic acetylcholine receptor, or a related protein called MuSK, a muscle-specific kinase. Less frequently, antibodies are found against the LRP4, agrin, and titin proteins. Human leukocyte antigen (HLA) haplotypes are associated with increased susceptibility to myasthenia gravis and other autoimmune disorders. Relatives of people with MG have a higher percentage of other immune disorders. The thymus gland cells form part of the body's immune system. In those with myasthenia gravis, the thymus gland is large and abnormal. It sometimes contains clusters of immune cells which indicate lymphoid hyperplasia, and the thymus gland may give wrong instructions to immune cells. About a third of pregnant women who already have MG experience an exacerbation of their symptoms, usually in the first trimester of pregnancy. Signs and symptoms in pregnant mothers tend to improve during the second and third trimesters. Complete remission can occur in some mothers. Immunosuppressive therapy should be maintained throughout pregnancy, as this reduces the chance of neonatal muscle weakness, and controls the mother's myasthenia. About 10–20% of infants with mothers affected by the condition are born with transient neonatal myasthenia (TNM), which generally produces feeding and respiratory difficulties that develop about 12 hours to several days after birth. A child with TNM typically responds very well to acetylcholinesterase inhibitors; the condition generally resolves over a period of three weeks as the antibodies diminish, and generally does not result in any complications. Very rarely, an infant can be born with arthrogryposis multiplex congenita, secondary to profound intrauterine weakness.
This is due to maternal antibodies that target an infant's acetylcholine receptors. In some cases, the mother remains asymptomatic. MG can be difficult to diagnose, as the symptoms can be subtle and hard to distinguish from both normal variants and other neurological disorders. Three types of myasthenic symptoms in children can be distinguished: transient neonatal myasthenia, congenital myasthenia, and juvenile MG. Congenital myasthenias cause muscle weakness and fatigability similar to those of MG. The signs of congenital myasthenia are usually present in the first years of childhood, although they may not be recognized until adulthood. When diagnosed with MG, a person is assessed for his or her neurological status, and the level of illness is established. This is usually done using the accepted Myasthenia Gravis Foundation of America Clinical Classification scale. During a physical examination to check for MG, a doctor might ask the person to perform repetitive movements. For instance, the doctor may ask the person to look at a fixed point for 30 seconds and to relax the muscles of the forehead. This is done because a person with MG and ptosis of the eyes might be involuntarily using the forehead muscles to compensate for the weakness in the eyelids. The clinical examiner might also try to elicit the "curtain sign" by holding one of the person's eyes open, which in the case of MG will lead the other eye to close. If the diagnosis is suspected, serology can be performed to detect the relevant antibodies, most commonly those against the acetylcholine receptor or MuSK. Muscle fibers of people with MG are easily fatigued, which the repetitive nerve stimulation test can help diagnose. In single-fiber electromyography (SFEMG), which is considered to be the most sensitive (although not the most specific) test for MG, a thin needle electrode is inserted into different areas of a particular muscle to record the action potentials from several samplings of different individual muscle fibers. Two muscle fibers belonging to the same motor unit are identified, and the temporal variability in their firing patterns is measured. The frequency and proportion of particular abnormal action potential patterns, called "jitter" and "blocking", are diagnostic (a computational sketch of the jitter measurement follows below). Jitter refers to the abnormal variation in the time interval between action potentials of adjacent muscle fibers in the same motor unit. Blocking refers to the failure of nerve impulses to elicit action potentials in adjacent muscle fibers of the same motor unit. Applying ice for two to five minutes to the muscles reportedly has a sensitivity and specificity of 76.9% and 98.3%, respectively, for the identification of MG. Acetylcholinesterase is thought to be inhibited at the lower temperature, which is the basis for this diagnostic test. This generally is performed on the eyelids when ptosis is present, and is deemed positive if a ≥2 mm rise in the eyelid occurs after the ice is removed. The edrophonium test requires the intravenous administration of edrophonium chloride or neostigmine, drugs that block the breakdown of acetylcholine by cholinesterase (acetylcholinesterase inhibitors). This test is no longer typically performed, as its use can lead to life-threatening bradycardia (slow heart rate) which requires immediate emergency attention. Production of edrophonium was discontinued in 2008. A chest X-ray may identify widening of the mediastinum suggestive of thymoma, but computed tomography (CT) or magnetic resonance imaging (MRI) are more sensitive ways to identify thymomas and are generally done for this reason. 
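To make the jitter measurement described above concrete, the following is a minimal computational sketch, not a clinical implementation: jitter is commonly quantified as the mean consecutive difference (MCD) of the inter-potential intervals between two fibers of the same motor unit. The interval values below are invented for illustration only.

```python
# Hedged sketch: quantifying SFEMG "jitter" as the mean consecutive
# difference (MCD) of inter-potential intervals (IPIs) between two
# muscle fibers of one motor unit. Sample IPIs are invented, not data.
def mean_consecutive_difference(ipis_us: list[float]) -> float:
    """MCD in microseconds over a series of inter-potential intervals."""
    diffs = [abs(b - a) for a, b in zip(ipis_us, ipis_us[1:])]
    return sum(diffs) / len(diffs)

normal_ipis = [520, 528, 515, 524, 519, 526]      # low variability
myasthenic_ipis = [520, 610, 470, 655, 505, 630]  # high variability

print(f"Normal-like jitter:     {mean_consecutive_difference(normal_ipis):.1f} us")
print(f"Myasthenic-like jitter: {mean_consecutive_difference(myasthenic_ipis):.1f} us")
```

A markedly elevated MCD, together with blocking, is what the examiner reads as abnormal in SFEMG.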
In addition to chest imaging, MRI of the cranium and orbits may be performed to exclude compressive and inflammatory lesions of the cranial nerves and ocular muscles. The forced vital capacity may be monitored at intervals to detect increasing muscular weakness. Acutely, negative inspiratory force may be used to determine adequacy of ventilation in individuals with MG. Treatment is by medication and/or surgery. Medication consists mainly of acetylcholinesterase inhibitors to directly improve muscle function and immunosuppressant drugs to reduce the autoimmune process. Thymectomy is a surgical method to treat MG. Worsening may occur with medication such as fluoroquinolones, aminoglycosides, and magnesium. About 10% of people with generalized MG are considered treatment-refractory. Autologous hematopoietic stem cell transplantation (HSCT) is sometimes used in severe, treatment-refractory MG. Available data provide preliminary evidence that HSCT can be an effective therapeutic option in carefully selected cases. Acetylcholinesterase inhibitors can provide symptomatic benefit; although they may not fully remove a person's weakness, they often allow the person to perform normal daily activities. Usually, acetylcholinesterase inhibitors are started at a low dose and increased until the desired result is achieved. If the medication is taken 30 minutes before a meal, symptoms are milder during eating, which is helpful for those who have difficulty swallowing due to their illness. Another medication used for MG, atropine, can reduce the muscarinic side effects of acetylcholinesterase inhibitors. Pyridostigmine is a relatively long-acting drug (when compared to other cholinergic agonists), with a half-life around four hours and relatively few side effects (see the sketch following this paragraph). Generally, it is discontinued in those who are being mechanically ventilated, as it is known to increase the amount of salivary secretions. Few high-quality studies have directly compared cholinesterase inhibitors with other treatments (or placebo); their practical benefit may be such that it would be difficult to conduct studies in which they would be withheld from some people. The steroid prednisone might also be used to achieve a better result, but it can lead to the worsening of symptoms for 14 days and takes 6–8 weeks to achieve its maximal effectiveness. Due to the myriad side effects that steroid treatments can cause, they are not the preferred method of treatment. Other immunosuppressive medications may also be used, including rituximab. If the myasthenia is serious (myasthenic crisis), plasmapheresis can be used to remove the putative antibodies from the circulation. Also, intravenous immunoglobulins (IVIGs) can be used to bind the circulating antibodies. Both of these treatments have relatively short-lived benefits, typically measured in weeks, and often are associated with high costs, which can make them prohibitive; they are generally reserved for when MG requires hospitalization. As thymomas are seen in 10% of all people with MG, people are often given a chest X-ray and CT scan to evaluate their need for surgical removal of their thymus and any cancerous tissue that may be present. Even if surgery is performed to remove a thymoma, it generally does not lead to the remission of MG. Surgery in the case of MG involves the removal of the thymus, although in 2013 there was no clear indication of any benefit except in the presence of a thymoma. 
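As an aside on the pharmacokinetics mentioned above: pyridostigmine's roughly four-hour half-life implies, under a simple first-order elimination model, that about half of a dose remains after four hours and only a quarter after eight. The sketch below is illustrative only; the model and time points are assumptions for illustration, not dosing guidance from the source.

```python
# Hedged sketch: fraction of a single pyridostigmine dose remaining
# over time, assuming first-order elimination with a 4-hour half-life
# (the half-life is from the text; the model is an assumption).
HALF_LIFE_H = 4.0

def fraction_remaining(t_hours: float) -> float:
    """Fraction of the drug remaining t hours after a single dose."""
    return 0.5 ** (t_hours / HALF_LIFE_H)

for t in (0, 2, 4, 8, 12):
    print(f"{t:>2} h: {fraction_remaining(t):.0%} remaining")
# 0 h: 100%, 2 h: ~71%, 4 h: 50%, 8 h: 25%, 12 h: ~13%
```

This decay is consistent with why the drug is dosed repeatedly through the day and timed, for example, 30 minutes before meals.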
Regarding thymectomy, a 2016 randomized controlled trial found some benefit. People with MG should be educated regarding the fluctuating nature of their symptoms, including weakness and exercise-induced fatigue. Exercise participation should be encouraged, with frequent rest. In people with generalized MG, some evidence indicates that a partial home program including training in diaphragmatic breathing, pursed-lip breathing, and interval-based muscle therapy may improve respiratory muscle strength, chest wall mobility, respiratory pattern, and respiratory endurance. In people with myasthenia gravis, older forms of iodinated contrast used for medical imaging have caused an increased risk of exacerbation of the disease, but modern forms carry no immediate increased risk. With good treatment, the prognosis of people with MG is generally good, as is quality of life. Monitoring of a person with MG is very important, as at least 20% of people diagnosed with it will experience a myasthenic crisis within two years of their diagnosis, requiring rapid medical intervention. Generally, the most disabling period of MG might be years after the initial diagnosis. In the early 1900s, 70% of detected cases died from lung problems; now, that number is estimated to be around 3–5%, a change attributed to increased awareness and medications to manage symptoms. Myasthenia gravis occurs in all ethnic groups and both sexes. It most commonly affects women under 40 and people from 50 to 70 years old of either sex, but it has been known to occur at any age. Younger people rarely have thymoma. The number of people affected in the United States is estimated at between 0.5 and 20.4 cases per 100,000, with an estimated 60,000 Americans affected. Within the United Kingdom, an estimated 15 cases of MG occur per 100,000 people. The first to write about MG were Thomas Willis, Samuel Wilks, Erb, and Goldflam. The term "myasthenia gravis pseudo-paralytica" was proposed in 1895 by Jolly, a German physician. Mary Walker treated a person with MG with physostigmine in 1934. Simpson and Nastuk detailed the autoimmune nature of the condition. In 1973, Patrick and Lindstrom used rabbits to show that immunization with purified muscle-like acetylcholine receptors caused the development of MG-like symptoms. Immunomodulating substances, such as drugs that prevent acetylcholine receptor modulation by the immune system, are currently being researched. Recent research has also examined anti-C5 inhibitors as treatments, as they are safe and already used in the treatment of other diseases. Ephedrine seems to benefit some people more than other medications, but it had not been properly studied as of 2014. In the laboratory, MG is mostly studied in model organisms, such as rodents. In addition, in 2015, scientists developed an in vitro, functional, all-human neuromuscular junction assay from human embryonic stem cells and somatic muscle stem cells. After the addition of pathogenic antibodies against the acetylcholine receptor and activation of the complement system, the neuromuscular co-culture shows symptoms such as weaker muscle contractions.
https://en.wikipedia.org/wiki?curid=18998
Microsoft Microsoft Corporation is an American multinational technology company with headquarters in Redmond, Washington. It develops, manufactures, licenses, supports, and sells computer software, consumer electronics, personal computers, and related services. Its best-known software products are the Microsoft Windows line of operating systems, the Microsoft Office suite, and the Internet Explorer and Edge web browsers. Its flagship hardware products are the Xbox video game consoles and the Microsoft Surface lineup of touchscreen personal computers. In 2016, it was the world's largest software maker by revenue (Alphabet/Google has since surpassed it in revenue). The word "Microsoft" is a portmanteau of "microcomputer" and "software". Microsoft is ranked No. 30 in the 2018 Fortune 500 rankings of the largest United States corporations by total revenue. It is considered one of the Big Five technology companies, alongside Amazon, Apple, Google, and Facebook. Microsoft was founded by Bill Gates and Paul Allen on April 4, 1975, to develop and sell BASIC interpreters for the Altair 8800. It rose to dominate the personal computer operating system market with MS-DOS in the mid-1980s, followed by Microsoft Windows. The company's 1986 initial public offering (IPO), and the subsequent rise in its share price, created three billionaires and an estimated 12,000 millionaires among Microsoft employees. Since the 1990s, it has increasingly diversified from the operating system market and has made a number of corporate acquisitions, the largest being the acquisition of LinkedIn for $26.2 billion in December 2016, followed by the acquisition of Skype Technologies for $8.5 billion in May 2011. Microsoft is market-dominant in the IBM PC compatible operating system market and the office software suite market, although it has lost the majority of the overall operating system market to Android. The company also produces a wide range of other consumer and enterprise software for desktops, laptops, tablets, gadgets, and servers, including Internet search (with Bing), the digital services market (through MSN), mixed reality (HoloLens), cloud computing (Azure), and software development (Visual Studio). Steve Ballmer replaced Gates as CEO in 2000 and later envisioned a "devices and services" strategy. This unfolded with Microsoft acquiring Danger Inc. in 2008, entering the personal computer production market for the first time in June 2012 with the launch of the Microsoft Surface line of tablet computers, and later forming Microsoft Mobile through the acquisition of Nokia's devices and services division. Since Satya Nadella took over as CEO in 2014, the company has scaled back on hardware and has instead focused on cloud computing, a move that helped the company's shares reach their highest value since December 1999. After being dethroned by Apple in 2010, Microsoft in 2018 reclaimed its position as the most valuable publicly traded company in the world. In April 2019, Microsoft reached a market capitalization of $1 trillion, becoming the third U.S. public company to be valued at over $1 trillion, after Apple and Amazon. Childhood friends Bill Gates and Paul Allen sought to make a business utilizing their shared skills in computer programming. In 1972, they founded Traf-O-Data, which sold a rudimentary computer to track and analyze automobile traffic data. Gates enrolled at Harvard while Allen pursued a degree in computer science at Washington State University, though he later dropped out of school to work at Honeywell. 
The January 1975 issue of "Popular Electronics" featured Micro Instrumentation and Telemetry Systems's (MITS) Altair 8800 microcomputer, which inspired Allen to suggest that they could program a BASIC interpreter for the device. Gates called MITS and claimed that he had a working interpreter, and MITS requested a demonstration. Allen worked on a simulator for the Altair while Gates developed the interpreter, and it worked flawlessly when they demonstrated it to MITS in March 1975 in Albuquerque, New Mexico. MITS agreed to distribute it, marketing it as Altair BASIC. Gates and Allen established Microsoft on April 4, 1975, with Gates as the CEO; Allen suggested the name "Micro-Soft", short for microcomputer software. In August 1977, the company formed an agreement with ASCII Magazine in Japan, resulting in its first international office, "ASCII Microsoft". Microsoft moved its headquarters to Bellevue, Washington in January 1979. Microsoft entered the operating system (OS) business in 1980 with its own version of Unix called Xenix, but it was MS-DOS that solidified the company's dominance. IBM awarded a contract to Microsoft in November 1980 to provide a version of the CP/M OS to be used in the IBM Personal Computer (IBM PC). For this deal, Microsoft purchased a CP/M clone called 86-DOS from Seattle Computer Products, which it branded as MS-DOS, although IBM rebranded it to IBM PC DOS. Microsoft retained ownership of MS-DOS following the release of the IBM PC in August 1981. IBM had copyrighted the IBM PC BIOS, so other companies had to reverse engineer it in order for non-IBM hardware to run as IBM PC compatibles, but no such restriction applied to the operating systems. Microsoft eventually became the leading PC operating system vendor. The company expanded into new markets with the release of the "Microsoft Mouse" in 1983, as well as with a publishing division named Microsoft Press. Paul Allen resigned from Microsoft in 1983 after developing Hodgkin's disease. Allen claimed in "Idea Man: A Memoir by the Co-founder of Microsoft" that, when he was diagnosed with Hodgkin's disease, Gates wanted to dilute Allen's share in the company because Gates did not think Allen was working hard enough. Allen later invested in low-tech sectors, sports teams, commercial real estate, neuroscience, private space flight, and more. Microsoft released Microsoft Windows on November 20, 1985, as a graphical extension for MS-DOS, despite having begun jointly developing OS/2 with IBM the previous August. Microsoft moved its headquarters from Bellevue to Redmond, Washington on February 26, 1986, and went public on March 13, with the resulting rise in stock making an estimated four billionaires and 12,000 millionaires among Microsoft employees. Microsoft released its version of OS/2 to original equipment manufacturers (OEMs) on April 2, 1987. In 1990, the Federal Trade Commission examined Microsoft for possible collusion due to the partnership with IBM, marking the beginning of more than a decade of legal clashes with the government. Meanwhile, the company was at work on Microsoft Windows NT, which was heavily based on their copy of the OS/2 code. It shipped on July 21, 1993, with a new modular kernel and the 32-bit Win32 application programming interface (API), making it easier to port from 16-bit (MS-DOS-based) Windows. Microsoft informed IBM of Windows NT, and the OS/2 partnership deteriorated. 
In 1990, Microsoft introduced the Microsoft Office suite, which bundled separate applications such as Microsoft Word and Microsoft Excel. On May 22, Microsoft launched Windows 3.0, featuring streamlined user interface graphics and improved protected mode capability for the Intel 386 processor, and both Office and Windows became dominant in their respective areas. On July 27, 1994, the Department of Justice's Antitrust Division filed a competitive impact statement which said: "Beginning in 1988, and continuing until July 15, 1994, Microsoft induced many OEMs to execute anti-competitive 'per processor' licenses. Under a per processor license, an OEM pays Microsoft a royalty for each computer it sells containing a particular microprocessor, whether the OEM sells the computer with a Microsoft operating system or a non-Microsoft operating system. In effect, the royalty payment to Microsoft when no Microsoft product is being used acts as a penalty, or tax, on the OEM's use of a competing PC operating system. Since 1988, Microsoft's use of per processor licenses has increased." Following Bill Gates' internal "Internet Tidal Wave memo" on May 26, 1995, Microsoft began to redefine its offerings and expand its product line into computer networking and the World Wide Web. Aside from a few new companies, like Netscape, Microsoft was the only major, established company that acted fast enough to be a part of the World Wide Web practically from the start. Other companies, such as Borland, WordPerfect, Novell, IBM, and Lotus, were much slower to adapt to the new situation, which would hand Microsoft market dominance. The company released Windows 95 on August 24, 1995, featuring pre-emptive multitasking, a completely new user interface with a novel start button, and 32-bit compatibility; similar to NT, it provided the Win32 API. Windows 95 came bundled with the online service MSN, which was at first intended to be a competitor to the Internet, and (for OEMs) Internet Explorer, a web browser. Internet Explorer was not bundled with the retail Windows 95 boxes, because the boxes were printed before the team finished the web browser, and instead was included in the Windows 95 Plus! pack. Branching out into new markets in 1996, Microsoft and General Electric's NBC unit created a new 24/7 cable news channel, MSNBC. Microsoft also created Windows CE 1.0, a new OS designed for devices with low memory and other constraints, such as personal digital assistants. In October 1997, the Justice Department filed a motion in the Federal District Court, stating that Microsoft had violated an agreement signed in 1994, and asked the court to stop the bundling of Internet Explorer with Windows. On January 13, 2000, Bill Gates handed over the CEO position to Steve Ballmer, an old college friend of Gates and an employee of the company since 1980, while creating a new position for himself as Chief Software Architect. Various companies, including Microsoft, formed the Trusted Computing Platform Alliance in October 1999 to (among other things) increase security and protect intellectual property through identifying changes in hardware and software. Critics decried the alliance as a way to enforce indiscriminate restrictions over how consumers use software and over how computers behave, and as a form of digital rights management: for example, the scenario where a computer is not only secured for its owner, but also secured against its owner. On April 3, 2000, a judgment was handed down in the case of "United States v. 
Microsoft Corp.", calling the company an "abusive monopoly." Microsoft later settled with the U.S. Department of Justice in 2004. On October 25, 2001, Microsoft released Windows XP, unifying the mainstream and NT lines of OS under the NT codebase. The company released the Xbox later that year, entering the video game console market dominated by Sony and Nintendo. In March 2004 the European Union brought antitrust legal action against the company, citing it abused its dominance with the Windows OS, resulting in a judgment of €497 million ($613 million) and requiring Microsoft to produce new versions of Windows XP without Windows Media Player: Windows XP Home Edition N and Windows XP Professional N. In November 2005, the company's second video game console, the Xbox 360, was released. There were two versions, a basic version for $299.99 and a deluxe version for $399.99. Released in January 2007, the next version of Windows, Vista, focused on features, security and a redesigned user interface dubbed Aero. Microsoft Office 2007, released at the same time, featured a "Ribbon" user interface which was a significant departure from its predecessors. Relatively strong sales of both products helped to produce a record profit in 2007. The European Union imposed another fine of €899 million ($1.4 billion) for Microsoft's lack of compliance with the March 2004 judgment on February 27, 2008, saying that the company charged rivals unreasonable prices for key information about its workgroup and backoffice servers. Microsoft stated that it was in compliance and that "these fines are about the past issues that have been resolved". 2007 also saw the creation of a multi-core unit at Microsoft, following the steps of server companies such as Sun and IBM. Gates retired from his role as Chief Software Architect on June 27, 2008, a decision announced in June 2006, while retaining other positions related to the company in addition to being an advisor for the company on key projects. Azure Services Platform, the company's entry into the cloud computing market for Windows, launched on October 27, 2008. On February 12, 2009, Microsoft announced its intent to open a chain of Microsoft-branded retail stores, and on October 22, 2009, the first retail Microsoft Store opened in Scottsdale, Arizona; the same day Windows 7 was officially released to the public. Windows 7's focus was on refining Vista with ease-of-use features and performance enhancements, rather than an extensive reworking of Windows. As the smartphone industry boomed in 2007, Microsoft had struggled to keep up with its rivals Apple and Google in providing a modern smartphone operating system. As a result, in 2010 Microsoft revamped their aging flagship mobile operating system, Windows Mobile, replacing it with the new Windows Phone OS. Microsoft implemented a new strategy for the software industry that had them working more closely with smartphone manufacturers, such as Nokia, and providing a consistent user experience across all smartphones using the Windows Phone OS. It used a new user interface design language, codenamed "Metro", which prominently used simple shapes, typography and iconography, utilizing the concept of minimalism. Microsoft is a founding member of the Open Networking Foundation started on March 23, 2011. Fellow founders were Google, HP Networking, Yahoo!, Verizon Communications, Deutsche Telekom and 17 other companies. 
This nonprofit organization is focused on providing support for a cloud computing initiative called Software-Defined Networking. The initiative is meant to speed innovation through simple software changes in telecommunications networks, wireless networks, data centers, and other networking areas. Following the release of Windows Phone, Microsoft undertook a gradual rebranding of its product range throughout 2011 and 2012, with the corporation's logos, products, services, and websites adopting the principles and concepts of the Metro design language. Microsoft unveiled Windows 8, an operating system designed to power both personal computers and tablet computers, in Taipei in June 2011. A developer preview was released on September 13, 2011, which was subsequently replaced by a consumer preview on February 29, 2012, and by a release preview in May 2012. The Surface was unveiled on June 18, 2012, becoming the first computer in the company's history to have its hardware made by Microsoft. On June 25, Microsoft paid US$1.2 billion to buy the social network Yammer. On July 31, it launched the Outlook.com webmail service to compete with Gmail. On September 4, 2012, Microsoft released Windows Server 2012. In July 2012, Microsoft sold its 50% stake in MSNBC, which it had run as a joint venture with NBC since 1996. On October 1, Microsoft announced its intention to launch a news operation, part of a new-look MSN, with Windows 8 later in the month. On October 26, 2012, Microsoft launched Windows 8 and the Microsoft Surface. Three days later, Windows Phone 8 was launched. To cope with the potential for an increase in demand for products and services, Microsoft opened a number of "holiday stores" across the U.S. to complement the increasing number of bricks-and-mortar Microsoft Stores that opened in 2012. On March 29, 2013, Microsoft launched a Patent Tracker. In August 2012, the New York City Police Department announced a partnership with Microsoft for the development of the Domain Awareness System, which is used for police surveillance in New York City. The Kinect, a motion-sensing input device made by Microsoft and designed as a video game controller, first introduced in November 2010, was upgraded for the 2013 release of the Xbox One video game console. Kinect's capabilities were revealed in May 2013: an ultra-wide 1080p camera, function in the dark due to an infrared sensor, higher-end processing power and new software, the ability to distinguish between fine movements (such as thumb movements), and determining a user's heart rate by looking at their face. Microsoft filed a patent application in 2011 that suggests the corporation may use the Kinect camera system to monitor the behavior of television viewers as part of a plan to make the viewing experience more interactive. On July 19, 2013, Microsoft's stock suffered its biggest one-day percentage sell-off since the year 2000, after its fourth-quarter report raised concerns among investors over the poor showings of both Windows 8 and the Surface tablet; Microsoft lost more than US$32 billion in market value. In line with the maturing PC business, in July 2013 Microsoft announced that it would reorganize the business into four new business divisions: Operating System, Apps, Cloud, and Devices. All previous divisions were to be dissolved into the new divisions without any workforce cuts. On September 3, 2013, Microsoft agreed to buy Nokia's mobile unit for $7 billion, following Amy Hood taking the role of CFO. 
On February 4, 2014, Steve Ballmer stepped down as CEO of Microsoft and was succeeded by Satya Nadella, who previously led Microsoft's Cloud and Enterprise division. On the same day, John W. Thompson took on the role of chairman, in place of Bill Gates, who continued to participate as a technology advisor. Thompson became the second chairman in Microsoft's history. On April 25, 2014, Microsoft acquired Nokia Devices and Services for $7.2 billion. The new subsidiary was renamed Microsoft Mobile Oy. On September 15, 2014, Microsoft acquired the video game development company Mojang, best known for "Minecraft", for $2.5 billion. On June 8, 2017, Microsoft acquired Hexadite, an Israeli security firm, for $100 million. On January 21, 2015, Microsoft announced the release of its first interactive whiteboard, the Microsoft Surface Hub. On July 29, 2015, Windows 10 was released, with its server sibling, Windows Server 2016, released in September 2016. In Q1 2015, Microsoft was the third largest maker of mobile phones, selling 33 million units (7.2% of all); a large majority of them (at least 75%) did not run any version of Windows Phone, and those other phones are not categorized as smartphones by Gartner. In the same time frame, 8 million Windows smartphones (2.5% of all smartphones) were made by all manufacturers, mostly by Microsoft. Microsoft's share of the U.S. smartphone market in January 2016 was 2.7%. During the summer of 2015, the company lost $7.6 billion related to its mobile-phone business, firing 7,800 employees. On March 1, 2016, Microsoft announced the merger of its PC and Xbox divisions, with Phil Spencer announcing that Universal Windows Platform (UWP) apps would be the focus for Microsoft's gaming in the future. On January 24, 2017, Microsoft showcased Intune for Education at the BETT 2017 education technology conference in London. Intune for Education is a new cloud-based application and device management service for the education sector. In May 2016, the company announced it was laying off 1,850 workers and taking an impairment and restructuring charge of $950 million. In June 2016, Microsoft announced a project named Microsoft Azure Information Protection, which aims to help enterprises protect their data as it moves between servers and devices. In November 2016, Microsoft joined the Linux Foundation as a Platinum member during Microsoft's Connect(); developer event in New York. The cost of each Platinum membership is US$500,000 per year. Some analysts deemed this unthinkable: years earlier, in 2001, then-CEO Steve Ballmer had called Linux a "cancer". Microsoft planned to launch a preview of Intune for Education "in the coming weeks", with general availability scheduled for spring 2017, priced at $30 per device or through volume licensing agreements. In January 2018, Microsoft patched Windows 10 to account for CPU problems related to Intel's Meltdown security flaw. The patch led to issues with Microsoft Azure virtual machines reliant on Intel's CPU architecture. On January 12, Microsoft released PowerShell Core 6.0 for the macOS and Linux operating systems. In February 2018, Microsoft ended notification support for its Windows Phone devices, which effectively ended firmware updates for the discontinued devices. In March 2018, Microsoft recalled Windows 10 S, changing it into a mode of the Windows operating system rather than a separate and unique operating system. 
In March the company also established guidelines that censor users of Office 365 from using profanity in private documents. In April 2018, Microsoft released the source code for Windows File Manager under the MIT License to celebrate the program's 20th anniversary. In April the company further expressed willingness to embrace open source initiatives by announcing Azure Sphere, which uses its own derivative of the Linux operating system. In May 2018, Microsoft partnered with 17 American intelligence agencies to develop cloud computing products. The project is dubbed "Azure Government" and has ties to the Joint Enterprise Defense Infrastructure (JEDI) surveillance program. On June 4, 2018, Microsoft officially announced the acquisition of GitHub for $7.5 billion, a deal that closed on October 26, 2018. On July 10, 2018, Microsoft revealed the Surface Go platform to the public. Later in the month, it made Microsoft Teams available free of charge. In August 2018, Microsoft released two projects called Microsoft AccountGuard and Defending Democracy. It also unveiled Snapdragon 850 compatibility for Windows 10 on the ARM architecture. In August 2018, Toyota Tsusho began a partnership with Microsoft to create fish farming tools using the Microsoft Azure application suite for Internet of things (IoT) technologies related to water management. Developed in part by researchers from Kindai University, the water pump mechanisms use artificial intelligence to count the number of fish on a conveyor belt, analyze the number of fish, and deduce the effectiveness of water flow from the data the fish provide. The specific computer programs used in the process fall under the Azure Machine Learning and Azure IoT Hub platforms. In September 2018, Microsoft discontinued Skype Classic. On October 10, 2018, Microsoft joined the Open Invention Network community, despite holding more than 60,000 patents. In November 2018, Microsoft agreed to supply 100,000 Microsoft HoloLens headsets to the United States military in order to "increase lethality by enhancing the ability to detect, decide and engage before the enemy." In November 2018, Microsoft introduced Azure Multi-Factor Authentication for Microsoft Azure. In December 2018, Microsoft announced Project Mu, an open source release of the Unified Extensible Firmware Interface (UEFI) core used in Microsoft Surface and Hyper-V products. The project promotes the idea of Firmware as a Service. In the same month, Microsoft announced the open source implementation of Windows Forms and the Windows Presentation Foundation (WPF), allowing further movement of the company toward the transparent release of key frameworks used in developing Windows desktop applications and software. December also saw the company discontinue its own rendering engine for the Microsoft Edge browser in favor of a Chromium backend. On February 20, 2019, Microsoft said it would offer its cybersecurity service AccountGuard in 12 new markets in Europe, including Germany, France, and Spain, to close security gaps and protect customers in the political space from hacking. In February 2019, hundreds of Microsoft employees protested the company's profiting from a $480 million contract to develop virtual reality headsets for the United States Army. On March 26, 2020, Microsoft announced it was acquiring Affirmed Networks for about $1.35 billion. During the COVID-19 pandemic, Microsoft closed all of its retail stores indefinitely because of health concerns. 
The company is run by a board of directors made up of mostly company outsiders, as is customary for publicly traded companies. Members of the board of directors as of January 2018 are Bill Gates, Satya Nadella, Reid Hoffman, Hugh Johnston, Teri List-Stoll, Charles Noski, Helmut Panke, Sandi Peterson, Penny Pritzker, Charles Scharf, Arne Sorenson, John W. Stanton, John W. Thompson, and Padmasree Warrior. Board members are elected every year at the annual shareholders' meeting using a majority vote system. There are five committees within the board which oversee more specific matters. These committees include the Audit Committee, which handles accounting issues with the company including auditing and reporting; the Compensation Committee, which approves compensation for the CEO and other employees of the company; the Finance Committee, which handles financial matters such as proposing mergers and acquisitions; the Governance and Nominating Committee, which handles various corporate matters including nomination of the board; and the Antitrust Compliance Committee, which attempts to prevent company practices from violating antitrust laws. On March 13, 2020, Gates announced that he was leaving the boards of directors of Microsoft and Berkshire Hathaway in order to focus more on his philanthropic efforts. According to Aaron Tilley of "The Wall Street Journal", this "mark[ed] the biggest boardroom departure in the tech industry since the death of longtime rival and Apple Inc. co-founder Steve Jobs." When Microsoft went public and launched its initial public offering (IPO) in 1986, the opening stock price was $21; after the trading day, the price closed at $27.75. As of July 2010, following the company's nine stock splits, each IPO share had become 288 shares, so the split-adjusted cost of buying in at the IPO, given the splits and other factors, works out to about 9 cents per share (a worked check follows below). The stock price peaked in 1999 at around $119 ($60.928, adjusted for splits). The company began to offer a dividend on January 16, 2003, starting at eight cents per share for the fiscal year, followed by a dividend of sixteen cents per share the subsequent year. It switched from yearly to quarterly dividends in 2005, with eight cents a share per quarter and a special one-time payout of three dollars per share for the second quarter of the fiscal year. Though the company had subsequent increases in dividend payouts, the price of Microsoft's stock remained steady for years. Standard & Poor's and Moody's Investors Service have both given a AAA rating to Microsoft, whose assets were valued at $41 billion compared with only $8.5 billion in unsecured debt. Consequently, in February 2011 Microsoft issued a corporate bond amounting to $2.25 billion with relatively low borrowing rates compared to government bonds. For the first time in 20 years, Apple Inc. surpassed Microsoft in Q1 2011 quarterly profits and revenues, due to a slowdown in PC sales and continuing huge losses in Microsoft's Online Services Division (which contains its search engine Bing). Microsoft's profits were $5.2 billion, while Apple's were $6 billion, on revenues of $14.5 billion and $24.7 billion, respectively. Microsoft's Online Services Division had been continuously loss-making since 2006, and in Q1 2011 it lost $726 million. This followed a loss of $2.5 billion for the year 2010. 
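The split-adjusted price quoted above can be verified with simple arithmetic: one IPO share became 288 shares after the nine splits, so the 1986 prices divide down as shown below. Mapping the "about 9 cents" figure to the first day's closing price is an inference here, not stated in the source.

```python
# Worked check of the split-adjusted share price quoted above.
ipo_open = 21.00      # opening price at the March 13, 1986 IPO
first_close = 27.75   # closing price after the first trading day
split_factor = 288    # one 1986 share became 288 shares via nine splits

print(f"Split-adjusted IPO price:   ${ipo_open / split_factor:.4f}")     # ~$0.0729
print(f"Split-adjusted first close: ${first_close / split_factor:.4f}")  # ~$0.0964, i.e. about 9 cents
```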
On July 20, 2012, Microsoft posted its first quarterly loss ever, despite earning record revenues for the quarter and fiscal year, with a net loss of $492 million due to a writedown related to the advertising company aQuantive, which had been acquired for $6.2 billion back in 2007. As of January 2014, Microsoft's market capitalization stood at $314 billion, making it the 8th largest company in the world by market capitalization. On November 14, 2014, Microsoft overtook ExxonMobil to become the second most-valuable company by market capitalization, behind only Apple Inc. Its total market value was over $410 billion, with the stock price hitting $50.04 a share, the highest since early 2000. In 2015, Reuters reported that Microsoft had earnings abroad of $76.4 billion which were untaxed by the Internal Revenue Service. Under U.S. law, corporations do not pay income tax on overseas profits until the profits are brought into the United States. In November 2018, the company won a $480 million military contract with the U.S. government to bring augmented reality (AR) headset technology into the weapon repertoires of American soldiers. The two-year contract may result in follow-on orders of more than 100,000 headsets, according to documentation describing the bidding process. One of the contract's taglines for the augmented reality technology appears to be its ability to enable "25 bloodless battles before the 1st battle", suggesting that combat training will be an essential aspect of the augmented reality headset's capabilities. In 2004, Microsoft commissioned research firms to do independent studies comparing the total cost of ownership (TCO) of Windows Server 2003 to Linux; the firms concluded that companies found Windows easier to administer than Linux, and thus those using Windows would administer it faster, resulting in lower costs for their company (i.e., lower TCO). This spurred a wave of related studies; a study by the Yankee Group concluded that upgrading from one version of Windows Server to another costs a fraction of the switching costs from Windows Server to Linux, although companies surveyed noted the increased security and reliability of Linux servers and concern about being locked into using Microsoft products. Another study, released by the Open Source Development Labs, claimed that the Microsoft studies were "simply outdated and one-sided", and their survey concluded that the TCO of Linux was lower due to Linux administrators managing more servers on average, among other reasons. As part of the "Get the Facts" campaign, Microsoft highlighted the .NET Framework trading platform that it had developed in partnership with Accenture for the London Stock Exchange, claiming that it provided "five nines" reliability. After suffering extended downtime and unreliability, the London Stock Exchange announced in 2009 that it was planning to drop its Microsoft solution and switch to a Linux-based one in 2010. In 2012, Microsoft hired Mark Penn, a political pollster whom "The New York Times" called "famous for bulldozing" his political opponents, as Executive Vice President, Advertising and Strategy. Penn created a series of negative advertisements targeting one of Microsoft's chief competitors, Google. 
The advertisements, called "Scroogled", attempt to make the case that Google is "screwing" consumers: that its search results are rigged to favor Google's paid advertisers, that Gmail violates the privacy of its users by placing ad results related to the content of their emails, and that its shopping results favor Google's products. Tech publications like TechCrunch have been highly critical of the advertising campaign, while Google employees have embraced it. In July 2014, Microsoft announced plans to lay off 18,000 employees. Microsoft employed 127,104 people as of June 5, 2014, making this about a 14 percent reduction of its workforce, the biggest layoff in Microsoft's history. It included 12,500 professional and factory personnel. Previously, Microsoft had eliminated 5,800 jobs in 2009, during the Great Recession. In September 2014, Microsoft laid off 2,100 people, including 747 in the Seattle–Redmond area, where the company is headquartered. The firings came as a second wave of the layoffs that were previously announced, bringing the total to over 15,000 of the 18,000 expected cuts. In October 2014, Microsoft revealed that it was almost done with the elimination of 18,000 employees, its largest-ever layoff sweep. In July 2015, Microsoft announced another 7,800 job cuts over the next several months. In May 2016, Microsoft announced another 1,850 job cuts, mostly in its (formerly Nokia) mobile phone division. As a result, the company would record an impairment and restructuring charge of approximately $950 million, of which approximately $200 million would relate to severance payments. Microsoft provides information about reported bugs in its software to intelligence agencies of the United States government prior to the public release of the fix. A Microsoft spokesperson has stated that the corporation runs several programs that facilitate the sharing of such information with the U.S. government. Following media reports in May 2013 about PRISM, the NSA's massive electronic surveillance program, several technology companies were identified as participants, including Microsoft. According to leaks about the program, Microsoft joined PRISM in 2007; in June 2013, however, an official statement from Microsoft flatly denied its participation in the program. During the first six months of 2013, Microsoft received requests that affected between 15,000 and 15,999 accounts. In December 2013, the company made a statement to further emphasize that it takes its customers' privacy and data protection very seriously, saying that "government snooping potentially now constitutes an 'advanced persistent threat,' alongside sophisticated malware and cyber attacks". The statement also marked the beginning of a three-part program to enhance Microsoft's encryption and transparency efforts. On July 1, 2014, as part of this program, the company opened the first of many Microsoft Transparency Centers, which provide "participating governments with the ability to review source code for our key products, assure themselves of their software integrity, and confirm there are no 'back doors.'" Microsoft has also argued that the United States Congress should enact strong privacy regulations to protect consumer data. In April 2016, the company sued the U.S. government, arguing that secrecy orders were preventing the company from disclosing warrants to customers, in violation of the company's and customers' rights. 
Microsoft argued that it was unconstitutional for the government to indefinitely ban Microsoft from informing its users that the government was requesting their emails and other documents, and that the Fourth Amendment gives people or businesses the right to know if the government searches or seizes their property. On October 23, 2017, Microsoft said it would drop the lawsuit as a result of a policy change by the United States Department of Justice (DoJ). The DoJ had "changed data request rules on alerting Internet users about agencies accessing their information." Technical reference for developers and articles for various Microsoft magazines such as "Microsoft Systems Journal" (MSJ) are available through the Microsoft Developer Network (MSDN). MSDN also offers subscriptions for companies and individuals, and the more expensive subscriptions usually offer access to pre-release beta versions of Microsoft software. In April 2004, Microsoft launched a community site for developers and users, titled Channel 9, that provides a wiki and an Internet forum. Another community site that provides daily videocasts and other services, On10.net, launched on March 3, 2006. Free technical support is traditionally provided through online Usenet newsgroups (and CompuServe in the past), monitored by Microsoft employees; there can be several newsgroups for a single product. Helpful people can be elected by peers or Microsoft employees for Microsoft Most Valuable Professional (MVP) status, which entitles them to a sort of special social status and possibilities for awards and other benefits. Microsoft is also noted for its internal lexicon: the expression "eating your own dog food" is used to describe the policy of using pre-release and beta versions of products inside Microsoft in an effort to test them in "real-world" situations. This is usually shortened to just "dog food" and is used as a noun, verb, and adjective. Another bit of jargon, FYIFV or FYIV ("Fuck You, I'm [Fully] Vested"), is used by an employee to indicate they are financially independent and can avoid work anytime they wish. The company is also known for its hiring process, mimicked in other organizations and dubbed the "Microsoft interview", which is notorious for off-the-wall questions such as "Why is a manhole cover round?". Microsoft is an outspoken opponent of the cap on H-1B visas, which allow companies in the U.S. to employ certain foreign workers. Bill Gates claims the cap on H-1B visas makes it difficult to hire employees for the company, stating "I'd certainly get rid of the H-1B cap" in 2005. Critics of H-1B visas argue that relaxing the limits would result in increased unemployment for U.S. citizens, due to H-1B workers working for lower salaries. The Human Rights Campaign Corporate Equality Index, a report on how progressive the organization deems company policies toward LGBT employees, rated Microsoft at 87% from 2002 to 2004 and at 100% from 2005 to 2010, after the company allowed gender expression. In August 2018, Microsoft implemented a policy requiring all companies providing subcontractors to give 12 weeks of paid parental leave to each employee. This expanded on a 2015 requirement of 15 days of paid vacation and sick leave each year. In 2015, Microsoft established its own parental leave policy, allowing 12 weeks off for parental leave with an additional 8 weeks for the parent who gave birth. 
In 2011, Greenpeace released a report rating the top ten big brands in cloud computing on their sources of electricity for their data centers. At the time, data centers consumed up to 2% of all global electricity, and this amount was projected to increase. Phil Radford of Greenpeace said "we are concerned that this new explosion in electricity use could lock us into old, polluting energy sources instead of the clean energy available today," and called on "Amazon, Microsoft and other leaders of the information-technology industry" to "embrace clean energy to power their cloud-based data centers." In 2013, Microsoft agreed to buy power generated by a Texas wind project to power one of its data centers. Microsoft is ranked 17th in Greenpeace's "Guide to Greener Electronics" (16th edition), which ranks 18 electronics manufacturers according to their policies on toxic chemicals, recycling, and climate change. Microsoft's timeline for phasing out brominated flame retardants (BFRs) and phthalates in all products was 2012, but its commitment to phasing out PVC is not clear. As of January 2011, it had no products that were completely free of PVC and BFRs. Microsoft's main U.S. campus received a silver certification from the Leadership in Energy and Environmental Design (LEED) program in 2008, and it installed over 2,000 solar panels on top of its buildings at its Silicon Valley campus, generating approximately 15 percent of the total energy needed by the facilities in April 2005. Microsoft makes use of alternative forms of transit. It created one of the world's largest private bus systems, the "Connector", to transport people to the campus from outside the company; for on-campus transportation, the "Shuttle Connect" uses a large fleet of hybrid cars to save fuel. The company also subsidizes regional public transport, provided by Sound Transit and King County Metro, as an incentive. In February 2010, however, Microsoft took a stance against adding additional public transport and high-occupancy vehicle (HOV) lanes to State Route 520 and its floating bridge connecting Redmond to Seattle; the company did not want to delay the construction any further. Microsoft was ranked number 1 in the list of the World's Best Multinational Workplaces by the Great Place to Work Institute in 2011. In January 2020, the company promised to remove from the environment all of the carbon that it has emitted since its foundation in 1975. Microsoft donates to politicians who deny climate change, including Jim Inhofe. The corporate headquarters, informally known as the Microsoft Redmond campus, is located at One Microsoft Way in Redmond, Washington. Microsoft initially moved onto the grounds of the campus on February 26, 1986, weeks before the company went public on March 13. The headquarters has experienced multiple expansions since its establishment and is estimated to encompass over 8 million square feet (750,000 m2) of office space and 30,000–40,000 employees. Additional offices are located in Bellevue and Issaquah, Washington (90,000 employees worldwide). The company is planning to upgrade its Mountain View, California, campus, which it has occupied since 1981, on a grand scale. In 2016, the company bought the 32-acre campus, with plans to renovate and expand it by 25%. Microsoft operates an East Coast headquarters in Charlotte, North Carolina. On October 26, 2015, the company opened its retail location on Fifth Avenue in New York City. The location features a five-story glass storefront and is 22,270 square feet. 
According to company executives, Microsoft had been on the lookout for a flagship location since 2009. The company's retail locations are part of a greater strategy to help build a connection with its consumers. The opening of the store coincided with the launch of the Surface Book and Surface Pro 4. On November 12, 2015, Microsoft opened a second flagship store, located in Sydney's Pitt Street Mall. Microsoft adopted the so-called "Pac-Man" logo, designed by Scott Baker, in 1987. Baker stated, "The new logo, in Helvetica italic typeface, has a slash between the 'o' and 's' to emphasize the 'soft' part of the name and convey motion and speed." Dave Norris ran an internal joke campaign to save the old logo, which was green, in all uppercase, and featured a fanciful letter "O", nicknamed the "blibbet", but it was discarded. Microsoft's logo with the tagline "Your potential. Our passion.", below the main corporate name, is based on a slogan Microsoft used in 2008. In 2002, the company started using the logo in the United States and eventually started a television campaign with the slogan, changed from the previous tagline of "Where do you want to go today?". During the private MGX (Microsoft Global Exchange) conference in 2010, Microsoft unveiled the company's next tagline, "Be What's Next." The company has also used the slogan "Making it all make sense." On August 23, 2012, Microsoft unveiled a new corporate logo at the opening of its 23rd Microsoft store in Boston, indicating the company's shift of focus from the classic style to the tile-centric modern interface used on the Windows Phone platform, Xbox 360, Windows 8, and the Office suites. The new logo also includes four squares in the colors of the then-current Windows logo, which have been used to represent Microsoft's four major products: Windows (blue), Office (red), Xbox (green), and Bing (yellow). The logo resembles the opening of one of the commercials for Windows 95. The company was the official jersey sponsor of Finland's national basketball team at EuroBasket 2015. During the COVID-19 pandemic, Microsoft's president, Brad Smith, announced that an initial batch of supplies, including 15,000 protective goggles, infrared thermometers, medical caps, and protective suits, was donated to Seattle, with further aid to come soon.
https://en.wikipedia.org/wiki?curid=19001