Microserfs, published by HarperCollins in 1995, is an epistolary novel by Douglas Coupland. It first appeared in short story form as the cover article for the January 1994 issue of "Wired" magazine and was subsequently expanded to full novel length. Set in the early 1990s, it captures the state of the technology industry before Windows 95, and anticipates the dot-com bubble of the late 1990s. The novel is presented in the form of diary entries maintained on a PowerBook by the narrator, Daniel. Because of this, as well as its formatting and usage of emoticons, the novel is similar to what emerged a decade later as the blog format. Coupland revisited many of the ideas of "Microserfs" in his 2006 novel "JPod", which has been labeled "Microserfs for the Google generation". The plot of the novel has two distinct movements: the events at Microsoft in Redmond, Washington, and the move to Silicon Valley and the "Oop!" project. The novel begins in Redmond as the characters are working on different projects at Microsoft's main campus. Life at the campus feels like a feudal society, with Bill Gates as the lord and the employees the serfs. The majority of the main characters—Daniel (the narrator), Susan, Todd, Bug, Michael, and Abe—are living together in a "geek house", and their lives are dedicated to their projects and the company. Daniel's foundations are shaken when his father, a longtime employee of IBM, is laid off. The lifespan of a Microsoft coder weighs heavily on Daniel's mind. The second movement of the novel begins when the characters are offered jobs in Silicon Valley working on a project for Michael, who has by then left Redmond. All of the housemates—some immediately, some after thought—decide to move to the Valley. The characters' lives change drastically once they leave the limited sphere of the Microsoft campus and enter the world of "One-Point-Oh". They begin to work on a project called "Oop!" (a reference to object-oriented programming). Oop! is a Lego-like design program allowing dynamic creation of many objects, bearing a resemblance to 2009's "Minecraft" (Coupland appears on the rear cover of the novel's hardcover editions photographed in Denmark's Legoland Billund, holding a Lego 777). One of the undercurrents of the plot is Daniel and his family's relationship to Jed, Daniel's younger brother, who died in a boating accident when they were children. Coupland lived in Redmond, Washington, for six weeks and in Palo Alto, Silicon Valley, for four months researching the lives of Microsoft workers. "It was a 'Gorillas in the Mist' kind of observation… What do they put in their glove compartments? What snack foods do they eat? What posters are on their bedroom walls?" Friends from Microsoft and Apple also helped him with research. The novel was a radical departure from Coupland's previous novel, "Life After God". "I wrote the two books under radically different mind-sets, and "Serfs" was a willful rerouting into a different realm". Coupland first noticed that his art school friends were working in computers in 1992. Coupland's research turned up links to the themes of "Life After God". "What surprised me about Microsoft is that no one has any conception of an afterlife. There is so little thought given to eternal issues that their very absence makes them pointedly there.
These people are so locked into the world, by default some sort of transcendence is located elsewhere, and obviously machines become the totem they imbue with sacred properties, wishes, hopes, goals, desires, dreams. That sounds like 1940s SF, but it's become the world." The book takes place first at Microsoft in Redmond, Washington (near Seattle) and then in Silicon Valley (near San Francisco). The time period is 1993–1995, when Microsoft has reached dominance in the software industry and emerged victorious from the "Look & Feel" lawsuit brought by Apple, a company that had at times seemed in danger of falling apart. The Northridge earthquake takes place during the story and has a profound effect on Ethan, who eventually constructs a replica highway interchange out of Lego pieces to honor the infrastructure destroyed by the earthquake. Coupland's interest in the world of Microsoft and technology workers began with the publication of a short story in "Wired" magazine in 1994; the story was later expanded into the novel. Shortly before the publication of "Microserfs", Coupland began to distance himself from his label as spokesperson for Generation X. Coupland's novel anticipated the outcome of the late-1990s dot-com bubble with its depiction of the Oop! project's search for capital. The abridged audiobook for "Microserfs" was read by Matthew Perry. Several coded messages are included within the text; one of them is an adapted version of the Rifleman's Creed.
Moscow is the capital and most populous city of Russia, standing on the Moskva River in the central part of Western Russia. Moscow's population is estimated at 12.6 million residents within the city proper, with over 17 million residents in the urban area, and over 20 million residents in the Moscow Metropolitan Area. The city limits cover an area of , while the metropolitan area covers over . Moscow is among the world's largest cities, being the most populous city entirely within Europe, the most populous urban area in Europe, the most populous metropolitan area in Europe, and also the largest city (by area) on the European continent. Originally established in 1147 as a minor town, Moscow grew to become a prosperous and powerful city that served as the capital of the Grand Duchy that bears its name. When the Grand Duchy of Moscow evolved into the Tsardom of Russia, Moscow remained the political and economic centre for most of the Tsardom's history. When the Tsardom was reformed into the Russian Empire, the capital was moved from Moscow to Saint Petersburg, diminishing the influence of the city. The capital was moved back to Moscow following the Russian Revolution, and the city was restored as the political centre of the Russian SFSR and the Soviet Union. When the Soviet Union dissolved, Moscow remained the capital of the newly established Russian Federation. The northernmost and coldest megacity in the world, with a history spanning over eight centuries, Moscow is governed as a federal city that serves as the political, economic, cultural, and scientific centre of Russia and Eastern Europe. As an alpha global city, Moscow has one of the world's largest urban economies, is one of the most expensive cities in the world, and is one of the fastest-growing tourist destinations in the world. Moscow is home to the third-highest number of billionaires of any city in the world, and has the highest number of billionaires of any city in Europe. The Moscow International Business Center is one of the largest financial centres of Europe and the world, and features some of Europe's tallest skyscrapers. Moscow is also home to the tallest free-standing structure in Europe, the Ostankino Tower, and was one of the host cities of the 2018 FIFA World Cup. As the cultural centre of Russia, Moscow has served as the home of Russian artists, scientists, and sports figures, owing to its numerous museums, academic and political institutions, and theatres. The city is home to several UNESCO World Heritage Sites, and is well known for its display of Russian architecture, particularly historic buildings such as Saint Basil's Cathedral, Red Square, and the Moscow Kremlin, the last of which serves as the seat of power of the Government of Russia. Moscow is home to many Russian companies in numerous industries, such as finance and technology. Moscow is served by a comprehensive transit network, which includes four international airports, nine railway terminals, a tram system, a monorail system, and, most notably, the Moscow Metro, the busiest metro system in Europe and one of the largest rapid transit systems in the world. With over 40 percent of its territory covered by greenery, it is one of the greenest cities in Europe and the world. The name of the city is thought to be derived from the name of the Moskva River. Several theories of the origin of the river's name have been proposed.
The Finno-Ugric Merya and Muroma peoples, who were among the several pre-Slavic tribes which originally inhabited the area, supposedly called the river "Mustajoki", in English: "Black river". It has been suggested that the name of the city derives from this term. The most linguistically well-grounded and widely accepted theory derives the name from the Proto-Balto-Slavic root *mŭzg-/*muzg-, from a Proto-Indo-European root meaning "wet", so the name "Moskva" might signify a river at a wetland or a marsh. Its cognates include words meaning "pool, puddle", "to wash", "to drown", and "to dip, immerse". In many Slavic countries Moskov is a surname, most common in Bulgaria, Russia, Ukraine and North Macedonia. Similar place names also exist in Poland, such as Mozgawa. The original Old Russian form of the name is reconstructed as one of a few Slavic "ū"-stem nouns. As with other nouns of that declension, it underwent a morphological transformation at an early stage of the development of the language; as a result, the first written mentions in the 12th century appear in accusative, locative, and genitive forms. From the latter forms came the modern Russian name Москва ("Moskva"), which is a result of morphological generalisation with the numerous Slavic "ā"-stem nouns. However, the older form "Moskovĭ" has left traces in many other languages, such as English "Moscow", German "Moskau", French "Moscou", and Ottoman Turkish "Moskov". The Latin name "Moscovia" was formed in a similar manner; it later became a colloquial name for Russia used in Western Europe in the 16th–17th centuries, from which also came English "Muscovy" and "muscovite". Various other theories (of Celtic, Iranian, or Caucasian origin), having little or no scientific grounding, are now largely rejected by contemporary linguists. Moscow has acquired a number of epithets, most referring to its size and preeminent status within the nation: the "Third Rome", the "Whitestone One", the "First Throne", and the "Forty Soroks" ("sorok" meaning both "forty, a great many" and "a district or parish" in Old Russian). Moscow is also one of the twelve Hero Cities. The demonym for a Moscow resident is "москвич" ("moskvich") for a male or "москвичка" ("moskvichka") for a female, rendered in English as "Muscovite". The name "Moscow" is abbreviated "MSK" (МСК in Russian). The oldest evidence of humans on the territory of Moscow dates from the Neolithic period (the Schukinskaya site on the Moscow River). Within the modern bounds of the city, other, later evidence has been discovered (the burial ground of the Fatyanovskaya culture, the site of the Iron Age settlement of the Dyakovo culture) on the territory of the Kremlin, Sparrow Hills, the Setun River, the Kuntsevskiy forest park, etc. In the 9th century, the Oka River was part of the Volga trade route, and the upper Volga watershed became an area of contact between the indigenous Finno-Ugric peoples, such as the Merya, and the expanding Volga Bulgars (particularly the second son of Khan Kubrat, who expanded the borders of Old Great Bulgaria), Scandinavian (Varangian) and Slavic peoples. The earliest East Slavic tribes recorded as having expanded to the upper Volga in the 9th to 10th centuries are the Vyatichi and Krivichi. The Moskva River area was incorporated into Suzdal, as part of Kievan Rus, in the 11th century. By AD 1100, a minor settlement had appeared at the mouth of the Neglinnaya River. The first known reference to Moscow dates from 1147, as a meeting place of Yuri Dolgoruky and Sviatoslav Olgovich.
At the time it was a minor town on the western border of the Vladimir-Suzdal Principality. The chronicle says, "Come, my brother, to Moskov" (original: "Приди ко мне, брате, во Москов"). In 1156, Knyaz Yury Dolgoruky fortified the town with a timber fence and a moat. In the course of the Mongol invasion of Rus, the Mongols under Batu Khan burned the city to the ground and killed its inhabitants. The timber fort "na Moskvě" ("on the Moscow River") was inherited by Daniel, the youngest son of Alexander Nevsky, in the 1260s, at the time considered the least valuable of his father's possessions. Daniel was still a child at the time, and the fort was governed by "tiuns" (deputies), appointed by Daniel's paternal uncle, Yaroslav of Tver. Daniel came of age in the 1270s and became involved in the power struggles of the principality with lasting success, siding with his brother Dmitry in his bid for the rule of Novgorod. From 1283 he acted as the ruler of an independent principality alongside Dmitry, who became Grand Duke of Vladimir. Daniel has been credited with founding the first Moscow monasteries, dedicated to the Lord's Epiphany and to Saint Daniel. Daniel ruled Moscow as Grand Duke until 1303 and established it as a prosperous city that would eclipse its parent principality of Vladimir by the 1320s. On the right bank of the Moskva River, at a distance of from the Kremlin, no later than 1282, Daniel founded the first monastery, with the wooden church of St. Daniel the Stylite, which is now the Danilov Monastery. Daniel died in 1303, at the age of 42. Before his death he became a monk and, according to his will, was buried in the cemetery of the St. Daniel Monastery. Moscow was quite stable and prosperous for many years and attracted a large number of refugees from across Russia. The Rurikids maintained large landholdings by practicing primogeniture, whereby all land was passed to the eldest son rather than divided among all sons. By 1304, Yury of Moscow contested with Mikhail of Tver for the throne of the principality of Vladimir. Ivan I eventually defeated Tver to become the sole collector of taxes for the Mongol rulers, making Moscow the capital of Vladimir-Suzdal. By paying high tribute, Ivan won an important concession from the Khan. While the Khan of the Golden Horde initially attempted to limit Moscow's influence, when the growth of the Grand Duchy of Lithuania began to threaten all of Russia, the Khan strengthened Moscow to counterbalance Lithuania, allowing it to become one of the most powerful cities in Russia. In 1380, Prince Dmitry Donskoy of Moscow led a united Russian army to an important victory over the Mongols in the Battle of Kulikovo. Afterwards, Moscow took the leading role in liberating Russia from Mongol domination. In 1480, Ivan III finally broke the Russians free from Tatar control, and Moscow became the capital of an empire that would eventually encompass all of Russia and Siberia, and parts of many other lands. In 1462 Ivan III (1440–1505) became Grand Prince of Moscow (then part of the medieval Muscovy state). He began fighting the Tatars, enlarged the territory of Muscovy, and enriched his capital city. By 1500 it had a population of 100,000 and was one of the largest cities in the world. He conquered the far larger principality of Novgorod to the north, which had been allied with the hostile Lithuanians. Thus he enlarged the territory sevenfold. He took control of the ancient "Novgorod Chronicle" and made it a propaganda vehicle for his regime.
The original Moscow Kremlin was built in the 14th century. It was reconstructed by Ivan, who in the 1480s invited architects from Renaissance Italy, such as Petrus Antonius Solarius, who designed the new Kremlin wall and its towers, and Marco Ruffo, who designed the new palace for the prince. The Kremlin walls as they now appear are those designed by Solarius, completed in 1495. The Kremlin's Great Bell Tower was built in 1505–08 and augmented to its present height in 1600. A trading settlement, or "posad", grew up to the east of the Kremlin, in the area known as "Zaradye" (Зарядье). In the time of Ivan III, Red Square, originally named the Hollow Field (Полое поле), appeared. In 1508–1516, the Italian architect Aleviz Fryazin (Novy) arranged for the construction of a moat in front of the eastern wall, which would connect the Moskva and the Neglinnaya and be filled with water from the Neglinnaya. This moat, known as the Alevizov moat, and having a length of , a width of , and a depth of , was lined with limestone and, in 1533, fenced on both sides with low, cogged-brick walls. In the 16th and 17th centuries, three circular defences were built: Kitay-gorod (Китай-город), the White City (Белый город) and the Earthen City (Земляной город). However, in 1547, two fires destroyed much of the town, and in 1571 the Crimean Tatars captured Moscow, burning everything except the Kremlin. The annals record that only 30,000 of 200,000 inhabitants survived. The Crimean Tatars attacked again in 1591, but this time they were held back by new defence walls, built between 1584 and 1591 by a craftsman named Fyodor Kon. In 1592, an outer earth rampart with 50 towers was erected around the city, including an area on the right bank of the Moscow River. As an outermost line of defence, a chain of strongly fortified monasteries was established beyond the ramparts to the south and east, principally the Novodevichy Convent and the Donskoy, Danilov, Simonov, Novospasskiy, and Andronikov monasteries, most of which now house museums. From its ramparts, the city became poetically known as "Bielokamennaya", the "White-Walled". The limits of the city as marked by the ramparts built in 1592 are now marked by the Garden Ring. Three square gates existed on the eastern side of the Kremlin wall, which in the 17th century were known as Konstantino-Eleninsky, Spassky, and Nikolsky (owing their names to the icons of Constantine and Helen, the Saviour, and St. Nicholas that hung over them). The last two were directly opposite Red Square, while the Konstantino-Eleninsky gate was located behind Saint Basil's Cathedral. The Russian famine of 1601–03 killed perhaps 100,000 in Moscow. From 1610 through 1612, troops of the Polish–Lithuanian Commonwealth occupied Moscow, as its ruler Sigismund III tried to take the Russian throne. In 1612, the people of Nizhny Novgorod and other Russian cities, led by Prince Dmitry Pozharsky and Kuzma Minin, rose against the Polish occupants, besieged the Kremlin, and expelled them. In 1613, the Zemsky sobor elected Michael Romanov tsar, establishing the Romanov dynasty. The 17th century was rich in popular risings, such as the liberation of Moscow from the Polish–Lithuanian invaders (1612), the Salt Riot (1648), the Copper Riot (1662), and the Moscow Uprising of 1682. During the first half of the 17th century, the population of Moscow doubled from roughly 100,000 to 200,000. It expanded beyond its ramparts in the later 17th century.
By 1682, there were 692 households established north of the ramparts by Ukrainians and Belarusians abducted from their hometowns in the course of the Russo-Polish War (1654–1667). These new outskirts of the city came to be known as the Meshchanskaya sloboda, after the Ruthenian "meshchane", "town people". The term "meshchane" (мещане) acquired pejorative connotations in 18th-century Russia and today means "petty bourgeois" or "narrow-minded philistine". The entire city of the late 17th century, including the slobodas that grew up outside the city ramparts, is contained within what is today Moscow's Central Administrative Okrug. Numerous disasters befell the city. Plague epidemics ravaged Moscow in 1570–1571, 1592 and 1654–1656; the last of these killed upwards of 80% of the people. Fires burned out much of the wooden city in 1626 and 1648. In 1712 Peter the Great moved his government to the newly built Saint Petersburg on the Baltic coast. Moscow ceased to be Russia's capital, except for a brief period from 1728 to 1732 under the influence of the Supreme Privy Council. After losing its status as the capital of the empire, the population of Moscow at first decreased, from 200,000 in the 17th century to 130,000 in 1750. But after 1750, the population grew more than tenfold over the remaining duration of the Russian Empire, reaching 1.8 million by 1915. The 1770–1772 Russian plague killed up to 100,000 people in Moscow. By 1700, the building of cobbled roads had begun. In November 1730, permanent street lighting was introduced, and by 1867 many streets had gaslights. In 1883, near the Prechistinskiye Gates, arc lamps were installed. In 1741 Moscow was surrounded by a barricade, the Kamer-Kollezhskiy barrier, with 16 gates at which customs tolls were collected. Its line is traced today by a number of streets called "val" ("ramparts"). Between 1781 and 1804 the Mytischinskiy water-pipe (the first in Russia) was built. In 1813, following the destruction of much of the city during the French occupation, a Commission for the Construction of the City of Moscow was established. It launched a great program of rebuilding, including a partial replanning of the city centre. Among the many buildings constructed or reconstructed at this time were the Grand Kremlin Palace and the Kremlin Armoury, Moscow University, the Moscow Manege (Riding School), and the Bolshoi Theatre. In 1903 the Moskvoretskaya water supply was completed. In the early 19th century, the Arch of the Konstantino-Eleninsky gate was paved with bricks, but the Spassky Gate was the main front gate of the Kremlin and used for royal entrances. From this gate, wooden and (following the 17th-century improvements) stone bridges stretched across the moat. Books were sold on this bridge and stone platforms were built nearby for guns ("raskats"). The Tsar Cannon was located on the platform of the Lobnoye mesto. The road connecting Moscow with St. Petersburg, now the M10 highway, was completed in 1746, its Moscow end following the old Tver road, which had existed since the 16th century. It became known as "Peterburskoye Schosse" after it was paved in the 1780s. Petrovsky Palace was built in 1776–1780 by Matvey Kazakov. When Napoleon invaded Russia in 1812, the Muscovites were evacuated. It is suspected that the Moscow fire was principally the effect of Russian sabotage. Napoleon's "Grande Armée" was forced to retreat and was nearly annihilated by the devastating Russian winter and sporadic attacks by Russian military forces.
As many as 400,000 of Napoleon's soldiers died during this time. Moscow State University was established in 1755. Its main building was reconstructed after the 1812 fire by Domenico Giliardi. The "Moskovskiye Vedomosti" newspaper appeared from 1756, originally at weekly intervals, and from 1859 as a daily newspaper. Arbat Street had been in existence since at least the 15th century, but it was developed into a prestigious area during the 18th century. It was destroyed in the fire of 1812 and was rebuilt completely in the early 19th century. In the 1830s, General Alexander Bashilov planned the first regular grid of city streets north of Petrovsky Palace. Khodynka Field south of the highway was used for military training. Smolensky Rail Station (forerunner of the present-day Belorussky Rail Terminal) was inaugurated in 1870. Sokolniki Park, in the 18th century the home of the tsar's falconers well outside Moscow, became contiguous with the expanding city in the later 19th century and was developed into a public municipal park in 1878. The suburban Savyolovsky Rail Terminal was built in 1902. In January 1905, the institution of the City Governor, or Mayor, was officially introduced in Moscow, and Alexander Adrianov became Moscow's first official mayor. When Catherine II came to power in 1762, the city's filth and the smell of sewage were depicted by observers as a symptom of the disorderly lifestyles of lower-class Russians recently arrived from the farms. Elites called for improving sanitation, which became part of Catherine's plans for increasing control over social life. National political and military successes from 1812 through 1855 calmed the critics and validated efforts to produce a more enlightened and stable society. There was less talk about the smell and the poor conditions of public health. However, in the wake of Russia's failures in the Crimean War in 1855–56, confidence in the ability of the state to maintain order in the slums eroded, and demands for improved public health put filth back on the agenda. Following the success of the Russian Revolution of 1917, Vladimir Lenin, fearing possible foreign invasion, moved the capital from Petrograd to Moscow on March 12, 1918. The Kremlin once again became the seat of power and the political centre of the new state. With the change in values imposed by communist ideology, the tradition of preservation of cultural heritage was broken. Independent preservation societies, even those that defended only secular landmarks, such as the Moscow-based OIRU, were disbanded by the end of the 1920s. A new anti-religious campaign, launched in 1929, coincided with the collectivization of the peasants; the destruction of churches in the cities peaked around 1932. In 1937 several letters were written to the Central Committee of the Communist Party of the Soviet Union proposing to rename Moscow "Stalindar" or "Stalinodar", one from an elderly pensioner whose dream was to "live in Stalinodar" and who had selected the name to represent the "gift" (dar) of the genius of Stalin. Stalin rejected this suggestion, and when it was suggested to him again by Nikolai Yezhov, he was "outraged", saying "What do I need this for?". This followed Stalin's 1936 ban on renaming places after him. During World War II, the Soviet State Committee of Defence and the General Staff of the Red Army were located in Moscow.
In 1941, 16 divisions of national volunteers (more than 160,000 people), 25 battalions (18,000 people) and 4 engineering regiments were formed from among the Muscovites. In October, November and December 1941, as well as in January 1942, the German Army Group Centre was stopped at the outskirts of the city and then driven off in the course of the Battle of Moscow. Many factories were evacuated, together with much of the government, and from October 20 the city was declared to be in a state of siege. Its remaining inhabitants built and manned antitank defences, while the city was bombarded from the air. On May 1, 1944, the medal "For the defence of Moscow" was instituted, followed in 1947 by another medal, "In memory of the 800th anniversary of Moscow". Both German and Soviet casualties during the Battle of Moscow have been a subject of debate, as various sources provide somewhat different estimates. Total casualties between September 30, 1941, and January 7, 1942, are estimated to be between 248,000 and 400,000 for the Wehrmacht and between 650,000 and 1,280,000 for the Red Army. During the postwar years there was a serious housing crisis, solved by the mass construction of high-rise apartments. There are over 11,000 of these standardised and prefabricated apartment blocks, housing the majority of Moscow's population, making it by far the city with the most high-rise buildings. Apartments were built and partly furnished in the factory before being raised and stacked into tall columns. The popular Soviet-era comic film "Irony of Fate" parodies this construction method. The city of Zelenograd was built in 1958 to the north-west of the city centre, along the Leningradskoye Shosse, and incorporated as one of Moscow's administrative okrugs. Moscow State University moved to its campus on Sparrow Hills in 1953. In 1959 Nikita Khrushchev launched his anti-religious campaign. By 1964 over 10 thousand churches out of 20 thousand had been shut down (mostly in rural areas) and many were demolished. Of 58 monasteries and convents operating in 1959, only sixteen remained by 1964; of Moscow's fifty churches operating in 1959, thirty were closed and six demolished. On May 8, 1965, on the 20th anniversary of the victory in World War II, Moscow was awarded the title of Hero City. The MKAD (ring road) was opened in 1961. It had four lanes running along the city borders. The MKAD marked the administrative boundaries of the city of Moscow until the 1980s, when outlying suburbs beyond the ring road began to be incorporated. In 1980, Moscow hosted the Summer Olympic Games, which were boycotted by the United States and several other Western countries due to the Soviet Union's involvement in Afghanistan in late 1979. In 1991 Moscow was the scene of a coup attempt by conservative communists opposed to the liberal reforms of Mikhail Gorbachev. When the USSR was dissolved in the same year, Moscow remained the capital of the Russian SFSR (on December 25, 1991, the Russian SFSR was renamed the Russian Federation). Since then, a market economy has emerged in Moscow, producing an explosion of Western-style retailing, services, architecture, and lifestyles. The city continued to grow through the 1990s and 2000s, its population rising from below nine million to above ten million.
Mason and Nigmatullina argue that Soviet-era urban-growth controls (before 1991) produced controlled and sustainable metropolitan development, typified by the greenbelt built in 1935. Since then, however, there has been a dramatic growth of low-density suburban sprawl, created by heavy demand for single-family dwellings as opposed to crowded apartments. In 1995–1997 the MKAD ring road was widened from the initial four lanes to ten. In December 2002 Bulvar Dmitriya Donskogo became the first Moscow Metro station to open beyond the limits of the MKAD. The Third Ring Road, intermediate between the early 19th-century Garden Ring and the Soviet-era outer ring road, was completed in 2004. The greenbelt is becoming more and more fragmented, and satellite cities are appearing at the fringe. Summer dachas are being converted into year-round residences, and with the proliferation of automobiles there is heavy traffic congestion. In the 2010s Moscow's administration launched several long-duration projects, such as the "Moja Ulitsa" (in English: "My Street") urban redevelopment program and the residency renovation program. Through its territorial expansion on July 1, 2012, southwest into Moscow Oblast, the area of the capital more than doubled, resulting in Moscow becoming the largest city on the European continent by area; it also gained an additional population of 233,000 people. Moscow is situated on the banks of the Moskva River, which flows for just over through the East European Plain in central Russia. 49 bridges span the river and its canals within the city's limits. The elevation of Moscow at the All-Russia Exhibition Center (VVC), where the leading Moscow weather station is situated, is . Teplostanskaya highland is the city's highest point at . The width of Moscow city (not counting the MKAD) from west to east is , and the length from north to south is . Moscow serves as the reference point for the time zone used in most of European Russia, Belarus and the Republic of Crimea. The areas operate in what is referred to in international standards as Moscow Standard Time (MSK, мск), which is 3 hours ahead of UTC, or UTC+3. Daylight saving time is no longer observed. Owing to the city's geographical longitude, the average solar noon in Moscow occurs at about 12:30; the arithmetic behind this is sketched below. Moscow has a humid continental climate (Köppen climate classification "Dfb") with long, cold (although average by Russian standards) winters usually lasting from mid-November to the end of March, and warm summers. More extreme continental climates at the same latitude, such as parts of Eastern Canada or Siberia, have much colder winters, suggesting that there is still significant moderation from the Atlantic Ocean. Weather can fluctuate widely, with temperatures ranging from in the city and in the suburbs to above in the winter, and from in the summer. Typical high temperatures in the warm months of June, July and August are around a comfortable , but during heat waves (which can occur between May and September), daytime high temperatures often exceed , sometimes for a week or two at a time. In the winter, average temperatures normally drop to approximately , though almost every winter there are periods of warmth with day temperatures rising above , and periods of cooling with night temperatures falling below . These periods usually last about a week or two.
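The 12:30 figure for mean solar noon follows from simple arithmetic: the mean sun crosses 15 degrees of longitude per hour, and Moscow's clocks run on a fixed UTC+3 offset. A minimal sketch of this calculation in Python, assuming a representative longitude of 37.62°E for the city (an illustrative value, not an official reference meridian):

# Sketch of the solar-noon arithmetic for Moscow; the longitude is an
# assumed, illustrative value for the city centre.

LONGITUDE_E = 37.62   # degrees east of Greenwich (assumption)
UTC_OFFSET_H = 3.0    # Moscow Standard Time is fixed at UTC+3, no DST

# The mean sun crosses 15 degrees of longitude per hour, so local mean
# solar noon at this longitude occurs this many hours after 00:00 UTC:
solar_noon_utc = 12.0 - LONGITUDE_E / 15.0   # about 9.49, i.e. 09:30 UTC

# Convert to local clock time by adding the fixed UTC+3 offset.
solar_noon_msk = solar_noon_utc + UTC_OFFSET_H

hours = int(solar_noon_msk)
minutes = round((solar_noon_msk - hours) * 60)
print(f"Mean solar noon in Moscow: {hours:02d}:{minutes:02d} MSK")
# Prints roughly 12:30, matching the figure quoted above; the true solar
# noon drifts around this mean over the year (the equation of time).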
The highest temperature ever recorded was at the VVC weather station and in the center of Moscow and Domodedovo airport on July 29, 2010, during the unusual 2010 Northern Hemisphere summer heat waves. Record high temperatures were recorded for January, March, April, May, July, August, November, and December in 2007–2014. The average July temperature from 1981 to 2010 is . The lowest temperature ever recorded was in January 1940. Snow, which is present for about five months a year, often begins to fall in mid-October, while snow cover lies in November and melts at the beginning of April. On average Moscow has 1,731 hours of sunshine per year, varying from a low of 8% of possible sunshine in December to 52% from May to August. This large annual variation is due to convective cloud formation. In the winter, moist air from the Atlantic condenses in the cold continental interior, resulting in very overcast conditions. However, this same continental influence results in considerably sunnier summers than in oceanic cities of similar latitude, such as Edinburgh. Between 2004 and 2010, the average was between 1,800 and 2,000 hours, with a tendency to more sunshine in the summer months, up to a record 411 hours in July 2014, 79% of possible sunshine. December 2017 was the darkest month in Moscow since records began, with only six minutes of sunlight. Temperatures in the centre of Moscow are often significantly higher than in the outskirts and nearby suburbs, especially in winter. For example, if the average February temperature in the north-east of Moscow is , in the suburbs it is about . The temperature difference between the centre of Moscow and nearby areas of Moscow Oblast can sometimes be more than on frosty winter nights. Below is the 1961–1990 normals table. The annual temperature rose from to in the new 1981–2010 normals. In 2019, the average annual temperature reached a record high. Recent changes in Moscow's regional climate, since it is in the mid-latitudes of the northern hemisphere, are often cited by climate scientists as evidence of global warming, though by definition, climate change is global, not regional. During the summer, extreme heat is often observed in the city (2001, 2002, 2003, 2010, 2011). Along with the southern part of Central Russia, after recent years of hot summer seasons, the climate of the city shows hot-summer classification trends. Winter has also become significantly milder: for example, the average January temperature in the early 1900s was , while now it is about . At the end of January and in February it is often colder, with frosts reaching a few nights per year (2006, 2010, 2011, 2012, and 2013). The last decade was the warmest in the history of meteorological observations of Moscow. Temperature changes in the city are depicted in the table below. According to the results of the 2010 Census, the population of Moscow was 11,503,501, up from 10,382,754 recorded in the 2002 Census. At the time of the official 2010 Census, the ethnic makeup was recorded for the 10,835,092 people whose ethnicity was known. The official population of Moscow is based on those holding "permanent residency". According to Russia's Federal Migration Service, Moscow holds 1.8 million official "guests" who have temporary residency on the basis of visas or other documentation, giving a legal population of 13.3 million. The number of illegal immigrants, the vast majority originating from Central Asia, is estimated to be an additional 1 million people, giving a total population of about 14.3 million.
Christianity is the predominant religion in the city, of which the Russian Orthodox Church is the most popular. Moscow is Russia's capital of Eastern Orthodox Christianity, which has been the country's traditional religion and was deemed a part of Russia's "historical heritage" in a law passed in 1997. Other religions practiced in Moscow include Armenian Apostolicism, Buddhism, Hinduism, Catholicism, Islam, Judaism, Yazidism, Old Belief, Protestantism, and Rodnovery. The Patriarch of Moscow serves as the head of the church and resides in the Danilov Monastery. Moscow was called the "city of 40 times 40 churches" ("город сорока сороков церквей") prior to 1917. In 1918 the Bolshevik government declared Russia a secular state, which in practice meant that religion was repressed and society was to become atheistic. During the 1920s and 1930s a great number of churches in Moscow were demolished, including the historical Chudov Monastery in the Kremlin, dating from the 14th century, Kazansky Cathedral on Red Square, the Cathedral of Christ the Saviour, constructed in the 19th century in memory of the victory over Napoleon's army in 1812, and many more. This continued even after the Second World War, in the 1940s–1970s, when persecution of religion in the Soviet Union became less severe. Most of the surviving churches and monasteries were closed and then used as clubs, offices, factories or even warehouses. Since the disintegration of the Soviet Union in 1991 many of the destroyed churches have been restored and traditional religions are once again gaining popularity. Among the churches reconstructed in the 1990s is an impressive new Cathedral of Christ the Saviour that has once more become a landmark. It was built on the site of the old demolished cathedral, where there had been a huge open swimming pool until 1994. The Moscow mufti council claimed that Muslims numbered around 1.5 million of the city's population of 10.5 million in 2010. There are four mosques in the city. The Moscow Cathedral Mosque has been built on the site of the former one; it was officially inaugurated on September 23, 2015. The new mosque has a capacity of ten thousand worshippers. President of Russia Vladimir Putin, President of Turkey Recep Tayyip Erdoğan, President of the State of Palestine Mahmoud Abbas and local Muslim leaders participated in the inauguration ceremony of the mosque. Moscow's architecture is world-renowned. Moscow is the site of Saint Basil's Cathedral, with its elegant onion domes, as well as the Cathedral of Christ the Saviour and the Seven Sisters. The first Kremlin was built in the middle of the 12th century. Medieval Moscow's design was of concentric walls and intersecting radial thoroughfares. This layout, as well as Moscow's rivers, helped shape Moscow's design in subsequent centuries. The Kremlin was rebuilt in the 15th century. Its towers and some of its churches were built by Italian architects, lending the city some of the aura of the Renaissance. From the end of the 15th century, the city was embellished by masonry structures such as monasteries, palaces, walls, towers, and churches. The city's appearance had not changed much by the 18th century. Houses were made of pine and spruce logs, with shingled roofs plastered with sod or covered by birch bark. The rebuilding of Moscow in the second half of the 18th century was necessitated not only by constant fires but also by the needs of the nobility.
Much of the wooden city was replaced by buildings in the classical style. For much of its architectural history, Moscow was dominated by Orthodox churches. However, the overall appearance of the city changed drastically during Soviet times, especially as a result of Joseph Stalin's large-scale effort to "modernize" Moscow. Stalin's plans for the city included a network of broad avenues and roadways, some of them over ten lanes wide, which, while greatly simplifying movement through the city, were constructed at the expense of a great number of historical buildings and districts. Among the many casualties of Stalin's demolitions were the Sukharev Tower, a longtime city landmark, as well as mansions and commercial buildings. The city's newfound status as the capital of a deeply secular nation made religiously significant buildings especially vulnerable to demolition. Many of the city's churches, which in most cases were some of Moscow's oldest and most prominent buildings, were destroyed; some notable examples include the Kazan Cathedral and the Cathedral of Christ the Saviour. During the 1990s, both were rebuilt. Many smaller churches, however, were lost. While the later Stalinist period was characterized by the curtailing of creativity and architectural innovation, the earlier post-revolutionary years saw a plethora of radical new buildings created in the city. Especially notable were the constructivist architects associated with VKHUTEMAS, responsible for such landmarks as Lenin's Mausoleum. Another prominent architect was Vladimir Shukhov, famous for the Shukhov Tower, just one of many hyperboloid towers designed by Shukhov. It was built between 1919 and 1922 as a transmission tower for a Russian broadcasting company. Shukhov also left a lasting legacy in the Constructivist architecture of early Soviet Russia. He designed spacious elongated shop galleries, most notably the GUM department store on Red Square, bridged with innovative metal-and-glass vaults. Perhaps the most recognizable contributions of the Stalinist period are the so-called Seven Sisters, seven massive skyscrapers scattered throughout the city at about an equal distance from the Kremlin. A defining feature of Moscow's skyline, their imposing form was allegedly inspired by the Manhattan Municipal Building in New York City, and their style—with intricate exteriors and a large central spire—has been described as Stalinist Gothic architecture. All seven towers can be seen from most high points in the city; they are among the tallest constructions in central Moscow apart from the Ostankino Tower, which, when it was completed in 1967, was the highest free-standing land structure in the world and today remains among the world's tallest, alongside structures such as the Burj Khalifa in Dubai, Taipei 101 in Taiwan and the CN Tower in Toronto. The Soviet goal of providing housing for every family, and the rapid growth of Moscow's population, led to the construction of large, monotonous housing blocks. Most of these date from the post-Stalin era and the styles are often named after the leader then in power (Brezhnev, Khrushchev, etc.). They are usually badly maintained. Although the city still has some five-story apartment buildings constructed before the mid-1960s, more recent apartment buildings are usually at least nine floors tall and have elevators. It is estimated that Moscow has over twice as many elevators as New York City and four times as many as Chicago.
Moslift, one of the city's major elevator operating companies, has about 1500 elevator mechanics on call to release residents trapped in elevators. Stalinist-era buildings, mostly found in the central part of the city, are massive and usually ornamented with Socialist realism motifs that imitate classical themes. However, small churches, almost always Eastern Orthodox, found across the city provide glimpses of its past. Old Arbat Street, a tourist street that was once the heart of a bohemian area, preserves most of its buildings from prior to the 20th century. Many buildings found off the main streets of the inner city (behind the Stalinist façades of Tverskaya Street, for example) are also examples of bourgeois architecture typical of Tsarist times. Ostankino Palace, Kuskovo, Uzkoye and other large estates just outside Moscow originally belonged to nobles from the Tsarist era, and some, both inside and outside the city, are open to Muscovites and tourists. Attempts are being made to restore many of the city's best-kept examples of pre-Soviet architecture. These restored structures are easily spotted by their bright new colors and spotless façades. There are a few examples of notable early Soviet avant-garde work too, such as the house of the architect Konstantin Melnikov in the Arbat area. Many of these restorations were criticized for alleged disrespect of historical authenticity. Facadism is also widely practiced. Later examples of interesting Soviet architecture are usually marked by their impressive size and the semi-Modernist styles employed, as with the Novy Arbat project, familiarly known as the "false teeth of Moscow" and notorious for the wide-scale disruption of a historic area in central Moscow involved in the project. Plaques on house exteriors inform passers-by that a well-known personality once lived there. Frequently, the plaques are dedicated to Soviet celebrities not well known outside Russia (or often, as with decorated generals and revolutionaries, no longer well known inside it either). There are also many "museum houses" of famous Russian writers, composers, and artists in the city. Moscow's skyline is quickly modernizing, with several new towers under construction. In recent years, the city administration has been widely criticized for heavy destruction that has affected many historical buildings. As much as a third of historic Moscow has been destroyed in the past few years to make space for luxury apartments and hotels. Other historical buildings, including such landmarks as the 1930 Moskva hotel and the 1913 department store Voyentorg, have been razed and reconstructed anew, with the inevitable loss of historical value. Critics blame the government for not enforcing conservation laws: in the last 12 years more than 50 buildings with monument status were torn down, several of those dating back to the 17th century. Some critics also wonder if the money used for the reconstruction of razed buildings could not be used for the renovation of decaying structures, which include many works by the architect Konstantin Melnikov and the Mayakovskaya metro station. Some organizations, such as the Moscow Architecture Preservation Society and Save Europe's Heritage, are trying to draw international public attention to these problems. There are 96 parks and 18 gardens in Moscow, including four botanical gardens. There are also green zones besides forests.
Moscow is a very green city compared to other cities of comparable size in Western Europe and North America; this is partly due to a history of having green "yards" with trees and grass between residential buildings. There are on average of parks per person in Moscow, compared with 6 for Paris, 7.5 in London and 8.6 in New York. Gorky Park (officially the Central Park of Culture and Rest named after Maxim Gorky) was founded in 1928. The main part, along the Moskva river, contains estrades, children's attractions (including the "Observation Wheel"), water ponds with boats and water bicycles, dancing, tennis courts and other sports facilities. It borders the Neskuchny Garden, the oldest park in Moscow and a former imperial residence, created as a result of the integration of three estates in the 18th century. The Garden features the Green Theater, one of the largest open amphitheaters in Europe, able to hold up to 15 thousand people. Several parks include a section known as a "Park of Culture and Rest", sometimes alongside a much wilder area (this includes parks such as Izmaylovsky, Fili and Sokolniki). Some parks are designated as Forest Parks (lesopark). Izmaylovsky Park, created in 1931, is one of the largest urban parks in the world, along with Richmond Park in London. Its area is six times greater than that of Central Park in New York. Sokolniki Park, named after the falcon hunting that occurred there in the past, is one of the oldest parks in Moscow and has an area of . A central circle with a large fountain is surrounded by birch, maple and elm tree alleys. A labyrinth composed of green paths lies beyond the park's ponds. Losiny Ostrov National Park ("Elk Island" National Park), with a total area of more than , borders Sokolniki Park and was Russia's first national park. It is quite wild, and is also known as the "city taiga" – elk can be seen there. The Tsytsin Main Botanical Garden of the Academy of Sciences, founded in 1945, is the largest in Europe. It covers territory bordering the All-Russia Exhibition Center and contains a live exhibition of more than 20 thousand species of plants from around the world, as well as a lab for scientific research. It contains a rosarium with 20 thousand rose bushes, a dendrarium, and an oak forest with the average age of trees exceeding 100 years. There is a greenhouse taking up more than of land. The All-Russian Exhibition Center (Всероссийский выставочный центр), formerly known as the All-Union Agricultural Exhibition (VSKhV) and later the Exhibition of Achievements of the National Economy (VDNKh), though officially named a "permanent trade show", is one of the most prominent examples of Stalinist-era monumental architecture. Among the large spans of recreational park areas are scores of elaborate pavilions, each representing either a branch of Soviet industry and science or a USSR republic. Even though during the 1990s it was, and in part still is, misused as a gigantic shopping center (most of the pavilions are rented out to small businesses), it still retains the bulk of its architectural landmarks, including two monumental fountains ("Stone Flower" and "Friendship of Nations") and a 360-degree panoramic cinema. In 2014 the park returned to the name Exhibition of Achievements of the National Economy, and huge renovation works were started the same year. "Lilac Park", founded in 1958, has a permanent sculpture display and a large rosarium. Moscow has always been a popular destination for tourists.
Some of the more famous attractions include the city's UNESCO World Heritage Site, the Moscow Kremlin and Red Square, which were built between the 14th and 17th centuries. The Church of the Ascension at Kolomenskoye, which dates from 1532, is also a UNESCO World Heritage Site and another popular attraction. Near the new Tretyakov Gallery there is a sculpture garden, Museon, often called "the graveyard of fallen monuments", which displays statues of the former Soviet Union that were removed from their places after its dissolution. Other attractions include the Moscow Zoo, a zoological garden in two sections (the valleys of two streams) linked by a bridge, with nearly a thousand species and more than 6,500 specimens. Each year, the zoo attracts more than 1.2 million visitors. Many of Moscow's parks and landscaped gardens are protected natural environments. Moscow's road system is centered roughly on the Kremlin at the heart of the city. From there, roads generally radiate outwards to intersect with a sequence of circular roads ("rings"). The first and innermost major ring, the Bulvarnoye Koltso (Boulevard Ring), was built at the former location of the 16th-century city wall around what used to be called Bely Gorod (White Town). The Bulvarnoye Koltso is technically not a ring; it does not form a complete circle, but instead a horseshoe-like arc that begins at the Cathedral of Christ the Saviour and ends at the Yauza River. The second primary ring, located outside the Boulevard Ring, is the Sadovoye Koltso (Garden Ring). Like the Boulevard Ring, the Garden Ring follows the path of a 16th-century wall that used to encompass part of Moscow. The Third Ring Road was completed in 2003 as a high-speed freeway. The Fourth Transport Ring, another freeway, was planned but cancelled in 2011; it is to be replaced by a system of chordal highways. Aside from the aforementioned hierarchy of roads, line 5 of the Moscow Metro is a circle-shaped looped subway line (hence the name "Koltsevaya Liniya", "ring line"), located between the Sadovoye Koltso and the Third Transport Ring. On September 10, 2016, the renovated "Moscow Central Circle" railroad (the former "Moskovskaya Okruzhnaya Zheleznaya Doroga") was introduced as the 14th line of the Moscow Metro. The railroad itself had been in use since 1907, but before the renovation it was a non-electrified railroad used only for the transit needs of fuel-powered locomotives. Another circular metro line, the "Big Circle Line" ("Bolshaya Koltsevaya Liniya"), is under construction and is expected to be finished around 2023. The outermost ring within Moscow is the Moscow Ring Road (often called "MKAD", an acronym of the Russian "Московская Кольцевая Автомобильная Дорога"), which forms the cultural boundary of the city and was established in the 1950s. Notably, the method of building the road (the use of a raised earthen embankment instead of concrete columns along the whole way) formed a wall-like barrier that obstructs the building of roads under the MKAD highway itself. Before the 2012 expansion of Moscow, the MKAD was considered an approximate border of the city. Outside Moscow, some of the roads encompassing the city continue to follow the circular pattern seen inside the city limits, notably the two "Betonka" roads, originally made of concrete pads. In order to reduce transit traffic on the MKAD, a new ring road (called "CKAD", for "Centralnaya Koltsevaya Avtomobilnaya Doroga", "Central Ring Road") is now under construction.
One of the most notable art museums in Moscow is the Tretyakov Gallery, which was founded by Pavel Tretyakov, a wealthy patron of the arts who donated a large private collection to the city. The Tretyakov Gallery is split into two buildings. The Old Tretyakov Gallery, the original gallery in the Tretyakovskaya area on the south bank of the Moskva River, houses works in the classic Russian tradition. The works of famous pre-Revolutionary painters, such as Ilya Repin, as well as the works of early Russian icon painters can be found here. Visitors can even see rare originals by the early 15th-century iconographer Andrei Rublev. The New Tretyakov Gallery, created in Soviet times, mainly contains the works of Soviet artists, as well as a few contemporary paintings, but there is some overlap with the Old Tretyakov Gallery for early 20th-century art. The new gallery includes a small reconstruction of Vladimir Tatlin's famous "Monument to the Third International" and a mixture of other avant-garde works by artists like Kazimir Malevich and Wassily Kandinsky. Socialist realism features can also be found within the halls of the New Tretyakov Gallery. Another art museum in the city of Moscow is the Pushkin Museum of Fine Arts, which was founded by, among others, the father of Marina Tsvetaeva. The Pushkin Museum is similar to the British Museum in London in that its halls are a cross-section of exhibits on world civilisations, with many copies of ancient sculptures. However, it also hosts paintings from every major Western era; works by Claude Monet, Paul Cézanne, and Pablo Picasso are present in the museum's collection. The State Historical Museum of Russia (Государственный Исторический музей) is a museum of Russian history located between Red Square and Manege Square in Moscow. Its exhibitions range from relics of the prehistoric tribes inhabiting present-day Russia to priceless artworks acquired by members of the Romanov dynasty. The museum's collection numbers several million objects. The Polytechnical Museum, founded in 1872, is the largest technical museum in Russia, offering a wide array of historical inventions and technological achievements, including humanoid automata from the 18th century and the first Soviet computers. Its collection contains more than 160,000 items. The Borodino Panorama museum, located on Kutuzov Avenue, provides an opportunity for visitors to experience being on a battlefield with a 360° diorama. It is a part of the large historical memorial commemorating the victory in the Patriotic War of 1812 over Napoleon's army, which also includes the triumphal arch erected in 1827. There is also a military history museum that includes statues and military hardware. Moscow is the heart of the Russian performing arts, including ballet and film, with 68 museums, 103 theaters, 132 cinemas and 24 concert halls. Among Moscow's theaters and ballet studios are the Bolshoi Theatre and the Malyi Theatre, as well as the Vakhtangov Theatre and the Moscow Art Theatre. The Moscow International Performance Arts Center, opened in 2003, also known as the Moscow International House of Music, is known for its performances in classical music. It has the largest organ in Russia, installed in Svetlanov Hall. There are also two large circuses in Moscow: the Moscow State Circus and the Moscow Circus on Tsvetnoy Boulevard, named after Yuri Nikulin.
The Memorial Museum of Astronautics, under the Monument to the Conquerors of Space at the end of Cosmonauts Alley, is the central memorial place for Russian space exploration. The Mosfilm studio was at the heart of many classic films, as it is responsible for both artistic and mainstream productions. However, despite the continued presence and reputation of internationally renowned Russian filmmakers, the once prolific native studios are much quieter. Rare and historical films may be seen in the Salut cinema, where films from the Museum of Cinema collection are shown regularly. The Shchusev State Museum of Architecture, named after the architect Alexey Shchusev, is the national museum of Russian architecture, located near the Kremlin. By 2005, over 500 Olympic champions lived in the city. Moscow is home to 63 stadiums (in addition to eight football and eleven light-athletics indoor arenas), of which Luzhniki Stadium is the largest and the fourth-biggest in Europe (it hosted the 1998–99 UEFA Cup final, the 2007–08 UEFA Champions League final, the 1980 Summer Olympics, and seven games of the 2018 FIFA World Cup, including the final). Forty other sports complexes are located within the city, including 24 with artificial ice. The Olympic Stadium was the world's first indoor arena for bandy and hosted the Bandy World Championship twice. Moscow was again the host of the competition in 2010, this time in Krylatskoye. That arena has also hosted the World Speed Skating Championships. There are also seven horse racing tracks in Moscow, of which the Central Moscow Hippodrome, founded in 1834, is the largest. Moscow was the host city of the 1980 Summer Olympics, with the yachting events held at Tallinn, in present-day Estonia. Large sports facilities and the main international airport, Sheremetyevo Terminal 2, were built in preparation for the 1980 Summer Olympics. Moscow made a bid for the 2012 Summer Olympics, but when final voting commenced on July 6, 2005, Moscow was the first city to be eliminated from further rounds; the Games were awarded to London. The most decorated ice hockey team in the Soviet Union and in the world, HC CSKA Moscow, comes from Moscow. Other big ice hockey clubs from Moscow are HC Dynamo Moscow, which was the second most decorated team in the Soviet Union, and HC Spartak Moscow. The most decorated Soviet and Russian club, and one of the most decorated EuroLeague clubs, is the Moscow basketball club PBC CSKA Moscow. Moscow hosted the EuroBasket in 1953 and 1965. Moscow has had more winners of the USSR and Russian chess championships than any other city. The most decorated volleyball team in the Soviet Union and in Europe (CEV Champions League) is VC CSKA Moscow. In football, FC Spartak Moscow has won more championship titles in the Russian Premier League than any other team; in Soviet times they were second only to FC Dynamo Kyiv. PFC CSKA Moscow became the first Russian football team to win a UEFA title, the UEFA Cup (present-day UEFA Europa League). FC Lokomotiv Moscow, FC Dynamo Moscow, and FC Torpedo Moscow are other professional football teams based in Moscow. Moscow houses other prominent football, ice hockey, and basketball teams. Because sports organisations in the Soviet Union were highly centralized, two of the best Union-level teams represented defence and law-enforcement agencies: the Armed Forces (CSKA) and the Ministry of Internal Affairs (Dinamo). There were army and police teams in most major cities, and as a result Spartak, CSKA, and Dinamo were among the best-funded teams in the USSR.
The Irina Viner-Usmanova Rhythmic Gymnastics Palace is located in the Luzhniki Olympic Complex. Construction began in 2017, and the opening ceremony took place on June 18, 2019. The investor behind the palace is the billionaire Alisher Usmanov, husband of the former gymnast and gymnastics coach Irina Viner-Usmanova. The building's total floor area is 23,500 m² and includes three fitness rooms, locker rooms, rooms for referees and coaches, saunas, a canteen and a cafeteria, two ball halls, a medical center, a press hall, and a hotel for athletes. Because of Moscow's cold climate, winter sports have a large following. Many of Moscow's large parks offer marked trails for skiing and frozen ponds for skating. Moscow hosts the annual Kremlin Cup, a popular tennis tournament on both the WTA and ATP tours. It is one of the ten Tier-I events on the women's tour, and a host of Russian players feature every year. SC Olimpiyskiy hosted the Eurovision Song Contest 2009, the first and so far the only Eurovision Song Contest arranged in Russia. Slava Moscow is a professional rugby club competing in the national Professional Rugby League. Former rugby league heavyweights RC Lokomotiv have entered the same league. The Luzhniki Stadium also hosted the 2013 Rugby World Cup Sevens. In bandy, one of the most successful clubs in the world is the 20-time Russian League champion Dynamo Moscow, which has also won the World Cup three times and the European Cup six times. MFK Dinamo Moskva is one of the major futsal clubs in Europe, having won the Futsal Champions League title once. When Russia was selected to host the 2018 FIFA World Cup, the capacity of the Luzhniki Stadium was increased by almost 10,000 seats, and two further stadiums were built: the Dynamo Stadium and the Spartak Stadium, although the former was later dropped as a World Cup venue. The city is full of clubs, restaurants, and bars. Tverskaya Street is also one of the busiest shopping streets in Moscow. The adjoining Tretyakovsky Proyezd, also south of Tverskaya Street, in Kitai-gorod, is host to upmarket boutique stores such as Bulgari, Tiffany & Co., Armani, Prada, and Bentley. Nightlife in Moscow has moved on since Soviet times, and today the city has many of the world's largest nightclubs. Clubs, bars, creative spaces, and restaurants turned into dancefloors are flooding Moscow's streets with new openings every year. The hottest area is around the old chocolate factory, where bars, nightclubs, galleries, cafés, and restaurants are located. Dream Island is an amusement park in Moscow that opened on 29 February 2020. It is the largest indoor theme park in Europe, covering 300,000 square meters. During the construction of the park, some 150 acres of natural land on the peninsula, home to unique and rare animals, birds, and plants, were destroyed. Its appearance is in the style of a fairytale castle, similar to Disneyland. The park has 29 unique attractions with many rides, as well as pedestrian malls with fountains and cycle paths. The complex includes a landscaped park along with a concert hall, a cinema, a hotel, a children's sailing school, restaurants, and shops. According to the Constitution of the Russian Federation, Moscow is an independent federal subject of the Russian Federation, a so-called city of federal importance. The Mayor of Moscow is the leading official in the executive, leading the Government of Moscow, which is the highest organ of executive power.
The Moscow City Duma is the city's duma (city council or local parliament), and local laws must be approved by it. It includes 45 members, who are elected for a five-year term on a single-mandate-constituency basis. From 2006 to 2012, direct elections of the mayor were not held due to changes in the Charter of the city of Moscow; instead, the mayor was appointed by presidential decree. The first direct elections since the 2003 vote were due to be held when the incumbent mayor's term expired in 2015; however, following his voluntary resignation, they took place in September 2013. Local administration is carried out through eleven prefectures, which unite the districts of Moscow into administrative okrugs on a territorial basis, and 125 regional administrations. According to the law "On the organization of local self-government in the city of Moscow", since the beginning of 2003 the executive bodies of local self-government are the municipalities, and the representative bodies are the municipal assemblies, whose members are elected in accordance with the Charter of each intracity municipality. As the capital designated by the Constitution of the Russian Federation, Moscow hosts the country's federal legislative, executive, and judicial authorities, with the exception of the Constitutional Court of the Russian Federation, which has been located in Saint Petersburg since 2008. The supreme executive authority, the Government of the Russian Federation, is located in the House of the Government of the Russian Federation on Krasnopresnenskaya Embankment in the center of Moscow. The State Duma sits on Okhotny Ryad Street, and the Federation Council is located in a building on Bolshaya Dmitrovka Street. The Supreme Court of the Russian Federation and the Supreme Court of Arbitration of the Russian Federation are also located in Moscow. In addition, the Moscow Kremlin is the official residence of the President of the Russian Federation; the president's working residence in the Kremlin is in the Senate Palace. In the ranking of the safest cities compiled by The Economist, Moscow occupies the 37th position with a score of 68.5 points. The general level of crime is quite low. More than 170,000 surveillance cameras in Moscow are connected to the facial recognition system. After the authorities judged a two-month experiment with automatic real-time recognition of faces, gender, and age to be successful, the system was deployed across the whole city. The video surveillance network unites entrance cameras (covering 95% of residential apartment buildings in the capital) with cameras on the grounds and in the buildings of schools and kindergartens, at MCC stations, stadiums, public transport stops and bus stations, in parks, and in underground passages. The emergency numbers are the same as in all other regions of Russia: 112 is the single emergency number, 101 the Fire Service and Ministry of Emergency Situations, 102 the police, 103 the ambulance service, and 104 the gas emergency service. Moscow's EMS is the second most efficient among the world's megacities, as reported by PwC during the presentation of the international study "Analysis of EMS Efficiency in Megacities of the World". Waste management has long been a sore point in the Russian capital, as in the country in general.
The most advanced methods of separate waste collection and recycling were mostly unknown to the citizens of Moscow: garbage was thrown into a single dumpster, and its contents were later transported outside the capital and deposited in huge waste dumps. Moscow could not handle its own garbage. Every day, 9,500 tons of municipal waste were transported from the capital to nearby landfills that had long since outlived their capacity. Since 2013, 24 of the Moscow region's 39 landfills have closed. Moscow Oblast residents have had run-ins with police while protesting the construction of a garbage dump near Moscow; demonstrators have blocked roads and staged rallies in several towns outside Moscow to protest overfilled garbage collection sites. Residents complained of toxic fumes and said the polluted air was harming their children. A waste management reform was introduced on January 1, 2020. According to the new provisions of the Moscow Duma, there must be two containers in each courtyard: a blue one for plastic, paper, glass, and metal, and a gray one for wet waste. The contents of the containers must be processed in special recycling centers. Authorities hope this will lead to around 50% of the capital's waste being recycled. The entire city of Moscow is headed by one mayor (Sergey Sobyanin). The city of Moscow is divided into twelve administrative okrugs and 123 districts. The Russian capital's town planning began to take shape as early as the 12th century, when the city was founded. The central part of Moscow grew by consolidating with suburbs in line with medieval principles of urban development, under which strong fortress walls would gradually spread along the circular streets of adjacent new settlements. The first circular defence walls set the trajectory of Moscow's rings, laying the groundwork for the future planning of the Russian capital. The following fortifications served as the city's circular defense boundaries at some point in history: the Kremlin walls, Zemlyanoy Gorod (Earthwork Town), the Kamer-Kollezhsky Rampart, the Garden Ring, and the small railway ring. The Moscow Ring Road (MKAD) has been Moscow's boundary since 1960. Also in the form of a circle are the main Moscow subway loop, the Ring Line, and the Third Ring Road, completed in 2003. Hence, the characteristic radial-circle planning continues to define Moscow's further development. However, contemporary Moscow has also engulfed a number of territories outside the MKAD, such as Solntsevo, Butovo, and the town of Zelenograd. A part of Moscow Oblast's territory was merged into Moscow on July 1, 2012; as a result, Moscow is no longer fully surrounded by Moscow Oblast and now also has a border with Kaluga Oblast. In all, Moscow gained about 1,480 square kilometers of territory and 230,000 inhabitants. Moscow's Mayor Sergey Sobyanin lauded the expansion, saying it would help Moscow and the neighboring region, a "mega-city" of twenty million people, develop "harmonically". All administrative okrugs and districts have their own coats of arms and flags, as well as individual heads of the area. In addition to the districts, there are Territorial Units with Special Status. These usually include areas with small or no permanent populations, such as the All-Russia Exhibition Centre, the Botanical Garden, large parks, and industrial zones. In recent years, some territories have been merged with different districts.
There are no ethnic-specific regions in Moscow like the Chinatowns that exist in some North American and East Asian cities. And although districts are not designated by income, as in most cities, areas that are closer to the city center, metro stations, or green zones are considered more prestigious. Moscow also hosts some of the government bodies of Moscow Oblast, although the city itself is not a part of the oblast. Moscow has one of the largest municipal economies in Europe, accounting for more than one-fifth of Russia's gross domestic product (GDP). The nominal GRP of Moscow reached ₽15.7 trillion, or about US$270 billion (roughly US$0.7 trillion at purchasing power parity), equivalent to about US$22,000 per capita (roughly US$60,000 at purchasing power parity). Moscow has the lowest unemployment rate of all federal subjects of Russia, standing at just 1% in 2010, compared to the national average of 7%. The average gross monthly wage in the city is ₽60,000 (US$2,500 at purchasing power parity), almost twice the national average of ₽34,000 (US$1,400 at purchasing power parity) and the highest among the federal subjects of Russia. Moscow is the financial center of Russia and home to the country's largest banks and many of its largest companies, such as the oil giant Rosneft. Moscow accounts for 17% of retail sales in Russia and for 13% of all construction activity in the country. Since the 1998 Russian financial crisis, business sectors in Moscow have shown exponential rates of growth. Many new business centers and office buildings have been built in recent years, but Moscow still experiences shortages in office space. As a result, many former industrial and research facilities are being reconstructed to become suitable for office use. Overall, economic stability has improved in recent years; nonetheless, crime and corruption still hinder business development. The Cherkizovskiy marketplace was the largest marketplace in Europe, with a daily turnover of about thirty million dollars and about ten thousand vendors from different countries (including China, Turkey, Azerbaijan, and India). It was administratively divided into twelve parts and covered a wide sector of the city; it has been closed since July 2009. In 2008, Moscow had 74 billionaires with an average wealth of $5.9 billion, which placed it above New York's 71 billionaires. However, by 2009 there were 27 billionaires in Moscow compared with New York's 55; overall, Russia lost 52 billionaires during the recession. Topping the list of Russia's billionaires in 2009 was Mikhail Prokhorov with $9.5 billion, ahead of the more famous Roman Abramovich with $8.5 billion. Prokhorov's holding company, the "ОНЭКСИМ" ("ONEKSIM") group, owns huge assets in hydrogen energy, nanotechnology, traditional energy, and the precious metals sector, while Abramovich, since selling his oil company Sibneft to the Russian state-controlled gas giant Gazprom in 2005, has bought up steel and mining assets; he also owns Chelsea F.C. Russia's richest woman remains Yelena Baturina, the 50-year-old second wife of Moscow Mayor Yuri Luzhkov. Oleg Deripaska, first on this list in 2008 with $28 billion, was only 10th in 2009. Based on Forbes' 2011 list of the world's billionaires, Moscow is the city with the most billionaires in the world, with 79 of Russia's 115. In 2018, Moscow hosted 12 games of the FIFA World Cup. The tournament served as an additional driver for the city's economy, its sports and tourist infrastructure, and land improvement in the city.
Primary industries in Moscow include the chemical, metallurgy, food, textile, furniture, energy production, software development, and machinery industries. The Mil Moscow Helicopter Plant is one of the leading producers of military and civil helicopters in the world. The Khrunichev State Research and Production Space Center produces various space equipment, including modules for the Mir and Salyut space stations and the ISS, as well as Proton launch vehicles and military ICBMs. The Sukhoi, Ilyushin, Mikoyan, Tupolev, and Yakovlev aircraft design bureaus are also situated in Moscow. NPO Energomash, which produces rocket engines for the Russian and American space programs, and the Lavochkin design bureau, which built fighter planes during World War II but has switched to space probes since the Space Race, are in nearby Khimki, an independent city in Moscow Oblast that has largely been enclosed by Moscow. The automobile plants ZiL and AZLK, as well as the Voitovich Rail Vehicle plant, are situated in Moscow, and the Metrovagonmash metro wagon plant is located just outside the city limits. The Poljot Moscow watch factory produces military, professional, and sport watches well known in Russia and abroad; Yuri Gagarin wore a "Shturmanskie" watch produced by this factory on his flight into space. The Electrozavod factory was the first transformer factory in Russia. The Kristall distillery is the oldest distillery in Russia, producing vodka brands including "Stolichnaya", while wines are produced at Moscow wine plants, including the Moscow Interrepublican Vinery. The Moscow Jewelry Factory and Jewellerprom are producers of jewellery in Russia; Jewellerprom used to produce the exclusive Order of Victory, awarded to those aiding the Soviet Union's Red Army during World War II. There are other industries located just outside the city of Moscow, as well as microelectronics industries in Zelenograd, including Ruselectronics companies. Gazprom, the largest extractor of natural gas in the world and the largest Russian company, also has head offices in Moscow, as do other oil, gas, and electricity companies. Moscow hosts the headquarters of many telecommunications and technology companies, including 1C, ABBYY, Beeline, Kaspersky Lab, Mail.Ru Group, MegaFon, MTS, Rambler&Co, Rostelecom, Yandex, and Yota. Some industry is being transferred out of the city to improve its ecological state. During Soviet times, apartments were lent to people by the government according to a square-meters-per-person norm (some groups, including people's artists, heroes, and prominent scientists, had bonuses according to their honors). Private ownership of apartments was limited until the 1990s, when people were permitted to secure property rights to the places they inhabited. Since the Soviet era, estate owners have had to pay a service charge for their residences, a fixed amount based on persons per living area. The price of real estate in Moscow continues to rise. Today, one could expect to pay $4,000 on average per square meter (about 11 sq ft) on the outskirts of the city, or US$6,500–$8,000 per square meter in a prestigious district, and the price may sometimes exceed US$40,000 per square meter for a flat. It costs about US$1,200 per month to rent a one-bedroom apartment and about US$1,000 per month for a studio in the center of Moscow.
Many cannot move out of their apartments, especially if a family lives in a two-room apartment originally granted by the state during the Soviet era. Some city residents have attempted to cope with the cost of living by renting out their apartments while staying in dachas (country houses) outside the city. In 2006, Mercer Human Resources Consulting named Moscow the world's most expensive city for expatriate employees, ahead of perennial winner Tokyo, due to the stable Russian ruble as well as increasing housing prices within the city. Moscow also ranked first in the 2007 and 2008 editions of the survey, topping the list for the third year in a row in 2008. Tokyo has since overtaken Moscow as the most expensive city in the world, placing Moscow third behind Osaka in second place. In 2014, according to "Forbes", Moscow was ranked the 9th most expensive city in the world; "Forbes" had ranked Moscow the 2nd most expensive city the year prior. In 2019, the Economist Intelligence Unit's Worldwide Cost of Living survey put Moscow in 102nd place in its biannual ranking of the 133 most expensive cities. ECA International's Cost of Living 2019 survey ranked Moscow 120th among 482 locations worldwide. There are 1,696 high schools in Moscow, as well as 91 colleges. Besides these, there are 222 institutions of higher education, including 60 state universities and the Lomonosov Moscow State University, which was founded in 1755. The main university building, located in Vorobyovy Gory (Sparrow Hills), is 240 meters tall and, when completed, was the tallest building on the continent. The university has over 30,000 undergraduate and 7,000 postgraduate students, who have a choice of twenty-nine faculties and 450 departments for study. Additionally, approximately 10,000 high school students take courses at the university, while over two thousand researchers work there. The Moscow State University library contains over nine million books, making it one of the largest libraries in all of Russia. Its acclaim throughout the international academic community has meant that over 11,000 international students have graduated from the university, with many coming to Moscow to become fluent in the Russian language. The I.M. Sechenov First Moscow State Medical University, named after Ivan Sechenov and formerly known as the Moscow Medical Academy (1st MSMU), is a medical university situated in Moscow. It was founded in 1758 as the medical faculty of Moscow State University and operates under the Russian Federal Agency for Health and Social Development. It is one of the largest medical universities in Russia and Europe: more than 9,200 students are enrolled in 115 academic departments, and it also offers courses for postgraduate studies. Moscow is one of the financial centers of the Russian Federation and CIS countries and is known for its business schools. Among them are the Financial University under the Government of the Russian Federation; the Plekhanov Russian University of Economics; the State University of Management; and the National Research University Higher School of Economics. They offer undergraduate degrees in management, finance, accounting, marketing, real estate, and economic theory, as well as master's programs and MBAs. Most of them have branches in other regions of Russia and countries around the world.
Bauman Moscow State Technical University, founded in 1830, is located in the center of Moscow and provides 18,000 undergraduate and 1,000 postgraduate students with an education in science and engineering, offering technical degrees. Since it opened enrollment to students from outside Russia in 1991, Bauman Moscow State Technical University has increased its number of international students to two hundred. The Moscow Conservatory, founded in 1866, is a prominent music school in Russia whose graduates include Sergey Rachmaninoff, Alexander Scriabin, Aram Khachaturian, Mstislav Rostropovich, and Alfred Schnittke. The Gerasimov All-Russian State Institute of Cinematography, abbreviated as VGIK, is the world's oldest educational institution in cinematography, founded by Vladimir Gardin in 1919. Sergei Eisenstein, Vsevolod Pudovkin, and Aleksey Batalov were among its most distinguished professors, while Mikhail Vartanov, Sergei Parajanov, Andrei Tarkovsky, Nikita Mikhalkov, Eldar Ryazanov, Alexander Sokurov, Yuriy Norshteyn, Aleksandr Petrov, Vasily Shukshin, and Konrad Wolf are among its graduates. The Moscow State Institute of International Relations, founded in 1944, remains Russia's best-known school of international relations and diplomacy, with six schools focused on international relations. Approximately 4,500 students make up the university's student body, and over 700,000 Russian and foreign-language books, of which 20,000 are considered rare, can be found in its library. Other institutions are the Moscow Institute of Physics and Technology, also known as Phystech; the Fyodorov Eye Microsurgery Complex, founded in 1988 by the Russian eye surgeon Svyatoslav Fyodorov; the Moscow Aviation Institute; the Moscow Motorway Institute (State Technical University); and the Moscow Engineering Physics Institute. The Moscow Institute of Physics and Technology has taught numerous Nobel Prize winners, including Pyotr Kapitsa, Nikolay Semyonov, Lev Landau, and Alexander Prokhorov, while the Moscow Engineering Physics Institute is known for its research in nuclear physics. The highest Russian military school is the Combined Arms Academy of the Armed Forces of the Russian Federation. Although Moscow has a number of famous Soviet-era higher educational institutions, most of which are oriented towards engineering or the fundamental sciences, in recent years Moscow has seen growth in the number of commercial and private institutions offering classes in business and management. Many state institutions have expanded their educational scope and introduced new courses or departments. Institutions in Moscow, as in the rest of post-Soviet Russia, have begun to offer new international certificates and postgraduate degrees, including the Master of Business Administration. Student exchange programs with various countries, especially in Europe, have also become widespread in Moscow's universities, while schools within the Russian capital also offer seminars, lectures, and courses for corporate employees and businessmen. Moscow is one of the largest science centers in Russia: the headquarters of the Russian Academy of Sciences are located here, as are numerous research and applied-science institutions.
The Kurchatov Institute, Russia's leading research and development institution in the field of nuclear energy, where the first nuclear reactor in Europe was built, as well as the Landau Institute for Theoretical Physics, the Institute for Theoretical and Experimental Physics, the Kapitza Institute for Physical Problems, and the Steklov Institute of Mathematics, are all situated in Moscow. There are 452 libraries in the city, including 168 for children. The Russian State Library, founded in 1862, is the national library of Russia. The library is home to over 275 kilometers of shelves and 42 million items, including over 17 million books and serial volumes, 13 million journals, 350,000 music scores and sound records, and 150,000 maps, making it the largest library in Russia and one of the largest in the world. Items in 247 languages account for 29% of the collection. The State Public Historical Library, founded in 1863, is the largest library specialising in Russian history. Its collection contains four million items in 112 languages (including 47 languages of the former USSR), mostly on Russian and world history, heraldry, numismatics, and the history of science. In regard to primary and secondary education, in 2011 Clifford J. Levy of "The New York Times" wrote, "Moscow has some strong public schools, but the system as a whole is dispiriting, in part because it is being corroded by the corruption that is a post-Soviet scourge. Parents often pay bribes to get their children admitted to better public schools. There are additional payoffs for good grades." The Moscow Metro system is famous for its art, murals, mosaics, and ornate chandeliers. It started operation in 1935 and immediately became the centrepiece of the transportation system. More than that, it was a Stalinist device to awe and reward the populace and give them an appreciation of Soviet realist art. It became the prototype for future Soviet large-scale technologies. Lazar Kaganovich was in charge; he designed the subway so that citizens would absorb the values and ethos of Stalinist civilisation as they rode. The artwork of the 13 original stations became nationally and internationally famous. For example, the Sverdlov Square subway station featured porcelain bas-reliefs depicting the daily life of the Soviet peoples, and the bas-reliefs at the Dynamo Stadium sports complex glorified sports and the physical prowess of the powerful new "Homo Sovieticus" (Soviet man). The metro was touted as the symbol of the new social order, a sort of Communist cathedral of engineering modernity. Soviet workers did the labour and the artwork, but the main engineering designs, routes, and construction plans were handled by specialists recruited from the London Underground. The Britons called for tunnelling instead of the "cut-and-cover" technique and for the use of escalators instead of lifts, and they designed the routes and the rolling stock. The paranoia of Stalin and the NKVD was evident when the secret police arrested numerous British engineers for espionage, that is, for gaining an in-depth knowledge of the city's physical layout. Engineers for the Metropolitan Vickers Electrical Company were given a show trial and deported in 1933, ending the role of British business in the USSR. Today, the Moscow Metro comprises twelve lines, mostly underground, with a total of 203 stations. The Metro is one of the deepest subway systems in the world; for instance, the Park Pobedy station, completed in 2003, lies 84 meters underground and has the longest escalators in Europe.
The Moscow Metro is one of the world's busiest metro systems, serving about ten million passengers daily (roughly 300 million people every month). Facing serious transportation problems, Moscow has plans for expanding its Metro. In 2016, the authorities launched a new circle metro railway that contributed to solving transportation issues, namely the daily congestion on the Koltsevaya Line. The Moscow Metro also operates a short monorail line, which connects Timiryazevskaya metro station and Ulitsa Sergeya Eisensteina, passing close to VVTs. The line opened in 2004. No additional fare is needed (the first metro-to-monorail transfer within 90 minutes is free of charge). As Metro stations outside the city center are far apart in comparison to other cities, a bus network radiates from each station to the surrounding residential zones. Moscow has a bus terminal for long-range and intercity passenger buses (the Central Bus Terminal) with a daily turnover of about 25,000 passengers, serving about 40% of long-range bus routes in Moscow. Every major street in the city is served by at least one bus route. Many of these routes are doubled by a trolleybus route and have trolley wires over them. With a total single-wire line length of almost 600 kilometers, 8 depots, 104 routes, and 1,740 vehicles, the Moscow trolleybus system was the largest in the world. However, the municipal authority, headed by Sergey Sobyanin, began dismantling the trolleybus system in 2014, amid accusations of corruption, under a plan to replace trolleybuses with electric buses. Yet not a single trolleybus route has actually been replaced by electric buses, while many former trolleybus routes have been replaced by diesel buses. By 2018, the Moscow trolleybus system had only 4 depots and dozens of kilometers of unused wires. Almost all trolleybus wires inside the Garden Ring (Sadovoye Koltso) were cut in 2016–2017 during the reconstruction of central streets ("Moya Ulitsa"). Opened on November 15, 1933, it is the world's 6th-oldest operating trolleybus system. In 2018, the vehicle companies Kamaz and GAZ won the Mosgortrans tender to deliver 200 electric buses and 62 ultra-fast charging stations to the city transport system. The manufacturers will be responsible for the quality and reliable operation of the buses and charging stations for the next 15 years. The city will procure only electric buses as of 2021, gradually replacing the diesel bus fleet. Moscow was expected to become the leader among European cities in the share of electric and gas-fueled public transport by 2019. On 26 November 2018, the mayor of Moscow, Sergey Sobyanin, took part in the ceremony to open the cable car above the Moskva River. The cable car connects the Luzhniki sports complex with Sparrow Hills and Kosygin Street. The journey from the well-known viewpoint on Vorobyovy Gory to Luzhniki Stadium lasts five minutes, instead of the 20 minutes one would have to spend on the same journey by car. The cable car works every day from 11 a.m. till 11 p.m. It is 720 meters long and was built to transport 1,600 passengers per hour in all weather conditions. There are 35 closed capsules, designed by Porsche Design Studio, to transport passengers. The booths are equipped with media screens, LED lights, and hooks for bikes, skis, and snowboards. Passengers can also use audio guides in English, German, Chinese, and Russian. Moscow has an extensive tram system, which first opened in 1899; its newest line was built in 1984.
Daily tram usage by Muscovites is low, accounting for approximately 5% of trips, because many vital connections in the network have been withdrawn. Trams still remain important in some districts as feeders to Metro stations, and they also provide important cross links between metro lines, for example between the Universitet station of the Sokolnicheskaya Line (#1, red line) and the Profsoyuznaya station of the Kaluzhsko-Rizhskaya Line (#6, orange line), or between Voykovskaya and Strogino. There are three tram networks in the city. In addition, tram advocates have suggested that the newer rapid transit services (the metro line to Moscow City, the Butovo light metro, and the Monorail) would be more effective as at-grade tram lines, and that the problems with trams are due only to poor management and operation, not the technical properties of trams. New tram models have been developed for the Moscow network despite the lack of expansion. Commercial taxi services and route taxis are in widespread use. In the mid-2010s, service platforms such as Yandex.Taxi, Uber, and Gett displaced many private drivers and small service providers, and by 2015 they were servicing more than 50% of all taxi orders in Moscow. Several train stations serve the city. Moscow's nine rail terminals (or "vokzals") are the Belorussky, Kazansky, Kiyevsky, Kursky, Leningradsky, Paveletsky, Rizhsky, Savyolovsky, and Yaroslavsky terminals. Located close to the city center, on or near metro Ring Line 5, each connects to a metro line running to the center of town. Each station handles trains from different parts of Europe and Asia. There are many smaller railway stations in Moscow. As train tickets are cheap, they are the preferred mode of travel for Russians, especially when departing for Saint Petersburg, Russia's second-largest city. Moscow is the western terminus of the Trans-Siberian Railway, which traverses nearly 9,300 kilometers of Russian territory to Vladivostok on the Pacific coast. Suburbs and satellite cities are connected by a commuter elektrichka (electric rail) network; elektrichkas depart from each of these terminals to large railway stations nearby. During the 2010s, the Little Ring of the Moscow Railway was converted for frequent passenger service and fully integrated with the Moscow Metro; passenger service started on September 10, 2016. A connecting railway line on the north side of the town links the Belorussky terminal with other railway lines and is used by some suburban trains. The Greater Ring of the Moscow Railway forms a ring around the main part of Moscow. The Moscow Central Circle is an urban orbital railway line that encircles historical Moscow. It was built alongside the Little Ring of the Moscow Railway, incorporating some of its tracks. The MCC opened for passenger use on September 10, 2016. The line is operated by MKZD, a company owned by the Moscow Government, through the Moscow Metro, with the federally owned Russian Railways selected as the operating subcontractor. The track infrastructure and most platforms are owned by Russian Railways, while most station buildings are owned by MKZD. The Moscow Central Diameters are an urban and suburban rail transit system built on the framework of the existing Moscow Railway. The system works in the city of Moscow and in Moscow Oblast. The railway service was created to improve passenger transfers inside the capital and to make commuting easier; it started operating on November 21, 2019.
The inauguration of the first two diameters, D1 (Odintsovo–Lobnya) and D2 (Nakhabino–Podolsk), introduced a new payment system. There are over 2.6 million cars in the city daily. Recent years have seen growth in the number of cars, which has caused traffic jams and a lack of parking space to become major problems. The Moscow Ring Road (MKAD), along with the Third Transport Ring and the cancelled Fourth Transport Ring, is one of only three freeways that run within Moscow city limits. There are several other roadway systems that form concentric circles around the city. There are five primary commercial airports serving Moscow: Sheremetyevo, Domodedovo, Vnukovo, Zhukovsky, and Ostafyevo. Sheremetyevo International Airport is the most globally connected, handling 60% of all international flights; it is also home to all SkyTeam members and the main hub for Aeroflot (itself a member of SkyTeam). Domodedovo International Airport is the leading airport in Russia in terms of passenger throughput and the primary gateway to long-haul domestic and CIS destinations, and its international traffic rivals Sheremetyevo's; most Star Alliance members use Domodedovo as their international hub. Vnukovo International Airport handles flights of Turkish Airlines, Lufthansa, Wizz Air, and others. Ostafyevo International Airport caters primarily to business aviation. Moscow's airports lie at varying distances from the MKAD beltway, Domodedovo being the farthest and Ostafyevo the nearest. There are a number of smaller airports close to Moscow (19 in Moscow Oblast), such as Myachkovo Airport, which are intended for private aircraft, helicopters, and charters. Moscow has two river passenger terminals (the South River Terminal and the North River Terminal, or Rechnoy vokzal) and regular ship routes and cruises along the Moskva and Oka rivers, which are used mostly for entertainment. The North River Terminal, built in 1937, is the main hub for long-range river routes. There are three freight ports serving Moscow. Moscow has different vehicle-sharing options sponsored by the local government. Several car-sharing companies provide cars to the population; to drive the automobiles, the user books them through the owning company's app. In 2018, the mayor Sergey Sobyanin said Moscow's car-sharing system had become the biggest in Europe in terms of vehicle fleet, with about 25,000 people using the service every day. At the end of the same year, Moscow's car sharing became the second largest in the world in terms of fleet, with 16,500 available vehicles. Another sharing system is the bike-sharing scheme ("Velobike"), with a fleet of 3,000 traditional and electric bicycles. "Delisamokat" is a new sharing service that provides electric scooters. There are also companies that provide various vehicles near Moscow's big parks. The Moscow International Business Center (MIBC) is a projected new part of central Moscow. Situated in Presnensky District at the Third Ring, the Moscow City area is under intense development. The goal of the MIBC is to create a zone, the first in Russia and in all of Eastern Europe, that will combine business activity, living space, and entertainment. The project was conceived by the Moscow government in 1992. The construction of the MIBC takes place on the Krasnopresnenskaya embankment, the only spot in downtown Moscow that can accommodate a project of this magnitude.
Today, most of the buildings there are old factories and industrial complexes. The Federation Tower, completed in 2016, is the second-tallest building in Europe. Also to be included in the project are a water park and other recreational facilities, business and entertainment complexes, office and residential buildings, the transport network, and the new site of the Moscow government. The construction of four new metro stations in the territory has been completed: two have opened, and two others are reserved for future metro lines crossing the MIBC; some additional stations were planned. A rail shuttle service directly connecting the MIBC with Sheremetyevo International Airport is also planned. Major thoroughfares through Moscow-City are the Third Ring and Kutuzovsky Prospekt. Three metro stations were initially planned for the Filyovskaya Line. The station Delovoi Tsentr opened in 2005 and was renamed Vystavochnaya in 2009. The branch was extended to the Mezhdunarodnaya station in 2006, and all work on the third station, Dorogomilovskaya (between Kiyevskaya and Delovoi Tsentr), has been postponed. There are plans to extend the branch as far as the Savyolovskaya station on the Serpukhovsko-Timiryazevskaya Line. In March 2009, the Russian business newspaper "Kommersant" reported that because of the worldwide financial crisis of 2007–2008, many of the construction projects in Moscow (especially in the Moscow International Business Center) were frozen and might be cancelled altogether, like the ambitious "Russia Tower" in Moscow-City. Moscow is home to nearly all of Russia's nationwide television networks, radio stations, newspapers, and magazines. English-language media include "The Moscow Times" and "Moscow News", which are, respectively, the largest and the oldest English-language weekly newspapers in all of Russia. "Kommersant", "Vedomosti", and "Novaya Gazeta" are Russian-language media headquartered in Moscow; "Kommersant" and "Vedomosti" are among the country's leading and oldest Russian-language business newspapers. Other media in Moscow include "Echo of Moscow", the first Soviet and Russian private news radio station and information agency, and NTV, one of the first privately owned Russian television stations. The total number of FM radio stations in Moscow is nearly 50. Moscow is twinned with a number of cities around the world and has cooperation agreements with many others.
Mediterranean Sea The Mediterranean Sea is a sea connected to the Atlantic Ocean, surrounded by the Mediterranean Basin and almost completely enclosed by land: on the north by Southern Europe and Anatolia, on the south by North Africa, and on the east by the Levant. Although the sea is sometimes considered a part of the Atlantic Ocean, it is usually referred to as a separate body of water. Geological evidence indicates that around 5.9 million years ago, the Mediterranean was cut off from the Atlantic and was partly or completely desiccated over a period of some 600,000 years during the Messinian salinity crisis, before being refilled by the Zanclean flood about 5.3 million years ago. It covers an area of about 2,500,000 km², representing 0.7% of the global ocean surface, but its connection to the Atlantic via the Strait of Gibraltar (the narrow strait that connects the Atlantic Ocean to the Mediterranean Sea and separates Spain in Europe from Morocco in Africa) is only about 14 km wide. In oceanography, it is sometimes called the "Eurafrican Mediterranean Sea", the "European Mediterranean Sea", or the "African Mediterranean Sea" to distinguish it from mediterranean seas elsewhere. The Mediterranean Sea has an average depth of about 1,500 m, and the deepest recorded point is 5,267 m, in the Calypso Deep in the Ionian Sea. It lies between latitudes 30° and 46° N and longitudes 6° W and 36° E. Its west–east length, from the Strait of Gibraltar to the Gulf of Iskenderun on the southeastern coast of Turkey, is about 4,000 km. The sea was an important route for merchants and travellers of ancient times, facilitating trade and cultural exchange between peoples of the region. The history of the Mediterranean region is crucial to understanding the origins and development of many modern societies. The sea was controlled by the Roman Empire for centuries during its nautical hegemony. The countries surrounding the Mediterranean in clockwise order are Spain, France, Monaco, Italy, Slovenia, Croatia, Bosnia and Herzegovina, Montenegro, Albania, Greece, Turkey, Syria, Lebanon, Israel, Egypt, Libya, Tunisia, Algeria, and Morocco; Malta and Cyprus are island countries in the sea. In addition, the Gaza Strip and the British Overseas Territories of Gibraltar and Akrotiri and Dhekelia have coastlines on the sea. The Ancient Egyptians called the Mediterranean Wadj-wr/Wadj-Wer/Wadj-Ur. The Ancient Greeks called the Mediterranean simply "hē thálassa" ("the Sea") or sometimes "hē megálē thálassa" ("the Great Sea"), "hē hēmétera thálassa" ("Our Sea"), or "hē thálassa hē kath'hēmâs" ("the sea around us"). The Romans called it "Mare Magnum" ("Great Sea") or "Mare Internum" ("Internal Sea") and, starting with the Roman Empire, "Mare Nostrum" ("Our Sea"). The term "Mare Mediterrāneum" appears later: Solinus apparently used it in the 3rd century, but the earliest extant witness to it is in the 6th century, in Isidore of Seville. It means 'in the middle of land, inland' in Latin, a compound of "medius" ("middle"), "terra" ("land, earth"), and "-āneus" ("having the nature of"). The Latin word is a calque of the Greek "mesógeios" ("inland"), from "mésos" ("in the middle") and "gḗinos" ("of the earth"), from "gê" ("land, earth"). The original meaning may have been 'the sea in the middle of the earth', rather than 'the sea enclosed by land'. Ancient Iranians called it the "Roman Sea"; in Classical Persian texts it was called "Daryāy-e Rōm" (دریای روم), which may be from the Middle Persian form "Zrēh ī Hrōm" (𐭦𐭫𐭩𐭤 𐭩 𐭤𐭫𐭥𐭬). The Carthaginians called it the "Syrian Sea".
In ancient Syrian texts, Phoenician epics, and in the Hebrew Bible, it was primarily known as the "Great Sea", "HaYam HaGadol" (Numbers; Book of Joshua; Ezekiel), or simply as "The Sea" (1 Kings). However, it has also been called the "Hinder Sea" because of its location on the west coast of Greater Syria or the Holy Land (and therefore behind a person facing the east), which is sometimes translated as "Western Sea". Another name was the "Sea of the Philistines" (Book of Exodus), from the people inhabiting a large portion of its shores near the Israelites. In Modern Hebrew, it is called "HaYam HaTikhon", 'the Middle Sea'. In Classical Persian texts it was called "Daryāy-e Šām" (دریای شام), "the Western Sea" or "Syrian Sea". In Modern Arabic, it is known as "al-Baḥr al-Abyaḍ al-Mutawassiṭ", 'the [White] Middle Sea'. In Islamic and older Arabic literature, it was "Baḥr al-Rūm(ī)", 'the Sea of the Romans' or 'the Roman Sea'. At first, that name referred only to the Eastern Mediterranean, but it was later extended to the whole Mediterranean. Other Arabic names were "Baḥr al-šām(ī)" ("the Sea of Syria") and "Baḥr al-Maghrib" ("the Sea of the West"). In Turkish, it is the "Akdeniz", 'the White Sea'; in Ottoman usage, the name sometimes meant only the Aegean Sea. The origin of the name is not clear, as it is not known in earlier Greek, Byzantine, or Islamic sources. It may be intended to contrast with the Black Sea. In Persian, the name was translated as "Baḥr-i Safīd", which was also used in later Ottoman Turkish. It is probably the origin of the colloquial Greek phrase "Áspri Thálassa" (lit. "White Sea"). Johann Knobloch claims that in classical antiquity, cultures in the Levant used colours to refer to the cardinal points: black referred to the north (explaining the name Black Sea), yellow or blue to the east, red to the south (e.g., the Red Sea), and white to the west. This would explain the Greek "Áspri Thálassa", the Bulgarian "Byalo More", the Turkish "Akdeniz", and the Arab nomenclature described above, "White Sea". Several ancient civilizations were located around the Mediterranean shores and were greatly influenced by their proximity to the sea. It provided routes for trade, colonization, and war, as well as food (from fishing and the gathering of other seafood) for numerous communities throughout the ages. Due to the shared climate, geology, and access to the sea, cultures centered on the Mediterranean tended to have some extent of intertwined culture and history. Two of the most notable Mediterranean civilizations in classical antiquity were the Greek city-states and the Phoenicians, both of which extensively colonized the coastlines of the Mediterranean. Later, when Augustus founded the Roman Empire, the Romans referred to the Mediterranean as "Mare Nostrum" ("Our Sea"). For the next 400 years, the Roman Empire completely controlled the Mediterranean Sea and virtually all its coastal regions from Gibraltar to the Levant. Darius I of Persia, who conquered Ancient Egypt, built a canal linking the Mediterranean to the Red Sea. Darius's canal was wide enough for two triremes to pass each other with oars extended, and required four days to traverse. In 2019, a team of archaeologists from the Underwater Research Center of Akdeniz University discovered a shipwreck dating back 3,600 years in the Mediterranean Sea off Turkey; 1.5 tons of copper ingots found in the ship were used to estimate its age. The Governor of Antalya, Münir Karaloğlu, described this valuable discovery as the "Göbeklitepe of the underwater world".
It has been confirmed that the shipwreck, dating back to 1600 BC, is older than the Uluburun shipwreck, which dates back to 1400 BC. The Western Roman Empire collapsed around 476 AD. Temporarily the east was again dominant, as Roman power lived on in the Byzantine Empire, formed in the 4th century from the eastern half of the Roman Empire. Another power arose in the 7th century, and with it the religion of Islam, which soon swept across from the east; at its greatest extent, the Arab Empire controlled 75% of the Mediterranean region and left a lasting footprint on its eastern and southern shores. The Arab invasions disrupted the trade relations between Western and Eastern Europe while also disrupting the trade routes with the East Asian empires. This, however, had the indirect effect of promoting trade across the Caspian Sea. The export of grain from Egypt was re-routed towards the Eastern world. Products from East Asian empires, like silk and spices, were carried from Egypt to ports such as Venice and Constantinople by sailors and Jewish merchants. The Viking raids further disrupted trade in western Europe and brought it to a halt. However, the Norsemen developed trade from Norway to the White Sea, while also trading in luxury goods from Spain and the Mediterranean. The Byzantines in the mid-8th century retook control of the area around the north-eastern part of the Mediterranean. Venetian ships from the 9th century armed themselves to counter harassment by Arabs while concentrating the trade of Asian goods in Venice. The Fatimids maintained trade relations with Italian city-states like Amalfi and Genoa before the Crusades, according to the Cairo Geniza documents. A document dated 996 mentions Amalfian merchants living in Cairo. Another letter states that the Genoese had traded with Alexandria. The caliph al-Mustansir had allowed Amalfian merchants to reside in Jerusalem around 1060 in place of the Latin hospice. The Crusades led to a flourishing of trade between Europe and the "outremer" region. Genoa, Venice, and Pisa created colonies in regions controlled by the Crusaders and came to control trade with the Orient. These colonies also allowed them to trade with the Eastern world. Though the fall of the Crusader states, and papal attempts to ban trade relations with the Muslim states, temporarily disrupted the trade with the Orient, it nevertheless continued. Europe started to revive, however, as more organized and centralized states began to form in the later Middle Ages, after the Renaissance of the 12th century. Ottoman power based in Anatolia continued to grow, and in 1453 it extinguished the Byzantine Empire with the conquest of Constantinople. The Ottomans gained control of much of the sea in the 16th century and maintained naval bases in southern France (1543–1544), Algeria, and Tunisia. Barbarossa, the famous Ottoman captain, is a symbol of this domination, with his victory at the Battle of Preveza (1538). The Battle of Djerba (1560) marked the apex of Ottoman naval domination in the Mediterranean. As the naval prowess of the European powers increased, they confronted Ottoman expansion in the region, until the Battle of Lepanto (1571) checked the power of the Ottoman Navy. This was the last naval battle to be fought primarily between galleys. The Barbary pirates of Northwest Africa preyed on Christian shipping and coastlines in the Western Mediterranean Sea. According to Robert Davis, from the 16th to 19th centuries, pirates captured 1 million to 1.25 million Europeans as slaves.
The development of oceanic shipping began to affect the entire Mediterranean. Once, most trade between Western Europe and the East had passed through the region, but after the 1490s the development of a sea route to the Indian Ocean allowed the importation of Asian spices and other goods through the Atlantic ports of western Europe. The sea nevertheless remained strategically important: British mastery of Gibraltar ensured British influence in Africa and Southwest Asia. Wars fought on it included the naval warfare in the Mediterranean during World War I and the Mediterranean theatre of World War II. In 2013, the Maltese president described the Mediterranean Sea as a "cemetery" due to the large number of migrants who drowned there after their boats capsized. European Parliament president Martin Schulz said in 2014 that Europe's migration policy had "turned the Mediterranean into a graveyard", referring to the number of drowned refugees in the region as a direct result of the policies. An Azerbaijani official described the sea as "a burial ground ... where people die". Following the 2013 Lampedusa migrant shipwreck, the Italian government decided to strengthen the national system for patrolling the Mediterranean Sea by authorising "Operation Mare Nostrum", a military and humanitarian mission to rescue migrants and arrest the traffickers of immigrants. In 2015, more than one million migrants crossed the Mediterranean Sea into Europe. Italy was particularly affected by the European migrant crisis: since 2013, over 700,000 migrants have landed in Italy, mainly sub-Saharan Africans. The Sea of Marmara (via the Dardanelles) is often considered a part of the Mediterranean Sea, whereas the Black Sea is generally not. The 193 km long artificial Suez Canal in the southeast connects the Mediterranean Sea to the Red Sea. Large islands in the Mediterranean include Sicily, Sardinia, Corsica, Cyprus, and Crete. The typical Mediterranean climate has hot, dry summers and mild, rainy winters. Crops of the region include olives, grapes, oranges, tangerines, and cork. The Mediterranean Sea includes 14 marginal seas. The International Hydrographic Organization defines the Mediterranean Sea as stretching from the Strait of Gibraltar in the west to the entrances to the Dardanelles and the Suez Canal in the east, bounded by the coasts of Europe, Africa, and Asia, and divided into two deep basins, the western and the eastern. Many countries and several other territories have a coastline on the Mediterranean Sea, among them numerous major cities (municipalities) with populations larger than 200,000 people. The International Hydrographic Organization (IHO) divides the Mediterranean into a number of smaller waterbodies, each with its own designation, and the names of some other seas have been in common use since ancient times or are in use in the present; many of these smaller seas feature in local myth and folklore and derive their names from such associations. In addition to the seas, a number of gulfs and straits are recognised. Much of the Mediterranean coast enjoys a hot-summer Mediterranean climate. However, most of its southeastern coast has a hot desert climate, and much of Spain's eastern (Mediterranean) coast has a cold semi-arid climate. Although they are rare, tropical cyclones occasionally form in the Mediterranean Sea, typically in September–November.
Being nearly landlocked affects conditions in the Mediterranean Sea: for instance, tides are very limited as a result of the narrow connection with the Atlantic Ocean. The Mediterranean is characterised and immediately recognised by its deep blue colour. Evaporation greatly exceeds precipitation and river runoff in the Mediterranean, a fact that is central to the water circulation within the basin. Evaporation is especially high in its eastern half, causing the water level to decrease and salinity to increase eastward. The average salinity in the basin is 38 PSU at 5 m depth. The temperature of the water in the deepest part of the Mediterranean Sea is approximately 13 °C. Water circulation in the Mediterranean can be described starting from the surface waters entering from the Atlantic through the Strait of Gibraltar. These cool, relatively low-salinity waters circulate eastwards along the North African coasts. A part of these surface waters does not pass the Strait of Sicily, but deviates towards Corsica before exiting the Mediterranean. The surface waters entering the eastern Mediterranean basin circulate along the Libyan and Israeli coasts. Upon reaching the Levantine Sea, the surface waters, having warmed and become saltier relative to their initial Atlantic state, are now denser and sink to form the Levantine Intermediate Water (LIW). Most of the water found anywhere between 50 and 600 m deep in the Mediterranean originates from the LIW. The LIW forms along the coasts of Turkey and circulates westwards along the Greek and southern Italian coasts; it is the only water mass that passes the Strait of Sicily westwards. After the Strait of Sicily, the LIW circulates along the Italian, French and Spanish coasts before exiting the Mediterranean through the depths of the Strait of Gibraltar. Deep water in the Mediterranean originates from three main areas: the Adriatic Sea, from which most of the deep water in the eastern Mediterranean originates, the Aegean Sea, and the Gulf of Lion. Deep water formation in the Mediterranean is triggered by strong winter convection fueled by intense cold winds like the Bora. When new deep water is formed, the older waters mix with the overlying intermediate waters and eventually exit the Mediterranean. The residence time of water in the Mediterranean is approximately 100 years, making the Mediterranean especially sensitive to climate change. Being a semi-enclosed basin, the Mediterranean experiences transitory events that can affect the water circulation on short time scales. In the mid-1990s, the Aegean Sea became the main area for deep water formation in the eastern Mediterranean after particularly cold winter conditions. This transitory switch in the origin of deep waters in the eastern Mediterranean was termed the Eastern Mediterranean Transient (EMT) and had major consequences for the water circulation of the Mediterranean. Another example of a transient event affecting the Mediterranean circulation is the periodic inversion of the North Ionian Gyre, an anticyclonic ocean gyre observed in the northern part of the Ionian Sea, off the Greek coast. The transition from anticyclonic to cyclonic rotation of this gyre changes the origin of the waters fueling it: when the circulation is anticyclonic (the most common state), the waters of the gyre originate from the Adriatic Sea; when it is cyclonic, they originate from the Levantine Sea. 
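The budget just described, evaporation outstripping precipitation and runoff while salty water exits at depth, can be made concrete with a Knudsen-type two-layer budget. The short Python sketch below is illustrative only: the inflow salinity, outflow salinity and net freshwater loss are assumed round numbers, not measurements from the text (only the ~38 PSU basin average appears above).

```python
# Knudsen-style water and salt budget for a semi-enclosed basin.
# Assumed, illustrative round numbers:
#   S_ATL: salinity of inflowing Atlantic surface water (PSU)
#   S_MED: salinity of outflowing Mediterranean water (PSU)
#   NET_LOSS: net freshwater loss (evaporation minus precipitation and
#             river runoff), in sverdrups (1 Sv = 1e6 m^3/s)

S_ATL = 36.2      # assumed inflow salinity
S_MED = 38.4      # assumed outflow salinity (text cites ~38 PSU basin average)
NET_LOSS = 0.05   # assumed net evaporative loss, Sv

# Conservation of volume:  Q_in = Q_out + NET_LOSS
# Conservation of salt:    Q_in * S_ATL = Q_out * S_MED
# Solving the two equations for the inflow:
q_in = NET_LOSS * S_MED / (S_MED - S_ATL)
q_out = q_in - NET_LOSS

print(f"Gibraltar inflow  ~ {q_in:.2f} Sv")   # ~0.87 Sv
print(f"Gibraltar outflow ~ {q_out:.2f} Sv")  # ~0.82 Sv
```

The point of the exercise: even a small net evaporative loss forces a Gibraltar exchange roughly twenty times larger, because the salinity difference between inflow and outflow is only a couple of PSU.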
These waters have different physical and chemical characteristics, and the periodic inversion of the North Ionian Gyre (called the Bimodal Oscillating System, or BiOS) changes the Mediterranean circulation and biogeochemistry around the Adriatic and Levantine regions. Because of the short residence time of its waters, the Mediterranean Sea is considered a hot-spot for climate change effects. Deep water temperatures increased measurably between 1959 and 1989, and according to climate projections the Mediterranean Sea could become warmer still. A decrease in precipitation over the region could lead to more evaporation, ultimately increasing the salinity of the Mediterranean Sea. Because of these changes in temperature and salinity, the Mediterranean Sea may become more stratified by the end of the 21st century, with notable consequences for water circulation and biogeochemistry. In spite of its great biodiversity, concentrations of chlorophyll and nutrients in the Mediterranean Sea are very low, making it one of the most oligotrophic ocean regions in the world. The Mediterranean Sea is commonly referred to as an LNLC (Low-Nutrient, Low-Chlorophyll) area. It fits the definition of a marine "desert": precipitation is low and its nutrient content is low, making it difficult for plants and animals to develop. There are steep gradients in nutrient concentrations, chlorophyll concentrations and primary productivity in the Mediterranean. Nutrient concentrations in the western part of the basin are about double the concentrations in the eastern basin. The Alboran Sea, close to the Strait of Gibraltar, has a daily primary productivity of about 0.25 g C (grams of carbon) m−2 day−1, whereas the eastern basin has an average daily productivity of 0.16 g C m−2 day−1. For this reason, the eastern part of the Mediterranean Sea is termed "ultraoligotrophic". The productive areas of the Mediterranean Sea are few and small. High productivity (i.e. more than 0.5 milligrams of chlorophyll "a" per cubic metre) occurs in coastal areas, close to the river mouths that are the primary suppliers of dissolved nutrients. The Gulf of Lion has a relatively high productivity because it is an area of high vertical mixing, bringing nutrients to the surface waters where they can be used by phytoplankton to produce chlorophyll "a". Primary productivity in the Mediterranean is also marked by intense seasonal variability. In winter, strong winds and precipitation over the basin generate vertical mixing, bringing nutrients from the deep waters to the surface, where phytoplankton can convert them into biomass; in winter, however, light may be the limiting factor for primary productivity. Between March and April, spring offers the ideal trade-off between light intensity and surface nutrient concentrations for a spring bloom to occur. In summer, high atmospheric temperatures lead to the warming of the surface waters. The resulting density difference virtually isolates the surface waters from the rest of the water column, and nutrient exchanges are limited. As a consequence, primary productivity is very low between June and October. Oceanographic expeditions uncovered a characteristic feature of Mediterranean biogeochemistry: most of the chlorophyll production does not occur at the surface, but in sub-surface waters between 80 and 200 metres deep. Another key characteristic of the Mediterranean is its high nitrogen-to-phosphorus ratio (N:P). Redfield demonstrated that most of the world's oceans have an average N:P ratio around 16. 
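As a rough illustration of the west–east productivity gradient quoted above, the two daily rates can be converted to annual totals. This is a crude back-of-envelope calculation that ignores the strong seasonality just described; only the two daily figures come from the text.

```python
# Back-of-envelope comparison of the quoted daily primary productivity
# figures, converted to rough annual totals per square metre.

ALBORAN = 0.25   # g C m^-2 day^-1, western basin (Alboran Sea)
EASTERN = 0.16   # g C m^-2 day^-1, eastern basin average

for name, daily in [("Alboran Sea (west)", ALBORAN), ("Eastern basin", EASTERN)]:
    print(f"{name}: {daily} g C/m2/day ~ {daily * 365:.0f} g C/m2/yr")

print(f"West/east ratio: {ALBORAN / EASTERN:.2f}")  # ~1.6x
```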
However, the Mediterranean Sea has an average N:P between 24 and 29, which reflects a widespread phosphorus limitation. Because of its low productivity, plankton assemblages in the Mediterranean Sea are dominated by small organisms such as picophytoplankton and bacteria. The geologic history of the Mediterranean Sea is complex. Underlain by oceanic crust, the sea basin was once thought to be a tectonic remnant of the ancient Tethys Ocean; it is now known to be a structurally younger basin, called the Neotethys, which was first formed by the convergence of the African and Eurasian plates during the Late Triassic and Early Jurassic. Because it is a near-landlocked body of water in a normally dry climate, the Mediterranean is subject to intensive evaporation and the precipitation of evaporites. The Messinian salinity crisis started about six million years ago (mya), when the Mediterranean became landlocked and then essentially dried up. Salt deposits of more than a million cubic kilometres accumulated on the bottom of the basin, in some places more than three kilometres thick. Scientists estimate that the sea was last filled about 5.3 million years ago (mya), in less than two years, by the Zanclean flood. Water poured in from the Atlantic Ocean through a newly breached gateway now called the Strait of Gibraltar, at an estimated rate about three orders of magnitude (one thousand times) larger than the current flow of the Amazon River. The Mediterranean Sea has an average depth of about 1,500 m, and the deepest recorded point, some 5,267 m, is in the Calypso Deep in the Ionian Sea. The coastline extends for roughly 46,000 km. A shallow submarine ridge (the Strait of Sicily) between the island of Sicily and the coast of Tunisia divides the sea into two main subregions: the Western Mediterranean, with an area of about 850,000 km2 (330,000 mi2), and the Eastern Mediterranean, of about 1.65 million km2 (640,000 mi2). Coastal areas have submarine karst springs, known as "vruljas", which discharge pressurised groundwater into the sea from below the surface; the discharged water is usually fresh, and sometimes may be thermal. The Mediterranean basin and sea system was established by the ancient African-Arabian continent colliding with the Eurasian continent. As Africa-Arabia drifted northward, it closed over the ancient Tethys Ocean, which had earlier separated the two supercontinents Laurasia and Gondwana. At about that time in the middle Jurassic period (roughly 170 million years ago) a much smaller sea basin, dubbed the Neotethys, was formed shortly before the Tethys Ocean closed at its western (Arabian) end. The broad line of collisions pushed up a very long system of mountains, from the Pyrenees in Spain to the Zagros Mountains in Iran, in an episode of mountain-building tectonics known as the Alpine orogeny. The Neotethys grew larger during the episodes of collisions (and associated foldings and subductions) that occurred during the Oligocene and Miocene epochs (34 to 5.33 mya). Accordingly, the Mediterranean basin consists of several stretched tectonic plates in subduction, which are the foundation of the eastern part of the Mediterranean Sea. Various zones of subduction contain the highest oceanic ridges, east of the Ionian Sea and south of the Aegean. The Central Indian Ridge runs south-east from east of the Mediterranean Sea, between Africa and the Arabian Peninsula, into the Indian Ocean. 
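The Zanclean flood figures quoted above can be sanity-checked with simple arithmetic. The sketch below assumes a modern Amazon discharge of roughly 2.1×10^5 m³/s and a Mediterranean volume of roughly 3.75 million km³; both are assumed reference values, not numbers from the text.

```python
# Rough consistency check of the Zanclean flood claim: at a flow rate
# one thousand times the Amazon's, how long does refilling take?

AMAZON = 2.1e5                 # m^3/s, assumed modern Amazon discharge
FLOOD = 1000 * AMAZON          # "three orders of magnitude larger"
MED_VOLUME = 3.75e6 * 1e9      # assumed basin volume: 3.75e6 km^3 -> m^3

seconds = MED_VOLUME / FLOOD
years = seconds / (3600 * 24 * 365)
print(f"Fill time at {FLOOD:.1e} m^3/s: ~{years:.1f} years")
# ~0.6 years, comfortably inside the "less than two years" quoted above.
```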
During Mesozoic and Cenozoic times, as the northwest corner of Africa converged on Iberia, it lifted the Betic-Rif mountain belts across southern Iberia and northwest Africa. There the development of the intramontane Betic and Rif basins created two roughly parallel marine gateways between the Atlantic Ocean and the Mediterranean Sea. Dubbed the Betic and Rifian corridors, they gradually closed during the middle and late Miocene, perhaps several times. In the late Miocene the closure of the Betic Corridor triggered the so-called "Messinian salinity crisis" (MSC), when the Mediterranean almost entirely dried out. The start of the MSC was recently estimated astronomically at 5.96 mya, and it persisted for some 630,000 years, until about 5.33 mya. After the initial drawdown and re-flooding, there followed more episodes (the total number is debated) of sea drawdowns and re-floodings for the duration of the MSC. It ended when the Atlantic Ocean last re-flooded the basin, creating the Strait of Gibraltar and causing the Zanclean flood, at the end of the Miocene (5.33 mya). Some research has suggested that a desiccation-flooding-desiccation cycle may have repeated several times, which could explain several events of large-scale salt deposition. Recent studies, however, show that repeated desiccation and re-flooding is unlikely from a geodynamic point of view. The present-day Atlantic gateway, the Strait of Gibraltar, originated in the early Pliocene via the Zanclean flood. As mentioned, there were two earlier gateways: the Betic Corridor across southern Spain and the Rifian Corridor across northern Morocco. The Betic closed about 6 mya, causing the Messinian salinity crisis (MSC); the Rifian, or possibly both gateways, closed during the earlier Tortonian times, causing a "Tortonian salinity crisis" (from 11.6 to 7.2 mya) that began long before the MSC and lasted much longer. Both "crises" resulted in broad land connections between Africa and Europe, which allowed migrations of flora and fauna, especially large mammals including primates, between the two continents. The Vallesian crisis indicates a typical extinction and replacement of mammal species in Europe during Tortonian times, following climatic upheaval and the overland migration of new species. The almost complete enclosure of the Mediterranean basin has enabled the oceanic gateways to dominate seawater circulation and the environmental evolution of the sea and basin. Circulation patterns are also affected by several other factors, including climate, bathymetry, and water chemistry and temperature, which are interactive and can induce the precipitation of evaporites. Deposits of evaporites accumulated earlier in the nearby Carpathian foredeep during the Middle Miocene, in the adjacent Red Sea Basin during the Late Miocene, and in the whole Mediterranean basin during the MSC in the Messinian age. Many diatomites are found underneath the evaporite deposits, suggesting a connection between their formations. 
Today, evaporation of surface seawater (output) exceeds the supply (input) of fresh water from precipitation and coastal drainage systems, making the salinity of the Mediterranean much higher than that of the Atlantic. The saltier Mediterranean waters sink below the waters incoming from the Atlantic, causing a two-layer flow across the Strait of Gibraltar: an outflowing "submarine current" of warm, saline Mediterranean water, counterbalanced by an inflowing surface current of colder, less saline oceanic water from the Atlantic. In the 1920s, Herman Sörgel proposed the building of a hydroelectric dam (the Atlantropa project) across the straits, using the inflow current to provide a large amount of hydroelectric energy. The underlying energy grid was also intended to support a political union between Europe and, at least, the Maghreb part of Africa (compare Eurafrika for the later impact, and Desertec for a later project with some parallels in the planned grid). The end of the Miocene also marked a change in the climate of the Mediterranean basin. Fossil evidence from that period reveals that the larger basin had a humid subtropical climate, with rainfall in the summer supporting laurel forests. The shift to a "Mediterranean climate" occurred largely within the last three million years (the late Pliocene epoch), as summer rainfall decreased. The subtropical laurel forests retreated; and even as they persisted on the islands of Macaronesia off the Atlantic coast of Iberia and North Africa, the present Mediterranean vegetation evolved, dominated by coniferous trees and sclerophyllous trees and shrubs with small, hard, waxy leaves that prevent moisture loss in the dry summers. Much of this forest and shrubland has been altered beyond recognition by thousands of years of human habitation, and there are now very few relatively intact natural areas in what was once a heavily wooded region. Because of its latitude and its land-locked position, the Mediterranean is especially sensitive to astronomically induced climatic variations, which are well documented in its sedimentary record. Since the Mediterranean receives deposits of aeolian dust from the Sahara during dry periods, whereas riverine detrital input prevails during wet ones, the Mediterranean's sapropel-bearing marine sequences provide high-resolution climatic information. These data have been employed in reconstructing astronomically calibrated time scales for the last 9 Ma of the Earth's history, helping to constrain the timing of past geomagnetic reversals. Furthermore, the exceptional accuracy of these paleoclimatic records has improved our knowledge of the Earth's orbital variations in the past. Unlike the vast, multidirectional currents of the open oceans, circulation in the Mediterranean is comparatively stable owing to the basin's nearly enclosed character, and this stability favours marine life down to its smallest forms. The stable circulation and sea temperatures provide a nourishing environment for deep-sea life to flourish while keeping the ecosystem largely insulated from external open-ocean influences. It is estimated that there are more than 17,000 marine species in the Mediterranean Sea, with marine biodiversity generally higher in coastal areas and on continental shelves and decreasing with depth. 
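The counterintuitive part of this exchange, warm water sinking below colder water, comes down to the effect of salinity on density. Below is a minimal sketch with a linearised equation of state; the expansion coefficients and the temperature/salinity values are assumed, illustrative numbers, not measurements from the text.

```python
# Why does the *warmer* Mediterranean outflow sink below the Atlantic
# inflow? Linearised seawater density around an assumed reference state.

RHO0, T0, S0 = 1027.0, 10.0, 35.0   # reference density (kg/m^3), temp (C), salinity (PSU)
ALPHA = 2.0e-4                       # thermal expansion coefficient, 1/K (assumed)
BETA = 7.6e-4                        # haline contraction coefficient, 1/PSU (assumed)

def density(temp_c: float, sal_psu: float) -> float:
    """Density from a linear equation of state (illustrative only)."""
    return RHO0 * (1 - ALPHA * (temp_c - T0) + BETA * (sal_psu - S0))

med_out = density(13.5, 38.4)   # warm but very salty Mediterranean outflow
atl_in = density(12.0, 36.2)    # colder, fresher Atlantic inflow

print(f"Mediterranean outflow: {med_out:.1f} kg/m^3")  # ~1028.9
print(f"Atlantic inflow:       {atl_in:.1f} kg/m^3")   # ~1027.5
# The extra salinity outweighs the temperature difference, so the
# outflow is denser and slides out along the bottom of the strait.
```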
As a result of the drying of the sea during the Messinian salinity crisis, the marine biota of the Mediterranean are derived primarily from the Atlantic Ocean. The North Atlantic is considerably colder and more nutrient-rich than the Mediterranean, and the marine life of the Mediterranean has had to adapt to its differing conditions in the five million years since the basin was reflooded. The Alboran Sea is a transition zone between the two seas, containing a mix of Mediterranean and Atlantic species. It has the largest population of bottlenose dolphins in the western Mediterranean, is home to the last population of harbour porpoises in the Mediterranean, and contains the most important feeding grounds for loggerhead sea turtles in Europe. The Alboran Sea also hosts important commercial fisheries, including sardines and swordfish. Mediterranean monk seals live in the Aegean Sea in Greece. In 2003, the World Wildlife Fund raised concerns about widespread drift-net fishing endangering populations of dolphins, turtles, and other marine animals such as the spiny squat lobster. There was a resident population of killer whales in the Mediterranean until the 1980s, when they went extinct, probably due to long-term PCB exposure; there are still annual sightings of killer whale vagrants. For 4,000 years, human activity has transformed most parts of Mediterranean Europe, and the "humanisation of the landscape" overlapped with the appearance of the present Mediterranean climate. The simplistic, environmentally determinist image of a Mediterranean paradise on Earth in antiquity, destroyed by later civilisations, dates back to at least the 18th century and was for centuries fashionable in archaeological and historical circles. Based on a broad variety of methods, e.g. historical documents, analysis of trade relations, floodplain sediments, pollen, tree-ring and further archaeometric analyses, and population studies, Alfred Thomas Grove's and Oliver Rackham's work on "The Nature of Mediterranean Europe" challenges this common wisdom of a Mediterranean Europe as a "Lost Eden", a formerly fertile and forested region progressively degraded and desertified by human mismanagement. The belief stems more from the failure of the recent landscape to measure up to the imaginary past of the classics, as idealised by artists, poets and scientists of the early modern Enlightenment. The historical evolution of climate, vegetation and landscape in southern Europe from prehistoric times to the present is much more complex and underwent various changes. For example, some of the deforestation had already taken place before the Roman age. While in the Roman age large enterprises such as the latifundia took effective care of forests and agriculture, the largest depopulation effects came with the end of the empire. Some assume that the major deforestation took place in modern times; the later usage patterns were also quite different, e.g. between southern and northern Italy. Also, the climate has usually been unstable, and there is evidence of various ancient and modern "Little Ice Ages"; plant cover accommodated the various extremes and became resilient to various patterns of human activity. Human activity was therefore not the cause of climate change but followed it. The wide ecological diversity typical of Mediterranean Europe is predominantly based on human behavior, as it is, and has been, closely related to human usage patterns. 
The diversity range was enhanced by the widespread exchange and interaction of the longstanding and highly diverse local agriculture, intense transport and trade relations, and the interaction with settlements, pasture and other land use. The greatest human-induced changes, however, came after World War II, in line with the "1950s syndrome", as rural populations throughout the region abandoned traditional subsistence economies. Grove and Rackham suggest that the locals left traditional agricultural patterns and instead became scenery-setting agents for tourism, resulting in more uniform, large-scale landscape formations. Among the other important current threats to Mediterranean landscapes are the overdevelopment of coastal areas, the abandonment of mountains and, as mentioned, the loss of variety via the reduction of traditional agricultural occupations. The region has a variety of geological hazards which have closely interacted with human activity and land use patterns. Among others, in the eastern Mediterranean, the Thera eruption, dated to the 17th or 16th century BC, caused a large tsunami that some experts hypothesise devastated the Minoan civilisation on the nearby island of Crete, further leading some to believe that this may have been the catastrophe that inspired the Atlantis legend. Mount Vesuvius is the only active volcano on the European mainland, while others, such as Mount Etna and Stromboli, are on neighbouring islands. The region around Vesuvius, including the Phlegraean Fields caldera west of Naples, is quite active and constitutes the most densely populated volcanic region in the world, where an eruptive event may occur within decades. Vesuvius itself is regarded as quite dangerous due to a tendency towards explosive (Plinian) eruptions; it is best known for its eruption in AD 79, which buried and destroyed the Roman cities of Pompeii and Herculaneum. The extensive experience of member states and regional authorities has led to exchanges at the international level, with cooperation among NGOs, states, regional and municipal authorities and private persons. The Greek–Turkish earthquake diplomacy is a positive example of natural hazards leading to improved relations between traditional rivals in the region, after earthquakes in İzmir and Athens in 1999. The European Union Solidarity Fund (EUSF) was set up to respond to major natural disasters and express European solidarity with disaster-stricken regions within all of Europe. The largest number of funding requests in the EU relates to forest fires, followed by floods and earthquakes. Forest fires, whether man-made or natural, are a frequent and dangerous hazard in the Mediterranean region. Tsunamis are also an often underestimated hazard in the region: the 1908 Messina earthquake and tsunami, for example, took more than 123,000 lives in Sicily and Calabria and was among the deadliest natural disasters in modern Europe. The opening of the Suez Canal in 1869 created the first salt-water passage between the Mediterranean and the Red Sea. The Red Sea is higher than the Eastern Mediterranean, so the canal functions as a tidal strait that pours Red Sea water into the Mediterranean. 
The Bitter Lakes, hyper-saline natural lakes that form part of the canal, blocked the migration of Red Sea species into the Mediterranean for many decades, but as the salinity of the lakes gradually equalised with that of the Red Sea, the barrier to migration was removed, and plants and animals from the Red Sea have begun to colonise the Eastern Mediterranean. The Red Sea is generally saltier and more nutrient-poor than the Atlantic, so the Red Sea species have advantages over Atlantic species in the salty and nutrient-poor Eastern Mediterranean. Accordingly, Red Sea species invade the Mediterranean biota, and not vice versa; this phenomenon is known as the Lessepsian migration (after Ferdinand de Lesseps, the French engineer) or the Erythrean ("red") invasion. The construction of the Aswan High Dam across the Nile River in the 1960s reduced the inflow of fresh water and nutrient-rich silt from the Nile into the Eastern Mediterranean, making conditions there even more like the Red Sea and worsening the impact of the invasive species. Invasive species have become a major component of the Mediterranean ecosystem and have serious impacts on its ecology, endangering many local and endemic Mediterranean species. A first look at some groups of exotic species shows that more than 70% of the non-indigenous decapods and about 63% of the exotic fishes occurring in the Mediterranean are of Indo-Pacific origin, introduced into the Mediterranean through the Suez Canal. This makes the canal the primary pathway for the arrival of alien species into the Mediterranean. The impacts of some Lessepsian species have proven to be considerable, mainly in the Levantine basin of the Mediterranean, where they are replacing native species and becoming a familiar sight. According to the International Union for Conservation of Nature definition, as well as Convention on Biological Diversity (CBD) and Ramsar Convention terminologies, they are alien species, as they are non-native (non-indigenous) to the Mediterranean Sea and are outside their normal area of distribution, which is the Indo-Pacific region. When these species succeed in establishing populations in the Mediterranean Sea, compete with, and begin to replace native species, they are "alien invasive species", as they are an agent of change and a threat to native biodiversity. In the context of the CBD, "introduction" refers to the movement by human agency, indirect or direct, of an alien species outside of its natural range (past or present). The Suez Canal, being an artificial (man-made) canal, is a human agency; Lessepsian migrants are therefore "introduced" species (indirect, and unintentional). Whatever wording is chosen, they represent a threat to native Mediterranean biodiversity, because they are non-indigenous to this sea. In recent years, the Egyptian government's announcement of its intention to deepen and widen the canal has raised concerns from marine biologists, who fear that this will worsen the invasion of Red Sea species into the Mediterranean and lead to even more species passing through the canal. In recent decades, the arrival of exotic species from the tropical Atlantic has also become noticeable. 
Whether this reflects an expansion of the natural range of these species, which now enter the Mediterranean through the Strait of Gibraltar because of a warming trend in the water caused by global warming; an extension of maritime traffic; or simply more intense scientific investigation, is still an open question. While not as intense as the "Lessepsian" movement, the process may be of scientific interest and may therefore warrant increased levels of monitoring. By 2100 the overall level of the Mediterranean could rise substantially as a result of the effects of climate change, which could have adverse effects on populations across the Mediterranean. Coastal ecosystems also appear to be threatened by sea level rise, especially enclosed seas such as the Baltic, the Mediterranean and the Black Sea; these seas have only small and primarily east–west movement corridors, which may restrict the northward displacement of organisms in these areas. Temperature shifts of a mere 0.05–0.1 °C in the deep sea are sufficient to induce significant changes in species richness and functional diversity. Pollution in this region has been extremely high in recent years. The United Nations Environment Programme has estimated that large quantities of sewage, mineral oil, mercury, lead and phosphates are dumped into the Mediterranean each year. The Barcelona Convention aims to "reduce pollution in the Mediterranean Sea and protect and improve the marine environment in the area, thereby contributing to its sustainable development." Many marine species have been almost wiped out by the sea's pollution, among them the Mediterranean monk seal, which is considered to be among the world's most endangered marine mammals. The Mediterranean is also plagued by marine debris. A 1994 study of the seabed using trawl nets around the coasts of Spain, France and Italy reported a particularly high mean concentration of debris: an average of 1,935 items per km2, of which plastic debris accounted for 76%, and of that plastic, 94% was bags. Some of the world's busiest shipping routes are in the Mediterranean Sea. It is estimated that approximately 220,000 merchant vessels of more than 100 tonnes cross the Mediterranean Sea each year, about one third of the world's total merchant shipping. These ships often carry hazardous cargo which, if lost, would result in severe damage to the marine environment. The discharge of chemical tank washings and oily wastes also represents a significant source of marine pollution: the Mediterranean Sea constitutes 0.7% of the global water surface and yet receives 17% of global marine oil pollution. Substantial quantities of crude oil are deliberately released into the sea from shipping activities each year; more than 20% of the world's oil is transported annually across the Mediterranean, with around 250–300 oil tankers crossing the sea every day. Accidental oil spills happen frequently, with an average of 10 spills per year, and a major oil spill could occur at any time in any part of the Mediterranean. Tourism is one of the most important sources of income for many Mediterranean countries, despite the geopolitical conflicts in the region, and coastal states have sought to contain conflict zones that might affect the region's economies, societies, and shipping routes. 
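For a sense of scale, the 1994 trawl-survey figures above imply the following densities of plastic items. This is straightforward arithmetic on the three numbers quoted in the text.

```python
# Quick arithmetic on the 1994 seabed-debris survey figures quoted above.
mean_items_km2 = 1935   # mean debris items per square kilometre
plastic_share = 0.76    # share of all debris that was plastic
bag_share = 0.94        # share of the plastic that was bags

plastic = mean_items_km2 * plastic_share
bags = plastic * bag_share
print(f"Plastic items per km2: ~{plastic:.0f}")   # ~1471
print(f"Plastic bags per km2:  ~{bags:.0f}")      # ~1382
```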
Naval and rescue operations in the Mediterranean Sea are considered among the most effective anywhere, owing to rapid cooperation between the various naval fleets; unlike in the vast open oceans, the sea's enclosed position facilitates naval and rescue missions in the event of man-made or natural disasters. Tourism is a source of income for small coastal communities, including islands, independent of urban centers. However, tourism has also played a major role in the degradation of the coastal and marine environment. Rapid development has been encouraged by Mediterranean governments to support the large numbers of tourists visiting the region, but this has caused serious disturbance to marine habitats through erosion and pollution in many places along the Mediterranean coasts. Tourism often concentrates in areas of high natural wealth, posing a serious threat to the habitats of endangered species such as sea turtles and monk seals; reductions in natural wealth may, in turn, reduce the incentive for tourists to visit. Fish stock levels in the Mediterranean Sea are alarmingly low. The European Environment Agency says that more than 65% of all fish stocks in the region are outside safe biological limits, and the United Nations Food and Agriculture Organization warns that some of the most important fisheries, such as those for albacore and bluefin tuna, hake, marlin, swordfish, red mullet and sea bream, are threatened. There are clear indications that catch size and quality have declined, often dramatically, and in many areas larger and longer-lived species have disappeared entirely from commercial catches. Large open-water fish like tuna have been a shared fisheries resource for thousands of years, but the stocks are now dangerously low. In 1999, Greenpeace published a report revealing that the amount of bluefin tuna in the Mediterranean had decreased by over 80% in the previous 20 years, and government scientists warned that without immediate action the stock would collapse.
https://en.wikipedia.org/wiki?curid=19006
Milgram experiment The Milgram experiment(s) on obedience to authority figures was a series of social psychology experiments conducted by Yale University psychologist Stanley Milgram. They measured the willingness of study participants, men from a diverse range of occupations with varying levels of education, to obey an authority figure who instructed them to perform acts conflicting with their personal conscience. Participants were led to believe that they were assisting an unrelated experiment, in which they had to administer electric shocks to a "learner." These fake electric shocks gradually increased to levels that would have been fatal had they been real. The experiment found, unexpectedly, that a very high proportion of subjects would fully obey the instructions, albeit reluctantly. Milgram first described his research in a 1963 article in the "Journal of Abnormal and Social Psychology" and later discussed his findings in greater depth in his 1974 book, "Obedience to Authority: An Experimental View." The experiments began in July 1961, in the basement of Linsly-Chittenden Hall at Yale University, three months after the start of the trial of German Nazi war criminal Adolf Eichmann in Jerusalem. Milgram devised his psychological study to answer the popular contemporary question: "Could it be that Eichmann and his million accomplices in the Holocaust were just following orders? Could we call them all accomplices?" The experiment was repeated many times around the globe, with fairly consistent results. Three individuals took part in each session of the experiment: the experimenter, the subject, and an actor posing as a second volunteer. The subject and the actor arrived at the session together. The experimenter told them that they were taking part in "a scientific study of memory and learning", to see what the effect of punishment is on a subject's ability to memorize content. He also always made clear that the payment for participation was guaranteed regardless of how the experiment developed. The subject and actor drew slips of paper to determine their roles. Unknown to the subject, both slips said "teacher". The actor would always claim to have drawn the slip that read "learner", thus guaranteeing that the subject would always be the "teacher". Next, the teacher and learner were taken into an adjacent room, where the learner was strapped into what appeared to be an electric chair. The experimenter told the participants this was to ensure that the learner would not escape. In a later variation of the experiment, the confederate would eventually plead for mercy and yell that he had a heart condition. At some point prior to the actual test, the teacher was given a sample electric shock from the electroshock generator in order to experience firsthand what the shock that the learner would supposedly receive during the experiment would feel like. The teacher and learner were then separated such that they could communicate but not see each other. The teacher was then given a list of word pairs that he was to teach the learner. The teacher began by reading the list of word pairs to the learner. The teacher would then read the first word of each pair and read four possible answers. The learner would press a button to indicate his response. If the answer was incorrect, the teacher would administer a shock to the learner, with the voltage increasing in 15-volt increments for each wrong answer. If correct, the teacher would read the next word pair. The subjects believed that for each wrong answer the learner was receiving actual shocks. In reality, there were no shocks. 
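The generator's voltage schedule described above (15-volt steps up to a 450-volt maximum) is easy to sketch. The helper function below is a hypothetical illustration of how the n-th wrong answer maps to a shock level, not part of any published protocol.

```python
# The shock generator's scale as described above: 15-volt increments
# up to a 450-volt maximum, i.e. 30 switches in total.

LEVELS = list(range(15, 451, 15))
print(f"{len(LEVELS)} switches, from {LEVELS[0]} V to {LEVELS[-1]} V")

def shock_for_wrong_answer(n_wrong: int) -> int:
    """Voltage administered for the n-th wrong answer (hypothetical helper)."""
    return LEVELS[min(n_wrong, len(LEVELS)) - 1]

print(shock_for_wrong_answer(1))    # 15 V, the first switch
print(shock_for_wrong_answer(20))   # 300 V, the level at which the victim
                                    # was said to refuse to answer (see below)
print(shock_for_wrong_answer(30))   # 450 V, the maximum
```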
After the learner was separated from the teacher, the learner set up a tape recorder integrated with the electroshock generator, which played prerecorded sounds for each shock level. As the voltage of the fake shocks increased, the learner began making audible protests, such as banging repeatedly on the wall that separated him from the teacher; when the highest voltages were reached, the learner fell silent. If at any time the teacher indicated a desire to halt the experiment, the experimenter was instructed to give specific verbal prods, in this order: "Please continue" (or "Please go on"); "The experiment requires that you continue"; "It is absolutely essential that you continue"; and "You have no other choice, you must go on." If the subject still wished to stop after all four successive verbal prods, the experiment was halted. Otherwise, it was halted after the subject had given the maximum 450-volt shock three times in succession. The experimenter also had prods to use if the teacher made specific comments. If the teacher asked whether the learner might suffer permanent physical harm, the experimenter replied, "Although the shocks may be painful, there is no permanent tissue damage, so please go on." If the teacher said that the learner clearly wanted to stop, the experimenter replied, "Whether the learner likes it or not, you must go on until he has learned all the word pairs correctly, so please go on." Before conducting the experiment, Milgram polled fourteen Yale University senior-year psychology majors to predict the behavior of 100 hypothetical teachers. All of the poll respondents believed that only a very small fraction of teachers (the range was from zero to 3 out of 100, with an average of 1.2) would be prepared to inflict the maximum voltage. Milgram also informally polled his colleagues and found that they, too, believed very few subjects would progress beyond a very strong shock. He also reached out to honorary Harvard University graduate Chaim Homnick, who noted that this experiment would not be concrete evidence of the Nazis' innocence, due to the fact that "poor people are more likely to cooperate." Milgram also polled forty psychiatrists from a medical school, who believed that by the tenth shock, when the victim demands to be free, most subjects would stop the experiment. They predicted that by the 300-volt shock, when the victim refuses to answer, only 3.73 percent of the subjects would still continue, and they believed that "only a little over one-tenth of one percent of the subjects would administer the highest shock on the board." Milgram suspected before the experiment that the obedience exhibited by Nazis reflected a distinct German character, and planned to use the American participants as a control group before using German participants, who were expected to behave closer to the Nazis. However, the unexpected results stopped him from conducting the same experiment on German participants. In Milgram's first set of experiments, 65 percent (26 of 40) of experiment participants administered the experiment's final massive 450-volt shock, and all administered shocks of at least 300 volts. Subjects were uncomfortable doing so, and displayed varying degrees of tension and stress: sweating, trembling, stuttering, biting their lips, groaning, and digging their fingernails into their skin; some even had fits of nervous laughter or seizures. Every participant paused the experiment at least once to question it. Most continued after being assured by the experimenter. Some said they would return the money they had been paid for participating. 
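The gap between prediction and outcome reported above can be put in numbers. A trivial calculation comparing the quoted predictions with the observed 26-of-40 result; the psychiatrists' "little over one-tenth of one percent" is taken here, as an assumption, to be 0.125%.

```python
# Predicted versus observed rates of full (450 V) obedience,
# using the figures quoted above.
observed = 26 / 40          # 65% in the first set of experiments
students = 1.2 / 100        # Yale seniors' average prediction (1.2 of 100)
psychiatrists = 0.125 / 100 # assumed reading of "a little over 0.1 percent"

print(f"Observed:                {observed:.1%}")
print(f"Students predicted:      {students:.1%} (off by ~{observed / students:.0f}x)")
print(f"Psychiatrists predicted: {psychiatrists:.3%} (off by ~{observed / psychiatrists:.0f}x)")
```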
Milgram summarized the experiment in his 1974 article "The Perils of Obedience". The original Simulated Shock Generator and Event Recorder, or "shock box", is located in the Archives of the History of American Psychology. Later, Milgram and other psychologists performed variations of the experiment throughout the world, with similar results. Milgram later investigated the effect of the experiment's locale on obedience levels by holding an experiment in an unregistered, backstreet office in a bustling city, as opposed to at Yale, a respectable university. The level of obedience, "although somewhat reduced, was not significantly lower." What made more of a difference was the proximity of the "learner" and the experimenter. There were also variations tested involving groups. Thomas Blass of the University of Maryland, Baltimore County performed a meta-analysis of the results of repeated performances of the experiment. He found that while the percentage of participants prepared to inflict fatal voltages ranged from 28% to 91%, there was no significant trend over time, and the average percentage for US studies (61%) was close to that for non-US studies (66%). According to Milgram's notes and recollections, recorded when fellow psychologist Philip Zimbardo asked him about the point, the participants who refused to administer the final shocks neither insisted that the experiment be terminated nor left the room to check the health of the victim without requesting permission to leave. Milgram created a documentary film titled "Obedience" showing the experiment and its results. He also produced a series of five social psychology films, some of which dealt with his experiments. The Milgram shock experiment raised questions about the research ethics of scientific experimentation because of the extreme emotional stress and inflicted insight suffered by the participants. Some critics, such as Gina Perry, argued that participants were not properly debriefed. In Milgram's defense, 84 percent of former participants surveyed later said they were "glad" or "very glad" to have participated; 15 percent chose neutral responses (92% of all former participants responding). Many later wrote expressing thanks, and Milgram repeatedly received offers of assistance and requests to join his staff from former participants. Six years later (at the height of the Vietnam War), one of the participants in the experiment wrote to Milgram, explaining why he was glad to have participated despite the stress. On June 10, 1964, the "American Psychologist" published a brief but influential article by Diana Baumrind titled "Some Thoughts on Ethics of Research: After Reading Milgram's 'Behavioral Study of Obedience'". Baumrind's criticism of the treatment of human participants in Milgram's studies stimulated a thorough revision of the ethical standards of psychological research. She argued that even though Milgram had obtained informed consent, he was still ethically responsible for ensuring the participants' well-being; when participants displayed signs of distress, such as sweating and trembling, the experimenter should have stepped in and halted the experiment. In his 1974 book, "Obedience to Authority: An Experimental View", Milgram argued that the ethical criticism provoked by his experiments arose because his findings were disturbing and revealed unwelcome truths about human nature. Others have argued that the ethical debate has diverted attention from more serious problems with the experiment's methodology. 
Milgram sparked direct critical response in the scientific community by claiming that "a common psychological process is centrally involved in both [his laboratory experiments and Nazi Germany] events." James Waller, Chair of Holocaust and Genocide Studies at Keene State College and formerly Chair of the Whitworth College Psychology Department, expressed the opinion that the Milgram experiments "do not correspond well" to the Holocaust events. In the opinion of Thomas Blass, the author of a scholarly monograph on the experiment ("The Man Who Shocked the World", published in 2004), the historical evidence pertaining to the actions of the Holocaust perpetrators speaks louder than words. In a 2004 issue of the journal "Jewish Currents", Joseph Dimow, a participant in the 1961 experiment at Yale University, wrote about his early withdrawal as a "teacher", suspicious "that the whole experiment was designed to see if ordinary Americans would obey immoral orders, as many Germans had done during the Nazi period." In 2012, the Australian psychologist Gina Perry investigated Milgram's data and writings and concluded that Milgram had manipulated the results, and that there was a "troubling mismatch between (published) descriptions of the experiment and evidence of what actually transpired." She wrote that "only half of the people who undertook the experiment fully believed it was real and of those, 66% disobeyed the experimenter". She described her findings as "an unexpected outcome" that "leaves social psychology in a difficult situation." Milgram elaborated two theories: the theory of conformism, under which a subject who lacks expertise defers decision-making to the group and its hierarchy, and the agentic state theory, wherein a person comes to view himself as the instrument for carrying out another person's wishes and therefore no longer sees himself as responsible for his actions. In his book "Irrational Exuberance", Yale finance professor Robert J. Shiller argues that other factors might partially explain the Milgram experiments. In a 2006 experiment, a computerized avatar was used in place of the learner receiving electrical shocks. Although the participants administering the shocks were aware that the learner was unreal, the experimenters reported that participants responded to the situation physiologically "as if it were real". Another explanation of Milgram's results invokes belief perseverance as the underlying cause: what "people cannot be counted on is to realize that a seemingly benevolent authority is in fact malevolent, even when they are faced with overwhelming evidence which suggests that this authority is indeed malevolent. Hence, the underlying cause for the subjects' striking conduct could well be conceptual, and not the alleged 'capacity of man to abandon his humanity ... as he merges his unique personality into larger institutional structures.'" This last explanation receives some support from a 2009 episode of the BBC science documentary series "Horizon", which involved a replication of the Milgram experiment. Of the twelve participants, only three refused to continue to the end of the experiment. Speaking during the episode, social psychologist Clifford Stott discussed the influence that the idealism of scientific inquiry had on the volunteers. He remarked: "The influence is ideological. It's about what they believe science to be, that science is a positive product, it produces beneficial findings and knowledge to society that are helpful for society. So there's that sense of science is providing some kind of system for good." Building on the importance of idealism, some recent researchers have suggested the 'engaged followership' perspective. 
Based on an examination of Milgram's archive, in a recent study, social psychologists Alexander Haslam, Stephen Reicher and Megan Birney, at the University of Queensland, discovered that people are less likely to follow the prods of an experimental leader when the prod resembles an order. However, when the prod stresses the importance of the experiment for science (i.e. "The experiment requires you to continue"), people are more likely to obey. The researchers suggest the perspective of "engaged followership": people are not simply obeying the orders of a leader, but instead are willing to continue the experiment because of their desire to support the scientific goals of the leader and because of a lack of identification with the learner. A neuroscientific study also supports this perspective: watching the learner receive electric shocks does not activate brain regions involved in empathic concern. In "Obedience to Authority" (1974), Milgram describes 19 variations of his experiment, some of which had not been previously reported. Several experiments varied the distance between the participant (teacher) and the learner. Generally, when the participant was physically closer to the learner, the participant's compliance decreased. In the variation where the learner's physical immediacy was closest, where the participant had to hold the learner's arm onto a shock plate, 30 percent of participants completed the experiment. The participant's compliance also decreased if the experimenter was physically farther away (Experiments 1–4). For example, in Experiment 2, where participants received telephonic instructions from the experimenter, compliance decreased to 21 percent; some participants deceived the experimenter by "pretending" to continue the experiment. In Experiment 8, an all-female contingent was used; previously, all participants had been men. Obedience did not significantly differ, though the women reported experiencing higher levels of stress. Experiment 10 took place in a modest office in Bridgeport, Connecticut, purporting to be run by the commercial entity "Research Associates of Bridgeport", with no apparent connection to Yale University, to eliminate the university's prestige as a possible factor influencing the participants' behavior. In those conditions, obedience dropped to 47.5 percent, though the difference was not statistically significant (see the sketch after this paragraph). Milgram also combined the effect of authority with that of conformity. In those experiments, the participant was joined by one or two additional "teachers" (also actors, like the "learner"). The behavior of the participants' peers strongly affected the results. In Experiment 17, when two additional teachers refused to comply, only four of 40 participants continued in the experiment. In Experiment 18, the participant performed a subsidiary task (reading the questions via microphone or recording the learner's answers) with another "teacher" who complied fully; in that variation, 37 of 40 continued with the experiment. Around the time of the release of "Obedience to Authority" in 1973–1974, a version of the experiment was conducted at La Trobe University in Australia. As reported by Perry in her 2012 book "Behind the Shock Machine", some of the participants experienced long-lasting psychological effects, possibly due to the lack of proper debriefing by the experimenter. 
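The Bridgeport comparison above (47.5 percent obedience versus 65 percent at Yale, "not statistically significant") can be checked with a quick two-proportion z-test. The sketch assumes n = 40 in each condition, consistent with the other variations quoted above (47.5% of 40 is 19 subjects; the Yale baseline was 26 of 40).

```python
# Two-proportion z-test: Yale baseline vs. Bridgeport office (Experiment 10).
from math import sqrt, erf

x1, n1 = 26, 40     # Yale baseline: 65% fully obedient
x2, n2 = 19, 40     # Bridgeport: 47.5% (assumed n = 40)
p1, p2 = x1 / n1, x2 / n2

pooled = (x1 + x2) / (n1 + n2)
se = sqrt(pooled * (1 - pooled) * (1 / n1 + 1 / n2))
z = (p1 - p2) / se

# Two-sided p-value from the standard normal CDF.
p_two_sided = 2 * (1 - 0.5 * (1 + erf(abs(z) / sqrt(2))))
print(f"z = {z:.2f}, two-sided p = {p_two_sided:.2f}")  # z ~ 1.58, p ~ 0.11
```

With p around 0.11, the drop indeed falls short of the conventional 0.05 significance threshold, matching the claim in the text.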
In 2002, the British artist Rod Dickinson created "The Milgram Re-enactment", an exact reconstruction of parts of the original experiment, including the uniforms, lighting, and rooms used. An audience watched the four-hour performance through one-way glass windows. A video of this performance was first shown at the CCA Gallery in Glasgow in 2002. A partial replication of the experiment was staged by British illusionist Derren Brown and broadcast on UK's Channel 4 in "The Heist" (2006). Another partial replication of the experiment was conducted by Jerry M. Burger in 2006 and broadcast on the Primetime series "Basic Instincts". Burger noted that "current standards for the ethical treatment of participants clearly place Milgram's studies out of bounds." In 2009, Burger was able to receive approval from the institutional review board by modifying several of the experimental protocols. Burger found obedience rates virtually identical to those reported by Milgram in 1961–62, even while meeting current ethical regulations of informing participants. In addition, half the replication participants were female, and their rate of obedience was virtually identical to that of the male participants. Burger also included a condition in which participants first saw another participant refuse to continue. However, participants in this condition obeyed at the same rate as participants in the base condition. In the 2010 French documentary "Le Jeu de la Mort" ("The Game of Death"), researchers recreated the Milgram experiment with an added critique of reality television by presenting the scenario as a game show pilot. Volunteers were given €40 and told that they would not win any money from the game, as this was only a trial. Only 16 of 80 "contestants" (teachers) chose to end the game before delivering the highest-voltage punishment. The experiment was performed on "Dateline NBC" in an episode airing April 25, 2010. The Discovery Channel aired the "How Evil Are You?" segment of "Curiosity" on October 30, 2011. The episode was hosted by Eli Roth, who produced results similar to the original Milgram experiment, though the highest-voltage punishment used was 165 volts, rather than 450 volts. Charles Sheridan and Richard King (at the University of Missouri and the University of California, Berkeley, respectively) hypothesized that some of Milgram's subjects may have suspected that the victim was faking, so they repeated the experiment with a real victim: a "cute, fluffy puppy" who was given real, albeit apparently harmless, electric shocks. Their findings were similar to those of Milgram: half of the male subjects and all of the females obeyed throughout. Many subjects showed high levels of distress during the experiment, and some openly wept. In addition, Sheridan and King found that the duration for which the shock button was pressed decreased as the shocks got higher, meaning that subjects were more hesitant at higher shock levels.
https://en.wikipedia.org/wiki?curid=19009
Monarch A monarch is a sovereign head of state in a monarchy. A monarch may exercise the highest authority and power in the state, or others may wield that power on behalf of the monarch. Usually a monarch either personally inherits the lawful right to exercise the state's sovereign rights (often referred to as "the throne" or "the crown") or is selected by an established process from a family or cohort eligible to provide the nation's monarch. Alternatively, an individual may become monarch by right of conquest, by acclamation, or by a combination of means. A monarch usually reigns for life or until abdication. If a young child is crowned the monarch, a regent is often appointed to govern until the monarch reaches the requisite adult age to rule. Monarchs' actual powers vary from one monarchy to another and in different eras: at one extreme, they may be autocrats (absolute monarchy) wielding genuine sovereignty; at the other, they may be ceremonial heads of state who exercise little or no direct power, or only reserve powers, with actual authority vested in a parliament or other body (constitutional monarchy). A monarch can reign in multiple monarchies simultaneously: for example, the monarchy of Canada and the monarchy of the United Kingdom are separate states, but they share the same monarch through personal union. Monarchs, as such, bear a variety of titles: king or queen, prince or princess (e.g., Sovereign Prince of Monaco), emperor or empress (e.g., Emperor of China, Emperor of Ethiopia, Emperor of Japan, Emperor of India), archduke, duke or grand duke (e.g., Grand Duke of Luxembourg), emir (e.g., Emir of Qatar), sultan (e.g., Sultan of Oman), or pharaoh. "King" is mostly used as a general term for monarchs regardless of title, and is the usual title for a male monarch; a king may in some cases also be a queen's husband. If both members of a couple reign in their own right, neither is generally considered to be a consort. Monarchy is political or sociocultural in nature, and is generally (but not always) associated with hereditary rule. Most monarchs, both historically and in the present day, have been born and brought up within a royal family (whose rule over a period of time is referred to as a dynasty) and trained for future duties. Different systems of succession have been used, such as proximity of blood (male-preference or absolute), primogeniture, agnatic seniority, Salic law, etc. While traditionally most monarchs have been male, female monarchs have also ruled; the term queen regnant refers to a ruling monarch, as distinct from a queen consort, the wife of a reigning king. Some monarchies are non-hereditary. In an elective monarchy, the monarch is elected but otherwise serves as any other monarch. Historical examples of elective monarchy include the Holy Roman Emperors (chosen by prince-electors, but often coming from the same dynasty) and the free election of kings of the Polish–Lithuanian Commonwealth. Modern examples include the Yang di-Pertuan Agong of Malaysia, who is appointed by the Conference of Rulers every five years or after the king's death, and the pope of the Roman Catholic Church, who serves as sovereign of the Vatican City State and is elected to a life term by the College of Cardinals. In recent centuries, many states have abolished the monarchy and become republics (though see, e.g., the United Arab Emirates). Advocacy of government by a republic is called republicanism, while advocacy of monarchy is called monarchism. 
A principal advantage of hereditary monarchy is the immediate continuity of national leadership, as illustrated in the classic phrase "The [old] King is dead. Long live the [new] King!" In cases where the monarch serves mostly as a ceremonial figure (e.g. most modern constitutional monarchies), real leadership does not depend on the monarch. A form of government may, in fact, be hereditary without being considered a monarchy, such as a family dictatorship. Monarchies take a wide variety of forms, such as the two co-princes of Andorra, positions held simultaneously by the Roman Catholic Bishop of Urgel (Spain) and the elected President of France (although strictly Andorra is a diarchy). Similarly, the Yang di-Pertuan Agong of Malaysia is considered a monarch despite only holding the position for five years at a time. Hereditary succession within one patrilineal family has been most common (although see the Rain Queen), with a preference for children over siblings and sons over daughters. In Europe, some peoples practiced equal division of land and regalian rights among sons or brothers, as in the Germanic states of the Holy Roman Empire, until after the medieval era and sometimes (e.g., the Ernestine duchies) into the 19th century. Other European realms practiced one form or another of primogeniture, under which a lord was succeeded by his eldest son or, if he had none, by his brother, his daughters, or sons of daughters. The system of tanistry was semi-elective and gave weight also to ability and merit. The Salic law, practiced in France and in the Italian territories of the House of Savoy, stipulated that only men could inherit the crown. In most fiefs, in the event of the demise of all legitimate male members of the patrilineage, a female of the family could succeed (semi-Salic law). In most realms, daughters and sisters were eligible to succeed a ruling kinsman before more distant male relatives (male-preference primogeniture), but sometimes the husband of the heiress became the ruler, and most often also received the title, "jure uxoris". Spain today continues this model of succession law, in the form of cognatic primogeniture. In more complex medieval cases, the sometimes conflicting principles of proximity and primogeniture battled, and outcomes were often idiosyncratic. As the average life span increased, the eldest son was more likely to reach majority age before the death of his father, and primogeniture became increasingly favored over proximity, tanistry, seniority, and election. In 1980, Sweden became the first monarchy to declare "equal primogeniture" (also called "absolute primogeniture" or "full cognatic primogeniture"), meaning that the eldest child of the monarch, whether female or male, ascends to the throne. Other nations have since adopted this practice: the Netherlands in 1983, Norway in 1990, Belgium in 1991, Denmark in 2009, and Luxembourg in 2011. The United Kingdom adopted absolute (equal) primogeniture on April 25, 2013, following agreement by the prime ministers of the sixteen Commonwealth realms at the 22nd Commonwealth Heads of Government Meeting. In some monarchies, such as Saudi Arabia, succession to the throne usually first passes to the monarch's next eldest brother, and so on through his other brothers, and only after them to the monarch's children ("agnatic seniority"). In some other monarchies (e.g. Jordan), the monarch chooses who will be his successor within the royal family, who need not necessarily be his eldest son. 
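The succession rules discussed above are, in effect, sort orders over the members of a dynasty. A toy sketch contrasting absolute primogeniture with male-preference primogeniture; the names and family below are entirely hypothetical.

```python
# Two of the succession rules described above, modelled as sort keys.
from dataclasses import dataclass

@dataclass
class Child:
    name: str
    birth_order: int   # 1 = eldest
    is_male: bool

# A hypothetical royal family.
children = [
    Child("Alice", 1, False),
    Child("Bert", 2, True),
    Child("Clara", 3, False),
]

# Absolute (equal) primogeniture: eldest child first, regardless of sex.
absolute = sorted(children, key=lambda c: c.birth_order)

# Male-preference primogeniture: sons (by age) ahead of daughters (by age).
male_pref = sorted(children, key=lambda c: (not c.is_male, c.birth_order))

print("absolute:   ", [c.name for c in absolute])   # Alice, Bert, Clara
print("male-pref:  ", [c.name for c in male_pref])  # Bert, Alice, Clara
```

Under the pre-2013 British rules the younger son would displace his elder sister, exactly the behaviour the male-preference key produces; the 1980 Swedish reform amounts to switching to the first key.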
Whatever the rules of succession, there have been many cases of a monarch being overthrown and replaced by a usurper, who would often install his own family on the throne.

A series of pharaohs ruled Ancient Egypt over the course of three millennia (circa 3150 BC to 31 BC) until it was conquered by the Roman Empire. In the same time period several kingdoms flourished in the nearby Nubia region, with at least one of them, that of the so-called A-Group culture, apparently influencing the customs of Egypt itself. From the 6th to 19th centuries, Egypt was variously part of the Byzantine Empire, Islamic Empire, Mamluk Sultanate, Ottoman Empire and British Empire, each with a distant monarch. The Sultanate of Egypt was a short-lived protectorate of the United Kingdom from 1914 until 1922, when it became the Kingdom of Egypt and Sultan Fuad I changed his title to King. After the Egyptian Revolution of 1952 the monarchy was dissolved and Egypt became a republic.

West Africa hosted the Kanem Empire (700–1376) and its successor, the Bornu principality, which survives to the present day as one of the traditional states of Nigeria. In the Horn of Africa, the Kingdom of Aksum and later the Zagwe Dynasty, Ethiopian Empire (1270–1974), and Aussa Sultanate were ruled by a series of monarchs. Haile Selassie, the last Emperor of Ethiopia, was deposed in a communist coup. Various Somali sultanates also existed, including the Adal Sultanate (led by the Walashma dynasty of the Ifat Sultanate), Sultanate of Mogadishu, Ajuran Sultanate, Warsangali Sultanate, Geledi Sultanate, Majeerteen Sultanate and Sultanate of Hobyo.

Central and Southern Africa were largely isolated from other regions until the modern era, but they did later feature kingdoms like the Kingdom of Kongo (1400–1914). The Zulu people formed a powerful Zulu Kingdom in 1816, one that was subsequently absorbed into the Colony of Natal in 1897. The Zulu king continues to hold a hereditary title and an influential cultural position in contemporary South Africa, although he has no direct political power. Other tribes in the country, such as the Xhosa and the Tswana, have also had and continue to have a series of kings and chiefs (namely the "Inkosis" and the "Kgosis") whose local precedence is recognised, but who exercise no legal authority.

As part of the Scramble for Africa, Europeans conquered, bought, or established African kingdoms and styled themselves as monarchs of them. Currently, the African nations of Morocco, Lesotho, and Eswatini (Swaziland) are sovereign monarchies under dynasties that are native to the continent. Places like St. Helena, Ceuta, Melilla and the Canary Islands are ruled by the Queen of the United Kingdom of Great Britain and Northern Ireland or the King of Spain. So-called "sub-national monarchies" of varying sizes can be found all over the rest of the continent; e.g., the Yoruba city-state of Akure in south-western Nigeria is something of an elective monarchy: its reigning "Oba Deji" has to be chosen by an electoral college of nobles from amongst a finite collection of royal princes of the realm upon the death or removal of an incumbent.

Within the Holy Roman Empire, different titles were used by nobles exercising various degrees of sovereignty within their borders (see below). Such titles were granted or recognised by the Emperor or Pope. Adoption of a new title to indicate sovereign or semi-sovereign status was not always recognized by other governments or nations, sometimes causing diplomatic problems.
During the nineteenth century many small monarchies in Europe merged with other territories to form larger entities, and following World War I and World War II, many monarchies were abolished; of those remaining, all except Luxembourg, Liechtenstein, Andorra, Vatican City, and Monaco were headed by a king or queen.

In China, before the abolition of the monarchy in 1912, the Emperor of China was traditionally regarded as the ruler of "All under heaven". "King" is the usual translation for the term "wang" 王, the sovereign before the Qin dynasty and during the Ten Kingdoms period. During the early Han dynasty, China had a number of kingdoms, each about the size of a province and subordinate to the Emperor.

In Korea, "Daewang" (great king), or "Wang" (king), was a Chinese royal style used in many states rising from the dissolution of Gojoseon: Buyeo, Goguryeo, Baekje, Silla, Balhae, Goryeo, and Joseon. The legendary Dangun Wanggeom founded the first kingdom, Gojoseon. Some scholars maintain that the term "Dangun" also refers to a title used by all rulers of Gojoseon and that "Wanggeom" is the proper name of the founder. "Gyuwon Sahwa" (1675) describes The Annals of the Danguns as a collection of nationalistic legends. The monarchs of Goguryeo and some monarchs of Silla used the title "Taewang", meaning "great king". The early monarchs of Silla used the titles "Geoseogan", "Chachaung", "Isageum", and finally "Maripgan" until 503. The title "Gun" (prince) can also refer to the dethroned rulers of the Joseon dynasty. Under the Korean Empire (1897–1910), the rulers of Korea were given the title "Hwangje", meaning "emperor". Today, members of the Korean Imperial Family continue to participate in numerous traditional ceremonies, and groups exist to preserve Korea's imperial heritage. The Japanese monarchy is now the only monarchy still to use the title of Emperor.

In modern history, between 1925 and 1979, Iran was ruled by two emperors from the Pahlavi dynasty who used the title "Shahanshah" ("King of Kings"). The last Iranian Shahanshah was Mohammad Reza Pahlavi, who was forced to abdicate the throne as a result of the Iranian Revolution. The Persian (Iranian) monarchy in fact goes back to about 2700 BC (see List of Kings of Persia), but reached its ultimate height and glory when Cyrus the Great (known as Kourosh the Great in Iran) founded the Achaemenid dynasty; under his rule, the empire embraced all the previous civilized states of the ancient Near East, expanded vastly, and eventually conquered most of Southwest Asia and much of Central Asia and the Caucasus. From the Mediterranean Sea and the Hellespont in the west to the Indus River in the east, Cyrus the Great created the largest empire the world had yet seen.

Thailand and Bhutan are like the United Kingdom in that they are constitutional monarchies ruled by a king. Jordan and many other Middle Eastern monarchies are ruled by a malik, and parts of the United Arab Emirates, such as Dubai, are still ruled by monarchs.

Saudi Arabia is the largest Arab state in Western Asia by land area and the second-largest in the Arab world (after Algeria). It was founded by Abdul-Aziz bin Saud in 1932, although the conquests which eventually led to the creation of the Kingdom began in 1902, when he captured Riyadh, the ancestral home of his family, the House of Saud. Succession to the throne was limited to sons of Ibn Saud until 2015, when a grandson was elevated to Crown Prince.
The Saudi Arabian government has been an absolute monarchy since its inception, and designates itself as Islamic. The King bears the title "Custodian of the Two Holy Mosques" in reference to the two holiest places in Islam: Masjid al-Haram in Mecca and Masjid al-Nabawi in Medina. Oman is led by Sultan Qaboos bin Said Al Said. The Kingdom of Jordan, one of the Middle East's more modern monarchies, is also ruled by a malik. In Arab and Arabized countries, malik (king) is the principal word for a monarch and is superior to all other titles.

Sri Lanka had a complex system of monarchies from 543 BC to 1815. Between 47 BC and 42 BC, Anula of Sri Lanka became the country's first female head of state, as well as Asia's first female head of state.

In Malaysia's constitutional monarchy, the Yang di-Pertuan Agong (The Supreme Lord of the Federation) is "de facto" rotated every five years among the nine rulers of the Malay states of Malaysia (those nine of the thirteen states of Malaysia that have hereditary royal rulers), elected by the "Majlis Raja-Raja" (Conference of Rulers).

Under Brunei's 1959 constitution, the Sultan of Brunei is the head of state with full executive authority, including, since 1962, emergency powers. The Prime Minister of Brunei is a title held by the Sultan; as prime minister, the Sultan presides over the cabinet.

Cambodia has been a kingdom since the 1st century. The power of the absolute monarchy was reduced when the country became the French Protectorate of Cambodia from 1863 to 1953. It returned to an absolute monarchy from 1953 until the establishment of a republic following the 1970 coup. The monarchy was restored as a constitutional monarchy in 1993, with the king as a largely symbolic figurehead.

In the Philippines, the pre-colonial Filipino nobility, variously titled "harì" (today meaning "king"), "Lakan", "Raja" and "Datu", belonged to the caste called "Uring Maharlika" (Noble Class). When the islands were annexed to the Spanish Empire in the late 16th century, the Spanish monarch became the sovereign, while local rulers often retained their prestige as part of the Christianised nobility called the "Principalía". After the Spanish–American War, the country was ceded to the United States of America and made into a territory and eventually a Commonwealth, thus ending the monarchical era. While the Philippines is today a republic, the Sultan of Sulu and the Sultan of Maguindanao retain their titles for ceremonial purposes, but are considered ordinary citizens under the 1987 Constitution.

Bhutan has been an independent kingdom since 1907. The first Druk Gyalpo ("Dragon King") was elected, and the position thereafter became a hereditary absolute monarchy. It became a constitutional monarchy in 2008.

Tibet was a monarchy from the time of the Tibetan Empire in the 6th century. It was ruled by the Yuan Dynasty following the Mongol invasion in the 13th century and became an effective diarchy with the Dalai Lama as co-ruler. It came under the rule of the Chinese Qing Dynasty from 1724 until 1912, when it gained de facto independence. The Dalai Lama then became an absolute temporal monarch until the incorporation of Tibet into the People's Republic of China in 1951.

Nepal was a monarchy for most of its history until becoming a federal republic in 2008. The concept of monarchy existed in the Americas long before the arrival of European colonialists.
When the Europeans arrived, they referred to the territories of different aboriginal groups as kingdoms, and the leaders of these groups were often referred to by the Europeans as kings, particularly hereditary leaders. A variety of pre-colonial titles were used.

The first local monarch to emerge in North America after colonization was Jean-Jacques Dessalines, who declared himself Emperor of Haiti on September 22, 1804. Haiti again had an emperor, Faustin I, from 1849 to 1859. In South America, Brazil had a royal house ruling as emperors between 1822 and 1889, under Emperors Pedro I and Pedro II. Between 1931 and 1983, nine former British colonies attained independence as kingdoms. All, including Canada, are in a personal union relationship under a shared monarch. Therefore, though today there are legally ten American monarchs, one and the same person occupies each distinct position.

In addition to these sovereign states, there are also a number of sub-national monarchies. In Bolivia, for example, the Afro-Bolivian king claims descent from an African dynasty that was taken from its homeland and sold into slavery. Though largely a ceremonial title today, the position of "king of the Afro-Bolivians" is officially recognized by the government of Bolivia.

Polynesian societies were ruled by an "ariki" from ancient times. The title is variously translated as "supreme chief", "paramount chief" or "king". The Kingdom of Tahiti was founded in 1788. Sovereignty was ceded to France in 1880, although descendants of the Pōmare Dynasty still claim the title of King of Tahiti. The Kingdom of Hawaii was established in 1795 and overthrown in 1893. An independent Kingdom of Rarotonga was established in 1858; it became a protectorate of the United Kingdom at its own request in 1893. Seru Epenisa Cakobau ruled the short-lived Kingdom of Fiji, a constitutional monarchy, from 1871 to 1874, when he voluntarily ceded sovereignty of the islands to the United Kingdom. After independence in 1970, the Dominion of Fiji retained the British monarch as head of state until it became a republic following a military coup in 1987.

Australia, New Zealand (including the Cook Islands and Niue), Papua New Guinea, Solomon Islands and Tuvalu are sovereign states within the Commonwealth of Nations that currently have Elizabeth II as their reigning constitutional monarch. The Pitcairn Islands are part of the British Overseas Territories, with Elizabeth II as the reigning constitutional monarch. Tonga is the only remaining sovereign kingdom in Oceania. It has had a monarch since the 10th century and became a constitutional monarchy in 1875. In 2008, King George Tupou V relinquished most of the powers of the monarchy, and the position is now largely ceremonial. In New Zealand the position of Māori King was established in 1858. The role is largely cultural and ceremonial and has no legal power. Uvea, Alo and Sigave in the French territory of Wallis and Futuna have non-sovereign elective monarchs.

The usage and meaning of a monarch's specific title have historically been defined by tradition, law and diplomatic considerations. Note that some titles borne by monarchs have several meanings and may not exclusively designate a monarch. A prince may be a person of royal blood (some languages uphold this distinction; see Fürst). A duke may belong to a peerage and hold a dukedom (title) but no duchy (territory). In Imperial Russia, a grand duke was a son or patrilineal grandson of the Tsar or Tsarina.
Holders of titles in these alternative meanings did not enjoy the same status as monarchs of the same title. Within the Holy Roman Empire, there were numerous titles used by noblemen whose authority within their territory sometimes approached sovereignty, even though they acknowledged the Holy Roman Emperor as suzerain: Elector, Grand Duke, Margrave, Landgrave and Count Palatine, as well as secular princes such as kings, dukes, princes and "princely counts" ("Gefürstete Grafen"), and ecclesiastical princes such as Prince-Archbishops, Prince-Bishops and Prince-Abbots. A ruler with a title below emperor or king might still be regarded as a monarch, outranking a nobleman of the same ostensible title (e.g., Antoine, Duke of Lorraine, a reigning sovereign, and his younger brother, Claude, Duke of Guise, a nobleman in the peerage of France). The table below lists titles in approximate order of precedence. According to protocol, any holder of a title indicating sovereignty took precedence over any non-sovereign titleholder. Where a difference exists below, male titles are placed to the left and female titles to the right of the slash. It is not uncommon for people who are not generally seen as monarchs to nevertheless use monarchical titles; there are at least five such cases.
https://en.wikipedia.org/wiki?curid=19012
Monarchy A monarchy is a form of government in which a person, the monarch, is head of state for life or until abdication. The political legitimacy and governing power of the monarch may vary from purely symbolic (crowned republic), to restricted (constitutional monarchy), to fully autocratic (absolute monarchy), combining executive, legislative and judicial power. In most cases the succession of monarchies is hereditary, often producing dynastic periods, but there are also elective and self-proclaimed monarchies. Aristocrats, though not inherent to monarchies, often serve as the pool of persons from which the monarch is drawn and fill the constituting institutions (e.g. diet and court), giving many monarchies oligarchic elements. A monarchy can be a polity through unity, personal union, vassalage or federation, and monarchs can carry various titles such as king, queen, emperor, khan, caliph, tsar, or sultan.

The republican form of government has been established as the opposing and main alternative to monarchy, although some republics have seen infringements of this principle through lifelong or even hereditary heads of state. Republics' heads of state are often styled "President" or a variant thereof. Monarchy was the most common form of government, and nearly the only one, until the 20th century. Forty-five sovereign nations in the world have a monarch as head of state, including sixteen Commonwealth realms that each have Queen Elizabeth II (in separate capacities). Most modern monarchs are constitutional monarchs, who retain a unique legal and ceremonial role but exercise limited or no political power under the nation's constitution. In some nations, however, such as Brunei, Morocco, Oman, Qatar, Saudi Arabia, Eswatini and Thailand, the hereditary monarch has more political influence than any other single source of authority in the nation, either by tradition or by a constitutional mandate.

The word "monarch" comes from the Ancient Greek "monárkhēs" (μονάρχης), derived from "mónos" (μόνος, "one, single") and "árkhō" (ἄρχω, "to rule") [compare "árkhōn" (ἄρχων, "ruler, chief")]. It referred to a single, at least nominally absolute, ruler. In current usage the word "monarchy" usually refers to a traditional system of hereditary rule, as elective monarchies are quite rare.

The similar form of societal hierarchy known as chiefdom or tribal kingship is prehistoric. The oldest recorded and evidenced monarchs were Narmer, Pharaoh of Egypt c. 3100 BCE, and Enmebaragesi, a Sumerian King of Kish c. 2600 BCE. From earliest historical times, with the Egyptian and Mesopotamian monarchs as well as in reconstructed Proto-Indo-European religion, the king held sacral functions directly connected to sacrifice, or was considered by his people to have divine ancestry. In Germanic antiquity, kingship was primarily a sacral function. The king was directly hereditary for some tribes, while for others he was elected from among eligible members of royal families by the thing. The role of the Roman emperor as the protector of Christianity eventually led to monarchs ruling 'by the Grace of God' in the Christian Middle Ages; only later, in the early modern period, did a conflation of (increased) power with these sacral aspects held by the Germanic kings bring forth the notion of the "divine right of kings". The Japanese and Nepalese monarchs continued to be considered living gods into the modern period.
Polybius identified monarchy as one of three "benign" basic forms of government (monarchy, aristocracy, and democracy), opposed to the three "malignant" basic forms of government (tyranny, oligarchy, and ochlocracy). The monarch in classical antiquity is often identified as "king" or "ruler" (translating "archon", "basileus", "rex", "tyrannos", etc.) or as "queen" ("basilinna"). Polybius originally understood monarchy as a component of republics, but since antiquity monarchy has been contrasted with forms of republic, where executive power is wielded by free citizens and their assemblies. In antiquity, some monarchies were abolished in favour of such assemblies in Rome (Roman Republic, 509 BCE) and Athens (Athenian democracy, 500 BCE).

Monarchy has been challenged by evolving parliamentarism, e.g. through regional assemblies (such as the Icelandic Commonwealth, the Swiss "Landsgemeinde" and later "Tagsatzung", and the High Medieval communal movement linked to the rise of medieval town privileges), and by modern anti-monarchism, e.g. the temporary overthrow of the English monarchy by the Parliament of England in 1649, the American Revolution of 1776 and the French Revolution of 1789. One of many opponents of that trend was Elizabeth Dawbarn, whose anonymous "Dialogue between Clara Neville and Louisa Mills, on Loyalty" (1794) features "silly Louisa, who admires liberty, Tom Paine and the USA, [who is] lectured by Clara on God's approval of monarchy" and on the influence women can exert on men.

Advocacy of the abolition of a monarchy in favour of a republic has been called republicanism, while the advocacy of monarchies is called monarchism. Much of 19th-century politics featured a division between diverse republicanism (such as anti-monarchist radicalism) and conservative or reactionary monarchism. In the 20th century many countries abolished monarchy and became republics, especially in the wake of World War I and World War II. Monarchies today often continue to exist as so-called crowned republics, and have survived particularly in small states.

Monarchies are associated with hereditary reign, in which monarchs reign for life and the responsibilities and power of the position pass to their child or another member of their family when they die. Most monarchs, both historically and in the modern day, have been born and brought up within a royal family, the centre of the royal household and court. Growing up in a royal family (called a dynasty when it continues for several generations), future monarchs are often trained for their expected future responsibilities as monarch. Different systems of hereditary succession have been used, such as proximity of blood, primogeniture, and agnatic seniority (Salic law). While most monarchs in history have been male, many female monarchs have also reigned. The term "queen regnant" refers to a ruling monarch, while "queen consort" refers to the wife of a reigning king. Rule may be hereditary in practice without being considered a monarchy: there have been some family dictatorships (and also political families) in many democracies. The principal advantage of hereditary monarchy is the immediate continuity of leadership (as evidenced in the classic phrase "The King is dead. Long live the King!").

Some monarchies are not hereditary. In an elective monarchy, monarchs are elected or appointed by some body (an electoral college) for life or a defined period.
Four elective monarchies exist today: Cambodia, Malaysia and the United Arab Emirates are 20th-century creations, while one (the papacy) is ancient.

A self-proclaimed monarchy is established when a person claims the monarchy without any historical ties to a previous dynasty. There are examples of republican leaders who have proclaimed themselves monarchs: Napoleon I of France declared himself Emperor of the French and ruled the First French Empire after having held the title of First Consul of the French Republic for five years following his seizure of power in the coup of 18 Brumaire. President Jean-Bédel Bokassa of the Central African Republic declared himself Emperor of the Central African Empire in 1976. Yuan Shikai, the first formal President of the Republic of China, crowned himself Emperor of the short-lived "Empire of China" a few years after the Republic of China was founded.

Most monarchies have only a single person acting as monarch at any given time, although two monarchs have ruled simultaneously in some countries, a situation known as diarchy. Historically this was the case in the ancient Greek city-state of Sparta, in 17th-century Russia, and in the Empire of Austria-Hungary from 1867 until its collapse in the wake of World War I. There are also examples of joint sovereignty of spouses, parent and child, or other relatives (such as William III and Mary II in the kingdoms of England and Scotland, tsars Peter I and Ivan V of Russia, and Charles I and Joanna of Castile). Andorra is currently the world's only constitutional diarchy, a co-principality. Located in the Pyrenees between Spain and France, it has two co-princes: the bishop of Urgell in Spain (a prince-bishop) and the president of France (a role derived "ex officio" from the French kings, who themselves inherited the title from the counts of Foix). It is the only situation in which an independent country's (co-)monarch is democratically elected by the citizens of another country.

In a personal union, separate independent states share the same person as monarch, but each realm retains separate laws and government. The sixteen separate Commonwealth realms are sometimes described as being in a personal union with Queen Elizabeth II as monarch; however, they can also be described as being in a shared monarchy. A regent may rule when the monarch is a minor, absent, or debilitated. A pretender is a claimant to an abolished throne or a throne already occupied by somebody else. Abdication is the act of formally giving up one's monarchical power and status. Monarchs may mark the ceremonial beginning of their reigns with a coronation or enthronement.

Monarchy, especially absolute monarchy, is sometimes linked to religious aspects; many monarchs once claimed the right to rule by the will of a deity (Divine Right of Kings, Mandate of Heaven), or a special connection to a deity (sacred king), or even purported to be divine kings, or incarnations of deities themselves (imperial cult). Many European monarchs have been styled "Fidei defensor" (Defender of the Faith); some hold official positions relating to the state religion or established church. In the Western political tradition, a morally based, balanced monarchy was stressed as the ideal form of government, and little attention was paid to modern-day ideals of egalitarian democracy: e.g. Saint Thomas Aquinas unapologetically declared: "Tyranny is wont to occur not less but more frequently on the basis of polyarchy [rule by many, i.e. oligarchy or democracy] than on the basis of monarchy." ("On Kingship").
However, Thomas Aquinas also stated that the ideal monarchical system would have, at lower levels of government, both an aristocracy and elements of democracy in order to create a balance of power. The monarch would also be subject to both natural and divine law, and to the Church in matters of religion. In Dante Alighieri's "De Monarchia", a spiritualised, imperial Catholic monarchy is strongly promoted according to a Ghibelline world-view in which the "royal religion of Melchizedek" is emphasised against the priestly claims of the rival papal ideology. In Saudi Arabia, the king is a head of state who is both the absolute monarch of the country and the Custodian of the Two Holy Mosques of Islam (خادم الحرمين الشريفين).

Monarchs can have various titles. Common European titles of monarchs (in hierarchical order of nobility) are emperor or empress (from Latin "imperator" or "imperatrix"), king or queen, grand duke or grand duchess, prince or princess, duke or duchess. Some early modern European titles (especially in German states) included elector (German: "Kurfürst", literally "electing prince"), margrave (German: "Markgraf", equivalent to the French title "marquis", literally "count of the borderland"), and burgrave (German: "Burggraf", literally "count of the castle"). Lesser titles include count and princely count. Slavic titles include knyaz and tsar (царь) or tsaritsa (царица), the word "tsar" being derived from the Roman imperial title "Caesar".

In the Muslim world, titles of monarchs include caliph (successor to the Islamic prophet Muhammad and a leader of the entire Muslim community), padishah (emperor), sultan or sultana, shâhanshâh (emperor), shah, malik (king) or malikah (queen), emir (commander, prince) or emira (princess), sheikh or sheikha, and imam (used in Oman).

East Asian titles of monarchs include "huángdì" (emperor or empress regnant), "tiānzǐ" (son of heaven), "tennō" (emperor) or "josei tennō" (empress regnant), "wang" (king) or "yeowang" (queen regnant), "hwangje" (emperor) or "yeohwang" (empress regnant). South Asian and South East Asian titles included "mahārāja" ("high king") or "maharani" ("high queen"), "raja" (king) and "rana" (king) or "rani" (queen), and "ratu" (South East Asian queen). Historically, Mongolic and Turkic monarchs have used the titles "khan" and "khagan" (emperor) or "khatun" and "khanum"; Ancient Egyptian monarchs used the title "pharaoh" for men and women. In the Ethiopian Empire, monarchs used the title "nəgusä nägäst" (king of kings) or "nəgəstä nägäst" (queen of kings).

Many monarchs are addressed with particular styles or manners of address, like "Majesty", "Royal Highness", "By the Grace of God", "Amīr al-Mu'minīn" ("Leader of the Faithful"), "Hünkar-i Khanedan-i Âl-i Osman" ("Sovereign of the Sublime House of Osman"), "Yang Maha Mulia Seri Paduka Baginda" ("Majesty"), "Jeonha" ("Majesty"), "Tennō Heika" (literally "His Majesty the heavenly sovereign"), or "Bìxià" ("Bottom of the Steps"). Sometimes titles are used to express claims to territories that are not held in fact (for example, English claims to the French throne) or titles not recognised (antipopes). Also, after a monarchy is deposed, former monarchs and their descendants are often given alternative titles (the deposed Portuguese royal line, for instance, uses the hereditary title Duke of Braganza).

In some cases monarchs are dependent on other powers (see vassals, suzerainty, puppet state, hegemony). In the British colonial era, indirect rule under a paramount power existed, such as the princely states under the British Raj.
In Botswana, South Africa, Ghana and Uganda, the ancient kingdoms and chiefdoms that the colonialists met when they first arrived on the continent are now constitutionally protected as regional or sectional entities. Furthermore, in Nigeria, though the hundreds of sub-regional polities that exist there are not provided for in the current constitution, they are nevertheless legally recognised aspects of the structure of governance that operates in the nation. For example, the Yoruba city-state of Akure in south-western Nigeria is something of an elective monarchy: its reigning "Oba Deji" has to be chosen by an electoral college of nobles from amongst a finite collection of royal princes of the realm upon the death or removal of an incumbent. In addition to these five countries, dependent monarchies of varied sizes and complexities exist all over the rest of the continent of Africa.

Monarchies pre-date polities like nation states and even territorial states. A nation or constitution is not necessary in a monarchy, since a person, the monarch, binds the separate territories and political legitimacy (e.g. in personal union) together. Furthermore, monarchies can be bound to territories (e.g., the King of Norway) or to peoples (e.g., the King of the Belgians).

In a hereditary monarchy, the position of monarch is inherited according to a statutory or customary order of succession, usually within one royal family tracing its origin through a historical dynasty or bloodline. This usually means that the heir to the throne is known well in advance of becoming monarch, ensuring a smooth succession. Primogeniture, in which the eldest child of the monarch is first in line to become monarch, is the most common system in hereditary monarchy. The order of succession is usually affected by rules on gender. Historically "agnatic primogeniture" or "patrilineal primogeniture" was favoured, that is, inheritance according to seniority of birth among the sons of a monarch or head of family, with sons and their male issue inheriting before brothers and their issue, and male-line males inheriting before females of the male line. This is the same as semi-Salic primogeniture. Complete exclusion of females from dynastic succession is commonly referred to as application of the Salic law (see "Terra salica").

Before primogeniture was enshrined in European law and tradition, kings would often secure the succession by having their successor (usually their eldest son) crowned during their own lifetime, so for a time there would be two kings in coregency: a senior king and a junior king. Examples were Henry the Young King of England and the early Direct Capetians in France.

Sometimes, however, primogeniture can operate through the female line. In 1980, Sweden became the first European monarchy to declare equal (full cognatic) primogeniture, meaning that the eldest child of the monarch, whether female or male, ascends to the throne. Other kingdoms (such as the Netherlands in 1983, Norway in 1990, Belgium in 1991, Denmark in 2009, and Luxembourg in 2011) have since followed suit. The United Kingdom adopted absolute (equal) primogeniture (subject to the claims of existing heirs) on April 25, 2013, following agreement by the prime ministers of the sixteen Commonwealth Realms at the 22nd Commonwealth Heads of Government Meeting. In the absence of children, the next most senior member of the collateral line (for example, a younger sibling of the previous monarch) becomes monarch.
In complex cases, this can mean that there are closer blood relatives to the deceased monarch than the next in line according to primogeniture. This has often led, especially in Europe in the Middle Ages, to conflict between the principle of primogeniture and the principle of proximity of blood. Other hereditary systems of succession included tanistry, which is semi-elective and gives weight to merit, and agnatic seniority. In some monarchies, such as Saudi Arabia, succession to the throne first passes to the monarch's next eldest brother, and only after that to the monarch's children (agnatic seniority). However, on June 21, 2017, King Salman of Saudi Arabia broke with this system and designated his son as heir to the throne.

In an elective monarchy, monarchs are elected or appointed by some body (an electoral college) for life or a defined period, but then reign like any other monarch. There is no popular vote involved in elective monarchies, as the elective body usually consists of a small number of eligible people. Historical examples of elective monarchy are the Holy Roman Emperors (chosen by prince-electors but often coming from the same dynasty) and the free election of kings of the Polish–Lithuanian Commonwealth. For example, Pepin the Short (father of Charlemagne) was elected King of the Franks by an assembly of Frankish leading men; the nobleman Stanisław August Poniatowski of Poland was an elected king, as was Frederick I of Denmark. Germanic peoples also had elective monarchies.

Six forms of elective monarchy exist today. The pope of the Roman Catholic Church (who rules as Sovereign of the Vatican City State) is elected for life by the College of Cardinals. In the Sovereign Military Order of Malta, the Prince and Grand Master is elected for life tenure by the Council Complete of State from within its members. In Malaysia, the federal king, called the Yang di-Pertuan Agong or Paramount Ruler, is elected for a five-year term from among and by the hereditary rulers (mostly sultans) of nine of the federation's constitutive states, all on the Malay peninsula. The United Arab Emirates also chooses its federal leaders from among the emirs of the federated states. Furthermore, Andorra has a unique constitutional arrangement, as one of its heads of state is the President of the French Republic in the form of a Co-Prince. This is the only instance in the world where the monarch of a state is elected by the citizens of a different country. In New Zealand, the Māori King, head of the Kīngitanga Movement, is elected by a council of Māori elders at the funeral of their predecessor, which is also where the coronation takes place. All of the heads of the Māori King Movement have been descendants of the first Māori King, Pōtatau Te Wherowhero, who was elected and became King in June 1858. The current monarch is King Tūheitia Pōtatau Te Wherowhero VII, who was elected and became King on 21 August 2006, the same day as the funeral of his mother, Te Arikinui Dame Te Atairangikaahu, the first Māori Queen. As well as being King and head of the Kīngitanga Movement, King Tūheitia is also "ex officio" the Paramount Chief of the Waikato-Tainui tribe.

Appointment by the current monarch is another system, used in Jordan. It was also used in Imperial Russia; however, it was soon changed to semi-Salic succession because the instability of the appointment system had resulted in an age of palace revolutions. In this system, the monarch chooses the successor, who is always a relative.
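The contrast between primogeniture and the agnatic seniority described above can be pictured as two different walks of the same male-line family tree: primogeniture is depth-first (a son and all his issue precede his uncle), while agnatic seniority proceeds generation by generation (every surviving brother reigns before any nephew). A minimal sketch in Python, with invented names and the within-generation ordering deliberately simplified:

    # Hypothetical sketch of agnatic seniority as a breadth-first walk of the
    # male line; contrast the depth-first walk of primogeniture. Names invented.
    from collections import deque

    # father -> sons, eldest first (male line only)
    sons = {
        "Founder": ["A", "B", "C"],   # brothers reign in turn, eldest first
        "A": ["A1", "A2"],
        "B": ["B1"],
        "C": [],
    }

    def agnatic_seniority(founder):
        # Each generation is exhausted before the next begins: grandsons
        # wait until all of their uncles have reigned.
        order, queue = [], deque(sons[founder])
        while queue:
            person = queue.popleft()
            order.append(person)
            queue.extend(sons.get(person, []))
        return order

    print(agnatic_seniority("Founder"))
    # ['A', 'B', 'C', 'A1', 'A2', 'B1'] -- all brothers before any nephew

This generation-first ordering is why, in the Saudi case mentioned above, the throne passed along an aging cohort of brothers for decades before the 2017 change redirected it to the next generation.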
Succession to a monarchy can also come about through claiming alternative votes (e.g. as in the case of the Western Schism), claims of a mandate to rule (e.g. a popular or divine mandate), military occupation, a coup d'état, the will of the previous monarch, or treaties between factions inside and outside of a monarchy (e.g. as in the case of the War of the Spanish Succession).

The legitimacy and authority of monarchs are often proclaimed and recognized through occupying and being invested with insignia, seats, deeds and titles, as in the course of coronations. This is especially employed to legitimize and settle disputed successions, changes in the order of succession, the status of a monarch (e.g. as in the case of the "privilegium maius" deed) or new monarchies altogether (e.g. as in the case of the coronation of Napoleon I). In cases of succession challenges it can be instrumental for pretenders to secure or install legitimacy through the above, for example through proof of accession such as insignia, through treaties, or through a claim of a divine mandate to rule (e.g. by Hong Xiuquan and his Taiping Heavenly Kingdom).

Currently, there are 44 nations in the world, with a combined population of roughly half a billion people, that have a monarch as head of state. They fall roughly into the following categories:

Queen Elizabeth II is, separately, monarch of sixteen Commonwealth realms (Antigua and Barbuda, the Commonwealth of Australia, the Commonwealth of the Bahamas, Barbados, Belize, Canada, Grenada, Jamaica, New Zealand, the Independent State of Papua New Guinea, the Federation of Saint Christopher and Nevis, Saint Lucia, Saint Vincent and the Grenadines, the Solomon Islands, Tuvalu and the United Kingdom of Great Britain and Northern Ireland). They evolved out of the British Empire into fully independent states within the Commonwealth of Nations that retain the Queen as head of state. All sixteen realms are constitutional monarchies and full democracies where the Queen has limited powers or a largely ceremonial role. The Queen is head of the Church of England (the established church of England), while the other fifteen realms do not have a state religion.

The Principality of Andorra, the Kingdom of Belgium, the Kingdom of Denmark, the Grand Duchy of Luxembourg, the Kingdom of the Netherlands, the Kingdom of Norway, the Kingdom of Spain, and the Kingdom of Sweden are fully democratic states in which the monarch has a limited or largely ceremonial role. In several of these countries a Christian church is established as the official religion: the Lutheran form of Protestantism in Norway, Sweden and Denmark, and Roman Catholicism in Andorra. Spain, Belgium, and the Netherlands have no official state religion. Luxembourg, which is predominantly Roman Catholic, has five so-called "officially recognised cults of national importance" (Roman Catholicism, Protestantism, Greek Orthodoxy, Judaism, and Islam), a status which gives those religions some privileges, like the payment of a state salary to their priests.

Andorra is unique among all existing monarchies, as it is a diarchy, with the co-princeship being shared by the president of France and the bishop of Urgell. This arrangement, based on historical precedent, has created a situation that is peculiar among monarchies. The Principality of Liechtenstein and the Principality of Monaco are constitutional monarchies in which the prince retains substantial powers.
For example, the 2003 constitutional referendum gave the Prince of Liechtenstein the power to veto any law that the Landtag (parliament) proposes, while the Landtag can veto any law that the Prince tries to pass. The prince can appoint or dismiss any elective member or government employee. However, he is not an absolute monarch, as the people can call for a referendum to end the monarch's reign. When Hereditary Prince Alois threatened to veto a referendum to legalize abortion in 2011, it came as a surprise because the prince had not vetoed any law for over 30 years. The prince of Monaco has more limited powers: he cannot appoint or dismiss any elective member or government employee, but he can appoint the minister of state, the government council and judges. Both Albert II, Prince of Monaco, and Hans-Adam II, Prince of Liechtenstein, are theoretically very powerful within their small states, but they have very limited power compared to the Islamic monarchs (see below). They also own huge tracts of land and are shareholders in many companies.

The Islamic monarchs of the Kingdom of Bahrain, the Nation of Brunei, the Hashemite Kingdom of Jordan, the State of Kuwait, Malaysia, the Kingdom of Morocco, the Sultanate of Oman, the State of Qatar, the Kingdom of Saudi Arabia, and the United Arab Emirates generally retain far more powers than their European or Commonwealth counterparts. Brunei, Oman, Qatar, and Saudi Arabia remain absolute monarchies; Bahrain, Kuwait, and the United Arab Emirates are classified as mixed, meaning there are representative bodies of some kind but the monarch retains most of his powers; and Jordan, Malaysia, and Morocco are constitutional monarchies whose monarchs still retain more substantial powers than their European equivalents.

The Kingdom of Bhutan, the Kingdom of Cambodia, the Kingdom of Thailand and Japan are constitutional monarchies where the monarch has a limited or merely ceremonial role. Japan changed from a traditional absolute monarchy into a constitutional one during the 20th century, and Bhutan made the change in 2008. Cambodia had its own monarchy after independence from the French Colonial Empire, but it was deposed after the Khmer Rouge came to power. The monarchy was subsequently restored in the peace agreement of 1993. Thailand transitioned into a constitutional monarchy over the course of the 20th century.

Five monarchies do not fit into any of the above groups by virtue of geography or class of monarchy: the Kingdom of Tonga in Polynesia; the Kingdom of Eswatini and the Kingdom of Lesotho in Africa; the Vatican City State in Europe; and the Sovereign Military Order of Malta. Of these, Lesotho and Tonga are constitutional monarchies, while Eswatini and the Vatican City are absolute monarchies. Eswatini is unique among these monarchies, often being considered a diarchy: the King, or Ngwenyama, rules alongside his mother, the Ndlovukati, as dual heads of state (this was originally intended to provide a check on political power). The Ngwenyama, however, is considered the administrative head of state, while the Ndlovukati is considered the spiritual and national head of state, a position which has more or less become symbolic in recent years.
The Pope is the absolute monarch of the Vatican City State (a separate entity from the Holy See) by virtue of his position as head of the Roman Catholic Church and Bishop of Rome; he is an elected rather than a hereditary ruler and does not have to be a citizen of the territory prior to his election by the cardinals. The Order of Malta describes itself as a "sovereign subject" based on its unique history and unusual present circumstances, but its exact status in international law is a subject of debate.

In Samoa, the position of head of state is described in Part III of the 1960 Samoan constitution. At the time the constitution was adopted, it was anticipated that future heads of state would be chosen from among the four Tama a 'Aiga "royal" paramount chiefs. However, this is not required by the constitution, and for this reason Samoa can be considered a republic rather than a constitutional monarchy.

The ruling Kim family in North Korea (Kim Il-sung, Kim Jong-il and Kim Jong-un) has been described as a "de facto" absolute monarchy or a "hereditary dictatorship". In 2013, Clause 2 of Article 10 of the newly edited Ten Fundamental Principles of the Korean Workers' Party stated that the party and revolution must be carried on "eternally" by the "Baekdu (Kim's) bloodline". Something similar happened in Azerbaijan: when President Heydar Aliyev died, he was succeeded by his son Ilham, who has been referred to as a "monarch president".
https://en.wikipedia.org/wiki?curid=19013
Mr. T Mr. T (born Lawrence Tureaud; May 21, 1952) is an American actor, bodyguard, television personality, and retired professional wrestler, known for his roles as B. A. Baracus in the 1980s television series "The A-Team" and as boxer Clubber Lang in the 1982 film "Rocky III". Mr. T is known for his distinctive hairstyle inspired by Mandinka warriors in West Africa, his gold jewelry, and his tough-guy image. He is also known for his catchphrase, "I pity the fool!", first uttered as Clubber Lang in "Rocky III", then turned into a trademark and reused in slogans or titles, like the reality show "I Pity the Fool" in 2006.

Mr. T was born Lawrence Tureaud in Chicago, Illinois, the youngest son in a family with twelve children. Tureaud, with his four sisters and seven brothers, grew up in a three-room apartment in the Robert Taylor Homes. His father, Nathaniel Tureaud, was a minister. After his father left when he was five, he shortened his name to Lawrence Tero. In 1970, he legally changed his last name to T. His new name, Mr. T, was based upon his childhood impressions regarding the lack of respect from white people for his family: I think about my father being called 'boy', my uncle being called 'boy', my brother, coming back from Vietnam and being called 'boy'. So I questioned myself: "What does a black man have to do before he's given the respect as a man?" So when I was 18 years old, when I was old enough to fight and die for my country, old enough to drink, old enough to vote, I said I was old enough to be called a man. I self-ordained myself Mr. T so the first word out of everybody's mouth is "Mr." That's a sign of respect that my father didn't get, that my brother didn't get, that my mother didn't get.

Tureaud attended Dunbar Vocational High School, where he played football, wrestled, and studied martial arts. While at Dunbar he became the citywide wrestling champion two years in a row. He won a football scholarship to Prairie View A&M University, where he majored in mathematics, but was expelled after his first year. He then enlisted in the United States Army and served in the Military Police Corps. After his discharge, he tried out for the Green Bay Packers of the National Football League, but failed to make the team due to a knee injury.

Tureaud next worked as a bouncer at the Rush Street club Dingbats. It was at this time that he created the persona of Mr. T. His wearing of gold neck chains and other jewelry was the result of customers losing the items or leaving them behind at the night club after a fight. A banned customer, or one reluctant to risk a confrontation by going back inside, could return to claim his property from Mr. T, who wore it conspicuously right out front. Along with controlling the violence as a doorman, Tureaud was mainly hired to keep out drug dealers and users. Tureaud claims that as a bouncer he was in over 200 fights and was sued a number of times, but won each case. "I have been in and out of the courts as a result of my beating up somebody. I have been sued by customers whom I threw out that claimed that I viciously attacked them without just cause and/or I caused them great bodily harm as a result of a beating I supposedly gave them," Mr. T once remarked. He eventually parlayed his job as a bouncer into a career as a bodyguard that lasted almost ten years. As his reputation grew, he was contracted to guard, among others, clothes designers, models, judges, politicians, athletes and millionaires.
He protected well-known personalities such as Muhammad Ali, Steve McQueen, Michael Jackson, Leon Spinks, Joe Frazier and Diana Ross, charging $3,000 per day, up to a maximum of $10,000 per day, depending on the client's risk level and travel itinerary. With his reputation as "Mr. T", Tureaud attracted strange offers and was frequently approached with odd commissions, which included assassination, tracking runaway teenagers, locating missing persons, and large firms asking him to collect past-due payments by force. Tureaud claims that he was once anonymously offered $75,000 to assassinate a target, receiving in the mail a file on the target and an advance of $5,000, but that he refused the job.

While he was in his late twenties, Tureaud won two tough-man competitions in a row. The first aired on NBC-TV as "Sunday Games", under the contest title "America's Toughest Bouncer", and included events such as throwing a stuntman and breaking through a wooden door. In the first event, Tureaud came in third place. For the finale, two finalists squared off in a boxing ring for a two-minute round to declare the champion. Making it to the ring as a finalist, he faced a Honolulu bouncer named Tutefano Tufi. Within twenty seconds Mr. T gave the six-foot-five competitor a bloody nose, and later a bloody mouth. He won the match and thus the competition. The second competition aired under the new name "Games People Play" on NBC-TV. When interviewed by Bryant Gumbel before the final boxing match, Mr. T said, "I just feel sorry for the guy who I have to box. I just feel real sorry for him." This fight was scheduled to last three rounds, but Mr. T finished it in less than 54 seconds. The line "I don't hate him but... I pity the fool" in the movie "Rocky III" was written by Sylvester Stallone, who is reputed to have been inspired by the interview.

While reading "National Geographic", Mr. T first noticed, on a Mandinka warrior, the unusual hairstyle for which he is now famous. He decided that adopting the style would be a powerful statement about his African origin. It was a simpler, safer, and more permanent visual signature than his gold chains, rings, and bracelets.

In 1980, Mr. T was spotted by Sylvester Stallone while taking part in NBC's "America's Toughest Bouncer" competition, a segment of NBC's "Games People Play". Although his role in "Rocky III" was originally intended as just a few lines, Mr. T was eventually cast as Clubber Lang, the primary antagonist. His catchphrase "I pity the fool!" comes from the film; when asked if he hates Rocky, Lang replies, "No, I don't hate Balboa, but I pity the fool." Subsequently, after losing out on the role of the title character's mentor in "The Beastmaster", Mr. T appeared in another boxing film, "Penitentiary 2", and on an episode of the Canadian sketch comedy series "Bizarre", in which he fights and eats Super Dave Osborne, before accepting a television series role on "The A-Team". He also appeared in an episode of "Silver Spoons", reprising his old role as bodyguard to the character Ricky Stratton (played by Ricky Schroder). In the episode, he explains his name as "First name: 'Mister'; middle name: 'period'; last name: 'T'." In one scene, when Ricky's class erupts into a paper-ball-throwing melee, Mr. T throws his body in front of the objects, fully protecting his client.

In "The A-Team", he played Sergeant Bosco "B. A." Baracus, an ex-Army commando on the run, with three other members of his unit, from the United States government "for a crime they didn't commit."
As well as being the team's tough guy, B. A. was a genius mechanic, but afraid of flying. When asked at a press conference whether he was as stupid as B. A. Baracus, Mr. T observed quietly, "It takes a smart guy to play dumb." The series was a major hit, and B. A. Baracus in particular quickly became a cult character and the unofficial star of the show, reportedly sparking tensions with seasoned actor George Peppard, although Mr. T always maintained that these were unfounded rumors. His role in "The A-Team" led to him making an appearance in the long-running sitcom "Diff'rent Strokes", in the sixth-season opener "Mr. T and Mr. t" (1983), in which an episode of "The A-Team" is supposedly filmed in the family's penthouse apartment.

Also in 1983, a Ruby-Spears-produced cartoon called "Mister T" premiered on NBC. The "Mister T" cartoon starred Mr. T as his alter ego, the owner of a gym where a group of gymnasts trained. He helped them with their training, but they also helped him solve mysteries and fight crime in "Scooby-Doo"-style scenarios; thirty episodes were produced. Each episode was bookended by short segments in which the real Mr. T presented the theme of the episode, then gave a closing statement with a lesson for children loosely based on the events of the episode.

The year 1983 also marked the release of the only film that can be called a Mr. T vehicle, "DC Cab". The movie featured an ensemble cast, many of whom were publicized figures from other areas of show business – comics Paul Rodriguez and Marsha Warfield, singer Irene Cara, bodybuilders David and Peter Paul (the "Barbarian Brothers") – but who had only modest acting experience. Despite the wide range of performers, and more seasoned actors such as Adam Baldwin as the protagonist Albert, as well as Gary Busey and Max Gail, Mr. T was top billed and the central figure in the film's publicity, literally towering over the other characters on the film's poster. While the film, featuring the ensemble as a ragtag taxi company trying to hustle their way to solvency and respectability, performed modestly at the box office (its $16 million take exceeded its $12 million budget), it received mixed reviews from critics. Janet Maslin, writing for "The New York Times", described it as "a musical mob scene, a raucous, crowded movie that's fun as long as it stays wildly busy, and a lot less interesting when it wastes time on plot or conversation." Roger Ebert praised the movie's "mindless, likable confusion" and criticized its "fresh off the assembly line" plot. It was the second feature in a prolific career for director Joel Schumacher.

In 1984, he made a motivational video called "Be Somebody... or Be Somebody's Fool!". He gives helpful advice to children throughout the video; for example, he teaches them how to understand and appreciate their origins, how to dress fashionably without buying designer labels, how to make tripping up look like breakdancing, how to control their anger, and how to deal with peer pressure. The video is roughly one hour long, but contains 30 minutes of singing, either by the group of children accompanying him or by Mr. T himself. He sings "Treat Your Mother Right (Treat Her Right)," and also raps a song about growing up in the ghetto and praising God. The raps in this video were written by Ice-T. Due to its unintentionally comic nature, many clips from this video have been shared as Internet memes.
Also in 1984, he played the protagonist of the TV movie "The Toughest Man in the World", as Bruise Brubaker, a bouncer who also runs a sports center for teenagers and takes part in a strong-man championship to raise funds for the center. That same year, he released a rap mini-album called "Mr. T's Commandments" (Columbia/CBS Records), featuring seven songs, including the title theme for the aforementioned TV movie. In much the same tone as his 1984 educational video, it instructed children to stay in school and to stay away from drugs. He followed it up the same year with a second album, titled "Mr. T's Be Somebody... or Be Somebody's Fool!" (MCA Records), featuring music from the eponymous film.

On January 19, 1985, he introduced Rich Little at the nationally televised 50th Presidential Inaugural Gala, the day before the second inauguration of Ronald Reagan. During those busy years, he made numerous appearances in television shows, most notably hosting the 15th episode of the 10th season of "Saturday Night Live", along with Hulk Hogan. He had also made an appearance on that same show in October 1982, fresh from his role in "Rocky III", in Eddie Murphy's recurring skit "Mr. Robinson's Neighborhood" (making a reference to one of his lines in the movie: "Hello boys and girls. The new word for today... is PAIN."). In 1988, Mr. T starred in the television series "T. and T.". Mr. T was once reported to be earning around $80,000 a week for his role in "The A-Team" and $15,000 for personal appearances.

By the end of the 1990s, he was appearing only in the occasional commercial, largely because of health problems; in 1995 he was diagnosed with T-cell lymphoma. He frequently appears on the TBN Christian television network. In 2002, Mr. T appeared as a bartender in the video for "Pass the Courvoisier, Part II" by Busta Rhymes featuring Sean Combs and Pharrell Williams. In the 2009 animated movie "Cloudy with a Chance of Meatballs", Mr. T provided the voice of Officer Earl Devereaux, the town's athletic cop and a devoted father.

Mr. T was offered a cameo appearance in the film adaptation of "The A-Team" but decided to turn it down, whereas Dwight Schultz and Dirk Benedict both made cameos in the film. These scenes were shown after the credits, but were reinserted into the film in the extended cut. Although he was not disturbed at the mere prospect of an "A-Team" movie being made without him, he vehemently criticized the concept of having another actor copy his own very distinct appearance and style (including his haircut and gold chains) in the hope of attracting his nostalgic fanbase, and considered that asking him to do a cameo appearance under those conditions was disrespectful.

Starting in 2011, Mr. T presented a clip show on BBC Three named "World's Craziest Fools". The show featured stories such as botched bank robberies and inept insurance fraudsters alongside fail videos. In 2015, it was announced that Mr. T would star in a do-it-yourself home improvement TV show on the DIY Network with interior designer Tiffany Brooks. The show, due sometime in 2015, was to be titled "I Pity the Tool", another variation on his famous catchphrase, but only one episode aired, for reasons unknown.

On March 1, 2017, Mr. T was revealed as one of the contestants who would compete on season 24 of "Dancing with the Stars". He was paired with professional dancer Kym Herjavec.
On April 10, 2017, Mr. T and Herjavec were the third couple to be eliminated from the competition, finishing in 10th place. He donated his earnings from the show to St. Jude Children's Research Hospital.

Mr. T has been involved in numerous commercials, including for Snickers, World of Warcraft, MCI, Comcast, and RadioShack. Forbes has described him as "one of the most enduring pitchmen in the business." Mr. T has described himself as "not really an actor, I'm a reactor; I'm a pitchman." At his peak, he was earning a reported $5 million per year. Mr. T did a video campaign for Hitachi Data Systems that was created and posted on consumer video sites including YouTube and Yahoo! Video. According to Steven Zivanic, senior director of corporate communications at HDS, "this campaign has not only helped the firm in its own area, but it has given the data storage firm a broader audience."

In November 2007, Mr. T appeared in a television commercial for the online role-playing game "World of Warcraft" with the phrase "I'm Mr. T and I'm a Night Elf Mohawk". A follow-up commercial appeared in November 2009, promoting the "mohawk grenade", an in-game item that turns other players into Mr. T's likeness. In 2008, Mr. T appeared on the American channel "Shopping TV" selling his "Mr. T Flavorwave Oven". In 2009, ZootFly announced they had acquired the rights to the Mr. T graphic novel and were planning several video games based upon the work. The first (and only) game, "Mr. T: The Videogame", was to have Mr. T battle Nazis in various locations and guest-star Will Wright. It was planned for the Xbox 360, PS3, Wii and PC platforms, but was cancelled for undisclosed reasons. The same year, he appeared in commercials in the United Kingdom, Ireland, Australia, and New Zealand advertising the chocolate bar Snickers with the slogan "Get Some Nuts!" One of these commercials featured Mr. T on an army jeep calling a speed walker wearing yellow shorts "a disgrace to the man race" (a pun on the double meaning of the word "race") and firing Snickers bars at the man with a custom-made machine gun so that he starts "running like a real man". This commercial was pulled by Mars following a complaint by the U.S.-based group Human Rights Campaign, although the advert had never been shown in the United States. The group alleged that the commercial promoted the idea that violence against gay, lesbian, bisexual and transgender people "is not only acceptable, but humorous." Mr. T distanced himself from these accusations, insisting that he would never lend his name to such beliefs and that he did not think the commercial was offensive to anyone, as all the commercials he appeared in had a similarly silly, over-the-top nature and were never intended to be taken seriously.

In 2010, Mr. T signed up as the spokesman for Gold Promise, a gold-buying company. According to an appraiser hired by Bloomberg Television's "Taking Stock", his trademark gold jewelry was worth around $43,000 in 1983, although some sources claim it was worth about $300,000. In 2015, he starred in a series of Fuze Iced Tea advertisements, stating, "The only thing bolder than Fuze Iced Tea is ME!" The brand, owned by Coca-Cola, also briefly centered its social profiles and website around Mr. T.

Mr. T entered the world of professional wrestling in 1985. He was Hulk Hogan's tag-team partner in the main event of the World Wrestling Federation's (WWF) "WrestleMania I", a match they won.
Hulk Hogan wrote in his autobiography that Mr. T saved the WrestleMania I main event, in which they faced "Rowdy" Roddy Piper and "Mr. Wonderful" Paul Orndorff, because when Mr. T arrived, security would not let his entourage into the building; he was ready to skip the show until Hogan personally talked him out of leaving. Piper has said that he and other fellow wrestlers disliked Mr. T because he was an actor and had never paid his dues as a professional wrestler. Remaining with the WWF, Mr. T became a special "WWF boxer" in light of his character in "Rocky III". He took on "Cowboy" Bob Orton on the March 1, 1986 "Saturday Night's Main Event V" on NBC. This boxing stunt culminated in another boxing match, against Roddy Piper, at "WrestleMania 2". As part of the build-up for the match, Piper attacked Mr. T's friend, midget wrestler the Haiti Kid, on his "Piper's Pit" interview slot, shaving his head into a mohican style similar to that of Mr. T. Mr. T won the boxing match by disqualification in round 4, after Piper attacked the referee and bodyslammed Mr. T. He returned to the World Wrestling Federation in 1987 as a special guest referee, as well as a special referee enforcer, confronting such stars as The Honky Tonk Man. On July 21, 1989, Mr. T made an appearance in World Class Championship Wrestling (WCCW), seconding Kerry Von Erich. Five years later, Mr. T reappeared in WCW, first in Hulk Hogan's corner for his WCW world title match against Ric Flair at Bash at the Beach 1994. He next appeared as a special referee for the Hogan-Flair rematch in October 1994 at "Halloween Havoc", and then went on to wrestle again, defeating Kevin Sullivan at that year's "Starrcade". Another seven years later, Mr. T appeared in the front row of the November 19, 2001, episode of "WWF Raw". On April 5, 2014, at the Smoothie King Center in New Orleans, Mr. T was inducted by Gene Okerlund into the celebrity wing of the WWE Hall of Fame. His acceptance speech, largely a tribute to his mother and motherhood rather than wrestling, ran long and was eventually interrupted by Kane.

Mr. T is a born-again Christian. He has three children: two daughters, one of whom is a comedian, and a son from his ex-wife. In 1987, he angered the residents of Lake Forest, Illinois, by cutting down more than a hundred oak trees on his estate; the local newspaper referred to the incident as "the Lake Forest Chain Saw Massacre". He stopped wearing virtually all his gold, one of his identifying marks, after helping with the cleanup following Hurricane Katrina in 2005. He said, "As a Christian, when I saw other people lose their lives and lose their land and property ... I felt that it would be a sin before God for me to continue wearing my gold. I felt it would be insensitive and disrespectful to the people who lost everything, so I stopped wearing my gold."

Mr. T often refers to himself in the third person and frequently talks in rhymes. Eddie Murphy made references to Mr. T in his 1983 stand-up special "Eddie Murphy Delirious", as part of a controversial segment in which the comedian impersonated male celebrities as they would supposedly sound if they were gay. The pop-punk band the Mr. T Experience is named after him. Mr. T was featured in the "Epic Rap Battles of History" episode "Mr. T vs. Mr. Rogers", in which he was portrayed by DeStorm Power.
https://en.wikipedia.org/wiki?curid=19019
Malmö Malmö is the largest city in the Swedish county (län) of Scania. It is the third-largest city in Sweden, after Stockholm and Gothenburg, and the sixth-largest city in Scandinavia, with a population of 316,588 (municipal total 338,230 in 2018). The Malmö Metropolitan Region is home to over 700,000 people, and the Öresund Region, which includes Malmö, is home to 4 million people. Malmö was one of the earliest and most industrialized towns in Scandinavia, but it struggled to adapt to post-industrialism. Since the construction of the Öresund Bridge, Malmö has undergone a major transformation, producing new architectural developments, supporting new biotech and IT companies, and attracting students through Malmö University and other higher education facilities. The city contains many historic buildings and parks, and is also a commercial center for the western part of Scania.

The earliest written mention of Malmö as a city dates from 1275. It is thought to have been founded shortly before that date, as a fortified quay or ferry berth of the Archbishop of Lund, whose seat lay to the north-east. Malmö was for centuries Denmark's second-biggest city. Its original name was "Malmhaug" (with alternate spellings), meaning "gravel pile" or "ore hill". In the 15th century, Malmö became one of Denmark's largest and most visited cities, reaching a population of approximately 5,000 inhabitants. It became the most important city around the Öresund, with the German Hanseatic League frequenting it as a marketplace, and was notable for its flourishing herring fishery. In 1437, King Eric of Pomerania (King of Denmark from 1396 to 1439) granted the city's arms: argent, a griffin gules, based on Eric's arms from Pomerania. The griffin's head as a symbol of Malmö extended to the entire province of Scania from 1660. In 1434, a new citadel was constructed at the beach south of the town. This fortress, known today as "Malmöhus", did not take its current form until the mid-16th century. Several other fortifications were constructed, making Malmö Sweden's most fortified city, but only "Malmöhus" remains.

Lutheran teachings spread during the 16th-century Protestant Reformation, and Malmö became one of the first cities in Scandinavia to fully convert (1527–1529) to this Protestant denomination. In the 17th century, Malmö and the Scanian region ("Skåneland") came under the control of Sweden following the Treaty of Roskilde with Denmark, signed in 1658. Fighting continued, however; in June 1677, 14,000 Danish troops laid siege to Malmö for a month, but were unable to defeat the Swedish troops holding it. By the dawn of the 18th century, Malmö had about 2,300 inhabitants. However, owing to the wars of Charles XII of Sweden (reigned 1697–1718) and to bubonic plague epidemics, the population dropped to 1,500 by 1727. The population did not grow much until the modern harbor was constructed in 1775. The city then started to expand: its population was 4,000 in 1800 and had increased to 6,000 fifteen years later.

In 1840, Frans Henrik Kockum founded the workshop from which the Kockums shipyard eventually developed into one of the largest shipyards in the world. The Southern Main Line was built between 1856 and 1864, enabling Malmö to become a center of manufacture, with major textile and mechanical industries. In 1870, Malmö overtook Norrköping to become Sweden's third-most populous city, and by 1900 it had strengthened this position with 60,000 inhabitants. Malmö continued to grow through the first half of the 20th century.
The population had swiftly increased to 100,000 by 1915 and to 200,000 by 1952. From 15 May to 4 October 1914, Malmö hosted the Baltic Exhibition, for which the large park Pildammsparken was laid out and planted. The Russian part of the exhibition was never taken down, owing to the outbreak of World War I. On 18 and 19 December 1914, the "Three Kings Meeting" was held in Malmö. It followed a tense period (1905–1914) that included the dissolution of the Swedish-Norwegian Union, during which King Oscar II was replaced in Norway by King Håkon VII, the younger brother of the Danish King Christian X. Oscar died in 1907 and his son Gustav V became the new King of Sweden, but tensions within Scandinavia remained. During this historic meeting, the Scandinavian kings reached a mutual understanding, as well as a common line on remaining neutral in the ongoing war.

Within sports, Malmö has mostly been associated with football. IFK Malmö participated in the first ever edition of Allsvenskan in 1924/25, but from the mid-1940s Malmö FF started to rise, and it has ever since been one of the most prominent clubs in Swedish football. They have won Allsvenskan 23 times in all (as of February 2018), between 1943/44 and 2017.

By 1971, Malmö had reached 265,000 inhabitants, a peak which would stand for more than 30 years. (Svedala was, for a few years in the early 1970s, a part of Malmö Municipality.) By the mid-1970s Sweden experienced a recession that hit the industrial sector especially hard; shipyards and manufacturing industries suffered, which led to high unemployment in many cities of Scania. Kockums shipyard had become a symbol of Malmö as its largest employer and, when shipbuilding ceased in 1986, confidence in the future of Malmö plummeted among politicians and the public. In addition, many middle-class families moved into one-family houses in surrounding municipalities such as Vellinge, Lomma and Staffanstorp, which profiled themselves as suburbs of the upper-middle class. By 1985, Malmö had lost 35,000 inhabitants and was down to 229,000.

The Swedish financial crisis of the early 1990s exacerbated Malmö's decline as an industrial city; between 1990 and 1995 Malmö lost about 27,000 jobs and its economy was seriously strained. However, from 1994, under the leadership of the then mayor Ilmar Reepalu, the city of Malmö started to create a new economy as a center of culture and knowledge. Malmö hit bottom in 1995, but that same year marked the commencement of the massive Öresund Bridge road, railway and tunnel project, connecting it to Copenhagen and to the rail lines of Europe. The new Malmö University opened in 1998 on Kockums' former dockside. Further redevelopment of the now disused south-western harbor followed; a city architecture exposition (Bo01) was held in the area in 2001, and its buildings and villas form the core of a new city district. Designed with attractive waterfront vistas, it was intended to attract the urban middle class, and has succeeded in doing so. Since 1974, the Kockums Crane had been a landmark in Malmö and a symbol of the city's manufacturing industry, but in 2002 it was disassembled and moved to South Korea. In 2005, Malmö gained a new landmark with the completion of Turning Torso, the tallest skyscraper in Scandinavia.
Although the transformation away from a manufacturing-based economy has returned growth to Malmö, the new types of jobs have largely benefited the middle and upper classes. In its 2015 and 2017 reports, the Swedish Police placed the Rosengård and Södra Sofielund/Seved districts in the most severe category of urban areas with high crime rates.

Malmö is located at 13°00' east and 55°35' north, near the southwestern tip of Sweden, in the province of Scania. The city is part of the transnational Öresund Region and, since 2000, has been linked by the Öresund Bridge across the Öresund to Copenhagen, Denmark. The bridge opened on 1 July 2000; the link as a whole totals 16 km. Apart from the Helsingborg-Helsingør ferry links further north, most ferry connections have been discontinued.

Malmö, like the rest of southern Sweden, has an oceanic climate. Despite its northern location, the climate is mild compared to other locations at similar latitudes, mainly because of the influence of the Gulf Stream and the city's westerly position on the Eurasian landmass. Owing to its northern latitude, daylight lasts 17 hours in midsummer, but only around seven hours in midwinter. According to data from 2002–2014, Falsterbo, to the south of the city, received an annual average of 1,895 hours of sunshine, while Lund, to the north, received 1,803 hours. Summers are mild, with occasional heat waves. Winters are fairly cold and windy, though severe frosts are rare. Rainfall is light to moderate throughout the year, with 169 wet days. Snowfall occurs mainly from December through March, but snow cover does not remain for long, and some winters are virtually free of snow.

Öresund Line trains cross the Öresund Bridge every 20 minutes (hourly late at night), connecting Malmö to Copenhagen and Copenhagen Airport. The trip takes around 35–40 minutes. Additionally, some of the X 2000 and Intercity trains to Stockholm, Gothenburg, and Kalmar cross the bridge, stopping at Copenhagen Airport. In March 2005, excavation began on a new railway connection called the City Tunnel, which opened for traffic on 4 December 2010. The tunnel runs south from Malmö Central Station through an underground station at Triangeln to Hyllievång (Hyllie Meadow), where the line comes to the surface to enter Hyllie Station, also created as part of the tunnel project. From Hyllie Station, the line connects to the existing Öresund Line in either direction, with the Öresund Bridge lying due west. Besides Copenhagen Airport, Malmö has an airport of its own, Malmö Airport, today chiefly used for domestic Swedish destinations, charter flights and low-cost carriers.

The motorway system has been incorporated with the Öresund Bridge; the European route E20 crosses the bridge and then, together with the European route E6, follows the Swedish west coast from Malmö through Helsingborg to Gothenburg. The E6 continues north along the west coast and through Norway to the Norwegian town of Kirkenes on the Barents Sea. The European route to Jönköping–Stockholm (E4) starts at Helsingborg. Main roads in the directions of Växjö–Kalmar, Kristianstad–Karlskrona, Ystad (E65), and Trelleborg start as freeways. Malmö has an extensive network of bike paths; approximately 40% of all commuting is done by bicycle.
The city has two industrial harbors; one is still in active use and is the largest Nordic port for car imports. It also has two marinas, the publicly owned Limhamn Marina and the private Lagunen, both offering a limited number of guest docks.

Public transport was provided by a tram network from 1887 until 1973, after which it was replaced by a bus network. A local train line running in a circle through seven stations opened in December 2018. The stations are Malmö Central Station (underground platforms) – Triangeln – Hyllie – Malmö South/Svågertorp – Persborg – Rosengård – Östervärn – Malmö Central Station (main overground terminus). Some trains arrive from Kristianstad and finish with a lap around Malmö, while other trains on this circular line never leave the city limits. Trains run at least every 30 minutes, with far more frequent service between the Central Station and Hyllie. Plans exist for a minor extension of the network. The Öresundsmetro is a proposed rapid transit link connecting Malmö with the existing Copenhagen Metro through a 22 km tunnel under the Öresund.

Malmö Municipality is an administrative unit defined by geographical borders, consisting of the City of Malmö and its immediate surroundings. Malmö ("Malmö tätort") consists of the urban part of the municipality together with the small town of Arlöv in Burlöv Municipality. Both municipalities also include smaller urban areas and rural areas, such as the suburbs of Oxie and Åkarp. "Malmö tätort" is to be distinguished from "Malmö stad" (the City of Malmö), which is a semi-official name for Malmö Municipality.

The city's leaders created a commission for a socially sustainable Malmö in November 2010. The commission was tasked with providing evidence-based strategies for reducing health inequalities and improving living conditions for all citizens of Malmö, especially the most vulnerable and disadvantaged, and issued its final report in December 2013.

Malmö has a young population by Swedish standards, with almost half of the population under the age of 35 (48.2%). In 1971, Malmö had 265,000 inhabitants, but the population then dropped to 229,000 by 1985. It then began to rise again, and had passed the previous record by the census of 1 January 2003, when it had 265,481 inhabitants. The total population of the urban area was 280,415 in December 2010, and on 27 April 2011 the population of Malmö reached the 300,000 mark. In 2017, the total population of the city was 316,588 out of a municipal total of 338,230. In 2019, approximately 55.5% of the population of Malmö Municipality (190,849 residents) had at least one parent born abroad. The Middle East, the Horn of Africa, former Yugoslavia and Denmark are the main sources of immigration. In addition, 14.8% of the population (50,999 residents) in 2019 were foreign nationals.

Greater Malmö is one of Sweden's three officially recognized metropolitan areas ("storstadsområden") and since 2005 has been defined as the municipality of Malmö and 11 other municipalities in the southwestern corner of Scania. As of 2019, its population was recorded as 740,840. The municipalities included, apart from Malmö, are Burlöv, Eslöv, Höör, Kävlinge, Lomma, Lund, Skurup, Staffanstorp, Svedala, Trelleborg and Vellinge. Together with Lund, Malmö is the region's economic and education hub.
The economy of Malmö was traditionally based on shipbuilding (Kockums) and construction-related industries, such as concrete factories. The region's leading university, along with its associated hi-tech and pharmaceutical industries, is located in Lund, to the north-east. As a result, Malmö was in a troubled economic situation from the mid-1970s. Between 1990 and 1995, 27,000 jobs were lost, and the budget deficit was more than one billion Swedish kronor (SEK). In 1995, Malmö had Sweden's highest unemployment rate. During the last two decades, however, there has been a revival, to which several factors have contributed: the economic integration with Denmark brought about by the Öresund Bridge, which opened in July 2000; the university founded in 1998; and the effects of integration into the European Union. As of 2017 the unemployment rate was still high, but over the last 20 years Malmö has had one of the strongest employment growth rates in Sweden, although many of those jobs are taken by commuters from the neighboring municipalities.

Almost 30 companies have moved their headquarters to Malmö during the last seven years, generating around 2,300 jobs. Among them is IKEA, which has most of its headquarters functions based in Malmö. The number of start-up companies is high: around seven new companies are started every day in Malmö. In 2010, the rate of company formation was 13.9%, exceeding both Stockholm and Gothenburg. Growth is especially strong in gaming, with Massive Entertainment and King as the flagship companies of the industry. Among the industries that continue to increase their share of companies in Malmö are transport, financial and business services, entertainment, leisure and construction.

Malmö has the country's ninth-largest school of higher education, Malmö University, established in 1998, with 1,600 employees and 24,000 students (2014). In addition, nearby Lund University (established in 1666) has some of its programmes located in Malmö. The United Nations World Maritime University (WMU) is also located in Malmö; it operates under the auspices of the International Maritime Organization (IMO), a specialized agency of the United Nations, and thus enjoys the status, privileges and immunities of a UN institution in Sweden.

A striking depiction of Malmö in the 1930s was made by Bo Widerberg in his debut film "Kvarteret Korpen" ("Raven's End", 1963), largely shot in the shabby working-class district of Korpen. With humour and tenderness, it depicts the tensions between classes and generations. The movie was nominated for an Academy Award for Best Foreign Language Film in 1965. In 2017, the film "Medan Vi Lever" ("While We Live") was awarded the prize for best film by an African living abroad at the Africa Movie Academy Awards; it was filmed in Malmö and Gambia, and deals with identity, integration and everyday racism. The cities of Malmö and Copenhagen are, together with the Öresund Bridge, the main locations of the television series "The Bridge" ("Bron/Broen").

In 1944, Malmö Stadsteater (Malmö Municipal Theatre) was established, with a repertoire comprising stage theatre, opera, musicals, ballet, musical recitals and experimental theatre. In 1993 it was split into three units – Dramatiska Teater (Dramatic Theatre), Malmö Musikteater (Music Theatre) and Skånes Dansteater (Scanian Dance Theatre) – and the name was abandoned.
The ownership of the last two was transferred to Region Skåne, and in 2006 Dramatiska Teatern regained its old name. In the 1950s Ingmar Bergman was the director and chief stage director of Malmö Stadsteater, and many of his actors, such as Max von Sydow and Ingrid Thulin, became known through his films. Later stage directors include Staffan Valdemar Holm and Göran Stangertz. Malmö Musikteater was renamed Malmö Operan and performs operas and musicals, both classics and newly composed works, on one of Scandinavia's largest opera stages, with 1,511 seats. Skånes Dansteater performs a contemporary dance repertory and presents works by Swedish and international choreographers in its house in Malmö harbor. Since the 1970s the city has also been home to independent theatre groups and show/musical companies.

The city also hosts a rock/dance/dub culture; in the 1960s The Rolling Stones played the Klubb Bongo, and in recent years stars like Morrissey, Nick Cave, B.B. King and Pat Metheny have made repeated visits. The Cardigans debuted in Malmö and recorded their albums there. On 7 January 2009, CNN Travel broadcast a segment called "MyCity_MyLife" featuring Nina Persson taking the camera to some of the sites in Malmö that she enjoys. In 1992 and in 2013, Malmö was the host of the Eurovision Song Contest.

The Rooseum Center for Contemporary Art, founded in 1988 by the Swedish art collector and financier Fredrik Roos and housed in a former power station built in 1900, was one of the foremost centers for contemporary art in Europe during the 1980s and 1990s. By 2006, most of the collection had been sold off and the museum was on a time-out; by 2010 Rooseum had been dismantled and a subsidiary of Moderna Museet inaugurated in its place. Moderna Museet Malmö opened in December 2009 in the old Rooseum building. It is a part of the Moderna Museet, with independent exhibitions of modern and contemporary art. The collection of Moderna Museet holds key pieces by, among others, Marcel Duchamp, Louise Bourgeois, Pablo Picasso, Niki de Saint Phalle, Salvador Dalí, Carolee Schneemann, Henri Matisse and Robert Rauschenberg.

Malmö Museum ("Malmö Museer") is a municipal and regional museum featuring exhibitions on technology, shipping, natural history and history. It has an aquarium and an art museum, and Malmöhus Castle is also operated as a part of the museum. Exhibitions are primarily shown at Slottsholmen and at the Technology and Maritime Museum ("Teknikens och sjöfartens hus"). Malmö Konsthall, opened in 1975, is one of the largest exhibition halls in Europe for contemporary art.

Malmö's oldest building is St. Peter's Church ("Sankt Petri"), built in the early 14th century in Baltic Brick Gothic, probably modelled on St Mary's Church in Lübeck. The church has a nave, two aisles, a transept and a tower. Its exterior is characterized above all by the flying buttresses spanning airy arches over the aisles and ambulatory. The tower, which fell down twice during the 15th century, got its current look in 1890. Another major church of significance is the Church of Our Saviour, founded in 1870. Another old building is Tunneln, to the west of Sankt Petri Church, which also dates back to around 1300. The oldest parts of Malmö were built between 1300 and 1600, during its first major period of expansion. The central city's layout, as well as some of its oldest buildings, date from this time.
Many of the smaller buildings from this time are typically Scanian: two-story urban houses that show a strong Danish influence. Recession followed in the ensuing centuries. The next period of expansion came in the mid-19th century and led to the modern stone and brick city. This expansion lasted into the 20th century and can be seen in a number of Art Nouveau buildings, among them the Malmö synagogue. Malmö was relatively late to be influenced by the modern ideas of functionalist tenement architecture of the 1930s. Around 1965, the government initiated the so-called Million Programme, intended to offer affordable apartments in the outskirts of major Swedish cities. This period also saw the reconstruction (and razing) of much of the historical city center.

Since the late 1990s, Malmö has seen a more cosmopolitan architecture. Västra Hamnen (the Western Harbor), like most of the harbor to the north of the city center, was industrial. In 2001 its reconstruction began as an urban residential neighbourhood with 500 residential units, most of which were part of the Bo01 exhibition. The exhibition had two main objectives: to develop housing units that were self-sufficient in terms of energy, and to greatly diminish phosphorus emissions. Among the new buildings is the Turning Torso, a skyscraper with a twisting design, the majority of which is residential; it became Malmö's new landmark. A more recent addition (2015) is the development of Malmö Live, a building featuring a hotel, a concert hall, a congress hall and a sky bar in the center of Malmö. Point Hyllie is a new 110 m commercial tower under construction as of 2018.

The beach "Ribersborg", usually called "Ribban" by locals, south-west of the harbor area, is a man-made shallow beach stretching along Malmö's coastline. Despite Malmö's chilly climate, it is sometimes referred to as the "Copacabana of Malmö". It is the site of the Ribersborg open-air bath, opened in the 1890s. The long boardwalk at the Western Harbor, with "Scaniaparken" and "Daniaparken", has become a favorite summer hang-out for the people of Malmö and is a popular place for bathing. The harbor is particularly popular with Malmö's vibrant student community and has been the scene of several impromptu outdoor parties and gatherings.

In the third week of August each year the festival "Malmöfestivalen" fills the streets of Malmö with different kinds of cuisine and events. BUFF International Film Festival, an international film festival for children and young people, is held in Malmö every year in March. Nordisk Panorama – Nordic Short & Doc Film Festival, a festival for short films and documentaries by filmmakers from the Nordic countries, is held every year in September. Malmö Arab Film Festival (MAFF), the largest Arabic film festival in Europe, is also held in Malmö. The Nordic Game conference takes place in Malmö every April or May; the event consists of the conference itself, a recruitment expo and a game expo, and attracts hundreds of game development professionals every year. Malmö also hosts other third-party events that cater to the communities residing in the city, including religious and political celebrations.

"Sydsvenskan", founded in 1870, is Malmö's largest daily newspaper, with an average circulation of 130,000. Its main competitor is the regional daily "Skånska Dagbladet", which has a circulation of 34,000. The tabloid "Kvällsposten" still has a minimal editorial staff but is today essentially a local edition of a Stockholm tabloid.
The Social Democratic "Arbetet" was edited and printed in Malmö between 1887 and 2000. In addition to these, a number of free-of-charge papers, generally dealing with entertainment, music and fashion, have local editions (for instance "City", "Rodeo", "Metro" and "Nöjesguiden"). Malmö is also home to the Egmont Group's Swedish magazine operations. A number of local and regional radio and TV broadcasters are based in the Greater Malmö area.

Sports in southern Sweden are dominated by football. Over the years the city's best football team has been Malmö FF, who play in the top-level Allsvenskan. Malmö FF had their most successful periods in the 1970s and 1980s, when they won the league several times. In 1979, they advanced to the final of the European Cup, defeating AS Monaco, Dynamo Kiev and Wisła Kraków, and Austria Wien along the way, but lost the final at the Munich Olympic Stadium against Nottingham Forest by a single goal, scored by Trevor Francis just before half time. To date, they are the only Swedish football club to have reached the final of the competition. Malmö FF is the club where, for instance, Bosse Larsson and Zlatan Ibrahimović began their football careers. A second football team, IFK Malmö, played in Sweden's top flight for about 20 years, and a quarterfinal in the European Cup remains the greatest achievement in the club's history. Today, the club resides in the sixth tier of the Swedish league system.

FC Rosengård (formerly LdB Malmö) play in the top-level Damallsvenskan, the women's football league. FC Rosengård have won the league 10 times and the national cup title 5 times. In 2014, they reached the semi-final of the Champions League, which they ultimately lost to the German side 1. FFC Frankfurt. The Brazilian football player Marta, widely regarded as the best female football player of all time, played for FC Rosengård between 2014 and 2017.

Malmö Stadion was inaugurated in conjunction with the 1958 FIFA World Cup; in its opening match, world champions West Germany defeated Argentina 3–1 in front of 31,156 spectators, a record attendance for Malmö. Another two games of the tournament were decided at the stadium.

Malmö has athletes competing in a variety of sports. The most notable other sports team is the ice hockey team Malmö Redhawks. Created by millionaire Percy Nilsson, they quickly rose to the highest rank in the early to mid-1990s and won two Swedish championships, but have for a number of years found themselves outside the top flight. A first-division handball team, HK Malmö, attracts a fair amount of attendance. The rugby union team Malmö RC, founded in 1954, has won 6 national championships; the club has teams for men, women and juniors. Gaelic football has also been introduced to Malmö: the men of Malmö G.A.A. have won the Scandinavian Championships twice and the women once. Other notable team sports are baseball, American football and Australian football. Among non-team sports, badminton and athletics are the most popular, together with East Asian martial arts and boxing. Basketball is also a fairly big sport in the city, with clubs including Malbas and SF Srbija.

Women are permitted by the city council to swim topless in public swimming pools; everyone must wear bathing attire, but covering the breasts is not mandatory.
https://en.wikipedia.org/wiki?curid=19021
Measurement Measurement is the assignment of a number to a characteristic of an object or event, which can be compared with other objects or events. The scope and application of measurement depend on the context and discipline. In the natural sciences and engineering, measurements do not apply to nominal properties of objects or events, consistent with the guidelines of the "International Vocabulary of Metrology" published by the International Bureau of Weights and Measures. However, in other fields, such as statistics and the social and behavioural sciences, measurements can have multiple levels, including nominal, ordinal, interval and ratio scales.

Measurement is a cornerstone of trade, science, technology, and quantitative research in many disciplines. Historically, many measurement systems existed for the varied fields of human existence, to facilitate comparisons in those fields. Often these were achieved by local agreements between trading partners or collaborators. Since the 18th century, developments have progressed towards unifying, widely accepted standards that resulted in the modern International System of Units (SI). This system reduces all physical measurements to a mathematical combination of seven base units. The science of measurement is pursued in the field of metrology.

The measurement of a property may be categorized by the following criteria: type, magnitude, unit, and uncertainty. Together these enable unambiguous comparisons between measurements. Measurements most commonly use the International System of Units (SI) as a comparison framework. The system defines seven fundamental units: kilogram, metre, candela, second, ampere, kelvin, and mole. Six of these units have long been defined without reference to a particular physical object serving as a standard (artifact-free), while the kilogram was, until its 2019 redefinition, embodied in an artifact resting at the headquarters of the International Bureau of Weights and Measures in Sèvres near Paris. Artifact-free definitions fix measurements at an exact value related to a physical constant or other invariable phenomenon in nature, in contrast to standard artifacts, which are subject to deterioration or destruction; the measurement unit can then change only through increased accuracy in determining the value of the constant it is tied to. The first proposal to tie an SI base unit to an experimental standard independent of fiat was by Charles Sanders Peirce (1839–1914), who proposed to define the metre in terms of the wavelength of a spectral line. This directly influenced the Michelson–Morley experiment; Michelson and Morley cite Peirce and improve on his method.

With the exception of a few fundamental quantum constants, units of measurement are derived from historical agreements. Nothing inherent in nature dictates that an inch has to be a certain length, nor that a mile is a better measure of distance than a kilometre. Over the course of human history, however, first for convenience and then for necessity, standards of measurement evolved so that communities would have certain common benchmarks. Laws regulating measurement were originally developed to prevent fraud in commerce. Units of measurement are generally defined on a scientific basis, overseen by governmental or independent agencies, and established in international treaties, pre-eminent of which is the General Conference on Weights and Measures (CGPM), established in 1875 by the Metre Convention, overseeing the International System of Units (SI).
For example, the metre was redefined in 1983 by the CGPM in terms of the speed of light, the kilogram was redefined in 2019 in terms of the Planck constant, and the international yard was defined in 1960 by the governments of the United States, United Kingdom, Australia and South Africa as exactly 0.9144 metres. In the United States, the National Institute of Standards and Technology (NIST), a division of the United States Department of Commerce, regulates commercial measurements. In the United Kingdom, the role is performed by the National Physical Laboratory (NPL); in Australia, by the National Measurement Institute; in South Africa, by the Council for Scientific and Industrial Research; and in India, by the National Physical Laboratory of India.

Before SI units were widely adopted around the world, the British systems of English units and later imperial units were used in Britain, the Commonwealth and the United States. The system came to be known as U.S. customary units in the United States and is still in use there and in a few Caribbean countries. These various systems of measurement have at times been called "foot-pound-second" systems, after the Imperial units for length, weight and time, even though the tons, hundredweights, gallons and nautical miles, for example, differ between the U.S. and Imperial units. Many Imperial units remain in use in Britain, which has officially switched to the SI system, with a few exceptions such as road signs, which are still in miles. Draught beer and cider must be sold by the imperial pint, and milk in returnable bottles can be sold by the imperial pint. Many people measure their height in feet and inches and their weight in stone and pounds, to give just a few examples. Imperial units are used in many other places; for example, in many Commonwealth countries that are considered metricated, land area is measured in acres and floor space in square feet, particularly for commercial transactions (rather than government statistics). Similarly, gasoline is sold by the gallon in many countries that are considered metricated.

The metric system is a decimal system of measurement based on its units for length, the metre, and for mass, the kilogram. It exists in several variations, with different choices of base units, though these do not affect its day-to-day use. Since the 1960s, the International System of Units (SI) has been the internationally recognised metric system. Metric units of mass, length, and electricity are widely used around the world for both everyday and scientific purposes. The International System of Units (abbreviated SI, from the French name "Système International d'Unités") is the modern revision of the metric system. It is the world's most widely used system of units, both in everyday commerce and in science. The SI was developed in 1960 from the metre–kilogram–second (MKS) system, rather than the centimetre–gram–second (CGS) system, which in turn had many variants.

In the SI, base units are the simple measurements for time, length, mass, temperature, amount of substance, electric current and light intensity. Derived units are constructed from the base units; for example, the watt, the unit for power, is defined from the base units as m²·kg·s⁻³. Other physical properties may be measured in compound units, such as material density, measured in kg/m³. The SI allows easy conversion when switching among units having the same base but different prefixes. To convert from metres to centimetres it is only necessary to multiply the number of metres by 100, since there are 100 centimetres in a metre. Inversely, to switch from centimetres to metres one multiplies the number of centimetres by 0.01, or divides by 100.
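To make the prefix arithmetic concrete, here is a minimal sketch (our own illustration, not any standard library; the table and function name are hypothetical) that converts a value between SI prefixes, exploiting the fact that each prefix stands for a power of ten:

```python
# Convert a value between SI prefixes of the same base unit. Each prefix
# stands for a power of ten, so conversion is a single multiplication.
# The table and function are illustrative, not a standard API.

PREFIX_EXPONENTS = {
    "k": 3,    # kilo
    "h": 2,    # hecto
    "": 0,     # no prefix (base unit)
    "d": -1,   # deci
    "c": -2,   # centi
    "m": -3,   # milli
}

def convert(value: float, from_prefix: str, to_prefix: str) -> float:
    """Convert `value` between SI prefixes, e.g. convert(2.5, "", "c") is metres to centimetres."""
    shift = PREFIX_EXPONENTS[from_prefix] - PREFIX_EXPONENTS[to_prefix]
    return value * 10.0 ** shift

print(convert(2.5, "", "c"))   # 250.0     (2.5 m  -> 250 cm)
print(convert(250, "c", ""))   # 2.5       (250 cm -> 2.5 m)
print(convert(1.0, "k", "m"))  # 1000000.0 (1 km   -> 1,000,000 mm)
```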
A ruler, or rule, is a tool used in, for example, geometry, technical drawing, engineering and carpentry, to measure lengths or distances or to draw straight lines. Strictly speaking, the "ruler" is the instrument used to rule straight lines, while the calibrated instrument used for determining length is called a "measure"; however, common usage calls both instruments "rulers", and the special name "straightedge" is used for an unmarked rule. The use of the word "measure", in the sense of a measuring instrument, survives only in the phrase "tape measure", an instrument that can be used to measure but cannot be used to draw straight lines. A two-metre carpenter's rule can be folded down to a length of only 20 centimetres, to easily fit in a pocket, and a five-metre-long tape measure easily retracts to fit within a small housing. Some non-systematic names are applied to some multiples of some units.

The Australian building trades adopted the metric system in 1966, and the units used for measurement of length are metres (m) and millimetres (mm). Centimetres (cm) are avoided, as they cause confusion when reading plans. For example, the length two and a half metres is usually recorded as 2500 mm or 2.5 m; it would be considered non-standard to record this length as 250 cm.

American surveyors use a decimal-based system of measurement devised by Edmund Gunter in 1620. The base unit is Gunter's chain, which is subdivided into 4 rods, each of 16.5 ft, or 100 links of 0.66 feet. A link is abbreviated "lk", and links "lks", in old deeds and land surveys done for the government.
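The chain's overall length follows from either subdivision given above; this short sketch (our own worked example, with hypothetical helper names) checks that 4 rods and 100 links agree, and converts links to metres using the exact international foot of 0.3048 m (one third of the 0.9144 m yard mentioned earlier):

```python
import math

# Gunter's chain: derived from the subdivisions above, the chain works
# out to 66 feet whether computed from rods or from links.
ROD_FT = 16.5      # one rod, in feet
LINK_FT = 0.66     # one link, in feet
FT_TO_M = 0.3048   # international foot, exactly 0.3048 m

chain_from_rods = 4 * ROD_FT      # 4 rods per chain   -> 66.0 ft
chain_from_links = 100 * LINK_FT  # 100 links per chain -> 66.0 ft
assert math.isclose(chain_from_rods, chain_from_links)

def links_to_metres(links: float) -> float:
    """Convert a surveyed distance in links to metres."""
    return links * LINK_FT * FT_TO_M

print(chain_from_rods)       # 66.0 feet per chain
print(links_to_metres(100))  # about 20.1168 -- one full chain, in metres
```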
Time is an abstract measurement of elemental changes over a non-spatial continuum. It is denoted by numbers and/or named periods such as hours, days, weeks, months and years. It is an apparently irreversible series of occurrences within this non-spatial continuum, and is also used to denote an interval between two relative points on this continuum.

"Mass" refers to the intrinsic property of all material objects to resist changes in their momentum. "Weight", on the other hand, refers to the downward force produced when a mass is in a gravitational field. In free fall (no net gravitational force), objects lack weight but retain their mass. The Imperial units of mass include the ounce, pound, and ton. The metric units gram and kilogram are units of mass. One device for measuring weight or mass is called a weighing scale or, often, simply a "scale". A spring scale measures force but not mass, while a balance compares weights; both require a gravitational field to operate. Some of the most accurate instruments for measuring weight or mass are based on load cells with a digital read-out, but these too require a gravitational field to function and would not work in free fall.

The measures used in economics are physical measures, nominal price value measures and real price measures. These measures differ from one another by the variables they measure and by the variables excluded from measurements.

In the field of survey research, measures are taken from individual attitudes, values, and behavior, using questionnaires as a measurement instrument. Like all other measurements, measurement in survey research is vulnerable to measurement error, i.e. the departure of the value provided by the measurement instrument from the true value. In substantive survey research, measurement error can lead to biased conclusions and wrongly estimated effects. To obtain accurate results, when measurement errors appear, the results need to be corrected for them.

Several conventions govern how the exactness of a measurement should be displayed. Since accurate measurement is essential in many fields, and since all measurements are necessarily approximations, a great deal of effort must be taken to make measurements as accurate as possible. For example, consider the problem of measuring the time it takes an object to fall a distance of one metre (about 39 in). Using physics, it can be shown that, in the gravitational field of the Earth, it should take any object about 0.45 second to fall one metre. In practice, however, a number of sources of error arise, along with further sources of experimental error. Scientific experiments must be carried out with great care to eliminate as much error as possible, and to keep error estimates realistic.
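The 0.45-second figure follows from the constant-acceleration relation d = ½gt², i.e. t = √(2d/g). The sketch below is a minimal check of that claim, assuming standard gravity g ≈ 9.81 m/s² and neglecting air resistance; the function name is ours, for illustration only:

```python
import math

# Time for an object to fall a distance d from rest, ignoring air
# resistance: d = (1/2) * g * t**2, so t = sqrt(2 * d / g).
G = 9.81  # standard gravity, in m/s^2 (approximate)

def fall_time(distance_m: float) -> float:
    """Return the idealized free-fall time in seconds for a drop of `distance_m` metres."""
    return math.sqrt(2 * distance_m / G)

print(round(fall_time(1.0), 3))  # 0.452 -- about 0.45 s, as stated above
```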
In the classical definition, which is standard throughout the physical sciences, measurement is the determination or estimation of ratios of quantities. Quantity and measurement are mutually defined: quantitative attributes are those that it is possible to measure, at least in principle. The classical concept of quantity can be traced back to John Wallis and Isaac Newton, and was foreshadowed in Euclid's "Elements".

In the representational theory, measurement is defined as "the correlation of numbers with entities that are not numbers". The most technically elaborated form of representational theory is also known as additive conjoint measurement. In this form of representational theory, numbers are assigned based on correspondences or similarities between the structure of number systems and the structure of qualitative systems. A property is quantitative if such structural similarities can be established. In weaker forms of representational theory, such as that implicit within the work of Stanley Smith Stevens, numbers need only be assigned according to a rule. The concept of measurement is often misunderstood as merely the assignment of a value, but it is possible to assign a value in a way that is not a measurement in terms of the requirements of additive conjoint measurement. One may assign a value to a person's height, but unless it can be established that there is a correlation between measurements of height and empirical relations, it is not a measurement according to additive conjoint measurement theory. Likewise, computing and assigning arbitrary values, like the "book value" of an asset in accounting, is not a measurement because it does not satisfy the necessary criteria.

Representational theory involves three elements:
1) The empirical relation. In science, an empirical relationship is a relationship or correlation based solely on observation rather than theory; it requires only confirmatory data, irrespective of theoretical basis.
2) The rule of mapping. The real world is the domain of the mapping, and the mathematical world is its range. When we map an attribute to a mathematical system, we have many choices for both the mapping and the range.
3) The representation condition of measurement.

Information theory recognises that all data are inexact and statistical in nature. Thus the definition of measurement is: "A set of observations that reduce uncertainty where the result is expressed as a quantity." This definition is implied in what scientists actually do when they measure something and report both the mean and statistics of the measurements. In practical terms, one begins with an initial guess as to the expected value of a quantity, and then, using various methods and instruments, reduces the uncertainty in the value. Note that in this view, unlike the positivist representational theory, all measurements are uncertain, so instead of assigning one value, a range of values is assigned to a measurement. This also implies that there is not a clear or neat distinction between estimation and measurement.

In quantum mechanics, a measurement is an action that determines a particular property (position, momentum, energy, etc.) of a quantum system. Before a measurement is made, a quantum system is simultaneously described by all values in a range of possible values, where the probability of measuring each value is determined by the wavefunction of the system. When a measurement is performed, the wavefunction of the quantum system "collapses" to a single, definite value. The unambiguous meaning of the measurement problem is an unresolved fundamental issue in quantum mechanics.

In biology, there is no well-established theory of measurement. However, the importance of the theoretical context is emphasized, and the theoretical context stemming from the theory of evolution leads to articulating a theory of measurement with historicity as a fundamental notion.
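Returning to the quantum-mechanical notion of measurement described above: as a toy illustration only (a simplified finite-dimensional model with hypothetical function names, not a statement about any real system), the probability rule can be sketched as sampling an outcome with probability equal to the squared magnitude of its amplitude, after which the state "collapses" onto that outcome:

```python
import random

# Toy model of quantum measurement in a finite basis: outcome i is drawn
# with probability |amplitude_i|**2, after which the state "collapses"
# onto the measured basis state. Purely illustrative.

def measure(amplitudes: list[complex]) -> tuple[int, list[complex]]:
    """Sample a measurement outcome and return (outcome, collapsed state)."""
    probs = [abs(a) ** 2 for a in amplitudes]
    total = sum(probs)                  # normalize in case the state is not unit-length
    probs = [p / total for p in probs]
    outcome = random.choices(range(len(amplitudes)), weights=probs)[0]
    collapsed = [0j] * len(amplitudes)
    collapsed[outcome] = 1 + 0j         # definite post-measurement state
    return outcome, collapsed

# An equal superposition of two states: each outcome has probability 0.5.
state = [2 ** -0.5 + 0j, 2 ** -0.5 + 0j]
print(measure(state))
```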
https://en.wikipedia.org/wiki?curid=19022
Malden Island Malden Island, sometimes called Independence Island in the nineteenth century, is a low, arid, uninhabited atoll in the central Pacific Ocean. It is one of the Line Islands belonging to the Republic of Kiribati. The lagoon is entirely enclosed by land but is connected to the sea by underground channels, and is quite salty. The island is chiefly notable for its mysterious prehistoric ruins, its once-extensive deposits of phosphatic guano (exploited by Australian interests from 1860 to 1927), its former use as the site of the first British H-bomb tests (Operation Grapple, 1957), and its current importance as a protected area for breeding seabirds. The island is designated as the "Malden Island Wildlife Sanctuary". In 2014 the Kiribati government established a fishing exclusion zone around each of the southern Line Islands (Caroline (commonly called Millennium), Flint, Vostok, Malden, and Starbuck).

Malden Island is located south of the equator, far south of Honolulu, Hawaii, and well west of the coast of South America. The nearest land is uninhabited Starbuck Island, to the southwest. The closest inhabited place is Tongareva (Penrhyn Island), to the southwest. The nearest airport is on Kiritimati (Christmas Island), to the northwest. Other nearby islands (all uninhabited) include Jarvis Island, to the northwest, Vostok Island, to the south-southeast, and Caroline (Millennium) Island, to the southeast.

The island has roughly the shape of an equilateral triangle, with its southwest side running northwest to southeast. The west and south corners are slightly truncated, shortening the north, east and southwest coasts and adding shorter west and south coasts about 1 to 2 km (½–1 mi) in length. A large, mostly shallow, irregularly shaped lagoon, containing a number of small islets, fills the east central part of the island. The lagoon is entirely enclosed by land, but only by relatively narrow strips along its north and east sides. It is connected to the sea by underground channels, and is quite salty. Most of the land area of the island lies to the south and west of the lagoon.

The island is very low; its highest elevations are found along a rim that closely follows the coastline. The interior forms a depression that is only a few meters above sea level in the western part and is below sea level (filled by the lagoon) in the east central part. Because of this topography, the ocean cannot be seen from much of Malden's interior. There is no standing fresh water on Malden Island, though a fresh-water lens may exist. A continuous heavy surf falls all along the coast, forming a narrow white to gray sandy beach. Except on the west coast, where the white sandy beach is more extensive than elsewhere, a strip of dark gray coral rubble, forming a series of low ridges parallel to the coast, lies within the narrow beach, extending inward to the island rim.

Because of Malden's isolation and aridity, its vegetation is extremely limited. Sixteen species of vascular plants have been recorded, of which nine are indigenous. The island is largely covered in stunted "Sida fallax" scrub, low herbs and grasses. Few, if any, of the clumps of stunted "Pisonia grandis" once found on the island still survive. Coconut palms planted by the guano diggers did not thrive, although a few dilapidated trees may still be seen.
Introduced weeds, including the low-growing woody vine "Tribulus cistoides", now dominate extensive open areas, providing increased cover for young sooty terns. Malden is an important breeding island for about a dozen species, including masked boobies ("Sula dactylatra"), red-footed boobies ("Sula sula"), tropicbirds ("Phaethontidae"), great frigatebirds ("Fregata minor"), lesser frigatebirds ("Fregata ariel"), grey-backed terns ("Onychoprion lunata"), red-tailed tropicbirds ("Phaethon rubricauda") and sooty terns ("Sterna fuscata"). It is also an important winter stop for the bristle-thighed curlew ("Numenius tahitiensis"), a migrant from Alaska, and other migratory seabirds (nineteen species in all). Two kinds of lizards, the mourning gecko ("Lepidodactylus lugubris") and the snake-eyed skink ("Cryptoblepharus boutonii"), are present on Malden, together with a brown libellulid dragonfly. Cats, pigs, goats and house mice were introduced to Malden during the guano-digging period. While the goats and pigs have all died off, feral cats and house mice are still present. Small numbers of green turtles nest on the beaches, and hermit crabs abound.

The earliest documented Western sighting of Malden Island was on March 25, 1825, by Capt. Samuel Bunker (1796–1874) of the whaler "Alexander" of Nantucket. Bunker's journal for that day also mentioned that "it proved to be an island seen by the "Sarah Ann" of London and the "Independence" of Nantucket, Capt. Whippey". These too were whaling vessels. This seemingly insignificant logbook excerpt may kill not two but three birds with one stone: it probably explains why Malden Island was once known as "Independence Island", it could debunk the early-twentieth-century myth of Sarah Ann Island (a so-called phantom island), and it suggests that Bunker was probably not even the first Westerner to see the island. Unable to land, Bunker sailed on the next day.

On 30 July 1825, the island was seen again by Captain the 7th Lord Byron (a cousin of the famous poet). Byron, commanding the British warship HMS "Blonde", was returning to London from a special mission to Honolulu to repatriate the remains of the young king and queen of Hawaii, who had died of measles during a visit to Britain. The island was named after Lt. Charles Robert Malden, navigator of the "Blonde", who sighted the island and briefly explored it. Andrew Bloxam, naturalist of the "Blonde", and James Macrae, a botanist travelling for the Royal Horticultural Society, joined in exploring the island and recorded their observations. Malden may also have been the island sighted in 1823 by another whaling captain, William Clark, aboard the "Winslow".

At the time of its discovery by Europeans, Malden was found to be unoccupied, but the remains of ruined temples and other structures indicated that the island had at one time been inhabited. At various times these remains have been speculatively attributed to "wrecked seamen", "buccaneers", "South American Incas", "early Chinese navigators", etc. In 1924, the Malden ruins were examined by an archaeologist from the Bishop Museum in Honolulu, Kenneth Emory, who concluded that they were the creation of a small Polynesian population which had resided there for perhaps several generations some centuries earlier. The ancient stone structures are located around the beach ridges, principally on the north and south sides. A total of 21 archaeological sites have been discovered, three of which (on the island's northwest side) are larger than the others.
These sites include temple platforms, called marae, house sites, and graves. Comparisons with stone structures on Tuamotu atolls show that a population of between 100 and 200 natives could have produced all of the Malden structures. Marae of a similar type are found on Raivavae, one of the Austral Islands. Various wells used by these ancients were found by later settlers to be dry or brackish. In the first half of the nineteenth century, during the heyday of American whaling in the central Pacific, Malden was visited on a number of occasions by American whalers. In 1918, the schooner "Annie Larsen", infamous for her role in the Hindu-German Conspiracy, was stranded at Malden Island. Malden was claimed by the U.S. Guano Company under the Guano Islands Act of 1856, which authorized citizens to take possession of uninhabited islands under the authority of the United States for the purpose of removing guano, a valuable agricultural fertilizer. Before the American company could begin its operations, the island was occupied by an Australian company under British licence. This company and its successors exploited the island continuously from the 1860s through 1927. Writer Beatrice Grimshaw, a visitor to Malden in the guano-digging era, decried the "glaring barrenness of the bit island", declaring that "...shade, coolness, refreshing fruit, pleasant sights and sounds: there are none. For those who live on the island, it is the scene of an exile which has to be endured somehow or other". She described Malden as containing "a little settlement fronted by a big wooden pier, and a desolate plain of low greyish-green herbage, relieved here and there by small bushes bearing insignificant yellow flowers". Water for settlers was produced by large distillation plants, since no fresh-water wells could be successfully dug on the island. The five or six European supervisors on the island were given "a row of little tin-roofed, one-storeyed houses above the beach", while the native labourers from Niue Island and Aitutaki were housed in "big, barn-like shelters". Grimshaw described these edifices as being "large, bare, shady buildings fitted with wide shelves, on which the men spread their mats and pillows to sleep". Their food consisted of "rice, biscuits, yams, tinned beef, and tea, with a few cocoanuts for those who may fall sick". Food for the white supervisors consisted of "tinned food of various kinds, also bread, rice, fowls, pork, goat, and goat's milk", but vegetables were hard to come by. Indentured labourers on Malden were contracted for one year, paid ten shillings per week plus room and board, and repatriated to their home islands when their contracts expired. Salaries for the supervisors were described as "quite high". Work hours were 5 am to 5 pm, with one hour and 45 minutes given off for meals. The guano diggers constructed a unique railroad on Malden Island, with cars powered by large sails. Laborers pushed empty carts from the loading area up the tramway to the digging pits, where they were then loaded with guano. At the end of the day, the sails were unfurled, and the train cars were whisked back to the settlement by the prevailing southeasterly winds. Though cars were known to jump the tracks more than once during these excursions, the system seems to have worked fairly well. Railroad handcars were also used. This tramway remained in use on Malden as late as 1924, and its roadbed still exists on the island today.
Although guano digging continued on Malden through the early 1920s, all human activity on the island had ceased by the early 1930s. No further human use seems to have been made of Malden until 1956, when the United Kingdom selected it as the "instrumentation site" for its first series of thermonuclear (H-bomb) weapons tests, based at Kiritimati (Christmas Island). British officials insisted that Malden should not be called a "target island". Nevertheless, the bombing target marker was located at the south point of the island, and three thermonuclear devices were detonated at high altitude a short distance offshore in 1957. The airstrip constructed on the island by the Royal Engineers in 1956–57 remained usable in July 1979. Malden was incorporated in the British Gilbert and Ellice Islands Colony in 1972, and included in the portion of the colony which became the Republic of Kiribati in 1979. The U.S. continued to dispute British sovereignty, based on its nineteenth-century Guano Act claims, until after Kiribati became independent. On 20 September 1979, representatives of the United States and Kiribati met on Tarawa Atoll in the Gilberts group of Kiribati, and signed a treaty of friendship between their two nations (commonly referred to as the Treaty of Tarawa of 1979) by which the United States recognized Kiribati's sovereignty over Malden and thirteen other islands in the Line and Phoenix Islands groups. This treaty entered into force on 23 September 1983. The main value of the island to Kiribati lies in the resources of the Exclusive Economic Zone which surrounds it, particularly the rich tuna fisheries. Gypsum deposits on the island itself are extensive, but do not appear to be economically viable under foreseeable market conditions, mainly due to the cost of transportation. Some revenue has been realized from ecotourism; the "World Discoverer", an adventure cruise ship operated by "Society Expeditions", visited the island once or twice annually for several years in the mid-1990s. Malden was reserved as a wildlife sanctuary and closed area, and was officially designated as the "Malden Island Wildlife Sanctuary" on 29 May 1975, under the 1975 Wildlife Conservation Ordinance. The principal purpose of this reservation was to protect the large breeding populations of seabirds. This sanctuary is administered by the Wildlife Conservation Unit of the Ministry of Line and Phoenix Islands Development, headquartered on Kiritimati. There is no resident staff at Malden, however, and occasional visits by foreign yachtsmen and fishermen cannot be monitored from Kiritimati. A fire in 1977, possibly caused by visitors, threatened breeding seabirds; this remains a potential threat, particularly during periods of drought.
https://en.wikipedia.org/wiki?curid=19023
Mater lectionis In the spelling of Hebrew and some other Semitic languages, matres lectionis (from Latin for "mothers of reading"; singular: "mater lectionis") are certain consonants that are used to indicate a vowel. The letters that do this in Hebrew are "aleph" (א), "he" (ה), "waw" (ו) and "yod" (י). The "yod" and "waw" in particular are more often vowels than they are consonants. In Arabic, the "matres lectionis" (though they are much less often referred to thus) are "ʾalif" (ا), "wāw" (و) and "yāʾ" (ي). The original values of the "matres lectionis" correspond closely to what are called in modern linguistics glides or semivowels. Because the scripts used to write some Semitic languages lack vowel letters, unambiguous reading of a text might be difficult. Therefore, to indicate vowels (mostly long ones), consonant letters are used. For example, in the Hebrew construct-state form "bēt", meaning "the house of", the middle letter "yod" acts as a vowel, but in the corresponding absolute-state form "bayit" ("house"), which is spelled the same, the same letter represents a genuine consonant. "Matres lectionis" are extensively employed only in Hebrew, Aramaic, Syriac and Arabic, but the phenomenon is also found in the Ugaritic, Moabite, South Arabian and Phoenician alphabets. Historically, the practice of using "matres lectionis" seems to have originated when the "ay" and "aw" diphthongs, written with the "yod" and "waw" consonant letters respectively, monophthongized to the simple long vowels "ē" and "ō". This epiphenomenal association between consonant letters and vowel sounds was then seized upon and used in words without historic diphthongs. In general terms, it is observable that early Phoenician texts have very few "matres lectionis", and that during most of the 1st millennium BCE, Hebrew and Aramaic were quicker to develop "matres lectionis" than Phoenician. However, in its latest period of development in North Africa (referred to as "Punic"), Phoenician developed a very full use of "matres lectionis", including the use of the letter "ayin" (ע), also used for this purpose much later in Yiddish orthography. In pre-exilic Hebrew, there was a significant development of the use of the letter "he" to indicate word-final vowels other than "ī" and "ū". This was probably inspired by the phonological change of the third-person singular possessive suffix from "-ahū" > "-aw" > "-ō" in most environments. However, in later periods of Hebrew, the orthography was changed so that word-final "ō" was no longer written with "he", except in a few archaically spelled proper names, such as Solomon and Shiloh. The difference between the spelling of the third-person singular possessive suffix (as attached to singular nouns) with "he" in early Hebrew versus with "waw" in later Hebrew has become an issue in the authentication of the Jehoash Inscription. According to Sass (5), already in the Middle Kingdom there were some cases of "matres lectionis", i.e. consonant graphemes which were used to transcribe vowels in foreign words, namely in Punic (Jensen 290, Naveh 62), Aramaic, and Hebrew ("he", "waw", "yod"; sometimes even "aleph"; Naveh 62). Naveh (ibid.) notes that the earliest Aramaic and Hebrew documents already used "matres lectionis". Some scholars argue that the Greeks must therefore have borrowed their alphabet from the Arameans. However, the practice has older roots, as the Semitic cuneiform alphabet of Ugarit (13th century BC) already had "matres lectionis" (Naveh 138).
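To make the ambiguity concrete, the following Python snippet is a minimal, purely illustrative sketch of the situation described above: a single consonantal spelling can admit several vocalized readings, and a reader without vowel pointing must choose among them from context. The spellings and glosses follow the "bēt"/"bayit" example in the text; the dictionary contents and function name are invented for this illustration and are not a real lexicon.

# One unpointed consonantal spelling can stand for several readings;
# this follows the article's bet/bayit case (both written bet-yod-taw).
READINGS = {
    "בית": [
        "bayit ('house', absolute state; yod is a true consonant)",
        "bet ('the house of', construct state; yod acts as a vowel)",
    ],
}

def readings(consonantal_spelling):
    """Return all attested vocalizations for an unpointed spelling;
    without niqqud or context, the reader must disambiguate."""
    return READINGS.get(consonantal_spelling, [])

print(readings("בית"))  # two possible readings for one spelling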
The earliest method of indicating some vowels in Hebrew writing was to use the consonant letters "yod", "waw", "he", and "aleph" of the Hebrew alphabet to also write long vowels in some cases. Originally, "he" and "aleph" were used as "matres lectionis" only at the end of words, while "yod" and "waw" were used mainly to write the original diphthongs "aw" and "ay" as well as original vowel+[y]+vowel sequences (which sometimes simplified to plain long vowels). Gradually, as this proved insufficient for differentiating between similar nouns, "yod" and "waw" were also inserted to mark some long vowels of non-diphthongal origin. If words can be written with or without "matres lectionis", spellings that include the letters are called "malē" (Hebrew) or "plene" (Latin), meaning "full", and spellings without them are called "ḥaser" or "defective". In some verb forms, "matres lectionis" are almost always used. Around the 9th century CE, it was decided that the system of "matres lectionis" did not suffice to indicate the vowels precisely enough for purposes of liturgical recitation of Biblical texts, so a supplemental vowel pointing system ("niqqud") (diacritic symbols indicating vowel pronunciation and other important phonological features not written by the traditional basic consonantal orthography) joined "matres lectionis" as part of the Hebrew writing system. In some words in Hebrew, there is a choice of whether to use a "mater lectionis" or not, and in modern printed texts "matres lectionis" are sometimes used even for short vowels, which is considered to be grammatically incorrect according to traditional norms, though instances are found as far back as Talmudic times. Such texts from Judaea and Galilee were noticeably more inclined to "malē" spellings than texts from Babylonia. Similarly, in the Middle Ages, Ashkenazi Jews tended to use "malē" spellings under the influence of European languages, but Sephardi Jews tended to use "ḥaser" spellings under the influence of Arabic. In Arabic there is no such choice, and the almost invariable rule is that a long vowel is written with a "mater lectionis" and a short vowel with a diacritic symbol, but the Uthmanic orthography, the one in which the Quran is traditionally written and printed, has some differences, which are not always consistent. Also, under influence from the orthography of European languages, the transliteration of borrowed words into Arabic is usually done using "matres lectionis" in place of diacritics, even when the latter are more suitable or when words from another Semitic language, such as Hebrew, are transliterated. That phenomenon is augmented by the neglect of diacritics in most printed forms since the beginning of mechanical printing. The name given to the three "matres lectionis" by traditional Arabic grammar is "ḥurūf al-līn wa-l-madd" ('consonants of softness and lengthening'), or "ḥurūf al-ʿilla" ('causal consonants' or 'consonants of infirmity'), because, as in Greek grammar, words with 'accidents' were deemed to be afflicted, ill, in opposition to 'healthy' words without accidents. Informal orthographies of spoken varieties of Arabic also use "ha" to indicate a shorter version of "alif", a usage augmented by the ambiguity of the use of "ha" and "taa marbuta" in formal Arabic orthography. It is a formal orthography in other languages that use the Arabic script, such as Kurdish alphabets. Syriac-Aramaic vowels are classified into three groups: the "alap" (ܐ), the "waw" (ܘ), and the "yod" (ܝ).
The "mater lectionis" was developed as early as the 6th century to represent long vowels, which were earlier denoted by a dot under the line. The most frequent ones are the "yod" and the "waw", while the "alap" is mostly restricted to some transliterated words. Most commonly, "yod" indicates "i" or "e", while "waw" indicates "o" or "u". "Aleph" was not systematically developed as a "mater lectionis" in Hebrew (unlike in Aramaic and Arabic), but it is occasionally used to indicate an "a" vowel. (However, a silent , indicating an original glottal stop consonant sound that has become silent in Hebrew pronunciation, can occur after almost any vowel.) At the end of a word, "he" can also be used to indicate that a vowel "a" should be pronounced. Examples: Later, in some adaptations of the Arabic alphabet (such those sometimes used for Kurdish and Uyghur) and of the Hebrew alphabet (such as those used for Judeo-Arabic, Yiddish and Judaeo-Spanish), "matres lectionis" were generally used for all or most vowels, thus in effect becoming vowel letters: see Yiddish orthography. This tendency was taken to its logical conclusion in fully alphabetic scripts such as Greek, Latin, and Cyrillic. Many of the vowel letters in such languages historically go back to "matres lectionis" in the Phoenician script. For example, the letter was originally derived from the consonant letter "yod". Similarly the vowel letters in the Avestan alphabet were adapted from "matres lectionis" in the version of the Aramaic alphabet adapted as the Pahlavi scripts.
https://en.wikipedia.org/wiki?curid=19026
My Fair Lady My Fair Lady is a musical based on George Bernard Shaw's 1913 play "Pygmalion", with book and lyrics by Alan Jay Lerner and music by Frederick Loewe. The story concerns Eliza Doolittle, a Cockney flower girl who takes speech lessons from Professor Henry Higgins, a phonetician, so that she may pass as a lady. The original Broadway and London shows starred Rex Harrison and Julie Andrews. The musical's 1956 Broadway production was a notable critical and popular success. It set a record for the longest run of any show on Broadway up to that time. It was followed by a hit London production, a popular film version, and many revivals. "My Fair Lady" has been called "the perfect musical". In Edwardian London, Eliza Doolittle is a Cockney flower girl with a thick, unintelligible accent. The noted phonetician Professor Henry Higgins encounters Eliza at Covent Garden and laments the vulgarity of her dialect ("Why Can't the English?"). Higgins also meets Colonel Pickering, another linguist, and invites him to stay as his houseguest. Eliza and her friends wonder what it would be like to live a comfortable life ("Wouldn't It Be Loverly?"). Eliza's father, Alfred P. Doolittle, stops by the next morning searching for money for a drink ("With a Little Bit of Luck"). Soon after, Eliza comes to Higgins's house, seeking elocution lessons so that she can get a job as an assistant in a florist's shop. Higgins wagers Pickering that, within six months, by teaching Eliza to speak properly, he will enable her to pass for a proper lady. Eliza becomes part of Higgins's household. Though Higgins sees himself as a kindhearted man who merely cannot get along with women ("I'm an Ordinary Man"), to others he appears self-absorbed and misogynistic. Eliza endures Higgins's tyrannical speech tutoring. Frustrated, she dreams of different ways to kill him ("Just You Wait"). Higgins's servants lament the stressful atmosphere ("The Servants' Chorus"). Just as Higgins is about to give up on her, Eliza suddenly recites one of her diction exercises in perfect upper-class style ("The Rain in Spain"). Though Mrs Pearce, the housekeeper, insists that she go to bed, Eliza declares she is too excited to sleep ("I Could Have Danced All Night"). For her first public tryout, Higgins takes Eliza to his mother's box at Ascot Racecourse ("Ascot Gavotte"). Though Eliza shocks everyone when she forgets herself while watching a race and reverts to foul language, she does capture the heart of Freddy Eynsford-Hill. Freddy calls on Eliza that evening, and he declares that he will wait for her in the street outside Higgins' house ("On the Street Where You Live"). Eliza's final test requires her to pass as a lady at the Embassy Ball. After more weeks of preparation, she is ready. All the ladies and gentlemen at the ball admire her, and the Queen of Transylvania invites her to dance with the prince ("Embassy Waltz"). A Hungarian phonetician, Zoltan Karpathy, attempts to discover Eliza's origins. Higgins allows Karpathy to dance with Eliza. The ball is a success; Karpathy declares Eliza to be a Hungarian princess. Pickering and Higgins revel in their triumph ("You Did It"), failing to pay attention to Eliza. Insulted at receiving no credit for her success, Eliza packs up and leaves the Higgins house. As she leaves she finds Freddy, who begins to tell her how much he loves her, but she tells him that she has heard enough words; if he really loves her, he should show it ("Show Me").
Eliza and Freddy return to Covent Garden, but she finds she no longer feels at home there. Her father is there as well, and he tells her that he has received a surprise bequest from an American millionaire, which has raised him to middle-class respectability, and that he now must marry his lover. Doolittle and his friends have one last spree before the wedding ("Get Me to the Church on Time"). Higgins awakens the next morning. He finds himself out of sorts without Eliza. He wonders why she left after the triumph at the ball and concludes that men (especially himself) are far superior to women ("A Hymn to Him"). Pickering notices the Professor's lack of consideration and also leaves the Higgins house. Higgins despondently visits his mother's house, where he finds Eliza. Eliza declares she no longer needs Higgins ("Without You"). As Higgins walks home, he realizes he has grown attached to Eliza ("I've Grown Accustomed to Her Face"). At home, he sentimentally reviews the recording he made the day Eliza first came to him for lessons, hearing his own harsh words. Eliza suddenly appears in his home. In suppressed joy at their reunion, Professor Higgins scoffs and asks, "Eliza, where the devil are my slippers?" In the mid-1930s, film producer Gabriel Pascal acquired the rights to produce film versions of several of George Bernard Shaw's plays, "Pygmalion" among them. However, Shaw, having had a bad experience with "The Chocolate Soldier", a Viennese operetta based on his play "Arms and the Man", refused permission for "Pygmalion" to be adapted into a musical. After Shaw died in 1950, Pascal asked lyricist Alan Jay Lerner to write the musical adaptation. Lerner agreed, and he and his partner Frederick Loewe began work. But they quickly realised that the play violated several key rules for constructing a musical: the main story was not a love story, there was no subplot or secondary love story, and there was no place for an ensemble. Many people, including Oscar Hammerstein II, who, with Richard Rodgers, had also tried his hand at adapting "Pygmalion" into a musical and had given up, told Lerner that converting the play to a musical was impossible, so he and Loewe abandoned the project for two years. During this time, the collaborators separated and Gabriel Pascal died. Lerner had been trying to musicalize "Li'l Abner" when he read Pascal's obituary and found himself thinking about "Pygmalion" again. When he and Loewe reunited, everything fell into place. All of the insurmountable obstacles that had stood in their way two years earlier disappeared when the team realised that the play needed few changes apart from (according to Lerner) "adding the action that took place between the acts of the play". They then excitedly began writing the show. However, Chase Manhattan Bank was in charge of Pascal's estate, and the musical rights to "Pygmalion" were sought both by Lerner and Loewe and by Metro-Goldwyn-Mayer, whose executives called Lerner to discourage him from challenging the studio. Loewe said, "We will write the show without the rights, and when the time comes for them to decide who is to get them, we will be so far ahead of everyone else that they will be forced to give them to us." For five months Lerner and Loewe wrote, hired technical designers, and made casting decisions. The bank, in the end, granted them the musical rights.
Lerner settled on the title "My Fair Lady", relating both to one of Shaw's provisional titles for "Pygmalion", "Fair Eliza", and to the final line of every verse of the nursery rhyme "London Bridge Is Falling Down". Recalling that the Gershwins' 1925 musical "Tell Me More" had been titled "My Fair Lady" in its out-of-town tryout, and also had a musical number under that title, Lerner made a courtesy call to Ira Gershwin, alerting him to the use of the title for the Lerner and Loewe musical. Noël Coward was the first to be offered the role of Henry Higgins, but he turned it down, suggesting the producers cast Rex Harrison instead. After much deliberation, Harrison agreed to accept the part. Mary Martin was an early choice for the role of Eliza Doolittle, but she declined it. Young actress Julie Andrews was "discovered" and cast as Eliza after the show's creative team went to see her Broadway debut in "The Boy Friend". Moss Hart agreed to direct after hearing only two songs. The experienced orchestrators Robert Russell Bennett and Philip J. Lang were entrusted with the arrangements, and the show quickly went into rehearsal. The musical's script used several scenes that Shaw had written especially for the 1938 film version of "Pygmalion", including the Embassy Ball sequence and the final scene of the 1938 film rather than the ending of Shaw's original play. The montage showing Eliza's lessons was also expanded, combining both Lerner's and Shaw's dialogue. The artwork on the original playbill (and the sleeve of the cast recording) is by Al Hirschfeld, who drew the playwright Shaw as a heavenly puppetmaster pulling the strings on the Henry Higgins character, while Higgins in turn attempts to control Eliza Doolittle. The musical had its pre-Broadway tryout at New Haven's Shubert Theatre. At the first preview, Rex Harrison, who was unaccustomed to singing in front of a live orchestra, "announced that under no circumstances would he go on that night...with those thirty-two interlopers in the pit". He locked himself in his dressing room and came out little more than an hour before curtain time. The whole company had been dismissed but was recalled, and opening night was a success. "My Fair Lady" then played for four weeks at the Erlanger Theatre in Philadelphia, beginning on February 15, 1956. The musical premiered on Broadway on March 15, 1956, at the Mark Hellinger Theatre in New York City. It transferred to the Broadhurst Theatre and then The Broadway Theatre, where it closed on September 29, 1962, after 2,717 performances, a record at the time. Moss Hart directed and Hanya Holm was choreographer. In addition to stars Rex Harrison, Julie Andrews and Stanley Holloway, the original cast included Robert Coote, Cathleen Nesbitt, John Michael King, and Reid Shelton. Harrison was replaced by Edward Mulhare in November 1957, and Sally Ann Howes replaced Andrews in February 1958. By the start of 1959, it was the highest-grossing Broadway show of all time, with a gross of $10 million. The original cast recording, released on April 2, 1956, went on to become the best-selling album in the country in 1956. The West End production, in which Harrison, Andrews, Coote, and Holloway reprised their roles, opened on April 30, 1958, at the Theatre Royal, Drury Lane, where it ran for five and a half years (2,281 performances). Edwardian musical comedy star Zena Dare made her last appearance in the musical as Mrs. Higgins. Leonard Weir played Freddy.
Harrison left the London cast in March 1959, followed by Andrews in August 1959 and Holloway in October 1959. The first Broadway revival opened at the St. James Theatre on March 25, 1976, and ran there until December 5, 1976; it then transferred to the Lunt-Fontanne Theatre, running from December 9, 1976, until it closed on February 20, 1977, after a total of 377 performances and 7 previews. The director was Jerry Adler, with choreography by Crandall Diehl, based on the original choreography by Hanya Holm. Ian Richardson starred as Higgins, with Christine Andreas as Eliza, George Rose as Alfred P. Doolittle and Robert Coote recreating his role as Pickering. Both Richardson and Rose were nominated for the Tony Award for Best Actor in a Musical, with the award going to Rose. A London revival opened at the Adelphi Theatre in October 1979, with Tony Britton as Higgins, Liz Robertson as Eliza, Dame Anna Neagle as Higgins' mother, Peter Bayliss, Richard Caldicot and Peter Land. The revival was produced by Cameron Mackintosh and directed by the author, Alan Jay Lerner. A national tour was directed by Robin Midgley. Gillian Lynne choreographed. Britton and Robertson were both nominated for Olivier Awards. Another Broadway revival of the original production opened at the Uris Theatre on August 18, 1981, and closed on November 29, 1981, after 120 performances and 4 previews. Rex Harrison recreated his role as Higgins, with Jack Gwillim, Milo O'Shea, and Cathleen Nesbitt, who, at 93 years old, reprised her role as Mrs. Higgins. The revival co-starred Nancy Ringham as Eliza. The director was Patrick Garland, with choreography by Crandall Diehl, recreating the original Hanya Holm dances. A new revival directed by Howard Davies opened at the Virginia Theatre on December 9, 1993, and closed on May 1, 1994, after 165 performances and 16 previews. The cast starred Richard Chamberlain, Melissa Errico and Paxton Whitehead. Julian Holloway, son of Stanley Holloway, recreated his father's role of Alfred P. Doolittle. Donald Saddler was the choreographer. Cameron Mackintosh produced a new production on March 15, 2001, at the Royal National Theatre, which transferred to the Theatre Royal, Drury Lane on July 21. Directed by Trevor Nunn, with choreography by Matthew Bourne, the musical starred Martine McCutcheon as Eliza and Jonathan Pryce as Higgins, with Dennis Waterman as Alfred P. Doolittle. This revival won three Olivier Awards: Outstanding Musical Production, Best Actress in a Musical (Martine McCutcheon) and Best Theatre Choreographer (Matthew Bourne), with Anthony Ward receiving a nomination for Set Design. In December 2001, Joanna Riding took over the role of Eliza, and in May 2002, Alex Jennings took over as Higgins; each won a 2003 Olivier Award, for Best Actress and Best Actor in a Musical respectively. In March 2003, Anthony Andrews and Laura Michelle Kelly took over the roles until the show closed on August 30, 2003. A UK tour of this production began September 28, 2005. The production starred Amy Nuttall and Lisa O'Hare as Eliza, Christopher Cazenove as Henry Higgins, Russ Abbot and Gareth Hale as Alfred Doolittle, and Honor Blackman and Hannah Gordon as Mrs. Higgins. The tour ended August 12, 2006. In 2003 a production of the musical at the Hollywood Bowl starred John Lithgow as Henry Higgins, Melissa Errico as Eliza Doolittle, Roger Daltrey as Alfred P. Doolittle and Paxton Whitehead as Colonel Pickering. A Broadway revival produced by Lincoln Center Theater and Nederlander Presentations Inc.
began previews on March 15, 2018, at the Vivian Beaumont Theater and officially opened on April 19, 2018. It was directed by Bartlett Sher with choreography by Christopher Gattelli, scenic design by Michael Yeargan, costume design by Catherine Zuber and lighting design by Donald Holder. The cast included Lauren Ambrose as Eliza, Harry Hadden-Paton as Professor Henry Higgins, Diana Rigg as Mrs. Higgins, Norbert Leo Butz as Alfred P. Doolittle, Allan Corduner as Colonel Pickering, Jordan Donica as Freddy, and Linda Mugleston as Mrs. Pearce. Replacements included Rosemary Harris as Mrs. Higgins, Laura Benanti as Eliza, and Danny Burstein, then Alexander Gemignani, as Alfred P. Doolittle. The revival closed on July 7, 2019, after 39 previews and 509 regular performances. A North American tour of the production, starring Shereen Ahmed and Laird Mackintosh as Eliza and Higgins, opened in December 2019 and is scheduled to run through August 2020. A German translation of "My Fair Lady" opened on October 1, 1961, at the Theater des Westens in Berlin, starring Karin Hübner and Paul Hubschmid (and conducted, as was the Broadway opening, by Franz Allers). Coming at the height of Cold War tensions, just weeks after the closing of the East Berlin–West Berlin border and the erection of the Berlin Wall, this was the first staging of a Broadway musical in Berlin since World War II. As such it was seen as a symbol of West Berlin's cultural renaissance and resistance. The loss of attendance from East Berlin, now no longer possible, was partly made up by a "musical air bridge" of flights bringing in patrons from West Germany, and the production was embraced by Berliners, running for two years. In 2007 the New York Philharmonic held a full-costume concert presentation of the musical. The concert had a four-day engagement, from March 7 to 10, at Lincoln Center's Avery Fisher Hall. It starred Kelsey Grammer as Higgins, Kelli O'Hara as Eliza, Charles Kimbrough as Pickering, and Brian Dennehy as Alfred Doolittle. Marni Nixon played Mrs. Higgins; Nixon had provided the singing voice of Audrey Hepburn in the film version. A U.S. tour of Mackintosh's 2001 West End production ran from September 12, 2007, to June 22, 2008. The production starred Christopher Cazenove as Higgins, Lisa O'Hare as Eliza, Walter Charles as Pickering, Tim Jerome as Alfred Doolittle and Nixon as Mrs. Higgins, replacing Sally Ann Howes. An Australian tour produced by Opera Australia commenced in May 2008. The production starred Reg Livermore as Higgins, Taryn Fiebig as Eliza, Robert Grubb as Alfred Doolittle and Judi Connelli as Mrs Pearce. John Wood took the role of Alfred Doolittle in Queensland, and Richard E. Grant played the role of Henry Higgins at the Theatre Royal, Sydney. A new production was staged by Robert Carsen at the Théâtre du Châtelet in Paris for a limited 27-performance run, opening December 9, 2010, and closing January 2, 2011. It was presented in English. The costumes were designed by Anthony Powell and the choreography was by Lynne Page. The cast was as follows: Sarah Gabriel / Christine Arand (Eliza Doolittle), Alex Jennings (Henry Higgins), Margaret Tyzack (Mrs. Higgins), Nicholas Le Prevost (Colonel Pickering), Donald Maxwell (Alfred Doolittle), and Jenny Galloway (Mrs. Pearce). A new production of "My Fair Lady" opened at Sheffield Crucible on December 13, 2012. Dominic West played Henry Higgins, and Carly Bawden played Eliza Doolittle. Sheffield Theatres' Artistic Director Daniel Evans was the director.
The production ran until January 26, 2013. The Gordon Frost Organisation, together with Opera Australia, presented a production at the Sydney Opera House from August 30 to November 5, 2016. It was directed by Julie Andrews and featured the set and costume designs of the original 1956 production by Oliver Smith and Cecil Beaton. The production sold more tickets than any other in the history of the Sydney Opera House. The show's opening run in Sydney was so successful that in November 2016, ticket pre-sales were released for a re-run in Sydney, with the extra shows scheduled between August 24 and September 10, 2017, at the Capitol Theatre. In 2017, the show toured to Brisbane from March 12 and Melbourne from May 11. The cast featured Alex Jennings as Higgins (Charles Edwards for the Brisbane and Melbourne seasons), Anna O'Byrne as Eliza, Reg Livermore as Alfred P. Doolittle, Robyn Nevin as Mrs. Higgins (later Pamela Rabe), Mark Vincent as Freddy, Tony Llewellyn-Jones as Colonel Pickering, Deidre Rubenstein as Mrs. Pearce, and David Whitney as Karpathy. According to Geoffrey Block, "Opening night critics immediately recognized that "My Fair Lady" fully measured up to the Rodgers and Hammerstein model of an integrated musical...Robert Coleman...wrote 'The Lerner-Loewe songs are not only delightful, they advance the action as well. They are ever so much more than interpolations, or interruptions.'" The musical opened to "unanimously glowing reviews, one of which said 'Don't bother reading this review now. You'd better sit right down and send for those tickets...' Critics praised the thoughtful use of Shaw's original play, the brilliance of the lyrics, and Loewe's well-integrated score." A sampling of praise from critics was excerpted in a book form of the musical published in 1956. The reception from Shavians was more mixed, however. Eric Bentley, for instance, called it "a terrible treatment of Mr. Shaw's play, [undermining] the basic idea [of the play]", even though he acknowledged it as "a delightful show". The film version was made in 1964, directed by George Cukor and with Harrison again in the part of Higgins. The casting of Audrey Hepburn instead of Julie Andrews as Eliza was controversial, partly because theatregoers regarded Andrews as perfect for the part and partly because Hepburn's singing voice was dubbed (by Marni Nixon). Jack L. Warner, the head of Warner Bros., which produced the film, wanted "a star with a great deal of name recognition", but since Andrews did not have any film experience, he thought it would be more successful to cast a movie star. (Andrews went on to star in "Mary Poppins" that same year, for which she won both the Academy Award and the Golden Globe for Best Actress.) Lerner in particular disliked the film version of the musical, thinking it did not live up to the standards of Moss Hart's original direction. He was also unhappy with the casting of Hepburn as Eliza Doolittle and with the decision to shoot the film in its entirety at the Warner Bros. studio rather than, as he would have preferred, in London. Despite the controversy, "My Fair Lady" was considered a major critical and box office success, and won eight Oscars, including Best Picture of the Year, Best Actor for Rex Harrison, and Best Director for George Cukor. Columbia Pictures announced a new adaptation in 2008.
The intention was to shoot on location in Covent Garden, Drury Lane, Tottenham Court Road, Wimpole Street and the Ascot Racecourse. John Madden was signed to direct the film, and Colin Firth and Carey Mulligan were possible choices for the leading roles. Emma Thompson wrote a new screenplay adaptation for the project, but it was shelved.
https://en.wikipedia.org/wiki?curid=19027
Martial arts film Martial arts films, also colloquially known as karate or kung fu films, are a subgenre of action films that feature numerous martial arts fights between characters. These fights are usually the films' primary appeal and entertainment value, and often are a method of storytelling and character expression and development. Martial arts are frequently featured in training scenes and other sequences in addition to fights. Martial arts films commonly include other types of action, such as hand-to-hand combat, stuntwork, chases, and gunfights. Asian martial arts films are known for a more minimalist approach, rooted in their filmmaking traditions. As with other action films, martial arts films are dominated by action to varying degrees, sometimes aided by wire work; many martial arts films have only a minimal plot and little character development and focus almost exclusively on the action, while others have more creative and complex plots and characters along with action scenes. Films of the latter type are generally considered to be artistically superior films, but many films of the former type are commercially successful and well received by fans of the genre. One of the earliest Hollywood movies to employ martial arts was the 1955 film "Bad Day at Black Rock", though Spencer Tracy's scenes contained barely any realistic fight sequences, consisting mostly of soft knifehand strikes. Martial arts films contain many characters who are martial artists, and these roles are often played by actors who are real martial artists. If not, actors frequently train in preparation for their roles, or the action director may rely more on stylized action or film-making tricks like camera angles, editing, doubles, undercranking, wire work and computer-generated imagery. Trampolines and springboards were once used to increase the height of jumps. The minimalist style employs smaller sets and little space for improvised but explosive fight scenes, as seen in Jackie Chan's films. These techniques are sometimes used by real martial artists as well, depending on the style of action in the film. During the 1970s and 1980s, the most visible presence of martial arts films was the hundreds of English-dubbed kung fu and ninja films produced by the Shaw Brothers, Godfrey Ho and other Hong Kong producers. These films were widely broadcast on North American television in weekend timeslots that were often colloquially known as "Kung Fu Theater", "Black Belt Theater" or variations thereof. Included in this list of films are commercial classics like "The Big Boss", "Drunken Master" and "One Armed Boxer". Martial arts films have been produced all over the world, but the genre has been dominated by Hong Kong action cinema, peaking from 1971, with the rise of Bruce Lee, until a general decline in the industry in the mid-1990s; the genre was revived close to the 2000s. Other notable figures in the genre include Jackie Chan, Jet Li, Sammo Hung, Yuen Biao and Donnie Yen. Sonny Chiba, Etsuko Shihomi, and Hiroyuki Sanada starred in numerous karate and jidaigeki films from Japan during the 1970s and early 1980s. Hollywood has also participated in the genre with actors such as Chuck Norris, Sho Kosugi, Jean-Claude Van Damme, Steven Seagal, Brandon Lee (son of Bruce Lee), Wesley Snipes, Gary Daniels, Mark Dacascos and Jason Statham. In the 2000s, Thailand's film industry became an international force in the genre with the films of Tony Jaa, and the cinema of Vietnam followed suit with "The Rebel" and "Clash".
In more recent years, the Indonesian film industry has offered "Merantau" (2009) and "The Raid" (2011). Women have also played key roles in the genre, including such actresses as Michelle Yeoh, Angela Mao and Cynthia Rothrock. In addition, Western animation has ventured into the genre, the most successful effort being the internationally hailed DreamWorks Animation film franchise "Kung Fu Panda", starring Jack Black and Angelina Jolie. In the Chinese-speaking world, martial arts films are commonly divided into two subcategories: the "wuxia" period films (武俠片), and the more modern kung fu films (功夫片, best epitomized in the films of Bruce Lee). Kung fu films are a significant movie genre in themselves. Like westerns for Americans, they have become part of the identity of Chinese cinema. As the most prestigious movie type in Chinese film history, kung fu movies were among the first Chinese films produced, and the "wuxia" period films (武俠片) are the original form of Chinese kung fu films. The wuxia period films came into vogue owing to the centuries-long popularity of wuxia novels (武俠小說). For example, the wuxia novels of Jin Yong and Gu Long directly led to the prevalence of wuxia period films. Outside of the Chinese-speaking world, the most famous wuxia film is Ang Lee's "Crouching Tiger, Hidden Dragon", based on the Wang Dulu series of wuxia novels; it earned four Academy Awards, including Best Foreign Language Film. Martial arts westerns are usually American films inexpensively filmed in Southwestern United States locations, transposing martial arts themes into an "old west" setting; e.g., "Red Sun" with Charles Bronson and Toshiro Mifune.
https://en.wikipedia.org/wiki?curid=19028
Musical film Musical film is a film genre in which songs sung by the characters are interwoven into the narrative, sometimes accompanied by dancing. The songs usually advance the plot or develop the film's characters, but in some cases, they serve merely as breaks in the storyline, often as elaborate "production numbers." The musical film was a natural development of the stage musical after the emergence of sound film technology. Typically, the biggest difference between film and stage musicals is the use of lavish background scenery and locations that would be impractical in a theater. Musical films characteristically contain elements reminiscent of theater; performers often treat their song and dance numbers as if a live audience were watching. In a sense, the viewer becomes the diegetic audience, as the performer looks directly into the camera and performs to it. The 1930s through the early 1950s are considered to be the golden age of the musical film, when the genre's popularity was at its highest in the Western world. Disney's "Snow White and the Seven Dwarfs", the earliest Disney animated feature film, was a musical which won an honorary Oscar for Walt Disney at the 11th Academy Awards. Musical short films were made by Lee de Forest in 1923–24. Beginning in 1926, thousands of Vitaphone shorts were made, many featuring bands, vocalists, and dancers. The earliest feature-length films with synchronized sound had only a soundtrack of music and occasional sound effects that played while the actors portrayed their characters just as they did in silent films: without audible dialogue. "The Jazz Singer", released in 1927 by Warner Brothers, was the first to include an audio track with both non-diegetic and diegetic music, but it had only a short sequence of spoken dialogue. This feature-length film was also a musical, featuring Al Jolson singing "Dirty Hands, Dirty Face", "Toot, Toot, Tootsie", "Blue Skies", and "My Mammy". Historian Scott Eyman wrote, "As the film ended and applause grew with the houselights, Sam Goldwyn's wife Frances looked around at the celebrities in the crowd. She saw 'terror in all their faces', she said, as if they knew that 'the game they had been playing for years was finally over'." Still, only isolated sequences featured "live" sound; most of the film had only a synchronous musical score. In 1928, Warner Brothers followed this up with another Jolson part-talkie, "The Singing Fool", which was a blockbuster hit. Theaters scrambled to install the new sound equipment and to hire Broadway composers to write musicals for the screen. The first all-talking feature, "Lights of New York", included a musical sequence in a night club. The enthusiasm of audiences was so great that in less than a year all the major studios were making sound pictures exclusively. "The Broadway Melody" (1929) had a show-biz plot about two sisters competing for a charming song-and-dance man. Advertised by MGM as the first "All-Talking, All-Singing, All-Dancing" feature film, it was a hit and won the Academy Award for Best Picture for 1929. There was a rush by the studios to hire talent from the stage to star in lavishly filmed versions of Broadway hits. "The Love Parade" (Paramount 1929) starred Maurice Chevalier and newcomer Jeanette MacDonald, written by Broadway veteran Guy Bolton. Warner Brothers produced the first screen operetta, "The Desert Song", in 1929. They spared no expense and photographed a large percentage of the film in Technicolor.
This was followed by the first all-color, all-talking musical feature, "On with the Show" (1929). The most popular film of 1929 was the second all-color, all-talking feature, "Gold Diggers of Broadway" (1929). This film broke all box office records and remained the highest-grossing film ever produced until 1939. Suddenly, the market became flooded with musicals, revues, and operettas. The following all-color musicals were produced in 1929 and 1930 alone: "The Show of Shows" (1929), "Sally" (1929), "The Vagabond King" (1930), "Follow Thru" (1930), "Bright Lights" (1930), "Golden Dawn" (1930), "Hold Everything" (1930), "The Rogue Song" (1930), "Song of the Flame" (1930), "Song of the West" (1930), "Sweet Kitty Bellairs" (1930), "Under a Texas Moon" (1930), "Bride of the Regiment" (1930), "Whoopee!" (1930), "King of Jazz" (1930), "Viennese Nights" (1930), and "Kiss Me Again" (1930). In addition, there were scores of musical features released with color sequences. Hollywood released more than 100 musical films in 1930, but only 14 in 1931. By late 1930, audiences had been oversaturated with musicals, and studios were forced to cut the music from films that were then being released. For example, "Life of the Party" (1930) was originally produced as an all-color, all-talking musical comedy. Before it was released, however, the songs were cut out. The same thing happened to "Fifty Million Frenchmen" (1931) and "Manhattan Parade" (1932), both of which had been filmed entirely in Technicolor. Marlene Dietrich sang songs successfully in her films, and Rodgers and Hart wrote a few well-received films, but even their popularity waned by 1932. The public had quickly come to associate color with musicals, and thus the decline in their popularity also resulted in a decline in color productions. The taste for musicals revived again in 1933 when director Busby Berkeley began to enhance the traditional dance number with ideas drawn from the drill precision he had experienced as a soldier during World War I. In films such as "42nd Street" and "Gold Diggers of 1933" (1933), Berkeley choreographed a number of films in his unique style. Berkeley's numbers typically begin on a stage but gradually transcend the limitations of theatrical space: his ingenious routines, involving human bodies forming patterns like a kaleidoscope, could never fit onto a real stage, and the intended perspective was from straight above. Musical stars such as Fred Astaire and Ginger Rogers were among the most popular and highly respected personalities in Hollywood during the classical era; the Fred and Ginger pairing was particularly successful, resulting in a number of classic films, such as "Top Hat" (1935), "Swing Time" (1936), and "Shall We Dance" (1937). Many dramatic actors gladly participated in musicals as a way to break away from their typecasting. For instance, the multi-talented James Cagney had originally risen to fame as a stage singer and dancer, but his repeated casting in "tough guy" roles and mob films gave him few chances to display these talents. Cagney's Oscar-winning role in "Yankee Doodle Dandy" (1942) allowed him to sing and dance, and he considered it to be one of his finest moments. Many comedies (and a few dramas) included their own musical numbers. The Marx Brothers' films featured a musical number in nearly every film, allowing the Brothers to highlight their musical talents.
Their final film, entitled "Love Happy" (1949), featured Vera-Ellen, considered by colleagues and professionals alike to be among the finest dancers of the half century. Similarly, the vaudevillian comedian W. C. Fields joined forces with the comic actress Martha Raye and the young comedian Bob Hope in Paramount Pictures' musical anthology "The Big Broadcast of 1938". The film also showcased the talents of several internationally recognized musical artists, including Kirsten Flagstad (Norwegian operatic soprano), Wilfred Pelletier (Canadian conductor of the Metropolitan Opera Orchestra), Tito Guizar (Mexican tenor), Shep Fields conducting his Rippling Rhythm Jazz Orchestra, and John Serry Sr. (Italian-American concert accordionist). In addition to the Academy Award for Best Original Song (1938), the film earned an ASCAP Film and Television Award (1989) for Bob Hope's signature song "Thanks for the Memory". During the late 1940s and into the early 1950s, a production unit at Metro-Goldwyn-Mayer headed by Arthur Freed made the transition from old-fashioned musical films, whose formula had become repetitive, to something new. (However, they also produced Technicolor remakes of such musicals as "Show Boat", which had previously been filmed in the 1930s.) In 1939, Freed was hired as associate producer for the film "Babes in Arms". Starting in 1944 with "Meet Me in St. Louis", the Freed Unit worked somewhat independently of its own studio to produce some of the most popular and well-known examples of the genre. The products of this unit include "Easter Parade" (1948), "On the Town" (1949), "An American in Paris" (1951), "Singin' in the Rain" (1952), "The Band Wagon" (1953) and "Gigi" (1958). Non-Freed musicals from the studio included "Seven Brides for Seven Brothers" in 1954 and "High Society" in 1956, and the studio distributed Samuel Goldwyn's "Guys and Dolls" in 1955. This era saw musical stars become household names, including Judy Garland, Gene Kelly, Ann Miller, Donald O'Connor, Cyd Charisse, Mickey Rooney, Vera-Ellen, Jane Powell, Howard Keel, and Kathryn Grayson. Fred Astaire was also coaxed out of retirement for "Easter Parade" and made a permanent comeback. The other Hollywood studios proved themselves equally adept at tackling the genre at this time, particularly in the 1950s. Four adaptations of Rodgers and Hammerstein shows - "Oklahoma!", "The King and I", "Carousel", and "South Pacific" - were all successes, while Paramount Pictures released "White Christmas" and "Funny Face", two films which used previously written music by Irving Berlin and the Gershwins, respectively. Warner Bros. produced "Calamity Jane" and "A Star Is Born"; the former film was a vehicle for Doris Day, while the latter provided a big-screen comeback for Judy Garland, who had been out of the spotlight since 1950. Meanwhile, director Otto Preminger, better known for controversial "message pictures", made "Carmen Jones" and "Porgy and Bess", both starring Dorothy Dandridge, who is considered the first African American A-list film star. Celebrated director Howard Hawks also ventured into the genre with "Gentlemen Prefer Blondes". In the 1960s, 1970s, and continuing up to today, the musical film became less of a bankable genre that could be relied upon for sure-fire hits. Audiences for them lessened and fewer musical films were produced as the genre became less mainstream and more specialized.
In the 1960s, the critical and box-office success of the films "West Side Story", "Gypsy", "The Music Man", "Bye Bye Birdie", "My Fair Lady", "Mary Poppins", "The Sound of Music", "A Funny Thing Happened on the Way to the Forum", "The Jungle Book", "Thoroughly Modern Millie", "Oliver!", and "Funny Girl" suggested that the traditional musical was in good health, while French filmmaker Jacques Demy's jazz musicals "The Umbrellas of Cherbourg" and "The Young Girls of Rochefort" were popular with international critics. However, popular musical tastes were being heavily affected by rock and roll and the freedom and youth associated with it, and indeed Elvis Presley made a few films that have been equated with the old musicals in terms of form, though "A Hard Day's Night" and "Help!", starring the Beatles, were more technically audacious. Most of the musical films of the 1950s and 1960s such as "Oklahoma!" and "The Sound of Music" were straightforward adaptations or restagings of successful stage productions. The most successful musicals of the 1960s created specifically for film were "Mary Poppins" and "The Jungle Book", two of Disney's biggest hits of all time. The phenomenal box-office performance of "The Sound of Music" gave the major Hollywood studios more confidence to produce lengthy, large-budget musicals. Despite the resounding success of some of these films, Hollywood also produced a large number of musical flops in the late 1960s and early 1970s which appeared to seriously misjudge public taste. The commercially and/or critically unsuccessful films included "Camelot", "Finian's Rainbow", "Hello Dolly!", "Sweet Charity", "Doctor Dolittle", "Half a Sixpence", "The Happiest Millionaire", "Star!", "Darling Lili", "Goodbye, Mr. Chips", "Paint Your Wagon", "Song of Norway", "On a Clear Day You Can See Forever", "Man of La Mancha", "Lost Horizon", and "Mame". Collectively and individually, these failures crippled several of the major studios. In the 1970s, film culture and the changing demographics of filmgoers placed greater emphasis on gritty realism, while the pure entertainment and theatricality of classical-era Hollywood musicals was seen as old-fashioned. Despite this, "Fiddler on the Roof" and "Cabaret" were more traditional musicals closely adapted from stage shows and were strong successes with critics and audiences. Changing cultural mores and the abandonment of the Hays Code in 1968 also contributed to changing tastes in film audiences. The 1973 film of Andrew Lloyd Webber and Tim Rice's "Jesus Christ Superstar" was met with some criticism by religious groups but was well received. By the mid-1970s, filmmakers avoided the genre in favor of using music by popular rock or pop bands as background music, partly in hope of selling a soundtrack album to fans. "The Rocky Horror Picture Show" was originally released in 1975 and was a critical failure until it began midnight screenings in the 1980s, through which it achieved cult status. 1976 saw the release of the low-budget comic musical "The First Nudie Musical", released by Paramount. The 1978 film version of "Grease" was a smash hit; its songs were original compositions done in a 1950s pop style. However, the sequel "Grease 2" (released in 1982) bombed at the box-office. Films about performers which incorporated gritty drama and musical numbers interwoven as a diegetic part of the storyline were produced, such as "Lady Sings the Blues", "All That Jazz", and "New York, New York".
Some musicals made in Britain experimented with the form, such as Richard Attenborough's "Oh! What a Lovely War" (released in 1969), Alan Parker's "Bugsy Malone" and Ken Russell's "Tommy" and "Lisztomania". A number of film musicals were still being made that were financially and/or critically less successful than in the musical's heyday. They include "1776", "The Wiz", "At Long Last Love", "Mame", "Man of La Mancha", "Lost Horizon", "Godspell", "Phantom of the Paradise", "Funny Lady" (Barbra Streisand's sequel to "Funny Girl"), "A Little Night Music", and "Hair", amongst others. The critical wrath against "At Long Last Love", in particular, was so strong that it was never released on home video. Fantasy musical films "Scrooge", "The Blue Bird", "The Little Prince", "Willy Wonka & the Chocolate Factory", "Pete's Dragon", and Disney's "Bedknobs and Broomsticks" were also released in the 1970s, the latter winning the Academy Award for Best Visual Effects. By the 1980s, financiers grew increasingly confident in the musical genre, partly buoyed by the relative health of the musical on Broadway and London's West End. Productions of the 1980s and 1990s included "The Apple", "Xanadu", "The Blues Brothers", "Annie", "Monty Python's The Meaning of Life", "The Best Little Whorehouse in Texas", "Victor Victoria", "Footloose", "Fast Forward", "A Chorus Line", "Little Shop of Horrors", "Forbidden Zone", "Absolute Beginners", "Labyrinth", "Evita", and "Everyone Says I Love You". However, "Can't Stop the Music", starring the Village People, was a calamitous attempt to resurrect the old-style musical and was released to audience indifference in 1980. "Little Shop of Horrors" was based on an off-Broadway musical adaptation of a 1960 Roger Corman film, a precursor of later film-to-stage-to-film adaptations, including "The Producers". Many animated films of the period – predominantly from Disney – included traditional musical numbers. Howard Ashman, Alan Menken, and Stephen Schwartz had previous musical theater experience and wrote songs for animated films during this time, supplanting Disney workhorses the Sherman Brothers. Starting with 1989's "The Little Mermaid", the Disney Renaissance gave new life to the musical film. Other successful animated musicals included "Aladdin", "The Hunchback of Notre Dame", and "Pocahontas" from Disney proper, "The Nightmare Before Christmas" from Disney division Touchstone Pictures, "The Prince of Egypt" from DreamWorks, "Anastasia" from Fox and Don Bluth, and "South Park: Bigger, Longer & Uncut" from Paramount. ("Beauty and the Beast" and "The Lion King" were adapted for the stage after their blockbuster success.) In the 21st century, movie musicals were reborn with darker musicals, musical biopics, epic drama musicals and comedy-drama musicals such as "Moulin Rouge!", "Chicago", "Walk the Line", "Dreamgirls", "Sweeney Todd: The Demon Barber of Fleet Street", "Les Misérables" and "La La Land"; all of which won the Golden Globe Award for Best Motion Picture – Musical or Comedy in their respective years, while such films as "The Phantom of the Opera", "Hairspray", "Mamma Mia!", "Nine", "Into the Woods", "The Greatest Showman", "Mary Poppins Returns", and "Rocketman" were only nominated. "Chicago" was also the first musical since "Oliver!" to win Best Picture at the Academy Awards. Joshua Oppenheimer's Academy Award-nominated documentary "The Act of Killing" may be considered a nonfiction musical.
One specific musical trend was the rising number of jukebox musicals based on the music of various pop/rock artists, some of which were based on Broadway shows. Examples of Broadway-based jukebox musical films included "Mamma Mia!" (ABBA), "Rock of Ages", and "Sunshine on Leith" (The Proclaimers). Original ones included "Across the Universe" (The Beatles), "Moulin Rouge!" (various pop hits), "Idlewild" (Outkast) and "Yesterday" (The Beatles). Disney also returned to musicals with "Enchanted", "The Princess and the Frog", "Tangled", "Winnie the Pooh", "The Muppets", "Frozen", "Muppets Most Wanted", "Into the Woods", "Moana", "Mary Poppins Returns" and "Frozen II". Following a string of successes with live-action fantasy adaptations of several of its animated features, Disney produced a live-action version of "Beauty and the Beast", the first of these adaptations to be an all-out musical, featuring new songs as well as new lyrics to both the Gaston number and the reprise of the title song. It was followed by two further all-out musical adaptations, "Aladdin" and "The Lion King", each of which features new songs. Pixar also produced "Coco", the company's first musical film. Other animated musical films include "Rio", "Trolls", "Sing", "Smallfoot" and "UglyDolls". Biopics about music artists and showmen were also big in the 21st century. Examples include "8 Mile" (Eminem), "Ray" (Ray Charles), "Walk the Line" (Johnny Cash and June Carter), "La Vie en Rose" (Édith Piaf), "Notorious" (Biggie Smalls), "Jersey Boys" (The Four Seasons), "Love & Mercy" (Brian Wilson), "" (TLC), "" (Aaliyah), "Get on Up" (James Brown), "Whitney" (Whitney Houston), "Straight Outta Compton" (N.W.A), "The Greatest Showman" (P. T. Barnum), "Bohemian Rhapsody" (Freddie Mercury), "The Dirt" (Mötley Crüe) and "Rocketman" (Elton John). Director Damien Chazelle created the musical film "La La Land", starring Ryan Gosling and Emma Stone. It was meant to reintroduce the traditional jazz style of song numbers, with influences from the Golden Age of Hollywood and Jacques Demy's French musicals, while incorporating a contemporary take on the story and characters that balances fantasy numbers with grounded reality. It received 14 nominations at the 89th Academy Awards, tying the record for most nominations with "All About Eve" (1950) and "Titanic" (1997), and won the awards for Best Director, Best Actress, Best Cinematography, Best Original Score, Best Original Song, and Best Production Design. An exception to the decline of the musical film is Indian cinema, especially the Bollywood film industry based in Mumbai (formerly Bombay), where most films have been and still are musicals. The majority of films produced in the Tamil industry based in Chennai (formerly Madras), the Kannada industry ("Sandalwood") based in Bangalore, the Telugu industry based in Hyderabad, and the Malayalam industry are also musicals. Although almost every Indian film is a musical and India produces the largest number of films in the world (its film industry dates to 1913), "Dev D" (directed by Anurag Kashyap, 2009) has been described as the first Bollywood film to be a complete musical; it was followed by "Jagga Jasoos" (directed by Anurag Basu, 2017).
Bollywood musicals have their roots in the traditional musical theatre of India, such as classical Indian musical theatre, Sanskrit drama, and Parsi theatre. Early Bombay filmmakers combined these Indian musical theatre traditions with the musical film format that emerged from early Hollywood sound films. Other early influences on Bombay filmmakers included Urdu literature and the "Arabian Nights". The first Indian sound film, Ardeshir Irani's "Alam Ara" (1931), was a major commercial success. There was clearly a huge market for talkies and musicals; Bollywood and all the regional film industries quickly switched to sound filming. In 1937, Ardeshir Irani, of "Alam Ara" fame, made the first colour film in Hindi, "Kisan Kanya". The next year, he made another colour film, a version of "Mother India". However, colour did not become a popular feature until the late 1950s. At this time, lavish romantic musicals and melodramas were the staple fare at the cinema. Following India's independence, the period from the late 1940s to the early 1960s is regarded by film historians as the "Golden Age" of Hindi cinema. Some of the most critically acclaimed Hindi films of all time were produced during this period. Examples include "Pyaasa" (1957) and "Kaagaz Ke Phool" (1959) directed by Guru Dutt and written by Abrar Alvi, "Awaara" (1951) and "Shree 420" (1955) directed by Raj Kapoor and written by Khwaja Ahmad Abbas, and "Aan" (1952) directed by Mehboob Khan and starring Dilip Kumar. These films expressed social themes mainly dealing with working-class life in India, particularly urban life in the first two examples; "Awaara" presented the city as both a nightmare and a dream, while "Pyaasa" critiqued the unreality of city life. Mehboob Khan's "Mother India" (1957), a remake of his earlier "Aurat" (1940), was the first Indian film to be nominated for the Academy Award for Best Foreign Language Film, which it lost by a single vote. "Mother India" was also an important film that defined the conventions of Hindi cinema for decades. In the 1960s and early 1970s, the industry was dominated by musical romance films with "romantic hero" leads, the most popular being Rajesh Khanna. Other actors during this period included Shammi Kapoor, Jeetendra, Sanjeev Kumar, and Shashi Kapoor, and actresses like Sharmila Tagore, Mumtaz, Saira Banu, Helen and Asha Parekh. By the start of the 1970s, Hindi cinema was experiencing thematic stagnation, dominated by musical romance films. The arrival of the screenwriter duo Salim-Javed, consisting of Salim Khan and Javed Akhtar, marked a paradigm shift, revitalizing the industry. They began the genre of gritty, violent Bombay underworld crime films in the early 1970s, with films such as "Zanjeer" (1973) and "Deewaar" (1975). The 1970s was also when the name "Bollywood" was coined, and when the quintessential conventions of commercial Bollywood films were established. Key to this was the emergence of the masala film genre, which combines elements of multiple genres (action, comedy, romance, drama, melodrama, musical). The masala film was pioneered in the early 1970s by filmmaker Nasir Hussain, along with the screenwriter duo Salim-Javed, and established the Bollywood blockbuster format. "Yaadon Ki Baarat" (1973), directed by Hussain and written by Salim-Javed, has been identified as the first masala film and the "first" quintessentially "Bollywood" film. Salim-Javed went on to write more successful masala films in the 1970s and 1980s.
Masala films turned Amitabh Bachchan into the biggest Bollywood movie star of the 1970s and 1980s. A landmark for the masala film genre was "Amar Akbar Anthony" (1977), directed by Manmohan Desai and written by Kader Khan. Manmohan Desai went on to successfully exploit the genre in the 1970s and 1980s. Along with Bachchan, other popular actors of this era included Feroz Khan, Mithun Chakraborty, Naseeruddin Shah, Jackie Shroff, Sanjay Dutt, Anil Kapoor and Sunny Deol. Actresses from this era included Hema Malini, Jaya Bachchan, Raakhee, Shabana Azmi, Zeenat Aman, Parveen Babi, Rekha, Dimple Kapadia, Smita Patil, Jaya Prada and Padmini Kolhapure. In the late 1980s, Hindi cinema experienced another period of stagnation, with a decline in box-office turnout due to increasing violence, a decline in melodic quality, and a rise in video piracy that led middle-class family audiences to abandon theaters. The turning point came with "Qayamat Se Qayamat Tak" (1988), directed by Mansoor Khan, written and produced by his father Nasir Hussain, and starring his cousin Aamir Khan with Juhi Chawla. Its blend of youthfulness, wholesome entertainment, emotional quotients and strong melodies lured family audiences back to the big screen. It set a new template for Bollywood musical romance films that defined Hindi cinema in the 1990s. The period of Hindi cinema from the 1990s onwards is referred to as "New Bollywood" cinema, linked to economic liberalisation in India during the early 1990s. By the early 1990s, the pendulum had swung back toward family-centric romantic musicals. "Qayamat Se Qayamat Tak" was followed by blockbusters such as "Maine Pyar Kiya" (1989), "Chandni" (1989), "Hum Aapke Hain Kaun" (1994), "Dilwale Dulhania Le Jayenge" (1995), "Raja Hindustani" (1996), "Dil To Pagal Hai" (1997), "Pyaar To Hona Hi Tha" (1998) and "Kuch Kuch Hota Hai" (1998). A new generation of popular actors emerged, such as Aamir Khan, Aditya Pancholi, Ajay Devgan, Akshay Kumar, Salman Khan (Salim Khan's son), and Shahrukh Khan, and actresses such as Madhuri Dixit, Sridevi, Juhi Chawla, Meenakshi Seshadri, Manisha Koirala, Kajol, and Karisma Kapoor. Since the 1990s, the three biggest Bollywood movie stars have been the "Three Khans": Aamir Khan, Shah Rukh Khan, and Salman Khan. Combined, they have starred in most of the top ten highest-grossing Bollywood films. The three Khans have had successful careers since the late 1980s, and have dominated the Indian box office since the 1990s, across three decades. In the 2000s, Bollywood musicals played an instrumental role in the revival of the musical film genre in the Western world. Baz Luhrmann stated that his successful musical film "Moulin Rouge!" (2001) was directly inspired by Bollywood musicals. The film thus pays homage to India, incorporating an Indian-themed play based on the ancient Sanskrit drama "The Little Clay Cart" and a Bollywood-style dance sequence with a song from the film "China Gate". The critical and financial success of "Moulin Rouge!" renewed interest in the then-moribund Western musical genre, and subsequently films such as "Chicago", "The Producers", "Rent", "Dreamgirls", "Hairspray", "", "Across the Universe", "The Phantom of the Opera", "Enchanted" and "Mamma Mia!" were produced, fuelling a renaissance of the genre.
"The Guru" and "The 40-Year-Old Virgin" also feature Indian-style song-and-dance sequences; the Bollywood musical "Lagaan" (2001) was nominated for the Academy Award for Best Foreign Language Film; two other Bollywood films "Devdas" (2002) and "Rang De Basanti" (2006) were nominated for the BAFTA Award for Best Film Not in the English Language; and Danny Boyle's Academy Award winning "Slumdog Millionaire" (2008) also features a Bollywood-style song-and-dance number during the film's end credits. Spain has a history and tradition of musical films that were made independent of Hollywood influence. The first films arise during the Second Spanish Republic of the 1930s and the advent of sound films. A few zarzuelas (Spanish operetta) were even adapted as screenplays during the silent era. The beginnings of the Spanish musical were focused on romantic Spanish archetypes: Andalusian villages and landscapes, gypsys, "bandoleros", and copla and other popular folk songs included in story development. These films had even more box-office success than Hollywood premieres in Spain. The first Spanish film stars came from the musical genre: Imperio Argentina, Estrellita Castro, Florián Rey (director) and, later, Lola Flores, Sara Montiel and Carmen Sevilla. The Spanish musical started to expand and grow. Juvenile stars appear and top the box-office. Marisol, Joselito, Pili & Mili, and Rocío Dúrcal were the major figures of musical films from 1960s to 1970s. Due to Spanish transition to democracy and the rise of "Movida culture", the musical genre fell in production and box-office, only saved by Carlos Saura and his flamenco musical films. Unlike the musical films of Hollywood and Bollywood, popularly identified with escapism, the Soviet musical was first and foremost a form of propaganda. Vladimir Lenin said that cinema was "the most important of the arts." His successor, Joseph Stalin, also recognized the power of cinema in efficiently spreading Communist Party doctrine. Films were widely popular in the 1920s, but it was foreign cinema that dominated the Soviet filmgoing market. Films from Germany and the U.S. proved more entertaining than Soviet director Sergei Eisenstein's historical dramas. By the 1930s it was clear that if the Soviet cinema was to compete with its Western counterparts, it would have to give audiences what they wanted: the glamour and fantasy they got from Hollywood. The musical film, which emerged at that time, embodied the ideal combination of entertainment and official ideology. A struggle between laughter for laughter's sake and entertainment with a clear ideological message would define the golden age of the Soviet musical of the 1930s and 1940s. Then-head of the film industry Boris Shumyatsky sought to emulate Hollywood's conveyor belt method of production, going so far as to suggest the establishment of a Soviet Hollywood. In 1930, the esteemed Soviet film director Sergei Eisenstein went to the United States with fellow director Grigori Aleksandrov to study Hollywood's filmmaking process. The American films greatly impacted Aleksandrov, particularly the musicals. He returned in 1932, and in 1934 directed "The Jolly Fellows", the first Soviet musical. The film was light on plot and focused more on the comedy and musical numbers. Party officials at first met the film with great hostility. Aleksandrov defended his work by arguing the notion of laughter for laughter's sake. 
Finally, when Aleksandrov showed the film to Stalin, the leader decided that musicals were an effective means of spreading propaganda. Messages like the importance of collective labor and rags-to-riches stories would become the plots of most Soviet musicals. The success of "The Jolly Fellows" ensured a place in Soviet cinema for the musical format, but Shumyatsky immediately set strict guidelines to make sure the films promoted Communist values. Shumyatsky's decree "Movies for the Millions" demanded conventional plots, characters, and montage to successfully portray Socialist Realism (the glorification of industry and the working class) on film. The first successful blend of a social message and entertainment was Aleksandrov's "Circus" (1936). It starred his wife, Lyubov Orlova (an operatic singer who had also appeared in "The Jolly Fellows"), as an American circus performer who has to emigrate from the U.S. to the USSR because she has a mixed-race child fathered by a black man. Amidst the backdrop of lavish musical productions, she finally finds love and acceptance in the USSR, providing the message that racial tolerance can only be found in the Soviet Union. The influence of Busby Berkeley's choreography on Aleksandrov's directing can be seen in the musical number leading up to the climax. Another, more obvious reference to Hollywood is the Charlie Chaplin impersonator who provides comic relief throughout the film. Four million people in Moscow and Leningrad went to see "Circus" during its first month in theaters. Another of Aleksandrov's popular films was "The Bright Path" (1940), a reworking of the fairy tale "Cinderella" set in the contemporary Soviet Union. The Cinderella of the story was again Orlova, who by this time was the most popular star in the USSR. It was a fantasy tale, but its moral was that a better life comes from hard work. Whereas in "Circus" the musical numbers involved dancing and spectacle, the only type of choreography in "The Bright Path" is the movement of factory machines. The music was limited to Orlova's singing; here, work provided the spectacle. The other major director of musical films was Ivan Pyryev. Unlike Aleksandrov, Pyryev focused his films on life on the collective farms. His films "Tractor Drivers" (1939), "The Swineherd and the Shepherd" (1941), and his most famous, "Cossacks of the Kuban" (1949), all starred his wife, Marina Ladynina. As in Aleksandrov's "The Bright Path", the only choreography was the work the characters were doing on film. Even the songs were about the joys of working. Rather than giving each film a specific message, Pyryev promoted Stalin's slogan "life has become better, life has become more joyous." Sometimes this message stood in stark contrast with the reality of the time: during the filming of "Cossacks of the Kuban", the Soviet Union was going through a postwar famine, and the actors singing about a time of prosperity were in reality hungry and malnourished. The films did, however, provide escapism and optimism for the viewing public. The most popular film of the brief era of Stalinist musicals was Aleksandrov's 1938 film "Volga-Volga". The star, again, was Lyubov Orlova, and the film featured singing and dancing that had nothing to do with work. It is the most unusual of its type. The plot surrounds a love story between two individuals who want to play music. They are unrepresentative of Soviet values in that their focus is more on their music than their jobs.
The gags poke fun at the local authorities and bureaucracy. There is no glorification of industry, since the film takes place in a small rural village. Work is not glorified either, since the plot revolves around a group of villagers using their vacation time to travel up the Volga to perform in Moscow. "Volga-Volga" followed the aesthetic principles of Socialist Realism rather than its ideological tenets. It became Stalin's favorite film, and he gave it as a gift to President Roosevelt during World War II. It is another example of a film that claimed life had become better. Released at the height of Stalin's purges, it provided escapism and a comforting illusion for the public.
https://en.wikipedia.org/wiki?curid=19029
Motala Municipality Motala Municipality ("Motala kommun") is a municipality in Östergötland County in southeast Sweden. Its seat is located in the city of Motala. In 1971 Motala Municipality was formed by the amalgamation of the "City of Motala" with some of the adjacent rural municipalities. Three years later more entities were added, among them the former "City of Vadstena". In 1980 a new Vadstena Municipality was split off. Geographically, Motala is situated where Lake Vättern drains into the river system of Motala ström, which was of central importance to the massive industrialization of Sweden in the 19th century. Population figures are as of 2000, from Statistics Sweden. The population decreased by approximately 2% in most of the localities between the 1995 census and the 2000 census. The largest employer is the municipality itself, employing circa 3,400 people. The next is the county council with 1,775. Of the private employers, Electrolux and Dometic have a total of 1,400; Autoliv 425; Hycop 325; Saab-Bofors Dynamics circa 300; and Motala Verkstad some 180.
https://en.wikipedia.org/wiki?curid=19030
Maltese language Maltese is a Semitic language spoken by the Maltese people of Malta. It is the national language of the country and also serves as an official language of the European Union, the only Semitic language so distinguished. Maltese is a Latinised variety of spoken historical Arabic through its descent from Siculo-Arabic, which developed as a Maghrebi Arabic dialect during the Emirate of Sicily between 831 and 1091. As a result of the Norman invasion of Malta and the subsequent re-Christianisation of the island, Maltese evolved independently of Classical Arabic in a gradual process of Latinisation. It is therefore exceptional as a variety of historical Arabic that has no diglossic relationship with Classical or Modern Standard Arabic. Maltese is thus classified separately from the 30 varieties constituting the modern Arabic macrolanguage. Maltese is also distinguished from Arabic and other Semitic languages in that its morphology has been deeply influenced by Romance languages, namely Italian and Sicilian. The original Arabic base comprises around one-third of the Maltese vocabulary, especially words that denote basic ideas and the function words, but about half of the vocabulary is derived from standard Italian and Sicilian, and English words make up between 6% and 20% of the vocabulary. A 2016 study shows that, in terms of basic everyday language, speakers of Maltese are able to understand around a third of what is said to them in Tunisian Arabic, a Maghrebi Arabic variety related to Siculo-Arabic, whereas speakers of Tunisian Arabic are able to understand about 40% of what is said to them in Maltese. This reported level of asymmetric intelligibility is considerably lower than the mutual intelligibility found between other varieties of Arabic. Maltese has always been written in the Latin script, the earliest surviving example dating from the late Middle Ages. It continues to be the only standardised Semitic language written exclusively in the Latin script. The origins of the Maltese language are attributed to the arrival, early in the eleventh century, of settlers from neighbouring Sicily, where Siculo-Arabic was spoken, following the Fatimid Caliphate's conquest of the island at the end of the ninth century. This claim has been corroborated by genetic studies, which show that contemporary Maltese people share common ancestry with Sicilians and Calabrians, with little genetic input from North Africa and the Levant. The Norman conquest in 1091, followed by the expulsion of the Muslims—complete by 1249—permanently isolated the vernacular from its Arabic source, creating the conditions for its evolution into a distinct language. In contrast to Sicily—where Siculo-Arabic became extinct and was replaced by Sicilian—the vernacular in Malta continued to develop alongside Italian, eventually replacing it as the official language in 1934, alongside English. The first written reference to the Maltese language is in a will of 1436, where it is called "lingua maltensi". The oldest known document in Maltese, "Il-Kantilena" by Pietru Caxaro, dates from the 15th century. The earliest known Maltese dictionary was a 16th-century manuscript entitled "Maltese-Italiano"; it was included in the "Biblioteca Maltese" of Mifsud in 1764, but is now lost.
A list of Maltese words was included in both the "Thesaurus Polyglottus" (1603) and "Propugnaculum Europae" (1606) of Hieronymus Megiser, who had visited Malta in 1588–1589; Domenico Magri gave the etymologies of some Maltese words in his "Hierolexicon, sive sacrum dictionarium" (1677). An early manuscript dictionary, "Dizionario Italiano e Maltese", was discovered in the Biblioteca Vallicelliana in Rome in the 1980s, together with a grammar, the "Regole per la Lingua Maltese", attributed to a French knight named Thezan. The first systematic lexicon is that of Giovanni Pietro Francesco Agius de Soldanis, who also wrote the first systematic grammar of the language and proposed a standard orthography. "SIL Ethnologue" (2015) reports a total of 522,000 Maltese speakers, with 371,000 residing in Malta (close to 90% of the Maltese population) according to the European Commission (2012). This implies some 150,000 speakers in the Maltese diaspora. Most speakers are bilingual: the majority of speakers (345,000) regularly use English, and a reported 66,800 regularly use French. The largest diaspora community of Maltese speakers is in Australia, with 36,000 speakers reported in 2006 (down from 45,000 in 1996, and expected to decline further). The Maltese linguistic community in Tunisia originates in the 18th century; numbering several thousand in the 19th century, it was reported at only 100 to 200 people as of 2017. Maltese is descended from Siculo-Arabic, a Semitic language within the Afroasiatic family that, in the course of its history, has been influenced by Sicilian and Italian, to a lesser extent French, and more recently English. Today, the core vocabulary (including both the most commonly used vocabulary and function words) is Semitic, with large numbers of loanwords. Because of the Sicilian influence on Siculo-Arabic, Maltese has many language-contact features and is most commonly described as a language with a large number of loanwords. The Maltese language has historically been classified in various ways, with some claiming that the ancient Punic language (another Semitic language) was its origin instead of Siculo-Arabic, while others believed the language to be one of the Berber languages (another family within Afroasiatic); under the Fascist Kingdom of Italy, it was classified as regional Italian. SIL reports six varieties, besides Standard Maltese: Gozo, Port Maltese, Rural Central Maltese, Rural East Maltese, Rural West Maltese, and Zurrieq. Urban varieties of Maltese are closer to Standard Maltese than rural varieties, which have some characteristics that distinguish them from Standard Maltese. They tend to show some archaic features, such as the imāla of Arabic ā into ē (or ī, especially in Gozo), considered archaic because they are reminiscent of 15th-century transcriptions of this sound. Another archaic feature is the realisation of Standard Maltese ā as ō in rural dialects. There is also a tendency to diphthongise simple vowels, e.g., ū becomes eo or eu. Rural dialects also tend to employ more Semitic roots and broken plurals than Standard Maltese. In general, rural Maltese is less distant from its Siculo-Arabic ancestor than Standard Maltese is. Voiceless stops are only lightly aspirated and voiced stops are fully voiced. Voicing is carried over from the last segment in obstruent clusters; thus, two- and three-obstruent clusters are either voiceless or voiced throughout.
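This regressive voicing assimilation is mechanical enough to capture in a few lines of code. The sketch below is illustrative only and is not from the article: the obstruent pairs are a simplified subset, and the example word "niktbu" → "nigdbu" ("we write") is an assumption drawn from standard descriptions of Maltese phonology.

    # Toy model (Python): in a cluster of obstruents, every obstruent takes
    # the voicing of the LAST obstruent in the cluster (regressive
    # assimilation). The pairings below are a simplified, assumed subset.
    TO_VOICELESS = {"b": "p", "d": "t", "g": "k", "z": "s", "v": "f"}
    TO_VOICED = {v: k for k, v in TO_VOICELESS.items()}
    OBSTRUENTS = set(TO_VOICELESS) | set(TO_VOICED)

    def assimilate(word: str) -> str:
        out = list(word)
        i = 0
        while i < len(out):
            if out[i] not in OBSTRUENTS:
                i += 1
                continue
            j = i                            # find the end of the cluster
            while j + 1 < len(out) and out[j + 1] in OBSTRUENTS:
                j += 1
            voiced = out[j] in TO_VOICELESS  # is the final obstruent voiced?
            for k in range(i, j):            # assimilate the rest to it
                out[k] = TO_VOICED.get(out[k], out[k]) if voiced \
                    else TO_VOICELESS.get(out[k], out[k])
            i = j + 1
        return "".join(out)

    print(assimilate("niktbu"))  # -> "nigdbu": the /ktb/ cluster voices throughout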
Maltese has final-obstruent devoicing of voiced obstruents, and voiceless stops have no audible release, making voiceless–voiced pairs phonetically indistinguishable word-finally. Gemination is distinctive word-medially and word-finally in Maltese. The distinction is most rigid intervocalically after a stressed vowel. Stressed word-final closed syllables with short vowels end in a long consonant, and those with a long vowel end in a single consonant; the only exception is where certain historic consonants meant the compensatory lengthening of the succeeding vowel. Some speakers have lost the length distinction in clusters. The two nasals assimilate for place of articulation in clusters. Certain consonants are usually dental, whereas others are all alveolar. Some consonants are found mostly in words of Italian origin, retaining length (if not word-initial), and others are only found in loanwords, e.g. "newspaper" and "television". The pharyngeal fricative is velar or glottal for some speakers. Maltese has five short vowels, written "a e i o u"; six long vowels, written "a, e, ie, i, o, u", all of which (with the exception of "ie") can only be known to represent long vowels in writing if they are followed by an orthographic "għ" or "h" (otherwise, one needs to know the pronunciation; e.g. "nar" (fire) is pronounced with a long vowel); and seven diphthongs, written "aj" or "għi", "aw" or "għu", "ej" or "għi", "ew", "iw", "oj", and "ow" or "għu". Stress is generally on the penultimate syllable, unless some other syllable is heavy (has a long vowel or final consonant), or unless a stress-shifting suffix is added. (Suffixes marking gender, possession, and verbal plurals do not cause the stress to shift.) Historically, when the vowels "a" and "u" were long or stressed, they were written "â" or "û", for example in the word "baħħâr" (sailor), to differentiate it from "baħħar" (to sail), but nowadays these accents are mostly omitted. When two syllables are equally heavy, the penultimate takes the stress, but otherwise the heavier syllable does, e.g. "bajjad" 'he painted' vs. "bajjâd" 'a painter'. Many Classical Arabic consonants underwent mergers and modifications in Maltese. The modern system of Maltese orthography was introduced in 1924. Final vowels with grave accents (à, è, ì, ò, ù) are also found in some Maltese words of Italian origin, such as "libertà" ("freedom"), "sigurtà" (old Italian: "sicurtà", "security"), or "soċjetà" (Italian: "società", "society"). The official rules governing the structure of the Maltese language are found in the official guidebook issued by the "Akkademja tal-Malti", the Academy of the Maltese Language, entitled "Tagħrif fuq il-Kitba Maltija", that is, "Knowledge on Writing in Maltese". The first edition of this book was printed in 1924 by the Maltese government's printing press. The rules were further expanded in the 1984 book "iż-Żieda mat-Tagħrif", which focused mainly on the increasing influence of Romance and English words. In 1992 the Academy issued the "Aġġornament tat-Tagħrif fuq il-Kitba Maltija", which updated the previous works. All these works were included in a revised and expanded guidebook published in 1996. The National Council for the Maltese Language (KNM) is the main regulator of the Maltese language (see Maltese Language Act, below), not the "Akkademja tal-Malti"; however, the Academy's orthography rules remain valid and official.
Since Maltese evolved after the Italo-Normans ended Arab rule of the islands, a written form of the language was not developed for a long time after the Arabs' expulsion in the middle of the thirteenth century. Under the rule of the Knights Hospitaller, both French and Italian were used for official documents and correspondence. During the British colonial period, the use of English was encouraged through education, with Italian regarded as the next-most important language. In the late eighteenth century and throughout the nineteenth century, philologists and academics such as Mikiel Anton Vassalli made a concerted effort to standardise written Maltese. Many examples of written Maltese exist from before this period, always in the Latin alphabet, "Il-Kantilena" being the earliest example of written Maltese. In 1934, Maltese was recognised as an official language. Maltese has both Semitic vocabulary and vocabulary derived from Romance languages, primarily Italian. Below are two versions of the same translation, one with vocabulary derived mostly from Semitic root words, the other using Romance loanwords (from the "Treaty establishing a Constitution for Europe", see p. 17). *Note: the words "dritt" (pl. "drittijiet"), "minoranza" (pl. "minoranzi"), and "pajjiż" (pl. "pajjiżi") are derived from "diritto" (right), "minoranza" (minority) and "paese" (country) respectively. Although the original vocabulary of the language was Siculo-Arabic, it has incorporated a large number of borrowings from Romance sources of influence (Sicilian, Italian, and French), and more recently Germanic ones (from English). The historical source of modern Maltese vocabulary is 52% Italian/Sicilian, 32% Siculo-Arabic, and 6% English, with some of the remainder being French. Today, most function words are Semitic. In this way, Maltese is similar to English, a Germanic language heavily influenced by Norman French. As a result, speakers of Romance languages may easily be able to comprehend conceptual ideas expressed in Maltese, such as "Ġeografikament, l-Ewropa hi parti tas-superkontinent ta' l-Ewrasja" ("Geographically, Europe is part of the supercontinent of Eurasia"), while not understanding a single word of a functional sentence such as "Ir-raġel qiegħed fid-dar" ("The man is in the house"), which would be easily understood by any Arabic speaker. An analysis of the etymology of the 41,000 words in Aquilina's "Maltese-English Dictionary" shows that words of Romance origin make up 52% of the Maltese vocabulary, although other sources claim from as low as 40% to as high as 55%. This vocabulary tends to deal with more complicated concepts. It is mostly derived from Sicilian and thus exhibits Sicilian phonetic characteristics (e.g. "tiatru" not "teatro", and "fidi" not "fede"). Also, as with Old Sicilian, the sound of English 'sh' is written 'x', and this produces spellings such as "ambaxxata" ('embassy') and "xena" ('scene'; cf. Italian "ambasciata", "scena"). A tendency in modern Maltese is to adopt further influences from English and Italian. Complex Latinate English words adopted into Maltese are often given Italianate or Sicilianate forms, even if the resulting words do not appear in either of those languages.
For instance, the words "evaluation", "industrial action", and "chemical armaments" become "evalwazzjoni", "azzjoni industrjali", and "armamenti kimiċi" in Maltese, while the Italian terms are "valutazione", "vertenza sindacale", and "armi chimiche" respectively. English words of Germanic origin are generally preserved relatively unchanged. Some impacts of African Romance on the Arabic and Berber spoken in the Maghreb are theorised, and these may then have passed into Maltese. For example, among calendar month names, the word "furar" "February" is found only in the Maghreb and in Maltese, pointing to the word's ancient origins. The region also has a form of another Latin month name in "awi/ussu" < "augustus". This word does not appear to be a loanword through Arabic, and may have been taken over directly from Late Latin or African Romance. Scholars theorise that a Latin-based system provided forms such as "awi/ussu" and "furar" in African Romance, with the system then mediating Latin/Romance names through Arabic for some month names during the Islamic period. The same situation exists for Maltese, which mediated words from Italian and retains both non-Italian forms such as "awissu/awwissu" and "frar" and Italian forms such as "april". Siculo-Arabic is the ancestor of the Maltese language, and supplies between 32% and 40% of the language's vocabulary. Maltese has merged many of the original Arabic consonants, in particular the emphatic consonants, with others that are common in European languages. The vowels, however, separated from the three of Arabic into five, as is more typical of other European languages. Some unstressed short vowels have been elided. The common Arabic greeting "as-salāmu ʿalaykum" is cognate with Maltese "is-sliem għalikom" (lit. "the peace for you", peace be with you), as are similar greetings in other Semitic languages (e.g. "shalom ʿaleikhem" in Hebrew). It is estimated that English loanwords, which are becoming more commonplace, make up 20% of the Maltese vocabulary, although other sources claim amounts as low as 6%. This percentage discrepancy is due to the fact that a number of new English loanwords are sometimes not officially considered part of the Maltese vocabulary; hence, they are not included in certain dictionaries. Also, English loanwords of Latinate origin are very often Italianised, as discussed above. English loanwords are generally transliterated, although standard English pronunciation is virtually always retained. One example is "fridge", a slang-derived shortening of "refrigerator", a Latinate word which might otherwise be expected to be rendered as "rifriġeratori" (Italian uses two different words: "frigorifero" or "refrigeratore"). Maltese grammar is fundamentally derived from Siculo-Arabic, although Romance and English noun pluralisation patterns are also used on borrowed words. Adjectives follow nouns. There are no separately formed native adverbs, and word order is fairly flexible. Both nouns and adjectives of Semitic origin take the definite article (for example, "it-tifel il-kbir", lit. "the boy the elder" = "the elder boy"). This rule does not apply to adjectives of Romance origin. Nouns are pluralised and also have a dual marker.
Semitic plurals are complex; if they are regular, they are marked by "-iet"/"-ijiet", e.g., "art", "artijiet" "lands (territorial possessions or property)" (cf. Arabic "-at" and Hebrew "-ot"/"-oth"), or by "-in" (cf. Arabic "-īn" and Hebrew "-im"). If irregular, they fall into the "pluralis fractus" category, in which a word is pluralised by internal vowel changes: "ktieb", "kotba" "book", "books"; "raġel", "irġiel" "man", "men". Words of Romance origin are usually pluralised in one of two manners: addition of "-i" or "-jiet". For example, "lingwa", "lingwi" "languages", from Sicilian "lingua", "lingui". Words of English origin are pluralised by adding either "-s" or "-jiet", for example, "friġġ", "friġis" from the word "fridge". Some words can be pluralised with either of the suffixes. A few words borrowed from English can amalgamate both suffixes, like "brikksa" from the English "brick", which can adopt either the collective form "brikks" or the plural form "brikksiet". The proclitic "il-" is the definite article, equivalent to "the" in English and "al-" in Arabic. The Maltese article becomes "l-" before or after a vowel. The Maltese article assimilates to a following coronal consonant (called "konsonanti xemxin", "sun consonants"), namely ċ, d, n, r, s, t, x, ż and z, as sketched in the example below. Maltese "il-" is coincidentally identical in pronunciation to the Italian masculine article "il". Consequently, many nouns borrowed from Standard Italian did not change their original article when used in Maltese. Romance vocabulary taken from Sicilian did change where the Sicilian articles "u" and "a", before a consonant, are used. In spite of its Romance appearance, "il-" is related to the Arabic article "al-". Verbs show a triliteral Semitic pattern, in which a verb is conjugated with prefixes, suffixes, and infixes (for example, "ktibna", Arabic "katabna", Hebrew "kathabhnu" (Modern Hebrew: "katavnu"), "we wrote"). There are two tenses: present and perfect. The Maltese verb system incorporates Romance verbs and adds Maltese suffixes and prefixes to them (for example, "iddeċidejna" "we decided" ← "(i)ddeċieda" "decide", a Romance verb + "-ejna", a Maltese first-person plural perfect marker). With Malta being a multilingual country, the usage of Maltese in the mass media is shared with other European languages, namely English and Italian. The majority of television stations broadcast from Malta in English or Maltese, although broadcasts from Italy in Italian are also received on the islands. Similarly, there are more Maltese-language radio programs than English ones broadcast from Malta, but again, as with television, Italian broadcasts are also picked up. Maltese generally receives equal usage with English in newspaper periodicals. By the early 2000s, use of the Maltese language on the Internet was uncommon, and the number of websites written in Maltese was small. In a survey of Maltese cultural websites conducted in 2004 on behalf of the Maltese government, 12 of 13 were in English only, while the remaining one was multilingual but did not include Maltese. The Maltese population, being fluent in both Maltese and English, displays code-switching (referred to as Maltenglish) in certain localities and between certain social groups.
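The definite-article rules just described (reduction before vowels, assimilation to sun consonants) can be sketched compactly. This is an editorial illustration, not a full account of the grammar: the post-vocalic "l-" variant and digraphs such as "għ" are deliberately ignored, and the sun-consonant list follows standard descriptions of Maltese.

    # Toy model (Python) of the Maltese definite article: "il-" by default,
    # reduced to "l-" before a vowel, and assimilated to a following
    # "sun consonant" (konsonanti xemxin).
    SUN_CONSONANTS = set("ċdnrstxżz")
    VOWELS = set("aeiou")

    def definite(noun: str) -> str:
        first = noun[0].lower()
        if first in VOWELS:              # il- reduces to l- before a vowel
            return "l-" + noun
        if first in SUN_CONSONANTS:      # il- assimilates: iC- with gemination
            return "i" + first + "-" + noun
        return "il-" + noun

    print(definite("ktieb"))  # il-ktieb ("the book")
    print(definite("dar"))    # id-dar   ("the house")
    print(definite("xemx"))   # ix-xemx  ("the sun")
    print(definite("ilma"))   # l-ilma   ("the water")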
https://en.wikipedia.org/wiki?curid=19031
Mormon (word) The word Mormon most colloquially denotes an adherent, practitioner, follower, or constituent of Mormonism in restorationist Christianity. "Mormon" also commonly refers, specifically, to a member of The Church of Jesus Christ of Latter-day Saints (LDS Church), which is often colloquially, but imprecisely, referred to as the "Mormon Church". In addition, the term "Mormon" may refer to any of the relatively small sects of Mormon fundamentalism, and to any branch of the Latter Day Saint movement that recognizes Brigham Young as the successor to founder Joseph Smith. The term "Mormon" applies to the religion of Mormonism, as well as its culture, texts, and art. The term derives from the Book of Mormon, a sacred text published in 1830 and regarded by the faith as a supplemental testament to the Bible. Adherents believe that the book was translated from an ancient record by Smith by the gift and power of God. The text is said to be an ancient chronicle of a fallen and lost indigenous American nation, compiled by the prophet–warrior Mormon and his son Moroni, the last of the Nephite people. The term "Mormon" was applied to the Latter Day Saint movement in the 1830s and was soon embraced by the faith. Because the term became identified with polygamy in the mid-to-late 19th century, some Latter Day Saint denominations that never practiced polygamy have renounced the term. The term "Mormon" is taken from the title of the Book of Mormon, a sacred text that adherents believe was translated from golden plates, whose location was revealed to Joseph Smith by an angel, and which was published in 1830. According to the text of the Book of Mormon, the word Mormon stems from the Land of Mormon, where the prophet Alma preached the gospel and baptized converts. Mormon—who was named after the land—was a 4th-century prophet–historian who compiled and abridged many records of his ancestors into the Book of Mormon. The book is believed by Mormons to be a literal record of God's dealings with pre-Columbian civilizations in the Americas from approximately 2600 BC through AD 420, written by prophets and followers of Jesus Christ. The book records the teachings of Jesus Christ to the people in the Americas, as well as Christ's personal ministry among the people of Nephi after his resurrection. Mormons believe the Book of Mormon is another witness of Jesus Christ, "holy scripture comparable to the Bible". The terms "Mormonism" and "Mormonite" were originally descriptive terms invented in 1831 by newspaper editors or contributors in Ohio and New York to describe the growing movement of "proselytes of the Golden Bible". Historian Ardis Parshall quotes an 1831 news item, appearing within the first year of the LDS Church's founding, as reading, "In the sixth number of your paper I saw a notice of a sect of people called Mormonites; and thinking that a fuller history of their founder, Joseph Smith, Jr., might be interesting to your community … I will take the trouble to make a few remarks on the character of that infamous imposter." The term "Mormon" developed as a shortened version of "Mormonite" a year or two later. In all cases prior to 1833, these terms were used descriptively, despite nearly universal negative sentiment toward the movement. By the 1840s the term was adopted by Mormon leaders to refer to themselves, though leaders occasionally used the term as early as 1833.
The term took on a pejorative meaning sometime before 1844 with the invention of the pejorative term Jack Mormon to describe non-Mormons sympathetic to the movement. Since that time, some have argued that the term "Mormon" has generally lost its pejorative status. Today, the term "Mormon" is most often used to refer to members of the LDS Church. However, the term is also adopted by other adherents of Mormonism, including adherents of Mormon fundamentalism. The term "Mormon" is generally disfavored by other denominations of the Latter Day Saint movement, such as the Community of Christ, which have distinct histories from that of the LDS Church since Smith's death in 1844. The term is particularly embraced by adherents of Mormon fundamentalism, who continue to believe in and practice plural marriage, a practice that the LDS Church officially abandoned in 1890. Seeking to distance itself from polygamy and Mormon fundamentalism, the LDS Church has taken the position that the term "Mormon" should only apply to the LDS Church and its members, and not other adherents who have adopted the term. The church cites the "AP Stylebook", which states, "The term Mormon is not properly applied to the other Latter Day Saints churches that resulted from the split after [Joseph] Smith’s death." Despite the LDS Church's position, the term "Mormon" is widely used by journalists and non-journalists to refer to adherents of Mormon fundamentalism. The official name of the Salt Lake City, Utah–based church is The Church of Jesus Christ of Latter-day Saints. While the term "Mormon Church" has long been attached to the church as a nickname, it is not a preferred title, and the church's style guide says, "Please avoid the use of 'Mormon Church', 'LDS Church' or the 'Church of the Latter-day Saints.'" Church leaders have encouraged members to use the church's full name to emphasize the church's focus on Jesus Christ. In 2018, church president Russell M. Nelson announced a renewed effort to discourage the use of the word "Mormon" in reference to the church and its members. J. Gordon Melton, in his "Encyclopedia of American Religions", subdivides the Mormons into "Utah Mormons", "Missouri Mormons", "Polygamy-Practicing Mormons", and "Other Mormons". In this scheme, the "Utah Mormon" group includes the non-polygamous organizations descending from those Mormons who followed Brigham Young to what is now Utah. The Church of Jesus Christ of Latter-day Saints is by far the largest of these groups, with a membership count totaling over 15,000,000 worldwide, and the only group to initially reside in Utah. The "Missouri Mormon" groups include those non-polygamous groups that chose not to travel to Utah and are currently headquartered in Missouri, which Joseph Smith designated as the future site of the New Jerusalem. These organizations include Community of Christ, Church of Christ (Temple Lot), Remnant Church of Jesus Christ of Latter Day Saints, and others. "Polygamy-Practicing Mormon" groups are those that currently practice polygamy, regardless of location. Most notably, this category includes the Fundamentalist Church of Jesus Christ of Latter-Day Saints (FLDS Church) and the Apostolic United Brethren (AUB). "Other Mormon" groups include those that are not headquartered in Utah or Missouri and do not practice polygamy, such as The Church of Jesus Christ (Bickertonite) and the Church of Jesus Christ of Latter Day Saints (Strangite).
The terms "Utah Mormon" and "Missouri Mormon" can be problematic if interpreted to mean more than the location of the various groups' headquarters. The majority of members of "Utah Mormon" groups and "Missouri Mormon" groups no longer live in either of these US states. Although a majority of Utahns are members of the LDS Church, it has a worldwide membership with the majority of its members outside the United States. Nor do most "Missouri Mormons" live in Missouri. The May 15, 1843, issue of the official Mormon periodical "Times and Seasons" contains an article, purportedly written by Joseph Smith, deriving the etymology of the name "Mormon" from English "more" + Egyptian "mon", "good", and extolling the meaning as follows: It has been stated that this word [mormon] was derived from the Greek word "mormo". This is not the case. There was no Greek or Latin upon the plates from which I, through the grace of God, translated the Book of Mormon. Let the language of that book speak for itself. On the 523d page, of the fourth edition, it reads: And now behold we have written this record according to our knowledge in the characters which are called among us the "Reformed Egyptian" ... none other people knoweth our language; therefore [God] hath prepared means for the interpretation thereof." ... [The] Bible in its widest sense, means "good"; for the Savior says according to the gospel of John, "I am the "good" shepherd;" and it will not be beyond the common use of terms, to say that good is among the most important in use, and though known by various names in different languages, still its meaning is the same, and is ever in opposition to "bad". We say from the Saxon, "good"; the Dane, "god"; the Goth, "goda"; the German, "gut"; the Dutch, "goed"; the Latin, "bonus"; the Greek, "kalos"; the Hebrew, "tob"; and the Egyptian, "mon". Hence, with the addition of "more", or the contraction, "mor", we have the word MOR-MON; which means, literally, "more good". Whether Smith was the actual author of this passage is uncertain. Official LDS Church historian B. H. Roberts removed the quote from his "History of the Church" compilation, saying he found evidence that W. W. Phelps wrote that paragraph and that it was "based on inaccurate premises and was offensively pedantic." LDS Church apostle Gordon B. Hinckley noted that the "more good" translation is incorrect but added that ""Mormon" means 'more good'" is a positive motto for members of the LDS Church. The Book of Mormon's title page begins, "The Book of Mormon: An account written by the hand of Mormon" (). According to the book, Mormon compiled nearly 1000 years of writings as well as chronicled events during his lifetime. Most of the text of the Book of Mormon consists of this compilation and his own writings. However, the name "Mormon" is also used in the Book of Mormon as a place name (e.g. Waters of Mormon). In some countries, "Mormon" and some phrases including the term are registered trademarks owned by Intellectual Reserve, a holding company for the LDS Church's intellectual property. In the United States, the LDS Church has applied for a trademark on "Mormon" as applied to religious services; however, the United States Patent and Trademark Office rejected the application, stating that the term "Mormon" was too generic, and is popularly understood as referring to a particular kind of church, similar to "Presbyterian" or "Methodist", rather than a service mark. The application was abandoned as of August 22, 2007. 
In all, Intellectual Reserve owns more than 60 trademarks related to the term "Mormon".
https://en.wikipedia.org/wiki?curid=19035
Mariana Trench The Mariana Trench or Marianas Trench is located in the western Pacific Ocean about 200 kilometres (124 mi) east of the Mariana Islands; it is the deepest oceanic trench on Earth. It is crescent-shaped and measures about 2,550 km (1,580 mi) in length and 69 km (43 mi) in width. The maximum known depth is 10,984 metres (36,037 ft; 6.825 miles) (± 25 metres) at the southern end of a small slot-shaped valley in its floor known as the Challenger Deep. However, some unrepeated measurements place the deepest portion at 11,034 metres (36,201 ft). By comparison: if Mount Everest were placed into the trench at this point, its peak would still be over 2 kilometres (1.2 mi) under water. At the bottom of the trench the water column above exerts a pressure of 1,086 bars (15,750 psi), more than 1,071 times the standard atmospheric pressure at sea level. At this pressure, the density of water is increased by 4.96%. The temperature at the bottom is 1 to 4 °C (34 to 39 °F). In 2009, the Marianas Trench was established as a United States National Monument. Monothalamea have been found in the trench by Scripps Institution of Oceanography researchers at a record depth of 10.6 km (6.6 mi) below the sea surface. Data has also suggested that microbial life forms thrive within the trench. The Mariana Trench is named after the nearby Mariana Islands, which are named "Las Marianas" in honor of Spanish Queen Mariana of Austria, widow of Philip IV of Spain. The islands are part of the island arc that is formed on an overriding plate, called the Mariana Plate (also named for the islands), on the western side of the trench. The Mariana Trench is part of the Izu-Bonin-Mariana subduction system that forms the boundary between two tectonic plates. In this system, the western edge of one plate, the Pacific Plate, is subducted (i.e., thrust) beneath the smaller Mariana Plate that lies to the west. Crustal material at the western edge of the Pacific Plate is some of the oldest oceanic crust on Earth (up to 170 million years old), and is, therefore, cooler and denser; hence its great height difference relative to the higher-riding (and younger) Mariana Plate. The deepest area at the plate boundary is the Mariana Trench proper. The movement of the Pacific and Mariana plates is also indirectly responsible for the formation of the Mariana Islands. These volcanic islands are caused by flux melting of the upper mantle due to the release of water that is trapped in minerals of the subducted portion of the Pacific Plate. The trench was first sounded during the "Challenger" expedition in 1875, using a weighted rope, which recorded a depth of 4,475 fathoms (8,184 metres). In 1877, a map was published called "Tiefenkarte des Grossen Ozeans" ("Depth map of the Great Ocean") by Petermann, which showed a "Challenger Tief" ("Challenger deep") at the location of that sounding. In 1899, USS Nero, a converted collier, recorded a depth of 5,269 fathoms (9,636 metres). In 1951, "Challenger II" surveyed the trench using echo sounding, a much more precise and vastly easier way to measure depth than the sounding equipment and drag lines used in the original expedition. During this survey, the deepest part of the trench was recorded when the "Challenger II" measured a depth of 5,960 fathoms (10,900 metres) at the spot subsequently known as the Challenger Deep. In 1957, the Soviet vessel "Vityaz" reported a depth of 11,034 metres (36,201 ft) at a location dubbed the "Mariana Hollow". In 1962, the surface ship M.V. "Spencer F. Baird" recorded a maximum depth of 10,915 metres (35,810 ft) using precision depth gauges. In 1984, the Japanese survey vessel "Takuyō" (拓洋) collected data from the Mariana Trench using a narrow, multi-beam echo sounder; it reported a maximum depth of 10,924 metres (35,840 ft), also reported as 10,920 metres (35,827 ft) ± 10 m.
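As a rough editorial sanity check (not part of the source article), the quoted bottom pressure follows from the hydrostatic relation, taking a representative seawater density of 1,025 kg/m³ and g = 9.81 m/s²:

    P \approx \rho g h = 1025\ \mathrm{kg/m^3} \times 9.81\ \mathrm{m/s^2} \times 10{,}984\ \mathrm{m} \approx 1.10 \times 10^{8}\ \mathrm{Pa} \approx 1{,}090\ \mathrm{atm}

This agrees with the quoted figure of just over 1,071 atmospheres to within about 2%; the residual difference reflects the variation of water density (the 4.96% compression noted above) and of gravity over the column.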
The Remotely Operated Vehicle "KAIKO" reached the deepest area of the Mariana Trench and made the deepest diving record of 10,911 metres (35,797 ft) on 24 March 1995. During surveys carried out between 1997 and 2001, a spot was found along the Mariana Trench that had a depth similar to that of the Challenger Deep, possibly even deeper. It was discovered while scientists from the Hawaii Institute of Geophysics and Planetology were completing a survey around Guam; they used a sonar mapping system towed behind the research ship to conduct the survey. This new spot was named the HMRG (Hawaii Mapping Research Group) Deep, after the group of scientists who discovered it. On 1 June 2009, mapping aboard the "RV Kilo Moana" (mothership of the "Nereus" vehicle) indicated a spot with a depth of 10,971 metres (35,994 ft). The sonar mapping of the Challenger Deep was made possible by its Simrad EM120 multibeam sonar bathymetry system for deep water. The sonar system uses phase and amplitude bottom detection, with an accuracy of better than 0.2% of water depth across the entire swath (implying that the depth figure is accurate to about ± 22 m). In 2011, it was announced at the American Geophysical Union Fall Meeting that a US Navy hydrographic ship equipped with a multibeam echosounder had conducted a survey which mapped the entire trench to 100-metre (330 ft) resolution. The mapping revealed the existence of four rocky outcrops thought to be former seamounts. The Mariana Trench was chosen by researchers at Washington University and the Woods Hole Oceanographic Institution in 2012 as the site of a seismic survey to investigate the subsurface water cycle. Using both ocean-bottom seismometers and hydrophones, the scientists were able to map structures as deep as about 97 km (60 mi) beneath the surface. Four manned descents and three unmanned descents have been achieved. The first was the manned descent by the Swiss-designed, Italian-built, United States Navy-owned bathyscaphe "Trieste", which reached the bottom at 1:06 pm on 23 January 1960, with Don Walsh and Jacques Piccard on board. Iron shot was used for ballast, with gasoline for buoyancy. The onboard systems indicated a depth of 11,521 m (37,799 ft), but this was later revised to 10,916 m (35,814 ft). The depth was estimated from a conversion of the measured pressure and calculations based on the water density from sea surface to seabed. This was followed by the unmanned ROVs "Kaikō" in 1996 and "Nereus" in 2009. The first three expeditions directly measured very similar depths of 10,902 to 10,916 m (35,768 to 35,814 ft). The fourth was made by Canadian film director James Cameron in 2012. On 26 March, he reached the bottom of the Mariana Trench in the submersible vessel "Deepsea Challenger", diving to a depth of 10,908 m (35,787 ft). In July 2015, members of the National Oceanic and Atmospheric Administration, Oregon State University, and the Coast Guard submerged a hydrophone into the deepest part of the Mariana Trench, the Challenger Deep; no hydrophone had previously been deployed deeper than a mile. The titanium-shelled hydrophone was designed to withstand the immense pressure at a depth of 7 miles. Although researchers were unable to retrieve the hydrophone until November, its data capacity was full within the first 23 days. After months of analyzing the sounds, the experts were surprised to pick up natural sounds such as earthquakes, a typhoon, and baleen whales, along with man-made sounds such as boats. Due to the mission's success, the researchers announced plans to deploy a second hydrophone in 2017 for an extended period of time.
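For a concrete sense of the stated 0.2%-of-depth sonar accuracy (an editorial worked example, not from the source), at a full-ocean depth on the order of the 2009 sounding the implied uncertainty is

    \Delta h \approx 0.002 \times 10{,}971\ \mathrm{m} \approx 22\ \mathrm{m}

which is why depth figures from such multibeam surveys are quoted to roughly ±22 m.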
In April 2019, Victor Vescovo achieved a new record descent to 10,927 metres (35,853 ft) using the DSV "Limiting Factor", a Triton 36000/2 model manufactured by Florida-based Triton Submarines. He dived again in May 2019 and became the first person to dive the Challenger Deep twice. In May 2020, a joint project of Russian shipbuilders and scientific teams of the Russian Academy of Sciences, with the support of the Russian Foundation for Advanced Research Projects and the Pacific Fleet, submerged the autonomous underwater vehicle "Vityaz" to the bottom of the Mariana Trench at a depth of 10,028 metres. "Vityaz" is the first autonomous underwater vehicle (AUV) to operate at the extreme depths of the Mariana Trench. The duration of the mission, excluding diving and surfacing, was more than 3 hours. At least one other team has planned a piloted submarine to reach the bottom of the Mariana Trench: DOER Marine, a marine technology company based near San Francisco and set up in 1992, plans for a crew of two or three to take 90 minutes to reach the seabed. The expedition conducted in 1960 claimed to have observed, with great surprise because of the high pressure, large creatures living at the bottom, such as a flatfish about 30 cm (1 ft) long, and shrimp. According to Piccard, "The bottom appeared light and clear, a waste of firm diatomaceous ooze". Many marine biologists are now skeptical of the supposed sighting of the flatfish, and it is suggested that the creature may instead have been a sea cucumber. During the second expedition, the unmanned vehicle "Kaikō" collected mud samples from the seabed. Tiny organisms were found to be living in those samples. In July 2011, a research expedition deployed untethered landers, called dropcams, equipped with digital video cameras and lights to explore this region of the deep sea. Amongst many other living organisms, some gigantic single-celled amoebas more than 10 cm (4 in) across, belonging to the class of monothalamea, were observed. Monothalamea are noteworthy for their size, their extreme abundance on the seafloor and their role as hosts for a variety of organisms. In December 2014, a new species of snailfish was discovered at a depth of 8,145 m (26,722 ft), breaking the previous record for the deepest living fish seen on video. During the 2014 expedition, several new species were filmed, including huge amphipods known as supergiants. Deep-sea gigantism is the process by which species grow larger than their shallow-water relatives. In May 2017, an unidentified type of snailfish was filmed at a depth of 8,178 m (26,831 ft). In 2016, a research expedition looked at the chemical makeup of crustacean scavengers collected from the range of 7,841–10,250 metres within the trench. Within these organisms, the researchers found extremely elevated concentrations of PCBs, a chemical toxin banned in the 1970s for its environmental harm, concentrated at all depths within the sediment of the trench. Further research has found that amphipods also ingest microplastics, with 100% of amphipods having at least one piece of synthetic material in their stomachs. In 2019, Victor Vescovo reported finding a plastic bag and candy wrappers at the bottom of the trench. That year, "Scientific American" also reported that carbon-14 from nuclear bomb testing has been found in the bodies of aquatic animals found in the trench.
Like other oceanic trenches, the Mariana Trench was proposed as a site for nuclear waste disposal in 1972, in the hope that the tectonic plate subduction occurring at the site might eventually push the nuclear waste deep into the Earth's mantle, the second layer of the Earth. However, ocean dumping of nuclear waste is prohibited by international law. Furthermore, plate subduction zones are associated with very large megathrust earthquakes, whose effects on the long-term safety of disposing of nuclear waste within the hadopelagic ecosystem are unpredictable.
https://en.wikipedia.org/wiki?curid=19036
Macedonian language Macedonian is an Eastern South Slavic language. Spoken as a first language by around two million people, it serves as the official language of North Macedonia. Most speakers can be found in the country and its diaspora, with a smaller number of speakers throughout the transnational region of Macedonia. Macedonian is also a recognized minority language in parts of Albania, Bosnia and Herzegovina, Romania, and Serbia, and it is spoken by emigrant communities predominantly in Australia, Canada and the United States.

Macedonian developed out of the western dialects of the East South Slavic dialect continuum, whose earliest recorded form is Old Church Slavonic. During much of its history, this dialect continuum was called "Bulgarian", although in the 19th century its western dialects came to be known separately as "Macedonian". Standard Macedonian was codified in 1945 and has developed a modern literature since. As it is part of a dialect continuum with other South Slavic languages, Macedonian has a high degree of mutual intelligibility with Bulgarian and Serbo-Croatian.

Linguists distinguish 29 dialects of Macedonian, with linguistic differences separating the Western and Eastern groups of dialects. Some features of Macedonian grammar are the use of a dynamic stress that falls on the antepenultimate syllable, three suffixed deictic articles that indicate noun position in reference to the speaker, and the use of simple and complex verb tenses. Macedonian orthography is phonemic, with a correspondence of one grapheme per phoneme. It is written using an adapted 31-letter version of the Cyrillic script with six original letters. Macedonian syntax is of the subject-verb-object (SVO) type and has flexible word order. Macedonian vocabulary has been historically influenced by Turkish and Russian. Somewhat less prominent vocabulary influences also came from neighboring and prestige languages. Since Macedonian and Bulgarian are mutually intelligible, share common linguistic features and are socio-historically related, linguists are divided in their views of the two as separate languages or as a single pluricentric language.

Macedonian belongs to the eastern group of the South Slavic branch of Slavic languages in the Indo-European language family, together with Bulgarian and the extinct Old Church Slavonic. Some authors also classify the Torlakian dialects in this group. Macedonian's closest relative is Bulgarian, followed by Serbo-Croatian and Slovene, although the last is more distantly related. Together, the South Slavic languages form a dialect continuum. Macedonian, like the other Eastern South Slavic idioms, has characteristics that make it part of the Balkan sprachbund, a group of languages that share typological, grammatical and lexical features based on areal convergence rather than genetic proximity. In that sense, Macedonian has experienced convergent evolution with other languages that belong to this group, such as Greek, Aromanian, Albanian and Romani, due to cultural and linguistic exchanges that occurred primarily through oral communication. Macedonian and Bulgarian are divergent from the remaining South Slavic languages in that they do not use noun cases (except for the vocative, and apart from some traces of once-productive inflections still found scattered throughout these two) and have lost the infinitive.
They are also the only Slavic languages with any definite articles (unlike standard Bulgarian, which uses only one article, standard Macedonian as well as some south-eastern Bulgarian dialects have a set of three deictic articles: unspecified, proximal and distal definite article). Macedonian and Bulgarian are the only Indo-European languages that make use of the narrative mood.

The Slavic people who settled in the Balkans spoke their own dialects and used other dialects or languages to communicate with other people. The "canonical" Old Church Slavonic period of the development of Macedonian started during the 9th century and lasted until the first half of the 11th century. It saw the translation of Greek religious texts. The Macedonian recension of Old Church Slavonic also appeared around that period in the Bulgarian Empire and was referred to as such due to the works of the Ohrid Literary School. Towards the end of the 13th century, the influence of the Serbian language increased as Serbia expanded its borders southward. During the five centuries of Ottoman rule, from the 15th to the 20th century, the vernacular spoken in the territory of current-day North Macedonia witnessed grammatical and linguistic changes that came to characterize Macedonian as a member of the Balkan sprachbund. This period saw the introduction of many Turkish loanwords into the language.

The latter half of the 18th century saw the rise of modern literary Macedonian through the written use of Macedonian dialects referred to as "Bulgarian" by their writers. The first half of the 19th century saw the rise of nationalism among the South Slavic peoples in the Ottoman Empire. This period saw proponents of creating a common church for Bulgarian and Macedonian Slavs which would use a common modern Macedo-Bulgarian literary standard. The period between 1840 and 1870 saw a struggle to define the dialectal base of the common language, called simply "Bulgarian", with two opposing views emerging. One ideology was to create a Bulgarian literary language based on Macedonian dialects, but such proposals were rejected by the Bulgarian codifiers. That period saw poetry written in the Struga dialect with elements from Russian. Textbooks also used either spoken dialectal forms of the language or a mixed Macedo-Bulgarian language. Subsequently, proponents of the idea of using a separate Macedonian language emerged. Krste Petkov Misirkov's book "Za makedonckite raboti" ("On Macedonian Matters"), published in 1903, was the first attempt to formalize a separate literary language. With the book, the author proposed a Macedonian grammar and expressed the goal of codifying the language and using it in schools. The author postulated the principle that the Prilep-Bitola dialect be used as the dialectal basis for the formation of the Macedonian standard language; his idea, however, was not adopted until the 1940s. On 2 August 1944, at the first Anti-fascist Assembly for the National Liberation of Macedonia (ASNOM) meeting, Macedonian was declared an official language. With this, it became the last of the major Slavic languages to achieve a standard literary form. As such, it served as one of the three official languages of Yugoslavia from 1945 to 1991.

Although the precise number of native and second-language speakers of Macedonian is unknown due to the policies of neighboring countries and the emigration of the population, estimates ranging between 1.4 million and 3.5 million have been reported.
According to the 2002 census, the total population of North Macedonia was 2,022,547, with 1,344,815 citizens declaring Macedonian their native language. Macedonian is also studied and spoken to various degrees as a second language by all ethnic minorities in the country. Outside North Macedonia, there are small ethnic Macedonian minorities that speak Macedonian in neighboring countries, including 4,697 in Albania (1989 census), 1,609 in Bulgaria (2011 census) and 12,706 in Serbia (2011 census). The exact number of speakers of the Macedonian language in Greece is difficult to ascertain due to the country's policies; estimates of Slavophones ranging anywhere between 50,000 and 300,000 in the last decade of the 20th century have been reported. Approximately 580,000 Macedonians live outside North Macedonia per 1964 estimates, with Australia, Canada, and the United States being home to the largest emigrant communities. Consequently, the numbers of Macedonian speakers in these countries are 66,020 (2016 census), 15,605 (2016 census) and 22,885 (2010 census), respectively. Macedonian also has more than 50,000 native speakers in the countries of Western Europe, predominantly in Germany, Switzerland and Italy.

The Macedonian language has the status of an official language only in North Macedonia, and is a recognized minority and official language in parts of Albania (Pustec), Romania, Serbia (Jabuka and Plandište) and Bosnia and Herzegovina. There are provisions for learning the Macedonian language in Romania, as Macedonians are an officially recognized minority group. Macedonian is studied and taught at various universities across the world, and research centers focusing on the language are found at universities across Europe (France, Germany, Austria, Italy, Russia) as well as in Australia, Canada and the United States (Chicago and North Carolina).

During the standardization process of the Macedonian language, the dialectal base selected was primarily based on the West-Central dialects, which span the triangle formed by the communities of Makedonski Brod, Kičevo, Demir Hisar, Bitola, Prilep, and Veles. These were considered the most widespread and the most likely to be adopted by speakers from other regions. The initial idea to select this region as a base was first proposed in Krste Petkov Misirkov's works, as he believed the Macedonian language should be based on those dialects that are distinct from neighboring Slavic languages, such as Bulgarian and Serbian. This view, however, does not take into account the fact that a Macedonian koiné language was already in existence.

Based on a large group of features, Macedonian dialects can be divided into Eastern, Western and Northern groups. The boundary between them runs geographically approximately from Skopje and Skopska Crna Gora along the rivers Vardar and Crna. There are numerous isoglosses between these dialectal variations, with structural differences in phonetics, prosody (accentuation), morphology and syntax. The Western group of dialects can be subdivided into smaller dialectal territories, the largest group of which includes the central dialects. The linguistic territory where Macedonian dialects were spoken also spans outside the country, within the region of Macedonia, including Pirin Macedonia in Bulgaria and Aegean Macedonia in Greece.
Variations in consonant pronunciation occur between the two groups, with most Western regions losing the /x/ and the /v/ in intervocalic position ("глава" (head): /ɡlava/ = /ɡla/; "глави" (heads): /ɡlavi/ = /ɡlaj/), while Eastern dialects preserve them. Stress in the Western dialects is generally fixed and falls on the antepenultimate syllable, while Eastern dialects have non-fixed stress systems in which stress can fall on any syllable of the word, which is also reminiscent of Bulgarian dialects; a short code sketch of the default Western pattern appears at the end of this passage. Additionally, Eastern dialects are distinguishable by their fast tonality, elision of sounds and the suffixes for definiteness. The Northern dialectal group is close to South Serbian and Torlakian dialects and is characterized by 46–47 phonetic and grammatical isoglosses. In addition, a more detailed classification can be based on the modern reflexes of the Proto-Slavic reduced vowels (yers), vocalic sonorants, and the back nasal *ǫ; that classification distinguishes six groups, divided between Western and Eastern dialects.

The phonological system of Standard Macedonian is based on the Prilep-Bitola dialect. Macedonian possesses five vowels, one semivowel, three liquid consonants, three nasal stops, three pairs of fricatives, two pairs of affricates, a non-paired voiceless fricative, nine pairs of voiced and unvoiced consonants and four pairs of stops. Out of all the Slavic languages, Macedonian has the most frequent occurrence of vowels relative to consonants, with a typical Macedonian sentence having on average 1.18 consonants for every vowel. The Macedonian language contains five vowels: /a/, /ɛ/, /ɪ/, /o/, and /u/. For the mid vowels "е" and "о", native Macedonian speakers produce a range of sounds, from [ɛ] to [ẹ] and from [o] to [ọ]. Unstressed vowels are not reduced, although they are pronounced more weakly and shortly than stressed ones. The five vowels and the letter "р" (/r/), which acts as a semivowel when found between two consonants (e.g. "црква", "church"), can be syllable-forming.

The schwa is phonemic in many dialects (varying in closeness to or ) but its use in the standard language is marginal. When writing a dialectal word and keeping the schwa for aesthetic effect, an apostrophe is used; for example, , , etc. When spelling words letter by letter, each consonant is followed by the schwa sound, and the individual letters of acronyms are pronounced with the schwa in the same way. A few lexicalized acronyms, among them the name of a brand of cigarettes, are exceptions. Vowel length is not phonemic, although vowels in stressed open syllables in disyllabic words with stress on the penultimate can be realized as long, e.g. 'Veles'. Certain vowel sequences may be realized phonetically as a single long vowel; alternatively, two vowels appearing next to each other can be pronounced separately (e.g. "пооди" - to walk).

The consonant inventory of the Macedonian language is written with 26 letters and distinguishes three groups of consonants ("согласки"): voiced ("звучни"), voiceless ("безвучни") and sonorant ("сонорни"). Typical features and rules that apply to consonants in the Macedonian language include the assimilation of voiced and voiceless consonants when next to each other, the devoicing of voiced consonants at the end of a word, double consonants and elision.
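The default antepenultimate stress pattern described above is mechanical enough to express in a few lines of code. The following is a hypothetical sketch, not taken from the source: it counts vowel letters as syllables and ignores lexical exceptions and clitic groups, so it illustrates only the default rule.

```python
# Minimal sketch of the default Macedonian stress rule: stress falls on the
# antepenultimate syllable (third from the end), or on the first syllable of
# words with fewer than three syllables. Hypothetical helper, not from the
# source; loanword exceptions and clitic groups are ignored.

VOWELS = set("аеиоу")  # Cyrillic vowel letters; syllabic р is ignored here

def stressed_syllable_index(word: str) -> int:
    """Return the 0-based index (counted from the start of the word)
    of the syllable that carries the default stress."""
    n_syllables = sum(ch in VOWELS for ch in word.lower())
    return max(n_syllables - 3, 0)

print(stressed_syllable_index("планина"))    # 3 syllables -> 0 (ПЛА-ни-на)
print(stressed_syllable_index("планината"))  # 4 syllables -> 1 (пла-НИ-на-та)
```

Note how the stress shifts when the definite article suffix adds a syllable: the rule always recounts from the end of the word, which is what "fixed antepenultimate stress" means in practice.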
At morpheme boundaries (represented in spelling) and at the end of a word (not represented in spelling), voicing opposition is neutralized. Like the Macedonian alphabet, Macedonian orthography was officially codified on 7 June 1945 at an ASNOM meeting. Rules about the orthography and orthoepy (the correct pronunciation of words) were first collected and outlined in the book "Правопис на македонскиот литературен јазик" ("Orthography of the Macedonian standard language"), published in 1945. Updated versions have subsequently appeared, with the most recent published in 2016. Macedonian orthography is consistent and phonemic in practice, an approximation of the principle of one grapheme per phoneme. This one-to-one correspondence is often simply described by the principle "write as you speak and read as it is written". There is only one exception to this rule, the letter "л", which is pronounced as /l/ before front vowels (e.g. "лист" (leaf), pronounced [list]) and /j/ (e.g. "полјанка" (meadow), pronounced [poljanka]), but as velar /ł/ elsewhere (e.g. "бела" (white), pronounced [beła]); a minimal code sketch of this rule appears at the end of this passage. Another sound that is not represented in the written form but is pronounced in words is the schwa.

Politicians and scholars from North Macedonia, Bulgaria and Greece have opposing views about the existence and distinctiveness of the Macedonian language. Through history, and especially before its codification, Macedonian has been referred to as a variant of Bulgarian or Serbian, or as a distinct language of its own. Historically, after its codification, the use of the language has been a subject of different views and internal policies in Serbia, Bulgaria and Greece. Some international scholars also maintain that Macedo-Bulgarian was a single pluricentric language until the 20th century and argue that the idea of linguistic separatism emerged in the late 19th century with the advent of Macedonian nationalism, with the need for a separate Macedonian standard language subsequently appearing in the early 20th century. Different linguists have argued that during its codification, the Macedonian standard language was Serbianized with regard to its orthography and vocabulary. The government of Bulgaria, Bulgarian academics, the Bulgarian Academy of Sciences and the general public have widely considered, and continue to consider, Macedonian part of the Bulgarian dialect area. Dialect experts of the Bulgarian language refer to the Macedonian language as "македонска езикова норма" (Macedonian linguistic norm) of the Bulgarian language. As of 2019, disputes regarding the language and its origins were ongoing in academic and political circles in the two countries. The international consensus outside of Bulgaria is that Macedonian is an autonomous language within the Eastern South Slavic dialect continuum. The Greek scientific and local community has opposed the use of the denomination "Macedonian" to refer to the language in light of the Greek-Macedonian naming dispute. Instead, the language is often called "Slavic", "Slavomacedonian" (translated as "Macedonian Slavic" in English), "makedonski", "makedoniski" ("Macedonian"), "slaviká" (Greek: "Slavic"), "dópia" or "entópia" (Greek: "local/indigenous [language]"), "balgàrtzki" (Bulgarian) or "Macedonian" in some parts of the region of Kastoria, "bògartski" ("Bulgarian") in some parts of Dolna Prespa, along with "naši" ("our own") and "stariski" ("old").
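Because the single orthographic exception noted above is itself rule-governed, it can also be illustrated in code. This is a hypothetical sketch of just that one rule, not a complete grapheme-to-phoneme converter:

```python
# Hypothetical sketch of the one exception to Macedonian's one-grapheme-per-
# phoneme principle: "л" is clear [l] before front vowels (е, и) and ј,
# and velar [ł] elsewhere. Everything else in the orthography maps one-to-one.

FRONT = set("еиј")  # front vowels plus ј trigger the clear [l]

def pronounce_l(word: str) -> list[str]:
    """Return the realization of each "л" in the word, in order."""
    lowered = word.lower()
    out = []
    for i, ch in enumerate(lowered):
        if ch == "л":
            nxt = lowered[i + 1] if i + 1 < len(lowered) else ""
            out.append("[l]" if nxt in FRONT else "[ł]")
    return out

print(pronounce_l("лист"))      # ['[l]'] - before и
print(pronounce_l("бела"))      # ['[ł]'] - before а
print(pronounce_l("полјанка"))  # ['[l]'] - before ј
```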
With the Prespa agreement signed in 2018 between the Government of North Macedonia and the Government of Greece, the latter country accepted the use of the adjective "Macedonian" to refer to the language, with a footnote describing it as Slavic. (The article closes with the Lord's Prayer in standard Macedonian as a sample text.)
https://en.wikipedia.org/wiki?curid=19037
Municipality A municipality is usually a single administrative division having corporate status and powers of self-government or jurisdiction as granted by national and regional laws to which it is subordinate. It is usually distinguished from the county, which may encompass rural territory or numerous small communities such as towns, villages and hamlets. The term "municipality" may also mean the governing or ruling body of a given municipality. A municipality is a general-purpose administrative subdivision, as opposed to a special-purpose district. The term is derived from French "municipalité" and Latin "municipalis". The English word "municipality" derives from the Latin social contract "municipium" (derived from a word meaning "duty holders"), referring to the Latin communities that supplied Rome with troops in exchange for their own incorporation into the Roman state (granting Roman citizenship to the inhabitants) while permitting the communities to retain their own local governments (a limited autonomy). A municipality can be any political jurisdiction, from a sovereign state, such as the Principality of Monaco, to a small village, such as West Hampton Dunes, New York. The territory over which a municipality has jurisdiction may encompass one or more populated places, or only part of one. Powers of municipalities range from virtual autonomy to complete subordination to the state. Municipalities may have the right to tax individuals and corporations with income tax, property tax, and corporate income tax, but may also receive substantial funding from the state. In some European countries, such as Germany, municipalities have the constitutional right to supply public services through municipally owned public utility companies. Terms cognate with "municipality", mostly referring to territory or political structure, are Spanish "municipio" (Spain) and "municipalidad" (Chile), and Catalan "municipi". In a number of countries, terms cognate with "commune" are used, referring to the community living in the area and the common interest. The same terms may be used for church congregations or parishes, for example in the German and Dutch Protestant churches. In Greece, the word Δήμος (demos) is used, also meaning 'community'; the word is known in English from the compound "democracy" (rule of the people). In some countries, the Spanish term "ayuntamiento", referring to a municipality's administration building, is extended via synecdoche to denote the municipality itself. In Moldova and Romania, both "municipalities" ("municipiu"; urban administrative units) and "communes" ("comună"; rural units) exist, and a commune may be part of a municipality. In many countries, comparable entities may exist with various names.
https://en.wikipedia.org/wiki?curid=19038
Marley Marl Marlon Williams (born September 30, 1962), better known by his stage name Marley Marl, is an American DJ, record producer, rapper and record label founder, primarily operating in hip hop music. Williams grew up in the Queensbridge housing projects in Queens, New York. He is credited with influencing a number of hip hop icons such as RZA, DJ Premier, and Pete Rock. He was also featured on Eric B. & Rakim's "Paid In Full", from their debut album, which was also recorded in his studio. As a producer, one notable project was LL Cool J's "Mama Said Knock You Out". Marley Marl became interested in music during the early days of rap, performing in local talent shows. He caught his big break in 1984 with artist Roxanne Shante's hit "Roxanne's Revenge". Marley Marl was also responsible for starting the hip hop collective the Juice Crew alongside DJ Mr. Magic. Marl was referenced on Biggie Smalls' track "Juicy" as one of Smalls' early influences.
https://en.wikipedia.org/wiki?curid=19041
Metal A metal (from Greek μέταλλον "métallon", "mine, quarry, metal") is a material that, when freshly prepared, polished, or fractured, shows a lustrous appearance, and conducts electricity and heat relatively well. Metals are typically malleable (they can be hammered into thin sheets) or ductile (they can be drawn into wires). A metal may be a chemical element such as iron; an alloy such as stainless steel; or a molecular compound such as polymeric sulfur nitride.

In physics, a metal is generally regarded as any substance capable of conducting electricity at a temperature of absolute zero. Many elements and compounds that are not normally classified as metals become metallic under high pressures. For example, the nonmetal iodine gradually becomes a metal at a pressure of between 40 and 170 thousand times atmospheric pressure. Equally, some materials regarded as metals can become nonmetals; sodium, for example, becomes a nonmetal at a pressure of just under two million times atmospheric pressure. In chemistry, two elements that would otherwise qualify (in physics) as brittle metals—arsenic and antimony—are commonly instead recognised as metalloids on account of their chemistry (predominantly non-metallic for arsenic, and balanced between metallicity and nonmetallicity for antimony). Around 95 of the 118 elements in the periodic table are metals (or are likely to be such). The number is inexact, as the boundaries between metals, nonmetals, and metalloids fluctuate slightly due to a lack of universally accepted definitions of the categories involved.

In astrophysics, the term "metal" is cast more widely to refer to all chemical elements in a star that are heavier than the lightest two, hydrogen and helium, and not just traditional metals. A star fuses lighter atoms, mostly hydrogen and helium, into heavier atoms over its lifetime. Used in that sense, the metallicity of an astronomical object is the proportion of its matter made up of the heavier chemical elements.

Metals, as chemical elements, comprise 25% of the Earth's crust and are present in many aspects of modern life. The strength and resilience of some metals has led to their frequent use in, for example, high-rise building and bridge construction, as well as most vehicles, many home appliances, tools, pipes, and railroad tracks. Precious metals were historically used as coinage, but in the modern era, coinage metals have extended to at least 23 of the chemical elements. The history of refined metals is thought to begin with the use of copper about 11,000 years ago. Gold, silver, iron (as meteoric iron), lead, and brass were likewise in use before the first known appearance of bronze in the 5th millennium BCE. Subsequent developments include the production of early forms of steel; the discovery of sodium—the first light metal—in 1809; the rise of modern alloy steels; and, since the end of World War II, the development of more sophisticated alloys.

Metals are shiny and lustrous, at least when freshly prepared, polished, or fractured. Sheets of metal thicker than a few micrometres appear opaque, but gold leaf transmits green light. The solid or liquid state of metals largely originates in the capacity of the metal atoms involved to readily lose their outer shell electrons. Broadly, the forces holding an individual atom's outer shell electrons in place are weaker than the attractive forces on the same electrons arising from interactions between the atoms in the solid or liquid metal.
The electrons involved become delocalised, and the atomic structure of a metal can effectively be visualised as a collection of atoms embedded in a cloud of relatively mobile electrons. This type of interaction is called a metallic bond. The strength of metallic bonds for different elemental metals reaches a maximum around the center of the transition metal series, as these elements have large numbers of delocalized electrons.

Although most elemental metals have higher densities than most nonmetals, there is a wide variation in their densities, lithium being the least dense (0.534 g/cm3) and osmium (22.59 g/cm3) the most dense. Magnesium, aluminium and titanium are light metals of significant commercial importance. Their respective densities of 1.7, 2.7 and 4.5 g/cm3 can be compared to those of the older structural metals, like iron at 7.9 and copper at 8.9 g/cm3. An iron ball would thus weigh about as much as three aluminium balls of equal volume.

Metals are typically malleable and ductile, deforming under stress without cleaving. The nondirectional nature of metallic bonding is thought to contribute significantly to the ductility of most metallic solids. In contrast, in an ionic compound like table salt, when the planes of an ionic bond slide past one another, the resultant change in location shifts ions of the same charge into close proximity, resulting in the cleavage of the crystal. Such a shift is not observed in a covalently bonded crystal, such as diamond, where fracture and crystal fragmentation occur. Reversible elastic deformation in metals can be described by Hooke's law for restoring forces, where the stress is linearly proportional to the strain (see the worked equations below). Heat, or forces larger than a metal's elastic limit, may cause a permanent (irreversible) deformation, known as plastic deformation or plasticity. An applied force may be a tensile (pulling) force, a compressive (pushing) force, or a shear, bending or torsion (twisting) force. A temperature change may affect the movement or displacement of structural defects in the metal such as grain boundaries, point vacancies, line and screw dislocations, stacking faults and twins in both crystalline and non-crystalline metals. Internal slip, creep, and metal fatigue may ensue.

The atoms of metallic substances are typically arranged in one of three common crystal structures, namely body-centered cubic (bcc), face-centered cubic (fcc), and hexagonal close-packed (hcp). In bcc, each atom is positioned at the center of a cube of eight others. In fcc and hcp, each atom is surrounded by twelve others, but the stacking of the layers differs. Some metals adopt different structures depending on the temperature. The unit cell for each crystal structure is the smallest group of atoms which has the overall symmetry of the crystal, and from which the entire crystalline lattice can be built up by repetition in three dimensions. In the case of the body-centered cubic crystal structure described above, the unit cell is made up of the central atom plus one-eighth of each of the eight corner atoms.

The electronic structure of metals means they are relatively good conductors of electricity. Electrons in matter can only have fixed rather than variable energy levels, and in a metal the energy levels of the electrons in its electron cloud, at least to some degree, correspond to the energy levels at which electrical conduction can occur.
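The elastic relation and the unit-cell bookkeeping described above, together with the free-electron picture of conduction that the next passage develops, can be written out explicitly. These are standard textbook forms, supplied for illustration rather than quoted from the article:

```latex
% Hooke's law for reversible elastic deformation:
% stress \sigma is proportional to strain \varepsilon, with E the elastic modulus.
\sigma = E\,\varepsilon

% Atoms per unit cell, counting shared atoms fractionally:
% bcc: one central atom plus eight corner atoms, each shared among eight cells;
% fcc: eight shared corners plus six face-centered atoms, each shared by two cells.
N_{\mathrm{bcc}} = 1 + 8 \cdot \tfrac{1}{8} = 2, \qquad
N_{\mathrm{fcc}} = 8 \cdot \tfrac{1}{8} + 6 \cdot \tfrac{1}{2} = 4

% Free-electron (Drude) conductivity: n is the density of mobile electrons,
% e the electron charge, \tau the mean time between scattering events,
% and m the electron mass.
\sigma = \frac{n e^{2} \tau}{m}
```

The Drude form makes the qualitative statements here concrete: metals conduct well because the density n of mobile electrons is large, and conductivity generally falls on heating because the scattering time τ shortens.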
In a semiconductor like silicon or a nonmetal like sulfur, there is an energy gap between the electrons in the substance and the energy level at which electrical conduction can occur. Consequently, semiconductors and nonmetals are relatively poor conductors. The elemental metals have electrical conductivity values ranging from 6.9 × 10³ S/cm for manganese to 6.3 × 10⁵ S/cm for silver. In contrast, a semiconducting metalloid such as boron has an electrical conductivity of 1.5 × 10⁻⁶ S/cm. With one exception, metallic elements reduce their electrical conductivity when heated; plutonium increases its electrical conductivity when heated in the temperature range of around −175 to +125 °C.

Metals are relatively good conductors of heat. The electrons in a metal's electron cloud are highly mobile and easily able to pass on heat-induced vibrational energy. The contribution of a metal's electrons to its heat capacity and thermal conductivity, and the electrical conductivity of the metal itself, can be calculated from the free electron model. However, this does not take into account the detailed structure of the metal's ion lattice. Taking into account the positive potential caused by the arrangement of the ion cores enables consideration of the electronic band structure and binding energy of a metal. Various mathematical models are applicable, the simplest being the nearly free electron model.

Metals are usually inclined to form cations through electron loss. Most will react with oxygen in the air to form oxides over various timescales (potassium burns in seconds, while iron rusts over years). Some others, like palladium, platinum and gold, do not react with the atmosphere at all. The oxides of metals are generally basic, as opposed to those of nonmetals, which are acidic or neutral. Exceptions are largely oxides with very high oxidation states, such as CrO₃, Mn₂O₇, and OsO₄, which have strictly acidic reactions. Painting, anodizing or plating metals are good ways to prevent their corrosion. However, a more reactive metal in the electrochemical series must be chosen for coating, especially when chipping of the coating is expected. Water and the two metals form an electrochemical cell, and if the coating is less reactive than the underlying metal, the coating actually "promotes" corrosion.

In chemistry, the elements which are usually considered to be metals under ordinary conditions are shown in yellow on the periodic table below. The elements shown as having unknown properties are likely to be metals. The remaining elements are either metalloids (B, Si, Ge, As, Sb, and Te being commonly recognised as such) or nonmetals. Astatine (At) is usually classified as either a nonmetal or a metalloid; it has been predicted to be a metal, and is here shown as a metalloid.

An alloy is a substance having metallic properties which is composed of two or more elements, at least one of which is a metal. An alloy may have a variable or fixed composition. For example, gold and silver form an alloy in which the proportions of gold or silver can be freely adjusted; titanium and silicon form an alloy Ti₂Si in which the ratio of the two components is fixed (such an alloy is also known as an intermetallic compound). Most pure metals are either too soft, brittle or chemically reactive for practical use. Combining different ratios of metals as alloys modifies the properties of pure metals to produce desirable characteristics.
The aim of making alloys is generally to make the metal less brittle, harder, more resistant to corrosion, or to give it a more desirable color and luster. Of all the metallic alloys in use today, the alloys of iron (steel, stainless steel, cast iron, tool steel, alloy steel) make up the largest proportion, both by quantity and commercial value. Iron alloyed with various proportions of carbon gives low-, mid- and high-carbon steels, with increasing carbon levels reducing ductility and toughness. The addition of silicon will produce cast irons, while the addition of chromium, nickel and molybdenum to carbon steels (more than 10%) results in stainless steels.

Other significant metallic alloys are those of aluminium, titanium, copper and magnesium. Copper alloys have been known since prehistory—bronze gave the Bronze Age its name—and have many applications today, most importantly in electrical wiring. The alloys of the other three metals have been developed relatively recently; due to their chemical reactivity, they require electrolytic extraction processes. The alloys of aluminium, titanium and magnesium are valued for their high strength-to-weight ratios; magnesium can also provide electromagnetic shielding. These materials are ideal for situations where high strength-to-weight ratio is more important than material cost, such as in aerospace and some automotive applications. Alloys specially designed for highly demanding applications, such as jet engines, may contain more than ten elements.

Metals can be categorised according to their physical or chemical properties. Categories described in the subsections below include ferrous and non-ferrous metals; brittle metals and refractory metals; white metals; heavy and light metals; and base, noble, and precious metals. The "Metallic elements" table in this section categorises the elemental metals on the basis of their chemical properties into alkali and alkaline earth metals; transition and post-transition metals; and lanthanides and actinides. Other categories are possible, depending on the criteria for inclusion. For example, the ferromagnetic metals—those metals that are magnetic at room temperature—are iron, cobalt, and nickel.

The term "ferrous" is derived from the Latin word meaning "containing iron". This can include pure iron, such as wrought iron, or an alloy such as steel. Ferrous metals are often magnetic, but not exclusively. Non-ferrous metals and alloys lack appreciable amounts of iron. While nearly all metals are malleable or ductile, a few—beryllium, chromium, manganese, gallium, and bismuth—are brittle. Arsenic and antimony, if admitted as metals, are brittle. Low values of the ratio of bulk elastic modulus to shear modulus (Pugh's criterion) are indicative of intrinsic brittleness. In materials science, metallurgy, and engineering, a refractory metal is a metal that is extraordinarily resistant to heat and wear. Which metals belong to this category varies; the most common definition includes niobium, molybdenum, tantalum, tungsten, and rhenium. They all have melting points above 2000 °C, and a high hardness at room temperature. A white metal is any of a range of white-coloured metals (or their alloys) with relatively low melting points. Such metals include zinc, cadmium, tin, antimony (here counted as a metal), lead, and bismuth, some of which are quite toxic.
In Britain, the fine art trade uses the term "white metal" in auction catalogues to describe foreign silver items which do not carry British Assay Office marks, but which are nonetheless understood to be silver and are priced accordingly. A heavy metal is any relatively dense metal or metalloid. More specific definitions have been proposed, but none have obtained widespread acceptance. Some heavy metals have niche uses, or are notably toxic; some are essential in trace amounts. All other metals are light metals.

In chemistry, the term "base metal" is used informally to refer to a metal that is easily oxidized or corroded, such as by reacting easily with dilute hydrochloric acid (HCl) to form a metal chloride and hydrogen. Examples include iron, nickel, lead and zinc. Copper is considered a base metal as it is oxidized relatively easily, although it does not react with HCl. The term "noble metal" is commonly used in opposition to "base metal". Noble metals are resistant to corrosion or oxidation, unlike most base metals. They tend to be precious metals, often due to perceived rarity. Examples include gold, platinum, silver, rhodium, iridium and palladium. In alchemy and numismatics, the term "base metal" is contrasted with "precious metal", that is, metals of high economic value. A longtime goal of the alchemists was the transmutation of base metals into precious metals, including such coinage metals as silver and gold. Most coins today are made of base metals with no intrinsic value; in the past, coins frequently derived their value primarily from their precious metal content. Chemically, the precious metals (like the noble metals) are less reactive than most elements, and have high luster and high electrical conductivity. Historically, precious metals were important as currency, but are now regarded mainly as investment and industrial commodities. Gold, silver, platinum and palladium each have an ISO 4217 currency code. The best-known precious metals are gold and silver. While both have industrial uses, they are better known for their uses in art, jewelry, and coinage. Other precious metals include the platinum group metals: ruthenium, rhodium, palladium, osmium, iridium, and platinum, of which platinum is the most widely traded. The demand for precious metals is driven not only by their practical use, but also by their role as investments and a store of value. Palladium and platinum, as of fall 2018, were valued at about three quarters the price of gold. Silver is substantially less expensive than these metals, but is often traditionally considered a precious metal in light of its role in coinage and jewelry.

Metals up to the vicinity of iron (in the periodic table) are largely made via stellar nucleosynthesis. In this process, lighter elements from hydrogen to silicon undergo successive fusion reactions inside stars, releasing light and heat and forming heavier elements with higher atomic numbers. Heavier metals are not usually formed this way, since fusion reactions involving such nuclei would consume rather than release energy. Rather, they are largely synthesised (from elements with a lower atomic number) by neutron capture, with the two main modes of this repetitive capture being the s-process and the r-process. In the s-process ("s" stands for "slow"), singular captures are separated by years or decades, allowing the less stable nuclei to beta decay, while in the r-process ("rapid"), captures happen faster than nuclei can decay.
Therefore, the s-process takes a more or less clear path: for example, stable cadmium-110 nuclei are successively bombarded by free neutrons inside a star until they form cadmium-115 nuclei, which are unstable and decay to form indium-115 (which is nearly stable, with a half-life times the age of the universe). These nuclei capture neutrons and form indium-116, which is unstable and decays to form tin-116, and so on. In contrast, there is no such path in the r-process. The s-process stops at bismuth due to the short half-lives of the next two elements, polonium and astatine, which decay to bismuth or lead. The r-process is so fast that it can skip this zone of instability and go on to create heavier elements such as thorium and uranium.

Metals condense in planets as a result of stellar evolution and destruction processes. Stars lose much of their mass when it is ejected late in their lifetimes, and sometimes thereafter as a result of a neutron star merger, thereby increasing the abundance of elements heavier than helium in the interstellar medium. When gravitational attraction causes this matter to coalesce and collapse, new stars and planets are formed.

The Earth's crust is made of approximately 25% metals by weight, of which 80% are light metals such as sodium, magnesium, and aluminium. Nonmetals (~75%) make up the rest of the crust. Despite the overall scarcity of some heavier metals such as copper, they can become concentrated in economically extractable quantities as a result of mountain building, erosion, or other geological processes. Metals are primarily found as lithophiles (rock-loving) or chalcophiles (ore-loving). Lithophile metals are mainly the s-block elements, the more reactive of the d-block elements, and the f-block elements. They have a strong affinity for oxygen and mostly exist as relatively low-density silicate minerals. Chalcophile metals are mainly the less reactive d-block elements, and the period 4–6 p-block metals. They are usually found in (insoluble) sulfide minerals. Being denser than the lithophiles, and hence sinking lower into the crust at the time of its solidification, the chalcophiles tend to be less abundant than the lithophiles. On the other hand, gold is a siderophile, or iron-loving, element. It does not readily form compounds with either oxygen or sulfur. At the time of the Earth's formation, and as the most noble (inert) of metals, gold sank into the core due to its tendency to form high-density metallic alloys. Consequently, it is a relatively rare metal. Some other (less) noble metals—molybdenum, rhenium, the platinum group metals (ruthenium, rhodium, palladium, osmium, iridium, and platinum), germanium, and tin—can be counted as siderophiles, but only in terms of their primary occurrence in the Earth (core, mantle and crust) rather than the crust. These metals otherwise occur in the crust, in small quantities, chiefly as chalcophiles (less so in their native form).

The rotating fluid outer core of the Earth's interior, which is composed mostly of iron, is thought to be the source of Earth's protective magnetic field. The core lies above Earth's solid inner core and below its mantle. If it could be rearranged into a column having a footprint it would have a height of nearly 700 light years. The magnetic field shields the Earth from the charged particles of the solar wind, and from cosmic rays that would otherwise strip away the upper atmosphere (including the ozone layer that limits the transmission of ultraviolet radiation).
Metals are often extracted from the Earth by means of mining ores that are rich sources of the requisite elements, such as bauxite. Ore is located by prospecting techniques, followed by the exploration and examination of deposits. Mineral sources are generally divided into surface mines, which are mined by excavation using heavy equipment, and subsurface mines. In some cases, the sale price of the metals involved makes it economically feasible to mine lower-concentration sources. Once the ore is mined, the metals must be extracted, usually by chemical or electrolytic reduction. Pyrometallurgy uses high temperatures to convert ore into raw metals, while hydrometallurgy employs aqueous chemistry for the same purpose. The methods used depend on the metal and its contaminants. When a metal ore is an ionic compound of that metal and a non-metal, the ore must usually be smelted—heated with a reducing agent—to extract the pure metal. Many common metals, such as iron, are smelted using carbon as a reducing agent. Some metals, such as aluminium and sodium, have no commercially practical reducing agent, and are extracted using electrolysis instead (representative reactions for both routes are given below). Sulfide ores are not reduced directly to the metal but are roasted in air to convert them to oxides.

Metals are present in nearly all aspects of modern life. Iron, a heavy metal, may be the most common, as it accounts for 90% of all refined metals; aluminium, a light metal, is the next most commonly refined metal. Pure iron may be the cheapest metallic element of all, at a cost of about US$0.07 per gram. Its ores are widespread; it is easy to refine; and the technology involved has been developed over hundreds of years. Cast iron is even cheaper, at a fraction of US$0.01 per gram, because there is no need for subsequent purification. Platinum, at a cost of about $27 per gram, may be the most ubiquitous given its very high melting point, resistance to corrosion, electrical conductivity, and durability; it is said to be found in, or used to produce, 20% of all consumer goods. Polonium is likely to be the most expensive metal, at a notional cost of about $100,000,000 per gram, due to its scarcity and micro-scale production.

Some metals and metal alloys possess high structural strength per unit mass, making them useful materials for carrying large loads or resisting impact damage. Metal alloys can be engineered to have high resistance to shear, torque and deformation. However, the same metal can also be vulnerable to fatigue damage through repeated use, or to sudden stress failure when a load capacity is exceeded. The strength and resilience of metals has led to their frequent use in high-rise building and bridge construction, as well as most vehicles, many appliances, tools, pipes, and railroad tracks. Metals are good conductors, making them valuable in electrical appliances and for carrying an electric current over a distance with little energy lost. Electrical power grids rely on metal cables to distribute electricity. Home electrical systems, for the most part, are wired with copper wire for its good conducting properties. The thermal conductivity of metals is useful for containers to heat materials over a flame. Metals are also used for heat sinks to protect sensitive equipment from overheating. The high reflectivity of some metals enables their use in mirrors, including precision astronomical instruments, and adds to the aesthetics of metallic jewelry.
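As a concrete illustration of the two extraction routes just described, here are representative textbook reactions, chosen for illustration (the article itself gives no equations): carbothermic reduction of iron ore in a blast furnace, and the overall cell reaction of the Hall-Héroult electrolytic process for aluminium.

```latex
% Smelting with carbon as the reducing agent (blast furnace, overall step):
\mathrm{Fe_2O_3} + 3\,\mathrm{CO} \longrightarrow 2\,\mathrm{Fe} + 3\,\mathrm{CO_2}

% Electrolytic extraction where no practical chemical reducing agent exists
% (Hall-Heroult process, overall reaction with consumable carbon anodes):
2\,\mathrm{Al_2O_3} + 3\,\mathrm{C} \longrightarrow 4\,\mathrm{Al} + 3\,\mathrm{CO_2}
```

In the first route the carbon (via carbon monoxide) chemically strips oxygen from the ore; in the second, the reduction is driven by an electric current, which is why aluminium smelting is so energy-intensive.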
Some metals have specialized uses; mercury is a liquid at room temperature and is used in switches to complete a circuit when it flows over the switch contacts. Radioactive metals such as uranium and plutonium are used in nuclear power plants to produce energy via nuclear fission. Shape memory alloys are used for applications such as pipes, fasteners and vascular stents. Metals can be doped with foreign molecules—organic, inorganic, biological and polymers. This doping endows the metal with new properties that are induced by the guest molecules. Applications in catalysis, medicine, electrochemical cells, corrosion and more have been developed.

Demand for metals is closely linked to economic growth, given their use in infrastructure, construction, manufacturing, and consumer goods. During the 20th century, the variety of metals used in society grew rapidly. Today, the development of major nations, such as China and India, and technological advances are fuelling ever more demand. The result is that mining activities are expanding, and more and more of the world's metal stocks are above ground in use, rather than below ground as unused reserves. An example is the in-use stock of copper: between 1932 and 1999, copper in use in the U.S. rose from 73 g to 238 g per person.

Metals are inherently recyclable, so in principle they can be used over and over again, minimizing these negative environmental impacts and saving energy. For example, 95% of the energy used to make aluminium from bauxite ore is saved by using recycled material. Globally, metal recycling rates are generally low. In 2010, the International Resource Panel, hosted by the United Nations Environment Programme, published reports on the metal stocks that exist within society and their recycling rates. The authors of the report observed that the metal stocks in society can serve as huge mines above ground. They warned that the recycling rates of some rare metals used in applications such as mobile phones, battery packs for hybrid cars and fuel cells are so low that, unless future end-of-life recycling rates are dramatically stepped up, these critical metals will become unavailable for use in modern technology.

Some metals are either essential nutrients (typically iron, cobalt, and zinc) or relatively harmless (such as ruthenium, silver, and indium), but can be toxic in larger amounts or certain forms. Other metals, such as cadmium, mercury, and lead, are highly poisonous. Potential sources of metal poisoning include mining, tailings, industrial wastes, agricultural runoff, occupational exposure, paints and treated timber.

Copper, which occurs in native form, may have been the first metal discovered, given its distinctive appearance, heaviness, and malleability compared to other stones or pebbles. Gold, silver, iron (as meteoric iron), and lead were likewise discovered in prehistory. Forms of brass, an alloy of copper and zinc made by concurrently smelting the ores of these metals, originate from this period (although pure zinc was not isolated until the 13th century). The malleability of the solid metals led to the first attempts to craft metal ornaments, tools, and weapons. Meteoric iron containing nickel was discovered from time to time and, in some respects, this was superior to any industrial steel manufactured up to the 1880s, when alloy steels became prominent. The discovery of bronze (an alloy of copper with arsenic or tin) enabled people to create metal objects which were harder and more durable than previously possible.
Bronze tools, weapons, armor, and building materials such as decorative tiles were harder and more durable than their stone and copper ("Chalcolithic") predecessors. Initially, bronze was made of copper and arsenic (forming arsenic bronze) by smelting naturally or artificially mixed ores of copper and arsenic. The earliest artifacts so far known come from the Iranian plateau in the 5th millennium BCE. It was only later that tin was used, becoming the major non-copper ingredient of bronze in the late 3rd millennium BCE. Pure tin itself was first isolated in 1800 BCE by Chinese and Japanese metalworkers. Mercury was known to the ancient Chinese and Indians before 2000 BCE, and has been found in Egyptian tombs dating from 1500 BCE.

The earliest known production of steel, an iron-carbon alloy, is seen in pieces of ironware excavated from an archaeological site in Anatolia (Kaman-Kalehöyük); they are nearly 4,000 years old, dating from 1800 BCE. From about 500 BCE, sword-makers of Toledo, Spain, were making early forms of alloy steel by adding a mineral called wolframite, which contained tungsten and manganese, to iron ore (and carbon). The resulting Toledo steel came to the attention of Rome when used by Hannibal in the Punic Wars. It soon became the basis for the weaponry of the Roman legions; their swords were said to have been "so keen that there is no helmet which cannot be cut through by them."

In pre-Columbian America, objects made of tumbaga, an alloy of copper and gold, started being produced in Panama and Costa Rica between 300 and 500 CE. Small metal sculptures were common, and an extensive range of tumbaga (and gold) ornaments comprised the usual regalia of persons of high status. At around the same time, indigenous Ecuadorians were combining gold with a naturally occurring platinum alloy containing small amounts of palladium, rhodium, and iridium to produce miniatures and masks composed of a white gold-platinum alloy. The metalworkers involved heated gold with grains of the platinum alloy until the gold melted, at which point the platinum group metals became bound within the gold. After cooling, the resulting conglomeration was hammered and reheated repeatedly until it became as homogeneous as if all of the metals concerned had been melted together (attaining the melting points of the platinum group metals concerned was beyond the technology of the day).

Arabic and medieval alchemists believed that all metals and matter were composed of the principle of sulfur, the father of all metals and carrier of the combustible property, and the principle of mercury, the mother of all metals and carrier of the liquidity, fusibility, and volatility properties. These principles were not necessarily the common substances sulfur and mercury found in most laboratories. This theory reinforced the belief that all metals were destined to become gold in the bowels of the earth through the proper combinations of heat, digestion, time, and elimination of contaminants, all of which could be developed and hastened through the knowledge and methods of alchemy. Arsenic, zinc, antimony, and bismuth became known, although these were at first called semimetals or bastard metals on account of their immalleability. All four may have been used incidentally in earlier times without recognising their nature. Albertus Magnus is believed to have been the first to isolate arsenic from a compound in 1250, by heating soap together with arsenic trisulfide. Metallic zinc, which is brittle if impure, was isolated in India by 1300 AD.
The first description of a procedure for isolating antimony is in the 1540 book "De la pirotechnia" by Vannoccio Biringuccio. Bismuth was described by Agricola in "De Natura Fossilium" (c. 1546); it had been confused in early times with tin and lead because of its resemblance to those elements. The first systematic text on the arts of mining and metallurgy was "De la Pirotechnia" (1540) by Vannoccio Biringuccio, which treats the examination, fusion, and working of metals. Sixteen years later, Georgius Agricola published "De Re Metallica" in 1556, a clear and complete account of the profession of mining, metallurgy, and the accessory arts and sciences, as well as qualifying as the greatest treatise on the chemical industry through the sixteenth century. He gave the following description of a metal in his "De Natura Fossilium" (1546):

Metal is a mineral body, by nature either liquid or somewhat hard. The latter may be melted by the heat of the fire, but when it has cooled down again and lost all heat, it becomes hard again and resumes its proper form. In this respect it differs from the stone which melts in the fire, for although the latter regain its hardness, yet it loses its pristine form and properties. Traditionally there are six different kinds of metals, namely gold, silver, copper, iron, tin and lead. There are really others, for quicksilver is a metal, although the Alchemists disagree with us on this subject, and bismuth is also. The ancient Greek writers seem to have been ignorant of bismuth, wherefore Ammonius rightly states that there are many species of metals, animals, and plants which are unknown to us. Stibium when smelted in the crucible and refined has as much right to be regarded as a proper metal as is accorded to lead by writers. If when smelted, a certain portion be added to tin, a bookseller's alloy is produced from which the type is made that is used by those who print books on paper. Each metal has its own form which it preserves when separated from those metals which were mixed with it. Therefore neither electrum nor Stannum [not meaning our tin] is of itself a real metal, but rather an alloy of two metals. Electrum is an alloy of gold and silver, Stannum of lead and silver. And yet if silver be parted from the electrum, then gold remains and not electrum; if silver be taken away from Stannum, then lead remains and not Stannum. Whether brass, however, is found as a native metal or not, cannot be ascertained with any surety. We only know of the artificial brass, which consists of copper tinted with the colour of the mineral calamine. And yet if any should be dug up, it would be a proper metal. Black and white copper seem to be different from the red kind. Metal, therefore, is by nature either solid, as I have stated, or fluid, as in the unique case of quicksilver. But enough now concerning the simple kinds.

Platinum, the third precious metal after gold and silver, was discovered in Ecuador during the period 1736 to 1744 by the Spanish astronomer Antonio de Ulloa and his colleague the mathematician Jorge Juan y Santacilia. Ulloa was the first person to write a scientific description of the metal, in 1748. In 1789, the German chemist Martin Heinrich Klaproth was able to isolate an oxide of uranium, which he thought was the metal itself. Klaproth was subsequently credited as the discoverer of uranium. It was not until 1841 that the French chemist Eugène-Melchior Péligot was able to prepare the first sample of uranium metal.
Henri Becquerel subsequently discovered radioactivity in 1896 by using uranium. In the 1790s, Joseph Priestley and the Dutch chemist Martinus van Marum observed the transformative action of metal surfaces on the dehydrogenation of alcohol, a development which subsequently led, in 1831, to the industrial-scale synthesis of sulphuric acid using a platinum catalyst. In 1803, cerium was the first of the lanthanide metals to be discovered, in Bastnäs, Sweden, by Jöns Jakob Berzelius and Wilhelm Hisinger, and independently by Martin Heinrich Klaproth in Germany. The lanthanide metals were largely regarded as oddities until the 1960s, when methods were developed to more efficiently separate them from one another. They have subsequently found uses in cell phones, magnets, lasers, lighting, batteries, catalytic converters, and in other applications enabling modern technologies.

Other metals discovered and prepared during this time were cobalt, nickel, manganese, molybdenum, tungsten, and chromium, and some of the platinum group metals: palladium, osmium, iridium, and rhodium. All metals discovered until 1809 had relatively high densities; their heaviness was regarded as a singularly distinguishing criterion. From 1809 onwards, light metals such as sodium, potassium, and strontium were isolated. Their low densities challenged conventional wisdom as to the nature of metals. They behaved chemically as metals, however, and were subsequently recognised as such.

Aluminium was discovered in 1824, but it was not until 1886 that an industrial large-scale production method was developed. Prices of aluminium dropped, and aluminium became widely used in jewelry, everyday items, eyeglass frames, optical instruments, tableware, and foil in the 1890s and early 20th century. Aluminium's ability to form hard yet light alloys with other metals provided the metal many uses at the time. During World War I, major governments demanded large shipments of aluminium for light, strong airframes. The most common metal in use for electric power transmission today is aluminium conductor steel reinforced. Also seeing much use is all-aluminium-alloy conductor. Aluminium is used because it has about half the weight of a copper cable of comparable resistance (though of larger diameter due to lower specific conductivity), as well as being cheaper. Copper was more popular in the past and is still in use, especially at lower voltages and for grounding.

While pure metallic titanium (99.9%) was first prepared in 1910, it was not used outside the laboratory until 1932. In the 1950s and 1960s, the Soviet Union pioneered the use of titanium in military and submarine applications as part of programs related to the Cold War. Starting in the early 1950s, titanium came into use extensively in military aviation, particularly in high-performance jets, starting with aircraft such as the F-100 Super Sabre and Lockheed A-12 and SR-71. Metallic scandium was produced for the first time in 1937, and the first pound of 99% pure scandium metal was produced in 1960. Production of aluminium-scandium alloys began in 1971, following a U.S. patent; aluminium-scandium alloys were also developed in the USSR.

The modern era in steelmaking began with the introduction of Henry Bessemer's Bessemer process in 1855, the raw material for which was pig iron. His method let him produce steel in large quantities cheaply; thus mild steel came to be used for most purposes for which wrought iron was formerly used.
The Gilchrist-Thomas process (or "basic Bessemer process") was an improvement to the Bessemer process, made by lining the converter with a basic material to remove phosphorus. Owing to its high tensile strength and low cost, steel came to be a major component used in buildings, infrastructure, tools, ships, automobiles, machines, appliances, and weapons. In 1872, the Englishmen Clark and Woods patented an alloy that would today be considered a stainless steel. The corrosion resistance of iron-chromium alloys had been recognized in 1821 by the French metallurgist Pierre Berthier, who noted their resistance against attack by some acids and suggested their use in cutlery. Metallurgists of the 19th century were unable to produce the combination of low carbon and high chromium found in most modern stainless steels, and the high-chromium alloys they could produce were too brittle to be practical. It was not until 1912 that the industrialisation of stainless steel alloys occurred in England, Germany, and the United States. By 1900, three metals with atomic numbers less than that of lead (#82), the heaviest stable metal, remained to be discovered: elements 71, 72, and 75. Von Welsbach proved in 1906 that the old ytterbium also contained a new element (#71), which he named "cassiopeium". Urbain proved this simultaneously, but his samples were very impure and contained only trace quantities of the new element. Despite this, his chosen name "lutetium" was adopted. In 1908, Ogawa found element 75 in thorianite but assigned it as element 43 instead of 75 and named it "nipponium". In 1925, Walter Noddack, Ida Eva Tacke, and Otto Berg announced its separation from gadolinite and gave it the present name, "rhenium". Georges Urbain claimed to have found element 72 in rare-earth residues, while Vladimir Vernadsky independently found it in orthite. Neither claim was confirmed due to World War I, and neither could be confirmed later, as the chemistry they reported does not match that now known for hafnium. After the war, in 1922, Coster and Hevesy found the element by X-ray spectroscopic analysis in Norwegian zircon. Hafnium was thus the last stable element to be discovered. By the end of World War II, scientists had synthesized four post-uranium elements, all of them radioactive (unstable) metals: neptunium (in 1940), plutonium (1940–41), and curium and americium (1944), representing elements 93 to 96. The first two of these were eventually found in nature as well. Curium and americium were by-products of the Manhattan Project, which produced the world's first atomic bomb in 1945. The bomb was based on the nuclear fission of uranium, a metal first thought to have been discovered nearly 150 years earlier. Superalloys composed of combinations of Fe, Ni, Co, and Cr, and lesser amounts of W, Mo, Ta, Nb, Ti, and Al, were developed shortly after World War II for use in high-performance engines operating at elevated temperatures (above 650 °C (1,200 °F)). They retain most of their strength under these conditions for prolonged periods, and combine good low-temperature ductility with resistance to corrosion or oxidation. Superalloys can now be found in a wide range of applications, including land, maritime, and aerospace turbines, and chemical and petroleum plants. The successful development of the atomic bomb at the end of World War II sparked further efforts to synthesize new elements, nearly all of which are, or are expected to be, metals, and all of which are radioactive.
It was not until 1949 that element 97 (berkelium), next after element 96 (curium), was synthesized, by firing alpha particles at an americium target. In 1952, element 100 (fermium) was found in the debris of the first hydrogen bomb explosion; hydrogen, a nonmetal, had been identified as an element nearly 200 years earlier. Since 1952, elements 101 (mendelevium) to 117 (tennessine) have been synthesized. The most recently synthesized element is 118 (oganesson); its status as a metal or a nonmetal, or something else, is not yet clear. A metallic glass (also known as an amorphous or glassy metal) is a solid metallic material, usually an alloy, with a disordered atomic-scale structure. Most pure and alloyed metals, in their solid state, have atoms arranged in a highly ordered crystalline structure; amorphous metals have a non-crystalline, glass-like structure. But unlike common glasses, such as window glass, which are typically electrical insulators, amorphous metals have good electrical conductivity. Amorphous metals are produced in several ways, including extremely rapid cooling, physical vapor deposition, solid-state reaction, ion irradiation, and mechanical alloying. The first reported metallic glass was an alloy (Au75Si25) produced at Caltech in 1960. More recently, batches of amorphous steel with three times the strength of conventional steel alloys have been produced. Currently the most important applications rely on the special magnetic properties of some ferromagnetic metallic glasses: their low magnetization loss is exploited in high-efficiency transformers, and theft-control ID tags and other article surveillance schemes often use metallic glasses because of these magnetic properties. A shape-memory alloy (SMA) is an alloy that "remembers" its original shape and, when deformed, returns to its pre-deformed shape upon heating. While the shape-memory effect was first observed in 1932, in an Au-Cd alloy, it was not until the accidental discovery of the effect in a Ni-Ti alloy in 1962 that research began in earnest, and another ten years passed before commercial applications emerged. SMAs have applications in the robotics, automotive, aerospace, and biomedical industries. There is another type of SMA, called a ferromagnetic shape-memory alloy (FSMA), that changes shape under strong magnetic fields. These materials are of particular interest because the magnetic response tends to be faster and more efficient than temperature-induced responses. In 1984, the Israeli chemist Dan Shechtman found an aluminium-manganese alloy having five-fold symmetry, in breach of crystallographic convention at the time, which held that crystalline structures could only have two-, three-, four-, or six-fold symmetry. Owing to fear of the scientific community's reaction, it took him two years to publish the results, for which he was awarded the Nobel Prize in Chemistry in 2011. Since that time, hundreds of quasicrystals have been reported and confirmed. They exist in many metallic alloys (and some polymers). Quasicrystals are found most often in aluminium alloys (Al-Li-Cu, Al-Mn-Si, Al-Ni-Co, Al-Pd-Mn, Al-Cu-Fe, Al-Cu-V, etc.), but numerous other compositions are also known (Cd-Yb, Ti-Zr-Ni, Zn-Mg-Ho, Zn-Mg-Sc, In-Ag-Yb, Pd-U-Si, etc.). Quasicrystals effectively have infinitely large unit cells. Icosahedrite (Al63Cu24Fe13), the first quasicrystal found in nature, was discovered in 2009.
Most quasicrystals have ceramic-like properties, including low electrical conductivity (approaching values seen in insulators), low thermal conductivity, high hardness, brittleness, resistance to corrosion, and non-stick properties. Quasicrystals have been used to develop heat insulation, LEDs, diesel engines, and new materials that convert heat to electricity. New applications may take advantage of the low coefficient of friction and the hardness of some quasicrystalline materials, for example by embedding particles in plastic to make strong, hard-wearing, low-friction plastic gears. Other potential applications include selective solar absorbers for power conversion, broad-wavelength reflectors, and bone repair and prostheses applications where biocompatibility, low friction, and corrosion resistance are required. Complex metallic alloys (CMAs) are intermetallic compounds characterized by large unit cells comprising some tens up to thousands of atoms; the presence of well-defined clusters of atoms (frequently with icosahedral symmetry); and partial disorder within their crystalline lattices. They are composed of two or more metallic elements, sometimes with metalloids or chalcogenides added. They include, for example, NaCd2, with 348 sodium atoms and 768 cadmium atoms in the unit cell. Linus Pauling attempted to describe the structure of NaCd2 in 1923 but did not succeed until 1955. At first called "giant unit cell crystals", these materials came to be known as CMAs, and interest in them did not pick up until 2002, with the publication of a paper called "Structurally Complex Alloy Phases", given at the 8th International Conference on Quasicrystals. Potential applications of CMAs include heat insulation; solar heating; magnetic refrigerators; using waste heat to generate electricity; and coatings for turbine blades in military engines. High-entropy alloys (HEAs) such as AlLiMgScTi are composed of equal or nearly equal quantities of five or more metals. Compared to conventional alloys with only one or two base metals, HEAs have considerably better strength-to-weight ratios, higher tensile strength, and greater resistance to fracturing, corrosion, and oxidation. Although HEAs were described as early as 1981, significant interest did not develop until the 2010s; they continue to be the focus of research in materials science and engineering because of their potential for desirable properties. In a MAX phase alloy, M is an early transition metal, A is an A-group element (mostly group IIIA and IVA, or groups 13 and 14), and X is either carbon or nitrogen. Examples are Hf2SnC and Ti4AlN3. Such alloys have some of the best properties of metals and ceramics, including high electrical and thermal conductivity, thermal shock resistance, damage tolerance, machinability, high elastic stiffness, and low thermal expansion coefficients. They can be polished to a metallic luster because of their excellent electrical conductivities. During mechanical testing, it has been found that polycrystalline Ti3SiC2 cylinders can be repeatedly compressed at room temperature, up to stresses of 1 GPa, and fully recover upon removal of the load. Some MAX phases are also highly resistant to chemical attack (e.g. Ti3SiC2) and to high-temperature oxidation in air (Ti2AlC, Cr2AlC, and Ti3AlC2).
Potential applications for MAX phase alloys include tough, machinable, thermal shock-resistant refractories; high-temperature heating elements; coatings for electrical contacts; and neutron irradiation-resistant parts for nuclear applications. While MAX phase alloys were discovered in the 1960s, the first paper on the subject was not published until 1996.
https://en.wikipedia.org/wiki?curid=19042
MIME Multipurpose Internet Mail Extensions (MIME) is an Internet standard that extends the format of email messages to support text in character sets other than ASCII, as well as attachments of audio, video, images, and application programs. Message bodies may consist of multiple parts, and header information may be specified in non-ASCII character sets. Email messages with MIME formatting are typically transmitted with standard protocols, such as the Simple Mail Transfer Protocol (SMTP), the Post Office Protocol (POP), and the Internet Message Access Protocol (IMAP). The MIME standard is specified in a series of requests for comments, principally RFC 2045 through RFC 2049; its integration with SMTP email is specified in further RFCs. Although the MIME formalism was designed mainly for SMTP, its content types are also important in other communication protocols. In the HyperText Transfer Protocol (HTTP) for the World Wide Web, servers insert a MIME header field at the beginning of any Web transmission. Clients use the content type or media type header to select an appropriate viewer application for the type of data indicated. Browsers typically contain GIF and JPEG image viewers. The presence of the MIME-Version header field indicates the message is MIME-formatted. The value is typically "1.0", so the field appears as follows: MIME-Version: 1.0. According to MIME co-creator Nathaniel Borenstein, the version number was introduced to permit changes to the MIME protocol in subsequent versions. However, Borenstein admitted shortcomings in the specification that hindered the implementation of this feature: "We did not adequately specify how to handle a future MIME version. ... So if you write something that knows 1.0, what should you do if you encounter 2.0 or 1.1? I sort of thought it was obvious but it turned out everyone implemented that in different ways. And the result is that it would be just about impossible for the Internet to ever define a 2.0 or a 1.1." The Content-Type header field indicates the media type of the message content, consisting of a "type" and "subtype", for example: Content-Type: text/plain. Through the use of the "multipart" type, MIME allows mail messages to have parts arranged in a tree structure, where the leaf nodes are any non-multipart content type and the non-leaf nodes are any of a variety of multipart types. This mechanism supports, among other things, plain-text messages, messages with attachments, and alternative renditions of the same content. The original MIME specifications only described the structure of mail messages; they did not address the issue of presentation styles. The content-disposition header field was added in RFC 2183 to specify the presentation style. A MIME part can have an "inline" disposition, meaning it should be displayed automatically, or an "attachment" disposition, meaning it requires some action from the user to open it. In addition to the presentation style, the Content-Disposition field also provides parameters for specifying the name of the file and the creation and modification dates, which can be used by the reader's mail user agent to store the attachment. The following example is adapted from RFC 2183, where the header field is defined: Content-Disposition: attachment; filename=genome.jpeg; modification-date="Wed, 12 Feb 1997 16:29:51 -0500". The filename may be encoded as defined in RFC 2231. As of 2010, a majority of mail user agents did not follow this prescription fully. The widely used Mozilla Thunderbird mail client ignores the content-disposition fields in the messages and uses independent algorithms for selecting the MIME parts to display automatically. Thunderbird prior to version 3 also sent out newly composed messages with "inline" content disposition for all MIME parts. Most users are unaware of how to set the content disposition to "attachment".
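To make the disposition mechanics concrete, here is a minimal sketch using Python's standard email library; the addresses and the attached file name are illustrative placeholders, not values from any particular specification.

from email.message import EmailMessage

# Build a simple MIME message with an inline body and an attached file.
msg = EmailMessage()
msg["From"] = "alice@example.com"   # placeholder address
msg["To"] = "bob@example.com"       # placeholder address
msg["Subject"] = "Report attached"

# The plain-text body is treated as the inline part.
msg.set_content("Please find the report attached.")

# add_attachment() sets Content-Disposition: attachment with the given
# filename parameter, so a receiving client offers the part as a download.
with open("report.pdf", "rb") as f:  # placeholder file
    msg.add_attachment(
        f.read(),
        maintype="application",
        subtype="pdf",
        filename="report.pdf",
    )

print(msg["Content-Type"])  # multipart/mixed; boundary="..."

Adding the attachment converts the message to multipart/mixed automatically; the library generates the boundary, so the caller never writes it by hand.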
Many mail user agents also send messages with the file name in the "name" parameter of the "content-type" header instead of the "filename" parameter of the "Content-Disposition" header field. This practice is discouraged; the file name should be specified either with the parameter "filename" alone, or with both the parameters "filename" and "name". In HTTP, the response header field "Content-Disposition: attachment" is usually used as a hint to the client to present the response body as a downloadable file. Typically, when receiving such a response, a Web browser prompts the user to save its content as a file instead of displaying it as a page in a browser window, with "filename" suggesting the default file name. In June 1992, MIME (RFC 1341, since made obsolete by RFC 2045) defined a set of methods for representing binary data in formats other than ASCII text format. The "content-transfer-encoding:" MIME header field has two-sided significance: it indicates whether a binary-to-text encoding scheme has been applied on top of the original encoding and, if so, which one; if no such encoding was used, it labels how 8-bit or binary content is represented. The RFC and the IANA's list of transfer encodings define the values shown below, which are not case sensitive. Note that '7bit', '8bit', and 'binary' mean that no binary-to-text encoding on top of the original encoding was used. In these cases, the header field is actually redundant for the email client to decode the message body, but it may still be useful as an indicator of what type of object is being sent. Values 'quoted-printable' and 'base64' tell the email client that a binary-to-text encoding scheme was used and that appropriate initial decoding is necessary before the message can be read with its original encoding (e.g. UTF-8). There is no encoding defined which is explicitly designed for sending arbitrary binary data through SMTP transports with the 8BITMIME extension. Thus, if BINARYMIME isn't supported, base64 or quoted-printable (with their associated inefficiency) are sometimes still useful. This restriction does not apply to other uses of MIME, such as Web Services with MIME attachments or MTOM. Since RFC 2822, conforming message header field names and values use ASCII characters; values that contain non-ASCII data should use the MIME encoded-word syntax (RFC 2047) instead of a literal string. This syntax uses a string of ASCII characters indicating both the original character encoding (the "charset") and the content-transfer-encoding used to map the bytes of the charset into ASCII characters. The form is: =?charset?encoding?encoded text?=. The ASCII codes for the question mark ("?") and equals sign ("=") may not be represented directly, as they are used to delimit the encoded word. The ASCII code for space may not be represented directly because it could cause older parsers to split up the encoded word undesirably. To make the encoding smaller and easier to read, the underscore is used to represent the ASCII code for space, with the side effect that underscore cannot be represented directly. Use of encoded words in certain parts of header fields imposes further restrictions on which characters may be represented directly. For example, Subject: =?iso-8859-1?Q?=A1Hola,_se=F1or!?= is interpreted as "Subject: ¡Hola, señor!". The encoded-word format is not used for the names of the header fields (for example, "Subject"). These names are usually English terms and always in ASCII in the raw message. When viewing a message with a non-English email client, the header field names might be translated by the client.
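As a sketch of the encoded-word syntax in practice, Python's standard email.header module can both produce and decode RFC 2047 encoded words; the sample subject is the one from the paragraph above, and the exact encoded output may differ slightly between library versions.

from email.header import Header, decode_header

# Encode a non-ASCII Subject value as an RFC 2047 encoded-word.
subject = Header("¡Hola, señor!", charset="iso-8859-1")
print(subject.encode())
# e.g. =?iso-8859-1?q?=A1Hola=2C_se=F1or!?=

# Decode an encoded-word back to text.
raw = "=?iso-8859-1?Q?=A1Hola,_se=F1or!?="
for chunk, charset in decode_header(raw):
    # chunk is bytes when an encoded-word was found, str otherwise.
    print(chunk.decode(charset) if charset else chunk)
# ¡Hola, señor!

Note that the encoder chooses which octets to escape (here the comma becomes =2C), which is why round-tripped encoded words need not match the input byte for byte.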
The MIME multipart message contains a boundary in the "Content-Type:" header field; this boundary, which must not occur in any of the parts, is placed between the parts, and at the beginning and end of the body of the message, as follows:

MIME-Version: 1.0
Content-Type: multipart/mixed; boundary=frontier

This is a message with multiple parts in MIME format.
--frontier
Content-Type: text/plain

This is the body of the message.
--frontier
Content-Type: application/octet-stream
Content-Transfer-Encoding: base64

PGh0bWw+CiAgPGhlYWQ+CiAgPC9oZWFkPgogIDxib2R5PgogICAgPHA+VGhpcyBpcyB0aGUg
Ym9keSBvZiB0aGUgbWVzc2FnZS48L3A+CiAgPC9ib2R5Pgo8L2h0bWw+Cg==
--frontier--

Each part consists of its own content header (zero or more "Content-" header fields) and a body. Multipart content can be nested. The content-transfer-encoding of a multipart type must always be "7bit", "8bit" or "binary" to avoid the complications that would be posed by multiple levels of decoding. The multipart block as a whole does not have a charset; non-ASCII characters in the part headers are handled by the encoded-word system, and the part bodies can have charsets specified if appropriate for their content-type. The MIME standard defines various multipart-message subtypes, which specify the nature of the message parts and their relationship to one another. The subtype is specified in the "Content-Type" header field of the overall message. For example, a multipart MIME message using the digest subtype would have its Content-Type set as "multipart/digest". The RFC initially defined four subtypes: mixed, digest, alternative, and parallel. A minimally compliant application must support mixed and digest; other subtypes are optional. Applications must treat unrecognized subtypes as "multipart/mixed". Additional subtypes, such as signed and form-data, have since been separately defined in other RFCs. Multipart/mixed is used for sending files with different "Content-Type" header fields inline (or as attachments). When sending pictures or other easily readable files, most mail clients display them inline (unless otherwise specified with "Content-Disposition"); otherwise they are offered as attachments. The default content-type for each part is "text/plain". The type is defined in RFC 2046. Multipart/digest is a simple way to send multiple text messages. The default content-type for each part is "message/rfc822". The type is defined in RFC 2046. The multipart/alternative subtype indicates that each part is an "alternative" version of the same (or similar) content, each in a different format denoted by its "Content-Type" header. The order of the parts is significant. RFC 1341 states: "In general, user agents that compose multipart/alternative entities should place the body parts in increasing order of preference, that is, with the preferred format last." Systems can then choose the "best" representation they are capable of processing; in general, this will be the last part that the system can understand, although other factors may affect this. Since a client is unlikely to want to send a version that is less faithful than the plain-text version, this structure places the plain-text version (if present) first. This makes life easier for users of clients that do not understand multipart messages. Most commonly, multipart/alternative is used for email with two parts, one plain text (text/plain) and one HTML (text/html). The plain-text part provides backwards compatibility, while the HTML part allows use of formatting and hyperlinks.
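As a sketch of how a sender might build the common two-part alternative message described above, again using Python's standard library (the addresses are placeholders):

from email.message import EmailMessage

msg = EmailMessage()
msg["From"] = "alice@example.com"  # placeholder
msg["To"] = "bob@example.com"      # placeholder
msg["Subject"] = "Hello"

# Plain-text part first, i.e. least preferred, per the ordering rule above.
msg.set_content("Hello in plain text.")

# add_alternative() appends the preferred HTML rendition and converts
# the message to multipart/alternative.
msg.add_alternative(
    "<html><body><p>Hello in <b>HTML</b>.</p></body></html>",
    subtype="html",
)

print(msg["Content-Type"])  # multipart/alternative; boundary="..."

A receiving client that understands HTML would render the second part; one that does not falls back to the first.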
Most email clients offer a user option to prefer plain text over HTML; this is an example of how local factors may affect how an application chooses which "best" part of the message to display. While it is intended that each part of the message represent the same content, the standard does not require this to be enforced in any way. At one time, anti-spam filters would only examine the text/plain part of a message, because it is easier to parse than the text/html part. But spammers eventually took advantage of this, creating messages with an innocuous-looking text/plain part and advertising in the text/html part. Anti-spam software eventually caught up with this trick, penalizing messages with very different text in the two parts of a multipart/alternative message. The type is defined in RFC 2046. Multipart/related is used to indicate that each message part is a component of an aggregate whole. It is for compound objects consisting of several inter-related components, where proper display cannot be achieved by individually displaying the constituent parts. The message consists of a root part (by default, the first) which references other parts inline, and those parts may in turn reference other parts. Message parts are commonly referenced by "Content-ID". The syntax of a reference is unspecified and is instead dictated by the encoding or protocol used in the part. One common usage of this subtype is to send a web page complete with images in a single message. The root part would contain the HTML document and use image tags to reference images stored in the other parts. The type is defined in RFC 2387. Multipart/report is a message type that contains data formatted for a mail server to read. It is split between a text/plain part (or some other easily readable content type) and a message/delivery-status part, which contains the data formatted for the mail server to read. The type is defined in RFC 6522. A multipart/signed message is used to attach a digital signature to a message. It has exactly two parts: a body part and a signature part. The whole of the body part, including its MIME header fields, is used to create the signature part. Many signature types are possible, such as "application/pgp-signature" (RFC 3156) and "application/pkcs7-signature" (S/MIME). The type is defined in RFC 1847. A multipart/encrypted message has two parts. The first part has control information that is needed to decrypt the application/octet-stream second part. Similar to signed messages, there are different implementations, which are identified by their separate content types for the control part. The most common types are "application/pgp-encrypted" (RFC 3156) and "application/pkcs7-mime" (S/MIME). The type is defined in RFC 1847. The MIME type "multipart/form-data" is used to express values submitted through a form. Originally defined as part of HTML 4.0, it is most commonly used for submitting files with HTTP. It is specified in RFC 7578, superseding RFC 2388. The content type multipart/x-mixed-replace was developed as part of a technology to emulate server push and streaming over HTTP. All parts of a mixed-replace message have the same semantic meaning. However, each part invalidates, or "replaces", the previous parts as soon as it is received completely. Clients should process the individual parts as soon as they arrive and should not wait for the whole message to finish. Originally developed by Netscape, it is still supported by Mozilla Firefox, Safari, and Opera. It is commonly used in IP cameras as the MIME type for MJPEG streams.
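For an illustration of multipart/form-data in practice, here is a minimal sketch using the widely used third-party Python requests library; the URL, field names, and file are placeholders.

import requests

# POST a file plus an ordinary form field; requests builds the
# multipart/form-data body and its boundary automatically.
with open("photo.jpg", "rb") as f:       # placeholder file
    response = requests.post(
        "https://example.com/upload",     # placeholder URL
        data={"caption": "My photo"},     # ordinary form field
        files={"file": ("photo.jpg", f, "image/jpeg")},
    )
print(response.status_code)

Each entry in the generated body is a part with its own Content-Disposition: form-data header carrying the field name and, for files, a filename parameter.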
It was supported by Chrome for main resources until 2013 (images can still be displayed using this content type). The multipart/byteranges type is used to represent noncontiguous byte ranges of a single message. It is used by HTTP when a server returns multiple byte ranges, and it is defined in RFC 2616.
https://en.wikipedia.org/wiki?curid=19045
Mehmed the Conqueror Mehmed II (30 March 1432 – 3 May 1481), commonly known as Mehmed the Conqueror, was an Ottoman Sultan who ruled from August 1444 to September 1446, and then later from February 1451 to May 1481. In Mehmed II's first reign, he defeated the crusade led by John Hunyadi after the Hungarian incursions into his country broke the conditions of the truce agreed in the Peace of Szeged. When Mehmed II ascended the throne again in 1451 he strengthened the Ottoman navy and made preparations to attack Constantinople. At the age of 21, he conquered Constantinople (modern-day Istanbul) and brought an end to the Byzantine Empire. After the conquest, Mehmed claimed the title "Caesar" of the Roman Empire ("Qayser-i Rûm"), based on the fact that Constantinople had been the seat and capital of the surviving Eastern Roman Empire since its consecration in 330 AD by Emperor Constantine I. The claim was only recognized by the Patriarchate of Constantinople. Nonetheless, Mehmed II viewed the Ottoman state as a continuation of the Roman Empire for the remainder of his life, seeing himself as "continuing" the Empire rather than "replacing" it. This assertion was eventually abandoned by his successors. Mehmed continued his conquests in Anatolia, with its reunification, and in Southeast Europe as far west as Bosnia. At home he made many political and social reforms, encouraged the arts and sciences, and by the end of his reign his rebuilding program had changed Constantinople into a thriving imperial capital. He is considered a hero in modern-day Turkey and parts of the wider Muslim world. Among other things, Istanbul's Fatih district, Fatih Sultan Mehmet Bridge, and Fatih Mosque are named after him. Mehmed II was born on 30 March 1432, in Edirne, then the capital city of the Ottoman state. His father was Sultan Murad II (1404–1451) and his mother Hüma Hatun, a slave of uncertain origin. When Mehmed II was eleven years old he was sent to Amasya to govern and thus gain experience, per the custom of Ottoman rulers before his time. Sultan Murad II also sent a number of teachers for him to study under. This Islamic education had a great impact in molding Mehmed's mindset and reinforcing his Muslim beliefs. He was influenced in his practice of Islam by practitioners of science, particularly by his mentor, Molla Gürani, and he followed their approach. The influence of Akshamsaddin in Mehmed's life became predominant from a young age, especially in the imperative of fulfilling his Islamic duty to overthrow the Byzantine Empire by conquering Constantinople. After Murad II made peace with the Karamanids in Anatolia in August 1444, he abdicated the throne to his 12-year-old son Mehmed II. In Mehmed II's first reign, he defeated the crusade led by John Hunyadi after the Hungarian incursions into his country broke the conditions of the truce agreed in the Peace of Szeged. Cardinal Julian Cesarini, the representative of the Pope, had convinced the king of Hungary that breaking the truce with Muslims was not a betrayal. At this time Mehmed II asked his father Murad II to reclaim the throne, but Murad II refused. Angry at his father, who had long since retired to a contemplative life in southwestern Anatolia, Mehmed II wrote, "If you are the sultan, come and lead your armies. If I am the sultan I hereby order you to come and lead my armies." It was only after receiving this letter that Murad II led the Ottoman army and won the Battle of Varna in 1444.
Murad II's return to the throne was forced by Çandarlı Halil Pasha, the grand vizier at the time, who was not fond of Mehmed II's rule, because Mehmed II's influential lala (royal teacher), Akshamsaddin, had a rivalry with Çandarlı. When Mehmed II ascended the throne again in 1451, he devoted himself to strengthening the Ottoman navy and made preparations for an attack on Constantinople. In the narrow Bosphorus Strait, the fortress Anadoluhisarı had been built by his great-grandfather Bayezid I on the Asian side; Mehmed erected an even stronger fortress called Rumelihisarı on the European side, and thus gained complete control of the strait. Having completed his fortresses, Mehmed proceeded to levy a toll on ships passing within reach of their cannon. A Venetian vessel ignoring signals to stop was sunk with a single shot, and all the surviving sailors were beheaded except for the captain, who was impaled and mounted as a human scarecrow as a warning to other sailors on the strait. Abu Ayyub al-Ansari, the companion and standard-bearer of Muhammad, had died during the first Siege of Constantinople (674–678). As Mehmed II's army approached Constantinople, Mehmed's sheikh Akshamsaddin discovered the tomb of Abu Ayyub al-Ansari. After the conquest, Mehmed built Eyüp Sultan Mosque at the site to emphasize the importance of the conquest to the Islamic world and to highlight his role as ghazi. In 1453 Mehmed commenced the siege of Constantinople with an army of between 80,000 and 200,000 troops, an artillery train of over seventy large field pieces, and a navy of 320 vessels, the bulk of them transports and storeships. The city was surrounded by sea and land; the fleet at the entrance of the Bosphorus was stretched from shore to shore in the form of a crescent, to intercept or repel any assistance for Constantinople from the sea. In early April, the Siege of Constantinople began. At first, the city's walls held off the Turks, even though Mehmed's army used the new bombard designed by Orban, a giant cannon similar to the Dardanelles Gun. The harbor of the Golden Horn was blocked by a boom chain and defended by twenty-eight warships. On 22 April, Mehmed transported his lighter warships overland, around the Genoese colony of Galata, and onto the Golden Horn's northern shore; eighty galleys were transported from the Bosphorus after a route a little over one mile long was paved with wood. Thus the Byzantines were forced to stretch their troops over a longer portion of the walls. About a month later, on 29 May, Constantinople fell, following a fifty-seven-day siege. After this conquest, Mehmed moved the Ottoman capital from Adrianople to Constantinople. When Sultan Mehmed II stepped into the ruins of the Boukoleon, known to the Ottomans and Persians as the Palace of the Caesars, probably built over a thousand years before by Theodosius II, he uttered the famous lines of Saadi on the transience of worldly glory. Some Muslim scholars claimed that a hadith in Musnad Ahmad referred specifically to Mehmed's conquest of Constantinople, seeing it as the fulfillment of a prophecy and a sign of the approaching apocalypse.
https://en.wikipedia.org/wiki?curid=19046
Martina Hingis Martina Hingis (also known as Martina Hingisová Molitorová; born Martina Hingisová on 30 September 1980) is a Swiss former professional tennis player. She spent a total of 209 weeks as the singles world No. 1 and 90 weeks as the doubles world No. 1, holding both No. 1 rankings simultaneously for 29 weeks. She won five Grand Slam singles titles, thirteen Grand Slam women's doubles titles (winning a calendar-year doubles Grand Slam in 1998), and seven Grand Slam mixed doubles titles, for a combined total of twenty-five major titles. In addition, she won the season-ending WTA Finals two times in singles and three times in doubles, an Olympic silver medal, and a record seventeen Tier I singles titles. Hingis set a series of "youngest-ever" records during the 1990s, including youngest-ever Grand Slam champion and youngest-ever world No. 1. Before ligament injuries in both ankles forced her to withdraw temporarily from professional tennis in early 2003, at the age of 22, she had won 40 singles titles and 36 doubles titles and, according to "Forbes", was the highest-paid female athlete in the world for five consecutive years, 1997 to 2001. After several surgeries and long recoveries, Hingis returned to the WTA tour in 2006, climbing to world No. 6, winning two Tier I tournaments, and receiving the Laureus World Sports Award for Comeback of the Year. She retired in November 2007, after being hampered by a hip injury for several months and testing positive for a metabolite of cocaine during that year's Wimbledon Championships, which led to a two-year suspension from the sport. In July 2013, Hingis came out of retirement to play the doubles events of the North American hard-court season. During her doubles comeback, she won four Grand Slam women's doubles tournaments, six Grand Slam mixed doubles tournaments (completing the Career Grand Slam), 27 WTA titles, and the silver medal in women's doubles at the 2016 Summer Olympics in Rio de Janeiro. Hingis retired after the 2017 WTA Finals while ranked doubles world No. 1. Widely considered an all-time tennis great, Hingis was ranked by "Tennis" magazine in 2005 as the 8th-greatest female player of the preceding 40 years. She was named one of the "30 Legends of Women's Tennis: Past, Present and Future" by "TIME" in June 2011. In 2013, Hingis was elected into the International Tennis Hall of Fame, and two years later she was appointed the organization's first-ever Global Ambassador. Hingis was born in Košice, Czechoslovakia (now in Slovakia) as Martina Hingisová, to Melanie Molitorová and Karol Hingis, both of whom were tennis players. Molitorová was a professional player who was once ranked tenth among women in Czechoslovakia and was determined, even during her pregnancy, to develop Hingis into a top player. Her father was ranked as high as 19th in the Czechoslovak tennis rankings. Martina Hingis spent her early childhood in the town of Rožnov pod Radhoštěm (now in the Czech Republic). Hingis's parents divorced when she was six; she and her mother defected from Czechoslovakia in 1987 and emigrated to Trübbach (Wartau) in Switzerland when she was seven. Her mother remarried, to a Swiss man, Andreas Zogg, a computer technician, and Hingis acquired Swiss citizenship through naturalization. Hingis began playing tennis when she was two years old and entered her first tournament at age four. In 1993, 12-year-old Hingis became the youngest player to win a Grand Slam junior title: the girls' singles at the French Open.
In 1994, she retained her French Open junior title, won the girls' singles title at Wimbledon, and reached the final of the US Open. She made her WTA debut at the Zurich Open in October 1994, two weeks after turning 14, and ended 1994 ranked world No. 87. In 1996, Hingis became the youngest Grand Slam champion of all time when she teamed with Helena Suková at Wimbledon to win the women's doubles title at age 15 years and 9 months. She also won her first professional singles title that year at Filderstadt, Germany. She reached the singles quarterfinals of the 1996 Australian Open and the singles semifinals of the 1996 US Open. Following her win at Filderstadt, Hingis defeated the reigning Australian Open champion and co-top-ranked (with Steffi Graf) Monica Seles in the final in Oakland, but lost to Graf in five sets in the final of the year-end WTA Tour Championships. In 1997, Hingis became the undisputed world No. 1 women's tennis player. She started the year by winning the warm-up tournament in Sydney. She then became the youngest Grand Slam singles winner of the 20th century by winning the Australian Open at age 16 years and 3 months, beating former champion Mary Pierce in the final. She also won the Australian Open women's doubles with Natasha Zvereva. In March, she became the youngest top-ranked player in history. In July, she became the youngest singles champion at Wimbledon since Lottie Dod in 1887 by beating Jana Novotná in the final. She then defeated another up-and-coming player, Venus Williams, in the final of the US Open. The only Grand Slam singles title that Hingis failed to win in 1997 was the French Open, where she lost in the final to Iva Majoli. In 1998, Hingis won all four of the Grand Slam women's doubles titles, only the fourth woman in tennis history to do so (the Australian Open with Mirjana Lučić and the other three events with Novotná), and she became only the third woman to hold the No. 1 ranking in both singles and doubles simultaneously. She also retained her Australian Open singles title by beating Conchita Martínez in straight sets in the final. Hingis, however, lost in the final of the US Open to Lindsay Davenport. Davenport ended an 80-week stretch Hingis had enjoyed as the No. 1 singles player in October 1998, but Hingis finished the year by beating Davenport in the final of the WTA Tour Championships. 1999 saw Hingis win her third successive Australian Open singles crown as well as the doubles title (with Anna Kournikova), having dropped her former doubles partner Jana Novotná. She then reached the French Open final and was three points away from victory in the second set before losing to Steffi Graf, about whom she had said beforehand: "Steffi had some results in the past, but it's a faster, more athletic game now... She is old now. Her time has passed." She broke into tears after a match in which the crowd had booed her for using underhand serves and for crossing the line in a dispute over an umpire's decision. After a shock first-round straight-sets loss to Jelena Dokić at Wimbledon, Hingis bounced back to reach her third consecutive US Open final, where she lost to 17-year-old Serena Williams. Hingis won a total of seven singles titles that year and reclaimed the No. 1 singles ranking. She also reached the final of the WTA Tour Championships, where she lost to Lindsay Davenport. In 2000, Hingis again found herself in both the singles and doubles finals at the Australian Open. This time, however, she lost both.
Her three-year hold on the singles championship ended when she lost to Davenport. Later, Hingis and Mary Pierce, her new doubles partner, lost to Lisa Raymond and Rennae Stubbs. Hingis captured the French Open women's doubles title with Pierce and produced consistent results in singles tournaments throughout the year. She reached the quarterfinals at Wimbledon before losing to Venus Williams. Although she did not win a Grand Slam singles tournament, she kept the year-end No. 1 ranking because of nine tournament championships, including the WTA Tour Championships, where she won the singles and doubles titles. In 2001, Switzerland, with Hingis and Roger Federer on its team, won the Hopman Cup. Hingis did not drop a set in any of her singles matches during the event, defeating Tamarine Tanasugarn, Nicole Pratt, Amanda Coetzer, and Monica Seles. In 2018, after his second Hopman Cup victory, Federer was quoted as saying: "I learned a lot from her, especially the two years I was here – once as a hitting partner and once as a partner with Martina. Definitely she helped me to become the player I am today." Hingis reached her fifth consecutive Australian Open final in 2001, defeating both of the Williams sisters en route, before losing to Jennifer Capriati. She briefly ended her coaching relationship with her mother Melanie early in the year but had a change of heart two months later, just before the French Open. 2001 was her least successful year in several seasons, with only three tournament victories in total. She lost her No. 1 ranking for the last time (to Jennifer Capriati) on 14 October 2001. In that same month, Hingis underwent surgery on her right ankle. Coming back from injury, Hingis won the Australian Open doubles final at the start of 2002 (again teaming with Anna Kournikova) and reached a sixth straight Australian Open final in singles, again facing Capriati. Hingis led by a set and 4–0 and had four match points but lost in three sets. In May 2002, she needed another ankle ligament operation, this time on her left ankle. After that, she continued to struggle with injuries and was not able to recapture her best form. In February 2003, at the age of 22, Hingis announced her retirement from tennis, citing her injuries and persistent pain: "I want to play tennis only for fun and concentrate more on horse riding and finish my studies." In several interviews, she indicated that she wished to return to her home country and coach full-time. During this segment of her tennis career, Hingis won 40 singles titles and 36 doubles events. She held the world No. 1 singles ranking for a total of 209 weeks, the fifth-most in history, behind Steffi Graf, Martina Navratilova (after whom she was named), Chris Evert, and Serena Williams. In 2005, "Tennis" magazine put her in 22nd place on its list of 40 Greatest Players of the Tennis Era. In February 2005, Hingis made an unsuccessful return to competition at an event in Pattaya, Thailand, where she lost to Germany's Marlene Weingärtner in the first round. After the loss, she claimed that she had no further plans for a comeback. Hingis, however, resurfaced in July, playing singles, doubles, and mixed doubles in World Team Tennis, notching up singles victories over two top-100 players and shutting out Martina Navratilova in singles on 7 July. With these promising results behind her, Hingis announced on 29 November her return to the WTA Tour in 2006. At the Australian Open, Hingis lost in the quarterfinals to second-seeded Kim Clijsters.
However, Hingis won the mixed doubles title with Mahesh Bhupathi of India. This was her first career Grand Slam mixed doubles title and her fifteenth major overall (5 singles, 9 women's doubles, 1 mixed doubles). The week after the Australian Open, Hingis defeated world No. 4 Maria Sharapova in the semifinals of the Tier I Toray Pan Pacific Open in Tokyo before losing in the final to world No. 9 Elena Dementieva. Hingis then competed in Dubai, reaching the quarterfinals before falling to Sharapova. At the Tier I Pacific Life Open in Indian Wells, Hingis defeated world No. 4 Lindsay Davenport in the fourth round before again losing to Sharapova in the semifinals. At the Tier I Internazionali BNL d'Italia in Rome, Hingis posted her 500th career singles match victory in the quarterfinals, beating world No. 18 Flavia Pennetta, and subsequently won the tournament with wins over Venus Williams in the semifinals and Dinara Safina in the final. This was her 41st WTA Tour singles title and her first in more than four years. Hingis then reached the quarterfinals of the French Open before losing to Kim Clijsters. At Wimbledon, Hingis lost in the third round to Ai Sugiyama. Hingis's return to the US Open was short-lived, as she was upset in the second round by world No. 112 Virginie Razzano of France. In her first tournament after the US Open, Hingis won the second title of her comeback at the Tier III Sunfeast Open in Kolkata, India, defeating the unseeded Russian Olga Puchkova in the final. The following week in Seoul, Hingis notched her 50th match win of the year before losing in the second round to Sania Mirza. Hingis qualified for the year-ending WTA Tour Championships in Madrid as the 8th seed. In her round-robin matches, she lost in three sets to both Justine Henin and Amélie Mauresmo but defeated Nadia Petrova. Hingis ended the year ranked world No. 7. She also finished eighth in prize money earnings (US$1,159,537), and ranked No. 7 on the annual list of top Google News searches in 2006. At the Australian Open, Hingis won her first three rounds without losing a set before defeating China's Li Na in the fourth round. Hingis then lost her quarterfinal match to Kim Clijsters. This was the second consecutive year that Hingis had lost to Clijsters in the quarterfinals of the Australian Open and the third time in the previous five Grand Slam tournaments that Clijsters had eliminated Hingis in the quarterfinals. Hingis won her next tournament, the Tier I Toray Pan Pacific Open in Tokyo, defeating Ana Ivanovic in the final. This was Hingis's record fifth singles title at that event. A hip injury that troubled her at the German Open caused her to withdraw from the Rome Masters, where she was the defending champion, and from the French Open, the only important singles title that eluded her. In her first-round match at Wimbledon, Hingis saved two match points to defeat the British wildcard Naomi Cavaday, apparently not having fully recovered from the hip injury that had prevented her from playing the French Open. In the third round, Hingis lost to Laura Granville of the United States and stated afterwards that she should not have entered the tournament. Hingis's next tournament was the last Grand Slam event of the year, the US Open, in which she lost in the third round to the Belarusian teenager Victoria Azarenka. Hingis did not play any tournaments after the China Open, as she was beset by injuries for the rest of the year.
In November 2007, Hingis called a press conference to announce that she was under investigation for testing positive for benzoylecgonine, a metabolite of cocaine, during a urine test taken by players at Wimbledon. Hingis's urine sample contained an estimated 42 nanograms per millilitre of benzoylecgonine. The International Tennis Federation's report on the matter states that "the very low estimated concentration of benzoylecgonine (42 ng/ml) was such that it would go unreported in many drug testing programmes such as that of the US military, which uses a screening threshold of 150 ng/ml." As the amount was so low, Hingis appealed, arguing that the likely cause was contamination rather than intentional ingestion. In January 2008, the ITF's tribunal suspended Hingis from the sport for two years, effective from October 2007. Having retired for the second time in 2007, Hingis played an exhibition match at the Liverpool International tournament on 13 June 2008. Although this event was a warm-up for Wimbledon, it was not part of the WTA Tour. In a rematch of their 1997 Wimbledon final, Hingis defeated Jana Novotná. In 2009, Hingis took part in the British television dancing competition "Strictly Come Dancing". She was the bookies' favourite for the competition but went out in the first week after performing a waltz and a rumba. At the start of 2010, Hingis defeated former world No. 1 Lindsay Davenport and hinted at a possible return to tennis. In February, she announced that she had committed to a full season with the World TeamTennis tour in 2010; she had previously played for World TeamTennis in 2005 to assist her first comeback. Sparking speculation that she was trying to come back to the WTA Tour, she committed to playing at the Nottingham Masters. On 5 May 2010, it was announced that Hingis would reunite with her doubles partner Anna Kournikova, who was participating in competitive tennis for the first time in seven years, in the Invitational Ladies Doubles event at Wimbledon. Hingis also confirmed that she would play at the Tradition-ICAP Liverpool International championship in June 2010, preceding Wimbledon, before playing in the Manchester Masters after Wimbledon. Liverpool, like the Nottingham and Manchester Masters, is organised by her management company, Northern Vision. At the Nottingham Masters, Hingis faced Michaëlla Krajicek (twice), Olga Savchuk, and Monika Wejnert, winning just once, against Wejnert. After the Nottingham event, Billie Jean King stated that she believed Hingis might return to the WTA Tour on the doubles circuit after competing in the WTT. On 5 June 2011, Hingis, paired with Lindsay Davenport, won the Roland Garros Women's Legends title, defeating Martina Navratilova and Jana Novotná in the final. Before facing Navratilova and Novotná, Hingis and Davenport won two round-robin matches in the tournament: first against Gigi Fernández/Natasha Zvereva, and then against Andrea Temesvári/Sandrine Testud, taking the super tie-break 10–0. On 3 July, Hingis, partnering Lindsay Davenport, won the Wimbledon Ladies' Invitation Doubles title, defeating Navratilova and Novotná in the final. She also played for the New York Sportimes of the World TeamTennis Pro League in July 2011, finishing the season with the top winning percentage of any player competing in women's singles. Hingis and Davenport successfully defended their Wimbledon Ladies' Invitation Doubles title in 2012, again beating Martina Navratilova and Jana Novotná in the final.
In April 2013, Hingis agreed to coach Anastasia Pavlyuchenkova; however, after a disagreement about how to prepare for tournaments, they parted ways in June. Hingis won the Ladies' Invitation Doubles for a third year in a row at Wimbledon, again with Davenport; they beat Jana Novotná and Barbara Schett in the final. Hingis was inducted into the International Tennis Hall of Fame in July 2013 and, in the same month, announced that she was coming out of retirement to play a doubles tournament in Carlsbad, California, with Daniela Hantuchová as her partner. She was accepted as a wildcard entry. She also played doubles in Toronto, Cincinnati, New Haven, and the US Open. Hingis helped Sabine Lisicki during the Australian Open, and participated in the Champions Tennis League India to boost tennis in the country. Hingis returned to the WTA Tour at Indian Wells, partnering Lisicki in the doubles; they lost in the first round to three-time Grand Slam finalists Ashleigh Barty and Casey Dellacqua. At the Sony Open in Miami, Hingis and Lisicki reached the final and defeated Makarova and Vesnina in straight sets, marking Hingis's first title since the Qatar Ladies Open in 2007 and her first Premier Mandatory doubles title since winning the 2001 title in Moscow. This was also her third win in Miami, having won her last title there in 1999. Hingis reached the final at Eastbourne with Pennetta, where they lost to Chan Hao-ching and Chan Yung-jan of Taiwan. At the Wimbledon Championships, she reached the quarterfinals with partner Bruno Soares in mixed doubles, where they lost to Daniel Nestor and Kristina Mladenovic in straight sets. Entering as an unseeded team at the US Open, Hingis and Pennetta reached the final without losing a set in any of their matches; in the final they lost to Makarova and Vesnina in three sets. At the latter end of the season, Hingis and Flavia Pennetta won two titles: at the tournament in Wuhan, they beat Cara Black and Caroline Garcia to take the title, and in Moscow they beat Caroline Garcia and Arantxa Parra Santonja. In Hingis's first tournament of the year in Brisbane, she and partner Sabine Lisicki did not drop a set en route to the title, beating Caroline Garcia and Katarina Srebotnik in straight sets in the final. Hingis played at the Australian Open with Flavia Pennetta as the 4th seeds but lost in the third round. However, Hingis paired with Leander Paes in the mixed doubles to win the title. The win was her first in a Grand Slam event since capturing the mixed doubles crown at the 2006 Australian Open. After early exits with Pennetta at the Dubai Tennis Championships and the Qatar Ladies Open, Hingis partnered with the Indian player Sania Mirza; they won the first 20 sets they contested, taking back-to-back titles in two WTA Premier Mandatory events, the BNP Paribas Open in Indian Wells and the Miami Open, and afterwards also winning the Family Circle Cup. They were defeated in the first round in Stuttgart. At the Madrid Open they lost in the quarterfinals to Australian Open champions Bethanie Mattek-Sands and Lucie Šafářová, 11–9 in the super tie-break. They reached the quarterfinals of the French Open, losing again to Mattek-Sands and Šafářová, this time in straight sets. Hingis made a comeback in the Fed Cup after a 17-year absence. She was scheduled to play doubles only but then decided to attempt another singles comeback by playing in the Fed Cup tie for Switzerland.
She drew Agnieszka Radwańska in the first rubber and was defeated in two sets in her first official tour match since 2007. She lost her second singles rubber too, defeated by Urszula Radwańska in three sets after having been a set and a double break up. On 11 July 2015, Hingis and Mirza beat Makarova and Vesnina in three tight sets, recovering from 5–2 down in the third, to win the women's doubles tournament at Wimbledon. The win gave Hingis her first Grand Slam title in women's doubles since the 2002 Australian Open. The following day, Hingis won the mixed doubles final, partnering with Leander Paes to defeat Alexander Peya and Tímea Babos in straight sets. After two semifinal losses in Toronto and Cincinnati, Hingis won the mixed doubles title at the US Open on 12 September, partnering Paes and defeating Sam Querrey and Bethanie Mattek-Sands in three sets. The following day, Hingis and Mirza beat Casey Dellacqua and Yaroslava Shvedova in straight sets to win the doubles tournament. At the WTA Finals, they won all their group matches, against Kops-Jones/Spears, Hlaváčková/Hradecká, and Babos/Mladenovic. In the semifinals they beat the Chan sisters, and they then beat the Spanish team Muguruza/Suárez Navarro to win the title. That month, Hingis participated in the Champions Tennis League in India, playing for the Hyderabad Aces team. In January, Hingis and Mirza won at Brisbane and Sydney. They then won the doubles tournament at the Australian Open, defeating Hlaváčková and Hradecká in the final, for their third consecutive Grand Slam title. Afterwards, Hingis said of their partnership: "There's not that many people who can match her in the forehand rallies and me on the backhand side and at the net. That's what we try to do every match." In mixed doubles, Hingis and Paes lost in the quarterfinals to Mirza and Ivan Dodig. In February, Hingis represented Switzerland in the Fed Cup tie against Germany alongside Belinda Bencic and Timea Bacsinszky. Switzerland beat Germany 3–2, with Hingis and Bencic clinching the doubles rubber. Switzerland advanced to the semifinals, where the team lost to the defending champions, the Czech Republic. The Hingis-Mirza winning streak, a record 41 matches, ended in the quarterfinals of the Qatar Total Open, where they lost to Kasatkina/Vesnina. Hingis and Mirza then proceeded to the BNP Paribas Open to defend their title; however, they suffered a shock defeat, as the unseeded Vania King/Alla Kudryavtseva beat them in straight sets, 7–6(7), 6–4. At the Miami Open, Mirza and Hingis lost in the second round to Margarita Gasparyan and Monica Niculescu. Hingis and Mirza started their clay season by reaching the finals of the Porsche Tennis Grand Prix and the Mutua Madrid Open, losing to Kristina Mladenovic and Caroline Garcia in both tournaments. However, they won the Italian Open, defeating Makarova and Vesnina. At the French Open, they were upset by the Czech pair Barbora Krejčíková and Kateřina Siniaková in the third round, which ended their 20-match winning streak in Grand Slam doubles tournaments. Hingis won the French Open mixed doubles partnering Leander Paes. It was her first mixed doubles title at Roland Garros, and with it she completed the mixed doubles Career Grand Slam, becoming only the fourth woman ever to complete a career Grand Slam in both women's doubles and mixed doubles. Hingis qualified for the 2016 Summer Olympics in Rio de Janeiro, 20 years after her last Olympic appearance.
She played doubles with Timea Bacsinszky and won the silver medal, losing to Ekaterina Makarova and Elena Vesnina in straight sets in the final. Hingis then played at the US Open with CoCo Vandeweghe, where they made the semifinals and lost to top seeds Garcia and Mladenovic. At the WTA Finals, Hingis reunited with Sania Mirza in what would be the partnership's last tournament together; they defeated the Chan sisters in the quarterfinals but then lost to Makarova and Vesnina. Hingis continued to partner CoCo Vandeweghe in women's doubles competition at the start of the season. Together they reached the quarterfinals of the Sydney International, losing to eventual champions Tímea Babos and Anastasia Pavlyuchenkova, and the second round of the Australian Open, losing to the Australian duo of Ashleigh Barty and Casey Dellacqua. This capped a run of poor form in which the pair had gone 5–5 since making the semifinals at the US Open the previous season. As a result, Hingis split with Vandeweghe and entered a new partnership with Taiwan's Chan Yung-jan, who had herself just split with her sister, Chan Hao-ching. In the mixed doubles competition at the Australian Open, Hingis reached the quarterfinals with Leander Paes before losing to another Australian duo, Samantha Stosur and Sam Groth, in straight sets. In preparation for the upcoming Fed Cup quarterfinal between Switzerland and France, Hingis partnered with Belinda Bencic to defend her St. Petersburg title; the pair lost in the first round to Gabriela Dabrowski and Michaëlla Krajicek. In the Fed Cup quarterfinal, Hingis instead paired up with Timea Bacsinszky and won their doubles match against Amandine Hesse and Kristina Mladenovic, helping the team to a 4–1 victory and a place in the semifinals. In the first two tournaments of their new partnership, Hingis and Chan suffered some tough losses, falling to Olga Savchuk and Yaroslava Shvedova in the semifinals of the Qatar Open and to Andrea Hlaváčková and Peng Shuai in straight sets in the quarterfinals of the Dubai Tennis Championships. However, they immediately rebounded by winning their first title together at the Indian Wells Open, defeating Hingis's old partner Sania Mirza and Barbora Strýcová in the quarterfinals, top-seeded Mattek-Sands and Šafářová in the semifinals, and the Czech pair Lucie Hradecká and Kateřina Siniaková in the final. They then reached the semifinals of the Miami Open before losing to Mirza and Strýcová. Hingis again sought to practice with a Swiss partner before the Fed Cup semifinal between Switzerland and Belarus, this time pairing up with Bacsinszky to enter the inaugural Ladies Open Biel Bienne. Hingis and Bacsinszky reached the final, succumbing there to Hsieh Su-wei and Monica Niculescu. Despite Hingis winning her doubles rubber with Bacsinszky in the Fed Cup semifinal tie, Switzerland ultimately lost 2–3. Switzerland had been seeking to reach its first final since Hingis had spearheaded the team to a narrow defeat by Spain in 1998. In the clay-court season, Hingis and Chan continued their good form, winning back-to-back titles at the Madrid and Italian Opens and defeating Tímea Babos and Andrea Hlaváčková and Ekaterina Makarova and Elena Vesnina, respectively, in the finals of those events. Hingis's victory in Madrid was her 100th WTA career title. This success marked the pair as one of the pre-tournament favorites to win the French Open.
Hingis and Chan reached the semifinals, where their 12-match winning streak was ended by the eventual champions Mattek-Sands and Šafářová. Hingis and Paes lost in the opening round of the mixed doubles competition to Katarina Srebotnik and Raven Klaasen in a super tiebreak. Hingis and Chan again won back-to-back titles, this time at the Mallorca Open and the Eastbourne International. At Mallorca, they won the title by walkover after Jelena Janković and Anastasija Sevastova withdrew from the title match due to an injury Sevastova had sustained in the singles competition. At Eastbourne, they defeated Barty and Dellacqua in the final. However, as at the French Open two months earlier, Hingis and Chan could not replicate this success at Grand Slam level, losing at the quarterfinal stage to Grönefeld and Peschke at Wimbledon. In the mixed doubles competition, Hingis paired up with a new partner, Great Britain's Jamie Murray, after splitting from Leander Paes. As top seeds they reached the final without losing a set, then defeated the defending champions Heather Watson and Henri Kontinen in the championship match. Hingis and Chan next played at the Canadian Open, where the German-Czech pair of Grönefeld and Peschke defeated them in the quarterfinals for the second tournament in a row. Undeterred, a week later at the Cincinnati Open they produced another winning run and defeated Hsieh and Niculescu in the final to capture their next title together. On 14 August, Hingis and Chan became one of the first teams to qualify for the doubles competition at the year-end WTA Finals. At the US Open, Hingis emerged victorious from both the women's and the mixed doubles competitions. She and Jamie Murray defeated Chan Hao-ching and Michael Venus in the final to capture their second consecutive title together and remain undefeated as a pair. Then, less than 24 hours later, she and Chan defeated Hradecká and Siniaková in the final to win their first major title together. In total, this was Hingis's 25th Grand Slam title across all disciplines. Hingis and Chan extended their winning run to 18 matches in China by winning their third and fourth straight titles, at the Wuhan and China Opens. In Wuhan, they defeated Shuko Aoyama and Yang Zhaoxuan in the final. With this win, Hingis returned to the No. 1 doubles ranking on 2 October, for the 67th week of her career. In Beijing, they defeated Babos and Hlaváčková. Hingis announced her retirement at the WTA Finals in Singapore in October 2017. By winning the 1998 US Open title, Hingis completed the doubles Career Grand Slam, becoming the 17th female player in history to achieve this, as well as the youngest. It also meant she completed the Calendar Year Grand Slam, becoming the fourth woman in history to achieve the feat in doubles. By winning the 2016 French Open title, Hingis completed the mixed doubles Career Grand Slam, becoming the seventh female player in history to achieve this. In the 1990s, Hingis was sponsored by Sergio Tacchini. She sued the company in 2001, demanding $40 million for making allegedly defective shoes that injured her feet: she had suffered a foot injury in 1998 and withdrew from the Wimbledon doubles competition in 1999. Hingis alleged that a Tacchini-appointed specialist had recommended her shoes be changed, a recommendation the company ignored; Tacchini, for its part, had dropped her as its spokeswoman in April 1999, citing an alleged breach of contract. Hingis and Tacchini settled in 2005 for an undisclosed amount of money. 
She was sponsored by Adidas from 1999 until 2008. Her on-court apparel has since been manufactured by Tonic Lifestyle Apparel, under her own clothing line, "Tonic by Martina Hingis". She is sponsored by Yonex for racquets and shoes. In 2000, Hingis dated the Swedish tennis player Magnus Norman and the Spanish golfer Sergio García. She was briefly engaged to the Czech tennis player Radek Štěpánek, but split up with him in August 2007. She also dated the former tennis players Ivo Heuberger, Justin Gimelstob, and Julián Alonso. On 10 December 2010, in Paris, Hingis married the then-24-year-old Thibault Hutin, a French equestrian show jumper she had met at a competition the previous April. On 8 July 2013, Hingis told the Swiss newspaper "Schweizer Illustrierte" that the pair had been separated since the beginning of the year. In 2017 it was reported that she was dating the Spaniard David Tosas Ros, a sports manager. Hingis speaks five languages: Swiss German, Standard German, Czech, English and French. On 20 July 2018, Hingis married the former sports doctor Harald Leemann in a secret ceremony at the Grand Resort Bad Ragaz in Switzerland. Hingis and Leemann had been in a relationship for almost a year before they married. On 30 September 2018 (her 38th birthday), Hingis announced via social media that she was expecting her first child. She gave birth to a daughter, Lia, on 26 February 2019.
https://en.wikipedia.org/wiki?curid=19047
Mass Mass is both a property of a physical body and a measure of its resistance to acceleration (a change in its state of motion) when a net force is applied. An object's mass also determines the strength of its gravitational attraction to other bodies. The SI base unit of mass is the kilogram (kg). In physics, mass is not the same as weight, even though mass is often determined by measuring the object's weight using a spring scale rather than a balance scale, which compares it directly with known masses. An object on the Moon would weigh less than it does on Earth because of the lower gravity, but it would still have the same mass. This is because weight is a force, while mass is the property that (along with gravity) determines the strength of this force. There are several distinct phenomena that can be used to measure mass. Although some theorists have speculated that some of these phenomena could be independent of each other, current experiments have found no difference in results regardless of how mass is measured. The mass of an object determines its acceleration in the presence of an applied force. Inertia and inertial mass describe this same property of physical bodies at the qualitative and quantitative levels respectively; in other words, mass quantitatively describes a body's inertia. According to Newton's second law of motion, if a body of fixed mass "m" is subjected to a single force "F", its acceleration "a" is given by "F"/"m". A body's mass also determines the degree to which it generates or is affected by a gravitational field. If a first body of mass "m"A is placed at a distance "r" (center of mass to center of mass) from a second body of mass "m"B, each body is subject to an attractive force "F" = "G""m"A"m"B/"r"2, where "G" is the "universal gravitational constant". This is sometimes referred to as gravitational mass. Repeated experiments since the 17th century have demonstrated that inertial and gravitational mass are identical; since 1915, this observation has been entailed "a priori" in the equivalence principle of general relativity. The standard International System of Units (SI) unit of mass is the kilogram (kg). The kilogram is 1000 grams (g), and was first defined in 1795 as the mass of one cubic decimeter of water at the melting point of ice. However, because precise measurement of a cubic decimeter of water at the proper temperature and pressure was difficult, in 1889 the kilogram was redefined as the mass of the international prototype of the kilogram, a cylinder of platinum-iridium alloy, and thus became independent of the meter and the properties of water. However, the mass of the international prototype and its supposedly identical national copies have been found to be drifting over time. The redefinition of the kilogram and several other units occurred on May 20, 2019, following a final vote by the CGPM in November 2018. The new definition uses only invariant quantities of nature: the speed of light, the caesium hyperfine frequency, and the Planck constant. Other units, such as the tonne, are accepted for use with the SI; units of mass outside the SI system include the pound and the solar mass. In physical science, one may distinguish conceptually between at least seven different aspects of "mass", or seven physical notions that involve the concept of "mass". Every experiment to date has shown these seven values to be proportional, and in some cases equal, and this proportionality gives rise to the abstract concept of mass. There are a number of ways mass can be measured or operationally defined. In everyday usage, mass and "weight" are often used interchangeably. 
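As a minimal numerical sketch of the two laws just described (the values of "G", the Earth's mass, and its radius are standard reference figures, supplied here only for illustration):

```python
# Newton's law of universal gravitation: F = G * m_A * m_B / r**2
G = 6.674e-11  # universal gravitational constant, m^3 kg^-1 s^-2

def gravitational_force(m_a: float, m_b: float, r: float) -> float:
    """Attractive force in newtons between two point masses m_a and m_b (kg)
    separated by a center-to-center distance r (m)."""
    return G * m_a * m_b / r**2

# Example: the Earth and a 50 kg object at the Earth's surface.
m_earth = 5.972e24  # kg
r_earth = 6.371e6   # m, mean radius
f = gravitational_force(m_earth, 50.0, r_earth)
print(f"force: {f:.0f} N")                          # ~491 N
print(f"acceleration a = F/m: {f / 50.0:.2f} m/s^2")  # ~9.82 m/s^2
```

The same 491-newton figure reappears below in the discussion of weight, since on the Earth's surface the gravitational force on a body is what a scale reads as its weight.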
For instance, a person's weight may be stated as 75 kg. In a constant gravitational field, the weight of an object is proportional to its mass, and it is unproblematic to use the same unit for both concepts. But because of slight differences in the strength of the Earth's gravitational field at different places, the distinction becomes important for measurements with a precision better than a few percent, and for places far from the surface of the Earth, such as in space or on other planets. Conceptually, "mass" (measured in kilograms) refers to an intrinsic property of an object, whereas "weight" (measured in newtons) measures an object's resistance to deviating from its natural course of free fall, which can be influenced by the nearby gravitational field. No matter how strong the gravitational field, objects in free fall are weightless, though they still have mass. The force known as "weight" is proportional to mass and acceleration in all situations where the mass is accelerated away from free fall. For example, when a body is at rest in a gravitational field (rather than in free fall), it must be accelerated by a force from a scale or the surface of a planetary body such as the Earth or the Moon. This force keeps the object from going into free fall. Weight is the opposing force in such circumstances, and is thus determined by the acceleration of free fall. On the surface of the Earth, for example, an object with a mass of 50 kilograms weighs 491 newtons, which means that 491 newtons is being applied to keep the object from going into free fall. By contrast, on the surface of the Moon, the same object still has a mass of 50 kilograms but weighs only 81.5 newtons, because only 81.5 newtons is required to keep this object from going into free fall on the Moon. Restated in mathematical terms, on the surface of the Earth the weight "W" of an object is related to its mass "m" by "W" = "mg", where "g" is the acceleration due to Earth's gravitational field, about 9.8 m/s2 (expressed as the acceleration experienced by a free-falling object). For other situations, such as when objects are subjected to mechanical accelerations from forces other than the resistance of a planetary surface, the weight force is proportional to the mass of the object multiplied by the total acceleration away from free fall, which is called the proper acceleration. Through such mechanisms, objects in elevators, vehicles, centrifuges, and the like may experience weight forces many times those caused by resistance to the effects of gravity on objects resting on planetary surfaces. In such cases, the generalized equation for the weight "W" of an object is "W" = "ma", where "a" is the proper acceleration of the object caused by all influences other than gravity. (Again, if gravity is the only influence, such as occurs when an object falls freely, its weight will be zero.) 
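A short sketch of the "W" = "ma" relation using the two surface-gravity figures implied by the numbers above (9.82 and 1.63 m/s2 are approximate values, chosen to reproduce the quoted weights):

```python
# Weight W = m * a, where a is the acceleration away from free fall.
SURFACE_GRAVITY = {  # approximate surface gravity, m/s^2
    "Earth": 9.82,
    "Moon": 1.63,
}

def weight(mass_kg: float, body: str) -> float:
    """Weight in newtons of an object resting on the surface of the given body."""
    return mass_kg * SURFACE_GRAVITY[body]

for body in SURFACE_GRAVITY:
    print(f"A 50 kg object on the {body} weighs {weight(50.0, body):.1f} N")
# Earth: ~491 N; Moon: ~81.5 N, matching the figures in the text.
```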
Albert Einstein developed his general theory of relativity starting from the assumption that the correspondence between inertial and passive gravitational mass is not accidental: that no experiment will ever detect a difference between them (in essence, the equivalence principle). This particular equivalence, often referred to as the "Galilean equivalence principle" or the "weak equivalence principle", has its most important consequence for freely falling objects. Suppose an object has inertial and gravitational masses "m" and "M", respectively. If the only force acting on the object comes from a gravitational field "g", the force on the object is "F" = "Mg". Given this force, the acceleration of the object can be determined by Newton's second law: "a" = "F"/"m". Putting these together, the gravitational acceleration is given by "a" = ("M"/"m")"g". This says that the ratio of gravitational to inertial mass of any object is equal to some constant "K" if and only if all objects fall at the same rate in a given gravitational field. This phenomenon is referred to as the "universality of free-fall". In addition, the constant "K" can be taken as 1 by defining our units appropriately. The first experiments demonstrating the universality of free-fall were, according to scientific "folklore", conducted by Galileo by dropping objects from the Leaning Tower of Pisa. This is most likely apocryphal: he is more likely to have performed his experiments with balls rolling down nearly frictionless inclined planes to slow the motion and increase the timing accuracy. Increasingly precise experiments have been performed since, such as those of Loránd Eötvös using the torsion balance pendulum in 1889. To date, no deviation from universality, and thus from Galilean equivalence, has ever been found, at least to a precision of 10−12, and more precise experimental efforts are still being carried out. The universality of free-fall only applies to systems in which gravity is the only acting force. All other forces, especially friction and air resistance, must be absent or at least negligible. For example, if a hammer and a feather are dropped from the same height through the air on Earth, the feather will take much longer to reach the ground; the feather is not really in "free"-fall because the force of air resistance upwards against the feather is comparable to the downward force of gravity. On the other hand, if the experiment is performed in a vacuum, in which there is no air resistance, the hammer and the feather should hit the ground at exactly the same time (assuming the acceleration of the ground towards both objects, for its own part, is negligible). This can easily be done in a high school laboratory by dropping the objects in transparent tubes that have the air removed with a vacuum pump. It is even more dramatic when done in an environment that naturally has a vacuum, as David Scott did on the surface of the Moon during Apollo 15. A stronger version of the equivalence principle, known as the "Einstein equivalence principle" or the "strong equivalence principle", lies at the heart of the general theory of relativity. Einstein's equivalence principle states that within sufficiently small regions of space-time, it is impossible to distinguish between a uniform acceleration and a uniform gravitational field. 
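The three steps of this argument can be written out compactly; a LaTeX sketch of the display equations that the flattened text above elides:

```latex
\begin{align}
  F &= M g             && \text{force of the field on gravitational mass } M \\
  a &= \frac{F}{m}     && \text{Newton's second law for inertial mass } m \\
  a &= \frac{M}{m}\, g && \text{all bodies fall alike iff } M/m = K \text{ is universal}
\end{align}
```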
Thus, the theory postulates that the force acting on a massive object caused by a gravitational field is a result of the object's tendency to move in a straight line (in other words, its inertia) and should therefore be a function of its inertial mass and the strength of the gravitational field. In theoretical physics, a mass generation mechanism is a theory that attempts to explain the origin of mass from the most fundamental laws of physics. To date, a number of different models have been proposed which advocate different views of the origin of mass. The problem is complicated by the fact that the notion of mass is strongly related to the gravitational interaction, but a theory of the latter has not yet been reconciled with the currently popular model of particle physics, known as the Standard Model. The concept of amount is very old and predates recorded history. Humans, at some early era, realized that the weight of a collection of similar objects was directly proportional to the number of objects in the collection: "W" ∝ "n", where "W" is the weight of the collection of similar objects and "n" is the number of objects in the collection. Proportionality, by definition, implies that the two values have a constant ratio: "W"/"n" = constant. An early use of this relationship is the balance scale, which balances the force of one object's weight against the force of another object's weight. The two sides of a balance scale are close enough that the objects experience similar gravitational fields. Hence, if they have similar masses then their weights will also be similar. This allows the scale, by comparing weights, to also compare masses. Consequently, historical weight standards were often defined in terms of amounts. The Romans, for example, used the carob seed (carat or siliqua) as a measurement standard. If an object's weight was equivalent to 1728 carob seeds, then the object was said to weigh one Roman pound. If, on the other hand, the object's weight was equivalent to 144 carob seeds, then the object was said to weigh one Roman ounce (uncia). The Roman pound and ounce were both defined in terms of different sized collections of the same common mass standard, the carob seed: the ratio of a Roman ounce (144 carob seeds) to a Roman pound (1728 carob seeds) was 1:12. In 1600 AD, Johannes Kepler sought employment with Tycho Brahe, who had some of the most precise astronomical data available. Using Brahe's precise observations of the planet Mars, Kepler spent the next five years developing his own method for characterizing planetary motion. In 1609, Johannes Kepler published the first two of his three laws of planetary motion, explaining how the planets orbit the Sun; the third law followed in 1619. In Kepler's final planetary model, he described planetary orbits as following elliptical paths with the Sun at a focal point of the ellipse. Kepler discovered that the square of the orbital period of each planet is directly proportional to the cube of the semi-major axis of its orbit, or equivalently, that the ratio of these two values is constant for all planets in the Solar System.
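Kepler's third law is easy to check numerically. A quick sketch, using approximate modern orbital data (periods in years, semi-major axes in astronomical units; the figures are standard reference values, not taken from this article):

```python
# Kepler's third law: T**2 / a**3 is (nearly) the same for every planet.
planets = {              # (orbital period T in years, semi-major axis a in AU)
    "Mercury": (0.241, 0.387),
    "Venus":   (0.615, 0.723),
    "Earth":   (1.000, 1.000),
    "Mars":    (1.881, 1.524),
}

for name, (T, a) in planets.items():
    print(f"{name:8s} T^2/a^3 = {T**2 / a**3:.4f}")
# Every ratio comes out very close to 1.0 in these units, which is
# exactly the constancy Kepler observed.
```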
https://en.wikipedia.org/wiki?curid=19048
Finnish markka The Finnish markka (; ; sign: mk; ISO code: FIM) was the currency of Finland from 1860 until 28 February 2002, when it ceased to be legal tender. The markka was divided into 100 pennies (; ), abbreviated as "p". At the point of conversion, the rate was fixed at €1 = 5.94573 mk. The markka was replaced by the euro (€), which had been introduced, in cash form, on 1 January 2002. This followed a transitional period of three years during which the euro was the official currency but existed only as "book money" outside of the monetary base. The dual circulation period, when both the Finnish markka and the euro had legal tender status, ended on 28 February 2002. The name "markka" was based on a medieval unit of weight. Both "markka" and "penni" are similar to words used in Germany for that country's former currency, based on the same etymological roots as the German Mark and pfennig. Although the word "markka" predates the currency by several centuries, the currency was established before being named "markka". A competition was held for its name, and some of the other entries included "sataikko" (meaning "having a hundred parts"), "omena" (apple) and "suomo" (from "Suomi", the Finnish name for Finland). With numbered amounts of markka, the Finnish language does not use plurals but partitive singular forms: ""10 markkaa"" and ""10 penniä"" (the nominative is "penni"). In Swedish, the singular and plural forms of mark and penni are the same. When the euro replaced the markka, "mummonmarkka" ("grandma's markka", sometimes shortened to just "mummo") became a new colloquial term for the old currency. The term "old markka", which is sometimes used instead, can be misleading, since it can also refer to the pre-1963 markka. In Helsinki slang, a hundred markkaa was traditionally called "huge" [hu.ge] (from Swedish "hundra" for "hundred"); after the 1963 reform, this name was used for one new markka. The markka was introduced in 1860 by the Bank of Finland, replacing the Russian ruble at a rate of four markkaa to one ruble. Senator Fabian Langenskiöld is called the "father of the markka". In 1865, the markka was separated from the Russian ruble and tied to the value of silver. From 1878 to 1915, Finland adopted the gold standard of the Latin Monetary Union. Before the markka, both the Swedish riksdaler and the Russian ruble had been used side by side for a time. Up until World War I, the value of the markka fluctuated within +23%/−16% of its initial value, but with no trend. The markka suffered heavy inflation (91%) during 1914–18. Gaining independence in 1917, Finland returned to the gold standard from 1926 to 1931. Prices remained stable until 1940, but the markka suffered heavy inflation (17% annually on average) during World War II and again in 1956–57 (11%). In 1963, in order to strip out the accumulated inflation, the markka was replaced by the "new markka", equivalent to 100 old markkaa. Finland joined the Bretton Woods Agreement in 1948. The value of the markka was pegged to the dollar at 320 mk/USD, which became 3.20 new mk/USD in 1963 and was devalued to 4.20 mk/USD in 1967. After the breakdown of the Bretton Woods agreement in 1971, a basket of currencies became the new reference. Inflation was high (over 5%) during 1971–85. Devaluations were used occasionally, totalling 60% between 1975 and 1990 and allowing the currency to follow the depreciating US dollar more closely than the rising German mark. The paper industry, which mainly traded in US dollars, was often blamed for demanding these devaluations to boost its exports. 
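Since the conversion rate was fixed, converting historical amounts is simple arithmetic. A small sketch (the rounding to whole cents is an assumption for display purposes; the factor of 100 follows from the 1963 redenomination described above):

```python
# Fixed changeover rate: 1 EUR = 5.94573 FIM (new markka).
FIM_PER_EUR = 5.94573

def markka_to_euro(amount_mk: float, old_markka: bool = False) -> float:
    """Convert markka to euros. Pre-1963 'old' markkaa are worth 1/100
    of a new markka, so they are rescaled first."""
    if old_markka:
        amount_mk /= 100.0
    return round(amount_mk / FIM_PER_EUR, 2)

print(markka_to_euro(100))                    # a 100 mk note ~ 16.82 EUR
print(markka_to_euro(1000, old_markka=True))  # 1000 old mk ~ 1.68 EUR
```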
Various economic controls were removed and the market was gradually liberalized throughout the 1980s and 1990s. The monetary policy called the "strong markka policy" ("vahvan markan politiikka") was a characteristic feature of the 1980s and early 1990s. Its main architect was President Mauno Koivisto, who opposed floating the currency and devaluations. As a result, the nominal value of the markka was extremely high, and in 1990 Finland was nominally the most expensive country in the world according to the OECD's Purchasing Power Parities report. Koivisto's policy was maintained only briefly after Esko Aho was elected Prime Minister. In 1991, the markka was pegged to the ECU currency basket, but the peg had to be withdrawn after two months with a devaluation of 12%. In 1992, Finland was hit by a severe recession, the early 1990s depression in Finland. It was caused by several factors, the most severe being the accumulation of debt, as the economic boom of the 1980s had been fuelled by borrowing. In addition, the Soviet Union had collapsed, which brought an end to bilateral trade, and existing trade connections were severed. The most important source of export revenue, the Western markets, was also depressed during the same period, partly because of the war in Kuwait. As a result, and in the view of some years overdue, the artificial fixed exchange rate was abandoned and the markka was floated. Its value immediately decreased by 13%, and the inflated nominal prices converged towards German levels. In total, the value of the markka decreased by 40% as a result of the recession. A further consequence was that several entrepreneurs who had borrowed money denominated in foreign currencies suddenly faced insurmountable debts. Inflation was low during the markka's existence as an independently floating currency (1992–1999): 1.3% annually on average. The markka was added to the ERM system in 1996 and then became a fraction of the euro in 1999, with physical euro money arriving later, in 2002. It has been speculated that if Finland had not joined the euro, market fluctuations such as the dot-com bubble would have been reflected in wild swings in the price of the markka. Nokia, formerly traded in markka, was in 2000 the European company with the highest market capitalization. When the markka was introduced, coins were minted in copper (1, 5 and 10 penniä), silver (25 and 50 penniä, 1 and 2 markkaa) and gold (10 and 20 markkaa). After the First World War, silver and gold issues ceased, and cupro-nickel 25 and 50 penniä and 1 markka coins were introduced in 1921, followed by aluminium-bronze 5, 10 and 20 markkaa coins between 1928 and 1931. During the Second World War, copper replaced cupro-nickel in the 25 and 50 penniä and 1 markka coins, followed by an issue of iron 10, 25 and 50 penniä and 1 markka coins. This period also saw the issue of holed 5 and 10 penniä coins. All coins below 1 markka had ceased to be produced by 1948. In 1952, a new coinage was introduced, with smaller iron (later nickel-plated) 1- and 5-markka coins alongside aluminium-bronze 10-, 20- and 50-markka coins and, from 1956, silver 100- and 200-markka denominations. This coinage continued to be issued until the introduction of the new markka in 1963. The new markka coinage initially consisted of six denominations: 1 penni (bronze, later aluminium), 5 penniä (bronze, later aluminium), 10 penniä (aluminium-bronze, later aluminium), 20 and 50 penniä (aluminium-bronze) and 1 markka (silver, later cupro-nickel). From 1972, aluminium-bronze 5-markka coins were also issued. 
The last series of Finnish markka coins included five denominations, whose final euro values were rounded to the nearest cent at the changeover. The last design series of the Finnish markka banknotes was designed in the 1980s by the Finnish designer Erik Bruun and issued in 1986. In this final banknote series, the Bank of Finland used a photograph of Väinö Linna on the 20-markkaa note without permission from the copyright holders. This was only revealed after several million notes were in use. The Bank paid 100,000 mk (€17,000) in compensation to the rights holders. The second-to-last banknote design series, designed by Tapio Wirkkala, was introduced in 1955 and revised in the reform of 1963. It was the first series to depict actual specific persons: Juho Kusti Paasikivi on the 10 markkaa, K. J. Ståhlberg on the 50 markkaa, J. V. Snellman on the 100 markkaa and Urho Kekkonen on the 500 markkaa (introduced later). Unlike Erik Bruun's series, this series did not depict any other real-life subjects, only abstract ornaments in addition to the depictions of people. A popular joke at the time was to cover Paasikivi's face except for his ear and the back of his head on the 10-markka note, ending up with something resembling a mouse, said to be the only animal illustration in the entire series. The still-older notes, designed by Eliel Saarinen, were introduced in 1922. They also depicted people, but these were generic men and women rather than specific individuals. The fact that these men and women were depicted nude caused a minor controversy at the time. By the end of 2001, Finland was a relatively cashless society. Most transactions were paid either with the 100 mk banknote or by debit card. There were only 4 million banknotes apiece of the 500 and 1000 markkaa denominations for a country with a population of over 5 million people, while there were about 19 banknotes of the smaller denominations per individual, adding up to €241 per inhabitant. For the introduction of the euro, the ECB produced €8,020 million in banknotes before the changeover. During the first weeks of 2002, Finland's replacement of its national banknotes with euro banknotes was among the fastest in the euro area: three-quarters of cash payments were already being made in euros by the end of the first changeover week. Coins and banknotes that were legal tender at the time of the markka's retirement could be exchanged for euros until 29 February 2012. Today, the only value that markka coins and banknotes have is as collectibles.
https://en.wikipedia.org/wiki?curid=19050
Manganese Manganese is a chemical element with the symbol Mn and atomic number 25. It is not found as a free element in nature; it often occurs in minerals in combination with iron. Manganese is a transition metal with a multifaceted array of industrial alloy uses, particularly in stainless steels. Historically, manganese is named for pyrolusite and other black minerals from the region of Magnesia in Greece, which also gave its name to magnesium and the iron ore magnetite. By the mid-18th century, the Swedish-German chemist Carl Wilhelm Scheele had used pyrolusite to produce chlorine. Scheele and others were aware that pyrolusite (now known to be manganese dioxide) contained a new element, but they were unable to isolate it. Johan Gottlieb Gahn was the first to isolate an impure sample of manganese metal, in 1774, which he did by reducing the dioxide with carbon. Manganese phosphating is used for rust and corrosion prevention on steel. Ionized manganese is used industrially as a pigment of various colors, which depend on the oxidation state of the ions. The permanganates of alkali and alkaline earth metals are powerful oxidizers. Manganese dioxide is used as the cathode (electron acceptor) material in zinc-carbon and alkaline batteries. In biology, manganese(II) ions function as cofactors for a large variety of enzymes with many functions. Manganese enzymes are particularly essential in the detoxification of superoxide free radicals in organisms that must deal with elemental oxygen. Manganese also functions in the oxygen-evolving complex of photosynthetic plants. While the element is a required trace mineral for all known living organisms, it also acts as a neurotoxin in larger amounts. Especially through inhalation, it can cause manganism, a condition in mammals leading to neurological damage that is sometimes irreversible. Manganese is a silvery-gray metal that resembles iron. It is hard and very brittle, difficult to fuse, but easy to oxidize. Manganese metal and its common ions are paramagnetic. Manganese tarnishes slowly in air and oxidizes ("rusts") like iron in water containing dissolved oxygen. Naturally occurring manganese is composed of one stable isotope, 55Mn. Several radioisotopes have been isolated and described, ranging in atomic mass from 44 u (44Mn) to 69 u (69Mn). The most stable are 53Mn with a half-life of 3.7 million years, 54Mn with a half-life of 312.2 days, and 52Mn with a half-life of 5.591 days. All of the remaining radioactive isotopes have half-lives of less than three hours, the majority less than one minute. The primary decay mode of isotopes lighter than the most abundant stable isotope, 55Mn, is electron capture; the primary mode of the heavier isotopes is beta decay. Manganese also has three metastable (isomeric) states. Manganese is part of the iron group of elements, which are thought to be synthesized in large stars shortly before the supernova explosion. 53Mn decays to 53Cr with a half-life of 3.7 million years. Because of its relatively short half-life, 53Mn is relatively rare, produced by cosmic-ray impacts on iron. Manganese isotopic contents are typically combined with chromium isotopic contents and have found application in isotope geology and radiometric dating. Mn–Cr isotopic ratios reinforce the evidence from 26Al and 107Pd for the early history of the Solar System. Variations in 53Cr/52Cr and Mn/Cr ratios from several meteorites suggest an initial 53Mn/55Mn ratio, which indicates that the Mn–Cr isotopic composition must result from "in situ" decay of 53Mn in differentiated planetary bodies. 
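As an illustration of what a 3.7-million-year half-life means in practice, a minimal decay sketch (the ~4.6-billion-year age of the Solar System used below is the standard figure, not taken from this article):

```python
# Exponential decay: the surviving fraction after time t is (1/2)**(t / t_half).
HALF_LIFE_MN53_YEARS = 3.7e6  # half-life of 53Mn quoted in the text

def surviving_fraction(t_years: float, t_half: float = HALF_LIFE_MN53_YEARS) -> float:
    """Fraction of an initial 53Mn sample remaining after t_years."""
    return 0.5 ** (t_years / t_half)

print(surviving_fraction(1.0e6))  # ~0.83 after one million years
print(surviving_fraction(4.6e9))  # prints 0.0 (about 2**-1243 underflows):
                                  # primordial 53Mn is effectively gone, so any
                                  # 53Mn found today must be freshly produced,
                                  # e.g. by cosmic-ray impacts on iron.
```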
Hence, 53Mn provides additional evidence for nucleosynthetic processes immediately before the coalescence of the Solar System. The most common oxidation states of manganese are +2, +3, +4, +6, and +7, though all oxidation states from −3 to +7 have been observed. Mn2+ often competes with Mg2+ in biological systems. Manganese compounds where manganese is in oxidation state +7, which are mostly restricted to the unstable oxide Mn2O7, compounds of the intensely purple permanganate anion MnO4−, and a few oxyhalides (MnO3F and MnO3Cl), are powerful oxidizing agents. Compounds with oxidation states +5 (blue) and +6 (green) are strong oxidizing agents and are vulnerable to disproportionation. The most stable oxidation state for manganese is +2, which has a pale pink color, and many manganese(II) compounds are known, such as manganese(II) sulfate (MnSO4) and manganese(II) chloride (MnCl2). This oxidation state is also seen in the mineral rhodochrosite (manganese(II) carbonate). Manganese(II) most commonly exists with a high-spin, S = 5/2 ground state because of the high pairing energy for manganese(II); however, there are a few examples of low-spin, S = 1/2 manganese(II). There are no spin-allowed d–d transitions in manganese(II), explaining why manganese(II) compounds are typically pale to colorless. The +3 oxidation state is known in compounds like manganese(III) acetate, but these are quite powerful oxidizing agents and also prone to disproportionation in solution, forming manganese(II) and manganese(IV). Solid compounds of manganese(III) are characterized by their strong purple-red color and a preference for distorted octahedral coordination resulting from the Jahn-Teller effect. The oxidation state +5 can be produced by dissolving manganese dioxide in molten sodium nitrite. Manganate(VI) salts can be produced by dissolving Mn compounds, such as manganese dioxide, in molten alkali while exposed to air. Permanganate (+7 oxidation state) compounds are purple, and can give glass a violet color. Potassium permanganate, sodium permanganate, and barium permanganate are all potent oxidizers. Potassium permanganate, also called Condy's crystals, is a commonly used laboratory reagent because of its oxidizing properties; it is used as a topical medicine (for example, in the treatment of fish diseases). Solutions of potassium permanganate were among the first stains and fixatives to be used in the preparation of biological cells and tissues for electron microscopy. The origin of the name manganese is complex. In ancient times, there were two black minerals from the regions of the Magnetes (either Magnesia, located within modern Greece, or Magnesia ad Sipylum, located within modern Turkey). They were both called "magnes" from their place of origin, but were considered to differ in sex. The male "magnes" attracted iron, and was the iron ore now known as lodestone or magnetite; it probably gave us the term magnet. The female "magnes" ore did not attract iron, but was used to decolorize glass. This female "magnes" was later called "magnesia", known in modern times as pyrolusite or manganese dioxide. Neither this mineral nor elemental manganese is magnetic. In the 16th century, manganese dioxide was called "manganesum" (note the two Ns instead of one) by glassmakers, possibly as a corruption and concatenation of two words, since alchemists and glassmakers eventually had to differentiate "magnesia negra" (the black ore) from "magnesia alba" (a white ore, also from Magnesia, also useful in glassmaking). 
Michele Mercati called magnesia negra "manganesa", and finally the metal isolated from it became known as "manganese" (German: "Mangan"). The name "magnesia" eventually came to refer only to the white magnesia alba (magnesium oxide), which provided the name magnesium for the free element when it was isolated much later. Several colorful oxides of manganese, for example manganese dioxide, are abundant in nature and have been used as pigments since the Stone Age. The cave paintings in Gargas, which are 30,000 to 24,000 years old, contain manganese pigments. Manganese compounds were used by Egyptian and Roman glassmakers, either to add color to glass or to remove it. Use as "glassmakers' soap" continued through the Middle Ages until modern times and is evident in 14th-century glass from Venice. Because it was used in glassmaking, manganese dioxide was available for experiments by alchemists, the first chemists. Johann Glauber (in the 17th century) and Ignatius Gottfried Kaim (in 1770) discovered that manganese dioxide could be converted to permanganate, a useful laboratory reagent. By the mid-18th century, the Swedish chemist Carl Wilhelm Scheele used manganese dioxide to produce chlorine. At first, hydrochloric acid, or a mixture of dilute sulfuric acid and sodium chloride, was made to react with manganese dioxide; later, hydrochloric acid from the Leblanc process was used, and the manganese dioxide was recycled by the Weldon process. The production of chlorine and hypochlorite bleaching agents was a large consumer of manganese ores. Scheele and other chemists were aware that manganese dioxide contained a new element, but they were not able to isolate it. Johan Gottlieb Gahn was the first to isolate an impure sample of manganese metal in 1774, by reducing the dioxide with carbon. The manganese content of some iron ores used in Greece led to speculation that steel produced from that ore contained additional manganese, making Spartan steel exceptionally hard. Around the beginning of the 19th century, manganese was used in steelmaking, and several patents were granted. In 1816, it was documented that iron alloyed with manganese was harder but not more brittle. In 1837, the British academic John Couper noted an association between miners' heavy exposure to manganese and a form of Parkinson's disease. In 1912, United States patents were granted for protecting firearms against rust and corrosion with manganese phosphate electrochemical conversion coatings, and the process has seen widespread use ever since. The invention of the Leclanché cell in 1866 and the subsequent improvement of batteries containing manganese dioxide as a cathodic depolarizer increased the demand for manganese dioxide. Until the development of nickel-cadmium and lithium-based batteries, most batteries contained manganese. The zinc-carbon battery and the alkaline battery normally use industrially produced manganese dioxide, because naturally occurring manganese dioxide contains impurities. In the 20th century, manganese dioxide was widely used as the cathodic material for commercial disposable dry batteries of both the standard (zinc-carbon) and alkaline types. Manganese comprises about 1000 ppm (0.1%) of the Earth's crust, making it the 12th most abundant of the crust's elements. Soil contains 7–9000 ppm of manganese with an average of 440 ppm. Seawater has only 10 ppm manganese and the atmosphere contains 0.01 µg/m3. 
Manganese occurs principally as pyrolusite (MnO2), braunite (Mn2+Mn3+6)(SiO12), psilomelane (Ba,H2O)2Mn5O10, and to a lesser extent as rhodochrosite (MnCO3). The most important manganese ore is pyrolusite (MnO2). Other economically important manganese ores usually show a close spatial relation to iron ores. Land-based resources are large but irregularly distributed. About 80% of the known world manganese resources are in South Africa; other important manganese deposits are in Ukraine, Australia, India, China, Gabon and Brazil. According to a 1978 estimate, the ocean floor has 500 billion tons of manganese nodules. Attempts to find economically viable methods of harvesting manganese nodules were abandoned in the 1970s. The CIA once used mining manganese nodules on the ocean floor as a cover story for recovering a sunken Soviet submarine. In South Africa, most identified deposits are located near Hotazel in the Northern Cape Province, with a 2011 estimate of 15 billion tons. In 2011, South Africa produced 3.4 million tons, topping all other nations. Manganese is mainly mined in South Africa, Australia, China, Gabon, Brazil, India, Kazakhstan, Ghana, Ukraine and Malaysia. US import sources (1998–2001) were as follows. Manganese ore: Gabon, 70%; South Africa, 10%; Australia, 9%; Mexico, 5%; and other, 6%. Ferromanganese: South Africa, 47%; France, 22%; Mexico, 8%; Australia, 8%; and other, 15%. Manganese contained in all manganese imports: South Africa, 31%; Gabon, 21%; Australia, 13%; Mexico, 8%; and other, 27%. For the production of ferromanganese, the manganese ore is mixed with iron ore and carbon, and then reduced either in a blast furnace or in an electric arc furnace. The resulting ferromanganese has a manganese content of 30 to 80%. Pure manganese used for the production of iron-free alloys is produced by leaching manganese ore with sulfuric acid and a subsequent electrowinning process. A more progressive extraction process involves directly reducing manganese ore in a heap leach. This is done by percolating natural gas through the bottom of the heap; the natural gas provides the heat (which needs to be at least 850 °C) and the reducing agent (carbon monoxide). This reduces all of the manganese ore to manganese oxide (MnO), which is a leachable form. The ore then travels through a grinding circuit to reduce its particle size to between 150 and 250 μm, increasing the surface area to aid leaching. The ore is then added to a leach tank of sulfuric acid and ferrous iron (Fe2+) in a 1.6:1 ratio. The iron reacts with the manganese dioxide to form iron hydroxide and elemental manganese. This process yields approximately 92% recovery of the manganese. For further purification, the manganese can then be sent to an electrowinning facility. In 1972, the CIA's Project Azorian, through the billionaire Howard Hughes, commissioned the ship "Hughes Glomar Explorer" with the cover story of harvesting manganese nodules from the sea floor. That triggered a rush of activity to collect manganese nodules, which was not actually practical. The real mission of "Hughes Glomar Explorer" was to raise a sunken Soviet submarine, the K-129, with the goal of retrieving Soviet code books. Manganese has no satisfactory substitute in its major applications in metallurgy. In minor applications (e.g., manganese phosphating), zinc and sometimes vanadium are viable substitutes. 
Manganese is essential to iron and steel production by virtue of its sulfur-fixing, deoxidizing, and alloying properties, as first recognized by the British metallurgist Robert Forester Mushet (1811–1891) who, in 1856, introduced the element, in the form of Spiegeleisen, into steel for the specific purpose of removing excess dissolved oxygen, sulfur, and phosphorus in order to improve its malleability. Steelmaking, including its ironmaking component, has accounted for most manganese demand, presently in the range of 85% to 90% of the total. Manganese is a key component of low-cost stainless steel. Often ferromanganese (usually about 80% manganese) is the intermediate in modern processes. Small amounts of manganese improve the workability of steel at high temperatures by forming a high-melting sulfide and preventing the formation of a liquid iron sulfide at the grain boundaries. If the manganese content reaches 4%, embrittlement of the steel becomes a dominant feature. The embrittlement decreases at higher manganese concentrations and reaches an acceptable level at 8%. Steel containing 8 to 15% manganese has a high tensile strength of up to 863 MPa. Steel with 12% manganese was discovered in 1882 by Robert Hadfield and is still known as Hadfield steel (mangalloy). It was used for British military steel helmets and later by the U.S. military. The second largest application for manganese is in aluminium alloys. Aluminium with roughly 1.5% manganese has increased resistance to corrosion through grains that absorb impurities which would otherwise lead to galvanic corrosion. The corrosion-resistant aluminium alloys 3004 and 3104 (0.8 to 1.5% manganese) are used for most beverage cans. Before 2000, more than 1.6 million tonnes of those alloys were used; at 1% manganese, this consumed 16,000 tonnes of manganese. Methylcyclopentadienyl manganese tricarbonyl is used as an additive in unleaded gasoline to boost the octane rating and reduce engine knocking. The manganese in this unusual organometallic compound is in the +1 oxidation state. Manganese(IV) oxide (manganese dioxide, MnO2) is used as a reagent in organic chemistry for the oxidation of benzylic alcohols (where the hydroxyl group is adjacent to an aromatic ring). Manganese dioxide has been used since antiquity to oxidize and neutralize the greenish tinge in glass from trace amounts of iron contamination. MnO2 is also used in the manufacture of oxygen and chlorine and in drying black paints. In some preparations, it is a brown pigment for paint and is a constituent of natural umber. Manganese(IV) oxide was used in the original type of dry cell battery as an electron acceptor from zinc, and is the blackish material in carbon–zinc flashlight cells. The manganese dioxide is reduced to the manganese oxide-hydroxide MnO(OH) during discharging, preventing the formation of hydrogen at the anode of the battery. The same material also functions in newer alkaline batteries, which use the same basic reaction but a different electrolyte mixture. In 2002, more than 230,000 tons of manganese dioxide were used for this purpose. The metal is occasionally used in coins; until 2000, the only United States coin to use manganese was the "wartime" nickel, minted from 1942 to 1945. An alloy of 75% copper and 25% nickel was traditionally used for the production of nickel coins. 
However, because of a shortage of nickel metal during the war, the nickel was replaced by more readily available silver and manganese, resulting in an alloy of 56% copper, 35% silver and 9% manganese. Since 2000, dollar coins, for example the Sacagawea dollar and the Presidential $1 coins, have been made from a brass containing 7% manganese with a pure copper core. In the case of both the wartime nickel and the dollar coins, manganese was used to duplicate the electromagnetic properties of a previous identically sized and valued coin, so that the new coins would work in the mechanisms of vending machines. In the case of the later U.S. dollar coins, the manganese alloy was intended to duplicate the properties of the copper/nickel alloy used in the previous Susan B. Anthony dollar. Manganese compounds have been used as pigments and for the coloring of ceramics and glass. The brown color of ceramic is sometimes the result of manganese compounds. In the glass industry, manganese compounds are used for two effects. Manganese(III) reacts with iron(II) to reduce the strong green color in glass by forming less-colored iron(III) and slightly pink manganese(II), compensating for the residual color of the iron(III). Larger quantities of manganese are used to produce pink-colored glass. In 2009, Professor Mas Subramanian and associates at Oregon State University discovered that manganese can be combined with yttrium and indium to form an intensely blue, non-toxic, inert, fade-resistant pigment, YInMn blue, the first new blue pigment discovered in 200 years. Tetravalent manganese is used as an activator in red-emitting phosphors. While many compounds showing luminescence are known, the majority are not used in commercial applications due to low efficiency or deep red emission. However, several Mn4+-activated fluorides have been reported as potential red-emitting phosphors for warm-white LEDs; to date, only K2SiF6:Mn4+ is commercially available for use in warm-white LEDs. Manganese oxide is also used in Portland cement mixtures. The class of enzymes that have manganese cofactors is large and includes oxidoreductases, transferases, hydrolases, lyases, isomerases, ligases, lectins, and integrins. The reverse transcriptases of many retroviruses (though not lentiviruses such as HIV) contain manganese. The best-known manganese-containing polypeptides may be arginase, the diphtheria toxin, and Mn-containing superoxide dismutase (Mn-SOD). Manganese is an essential human dietary element. It is present as a coenzyme in several biological processes, which include macronutrient metabolism, bone formation, and free radical defense systems. It is a critical component in dozens of proteins and enzymes. The human body contains about 12 mg of manganese, mostly in the bones; the soft tissue remainder is concentrated in the liver and kidneys. In the human brain, manganese is bound to manganese metalloproteins, most notably glutamine synthetase in astrocytes. Excessive exposure or intake may lead to a condition known as manganism, a neurodegenerative disorder that causes dopaminergic neuronal death and symptoms similar to those of Parkinson's disease. The U.S. Institute of Medicine (IOM) updated Estimated Average Requirements (EARs) and Recommended Dietary Allowances (RDAs) for minerals in 2001. For manganese there was not sufficient information to set EARs and RDAs, so needs are described as estimates for Adequate Intakes (AIs). As for safety, the IOM sets Tolerable Upper Intake Levels (ULs) for vitamins and minerals when the evidence is sufficient. 
In the case of manganese, the adult UL is set at 11 mg/day. Collectively the EARs, RDAs, AIs and ULs are referred to as Dietary Reference Intakes (DRIs). Manganese deficiency is rare. The European Food Safety Authority (EFSA) refers to the collective set of information as Dietary Reference Values, with Population Reference Intake (PRI) instead of RDA, and Average Requirement instead of EAR; AI and UL are defined the same as in the United States. For people aged 15 and older the AI is set at 3.0 mg/day, and the AI for pregnancy and lactation is also 3.0 mg/day. For children aged 1–14 years the AIs increase with age from 0.5 to 2.0 mg/day. The adult AIs are higher than the U.S. RDAs. The EFSA reviewed the same safety question and decided that there was insufficient information to set a UL. For U.S. food and dietary supplement labeling purposes, the amount in a serving is expressed as a percent of Daily Value (%DV). For manganese labeling purposes 100% of the Daily Value was 2.0 mg, but as of May 27, 2016 it was revised to 2.3 mg to bring it into agreement with the RDA. Compliance with the updated labeling regulations was required by 1 January 2020 for manufacturers with $10 million or more in annual food sales, and by 1 January 2021 for manufacturers with less than $10 million in annual food sales. During the first six months following the 1 January 2020 compliance date, the FDA plans to work cooperatively with manufacturers to meet the new Nutrition Facts label requirements and will not focus on enforcement actions regarding these requirements during that time. A table of the old and new adult Daily Values is provided at Reference Daily Intake. Mn-SOD is the type of SOD present in eukaryotic mitochondria, and also in most bacteria (a fact in keeping with the bacterial-origin theory of mitochondria). The Mn-SOD enzyme is probably one of the most ancient, for nearly all organisms living in the presence of oxygen use it to deal with the toxic effects of superoxide (O2−), formed from the 1-electron reduction of dioxygen. The exceptions, which are all bacteria, include "Lactobacillus plantarum" and related lactobacilli, which use a different nonenzymatic mechanism with manganese (Mn2+) ions complexed with polyphosphate, suggesting a path of evolution for this function in aerobic life. Manganese is also important in photosynthetic oxygen evolution in chloroplasts in plants. The oxygen-evolving complex (OEC) is a part of photosystem II contained in the thylakoid membranes of chloroplasts; it is responsible for the terminal photooxidation of water during the light reactions of photosynthesis, and has a metalloenzyme core containing four atoms of manganese. To fulfill this requirement, most broad-spectrum plant fertilizers contain manganese. Manganese compounds are less toxic than those of other widespread metals, such as nickel and copper. However, exposure to manganese dusts and fumes should not exceed the ceiling value of 5 mg/m3 even for short periods because of the element's toxicity. Manganese poisoning has been linked to impaired motor skills and cognitive disorders. Permanganate exhibits a higher toxicity than manganese(II) compounds. The fatal dose is about 10 g, and several fatal intoxications have occurred. The strong oxidative effect leads to necrosis of the mucous membrane. For example, the esophagus is affected if the permanganate is swallowed. Only a limited amount is absorbed by the intestines, but this small amount shows severe effects on the kidneys and on the liver. 
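A small sketch of how the labeling arithmetic works (the serving content below is a hypothetical value, used only to show the calculation):

```python
# U.S. labeling: percent Daily Value (%DV) = amount per serving / Daily Value.
DAILY_VALUE_MG = 2.3  # mg, the revised manganese Daily Value quoted above

def percent_dv(mg_per_serving: float) -> int:
    """Percent of the manganese Daily Value supplied by one serving."""
    return round(100.0 * mg_per_serving / DAILY_VALUE_MG)

print(f"{percent_dv(0.5)}% DV")  # a hypothetical 0.5 mg serving ~ 22% DV
```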
Manganese exposure in the United States is regulated by the Occupational Safety and Health Administration (OSHA). People can be exposed to manganese in the workplace by breathing it in or swallowing it. OSHA has set the legal limit (permissible exposure limit) for manganese exposure in the workplace at 5 mg/m3 over an 8-hour workday. The National Institute for Occupational Safety and Health (NIOSH) has set a recommended exposure limit (REL) of 1 mg/m3 over an 8-hour workday and a short-term limit of 3 mg/m3. At levels of 500 mg/m3, manganese is immediately dangerous to life and health. Generally, exposure to ambient Mn air concentrations in excess of 5 μg Mn/m3 can lead to Mn-induced symptoms. Increased ferroportin protein expression in human embryonic kidney (HEK293) cells is associated with decreased intracellular Mn concentration and attenuated cytotoxicity, characterized by the reversal of Mn-reduced glutamate uptake and diminished lactate dehydrogenase leakage. Waterborne manganese has a greater bioavailability than dietary manganese. According to results from a 2010 study, higher levels of exposure to manganese in drinking water are associated with increased intellectual impairment and reduced intelligence quotients in school-age children. It is hypothesized that long-term exposure from inhaling the naturally occurring manganese in shower water puts up to 8.7 million Americans at risk. However, data indicate that the human body can recover from certain adverse effects of overexposure to manganese if the exposure is stopped and the body can clear the excess. Methylcyclopentadienyl manganese tricarbonyl (MMT) is a gasoline additive used to replace lead compounds in unleaded gasolines, improving the octane rating of low-octane petroleum distillates. It acts as an antiknock agent through the action of the carbonyl groups. Fuels containing manganese tend to form manganese carbides, which damage exhaust valves. Compared to 1953, levels of manganese in air have dropped. The tobacco plant readily absorbs and accumulates heavy metals such as manganese from the surrounding soil into its leaves, and these are subsequently inhaled during tobacco smoking. While manganese is a constituent of tobacco smoke, studies have largely concluded that the concentrations are not hazardous to human health. Manganese overexposure is most frequently associated with manganism, a rare neurological disorder associated with excessive manganese ingestion or inhalation. Historically, persons employed in the production or processing of manganese alloys have been at risk for developing manganism; however, current health and safety regulations protect workers in developed nations. The disorder was first described in 1837 by the British academic John Couper, who studied two patients who were manganese grinders. Manganism is a biphasic disorder. In its early stages, an intoxicated person may experience depression, mood swings, compulsive behaviors, and psychosis. Early neurological symptoms give way to late-stage manganism, which resembles Parkinson's disease. Symptoms include weakness, monotone and slowed speech, an expressionless face, tremor, forward-leaning gait, inability to walk backwards without falling, rigidity, and general problems with dexterity, gait and balance. Unlike Parkinson's disease, manganism is not associated with loss of the sense of smell, and patients are typically unresponsive to treatment with L-DOPA. 
Symptoms of late-stage manganism become more severe over time even if the source of exposure is removed and brain manganese levels return to normal. Chronic manganese exposure has been shown to produce a parkinsonism-like illness characterized by movement abnormalities. This condition is not responsive to the typical therapies used in the treatment of PD, suggesting a pathway other than the typical dopaminergic loss within the substantia nigra. Manganese may accumulate in the basal ganglia, leading to the abnormal movements. A mutation of the SLC30A10 gene, a manganese efflux transporter necessary for decreasing intracellular Mn, has been linked with the development of this parkinsonism-like disease. The Lewy bodies typical of PD are not seen in Mn-induced parkinsonism. Animal experiments have given the opportunity to examine the consequences of manganese overexposure under controlled conditions: in (non-aggressive) rats, manganese induces mouse-killing behavior. Several recent studies have attempted to examine the effects of chronic low-dose manganese overexposure on child development. The earliest study was conducted in the Chinese province of Shanxi. Drinking water there had been contaminated through improper sewage irrigation and contained 240–350 µg Mn/L. Although Mn concentrations at or below 300 µg Mn/L were considered safe at the time of the study by the US EPA, and 400 µg Mn/L by the World Health Organization, the 92 children sampled (between 11 and 13 years of age) from this province displayed lower performance on tests of manual dexterity and rapidity, short-term memory, and visual identification, compared to children from an uncontaminated area. More recently, a study of 10-year-old children in Bangladesh showed a relationship between Mn concentration in well water and diminished IQ scores. A third study, conducted in Quebec, examined school children between the ages of 6 and 15 living in homes that received water from a well containing 610 µg Mn/L; controls lived in homes that received water from a 160 µg Mn/L well. Children in the experimental group showed increased hyperactive and oppositional behavior. The current maximum safe concentration under EPA rules is 50 µg Mn/L. A protein called DMT1 is the major transporter in manganese absorption from the intestine, and may be the major transporter of manganese across the blood–brain barrier. DMT1 also transports inhaled manganese across the nasal epithelium. The proposed mechanism for manganese toxicity is that dysregulation leads to oxidative stress, mitochondrial dysfunction, glutamate-mediated excitotoxicity, and aggregation of proteins.
https://en.wikipedia.org/wiki?curid=19051
Molybdenum Molybdenum is a chemical element with the symbol Mo and atomic number 42. The name is from Neo-Latin "molybdaenum", from Ancient Greek "molybdos", meaning "lead", since its ores were confused with lead ores. Molybdenum minerals have been known throughout history, but the element was discovered (in the sense of differentiating it as a new entity from the mineral salts of other metals) in 1778 by Carl Wilhelm Scheele. The metal was first isolated in 1781 by Peter Jacob Hjelm.

Molybdenum does not occur naturally as a free metal on Earth; it is found only in various oxidation states in minerals. The free element, a silvery metal with a gray cast, has the sixth-highest melting point of any element. It readily forms hard, stable carbides in alloys, and for this reason most of the world production of the element (about 80%) is used in steel alloys, including high-strength alloys and superalloys. Most molybdenum compounds have low solubility in water, but when molybdenum-bearing minerals contact oxygen and water, the resulting molybdate ion is quite soluble. Industrially, molybdenum compounds (about 14% of world production of the element) are used in high-pressure and high-temperature applications, as pigments, and as catalysts.

In its pure form, molybdenum is a silvery-grey metal with a Mohs hardness of 5.5 and a standard atomic weight of 95.95 g/mol. It has a melting point of 2,623 °C; of the naturally occurring elements, only tantalum, osmium, rhenium, tungsten, and carbon have higher melting points. It has one of the lowest coefficients of thermal expansion among commercially used metals. Molybdenum is a transition metal with an electronegativity of 2.16 on the Pauling scale. It does not visibly react with oxygen or water at room temperature. Weak oxidation of molybdenum starts at 300 °C; bulk oxidation occurs at temperatures above 600 °C, resulting in molybdenum trioxide. Like many heavier transition metals, molybdenum shows little inclination to form a cation in aqueous solution, although the Mo3+ cation is known under carefully controlled conditions.

There are 35 known isotopes of molybdenum, ranging in atomic mass from 83 to 117, as well as four metastable nuclear isomers. Seven isotopes occur naturally, with atomic masses of 92, 94, 95, 96, 97, 98, and 100. Of these naturally occurring isotopes, only molybdenum-100 is unstable. Molybdenum-98 is the most abundant isotope, comprising 24.14% of all molybdenum. Molybdenum-100 has a half-life of about 10^19 years and undergoes double beta decay into ruthenium-100. All unstable isotopes of molybdenum decay into isotopes of niobium, technetium, and ruthenium. Of the synthetic radioisotopes, the most stable is 93Mo, with a half-life of 4,000 years. The most common isotopic molybdenum application involves molybdenum-99, which is a fission product. It is a parent radioisotope to the short-lived gamma-emitting daughter radioisotope technetium-99m, a nuclear isomer used in various imaging applications in medicine. In 2008, the Delft University of Technology applied for a patent on the molybdenum-98-based production of molybdenum-99.

Molybdenum forms chemical compounds in oxidation states from −II to +VI. Higher oxidation states are more relevant to its terrestrial occurrence and its biological roles, mid-level oxidation states are often associated with metal clusters, and very low oxidation states are typically associated with organomolybdenum compounds. Mo and W chemistry shows strong similarities.
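Since oxidation states recur throughout the compound chemistry that follows, a small sketch may help. This is ordinary charge-balance arithmetic (oxygen counted as −2, the usual convention), not a method specific to any source, and the function name is illustrative.

```python
# Charge-balance sketch for assigning the molybdenum oxidation state
# in simple oxo-species: the Mo oxidation state plus the sum of the
# oxygen contributions (-2 each) must equal the overall charge.

def mo_oxidation_state(n_oxygen: int, overall_charge: int) -> int:
    """Oxidation state of Mo in a MoO_n species with the given charge."""
    return overall_charge + 2 * n_oxygen

print(mo_oxidation_state(4, -2))  # molybdate MoO4^2-        -> +6
print(mo_oxidation_state(3, 0))   # molybdenum trioxide MoO3 -> +6
print(mo_oxidation_state(2, 0))   # molybdenum dioxide MoO2  -> +4
```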
One example of the similarity of molybdenum and tungsten chemistry, and of their contrast with chromium, is the relative rarity of molybdenum(III) compounds compared with the pervasiveness of chromium(III) compounds. The highest oxidation state is seen in molybdenum(VI) oxide (MoO3), whereas the normal sulfur compound is molybdenum disulfide, MoS2. From the perspective of commerce, the most important compounds are molybdenum disulfide (MoS2) and molybdenum trioxide (MoO3). The black disulfide is the main mineral. It is roasted in air to give the trioxide:

2 MoS2 + 7 O2 → 2 MoO3 + 4 SO2

The trioxide, which is volatile at high temperatures, is the precursor to virtually all other Mo compounds as well as alloys. Molybdenum has several oxidation states, the most stable being +4 and +6. Molybdenum(VI) oxide is soluble in strongly alkaline water, forming molybdates (MoO4^2−). Molybdates are weaker oxidants than chromates. They tend to form structurally complex oxyanions by condensation at lower pH values, such as [Mo7O24]^6− and [Mo8O26]^4−. Polymolybdates can incorporate other ions, forming polyoxometalates. The dark-blue phosphorus-containing heteropolymolybdate [PMo12O40]^3− is used for the spectroscopic detection of phosphorus. The broad range of oxidation states of molybdenum is reflected in the various molybdenum chlorides: molybdenum(VI) chloride, MoCl6, is not known, although the hexafluoride is well characterized. Like chromium and some other transition metals, molybdenum forms quadruple bonds, such as in Mo2(CH3COO)4 and [Mo2Cl8]^4−. The oxidation state 0 is possible with carbon monoxide as ligand, as in molybdenum hexacarbonyl, Mo(CO)6.

Molybdenite, the principal ore from which molybdenum is now extracted, was previously known as molybdena. Molybdena was confused with, and often used as though it were, graphite. Like graphite, molybdenite can be used to blacken a surface or as a solid lubricant. Even when molybdena was distinguishable from graphite, it was still confused with the common lead ore PbS (now called galena); the name comes from Ancient Greek "molybdos", meaning "lead". (The Greek word itself has been proposed as a loanword from the Anatolian Luvian and Lydian languages.)

Although molybdenum was reportedly deliberately alloyed with steel in one 14th-century Japanese sword (manufactured ca. 1330), that art was never employed widely and was later lost. In the West in 1754, Bengt Andersson Qvist examined a sample of molybdenite and determined that it did not contain lead and thus was not galena. By 1778 the Swedish chemist Carl Wilhelm Scheele stated firmly that molybdena was indeed neither galena nor graphite. Instead, Scheele correctly proposed that molybdena was an ore of a distinct new element, named "molybdenum" for the mineral in which it resided and from which it might be isolated. Peter Jacob Hjelm successfully isolated molybdenum using carbon and linseed oil in 1781.

For the next century, molybdenum had no industrial use. It was relatively scarce, the pure metal was difficult to extract, and the necessary techniques of metallurgy were immature. Early molybdenum steel alloys showed great promise of increased hardness, but efforts to manufacture the alloys on a large scale were hampered by inconsistent results, a tendency toward brittleness, and recrystallization. In 1906, William D.
Coolidge filed a patent for rendering molybdenum ductile, leading to applications as a heating element for high-temperature furnaces and as a support for tungsten-filament light bulbs; oxide formation and degradation require that molybdenum be physically sealed or held in an inert gas. In 1913, Frank E. Elmore developed a froth flotation process to recover molybdenite from ores; flotation remains the primary isolation process.

During World War I, demand for molybdenum spiked; it was used both in armor plating and as a substitute for tungsten in high-speed steels. Some British tanks were protected by 75 mm (3 in) manganese steel plating, but this proved to be ineffective. The manganese steel plates were replaced with much lighter molybdenum steel plates, allowing for higher speed, greater maneuverability, and better protection. The Germans also used molybdenum-doped steel for heavy artillery, such as the super-heavy howitzer Big Bertha, because traditional steel melts at the temperatures produced by the propellant of the one-ton shell. After the war, demand plummeted until metallurgical advances allowed extensive development of peacetime applications. In World War II, molybdenum again saw strategic importance as a substitute for tungsten in steel alloys.

Molybdenum is the 54th most abundant element in the Earth's crust, with an average of 1.5 parts per million, and the 25th most abundant element in its oceans, with an average of 10 parts per billion; it is the 42nd most abundant element in the Universe. The Russian Luna 24 mission discovered a molybdenum-bearing grain (1 × 0.6 µm) in a pyroxene fragment taken from Mare Crisium on the Moon. The comparative rarity of molybdenum in the Earth's crust is offset by its concentration in a number of water-insoluble ores, often combined with sulfur in the same way as copper, with which it is often found. Though molybdenum is found in such minerals as wulfenite (PbMoO4) and powellite (CaMoO4), the main commercial source is molybdenite (MoS2). Molybdenum is mined as a principal ore and is also recovered as a byproduct of copper and tungsten mining.

The world's production of molybdenum was 250,000 tonnes in 2011, the largest producers being China (94,000 t), the United States (64,000 t), Chile (38,000 t), Peru (18,000 t) and Mexico (12,000 t). The total reserves are estimated at 10 million tonnes, and are mostly concentrated in China (4.3 Mt), the US (2.7 Mt) and Chile (1.2 Mt). By continent, 93% of world molybdenum production is about evenly shared between North America, South America (mainly Chile), and China. Europe and the rest of Asia (mostly Armenia, Russia, Iran and Mongolia) produce the remainder.

In molybdenite processing, the ore is first roasted in air at about 700 °C. The process gives gaseous sulfur dioxide and the molybdenum(VI) oxide:

2 MoS2 + 7 O2 → 2 MoO3 + 4 SO2

The oxidized ore is then usually extracted with aqueous ammonia to give ammonium molybdate:

MoO3 + 2 NH3 + H2O → (NH4)2MoO4

Copper, an impurity in molybdenite, is less soluble in ammonia. To completely remove it from the solution, it is precipitated with hydrogen sulfide. Ammonium molybdate converts to ammonium dimolybdate, which is isolated as a solid. Heating this solid gives molybdenum trioxide:

(NH4)2Mo2O7 → 2 MoO3 + 2 NH3 + H2O

Crude trioxide can be further purified by sublimation at about 1,100 °C. Metallic molybdenum is produced by reduction of the oxide with hydrogen:

MoO3 + 3 H2 → Mo + 3 H2O

The molybdenum for steel production is reduced by the aluminothermic reaction with addition of iron to produce ferromolybdenum. A common form of ferromolybdenum contains 60% molybdenum.
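As a quick arithmetic check on the 2011 production figures quoted above, the following sketch computes each listed producer's share of the 250,000-tonne world total; the remainder is attributed to all other producers.

```python
# The 2011 production figures quoted above, in tonnes, and each
# country's share of the 250,000 t world total.

WORLD_TOTAL_T = 250_000
PRODUCTION_T = {
    "China": 94_000,
    "United States": 64_000,
    "Chile": 38_000,
    "Peru": 18_000,
    "Mexico": 12_000,
}

for country, tonnes in PRODUCTION_T.items():
    print(f"{country}: {100 * tonnes / WORLD_TOTAL_T:.1f}% of world output")

others = WORLD_TOTAL_T - sum(PRODUCTION_T.values())
print(f"All other producers: {100 * others / WORLD_TOTAL_T:.1f}%")
```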
Molybdenum had a value of approximately $30,000 per tonne as of August 2009. It maintained a price at or near $10,000 per tonne from 1997 through 2003, and reached a peak of $103,000 per tonne in June 2005. In 2008, the London Metal Exchange announced that molybdenum would be traded as a commodity.

Historically, the Knaben mine in southern Norway, opened in 1885, was the first dedicated molybdenum mine. It was closed in 1973 but reopened in 2007, and now produces molybdenum disulfide. Large mines in Colorado (such as the Henderson mine and the Climax mine) and in British Columbia yield molybdenite as their primary product, while many porphyry copper deposits such as the Bingham Canyon Mine in Utah and the Chuquicamata mine in northern Chile produce molybdenum as a byproduct of copper mining.

About 86% of molybdenum produced is used in metallurgy, with the rest used in chemical applications. The estimated global use is structural steel 35%, stainless steel 25%, chemicals 14%, tool and high-speed steels 9%, cast iron 6%, elemental molybdenum metal 6%, and superalloys 5%. Molybdenum can withstand extreme temperatures without significantly expanding or softening, making it useful in environments of intense heat, including military armor, aircraft parts, electrical contacts, industrial motors, and supports for filaments in light bulbs.

Most high-strength steel alloys (for example, 41xx steels) contain 0.25% to 8% molybdenum. Even at these small fractions, more than 43,000 tonnes of molybdenum are used each year in stainless steels, tool steels, cast irons, and high-temperature superalloys. Molybdenum is also valued in steel alloys for its high corrosion resistance and weldability. Molybdenum contributes corrosion resistance to type-300 stainless steels (specifically type-316), and especially so in the so-called superaustenitic stainless steels (such as alloy AL-6XN, 254SMO and 1925hMo). Molybdenum increases lattice strain, thus increasing the energy required to dissolve iron atoms from the surface. Molybdenum is also used to enhance the corrosion resistance of ferritic (for example grade 444) and martensitic (for example 1.4122 and 1.4418) stainless steels. Because of its lower density and more stable price, molybdenum is sometimes used in place of tungsten. An example is the 'M' series of high-speed steels, such as M2, M4 and M42, substituting for the 'T' steel series, which contain tungsten. Molybdenum can also be used as a flame-resistant coating for other metals. Although its melting point is 2,623 °C, molybdenum rapidly oxidizes at temperatures above 760 °C, making it better suited for use in vacuum environments.

TZM (Mo (~99%), Ti (~0.5%), Zr (~0.08%) and some C) is a corrosion-resisting molybdenum superalloy that resists molten fluoride salts at high temperatures. It has about twice the strength of pure Mo, and is more ductile and more weldable, yet in tests it resisted corrosion by a standard eutectic salt (FLiBe) and salt vapors used in molten salt reactors for 1100 hours with so little corrosion that it was difficult to measure.

Other molybdenum-based alloys that do not contain iron have only limited applications. For example, because of its resistance to molten zinc, both pure molybdenum and molybdenum-tungsten alloys (70%/30%) are used for piping, stirrers and pump impellers that come into contact with molten zinc.
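The end-use breakdown quoted above can be organized as data and sanity-checked: the non-chemical categories sum to 86%, matching the metallurgical share cited earlier. A minimal sketch; the dictionary keys are paraphrased category names.

```python
# Estimated global end uses of molybdenum, as quoted above (percent).

END_USES_PERCENT = {
    "structural steel": 35,
    "stainless steel": 25,
    "chemicals": 14,
    "tool and high-speed steels": 9,
    "cast iron": 6,
    "elemental metal": 6,
    "superalloys": 5,
}

assert sum(END_USES_PERCENT.values()) == 100
metallurgy = 100 - END_USES_PERCENT["chemicals"]
print(f"Metallurgical uses: {metallurgy}%")  # 86%, consistent with the text
```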
Molybdenum is an essential element in most organisms; a 2008 research paper speculated that a scarcity of molybdenum in the Earth's early oceans may have strongly influenced the evolution of eukaryotic life (which includes all plants and animals). At least 50 molybdenum-containing enzymes have been identified, mostly in bacteria; these include aldehyde oxidase, sulfite oxidase and xanthine oxidase. With one exception, Mo in proteins is bound by molybdopterin to give the molybdenum cofactor. In terms of function, molybdoenzymes catalyze the oxidation, and sometimes the reduction, of certain small molecules in the process of regulating nitrogen, sulfur, and carbon.

In some animals, and in humans, the oxidation of xanthine to uric acid, a process of purine catabolism, is catalyzed by xanthine oxidase, a molybdenum-containing enzyme. The activity of xanthine oxidase is directly proportional to the amount of molybdenum in the body. However, an extremely high concentration of molybdenum reverses the trend and can act as an inhibitor of both purine catabolism and other processes. Molybdenum concentration also affects protein synthesis, metabolism, and growth.

Mo is a component of most nitrogenases. Among molybdoenzymes, nitrogenases are unique in lacking the molybdopterin. Nitrogenases catalyze the production of ammonia from atmospheric nitrogen:

N2 + 8 H+ + 8 e− + 16 ATP + 16 H2O → 2 NH3 + H2 + 16 ADP + 16 Pi

The biosynthesis of the FeMoco active site is highly complex. Molybdate is transported in the body as MoO4^2−.

Molybdenum is an essential trace dietary element. Four mammalian Mo-dependent enzymes are known, all of them harboring a pterin-based molybdenum cofactor (Moco) in their active site: sulfite oxidase, xanthine oxidoreductase, aldehyde oxidase, and mitochondrial amidoxime reductase. People severely deficient in molybdenum have poorly functioning sulfite oxidase and are prone to toxic reactions to sulfites in foods. The human body contains about 0.07 mg of molybdenum per kilogram of body weight, with higher concentrations in the liver and kidneys and lower concentrations in the vertebrae. Molybdenum is also present within human tooth enamel and may help prevent its decay.

Acute toxicity has not been seen in humans, and the toxicity depends strongly on the chemical state. Studies on rats show a median lethal dose (LD50) as low as 180 mg/kg for some Mo compounds. Although human toxicity data are unavailable, animal studies have shown that chronic ingestion of more than 10 mg/day of molybdenum can cause diarrhea, growth retardation, infertility, low birth weight, and gout; it can also affect the lungs, kidneys, and liver. Sodium tungstate is a competitive inhibitor of molybdenum, and dietary tungsten reduces the concentration of molybdenum in tissues.

Low soil concentrations of molybdenum in a geographical band from northern China to Iran result in a general dietary molybdenum deficiency and are associated with increased rates of esophageal cancer. Compared to the United States, which has a greater supply of molybdenum in the soil, people living in those areas have about 16 times greater risk for esophageal squamous cell carcinoma. Molybdenum deficiency has also been reported as a consequence of non-molybdenum-supplemented total parenteral nutrition (complete intravenous feeding) for long periods of time. It results in high blood levels of sulfite and urate, in much the same way as molybdenum cofactor deficiency.
However (presumably since pure molybdenum deficiency from this cause occurs primarily in adults), the neurological consequences are not as marked as in cases of congenital cofactor deficiency. A congenital molybdenum cofactor deficiency disease, seen in infants, is an inability to synthesize molybdenum cofactor, the heterocyclic molecule discussed above that binds molybdenum at the active site in all known human enzymes that use molybdenum. The resulting deficiency leads to high levels of sulfite and urate, and to neurological damage.

High levels of molybdenum can interfere with the body's uptake of copper, producing copper deficiency. Molybdenum prevents plasma proteins from binding to copper, and it also increases the amount of copper that is excreted in urine. Ruminants that consume high levels of molybdenum suffer from diarrhea, stunted growth, anemia, and achromotrichia (loss of fur pigment). These symptoms can be alleviated by copper supplements, given either dietarily or by injection. The effective copper deficiency can be aggravated by excess sulfur.

Copper reduction or deficiency can also be deliberately induced for therapeutic purposes by the compound ammonium tetrathiomolybdate, in which the bright red tetrathiomolybdate anion is the copper-chelating agent. Tetrathiomolybdate was first used therapeutically in the treatment of copper toxicosis in animals. It was then introduced as a treatment for Wilson's disease, a hereditary copper metabolism disorder in humans; it acts both by competing with copper absorption in the bowel and by increasing excretion. It has also been found to have an inhibitory effect on angiogenesis, potentially by inhibiting the membrane translocation process that is dependent on copper ions. This is a promising avenue for investigation of treatments for cancer, age-related macular degeneration, and other diseases that involve a pathologic proliferation of blood vessels.

In 2000, the then U.S. Institute of Medicine (now the National Academy of Medicine, NAM) updated its Estimated Average Requirements (EARs) and Recommended Dietary Allowances (RDAs) for molybdenum. When there is not sufficient information to establish EARs and RDAs, an estimate designated Adequate Intake (AI) is used instead. An AI of 2 micrograms (μg) of molybdenum per day was established for infants up to 6 months of age, and 3 μg/day from 7 to 12 months of age, both for males and females. For older children and adults, the following daily RDAs have been established for molybdenum: 17 μg from 1 to 3 years of age, 22 μg from 4 to 8 years, 34 μg from 9 to 13 years, 43 μg from 14 to 18 years, and 45 μg for persons 19 years old and older. All these RDAs are valid for both sexes. Pregnant or lactating females from 14 to 50 years of age have a higher daily RDA of 50 μg of molybdenum. As for safety, the NAM sets tolerable upper intake levels (ULs) for vitamins and minerals when evidence is sufficient; in the case of molybdenum, the UL is 2000 μg/day. Collectively the EARs, RDAs, AIs and ULs are referred to as Dietary Reference Intakes (DRIs).

The European Food Safety Authority (EFSA) refers to the collective set of information as Dietary Reference Values, with Population Reference Intake (PRI) instead of RDA, and Average Requirement instead of EAR; AI and UL are defined the same as in the United States. For women and men ages 15 and older the AI is set at 65 μg/day. Pregnant and lactating women have the same AI. For children aged 1–14 years, the AIs increase with age from 15 to 45 μg/day.
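The U.S. AI and RDA values listed above lend themselves to a simple lookup table; the sketch below encodes them in Python. The age bands and values are as quoted; the function name and structure are illustrative, and ages falling between the quoted integer bands are not handled.

```python
# U.S. AI/RDA values for molybdenum, in micrograms per day, as quoted
# above. AI applies to infants; RDA applies to everyone older.

AI_INFANTS_UG = {(0, 0.5): 2, (0.5, 1): 3}           # (min age, max age): AI
RDA_BY_AGE_UG = [(1, 3, 17), (4, 8, 22), (9, 13, 34),
                 (14, 18, 43), (19, 150, 45)]        # (min age, max age, RDA)

def molybdenum_rda_ug(age_years: float,
                      pregnant_or_lactating: bool = False) -> int:
    """Return the daily AI/RDA in micrograms for a given age."""
    if pregnant_or_lactating and 14 <= age_years <= 50:
        return 50
    for (lo, hi), ai in AI_INFANTS_UG.items():
        if lo <= age_years < hi:
            return ai
    for lo, hi, rda in RDA_BY_AGE_UG:
        if lo <= age_years <= hi:
            return rda
    raise ValueError("age outside the quoted bands")

print(molybdenum_rda_ug(10))        # 34
print(molybdenum_rda_ug(30, True))  # 50
```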
The adult AIs are higher than the U.S. RDAs, but on the other hand, the European Food Safety Authority reviewed the same safety question and set its UL at 600 μg/day, which is much lower than the U.S. value.

For U.S. food and dietary supplement labeling purposes, the amount in a serving is expressed as a percent of Daily Value (%DV). For molybdenum labeling purposes, 100% of the Daily Value was 75 μg, but as of May 27, 2016, it was revised to 45 μg. Compliance with the updated labeling regulations was required by 1 January 2020 for manufacturers with $10 million or more in annual food sales, and by 1 January 2021 for manufacturers with less than $10 million in annual food sales. During the first six months following the 1 January 2020 compliance date, the FDA plans to work cooperatively with manufacturers to meet the new Nutrition Facts label requirements and will not focus on enforcement actions regarding these requirements during that time. A table of the old and new adult Daily Values is provided at Reference Daily Intake.

Average daily intake varies between 120 and 240 μg/day, which is higher than the dietary recommendations. Pork, lamb, and beef liver each have approximately 1.5 parts per million of molybdenum. Other significant dietary sources include green beans, eggs, sunflower seeds, wheat flour, lentils, cucumbers, and cereal grain.

Molybdenum dusts and fumes, generated by mining or metalworking, can be toxic, especially if ingested (including dust trapped in the sinuses and later swallowed). Low levels of prolonged exposure can cause irritation to the eyes and skin. Direct inhalation or ingestion of molybdenum and its oxides should be avoided. OSHA regulations specify the maximum permissible molybdenum exposure in an 8-hour day as 5 mg/m3. Chronic exposure to 60 to 600 mg/m3 can cause symptoms including fatigue, headaches and joint pains. At levels of 5000 mg/m3, molybdenum is immediately dangerous to life and health.
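Label arithmetic with the revised Daily Value is a one-line computation; the sketch below uses the 45 μg figure quoted above (the function name is illustrative).

```python
# Percent Daily Value arithmetic for U.S. labels, using the revised
# 45 ug Daily Value for molybdenum mentioned above.

DAILY_VALUE_UG = 45.0  # revised from 75 ug as of May 27, 2016

def percent_dv(amount_per_serving_ug: float) -> float:
    """Percent of the Daily Value supplied by one serving."""
    return 100.0 * amount_per_serving_ug / DAILY_VALUE_UG

# A typical daily intake of 120-240 ug works out to well over 100% DV:
print(f"{percent_dv(120):.0f}% to {percent_dv(240):.0f}%")  # 267% to 533%
```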
https://en.wikipedia.org/wiki?curid=19052
Mineral A mineral is, broadly speaking, a solid chemical compound that occurs naturally in pure form. Minerals are most commonly encountered as constituents of rocks. A rock may consist of one type of mineral, or may be an aggregate of two or more different types of minerals, spatially segregated into distinct phases. Compounds that occur only in living beings are usually excluded, but some minerals are often biogenic (such as calcite) or are organic compounds in the sense of chemistry (such as mellite). Moreover, living beings often synthesize inorganic minerals (such as hydroxylapatite) that also occur in rocks.

In geology and mineralogy, the term "mineral" is usually reserved for mineral species: crystalline compounds with a fairly well-defined chemical composition and a specific crystal structure. Some natural solid substances without a definite crystalline structure, such as opal or obsidian, are then more properly called mineraloids. If a chemical compound may occur naturally with different crystal structures, each structure is considered a different mineral species. Thus, for example, quartz and stishovite are two different minerals consisting of the same compound, silicon dioxide.

The International Mineralogical Association (IMA) is the world's premier standard body for the definition and nomenclature of mineral species. The IMA recognizes 5,562 official mineral species out of more than 5,750 proposed or traditional ones. The chemical composition of a named mineral species may vary somewhat by the inclusion of small amounts of impurities. Specific varieties of a species sometimes have conventional or official names of their own. For example, amethyst is a purple variety of the mineral species quartz. Some mineral species can have variable proportions of two or more chemical elements that occupy equivalent positions in the mineral's structure; for example, the formula of mackinawite is given as (Fe,Ni)9S8, meaning FexNi9−xS8, where "x" is a variable number between 0 and 9. Sometimes a mineral with variable composition is split into separate species, more or less arbitrarily, forming a mineral group; that is the case of the silicates Mg2SiO4 (forsterite) and Fe2SiO4 (fayalite), the olivine group.

Besides the essential chemical composition and crystal structure, the description of a mineral species usually includes its common physical properties, such as habit, hardness, lustre, diaphaneity, colour, streak, tenacity, cleavage, fracture, parting, specific gravity, magnetism, fluorescence, and radioactivity, as well as its taste or smell and its reaction to acid.

Minerals are classified by key chemical constituents; the two dominant systems are the Dana classification and the Strunz classification. Silicate minerals comprise approximately 90% of the Earth's crust. Other important mineral groups include the native elements, sulfides, oxides, halides, carbonates, sulfates, and phosphates.

One definition of a mineral encompasses the following criteria: it occurs naturally; it is stable at room temperature; it is representable by a chemical formula; it is usually abiogenic; and it has an ordered atomic arrangement. The first three general characteristics are less debated than the last two. Mineral classification schemes and their definitions are evolving to match recent advances in mineral science. Recent changes have included the addition of an organic class, in both the new Dana and the Strunz classification schemes. The organic class includes a very rare group of minerals with hydrocarbons.
In 2009, the IMA Commission on New Minerals and Mineral Names adopted a hierarchical scheme for the naming and classification of mineral groups and group names, and established seven commissions and four working groups to review and classify minerals into an official listing of their published names. According to these new rules, "mineral species can be grouped in a number of different ways, on the basis of chemistry, crystal structure, occurrence, association, genetic history, or resource, for example, depending on the purpose to be served by the classification."

Ernest Nickel's (1995) exclusion of biogenic substances has not been universally adhered to. For example, Lowenstam (1981) stated that "organisms are capable of forming a diverse array of minerals, some of which cannot be formed inorganically in the biosphere." The distinction is a matter of classification and has less to do with the constituents of the minerals themselves. Skinner (2005) views all solids as potential minerals and includes biominerals in the mineral kingdom: those that are created by the metabolic activities of organisms. Skinner expanded the previous definition of a mineral, classifying as a mineral any "element or compound, amorphous or crystalline, formed through biogeochemical processes."

Recent advances in high-resolution genetics and X-ray absorption spectroscopy are providing revelations on the biogeochemical relations between microorganisms and minerals that may make Nickel's (1995) biogenic mineral exclusion obsolete and Skinner's (2005) biogenic mineral inclusion a necessity. For example, the IMA-commissioned Working Group on Environmental Mineralogy and Geochemistry deals with minerals in the hydrosphere, atmosphere, and biosphere. The group's scope includes mineral-forming microorganisms, which exist on nearly every rock, soil, and particle surface spanning the globe to depths of at least 1600 metres below the sea floor and 70 kilometres into the stratosphere (possibly entering the mesosphere). Biogeochemical cycles have contributed to the formation of minerals for billions of years. Microorganisms can precipitate metals from solution, contributing to the formation of ore deposits. They can also catalyze the dissolution of minerals.

Prior to the International Mineralogical Association's listing, over 60 biominerals had been discovered, named, and published. These minerals (a subset tabulated in Lowenstam (1981)) are considered minerals proper according to Skinner's (2005) definition. These biominerals are not listed in the International Mineralogical Association's official list of mineral names; however, many of these biomineral representatives are distributed among the 78 mineral classes listed in the Dana classification scheme.

Another rare class of minerals (primarily biological in origin) comprises the mineral liquid crystals, which have properties of both liquids and crystals. To date, over 80,000 liquid crystalline compounds have been identified. Skinner's (2005) definition of a mineral takes this matter into account by stating that a mineral can be crystalline or amorphous, the latter group including liquid crystals. Although biominerals and liquid mineral crystals are not the most common forms of minerals, they help to define the limits of what constitutes a mineral proper. Nickel's (1995) formal definition explicitly mentioned crystallinity as a key to defining a substance as a mineral.
A 2011 article defined icosahedrite, an aluminium-iron-copper alloy, as a mineral; named for its unique natural icosahedral symmetry, it is a quasicrystal. Unlike a true crystal, quasicrystals are ordered but not periodic.

Minerals are not equivalent to rocks. A rock is an aggregate of one or more minerals or mineraloids. Some rocks, such as limestone or quartzite, are composed primarily of one mineral: calcite or aragonite in the case of limestone, and quartz in the latter case. Other rocks can be defined by relative abundances of key (essential) minerals; a granite is defined by proportions of quartz, alkali feldspar, and plagioclase feldspar. The other minerals in the rock are termed "accessory minerals", and do not greatly affect the bulk composition of the rock. Rocks can also be composed entirely of non-mineral material; coal is a sedimentary rock composed primarily of organically derived carbon.

In rocks, some mineral species and groups are much more abundant than others; these are termed the rock-forming minerals. The major examples are quartz, the feldspars, the micas, the amphiboles, the pyroxenes, the olivines, and calcite; except for the last one, all of these minerals are silicates. Overall, around 150 minerals are considered particularly important, whether in terms of their abundance or their aesthetic value as collectibles.

Commercially valuable minerals and rocks are referred to as industrial minerals. For example, muscovite, a white mica, can be used for windows (sometimes referred to as isinglass), as a filler, or as an insulator. Ores are minerals that have a high concentration of a certain element, typically a metal. Examples are cinnabar (HgS), an ore of mercury; sphalerite (ZnS), an ore of zinc; and cassiterite (SnO2), an ore of tin. Gems are minerals with an ornamental value, and are distinguished from non-gems by their beauty, durability, and, usually, rarity. There are about 20 mineral species that qualify as gem minerals, which constitute about 35 of the most common gemstones. Gem minerals are often present in several varieties, and so one mineral can account for several different gemstones; for example, ruby and sapphire are both corundum, Al2O3.

Minerals are classified by variety, species, series and group, in order of increasing generality. The basic level of definition is that of mineral species, each of which is distinguished from the others by unique chemical and physical properties. For example, quartz is defined by its formula, SiO2, and a specific crystalline structure that distinguishes it from other minerals with the same chemical formula (termed polymorphs). When there exists a range of composition between two mineral species, a mineral series is defined. For example, the biotite series is represented by variable amounts of the endmembers phlogopite, siderophyllite, annite, and eastonite. In contrast, a mineral group is a grouping of mineral species with some common chemical properties that share a crystal structure. The pyroxene group has a common formula of XY(Si,Al)2O6, where X and Y are both cations, with X typically bigger than Y; the pyroxenes are single-chain silicates that crystallize in either the orthorhombic or monoclinic crystal systems. Finally, a mineral variety is a specific type of mineral species that differs by some physical characteristic, such as colour or crystal habit. An example is amethyst, which is a purple variety of quartz.
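The variety/species/series/group hierarchy described above maps naturally onto a small data model; here is a minimal Python sketch. The class and field names are illustrative; quartz/amethyst are taken from the text, and the plagioclase endmember formulas quoted are the ones given later in this article.

```python
# A minimal data model for the variety/species/series hierarchy
# described above. Illustrative only; not any formal IMA schema.

from dataclasses import dataclass, field

@dataclass
class MineralSpecies:
    name: str
    formula: str
    varieties: list[str] = field(default_factory=list)  # e.g. colour varieties

@dataclass
class MineralSeries:
    """A compositional range between endmember species."""
    name: str
    endmembers: list[MineralSpecies]

quartz = MineralSpecies("quartz", "SiO2", varieties=["amethyst"])
plagioclase = MineralSeries("plagioclase", [
    MineralSpecies("albite", "NaAlSi3O8"),
    MineralSpecies("anorthite", "CaAl2Si2O8"),
])
```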
Two common classifications, Dana and Strunz, are used for minerals; both rely on composition, specifically with regard to important chemical groups, and on structure. James Dwight Dana, a leading geologist of his time, first published his "System of Mineralogy" in 1837; as of 1997, it is in its eighth edition. The Dana classification assigns a four-part number to a mineral species. Its class number is based on important compositional groups; the type gives the ratio of cations to anions in the mineral; and the last two numbers group minerals by structural similarity within a given type or class. The less commonly used Strunz classification, named for the German mineralogist Karl Hugo Strunz, is based on the Dana system, but combines both chemical and structural criteria, the latter with regard to the distribution of chemical bonds.

In total, 5,562 mineral species are approved by the IMA. They are most commonly named after a person, followed by discovery location; names based on chemical composition or physical properties are the two other major groups of mineral name etymologies. The word "species" (from the Latin "species", "a particular sort, kind, or type with distinct look, or appearance") comes from the classification scheme in "Systema Naturae" by Carl Linnaeus. He divided the natural world into three kingdoms – plants, animals, and minerals – and classified each with the same hierarchy. In descending order, these were Phylum, Class, Order, Family, Tribe, Genus, and Species.

The abundance and diversity of minerals is controlled directly by their chemistry, in turn dependent on elemental abundances in the Earth. The majority of minerals observed are derived from the Earth's crust. Eight elements account for most of the key components of minerals, due to their abundance in the crust. These eight elements, summing to over 98% of the crust by weight, are, in order of decreasing abundance: oxygen, silicon, aluminium, iron, magnesium, calcium, sodium and potassium. Oxygen and silicon are by far the two most important: oxygen composes 47% of the crust by weight, and silicon accounts for 28%.

The minerals that form are directly controlled by the bulk chemistry of the parent body. For example, a magma rich in iron and magnesium will form mafic minerals, such as olivine and the pyroxenes; in contrast, a more silica-rich magma will crystallize to form minerals that incorporate more SiO2, such as the feldspars and quartz. In a limestone, calcite or aragonite (both CaCO3) form because the rock is rich in calcium and carbonate. A corollary is that a mineral will not be found in a rock whose bulk chemistry does not resemble the bulk chemistry of that mineral, except as a trace mineral. For example, kyanite, Al2SiO5, forms from the metamorphism of aluminium-rich shales; it would not likely occur in aluminium-poor rock, such as quartzite.

The chemical composition may vary between endmember species of a solid solution series. For example, the plagioclase feldspars comprise a continuous series from the sodium-rich endmember albite (NaAlSi3O8) to the calcium-rich anorthite (CaAl2Si2O8), with four recognized intermediate varieties between them (given in order from sodium- to calcium-rich): oligoclase, andesine, labradorite, and bytownite. Other examples of series include the olivine series of magnesium-rich forsterite and iron-rich fayalite, and the wolframite series of manganese-rich hübnerite and iron-rich ferberite. Chemical substitution and coordination polyhedra explain this common feature of minerals.
In nature, minerals are not pure substances, and are contaminated by whatever other elements are present in the given chemical system. As a result, it is possible for one element to be substituted for another. Chemical substitution will occur between ions of similar size and charge; for example, K+ will not substitute for Si4+, because of chemical and structural incompatibilities caused by the big difference in size and charge. A common example of chemical substitution is that of Si4+ by Al3+, which are close in charge, size, and abundance in the crust. In the example of plagioclase, there are three cases of substitution. Feldspars are all framework silicates, which have a silicon-oxygen ratio of 2:1, and the space for other elements is given by the substitution of Si4+ by Al3+ to give a base unit of [AlSi3O8]−; without the substitution, the formula would be charge-balanced as SiO2, giving quartz. The significance of this structural property will be explained further by coordination polyhedra. The second substitution occurs between Na+ and Ca2+; however, the difference in charge has to be accounted for by making a second substitution of Si4+ by Al3+.

Coordination polyhedra are geometric representations of how a cation is surrounded by an anion. In mineralogy, coordination polyhedra are usually considered in terms of oxygen, due to its abundance in the crust. The base unit of silicate minerals is the silica tetrahedron: one Si4+ surrounded by four O2−. An alternative way of describing the coordination of the silicate is by a number: in the case of the silica tetrahedron, the silicon is said to have a coordination number of 4. Various cations have a specific range of possible coordination numbers; for silicon, it is almost always 4, except for very high-pressure minerals where the compound is compressed such that silicon is in six-fold (octahedral) coordination with oxygen. Bigger cations have bigger coordination numbers, because of the increase in relative size as compared to oxygen (the last orbital subshell of heavier atoms is different too). Changes in coordination number lead to physical and mineralogical differences; for example, at high pressure, such as in the mantle, many minerals, especially silicates such as olivine and garnet, will change to a perovskite structure, where silicon is in octahedral coordination. Other examples are the aluminosilicates kyanite, andalusite, and sillimanite (polymorphs, since they share the formula Al2SiO5), which differ by the coordination number of the Al3+; these minerals transition from one to another as a response to changes in pressure and temperature. In the case of silicate materials, the substitution of Si4+ by Al3+ allows for a variety of minerals because of the need to balance charges.

Changes in temperature, pressure, and composition alter the mineralogy of a rock sample. Changes in composition can be caused by processes such as weathering or metasomatism (hydrothermal alteration). Changes in temperature and pressure occur when the host rock undergoes tectonic or magmatic movement into differing physical regimes. Changes in thermodynamic conditions make it favourable for mineral assemblages to react with each other to produce new minerals; as such, it is possible for two rocks to have an identical or a very similar bulk rock chemistry without having a similar mineralogy. This process of mineralogical alteration is related to the rock cycle. An example of a series of mineral reactions is illustrated as follows.
Orthoclase feldspar (KAlSi3O8) is a mineral commonly found in granite, a plutonic igneous rock. When exposed to weathering, it reacts to form kaolinite (Al2Si2O5(OH)4, a sedimentary mineral) and silicic acid:

2 KAlSi3O8 + 2 H+ + 9 H2O → Al2Si2O5(OH)4 + 4 H4SiO4 + 2 K+

Under low-grade metamorphic conditions, kaolinite reacts with quartz to form pyrophyllite (Al2Si4O10(OH)2):

Al2Si2O5(OH)4 + 2 SiO2 → Al2Si4O10(OH)2 + H2O

As metamorphic grade increases, the pyrophyllite reacts to form kyanite and quartz:

Al2Si4O10(OH)2 → Al2SiO5 + 3 SiO2 + H2O

Alternatively, a mineral may change its crystal structure as a consequence of changes in temperature and pressure without reacting. For example, quartz will change into a variety of its SiO2 polymorphs, such as tridymite and cristobalite at high temperatures, and coesite at high pressures.

Classifying minerals ranges from simple to difficult. A mineral can be identified by several physical properties, some of them being sufficient for full identification without equivocation. In other cases, minerals can only be classified by more complex optical, chemical or X-ray diffraction analysis; these methods, however, can be costly and time-consuming. Physical properties applied for classification include crystal structure and habit, hardness, lustre, diaphaneity, colour, streak, cleavage and fracture, and specific gravity. Other, less general tests include fluorescence, phosphorescence, magnetism, radioactivity, tenacity (response to mechanically induced changes of shape or form), piezoelectricity and reactivity to dilute acids.

Crystal structure results from the orderly geometric spatial arrangement of atoms in the internal structure of a mineral. This crystal structure is based on a regular internal atomic or ionic arrangement that is often expressed in the geometric form that the crystal takes. Even when the mineral grains are too small to see or are irregularly shaped, the underlying crystal structure is always periodic and can be determined by X-ray diffraction. Minerals are typically described by their symmetry content. Crystals are restricted to 32 point groups, which differ by their symmetry. These groups are classified in turn into more broad categories, the most encompassing of these being the six crystal families. These families can be described by the relative lengths of the three crystallographic axes, and the angles between them; these relationships correspond to the symmetry operations that define the narrower point groups. They are summarized below; a, b, and c represent the axes, and α, β, γ represent the angle opposite the respective crystallographic axis (e.g. α is the angle opposite the a-axis, viz. the angle between the b and c axes):

- Isometric (cubic): a = b = c; α = β = γ = 90°
- Tetragonal: a = b ≠ c; α = β = γ = 90°
- Orthorhombic: a ≠ b ≠ c; α = β = γ = 90°
- Hexagonal: a = b ≠ c; α = β = 90°, γ = 120°
- Monoclinic: a ≠ b ≠ c; α = γ = 90°, β ≠ 90°
- Triclinic: a ≠ b ≠ c; α, β, γ unequal, none required to be 90°

The hexagonal crystal family is also split into two crystal systems: the trigonal, which has a three-fold axis of symmetry, and the hexagonal, which has a six-fold axis of symmetry.

Chemistry and crystal structure together define a mineral. With a restriction to 32 point groups, minerals of different chemistry may have identical crystal structure. For example, halite (NaCl), galena (PbS), and periclase (MgO) all belong to the hexoctahedral point group (isometric family), as they have a similar stoichiometry between their different constituent elements. In contrast, polymorphs are groupings of minerals that share a chemical formula but have a different structure. For example, pyrite and marcasite, both iron sulfides, have the formula FeS2; however, the former is isometric while the latter is orthorhombic. This polymorphism extends to other sulfides with the generic AX2 formula; these two groups are collectively known as the pyrite and marcasite groups.
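The axis-and-angle relationships listed above can be turned into a rough classifier; the sketch below assigns a unit cell to a crystal family from its lattice parameters. It is illustrative only: the tolerances and names are invented, rhombohedral cell settings are not handled, and the trigonal/hexagonal split requires symmetry information beyond cell geometry.

```python
import math

# Rough assignment of a unit cell (axis lengths a, b, c and angles
# alpha, beta, gamma in degrees) to one of the six crystal families.

def crystal_family(a: float, b: float, c: float,
                   alpha: float, beta: float, gamma: float,
                   tol: float = 1e-3) -> str:
    eq = lambda x, y: math.isclose(x, y, rel_tol=tol)
    right = lambda ang: eq(ang, 90.0)
    if eq(a, b) and eq(b, c) and all(map(right, (alpha, beta, gamma))):
        return "isometric (cubic)"
    if eq(a, b) and not eq(a, c) and all(map(right, (alpha, beta, gamma))):
        return "tetragonal"
    if eq(a, b) and right(alpha) and right(beta) and eq(gamma, 120.0):
        return "hexagonal family"
    if all(map(right, (alpha, beta, gamma))):
        return "orthorhombic"
    if right(alpha) and right(gamma) and not right(beta):
        return "monoclinic"
    return "triclinic"

# Quartz (a = b = 4.91 A, c = 5.41 A, gamma = 120 degrees):
print(crystal_family(4.91, 4.91, 5.41, 90, 90, 120))  # hexagonal family
```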
Polymorphism can extend beyond pure symmetry content. The aluminosilicates are a group of three minerals – kyanite, andalusite, and sillimanite – which share the chemical formula Al2SiO5. Kyanite is triclinic, while andalusite and sillimanite are both orthorhombic and belong to the dipyramidal point group. These differences arise from how aluminium is coordinated within the crystal structure. In all three minerals, one aluminium ion is always in six-fold coordination with oxygen. Silicon, as a general rule, is in four-fold coordination in all minerals; an exception is a case like stishovite (SiO2, an ultra-high-pressure quartz polymorph with the rutile structure). In kyanite, the second aluminium is in six-fold coordination; its chemical formula can be expressed as Al[6]Al[6]SiO5, to reflect its crystal structure. Andalusite has the second aluminium in five-fold coordination (Al[6]Al[5]SiO5), and sillimanite has it in four-fold coordination (Al[6]Al[4]SiO5).

Differences in crystal structure and chemistry greatly influence other physical properties of the mineral. The carbon allotropes diamond and graphite have vastly different properties; diamond is the hardest natural substance, has an adamantine lustre, and belongs to the isometric crystal family, whereas graphite is very soft, has a greasy lustre, and crystallises in the hexagonal family. This difference is accounted for by differences in bonding. In diamond, the carbons are in sp3 hybrid orbitals, which means they form a framework where each carbon is covalently bonded to four neighbours in a tetrahedral fashion; on the other hand, graphite is composed of sheets of carbons in sp2 hybrid orbitals, where each carbon is bonded covalently to only three others. These sheets are held together by much weaker van der Waals forces, and this discrepancy translates to large macroscopic differences.

Twinning is the intergrowth of two or more crystals of a single mineral species. The geometry of the twinning is controlled by the mineral's symmetry. As a result, there are several types of twins, including contact twins, reticulated twins, geniculated twins, penetration twins, cyclic twins, and polysynthetic twins. Contact, or simple, twins consist of two crystals joined at a plane; this type of twinning is common in spinel. Reticulated twins, common in rutile, are interlocking crystals resembling netting. Geniculated twins have a bend in the middle caused by the start of the twin. Penetration twins consist of two single crystals that have grown into each other; examples of this twinning include cross-shaped staurolite twins and Carlsbad twinning in orthoclase. Cyclic twins are caused by repeated twinning around a rotation axis. This type of twinning occurs around three-, four-, five-, six-, or eight-fold axes, and the corresponding patterns are called threelings, fourlings, fivelings, sixlings, and eightlings. Sixlings are common in aragonite. Polysynthetic twins are similar to cyclic twins in the presence of repetitive twinning; however, instead of occurring around a rotational axis, polysynthetic twinning occurs along parallel planes, usually on a microscopic scale.

Crystal habit refers to the overall shape of a crystal. Several terms are used to describe this property.
Common habits include acicular, which describes needlelike crystals, as in natrolite; bladed; dendritic (tree-pattern, common in native copper); equant, which is typical of garnet; prismatic (elongated in one direction); and tabular, which differs from the bladed habit in that the former is platy whereas the latter has a defined elongation. Related to crystal form, the quality of crystal faces is diagnostic of some minerals, especially with a petrographic microscope. Euhedral crystals have a defined external shape, while anhedral crystals do not; intermediate forms are termed subhedral.

The hardness of a mineral defines how much it can resist scratching. This physical property is controlled by the chemical composition and crystalline structure of a mineral. A mineral's hardness is not necessarily constant for all sides, which is a function of its structure; crystallographic weakness renders some directions softer than others. An example of this property exists in kyanite, which has a Mohs hardness of 5½ parallel to [001] but 7 parallel to [100]. The most common scale of measurement is the ordinal Mohs hardness scale. Defined by ten indicators, a mineral with a higher index scratches those below it. The scale ranges from talc, a phyllosilicate, to diamond, a carbon polymorph that is the hardest natural material. The scale is provided below: 1 talc, 2 gypsum, 3 calcite, 4 fluorite, 5 apatite, 6 orthoclase, 7 quartz, 8 topaz, 9 corundum, 10 diamond.

Lustre indicates how light reflects from the mineral's surface, with regard to its quality and intensity. There are numerous qualitative terms used to describe this property, which are split into metallic and non-metallic categories. Metallic and sub-metallic minerals have high reflectivity, like metal; examples of minerals with this lustre are galena and pyrite. Non-metallic lustres include: adamantine, such as in diamond; vitreous, which is a glassy lustre very common in silicate minerals; pearly, such as in talc and apophyllite; resinous, such as in members of the garnet group; and silky, which is common in fibrous minerals such as asbestiform chrysotile.

The diaphaneity of a mineral describes the ability of light to pass through it. Transparent minerals do not diminish the intensity of light passing through them. An example of a transparent mineral is muscovite (potassium mica); some varieties are sufficiently clear to have been used for windows. Translucent minerals allow some light to pass, but less than those that are transparent. Jadeite and nephrite (mineral forms of jade) are examples of minerals with this property. Minerals that do not allow light to pass are called opaque. The diaphaneity of a mineral depends on the thickness of the sample. When a mineral is sufficiently thin (e.g., in a thin section for petrography), it may become transparent even if that property is not seen in a hand sample. In contrast, some minerals, such as hematite or pyrite, are opaque even in thin section.

Colour is the most obvious property of a mineral, but it is often non-diagnostic. It is caused by electromagnetic radiation interacting with electrons (except in the case of incandescence, which does not apply to minerals). Two broad classes of elements, idiochromatic and allochromatic, are defined with regard to their contribution to a mineral's colour. Idiochromatic elements are essential to a mineral's composition; their contribution to a mineral's colour is diagnostic. Examples of such minerals are malachite (green) and azurite (blue). In contrast, allochromatic elements in minerals are present in trace amounts as impurities.
An example of such a mineral would be the ruby and sapphire varieties of the mineral corundum. The colours of pseudochromatic minerals are the result of interference of light waves. Examples include labradorite and bornite.

In addition to simple body colour, minerals can have various other distinctive optical properties, such as play of colours, asterism, chatoyancy, iridescence, tarnish, and pleochroism. Several of these properties involve variability in colour. Play of colour, such as in opal, results in the sample reflecting different colours as it is turned, while pleochroism describes the change in colour as light passes through a mineral in different orientations. Iridescence is a variety of the play of colours where light scatters off a coating on the surface of a crystal, off cleavage planes, or off layers having minor gradations in chemistry. In contrast, the play of colours in opal is caused by light refracting from ordered microscopic silica spheres within its physical structure. Chatoyancy ("cat's eye") is the wavy banding of colour that is observed as the sample is rotated; asterism, a variety of chatoyancy, gives the appearance of a star on the mineral grain. The latter property is particularly common in gem-quality corundum.

The streak of a mineral refers to the colour of a mineral in powdered form, which may or may not be identical to its body colour. The most common way of testing this property is with a streak plate, which is made out of porcelain and coloured either white or black. The streak of a mineral is independent of trace elements or any weathering surface. A common example of this property is illustrated by hematite, which is coloured black, silver, or red in hand sample, but has a cherry-red to reddish-brown streak. Streak is more often distinctive for metallic minerals, in contrast to non-metallic minerals whose body colour is created by allochromatic elements. Streak testing is constrained by the hardness of the mineral, as minerals harder than 7 powder the streak plate instead.

By definition, minerals have a characteristic atomic arrangement. Weakness in this crystalline structure causes planes of weakness, and the breakage of a mineral along such planes is termed cleavage. The quality of cleavage can be described based on how cleanly and easily the mineral breaks; common descriptors, in order of decreasing quality, are "perfect", "good", "distinct", and "poor". In particularly transparent minerals, or in thin section, cleavage can be seen as a series of parallel lines marking the planar surfaces when viewed from the side. Cleavage is not a universal property among minerals; for example, quartz, consisting of extensively interconnected silica tetrahedra, does not have a crystallographic weakness that would allow it to cleave. In contrast, micas, which have perfect basal cleavage, consist of sheets of silica tetrahedra that are very weakly held together.

As cleavage is a function of crystallography, there are a variety of cleavage types. Cleavage occurs typically in one, two, three, four, or six directions. Basal cleavage in one direction is a distinctive property of the micas. Two-directional cleavage is described as prismatic, and occurs in minerals such as the amphiboles and pyroxenes. Minerals such as galena or halite have cubic (or isometric) cleavage in three directions at 90°; when three directions of cleavage are present but not at 90°, such as in calcite or rhodochrosite, it is termed rhombohedral cleavage.
Octahedral cleavage (four directions) is present in fluorite and diamond, and sphalerite has six-directional dodecahedral cleavage. Minerals with many cleavages might not break equally well in all of the directions; for example, calcite has good cleavage in three directions, but gypsum has perfect cleavage in one direction and poor cleavage in two other directions. Angles between cleavage planes vary between minerals. For example, as the amphiboles are double-chain silicates and the pyroxenes are single-chain silicates, the angle between their cleavage planes is different. The pyroxenes cleave in two directions at approximately 90°, whereas the amphiboles distinctively cleave in two directions separated by approximately 120° and 60°. The cleavage angles can be measured with a contact goniometer, which is similar to a protractor.

Parting, sometimes called "false cleavage", is similar in appearance to cleavage, but is instead produced by structural defects in the mineral rather than systematic weakness. Parting varies from crystal to crystal of a mineral, whereas all crystals of a given mineral will cleave if the atomic structure allows for that property. In general, parting is caused by some stress applied to a crystal. The sources of the stresses include deformation (e.g. an increase in pressure), exsolution, or twinning. Minerals that often display parting include the pyroxenes, hematite, magnetite, and corundum.

When a mineral is broken in a direction that does not correspond to a plane of cleavage, it is said to have fractured. There are several types of uneven fracture. The classic example is conchoidal fracture, like that of quartz; rounded surfaces are created, which are marked by smooth curved lines. This type of fracture occurs only in very homogeneous minerals. Other types of fracture are fibrous, splintery, and hackly. The latter describes a break along a rough, jagged surface; an example of this property is found in native copper. Tenacity is related to both cleavage and fracture. Whereas fracture and cleavage describe the surfaces that are created when a mineral is broken, tenacity describes how resistant a mineral is to such breaking. Minerals can be described as brittle, ductile, malleable, sectile, flexible, or elastic.

Specific gravity numerically describes the density of a mineral. The dimensions of density are mass divided by volume, with units of kg/m3 or g/cm3. Specific gravity is measured by how much water a mineral sample displaces: defined as the quotient of the mass of the sample and the difference between the weight of the sample in air and its corresponding weight in water, specific gravity is a unitless ratio. Among most minerals, this property is not diagnostic. Rock-forming minerals, typically silicates or occasionally carbonates, have a specific gravity of 2.5–3.5. A high specific gravity, however, can be a diagnostic property of a mineral. A variation in chemistry (and consequently, mineral class) correlates to a change in specific gravity. Among more common minerals, oxides and sulfides tend to have a higher specific gravity, as they include elements with higher atomic mass. A generalization is that minerals with metallic or adamantine lustre tend to have higher specific gravities than those having a non-metallic to dull lustre. For example, hematite, Fe2O3, has a specific gravity of 5.26, while galena, PbS, has a specific gravity of 7.2–7.6, a result of their high iron and lead content, respectively.
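The hydrostatic-weighing definition of specific gravity given above is easy to express directly. A minimal sketch; the example weights are invented to reproduce hematite's quoted value of 5.26.

```python
# Specific gravity from hydrostatic weighing, as defined above: the
# sample's weight in air divided by the difference between its weight
# in air and its weight suspended in water.

def specific_gravity(weight_air_g: float, weight_water_g: float) -> float:
    return weight_air_g / (weight_air_g - weight_water_g)

# Example: a sample weighing 52.6 g in air and 42.6 g in water
# gives 52.6 / (52.6 - 42.6) = 5.26, matching hematite's quoted value.
print(specific_gravity(52.6, 42.6))
```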
A very high specific gravity becomes especially pronounced in native metals; kamacite, an iron-nickel alloy common in iron meteorites, has a specific gravity of 7.9, and gold has an observed specific gravity between 15 and 19.3. Other properties can be used to diagnose minerals. These are less general, and apply to specific minerals. Dropping dilute acid (often 10% HCl) onto a mineral aids in distinguishing carbonates from other mineral classes. The acid reacts with the carbonate ([CO3]2−) group, which causes the affected area to effervesce, giving off carbon dioxide gas. This test can be further expanded to test the mineral in its original crystal form or in powdered form. This test is used, for example, to distinguish calcite from dolomite, especially within the rocks they dominate (limestone and dolostone, respectively). Calcite immediately effervesces in acid, whereas acid must be applied to powdered dolomite (often to a scratched surface in a rock) for it to effervesce. Zeolite minerals will not effervesce in acid; instead, they become frosted after 5–10 minutes, and if left in acid for a day, they dissolve or become a silica gel. Magnetism is a very conspicuous property of the few minerals that exhibit it. Among common minerals, magnetite exhibits this property strongly, and magnetism is also present, albeit not as strongly, in pyrrhotite and ilmenite. Some minerals exhibit electrical properties – for example, quartz is piezoelectric – but electrical properties are rarely used as diagnostic criteria for minerals because of incomplete data and natural variation. Minerals can also be tested for taste or smell. Halite, NaCl, is table salt; its potassium-bearing counterpart, sylvite, has a pronounced bitter taste. Sulfides have a characteristic smell, especially when samples are fractured or powdered. Radioactivity is a rare property; some minerals contain radioactive elements. These elements can be a defining constituent, such as uranium in uraninite, autunite, and carnotite, or present as trace impurities. In the latter case, the decay of a radioactive element damages the mineral crystal; the result, termed a "radioactive halo" or "pleochroic halo", is observable with various techniques, such as thin-section petrography. As the composition of the Earth's crust is dominated by silicon and oxygen, silicate minerals are by far the most important class of minerals in terms of rock formation and diversity. However, non-silicate minerals are of great economic importance, especially as ores. Non-silicate minerals are subdivided into several other classes by their dominant chemistry, which includes native elements, sulfides, halides, oxides and hydroxides, carbonates and nitrates, borates, sulfates, phosphates, and organic compounds. Most non-silicate mineral species are rare (constituting in total 8% of the Earth's crust), although some are relatively common, such as calcite, pyrite, magnetite, and hematite. There are two major structural styles observed in non-silicates: close-packing and silicate-like linked tetrahedra. Close-packing is a way of densely packing atoms while minimizing interstitial space. Hexagonal close-packing involves stacking layers where every other layer is the same ("ababab"), whereas cubic close-packing involves stacking groups of three layers ("abcabcabc"). Analogues to linked silica tetrahedra include SO4 (sulfate), PO4 (phosphate), AsO4 (arsenate), and VO4 (vanadate). The non-silicates have great economic importance, as they concentrate elements more than the silicate minerals do.
The largest grouping of minerals by far is the silicates; most rocks are composed of greater than 95% silicate minerals, and over 90% of the Earth's crust is composed of these minerals. The two main constituents of silicates are silicon and oxygen, which are the two most abundant elements in the Earth's crust. Other common elements in silicate minerals correspond to other common elements in the Earth's crust, such as aluminium, magnesium, iron, calcium, sodium, and potassium. Some important rock-forming silicates include the feldspars, quartz, olivines, pyroxenes, amphiboles, garnets, and micas. The base unit of a silicate mineral is the [SiO4]4− tetrahedron. In the vast majority of cases, silicon is in four-fold or tetrahedral coordination with oxygen. In very high-pressure situations, silicon will be in six-fold or octahedral coordination, such as in the perovskite structure or the quartz polymorph stishovite (SiO2). In the latter case, the mineral no longer has a silicate structure, but that of rutile (TiO2) and its associated group, which are simple oxides. These silica tetrahedra are then polymerized to some degree to create various structures, such as one-dimensional chains, two-dimensional sheets, and three-dimensional frameworks. The basic silicate mineral, in which no polymerization of the tetrahedra has occurred, requires other elements to balance out the base 4− charge (a worked example of this charge arithmetic follows below). In other silicate structures, different combinations of elements are required to balance out the resultant negative charge. It is common for Si4+ to be substituted by Al3+ because of similarity in ionic radius and charge; in those cases, the [AlO4]5− tetrahedra form the same structures as do the unsubstituted tetrahedra, but their charge-balancing requirements are different. The degree of polymerization can be described by both the structure formed and how many tetrahedral corners (or coordinating oxygens) are shared (for aluminium and silicon in tetrahedral sites). Orthosilicates (or nesosilicates) have no linking of polyhedra, thus the tetrahedra share no corners. Disilicates (or sorosilicates) have two tetrahedra sharing one oxygen atom. Inosilicates are chain silicates; single-chain silicates have two shared corners, whereas double-chain silicates have two or three shared corners. In phyllosilicates, a sheet structure is formed which requires three shared oxygens; in the case of double-chain silicates, some tetrahedra must share two corners instead of three as otherwise a sheet structure would result. Framework silicates, or tectosilicates, have tetrahedra that share all four corners. The ring silicates, or cyclosilicates, only need tetrahedra to share two corners to form the cyclical structure. The silicate subclasses are described below in order of decreasing polymerization. Tectosilicates, also known as framework silicates, have the highest degree of polymerization. With all corners of the tetrahedra shared, the silicon:oxygen ratio becomes 1:2. Examples are quartz, the feldspars, feldspathoids, and the zeolites. Framework silicates tend to be particularly chemically stable as a result of strong covalent bonds. Forming 12% of the Earth's crust, quartz (SiO2) is the most abundant mineral species. It is characterized by its high chemical and physical resistance. Quartz has several polymorphs, including tridymite and cristobalite at high temperatures, high-pressure coesite, and ultra-high-pressure stishovite.
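As an illustrative sketch of that charge arithmetic: silicon contributes Si4+ and each of the four coordinating oxygens contributes O2−, so an isolated tetrahedron carries 4 + 4 × (−2) = −4, written [SiO4]4−. In forsterite, Mg2SiO4, for example, two Mg2+ cations supply the balancing +4.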
Stishovite can only be formed on Earth by meteorite impacts; its structure has been compressed so much that it has changed from a silicate structure to that of rutile (TiO2). The silica polymorph that is most stable at the Earth's surface is α-quartz. Its counterpart, β-quartz, is present only at high temperatures and pressures (it changes to α-quartz below 573 °C at 1 bar). These two polymorphs differ by a "kinking" of bonds; this change in structure gives β-quartz greater symmetry than α-quartz, and they are thus also called high quartz (β) and low quartz (α). Feldspars are the most abundant group in the Earth's crust, at about 50%. In the feldspars, Al3+ substitutes for Si4+, which creates a charge imbalance that must be accounted for by the addition of cations. The base structure becomes either [AlSi3O8]− or [Al2Si2O8]2−. There are 22 mineral species of feldspars, subdivided into two major subgroups – alkali and plagioclase – and two less common groups – celsian and banalsite. The alkali feldspars are most commonly in a series between potassium-rich orthoclase and sodium-rich albite; in the case of plagioclase, the most common series ranges from albite to calcium-rich anorthite. Crystal twinning is common in feldspars, especially polysynthetic twins in plagioclase and Carlsbad twins in alkali feldspars. If the latter subgroup cools slowly from a melt, it forms exsolution lamellae because the two components – orthoclase and albite – are unstable in solid solution. Exsolution can be on a scale from microscopic to readily observable in hand-sample; perthitic texture forms when Na-rich feldspar exsolves in a K-rich host. The opposite texture (antiperthitic), where K-rich feldspar exsolves in a Na-rich host, is very rare. Feldspathoids are structurally similar to feldspar, but differ in that they form in Si-deficient conditions, which allows for further substitution by Al3+. As a result, feldspathoids cannot be associated with quartz. A common example of a feldspathoid is nepheline ((Na, K)AlSiO4); compared to alkali feldspar, nepheline has an Al2O3:SiO2 ratio of 1:2, as opposed to 1:6 in the feldspar. Zeolites often have distinctive crystal habits, occurring in needles, plates, or blocky masses. They form in the presence of water at low temperatures and pressures, and have channels and voids in their structure. Zeolites have several industrial applications, especially in waste water treatment. Phyllosilicates consist of sheets of polymerized tetrahedra. They are bound at three oxygen sites, which gives a characteristic silicon:oxygen ratio of 2:5 (the corner-counting behind such ratios is sketched below). Important examples include the mica, chlorite, and kaolinite-serpentine groups. The sheets are weakly bound by van der Waals forces or hydrogen bonds, which causes a crystallographic weakness, in turn leading to a prominent basal cleavage among the phyllosilicates. In addition to the tetrahedra, phyllosilicates have a sheet of octahedra (elements in six-fold coordination by oxygen) that balance out the basic tetrahedra, which have a negative charge (e.g. [Si4O10]4−). These tetrahedral (T) and octahedral (O) sheets are stacked in a variety of combinations to create the phyllosilicate groups. Within an octahedral sheet, there are three octahedral sites in a unit structure; however, not all of the sites need be occupied. If only two of the three sites are occupied, the mineral is termed dioctahedral, whereas if all three are occupied it is termed trioctahedral.
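For illustration, the ratios quoted for the silicate subclasses follow from simple corner-counting: a tetrahedron fully owns each unshared oxygen and half of each shared one, so a silicon sharing s corners is accompanied by 4 − s/2 oxygens. Two shared corners (single chains) give 3 oxygens per silicon (1:3, as in [Si2O6]4−); three (sheets) give 2.5 (2:5, as in [Si4O10]4−); four (frameworks) give 2 (1:2, as in SiO2). The same bookkeeping covers the feldspar substitution described above: a pure framework is electrically neutral, and each Al3+ replacing a Si4+ adds one negative charge, so [AlSi3O8]− is balanced by a single K+ or Na+, and [Al2Si2O8]2− by one Ca2+.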
The kaolinite-serpentine group consists of T-O stacks (the 1:1 clay minerals); their hardness ranges from 2 to 4, as the sheets are held together by hydrogen bonds. The 2:1 clay minerals (pyrophyllite-talc) consist of T-O-T stacks, but they are softer (hardness from 1 to 2), as they are instead held together by van der Waals forces. These two groups of minerals are subgrouped by octahedral occupation; specifically, kaolinite and pyrophyllite are dioctahedral, whereas serpentine and talc are trioctahedral. Micas are also T-O-T-stacked phyllosilicates, but differ from the other T-O-T and T-O-stacked subclass members in that they incorporate aluminium into the tetrahedral sheets (clay minerals have Al3+ in octahedral sites). Common examples of micas are muscovite and the biotite series. The chlorite group is related to the mica group, but has a brucite-like (Mg(OH)2) layer between the T-O-T stacks. Because of their chemical structure, phyllosilicates typically have flexible, elastic, transparent layers that are electrical insulators and can be split into very thin flakes. Micas can be used in electronics as insulators, in construction, as an optical filler, or even in cosmetics. Chrysotile, a species of serpentine, is the most common mineral species in industrial asbestos, as it is less dangerous in terms of health than the amphibole asbestos. Inosilicates consist of tetrahedra repeatedly bonded in chains. These chains can be single, where a tetrahedron is bound to two others to form a continuous chain; alternatively, two chains can be merged to create double-chain silicates. Single-chain silicates have a silicon:oxygen ratio of 1:3 (e.g. [Si2O6]4−), whereas the double-chain variety has a ratio of 4:11, e.g. [Si8O22]12−. Inosilicates contain two important rock-forming mineral groups; single-chain silicates are most commonly pyroxenes, while double-chain silicates are often amphiboles. Higher-order chains exist (e.g. three-member, four-member, five-member chains, etc.) but they are rare. The pyroxene group consists of 21 mineral species. Pyroxenes have a general structure formula of XY(Si2O6), where X is an octahedral site, while Y can vary in coordination number from six to eight. Most varieties of pyroxene consist of permutations of Ca2+, Fe2+ and Mg2+ to balance the negative charge on the backbone. Pyroxenes are common in the Earth's crust (about 10%) and are a key constituent of mafic igneous rocks. Amphiboles have great variability in chemistry, described variously as a "mineralogical garbage can" or a "mineralogical shark swimming in a sea of elements". The backbone of the amphiboles is the [Si8O22]12− double chain; it is balanced by cations in three possible positions, although the third position is not always used, and one element can occupy both remaining ones. Finally, the amphiboles are usually hydrated, that is, they have a hydroxyl group ([OH]−), although it can be replaced by a fluoride, a chloride, or an oxide ion. Because of the variable chemistry, there are over 80 species of amphibole, although variations, as in the pyroxenes, most commonly involve mixtures of Ca2+, Fe2+ and Mg2+. Several amphibole mineral species can have an asbestiform crystal habit. These asbestos minerals form long, thin, flexible, and strong fibres, which are electrical insulators, chemically inert and heat-resistant; as such, they have several applications, especially in construction materials.
However, asbestos minerals are known carcinogens and cause various other illnesses, such as asbestosis; amphibole asbestos (anthophyllite, tremolite, actinolite, grunerite, and riebeckite) is considered more dangerous than chrysotile serpentine asbestos. Cyclosilicates, or ring silicates, have a ratio of silicon to oxygen of 1:3. Six-member rings are most common, with a base structure of [Si6O18]12−; examples include the tourmaline group and beryl. Other ring structures exist, with 3-, 4-, 8-, 9-, and 12-membered rings having been described. Cyclosilicates tend to be strong, with elongated, striated crystals. Tourmalines have a very complex chemistry that can be described by a general formula XY3Z6(BO3)3T6O18V3W. The T6O18 is the basic ring structure, where T is usually Si4+, but substitutable by Al3+ or B3+. Tourmalines can be subgrouped by the occupancy of the X site, and from there further subdivided by the chemistry of the W site. The Y and Z sites can accommodate a variety of cations, especially various transition metals; this variability in structural transition metal content gives the tourmaline group great variability in colour. Other cyclosilicates include beryl, Al2Be3Si6O18, whose varieties include the gemstones emerald (green) and aquamarine (bluish). Cordierite is structurally similar to beryl, and is a common metamorphic mineral. Sorosilicates, also termed disilicates, have tetrahedron-tetrahedron bonding at one oxygen, which results in a 2:7 ratio of silicon to oxygen. The resultant common structural element is the [Si2O7]6− group. The most common disilicates by far are members of the epidote group. Epidotes are found in a variety of geologic settings, ranging from mid-ocean ridges to granites to metapelites. Epidotes are built around the [(SiO4)(Si2O7)]10− structure; for example, the mineral species epidote has calcium, aluminium, and ferric iron to charge balance: Ca2Al2(Fe3+, Al)(SiO4)(Si2O7)O(OH). The presence of iron as Fe3+ and Fe2+ helps constrain oxygen fugacity, which in turn is a significant factor in petrogenesis. Other examples of sorosilicates include lawsonite, a metamorphic mineral forming in the blueschist facies (a subduction zone setting with low temperature and high pressure), and vesuvianite, which takes up a significant amount of calcium in its chemical structure. Orthosilicates consist of isolated tetrahedra that are charge-balanced by other cations. Also termed nesosilicates, this type of silicate has a silicon:oxygen ratio of 1:4 (e.g. SiO4). Typical orthosilicates tend to form blocky equant crystals, and are fairly hard. Several rock-forming minerals are part of this subclass, such as the aluminosilicates, the olivine group, and the garnet group. The aluminosilicates – kyanite, andalusite, and sillimanite, all Al2SiO5 – are structurally composed of one [SiO4]4− tetrahedron and one Al3+ in octahedral coordination. The remaining Al3+ can be in six-fold coordination (kyanite), five-fold (andalusite) or four-fold (sillimanite); which mineral forms in a given environment depends on pressure and temperature conditions. In the olivine structure, the main olivine series of (Mg, Fe)2SiO4 consists of magnesium-rich forsterite and iron-rich fayalite. Both iron and magnesium are in octahedral coordination with oxygen. Other mineral species having this structure exist, such as tephroite, Mn2SiO4. The garnet group has a general formula of X3Y2(SiO4)3, where X is a large eight-fold coordinated cation, and Y is a smaller six-fold coordinated cation (a charge-balance sketch for these general formulas follows below).
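For illustration, the charge balance implied by these general formulas can be checked directly. In the pyroxene diopside, CaMg(Si2O6), the chain unit [Si2O6]4− is offset by Ca2+ + Mg2+ = +4; in the garnet pyrope, Mg3Al2(SiO4)3, three Mg2+ (+6) and two Al3+ (+6) together offset three [SiO4]4− groups (−12).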
There are six ideal endmembers of garnet, split into two groups. The pyralspite garnets have Al3+ in the Y position: pyrope (Mg3Al2(SiO4)3), almandine (Fe3Al2(SiO4)3), and spessartine (Mn3Al2(SiO4)3). The ugrandite garnets have Ca2+ in the X position: uvarovite (Ca3Cr2(SiO4)3), grossular (Ca3Al2(SiO4)3) and andradite (Ca3Fe2(SiO4)3). While there are two subgroups of garnet, solid solutions exist between all six end-members. Other orthosilicates include zircon, staurolite, and topaz. Zircon (ZrSiO4) is useful in geochronology as Zr4+ can be substituted by uranium; furthermore, because of its very resistant structure, it is difficult to reset as a chronometer. Staurolite is a common intermediate-grade metamorphic index mineral. It has a particularly complicated crystal structure that was only fully described in 1986. Topaz (Al2SiO4(F, OH)2), often found in granitic pegmatites associated with tourmaline, is a common gemstone mineral. Native elements are those that are not chemically bonded to other elements. This mineral group includes native metals, semi-metals, and non-metals, and various alloys and solid solutions. The metals are held together by metallic bonding, which confers distinctive physical properties such as their shiny metallic lustre, ductility and malleability, and electrical conductivity. Native elements are subdivided into groups by their structure or chemical attributes. The gold group, with a cubic close-packed structure, includes metals such as gold, silver, and copper. The platinum group is similar in structure to the gold group. The iron-nickel group is characterized by several iron-nickel alloy species. Two examples are kamacite and taenite, which are found in iron meteorites; these species differ in the amount of Ni in the alloy: kamacite has less than 5–7% nickel and is a variety of native iron, whereas the nickel content of taenite ranges from 7–37%. Arsenic group minerals consist of semi-metals, which have only some metallic traits; for example, they lack the malleability of metals. Native carbon occurs in two allotropes, graphite and diamond; the latter forms at very high pressure in the mantle, which gives it a much stronger structure than graphite. The sulfide minerals are chemical compounds of one or more metals or semimetals with sulfur; tellurium, arsenic, or selenium can substitute for the sulfur. Sulfides tend to be soft, brittle minerals with a high specific gravity. Many sulfides, such as pyrite, have a sulfurous smell when powdered. Sulfides are susceptible to weathering, and many readily dissolve in water; these dissolved minerals can be later redeposited, which creates enriched secondary ore deposits. Sulfides are classified by the ratio of the metal or semimetal to the sulfur, such as M:S equal to 2:1, or 1:1. Many sulfide minerals are economically important as metal ores; examples include sphalerite (ZnS), an ore of zinc, galena (PbS), an ore of lead, cinnabar (HgS), an ore of mercury, and molybdenite (MoS2), an ore of molybdenum. Pyrite (FeS2) is the most commonly occurring sulfide, and can be found in most geological environments. It is not, however, an ore of iron, but can instead be oxidized to produce sulfuric acid. Related to the sulfides are the rare sulfosalts, in which a metallic element is bonded to sulfur and a semimetal such as antimony, arsenic, or bismuth. Like the sulfides, sulfosalts are typically soft, heavy, and brittle minerals.
Oxide minerals are divided into three categories: simple oxides, hydroxides, and multiple oxides. Simple oxides are characterized by O2− as the main anion and primarily ionic bonding. They can be further subdivided by the ratio of oxygen to the cations. The periclase group consists of minerals with a 1:1 ratio. Oxides with a 2:1 ratio include cuprite (Cu2O) and water ice. Corundum group minerals have a 2:3 ratio, and include corundum (Al2O3) and hematite (Fe2O3). Rutile group minerals have a ratio of 1:2; the eponymous species, rutile (TiO2), is the chief ore of titanium; other examples include cassiterite (SnO2; an ore of tin) and pyrolusite (MnO2; an ore of manganese). In hydroxides, the dominant anion is the hydroxyl ion, OH−. Bauxites are the chief aluminium ore, and are a heterogeneous mixture of the hydroxide minerals diaspore, gibbsite, and böhmite; they form in areas with a very high rate of chemical weathering (mainly tropical conditions). Finally, multiple oxides are compounds of two metals with oxygen. A major group within this class are the spinels, with a general formula of X2+Y3+2O4. Examples of species include spinel (MgAl2O4), chromite (FeCr2O4), and magnetite (Fe3O4). The latter is readily distinguishable by its strong magnetism, which occurs because it has iron in two oxidation states (Fe2+Fe3+2O4); this makes it a multiple oxide instead of a simple oxide. The halide minerals are compounds in which a halogen (fluorine, chlorine, iodine, or bromine) is the main anion. These minerals tend to be soft, weak, brittle, and water-soluble. Common examples of halides include halite (NaCl, table salt), sylvite (KCl), and fluorite (CaF2). Halite and sylvite commonly form as evaporites, and can be dominant minerals in chemical sedimentary rocks. Cryolite, Na3AlF6, is a key mineral in the extraction of aluminium from bauxites; however, as its only significant occurrence, in a granitic pegmatite at Ivittuut, Greenland, has been depleted, synthetic cryolite is now made from fluorite. The carbonate minerals are those in which the main anionic group is carbonate, [CO3]2−. Carbonates tend to be brittle, many have rhombohedral cleavage, and all react with acid. Due to the last characteristic, field geologists often carry dilute hydrochloric acid to distinguish carbonates from non-carbonates (the reaction is sketched below). The reaction of acid with carbonates, most commonly found as the polymorphs calcite and aragonite (CaCO3), relates to the dissolution and precipitation of the mineral, which is key to the formation of limestone caves, features within them such as stalactites and stalagmites, and karst landforms. Carbonates are most often formed as biogenic or chemical sediments in marine environments. The carbonate group is structurally a triangle, where a central C4+ cation is surrounded by three O2− anions; different groups of minerals form from different arrangements of these triangles. The most common carbonate mineral is calcite, which is the primary constituent of sedimentary limestone and metamorphic marble. Calcite, CaCO3, can have a high magnesium impurity. Under high-Mg conditions, its polymorph aragonite will form instead; the marine geochemistry in this regard can be described as an aragonite or calcite sea, depending on which mineral preferentially forms. Dolomite is a double carbonate, with the formula CaMg(CO3)2.
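For illustration, the effervescence reaction referred to above can be written as CaCO3 + 2 HCl → CaCl2 + H2CO3, with the transient carbonic acid immediately breaking down as H2CO3 → H2O + CO2; the escaping carbon dioxide produces the visible fizz. Dolomite, CaMg(CO3)2, undergoes the analogous reaction, but so slowly at room temperature that the sample usually must be powdered for the fizz to be seen.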
Secondary dolomitization of limestone is common, in which calcite or aragonite are converted to dolomite; this reaction increases pore space (the unit cell volume of dolomite is 88% that of calcite, so complete dolomitization frees roughly 12% of the original volume), which can create a reservoir for oil and gas. These two mineral species are members of eponymous mineral groups: the calcite group includes carbonates with the general formula XCO3, and the dolomite group constitutes minerals with the general formula XY(CO3)2. The sulfate minerals all contain the sulfate anion, [SO4]2−. They tend to be transparent to translucent and soft, and many are fragile. Sulfate minerals commonly form as evaporites, where they precipitate out of evaporating saline waters. Sulfates can also be found in hydrothermal vein systems associated with sulfides, or as oxidation products of sulfides. Sulfates can be subdivided into anhydrous and hydrous minerals. The most common hydrous sulfate by far is gypsum, CaSO4⋅2H2O. It forms as an evaporite, and is associated with other evaporites such as calcite and halite; if it incorporates sand grains as it crystallizes, gypsum can form desert roses. Gypsum has very low thermal conductivity and maintains a low temperature when heated, as it loses that heat by dehydrating; as such, gypsum is used as an insulator in materials such as plaster and drywall. The anhydrous equivalent of gypsum is anhydrite; it can form directly from seawater in highly arid conditions. The barite group has the general formula XSO4, where the X is a large 12-coordinated cation. Examples include barite (BaSO4), celestine (SrSO4), and anglesite (PbSO4); anhydrite is not part of the barite group, as the smaller Ca2+ is only in eight-fold coordination. The phosphate minerals are characterized by the tetrahedral [PO4]3− unit, although the structure can be generalized, and phosphorus is replaced by antimony, arsenic, or vanadium. The most common phosphates belong to the apatite group; common species within this group are fluorapatite (Ca5(PO4)3F), chlorapatite (Ca5(PO4)3Cl) and hydroxylapatite (Ca5(PO4)3(OH)). Minerals in this group are the main crystalline constituents of teeth and bones in vertebrates. The relatively abundant monazite group has a general structure of ATO4, where T is phosphorus or arsenic, and A is often a rare-earth element (REE). Monazite is important in two ways: first, as a REE "sink", it can sufficiently concentrate these elements to become an ore; secondly, monazite-group minerals can incorporate relatively large amounts of uranium and thorium, which can be used in monazite geochronology to date the rock based on the decay of the U and Th to lead. The Strunz classification includes a class for organic minerals. These rare compounds contain organic carbon, but can be formed by geologic processes. For example, whewellite, CaC2O4⋅H2O, is an oxalate that can be deposited in hydrothermal ore veins. While hydrated calcium oxalate can be found in coal seams and other sedimentary deposits involving organic matter, the hydrothermal occurrence is not considered to be related to biological activity. It has been suggested that biominerals could be important indicators of extraterrestrial life and thus could play an important role in the search for past or present life on the planet Mars. Furthermore, organic components (biosignatures) that are often associated with biominerals are believed to play crucial roles in both pre-biotic and biotic reactions.
On January 24, 2014, NASA reported that ongoing studies by the "Curiosity" and "Opportunity" rovers on Mars would search for evidence of ancient life, including a biosphere based on autotrophic, chemotrophic and/or chemolithoautotrophic microorganisms, as well as for ancient water, including fluvio-lacustrine environments (plains related to ancient rivers or lakes) that may have been habitable. The search for evidence of habitability, taphonomy (related to fossils), and organic carbon on Mars is now a primary NASA objective.
https://en.wikipedia.org/wiki?curid=19053
Marble Marble is a metamorphic rock composed of recrystallized carbonate minerals, most commonly calcite or dolomite. Marble is typically not foliated, although there are exceptions. In geology, the term "marble" refers to metamorphosed limestone, but its use in stonemasonry more broadly encompasses unmetamorphosed limestone. Marble is commonly used for sculpture and as a building material. The word "marble" derives from the Ancient Greek "mármaron", from "mármaros", "crystalline rock, shining stone", perhaps from the verb "marmaírein", "to flash, sparkle, gleam"; R. S. P. Beekes has suggested that a "Pre-Greek origin is probable". This stem is also the ancestor of the English word "marmoreal", meaning "marble-like." While the English term "marble" resembles the French "marbre", most other European languages (with words like the German "Marmor") more closely resemble the original Ancient Greek. Marble is a rock resulting from metamorphism of sedimentary carbonate rocks, most commonly limestone or dolomite rock. Metamorphism causes variable recrystallization of the original carbonate mineral grains. The resulting marble rock is typically composed of an interlocking mosaic of carbonate crystals. Primary sedimentary textures and structures of the original carbonate rock (protolith) have typically been modified or destroyed. Pure white marble is the result of metamorphism of a very pure (silicate-poor) limestone or dolomite protolith. The characteristic swirls and veins of many colored marble varieties are usually due to various mineral impurities such as clay, silt, sand, iron oxides, or chert which were originally present as grains or layers in the limestone. Green coloration is often due to serpentine resulting from originally magnesium-rich limestone or dolomite with silica impurities. These various impurities have been mobilized and recrystallized by the intense pressure and heat of the metamorphism. White marble has been prized for its use in sculptures since classical times. This preference has to do with its softness, which made it easier to carve, its relative isotropy and homogeneity, and its relative resistance to shattering. Also, the low index of refraction of calcite allows light to penetrate several millimeters into the stone before being scattered out, resulting in the characteristic waxy look that brings a lifelike luster to marble sculptures; this is why many sculptors preferred, and still prefer, marble for sculpting. Construction marble is a stone which is composed of calcite, dolomite or serpentine and is capable of taking a polish. More generally in construction, specifically the dimension stone trade, the term "marble" is used for any crystalline calcitic rock (and some non-calcitic rocks) useful as building stone. For example, Tennessee marble is really a dense granular fossiliferous gray to pink to maroon Ordovician limestone that geologists call the Holston Formation. Ashgabat, the capital city of Turkmenistan, was recorded in the 2013 "Guinness Book of Records" as having the world's highest concentration of white marble buildings. According to the United States Geological Survey, U.S. domestic marble production in 2006 was 46,400 tons valued at about $18.1 million, compared to 72,300 tons valued at $18.9 million in 2005. Crushed marble production (for aggregate and industrial uses) in 2006 was 11.8 million tons valued at $116 million, of which 6.5 million tons was finely ground calcium carbonate and the rest was construction aggregate.
For comparison, 2005 crushed marble production was 7.76 million tons valued at $58.7 million, of which 4.8 million tons was finely ground calcium carbonate and the rest was construction aggregate. U.S. dimension marble demand is about 1.3 million tons. The DSAN World Demand for (finished) Marble Index has shown a growth of 12% annually for the 2000–2006 period, compared to 10.5% annually for the 2000–2005 period. The largest dimension marble application is tile. In 1998, marble production was dominated by 4 countries that accounted for almost half of world production of marble and decorative stone. Italy and China were the world leaders, each representing 16% of world production, while Spain and India produced 9% and 8%, respectively. In 2018 Turkey was the world leader in marble export, with a 42% share in global marble trade, followed by Italy with 18% and Greece with 10%. The largest importer of marble in 2018 was China with a 64% market share, followed by India with 11% and Italy with 5%. Dust produced by cutting marble can cause lung disease, but more research is needed on whether dust filters and other safety products reduce this risk. In the United States, the Occupational Safety and Health Administration (OSHA) has set the legal limit (permissible exposure limit) for marble exposure in the workplace as 15 mg/m3 total exposure and 5 mg/m3 respiratory exposure over an 8-hour workday. The National Institute for Occupational Safety and Health (NIOSH) has set a recommended exposure limit (REL) of 10 mg/m3 total exposure and 5 mg/m3 respiratory exposure over an 8-hour workday. Acids damage marble, because the calcium carbonate in marble reacts with them, releasing carbon dioxide (technically speaking, carbonic acid, which quickly disintegrates to CO2 and H2O): CaCO3 + 2 H+ → Ca2+ + H2O + CO2. Thus, vinegar or other acidic solutions should never be used on marble. Likewise, outdoor marble statues, gravestones, or other marble structures are damaged by acid rain. The haloalkaliphilic methylotrophic bacterium "Methylophaga murata" was isolated from deteriorating marble in the Kremlin. Bacterial and fungal degradation was detected in four samples of marble from Milan cathedral; black "Cladosporium" attacked dried acrylic resin using melanin. As the favorite medium for Greek and Roman sculptors and architects (see classical sculpture), marble has become a cultural symbol of tradition and refined taste. Its extremely varied and colorful patterns make it a favorite decorative material, and it is often imitated in background patterns for computer displays, etc. Places named after the stone include Marblehead, Massachusetts; Marblehead, Ohio; Marble Arch, London; the Sea of Marmara; India's Marble Rocks; and the towns of Marble, Minnesota; Marble, Colorado; Marble Falls, Texas; and Marble Hill, Manhattan, New York. The Elgin Marbles are marble sculptures from the Parthenon in Athens that are on display in the British Museum. They were brought to Britain by the Earl of Elgin. Marble dust is combined with cement or synthetic resins to make "reconstituted" or "cultured marble". The appearance of marble can be simulated with faux marbling, a painting technique that imitates the stone's color patterns.
https://en.wikipedia.org/wiki?curid=19054
Manufacturing Consent (film) Manufacturing Consent: Noam Chomsky and the Media is a 1992 documentary film that explores the political life and ideas of linguist, intellectual, and political activist Noam Chomsky. Canadian filmmakers Mark Achbar and Peter Wintonick expand the analysis of political economy and mass media presented in "Manufacturing Consent", a 1988 book Chomsky wrote with Edward S. Herman. The film presents and illustrates Chomsky and Herman's propaganda model thesis that corporate media, as profit-driven institutions, tend to serve and further the agendas and interests of dominant, elite groups in society. A centerpiece of the film is a long examination of the history of "The New York Times"' coverage of the Indonesian occupation of East Timor, which Chomsky says exemplifies the media's unwillingness to criticize an ally of the elite. Until the release of "The Corporation" (2003), made by Mark Achbar, Jennifer Abbott and Joel Bakan, it was the most successful feature documentary in Canadian history, playing theatrically in over 300 cities around the world. It appeared in more than 50 international film festivals, where it received 22 awards. It was broadcast on television in over 30 markets and translated into a dozen languages. Chomsky's response to the film was mixed; in a published conversation with Achbar and several activists, he stated that "the positive impact of it has been astonishing to me", but that people mistakenly get the impression that he is the leader of a movement that they should join. He also criticized "The New York Times" review of the film, which mistook his message as a call for voter organizing rather than for media critique and political action. Mark Achbar edited a companion book of the same name. It features a copy of the script annotated with excerpts from referenced and relevant materials as well as several comments from Chomsky interspersed throughout. Eighteen "Philosopher All-Stars" baseball cards (as seen in the film) are also included. The back of each card includes a short summary of the person, some of their major works, and a series of quotations attributed to the individual. The people featured as cards in the set are: René Descartes, Jean-Jacques Rousseau, Voltaire, Mary Wollstonecraft, Wilhelm von Humboldt, Pierre Joseph Proudhon, Sojourner Truth, Karl Marx, Sitting Bull, Rosa Luxemburg, Peter Kropotkin, Emma Goldman, Gandhi, Martin Luther King, Jr., Bertrand Russell, Michel Foucault, and Avram Noam Chomsky. The book made the national bestseller list in Canada. The first half of the book, hyperlinked to the relevant portions of the film's audio, is available online from "Z Magazine". The entire book is available as a PDF document on the Region 2 DVD of the film.
https://en.wikipedia.org/wiki?curid=19055
Munich Munich is the capital and most populous city of Bavaria, the second most populous German state. With a population of around 1.5 million, it is the third-largest city in Germany, after Berlin and Hamburg, and thus the largest which does not constitute its own state, as well as the 11th-largest city in the European Union. The city's metropolitan region is home to 6 million people. Straddling the banks of the River Isar (a tributary of the Danube) north of the Bavarian Alps, it is the seat of the Bavarian administrative region of Upper Bavaria, while being the most densely populated municipality in Germany (4,500 people per km²). Munich is the second-largest city in the Bavarian dialect area, after the Austrian capital of Vienna. The city was first mentioned in 1158. Catholic Munich strongly resisted the Reformation and was a political point of divergence during the resulting Thirty Years' War, but remained physically untouched despite an occupation by the Protestant Swedes. Once Bavaria was established as a sovereign kingdom in 1806, Munich became a major European centre of arts, architecture, culture and science. In 1918, during the German Revolution, the ruling house of Wittelsbach, which had governed Bavaria since 1180, was forced to abdicate in Munich and a short-lived socialist republic was declared. In the 1920s, Munich became home to several political factions, among them the NSDAP. After the Nazis' rise to power, Munich was declared their "Capital of the Movement". The city was heavily bombed during World War II, but has restored most of its traditional cityscape. After the end of postwar American occupation in 1949, there was a great increase in population and economic power during the years of "Wirtschaftswunder", or "economic miracle". The city hosted the 1972 Summer Olympics and was one of the host cities of the 1974 and 2006 FIFA World Cups. Today, Munich is a global centre of art, science, technology, finance, publishing, culture, innovation, education, business, and tourism, and enjoys a very high standard and quality of living, ranking first in Germany and third worldwide in the 2018 Mercer survey, and being rated the world's most liveable city by Monocle's Quality of Life Survey 2018. According to the Globalization and World Rankings Research Institute, Munich is considered an alpha-world city. It is one of the most prosperous and fastest growing cities in Germany. Munich's economy is based on high tech, automobiles, the service sector and creative industries, as well as IT, biotechnology, engineering and electronics among many others. The city houses many multinational companies, such as BMW, Siemens, MAN, Linde, Allianz and Munich Re. It is also home to two research universities, a multitude of scientific institutions, and world-class technology and science museums like the Deutsches Museum and BMW Museum. Munich's numerous architectural and cultural attractions, sports events, exhibitions and its annual Oktoberfest attract considerable tourism. The city is home to more than 530,000 people of foreign background, making up 37.7% of its population. The name of the city is usually interpreted as deriving from the Old/Middle High German term "Munichen", meaning "by the monks". It derives from the monks of the Benedictine order, who ran a monastery at the place that was later to become the Old Town of Munich. A monk is also depicted on the city's coat of arms.
The town is first mentioned as "forum apud Munichen" in a document dated June 14, 1158, issued by Holy Roman Emperor Friedrich I. The name in modern German is "München", but this has been variously rendered in different languages: in English, French and various other languages as "Munich", in Italian as "Monaco di Baviera", in Portuguese as "Munique". The first known settlement in the area was of Benedictine monks on the salt route. The year 1158, the date of the city's first documentary mention, is therefore not considered its actual foundation date. The document was signed in Augsburg. By then, the Guelph Henry the Lion, Duke of Saxony and Bavaria, had built a toll bridge over the river Isar next to the monks' settlement and on the salt route. During archaeological excavations at Marienhof in advance of the expansion of the S-Bahn from 2012, shards of vessels from the 11th century were found, which confirm that the settlement of Munich is older than its first documentary mention in 1158. In 1175 Munich received city status and fortification. In 1180, with the trial of Henry the Lion, Otto I Wittelsbach became Duke of Bavaria, and Munich was handed over to the Bishop of Freising. (Wittelsbach's heirs, the Wittelsbach dynasty, ruled Bavaria until 1918.) In 1240, Munich was transferred to Otto II Wittelsbach and in 1255, when the Duchy of Bavaria was split in two, Munich became the ducal residence of Upper Bavaria. Duke Louis IV, a native of Munich, was elected German king in 1314 and crowned as Holy Roman Emperor in 1328. He strengthened the city's position by granting it the salt monopoly, thus assuring it of additional income. In the late 15th century, Munich underwent a revival of Gothic arts: the Old Town Hall was enlarged, and Munich's largest Gothic church – the Frauenkirche – now a cathedral, was constructed in only 20 years, starting in 1468. When Bavaria was reunited in 1506, Munich became its capital. The arts and politics became increasingly influenced by the court (see Orlando di Lasso and Heinrich Schütz). During the 16th century, Munich was a centre of the German Counter-Reformation, and also of Renaissance arts. Duke Wilhelm V commissioned the Jesuit Michaelskirche, which became a centre for the Counter-Reformation, and also built the Hofbräuhaus for brewing brown beer in 1589. The Catholic League was founded in Munich in 1609. In 1623, during the Thirty Years' War, Munich became an electoral residence when Maximilian I, Duke of Bavaria, was invested with the electoral dignity, but in 1632 the city was occupied by Gustav II Adolph of Sweden. When the bubonic plague broke out in 1634 and 1635, about one third of the population died. Under the regency of the Bavarian electors, Munich was an important centre of Baroque life, but also had to suffer under Habsburg occupations in 1704 and 1742. In 1806 the city became the capital of the new Kingdom of Bavaria, with the state's parliament (the "Landtag") and the new archdiocese of Munich and Freising being located in the city. Twenty years later, Landshut University was moved to Munich. Many of the city's finest buildings belong to this period and were built under the first three Bavarian kings. Ludwig I in particular rendered outstanding service to Munich's status as a centre of the arts, attracting numerous artists and enhancing the city's architectural substance with grand boulevards and buildings.
On the other hand, Ludwig II, known the world over as the fairytale king, was mostly aloof from his capital and focused more on his fanciful castles in the Bavarian countryside. Nevertheless, his patronage of Richard Wagner secured his posthumous reputation, as did his castles, which still generate significant tourist income for Bavaria. Later, Prince Regent Luitpold's years as regent were marked by tremendous artistic and cultural activity in Munich, enhancing its status as a cultural force of global importance (see Franz von Stuck and Der Blaue Reiter). Following the outbreak of World War I in 1914, life in Munich became very difficult, as the Allied blockade of Germany led to food and fuel shortages. During French air raids in 1916, three bombs fell on Munich. After World War I, the city was at the centre of substantial political unrest. In November 1918, on the eve of the German revolution, Ludwig III and his family fled the city. After the murder of Kurt Eisner, the first republican premier of Bavaria, in February 1919 by Anton Graf von Arco auf Valley, the Bavarian Soviet Republic was proclaimed. When Communists took power, Lenin, who had lived in Munich some years before, sent a congratulatory telegram, but the Soviet Republic was ended on 3 May 1919 by the Freikorps. Although the republican government was restored, Munich became a hotbed of extremist politics, among which Adolf Hitler and the National Socialists soon rose to prominence. In 1923, Adolf Hitler and his supporters, who were concentrated in Munich, staged the Beer Hall Putsch, an attempt to overthrow the Weimar Republic and seize power. The revolt failed, resulting in Hitler's arrest and the temporary crippling of the Nazi Party (NSDAP). The city again became important to the Nazis when they took power in Germany in 1933. The party created its first concentration camp at Dachau, north-west of the city. Because of its importance to the rise of National Socialism, Munich was referred to as the "Hauptstadt der Bewegung" ("Capital of the Movement"). The NSDAP headquarters were in Munich and many "Führerbauten" (""Führer" buildings") were built around the Königsplatz, some of which still survive. The city is known as the site of the culmination of the policy of appeasement by Britain and France leading up to World War II. It was in Munich that British Prime Minister Neville Chamberlain assented to the annexation of Czechoslovakia's Sudetenland region into Greater Germany in the hope of satisfying the desires of Hitler's Third Reich. Munich was the base of the White Rose, a student resistance movement from June 1942 to February 1943. The core members were arrested and executed following a distribution of leaflets at Munich University by Hans and Sophie Scholl. The city was heavily damaged by Allied bombing during World War II, with 71 air raids over five years. After US occupation began in 1945, Munich was completely rebuilt following a meticulous plan, which preserved its pre-war street grid. In 1957, Munich's population surpassed one million. The city continued to play a highly significant role in the German economy, politics and culture, giving rise to its nickname "Heimliche Hauptstadt" ("secret capital") in the decades after World War II. Munich was the site of the 1972 Summer Olympics, during which 11 Israeli athletes were murdered by Palestinian terrorists in the Munich massacre, when gunmen from the Palestinian "Black September" group took members of the Israeli Olympic team hostage.
Munich was also the site of mass killings in 1980 (the Oktoberfest bombing) and 2016 (the Olympia shopping mall shooting). Most Munich residents enjoy a high quality of life. Mercer HR Consulting consistently rates the city among the top 10 cities with the highest quality of life worldwide – a 2011 survey ranked Munich 4th. In 2007 the same company also ranked Munich as the 39th most expensive city in the world and the most expensive major city in Germany. Munich enjoys a thriving economy, driven by the information technology, biotechnology, and publishing sectors. Environmental pollution is low, although the city council is concerned about levels of particulate matter (PM), especially along the city's major thoroughfares. Since the enactment of EU legislation concerning the concentration of particulates in the air, environmental groups such as Greenpeace have staged large protest rallies to urge the city council and the State government to take a harder stance on pollution. Today, the crime rate is low compared with other large German cities, such as Hamburg or Berlin. For its high quality of life and safety, the city has been nicknamed "Toytown" among its English-speaking residents. German inhabitants call it "Millionendorf", an expression which means "village of a million people". Due to the high standard of living in and the thriving economy of the city and the region, there was an influx of people, and Munich's population surpassed 1.5 million by June 2015, an increase of more than 20% in 10 years. Munich lies on the elevated plains of Upper Bavaria, north of the northern edge of the Alps. The local rivers are the Isar and the Würm. Munich is situated in the Northern Alpine Foreland. The northern part of this sandy plateau includes a highly fertile flint area which is no longer affected by the folding processes found in the Alps, while the southern part is covered with morainic hills. Between these are fields of fluvio-glacial outwash, such as around Munich. Wherever these deposits get thinner, the ground water can permeate the gravel surface and flood the area, leading to marshes as in the north of Munich. By Köppen classification and updated data, the climate is oceanic ("Cfb") regardless of the isotherm used, but with some humid continental ("Dfb") features like warm to hot summers and cold winters, though without permanent snow cover. The proximity to the Alps brings higher volumes of rainfall and consequently greater susceptibility to flood problems. Studies of adaptation to climate change and extreme events are being carried out; one of them is the Isar Plan, an EU climate-adaptation project. The city centre lies between both climates, while the airport of Munich has a humid continental climate. The warmest month, on average, is July. The coolest is January. Showers and thunderstorms bring the highest average monthly precipitation in late spring and throughout the summer. The most precipitation occurs in July, on average. Winter tends to have less precipitation, the least in February. The higher elevation and proximity to the Alps cause the city to have more rain and snow than many other parts of Germany. The Alps affect the city's climate in other ways too; for example, the warm downhill wind from the Alps (föhn wind) can raise temperatures sharply within a few hours, even in winter. Being at the centre of Europe, Munich is subject to many climatic influences, so that weather conditions there are more variable than in other European cities, especially those further west and south of the Alps.
At Munich's official weather stations, the highest temperature ever measured was recorded on 27 July 1983 in Trudering-Riem, and the lowest on 12 February 1929 in the city's Botanic Garden. In Munich, the general trend of global warming, with a rise in mean annual temperatures of about 1°C in Germany over the last 120 years, can be observed as well. In November 2016 the city council concluded officially that a further rise in mean temperature, a higher number of heat extremes, a rise in the number of hot days and of nights with temperatures higher than 20°C (tropical nights), a change in precipitation patterns, as well as a rise in the number of local instances of heavy rain, are to be expected as part of the ongoing climate change. The city administration decided to support a joint study by its own Referat für Gesundheit und Umwelt (department for health and environmental issues) and the German Meteorological Service that will gather data on local weather. The data is to be used to create an action plan for adapting the city to better deal with climate change, as well as an integrated action programme for climate protection in Munich. With the help of those programmes, issues regarding spatial planning and settlement density, the development of buildings and green spaces, and plans for functioning ventilation in the cityscape can be monitored and managed. From only 24,000 inhabitants in 1700, the city population doubled about every 30 years. It was 100,000 in 1852, 250,000 in 1883 and 500,000 in 1901. Since then, Munich has become Germany's third-largest city. In 1933, 840,901 inhabitants were counted, and in 1957 over 1 million. In July 2017, Munich had 1.42 million inhabitants; 421,832 foreign nationals resided in the city as of 31 December 2017, with 50.7% of these residents being citizens of EU member states and 25.2% citizens of European states not in the EU (including Russia and Turkey). The largest groups of foreign nationals were Turks (39,204), Croats (33,177), Italians (27,340), Greeks (27,117), Poles (27,945), Austrians (21,944), and Romanians (18,085). About 45% of Munich's residents are not affiliated with any religious group; this ratio represents the fastest growing segment of the population. As in the rest of Germany, the Catholic and Protestant churches have experienced a continuous decline in membership. As of 31 December 2017, 31.8% of the city's inhabitants were Catholic, 11.4% Protestant, 0.3% Jewish, and 3.6% were members of an Orthodox Church (Eastern Orthodox or Oriental Orthodox). About 1% adhere to other Christian denominations. There is also a small Old Catholic parish and an English-speaking parish of the Episcopal Church in the city. According to the Munich Statistical Office, in 2013 about 8.6% of Munich's population was Muslim. Munich's current mayor is Dieter Reiter of the Social Democratic Party of Germany. Munich has been governed by the SPD for all but six years since 1948. This is atypical because Bavaria – and particularly southern Bavaria – has long been identified with conservative politics, with the Christian Social Union gaining absolute majorities among the Bavarian electorate in many elections at the communal, state, and federal levels, and leading the Bavarian state government for all but three years since 1946. Bavaria's second most populous city, Nuremberg, is also one of the very few Bavarian cities governed by an SPD-led coalition.
As the capital of the Free State of Bavaria, Munich is an important political centre in Germany and the seat of the Bavarian State Parliament, the Staatskanzlei (the State Chancellery) and of all state departments. Several national and international authorities are located in Munich, including the Federal Finance Court of Germany and the European Patent Office. Munich is twinned with the following cities (date of agreement shown in parentheses): Edinburgh, Scotland "(1954)", Verona, Italy "(1960)", Bordeaux, France "(1964)", Sapporo, Japan "(1972)", Cincinnati, Ohio, United States "(1989)", Kiev, Ukraine "(1989)" and Harare, Zimbabwe "(1996)". Since the administrative reform in 1992, Munich has been divided into 25 boroughs or "Stadtbezirke", which themselves consist of smaller quarters. Allach-Untermenzing (23), Altstadt-Lehel (1), Aubing-Lochhausen-Langwied (22), Au-Haidhausen (5), Berg am Laim (14), Bogenhausen (13), Feldmoching-Hasenbergl (24), Hadern (20), Laim (25), Ludwigsvorstadt-Isarvorstadt (2), Maxvorstadt (3), Milbertshofen-Am Hart (11), Moosach (10), Neuhausen-Nymphenburg (9), Obergiesing (17), Pasing-Obermenzing (21), Ramersdorf-Perlach (16), Schwabing-Freimann (12), Schwabing-West (4), Schwanthalerhöhe (8), Sendling (6), Sendling-Westpark (7), Thalkirchen-Obersendling-Forstenried-Fürstenried-Solln (19), Trudering-Riem (15) and Untergiesing-Harlaching (18). The city has an eclectic mix of historic and modern architecture, because historic buildings destroyed in World War II were reconstructed, and new landmarks were built. A survey by the National Geographic Society's Centre for Sustainable Destinations for the National Geographic Traveller chose over 100 historic destinations around the world and ranked Munich 30th. At the centre of the city is the Marienplatz – a large open square named after the Mariensäule, a Marian column in its centre – with the Old and the New Town Hall. Its tower contains the Rathaus-Glockenspiel. Three gates of the demolished medieval fortification survive – the Isartor in the east, the Sendlinger Tor in the south and the Karlstor in the west of the inner city. The Karlstor leads up to the Stachus, a square dominated by the Justizpalast (Palace of Justice) and a fountain. The Peterskirche close to Marienplatz is the oldest church of the inner city. It was first built during the Romanesque period, and was the focus of the early monastic settlement in Munich before the city's official foundation in 1158. Near St. Peter, the Gothic hall-church Heiliggeistkirche (the Church of the Holy Spirit) was converted to baroque style from 1724 onwards; it looks down upon the Viktualienmarkt. The Frauenkirche serves as the cathedral for the Catholic Archdiocese of Munich and Freising. The nearby Michaelskirche is the largest renaissance church north of the Alps, while the Theatinerkirche is a basilica in Italianate high baroque, which had a major influence on Southern German baroque architecture. Its dome dominates the Odeonsplatz. Other baroque churches in the inner city include the Bürgersaalkirche, the Trinity Church and the St. Anna Damenstiftskirche. The Asamkirche was endowed and built by the Brothers Asam, pioneering artists of the rococo period. The large Residenz palace complex (begun in 1385) on the edge of Munich's Old Town, Germany's largest urban palace, ranks among Europe's most significant museums of interior decoration. Having undergone several extensions, it also contains the treasury and the splendid rococo Cuvilliés Theatre.
Next door to the Residenz, the neo-classical opera house, the National Theatre, was erected. Among the baroque and neoclassical mansions which still exist in Munich are the Palais Porcia, the Palais Preysing, the Palais Holnstein and the Prinz-Carl-Palais. All of these mansions are situated close to the Residenz, as is the Alte Hof, a medieval castle and the first residence of the Wittelsbach dukes in Munich. Lehel, a middle-class quarter east of the Altstadt, is characterised by numerous well-preserved townhouses. St. Anna im Lehel is the first rococo church in Bavaria. St. Lukas is the largest Protestant church in Munich.

Four grand royal avenues of the 19th century with official buildings connect Munich's inner city with its then-suburbs. The neoclassical Brienner Straße, starting at Odeonsplatz on the northern fringe of the Old Town close to the Residenz, runs from east to west and opens into the Königsplatz, designed with the "Doric" Propyläen, the "Ionic" Glyptothek and the "Corinthian" State Museum of Classical Art; behind it, St. Boniface's Abbey was erected. The area around Königsplatz is home to the Kunstareal, Munich's gallery and museum quarter (described below). Ludwigstraße also begins at Odeonsplatz and runs from south to north, skirting the Ludwig-Maximilians-Universität, the St. Louis church, the Bavarian State Library and numerous state ministries and palaces. The southern part of the avenue was constructed in Italian Renaissance style, while the north is strongly influenced by Italian Romanesque architecture. The Siegestor (gate of victory) sits at the northern end of Ludwigstraße, where it passes over into Leopoldstraße and the district of Schwabing begins. The neo-Gothic Maximilianstraße starts at Max-Joseph-Platz, where the Residenz and the National Theatre are situated, and runs from west to east. The avenue is framed by elaborately structured neo-Gothic buildings which house, among others, the Schauspielhaus, the building of the district government of Upper Bavaria and the Museum of Ethnology. After crossing the river Isar, the avenue circles the Maximilianeum, which houses the state parliament. The western portion of Maximilianstraße is known for its designer shops, luxury boutiques, jewellery stores, and one of Munich's foremost five-star hotels, the Hotel Vier Jahreszeiten. Prinzregentenstraße runs parallel to Maximilianstraße and begins at the Prinz-Carl-Palais. Many museums are on this avenue, such as the Haus der Kunst, the Bavarian National Museum and the Schackgalerie. The avenue crosses the Isar and circles the Friedensengel monument, then passes the Villa Stuck and Hitler's old apartment. The Prinzregententheater is at Prinzregentenplatz, further to the east.

In Schwabing and Maxvorstadt, many beautiful streets with continuous rows of Gründerzeit buildings can be found. Rows of elegant town houses and spectacular urban palais in many colours, often elaborately decorated with ornamental details on their façades, make up large parts of the areas west of Leopoldstraße (Schwabing's main shopping street), while in the eastern areas between Leopoldstraße and the Englischer Garten similar buildings alternate with almost rural-looking houses and whimsical mini-castles, often decorated with small towers. Numerous tiny alleys and shady lanes connect the larger streets and little plazas of the area, convincingly conveying the flair and atmosphere of the legendary artists' quarter as it was at the turn of the 20th century.
The wealthy district of Bogenhausen in the east of Munich is another area, little known at least among tourists, that is rich in extravagant architecture, especially around Prinzregentenstraße. One of Bogenhausen's most beautiful buildings is the Villa Stuck, the famed residence of the painter Franz von Stuck.

Two large baroque palaces in Nymphenburg and Oberschleissheim are reminders of Bavaria's royal past. Schloss Nymphenburg (Nymphenburg Palace), north-west of the city centre, is surrounded by an extensive park and is considered one of Europe's most beautiful royal residences. Northwest of Nymphenburg Palace is Schloss Blutenburg (Blutenburg Castle), an old ducal country seat with a late-Gothic palace church. Schloss Fürstenried (Fürstenried Palace), a baroque palace of similar structure to Nymphenburg but of much smaller size, was erected around the same time in the south-west of Munich. The second large baroque residence is Schloss Schleissheim (Schleissheim Palace), located in the suburb of Oberschleissheim, a palace complex encompassing three separate residences: Altes Schloss Schleissheim (the old palace), Neues Schloss Schleissheim (the new palace) and Schloss Lustheim (Lustheim Palace). Most parts of the palace complex serve as museums and art galleries. The Deutsches Museum's Flugwerft Schleissheim flight exhibition centre is located nearby, on the Schleissheim Special Landing Field. The Bavaria statue in front of the neo-classical Ruhmeshalle is a monumental bronze sand-cast 19th-century statue at Theresienwiese. The Grünwald castle is the only medieval castle in the Munich area that still exists. St Michael in Berg am Laim is a church in the suburbs; another church by Johann Michael Fischer is St George in Bogenhausen. Most of the boroughs have parish churches that originate from the Middle Ages, such as the pilgrimage church of St Mary in Ramersdorf. The oldest church within the city borders is Heilig Kreuz in Fröttmaning, next to the Allianz Arena, known for its Romanesque fresco.

Especially in its suburbs, Munich features a wide and diverse array of modern architecture, although strict, culturally sensitive height limits have restricted the construction of skyscrapers in order to preserve views of the distant Bavarian Alps. Most high-rise buildings are clustered at the northern edge of Munich, such as the Hypo-Haus, the Arabella High-Rise Building, the Highlight Towers, Uptown Munich, Münchner Tor and the BMW Headquarters next to the Olympic Park. Several other high-rise buildings are located near the city centre and on the Siemens campus in southern Munich. A further landmark of modern Munich is the architecture of its sport stadiums (described below). In Fasangarten is the former McGraw Kaserne, a former US Army base, near Stadelheim Prison.

Munich is a densely built city but has numerous public parks. The Englischer Garten, close to the city centre, is larger than Central Park in New York City and is one of the world's largest urban public parks. It contains a naturist (nudist) area, numerous bicycle and jogging tracks, as well as bridle paths. It was designed and laid out by Benjamin Thompson, Count Rumford, both for pleasure and as a work area for the city's vagrants and homeless. Nowadays it is entirely a park, its southern half dominated by wide open areas, hills, monuments and beach-like stretches along the streams Eisbach and Schwabinger Bach.
In contrast, its less-frequented northern part is much quieter, with many old trees and thick undergrowth. Several beer gardens can be found in both parts of the Englischer Garten, the best known of which is at the Chinese Tower (Chinesischer Turm). Other large green spaces are the modern Olympiapark, the Westpark, and the parks of Nymphenburg Palace (with the Botanischer Garten München-Nymphenburg to the north) and Schleissheim Palace. The city's oldest park is the Hofgarten, near the Residenz, dating back to the 16th century. The former royal Hirschgarten, site of the largest beer garden in town, was founded in 1780 as a deer park, and deer still live there. The city's zoo is the Tierpark Hellabrunn, near the Flaucher island in the Isar in the south of the city. Another notable park is the Ostpark in the Ramersdorf-Perlach borough, which also houses the Michaelibad, the largest water park in Munich.

Munich is home to several professional football teams, including Bayern Munich, Germany's most successful club and a multiple UEFA Champions League winner. Other notable clubs include 1860 Munich, for a long time Bayern's rivals on a somewhat equal footing, who currently play in the third-tier 3. Liga along with another former Bundesliga club, SpVgg Unterhaching. FC Bayern Munich's basketball section currently plays in the Beko Basketball Bundesliga. The city hosted the final stages of the FIBA EuroBasket 1993, where the German national basketball team won the gold medal. The city's ice hockey club is EHC Red Bull München, who play in the Deutsche Eishockey Liga; the team has won three DEL championships, in 2016, 2017 and 2018.

Munich hosted the 1972 Summer Olympics; the Munich massacre took place in the Olympic Village. Munich was one of the host cities for the 2006 Football World Cup, whose matches in the city were held not in Munich's Olympic Stadium but in a new football-specific stadium, the Allianz Arena. Munich bid to host the 2018 Winter Olympic Games but lost to Pyeongchang. In September 2011 the DOSB president Thomas Bach confirmed that Munich would bid again for the Winter Olympics in the future.

Regular annual road running events in Munich are the Munich Marathon in October, the Stadtlauf at the end of June, the company run B2Run in July, the New Year's Run on 31 December, the Spartan Race Sprint, the Olympia Alm Crosslauf and the Bestzeitenmarathon. Public sporting facilities in Munich include ten indoor and eight outdoor swimming pools, which are operated by the Munich City Utilities (SWM) communal company. Popular indoor pools include the Olympia Schwimmhalle of the 1972 Summer Olympics, the wave pool Cosimawellenbad, and the Müllersches Volksbad, which was built in 1901. Swimming within Munich's city limits is also possible in several artificial lakes, such as the Riemer See or the Langwieder lake district.

Munich has a reputation as a surfing hotspot, offering the world's best-known river surfing spot, the Eisbach wave, which is located at the southern edge of the Englischer Garten and is used by surfers day and night, all year round. Half a kilometre down the river there is a second, easier wave for beginners, the so-called Kleine Eisbachwelle. Two further surf spots are located along the river Isar: the wave in the Floßlände channel and a wave downstream of the Wittelsbacherbrücke bridge.

Bavarian dialects are spoken in and around Munich; the local variety is West Middle Bavarian, or Old Bavarian ("Westmittelbairisch" / "Altbairisch").
Austro-Bavarian has no official status with the Bavarian authorities or local government, but it is recognised by SIL International and has its own ISO 639 code.

The Deutsches Museum (German Museum), located on an island in the river Isar, is the largest and one of the oldest science museums in the world. Three redundant exhibition buildings that are under a protection order were converted to house the Verkehrsmuseum, which holds the land transport collections of the Deutsches Museum. Several decentralised museums, many of them public collections at the Ludwig-Maximilians-Universität, show the expanded state collections of palaeontology, geology, mineralogy, zoology, botany and anthropology.

The city has several important art galleries, most of which can be found in the Kunstareal, including the Alte Pinakothek, the Neue Pinakothek, the Pinakothek der Moderne and the Museum Brandhorst. The Alte Pinakothek contains a treasure trove of works by European masters from between the 14th and 18th centuries. The collection reflects the eclectic tastes of the Wittelsbachs over four centuries and is sorted by schools over two floors. Major displays include Albrecht Dürer's Christ-like "Self-Portrait" (1500) and his "Four Apostles", Raphael's paintings "The Canigiani Holy Family" and "Madonna Tempi", as well as Peter Paul Rubens's large "Judgment Day". The gallery houses one of the world's most comprehensive Rubens collections. The Lenbachhaus houses works by the group of Munich-based modernist artists known as Der Blaue Reiter (The Blue Rider). An important collection of Greek and Roman art is held in the Glyptothek and the Staatliche Antikensammlung (State Antiquities Collection); King Ludwig I managed to acquire such pieces as the Medusa Rondanini, the Barberini Faun and figures from the Temple of Aphaea on Aegina for the Glyptothek. Another important museum in the Kunstareal is the Egyptian Museum. The Gothic Morris dancers carved by Erasmus Grasser are exhibited in the Munich City Museum in the old Gothic arsenal building in the inner city.

Another area for the arts, next to the Kunstareal, is the Lehel quarter between the old town and the river Isar: the Museum Five Continents in Maximilianstraße holds Germany's second-largest collection of artefacts and objects from outside Europe, while the Bavarian National Museum and the adjoining Bavarian State Archaeological Collection in Prinzregentenstraße rank among Europe's major art and cultural history museums. The nearby Schackgalerie is an important gallery of German 19th-century paintings. The former Dachau concentration camp, now a memorial site, is outside the city.

Munich is a major international cultural centre and has played host to many prominent composers, including Orlando di Lasso, W. A. Mozart, Carl Maria von Weber, Richard Wagner, Gustav Mahler, Richard Strauss, Max Reger and Carl Orff. With the Munich Biennale, founded by Hans Werner Henze, and the A*DEvantgarde festival, the city still contributes to modern music theatre. Some of classical music's best-known pieces were created in and around Munich by composers born in the area, for example Richard Strauss's tone poem "Also sprach Zarathustra" and Carl Orff's "Carmina Burana". At the Nationaltheater several of Richard Wagner's operas premiered under the patronage of Ludwig II of Bavaria; it is the home of the Bavarian State Opera and the Bavarian State Orchestra.
Next door, the modern Residenz Theatre was erected in the building that had housed the Cuvilliés Theatre before World War II. Many operas were staged there, including the premiere of Mozart's "Idomeneo" in 1781. The Gärtnerplatz Theatre is a ballet and musical state theatre, while another opera house, the Prinzregententheater, has become the home of the Bavarian Theatre Academy and the Munich Chamber Orchestra. The modern Gasteig centre houses the Munich Philharmonic Orchestra. The third orchestra in Munich of international importance is the Bavarian Radio Symphony Orchestra; its primary concert venue is the Herkulessaal in the former royal residence, the Munich Residenz. Many important conductors have been attracted by the city's orchestras, including Felix Weingartner, Hans Pfitzner, Hans Rosbaud, Hans Knappertsbusch, Sergiu Celibidache, James Levine, Christian Thielemann, Lorin Maazel, Rafael Kubelík, Eugen Jochum, Sir Colin Davis, Mariss Jansons, Bruno Walter, Georg Solti, Zubin Mehta and Kent Nagano. The Deutsches Theater is a stage for shows, big events and musicals, and is Germany's largest theatre for guest performances.

Munich's contributions to modern popular music are often overlooked in favour of its strong association with classical music, but they are numerous: the city had a strong music scene in the 1960s and 1970s, with many internationally renowned bands and musicians frequently performing in its clubs. Munich was also the centre of Krautrock in southern Germany, with important bands such as Amon Düül II, Embryo and Popol Vuh hailing from the city. In the 1970s, the Musicland Studios developed into one of the most prominent recording studios in the world, with bands such as the Rolling Stones, Led Zeppelin, Deep Purple and Queen recording albums there. Munich also played a significant role in the development of electronic music: genre pioneer Giorgio Moroder, who invented synth disco and electronic dance music, and Donna Summer, one of disco music's most important performers, both lived and worked in the city. In the late 1990s, electroclash was substantially co-invented, if not outright invented, in Munich, where DJ Hell introduced and assembled the international pioneers of the genre through his International DeeJay Gigolo Records label. Other notable musicians and bands from Munich include Konstantin Wecker, the Spider Murphy Gang, Münchener Freiheit, Lou Bega, Megaherz, FSK, Colour Haze and Sportfreunde Stiller. Music is so important in the Bavarian capital that the city hall grants permits every day for ten musicians to perform in the streets around Marienplatz; performers such as Olga Kholodnaya and Alex Jacobowitz entertain locals and tourists there daily.

Next to the Bavarian Staatsschauspiel in the Residenz Theatre (Residenztheater), the Munich Kammerspiele in the Schauspielhaus is one of the most important German-language theatres in the world. Since Gotthold Ephraim Lessing's premieres in 1775, many important writers have staged their plays in Munich, among them Christian Friedrich Hebbel, Henrik Ibsen and Hugo von Hofmannsthal. The city is known as the second-largest publishing centre in the world (around 250 publishing houses have offices in the city), and many national and international publications are published in Munich, such as Arts in Munich, LAXMag and Prinz.

At the turn of the 20th century, Munich, and especially its suburb of Schwabing, was the preeminent cultural metropolis of Germany.
Its importance as a centre for both literature and the fine arts was second to none in Europe, with numerous German and non-German artists moving there. For example, Wassily Kandinsky chose Munich over Paris to study at the Akademie der Bildenden Künste München and, along with many other painters and writers living in Schwabing at that time, had a profound influence on modern art. Prominent literary figures worked in Munich especially during the final decades of the Kingdom of Bavaria, the so-called "Prinzregentenzeit" (literally "prince regent's time") under the reign of Luitpold, Prince Regent of Bavaria, a period often described as a cultural golden age for both Munich and Bavaria as a whole. Among them were luminaries such as Thomas Mann, Heinrich Mann, Paul Heyse, Rainer Maria Rilke, Ludwig Thoma, Fanny zu Reventlow, Oskar Panizza, Gustav Meyrink, Max Halbe, Erich Mühsam and Frank Wedekind. For a short while, Vladimir Lenin lived in Schwabing, where he wrote and published his most important work, "What Is to Be Done?" Central to Schwabing's bohemian scene (although they were actually often located in the nearby Maxvorstadt quarter) were "Künstlerlokale" (artists' cafés) like Café Stefanie or Kabarett Simpl, whose liberal ways differed fundamentally from Munich's more traditional localities. The Simpl, which survives to this day (although with little relevance to the city's contemporary art scene), was named after Munich's anti-authoritarian satirical magazine "Simplicissimus", founded in 1896 by Albert Langen and Thomas Theodor Heine, which quickly became an important organ of the "Schwabinger Bohème". Its caricatures and biting satirical attacks on Wilhelmine German society were the result of countless collaborative efforts by many of the best visual artists and writers from Munich and elsewhere.

The period immediately before World War I saw continued economic and cultural prominence for the city. Thomas Mann wrote in his novella "Gladius Dei" about this period: "München leuchtete" (literally "Munich shone"). Munich remained a centre of cultural life during the Weimar period, with figures such as Lion Feuchtwanger, Bertolt Brecht, Peter Paul Althaus, Stefan George, Ricarda Huch, Joachim Ringelnatz, Oskar Maria Graf, Annette Kolb, Ernst Toller, Hugo Ball and Klaus Mann adding to the already established big names. Karl Valentin was Germany's most important cabaret performer and comedian and is to this day well remembered and beloved as a cultural icon of his hometown. Between 1910 and 1940, he wrote and performed in many absurdist sketches and short films that were highly influential, earning him the nickname of the "Charlie Chaplin of Germany". Many of Valentin's works would not be imaginable without his congenial partner Liesl Karlstadt, who often played male characters to hilarious effect in their sketches. After World War II, Munich soon again became a focal point of the German literary scene and remains so to this day, with writers as diverse as Wolfgang Koeppen, Erich Kästner, Eugen Roth, Alfred Andersch, Elfriede Jelinek, Hans Magnus Enzensberger, Michael Ende, Franz Xaver Kroetz, Gerhard Polt, John Vincent Palatine and Patrick Süskind calling the city their home.

From the Gothic to the Baroque era, the fine arts were represented in Munich by artists like Erasmus Grasser, Jan Polack, Johann Baptist Straub, Ignaz Günther, Hans Krumpper, Ludwig von Schwanthaler, Cosmas Damian Asam, Egid Quirin Asam, Johann Baptist Zimmermann, Johann Michael Fischer and François de Cuvilliés.
Munich had already become an important place for painters like Carl Rottmann, Lovis Corinth, Wilhelm von Kaulbach, Carl Spitzweg, Franz von Lenbach, Franz von Stuck, Karl Piloty and Wilhelm Leibl when Der Blaue Reiter (The Blue Rider), a group of expressionist artists, was established in Munich in 1911. The city was home to the Blue Rider painters Paul Klee, Wassily Kandinsky, Alexej von Jawlensky, Gabriele Münter, Franz Marc, August Macke and Alfred Kubin. Kandinsky's first abstract painting was created in Schwabing.

Munich was (and in some cases still is) home to many of the most important filmmakers of the New German Cinema movement, including Rainer Werner Fassbinder, Werner Herzog, Edgar Reitz and Herbert Achternbusch. In 1971, the Filmverlag der Autoren was founded, cementing the city's role in the movement's history. Munich served as the location for many of Fassbinder's films. The Hotel Deutsche Eiche near Gärtnerplatz served as a kind of centre of operations for Fassbinder and his "clan" of actors. New German Cinema is considered by far the most important artistic movement in German cinema history since the era of German Expressionism in the 1920s. In 1919, the Bavaria Film Studios were founded, and they developed into one of Europe's largest film studios. Directors like Alfred Hitchcock, Billy Wilder, Orson Welles, John Huston, Ingmar Bergman, Stanley Kubrick, Claude Chabrol, Fritz Umgelter, Rainer Werner Fassbinder, Wolfgang Petersen and Wim Wenders made films there. Among the internationally well-known films produced at the studios are "The Pleasure Garden" (1925) by Alfred Hitchcock, "Paths of Glory" (1957) by Stanley Kubrick, "The Great Escape" (1963) by John Sturges, "Willy Wonka & the Chocolate Factory" (1971) by Mel Stuart, and both "Das Boot" (1981) and "The Neverending Story" (1984) by Wolfgang Petersen. Munich remains one of the centres of the German film and entertainment industry.

Munich hosts several festivals and trade shows throughout the year, including the annual "High End Munich" trade show.

March and April, city-wide: Starkbierfest is held for three weeks during Lent, between Carnival and Easter, celebrating Munich's "strong beer". Starkbier was created in 1651 by the local Paulaner monks, who drank this "Flüssiges Brot" ("liquid bread") to survive the fasting of Lent. It became a public festival in 1751 and is now the second-largest beer festival in Munich. Starkbierfest is also known as the "fifth season" and is celebrated in beer halls and restaurants around the city.

April and May, Theresienwiese: Held for two weeks from the end of April to the beginning of May, Frühlingsfest celebrates spring and the new local spring beers, and is commonly referred to as the "little sister of Oktoberfest". There are two beer tents, Hippodrom and Festhalle Bayernland, as well as one roofed beer garden, the Münchner Weißbiergarten. There are also roller coasters, fun houses, slides, and a Ferris wheel. Other attractions of the festival include a flea market on the festival's first Saturday, a "Beer Queen" contest, a vintage car show on the first Sunday, fireworks every Friday night, and a "Day of Traditions" on the final day.

May, August, and October, Mariahilfplatz: Auer Dult is Europe's largest jumble sale, with fairs of its kind dating back to the 14th century. The Auer Dult is a traditional market with 300 stalls selling handmade crafts, household goods, and local foods, and it offers carnival rides for children. It has taken place three times a year, over nine days each, since 1905.
July, English Garden: Traditionally a ball for Munich's domestic servants, cooks, nannies, and other household staff, the Kocherlball ("cook's ball") was a chance for the lower classes to take the morning off and dance together before the families of their households woke up. It now runs between 6 and 10 a.m. on the third Sunday in July at the Chinese Tower in Munich's English Garden.

July and December, Olympiapark: For three weeks in July, and then three weeks in December, Tollwood showcases fine and performing arts with live music, circus acts, and several lanes of booths selling handmade crafts, as well as organic international cuisine. According to the festival's website, Tollwood's goal is to promote culture and the environment, with the main themes of "tolerance, internationality, and openness". To promote these ideals, 70% of all Tollwood events and attractions are free.

September and October, Theresienwiese: The largest beer festival in the world, Munich's Oktoberfest runs for 16–18 days from the end of September through early October. Oktoberfest celebrates the wedding of Bavarian Crown Prince Ludwig to Princess Therese of Saxony-Hildburghausen, which took place on 12 October 1810. Over the last 200 years the festival has grown to span 85 acres and now welcomes over 6 million visitors every year. There are 14 beer tents, which together can seat 119,000 attendees at a time and serve beer from the six major breweries of Munich: Augustiner, Hacker-Pschorr, Löwenbräu, Paulaner, Spaten and Staatliches Hofbräuhaus. Over 7 million litres of beer are consumed at each Oktoberfest. There are also over 100 rides, ranging from bumper cars to full-sized roller coasters, as well as the more traditional Ferris wheels and swings. Food can be bought in each tent, as well as at various stalls throughout the fairgrounds. Oktoberfest hosts 144 caterers and employs 13,000 people.

November and December, city-wide: Munich's Christmas markets, or Christkindlmärkte, are held throughout the city from late November until Christmas Eve, the largest spanning the Marienplatz and surrounding streets. There are hundreds of stalls selling handmade goods, Christmas ornaments and decorations, and Bavarian Christmas foods, including pastries, roasted nuts, and Glühwein.

Munich's cuisine is part of the wider Bavarian cuisine. Münchner Weisswurst ("white sausage"), a Munich speciality, was invented here in 1857. Traditionally eaten only before noon – a custom dating to a time before refrigerators – these morsels are often served with sweet mustard and freshly baked pretzels.

Munich is known for its breweries, and Weissbier (or "Weißbier" / "Weizenbier", wheat beer) is a speciality from Bavaria. Helles, a pale lager with a translucent gold colour, is the most popular Munich beer today, although it is not old (it was introduced only in 1895) and is the result of a change in beer tastes. Helles has largely replaced Munich's dark beer, Dunkles, which gets its colour from roasted malt; Dunkles was the typical beer in Munich in the 19th century, but it is now more of a speciality. Starkbier, the strongest Munich beer, has 6%–9% alcohol content; it is dark amber in colour and has a heavy malty taste. It is sold particularly during the Lenten "Starkbierzeit" (strong beer season), which begins on or before St. Joseph's Day (19 March). The beer served at Oktoberfest is a special type of Märzen beer with a higher alcohol content than regular Helles.
There are countless "Wirtshäuser" (traditional Bavarian ale houses/restaurants) all over the city area, many of which also have small outdoor seating areas. "Biergärten" (beer gardens) are popular fixtures of Munich's gastronomic landscape. They are central to the city's culture and serve as a kind of melting pot for people from all walks of life – locals, expatriates and tourists alike. Patrons are allowed to bring their own food to a beer garden, but bringing one's own drinks is forbidden. There are many smaller beer gardens and around twenty major ones, each providing at least a thousand seats, with four of the largest in the Englischer Garten: Chinesischer Turm (Munich's second-largest beer garden, with 7,000 seats), Seehaus, Hirschau and Aumeister. The Nockherberg, the Hofbräukeller (not to be confused with the Hofbräuhaus) and the Löwenbräukeller are other beer gardens. The Hirschgarten is the largest beer garden in the world, with 8,000 seats.

There are six main breweries in Munich: Augustiner-Bräu, Hacker-Pschorr, Hofbräu, Löwenbräu, Paulaner and Spaten-Franziskaner-Bräu (with the separate brands Spaten and Franziskaner, the latter mainly for Weissbier). Also much consumed, though not from Munich and thus without the right to a tent at Oktoberfest, are Tegernseer and Schneider Weisse, the latter of which has a major beer hall in Munich. Smaller breweries, such as Giesinger Bräu, are becoming more prevalent in Munich, though they too do not have tents at Oktoberfest.

The Circus Krone, based in Munich, is one of the largest circuses in Europe. It was the first circus in Western Europe to occupy a building of its own, and it is still one of only a few to do so.

Nightlife in Munich is located mostly in the city centre (Altstadt-Lehel) and in the boroughs of Maxvorstadt, Ludwigsvorstadt-Isarvorstadt, Au-Haidhausen and Schwabing. Between Sendlinger Tor and Maximiliansplatz lies the so-called Feierbanane ("party banana"), a roughly banana-shaped unofficial party zone along Sonnenstraße, characterised by a high concentration of clubs, bars and restaurants. The Feierbanane has become the mainstream focus of Munich's nightlife and tends to become crowded, especially at weekends. It has also been the subject of some debate among city officials because of alcohol-related security issues and the party zone's general impact on local residents as well as daytime businesses.

Ludwigsvorstadt-Isarvorstadt's two main quarters, the Gärtnerplatzviertel and the Glockenbachviertel, are both considered decidedly less mainstream than most other nightlife hotspots in the city and are renowned for their many hip and laid-back bars and clubs, as well as for being Munich's main centres of gay culture. On warm spring and summer nights, hundreds of young people gather at Gärtnerplatz to relax, talk with friends and drink beer. Maxvorstadt has many smaller bars that are especially popular with university students. Schwabing, once Munich's first and foremost party district – with legendary clubs such as Big Apple, PN, Domicile, Hot Club, Piper Club, Tiffany, Germany's first large-scale disco Blow Up and the underwater nightclub Yellow Submarine, as well as bars such as Schwabinger 7 and Schwabinger Podium – has lost much of its nightlife activity in recent decades, mainly due to gentrification and the resulting high rents. It has become the city's most coveted and expensive residential district, attracting affluent citizens with little interest in partying.
Since the mid-1990s, the Kunstpark Ost and its successor, the Kultfabrik, a former industrial complex converted into a large party area near München Ostbahnhof in Berg am Laim, hosted more than 30 clubs and was especially popular among younger people and residents of the metropolitan area surrounding Munich. The Kultfabrik was closed at the end of 2015 so that the area could be converted into a residential and office quarter. Apart from the Kultfabrik and the smaller Optimolwerke, there is a wide variety of establishments in the urban parts of nearby Haidhausen. Before the Kunstpark Ost, there had already been an accumulation of internationally known nightclubs in the remains of the abandoned former Munich-Riem Airport.

Munich nightlife tends to change dramatically and quickly. Establishments open and close every year, and due to gentrification and the overheated housing market many survive only a few years, while others last longer. Beyond the already mentioned venues of the 1960s and 1970s, nightclubs with international recognition in recent history included the Tanzlokal Größenwahn, the Atomic Cafe and the techno clubs Babalu and Ultraschall. From 1995 to 2001, Munich was also home to the Union Move, one of the largest technoparades in Germany.

Munich has two directly connected gay quarters, which can essentially be seen as one: the Gärtnerplatzviertel and the Glockenbachviertel, both part of the Ludwigsvorstadt-Isarvorstadt district. Freddie Mercury had an apartment near the Gärtnerplatz, and the transsexual icon Romy Haag had a club in the city centre for many years.

Munich has more than 100 nightclubs and thousands of bars and restaurants within the city limits. Popular techno clubs include Blitz Club, Harry Klein, Rote Sonne, Bahnwärter Thiel, Bob Beaman, Pimpernel, Charlie and Palais. Popular mixed-music clubs include Call me Drella, Cord, Wannda Circus, Tonhalle, Backstage, Muffathalle, Ampere, Pacha, P1, Zenith, Minna Thiel and the party ship Alte Utting. Notable bars (pubs are located all over the city) include Charles Schumann's Cocktail Bar, Havana Club, Sehnsucht, Bar Centrale, Ksar, Holy Home, Eat the Rich, Negroni, Die Goldene Bar and Bei Otto (a Bavarian-style pub).

Munich is a leading location for science and research, with a long list of Nobel Prize laureates from Wilhelm Conrad Röntgen in 1901 to Theodor Hänsch in 2005. It has been an intellectual centre since the time of Emperor Louis IV, when philosophers such as Michael of Cesena, Marsilius of Padua and William of Ockham were protected at the emperor's court. The Ludwig Maximilian University (LMU) and the Technische Universität München (TU or TUM) were two of the first three German universities to be awarded the title "elite university" by a selection committee composed of academics and members of the ministries of education and research of the federation and the German states (Länder). Only the two Munich universities and the Technical University of Karlsruhe (now part of the Karlsruhe Institute of Technology) have held this honour, and the implied greater chances of attracting research funds, since the first evaluation round in 2006. The Max Planck Society, an independent German non-profit research organisation, has its administrative headquarters in Munich, and several of its institutes are located in the Munich area.
The Fraunhofer Society, the German non-profit research organisation for applied research, also has its headquarters in Munich, and several of its institutes are likewise located in the Munich area.

Munich has the strongest economy of any German city and the lowest unemployment rate (3.0% in June 2014) of the German cities with more than a million people (the others being Berlin, Hamburg and Cologne). The city is also the economic centre of southern Germany. In February 2005, Munich topped a ranking by the magazine "Capital" of the economic prospects of 60 German cities between 2002 and 2011. Munich is a financial centre and a global city and holds the headquarters of many companies, including more companies listed on the DAX than any other German city, as well as the German or European headquarters of many foreign companies such as McDonald's and Microsoft. One of the best-known newly established Munich companies is Flixbus. Munich holds the headquarters of Siemens AG (electronics), BMW (cars), MAN AG (truck manufacturing, engineering), MTU Aero Engines (aircraft engines), Linde (gases) and Rohde & Schwarz (electronics). Among German cities with more than 500,000 inhabitants, purchasing power is highest in Munich (€26,648 per inhabitant). In 2006, Munich blue-collar workers enjoyed an average hourly wage of €18.62 (about $20). A breakdown of Global 500 companies by city proper (not metropolitan area) listed Munich in 8th position in 2009.

Munich is also a centre for biotechnology, software and other service industries, and it is home to the headquarters of many other large companies, such as the injection moulding machine manufacturer Krauss-Maffei, the camera and lighting manufacturer Arri, the semiconductor firm Infineon Technologies (headquartered in the suburban town of Neubiberg) and the lighting giant Osram. Munich has significance as a financial centre second only to Frankfurt, being home to HypoVereinsbank and the Bayerische Landesbank; it outranks Frankfurt, though, as the home of insurance companies such as Allianz (insurance) and Munich Re (reinsurance).

Munich is the largest publishing city in Europe and home to the "Süddeutsche Zeitung", one of Germany's biggest daily newspapers. The city is also the location of the programming headquarters of Germany's largest public broadcasting network, ARD, while the largest commercial network, Pro7-Sat1 Media AG, is headquartered in the suburb of Unterföhring. The headquarters of the German branch of Random House, the world's largest publishing house, and of the Burda publishing group are also in Munich. The Bavaria Film Studios, one of Europe's biggest film production studios, are located in the suburb of Grünwald.

Munich has an extensive public transport system consisting of an underground metro, trams, buses and high-speed rail. In 2015, the transport modal share in Munich was 38 percent public transport, 25 percent car, 23 percent walking, and 15 percent bicycle; the public transport system delivered 566 million passenger trips that year. Munich is the hub of a well-developed regional transportation system, including the second-largest airport in Germany and the Berlin–Munich high-speed railway, which connects Munich to the German capital with a journey time of about 4 hours.
The transport logistic trade fair is held every two years at the Neue Messe München (Messe München International). FlixMobility, which offers intercity coach services, is headquartered in Munich.

For its urban population of 2.6 million people, Munich and its closest suburbs have a comprehensive network of public transport incorporating the Munich U-Bahn (underground railway), the Munich S-Bahn (suburban trains), trams and buses. The system is supervised by the Munich Transport and Tariff Association ("Münchner Verkehrs- und Tarifverbund GmbH"). The Munich tramway, in operation since 1876, is the oldest existing public transport system in the city. Munich also has an extensive network of bus lines.

The extensive network of subway and tram lines assists and complements pedestrian movement in the city centre. The 700-metre-long Kaufinger Straße, which starts near the main railway station, forms a pedestrian east–west spine that traverses almost the entire centre, while Weinstraße leads off northwards to the Hofgarten. These major spines and many smaller streets cover an extensive area of the centre that can be enjoyed on foot and by bike. The transformation of the historic area into a pedestrian-priority zone enables and invites walking and cycling by making these active modes of transport comfortable, safe and enjoyable. These attributes result from applying the principle of "filtered permeability", which selectively restricts the number of roads that run through the centre: while certain streets are discontinuous for cars, they connect to a network of pedestrian and bike paths that permeate the entire centre. In addition, these paths pass through public squares and open spaces, increasing the enjoyment of the trip. The logic of filtering a mode of transport is fully expressed in a comprehensive model for laying out neighbourhoods and districts, the Fused Grid.

The average amount of time people spend commuting to and from work with public transit in Munich on a weekday is 56 minutes; 11% of public transit users spend more than two hours travelling each day. The average time people wait at a stop or station is ten minutes, while 6% of passengers wait over twenty minutes on an average day. The average distance people ride in a single trip is 9.2 km, and 21% travel over 12 km in a single direction.

Cycling has a strong presence in the city and is recognised as a good alternative to motorised transport. The growing network of bicycle lanes is widely used throughout the year. Cycle paths can be found alongside the majority of sidewalks and streets, although the newer or renovated ones are much easier to tell apart from pavements than the older ones. The cycle paths usually involve a longer route than the road, as they are diverted around objects, and the presence of pedestrians can make them quite slow. A modern bike-hire system is available within the area bounded by the "Mittlerer Ring".

München Hauptbahnhof, the main railway station, is located in the city centre and is one of three long-distance stations in Munich, the others being München Ost (to the east) and München-Pasing (to the west). All three stations are connected to the public transport system and serve as transportation hubs. München Hauptbahnhof serves about 450,000 passengers a day, which puts it on par with other large stations in Germany, such as Hamburg Hauptbahnhof and Frankfurt Hauptbahnhof.
München Hauptbahnhof and München Ost are two of the 21 stations in Germany classified by Deutsche Bahn as category 1 stations. The mainline station is a terminal station with 32 platforms; the subterranean S-Bahn station (2 platforms) and U-Bahn station (6 platforms) are through stations. ICE high-speed trains stop only at Munich-Pasing and Munich Hauptbahnhof, while InterCity and EuroCity trains to destinations east of Munich also stop at München Ost. Since 28 May 2006, Munich has been connected to Nuremberg via Ingolstadt by the Nuremberg–Munich high-speed railway line. In 2017, the Berlin–Munich high-speed railway opened, providing a journey time of less than 4 hours between the two cities.

Munich is an integral part of the motorway network of southern Germany. Motorways from Stuttgart (west), Nuremberg, Frankfurt and Berlin (north), Deggendorf and Passau (east), Salzburg and Innsbruck (south-east), Garmisch-Partenkirchen (south) and Lindau (south-west) terminate at Munich, allowing direct access to the different parts of Germany, Austria and Italy. Traffic, however, is often very heavy in and around Munich. Traffic jams are commonplace during rush hour as well as at the beginning and end of major holidays in Germany. There are few "green waves" or roundabouts, and the city's prosperity often causes an abundance of obstructive construction sites. Other contributing factors are the extraordinarily high rate of car ownership per capita (multiple times that of Berlin), the city's historically grown and largely preserved centralised urban structure, which leads to a very high concentration of traffic in specific areas, and sometimes poor planning (for example, bad traffic-light synchronisation and a less than ideal ring road).

Franz Josef Strauss International Airport (IATA: MUC, ICAO: EDDM) is the second-largest airport in Germany and the seventh-largest in Europe, after London Heathrow, Paris Charles de Gaulle, Frankfurt, Amsterdam, Madrid and Istanbul Atatürk. It is used by about 46 million passengers a year and lies north-east of the city centre. It replaced the smaller Munich-Riem airport in 1992. The airport can be reached by suburban train lines from the city; from the main railway station the journey takes 40–45 minutes. An express train is planned that will cut travel time to 20–25 minutes, with limited stops on dedicated tracks. A magnetic levitation train (the Transrapid), which was to have run from the central station to the airport in a travel time of 10 minutes, had been approved but was cancelled in March 2008 because of cost escalation and heavy protests. Lufthansa opened its second hub at the airport when Terminal 2 was opened in 2003.

In 2008, the Bavarian state government granted a licence to expand Oberpfaffenhofen Air Station, located west of Munich, for commercial use. These plans were opposed by many residents of the Oberpfaffenhofen area as well as by other branches of local government, including the city of Munich, which took the case to court. In October 2009, however, the permit allowing up to 9,725 business flights per year to depart from or land at Oberpfaffenhofen was confirmed by a regional judge. Despite its distance from Munich, Memmingen Airport has been advertised as Airport Munich West. After 2005, passenger traffic of the nearby Augsburg Airport was relocated to Munich Airport, leaving the Augsburg region of Bavaria without a passenger airport within close reach.

The Munich agglomeration, comprising about 2.6 million inhabitants, sprawls across the plain of the Alpine foothills.
Several smaller traditional Bavarian towns and cities, such as Dachau, Freising, Erding, Starnberg, Landshut and Moosburg, are today part of the Greater Munich Region, formed by Munich and the surrounding districts, which makes up the Munich Metropolitan Region with a population of about 6 million people. South of Munich there are numerous nearby freshwater lakes, such as Lake Starnberg, Ammersee, Chiemsee, Walchensee, Kochelsee, Tegernsee, Schliersee, Simssee, Staffelsee, Wörthsee, Kirchsee and the Osterseen (Easter Lakes), which are popular among the people of Munich for recreation, swimming and watersports; they can be reached quickly by car, and a few also by Munich's S-Bahn. Munich has seven sister cities.
https://en.wikipedia.org/wiki?curid=19058
Millsaps College Millsaps College is a private liberal arts college in Jackson, Mississippi. Founded in 1890 and affiliated with the United Methodist Church, Millsaps is home to 985 students. The college was founded in 1889–90 by a Confederate veteran, Major Reuben Webster Millsaps, who donated the land for the college and $50,000. Dr. William Belton Murrah was the college's first president, and Bishop Charles Betts Galloway of the Methodist Episcopal Church, South organized the college's early fund-raising efforts. Both men were honored with campus halls named after them. Major Millsaps and his wife are interred in a tomb near the center of campus. The United Methodist Church continues to be affiliated with the college.

Nearly 53 years after the college's founding, Millsaps was chosen as one of 131 sites for the training of Navy and Marine officers in the V-12 Navy College Training Program. In April 1943, 380 students arrived for the Navy V-12 program, which offered engineering, pre-medical, and pre-dental coursework. Thereafter Millsaps began accepting students year-round for the program; a total of 873 officer candidates went through Millsaps between 1943 and 1945. Traces of the Navy V-12 unit appear in the "Bobashela" (the school yearbook) in 1944: that year, the "Bobashela" staff dedicated the yearbook to the unit and to Dr. Sanders, one of the unit's advisers, and one section memorialized students who had been killed in action.

Despite its religious affiliation, the curriculum is secular. The writing-intensive core curriculum requires each student to compile an acceptable portfolio of written work before completion of the sophomore year. Candidates for an undergraduate degree must also pass oral and written comprehensive exams in their major field of study. These exams last up to three hours and may cover any required or elective course offered by the major department. Unacceptable performance on comprehensive exams will prevent a candidate from receiving a degree, even if all course work has been completed. "Comps" are usually associated with graduate degree requirements, so their inclusion at the undergraduate level is a source of pride (and possibly pressure) for Millsaps students.

Millsaps offers B.S., B.A., B.B.A., M.B.A. and MAcc degrees and corresponding programs. The current undergraduate population is 910 students on a 103 acre (417,000 m²) campus near downtown Jackson, Mississippi. The student-to-faculty ratio is 9:1, with an average class size of around 15 students. Millsaps offers 32 majors and 41 minors, including the option of a self-designed major, along with a multitude of study abroad and internship opportunities. Millsaps employs 97 full-time faculty members, and 94 percent of tenure-track faculty hold a Ph.D. or the terminal degree in their field. The college offers research partnerships for undergraduate students and a variety of study abroad programs. Millsaps reports that 57% of its student body comes from outside Mississippi; a large portion of out-of-state students are from neighboring Louisiana. Millsaps is home to 910 undergraduate and 75 graduate students from 26 states and territories and 23 countries. The college also offers a Continuing Education program and the Community Enrichment Series for adults in the Jackson area. The Millsaps campus is close to downtown Jackson.
It is bordered by Woodrow Wilson Avenue to the north, North State Street to the east, West Street to the west, and Marshall Street to the south. The center of campus is dominated by "The Bowl," where many events occur, including Homecoming activities, concerts, the Multicultural Festival, and Commencement. Adjacent to the Bowl is the Campbell College Center, renovated in 2000, which contains the campus bookstore, post office, cafeteria, and Student Life offices. This central section of campus also holds the Gertrude C. Ford Academic Complex, Olin Science Hall, Sullivan-Harrell Hall, and the Millsaps-Wilson Library. The north part of campus includes the Hall Activities Center (commonly called "the HAC"), the sports fields, and the freshman dormitories. On the far northwestern corner is James Observatory, the oldest building on campus. Operational since 1901, the observatory underwent major renovations in 1980 and is open for stargazing. Upperclassman dormitories are located on the south side of campus, along with Fraternity Row and the Christian Center. Originally constructed as a memorial to students and graduates who died in service during World War II, the Christian Center houses an auditorium and the departments of Performing Arts, History, and Religious Studies. Between the Christian Center and Murrah Hall, which houses the Else School of Management, is the tomb of Major Millsaps and the "M" Bench, erected by the classes of 1926, 1927, and 1928. The Nicholson Garden was added to improve the aesthetics of this area.

Millsaps was ranked 90th out of 251 national liberal arts colleges in U.S. News & World Report's America's Best Colleges issue; it was the top-ranked liberal arts college in Mississippi, Louisiana, and Alabama, and it was named to the list of "High School Counselors' Picks" for 2011 and 2012. Millsaps College professors are ranked among the best in the nation, according to The Princeton Review's "The Best 377 Colleges – 2013 Edition": in the special "Professors Get High Marks" Top 20 category, Millsaps was ranked twelfth in the country. The Princeton Review also named the Else School of Management at Millsaps College one of the best business schools in the Southeast in the 2011 edition of its book "The Best 300 Business Schools". Millsaps is one of 40 schools profiled in Loren Pope's "Colleges That Change Lives". Millsaps is among 21 private universities and colleges nationwide named a "best buy" in the "Fiske Guide to Colleges 2013", the only institution in Mississippi to earn that honor from the annual guide. The guide names Millsaps "the strongest liberal arts college in the deep, Deep South and by far the most progressive" and notes that what differentiates the school is "its focus on scholarly inquiry, spiritual growth, and community service, along with its Heritage Program, an interdisciplinary approach to world culture." Millsaps leads the list of 13 United Methodist–related colleges named among the top 100 liberal arts colleges by the 2012 Washington Monthly College Guide.

The school's sports teams are known as the Majors, and their colors are purple and white. They participate in NCAA Division III and the Southern Athletic Association. Men's sports include baseball, basketball, cheerleading, cross country, football, golf, lacrosse, soccer, tennis, and track and field, with a swim team added for 2019–2020.
Women's sports include basketball, cheerleading, cross country, dance team, golf, lacrosse, soccer, softball, tennis, track and field, and volleyball, with a swim team added for 2019–2020. The Majors had a fierce football and basketball rivalry with Mississippi College in nearby Clinton through the 1950s, before competition was suspended after an infamous student brawl at a basketball game. Campus legend says the brawl was sparked by the alleged theft of the body of college founder Major Millsaps by Mississippi College students. The rivalry was considered by many the best in Mississippi, and it featured a prank by Mississippi College students who painted "TO HELL WITH MILSAPS" (sic) on the Millsaps observatory. The football rivalry resumed in 2000 as the "Backyard Brawl", with games at Mississippi Veterans Memorial Stadium; it took a one-year hiatus in 2005 but resumed in 2006. Millsaps was the summer training camp home of the NFL's New Orleans Saints in 2006, 2007, and 2008. Millsaps was also the site of the famous game-ending play in the 2007 Trinity vs. Millsaps football game, in which Trinity University executed 15 laterals on the way to a touchdown, defeating Millsaps 28–24. The play later won the Pontiac Game-Changing Performance of the Year award, which had never before been bestowed upon a play outside the NCAA's Bowl Subdivision. In 2008, Millsaps quarterback Juan Joseph was awarded the Conerly Trophy, which goes to the best college football player in the state of Mississippi. The school is home to six fraternities: Kappa Alpha Order, Sigma Alpha Epsilon, Pi Kappa Alpha, Lambda Chi Alpha, Kappa Sigma, and Alpha Phi Alpha; as well as six sororities: Delta Delta Delta, Kappa Delta, Phi Mu, Chi Omega, Alpha Kappa Alpha, and Zeta Phi Beta.
https://en.wikipedia.org/wiki?curid=19064
Mälaren Mälaren, historically referred to in English as Lake Malar, is the third-largest freshwater lake in Sweden (after Vänern and Vättern). Its area is 1,140 km² and its greatest depth is 64 m. Mälaren spans 120 kilometres from east to west. The lake drains, from south-west to north-east, into the Baltic Sea through its natural outlets Norrström and Söderström (as it flows around Stadsholmen island) and through the artificial Södertälje Canal and the Hammarbyleden waterway. The easternmost bay of Mälaren, in central Stockholm, is called Riddarfjärden. The lake is located in Svealand and bounded by the provinces of Uppland, Södermanland, Närke, and Västmanland. The two largest islands in Mälaren are Selaön (91 km²) and Svartsjölandet (79 km²).

The Viking Age settlements Birka on the island of Björkö and Hovgården on the neighbouring island of Adelsö have been a UNESCO World Heritage Site since 1993, as has Drottningholm Palace on the island of Lovön. The barrow of Björn Ironside is on the island of Munsö, within the lake. The name stems from an Old Norse word meaning "gravel", which appears in historical records from the 1320s. The lake was previously known by an Old Norse name meaning "The Lake".

By the end of the last ice age, about 11,000 years ago, much of northern Europe and North America was covered by ice sheets up to 3 km thick. When the glaciers retreated at the end of the ice age, the removal of their weight from the depressed land led to post-glacial rebound. Initially the rebound was rapid, proceeding at about 7.5 cm/year; this phase lasted for about 2,000 years and took place as the ice was being unloaded. Once deglaciation was complete, uplift slowed to about 2.5 cm/year and decreased exponentially after that. Today, typical uplift rates are of the order of 1 cm/year or less, and studies suggest that rebound will continue for about another 10,000 years. The total uplift from the end of deglaciation can be up to 400 m (see the illustrative calculation at the end of this article).

In the Viking Age, Mälaren was still a bay of the Baltic Sea, and seagoing vessels could sail up it far into the interior of Sweden. Birka was conveniently near the trade routes through the waterway at Södertälje. Due to the post-glacial rebound, the channel at Södertälje and the mouth of Riddarfjärden had become so shallow by about the year 1200 that ships had to unload their cargoes near the entrances, and progressively the bay became a lake. The decline of Birka and the subsequent foundation of Stockholm at the choke point of Riddarfjärden were in part due to the post-glacial rebound changing the topography of the Mälaren basin. The lake's surface currently averages 0.7 metres above sea level.

According to Norse mythology as contained in the thirteenth-century Icelandic work "Prose Edda", the lake was created by the goddess Gefjon when she tricked Gylfi, the Swedish king of Gylfaginning. Gylfi promised Gefjon as much land as four oxen could plough in a day and a night, but she used oxen from the land of the giants and, moreover, uprooted the land and dragged it into the sea, where it became the island of Zealand. The "Snorra Edda" says that "the inlets in the lake correspond to the headlands in Zealand"; since this is much more true of Lake Vänern, the myth was probably originally about Vänern, not Mälaren.

The most common nesting birds on the skerries of Mälaren are also the most common in the Baltic Sea.
After a survey in 2005, the ten most common species were found to be common tern, herring gull, black-headed gull, common gull, mallard, tufted duck, Canada goose, common goldeneye, lesser black-backed gull and common sandpiper. White-tailed eagle, greylag goose, barnacle goose, black-throated diver, red-breasted merganser and gadwall are less common, and some of these latter are endangered in the Mälaren area. Since 1994 a subspecies of the great cormorant, "Phalacrocorax carbo sinensis", has nested there as well. A 2005 survey tallied 23 breeding colonies with 2,178 nests, of which the largest colony had 235 nests. Most experts believe the great cormorant population has peaked and will stabilize at around 2,000 nests. One of the characteristic species is the osprey, which has one of its strongest presences in Lake Mälaren; it nests in almost all bays of the lake. The zebra mussel is considered an invasive species and is causing some problems in Lake Mälaren.
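The rebound figures above can be sanity-checked with a small calculation. The sketch below assumes the post-deglaciation uplift rate decays exponentially; the 9,000-year elapsed time is inferred from the text (deglaciation ended roughly 11,000 years ago, minus the roughly 2,000-year rapid phase), and the functional form itself is an illustrative assumption, not something the sources specify:

```python
import math

# Back-of-the-envelope model of post-glacial rebound, assuming the slow-phase
# uplift rate decays exponentially: r(t) = r0 * exp(-t / tau).

r0 = 2.5e-2                   # m/year: rate once deglaciation was complete
r_now = 1.0e-2                # m/year: typical present-day rate
t_now = 9_000                 # years since the slow phase began (assumption)
rapid_uplift = 0.075 * 2_000  # ~150 m gained during the rapid phase

# Solve r_now = r0 * exp(-t_now / tau) for the decay constant tau.
tau = t_now / math.log(r0 / r_now)

def slow_uplift(t):
    """Cumulative slow-phase uplift after t years (integral of r)."""
    return r0 * tau * (1.0 - math.exp(-t / tau))

print(f"decay constant tau ~ {tau:,.0f} years")
print(f"uplift so far      ~ {rapid_uplift + slow_uplift(t_now):.0f} m")
print(f"asymptotic total   ~ {rapid_uplift + r0 * tau:.0f} m")  # just under 400 m
```

Under these assumptions the decay constant comes out near 9,800 years and the asymptotic total just under 400 m, consistent with the uplift ceiling quoted above.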
https://en.wikipedia.org/wiki?curid=19066
Macau Macau, also spelled Macao, officially the Macao Special Administrative Region of the People's Republic of China, is a city in the western Pearl River Delta by the South China Sea. It is a special administrative region of China and maintains separate governing and economic systems from those of mainland China. With a population of 696,100 and an area of 32.9 km2, it is the most densely populated region in the world. Macau was formerly a colony of the Portuguese Empire, after Ming China leased the territory as a trading post in 1557. Portugal paid an annual rent and administered the territory under Chinese sovereignty until 1887, when it gained perpetual colonial rights in the Sino-Portuguese Treaty of Peking. The colony remained under Portuguese rule until 1999, when it was transferred to China. Originally a sparsely populated collection of coastal islands, the territory has become a major resort city and the top destination for gambling tourism. Its gambling industry is seven times larger than that of Las Vegas. Although the city has one of the highest per capita incomes in the world, it also has severe income inequality. Its GDP per capita by purchasing power parity is one of the highest in the world; in 2014 it was higher than that of any country, according to the World Bank. Macau has a very high Human Development Index, although the figure is calculated by the Macau government rather than by the United Nations. Macau has the fourth-highest life expectancy in the world. The territory is highly urbanised and most development is built on reclaimed land; two-thirds of the total land area is reclaimed from the sea. The first known written record of the name "Macau", rendered as "Ya/A Ma Gang", is found in a letter dated 20 November 1555. The local inhabitants believed that the sea-goddess Mazu (alternatively called A-Ma) had blessed and protected the harbour, and called the waters around A-Ma Temple by her name. When Portuguese explorers first arrived in the area and asked for the place name, the locals thought they were asking about the temple and told them it was "Ma Kok". The earliest Portuguese spelling for this was "Amaquão". Multiple variations were used until "Amacão / Amacao" and "Macão / Macao" became common during the 17th century. With the 1911 reform of Portuguese orthography, the spelling "Macau" became the standardised form; however, the use of "Macao" persisted in English and other European languages. Macau Peninsula had many names in Chinese, including "Jing'ao", "Haojing", and "Haojing'ao". The islands Taipa, Coloane, and Hengqin were collectively called "Shizimen". These names would later become "Aomen" ("Oumún" in Cantonese), translating as "bay gate" or "port gate", to refer to the whole territory. During the Qin dynasty (221–206 BC), the region was under the jurisdiction of Panyu County, Nanhai Prefecture, in the province of Guangdong. The region is first known to have been settled during the Han dynasty. It was administratively part of Dongguan Prefecture in the Jin dynasty (265–420 AD), and alternated under the control of Nanhai and Dongguan in later dynasties. In 1152, during the Song dynasty (960–1279 AD), it came under the jurisdiction of the new Xiangshan County. In 1277, approximately 50,000 refugees fleeing the Mongol conquest of China settled in the coastal area. Macau did not develop as a major settlement until the Portuguese arrived in the 16th century.
The first European visitor to reach China by sea was the explorer Jorge Álvares, who arrived in 1513. Merchants first established a trading post in Hong Kong waters at Tamão (present-day Tuen Mun), beginning regular trade with nearby settlements in southern China. Military clashes between the Ming and Portuguese navies followed the expulsion of the Tamão traders in 1521. Despite the trade ban, Portuguese merchants continued to attempt to settle on other parts of the Pearl River estuary, finally settling on Macau. Luso-Chinese trade relations were formally reestablished in 1554, and Portugal soon after acquired a permanent lease for Macau in 1557, agreeing to pay 500 taels of silver as annual land rent. The initially small settlement of Portuguese merchants rapidly grew into a city. The Roman Catholic Diocese of Macau was created in 1576, and by 1583 the Senate had been established to handle municipal affairs for the growing settlement. Macau was at the peak of its prosperity as a major entrepôt during the late 16th century, providing a crucial connection in exporting Chinese silk to Japan during the "Nanban" trade period. Although the Portuguese were initially prohibited from fortifying Macau or stockpiling weapons, the Fortaleza do Monte was constructed in response to frequent Dutch naval incursions. The Dutch attempted to take the city in the 1622 Battle of Macau, but were successfully repelled by the Portuguese. Macau entered a period of decline in the 1640s following a series of catastrophic events for the burgeoning colony: Portuguese access to trade routes was irreparably severed when Japan halted trade in 1639, Portugal revolted against Spain in 1640, and Malacca fell to the Dutch in 1641. Maritime trade with China was banned in 1644 following the Qing conquest, under the "Haijin" policies, and was limited to Macau on a lesser scale while the new dynasty focused on eliminating surviving Ming loyalists. While the Kangxi Emperor lifted the prohibition in 1684, China again restricted trade under the Canton System in 1757. Foreign ships were required to stop first at Macau before proceeding further to Canton. Qing authorities exercised a much greater role in governing the territory during this period; Chinese residents were subject to Qing courts, and new construction had to be approved by the resident mandarin beginning in the 1740s. As the opium trade became more lucrative during the eighteenth century, Macau again became an important stopping point en route to China. Following the First Opium War and the establishment of Hong Kong, Macau lost its role as a major port. Firecracker and incense production, as well as tea and tobacco processing, were vital industries in the colony during this time. Portugal was able to capitalise on China's post-war weakness and assert its sovereignty: the Governor of Macau began refusing to pay China annual land rent for the colony in the 1840s, and Portugal annexed Taipa and Coloane in 1851 and 1864, respectively. Portugal also occupied nearby Lapa and Montanha, but these would be returned to China by 1887, when perpetual occupation rights over Macau were formalised in the Sino-Portuguese Treaty of Peking. This agreement also prohibited Portugal from ceding Macau without Chinese approval. Despite occasional conflict between Cantonese authorities and the colonial government, Macau's status remained unchanged through the republican revolutions of both Portugal in 1910 and China in 1911.
The Kuomintang further affirmed Portuguese jurisdiction in Macau when the Treaty of Peking was renegotiated in 1928. During the Second World War, the Empire of Japan did not occupy the colony and generally respected Portuguese neutrality in Macau. However, after Japanese troops captured a British cargo ship in Macau waters in 1943, Japan installed a group of government "advisors" as an alternative to military occupation. The territory largely avoided military action during the war except in 1945, when the United States ordered air raids on Macau after learning that the colonial government was preparing to sell aviation fuel to Japan. Portugal was later given over US$20 million in compensation for the damage in 1950. Refugees from mainland China swelled the population as they fled the Chinese Civil War. Access to a large workforce enabled Macau's economy to grow as the colony expanded its clothing and textiles manufacturing industry, developed tourism, and legalised casino gaming. However, at the height of the Cultural Revolution, residents dissatisfied with the colonial administration rioted in the 1966 12-3 incident, in which 8 people were killed and over 200 were injured. Portugal lost full control over the colony afterwards, and agreed to cooperate with the communist authorities in exchange for continued administration of Macau. Following the 1974 Carnation Revolution, Portugal formally relinquished Macau as an overseas province and acknowledged it as a "Chinese territory under Portuguese administration". After China first concluded arrangements on Hong Kong's future with the United Kingdom, it entered negotiations with Portugal over Macau in 1986. They were concluded with the signing of the 1987 Joint Declaration on the Question of Macau, in which Portugal agreed to transfer the colony in 1999 and China would guarantee Macau's political and economic systems for 50 years after the transfer. In the waning years of colonial rule, Macau rapidly urbanised and constructed large-scale infrastructure projects, including Macau International Airport and a new container port. Macau was transferred to China on 20 December 1999, after 442 years of Portuguese rule. Following the transfer, Macau liberalised its casino industry (previously operating under a government-licensed monopoly) to allow foreign investors, starting a new period of economic development. The regional economy grew at a double-digit annual rate from 2002 to 2014, making Macau one of the richest economies in the world on a per capita basis. Political debates have centred on the region's jurisdictional independence and the central government's adherence to "one country, two systems". While issues such as national security legislation have been controversial, Macanese residents generally have high levels of trust in the government. Macao was the last Portuguese colonial territory to be handed over, and it is the only former Portuguese colony that is not a member of the Community of Portuguese Language Countries (CPLP). Portuguese is one of the official languages of Macao. In 2006, during the II Ministerial Meeting between China and the Portuguese-speaking countries, the CPLP Executive Secretary and Deputy Ambassador Tadeu Soares invited the Chief Executive of the Government of the Macau Special Administrative Region, Edmund Ho, to request Associate Observer status for Macau. The Government of Macau has not yet formalized this request.
In 2016, Murade Murargy, then executive secretary of the CPLP, said in an interview that Macao's membership is a complicated question since, like the Galicia region in Spain, it is not an independent country but a part of China. However, the "Instituto Internacional de Macau" and the University of São José are Consultative Observers of the CPLP. Macau is a special administrative region of China, with executive, legislative, and judicial powers devolved from the national government. The Sino-Portuguese Joint Declaration provided for economic and administrative continuity through the transfer of sovereignty, resulting in an executive-led governing system largely inherited from the territory's history as a Portuguese colony. Under these terms and the "one country, two systems" principle, the Basic Law of Macao is the regional constitution. Because negotiations for the Joint Declaration and Basic Law began after transitional arrangements for Hong Kong were made, Macau's structure of government is very similar to Hong Kong's. The regional government is composed of executive, legislative, and judicial branches. The Chief Executive is the head of government and serves for a maximum of two five-year terms. The State Council (led by the Premier of China) appoints the Chief Executive after nomination by the Election Committee, which is composed of 400 business, community, and government leaders. The Legislative Assembly has 33 members, each serving a four-year term: 14 are directly elected, 12 indirectly elected, and 7 appointed by the Chief Executive. Indirectly elected assemblymen are selected from limited electorates representing sectors of the economy or special interest groups. All directly elected members are chosen by proportional representation. Twelve political parties had representatives elected to the Legislative Assembly in the 2017 election. These parties have aligned themselves into two ideological groups: the pro-establishment camp (the current government) and the pro-democracy camp. Macau is represented in the National People's Congress by 12 deputies chosen through an electoral college, and by 29 delegates in the Chinese People's Political Consultative Conference appointed by the central government. Chinese national law does not generally apply in the region, and Macau is treated as a separate jurisdiction. Its judicial system is based on Portuguese civil law, continuing the legal tradition established during colonial rule. Interpretative and amending power over the Basic Law and jurisdiction over acts of state lie with the central authority, however, making regional courts ultimately subordinate to the mainland's socialist civil law system. Decisions made by the Standing Committee of the National People's Congress can also override territorial judicial processes. The territory's jurisdictional independence is most apparent in its immigration and taxation policies. The Identification Department issues passports for permanent residents which differ from those issued by the mainland or Hong Kong, and the region maintains a regulated border with the rest of the country. All travellers between Macau and China and Hong Kong must pass border controls, regardless of nationality. Chinese citizens resident in mainland China do not have the right of abode in Macau and are subject to immigration controls. Public finances are handled separately from the national government, and taxes levied in Macau do not fund the central authority. The Macao Garrison is responsible for the region's defence.
Although the Chairman of the Central Military Commission is supreme commander of the armed forces, the regional government may request assistance from the garrison. Macau residents are not required to perform military service, and current law has no provision for local enlistment, so the territory's defence force is composed entirely of nonresidents. The State Council and the Ministry of Foreign Affairs handle diplomatic matters, but Macau retains the ability to maintain separate economic and cultural relations with foreign nations. The territory negotiates its own trade agreements and actively participates in supranational organisations, including agencies of the World Trade Organization and the United Nations. The regional government maintains trade offices in Greater China and other nations. The territory is divided into seven parishes. Cotai, a major area developed on reclaimed land between Taipa and Coloane, and areas of the Macau New Urban Zone do not have defined parishes. Historically, the parishes belonged to one of two municipalities (the Municipality of Macau or the Municipality of Ilhas) that were responsible for administering municipal services. The municipalities were abolished in 2001 and superseded by the Civic and Municipal Affairs Bureau in providing local services. Sex trafficking in Macau is an issue; Macanese and foreign women and girls are forced into prostitution in brothels, homes, and businesses in the city. Macau is on China's southern coast, west of Hong Kong, on the western side of the Pearl River estuary. It is surrounded by the South China Sea to the east and south, and neighbours the Guangdong city of Zhuhai to the west and north. The territory consists of Macau Peninsula, Taipa, and Coloane. A parcel of land on neighbouring Hengqin island that hosts the University of Macau also falls under the regional government's jurisdiction. The territory's highest point is Coloane Alto, 170.6 metres above sea level. Urban development is concentrated on peninsular Macau, where most of the population lives. The peninsula was originally a separate island with hilly terrain, which gradually became a tombolo as a connecting sandbar formed over time. Both natural sedimentation and land reclamation expanded the area enough to support urban growth. Macau has tripled its land area in the last century, increasing from 10.28 km2 in the late 19th century to 32.9 km2 in 2018. Cotai, the area of reclaimed land connecting Taipa and Coloane, contains many of the newer casinos and resorts established after 1999. The region's jurisdiction over the surrounding sea was greatly expanded in 2015, when it was granted an additional 85 km2 of maritime territory by the State Council. Further reclamation is currently underway to develop parts of the Macau New Urban Zone. The territory also has control over part of an artificial island to maintain a border checkpoint for the Hong Kong–Zhuhai–Macau Bridge. Macau has a humid subtropical climate (Köppen "Cwa"), characteristic of southern China. The territory's weather is dominated by two seasons: summer (May to September) and winter (November to February) are the longest seasons, while spring (March and April) and autumn (October) are relatively brief. The summer monsoon brings warm and humid air from the sea, with the most frequent rainfall occurring during the season. Typhoons also occur most often then, bringing significant spikes in rainfall. During the winter, northern winds from the continent bring dry air and much less rainfall.
The highest and lowest temperatures recorded at the Macao Meteorological and Geophysical Bureau are 38.9 °C, reached on both 2 July 1930 and 6 July 1930, and −1.8 °C, recorded on 26 January 1948. The Statistics and Census Service estimated Macau's population at 667,400 at the end of 2018. With a population density of 21,340 people per square kilometre, Macau is the most densely populated region in the world. The overwhelming majority (88.7 per cent) are Chinese, many of whom originate from Guangdong (31.9 per cent) or Fujian (5.9 per cent). The remaining 11.6 per cent are non-ethnic-Chinese minorities, primarily Filipinos (4.6 per cent), Vietnamese (2.4 per cent), and Portuguese (1.8 per cent). Several thousand residents are of Macanese heritage, native-born multiracial people with mixed Portuguese ancestry. Of the total population (excluding migrants), 49.4 per cent were born in Macau, followed by 43.1 per cent in mainland China. A large portion of the population are Portuguese citizens, a legacy of colonial rule; at the time of the transfer of sovereignty in 1999, 107,000 residents held Portuguese passports. The predominant language is Cantonese, a variety of Chinese originating in Guangdong. It is spoken by 87.5 per cent of the population, 80.1 per cent as a first language and 7.5 per cent as a second language. Only 2.3 per cent can speak Portuguese, the other official language; 0.7 per cent are native speakers, and 1.6 per cent use it as a second language. Increased immigration from mainland China in recent years has added to the number of Mandarin speakers, who now make up about half of the population (50.4 per cent); 5.5 per cent are native speakers and 44.9 per cent are second-language speakers. Traditional Chinese characters are used in writing, rather than the simplified characters used on the mainland. English is considered an additional working language and is spoken by over a quarter of the population (27.5 per cent); 2.8 per cent are native speakers, and 24.7 per cent speak English as a second language. Macanese Patois, a local creole generally known as "Patuá", is now spoken only by a few in the older Macanese community. Chinese folk religions have the most adherents (58.9 per cent), followed by Buddhism (17.3 per cent) and Christianity (7.2 per cent), while 15.4 per cent of the population profess no religious affiliation at all. Small minorities adhering to other religions (less than 1 per cent), including Hinduism, Judaism, and Islam, are also resident in Macau. Life expectancy in Macau was 81.6 years for males and 87.7 years for females in 2018, the fourth highest in the world. Cancer, heart disease, and respiratory disease are the territory's three leading causes of death. Most government-provided healthcare services are free of charge, though alternative treatment is also heavily subsidised. Migrant workers living in Macau account for over 25 per cent of the entire workforce. They largely work in lower-wage sectors of the economy, including construction, hotels, and restaurants. As a growing proportion of local residents take up employment in the gaming industry, the disparity in income between local and migrant workers has been increasing. Rising living costs have also pushed a large portion of non-resident workers to live in Zhuhai. Macau has a capitalist service economy largely based on casino gaming and tourism. It is the world's 83rd-largest economy, with a nominal GDP of approximately MOP433 billion (US$53.9 billion).
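A quick arithmetic check ties the population, density, and GDP figures above together (an illustrative calculation, not an official statistic):

```python
population = 667_400      # end-2018 estimate quoted above
gdp_mop = 433e9           # nominal GDP in MOP
gdp_usd = 53.9e9          # nominal GDP in USD
density = 21_340          # people per square kilometre

print(f"GDP per capita ~ MOP {gdp_mop / population:,.0f}")    # ~MOP 649,000
print(f"GDP per capita ~ US$ {gdp_usd / population:,.0f}")    # ~US$ 80,800
print(f"implied land area ~ {population / density:.1f} km2")  # ~31.3 km2
```

The implied per-capita figure of roughly US$80,000 is consistent with the characterisation that follows.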
Although Macau has one of the highest per capita GDPs, the territory also has a high level of wealth disparity. Macau's gaming industry is the largest in the world, generating over MOP195 billion (US$24 billion) in revenue, about seven times that of Las Vegas. Macau's gambling revenue was $37 billion in 2018. The regional economy is heavily reliant on casino gaming: the vast majority of government funding (79.6 per cent of total tax revenue) comes from gaming. Gambling as a share of GDP peaked in 2013 at over 60 per cent, and continues to account for 49.1 per cent of total economic output. The vast majority of casino patrons are tourists from mainland China, making up 68 per cent of all visitors. Casino gaming is illegal in both the mainland and Hong Kong, giving Macau a legal monopoly on the industry in China. Revenue from Chinese high rollers has been falling and was forecast to fall as much as 10 per cent more in 2019. Economic uncertainty may account for some of the drop, but competition from alternative Asian gambling venues plays a part as well; for example, Chinese visitors to the Philippines more than doubled between 2015 and 2018, after the City of Dreams casino opened in Manila. Casino gambling was legalised in 1962, and the gaming industry initially operated under a government-licensed monopoly granted to the Sociedade de Turismo e Diversões de Macau. This license was renegotiated and renewed several times before ending in 2002 after 40 years. The government then allowed open bidding for casino licenses to attract foreign investors. Along with an easing of travel restrictions on mainland Chinese visitors, this triggered a period of rapid economic growth: from 1999 to 2016, Macau's gross domestic product increased sevenfold and the unemployment rate dropped from 6.3 to 1.9 per cent. The Sands Macao, Wynn Macau, MGM Macau, and Venetian Macau all opened during the first decade after the liberalisation of casino concessions. Casinos employ about 24 per cent of the total workforce in the region. "Increased competition from casinos popping up across Asia to lure away Chinese high rollers and tourists" in Singapore, South Korea, Japan, Nepal, the Philippines, Australia, Vietnam and the Russian Far East led in 2019 to the lowest revenues in three years. Export-oriented manufacturing previously contributed a much larger share of economic output, peaking at 36.9 per cent of GDP in 1985 and falling to less than 1 per cent in 2017. The bulk of these exports were cotton textiles and apparel, but they also included toys and electronics. At the transfer of sovereignty in 1999, manufacturing, financial services, construction and real estate, and gaming were the four largest sectors of the economy. Macau's shift to an economic model entirely dependent on gaming caused concern over its overexposure to a single sector, prompting the regional government to attempt to re-diversify its economy. The government traditionally had a non-interventionist role in the economy and taxes corporations at very low rates. Post-handover administrations have generally been more involved in enhancing social welfare to counter the cyclical nature of the gaming industry. Economic growth has been attributed in large part to the high number of mainlander visits to Macau, and the central government exercises a role in guiding casino business growth through its control of the flow of tourists.
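The sevenfold GDP growth between 1999 and 2016 quoted above implies a double-digit compound annual growth rate, in line with the 2002–2014 growth mentioned earlier (a simple arithmetic sketch):

```python
growth_factor = 7        # GDP increased sevenfold, as quoted
years = 2016 - 1999      # 17 years

cagr = growth_factor ** (1 / years) - 1
print(f"compound annual growth rate ~ {cagr:.1%}")   # ~12.1% per year
```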
The Closer Economic Partnership Arrangement formalised a policy of free trade between Macau and mainland China, with each jurisdiction pledging to remove remaining obstacles to trade and cross-boundary investment. Due to a lack of available land for farming, agriculture is not significant in the economy. Food is exclusively imported to Macau, and almost all foreign goods are transshipped through Hong Kong. Macau has a highly developed road system, with an extensive network of roads constructed in the territory. Automobiles drive on the left (unlike in both mainland China and Portugal), owing to the historical influence of the Portuguese Empire. Vehicle traffic is extremely congested, especially within the oldest part of the city, where streets are the most narrow. Public bus services operate over 80 routes, supplemented by free hotel shuttle buses that also run routes to popular tourist attractions and downtown locations. About 1,500 black taxicabs are licensed to carry riders in the territory. The Hong Kong–Zhuhai–Macau Bridge, opened in 2018, provides a direct link with the eastern side of the Pearl River estuary. Cross-boundary traffic to mainland China may also pass through border checkpoints at the Portas do Cerco and the Lótus Bridge. Macau International Airport serves over 8 million passengers each year and is the primary hub for local flag carrier Air Macau. The territory's first rail network, the Macau Light Rapid Transit (MLRT), known in Portuguese as the Metro Ligeiro de Macau (MLM), serves the Macau Peninsula, Taipa, and Cotai, including major border checkpoints such as the Border Gate, the Outer Harbour Ferry Terminal, the Lotus Bridge border crossing, and Macau International Airport. Phase 1 of its Taipa line began operations in December 2019, connecting 11 stations throughout Taipa and Cotai. Ferry services to Hong Kong and mainland China operate out of the Outer Harbour Ferry Terminal, the Inner Harbour Ferry Terminal, and the Taipa Ferry Terminal. Daily helicopter service is also available to Hong Kong and Shenzhen. Macau is served by one major public hospital, the Hospital Conde S. Januário, and one major private hospital, the Kiang Wu Hospital, both located on Macau Peninsula, as well as a university-affiliated hospital, the Macau University of Science and Technology Hospital, in Cotai. In addition to hospitals, Macau also has numerous health centres providing free basic medical care to residents. Consultation in traditional Chinese medicine is also available. None of the Macau hospitals are independently assessed through international healthcare accreditation. There are no Western-style medical schools in Macau, and thus all aspiring physicians in Macau have to obtain their education and qualification elsewhere. Local nurses are trained at the Macau Polytechnic Institute and the Kiang Wu Nursing College. There are currently no training courses in midwifery in Macau. A study by the University of Macau, commissioned by the Macau SAR government, concluded that Macau is too small to have its own medical specialist training centre. The "Macau Corps of Firefighters" (Portuguese: "Corpo de Bombeiros de Macau") is responsible for the ambulance service (Ambulância de Macau). The Macau Red Cross also operates ambulances (Toyota HiAce vans), staffed by volunteers, for emergencies and non-emergencies, taking patients to local hospitals.
The organization has a total of 739 uniformed firefighters and paramedics serving from 7 stations in Macau. The Health Bureau in Macau is mainly responsible for coordinating the activities of the public and private organizations in the area of public health, and for ensuring the health of citizens through specialized and primary health care services, as well as disease prevention and health promotion. The Macau Centre for Disease Control and Prevention was established in 2001; it monitors the operation of hospitals, health centres, and the blood transfusion centre in Macau. It also handles the organization of care and the prevention of diseases affecting the population, sets guidelines for hospitals and private healthcare providers, and issues licences. Education in Macau does not have a single centralised set of standards or curriculum. Individual schools follow different educational models, including Chinese, Portuguese, Hong Kong, and British systems. Children are required to attend school from the age of five until the completion of lower secondary school, or until age 15. Of residents aged 3 and older, 69 per cent completed lower secondary education, 49 per cent graduated from an upper secondary school, and 21 per cent earned a bachelor's degree or higher. Mandatory education has contributed to an adult literacy rate of 96.5 per cent. The rate, while lower than that of other developed economies, reflects the influx of refugees from mainland China during the post-war colonial era; much of the elderly population was not formally educated, owing to war and poverty. Most schools in the territory are private institutions. Out of the 77 non-tertiary schools, 10 are public and the other 67 are privately run. The Roman Catholic Diocese of Macau maintains an important position in territorial education, managing 27 primary and secondary schools. The government provides 15 years of free education for all residents enrolled in publicly run schools, and subsidises tuition for students in private schools. Students at the secondary school level studying in neighbouring areas of Guangdong are also eligible for tuition subsidies. The vast majority of schools use Cantonese as the medium of instruction, with written education in Chinese and compulsory classes in Mandarin. A minority of private schools use English or Portuguese as the primary teaching language. Luso-Chinese schools mainly use Chinese, but additionally require mandatory Portuguese-language classes as part of their curriculum. Macau has ten universities and tertiary education institutes. The University of Macau, founded in 1981, is the territory's only public comprehensive university. The Kiang Wu Nursing College of Macau is the oldest higher institute, specialising in educating future nursing staff for the college's parent hospital. The University of Saint Joseph, the Macau University of Science and Technology, and the City University of Macau were all established in subsequent years. Five other institutes specialise in specific vocations or provide continuing education. The mixing of Chinese and Portuguese cultures and religious traditions for more than four centuries has left Macau with an inimitable collection of holidays, festivals and events. The biggest event of the year is the Macau Grand Prix in November, when the main streets of Macau Peninsula are converted into a racetrack bearing similarities to the Monaco Grand Prix.
Other annual events include the Macau Arts Festival in March, the International Fireworks Display Contest in September, the International Music Festival in October and/or November, and the Macau International Marathon in December. Chinese Lunar New Year is the most important traditional festival, and celebrations normally take place in late January or early February. The Pou Tai Un Temple in Taipa is the place for the Feast of Tou Tei, the Earth god, in February. The Procession of the Passion of Our Lord is a well-known Roman Catholic rite and journey, which travels from Saint Austin's Church to the Cathedral, also taking place in February. A-Ma Temple, which honours the goddess Matsu, is in full swing in April, with many worshippers celebrating the A-Ma festival. In May it is common to see dancing dragons at the Feast of the Drunken Dragon and twinkling-clean Buddhas at the Feast of the Bathing of Lord Buddha. In Coloane Village, the Taoist god Tam Kong is also honoured on the same day. The Dragon Boat Festival is held on Nam Van Lake in June, and the Hungry Ghosts' Festival in late August and/or early September every year. All events and festivities of the year end with the Winter Solstice in December. Macau preserves many historical properties in the urban area. The Historic Centre of Macau, which includes some twenty-five historic locations, was officially listed as a World Heritage Site by UNESCO on 15 July 2005 during the 29th session of the World Heritage Committee, held in Durban, South Africa. However, the Macao government has been criticized for ignoring the conservation of heritage in urban planning. In 2007, local residents of Macao wrote a letter to UNESCO complaining about construction projects around the World Heritage-listed Guia Lighthouse (focal height 108 metres), including the 91-metre headquarters of the Liaison Office. UNESCO then issued a warning to the Macau government, which led former Chief Executive Edmund Ho to sign a notice regulating height restrictions on buildings around the site. In 2015, the New Macau Association submitted a report to UNESCO claiming that the government had failed to protect Macao's cultural heritage against threats from urban development projects. One of the report's main examples was the headquarters of the Liaison Office of the Central People's Government, which is located on the Guia foothill and obstructs the view of the Guia Fortress (one of the World Heritage symbols of Macao). One year later, Roni Amelan, a spokesman for the UNESCO press service, said that UNESCO had asked China for information and was still waiting for a reply. In 2016, the Macau government approved an 81-metre construction limit for a residential project, which reportedly goes against the city's regulations on the height of buildings around the World Heritage-listed Guia Lighthouse. Food in Macau is mainly based on both Cantonese and Portuguese cuisine, drawing influences from Indian and Malay dishes as well, reflecting a unique cultural and culinary blend after centuries of colonial rule. Portuguese recipes were adapted to use local ingredients, such as fresh seafood, turmeric, coconut milk, and adzuki beans. These adaptations produced Macanese variations of traditional Portuguese dishes including "caldo verde", minchee, and "cozido à portuguesa". While many restaurants claim to serve traditional Portuguese or Macanese dishes, most serve a mix of Cantonese-Portuguese fusion cuisine.
"Galinha à portuguesa" is an example of a Chinese dish that draws from Macanese influences, but is not part of Macanese cuisine. "Cha chaan teng", a type of fast casual diner originating in Hong Kong that serves that region's interpretation of Western food, are also prevalent in Macau. "Pastel de nata", pork chop buns, and almond biscuits are popular street food items. Despite its small area, Macau is home to a variety of sports and recreational facilities that have hosted a number of major international sporting events, including the 2005 East Asian Games, the 2006 Lusophony Games, and the 2007 Asian Indoor Games. The territory regularly hosts the Macau Grand Prix, one of the most significant annual motorsport competitions that uses city streets as the racetrack. It is the only street circuit that hosts Formula Three, touring car, and motorcycle races in the same event. The Guia Circuit, with narrow corner clearance and a winding path, is considered an extremely challenging course and a serious milestone for prospective Formula One racers. Macau represents itself separately from mainland China with its own sports teams in international competitions. The territory maintains its own National Olympic Committee, but does not compete in the Olympic Games. Current International Olympic Committee rules specify that new NOCs can only be admitted if they represent sovereign states (Hong Kong has participated in the Olympics since before the regulation change in 1996). Macau has six sister cities, listed chronologically by year joined: Additionally, Macau has other cultural agreements with the following cities: Macau is part of the Union of Luso-Afro-Americo-Asiatic Capital Cities from 28 June 1985, establishing brotherly relations with the following cities:
https://en.wikipedia.org/wiki?curid=19068
Iridium Iridium is a chemical element with the symbol Ir and atomic number 77. A very hard, brittle, silvery-white transition metal of the platinum group, iridium is considered to be the second-densest metal (after osmium), with a density of 22.56 g/cm3 as defined by experimental X-ray crystallography. However, at room temperature and standard atmospheric pressure, iridium has been calculated to have a density of 22.65 g/cm3, higher than osmium measured the same way. Still, the experimental X-ray crystallography value is considered to be the most accurate, and iridium is therefore considered to be the second-densest element. It is the most corrosion-resistant metal, even at temperatures as high as 2000 °C. Although only certain molten salts and halogens are corrosive to solid iridium, finely divided iridium dust is much more reactive and can be flammable. Iridium was discovered in 1803 among insoluble impurities in natural platinum. Smithson Tennant, the primary discoverer, named iridium after the Greek goddess Iris, personification of the rainbow, because of the striking and diverse colors of its salts. Iridium is one of the rarest elements in Earth's crust, with annual production and consumption of only three tonnes. 191Ir and 193Ir are the only two naturally occurring isotopes of iridium, as well as the only stable isotopes; the latter is the more abundant. The most important iridium compounds in use are the salts and acids it forms with chlorine, though iridium also forms a number of organometallic compounds used in industrial catalysis and in research. Iridium metal is employed when high corrosion resistance at high temperatures is needed, as in high-performance spark plugs, crucibles for recrystallization of semiconductors at high temperatures, and electrodes for the production of chlorine in the chloralkali process. Iridium radioisotopes are used in some radioisotope thermoelectric generators. Iridium is found in meteorites in much higher abundance than in the Earth's crust. For this reason, the unusually high abundance of iridium in the clay layer at the Cretaceous–Paleogene boundary gave rise to the Alvarez hypothesis that the impact of a massive extraterrestrial object caused the extinction of dinosaurs and many other species 66 million years ago. Similarly, an iridium anomaly in core samples from the Pacific Ocean suggested the Eltanin impact of about 2.5 million years ago. It is thought that the total amount of iridium in the planet Earth is much higher than that observed in crustal rocks, but as with other platinum-group metals, the high density and the tendency of iridium to bond with iron caused most iridium to descend below the crust when the planet was young and still molten. A member of the platinum group metals, iridium is white, resembling platinum, but with a slight yellowish cast. Because of its hardness, brittleness, and very high melting point, solid iridium is difficult to machine, form, or work; thus powder metallurgy is commonly employed instead. It is the only metal to maintain good mechanical properties in air at temperatures above 1,600 °C. It has the 10th highest boiling point among all elements and becomes a superconductor at temperatures below 0.14 K. Iridium's modulus of elasticity is the second-highest among the metals, surpassed only by osmium.
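The phrase "density as defined by experimental X-ray crystallography" has a concrete computational meaning: the density follows from the contents and dimensions of the crystallographic unit cell. The sketch below assumes iridium's face-centred cubic structure and a lattice parameter of about 383.9 pm, a literature value supplied here as an assumption (it does not appear in the text):

```python
# X-ray density: rho = Z * M / (N_A * a**3), with Z atoms of molar mass M
# in a cubic unit cell of edge length a.

N_A = 6.02214076e23   # Avogadro constant, 1/mol
M = 192.217           # molar mass of iridium, g/mol
Z = 4                 # atoms per face-centred cubic unit cell
a = 383.9e-10         # assumed lattice parameter in cm (383.9 pm)

rho = Z * M / (N_A * a**3)
print(f"X-ray density of iridium ~ {rho:.2f} g/cm^3")  # ~22.56
```

Under that assumed lattice parameter the calculation reproduces the 22.56 g/cm3 figure cited above.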
Iridium's very high modulus of elasticity, together with a high shear modulus and a very low figure for Poisson's ratio (the relationship of longitudinal to lateral strain), indicates the high degree of stiffness and resistance to deformation that have rendered its fabrication into useful components a matter of great difficulty. Despite these limitations and iridium's high cost, a number of applications have developed where mechanical strength is an essential factor in some of the extremely severe conditions encountered in modern technology. The measured density of iridium is only slightly lower (by about 0.12%) than that of osmium, the densest metal known. Some ambiguity occurred regarding which of the two elements was denser, due to the small size of the difference in density and difficulties in measuring it accurately, but, with increased accuracy in the factors used for calculating density, X-ray crystallographic data yielded densities of 22.56 g/cm3 for iridium and 22.59 g/cm3 for osmium. Iridium is the most corrosion-resistant metal known: it is not attacked by acids, including aqua regia, nor by molten metals or silicates at high temperatures. It can, however, be attacked by some molten salts, such as sodium cyanide and potassium cyanide, as well as by oxygen and the halogens (particularly fluorine) at higher temperatures. Iridium also reacts directly with sulfur at atmospheric pressure to yield iridium disulfide. Iridium forms compounds in oxidation states between −3 and +9; the most common oxidation states are +3 and +4. Well-characterized examples of the high +6 oxidation state are rare, but include IrF6 and two mixed oxides, Sr2MgIrO6 and Sr2CaIrO6. In addition, it was reported in 2009 that iridium(VIII) oxide (IrO4) was prepared under matrix isolation conditions (6 K in Ar) by UV irradiation of an iridium-peroxo complex. This species, however, is not expected to be stable as a bulk solid at higher temperatures. The highest oxidation state (+9), which is also the highest recorded for "any" element, is known only in the cation [IrO4]+; it is only known as a gas-phase species and is not known to form any salts. Iridium dioxide, IrO2, a blue-black solid, is the only well-characterized oxide of iridium. A sesquioxide, Ir2O3, has been described as a blue-black powder which is oxidized to the dioxide. The corresponding disulfides, diselenides, sesquisulfides, and sesquiselenides are known, and IrS3 has also been reported. Iridium also forms iridates with oxidation states +4 and +5, which can be prepared from the reaction of potassium oxide or potassium superoxide with iridium at high temperatures. Although no binary hydrides of iridium are known, complexes are known that contain hydridoiridate anions with iridium in the +1 and +3 oxidation states, and a ternary hydride of magnesium and iridium is believed to contain two distinct hydridoiridate anions, one of them an 18-electron species. No monohalides or dihalides are known, whereas trihalides, IrX3, are known for all of the halogens. For oxidation states +4 and above, only the tetrafluoride, pentafluoride and hexafluoride are known. Iridium hexafluoride, IrF6, is a volatile and highly reactive yellow solid, composed of octahedral molecules. It decomposes in water and is reduced to IrF4, a crystalline solid, by iridium black. Iridium pentafluoride has similar properties, but it is actually a tetramer, Ir4F20, formed by four corner-sharing octahedra. Iridium metal dissolves in molten alkali-metal cyanides to produce the hexacyanoiridate ion, [Ir(CN)6]3−. Hexachloroiridic(IV) acid, H2IrCl6, and its ammonium salt are the most important iridium compounds from an industrial perspective.
They are involved in the purification of iridium and are used as precursors for most other iridium compounds, as well as in the preparation of anode coatings. The [IrCl6]2− ion has an intense dark brown color and can be readily reduced to the lighter-colored [IrCl6]3− and vice versa. Iridium trichloride, IrCl3, which can be obtained in anhydrous form from direct oxidation of iridium powder by chlorine at 650 °C, or in hydrated form by dissolving the sesquioxide in hydrochloric acid, is often used as a starting material for the synthesis of other Ir(III) compounds. Another compound used as a starting material is ammonium hexachloroiridate(III), (NH4)3IrCl6. Iridium(III) complexes are diamagnetic (low-spin) and generally have an octahedral molecular geometry. Organoiridium compounds contain iridium–carbon bonds where the metal is usually in lower oxidation states. For example, oxidation state zero is found in tetrairidium dodecacarbonyl, Ir4(CO)12, which is the most common and stable binary carbonyl of iridium. In this compound, each of the iridium atoms is bonded to the other three, forming a tetrahedral cluster. Some organometallic Ir(I) compounds are notable enough to be named after their discoverers. One is Vaska's complex, IrCl(CO)(PPh3)2, which has the unusual property of binding to the dioxygen molecule, O2. Another is Crabtree's catalyst, a homogeneous catalyst for hydrogenation reactions. These compounds are both square-planar, d8 complexes with a total of 16 valence electrons, which accounts for their reactivity. An iridium-based organic LED material has been documented and found to be much brighter than DPA or PPV, so it could be the basis for flexible OLED lighting in the future. Iridium has two naturally occurring, stable isotopes, 191Ir and 193Ir, with natural abundances of 37.3% and 62.7%, respectively. At least 37 radioisotopes have also been synthesized, ranging in mass number from 164 to 202. 192Ir, which falls between the two stable isotopes, is the most stable radioisotope, with a half-life of 73.827 days, and finds application in brachytherapy and in industrial radiography, particularly for nondestructive testing of welds in steel in the oil and gas industries; iridium-192 sources have been involved in a number of radiological accidents. Three other isotopes have half-lives of at least a day: 188Ir, 189Ir, and 190Ir. Isotopes with masses below 191 decay by some combination of β+ decay, α decay, and (rare) proton emission, with the exception of 189Ir, which decays by electron capture. Synthetic isotopes heavier than 191 decay by β− decay, although 192Ir also has a minor electron capture decay path. All known isotopes of iridium were discovered between 1934 and 2008, with the most recent discoveries being 200–202Ir. At least 32 metastable isomers have been characterized, ranging in mass number from 164 to 197. The most stable of these is 192m2Ir, which decays by isomeric transition with a half-life of 241 years, making it more stable than any of iridium's synthetic isotopes in their ground states. The least stable isomer is 190m3Ir, with a half-life of only 2 µs. The isotope 191Ir was the first of any element to be shown to present a Mössbauer effect. This renders it useful for Mössbauer spectroscopy for research in physics, chemistry, biochemistry, metallurgy, and mineralogy. The discovery of iridium is intertwined with that of platinum and the other metals of the platinum group. Native platinum used by ancient Ethiopians and by South American cultures always contained a small amount of the other platinum group metals, including iridium.
Platinum reached Europe as "platina" ("silverette"), found in the 17th century by the Spanish conquerors in a region today known as the department of Chocó in Colombia. The discovery that this metal was not an alloy of known elements, but instead a distinct new element, did not occur until 1748. Chemists who studied platinum dissolved it in aqua regia (a mixture of hydrochloric and nitric acids) to create soluble salts. They always observed a small amount of a dark, insoluble residue. Joseph Louis Proust thought that the residue was graphite. The French chemists Victor Collet-Descotils, Antoine François, comte de Fourcroy, and Louis Nicolas Vauquelin also observed the black residue in 1803, but did not obtain enough of it for further experiments. In 1803, the British scientist Smithson Tennant (1761–1815) analyzed the insoluble residue and concluded that it must contain a new metal. Vauquelin treated the powder alternately with alkali and acids and obtained a volatile new oxide, which he believed to be of this new metal, which he named "ptene", from the Greek word "ptēnós", "winged". Tennant, who had the advantage of a much greater amount of residue, continued his research and identified the two previously undiscovered elements in the black residue, iridium and osmium. He obtained dark red crystals (probably of Na2[IrCl6]·"n"H2O) by a sequence of reactions with sodium hydroxide and hydrochloric acid. He named iridium after Iris, the Greek winged goddess of the rainbow and the messenger of the Olympian gods, because many of the salts he obtained were strongly colored. Discovery of the new elements was documented in a letter to the Royal Society on June 21, 1804. The British scientist John George Children was the first to melt a sample of iridium, in 1813, with the aid of "the greatest galvanic battery that has ever been constructed" (at that time). The first to obtain high-purity iridium was Robert Hare in 1842. He found it had a density of around 21.8 g/cm3 and noted that the metal is nearly immalleable and very hard. The first melting in appreciable quantity was done by Henri Sainte-Claire Deville and Jules Henri Debray in 1860. They required burning more than 300 litres of pure O2 and H2 gas for each kilogram of iridium. These extreme difficulties in melting the metal limited the possibilities for handling iridium. John Isaac Hawkins was looking to obtain a fine and hard point for fountain pen nibs, and in 1834 managed to create an iridium-pointed gold pen. In 1880, John Holland and William Lofland Dudley were able to melt iridium by adding phosphorus and patented the process in the United States; the British company Johnson Matthey later stated they had been using a similar process since 1837 and had already presented fused iridium at a number of World Fairs. The first use of an alloy of iridium with ruthenium in thermocouples was made by Otto Feussner in 1933. These allowed for the measurement of high temperatures in air up to 2,000 °C. In Munich, Germany, in 1957, Rudolf Mössbauer, in what has been called one of the "landmark experiments in twentieth-century physics", discovered the resonant and recoil-free emission and absorption of gamma rays by atoms in a solid metal sample containing only 191Ir. This phenomenon, known as the Mössbauer effect (which has since been observed for other nuclei, such as 57Fe), and developed as Mössbauer spectroscopy, has made important contributions to research in physics, chemistry, biochemistry, metallurgy, and mineralogy.
Mössbauer received the Nobel Prize in Physics in 1961, at the age of 32, just three years after he published his discovery. In 1986, Rudolf Mössbauer was honored for his achievements with the Albert Einstein Medal and the Elliot Cresson Medal. Iridium is one of the nine least abundant stable elements in Earth's crust, having an average mass fraction of 0.001 ppm in crustal rock; platinum is 10 times more abundant, gold is 40 times more abundant, and silver and mercury are 80 times more abundant. Tellurium is about as abundant as iridium. In contrast to its low abundance in crustal rock, iridium is relatively common in meteorites, with concentrations of 0.5 ppm or more. The overall concentration of iridium on Earth is thought to be much higher than what is observed in crustal rocks, but because of the density and siderophilic ("iron-loving") character of iridium, it descended below the crust and into Earth's core when the planet was still molten. Iridium is found in nature as an uncombined element or in natural alloys, especially the iridium–osmium alloys: osmiridium (osmium-rich) and iridosmium (iridium-rich). In nickel and copper deposits, the platinum group metals occur as sulfides (e.g., (Pt,Pd)S), tellurides (e.g., PtBiTe), antimonides (e.g., PdSb), and arsenides. In all of these compounds, platinum is replaced by a small amount of iridium and osmium. As with all of the platinum group metals, iridium can be found naturally in alloys with raw nickel or raw copper. A number of iridium-dominant minerals, with iridium as the species-forming element, are known. They are exceedingly rare and often represent the iridium analogues of the minerals given above; examples are irarsite and cuproiridsite. Within Earth's crust, iridium is found at its highest concentrations in three types of geologic structure: igneous deposits (crustal intrusions from below), impact craters, and deposits reworked from one of the former structures. The largest known primary reserves are in the Bushveld igneous complex in South Africa (near the largest known impact crater, the Vredefort crater), though the large copper–nickel deposits near Norilsk in Russia and the Sudbury Basin (also an impact crater) in Canada are also significant sources of iridium. Smaller reserves are found in the United States. Iridium is also found in secondary deposits, combined with platinum and other platinum group metals in alluvial deposits. The alluvial deposits used by pre-Columbian people in the Chocó Department of Colombia are still a source for platinum-group metals. As of 2003, world reserves have not been estimated. The Cretaceous–Paleogene boundary of 66 million years ago, marking the temporal border between the Cretaceous and Paleogene periods of geological time, was identified by a thin stratum of iridium-rich clay. A team led by Luis Alvarez proposed in 1980 an extraterrestrial origin for this iridium, attributing it to an asteroid or comet impact. Their theory, known as the Alvarez hypothesis, is now widely accepted to explain the extinction of the non-avian dinosaurs. A large buried impact crater structure with an estimated age of about 66 million years was later identified under what is now the Yucatán Peninsula (the Chicxulub crater). Dewey M. McLean and others argue that the iridium may have been of volcanic origin instead, because Earth's core is rich in iridium and active volcanoes such as Piton de la Fournaise, on the island of Réunion, are still releasing iridium.
World production of iridium in 2019 was about 242,000 troy ounces (roughly 7.5 tonnes). Iridium is also obtained commercially as a by-product of nickel and copper mining and processing. During electrorefining of copper and nickel, noble metals such as silver, gold and the platinum group metals, as well as selenium and tellurium, settle to the bottom of the cell as "anode mud", which forms the starting point for their extraction. To separate the metals, they must first be brought into solution. Several separation methods are available depending on the nature of the mixture; two representative methods are fusion with sodium peroxide followed by dissolution in aqua regia, and dissolution in a mixture of chlorine with hydrochloric acid. After the mixture is dissolved, iridium is separated from the other platinum group metals by precipitating ammonium hexachloroiridate, (NH4)2IrCl6, or by extracting with organic amines. The first method is similar to the procedure Tennant and Wollaston used for their separation. The second method can be planned as continuous liquid–liquid extraction and is therefore more suitable for industrial-scale production. In either case, the product is reduced using hydrogen, yielding the metal as a powder or "sponge" that can be treated using powder metallurgy techniques. Iridium prices have fluctuated over a considerable range. With a relatively small volume in the world market (compared to other industrial metals like aluminium or copper), the iridium price reacts strongly to instabilities in production, demand, speculation, hoarding, and politics in the producing countries. As a substance with rare properties, its price has been particularly influenced by changes in modern technology: the gradual decrease between 2001 and 2003 has been related to an oversupply of Ir crucibles used for industrial growth of large single crystals, while the prices above 1,000 USD/oz between 2010 and 2014 have been explained by the installation of production facilities for single-crystal sapphire used in LED backlights for TVs. The demand for iridium surged from 2.5 tonnes in 2009 to 10.4 tonnes in 2010, mostly because of electronics-related applications that saw a rise from 0.2 to 6 tonnes – iridium crucibles are commonly used for growing large high-quality single crystals, demand for which has increased sharply. This increase in iridium consumption is predicted to saturate due to accumulating stocks of crucibles, as happened earlier in the 2000s. Other major applications include spark plugs, which consumed 0.78 tonnes of iridium in 2007, electrodes for the chloralkali process (1.1 t in 2007) and chemical catalysts (0.75 t in 2007). The high melting point, hardness and corrosion resistance of iridium and its alloys determine most of its applications. Iridium and especially its alloys (sometimes with platinum or osmium) exhibit low wear and are used, for example, for multi-pored spinnerets, through which a plastic polymer melt is extruded to form fibers, such as rayon. Osmium–iridium is used for compass bearings and for balances. Their resistance to arc erosion makes iridium alloys ideal for electrical contacts for spark plugs, and iridium-based spark plugs are particularly used in aviation. Pure iridium is extremely brittle, to the point of being hard to weld because the heat-affected zone cracks, but it can be made more ductile by the addition of small quantities of titanium and zirconium (0.2% of each apparently works well). Corrosion and heat resistance make iridium an important alloying agent.
Certain long-life aircraft engine parts are made of an iridium alloy, and an iridium–titanium alloy is used for deep-water pipes because of its corrosion resistance. Iridium is also used as a hardening agent in platinum alloys. The Vickers hardness of pure platinum is 56 HV, whereas platinum with 50% iridium can reach over 500 HV. Devices that must withstand extremely high temperatures are often made from iridium. For example, high-temperature crucibles made of iridium are used in the Czochralski process to produce oxide single crystals (such as sapphires) for use in computer memory devices and in solid-state lasers. The crystals, such as gadolinium gallium garnet and yttrium gallium garnet, are grown by melting pre-sintered charges of mixed oxides under oxidizing conditions at temperatures up to 2100 °C. Iridium compounds are used as catalysts in the Cativa process for carbonylation of methanol to produce acetic acid. The radioisotope iridium-192 is one of the two most important sources of energy for use in industrial γ-radiography for non-destructive testing of metals. Additionally, 192Ir is used as a source of gamma radiation for the treatment of cancer using brachytherapy, a form of radiotherapy where a sealed radioactive source is placed inside or next to the area requiring treatment. Specific treatments include high-dose-rate prostate brachytherapy, biliary duct brachytherapy, and intracavitary cervix brachytherapy. In February 2019, medical scientists announced that iridium attached to albumin, creating a photosensitized molecule, can penetrate cancer cells and, after being irradiated with light (a process called photodynamic therapy), destroy the cancer cells. Iridium is a good catalyst for the decomposition of hydrazine (into hot nitrogen and ammonia), and this is used in practice in low-thrust rocket engines; there are more details in the monopropellant rocket article. An alloy of 90% platinum and 10% iridium was used in 1889 to construct the International Prototype Metre and kilogram mass, kept by the International Bureau of Weights and Measures near Paris. The metre bar was replaced as the definition of the fundamental unit of length in 1960 by a line in the atomic spectrum of krypton, but the kilogram prototype remained the international standard of mass until 20 May 2019, when the kilogram was redefined in terms of the Planck constant. Iridium is often used as a coating for non-conductive materials in preparation for observation in scanning electron microscopes (SEM). The addition of a 2–20 nm layer of iridium helps organic materials in particular survive electron beam damage and reduces static charge build-up within the target area of the SEM beam's focal point. A coating of iridium also increases the signal-to-noise ratio associated with secondary electron emission, which is essential to using SEMs for X-ray spectrographic composition analysis. While other metals can be used for coating objects for SEM use, iridium is the preferred coating when samples will be studied with a wide variety of imaging parameters. Iridium has been used in the radioisotope thermoelectric generators of unmanned spacecraft such as the "Voyager", "Viking", "Pioneer", "Cassini", "Galileo", and "New Horizons". Iridium was chosen to encapsulate the plutonium-238 fuel in the generator because it can withstand operating temperatures of up to 2000 °C and because of its great strength. Another use concerns X-ray optics, especially X-ray telescopes.
The mirrors of the Chandra X-ray Observatory are coated with a layer of iridium 60 nm thick. Iridium proved to be the best choice for reflecting X-rays after nickel, gold, and platinum were also tested. The iridium layer, which had to be smooth to within a few atoms, was applied by depositing iridium vapor under high vacuum on a base layer of chromium. Iridium is used in particle physics for the production of antiprotons, a form of antimatter. Antiprotons are made by shooting a high-intensity proton beam at a "conversion target", which needs to be made from a very high density material. Although tungsten may be used instead, iridium has the advantage of better stability under the shock waves induced by the temperature rise due to the incident beam. Carbon–hydrogen bond activation (C–H activation) is an area of research on reactions that cleave carbon–hydrogen bonds, which were traditionally regarded as unreactive. The first reported successes at activating C–H bonds in saturated hydrocarbons, published in 1982, used organometallic iridium complexes that undergo an oxidative addition with the hydrocarbon. Iridium complexes are being investigated as catalysts for asymmetric hydrogenation. These catalysts have been used in the synthesis of natural products and are able to hydrogenate certain difficult substrates, such as unfunctionalized alkenes, enantioselectively (generating only one of the two possible enantiomers). Iridium forms a variety of complexes of fundamental interest in triplet harvesting. Iridium–osmium alloys were used in fountain pen nib tips. The first major use of iridium was in 1834 in nibs mounted on gold. From 1944, the famous Parker 51 fountain pen was fitted with a nib tipped by a ruthenium and iridium alloy (with 3.8% iridium). The tip material in modern fountain pens is still conventionally called "iridium", although there is seldom any iridium in it; other metals such as ruthenium, osmium, and tungsten have taken its place. An iridium–platinum alloy was used for the touch holes or vent pieces of cannon. According to a report of the Paris Exhibition of 1867, one of the pieces being exhibited by Johnson and Matthey "has been used in a Whitworth gun for more than 3000 rounds, and scarcely shows signs of wear yet. Those who know the constant trouble and expense which are occasioned by the wearing of the vent-pieces of cannon when in active service, will appreciate this important adaptation". The pigment "iridium black", which consists of very finely divided iridium, is used for painting porcelain an intense black; it was said that "all other porcelain black colors appear grey by the side of it". Iridium in bulk metallic form is not biologically important or hazardous to health due to its lack of reactivity with tissues; there are only about 20 parts per trillion of iridium in human tissue. Like most metals, finely divided iridium powder can be hazardous to handle, as it is an irritant and may ignite in air. Very little is known about the toxicity of iridium compounds, primarily because it is used so rarely that few people come in contact with it, and those who do handle only very small amounts. However, soluble salts, such as the iridium halides, could be hazardous due to elements other than iridium or due to iridium itself. At the same time, most iridium compounds are insoluble, which makes absorption into the body difficult. A radioisotope of iridium, 192Ir, is dangerous, like other radioactive isotopes.
The only reported injuries related to iridium concern accidental exposure to radiation from 192Ir used in brachytherapy. High-energy gamma radiation from 192Ir can increase the risk of cancer. External exposure can cause burns, radiation poisoning, and death. Ingestion of 192Ir can burn the linings of the stomach and the intestines. 192Ir, 192mIr, and 194mIr tend to deposit in the liver, and can pose health hazards from both gamma and beta radiation.
https://en.wikipedia.org/wiki?curid=14752
International Phonetic Alphabet The International Phonetic Alphabet (IPA) is an alphabetic system of phonetic notation based primarily on the Latin alphabet. It was devised by the International Phonetic Association in the late 19th century as a standardized representation of the sounds of spoken language. The IPA is used by lexicographers, foreign language students and teachers, linguists, speech-language pathologists, singers, actors, constructed language creators and translators. The IPA is designed to represent only those qualities of speech that are part of oral language: phones, phonemes, intonation and the separation of words and syllables. To represent additional qualities of speech, such as tooth gnashing, lisping, and sounds made with a cleft lip and cleft palate, an extended set of symbols, the extensions to the International Phonetic Alphabet, may be used. IPA symbols are composed of one or more elements of two basic types, letters and diacritics. For example, the sound of the English letter may be transcribed in IPA with a single letter, , or with a letter plus diacritics, , depending on how precise one wishes to be. Often, slashes are used to signal broad or phonemic transcription; thus, is less specific than, and could refer to, either or , depending on the context and language. Occasionally letters or diacritics are added, removed or modified by the International Phonetic Association. As of the most recent change in 2005, there are 107 letters, 52 diacritics and four prosodic marks in the IPA. These are shown in the current IPA chart, also posted below in this article and at the website of the IPA. In 1886, a group of French and British language teachers, led by the French linguist Paul Passy, formed what would come to be known from 1897 onwards as the International Phonetic Association (in French, ""). Their original alphabet was based on a spelling reform for English known as the Romic alphabet, but in order to make it usable for other languages, the values of the symbols were allowed to vary from language to language. For example, the sound (the "sh" in "shoe") was originally represented with the letter in English, but with the digraph in French. However, in 1888, the alphabet was revised so as to be uniform across languages, thus providing the base for all future revisions. The idea of making the IPA was first suggested by Otto Jespersen in a letter to Paul Passy. It was developed by Alexander John Ellis, Henry Sweet, Daniel Jones, and Passy. Since its creation, the IPA has undergone a number of revisions. After revisions and expansions from the 1890s to the 1940s, the IPA remained primarily unchanged until the Kiel Convention in 1989. A minor revision took place in 1993 with the addition of four letters for mid central vowels and the removal of letters for voiceless implosives. The alphabet was last revised in May 2005 with the addition of a letter for a labiodental flap. Apart from the addition and removal of symbols, changes to the IPA have consisted largely of renaming symbols and categories and in modifying typefaces. Extensions to the International Phonetic Alphabet for speech pathology were created in 1990 and officially adopted by the International Clinical Phonetics and Linguistics Association in 1994. The general principle of the IPA is to provide one letter for each distinctive sound (speech segment), although this practice is not followed if the sound itself is complex. 
For instance, flaps and taps are two different kinds of articulation, but since no language has (yet) been found to make a distinction between, say, an alveolar flap and an alveolar tap, the IPA does not provide such sounds with dedicated letters. Instead, it provides a single letter (in this case, ) for both. Strictly speaking, this makes the IPA a partially phonemic alphabet, not a purely phonetic one. The alphabet is designed for transcribing sounds (phones), not phonemes, though it is used for phonemic transcription as well. A few letters that did not indicate specific sounds have been retired (, once used for the 'compound' tone of Swedish and Norwegian, and , once used for the moraic nasal of Japanese), though one remains: , used for the sj-sound of Swedish. When the IPA is used for phonemic transcription, the letter–sound correspondence can be rather loose. For example, and are used in the IPA "Handbook" for and . Among the symbols of the IPA, 107 letters represent consonants and vowels, 31 diacritics are used to modify these, and 19 additional signs indicate suprasegmental qualities such as length, tone, stress, and intonation. These are organized into a chart; the chart displayed here is the official chart as posted at the website of the IPA. The letters chosen for the IPA are meant to harmonize with the Latin alphabet. For this reason, most letters are either Latin or Greek, or modifications thereof. Some letters are neither: for example, the letter denoting the glottal stop, , has the form of a dotless question mark, and derives originally from an apostrophe. A few letters, such as that of the voiced pharyngeal fricative, , were inspired by other writing systems (in this case, the Arabic letter ""). Despite its preference for harmonizing with the Latin script, the International Phonetic Association has occasionally admitted other letters. For example, before 1989, the IPA letters for click consonants were , , , and , all of which were derived either from existing IPA letters, or from Latin and Greek letters. However, except for , none of these letters were widely used among Khoisanists or Bantuists, and as a result they were replaced by the more widespread symbols , , , , and at the IPA Kiel Convention in 1989. Although the IPA diacritics are fully featural, there is little systemicity in the letter forms. A retroflex articulation is consistently indicated with a right-swinging tail, as in , and implosion by a top hook, , but other pseudo-featural elements are due to haphazard derivation and coincidence. For example, all nasal consonants but the uvular one are based on the form : . However, the similarity between and is a historical accident; and are derived from ligatures of "gn" and "ng", and is an "ad hoc" imitation of . Some of the new letters were ordinary Latin letters turned 180 degrees, such as (turned ). This was easily done in the era of mechanical typesetting, and had the advantage of not requiring the casting of special type for IPA symbols. Full capital letters are not used as IPA symbols. They are, however, often used for archiphonemes and for natural classes of phonemes (that is, as wildcards). Such usage is not part of the IPA or even standardized, and may be ambiguous between authors, but it is commonly used in conjunction with the IPA. (The extIPA chart, for example, uses wildcards in its illustrations.) Capital letters are also basic to the Voice Quality Symbols sometimes used in conjunction with the IPA.
As wildcards, for {consonant} and for {vowel} are ubiquitous. Other common capital-letter symbols are for {tone/accent} (tonicity), for {nasal}, for {plosive}, for {fricative}, for {sibilant}, for {glide/approximant}, for {liquid}, for {rhotic} or {resonant} (sonorant), for {click}, for {open, front, back, close vowel} and for {labial, alveolar, post-alveolar/palatal, velar, uvular, pharyngeal, glottal consonant}, respectively, and for any sound. For example, the possible syllable shapes of Mandarin can be abstracted as ranging from (an atonic vowel) to (a consonant-glide-vowel-nasal syllable with tone). The letters can be modified with IPA diacritics, for example for {ejective}, for {implosive}, or for {prenasalized consonant}, for {nasal vowel}, for {voiced sibilant}, for {voiceless nasal}, or for {affricate}, for {palatalized consonant} and for {dental consonant}. In speech pathology, capital letters represent indeterminate sounds, and may be superscripted to indicate they are weakly articulated: e.g. is a weak indeterminate alveolar, a weak indeterminate velar. Typical examples of archiphonemic use of capital letters are for the Turkish harmonic vowel set, for the conflated flapped middle consonant of American English "writer" and "rider", and for the homorganic syllable-coda nasal of languages such as Spanish (essentially equivalent to the wild-card usage of the letter). , and have different meanings as Voice Quality Symbols, where they stand for "voice" (generally meaning secondary articulation rather than phonetic voicing), "falsetto" and "creak". They may take diacritics that indicate what kind of voice quality an utterance has, and may be used to extract a suprasegmental feature that occurs on all susceptible segments in a stretch of IPA. For instance, the transcription of Scottish Gaelic 'cat' and 'cats' (Islay dialect) can be made more economical by extracting the suprasegmental labialization of the words: and . The International Phonetic Alphabet is based on the Latin alphabet, using as few non-Latin forms as possible. The Association created the IPA so that the sound values of most consonant letters taken from the Latin alphabet would correspond to "international usage". Hence, the letters , , , (hard) , (non-silent) , (unaspirated) , , , , (unaspirated) , (voiceless) , (unaspirated) , , , and have the values used in English; and the vowel letters from the Latin alphabet (, , , , ) correspond to the (long) sound values of Latin: is like the vowel in "machine", is as in "rule", etc. Other letters may differ from English, but are used with these values in other European languages, such as , , and . This inventory was extended by using small-capital and cursive forms, diacritics and rotation. There are also several symbols derived or taken from the Greek alphabet, though the sound values may differ. For example, is a vowel in Greek, but an only indirectly related consonant in the IPA. For most of these, subtly different glyph shapes have been devised for the IPA, namely , , , , , , and , which are encoded in Unicode separately from their parent Greek letters, though one of them – – is not, while Greek and are generally used for Latin and . The sound values of modified Latin letters can often be derived from those of the original letters. For example, letters with a rightward-facing hook at the bottom represent retroflex consonants; and small capital letters usually represent uvular consonants.
Apart from the fact that certain kinds of modification to the shape of a letter generally correspond to certain kinds of modification to the sound represented, there is no way to deduce the sound represented by a symbol from its shape (as for example in Visible Speech) nor even any systematic relation between signs and the sounds they represent (as in Hangul). Beyond the letters themselves, there are a variety of secondary symbols which aid in transcription. Diacritic marks can be combined with IPA letters to transcribe modified phonetic values or secondary articulations. There are also special symbols for suprasegmental features such as stress and tone that are often employed. There are two principal types of brackets used to set off IPA transcriptions: Other conventions are less commonly seen: All three of the above are provided by the IPA "Handbook". The following are not, but may be seen in IPA transcription: IPA letters have cursive forms designed for use in manuscripts and when taking field notes. In the early stages of the alphabet, the typographic variants of "g", opentail () and looptail (), represented different values, but are now regarded as equivalents. Opentail has always represented a voiced velar plosive, while was distinguished from and represented a voiced velar fricative from 1895 to 1900. Subsequently, represented the fricative, until 1931 when it was replaced again by . In 1948, the Council of the Association recognized and as typographic equivalents, and this decision was reaffirmed in 1993. While the 1949 "Principles of the International Phonetic Association" recommended the use of for a velar plosive and for an advanced one for languages where it is preferable to distinguish the two, such as Russian, this practice never caught on. The 1999 "Handbook of the International Phonetic Association", the successor to the "Principles", abandoned the recommendation and acknowledged both shapes as acceptable variants. The International Phonetic Alphabet is occasionally modified by the Association. After each modification, the Association provides an updated simplified presentation of the alphabet in the form of a chart. (See History of the IPA.) Not all aspects of the alphabet can be accommodated in a chart of the size published by the IPA. The alveolo-palatal and epiglottal consonants, for example, are not included in the consonant chart for reasons of space rather than of theory (two additional columns would be required, one between the retroflex and palatal columns and the other between the pharyngeal and glottal columns), and the lateral flap would require an additional row for that single consonant, so they are listed instead under the catchall block of "other symbols". The indefinitely large number of tone letters would make a full accounting impractical even on a larger page, and only a few examples are shown. The procedure for modifying the alphabet or the chart is to propose the change in the "Journal of the IPA." (See, for example, August 2008 on an open central unrounded vowel and August 2011 on central approximants.) Reactions to the proposal may be published in the same or subsequent issues of the Journal (as in August 2009 on the open central vowel). A formal proposal is then put to the Council of the IPA – which is elected by the membership – for further discussion and a formal vote. Only changes to the alphabet or chart that have been approved by the Council can be considered part of the official IPA. 
Nonetheless, many users of the alphabet, including the leadership of the Association itself, make personal changes or additions in their own practice, either for convenience in the broad phonetic or phonemic transcription of a particular language (see "Illustrations of the IPA" for individual languages in the "Handbook", which for example may use as a phonemic symbol for what is phonetically realized as ), or because they object to some aspect of the official version. Although the IPA offers over 160 symbols for transcribing speech, only a relatively small subset of these will be used to transcribe any one language. It is possible to transcribe speech with various levels of precision. A precise phonetic transcription, in which sounds are described in a great deal of detail, is known as a "narrow transcription". A coarser transcription which ignores some of this detail is called a "broad transcription". Both are relative terms, and both are generally enclosed in square brackets. Broad phonetic transcriptions may restrict themselves to easily heard details, or only to details that are relevant to the discussion at hand, and may differ little if at all from phonemic transcriptions, but they make no theoretical claim that all the distinctions transcribed are necessarily meaningful in the language. For example, the English word "little" may be transcribed broadly using the IPA as , and this broad (imprecise) transcription is a more or less accurate description of many pronunciations. A narrower transcription may focus on individual or dialectal details: in General American, in Cockney, or in Southern US English. It is customary to use simpler letters, without many diacritics, in phonemic transcriptions. The choice of IPA letters may reflect the theoretical claims of the author, or merely be a convenience for typesetting. For instance, in English, either the vowel of "pick" or the vowel of "peak" may be transcribed as (for the pairs or ), and neither is identical to the vowel of the French word "" which is also generally transcribed . That is, letters between slashes do not have absolute values, something true of broader phonetic approximations as well. A narrow transcription may, however, be used to distinguish them: , , . Although IPA is popular for transcription by linguists, American linguists often alternate use of the IPA with Americanist phonetic notation or use the IPA together with some nonstandard symbols, for reasons including reducing the error rate on reading handwritten transcriptions or avoiding perceived awkwardness of IPA in some situations. The exact practice may vary somewhat between languages and even individual researchers, so authors are generally encouraged to include a chart or other explanation of their choices. Some language study programs use the IPA to teach pronunciation. For example, in Russia (and earlier in the Soviet Union) and mainland China, textbooks for children and adults for studying English and French consistently use the IPA. English teachers and textbooks in Taiwan tend to use the Kenyon and Knott system, a slight typographical variant of the IPA first used in the 1944 "Pronouncing Dictionary of American English". Many British dictionaries, including the Oxford English Dictionary and some learner's dictionaries such as the "Oxford Advanced Learner's Dictionary" and the "Cambridge Advanced Learner's Dictionary", now use the International Phonetic Alphabet to represent the pronunciation of words.
However, most American (and some British) volumes use one of a variety of pronunciation respelling systems, intended to be more comfortable for readers of English. For example, the respelling systems in many American dictionaries (such as "Merriam-Webster") use for IPA and for IPA , reflecting common representations of those sounds in written English, using only letters of the English Roman alphabet and variations of them. (In IPA, represents the sound of the French (as in ""), and represents the pair of sounds in "grasshopper".) The IPA is also not universal among dictionaries in languages other than English. Monolingual dictionaries of languages with generally phonemic orthographies usually do not bother with indicating the pronunciation of most words, and tend to use respelling systems for words with unexpected pronunciations. Dictionaries produced in Israel use the IPA rarely and sometimes use the Hebrew alphabet for transcription of foreign words. Monolingual Hebrew dictionaries use pronunciation respelling for words with unusual spelling; for example, the "Even-Shoshan Dictionary" respells as because this word uses kamatz katan. Bilingual dictionaries that translate from foreign languages into Russian usually employ the IPA, but monolingual Russian dictionaries occasionally use pronunciation respelling for foreign words; for example, Sergey Ozhegov's dictionary adds нэ́ in brackets for the French word пенсне ("") to indicate that the final е does not iotate the preceding н. The IPA is more common in bilingual dictionaries, but there are exceptions here too. Mass-market bilingual Czech dictionaries, for instance, tend to use the IPA only for sounds not found in the Czech language. IPA letters have been incorporated into the alphabets of various languages, notably via the Africa Alphabet in many sub-Saharan languages such as Hausa, Fula, Akan, Gbe languages, Manding languages, Lingala, etc. This has created the need for capital variants. For example, Kabiyè of northern Togo has Ɖ ɖ, Ŋ ŋ, Ɣ ɣ, Ɔ ɔ, Ɛ ɛ, Ʋ ʋ. These, and others, are supported by Unicode, but appear in Latin ranges other than the IPA extensions. In the IPA itself, however, only lower-case letters are used. The 1949 edition of the IPA handbook indicated that an asterisk may be prefixed to indicate that a word is a proper name, but this convention was not included in the 1999 "Handbook". The IPA is widely used by classical singers during preparation, as they are frequently required to sing in a variety of foreign languages; vocal coaches also use it to perfect the diction of their students and to improve tone quality and tuning overall. Opera librettos are authoritatively transcribed in IPA, such as Nico Castel's volumes and Timothy Cheek's book "Singing in Czech". Opera singers' ability to read IPA was used by the site "Visual Thesaurus", which employed several opera singers "to make recordings for the 150,000 words and phrases in VT's lexical database ... for their vocal stamina, attention to the details of enunciation, and most of all, knowledge of IPA". The International Phonetic Association organizes the letters of the IPA into three categories: pulmonic consonants, non-pulmonic consonants, and vowels. Pulmonic consonant letters are arranged singly or in pairs of voiceless (tenuis) and voiced sounds, with these then grouped in columns from front (labial) sounds on the left to back (glottal) sounds on the right.
In official publications by the IPA, two columns are omitted to save space, with the letters listed among 'other symbols', and with the remaining consonants arranged in rows from full closure (occlusives: stops and nasals), to brief closure (vibrants: trills and taps), to partial closure (fricatives) and minimal closure (approximants), again with a row left out to save space. In the table below, a slightly different arrangement is made: All pulmonic consonants are included in the pulmonic-consonant table, and the vibrants and laterals are separated out so that the rows reflect the common lenition pathway of "stop → fricative → approximant," as well as the fact that several letters pull double duty as both fricative and approximant; affricates may be created by joining stops and fricatives from adjacent cells. Shaded cells represent articulations that are judged to be impossible. Vowel letters are also grouped in pairs—of unrounded and rounded vowel sounds—with these pairs also arranged from front on the left to back on the right, and from maximal closure at top to minimal closure at bottom. No vowel letters are omitted from the chart, though in the past some of the mid central vowels were listed among the 'other symbols'. Each character is assigned a number, to prevent confusion between similar characters (such as and , and , or and ) in such situations as the printing of manuscripts. The categories of sounds are assigned different ranges of numbers. The numbers are assigned to sounds and to symbols, e.g. 304 is the open front unrounded vowel, 415 is the centralization diacritic. Together, they form a symbol that represents the open central unrounded vowel, . A pulmonic consonant is a consonant made by obstructing the glottis (the space between the vocal cords) or oral cavity (the mouth) and either simultaneously or subsequently letting out air from the lungs. Pulmonic consonants make up the majority of consonants in the IPA, as well as in human language. All consonants in the English language fall into this category. The pulmonic consonant table, which includes most consonants, is arranged in rows that designate manner of articulation, meaning how the consonant is produced, and columns that designate place of articulation, meaning where in the vocal tract the consonant is produced. The main chart includes only consonants with a single place of articulation. Non-pulmonic consonants are sounds whose airflow is not dependent on the lungs. These include clicks (found in the Khoisan languages of Africa), implosives (found in languages such as Sindhi, Saraiki, Swahili and Vietnamese), and ejectives (found in many Amerindian and Caucasian languages). Affricates and co-articulated stops are represented by two letters joined by a tie bar, either above or below the letters. The six most common affricates are optionally represented by ligatures, though this is no longer official IPA usage, because a great number of ligatures would be required to represent all affricates this way. Alternatively, a superscript notation for a consonant release is sometimes used to transcribe affricates, for example for , paralleling ~ . The letters for the palatal plosives and are often used as a convenience for and or similar affricates, even in official IPA publications, so they must be interpreted with care. Co-articulated consonants are sounds that involve two simultaneous places of articulation (are pronounced using two parts of the vocal tract).
In English, the in "went" is a coarticulated consonant, being pronounced by rounding the lips and raising the back of the tongue. Similar sounds are and . The IPA defines a vowel as a sound which occurs at a syllable center. Below is a chart depicting the vowels of the IPA. The IPA maps the vowels according to the position of the tongue. The vertical axis of the chart is mapped by vowel height. Vowels pronounced with the tongue lowered are at the bottom, and vowels pronounced with the tongue raised are at the top. For example, (the first vowel in "father") is at the bottom because the tongue is lowered in this position. However, (the vowel in "meet") is at the top because the sound is said with the tongue raised to the roof of the mouth. In a similar fashion, the horizontal axis of the chart is determined by vowel backness. Vowels with the tongue moved towards the front of the mouth (such as , the vowel in "met") are to the left in the chart, while those in which it is moved to the back (such as , the vowel in "but") are placed to the right in the chart. In places where vowels are paired, the right represents a rounded vowel (in which the lips are rounded) while the left is its unrounded counterpart. Diphthongs are typically specified with a non-syllabic diacritic, as in or , or with a superscript for the on- or off-glide, as in or . Sometimes a tie bar is used, especially if it is difficult to tell if the diphthong is characterized by an on-glide, an off-glide or is variable: . Diacritics are used for phonetic detail. They are added to IPA letters to indicate a modification or specification of that letter's normal pronunciation. By being made superscript, any IPA letter may function as a diacritic, conferring elements of its articulation to the base letter. (See secondary articulation for a list of superscript IPA letters supported by Unicode.) Those superscript letters listed below are specifically provided for by the IPA; others include ( with fricative release), ( with affricate onset), (prenasalized ), ( with breathy voice), (glottalized ), ( with a flavor of ), ( with diphthongization), (compressed ). Superscript diacritics placed after a letter are ambiguous between simultaneous modification of the sound and phonetic detail at the end of the sound. For example, labialized may mean either simultaneous and or else with a labialized release. Superscript diacritics placed before a letter, on the other hand, normally indicate a modification of the onset of the sound ( glottalized , with a glottal onset). Subdiacritics (diacritics normally placed below a letter) may be moved above a letter to avoid conflict with a descender, as in voiceless . The raising and lowering diacritics have optional forms , that avoid descenders. The state of the glottis can be finely transcribed with diacritics. A series of alveolar plosives ranging from an open to a closed glottis phonation are: Additional diacritics are provided by the Extensions to the IPA for speech pathology. These symbols describe the features of a language above the level of individual consonants and vowels, such as prosody, tone, length, and stress, which often operate on syllables, words, or phrases: that is, elements such as the intensity, pitch, and gemination of the sounds of a language, as well as the rhythm and intonation of speech. Although most of these symbols indicate distinctions that are phonemic at the word level, symbols also exist for intonation on a level greater than that of the word.
Various ligatures of tone letters are used in the IPA "Handbook" despite not being found on the simplified official IPA chart. The IPA provides six transcriptional conventions for tone letters: with or without a stave, facing left or facing right from a stave, and placed before or after the word or syllable. That is, an with extra-high tone may be transcribed , , , , , . Only left-facing staved letters are shown on the "Chart", and in practice it is currently more common for tone letters to occur after the syllable/word than before, though historically they came before, as the stress marks still do. As of 2020, the old staveless letters do not have full Unicode support. Finer distinctions of tone may be indicated by combining the tone diacritics and tone letters shown above, though not all IPA fonts support this. The four additional rising and falling tones supported by diacritics are high/mid rising , low rising , high falling , and low/mid falling . That is, tone diacritics only support contour tones across three levels (high, mid, low), despite supporting five levels for register tones. For other contour tones, tone letters must be used: , etc. For more complex (peaking and dipping, etc.) tones, one may combine three or four tone diacritics in any permutation, though in practice only generic peaking and dipping combinations are used. For finer detail, tone letters are again required (, etc.) The correspondence between tone diacritics and tone letters is therefore only approximate. A work-around for diacritics sometimes seen when a language has more than one rising or falling tone, and the author wishes to avoid the poorly legible diacritics but does not wish to completely abandon the IPA, is to restrict generic rising and falling to the higher-pitched of the rising and falling tones, say and , and to use the old (retired) IPA subscript diacritics and for the lower-pitched rising and falling tones, say and . When a language has four or six level tones, the two mid tones are sometimes transcribed as high-mid (non-standard) and low-mid . A stress mark typically appears before the stressed syllable, and thus marks the syllable break as well as stress. However, occasionally the stress mark is placed immediately before the stressed vowel, after any consonantal syllable onset. In such transcriptions, the stress mark does not function as a mark of the syllable boundary. Tone letters generally appear after each syllable, for a language with syllable tone (), or after the phonological word, for a language with word tone (). However, in older versions of the IPA, "ad hoc" tone marks were placed before the syllable, the same position as used to mark stress, and this convention is still sometimes seen (, ). There are three boundary markers, for a syllable break, for a minor prosodic break and for a major prosodic break. The tags 'minor' and 'major' are intentionally ambiguous. Depending on need, 'minor' may vary from a foot break to a continuing–prosodic-unit boundary (equivalent to a comma), and while 'major' is often any intonation break, it may be restricted to a final–prosodic-unit boundary (equivalent to a period). Although not part of the IPA, the following boundary symbols are often used in conjunction with the IPA: for a mora or mora boundary, for a syllable or syllable boundary, for a word boundary, for a phrase or intermediate boundary and for a prosodic boundary. For example, C# is a word-final consonant, %V a post-pausa vowel, and T% an IU-final tone (edge tone).
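As a concrete illustration of the staved tone letters and tone diacritics discussed above: the five tone bars are encoded in Unicode at U+02E5 through U+02E9, and the level-tone diacritics are ordinary combining marks. The following minimal Python sketch composes a syllable using either convention; the tone names and helper functions are illustrative conveniences, not IPA terminology:

# Composing IPA tone marking in Unicode (illustrative sketch).
TONE_LETTERS = {
    "extra-high": "\u02E5",  # ˥
    "high":       "\u02E6",  # ˦
    "mid":        "\u02E7",  # ˧
    "low":        "\u02E8",  # ˨
    "extra-low":  "\u02E9",  # ˩
}
TONE_DIACRITICS = {
    "extra-high": "\u030B",  # combining double acute
    "high":       "\u0301",  # combining acute
    "mid":        "\u0304",  # combining macron
    "low":        "\u0300",  # combining grave
    "extra-low":  "\u030F",  # combining double grave
    "rising":     "\u030C",  # combining caron
    "falling":    "\u0302",  # combining circumflex
}

def with_tone_letters(syllable, *tones):
    # Tone letters follow the syllable; a sequence such as low + extra-low
    # + high renders as a dipping-then-rising contour.
    return syllable + "".join(TONE_LETTERS[t] for t in tones)

def with_tone_diacritic(syllable, vowel_index, tone):
    # A combining diacritic attaches to the character it follows in the string.
    i = vowel_index + 1
    return syllable[:i] + TONE_DIACRITICS[tone] + syllable[i:]

print(with_tone_letters("ma", "low", "extra-low", "high"))  # ma˨˩˦
print(with_tone_diacritic("ma", 1, "rising"))               # mǎ

Whether the combining marks and tone bars display correctly depends on the font, which mirrors the rendering caveats noted above.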
IPA diacritics may be doubled to indicate an extra degree of the feature indicated. This is a productive process, but apart from extra-high and extra-low tones being marked by doubled high- and low-tone diacritics, and the major prosodic break being marked as a double minor break , it is not specifically regulated by the IPA. (Note that transcription marks are similar: double slashes indicate extra (morpho)-phonemic, double square brackets especially precise, and double parentheses especially unintelligible.) For example, the stress mark may be doubled to indicate an extra degree of stress, such as prosodic stress in English. An example in French, with a single stress mark for normal prosodic stress at the end of each prosodic unit (marked as a minor prosodic break), and a double stress mark for contrastive/emphatic stress: "." Similarly, a doubled secondary stress mark is commonly used for tertiary (extra-light) stress. Length is commonly extended by repeating the length mark, as in English "shhh!" , or for "overlong" segments in Estonian: (Normally additional degrees of length are handled by the extra-short or half-long diacritics, but in the Estonian examples, the first two cases are analyzed as simply short and long.) Occasionally other diacritics are doubled: The IPA once had parallel symbols from alternative proposals, but in most cases eventually settled on one for each sound. The rejected symbols are now considered obsolete. An example is the vowel letter , rejected in favor of . Letters for affricates and sounds with inherent secondary articulation have also been mostly rejected, with the idea that such features should be indicated with tie bars or diacritics: for is one. In addition, the rare voiceless implosives, , have been dropped and are now usually written . A retired set of click letters, , is still sometimes seen, as the official pipe letters may cause problems with legibility, especially when used with brackets ([ ] or / /), the letter , or the prosodic marks (for this reason, some publications which use the current IPA pipe letters disallow IPA brackets). Individual non-IPA letters may find their way into publications that otherwise use the standard IPA. This is especially common with: In addition, there are typewriter substitutions for when IPA support is not available, such as capital for . The "Extensions to the IPA", often abbreviated as "extIPA" and sometimes called "Extended IPA", are symbols whose original purpose was to accurately transcribe disordered speech. At the Kiel Convention in 1989, a group of linguists drew up the initial extensions, which were based on the previous work of the PRDS (Phonetic Representation of Disordered Speech) Group in the early 1980s. The extensions were first published in 1990, then modified, and published again in 1994 in the "Journal of the International Phonetic Association", when they were officially adopted by the ICPLA. While the original purpose was to transcribe disordered speech, linguists have used the extensions to designate a number of unique sounds within standard communication, such as hushing, gnashing teeth, and smacking lips. In addition to the Extensions to the IPA there are the conventions of the Voice Quality Symbols, which besides the concept of voice quality in phonetics include a number of symbols for additional airstream mechanisms and secondary articulations. The blank cells on the IPA chart can be filled without too much difficulty if the need arises. 
Some "ad hoc" letters have appeared in the literature for the retroflex lateral flap and the retroflex clicks (having the expected forms of and plus a retroflex tail; the analogous for a retroflex implosive is even mentioned in the IPA "Handbook"), the voiceless lateral fricatives (now provided for by the extIPA), the epiglottal trill (arguably covered by the generally-trilled epiglottal "fricatives" ), the labiodental plosives ( in some old Bantuist texts) and the near-close central vowels ( in some publications). Diacritics can duplicate some of those, such as for the lateral flap, for the labiodental plosives and for the central vowels, and are able to fill in most of the remainder of the charts. If a sound cannot be transcribed, an asterisk may be used, either as a letter or as a diacritic (as in sometimes seen for the Korean "fortis" velar). Representations of consonant sounds outside of the core set are created by adding diacritics to letters with similar sound values. The Spanish bilabial and dental approximants are commonly written as lowered fricatives, and respectively. Similarly, voiced lateral fricatives would be written as raised lateral approximants, . A few languages such as Banda have a bilabial flap as the preferred allophone of what is elsewhere a labiodental flap. It has been suggested that this be written with the labiodental flap letter and the advanced diacritic, . Similarly, a labiodental trill would be written (bilabial trill and the dental sign), and labiodental stops rather than with the "ad hoc" letters sometimes found in the literature. Other taps can be written as extra-short plosives or laterals, e.g. , though in some cases the diacritic would need to be written below the letter. A retroflex trill can be written as a retracted , just as non-subapical retroflex fricatives sometimes are. The remaining consonants, the uvular laterals ( "etc.") and the palatal trill, while not strictly impossible, are very difficult to pronounce and are unlikely to occur even as allophones in the world's languages. The vowels are similarly manageable by using diacritics for raising, lowering, fronting, backing, centering, and mid-centering. For example, the unrounded equivalent of can be transcribed as mid-centered , and the rounded equivalent of as raised or lowered (though for those who conceive of vowel space as a triangle, simple already is the rounded equivalent of ). True mid vowels are lowered or raised , while centered and (or, less commonly, ) are near-close and open central vowels, respectively. The only known vowels that cannot be represented in this scheme are vowels with unexpected roundedness, which would require a dedicated diacritic, such as protruded and compressed (or and ). An IPA symbol is often distinguished from the sound it is intended to represent, since there is not necessarily a one-to-one correspondence between letter and sound in broad transcription, making articulatory descriptions such as "mid front rounded vowel" or "voiced velar stop" unreliable. While the "Handbook of the International Phonetic Association" states that no official names exist for its symbols, it admits the presence of one or two common names for each. The symbols also have nonce names in the Unicode standard. In some cases, the Unicode names and the IPA names do not agree. For example, IPA calls "epsilon", but Unicode calls it "small letter open E". The traditional names of the Latin and Greek letters are usually used for unmodified letters. 
Letters which are not directly derived from these alphabets, such as , may have a variety of names, sometimes based on the appearance of the symbol or on the sound that it represents. In Unicode, some of the letters of Greek origin have Latin forms for use in IPA; the others use the letters from the Greek section. For diacritics, there are two methods of naming. For traditional diacritics, the IPA notes the name in a well-known language; for example, is "acute", based on the name of the diacritic in English and French. Non-traditional diacritics are often named after objects they resemble, so is called "bridge". Geoffrey Pullum and William Ladusaw list a variety of names in use for IPA symbols, both current and retired, in addition to names of many other non-IPA phonetic symbols in their "Phonetic Symbol Guide". IPA typeface support is increasing, and is now included in several typefaces such as the Times New Roman versions that come with various recent computer operating systems. Diacritics are not always properly rendered, however. IPA typefaces that are freely available online include: Some commercial IPA-compatible typefaces include: These all include several ranges of characters in addition to the IPA. Modern Web browsers generally do not need any configuration to display these symbols, provided that a typeface capable of doing so is available to the operating system. Several systems have been developed that map the IPA symbols to ASCII characters. Notable systems include SAMPA and X-SAMPA. The usage of mapping systems in on-line text has to some extent been adopted in the context of input methods, allowing convenient keying of IPA characters that would be otherwise unavailable on standard keyboard layouts. Online IPA keyboard utilities are available, and they cover the complete range of IPA symbols and diacritics. In April 2019, Google's Gboard for Android and iOS added an IPA keyboard to its platform.
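To illustrate how the ASCII mapping systems mentioned above work, here is a minimal Python sketch of an X-SAMPA-to-IPA converter. The mapping table is a small, well-known subset chosen for illustration; the full X-SAMPA standard covers the entire IPA and includes many more multi-character sequences:

# Toy X-SAMPA -> IPA converter (illustrative subset only).
X_SAMPA_TO_IPA = {
    "r\\": "\u0279",  # ɹ, alveolar approximant
    "S": "\u0283",    # ʃ
    "Z": "\u0292",    # ʒ
    "T": "\u03B8",    # θ
    "D": "\u00F0",    # ð
    "N": "\u014B",    # ŋ
    "?": "\u0294",    # ʔ
    "@": "\u0259",    # ə
    "E": "\u025B",    # ɛ
    "I": "\u026A",    # ɪ
    "U": "\u028A",    # ʊ
    "{": "\u00E6",    # æ
}

def xsampa_to_ipa(text):
    # Greedy longest-match scan, so multi-character codes like "r\" are
    # consumed before their single-character prefixes.
    keys = sorted(X_SAMPA_TO_IPA, key=len, reverse=True)
    out, i = [], 0
    while i < len(text):
        for k in keys:
            if text.startswith(k, i):
                out.append(X_SAMPA_TO_IPA[k])
                i += len(k)
                break
        else:
            out.append(text[i])  # letters like p, t, k, m map to themselves
            i += 1
    return "".join(out)

print(xsampa_to_ipa("SIN"))  # ʃɪŋ
print(xsampa_to_ipa("D@"))   # ðə

Keyboard utilities and input methods of the kind described above are, in essence, more complete versions of this substitution table.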
https://en.wikipedia.org/wiki?curid=14761
Inspector Morse Detective Chief Inspector Endeavour Morse, GM, is the eponymous fictional character in the series of detective novels by British author Colin Dexter. On television, he appears in the 33-episode drama series "Inspector Morse" (1987–2000), in which John Thaw played the character, as well as in the prequel series "Endeavour" (2012–), in which he is portrayed by Shaun Evans. The older Morse is a senior CID (Criminal Investigation Department) officer with the Thames Valley Police in Oxford, England; in the prequel, Morse is a young detective constable rising through the ranks with the Oxford City Police and, in later series, the Thames Valley Police. Morse presents, to some, a reasonably sympathetic personality, despite his sullen and snobbish temperament, with a classic Jaguar car (a Lancia in the early novels), a thirst for English real ale, and a love of classical music (especially opera and Wagner), poetry, art and cryptic crossword puzzles. In his later career he is usually assisted by Sergeant Robbie Lewis. Morse's partnership and formal friendship with Lewis are fundamental to the series. Morse's father was a taxi driver, and Morse likes to explain the origin of his additional private income by saying that he "used to drive the Aga Khan". In the episode "Cherubim and Seraphim", it is revealed that Morse's parents divorced when he was 12. He remained with his mother until her death three years later, upon which he had to return to his father. Morse had a dreadful relationship with his stepmother Gwen. He claims that he only read poetry to annoy her, and that her petty bullying almost drove him to suicide. He has a half-sister named Joyce with whom he is on better terms. Morse was devastated when Joyce's daughter Marilyn took her own life. Morse prefers to use only his surname, and is generally evasive when asked about his first name, sometimes joking that it is "Inspector". In "The Wench Is Dead" it was stated that his initial was E. At the end of "Death Is Now My Neighbour", it is revealed to be "Endeavour". Two-thirds of the way through the television episode based on the book, he gives the cryptic clue "My whole life's effort has revolved around Eve". In the series, it is noted that Morse's reluctance to use his given name led to his receiving the nickname "Pagan" while at Stamford School (which Colin Dexter, the author of the Morse novels, attended). In the novels, Morse's first name came from the vessel HMS "Endeavour"; his mother was a member of the Religious Society of Friends (Quakers), who have a tradition of "virtue names", and his father admired Captain James Cook. Dexter was a fan of cryptic crosswords and named Morse after champion setter Jeremy Morse, one of Dexter's arch-rivals in writing crossword clues. Dexter used to walk along the bank of the River Thames at Oxford, opposite the boathouse belonging to the 22nd Oxford Sea Scout Group; the building is named "T.S. Endeavour". Although details of Morse's education are deliberately kept vague, it is hinted that he won a scholarship to study at St John's College, Oxford. He lost the scholarship as the result of poor academic performance stemming from a failed love affair, which is mentioned in the second episode of the third series, "The Last Enemy", and recounted in detail in the novel "The Riddle of the Third Mile", Chapter 7. Further details are revealed piece-by-piece in the prequel series. After university, he entered the army on National Service.
This included serving in West Germany with the Royal Corps of Signals as a cipher clerk. Upon leaving, he joined the police at Carshall-Newtown, before being posted to Oxford with the Oxford City Police. He often reflects on such renowned scholars as A. E. Housman who, like himself, failed to get an academic degree from Oxford. He was awarded the George Medal in the last episode of "Endeavour" Series 4. Morse is ostensibly the embodiment of white, male, middle-class Englishness, with a set of prejudices and assumptions to match (even though, as the son of a taxi driver, his background was thoroughly working class). As a result, he may be considered a late example of the gentleman detective, a staple of British detective fiction. This is in sharp contrast to the working-class lifestyle of his assistant Lewis (named after another rival clue-writer, Mrs. B. Lewis); in the novels, Lewis is Welsh, but in the TV series this is altered to a Tyneside (Geordie) background, appropriately for the actor Kevin Whately. Morse is in his forties at the start of the books ("Service of all the Dead", Chapter Six: "… a bachelor still, forty-seven years old …"), and Lewis slightly younger (e.g. "The Secret of Annexe 3", Chapter Twenty-Six: "a slightly younger man – another policeman, and one also in plain clothes"). John Thaw was 45 at the beginning of shooting the TV series and Kevin Whately was 36. Morse's relationships with authority, the establishment, bastions of power and the status quo are markedly ambiguous, as are some of his relations with women. He is frequently portrayed as patronising towards female characters, and once stereotyped women as caring and non-violent and so not naturally prone to crime, but he also often empathises with women. He is not shy about showing his liking for attractive women and often dates those involved in cases. Indeed, a woman he falls in love with sometimes turns out to be the culprit. Morse is highly intelligent. He is a crossword addict and dislikes grammatical and spelling errors; in every personal or private document that he receives, he manages to point out at least one mistake. He claims that his approach to crime-solving is deductive, and one of his key tenets is that "there is a 50 per cent chance that the person who finds the body is the murderer". Morse relies on immense intuition and a fantastic memory to get to the killer. Morse's conservative tastes include real ale and whisky, and in the early novels he drives a Lancia. In the television and radio productions, this is altered to a suitably British classic Jaguar Mark 2. His favourite music is opera, which is echoed in the soundtracks to the television series, along with original music by Barrington Pheloung. Morse is portrayed as an atheist. The novels in the series are: Inspector Morse also appears in several stories in Dexter's short story collection, "Morse's Greatest Mystery and Other Stories" (1993, expanded edition 1994). The Inspector Morse novels were made into a TV series (also called "Inspector Morse") for the British commercial TV network ITV. The series was made by Zenith Productions for Central (a company later acquired by Carlton) and comprises 33 two-hour episodes (100 minutes excluding commercials), 20 more episodes than there are novels, produced between 1987 and 2000. The last episode was adapted from the final novel, "The Remorseful Day", in which Morse dies.
A spin-off series, similarly comprising 33 two-hour episodes and based on the television incarnation of Lewis, was titled "Lewis"; it first aired in 2006 and last aired in 2015. In August 2011, ITV announced plans to film a prequel drama called "Endeavour", with author Colin Dexter's participation. English actor Shaun Evans was cast as a young Morse in his early career. The drama was broadcast on 2 January 2012 on ITV 1. Four new episodes were televised from 14 April 2013, showing Morse's early cases working for DI Fred Thursday and alongside Jim Strange, his later boss, and the pathologist Max DeBryn. A second series of four episodes followed, screening in March and April 2014. In January 2016, the third series aired, also containing four episodes. A fourth series of four episodes was aired in January 2017. Filming of a fifth series of six episodes began in spring 2017, with the first episode aired on 4 February 2018. In 2019 the sixth series aired, comprising four 90-minute episodes. A seventh series of three episodes was filmed in late 2019, and in August 2019 ITV announced that the programme had been recommissioned for an eighth series. An adaptation by Melville Jones of "Last Bus to Woodstock" featured in BBC Radio 4's "Saturday Night Theatre" series in June 1985, with Andrew Burt as Morse and Christopher Douglas as Lewis. In the 1990s, an occasional BBC Radio 4 series (for "The Saturday Play") was made starring the voices of John Shrapnel as Morse and Robert Glenister as Lewis. The series was written by Guy Meredith and directed by Ned Chaillet. Episodes included: "The Wench is Dead" (23 March 1992); "Last Seen Wearing" (28 May 1994); and "The Silent World of Nicholas Quinn" (10 February 1996). An Inspector Morse stage play appeared in 2010, written by Alma Cullen (writer of four Morse screenplays for ITV). The part of Morse was played by Colin Baker. The play, entitled "Morse—House of Ghosts", saw DCI Morse looking to his past, when an old acquaintance becomes the lead suspect in a murder case that involves the on-stage death of a young actress. The play toured the UK from August to December 2010. It was broadcast by BBC Radio 4 on 25 March 2017 with Neil Pearson playing Morse and Lee Ingleby playing Lewis.
https://en.wikipedia.org/wiki?curid=14762
History of the Isle of Man The Isle of Man had become separated from Great Britain and Ireland by 6500 BC. It appears that colonisation took place by sea sometime during the Mesolithic era (about 6500 BC). The island has been visited by various raiders and trading peoples over the years. After being settled by people from Ireland in the first millennium, the Isle of Man was converted to Christianity and then suffered raids by Vikings from Norway. After becoming subject to Norwegian suzerainty as part of the Kingdom of Mann and the Isles, the Isle of Man later became a possession of the Scottish and then the English crowns, the crowns of England and Scotland themselves being united in 1603 under James VI and I. Since 1866, the Isle of Man has been a Crown Dependency and has had democratic self-government. The Isle of Man effectively became an island around 8,500 years ago, when rising sea levels caused by the melting glaciers cut Mesolithic Britain off from continental Europe for the last time. A land bridge had earlier existed between the Isle of Man and Cumbria, but the location and opening of the land bridge remain poorly understood. The earliest traces of people on the Isle of Man date back to the Mesolithic Period, also known as the Middle Stone Age. The first residents lived in small natural shelters, hunting, gathering and fishing for their food. They used small tools made of flint or bone, examples of which have been found near the coast. Representative artefacts are kept at the Manx National Heritage museum. The Neolithic Period marked the coming of farming, improved stone tools and pottery. During this period megalithic monuments began to appear around the island. Examples are found at Cashtal yn Ard near Maughold, King Orry's Grave in Laxey, Meayll Circle near Cregneash, and Ballaharra Stones in St John's. The builders of the megaliths were not the only culture during this time; there are also remains of the local Ronaldsway culture (lasting from the late Neolithic into the Bronze Age). The Iron Age marked the beginning of Celtic cultural influence. Large hill forts appeared on hill summits and smaller promontory forts along the coastal cliffs, whilst large timber-framed roundhouses were built. It is likely that the first Celts to inhabit the Island were Brythonic tribes from mainland Britain. The secular history of the Isle of Man during the Brythonic period remains mysterious. It is not known whether the Romans ever landed on the island, and if they did, little evidence of it has been discovered. There is, however, evidence of contact with Roman Britain: an amphora was discovered at the settlement on South Barrule, and it is hypothesised that this may have been trade goods or plunder. It has been speculated that the island may have become a haven for Druids and other refugees from Anglesey after the sacking of Mona in AD 60. It is generally assumed that Irish invasion or immigration formed the basis of the modern Manx language; Irish migration to the island probably began in the 5th century AD. This is evident in the change in language used in Ogham inscriptions. The transition between "Manx Brythonic" (a Brythonic language like modern Welsh) and "Manx Gaelic" (a Goidelic language like modern Scottish Gaelic and Irish) may have been gradual. One question is whether the present-day Manx language survives from pre-Norse days or reflects a linguistic reintroduction after the Norse invasion.
The island lends its name to "Manannán", the Brythonic and Gaelic sea god who is said in myth to have once ruled the island. Tradition attributes the island's conversion to Christianity to St Maughold (Maccul), an Irish missionary who gives his name to a parish. There are the remains of around 200 tiny early chapels called keeills scattered across the island. Evidence such as radiocarbon dating and magnetic drift points to many of these having been built around AD 550–600. The Brythonic culture of "Manaw" appears throughout early British tradition and later Welsh writings. The family origins of Gwriad ap Elidyr (father of Merfyn Frych and grandfather of Rhodri the Great) are attributed to a "Manaw", and he is sometimes named as "Gwriad Manaw". The 1896 discovery of a cross inscribed "Crux Guriat" (Cross of Gwriad), dated to the 8th or 9th century, lends strong support to this theory. The best-recorded event before the incursions of the Northmen is the expedition of Báetán mac Cairill, king of Ulster, who (according to the "Annals of Ulster") led an expedition to Man in 577–578, imposing his authority on the island (though some have thought this event may refer to Manau Gododdin between the Firths of Clyde and Forth, rather than the Isle of Man). After Báetán's death in 581, his rival Áedán mac Gabráin, king of Dál Riata, is said to have taken the island in 582. Even if the supposed conquest of the Menavian islands – Mann and Anglesey – by Edwin of Northumbria in 616 did take place, it could not have led to any permanent results, for when the English were driven from the coasts of Cumberland and Lancashire soon afterwards, they could not well have retained their hold on the island to the west of these coasts. One can speculate, however, that when Ecgfrið's Northumbrians laid Ireland waste from Dublin to Drogheda in 684, they temporarily occupied Mann. The period of Scandinavian domination is divided into two main epochs – before and after the conquest of Mann by Godred Crovan in 1079. Warfare and unsettled rule characterise the earlier epoch; the later saw comparatively more peace. Between about AD 800 and 815 the Vikings came to Mann chiefly for plunder; between about 850 and 990, when they settled there, the island fell under the rule of the Scandinavian Kings of Dublin; and between 990 and 1079, it became subject to the powerful Earls of Orkney. There was a mint producing coins on Mann between c. 1025 and c. 1065. These Manx coins were minted from an imported type 2 Hiberno-Norse penny die from Dublin. Hiberno-Norse coins were first minted under Sihtric, King of Dublin. This suggests that Mann may have been under Dublin's sway at this time. Little is known about the conqueror, Godred Crovan. According to the "Chronicon Manniae" he subdued Dublin, and a great part of Leinster, and held the Scots in such subjection that supposedly no one who set out to build a vessel dared to insert more than three bolts. The memory of such a ruler would be likely to survive in tradition, and it seems probable therefore that he is the person commemorated in Manx legend under the name of King Gorse or Orry. He created the Kingdom of Mann and the Isles in around 1079; it included the south-western islands of Scotland until 1164, when two separate kingdoms were formed from it. In 1154, what was later to be known as the Diocese of Sodor and Man was formed by the Catholic Church.
The islands which were under his rule were called the "Suðr-eyjar" (south isles, in contradistinction to the "Norðr-eyjar", or the "north isles", i.e. Orkney and Shetland), and they consisted of the Hebrides, and of all the smaller western islands of Scotland, and Mann. At a later date his successors took the title of "Rex Manniae et Insularum" (King of Mann and of the Isles). The kingdom's capital was on St Patrick's Isle, where Peel Castle was built on the site of a Celtic monastery. Olaf, Godred's son, exercised considerable power, and according to the Chronicle, maintained such close alliance with the kings of Ireland and Scotland that no one ventured to disturb the Isles during his time (1113–1152). In 1156, his son, Godred (reigned 1153–1158), who for a short period ruled over Dublin also, lost the smaller islands off the coast of Argyll as a result of a quarrel with Somerled (the ruler of Argyll). An independent sovereignty thus appeared between the two divisions of his kingdom. In the 1130s the Catholic Church had sent a small mission to establish the first bishopric on the Isle of Man, and appointed Wimund as the first bishop. He soon afterwards embarked with a band of followers on a career of murder and looting throughout Scotland and the surrounding islands. During the whole of the Scandinavian period, the Isles remained nominally under the suzerainty of the Kings of Norway, but the Norwegians only occasionally asserted it with any vigour. The first such king to assert control over the region was likely Magnus Barelegs, at the turn of the 12th century. It was not until Haakon Haakonsson's 1263 expedition that another king returned to the Isles. From the middle of the 12th century until 1217 the suzerainty had remained of a very shadowy character; Norway had become a prey to civil dissensions. But after that date it became a reality, and Norway consequently came into collision with the growing power of the kingdom of Scotland. Early in the 13th century, when Ragnald (reigned 1187–1229) paid homage to King John of England (reigned 1199–1216), we hear for the first time of English intervention in the affairs of Mann. But a period of Scots domination would precede the establishment of full English control. Finally, in 1261, Alexander III of Scotland sent envoys to Norway to negotiate for the cession of the isles, but their efforts led to no result. He therefore initiated a war, which ended in the indecisive Battle of Largs against the Norwegian fleet in 1263. However, the Norwegian king Haakon Haakonsson died the following winter, and this allowed King Alexander to bring the war to a successful conclusion. Magnus Olafsson, King of Mann and the Isles (reigned 1252–1265), who had campaigned on the Norwegian side, had to surrender all the islands over which he had ruled, except Mann, for which he did homage. Two years later Magnus died, and in 1266 King Magnus VI of Norway ceded the islands, including Mann, to Scotland in the Treaty of Perth in consideration of the sum of 4,000 marks (known as merks in Scotland) and an annuity of 100 marks. But Scotland's rule over Mann did not become firmly established until 1275, when the Manx suffered defeat in the decisive Battle of Ronaldsway, near Castletown. In 1290 King Edward I of England sent Walter de Huntercombe to seize possession of Mann, and it remained in English hands until 1313, when Robert Bruce took it after besieging Castle Rushen for five weeks.
In about 1333 King Edward III of England granted Mann to William de Montacute, 3rd Baron Montacute (later the 1st Earl of Salisbury), as his absolute possession, without reserving any service to be rendered to him. Then, in 1346, the Battle of Neville's Cross decided the long struggle between England and Scotland in England's favour. King David II of Scotland, Robert Bruce's last male heir, had been captured at Neville's Cross and ransomed; however, when Scotland was unable to raise one of the ransom instalments, David made a secret agreement with King Edward III of England to cancel it, in return for transferring the Scottish kingdom to an English prince. There followed a confused period in which Mann sometimes experienced English rule and sometimes Scottish. In 1388 the island was "ravaged" by Sir William Douglas of Nithsdale on his way home from the destruction of the town of Carlingford. In 1392 William de Montacute's son sold the island, including sovereignty, to Sir William le Scrope. In 1399 Henry Bolingbroke brought about the beheading of Le Scrope, who had taken the side of Richard II when Bolingbroke usurped the throne and declared himself Henry IV. The island then came into the de facto possession of Henry, who granted it to Henry Percy, 1st Earl of Northumberland; but following Percy's later attainder, Henry IV, in 1405, made a lifetime grant of it, with the patronage of the bishopric, to Sir John Stanley. In 1406 this grant was extended – on a feudatory basis under the English Crown – to Sir John's heirs and assigns, the feudal fee being the service of rendering homage and two falcons to all future Kings of England on their coronations. With the accession of the Stanleys to the lordship there begins a more settled epoch in Manx history. Though the island's new rulers rarely visited its shores, they placed it under governors, who, in the main, seem to have treated it with the justice of the time. Of the thirteen members of the family who ruled in Mann, the second Sir John Stanley (1414–1432), James, the 7th Earl (1627–1651), and the 10th Earl of the same name (1702–1736) had the most important influence on it. The first curbed the power of the spiritual barons, introduced trial by jury, which superseded trial by battle, and ordered the laws to be written. The second, known as the Great Stanley, and his wife, Charlotte de la Tremoille (or Tremouille), are probably the most striking figures in Manx history. In 1643 Charles I ordered James Stanley, 7th Earl of Derby, to go to Mann, where the people, no doubt influenced by events in England, threatened to revolt. Stanley's arrival, with English soldiers, soon put a stop to anything of this kind. He conciliated the people by his affability, brought in Englishmen to teach various handicrafts and tried to help the farmers by improving the breed of Manx horses, and, at the same time, he restricted the exactions of the Church. But the Manx also lost much of their liberty under his rule: they were heavily taxed; troops were quartered upon them; and they also had the more lasting grievance of being compelled to accept leases for three lifetimes instead of holding their land by the straw tenure, which they considered to be equivalent to a customary inheritance. Six months after the execution of Charles I (on 30 January 1649), Stanley received a summons from General Ireton to surrender the island, but he declined to do so.
In August 1651 Stanley went to England with some of his troops, among whom were 300 Manxmen, to join King Charles II. Charles was decisively defeated at the Battle of Worcester, and Stanley was captured, imprisoned in Chester Castle, tried by court-martial and executed at Bolton. Soon after Stanley's death, the Manx Militia, under the command of William Christian (known by his Manx name of Illiam Dhone), rose against the Countess and captured all the insular forts except Rushen and Peel. They were then joined by a Parliamentary force under Colonel Duckenfield, to whom the Countess surrendered after a brief resistance. Oliver Cromwell had appointed Thomas Fairfax "Lord of Mann and the Isles" in September 1651, so that Mann continued under a monarchical government and remained in the same relation to England as before. The restoration of Stanley government in 1660 therefore caused as little friction and alteration as its temporary cessation had. One of the first acts of the new Lord, Charles Stanley, 8th Earl of Derby, was to order Christian to be tried. He was found guilty and executed. Of the other persons implicated in the rebellion, only three were excepted from the general amnesty. But by Order in Council, Charles II pardoned them, and the judges responsible for the sentence on Christian were punished. Charles Stanley's next act was to dispute the permanency of the tenants' holdings, which they had not at first regarded as being affected by the acceptance of leases, a proceeding which led to an almost open rebellion against his authority and to the neglect of agriculture, in lieu of which the people devoted themselves to the fisheries and to contraband trade. Charles Stanley, who died in 1672, was succeeded by his son William Richard George Stanley, 9th Earl of Derby, who held the lordship until his death in 1702. The agrarian question subsided only in 1704, when James Stanley, 10th Earl of Derby, William's brother and successor, largely through the influence of Bishop Wilson, entered into a compact with his tenants, which became embodied in an Act called the Act of Settlement. This compact secured the tenants in the possession of their estates in perpetuity, subject only to a fixed rent and a small fine on succession or alienation. Such was its importance to the Manx people that the act has been called their "Magna Carta". As time went on, and the value of the estates increased, the rent payable to the Lord became so small in proportion as to be almost nominal, being extinguished by purchase in 1916. James died in 1736, and the suzerainty of the isle passed to James Murray, 2nd Duke of Atholl, his first cousin and heir-male. In 1764 he was succeeded by his only surviving child, Charlotte, Baroness Strange, and her husband, John Murray, who (in right of his wife) became Lord of Mann. By about 1720 the contraband trade had greatly increased. In 1726 Parliament had checked it somewhat for a time, but during the last ten years of the Atholl regime (1756–1765) it assumed such proportions that, in the interests of the Imperial revenue, it became necessary to suppress it. To this end, Parliament passed the Isle of Man Purchase Act 1765 (commonly called the "Revestment Act" by the Manx), under which it purchased the rights of the Atholls as Lords of Mann, including the customs revenues of the island, for the sum of £70,000 sterling, and granted an annuity to the Duke and Duchess.
The Atholls still retained their manorial rights, the patronage of the bishopric, and certain other perquisites, until they sold them for the sum of £417,144 in 1828. Up to the time of the revestment, Tynwald had passed laws concerning the government of the island in all respects and had control over its finances, subject to the approval of the Lord of Mann. After the revestment, or rather after the passage of the Smuggling Act 1765 (commonly called the Mischief Act by the Manx), the Parliament at Westminster legislated with respect to customs, harbours and merchant shipping, and, in measures of a general character, it occasionally inserted clauses permitting the enforcement in the island of penalties in contravention of the Acts of which they formed part. It also assumed the control of the insular customs duties. Such changes, rather than the transference of the full suzerainty to the King of Great Britain and Ireland, modified the (unwritten) constitution of the Isle of Man. Its ancient laws and tenures remained untouched, but in many ways the revestment affected it adversely. The hereditary Lords of Mann had seldom, if ever, functioned as model rulers, but most of them had taken some personal share in its government, and had interested themselves in the well-being of the inhabitants. But now the whole direction of its affairs became the work of officials who regarded the island as a pestilent nest of smugglers, from which it seemed their duty to extract as much revenue as possible. There was some alleviation of this state of things between 1793 and 1826, when John Murray, 4th Duke of Atholl, served as Governor, since, though he quarrelled with the House of Keys and unduly cared for his own pecuniary interests, he did occasionally exert himself to promote the welfare of the island. After his departure the English officials resumed their sway, but they showed more consideration than before. Moreover, since smuggling, which the Isle of Man Purchase Act had only checked – not suppressed – had by that time almost disappeared, and since the Manx revenue had started to produce a large and increasing surplus, the authorities looked more favourably on the Isle of Man, and, thanks to this fact and to the representations of the Manx people to British ministers in 1837, 1844 and 1853, it obtained a somewhat less stringent customs tariff and an occasional dole towards erecting its much-neglected public works. Since 1866, when the Isle of Man obtained a nominal measure of Home Rule, the Manx people have made remarkable progress, and currently form a prosperous community, with a thriving offshore financial centre, a tourist industry (albeit smaller than in the past) and a variety of other industries. The Isle of Man was a base for alien civilian internment camps in both the First World War (1914–18) and the Second World War (1939–45). During the First World War there were two camps: one a requisitioned holiday camp in Douglas and the other the purpose-built Knockaloe camp near Peel in the parish of Patrick. During the Second World War there were a number of smaller camps in Douglas, Peel, Port Erin and Ramsey. The (now disbanded) Manx Regiment was raised in 1938 and saw action during the Second World War. On 2 August 1973, a flash fire killed between 50 and 53 people at the Summerland amusement centre in Douglas. The early 20th century saw a revival of music and dance, and a limited revival of the Manx language, although the last "native" speaker of Manx Gaelic died in the 1970s.
In the middle of the 20th century the Taoiseach of the Republic of Ireland, Éamon de Valera, visited, and was so dissatisfied with the lack of support for Manx that he immediately had two recording vans sent over. During the 20th century the Manx tourist economy declined, as the English and Irish started flying to Spain for package holidays. The Manx Government responded by successfully promoting the island, with its low tax rates, as an offshore financial centre, although Man has avoided a place on a 2009 UK blacklist of tax havens. The financial centre has had its detractors, who have pointed to the potential for money laundering. In 1949 an Executive Council, chaired by the Lieutenant-Governor and including members of Tynwald, was established. This marked the start of a transfer of executive power from the un-elected Lieutenant-Governor to democratically elected Manx politicians. Finance and the police passed to Manx control between 1958 and 1976. In 1980 a chairman elected by Tynwald replaced the Lieutenant-Governor as Chairman of the Executive Council. Following legislation in 1984, the Executive Council was reconstituted in 1985 to include the chairmen of the eight principal Boards; in 1986 they were given the title of Minister and the chairman was re-titled "Chief Minister". In 1986 Sir Miles Walker CBE became the first Chief Minister of the Isle of Man. In 1990 the Executive Council was renamed the "Council of Ministers". The 1960s also saw a rise in Manx nationalism, spawning the parties Mec Vannin and the Manx National Party, as well as the now defunct Fo Halloo (literally "Underground"), which mounted a direct-action campaign of spray-painting and attempted house-burning. On 5 July 1973 control of the postal service passed from the UK General Post Office to the new Isle of Man Post, which began to issue its own postage stamps. The 1990s and early 21st century have seen a greater recognition of indigenous Manx culture, including the opening of the first Manx-language primary school. Since 1983 the Isle of Man government has designated more than 250 historic structures as Registered Buildings of the Isle of Man.
https://en.wikipedia.org/wiki?curid=14763
Geography of the Isle of Man The Isle of Man is an island in the Irish Sea, between Great Britain and Ireland in Western Europe, with a population of almost 85,000. It is a British Crown dependency. It has a small islet, the Calf of Man, to its south. The Isle of Man has a coastline of about 160 km, and a territorial sea extending to a maximum of 12 nm from the coast, or to the midline between it and neighbouring territories. The total territorial sea area is about 4000 km2 (roughly 1500 sq miles), which is about 87% of the total area of the jurisdiction of the Isle of Man. The Isle of Man holds exclusive fishing rights only in the first 3 nm. The territorial sea is managed by the Isle of Man Government Department of Infrastructure. The Raad ny Foillan long-distance footpath runs around the Manx coast. The Isle of Man enjoys a temperate climate, with cool summers and mild winters. Average rainfall is high compared to the majority of the British Isles, due to the island's location on the western side of Great Britain and its sufficient distance from Ireland for moisture to be accumulated by the prevailing south-westerly winds. Average rainfall is highest at Snaefell and considerably lower at low elevations. Temperatures remain fairly cool, with the island's maximum having been recorded at Ronaldsway. The island's terrain is varied. There are two mountainous areas divided by a central valley which runs between Douglas and Peel. The highest point in the Isle of Man, Snaefell, is in the northern area and reaches 621 m (2,037 ft) above sea level. The northern end of the island is a flat plain, consisting of glacial tills and marine sediments. To the south the island is more hilly, with distinct valleys. There is no land below sea level. There are few severe natural hazards, the most common being high winds, rough seas and dense fog. In recent years there has been a marked increase in the frequency of high winds, heavy rains, summer droughts and flooding, both from heavy rain and from high seas. Snowfall has decreased significantly over the past century, while temperatures are increasing year-round and rainfall is decreasing. Air pollution, marine pollution and waste disposal are issues in the Isle of Man. The island's protected areas range from internationally recognised designations through statutory national ones to non-statutory sites; ASSIs and MNRs have equal levels of statutory protection under the Wildlife Act 1990. There are 22 ASSIs on the Isle of Man as of May 2020. One additional ASSI (Ramsey Harbour) was designated but later rescinded. A marine nature reserve was designated in Ramsey Bay in October 2011. In 2018 nine further Marine Nature Reserves were given statutory protection. The ten Marine Nature Reserves found around the Isle of Man cover over 10% of the country's territorial waters, in accordance with international requirements. Bird Sanctuaries were formerly designated under the Wild Birds Protection Act 1932. This designation was superseded by Areas of Special Protection for Birds under the Wildlife Act 1990; however, a number of formerly designated Bird Sanctuaries remain protected. The Isle of Man had 45 non-statutory wildlife sites as of 30 January 2009, covering about 195 ha (0.75 sq miles) of land and additional stretches of inter-tidal coast. The Manx Wildlife Trust also manages 24 nature reserves, along with the Calf of Man, as of September 2016. The majority of the island is formed from highly faulted and folded sedimentary rocks of the Ordovician period.
There is a belt of younger Silurian rocks along the west coast between Niarbyl and Peel, and a small area of Devonian sandstones around Peel. A band of Carboniferous period rocks underlies part of the northern plain but is nowhere seen at the surface; however, rocks of a similar age do outcrop in the south between Castletown, Silverdale and Port St Mary. Permo-Triassic age rocks are known to lie beneath the Point of Ayre but, as with the rest of the northern plain, these rocks are concealed by substantial thicknesses of superficial deposits. The island has significant deposits of copper, lead and silver, zinc, iron, and plumbago (a mix of graphite and clay). There are also quarries of black marble, limestone flags, clay schist, and granite. These workings are all modern; there was no noticeable exploitation of metals or minerals prior to the modern era. The island had a population of 84,497 at the most recent (2011) census, up from 79,805 in 2006 and 76,315 in 2001. The island's largest town and administrative centre is Douglas, whose population of about 23,000 amounts to over a quarter of the population of the island. Neighbouring Onchan, Ramsey in the north, Peel in the west and the three southern ports of Castletown, Port Erin and Port St Mary are the island's other main settlements. Almost all its population lives on or very near the coast.
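The proportions quoted in this article can be sanity-checked with a few lines of arithmetic. The sketch below (Python) is illustrative only: the land area of roughly 572 km2 is an assumption based on commonly cited figures rather than a value stated here.

```python
# Back-of-the-envelope check of the geography figures quoted above.

LAND_KM2 = 572                 # ASSUMPTION: commonly cited land area, not stated in this article
TERRITORIAL_SEA_KM2 = 4000     # "about 4000 km2" per the article

jurisdiction_km2 = LAND_KM2 + TERRITORIAL_SEA_KM2
print(f"Territorial sea share: {TERRITORIAL_SEA_KM2 / jurisdiction_km2:.0%}")  # ~87%, as stated

POPULATION_2011 = 84497        # 2011 census figure quoted above
DOUGLAS_POPULATION = 23000
print(f"Douglas share of population: {DOUGLAS_POPULATION / POPULATION_2011:.0%}")  # ~27%, "over a quarter"
```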
https://en.wikipedia.org/wiki?curid=14764
Politics of the Isle of Man The government of the Isle of Man is a parliamentary representative democracy. As a Crown Dependency, the island is not subordinate to the government of the United Kingdom. That government, however, is responsible for defence and external affairs and could intervene in the domestic affairs of the island under its residual responsibilities to guarantee "good government" in all Crown dependencies. The Monarch of the United Kingdom is also the head of state of the Isle of Man, and is generally referred to as "The Queen, Lord of Mann". Legislation of the Isle of Man defines "the Crown in right of the Isle of Man" as separate from the "Crown in right of the United Kingdom". Her representative on the island is the Lieutenant Governor of the Isle of Man, but his role is mostly ceremonial, though he does have the power to grant Royal Assent (the withholding of which amounts to a veto). Although the Isle of Man is not part of the United Kingdom, its people are British citizens under UK law; there is no separate Manx citizenship. The United Kingdom is responsible for all the island's external affairs, including citizenship, defence, good governance, and foreign relations. The island has no representation in the UK parliament. The legislative power of the government is vested in a bicameral (sometimes called tricameral) parliament called Tynwald (said to be the world's oldest "continuously existing" parliament), which consists of the directly elected House of Keys and the indirectly chosen Legislative Council. After every House of Keys general election, the members of Tynwald elect from amongst themselves the Chief Minister of the Isle of Man, who serves as the head of government for five years (until the next general election). Executive power is vested in the Lieutenant Governor (as Governor-in-Council), the Chief Minister, and the Isle of Man's Council of Ministers. The judiciary is independent of the executive and the legislature. Douglas, the largest town on the Isle of Man, is its capital and seat of government, where the Government offices and the parliament chambers (Tynwald) are located. The Head of State is the Lord of Mann, a hereditary position held by the British monarch (currently Queen Elizabeth II). The Lieutenant Governor is appointed by the Queen, on the advice of the UK's Secretary of State for Justice, for a five-year term, and nominally exercises executive power on behalf of the Queen. The Chief Minister is elected by Tynwald following every House of Keys general election and serves for five years, until the next general election. When acting as Lord of Mann, the Queen acts on the advice of the Secretary of State for Justice and Lord Chancellor of the United Kingdom, who has prime responsibility as Privy Counsellor for Manx affairs. The executive branch under the Chief Minister is referred to as "the Government" or the "Civil Service", and consists of the Council of Ministers, nine Departments, ten Statutory Boards and three Offices. Each Department is run by a Minister who reports directly to the Council of Ministers. The Civil Service has more than 2000 employees, and the total number of public sector employees, including the Civil Service, teachers, nurses, police, etc., is about 9000 people. This is somewhat more than 10% of the population of the island, and a full 23% of the working population. This does not include any military forces, as defence is the responsibility of the United Kingdom.
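The public-sector proportions above can be verified the same way; note that the working-population figure in the sketch below is inferred from the article's own 23% claim, so it is an assumption rather than an official statistic.

```python
# Check of the public-sector employment claims above.

POPULATION = 84497      # 2011 census figure
PUBLIC_SECTOR = 9000    # civil service, teachers, nurses, police, etc.

print(f"Share of total population: {PUBLIC_SECTOR / POPULATION:.1%}")  # ~10.7%: "somewhat more than 10%"

# ASSUMPTION: a "23% of the working population" share implies a working
# population of about 9000 / 0.23 ~= 39,000, broadly consistent with the
# 41,636 total employed reported in the 2016 census.
print(f"Implied working population: {PUBLIC_SECTOR / 0.23:,.0f}")
```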
The Manx legislature is Tynwald, which consists of two chambers. The House of Keys has 24 members, elected for a five-year term in two-seat constituencies. The minimum voting age is 16. The Legislative Council has eleven members: the President of Tynwald, the Bishop of Sodor and Man, the Attorney General (non-voting) and eight other members elected by the House of Keys for a five-year term, with four retiring at a time. (In the past they have often already been Members of the House of Keys, but must leave the Keys if elected to the Council.) There are also joint sittings of the Tynwald Court (the two houses together). In the 2016 Manx general election, on 22 September, the Liberal Vannin Party won three seats, matching its 2011 result; all 21 remaining seats were won by independents. However, one of those three members now sits as an independent. Twelve of the 24 MHKs were newly elected to office. Turnout slightly improved from 2011, with 56% of eligible voters voting, ranging from 40% in Douglas East to 65% in Ayre & Michael. Most Manx politicians stand for election as independents rather than as representatives of political parties. Though political parties do exist, their influence is not nearly as strong as in the United Kingdom. Consequently, much Manx legislation develops through consensus among the members of Tynwald, which contrasts with the much more adversarial nature of the British Parliament. The largest political party is the Liberal Vannin Party, which promotes liberalism, greater Manx independence and more accountability in Government. A Manx Labour Party also exists, unaffiliated to the British Labour Party; its candidates won a combined 1.4% of the overall vote in 2016. A political pressure group, Mec Vannin, advocates the establishment of a sovereign republic. The Isle of Man Green Party, which was founded in 2016, holds two local government seats and promotes Green politics. The island also formerly had a Manx National Party. There are Manx members in the Celtic League, a political pressure group that advocates greater co-operation between and political autonomy for the Celtic nations. The UK Parliament has paramount power to legislate for the Isle of Man on all matters, but it is a long-standing convention that it does not do so on domestic ("insular") matters without Tynwald's consent. Occasionally, the UK Parliament acts against the wishes of Tynwald: the most recent example was the Marine etc. Broadcasting (Offences) Act 1967, which banned pirate radio stations from operating in Manx waters. Legislation to accomplish this was defeated on its second reading in the House of Keys, prompting Westminster to legislate directly. The UK's secondary legislation (regulations and Statutory Instruments) cannot be extended to apply to the Isle of Man. The Isle of Man is subject to certain European Union laws, by virtue of being a territory for which the UK has responsibility in international law. These laws are those for areas not covered by the Protocol 3 opt-out that the UK obtained for the Isle of Man in its accession treaty: the excluded areas are free movement of persons, services and capital, and taxation and social policy harmonisation. The UK has had several disputes with the European Court of Human Rights about the Isle of Man's laws concerning birching (corporal punishment) and sodomy. The lowest courts in the Isle of Man are presided over by the High Bailiff and the Deputy High Bailiff, along with lay Justices of the Peace.
The High Court of Justice consists of three civil divisions and is presided over by a Deemster. Appeals are dealt with by the Staff of Government Division with final appeal to the Judicial Committee of the Privy Council in the United Kingdom. The head of the Judiciary is the First Deemster and Clerk of the Rolls. The other High and Appeal Court Judges are the Second Deemster, The Deemster and the Judge of Appeal, all of whom are appointed by the Lieutenant Governor. The Court of General Gaol Delivery is the criminal court for serious offences (effectively the equivalent of a Crown Court in England). It is theoretically not part of the High Court, but is effectively the criminal division of the court. The Second Deemster normally sits as the judge in this court. In 1992, His Honour Deemster Callow passed the last-ever sentence of death in a court in the British Islands (which was commuted to life imprisonment). Capital punishment in the Isle of Man was formally abolished by Tynwald in 1993 (although the last execution on the island took place in 1872).
https://en.wikipedia.org/wiki?curid=14766
Economy of the Isle of Man The economy of the Isle of Man is a low-tax economy with insurance, online gambling operators and developers, information and communications technology (ICT), and offshore banking forming key sectors of the island's economy. As an offshore financial centre located in the Irish Sea, the Isle of Man is within the British Isles but does not form part of the United Kingdom and is not a member of the European Union. As of 2016, the Crown dependency's gross national income (GNI) per capita was US$89,970 as assessed by the World Bank. The Isle of Man Government's own National Income Report shows the largest sectors of the economy to be insurance and eGaming, with 17% of GNI each, followed by ICT and banking with 9% each; tourist accommodation is the smallest sector at 0.3%. After 32 years of continued gross domestic product (GDP) growth, the financial year 2015/16 showed the first drop in GDP, of 0.9%, triggered by a decline in eGaming revenues. The unemployment rate remains low, at around 1%. Property prices are flat or declining, but recent figures also show an increase in resident income tax payers. The government's policy of offering incentives to high-technology companies and financial institutions to locate on the island has expanded employment opportunities in high-income industries. Agriculture, fishing, and the hospitality industry, once the mainstays of the economy, now make declining contributions to the island's GNP. The hospitality sector contributed just 0.3% of GNP in 2015/16 and 629 jobs in 2016. eGaming and ICT contribute the great bulk of GNP. The stability of the island's government and its openness for business make the Isle of Man an attractive alternative jurisdiction (ranked 3rd in the DAW Index). In its Vision2020 strategy, the Isle of Man government lays out a national plan for economic growth, seeking an increase in the economically active population and promoting the Island as an 'Enterprise Island', 'Tech Isle', 'Manufacturing centre of excellence', 'Offshore energy hub', 'Destination Island' and a home of 'Distinctive local food and drink'. The government has published national economic strategies for several emerging sectors: aerospace, biomed, digital media and ICT. The Isle of Man is a low-tax economy with no capital gains tax, wealth tax, stamp duty, or inheritance tax, and a top rate of income tax of 20%. A tax cap is in force: the maximum amount of tax payable by an individual is £125,000, or £250,000 for couples if they choose to have their incomes jointly assessed. Personal income is assessed and taxed on a total worldwide income basis rather than on a remittance basis. This means that all income earned throughout the world is assessable for Manx tax, rather than only income earned in or brought into the Island. The standard rate of corporation tax for residents and non-residents is 0%; retail business profits above £500,000 and banking business income are taxed at 10%, and rental (or other) income from land and buildings situated on the Isle of Man is taxed at 20%. Trade is mostly with the United Kingdom. The Isle of Man has free access to European Union markets for goods, but only restricted access for services, people, or financial products. The Isle of Man as an offshore financial centre has been repeatedly featured in the press as a tax haven, most recently in the wake of the Paradise Papers.
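As a rough illustration of the personal tax rules just described, the sketch below applies the 20% top rate and the £125,000 individual cap to worldwide income. It deliberately ignores the lower tax band, personal allowances and joint assessment, all of which affect real liabilities, so it is a simplified model rather than the actual Manx computation.

```python
# Simplified sketch of the Manx personal income tax cap described above.
# NOT the real computation: ignores the lower band, allowances and joint assessment.

TOP_RATE = 0.20
INDIVIDUAL_CAP = 125_000  # maximum tax payable by an individual (GBP)

def manx_income_tax(worldwide_income: float) -> float:
    """Tax at the top rate on total worldwide income, limited by the cap."""
    return min(worldwide_income * TOP_RATE, INDIVIDUAL_CAP)

print(manx_income_tax(100_000))    # 20000.0
print(manx_income_tax(2_000_000))  # 125000.0 -- the cap binds above GBP 625,000
```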
The Organisation for Economic Co-operation and Development's (OECD) Global Forum on Transparency and Exchange of Information for Tax Purposes has rated the Isle of Man as 'top compliant' for a second time: a status which only three jurisdictions in the world have achieved so far. The island became the second nation, after Austria, to ratify a multilateral convention with the OECD to implement measures to prevent Base Erosion and Profit Shifting (BEPS). In a report, the European Council lists the Isle of Man, together with the other two Crown Dependencies (Guernsey and Jersey) as well as Bermuda, the Cayman Islands and Vanuatu, as committed to addressing the Council's concerns about the "existence of tax regimes that facilitate offshore structures which attract profits without real economic activity" by 2018. The Isle of Man's Department for Enterprise manages the diversified economy in twelve key sectors. The largest individual sectors by GNI are insurance and eGaming, with 17% of GNI each, followed by ICT and banking with 9% each. The 2016 census lists 41,636 people in total employment. The largest sectors by employment are "medical and health", "financial and business services", construction, retail and public administration. Manufacturing, focused on aerospace and the food and drink industry, employs almost 2000 workers and contributes about 5% of GDP. The sector provides laser optics, industrial diamonds, electronics, plastics and aerospace precision engineering. Insurance, banking (including retail banking, offshore banking and other banking services), other finance and business services, and corporate service providers together contribute the most to the GNI and most of the jobs, with 10,057 people employed in 2016. Among the largest employers of the Island's private sector are eGaming (online gambling) companies like The Stars Group, Microgaming, Newfield, and Playtech. The Manx eGaming Association (MEGA) represents the sector. Licences are issued by the Gambling Supervision Commission. In 2005 PokerStars, one of the world's largest online poker sites, relocated its headquarters to the Isle of Man from Costa Rica. In 2006, RNG Gaming, a large gaming software developer of P2P tournaments, and Get21, a multiplayer online blackjack site, based their corporate offices on the island. The Isle of Man Government Lottery operated from 1986 to 1997. Since 2 December 1999 the island has participated in the United Kingdom National Lottery. The island is the only jurisdiction outside the United Kingdom where it is possible to play the UK National Lottery. Since 2010 it has also been possible for projects in the Isle of Man to receive National Lottery Good Causes funding, which is distributed by the Manx Lottery Trust. Tynwald receives the 12p lottery duty for tickets sold in the Island. The shortage of workers with ICT skills is being tackled by several initiatives, including an IT and education campus, a new cyber security degree at University College Isle of Man, a Code Club, and a work permit waiver for skilled immigrants. Since 1995 Isle of Man Film has co-financed and co-produced over 100 feature films and television dramas, all of which have been filmed on the Island.
Among the most successful productions funded in part by the Isle of Man Film agency were "Waking Ned", where the Manx countryside stood in for rural Ireland, and films such as "Stormbreaker", "Shergar", "Tom Brown's Schooldays", "I Capture the Castle", "The Libertine", "Island at War" (TV series), "Five Children and It", "Colour Me Kubrick", and "Sparkle". Other films that have been filmed on the Isle of Man include "Thomas and the Magic Railroad", "Harry Potter and the Chamber of Secrets", "Keeping Mum" and "Mindhorn". In 2011 Oxford Economics was commissioned by Isle of Man Film Ltd to conduct a study into the economic impact of the film industry on the Isle of Man. The report recommended that Isle of Man Film partner with a more established film institution in the UK to source more Isle of Man film production opportunities. This led the Isle of Man Government to invest in shares in Pinewood Shepperton Plc, which were later sold at a profit. Once one of the busiest areas of film production in the British Isles, the Isle of Man hopes to use its strong foundation in film to grow its television and new digital media industry. In a recent Isle of Man Department of Economic Development strategic review, the Island's digital sector, which accounts for over 2,000 jobs, features 'digital media' and the creative industries, and embraces partnerships with the industry and its individual sector bodies, such as the Isle of Media, a new media cluster. Hosting of motorsport events, like the Isle of Man Car Rally and the more prominent TT motorcycle races, contributes to the tourism economy. Tourism in the Isle of Man developed from advances in transport to the island. In 1819 the first steamship, "Robert Bruce", came to the island, only seven years after the first steam vessel in the UK. In the 1820s, tourism grew thanks to improved transport. The island government's own report for the financial years 2014/15–2015/16 shows tourist accommodation to be the smallest sector at 0.3%, ranking slightly above 'mining and quarrying' (0.1%). Since 1999, the Isle of Man has received electricity through the world's longest submarine AC cable, the 90 kV Isle of Man to England Interconnector, as well as from a natural gas power station in Douglas, an oil power station in Peel and a small hydro-electric power station in Sulby Glen. The Island is connected by five submarine cables to the UK and Ireland. While the Isle of Man Communications Commission refers to Akamai's State of the Internet Report for Q1 2017, with "the Island ranked 8th in the world for percentage of broadband connections with >4 Mb/s connectivity, with 96% of users connecting at speeds greater than 4Mb/s", an "international league table of broadband speeds puts the Isle of Man at 50th in the world". Manx Telecom has announced a roll-out of Fibre-to-the-Home (FTTH) superfast broadband with download speeds of up to 1 gigabit per second. Ronaldsway Airport links the Isle of Man, via six airlines, to eleven UK and Irish scheduled flight destinations. The Steam Packet Company provides ferry services to Liverpool, Heysham, Belfast and Dublin.
Labour force by occupation: agriculture, forestry and fishing 3%; manufacturing 11%; construction 10%; transport and communication 8%; wholesale and retail distribution 11%; professional and scientific services 18%; public administration 6%; banking and finance 18%; tourism 2%; entertainment and catering 3%; miscellaneous services 10%. Unemployment rate: nominally 2.0% (January 2016). Industries: financial services, light manufacturing, tourism. Agricultural products: cereals, vegetables, cattle, sheep, pigs, poultry. Export commodities: tweeds, herring, processed shellfish, beef, lamb. Export partner: UK. Import commodities: timber, fertilizers, fish. Import partner: UK. Currency: 1 Isle of Man pound = 100 pence. Exchange rates: the Manx pound is at par with the British pound. Fiscal year: 1 April – 31 March.
https://en.wikipedia.org/wiki?curid=14767
Communications in the Isle of Man The Isle of Man has an extensive communications infrastructure consisting of telephone cables, submarine cables, and an array of television and mobile phone transmitters and towers. The history of Manx telecommunications starts in 1859, when the Isle of Man Electric Telegraph Company was formed on the island with the intention of connecting across the island by telegraph, and allowing messages to be sent onwards to the UK. In August 1859, a long cable was commissioned from Glass, Elliot and Company of Greenwich and laid from Cranstal (north of Ramsey) to St Bees in Cumbria using the chartered cable ship "Resolute". The cable was single-core, with gutta-percha insulation. Twenty miles of overhead cable were also erected from Cranstal south to Ramsey, and on to Douglas. In England, the telegraph was connected to Whitehaven and the circuits of the Electric Telegraph Company. The telegraph offices were located at 64 Athol Street, Douglas (also the company's head office) and at East Quay, Ramsey (now Marina House). On 10 August 1860 the company was statutorily incorporated by an Act of Tynwald with a capital of £5,500. The currents at Cranstal proved too strong, and in 1864 the cable was taken up and relaid further south, at Port-e-Vullen in Ramsey Bay. It was later relaid to land even further south at Port Cornaa. Following the 1869 finalisation of UK telegraph nationalisation into a General Post Office monopoly, the Isle of Man Telegraph Company was nationalised in 1870 under the Telegraph Act 1870 (an Act of Parliament) at a cost to the British Government of £16,106 (paid in 1872 following arbitration proceedings over the value). Prior to nationalisation, the island's telegraph operations had been performing poorly and the company's share price valued it at around £100. Subsequent to nationalisation, operations were taken over by the GPO. The internal telegraph system was extended within a year to Castletown and Peel; however, by then the previous lack of modern communications in Castletown had already started the Isle of Man Government on its move to Douglas. Due to increasing usage in the years following nationalisation, further cables between Port Cornaa and St Bees were laid in 1875 and 1885. By 1883 Smith's Directory listed several telegraph offices operated by the Post Office; in addition to those at Douglas, Ramsey, Castletown and Peel, the telegraph was also available at Laxey, Ballaugh, and Port St Mary. Throughout the First World War, the cable landing station at Port Cornaa was guarded by the Isle of Man Volunteer Corps. The undersea telegraph cables have been disused since the 1950s, but remain in place. A teleport, with several earth stations, is under construction on the Isle of Man by SES Satellite Leasing, the entrepreneurial investment arm of SES. The teleport is expected to enter into service in 2017. It will be a state-of-the-art facility providing satellite telemetry, tracking and commanding (TT&C) facilities and capacity management, together with a wide range of teleport services such as uplink, downlink, and contribution services for broadcasters and data centres. The main telephone provider on the Isle of Man today is Manx Telecom. In 1889 George Gillmore, formerly an electrician for the GPO's Manx telegraph operations, was granted a licence by the Postmaster General to operate the Isle of Man's first telephone service.
Based at an exchange in Athol Street, early customers of Gillmore's telephone service included the Isle of Man Steam Packet Company and the Isle of Man Railway. Not having the resources to fund expansion or a link to England, Gillmore sold his licence to the National Telephone Company and stayed on as their manager on the island. By 1901 there were 600 subscribers, and the telephone system had been extended to Ramsey, Castletown, Peel, Port Erin, Port St Mary and Onchan. On 1 January 1912 the National Telephone Company was nationalised and merged into the General Post Office by the Telephone Transfer Act 1911. Only Guernsey, Portsmouth and Hull remained outside of the GPO. In 1922, the General Post Office offered to sell the island's telephone service to the Manx government, but the offer was not taken up. A similar arrangement in Jersey for that island's telephone service was concluded in 1923. The first off-island telephone link was established in 1929, with the laying of a cable by the "CS Faraday" between Port Erin and Ballyhornan in Northern Ireland, a distance of 57 km, and then between Port Grenaugh and Blackpool, primarily to provide a link to Northern Ireland. The cable was completed on 6 June 1929, and the first call between the Isle of Man and the outside world was made on 28 June 1929 by Lieutenant Governor Sir Claude Hill in Douglas to the Postmaster General in Liverpool. The cable initially carried only two trunk circuits. In 1942, a pioneering VHF frequency-modulated radio link was established between Creg-na-Baa and the UK to provide an alternative to the sub-sea cable. This has since been discontinued. The cable link was augmented on 24 June 1943 by a further cable between Cemaes Bay in Anglesey and Port Erin, which had the world's first submerged repeater, laid by "HMCS Iris". The repeater doubled the possible number of circuits on the cable, and although it failed after only five months, its replacement worked for seven years. In 1962 a further undersea cable was laid by "HMTS Ariel" between Colwyn Bay and the Island. Historically, the telephone system on the Isle of Man had been run as a monopoly by the British General Post Office, and later British Telecommunications, and operated as part of the Liverpool telephone district. By 1985 the privatised British Telecom had inherited the telephone operations of the GPO, including those on the Isle of Man. At this time the Manx Government announced that it would award a 20-year licence to operate the telephone system in a tender process. As part of this process, in 1986 British Telecom created a Manx-registered subsidiary company, Manx Telecom, to bid for the tender. It was believed that a local identity and management would be more politically acceptable in the tendering process as they competed with Cable & Wireless to win the licence. Manx Telecom won the tender and commenced operations under the new identity from 1 January 1987. On 28 March 1988 an 8,000-telephone-circuit fibre optic cable, the longest unregenerated system in Europe, was inaugurated. It links Port Grenaugh to Silecroft in Cumbria, and was laid in September 1987. The cable was buried in the seabed along its entire length. A further fibre optic cable, known as BT-MT1, was laid in October 1990 between Millom in Cumbria and Douglas. Jointly operated by BT and Manx Telecom, it provides six channels, each with a bandwidth of 140 Mbit/s. This cable remains in use today. In July 1992, Mercury Communications laid the LANIS fibre-optic cables.
LANIS-1 runs between Port Grenaugh and Blackpool, and LANIS-2 between the Isle of Man and Northern Ireland. They have six channels each, with a bandwidth of 565 Mbit/s per channel. The LANIS cables are now operated by Cable & Wireless. The LANIS-1 cable was damaged 600 m off Port Grenaugh on 27 November 2006, causing loss of the link and resulting in temporary Internet access issues for some Manx customers whilst it was awaiting repair. On 17 November 2001 Manx Telecom became part of mmO2 following the demerger of BT Wireless's operations from BT Group, and the company was subsequently owned by Telefónica. On 4 June 2010 Manx Telecom was sold by Telefónica to UK private equity investor HgCapital (who bought the majority stake), alongside telecoms management company CPS Partners. In December 2007, the Manx Electricity Authority and its telecoms subsidiary, e-llan Communications, commissioned the lighting of a new undersea fibre-optic link. It had been laid in 1999 between Blackpool and Douglas as part of the Isle of Man to England Interconnector, which connects the Manx electricity system to the UK's National Grid. In December 2017, Horizon Electronics Isle of Man (formerly Horizon Electro) helped with the online TV services of the Isle of Man. According to the CIA World Factbook, in 1999 there were 51,000 fixed telephone lines in use in the Isle of Man. The Isle of Man is included within the UK telephone numbering system, and is accessed externally via UK area codes rather than by its own country calling code. The area codes currently in use are +44 1624 (landlines) and +44 7425 / +44 7624 / +44 7924 (mobiles). Submarine cables in Manx waters are governed by the Submarine Cables Act 2003 (an Act of Tynwald). It is also rumoured that various online gaming companies operate their own networks outside of these providers, although they do not resell that service. The mobile phone network operated by Manx Telecom has been used by O2 as an environment for developing and testing new products and services prior to wider rollout. In December 2001, the company became the first telecommunications operator in Europe to launch a live 3G network. In November 2005, the company became the first in Europe to offer its customers an HSDPA (3.5G) service. Sure built its own mobile network on the island in 2007 and, following various upgrades, now delivers 2G, 3G and 4G services. In 1996 the Isle of Man government obtained permission to use the .im national top-level domain (TLD) and has ultimate responsibility for its use. The domain is managed on a daily basis by Domicilium (IOM) Limited, an island-based Internet service provider. Broadband Internet services are available through six local providers: Manx Telecom, Sure, Wi-Manx, Domicilium, Opti-Fi Limited and BlueWave Communications. The public-service commercial radio station for the island is Manx Radio, which is funded partly by government grant and partly by advertising. There are two other Manx-based FM radio stations, Energy FM and 3 FM. BBC national radio stations are also relayed locally via a transmitter located to the south of Douglas, fed from the Sandale transmitting station in Cumbria as well as from the Holme Moss transmitting station in West Yorkshire. The Douglas transmitter also broadcasts the BBC's DAB digital radio services and Classic FM. Manx Radio is the only local service to broadcast on AM medium wave. No UK services are relayed via local AM transmitters.
No longwave stations operate from the Island, although one (Musicmann279) was proposed. There are currently no proposals to broadcast any of the three insular FM stations on DAB. There is no island-specific television service. Local transmitters retransmit UK Freeview broadcasts. The BBC region is BBC North West and the ITV region is Granada Television. Many television services are available by satellite, such as Sky, and Freesat from the Astra 2/Eurobird 1 group, as well as services from a range of other satellites around Europe such as Astra 1 and Hot Bird. The Manx companies ViaSat-IOM, ManSat and Telesat-IOM use the communications satellite ViaSat-1, which launched in 2011 and is positioned at the Isle of Man-registered 115.1 degrees west longitude geostationary orbit point. In some areas, terrestrial television directly from the United Kingdom or Republic of Ireland can also be received. Analogue television transmission ceased between 2008 and 2009, when limited local transmission of digital terrestrial television commenced. The UK's television licence regime extends to the island. There is no island-specific opt-out of the BBC regional news programme "North West Tonight", in the way that the Channel Islands get their own version of "Spotlight". Television was first received on the Isle of Man from the Holme Moss transmitter, which started broadcasting BBC Television (later BBC One) from 12 October 1951. Signals from Holme Moss were easily received on the Isle of Man. ITV television has been available on parts of the east of the Isle of Man since 3 May 1956, when Granada Television (and ABC Television from 5 May 1956 to 28 July 1968) transmissions started from the Winter Hill transmitting station, and on parts of the west of the island since 31 October 1959 from the Black Mountain transmitting station in Northern Ireland, which broadcasts Ulster Television. Parts of the north of the island have received Border Television since 1 September 1961, initially directly from the Caldbeck transmitting station in Cumberland (which became part of Cumbria in 1974). On 26 March 1965, Border Television commenced relay of their signal through a local transmitter on Richmond Hill, above the centre of Douglas. The site allowed reliable reception of the Caldbeck signal, which is rebroadcast on a different frequency. The transmission tower was re-sited from London, where it had been used for early ITV transmissions. Richmond Hill was decommissioned after the close of 405-line broadcasts, although the 200 ft tower remained in use for radio, with Manx Radio transmitting on 96.9 MHz and then 97.3 MHz until 1989. Manx Radio then moved their FM service to the Carnane site and the frequency changed to the current 97.2 MHz. The television broadcasts are now transmitted from a transmitter on a hill to the south of Douglas. The transmitter is operated by Arqiva and is directly fed using a fibre optic cable. There are further sub-relay transmitters across the island. Following a realignment of ITV regional services and the digital switchover, the Douglas relay switched ITV broadcasts to Granada Television on Thursday 17 July 2009. The Broadcasting Act 1993 (an Act of Tynwald) allows for the establishment of local television services. Only one application for a licence to run such a service was received by the Communications Commission; that application was rejected. According to the CIA World Factbook, in 1999 there were 27,490 televisions in use in the Isle of Man.
Isle of Man Post issues its own stamps for use within the island and for sending post off-island. Only Manx stamps are valid for sending mail using the postal system. The Isle of Man adopted postcodes in 1993 using the prefix IM to fit in with the already established UK postcode system.
https://en.wikipedia.org/wiki?curid=14768
Transport in the Isle of Man There are a number of transport services around the Isle of Man, mostly consisting of paved roads, public transport, rail services, sea ports and an airport. All of the island's public roads are paved. Roads are numbered using a numbering scheme similar to the numbering schemes of Great Britain and Northern Ireland; each road is assigned a letter, which represents the road's category, followed by a 1 or 2 digit number. "A" roads are the main roads of the island, whilst roads labelled "B", "C", "D" or "U" decrease in size and/or quality. (The C, D and U numbers are not marked on most maps or on signposts.) There is no national speed limit: some roads may be driven at any speed which is safe and appropriate. Careless and dangerous driving laws still apply, so one may not drive at absolutely any speed, and there are local speed limits on many roads. Many unrestricted roads have frequent blind bends. Drivers are subject to a speed limit in the first full year after passing their driving test (Isle of Man citizens are permitted to start driving at the age of sixteen), and some drivers are not used to having to make progress in the way required on a larger road network such as that of the UK: the island is small enough that even a cautious driver can get from anywhere to anywhere else in no more than sixty minutes. Set against this is a strong culture of motor sport enthusiasm (epitomised by the TT, though there are many events during the year), and many residents familiar with the roads are well used to traversing country roads at speeds illegal on similar roads elsewhere. This leads to a very diverse spread of both driving competence and speed. In an official survey in 2006 the introduction of blanket speed limits was rejected by the population, suggesting that a large number appreciate the freedom. There is a comprehensive bus network, operated by Bus Vannin, a department of the Isle of Man Government, with most routes originating or terminating in Douglas. The island also has a railway and tram network; there are seven separate public rail or tram systems on the island, all of them seasonal. Horse tram services were reduced in 2019 due to works on the promenade; these works have overrun badly, and as of October 2019 the situation for the horse trams in the 2020 season was uncertain. The only commercial airport on the island is the Isle of Man Airport at Ronaldsway. Scheduled services operate to and from various cities in the United Kingdom and Ireland, operated by several different airlines. The island's other paved runways are at Jurby and Andreas. Jurby remains in Isle of Man Government ownership and is used for motorsport events and, previously, airshows, while Andreas is privately owned and used by a local glider club. The old Hall Caine Airport, a grass field near Ramsey, is no longer used. The Isle of Man Aircraft Register became operational on 1 May 2007. The register is open to all non-commercial aircraft and is intended to be of particular interest to professionally flown corporate operators. As of November 2012 a total of 537 corporate and private aircraft had been registered. There are ports at Castletown, Douglas, Peel, Port St Mary and Ramsey.
Douglas is served by frequent ferries to/from England and occasional ferries to/from Ireland; the sole operator is the Isle of Man Steam Packet Company, with exclusive use of the Isle of Man Sea Terminal and the Douglas port linkspans under the conditions of the user agreement with the Isle of Man Government. The Isle of Man register comprised 404 merchant ships of 1,000 GT or over at the end of 2017.
https://en.wikipedia.org/wiki?curid=14769
Information theory Information theory studies the quantification, storage, and communication of information. It was originally proposed by Claude Shannon in 1948 to find fundamental limits on signal processing and communication operations such as data compression, in a landmark paper titled "A Mathematical Theory of Communication". Its impact has been crucial to the success of the Voyager missions to deep space, the invention of the compact disc, the feasibility of mobile phones, the development of the Internet, the study of linguistics and of human perception, the understanding of black holes, and numerous other fields. The field is at the intersection of mathematics, statistics, computer science, physics, neurobiology, information engineering, and electrical engineering. The theory has also found applications in other areas, including statistical inference, natural language processing, cryptography, neurobiology, human vision, the evolution and function of molecular codes (bioinformatics), model selection in statistics, thermal physics, quantum computing, linguistics, plagiarism detection, pattern recognition, and anomaly detection. Important sub-fields of information theory include source coding, algorithmic complexity theory, algorithmic information theory, information-theoretic security, grey system theory and measures of information. Applications of fundamental topics of information theory include lossless data compression (e.g. ZIP files), lossy data compression (e.g. MP3s and JPEGs), and channel coding (e.g. for DSL). Information theory is used in information retrieval, intelligence gathering, gambling, and even in musical composition. A key measure in information theory is entropy. Entropy quantifies the amount of uncertainty involved in the value of a random variable or the outcome of a random process. For example, identifying the outcome of a fair coin flip (with two equally likely outcomes) provides less information (lower entropy) than specifying the outcome from a roll of a die (with six equally likely outcomes). Some other important measures in information theory are mutual information, channel capacity, error exponents, and relative entropy. Information theory studies the transmission, processing, extraction, and utilization of information. Abstractly, information can be thought of as the resolution of uncertainty. In the case of communication of information over a noisy channel, this abstract concept was made concrete in 1948 by Claude Shannon in his paper "A Mathematical Theory of Communication", in which "information" is thought of as a set of possible messages, where the goal is to send these messages over a noisy channel, and then to have the receiver reconstruct the message with low probability of error, in spite of the channel noise. Shannon's main result, the noisy-channel coding theorem, showed that, in the limit of many channel uses, the rate of information that is asymptotically achievable is equal to the channel capacity, a quantity dependent merely on the statistics of the channel over which the messages are sent. Information theory is closely associated with a collection of pure and applied disciplines that have been investigated and reduced to engineering practice under a variety of rubrics throughout the world over the past half century or more: adaptive systems, anticipatory systems, artificial intelligence, complex systems, complexity science, cybernetics, informatics, machine learning, along with systems sciences of many descriptions.
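The coin-versus-die comparison above can be checked directly. The following is a minimal Python sketch, added purely for illustration, that computes both entropies in bits:

```python
import math

def entropy(probs):
    """Shannon entropy, in bits, of a discrete distribution."""
    return -sum(p * math.log2(p) for p in probs if p > 0)

print(entropy([1/2, 1/2]))   # fair coin: 1.0 bit
print(entropy([1/6] * 6))    # fair die: ~2.585 bits
```

The die roll, with six equally likely outcomes, indeed carries more information (higher entropy) than the coin flip.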
Information theory is a broad and deep mathematical theory, with equally broad and deep applications, amongst which is the vital field of coding theory. Coding theory is concerned with finding explicit methods, called "codes", for increasing the efficiency and reducing the error rate of data communication over noisy channels to near the channel capacity. These codes can be roughly subdivided into data compression (source coding) and error-correction (channel coding) techniques. In the latter case, it took many years to find the methods Shannon's work proved were possible. A third class of information theory codes are cryptographic algorithms (both codes and ciphers). Concepts, methods and results from coding theory and information theory are widely used in cryptography and cryptanalysis. "See the article ban (unit) for a historical application." The landmark event that "established" the discipline of information theory and brought it to immediate worldwide attention was the publication of Claude E. Shannon's classic paper "A Mathematical Theory of Communication" in the "Bell System Technical Journal" in July and October 1948. Prior to this paper, limited information-theoretic ideas had been developed at Bell Labs, all implicitly assuming events of equal probability. Harry Nyquist's 1924 paper, "Certain Factors Affecting Telegraph Speed", contains a theoretical section quantifying "intelligence" and the "line speed" at which it can be transmitted by a communication system, giving the relation W = K log m (recalling Boltzmann's constant), where "W" is the speed of transmission of intelligence, "m" is the number of different voltage levels to choose from at each time step, and "K" is a constant. Ralph Hartley's 1928 paper, "Transmission of Information", uses the word "information" as a measurable quantity, reflecting the receiver's ability to distinguish one sequence of symbols from any other, thus quantifying information as H = log S^n = n log S, where "S" was the number of possible symbols, and "n" the number of symbols in a transmission. The unit of information was therefore the decimal digit, which has since sometimes been called the hartley in his honor as a unit or scale or measure of information. Alan Turing in 1940 used similar ideas as part of the statistical analysis of the breaking of the German Second World War Enigma ciphers. Much of the mathematics behind information theory with events of different probabilities was developed for the field of thermodynamics by Ludwig Boltzmann and J. Willard Gibbs. Connections between information-theoretic entropy and thermodynamic entropy, including the important contributions by Rolf Landauer in the 1960s, are explored in "Entropy in thermodynamics and information theory". In Shannon's revolutionary and groundbreaking paper, the work for which had been substantially completed at Bell Labs by the end of 1944, Shannon for the first time introduced the qualitative and quantitative model of communication as a statistical process underlying information theory, opening with the assertion that "the fundamental problem of communication is that of reproducing at one point, either exactly or approximately, a message selected at another point." With it came the ideas of the information entropy and redundancy of a source, the mutual information and capacity of a noisy channel, and the bit as a new unit of information. Information theory is based on probability theory and statistics. Information theory often concerns itself with measures of information of the distributions associated with random variables. Important quantities of information are entropy, a measure of information in a single random variable, and mutual information, a measure of information in common between two random variables.
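Hartley's measure lends itself to a short worked example. The sketch below is an illustration, not code from any source cited here; it computes n log S in an arbitrary base, defaulting to base 10 so the answer is in hartleys (decimal digits):

```python
import math

def hartley_information(alphabet_size, n_symbols, base=10):
    """Hartley's 1928 measure H = n * log_base(S)."""
    return n_symbols * math.log(alphabet_size, base)

# A 20-symbol message over a 26-letter alphabet:
print(hartley_information(26, 20))          # ~28.3 hartleys
print(hartley_information(26, 20, base=2))  # ~94.0 bits
```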
Entropy is a property of the probability distribution of a random variable and gives a limit on the rate at which data generated by independent samples with the given distribution can be reliably compressed. Mutual information is a property of the joint distribution of two random variables, and is the maximum rate of reliable communication across a noisy channel in the limit of long block lengths, when the channel statistics are determined by the joint distribution. The choice of logarithmic base in the following formulae determines the unit of information entropy that is used. A common unit of information is the bit, based on the binary logarithm. Other units include the nat, which is based on the natural logarithm, and the decimal digit, which is based on the common logarithm. In what follows, an expression of the form p log p is considered by convention to be equal to zero whenever p = 0. This is justified because lim_{p→0+} p log p = 0 for any logarithmic base. Based on the probability mass function of each source symbol to be communicated, the Shannon entropy H, in units of bits (per symbol), is given by H = −Σ_i p_i log2(p_i), where p_i is the probability of occurrence of the i-th possible value of the source symbol. This equation gives the entropy in the units of "bits" (per symbol) because it uses a logarithm of base 2, and this base-2 measure of entropy has sometimes been called the shannon in his honor. Entropy is also commonly computed using the natural logarithm (base e, where e is Euler's number), which produces a measurement of entropy in nats per symbol and sometimes simplifies the analysis by avoiding the need to include extra constants in the formulas. Other bases are also possible, but less commonly used. For example, a logarithm of base 2^8 = 256 will produce a measurement in bytes per symbol, and a logarithm of base 10 will produce a measurement in decimal digits (or hartleys) per symbol. Intuitively, the entropy H(X) of a discrete random variable X is a measure of the amount of "uncertainty" associated with the value of X when only its distribution is known. The entropy of a source that emits a sequence of N symbols that are independent and identically distributed (iid) is N·H bits (per message of N symbols). If the source data symbols are identically distributed but not independent, the entropy of a message of length N will be less than N·H. If one transmits 1000 bits (0s and 1s), and the value of each of these bits is known to the receiver (has a specific value with certainty) ahead of transmission, it is clear that no information is transmitted. If, however, each bit is independently equally likely to be 0 or 1, 1000 shannons of information (more often called bits) have been transmitted. Between these two extremes, information can be quantified as follows. If 𝕏 is the set of all messages that X could be, and p(x) is the probability of some x ∈ 𝕏, then the entropy, H, of X is defined: H(X) = 𝔼_X[I(x)] = −Σ_{x∈𝕏} p(x) log p(x). (Here, I(x) is the self-information, which is the entropy contribution of an individual message, and 𝔼_X is the expected value.) A property of entropy is that it is maximized when all the messages in the message space are equiprobable, p(x) = 1/n; i.e., most unpredictable, in which case H(X) = log n. The special case of information entropy for a random variable with two outcomes is the binary entropy function, usually taken to the logarithmic base 2, thus having the shannon (Sh) as unit: H_b(p) = −p log2 p − (1 − p) log2(1 − p). The joint entropy of two discrete random variables X and Y is merely the entropy of their pairing: (X, Y). This implies that if X and Y are independent, then their joint entropy is the sum of their individual entropies.
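The definitions above translate directly into code. The following is a minimal Python sketch of the entropy and binary entropy functions; skipping zero-probability terms implements the 0 log 0 = 0 convention, and the base argument selects the unit (bits, nats, or hartleys):

```python
import math

def entropy(probs, base=2):
    """H = -sum_i p_i log_base(p_i), skipping zero-probability terms."""
    return -sum(p * math.log(p, base) for p in probs if p > 0)

def binary_entropy(p):
    """Binary entropy function H_b(p), in shannons (bits)."""
    return entropy([p, 1 - p])

print(entropy([0.25] * 4))               # 2.0 bits: maximal for 4 equiprobable outcomes
print(entropy([0.25] * 4, base=math.e))  # ~1.386 nats for the same distribution
print(binary_entropy(0.5))               # 1.0: H_b is maximized at p = 1/2
```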
For example, if (X, Y) represents the position of a chess piece, with X the row and Y the column, then the joint entropy of the row of the piece and the column of the piece will be the entropy of the position of the piece: H(X, Y) = −Σ_{x,y} p(x, y) log2 p(x, y). Despite similar notation, joint entropy should not be confused with cross entropy. The conditional entropy or "conditional uncertainty" of X given random variable Y (also called the "equivocation" of X about Y) is the average conditional entropy over Y: H(X|Y) = −Σ_{x,y} p(x, y) log p(x|y). Because entropy can be conditioned on a random variable or on that random variable being a certain value, care should be taken not to confuse these two definitions of conditional entropy, the former of which is in more common use. A basic property of this form of conditional entropy is that: H(X|Y) = H(X, Y) − H(Y). "Mutual information" measures the amount of information that can be obtained about one random variable by observing another. It is important in communication where it can be used to maximize the amount of information shared between sent and received signals. The mutual information of X relative to Y is given by: I(X; Y) = Σ_{x,y} p(x, y) log [p(x, y) / (p(x) p(y))], where SI ("S"pecific mutual "I"nformation), the summand, is the pointwise mutual information. A basic property of the mutual information is that I(X; Y) = H(X) − H(X|Y). That is, knowing Y, we can save an average of I(X; Y) bits in encoding X compared to not knowing Y. Mutual information is symmetric: I(X; Y) = I(Y; X) = H(X) + H(Y) − H(X, Y). Mutual information can be expressed as the average Kullback–Leibler divergence (information gain) between the posterior probability distribution of X given the value of Y and the prior distribution on X: I(X; Y) = E_{p(y)}[D_KL(p(X|Y=y) ‖ p(X))]. In other words, this is a measure of how much, on the average, the probability distribution on X will change if we are given the value of Y. This is often recalculated as the divergence from the product of the marginal distributions to the actual joint distribution: I(X; Y) = D_KL(p(X, Y) ‖ p(X) p(Y)). Mutual information is closely related to the log-likelihood ratio test in the context of contingency tables and the multinomial distribution and to Pearson's χ2 test: mutual information can be considered a statistic for assessing independence between a pair of variables, and has a well-specified asymptotic distribution. The "Kullback–Leibler divergence" (or "information divergence", "information gain", or "relative entropy") is a way of comparing two distributions: a "true" probability distribution p(X), and an arbitrary probability distribution q(X). If we compress data in a manner that assumes q(X) is the distribution underlying some data, when, in reality, p(X) is the correct distribution, the Kullback–Leibler divergence is the number of average additional bits per datum necessary for compression. It is thus defined D_KL(p(X) ‖ q(X)) = Σ_x p(x) log [p(x) / q(x)]. Although it is sometimes used as a 'distance metric', KL divergence is not a true metric since it is not symmetric and does not satisfy the triangle inequality (making it a semi-quasimetric). Another interpretation of the KL divergence is the "unnecessary surprise" introduced by a prior from the truth: suppose a number X is about to be drawn randomly from a discrete set with probability distribution p(x). If Alice knows the true distribution p(x), while Bob believes (has a prior) that the distribution is q(x), then Bob will be more surprised than Alice, on average, upon seeing the value of X. The KL divergence is the (objective) expected value of Bob's (subjective) surprisal minus Alice's surprisal, measured in bits if the "log" is in base 2. In this way, the extent to which Bob's prior is "wrong" can be quantified in terms of how "unnecessarily surprised" it is expected to make him.
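Both mutual information and the Kullback–Leibler divergence can be evaluated numerically from a joint distribution. A minimal Python sketch follows; the example joint distribution is an illustrative assumption:

```python
import math

def kl_divergence(p, q):
    """D_KL(p || q) in bits; assumes q[x] > 0 wherever p[x] > 0."""
    return sum(px * math.log2(px / q[x]) for x, px in p.items() if px > 0)

def mutual_information(joint):
    """I(X;Y) computed as the divergence from p(x)p(y) to p(x,y).
    `joint` maps (x, y) pairs to probabilities summing to 1."""
    px, py = {}, {}
    for (x, y), pxy in joint.items():
        px[x] = px.get(x, 0.0) + pxy
        py[y] = py.get(y, 0.0) + pxy
    return sum(pxy * math.log2(pxy / (px[x] * py[y]))
               for (x, y), pxy in joint.items() if pxy > 0)

# A correlated pair of binary variables:
joint = {(0, 0): 0.4, (0, 1): 0.1, (1, 0): 0.1, (1, 1): 0.4}
print(mutual_information(joint))                           # ~0.278 bits
print(kl_divergence({0: 0.5, 1: 0.5}, {0: 0.9, 1: 0.1}))   # ~0.737 bits
```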
Other important information theoretic quantities include Rényi entropy (a generalization of entropy), differential entropy (a generalization of quantities of information to continuous distributions), and the conditional mutual information. Coding theory is one of the most important and direct applications of information theory. It can be subdivided into source coding theory and channel coding theory. Using a statistical description for data, information theory quantifies the number of bits needed to describe the data, which is the information entropy of the source. This division of coding theory into compression and transmission is justified by the information transmission theorems, or source–channel separation theorems, that justify the use of bits as the universal currency for information in many contexts. However, these theorems only hold in the situation where one transmitting user wishes to communicate to one receiving user. In scenarios with more than one transmitter (the multiple-access channel), more than one receiver (the broadcast channel) or intermediary "helpers" (the relay channel), or more general networks, compression followed by transmission may no longer be optimal. Network information theory refers to these multi-agent communication models. Any process that generates successive messages can be considered a source of information. A memoryless source is one in which each message is an independent identically distributed random variable, whereas the properties of ergodicity and stationarity impose less restrictive constraints. All such sources are stochastic. These terms are well studied in their own right outside information theory. Information "rate" is the average entropy per symbol. For memoryless sources, this is merely the entropy of each symbol, while, in the case of a stationary stochastic process, it is r = lim_{n→∞} H(X_n | X_{n−1}, X_{n−2}, ...); that is, the conditional entropy of a symbol given all the previous symbols generated. For the more general case of a process that is not necessarily stationary, the "average rate" is r = lim_{n→∞} (1/n) H(X_1, X_2, ..., X_n); that is, the limit of the joint entropy per symbol. For stationary sources, these two expressions give the same result. It is common in information theory to speak of the "rate" or "entropy" of a language. This is appropriate, for example, when the source of information is English prose. The rate of a source of information is related to its redundancy and how well it can be compressed, the subject of source coding. Communication over a channel, such as an Ethernet cable, is the primary motivation of information theory. However, such channels often fail to produce exact reconstruction of a signal; noise, periods of silence, and other forms of signal corruption often degrade quality. Consider the communications process over a discrete channel. A simple model treats the channel as a conditional distribution between transmitted and received messages: X represents the space of messages transmitted, and Y the space of messages received during a unit time over our channel. Let p(y|x) be the conditional probability distribution function of Y given X. We will consider p(y|x) to be an inherent fixed property of our communications channel (representing the nature of the "noise" of our channel). Then the joint distribution of X and Y is completely determined by our channel and by our choice of f(x), the marginal distribution of messages we choose to send over the channel. Under these constraints, we would like to maximize the rate of information, or the "signal", we can communicate over the channel.
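For the special case of a stationary Markov source, where each symbol depends only on the previous one, the entropy-rate formula above reduces to a one-step conditional entropy. A minimal Python sketch, with an illustrative two-state transition matrix:

```python
import math

def entropy_rate(P, pi):
    """Entropy rate of a stationary Markov source, in bits per symbol:
    r = H(X_n | X_{n-1}) = -sum_i pi_i * sum_j P[i][j] * log2(P[i][j])."""
    rate = 0.0
    for i, row in enumerate(P):
        rate -= pi[i] * sum(p * math.log2(p) for p in row if p > 0)
    return rate

# A "sticky" source that repeats its previous symbol 90% of the time;
# its stationary distribution is uniform by symmetry.
P = [[0.9, 0.1],
     [0.1, 0.9]]
print(entropy_rate(P, [0.5, 0.5]))   # ~0.469 bits/symbol, well below 1
```

The result illustrates the earlier point: symbols that are identically distributed but not independent carry less than one bit each.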
The appropriate measure for this is the mutual information, and this maximum mutual information is called the channel capacity and is given by: C = sup_{f} I(X; Y), where the supremum is taken over all possible choices of the input distribution f(x). This capacity has the following property related to communicating at information rate R (where R is usually bits per symbol). For any information rate R < C and coding error ε > 0, for large enough N, there exists a code of length N and rate ≥ R and a decoding algorithm, such that the maximal probability of block error is ≤ ε; that is, it is always possible to transmit with arbitrarily small block error. In addition, for any rate R > C, it is impossible to transmit with arbitrarily small block error. "Channel coding" is concerned with finding such nearly optimal codes that can be used to transmit data over a noisy channel with a small coding error at a rate near the channel capacity. Information theoretic concepts apply to cryptography and cryptanalysis. Turing's information unit, the ban, was used in the Ultra project, breaking the German Enigma machine code and hastening the end of World War II in Europe. Shannon himself defined an important concept now called the unicity distance. Based on the redundancy of the plaintext, it attempts to give a minimum amount of ciphertext necessary to ensure unique decipherability. Information theory leads us to believe it is much more difficult to keep secrets than it might first appear. A brute force attack can break systems based on asymmetric key algorithms or on most commonly used methods of symmetric key algorithms (sometimes called secret key algorithms), such as block ciphers. The security of all such methods currently comes from the assumption that no known attack can break them in a practical amount of time. Information theoretic security refers to methods such as the one-time pad that are not vulnerable to such brute force attacks. In such cases, the positive conditional mutual information between the plaintext and ciphertext (conditioned on the key) can ensure proper transmission, while the unconditional mutual information between the plaintext and ciphertext remains zero, resulting in absolutely secure communications. In other words, an eavesdropper would not be able to improve his or her guess of the plaintext by gaining knowledge of the ciphertext but not of the key. However, as in any other cryptographic system, care must be used to correctly apply even information-theoretically secure methods; the Venona project was able to crack the one-time pads of the Soviet Union due to their improper reuse of key material. Pseudorandom number generators are widely available in computer language libraries and application programs. They are, almost universally, unsuited to cryptographic use as they do not evade the deterministic nature of modern computer equipment and software. A class of improved random number generators is termed cryptographically secure pseudorandom number generators, but even they require random seeds external to the software to work as intended. These can be obtained via extractors, if done carefully. The measure of sufficient randomness in extractors is min-entropy, a value related to Shannon entropy through Rényi entropy; Rényi entropy is also used in evaluating randomness in cryptographic systems. Although related, the distinctions among these measures mean that a random variable with high Shannon entropy is not necessarily satisfactory for use in an extractor and so for cryptography uses. One early commercial application of information theory was in the field of seismic oil exploration.
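For the binary symmetric channel with crossover probability p, the capacity above has the well-known closed form C = 1 − H_b(p), which the following minimal sketch evaluates:

```python
import math

def binary_entropy(p):
    return -sum(q * math.log2(q) for q in (p, 1 - p) if q > 0)

def bsc_capacity(p):
    """Capacity of a binary symmetric channel: C = 1 - H_b(p) bits per use."""
    return 1 - binary_entropy(p)

print(bsc_capacity(0.0))    # 1.0: a noiseless binary channel
print(bsc_capacity(0.11))   # ~0.5: half a bit per use is achievable
print(bsc_capacity(0.5))    # 0.0: output is independent of input
```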
Work in this field of seismic exploration made it possible to strip off and separate the unwanted noise from the desired seismic signal. Information theory and digital signal processing offer a major improvement of resolution and image clarity over previous analog methods. Semioticians Doede Nauta and Winfried Nöth both considered Charles Sanders Peirce as having created a theory of information in his works on semiotics. Nauta defined semiotic information theory as the study of "the internal processes of coding, filtering, and information processing." Concepts from information theory such as redundancy and code control have been used by semioticians such as Umberto Eco to explain ideology as a form of message transmission whereby a dominant social class emits its message by using signs that exhibit a high degree of redundancy such that only one message is decoded among a selection of competing ones. Information theory also has applications in gambling, black holes, and bioinformatics.
https://en.wikipedia.org/wiki?curid=14773
Information explosion The information explosion is the rapid increase in the amount of published information or data and the effects of this abundance. As the amount of available data grows, the problem of managing the information becomes more difficult, which can lead to information overload. The Online Oxford English Dictionary indicates use of the phrase in a March 1964 "New Statesman" article. "The New York Times" first used the phrase in its editorial content in an article by Walter Sullivan on June 7, 1964, in which he described the phrase as "much discussed" (p. 11). The earliest use of the phrase seems to have been in an IBM advertising supplement to the New York Times published on April 30, 1961, and by Frank Fremont-Smith, Director of the American Institute of Biological Sciences Interdisciplinary Conference Program, in an April 1961 article in the AIBS Bulletin (p. 18). Many sectors are seeing this rapid increase in the amount of information available, such as healthcare, supermarkets, and even governments with birth certificate information and immunization records. Another sector that is being affected by this phenomenon is journalism. This profession, which in the past was responsible for the dissemination of information, may be displaced by the sheer number of information sources available today. Techniques to gather knowledge from an overabundance of electronic information (e.g., data fusion may help in data mining) have existed since the 1970s. Another common technique to deal with such an amount of information is qualitative research. Such an approach aims at organizing, synthesizing, categorizing and systematizing the information in order to make it more usable and easier to search. A new metric that is being used in an attempt to characterize the growth in person-specific information is the disk storage per person (DSP), which is measured in megabytes per person (where a megabyte is 10^6 bytes, abbreviated MB). Global DSP (GDSP) is the total rigid disk drive space (in MB) of new units sold in a year divided by the world population in that year. The GDSP metric is a crude measure of how much disk storage could possibly be used to collect person-specific data on the world population. In 1983, one million fixed drives with an estimated total of 90 terabytes were sold worldwide; 30 MB drives had the largest market segment. In 1996, 105 million drives, totaling 160,623 terabytes, were sold, with 1 and 2 gigabyte drives leading the industry. By the year 2000, with 20 GB drives leading the industry, rigid drives sold for the year were projected to total 2,829,288 terabytes, and rigid disk drive sales were projected to top $34 billion in 1997. According to Latanya Sweeney, there are three trends in data gathering today: Type 1. Expansion of the number of fields being collected, known as the “collect more” trend. Type 2. Replace an existing aggregate data collection with a person-specific one, known as the “collect specifically” trend. Type 3. Gather information by starting a new person-specific data collection, known as the “collect it if you can” trend. Since "information" in electronic media is often used synonymously with "data", the term "information explosion" is closely related to the concept of "data flood" (also dubbed "data deluge"). Sometimes the term "information flood" is used as well. All of those basically boil down to the ever-increasing amount of electronic data exchanged per time unit.
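The GDSP metric defined above is a simple ratio. A minimal Python sketch using the drive-sales figures quoted in the text; the world-population figures are rough assumptions added purely for illustration:

```python
MB_PER_TB = 1_000_000  # 1 terabyte = 10^6 megabytes (decimal units)

def gdsp(total_tb_sold, world_population):
    """Global disk storage per person, in MB/person."""
    return total_tb_sold * MB_PER_TB / world_population

print(gdsp(90, 4.7e9))        # 1983: ~0.02 MB/person (assumed ~4.7e9 people)
print(gdsp(160_623, 5.8e9))   # 1996: ~28 MB/person (assumed ~5.8e9 people)
```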
The awareness about non-manageable amounts of data grew along with the advent of ever more powerful data processing since the mid-1960s. Even though the abundance of information can be beneficial on several levels, some problems may be of concern, such as privacy, legal and ethical guidelines, filtering and data accuracy. Filtering refers to finding useful information in the middle of so much data, which relates to the job of data scientists. A typical example of a necessity of data filtering (data mining) is in healthcare, since in the coming years electronic health records (EHRs) of patients are due to be widely available. With so much information available, doctors will need to be able to identify patterns and select important data for the diagnosis of the patient. On the other hand, according to some experts, having so much public data available makes it difficult to provide data that is actually anonymous. Another point to take into account is the legal and ethical guidelines, which relate to who will be the owner of the data, how frequently they are obliged to release it, and for how long. With so many sources of data, another problem will be the accuracy of such data: an untrusted source may be challenged by others ordering a new set of data, causing a repetition in the information. According to Edward Huth, another concern is the accessibility and cost of such information. The accessibility rate could be improved by either reducing the costs or increasing the utility of the information. The reduction of costs, according to the author, could be done by associations, which should assess which information was relevant and gather it in a more organized fashion. As of August 2005, there were over 70 million web servers; two years later there were over 135 million web servers. According to Technorati, the number of blogs doubles about every 6 months, with a total of 35.3 million blogs tracked. This is an example of the early stages of logistic growth, where growth is approximately exponential, since blogs are a recent innovation. As the number of blogs approaches the number of possible producers (humans), saturation occurs, growth declines, and the number of blogs eventually stabilizes.
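The growth pattern described above can be sketched with a standard logistic curve: early doubling every six months, then saturation near a carrying capacity. All parameter values below are illustrative assumptions, not measurements:

```python
import math

def logistic_blogs(t_months, K=1e9, n0=35.3e6, doubling_months=6):
    """Logistic curve with early doubling time `doubling_months`,
    starting from n0 and saturating at carrying capacity K."""
    r = math.log(2) / doubling_months   # early exponential growth rate
    return K / (1 + (K / n0 - 1) * math.exp(-r * t_months))

for t in (0, 6, 12, 36, 120):
    print(t, f"{logistic_blogs(t):.3g}")  # roughly doubles early, then flattens
```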
https://en.wikipedia.org/wiki?curid=14774
Inch The inch (abbreviation: in or ″) is a unit of length in the (British) imperial and United States customary systems of measurement. It is equal to 1/36 yard or 1/12 of a foot. Derived from the Roman uncia ("twelfth"), the word "inch" is also sometimes used to translate similar units in other measurement systems, usually understood as deriving from the width of the human thumb. Standards for the exact length of an inch have varied in the past, but since the adoption of the international yard during the 1950s and 1960s it has been based on the metric system and defined as exactly 25.4 mm. The English word "inch" was an early borrowing from Latin "uncia" ("one-twelfth; Roman inch; Roman ounce") not present in other Germanic languages. The vowel change from Latin to Old English (which became Modern English) is known as umlaut. The consonant change from the Latin (spelled "c") to English is palatalisation. Both were features of Old English phonology. "Inch" is cognate with "ounce", whose separate pronunciation and spelling reflect its reborrowing in Middle English from Anglo-Norman "unce" and "ounce". In many other European languages, the word for "inch" is the same as or derived from the word for "thumb", as a man's thumb is about an inch wide (and this was even sometimes used to define the inch). Examples include the French "pouce", the Swedish "tum" ("inch") and "tumme" ("thumb"), and the Dutch "duim". The inch is a commonly used customary unit of length in the United States, Canada, and the United Kingdom. It is also used in Japan for electronic parts, especially display screens. In most of continental Europe, the inch is also used informally as a measure for display screens. For the United Kingdom, guidance on public sector use states that, since 1 October 1995, without time limit, the inch (along with the foot) is to be used as a primary unit for road signs and related measurements of distance (with the possible exception of clearance heights and widths) and may continue to be used as a secondary or supplementary indication following a metric measurement for other purposes. Inches are commonly used to specify the diameter of vehicle wheel rims, and the corresponding inner diameter of tyres – the last number in a car or truck tyre size such as 235/75 R16; the first two numbers give the width (normally expressed in millimetres for cars and light trucks) and aspect ratio of the tyre (height 75% of width in this example), and the R designates a radial-ply construction. Wheel manufacturers commonly specify the wheel width in inches (typically 6.5, 7, 7.5, or 8 for a 235/75 tyre). The international standard symbol for inch is in (see ISO 31-1, Annex A) but traditionally the inch is denoted by a double prime, which is often approximated by double quotes, and the foot by a prime, which is often approximated by an apostrophe. For example, three feet two inches can be written as 3′ 2″. (This is akin to how the first and second "cuts" of the hour and degree are likewise indicated by prime and double prime symbols.) Subdivisions of an inch are typically written using dyadic fractions with odd number numerators; for example, two and three eighths of an inch would be written as 2 3/8″ and not as 2.375″ nor as 2 6/16″. However, for engineering purposes fractions are commonly given to three or four places of decimals and have been for many years.
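The tyre-size arithmetic above can be made concrete. A minimal Python sketch for the 235/75 R16 example:

```python
MM_PER_INCH = 25.4

width_mm = 235      # section width in millimetres
aspect = 0.75       # sidewall height as a fraction of width
rim_in = 16         # rim diameter in inches

sidewall_mm = width_mm * aspect                       # 176.25 mm per sidewall
overall_diameter_in = rim_in + 2 * sidewall_mm / MM_PER_INCH
print(round(overall_diameter_in, 1))                  # ~29.9 inches overall
```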
1 international inch is equal to 1,000 thou (mil), 1/12 of a foot, 1/36 of a yard, and exactly 25.4 mm. The earliest known reference to the inch in England is from the "Laws of Æthelberht" dating to the early 7th century, surviving in a single manuscript, the "Textus Roffensis" from 1120. Paragraph LXVII sets out the fine for wounds of various depths: one inch, one shilling; two inches, two shillings, etc. An Anglo-Saxon unit of length was the barleycorn. After 1066, 1 inch was equal to 3 barleycorns, which continued to be its legal definition for several centuries, with the barleycorn being the base unit. One of the earliest such definitions is that of 1324, where the legal definition of the inch was set out in a statute of Edward II of England, defining it as "three grains of barley, dry and round, placed end to end, lengthwise". Similar definitions are recorded in both English and Welsh medieval law tracts. One, dating from the first half of the 10th century, is contained in the Laws of Hywel Dda, which superseded those of Dyfnwal, an even earlier definition of the inch in Wales. Both definitions, as recorded in "Ancient Laws and Institutes of Wales" (vol i., pp. 184, 187, 189), are that "three lengths of a barleycorn is the inch". King David I of Scotland in his Assize of Weights and Measures (c. 1150) is said to have defined the Scottish inch as the width of an average man's thumb at the base of the nail, even including the requirement to calculate the average of a small, a medium, and a large man's measures. However, the oldest surviving manuscripts date from the early 14th century and appear to have been altered with the inclusion of newer material. In 1814, Charles Butler, a mathematics teacher at Cheam School, recorded the old legal definition of the inch to be "three grains of sound ripe barley being taken out the middle of the ear, well dried, and laid end to end in a row", and placed the barleycorn, not the inch, as the base unit of the English Long Measure system, from which all other units were derived. John Bouvier similarly recorded in his 1843 law dictionary that the barleycorn was the fundamental measure. Butler observed, however, that "[a]s the length of the barley-corn cannot be fixed, so the inch according to this method will be uncertain", noting that a standard inch measure was now [i.e. by 1843] kept in the Exchequer chamber, Guildhall, and "that" was the legal definition of the inch. This was a point also made by George Long in his 1842 "Penny Cyclopædia", observing that standard measures had since surpassed the barleycorn definition of the inch, and that to recover the inch measure from its original definition, in the event that the standard measure were destroyed, would involve the measurement of large numbers of barleycorns and taking their average lengths. He noted that this process would not perfectly recover the standard, since it might introduce errors of anywhere between one hundredth and one tenth of an inch in the definition of a yard. Before the adoption of the international yard and pound, various definitions were in use. In the United Kingdom and most countries of the British Commonwealth, the inch was defined in terms of the Imperial Standard Yard. The United States adopted the conversion factor 1 metre = 39.37 inches by an act in 1866. In 1893, Mendenhall ordered the physical realization of the inch to be based on the international prototype metres numbers 21 and 27, which had been received from the CGPM, together with the previously adopted conversion factor. As a result of the definitions above, the U.S.
inch was effectively defined as 25.4000508 mm (with a reference temperature of 68 degrees Fahrenheit) and the U.K. inch at 25.399977 mm (with a reference temperature of 62 degrees Fahrenheit). When Carl Edvard Johansson started manufacturing gauge blocks in inch sizes in 1912, Johansson's compromise was to manufacture gauge blocks with a nominal size of 25.4 mm, with a reference temperature of 20 degrees Celsius, accurate to within a few parts per million of both official definitions. Because Johansson's blocks were so popular, his blocks became the "de facto" standard for manufacturers internationally, with other manufacturers of gauge blocks following Johansson's definition by producing blocks designed to be equivalent to his. In 1930, the British Standards Institution adopted an inch of exactly 25.4 mm. The American Standards Association followed suit in 1933. By 1935, industry in 16 countries had adopted the "industrial inch", as it came to be known, effectively endorsing Johansson's pragmatic choice of conversion ratio. In 1946, the Commonwealth Science Congress recommended a yard of exactly 0.9144 metres for adoption throughout the British Commonwealth. This was adopted by Canada in 1951; the United States on 1 July 1959; Australia in 1961, effective 1 January 1964; and the United Kingdom in 1963, effective on 1 January 1964. The new standards gave an inch of exactly 25.4 mm, 1.7 millionths of an inch longer than the old imperial inch and 2 millionths of an inch shorter than the old US inch. The United States retains the 1/39.37-metre definition for survey purposes, producing a 2 millionth part difference between standard and US survey inches. This is approximately 1/8 inch per mile. In fact, 12.7 kilometres is exactly 500,000 standard inches and exactly 499,999 survey inches. This difference is significant when doing calculations in State Plane Coordinate Systems with coordinate values in the hundreds of thousands or millions of feet. In 2020, the U.S. NIST announced that the survey foot would be deprecated from 2022, and by implication, the survey inch with it. Before the adoption of the metric system, several European countries had customary units whose name translates into "inch". The French "pouce" measured 27.0 mm, at least when applied to describe the calibre of artillery pieces. The Amsterdam foot ("voet") consisted of 11 Amsterdam inches ("duim"). The Amsterdam foot is about 8% shorter than an English foot. The now obsolete Scottish inch, 1/12 of a Scottish foot, was about 1.0016 imperial inches (about 25.44 mm).
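The survey-inch arithmetic above is easy to verify. A minimal sketch:

```python
metres = 12_700                              # 12.7 km

international_inches = metres * 1000 / 25.4  # exactly 500,000
survey_inches = metres * 39.37               # exactly 499,999 (bar float rounding)
print(international_inches, round(survey_inches, 6))

# The 2 parts-per-million difference amounts to about 1/8 inch per mile:
print(63_360 * 2e-6)                         # 0.12672 inches (63,360 in = 1 mile)
```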
https://en.wikipedia.org/wiki?curid=14775
Inn Inns are generally establishments or buildings where travelers can seek lodging, and usually, food and drink. Inns are typically located in the country or along a highway; before the advent of motorized transportation they also provided accommodation for horses. Inns in Europe were possibly first established when the Romans built their system of Roman roads two millennia ago. Many inns in Europe are several centuries old. In addition to providing for the needs of travelers, inns traditionally acted as community gathering places. Historically, inns in Europe provided not only food and lodging, but stabling and fodder for the travelers' horses, as well. Famous London examples of inns include The George and The Tabard. However, there is no longer a formal distinction between an inn and several other kinds of establishments: many pubs use the name "inn", either because they are long established and may have been formerly coaching inns, or to summon up a particular kind of image. Inns were like bed and breakfasts, with a community dining room which was also used for town meetings or rented for wedding parties. The front, facing the road, was ornamental and welcoming for travelers. The back also usually had at least one livery barn for travelers to keep their horses. There were no lobbies as in modern inns; rather, the innkeeper would answer the door for each visitor and judge the people whom he decided to accommodate. Many inns were simply large houses that had extra rooms for renting. During the 19th century, the inn played a major role in the growing transportation system of England. Industry was on the rise, and people were traveling more in order to keep and maintain business. The English inn was considered an important part of English infrastructure, as it helped maintain a smooth flow of travel throughout the country. As modes of transport have evolved, tourist lodging has adapted to serve each generation of traveller. A stagecoach made frequent stops at roadside coaching inns for water, food, and horses. A passenger train stopped only at designated stations in the city centre, around which were built grand railway hotels. Motorcar traffic on old-style two-lane highways might have paused at any camp, cabin court, or motel along the way, while freeway traffic was restricted to access from designated off-ramps to side roads which quickly become crowded with hotel chain operators. The original functions of an inn are now usually split among separate establishments. For example, hotels, lodges and motels might provide the traditional functions of an inn but focus more on lodging customers than on other services; public houses (pubs) are primarily alcohol-serving establishments; and restaurants and taverns serve food and drink. (Hotels often contain restaurants serving full breakfasts and meals, thus providing all of the functions of traditional inns. Economy, limited service properties, however, lack a kitchen and bar, and therefore claim at most an included continental breakfast.) The lodging aspect of the word "inn" lives on in some hotel brand names, like Holiday Inn, and the Inns of Court in London were once accommodations for members of the legal profession. Some laws refer to lodging operators as "innkeepers". Other forms of inns exist throughout the world. Among them are the honjin and ryokan of Japan, caravanserai of Central Asia and the Middle East, and Jiuguan in ancient China. 
In Asia Minor, during the periods of rule by the Seljuq and Ottoman Turks, impressive structures functioning as inns were built because inns were considered socially significant. These inns provided accommodation for people and either their vehicles or animals, and served as a resting place for those travelling on foot or by other means. These inns were built between towns if the distance between municipalities was too far for one day's travel. These structures, called caravanserais, were inns with large courtyards and ample supplies of water for drinking and other uses. They typically contained a café, in addition to supplies of food and fodder. After the caravans travelled a while they would take a break at these caravanserais, and often spend the night to rest the human travellers and their animals. The term "inn" historically characterized a rural hotel which provided lodging, food and refreshments, and accommodation for travellers' horses. To capitalize on this nostalgic image, many typically lower-end and middling modern motor hotel operators seek to distance themselves from similar motels by styling themselves "inns", regardless of the services and accommodation provided. Examples are Comfort Inn, Days Inn, Holiday Inn, Knights Inn, and Premier Inn. The term "inn" is also retained in its historic use in many laws governing motels and hotels, often known as "innkeeper's acts", which refer to hôteliers and motel operators as "innkeepers" in the body of the legislation. These laws typically define the innkeepers' liability for valuables entrusted to them by clients and determine whether an innkeeper holds any lien against such goods. In some jurisdictions, an offence named as "defrauding an innkeeper" prohibits fraudulently obtaining "food, lodging, or other accommodation at any hotel, inn, boarding house, or eating house"; in this context, the term is often an anachronism as the majority of modern restaurants are free-standing and not attached to coaching inns or tourist lodging.
https://en.wikipedia.org/wiki?curid=14776
International Olympiad in Informatics The International Olympiad in Informatics (IOI) is an annual competitive programming competition for secondary school students. It is the second largest olympiad, after the International Mathematical Olympiad, in terms of number of participating countries (83 at IOI 2017). The first IOI was held in 1989 in Pravetz, Bulgaria. The contest consists of two days of computer programming/coding and problem-solving of an algorithmic nature. To deal with problems involving very large amounts of data, it is necessary to have not only programmers, "but also creative coders, who can dream up what it is that the programmers need to tell the computer to do. The hard part isn't the programming, but the mathematics underneath it." Students at the IOI compete on an individual basis, with up to four students competing from each participating country (with 81 countries in 2012). Students in the national teams are selected through national computing contests, such as the Australian Informatics Olympiad, British Informatics Olympiad, Indian Computing Olympiad or Bundeswettbewerb Informatik (Germany). The International Olympiad in Informatics is one of the most prestigious computer science competitions in the world. UNESCO and IFIP are patrons. On each of the two competition days, the students are typically given three problems which they have to solve in five hours. Each student works on their own, with only a computer and no other help allowed, specifically no communication with other contestants, books, etc. Usually to solve a task the contestant has to write a computer program (in C, C++, Pascal, or Java) and submit it before the five-hour competition time ends. The program is graded by being run with secret test data. From IOI 2010, tasks have been divided into subtasks with graduated difficulty, and points are awarded only when all tests for a particular subtask yield correct results, within specific time and memory limits. In some cases, the contestant's program has to interact with a secret computer library, which allows problems where the input is not fixed, but depends on the program's actions – for example in game problems. Another type of problem has known inputs which are publicly available already during the five hours of the contest. For these, the contestants have to submit an output file instead of a program, and it is up to them whether they obtain the output files by writing a program (possibly exploiting special characteristics of the input), or by hand, or by a combination of these means. Pascal was removed as an available programming language by 2019. IOI 2010 was the first to have a live web scoreboard with real-time provisional results. Submissions are scored as soon as possible during the contest, and the results posted. Contestants are aware of their own scores, but not others', and may resubmit to improve their scores. Starting from 2012, IOI has been using the Contest Management System (CMS) for developing and monitoring the contest. The scores from the two competition days and all problems are summed up separately for each contestant. At the awarding ceremony, contestants are awarded medals depending on their relative total score. The top 50% of the contestants are awarded medals, such that the relative number of gold : silver : bronze : no medal is approximately 1:2:3:6 (thus 1/12 of the contestants get a gold medal).
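The 1:2:3:6 allocation can be sketched in a few lines; the rounding policy at the boundaries is an assumption for illustration, as the official rules handle cutoffs and ties in more detail:

```python
def medal_counts(contestants):
    """Approximate gold/silver/bronze/none split in the ratio 1:2:3:6."""
    gold = round(contestants / 12)
    silver = round(contestants * 2 / 12)
    bronze = round(contestants * 3 / 12)
    return gold, silver, bronze, contestants - gold - silver - bronze

print(medal_counts(300))   # (25, 50, 75, 150): half the field receives a medal
```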
Prior to IOI 2010, students who did not receive medals did not have their scores published, making it impossible for a country to be ranked by adding together the scores of its competitors unless each won a medal. From IOI 2010, although the scores of students who did not receive medals are still not available in the official results, they are known from the live web scoreboard. In IOI 2012 the top 3 nations ranked by aggregate score (Russia, China and USA) were subsequently awarded during the closing ceremony. Analysis of female performance shows that 77.9% of women obtain no medal, while 49.2% of men obtain no medal. "The average female participation was 4.4% in 1989–1994 and 2.2% in 1996–2014." It also suggests women participate much more at the national level, sometimes making up a double-digit percentage of participants in the first stage. President of the IOI, Richard Forster, says the competition has difficulty attracting women and that, in spite of trying to solve it, "none of us have hit on quite what the problem is, let alone the solution." At IOI 2017, held in Iran, the Israeli students, who were unable to participate in Iran, took part in an offsite competition organized by the IOI in Russia. Due to visa issues, the full USA team was unable to attend, although one contestant, Zhezheng Luo, was able to attend by traveling with the Chinese team, winning a gold medal and 3rd place in the standings. At IOI 2019, held in Azerbaijan, the Armenian team did not participate due to the dispute between the two countries. Due to the COVID-19 pandemic, IOI 2020, originally scheduled to be hosted in Singapore in July, was postponed to September and later converted to an online format. Singapore will instead host the onsite IOI 2021, replacing Egypt, which will host IOI 2024. The following is a list of the top performers in the history of the IOI. The P sign indicates a perfect score, a rare achievement in IOI history. The U sign indicates an unofficial participation, where a contestant participated in a host's second team. Also, first (I), second (II) and third (III) places among gold medalists are indicated where appropriate. This list includes only those countries where the national selection contest allows the same participant to go multiple times to the IOI. Most participating countries use national feeder competitions to select their teams.
https://en.wikipedia.org/wiki?curid=14777
ISP (disambiguation) ISP most often refers to an Internet service provider. The abbreviation has a number of other expansions.
https://en.wikipedia.org/wiki?curid=14780
Erectile dysfunction Erectile dysfunction (ED), also known as impotence, is a type of sexual dysfunction characterized by the inability to develop or maintain an erection of the penis during sexual activity. ED can have psychological consequences as it can be tied to relationship difficulties and self-image. A physical cause can be identified in about 80% of cases. These include cardiovascular disease, diabetes mellitus, neurological problems such as those following prostatectomy, hypogonadism, and drug side effects. Psychological impotence is where erection or penetration fails due to thoughts or feelings; this is somewhat less frequent, on the order of about 10% of cases. In psychological impotence, there is a strong response to placebo treatment. The term "erectile dysfunction" is not used for other disorders of erection, such as priapism. Treatment involves addressing the underlying causes, lifestyle modifications, and addressing psychosocial issues. In many cases, a trial of pharmacological therapy with a PDE5 inhibitor, such as sildenafil, can be attempted. In some cases, treatment can involve inserting prostaglandin pellets into the urethra, injecting smooth muscle relaxants and vasodilators into the penis, a penile implant, a penis pump, or vascular reconstructive surgery. It is the most common sexual problem in men. ED is characterized by the regular or repeated inability to achieve or maintain an erection of sufficient rigidity to accomplish sexual activity. It is defined as the "persistent or recurrent inability to achieve and maintain a penile erection of sufficient rigidity to permit satisfactory sexual activity for at least 3 months." ED often has an impact on the emotional well-being of both men and their partners. Many men do not seek treatment due to feelings of embarrassment. About 75% of diagnosed cases of ED go untreated. Causes of or contributors to ED are numerous. Surgical intervention for a number of conditions may remove anatomical structures necessary to erection, damage nerves, or impair blood supply. ED is a common complication of treatments for prostate cancer, including prostatectomy and destruction of the prostate by external beam radiation, although the prostate gland itself is not necessary to achieve an erection. As far as inguinal hernia surgery is concerned, in most cases, and in the absence of postoperative complications, the operative repair can lead to a recovery of the sexual life of people with preoperative sexual dysfunction, while, in most cases, it does not affect people with a preoperative normal sexual life. ED can also be associated with bicycling due to both neurological and vascular problems caused by compression. The increased risk appears to be about 1.7-fold. Concerns that use of pornography can cause ED have little support in epidemiological studies, according to a 2015 literature review. Penile erection is managed by two mechanisms: the reflex erection, which is achieved by directly touching the penile shaft, and the psychogenic erection, which is achieved by erotic or emotional stimuli. The former involves the peripheral nerves and the lower parts of the spinal cord, whereas the latter involves the limbic system of the brain. In both cases, an intact neural system is required for a successful and complete erection.
Stimulation of the penile shaft by the nervous system leads to the secretion of nitric oxide (NO), which causes the relaxation of the smooth muscles of the corpora cavernosa (the main erectile tissue of the penis), and subsequently penile erection. Additionally, adequate levels of testosterone (produced by the testes) and an intact pituitary gland are required for the development of a healthy erectile system. As can be understood from the mechanisms of a normal erection, impotence may develop due to hormonal deficiency, disorders of the neural system, lack of adequate penile blood supply, or psychological problems. Spinal cord injury causes sexual dysfunction, including ED. Restriction of blood flow can arise from impaired endothelial function due to the usual causes associated with coronary artery disease, but can also be caused by prolonged exposure to bright light. In many cases, the diagnosis can be made based on the person's history of symptoms. In other cases, a physical examination and laboratory investigations are done to rule out more serious causes such as hypogonadism or prolactinoma. One of the first steps is to distinguish between physiological and psychological ED. Determining whether involuntary erections are present is important in eliminating the possibility of psychogenic causes for ED. Obtaining full erections occasionally, such as nocturnal penile tumescence when asleep (that is, when the mind and psychological issues, if any, are less present), tends to suggest that the physical structures are functionally working. Similarly, performance with manual stimulation, as well as any performance anxiety or acute situational ED, may indicate a psychogenic component to ED. Other factors leading to ED include diabetes mellitus, a well-known cause of neuropathy. ED is also related to generally poor physical health, poor dietary habits, obesity, and most specifically cardiovascular disease, such as coronary artery disease and peripheral vascular disease. Screening for cardiovascular risk factors, such as smoking, dyslipidemia, hypertension, and alcoholism, is helpful. In some particular cases, the simple search for a previously undetected groin hernia can prove useful, since it can affect sexual function in men and is relatively easily curable. The Diagnostic and Statistical Manual of Mental Disorders (DSM-IV) includes a listing for ED. Penile ultrasonography with Doppler can be used to examine the penis in the erect state. Most cases of ED of organic cause are related to changes in blood flow in the corpora cavernosa, represented by occlusive artery disease, most often of atherosclerotic origin, or due to failure of the veno-occlusive mechanism. Preceding the Doppler examination, the penis should be examined in B mode, in order to identify possible tumors, fibrotic plaques, calcifications, or hematomas, as well as to evaluate the appearance of the cavernous arteries, which can be tortuous or atheromatous. Erection can be induced by injecting 10–20 µg of prostaglandin E1, with evaluations of the arterial flow every five minutes for 25–30 minutes. The use of prostaglandin E1 is contraindicated in patients with a predisposition to priapism (e.g., those with sickle cell anemia), as well as in those with an anatomical deformity of the penis or a penile implant. Phentolamine (2 mg) is often added. Visual and tactile stimulation produces better results.
Some authors recommend the use of sildenafil by mouth to replace the injectable drugs in cases of contraindications, although the efficacy of such medication is controversial. Prior to the injection of the chosen drug, the flow pattern is monophasic, with low systolic velocities and an absence of diastolic flow. After injection, systolic and diastolic peak velocities are expected to increase, decreasing progressively with vein occlusion and becoming negative when the penis becomes rigid. The reference values vary across studies, ranging from > 25 cm/s to > 35 cm/s. Values above 35 cm/s indicate the absence of arterial disease, values below 25 cm/s indicate arterial insufficiency, and values of 25–35 cm/s are indeterminate because they are less specific (a schematic restatement of these criteria appears at the end of this entry). The data obtained should be correlated with the degree of erection observed. If the peak systolic velocities are normal, the end-diastolic velocities should be evaluated; values above 5 cm/s are associated with venogenic ED. Treatment depends on the underlying cause. In general, exercise, particularly of the aerobic type, is effective for preventing ED during midlife. Counseling can be used if the underlying cause is psychological, including how to lower stress or anxiety related to sex. Medications by mouth and vacuum erection devices are first-line treatments, followed by injections of drugs into the penis, as well as penile implants. Vascular reconstructive surgeries are beneficial in certain groups. Treatments other than surgery do not fix the underlying physiological problem, but are used as needed before sex. The PDE5 inhibitors sildenafil (Viagra), vardenafil (Levitra) and tadalafil (Cialis) are prescription drugs which are taken by mouth. As of 2018, sildenafil is available in the UK without a prescription. Additionally, a cream combining alprostadil with the permeation enhancer DDAIP has been approved in Canada as a first-line treatment for ED. Penile injections, on the other hand, can involve one of the following medications: papaverine, phentolamine, and prostaglandin E1, also known as alprostadil. In addition to injections, there is an alprostadil suppository that can be inserted into the urethra. Once inserted, an erection can begin within 10 minutes and last up to an hour. Medications to treat ED may cause a side effect called priapism. Men with low levels of testosterone can experience ED. Taking testosterone may help maintain an erection. Men with type 2 diabetes are twice as likely to have lower levels of testosterone, and are three times more likely to experience ED than non-diabetic men. A vacuum erection device helps draw blood into the penis by applying negative pressure. This type of device is sometimes referred to as a penis pump and may be used just prior to sexual intercourse. Several types of FDA-approved vacuum therapy devices are available by prescription. When pharmacological methods fail, a purpose-designed external vacuum pump can be used to attain erection, with a separate compression ring fitted to the base of the penis to maintain it. These pumps should be distinguished from other penis pumps (supplied without compression rings) which, rather than being used for temporary treatment of impotence, are claimed to increase penis length if used frequently, or vibrate as an aid to masturbation. More drastically, inflatable or rigid penile implants may be fitted surgically.
Often, as a last resort when other treatments have failed, prosthetic implants are used; this procedure involves the insertion of artificial rods into the penis. Some sources show that vascular reconstructive surgeries are viable options for some people. The Food and Drug Administration (FDA) does not recommend alternative therapies to treat sexual dysfunction. Many products are advertised as "herbal viagra" or "natural" sexual enhancement products, but no clinical trials or scientific studies support the effectiveness of these products for the treatment of ED, and synthetic chemical compounds similar to sildenafil have been found as adulterants in many of these products. The FDA has warned consumers that any sexual enhancement product that claims to work as well as prescription products is likely to contain such a contaminant. Attempts to treat ED date back well over 1,000 years. In the 8th century, men of Ancient Rome and Greece wore talismans of rooster and goat genitalia, believing these talismans would serve as an aphrodisiac and promote sexual function. In the 13th century, Albertus Magnus recommended ingesting roasted wolf penis as a remedy for impotence. During the late 16th and 17th centuries in France, male impotence was considered a crime, as well as legal grounds for a divorce. The practice, which involved inspection of the complainants by court experts, was declared obscene in 1677. The first successful vacuum erection device, or penis pump, was developed by Vincent Marie Mondat in the early 1800s. A more advanced device, based on a bicycle pump, was developed by Geddings Osbon, a Pentecostal preacher, in the 1970s. In 1982, he received FDA approval to market the product as the ErecAid®. John R. Brinkley initiated a boom in male impotence cures in the U.S. in the 1920s and 1930s. His radio programs recommended expensive goat gland implants and "mercurochrome" injections as the path to restored male virility, including operations by the surgeon Serge Voronoff. Modern drug therapy for ED made a significant advance in 1983, when British physiologist Giles Brindley dropped his trousers and demonstrated to a shocked Urodynamics Society audience his papaverine-induced erection. The drug Brindley injected into his penis was a non-specific vasodilator, an alpha-blocking agent, and the mechanism of action was clearly corporal smooth muscle relaxation. The effect that Brindley discovered established the fundamentals for the later development of specific, safe, and orally effective drug therapies. Sildenafil, the first oral PDE5 inhibitor and the current first-line treatment for ED, was introduced by Pfizer in 1998. The Latin term "impotentia coeundi" describes simple inability to insert the penis into the vagina; it is now mostly replaced by more precise terms, such as "erectile dysfunction" (ED). The study of ED within medicine is covered by andrology, a sub-field within urology. Research indicates that ED is common, and it is suggested that approximately 40% of males experience symptoms compatible with ED, at least occasionally. The condition is also on occasion called "phallic impotence". Its antonym, or opposite condition, is priapism.
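The Doppler velocity criteria described earlier amount to a simple threshold rule. The following is a minimal illustrative sketch in Python; the function and variable names are invented for the example, only the numeric cut-offs come from the text, and it is a restatement of those cut-offs, not a clinical tool:

    # Minimal sketch of the peak systolic velocity (PSV) cut-offs described
    # above. Thresholds come from the text; everything else is illustrative.
    def classify_doppler(peak_systolic_cm_s, end_diastolic_cm_s):
        """Classify penile Doppler findings by the velocity cut-offs above."""
        if peak_systolic_cm_s > 35:
            arterial = "no arterial disease"
        elif peak_systolic_cm_s < 25:
            arterial = "arterial insufficiency"
        else:
            arterial = "indeterminate (25-35 cm/s is less specific)"
        # With normal PSV, an end-diastolic velocity above 5 cm/s is
        # associated with venogenic ED (veno-occlusive failure).
        venogenic = (end_diastolic_cm_s > 5) if peak_systolic_cm_s > 35 else None
        return arterial, venogenic

    print(classify_doppler(40, 7))  # ('no arterial disease', True)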
https://en.wikipedia.org/wiki?curid=14783
Iran–Contra affair The Iran–Contra affair, popularized in Iran as the McFarlane affair and also known as the Iran–Contra scandal or simply Iran–Contra, was a political scandal in the United States that occurred during the second term of the Reagan Administration. Senior administration officials secretly facilitated the sale of arms to the Khomeini government of the Islamic Republic of Iran, which was the subject of an arms embargo. The administration hoped to use the proceeds of the arms sale to fund the Contras in Nicaragua. Under the Boland Amendment, further funding of the Contras by the government had been prohibited by Congress. The official justification for the arms shipments was that they were part of an operation to free seven American hostages being held in Lebanon by Hezbollah, a paramilitary group with Iranian ties connected to the Islamic Revolutionary Guard Corps. The plan was for Israel to ship weapons to Iran, for the United States to resupply Israel, and for Israel to pay the United States. The Iranian recipients promised to do everything in their power to achieve the release of the hostages. The first arms sales authorized to Iran were in 1981, prior to the American hostages having been taken in Lebanon. The plan was later complicated in late 1985, when Lieutenant Colonel Oliver North of the National Security Council diverted a portion of the proceeds from the Iranian weapon sales to fund the Contras, a group of anti-Sandinista rebels, in their insurgency against the socialist government of Nicaragua. While President Ronald Reagan was a vocal supporter of the Contra cause, the evidence is disputed as to whether he personally authorized the diversion of funds to the Contras. Handwritten notes taken by Defense Secretary Caspar Weinberger on 7 December 1985 indicate that Reagan was aware of potential hostage transfers with Iran, as well as the sale of Hawk and TOW missiles to "moderate elements" within that country. Weinberger wrote that Reagan said "he could answer to charges of illegality but couldn't answer to the charge that 'big strong President Reagan passed up a chance to free the hostages.'" After the weapon sales were revealed in November 1986, Reagan appeared on national television and stated that the weapons transfers had indeed occurred, but that the United States did not trade arms for hostages. The investigation was impeded when large volumes of documents relating to the affair were destroyed or withheld from investigators by Reagan administration officials. On 4 March 1987, Reagan made a further nationally televised address, taking full responsibility for the affair and stating that "what began as a strategic opening to Iran deteriorated, in its implementation, into trading arms for hostages". The affair was investigated by the U.S. Congress and by the three-person, Reagan-appointed Tower Commission. Neither investigation found evidence that President Reagan himself knew of the extent of the multiple programs. In the end, fourteen administration officials were indicted, including then-Secretary of Defense Caspar Weinberger. Eleven convictions resulted, some of which were vacated on appeal. The rest of those indicted or convicted were all pardoned in the final days of the presidency of George H. W. Bush, who had been Vice President at the time of the affair. The United States was the largest seller of arms to Iran under Mohammad Reza Pahlavi, and the vast majority of the weapons that the Islamic Republic of Iran inherited in January 1979 were American-made.
To maintain this arsenal, Iran required a steady supply of spare parts to replace those broken and worn out. After Iranian students stormed the American embassy in Tehran in November 1979 and took 52 Americans hostage, U.S. President Jimmy Carter imposed an arms embargo on Iran. After Iraq invaded Iran in September 1980, Iran desperately needed weapons and spare parts for its existing arsenal. After Ronald Reagan took office as President on 20 January 1981, he vowed to continue Carter's policy of blocking arms sales to Iran on the grounds that Iran supported terrorism. A group of senior Reagan administration officials in the Senior Interdepartmental Group conducted a secret study on 21 July 1981 and concluded that the arms embargo was ineffective, because Iran could always buy arms and spare parts for its American weapons elsewhere, while at the same time the embargo opened the door for Iran to fall into the Soviet sphere of influence, as the Kremlin could sell Iran weapons if the United States would not. The conclusion was that the United States should start selling Iran arms as soon as it was politically possible, to keep Iran from falling into the Soviet sphere of influence. At the same time, the openly declared goal of Ayatollah Khomeini to export his Islamic revolution all over the Middle East and overthrow the governments of Iraq, Kuwait, Saudi Arabia and the other states around the Persian Gulf led the Americans to perceive Khomeini as a major threat to the United States. In the spring of 1983, the United States launched Operation Staunch, a wide-ranging diplomatic effort to persuade other nations all over the world not to sell arms or spare parts for weapons to Iran. At least part of the reason the Iran–Contra affair proved so humiliating for the United States when the story broke in November 1986 was that American diplomats, as part of Operation Staunch, had from the spring of 1983 onward been lecturing other nations about how morally wrong it was to sell arms to the Islamic Republic of Iran, and had been applying strong pressure to prevent such sales. At the same time that the American government was considering its options on selling arms to Iran, Contra militants based in Honduras were waging a guerrilla war to topple the Sandinista National Liberation Front (FSLN) revolutionary government of Nicaragua. Almost from the time he took office in 1981, a major goal of the Reagan administration was the overthrow of the left-wing Sandinista government in Nicaragua and support for the Contra rebels. The Reagan administration's policy towards Nicaragua produced a major clash between the executive and legislative arms as Congress sought to limit, if not curb altogether, the ability of the White House to support the Contras. Direct U.S. funding of the Contras insurgency was made illegal through the Boland Amendment, the name given to three U.S. legislative amendments between 1982 and 1984 aimed at limiting U.S. government assistance to Contra militants. Funding ran out for the Contras by July 1984, and in October a total ban was placed in effect.
The second Boland Amendment, in effect from 3 October 1984 to 3 December 1985, stated: During the fiscal year 1985 no funds available to the Central Intelligence Agency, the Department of Defense or any other agency or entity of the United States involved in intelligence activities may be obligated or expended for the purpose of or which may have the effect of supporting directly or indirectly military or paramilitary operations in Nicaragua by any nation, organization, group, movement, or individual. In violation of the Boland Amendment, senior officials of the Reagan administration continued to secretly arm and train the Contras and provide arms to Iran, an operation they called "the Enterprise". As the Contras were heavily dependent upon U.S. military and financial support, the second Boland Amendment threatened to break the Contra movement, and led President Reagan in 1984 to order the National Security Council (NSC) to "keep the Contras together 'body and soul'", no matter what Congress voted for. A major legal debate at the center of the Iran–Contra affair concerned the question of whether the NSC was one of the "any other agency or entity of the United States involved in intelligence activities" covered by the Boland Amendment. The Reagan administration argued it was not, and many in Congress argued that it was. The majority of constitutional scholars have asserted the NSC did indeed fall within the purview of the second Boland Amendment, though the amendment did not mention the NSC by name. The broader constitutional question at stake was the power of Congress versus the power of the presidency. The Reagan administration argued that because the constitution assigned the right to conduct foreign policy to the executive, its efforts to overthrow the government of Nicaragua were a presidential prerogative that Congress had no right to try to halt via the Boland Amendments. By contrast, congressional leaders argued that the constitution had assigned Congress control of the budget, and that Congress had every right to use that power not to fund projects it disapproved of, such as the attempt to overthrow the government of Nicaragua. As part of the effort to circumvent the Boland Amendment, the NSC established "the Enterprise", an arms-smuggling network headed by a retired U.S. Air Force officer turned arms dealer, Richard Secord, that supplied arms to the Contras. It was ostensibly a private-sector operation, but in fact was controlled by the NSC. To fund "the Enterprise", the Reagan administration was constantly on the look-out for funds that came from outside the U.S. government, in order not to explicitly violate the letter of the Boland Amendment, though the efforts to find alternative funding for the Contras violated its spirit. Ironically, military aid to the Contras was reinstated with Congressional consent in October 1986, a month before the scandal broke. As reported in "The New York Times" in 1991, "continuing allegations that Reagan campaign officials made a deal with the Iranian Government of Ayatollah Ruhollah Khomeini in the fall of 1980" led to "limited investigations." However "limited," those investigations established that "Soon after taking office in 1981, the Reagan Administration secretly and abruptly changed United States policy." Secret Israeli arms sales and shipments to Iran began in that year, even as, in public, "the Reagan Administration" presented a different face, and "aggressively promoted a public campaign...
to stop worldwide transfers of military goods to Iran." "The New York Times" explains: "Iran at that time was in dire need of arms and spare parts for its American-made arsenal to defend itself against Iraq, which had attacked it in September 1980," while "Israel [a U.S. ally] was interested in keeping the war between Iran and Iraq going to ensure that these two potential enemies remained preoccupied with each other." Maj. Gen. Avraham Tamir, a high-ranking Israeli Defense Ministry official in 1981, said there was an "oral agreement" to allow the sale of "spare parts" to Iran. This was based on an "understanding" with Secretary of State Alexander Haig (which a Haig adviser denied). This account was confirmed, with a few modifications, by a former senior American diplomat, who claimed that "[Ariel] Sharon violated it, and Haig backed away...". A former "high-level" CIA official who saw the reports of arms sales to Iran by Israel in the early 1980s estimated that the total was about $2 billion a year, but also said that "The degree to which it was sanctioned I don't know." On 17 June 1985, National Security Adviser Robert McFarlane wrote a National Security Decision Directive which called for the United States to begin a rapprochement with the Islamic Republic of Iran. The paper read: Dynamic political evolution is taking place inside Iran. Instability caused by the pressures of the Iraq-Iran war, economic deterioration and regime in-fighting create the potential for major changes inside Iran. The Soviet Union is better positioned than the U.S. to exploit and benefit from any power struggle that results in changes from the Iranian regime ... The U.S. should encourage Western allies and friends to help Iran meet its import requirements so as to reduce the attractiveness of Soviet assistance ... This includes provision of selected military equipment. Defense Secretary Caspar Weinberger was highly negative, writing on his copy of McFarlane's paper: "This is almost too absurd to comment on ... like asking Qaddafi to Washington for a cozy chat." Secretary of State George Shultz was also opposed, asking how the United States could possibly sell arms to Iran after having designated it a State Sponsor of Terrorism in January 1984. Only the Director of the Central Intelligence Agency, William Casey, supported McFarlane's plan to start selling arms to Iran. In early July 1985, the historian Michael Ledeen, a consultant of National Security Adviser Robert McFarlane, sought the assistance of Israeli Prime Minister Shimon Peres in the sale of arms to Iran. Having talked to the Israeli diplomat David Kimche and to Ledeen, McFarlane learned that the Iranians were prepared to have Hezbollah release American hostages in Lebanon in exchange for Israel shipping Iran American weapons. Having been designated a State Sponsor of Terrorism since January 1984, Iran was in the midst of the Iran–Iraq War and could find few Western nations willing to supply it with weapons. The idea behind the plan was for Israel to ship weapons through an intermediary (identified as Manucher Ghorbanifar) to the Islamic republic as a way of aiding a supposedly moderate, politically influential faction within the regime of Ayatollah Khomeini that was believed to be seeking a rapprochement with the United States; after the transaction, the United States would reimburse Israel with the same weapons, while receiving monetary benefits.
McFarlane in a memo to Shultz and Weinberger wrote: The short term dimension concerns the seven hostages; the long term dimension involves the establishment of a private dialogue with Iranian officials on the broader relations ... They sought specifically the delivery from Israel of 100 TOW missiles ... The plan was discussed with President Reagan on 18 July 1985 and again on 6 August 1985. Shultz at the latter meeting warned Reagan that "we were just falling into the arms-for-hostages business and we shouldn't do it." The Americans believed that there was a moderate faction in the Islamic republic headed by Akbar Hashemi Rafsanjani, the powerful speaker of the "Majlis", who was seen as a leading potential successor to Khomeini and who was alleged to want a rapprochement with the United States. The Americans believed that Rafsanjani had the power to order Hezbollah to free the American hostages, and that establishing a relationship with him by selling Iran arms would ultimately place Iran back within the American sphere of influence. It remains unclear whether Rafsanjani really wanted a rapprochement with the United States or was just deceiving Reagan administration officials who were willing to believe that he was a moderate who would effect a rapprochement. Rafsanjani, whose nickname was "the Shark", was described by the British journalist Patrick Brogan as a man of great charm and formidable intelligence, known for his subtlety and ruthlessness, whose motives in the Iran–Contra affair remain completely mysterious. The Israeli government required that the sale of arms meet high-level approval from the United States government, and when McFarlane convinced them that the U.S. government approved the sale, Israel obliged by agreeing to sell the arms. In 1985, President Reagan entered Bethesda Naval Hospital for colon cancer surgery. While the President was recovering in the hospital, McFarlane met with him and told him that representatives from Israel had contacted the National Security Agency to pass on confidential information from what Reagan later described as the "moderate" Iranian faction headed by Rafsanjani, opposed to the Ayatollah's hardline anti-American policies. According to Reagan, these Iranians sought to establish a quiet relationship with the United States, before establishing formal relationships upon the death of the aging Ayatollah. In Reagan's account, McFarlane told Reagan that the Iranians, to demonstrate their seriousness, offered to persuade the Hezbollah militants to release the seven U.S. hostages. McFarlane met with the Israeli intermediaries; Reagan claimed that he allowed this because he believed that establishing relations with a strategically located country, and preventing the Soviet Union from doing the same, was a beneficial move. Although Reagan claims that the arms sales were to a "moderate" faction of Iranians, the Walsh Iran/Contra Report states that the arms sales were "to Iran" itself, which was under the control of the Ayatollah. Following the Israeli–U.S. meeting, Israel requested permission from the United States to sell a small number of BGM-71 TOW antitank missiles to Iran, claiming that this would aid the "moderate" Iranian faction by demonstrating that the group actually had high-level connections to the U.S. government. Reagan initially rejected the plan, until Israel sent information to the United States showing that the "moderate" Iranians were opposed to terrorism and had fought against it.
Now having a reason to trust the "moderates", Reagan approved the transaction, which was meant to be between Israel and the "moderates" in Iran, with the United States reimbursing Israel. In his 1990 autobiography "An American Life", Reagan claimed that he was deeply committed to securing the release of the hostages; it was this compassion that supposedly motivated his support for the arms initiatives. The president requested that the "moderate" Iranians do everything in their capability to free the hostages held by Hezbollah. After the scandal broke in late 1986, Reagan always insisted in public that the purpose behind the arms-for-hostages trade was to establish a working relationship with the "moderate" faction associated with Rafsanjani, to facilitate the reestablishment of the American–Iranian alliance after the soon-expected death of Khomeini, to end the Iran–Iraq War, and to end Iranian support for Islamic terrorism, while downplaying the freeing of the hostages in Lebanon as a secondary issue. By contrast, when testifying before the Tower Commission, Reagan declared that the hostage issue was the main reason for selling arms to Iran. The following arms were supplied to Iran: The first arms sales to Iran began in 1981, though the official paper trail has them beginning in 1985 (see above). On 20 August 1985, Israel sent 96 American-made TOW missiles to Iran through the arms dealer Manucher Ghorbanifar. Subsequently, on 14 September 1985, 408 more TOW missiles were delivered. On 15 September 1985, following the second delivery, Reverend Benjamin Weir was released by his captors, the Islamic Jihad Organization. On 24 November 1985, 18 Hawk anti-aircraft missiles were delivered. Robert McFarlane resigned on 4 December 1985, stating that he wanted to spend more time with his family, and was replaced by Admiral John Poindexter. Two days later, Reagan met with his advisors at the White House, where a new plan was introduced. This called for a slight change in the arms transactions: instead of the weapons going to the "moderate" Iranian group, they would go to "moderate" Iranian army leaders. As each weapons delivery was made from Israel by air, hostages held by Hezbollah would be released. Israel would continue to be reimbursed by the United States for the weapons. Though staunchly opposed by Secretary of State George Shultz and Secretary of Defense Caspar Weinberger, the plan was authorized by Reagan, who stated that, "We were "not" trading arms for hostages, nor were we negotiating with terrorists". In his notes of a meeting held in the White House on 7 December 1985, Weinberger wrote that he told Reagan that this plan was illegal, writing: I argued strongly that we have an embargo that makes arms sales to Iran illegal and President couldn't violate it and that 'washing' transactions thru Israel wouldn't make it legal. Shultz, Don Regan agreed. Weinberger's notes have Reagan saying he "could answer charges of illegality but he couldn't answer charge that 'big strong President Reagan' passed up a chance to free hostages." The now-retired National Security Advisor McFarlane flew to London to meet with Israelis and Ghorbanifar in an attempt to persuade the Iranian to use his influence to release the hostages before any arms transactions occurred; this plan was rejected by Ghorbanifar.
On the day of McFarlane's resignation, Oliver North, a military aide to the United States National Security Council (NSC), proposed a new plan for selling arms to Iran, which included two major adjustments: instead of selling arms through Israel, the sale was to be direct at a markup; and a portion of the proceeds would go to the Contras, the Nicaraguan paramilitary fighters waging guerrilla warfare against the democratically elected Sandinista government. The dealings with the Iranians were conducted via the NSC with Admiral Poindexter and his deputy Colonel North, with the American historians Malcolm Byrne and Peter Kornbluh writing that Poindexter granted much power to North "...who made the most of the situation, often deciding important matters on his own, striking outlandish deals with the Iranians, and acting in the name of the president on issues that were far beyond his competence. All of these activities continued to take place within the framework of the president's broad authorization. Until the press reported on the existence of the operation, nobody in the administration questioned the authority of Poindexter's and North's team to implement the president's decisions". North proposed a $15 million markup, while contracted arms broker Ghorbanifar added a 41% markup of his own. Other members of the NSC were in favor of North's plan; with large support, Poindexter authorized it without notifying President Reagan, and it went into effect. At first, the Iranians refused to buy the arms at the inflated price because of the excessive markup imposed by North and Ghorbanifar. They eventually relented, and in February 1986, 1,000 TOW missiles were shipped to the country. From May to November 1986, there were additional shipments of miscellaneous weapons and parts. Both the sale of weapons to Iran and the funding of the Contras attempted to circumvent not only stated administration policy, but also the Boland Amendment. Administration officials argued that regardless of Congress restricting funds for the Contras, or any affair, the President (or in this case the administration) could carry on by seeking alternative means of funding, such as private entities and foreign governments. Funding from one foreign country, Brunei, was botched when North's secretary, Fawn Hall, transposed the digits of North's Swiss bank account number. A Swiss businessman, suddenly $10 million richer, alerted the authorities to the mistake. The money was eventually returned to the Sultan of Brunei, with interest. On 7 January 1986, John Poindexter proposed to Reagan a modification of the approved plan: instead of negotiating with the "moderate" Iranian political group, the United States would negotiate with "moderate" members of the Iranian government. Poindexter told Reagan that Ghorbanifar had important connections within the Iranian government, so with the hope of the release of the hostages, Reagan approved this plan as well. Throughout February 1986, weapons were shipped directly to Iran by the United States (as part of Oliver North's plan), but none of the hostages were released. The retired National Security Advisor McFarlane conducted another international voyage, this one to Tehran, bringing with him a gift of a Bible with a handwritten inscription by Ronald Reagan and, according to George Cave, a cake baked in the shape of a key. Howard Teicher described the cake as a joke between North and Ghorbanifar.
McFarlane met directly with Iranian officials associated with Rafsanjani, who sought to establish U.S.–Iranian relations in an attempt to free the four remaining hostages. The American delegation comprised McFarlane, North, Cave (a retired CIA agent who had worked in Iran in the 1960s–70s), Teicher, Israeli diplomat Amiram Nir, and a CIA translator. Carrying forged Irish passports, they arrived in Tehran aboard an Israeli plane on 25 May 1986. This meeting also failed. Much to McFarlane's disgust, he did not meet ministers, and instead met, in his words, "third and fourth level officials". At one point, an angry McFarlane shouted: "As I am a Minister, I expect to meet with decision-makers. Otherwise, you can work with my staff." The Iranians requested concessions such as Israel's withdrawal from the Golan Heights, which the United States rejected. More importantly, McFarlane refused to ship spare parts for the Hawk missiles until the Iranians had Hezbollah release the American hostages, whereas the Iranians wanted to reverse that sequence, with the spare parts being shipped before the hostages were freed. The differing negotiating positions led to McFarlane's mission returning home after four days. After the failure of the secret visit to Tehran, McFarlane advised Reagan not to talk to the Iranians anymore, advice that was disregarded. On 26 July 1986, Hezbollah freed the American hostage Father Lawrence Jenco, former head of Catholic Relief Services in Lebanon. Following this, William Casey, head of the CIA, requested that the United States authorize sending a shipment of small missile parts to Iranian military forces as a way of expressing gratitude. Casey also justified this request by stating that the contact in the Iranian government might otherwise lose face or be executed, and hostages might be killed. Reagan authorized the shipment to ensure that those potential events would not occur. North used this release to persuade Reagan to switch over to a "sequential" policy of freeing the hostages one by one, instead of the "all or nothing" policy that the Americans had pursued until then. By this point, the Americans had grown tired of Ghorbanifar, who had proven himself a dishonest intermediary who played off both sides to his own commercial advantage. In August 1986, the Americans established a new contact in the Iranian government: Ali Hashemi Bahramani, the nephew of Rafsanjani and an officer in the Revolutionary Guard. The fact that the Revolutionary Guard was deeply involved in international terrorism seemed only to attract the Americans more to Bahramani, who was seen as someone with the influence to change Iran's policies. Richard Secord, the American arms dealer who was being used as a contact with Iran, wrote to North: "My judgment is that we have opened up a new and probably better channel into Iran". North was so impressed with Bahramani that he arranged for him to secretly visit Washington, D.C., and gave him a guided tour of the White House at midnight. North frequently met with Bahramani in the summer and fall of 1986 in West Germany, discussing arms sales to Iran, the freeing of the hostages held by Hezbollah, and how best to overthrow President Saddam Hussein of Iraq and establish "a non-hostile regime in Baghdad". In September and October 1986, three more Americans – Frank Reed, Joseph Cicippio, and Edward Tracy – were abducted in Lebanon by a separate terrorist group, who referred to them simply as "G.I. Joe," after the popular American toy.
The reasons for their abduction are unknown, although it is speculated that they were kidnapped to replace the freed Americans. One more original hostage, David Jacobsen, was later released. The captors promised to release the remaining two, but the release never happened. During a secret meeting in Frankfurt in October 1986, North told Bahramani that "Saddam Hussein must go". North also claimed that Reagan had told him to tell Bahramani that "Saddam Hussein is an asshole." During a secret meeting in Mainz, Bahramani informed North that Rafsanjani, "for his own politics ... decided to get all the groups involved and give them a role to play." Thus, all the factions in the Iranian government would be jointly responsible for the talks with the Americans and "there would not be an internal war". This demand of Bahramani caused much dismay on the American side, as it made clear to them that they would not be dealing solely with a "moderate" faction in the Islamic Republic, as the Americans liked to pretend to themselves, but rather with all the factions in the Iranian government – including those who were very much involved in terrorism. Despite this, the talks were not broken off. After a leak by Mehdi Hashemi, a senior official in the Islamic Revolutionary Guard Corps, the Lebanese magazine "Ash-Shiraa" exposed the arrangement on 3 November 1986. The leak may have been orchestrated by a covert team led by Arthur S. Moreau Jr., assistant to the chairman of the United States Joint Chiefs of Staff, due to fears the scheme had grown out of control. This was the first public report of the weapons-for-hostages deal. The operation was discovered only after an airlift of guns (Corporate Air Services HPF821) was downed over Nicaragua. Eugene Hasenfus, who was captured by Nicaraguan authorities after surviving the plane crash, initially alleged in a press conference on Nicaraguan soil that two of his coworkers, Max Gomez and Ramon Medina, worked for the Central Intelligence Agency. He later said he did not know whether they did or not. The Iranian government confirmed the "Ash-Shiraa" story, and ten days after the story was first published, President Reagan appeared on national television from the Oval Office on 13 November, stating: My purpose was ... to send a signal that the United States was prepared to replace the animosity between [the U.S. and Iran] with a new relationship ... At the same time we undertook this initiative, we made clear that Iran must oppose all forms of international terrorism as a condition of progress in our relationship. The most significant step which Iran could take, we indicated, would be to use its influence in Lebanon to secure the release of all hostages held there. The scandal was compounded when Oliver North destroyed or hid pertinent documents between 21 November and 25 November 1986. During North's trial in 1989, his secretary, Fawn Hall, testified extensively about helping North alter, shred, and remove official United States National Security Council (NSC) documents from the White House. According to "The New York Times", enough documents were put into a government shredder to jam it. North's explanation for destroying some documents was to protect the lives of individuals involved in Iran and Contra operations. It was not until 1993, years after the trial, that North's notebooks were made public, and only after the National Security Archive and Public Citizen sued the Office of the Independent Counsel under the Freedom of Information Act.
During the trial, North testified that on 21, 22 or 24 November, he witnessed Poindexter destroy what may have been the only signed copy of a presidential covert-action finding that sought to authorize CIA participation in the November 1985 Hawk missile shipment to Iran. U.S. Attorney General Edwin Meese admitted on 25 November that profits from weapons sales to Iran were made available to assist the Contra rebels in Nicaragua. On the same day, John Poindexter resigned, and President Reagan fired Oliver North. Poindexter was replaced by Frank Carlucci on 2 December 1986. When the story broke, many legal and constitutional scholars expressed dismay that the NSC, which was supposed to be just an advisory body to assist the President with formulating foreign policy, had "gone operational" by becoming an executive body covertly executing foreign policy on its own. The National Security Act of 1947, which created the NSC, gave it the vague right to perform "such other functions and duties related to the intelligence as the National Security Council may from time to time direct." However, the NSC had usually, although not always, acted as an advisory agency until the Reagan administration, when the NSC "went operational", a situation condemned by both the Tower Commission and by Congress as a departure from the norm. The American historian James Canham-Clyne asserted that the Iran–Contra affair and the NSC "going operational" were not departures from the norm, but were the logical and natural consequence of the existence of the "national security state": the plethora of shadowy government agencies with multi-million dollar budgets operating with little oversight from Congress, the courts, or the media, and for whom upholding national security justified almost everything. Canham-Clyne argued that for the "national security state", the law was an obstacle to be surmounted rather than something to uphold, and that the Iran–Contra affair was just "business as usual", something he asserted the media missed by focusing on the NSC having "gone operational." In "Veil: The Secret Wars of the CIA 1981–1987", journalist Bob Woodward chronicled the role of the CIA in facilitating the transfer of funds from the Iran arms sales to the Nicaraguan Contras spearheaded by Oliver North. According to Woodward, then-Director of the CIA William J. Casey admitted to him in February 1987 that he was aware of the diversion of funds to the Contras. The controversial admission occurred while Casey was hospitalized for a stroke and, according to his wife, unable to communicate. William Casey died on 6 May 1987, the day after Congress began public hearings on Iran–Contra. Independent Counsel Lawrence Walsh later wrote: "Independent Counsel obtained no documentary evidence showing Casey knew about or approved the diversion. The only direct testimony linking Casey to early knowledge of the diversion came from [Oliver] North." Gust Avrakotos, who was responsible for the arms supplies to the Afghans at this time, was aware of the operation as well and strongly opposed it, in particular the diversion of funds allotted to the Afghan operation. According to his Middle Eastern experts, the operation was pointless because the moderates in Iran were not in a position to challenge the fundamentalists. However, he was overruled by Clair George.
On 25 November 1986, President Reagan announced the creation of a Special Review Board to look into the matter; the following day, he appointed former Senator John Tower, former Secretary of State Edmund Muskie, and former National Security Adviser Brent Scowcroft to serve as members. This Presidential Commission took effect on 1 December and became known as the Tower Commission. The main objectives of the commission were to inquire into "the circumstances surrounding the Iran–Contra matter, other case studies that might reveal strengths and weaknesses in the operation of the National Security Council system under stress, and the manner in which that system has served eight different presidents since its inception in 1947". The Tower Commission was the first presidential commission to review and evaluate the National Security Council. President Reagan appeared before the Tower Commission on 2 December 1986, to answer questions regarding his involvement in the affair. When asked about his role in authorizing the arms deals, he first stated that he had authorized them; later, he appeared to contradict himself by stating that he had no recollection of doing so. In his 1990 autobiography, "An American Life", Reagan acknowledges authorizing the shipments to Israel. The report published by the Tower Commission was delivered to the president on 26 February 1987. The Commission had interviewed 80 witnesses to the scheme, including Reagan and two of the arms trade middlemen, Manucher Ghorbanifar and Adnan Khashoggi. The 200-page report was the most comprehensive of any released, criticizing the actions of Oliver North, John Poindexter, Caspar Weinberger, and others. It determined that President Reagan did not have knowledge of the extent of the program, especially of the diversion of funds to the Contras, although it argued that the president ought to have had better control of the National Security Council staff. The report heavily criticized Reagan for not properly supervising his subordinates or being aware of their actions. A major result of the Tower Commission was the consensus that Reagan should have listened to his National Security Advisor more, thereby placing more power in the hands of that chair. In January 1987, Congress announced it was opening an investigation into the Iran–Contra affair. Depending upon one's political perspective, the Congressional investigation into the Iran–Contra affair was an attempt by the legislative arm to gain control over an out-of-control executive arm, a partisan "witch hunt" by the Democrats against a Republican administration, or a feeble effort by Congress that did far too little to rein in the "imperial presidency" that had run amok by breaking numerous laws. The Democratic-controlled United States Congress issued its own report on 18 November 1987, stating that "If the president did not know what his national security advisers were doing, he should have." The congressional report stated that the president bore "ultimate responsibility" for wrongdoing by his aides, and that his administration exhibited "secrecy, deception and disdain for the law". It also read that "the central remaining question is the role of the President in the Iran–Contra affair. On this critical point, the shredding of documents by Poindexter, North and others, and the death of Casey, leave the record incomplete". Reagan expressed regret regarding the situation in a nationally televised address from the Oval Office on 4 March 1987, and in two other speeches.
Reagan had not spoken directly to the American people for three months amidst the scandal, and he offered the following explanation for his silence: The reason I haven't spoken to you before now is this: You deserve the truth. And as frustrating as the waiting has been, I felt it was improper to come to you with sketchy reports, or possibly even erroneous statements, which would then have to be corrected, creating even more doubt and confusion. There's been enough of that. Reagan then took full responsibility for the acts committed: First, let me say I take full responsibility for my own actions and for those of my administration. As angry as I may be about activities undertaken without my knowledge, I am still accountable for those activities. As disappointed as I may be in some who served me, I'm still the one who must answer to the American people for this behavior. Finally, the president acknowledged that his previous assertions that the U.S. did not trade arms for hostages were incorrect: A few months ago I told the American people I did not trade arms for hostages. My heart and my best intentions still tell me that's true, but the facts and the evidence tell me it is not. As the Tower board reported, what began as a strategic opening to Iran deteriorated, in its implementation, into trading arms for hostages. This runs counter to my own beliefs, to administration policy, and to the original strategy we had in mind. To this day, Reagan's role in these transactions is not definitively known. It is unclear exactly what Reagan knew and when, and whether the arms sales were motivated by his desire to save the U.S. hostages. Oliver North wrote that "Ronald Reagan knew of and approved a great deal of what went on with both the Iranian initiative and private efforts on behalf of the contras and he received regular, detailed briefings on both ... I have no doubt that he was told about the use of residuals for the Contras, and that he approved it. Enthusiastically." Handwritten notes by Defense Secretary Weinberger indicate that the President was aware of potential hostage transfers with Iran, as well as the sale of Hawk and TOW missiles to what he was told were "moderate elements" within Iran. Notes taken by Weinberger on 7 December 1985 record that Reagan said that "he could answer charges of illegality but he couldn't answer charge that 'big strong President Reagan passed up a chance to free hostages'". The Republican-written "Report of the Congressional Committees Investigating the Iran–Contra Affair" made the following conclusion: There is some question and dispute about precisely the level at which he chose to follow the operation details. There is no doubt, however, ... [that] the President set the US policy towards Nicaragua, with few if any ambiguities, and then left subordinates more or less free to implement it. Domestically, the affair precipitated a drop in President Reagan's popularity. His approval ratings suffered "the largest single drop for any U.S. president in history", from 67% to 46% in November 1986, according to a "New York Times"/CBS News poll. The "Teflon President", as Reagan was nicknamed by critics, survived the affair, however, and his approval rating recovered. Internationally, the damage was more severe. Magnus Ranstorp wrote, "U.S.
willingness to engage in concessions with Iran and the Hezbollah not only signaled to its adversaries that hostage-taking was an extremely useful instrument in extracting political and financial concessions for the West but also undermined any credibility of U.S. criticism of other states' deviation from the principles of no-negotiation and no concession to terrorists and their demands." In Iran, Mehdi Hashemi, the leaker of the scandal, was executed in 1987, allegedly for activities unrelated to the scandal. Though Hashemi made a full video confession to numerous serious charges, some observers find the coincidence of his leak and the subsequent prosecution highly suspicious. The Independent Counsel, Lawrence E. Walsh, chose not to re-try North or Poindexter. In total, several dozen people were investigated by Walsh's office. During his election campaign in 1988, Vice President Bush denied any knowledge of the Iran–Contra affair by saying he was "out of the loop". Though his diaries noted that he was "one of the few people that know fully the details", he repeatedly refused to discuss the incident and won the election. A book published in 2008 by the Israeli journalist and terrorism expert Ronen Bergman asserts that Bush was also personally and secretly briefed on the affair by Amiram Nir, a counterterrorism adviser to the then Israeli prime minister, Yitzhak Shamir, when Bush was on a visit to Israel. "Nir could have incriminated the incoming President. That Nir was killed in a mysterious chartered airplane crash in Mexico in December 1988 has given rise to numerous conspiracy theories", writes Bergman. On 24 December 1992, having been defeated for reelection and nearing the end of his term in office, lame-duck President George H. W. Bush pardoned five administration officials who had been found guilty on charges relating to the affair. Bush also pardoned Caspar Weinberger, who had not yet come to trial. Attorney General William P. Barr advised the President on these pardons, especially that of Caspar Weinberger. In response to these Bush pardons, Independent Counsel Lawrence E. Walsh, who headed the investigation of Reagan Administration officials' criminal conduct in the Iran–Contra scandal, stated that "the Iran–Contra cover-up, which has continued for more than six years, has now been completed." Walsh noted that in issuing the pardons Bush appeared to be preempting being implicated himself in the crimes of Iran–Contra by evidence that was to come to light during the Weinberger trial, and noted that there was a pattern of "deception and obstruction" by Bush, Weinberger, and other senior Reagan administration officials. In Poindexter's hometown of Odon, Indiana, a street was renamed John Poindexter Street. Bill Breeden, a former minister, stole the street's sign in protest of the Iran–Contra affair. He claimed that he was holding it for a ransom of $30 million, in reference to the amount of money given to Iran to transfer to the Contras. He was later arrested and confined to prison, making him, as satirically noted by Howard Zinn, "the only person to be imprisoned as a result of the Iran–Contra Scandal". The Iran–Contra affair and the ensuing deception to protect senior administration officials (including President Reagan) has been cast as an example of post-truth politics. The 100th Congress formed a Joint Committee of the United States Congress (Congressional Committees Investigating The Iran-Contra Affair) and held hearings in mid-1987.
Transcripts were published as: "Iran–Contra Investigation: Joint Hearings Before the Senate Select Committee on Secret Military Assistance to Iran and the Nicaraguan Opposition and the House Select Committee to Investigate Covert Arms Transactions with Iran" (U.S. GPO 1987–88). A closed Executive Session heard classified testimony from North and Poindexter; this transcript was published in a redacted format. The joint committee's final report was "Report of the Congressional Committees Investigating the Iran–Contra Affair With Supplemental, Minority, and Additional Views" (U.S. GPO 17 November 1987). The records of the committee are at the National Archives, but many are still non-public. Testimony was also heard before the House Foreign Affairs Committee, House Permanent Select Committee on Intelligence, and Senate Select Committee on Intelligence, and can be found in the Congressional Record for those bodies. The Senate Intelligence Committee produced two reports: "Preliminary Inquiry into the Sale of Arms to Iran and Possible Diversion of Funds to the Nicaraguan Resistance" (2 February 1987) and "Were Relevant Documents Withheld from the Congressional Committees Investigating the Iran–Contra Affair?" (June 1989). The Tower Commission Report was published as the "Report of the President's Special Review Board" (U.S. GPO, 26 February 1987); it was also published as "The Tower Commission Report" (Bantam Books, 1987). The Office of Independent Counsel/Walsh investigation produced four interim reports to Congress. Its final report was published as the "Final Report of the Independent Counsel for Iran/Contra Matters". Walsh's records are available at the National Archives.
https://en.wikipedia.org/wiki?curid=14787
Infocom Infocom was a software company based in Cambridge, Massachusetts, that produced numerous works of interactive fiction. It also produced one notable business application, a relational database called "Cornerstone". Infocom was founded on June 22, 1979, by staff and students of the Massachusetts Institute of Technology, and lasted as an independent company until 1986, when it was bought by Activision. Activision shut down the Infocom division in 1989, although it released some titles in the 1990s under the Infocom "Zork" brand. Activision abandoned the Infocom trademark in 2002. Infocom games are text adventures where users direct the action by entering short strings of words to give commands when prompted. Generally the program responds by describing the results of the action, often the contents of a room if the player has moved within the virtual world. The user reads this information, decides what to do, and enters another short series of words, such as "go west" or "take flashlight". Infocom games were written using a roughly LISP-like programming language called "ZIL" (Zork Implementation Language or Zork Interactive Language—it was referred to as both) that compiled into a byte code able to run on a standardized virtual machine called the Z-machine. As the games were text-based and used variants of the same Z-machine interpreter, the interpreter had to be ported to new computer architectures only once per architecture, rather than once per game. Each game file included a sophisticated parser which allowed the user to type complex instructions to the game. Unlike earlier works of interactive fiction, which only understood commands of the form 'verb noun', Infocom's parser could understand a wider variety of sentences: for instance, one might type "open the large door, then go west", or "go to festeron" (a simple sketch of this style of parsing appears below). With the Z-machine, Infocom was able to release most of its games for most popular home computers of the day simultaneously—the Apple II family, Atari 800, IBM PC compatibles, Amstrad CPC/PCW (one disc worked on both machines), Commodore 64, Commodore Plus/4, Commodore 128, Kaypro CP/M, Texas Instruments TI-99/4A, the Mac, Atari ST, the Commodore Amiga, and the Radio Shack TRS-80. The company was also known for shipping creative props, or "feelies" (and even "smellies"), with its games. Inspired by "Colossal Cave", Marc Blank and Dave Lebling created what was to become the first Infocom game, "Zork", in 1977 at MIT's Laboratory for Computer Science. Despite the development of a revolutionary virtual memory system that allowed games to be much larger than the average personal computer's normal capacity, the enormous mainframe-developed game had to be split into three roughly equal parts. "Zork I" was released originally for the TRS-80 in 1980. Infocom was founded on June 22, 1979; the founding members were Tim Anderson, Joel Berez, Marc Blank, Mike Broos, Scott Cutler, Stu Galley, Dave Lebling, J. C. R. Licklider, Chris Reeve, and Al Vezza. Lebling and Blank each authored several more games, and additional game writers (or "Implementors") were hired, notably including Steve Meretzky. Other popular and inventive titles included a number of sequels and spinoff games in the "Zork" series, "The Hitchhiker's Guide to the Galaxy" by Douglas Adams, and "A Mind Forever Voyaging". In its first few years of operation, text adventures proved to be a huge revenue stream for the company.
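To make the parser discussion concrete, here is a minimal sketch in Python. It is purely illustrative: the vocabulary, function names, and the handling of articles and chained commands are invented for the example, and Infocom's actual ZIL parser was far more sophisticated than this.

    # Hypothetical sketch of multi-clause command parsing in the spirit
    # described above; this is illustrative Python, not Infocom's ZIL code.
    VERBS = {"open", "take", "go"}
    NOISE = {"the", "a", "an"}          # articles the parser ignores

    def parse(command):
        """Split 'open the large door, then go west' into (verb, object) pairs."""
        actions = []
        # 'then' (and commas) chain several actions in one typed line
        for clause in command.lower().replace(",", " ").split(" then "):
            words = [w for w in clause.split() if w not in NOISE]
            if not words or words[0] not in VERBS:
                actions.append(("error", clause.strip()))   # unknown verb
            else:
                actions.append((words[0], " ".join(words[1:])))
        return actions

    print(parse("open the large door, then go west"))
    # [('open', 'large door'), ('go', 'west')]

A real parser of this kind would also resolve adjectives against the objects actually present in the current room and ask clarifying questions ("Which door do you mean?"), but the clause-splitting and noise-word filtering above capture the basic shape of the problem.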
Whereas most computer games of the era would achieve initial success and then suffer a significant drop-off in sales, Infocom titles continued to sell for years and years. Employee Tim Anderson said of their situation, "It was phenomenal – we had a basement that just printed money." By 1983 Infocom was perhaps the most dominant computer-game company; for example, all ten of its games were on the "Softsel" top 40 list of best-selling computer games for the week of December 12, 1983, with "Zork" in first place and two others in the top ten. In late 1984, management declined an offer by publisher Simon & Schuster to acquire Infocom for $28 million, far more than the board of directors' valuation of $10–12 million. In 1993, "Computer Gaming World" described this era as the "Cambridge Camelot, where the Great Underground Empire was formed". Infocom games were popular, "InfoWorld" said, in part because "in offices all over America (more than anyone realizes) executives and managers are playing games on their computers". An estimated 25% had a computer game "hidden somewhere in their drawers", "Inc." reported, and they preferred Infocom adventures to arcade games. The company stated that year that 75% of players were over 25 years old and that 80% were men; more women played its games than other companies', especially the mysteries. Most players enjoyed reading books; in 1987 president Joel Berez stated, "[Infocom's] audience tends to be composed of heavy readers. We sell to the minority that does read". A 1996 article in "Next Generation" said Infocom's "games were noted for having more depth than any other adventure games, before or since." Three components proved key to Infocom's success: marketing strategy, rich storytelling, and feelies. Whereas most game developers sold their games mainly in software stores, Infocom also distributed their games via bookstores. Infocom's products appealed more to those with expensive computers, such as the Apple Macintosh, IBM PC, and Commodore Amiga. Berez stated that "there is no noticeable correlation between graphics machines and our penetration. There is a high correlation between the price of the machine and our sales ... people who are putting more money into their machines tend to buy more of our software". Since their games were text-based, patrons of bookstores were drawn to the Infocom games as they were already interested in reading. Unlike most computer software, Infocom titles were distributed under a no-returns policy, which allowed them to make money from a single game for a longer period of time. Next, Infocom titles featured strong storytelling and rich descriptions, eschewing the inherent restrictions of graphic displays and allowing users to use their own imaginations for the lavish and exotic locations the games described. Infocom's puzzles were unique in that they were usually tightly integrated into the storyline, and rarely did gamers feel like they were being made to jump through one arbitrary hoop after another, as was the case in many of the competitors' games. The puzzles were generally logical but also required close attention to the clues and hints given in the story, causing many gamers to keep copious notes as they went along. Sometimes, though, Infocom threw in puzzles just for the humor of it—if the user never ran into these, they could still finish the game. But discovering these early Easter eggs was satisfying for some fans of the games.
For example, one popular Easter egg was in the "Enchanter" game, which involves collecting magic spells to use in accomplishing the quest. One of these is a summoning spell, which the player needs to use to summon certain characters at different parts of the game. At one point the game mentions the "Implementers" who were responsible for creating the land of Zork. If the player tries to summon the Implementers, the game produces a vision of Dave Lebling and Marc Blank at their computers, surprised at this "bug" in the game and working feverishly to fix it. Third, the inclusion of "feelies"—imaginative props and extras tied to the game's theme—served as copy protection against copyright infringement. Some games were unsolvable without the extra content provided with the boxed game. And because of the cleverness and uniqueness of the feelies, users rarely felt like they were an intrusion or inconvenience, as was the case with most of the other copy-protection schemes of the time. Although Infocom started out with "Zork", and although the "Zork" world was the centerpiece of their product line throughout the "Zork" and "Enchanter" series, the company quickly branched out into a wide variety of story lines: fantasy, science-fiction, mystery, horror, historical adventure, children's stories, and others that defied easy categorization. In an attempt to reach out to female customers, Infocom also produced "Plundered Hearts", which cast the gamer in the role of the heroine of a swashbuckling adventure on the high seas, and which required the heroine to use more feminine tactics to win the game, since hacking-and-slashing was not a very ladylike way to behave. And to compete with the "Leisure Suit Larry" style games that were also appearing, Infocom came out with "Leather Goddesses of Phobos" in 1986, which featured "tame", "suggestive", and "lewd" playing modes. It included among its "feelies" a "scratch-and-sniff" card with six odors that corresponded to cues given to the player during the game. Originally, hints for the game were provided as a "pay-per-hint" service created by Mike Dornbrook, called the Zork Users Group (ZUG). Dornbrook also started Infocom's customer newsletter, called "The New Zork Times", to discuss game hints and preview and showcase new products. The pay-per-hint service eventually led to the development of InvisiClues: books with hints, maps, clues, and solutions for puzzles in the games. The answers to the puzzles were printed in invisible ink that only became visible when rubbed with a special marker that was provided with each book. Usually, two or more answers were given for each question that a gamer might have. The first answer would provide a subtle hint, the second a less subtle hint, and so forth until the last one gave an explicit walkthrough. Gamers could thus reveal only the hints that they needed to have to play the game. To prevent the mere questions (printed in normal ink) from giving away too much information about the game, a certain number of misleading fake questions were included in every InvisiClues book. Answers to these questions would start with misleading or impossible-to-carry-out suggestions, before the final answer revealed that the question was a fake (and usually admonished the player that revealing random clues from the book would spoil their enjoyment of the game). The InvisiClues books regularly ranked near the top of best-seller lists for computer books.
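The graded-reveal mechanic translates directly into code. The following Python sketch is purely illustrative (the question and hints are invented, not drawn from any actual InvisiClues book): each request uncovers one further, more explicit hint, mirroring the books' subtle-nudge-to-walkthrough ordering.

# A hypothetical InvisiClues-style hint list, ordered from subtle nudge
# to explicit walkthrough, revealed one entry at a time on request.
hints = {
    "How do I open the locked door?": [      # invented example question
        "Have you examined everything in the room?",
        "The rug looks oddly lumpy.",
        "Look under the rug, take the key, and unlock the door with it.",
    ],
}

def reveal(question, shown):
    """Print the next hint for a question; 'shown' counts hints already seen."""
    entries = hints[question]
    if shown < len(entries):
        print(f"Hint {shown + 1}: {entries[shown]}")
        return shown + 1
    print("No more hints for this question.")
    return shown

shown = 0
shown = reveal("How do I open the locked door?", shown)  # subtle
shown = reveal("How do I open the locked door?", shown)  # less subtle
shown = reveal("How do I open the locked door?", shown)  # full walkthrough

A real book also salted in fake questions; modelling those would simply mean flagging some entries so that the final "hint" unmasks the question rather than answering it.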
In the Solid Gold line of re-releases, InvisiClues were integrated into the game. By typing "HINT" twice, the player would open a screen of possible topics from which they could then reveal one hint at a time for each puzzle, just like the books. Infocom also released a small number of "interactive fiction paperbacks" (gamebooks), which were based on the games and featured the ability to choose a different path through the story. Similar to the "Choose Your Own Adventure" series, every couple of pages the book would give the reader the chance to make a choice, such as which direction they wanted to go or how they wanted to respond to another character. The reader would then choose one of the given answers and turn to the appropriate page. These books, however, never sold particularly well, and quickly disappeared from the bookshelves. Despite their success with computer games, Vezza and other company founders hoped to produce successful business programs like Lotus Development, also founded by people from MIT and located in the same building as Infocom. Lotus released its first product, 1-2-3, in January 1983; within a year it had earned $53 million, compared to Infocom's $6 million. In 1982 Infocom started putting resources into a new division to produce business products. In 1985 they released a database product, "Cornerstone", aimed at capturing the then-booming database market for small business. Though this application was hailed upon its release for ease of use, it sold only 10,000 copies, not enough to cover the development expenses. The program failed for a number of reasons. Although it was packaged in a slick hard plastic carrying case and was a very good database for personal and home use, it was originally priced at US$495 per copy and used copy-protected disks. Another serious miscalculation was that the program did not include any kind of scripting language, so it was not promoted by any of the database consultants that small businesses typically hired to create and maintain their DB applications. Reviewers were also consistently disappointed that Infocom—noted for the natural language syntax of their games—did not include a natural language query ability, which had been the most anticipated feature for this database application. In a final disappointment, "Cornerstone" was available only for IBM PCs; while "Cornerstone" had been programmed with its own virtual machine for maximum portability, it was not ported to any of the other platforms that Infocom supported for their games, so that feature had become essentially irrelevant. And because Cornerstone used this virtual machine for its processing, it suffered from slow, lackluster performance. Infocom's games' sales benefited significantly from the portability offered by running on top of a virtual machine. "InfoWorld" wrote in 1984 that "the company always sells games for computers you don't normally think of as game machines, such as the DEC Rainbow or the Texas Instruments Professional Computer. This is one of the key reasons for the continued success of old titles such as Zork." Dornbrook estimated that year that of the 1.8 million home computers in America, half a million homes had Infocom games ("all, if you count the pirated games"). Computer companies sent prototypes of new systems to encourage Infocom to port the Z-machine to them; the virtual machine supported more than 20 different systems, including orphaned computers for which Infocom games were among the only commercial products.
The company produced the only third-party games available for the Macintosh at launch, and Michael Berlyn promised that all 13 of its games would be available for the Atari ST within one month of its release. The virtual machine significantly slowed "Cornerstone"s execution speed, however. Businesses were moving "en masse" to the IBM PC platform by that time, so portability was no longer a significant differentiator. Infocom had sunk much of the money from games sales into "Cornerstone"; this, in addition to a slump in computer game sales, left the company in a very precarious financial position. By the time Infocom removed the copy-protection and reduced the price to less than $100, it was too late, and the market had moved on to other database solutions. By 1982 the market was moving to graphic adventures. Infocom was interested in producing them, that year proposing to Penguin Software that Antonio Antiochia, author of its "Transylvania", provide artwork. Within Infocom the game designers tended to oppose graphics, while marketing and business employees supported using them for the company to remain competitive. The partnership negotiations failed, in part because of the difficulty of adding graphics to the Z-machine, and Infocom instead began a series of advertisements mocking graphical games as "graffiti" compared to the human imagination. The marketing campaign was very successful, and Infocom's success led to other companies like Broderbund and Electronic Arts also releasing their own text games. After Cornerstone's failure, Infocom laid off half of its 100 employees, and Activision acquired the company on June 13, 1986, for $7.5 million. The merger was pushed by Activision's CEO Jim Levy, who was a fan of Infocom games and felt their two companies were in similar situations. Berez stated that although the two companies' headquarters and product lines would remain separate, "One of the effects of the merger will be for both of us to broaden our horizons". He said that "We're looking at graphics a lot", while Activision was reportedly interested in using Infocom's parser. While relations were cordial between the two companies at first, Activision's ousting of Levy in favor of new CEO Bruce Davis created problems in the working relationship with Infocom. Davis believed that his company had paid too much for Infocom, initiated a lawsuit against it to recoup some of the cost, and changed the way Infocom was run. By 1988, rumors had spread of disputes between Activision and Infocom. Infocom employees reportedly believed that Activision gave poorer-quality games to Infocom, such as Tom Snyder Productions' unsuccessful "Infocomics". Activision moved Infocom development to California in 1989, after which Infocom was just a publishing label. Rising costs and falling profits, exacerbated by the lack of new products in 1988 and technical issues with its DOS products, caused Activision to close Infocom in 1989, after which some of the remaining Infocom designers such as Steve Meretzky moved to the company Legend Entertainment, founded by Bob Bates and Mike Verdu, to continue creating games in the Infocom tradition. Activision itself was struggling in the marketplace following Davis' promotion to CEO. Activision had rebranded itself as Mediagenic and tried to produce business productivity software, but fell significantly into debt.
In 1991, Mediagenic was purchased by Bobby Kotick, who immediately put measures in place to try to turn the company around, including returning to the Activision name and putting its past IP properties to use. This included the Infocom games; Kotick recognized the value of the branding of "Zork" and other titles. Activision began to sell bundles of the Infocom games that year, packaged as themed collections (usually by genre, such as the Science Fiction collection): in 1991 they published "The Lost Treasures of Infocom", followed in 1992 by "The Lost Treasures of Infocom II". These compilations featured nearly every game produced by Infocom before 1988. ("Leather Goddesses of Phobos" was not included in either bundle, but could be ordered via a coupon included with "Lost Treasures II".) The compilations lacked the "feelies" that came with each game, but in some cases included photographs of them. In 1996, the first bundles were followed by "Classic Text Adventure Masterpieces of Infocom", a single CD-ROM which contained the works of both collections. This release, however, was missing "The Hitchhiker's Guide to the Galaxy" and "Shogun" because the licenses from Douglas Adams' and James Clavell's estates had expired. Under Kotick's leadership, Activision also developed "Return to Zork", published under its Infocom label. Eventually, Activision abandoned the "Infocom" name. The brand name was registered by Oliver Klaeffling of Germany in 2007, then abandoned the following year. The Infocom trademark was then held by Pete Hottelet's Omni Consumer Products, who registered the name around the same time as Klaeffling in 2007. As of March 2017, the trademark is owned by infocom.xyz, according to Bob Bates. With the exception of "The Hitchhiker's Guide to the Galaxy" and "Shogun", the copyrights to the Infocom games are believed to be still held by Activision. "Dungeon", the mainframe precursor to the commercial Zork trilogy, is believed to be free for non-commercial use but prohibited for commercial use. The popular Fortran mainframe version was based on this release, and the C version was in turn based on the Fortran version; it is available from The Interactive Fiction Archive as original FORTRAN source code, as a Z-machine story file, and as various native source ports. Many Infocom titles can be downloaded via the Internet, but only in violation of the copyright. Activision did at one point release the original trilogy for free-of-charge download as a promotion, but prohibited redistribution and has since discontinued this. There are currently at least four Infocom samplers and demos available from the IF Archive as Z-machine story files, which require a Z-machine interpreter to play. Interpreters are available for most computer platforms, the most widely used being the Frotz, Zip, and Nitfol interpreters. Five games ("Zork I", "Planetfall", "The Hitchhiker's Guide to the Galaxy", "Wishbringer" and "Leather Goddesses of Phobos") were re-released in Solid Gold format. The Solid Gold versions of those games include a built-in InvisiClues hint system. In 2012, Activision released "Lost Treasures of Infocom" for iOS devices. In-app purchases provide access to 27 of the titles. It also lacks "Shogun" and "The Hitchhiker's Guide to the Galaxy", as well as "Beyond Zork", "Zork Zero" and "Nord and Bert". Efforts have been made to make the Infocom games source code available for preservation.
In 2008, Jason Scott, a video game preservationist contributing to the Internet Archive, received the so-called "Infocom Drive", a large archive of the entire contents of Infocom's main server, made during the last few days before the company was relocated to California; besides source code for all of Infocom's games (including unreleased ones), it also contained the software manuals, design documents and other essential content, alongside Infocom's business documentation. Scott later published all of the source files, in their original ZIL format, to GitHub in 2019. "Zork" made a cameo appearance as an Easter egg in Activision and Treyarch's "". It can be accessed from the main menu and runs much like a DOS program.
https://en.wikipedia.org/wiki?curid=14788
Interactive fiction Interactive fiction, often abbreviated IF, is software simulating environments in which players use text commands to control characters and influence the environment. Works in this form can be understood as literary narratives, either in the form of interactive narratives or interactive narrations. These works can also be understood as a form of video game, either in the form of an adventure game or role-playing game. In common usage, the term refers to text adventures, a type of adventure game where the entire interface can be "text-only"; however, graphical text adventure games, where the text is accompanied by graphics (still images, animations or video), still fall under the text adventure category if the main way to interact with the game is by typing text. Some users of the term distinguish between "puzzle-free" interactive fiction, which focuses on narrative, and "text adventures", which focus on puzzles. Due to their text-only nature, text adventures sidestepped the problem of writing for widely divergent graphics architectures. This feature meant that interactive fiction games were easily ported across all the popular platforms at the time, including CP/M (not known for gaming or strong graphics capabilities). The number of interactive fiction works is increasing steadily as new ones are produced by an online community, using freely available development systems. The term can also be used to refer to digital versions of literary works that are not read in a linear fashion, known as gamebooks, where the reader is instead given choices at different points in the text; these decisions determine the flow and outcome of the story. The most famous example of this form of printed fiction is the "Choose Your Own Adventure" book series, and the collaborative "" format has also been described as a form of interactive fiction. The term "interactive fiction" is sometimes also used to refer to visual novels, a type of interactive narrative software popular in Japan. Text adventures are one of the oldest types of computer games and form a subset of the adventure genre. The player uses text input to control the game, and the game state is relayed to the player via text output. Interactive fiction usually relies on reading from a screen and on typing input, although text-to-speech synthesizers allow blind and visually impaired users to play interactive fiction titles as audio games. Input is usually provided by the player in the form of simple sentences such as "get key" or "go east", which are interpreted by a text parser. Parsers may vary in sophistication; the first text adventure parsers could only handle two-word sentences in the form of verb-noun pairs. Later parsers, such as those built on ZIL (Zork Implementation Language), could understand complete sentences, handling increasing levels of complexity and parsing sentences such as "open the red box with the green key then go north". This level of complexity is the standard for works of interactive fiction today. Despite their lack of graphics, text adventures include a physical dimension where players move between rooms. Many text adventure games boasted of their total number of rooms to indicate how much gameplay they offered. These games are unique in that they may create an "illogical space", where going north from area A takes you to area B, but going south from area B does not take you back to area A. This can create mazes that do not behave as players expect, and thus players must maintain their own map.
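Both of these ideas, the two-word command and the deliberately asymmetric map, fit in a few lines of code. The Python sketch below is a toy for illustration, not code from any real game or system: exits are stored one-way, so going north from the cave reaches the forest while going south from the forest leads somewhere else entirely, and "then" chains several commands in one line.

# A toy text-adventure world. Exits are one-way by construction: nothing
# forces "north" out of one room to pair with "south" back into it, which
# is exactly how the "illogical spaces" described above arise.
rooms = {
    "cave":   {"north": "forest"},
    "forest": {"south": "swamp"},   # NOT back to the cave
    "swamp":  {"west": "cave"},
}

location = "cave"

def do_command(command):
    """Handle simple verb-noun input; 'then' chains several commands."""
    global location
    for part in command.lower().split(" then "):
        words = part.split()
        if len(words) == 2 and words[0] == "go" and words[1] in rooms[location]:
            location = rooms[location][words[1]]
            print(f"You are in the {location}.")
        else:
            print("I don't understand that.")

do_command("go north")               # cave -> forest
do_command("go south then go west")  # forest -> swamp -> cave

A player mapping this world on the assumption that exits are symmetric would get lost immediately, which is why careful IF players draw their own maps.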
These illogical spaces are much rarer in today's era of 3D gaming, and the interactive fiction community in general decries the use of mazes entirely, claiming that mazes have become arbitrary 'puzzles for the sake of puzzles' and that they can, in the hands of inexperienced designers, become immensely frustrating for players to navigate. Interactive fiction shares much in common with Multi-User Dungeons ('MUDs'). MUDs, which became popular in the mid-1980s, rely on a textual exchange and accept similar commands from players as do works of IF; however, since interactive fiction is single player, and MUDs, by definition, have multiple players, they differ enormously in gameplay styles. MUDs often focus gameplay on activities that involve communities of players, simulated political systems, in-game trading, and other gameplay mechanics that are not possible in a single-player environment. Interactive fiction features two distinct modes of writing: the player input and the game output. As described above, player input is expected to be in simple command form (imperative sentences). A typical command may be: "> PULL Lever". The responses from the game are usually written from a second-person point of view, in present tense. This is because, unlike in most works of fiction, the main character is closely associated with the player, and the events are seen to be happening as the player plays. While older text adventures often identified the protagonist with the player directly, newer games tend to have specific, well-defined protagonists with separate identities from the player. The classic essay "Crimes Against Mimesis" discusses, among other IF issues, the nature of "You" in interactive fiction. A typical response might look something like this, the response to "look in tea chest" at the start of "Curses": "That was the first place you tried, hours and hours ago now, and there's nothing there but that boring old book. You pick it up anyway, bored as you are." Many text adventures, particularly those designed for humour (such as "Zork", "The Hitchhiker's Guide to the Galaxy", and "Leather Goddesses of Phobos"), address the player with an informal tone, sometimes including sarcastic remarks (see the transcript from "Curses", above, for an example). The late Douglas Adams, in designing the IF version of his 'Hitchhiker's Guide to the Galaxy', created a unique solution to the final puzzle of the game: the game requires the one solitary item that the player "didn't" choose at the outset of play. Some IF works dispense with second-person narrative entirely, opting for a first-person perspective ('I') or even placing the player in the position of an observer, rather than a direct participant. In some 'experimental' IF, the concept of self-identification is eliminated entirely, and the player instead takes the role of an inanimate object, a force of nature, or an abstract concept; experimental IF usually pushes the limits of the concept and challenges many assumptions about the medium. Though neither program was developed as a narrative work, the software programs ELIZA (1964–1966) and SHRDLU (1968–1970) can formally be considered early examples of interactive fiction, as both programs used natural language processing to take input from their user and respond in a virtual and conversational manner.
ELIZA simulated a psychotherapist that appeared to provide human-like responses to the user's input, while SHRDLU employed an artificial intelligence that could move virtual objects around an environment and respond to questions asked about the environment's shape. The development of effective natural language processing would become an essential part of interactive fiction development. Around 1975, Will Crowther, a programmer and an amateur caver, wrote the first text adventure game, "Adventure" (originally called "ADVENT" because a filename could only be six characters long in the operating system he was using, and later named "Colossal Cave Adventure"). Having just gone through a divorce, he was looking for a way to connect with his two young children. Over the course of a few weekends, he wrote a text-based cave exploration game that featured a sort of guide/narrator who talked in full sentences and who understood simple two-word commands that came close to natural English. "Adventure" was programmed in Fortran for the PDP-10. Crowther's original version was an accurate simulation of part of the real Colossal Cave, but also included fantasy elements (such as axe-wielding dwarves and a magic bridge). Stanford University graduate student Don Woods discovered "Adventure" while working at the Stanford Artificial Intelligence Laboratory, and in 1977 obtained and expanded Crowther's source code (with Crowther's permission). Woods's changes were reminiscent of the writings of J. R. R. Tolkien, and included a troll, elves, and a volcano some claim is based on Mount Doom, but Woods says was not. In early 1977, "Adventure" spread across ARPAnet, and has survived on the Internet to this day. The game has since been ported to many other operating systems, and was included with the floppy-disk distribution of Microsoft's MS-DOS 1.0 OS. "Adventure" is a cornerstone of the online IF community; there currently exist dozens of different independently programmed versions, with additional elements, such as new rooms or puzzles, and various scoring systems. The popularity of "Adventure" led to the wide success of interactive fiction during the late 1970s, when home computers had little, if any, graphics capability. Many elements of the original game have survived into the present, such as the command 'xyzzy', which is now included as an Easter egg in modern games like "Microsoft Minesweeper". "Adventure" was also directly responsible for the founding of Sierra Online (later Sierra Entertainment); Ken and Roberta Williams played the game and decided to design one of their own, but with graphics. Adventure International was founded by Scott Adams (not to be confused with the creator of "Dilbert"). In 1978, Adams wrote "Adventureland", which was loosely patterned after (the original) "Colossal Cave Adventure". He took out a small ad in a computer magazine in order to promote and sell "Adventureland", thus creating the first commercial adventure game. In 1979 he founded Adventure International, the first commercial publisher of interactive fiction. That same year, "Dog Star Adventure" was published in source code form in "SoftSide", spawning legions of similar games in BASIC. The largest company producing works of interactive fiction was Infocom, which created the "Zork" series and many other titles, among them "Trinity", "The Hitchhiker's Guide to the Galaxy" and "A Mind Forever Voyaging". In June 1977, Marc Blank, Bruce K.
Daniels, Tim Anderson, and Dave Lebling began writing the mainframe version of "Zork" (also known as "Dungeon"), at the MIT Laboratory for Computer Science. The game was programmed in a computer language called MDL, a variant of LISP. The term "Implementer" was the self-given name of the creators of the text adventure series "Zork"; it is for this reason that IF game designers and programmers can be referred to as implementers, often shortened to "Imps", rather than writers. In early 1979, the game was completed. Ten members of the "MIT Dynamics Modelling Group" went on to join Infocom when it was incorporated later that year. In order to make its games as portable as possible, Infocom developed the Z-machine, a custom virtual machine that could be implemented on a large number of platforms, and took standardized "story files" as input. In a non-technical sense, Infocom was responsible for developing the interactive style that would be emulated by many later interpreters. The Infocom parser was widely regarded as the best of its era. It accepted complex, complete-sentence commands like "put the blue book on the writing desk" at a time when most of its competitors' parsers were restricted to simple two-word verb-noun combinations such as "put book". The parser was actively upgraded with new features like undo and error correction, and later games would 'understand' multiple-sentence input: 'pick up the gem and put it in my bag. take the newspaper clipping out of my bag then burn it with the book of matches'. Several companies offered optional commercial feelies (physical props associated with a game). The tradition of 'feelies' (and the term itself) is believed to have originated with "Deadline" (1982), the third Infocom title after "Zork I" and "II". When writing this game, Infocom could not fit all of the information into the limited (80 KB) disk space, so it created the first feelies: extra items that gave more information than could be included within the digital game itself. These included police interviews, the coroner's findings, letters, crime scene evidence and photos of the murder scene. These materials were very difficult for others to copy or otherwise reproduce, and many included information that was essential to completing the game. Seeing the potential benefits of both aiding game-play immersion and providing a measure of creative copy-protection, in addition to acting as a deterrent to software piracy, Infocom and later other companies began creating feelies for numerous titles. In 1987, Infocom released a special version of the first three "Zork" titles together with plot-specific coins and other trinkets. This concept would be expanded as time went on, such that later game feelies would contain passwords, coded instructions, page numbers, or other information that would be required to successfully complete the game. Interactive fiction became a standard product for many software companies. By 1982 "Softline" wrote that "the demands of the market are weighted heavily toward hi-res graphics" in games like Sierra's "The Wizard and the Princess" and its imitators. Such graphic adventures became the dominant form of the genre on computers with graphics, like the Apple II. By 1982 Adventure International began releasing versions of its games with graphics. The company went bankrupt in 1985. Synapse Software and Acornsoft were also closed in 1985.
This left Infocom as the leading company producing text-only adventure games on the Apple II, with sophisticated parsers and writing, and still advertising its lack of graphics as a virtue. The company was bought by Activision in 1986 after the failure of "Cornerstone", Infocom's database software program, and stopped producing text adventures a few years later. Soon after, Telaium/Trillium also closed. Probably the first commercial work of interactive fiction produced outside the U.S. was the dungeon crawl game "Acheton", produced in Cambridge, England, and first commercially released by Acornsoft (later expanded and reissued by Topologika). Other leading companies in the UK were Magnetic Scrolls and Level 9 Computing. Also worthy of mention are Delta 4, Melbourne House, and the homebrew company Zenobi. In the early 1980s, Edu-Ware also produced interactive fiction for the Apple II, as designated by the "if" graphic displayed on startup. Their titles included the "Prisoner" and "Empire" series ("Empire I: World Builders", "Empire II: Interstellar Sharks", "Empire III: Armageddon"). In 1981, CE Software published "SwordThrust" as a commercial successor to the "Eamon" gaming system for the Apple II. SwordThrust and Eamon were simple two-word parser games with many role-playing elements not available in other interactive fiction. While seven different SwordThrust titles were published, the system was vastly overshadowed by the non-commercial Eamon system, which allowed private authors to publish their own titles in the series. By March 1984, there were 48 titles published for the Eamon system (and over 270 titles in total as of March 2013). In Italy, interactive fiction games were mainly published and distributed through tapes included with various magazines. The largest number of games were published in the two magazines Viking and Explorer, with versions for the main 8-bit home computers (Sinclair ZX Spectrum, Commodore 64 and MSX). The software house producing those games was Brainstorm Enterprise, and the most prolific IF author was Bonaventura Di Bello, who produced 70 games in the Italian language. The wave of interactive fiction in Italy lasted for a couple of years thanks to the various magazines promoting the genre, then faded; it remains today a topic of interest for a small group of fans and lesser-known developers, celebrated on websites and in related newsgroups. In Spain, interactive fiction was considered a minority genre, and was not very successful. The first Spanish interactive fiction commercially released was "Yenght" in 1983, by Dinamic Software, for the ZX Spectrum. Later on, in 1987, the same company produced an interactive fiction about "Don Quijote". After several other attempts, the company Aventuras AD, which emerged from Dinamic, became the main interactive fiction publisher in Spain, with titles including a Spanish adaptation of "Colossal Cave Adventure", an adaptation of the Spanish comic "El Jabato", and mainly the "Ci-U-Than" trilogy, composed of "La diosa de Cozumel" (1990), "Los templos sagrados" (1991) and "Chichen Itzá" (1992). During this period, the Club de Aventuras AD (CAAD), the main Spanish-speaking interactive fiction community in the world, was founded; after the end of Aventuras AD in 1992, the CAAD continued on its own, first with its own magazine, and then, with the advent of the Internet, with the launch of an active online community that still produces non-commercial interactive fiction today.
Legend Entertainment was founded by Bob Bates and Mike Verdu in 1989, starting out from the ashes of Infocom. The text adventures produced by Legend Entertainment used (high-resolution) graphics as well as sound. Some of their titles include "Eric the Unready", the "Spellcasting" series and "Gateway" (based on Frederik Pohl's novels). The last text adventure created by Legend Entertainment was "Gateway II" (1992), while the last game ever created by Legend was "" (2003) – a well-known first-person shooter action game using the Unreal Engine for both impressive graphics and realistic physics. In 2004, Legend Entertainment was acquired by Atari, who published "Unreal II", released for both Microsoft Windows and Microsoft's Xbox. Many other companies, such as Level 9 Computing, Magnetic Scrolls, Delta 4 and Zenobi, had closed by 1992. In 1991 and 1992, Activision released "The Lost Treasures of Infocom" in two volumes, a collection containing most of Infocom's games, followed in 1996 by "Classic Text Adventure Masterpieces of Infocom". After the decline of the commercial interactive fiction market in the 1990s, an online community eventually formed around the medium. In 1987, the Usenet newsgroup rec.arts.int-fiction was created, and was soon followed by rec.games.int-fiction. By custom, the topic of rec.arts.int-fiction is interactive fiction authorship and programming, while rec.games.int-fiction encompasses topics related to playing interactive fiction games, such as hint requests and game reviews. As of late 2011, discussions among writers had mostly moved from rec.arts.int-fiction to the Interactive Fiction Community Forum. One of the most important early developments was the reverse-engineering of Infocom's Z-Code format and Z-Machine virtual machine in 1987 by a group of enthusiasts called the InfoTaskForce, and the subsequent development of an interpreter for Z-Code story files. As a result, it became possible to play Infocom's work on modern computers. For years, amateurs within the IF community produced interactive fiction works of relatively limited scope using the Adventure Game Toolkit and similar tools. The breakthrough that allowed the interactive fiction community to truly prosper, however, was the creation and distribution of two sophisticated development systems. In 1987, Michael J. Roberts released TADS, a programming language designed to produce works of interactive fiction. In 1993, Graham Nelson released Inform, a programming language and set of libraries which compiled to a Z-Code story file. Each of these systems allowed anyone with sufficient time and dedication to create a game, and caused a growth boom in the online interactive fiction community. Despite the lack of commercial support, the availability of high-quality tools allowed enthusiasts of the genre to develop new high-quality games. Competitions such as the annual Interactive Fiction Competition for short works, the Spring Thing for longer works, and the XYZZY Awards further helped to improve the quality and complexity of the games. Modern games go much further than the original "Adventure" style, improving upon Infocom games (which relied extensively on puzzle solving, and to a lesser extent on communication with non-player characters) to include experimentation with writing and story-telling techniques. While the majority of modern interactive fiction that is developed is distributed for free, there are some commercial endeavors.
In 1998, Michael Berlyn, a former Implementor at Infocom, started a new game company, Cascade Mountain Publishing, whose goals were to publish interactive fiction. Despite the Interactive Fiction community providing social and financial backing, Cascade Mountain Publishing went out of business in 2000. Other commercial endeavours include Peter Nepstad's "", several games by Howard Sherman published as Malinche Entertainment, The General Coffee Company's "Future Boy!", "Cypher", a graphically enhanced cyberpunk game, and various titles by "Textfyre". Emily Short was commissioned to develop the game "City of Secrets", but the project fell through and she ended up releasing it herself. The original interactive fiction "Colossal Cave Adventure" was programmed in Fortran, a language originally developed by IBM. "Adventure"s parsers could only handle two-word sentences in the form of verb-noun pairs. Infocom's games of 1979–88, such as "Zork", were written using a LISP-like programming language called ZIL (Zork Implementation Language or Zork Interactive Language; it was referred to as both) that compiled into a byte code able to run on a standardized virtual machine called the Z-machine. As the games were text-based and used variants of the same Z-machine interpreter, the interpreter only had to be ported to a computer once, rather than once per game. Each game file included a sophisticated parser which allowed the user to type complex instructions to the game. Unlike earlier works of interactive fiction which only understood commands of the form 'verb noun', Infocom's parser could understand a wider variety of sentences. For instance, one might type "open the large door, then go west", or "go to the hall". With the Z-machine, Infocom was able to release most of their games for most popular home computers of the time simultaneously, including the Apple II family, Atari 800, IBM PC compatibles, Amstrad CPC/PCW (one disc worked on both machines), Commodore 64, Commodore Plus/4, Commodore 128, Kaypro CP/M, Texas Instruments TI-99/4A, the Mac, Atari ST, the Commodore Amiga and the Radio Shack TRS-80. Infocom was also known for shipping creative props, or "feelies" (and even "smellies"), with its games. During the 1990s, interactive fiction was mainly written with C-like languages, such as TADS 2 and Inform 6. A number of systems for writing interactive fiction now exist. The most popular remain Inform, TADS, and ADRIFT, but they diverged in their approaches to IF-writing during the 2000s, giving today's IF writers a genuine choice between systems. By the 2006 IFComp, most games were written for Inform, with a strong minority of games for TADS and ADRIFT, followed by a small number of games for other systems. While familiarity with a programming language leads many new authors to attempt to produce their own complete IF application, most established IF authors recommend use of a specialised IF language, arguing that such systems allow authors to avoid the technicalities of producing a full-featured parser, while allowing broad community support. The choice of authoring system usually depends on the author's desired balance of ease of use versus power, and the portability of the final product. Various other development systems exist as well. Interpreters are the software used to play the works of interactive fiction created with a development system. Since they need to interact with the player, the "story files" created by development systems are programs in their own right.
Rather than running directly on any one computer, they are programs run by interpreters, or virtual machines, which are designed specially for IF. They may be part of the development system, or can be compiled together with the work of fiction as a standalone executable file. The Z-machine was designed by the founders of Infocom in 1979. They were influenced by the then-new idea of a virtual Pascal computer, but replaced P with Z for Zork, the celebrated adventure game of 1977–79. The Z-machine evolved during the 1980s, but over 30 years later it remains in use essentially unchanged. Glulx was designed by Andrew Plotkin in the late 1990s as a new-generation IF virtual machine. It overcomes the technical constraints on the Z-machine by being a 32-bit rather than 16-bit processor. Frotz is a modern Z-machine interpreter originally written in C by Stefan Jokisch in 1995 for DOS. Over time it was ported to other platforms, such as Unix, RISC OS, Mac OS and most recently iOS. Modern Glulx interpreters are based on "Glulxe", by Andrew Plotkin, and "Git", by Iain Merrick. Other interpreters include Zoom for Mac OS X, or for Unix or Linux, maintained by Andrew Hunter, and Spatterlight for Mac OS X, maintained by Tor Andersson. In addition to commercial distribution venues and individual websites, many works of free interactive fiction are distributed through community websites. These include the Interactive Fiction Database (IFDb), The Interactive Fiction Reviews Organization (IFRO), a game catalog and recommendation engine, and the Interactive Fiction Archive. Works may be distributed for playing in a separate interpreter, in which case they are often made available in the Blorb package format that many interpreters support. A filename ending in .zblorb is a story file intended for a Z-machine in a Blorb wrapper, while a filename ending in .gblorb is a story file intended for Glulx in a Blorb wrapper. It is not common, but IF files are sometimes also seen without a Blorb wrapper, though this usually means cover art, help files, and so forth are missing, like a book with the covers torn off. Z-machine story files usually have names ending in .z5 or .z8, the number being a version number, and Glulx story files usually end in .ulx. Alternatively, works may be distributed for playing in a web browser: for example, the 'Parchment' project is a web browser-based IF interpreter for both Z-machine and Glulx files. Some software, such as Twine, publishes directly to HTML, the standard language used to create web pages, reducing the requirement for an interpreter or virtual machine.
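These extension conventions are mechanical enough to put in code. The small Python helper below is hypothetical (the mapping simply restates the naming conventions described above; it is not any official specification): it guesses which interpreter family a downloaded story file needs from its filename.

import os

# Map story-file extensions to interpreter families, following the naming
# conventions described above (illustrative, not an official specification).
FORMATS = {
    ".z5": "Z-machine",
    ".z8": "Z-machine",
    ".zblorb": "Z-machine (Blorb-wrapped)",
    ".ulx": "Glulx",
    ".gblorb": "Glulx (Blorb-wrapped)",
}

def interpreter_family(filename):
    """Guess which interpreter a story file needs from its extension."""
    ext = os.path.splitext(filename)[1].lower()
    return FORMATS.get(ext, "unknown format")

for name in ["curses.z5", "advent.ulx", "story.gblorb", "notes.txt"]:
    print(f"{name}: {interpreter_family(name)}")

A real launcher would also sniff the file header, since Blorb files can wrap either virtual machine, but the extension alone is usually enough to choose between a Frotz-style and a Glulxe-style interpreter.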
https://en.wikipedia.org/wiki?curid=14789
Ice hockey Ice hockey is a contact team sport played on ice, usually in a rink, in which two teams of skaters use their sticks to shoot a vulcanized rubber puck into their opponent's net to score goals. The sport is known to be fast-paced and physical, with teams usually fielding six players at a time: one goaltender, and five players who skate the span of the ice trying to control the puck and score goals against the opposing team. Ice hockey is most popular in Canada, central and eastern Europe, the Nordic countries, Russia, and the United States. Ice hockey is the official national winter sport of Canada. In addition, ice hockey is the most popular winter sport in Belarus, Croatia, the Czech Republic, Finland, Latvia, Russia, Slovakia, Sweden, and Switzerland. North America's National Hockey League (NHL) is the highest level for men's ice hockey and the strongest professional ice hockey league in the world. The Kontinental Hockey League (KHL) is the highest league in Russia and much of Eastern Europe. The International Ice Hockey Federation (IIHF) is the formal governing body for international ice hockey, with the IIHF managing international tournaments and maintaining the IIHF World Ranking. Worldwide, there are ice hockey federations in 76 countries. In Canada, the United States, the Nordic countries, and some other European countries, the sport is known simply as hockey; the name "ice hockey" is used in places where "hockey" more often refers to the more established field hockey, such as countries in South America, Asia, Africa, Australasia, and some European countries including the United Kingdom, Ireland and the Netherlands. Ice hockey is believed to have evolved from simple stick-and-ball games played in the 18th and 19th centuries in the United Kingdom and elsewhere. These games were brought to North America, and several similar winter games using informal rules were developed, such as shinny and ice polo. The contemporary sport of ice hockey was developed in Canada, most notably in Montreal, where the first indoor hockey game was played on March 3, 1875. Some characteristics of that game, such as the length of the ice rink and the use of a puck, have been retained to this day. Amateur ice hockey leagues began in the 1880s, and professional ice hockey originated around 1900. The Stanley Cup, emblematic of ice hockey club supremacy, was first awarded in 1893 to recognize the Canadian amateur champion and later became the championship trophy of the NHL. In the early 1900s, the Canadian rules were adopted by the Ligue Internationale de Hockey sur Glace, the precursor of the IIHF, and the sport was played for the first time at the Olympics during the 1920 Summer Olympics. In international competitions, the national teams of six countries (the Big Six) predominate: Canada, Czech Republic, Finland, Russia, Sweden and the United States. Of the 69 medals awarded all-time in men's competition at the Olympics, only seven were not awarded to one of those countries (or two of their precursors, the Soviet Union for Russia and Czechoslovakia for the Czech Republic). In the annual Ice Hockey World Championships, 177 of 201 medals have been awarded to the six nations. Teams outside the Big Six have won only five medals in either competition since 1953.
The World Cup of Hockey is organized by the National Hockey League and the National Hockey League Players' Association (NHLPA), unlike the annual World Championships and quadrennial Olympic tournament, both run by the International Ice Hockey Federation. World Cup games are played under NHL rules and not those of the IIHF, and the tournament occurs prior to the NHL pre-season, allowing all NHL players to be available, unlike the World Championships, which overlap with the NHL's Stanley Cup playoffs. Furthermore, all 12 Women's Olympic and 36 IIHF World Women's Championship medals were awarded to one of these six countries. The Canadian national team or the United States national team have between them won every gold medal of either series. In England, field hockey has historically been called simply "hockey", and it is what the earliest appearances of the word in print referred to. The first known mention spelled as "hockey" occurred in the 1772 book "Juvenile Sports and Pastimes, to Which Are Prefixed, Memoirs of the Author: Including a New Mode of Infant Education", by Richard Johnson (Pseud. Master Michel Angelo), whose chapter XI was titled "New Improvements on the Game of Hockey". The 1527 Statute of Galway banned a sport called "hokie" ("the hurling of a little ball with sticks or staves"). A form of this word was thus being used in the 16th century, though much removed from its current usage. The belief that hockey was mentioned in a 1363 proclamation by King Edward III of England is based on modern translations of the proclamation, which was originally in Latin and explicitly forbade the games "Pilam Manualem, Pedivam, & Bacularem: & ad Canibucam & Gallorum Pugnam". The English historian and biographer John Strype did not use the word "hockey" when he translated the proclamation in 1720, instead translating "Canibucam" as "Cambuck"; this may have referred to either an early form of hockey or a game more similar to golf or croquet. According to the Austin Hockey Association, the word "puck" derives from the Scottish Gaelic "puc" or the Irish "poc" (to poke, punch or deliver a blow). "...The blow given by a hurler to the ball with his camán or hurley is always called a puck." Stick-and-ball games date back to pre-Christian times. In Europe, these games included the Irish game of hurling, the closely related Scottish game of shinty, and versions of field hockey (including bandy ball, played in England). IJscolf, a game resembling colf on an ice-covered surface, was popular in the Low Countries between the Middle Ages and the Dutch Golden Age. It was played with a wooden curved bat (called a "colf" or "kolf"), a wooden or leather ball and two poles (or nearby landmarks), with the objective of hitting the chosen point in the fewest strokes. A similar game ("knattleikr") had been played for a thousand years or more by the Scandinavian peoples, as documented in the Icelandic sagas. Polo has been referred to as "hockey on horseback". In England, field hockey developed in the late 17th century, and there is evidence that some games of field hockey took place on the ice. These games of "hockey on ice" were sometimes played with a bung (a plug of cork or oak used as a stopper on a barrel). William Pierre Le Cocq stated, in a 1799 letter written in Chesham, England: I must now describe to you the game of Hockey; we have each a stick turning up at the end. We get a bung. There are two sides one of them knocks one way and the other side the other way.
If any one of the sides makes the bung reach that end of the churchyard it is victorious. A 1797 engraving unearthed by Swedish sport historians Carl Gidén and Patrick Houda shows a person on skates with a stick and bung on the River Thames, probably in December 1796. British soldiers and immigrants to Canada and the United States brought their stick-and-ball games with them and played them on the ice and snow of winter. In 1825, John Franklin wrote "The game of hockey played on the ice was the morning sport" on Great Bear Lake during one of his Arctic expeditions. A mid-1830s watercolour portrays New Brunswick lieutenant-governor Archibald Campbell and his family with British soldiers on skates playing a stick-on-ice sport. Captain R.G.A. Levinge, a British Army officer in New Brunswick during Campbell's time, wrote about "hockey on ice" on Chippewa Creek (a tributary of the Niagara River) in 1839. In 1843, another British Army officer in Kingston, Ontario, wrote, "Began to skate this year, improved quickly and had great fun at hockey on the ice." An 1859 "Boston Evening Gazette" article referred to an early game of hockey on ice in Halifax that year. An 1835 painting by John O'Toole depicts skaters with sticks and bung on a frozen stream in the American state of West Virginia, at that time still part of Virginia. In the same era, the Mi'kmaq, a First Nations people of the Canadian Maritimes, also had a stick-and-ball game. Canadian oral histories describe a traditional stick-and-ball game played by the Mi'kmaq, and Silas Tertius Rand (in his 1894 "Legends of the Micmacs") describes a Mi'kmaq ball game known as "tooadijik". Rand also describes a game played (probably after European contact) with hurleys, known as "wolchamaadijik". Sticks made by the Mi'kmaq were used by the British for their games. Early 19th-century paintings depict shinney (or "shinny"), an early form of hockey with no standard rules which was played in Nova Scotia. Many of these early games absorbed the physical aggression of what the Onondaga called "dehuntshigwa'es" (lacrosse). Shinney was played on the St. Lawrence River at Montreal and Quebec City, and in Kingston and Ottawa. The number of players was often large. To this day, shinney (derived from "shinty") is a popular Canadian term for an informal type of hockey, either ice or street hockey. Thomas Chandler Haliburton, in "The Attache: Second Series" (published in 1844), imagined a dialogue between two of the novel's characters that mentions playing "hurly on the long pond on the ice". This has been interpreted by some historians from Windsor, Nova Scotia, as reminiscent of the days when the author was a student at King's College School in that town in 1810 and earlier. Based on Haliburton's quote, claims were made that modern hockey was invented in Windsor, Nova Scotia, by King's College students and perhaps named after an individual ("Colonel Hockey's game"). Others claim that the origins of hockey come from games played in the area of Dartmouth and Halifax in Nova Scotia. However, several references have been found to hurling and shinty being played on the ice long before the earliest references from both Windsor and Dartmouth/Halifax, and the word "hockey" was used to designate a stick-and-ball game at least as far back as 1773, as it was mentioned in the book "Juvenile Sports and Pastimes, to Which Are Prefixed, Memoirs of the Author: Including a New Mode of Infant Education" by Richard Johnson (Pseud.
Master Michel Angelo), whose chapter XI was titled "New Improvements on the Game of Hockey". While the game's origins lie elsewhere, Montreal is at the centre of the development of the sport of contemporary ice hockey, and is recognized as the birthplace of organized ice hockey. On March 3, 1875, the first organized indoor game was played at Montreal's Victoria Skating Rink between two nine-player teams, including James Creighton and several McGill University students. Instead of a ball or bung, the game featured a "flat circular piece of wood" (to keep it in the rink and to protect spectators). The goal posts were apart (today's goals are six feet wide). In 1876, games played in Montreal were "conducted under the 'Hockey Association' rules"; the Hockey Association was England's field hockey organization. In 1877, "The Gazette" (Montreal) published a list of seven rules, six of which were largely based on six of the Hockey Association's twelve rules, with only minor differences (even the word "ball" was kept); the one added rule explained how disputes should be settled. The McGill University Hockey Club, the first ice hockey club, was founded in 1877 (followed by the Quebec Hockey Club in 1878 and the Montreal Victorias in 1881). In 1880, the number of players per side was reduced from nine to seven. The number of teams grew, enough to hold the first "world championship" of ice hockey at Montreal's annual Winter Carnival in 1883. The McGill team won the tournament and was awarded the Carnival Cup. The game was divided into thirty-minute halves. The positions were now named: left and right wing, centre, rover, point and cover-point, and goaltender. In 1886, the teams competing at the Winter Carnival organized the Amateur Hockey Association of Canada (AHAC), and played a season comprising "challenges" to the existing champion. In Europe, it is believed that in 1885 the Oxford University Ice Hockey Club was formed to play the first Ice Hockey Varsity Match against traditional rival Cambridge in St. Moritz, Switzerland; however, this is undocumented. The match was won by the Oxford Dark Blues, 6–0; the first photographs and team lists date from 1895. This rivalry continues, claiming to be the oldest hockey rivalry in history; a similar claim is made about the rivalry between Queen's University at Kingston and Royal Military College of Kingston, Ontario. Since 1986, considered the 100th anniversary of the rivalry, teams of the two colleges play for the Carr-Harris Cup. In 1888, the Governor General of Canada, Lord Stanley of Preston (whose sons and daughter were hockey enthusiasts), first attended the Montreal Winter Carnival tournament and was impressed with the game. In 1892, realizing that there was no recognition for the best team in Canada (although a number of leagues had championship trophies), he purchased a silver bowl for use as a trophy. The Dominion Hockey Challenge Cup (which later became known as the Stanley Cup) was first awarded in 1893 to the Montreal Hockey Club, champions of the AHAC; it continues to be awarded annually to the National Hockey League's championship team. Stanley's son Arthur helped organize the Ontario Hockey Association, and Stanley's daughter Isobel was one of the first women to play ice hockey. By 1893, there were almost a hundred teams in Montreal alone; in addition, there were leagues throughout Canada. 
Winnipeg hockey players used cricket pads to better protect the goaltender's legs; they also introduced the "scoop" shot, or what is now known as the wrist shot. William Fairbrother, from Ontario, Canada, is credited with inventing the ice hockey net in the 1890s. Goal nets became a standard feature of the Canadian Amateur Hockey League (CAHL) in 1900. Left and right defence began to replace the point and cover-point positions in the OHA in 1906. In the United States, ice polo, played with a ball rather than a puck, was popular during this period; however, by 1893 Yale University and Johns Hopkins University held their first ice hockey matches. American financier Malcolm Greene Chace is credited with being the father of hockey in the United States. In 1892, as an amateur tennis player, Chace visited Niagara Falls, New York, for a tennis match, where he met some Canadian hockey players. Soon afterwards, Chace put together a team of men from Yale, Brown, and Harvard, and toured across Canada as captain of this team. The first collegiate hockey match in the United States was played between Yale University and Johns Hopkins in Baltimore. Yale, led by captain Chace, beat Hopkins, 2–1. In 1896, the first ice hockey league in the US, the US Amateur Hockey League, was founded in New York City, shortly after the opening of the artificial-ice St. Nicholas Rink. Lord Stanley's five sons were instrumental in bringing ice hockey to Europe, defeating a court team (which included the future Edward VII and George V) at Buckingham Palace in 1895. By 1903, a five-team league had been founded. The "Ligue Internationale de Hockey sur Glace" was founded in 1908 to govern international competition, and the first European championship was won by Great Britain in 1910. The sport grew further in Europe in the 1920s, after ice hockey became an Olympic sport. Many bandy players switched to hockey so as to be able to compete in the Olympics. Bandy remained popular in the Soviet Union, which only started its ice hockey program in the 1950s. In the mid-20th century, the "Ligue" became the International Ice Hockey Federation. As the popularity of ice hockey as a spectator sport grew, earlier rinks were replaced by larger rinks. Most of the early indoor ice rinks have been demolished; Montreal's Victoria Rink, built in 1862, was demolished in 1925. Many older rinks, such as Denman Arena, Dey's Arena, the Quebec Skating Rink and Montreal Arena, succumbed to fire, a hazard of the buildings' wood construction. The Stannus Street Rink in Windsor, Nova Scotia (built in 1897) may be the oldest still in existence; however, it is no longer used for hockey. The Aberdeen Pavilion (built in 1898) in Ottawa was used for hockey in 1904 and is the oldest existing facility that has hosted Stanley Cup games. The oldest indoor ice hockey arena still in use today for hockey is Boston's Matthews Arena, which was built in 1910. It has been modified extensively several times in its history and is used today by Northeastern University for hockey and other sports. It was the original home rink of the Boston Bruins, the oldest United States-based team in the NHL, who began play in the league at what is now Matthews Arena on December 1, 1924. Madison Square Garden in New York City, built in 1968, is the oldest continuously operating arena in the NHL. Professional hockey has existed since the early 20th century. By 1902, the Western Pennsylvania Hockey League was the first to employ professionals. 
The league joined with teams in Michigan and Ontario to form the first fully professional league—the International Professional Hockey League (IPHL)—in 1904. The WPHL and IPHL hired players from Canada; in response, Canadian leagues began to pay players (who played with amateurs). The IPHL, cut off from its largest source of players, disbanded in 1907. By then, several professional hockey leagues were operating in Canada (with leagues in Manitoba, Ontario and Quebec). In 1910, the National Hockey Association (NHA) was formed in Montreal. The NHA would further refine the rules: dropping the rover position, dividing the game into three 20-minute periods and introducing minor and major penalties. After re-organizing as the National Hockey League in 1917, the league expanded into the United States, starting with the Boston Bruins in 1924. Professional hockey leagues developed later in Europe, but amateur leagues leading to national championships were in place. One of the first was the Swiss National League A, founded in 1916. Today, professional leagues have been introduced in most countries of Europe. Top European leagues include the Kontinental Hockey League, the Czech Extraliga, the Finnish Liiga and the Swedish Hockey League. While the general characteristics of the game stay the same wherever it is played, the exact rules depend on the particular code of play being used. The two most important codes are those of the IIHF and the NHL. Both of the codes, and others, originated from Canadian rules of ice hockey of the early 20th century. Ice hockey is played on a "hockey rink". During normal play, there are six players per side on the ice at any time, one of them being the goaltender, each of whom is on ice skates. The objective of the game is to score "goals" by shooting a hard vulcanized rubber disc, the "puck", into the opponent's goal net, which is placed at the opposite end of the rink. The players use their sticks to pass or shoot the puck. Within certain restrictions, players may redirect the puck with any part of their body. Players may not hold the puck in their hand and are prohibited from using their hands to pass the puck to their teammates unless they are in the defensive zone. Players are also prohibited from kicking the puck into the opponent's goal, though unintentional redirections off the skate are permitted. Players may not intentionally bat the puck into the net with their hands. Hockey is an off-side game, meaning that forward passes are allowed, unlike in rugby. Before the 1930s, hockey was an on-side game, meaning that only backward passes were allowed. Those rules favoured individual stick-handling as a key means of driving the puck forward. With the arrival of offside rules, the forward pass transformed hockey into a true team sport, where individual performance diminished in importance relative to team play, which could now be coordinated over the entire surface of the ice rather than only among players behind the puck. The six players on each team are typically divided into three forwards, two defencemen, and a goaltender. The term "skaters" is typically used to describe all players who are not goaltenders. The "forward" positions consist of a "centre" and two "wingers": a "left wing" and a "right wing". Forwards often play together as units or "lines", with the same three forwards always playing together. The "defencemen" usually stay together as a pair, generally divided between left and right. 
Left and right side wingers or defencemen are generally positioned as such, based on the side on which they carry their stick. A substitution of an entire unit at once is called a "line change". Teams typically employ alternate sets of forward lines and defensive pairings when "short-handed" or on a "power play". The goaltender stands in a semi-circle in the defensive zone, usually painted blue, called the "crease", keeping pucks out of the goal. Substitutions are permitted at any time during the game, although during a stoppage of play the home team is permitted the final change. When players are substituted during play, it is called changing "on the fly". A new NHL rule added in the 2005–06 season prevents a team from changing their line after they "ice" the puck. The boards surrounding the ice help keep the puck in play, and they can also be used as tools to play the puck. Players are permitted to bodycheck opponents into the boards as a means of stopping progress. The referees, linesmen and the outsides of the goal are "in play" and do not cause a stoppage of the game when the puck or players are influenced (by either bouncing or colliding) into them. Play can be stopped if the goal is knocked out of position. Play often proceeds for minutes without interruption. When play is stopped, it is restarted with a faceoff. Two players face each other and an official drops the puck to the ice, where the two players attempt to gain control of the puck. Markings (circles) on the ice indicate the locations for the faceoff and guide the positioning of players. The three major rules of play in ice hockey that limit the movement of the puck are "offside", "icing", and the puck going out of play. A player is offside if he enters his opponent's zone before the puck itself. In many situations, a player may not "ice the puck", that is, shoot the puck all the way across both the centre line and the opponent's goal line. The puck goes out of play whenever it goes past the perimeter of the ice rink (onto the player benches, over the glass, or onto the protective netting above the glass) and a stoppage of play is called by the officials using whistles. It does not matter if the puck comes back onto the ice surface from those areas, as the puck is considered dead once it leaves the perimeter of the rink. Under IIHF rules, each team may carry a maximum of 20 players and two goaltenders on their roster. NHL rules restrict the total number of players per game to 18, plus two goaltenders. In the NHL, the players are usually divided into four lines of three forwards, and into three pairs of defencemen. On occasion, teams may elect to substitute an extra defenceman for a forward. The seventh defenceman may play as a substitute defenceman, spend the game on the bench, or, if a team chooses to play four lines, see ice time on the fourth line as a forward. A professional game consists of three periods of twenty minutes, the clock running only when the puck is in play. The teams change ends after each period of play, including overtime. Recreational leagues and children's leagues often play shorter games, generally with three shorter periods of play. Various procedures are used if a tie occurs. In tournament play, as well as in the NHL playoffs, North Americans favour "sudden death overtime", in which the teams continue to play twenty-minute periods until a goal is scored. 
Up until the 1999–2000 season, regular-season NHL games were settled with a single five-minute sudden death period with five players (plus a goalie) per side, with both teams awarded one point in the standings in the event of a tie. With a goal, the winning team would be awarded two points and the losing team none (just as if they had lost in regulation). From the 1999–2000 season until 2003–04, the National Hockey League decided ties by playing a single five-minute sudden death overtime period with four skaters (plus the goalie) per side. In the event of a tie, each team would still receive one point in the standings, but in the event of a victory the winning team would be awarded two points in the standings and the losing team one point. The idea was to discourage teams from playing for a tie, since previously some teams might have preferred a tie and one point to risking a loss and zero points. The only exception to this rule is if a team opts to pull their goalie in exchange for an extra skater during overtime and is subsequently scored upon (an "empty net" goal), in which case the losing team receives no points for the overtime loss. Since the 2015–16 season, the single five-minute sudden death overtime session involves three skaters on each side. Since three skaters must always be on the ice in an NHL game, the consequences of penalties are slightly different from those during regulation play. If a team is on a power play when overtime begins, that team will play with more than three skaters (usually four, very rarely five) until the expiration of the penalty. Any penalty during overtime that would result in a team losing a skater during regulation instead causes the non-penalized team to add a skater. Once the penalized team's penalty ends, the number of skaters on each side is adjusted accordingly: in regulation the penalized team adds a skater back, while in overtime the non-penalized team subtracts its extra skater at the next stoppage of play. International play and several North American professional leagues, including the NHL (in the regular season), now use an overtime period identical to that from 1999–2000 to 2003–04, followed by a penalty shootout. If the score remains tied after an extra overtime period, the subsequent shootout consists of three players from each team taking penalty shots. After these six total shots, the team with the most goals is awarded the victory. If the score is still tied, the shootout then proceeds to a "sudden death" format. Regardless of the number of goals scored during the shootout by either team, the final score recorded will award the winning team one more goal than the score at the end of regulation time. In the NHL, if a game is decided in overtime or by a shootout, the winning team is awarded two points in the standings and the losing team is awarded one point. Ties no longer occur in the NHL. The overtime format for the NHL playoffs differs from that of the regular season. In the playoffs there are no shootouts or ties. If a game is tied after regulation, an additional 20 minutes of five-on-five sudden death overtime is played. If the game remains tied after that overtime, further 20-minute periods are played until a team scores, winning the match. Since 2019, the IIHF World Championships and the Olympic medal games have used the same format, but with three skaters on each side. In ice hockey, infractions of the rules lead to play stoppages whereby the play is restarted at a face off. 
Some infractions result in the imposition of a "penalty" on a player or team. In the simplest case, the offending player is sent to the "penalty box" and their team has to play with one fewer player on the ice for a designated amount of time. "Minor" penalties last for two minutes, "major" penalties last for five minutes, and a "double minor" penalty is two "consecutive" penalties of two minutes' duration each. A single minor penalty may be extended by a further two minutes for causing visible injury to the victimized player, most commonly when blood is drawn by a high stick. Players may also be assessed personal extended penalties or game expulsions for misconduct in addition to the penalty or penalties their team must serve. The team that has been given a penalty is said to be playing "short-handed" while the opposing team is on a "power play". A two-minute minor penalty is often charged for lesser infractions such as tripping, elbowing, roughing, high-sticking, delay of the game, too many players on the ice, boarding, illegal equipment, charging (leaping into an opponent or body-checking him after taking more than two strides), holding, holding the stick (grabbing an opponent's stick), interference, hooking, slashing, kneeing, unsportsmanlike conduct (arguing a penalty call with the referee, extremely vulgar or inappropriate verbal comments), "butt-ending" (striking an opponent with the knob of the stick—a very rare penalty), "spearing", or cross-checking. As of the 2005–06 season, a minor penalty is also assessed for diving, where a player embellishes or simulates an offence. More egregious fouls may be penalized by a four-minute double-minor penalty, particularly those that injure the victimized player. These penalties end either when the time runs out or when the other team scores during the power play. If a goal is scored during the first two minutes of a double minor, the penalty clock is set down to two minutes, effectively expiring the first minor penalty. Five-minute major penalties are called for especially violent instances of most minor infractions that result in intentional injury to an opponent, or when a minor penalty results in visible injury (such as bleeding), as well as for fighting. Major penalties are always served in full; they do not terminate on a goal scored by the other team. Major penalties assessed for fighting are typically offsetting, meaning neither team is short-handed and the players exit the penalty box upon a stoppage of play following the expiration of their respective penalties. The foul of boarding (defined as "check[ing] an opponent in such a manner that causes the opponent to be thrown violently in the boards") is penalized either by a minor or major penalty at the discretion of the referee, based on the severity of the hit. A minor or major penalty for boarding is often assessed when a player checks an opponent from behind and into the boards. Some varieties of penalties do not always require the offending team to play a man short. Concurrent five-minute major penalties in the NHL usually result from fighting. In the case of two players being assessed five-minute fighting majors, both players serve five minutes without their teams incurring the loss of a player (both teams still have a full complement of players on the ice). This differs from the case of two players from opposing sides receiving minor penalties for more common infractions at the same time, or at any overlapping moment. 
In this case, both teams will have only four skating players (not counting the goaltender) until one or both penalties expire (if one penalty expires before the other, the opposing team gets a power play for the remainder of the time); this applies regardless of current pending penalties. However, in the NHL, a team always has at least three skaters on the ice. Thus, ten-minute "misconduct" penalties are served in full by the penalized player, but his team may immediately substitute another player on the ice "unless" a minor or major penalty is assessed in conjunction with the misconduct (a "two-and-ten" or "five-and-ten"). In this case, the team designates another player to serve the minor or major; both players go to the penalty box, but only the designee may not be replaced, and he is released upon the expiration of the two or five minutes, at which point the ten-minute misconduct begins. In addition, "game misconducts" are assessed for deliberate intent to inflict severe injury on an opponent (at the officials' discretion), or for a major penalty for a stick infraction or repeated major penalties. The offending player is ejected from the game and must immediately leave the playing surface (he does not sit in the penalty box); meanwhile, if an additional minor or major penalty is assessed, a designated player must serve out that segment of the penalty in the box (similar to the above-mentioned "two-and-ten"). In some rare cases, a player may receive up to nineteen minutes in penalties for one string of plays. This could involve receiving a four-minute double minor penalty, getting in a fight with an opposing player who retaliates, and then receiving a game misconduct after the fight. In this case, the player is ejected and two teammates must serve the double-minor and major penalties. A penalty shot is awarded to a player when the illegal actions of another player stop a clear scoring opportunity, most commonly when the player is on a breakaway. A penalty shot allows the obstructed player to pick up the puck on the centre red line and attempt to score on the goalie with no other players on the ice, to compensate for the earlier missed scoring opportunity. A penalty shot is also awarded for a defender other than the goaltender covering the puck in the goal crease, a goaltender intentionally displacing his own goal posts during a breakaway to avoid a goal, a defender intentionally displacing his own goal posts when there is less than two minutes to play in regulation time or at any point during overtime, or a player or coach intentionally throwing a stick or other object at the puck or the puck carrier where the throwing action disrupts a shot or pass play. Officials also stop play for puck movement violations, such as using one's hands to pass the puck in the offensive end, but no players are penalized for these offences. The sole exceptions are deliberately falling on or gathering the puck to the body, carrying the puck in the hand, and shooting the puck out of play in one's defensive zone (all penalized two minutes for delay of game). In the NHL, a unique penalty applies to goaltenders: they are forbidden to play the puck in the "corners" of the rink near their own net, and doing so results in a two-minute penalty against the goaltender's team. The goaltender may play the puck only in the area in front of the goal line and in the area immediately behind the net (marked by two red lines on either side of the net). 
An additional rule that has never been a penalty, but was an infraction in the NHL before recent rule changes, is the two-line offside pass. Prior to the 2005–06 NHL season, play was stopped when a pass from inside a team's defending zone crossed the centre line, with a face-off held in the defending zone of the offending team. Now, the centre line is no longer used in the NHL to determine a two-line pass infraction, a change that the IIHF had adopted in 1998. Players are now able to make passes that cross both the blue line and the centre ice red line. The NHL has taken steps to speed up the game of hockey and create a game of finesse by retreating from the past, when illegal hits, fights, and "clutching and grabbing" among players were commonplace. Rules are now more strictly enforced, resulting in more penalties, which in turn provides more protection to the players and facilitates more goals being scored. The governing body for amateur hockey in the United States has implemented many new rules to reduce the number of stick-on-body occurrences, as well as other detrimental and illegal facets of the game ("zero tolerance"). In men's hockey, but not in women's, a player may use his hip or shoulder to hit another player if the player has the puck or is the last to have touched it. This use of the hip and shoulder is called "body checking". Not all physical contact is legal—in particular, hits from behind, hits to the head and most types of forceful stick-on-body contact are illegal. A "delayed penalty call" occurs when a penalty offence is committed by the team that does not have possession of the puck. In this circumstance, the team with possession of the puck is allowed to complete the play; that is, play continues until a goal is scored, a player on the opposing team gains control of the puck, or the team in possession commits an infraction or penalty of their own. Because the team on which the penalty was called cannot control the puck without stopping play, it is impossible for them to score a goal. In these cases, the team in possession of the puck can pull the goalie for an extra attacker without fear of being scored on. However, it is possible for the controlling team to mishandle the puck into their own net. If a delayed penalty is signalled and the team in possession scores, the penalty is still assessed to the offending player, but not served. In 2012, this rule was changed by the United States' National Collegiate Athletic Association (NCAA) for college-level hockey. In college games, the penalty is still enforced even if the team in possession scores. A typical game of hockey is governed by two to four "officials" on the ice, charged with enforcing the rules of the game. There are typically two "linesmen", who are mainly responsible for calling "offside" and "icing" violations, breaking up fights, and conducting faceoffs, and one or two "referees", who call goals and all other penalties. Linesmen can, however, report to the referee(s) that a penalty should be assessed against an offending player in some situations. The restrictions on this practice vary depending on the governing rules. On-ice officials are assisted by off-ice officials who act as goal judges, time keepers, and official scorers. The most widespread system in use today is the "three-man system", which uses one referee and two linesmen. Another, less commonly used, system is the two-referee and one-linesman system. This system is very close to the regular three-man system except for a few procedural changes. 
A number of leagues, the National Hockey League first among them, have started to implement the "four-official system", in which an additional referee is added to aid in calling penalties that are difficult for a single referee to assess. The system has been used in every NHL game since 2001, at IIHF World Championships, the Olympics and in many professional and high-level amateur leagues in North America and Europe. Officials are selected by the league they work for. Amateur hockey leagues use guidelines established by national organizing bodies as a basis for choosing their officiating staffs. In North America, the national organizing bodies Hockey Canada and USA Hockey approve officials according to their experience level as well as their ability to pass rules knowledge and skating ability tests. Hockey Canada has officiating levels I through VI. USA Hockey has officiating levels 1 through 4. Since men's ice hockey is a full contact sport, body checks are allowed, so injuries are a common occurrence. Protective equipment is mandatory and is enforced in all competitive situations. This includes a helmet with either a visor or a full face mask, shoulder pads, elbow pads, mouth guard, protective gloves, heavily padded shorts (also known as hockey pants) or a girdle, athletic cup (also known as a jock, for males; and jill, for females), shin pads, skates, and (optionally) a neck protector. Goaltenders use different equipment. With hockey pucks approaching them at speeds of up to 100 mph (160 km/h), they must wear equipment offering more protection. Goaltenders wear specialized goalie skates (built more for side-to-side movement than for skating forwards and backwards), a jock or jill, large leg pads (there are size restrictions in certain leagues), a blocking glove, a catching glove, a chest protector, a goalie mask, and a large jersey. Goaltenders' equipment has continually become larger, leading to fewer goals in each game and many official rule changes. Hockey skates are optimized for physical acceleration, speed and manoeuvrability. This includes rapid starts, stops, turns, and changes in skating direction. In addition, they must be rigid and tough to protect the skater's feet from contact with other skaters, sticks, pucks, the boards, and the ice itself. Rigidity also improves the overall manoeuvrability of the skate. Blade length, thickness (width), and curvature (rocker/radius, front to back, and radius of hollow, across the blade width) are quite different from those of speed or figure skates. Hockey players usually adjust these parameters based on their skill level, position, and body type. The blades of most skates are about 1/8 inch (3 mm) thick. The hockey stick consists of a long, relatively wide, and slightly curved flat blade, attached to a shaft. The curve itself has a big impact on performance: a deep curve makes it easier to lift the puck, while a shallow curve allows for easier backhand shots. The flex of the stick also affects performance: a stiffer stick generally suits a stronger player, who seeks a balanced flex that bends easily yet still snaps back strongly, sending the puck flying at high speed. The stick is quite distinct from those used in other sports and is best suited to hitting and controlling the flat puck. Its unique shape contributed to the early development of the game. Ice hockey is a full contact sport and carries a high risk of injury. 
Players move at high speed, and much of the game revolves around physical contact between the players. Skate blades, hockey sticks, shoulder contact, hip contact, and hockey pucks can all potentially cause injuries. The types of injuries associated with hockey include lacerations, concussions, contusions, ligament tears, broken bones, hyperextensions, and muscle strains. Women's ice hockey players are allowed to contact other players but are not allowed to body check. Compared to athletes who play other sports, ice hockey players are at higher risk of overuse injuries and injuries caused by early sports specialization by teenagers. According to the Hughston Health Alert, "Lacerations to the head, scalp, and face are the most frequent types of injury [in hockey]." Even a shallow cut to the head results in the loss of a large amount of blood. Direct trauma to the head is estimated to account for 80% of all hockey injuries as a result of player contact with other players or hockey equipment. One of the leading causes of head injury is body checking from behind. Due to the danger of delivering a check from behind, many leagues, including the NHL, have made this a major and game misconduct penalty (called "boarding"). Another type of check that accounts for many of the player-to-player contact concussions is a check to the head resulting in a misconduct penalty (called "head contact"). A check to the head can be defined as delivering a hit while the receiving player's head is down and their waist is bent, with the aggressor targeting the opponent's head. The most dangerous result of a head injury in hockey is a concussion. Most concussions occur during player-to-player contact rather than when a player is checked into the boards. Checks to the head have accounted for nearly 50% of concussions that players in the National Hockey League have suffered. In recent years, the NHL has implemented new rules which penalize and suspend players for illegal checks to the head, as well as checks to unsuspecting players. Concussions may go unreported because there are no obvious physical signs if a player is not knocked unconscious. This can prove dangerous if a player decides to return to play without receiving proper medical attention. Studies show that ice hockey causes 44.3% of all traumatic brain injuries among Canadian children. In severe cases, traumatic brain injuries can result in death, though such deaths are rare. An important defensive tactic is checking—attempting to take the puck from an opponent or to remove the opponent from play. "Stick checking", "sweep checking", and "poke checking" are legal uses of the stick to obtain possession of the puck. The "neutral zone trap" is designed to isolate the puck carrier in the neutral zone, preventing him from entering the offensive zone. "Body checking" is using one's shoulder or hip to strike an opponent who has the puck or who is the last to have touched it (the last person to have touched the puck is still legally "in possession" of it, although a penalty is generally called if he is checked more than two seconds after his last touch). Body checking is also a penalty in certain leagues in order to reduce the chance of injury to players. The term checking is often used to refer specifically to body checking, its broader definition generally known only among fans of the game. 
Offensive tactics include improving a team's position on the ice by advancing the puck out of one's own zone towards the opponent's zone, progressively gaining lines: first one's own blue line, then the red line, and finally the opponent's blue line. NHL rules instituted for the 2005–06 season redefined the offside rule to make the two-line pass legal; a player may pass the puck from behind his own blue line, past both that blue line and the centre red line, to a player on the near side of the opponents' blue line. Offensive tactics are designed ultimately to score a goal by taking a shot. When a player purposely directs the puck towards the opponent's goal, he or she is said to "shoot" the puck. A "deflection" is a shot that redirects a shot or a pass from another player towards the goal, by allowing the puck to strike the stick and carom towards the goal. A "one-timer" is a shot struck directly off a pass, without receiving the pass and shooting in two separate actions. "Headmanning the puck", also known as "breaking out", is the tactic of rapidly passing to the player farthest down the ice. "Loafing", also known as "cherry-picking", is when a player, usually a forward, skates behind an attacking team, instead of playing defence, in an attempt to create an easy scoring chance. A team that is losing by one or two goals in the last few minutes of play will often elect to "pull the goalie"; that is, remove the goaltender and replace him or her with an "extra attacker" on the ice in the hope of gaining enough advantage to score a goal. However, it is an act of desperation, as it sometimes leads to the opposing team extending their lead by scoring a goal in the empty net. One of the most important strategies for a team is their "forecheck". Forechecking is the act of attacking the opposition in their defensive zone. Forechecking is an important part of the "dump and chase" strategy (i.e. shooting the puck into the offensive zone and then chasing after it). Each team uses its own unique system, but the main ones are the 2–1–2, the 1–2–2, and the 1–4. The 2–1–2 is the most basic forecheck system: two forwards go in deep and pressure the opposition's defencemen, the third forward stays high, and the two defencemen stay at the blue line. The 1–2–2 is a somewhat more conservative system in which one forward pressures the puck carrier and the other two forwards cover the opposition's wingers, with the two defencemen staying at the blue line. The 1–4 is the most defensive forecheck system, referred to as the neutral zone trap: one forward applies pressure to the puck carrier around the opposition's blue line while the other four players stand roughly in a line by their own blue line, in the hope that the opposition will skate into one of them. Another strategy is the left wing lock, which has two forwards pressure the puck while the left wing and the two defencemen stay at the blue line. There are many other smaller tactics used in the game of hockey. "Cycling" moves the puck along the boards in the offensive zone to create a scoring chance by making defenders tired or moving them out of position. "Pinching" is when a defenceman pressures the opposition's winger in the offensive zone when they are breaking out, attempting to stop their attack and keep the puck in the offensive zone. A "saucer pass" is a pass used when an opponent's stick or body is in the passing lane. It is the act of raising the puck over the obstruction and having it land on a teammate's stick. 
A deke, short for "decoy", is a feint with the body or stick to fool a defender or the goalie. Many modern players, such as Pavel Datsyuk, Sidney Crosby and Patrick Kane, have picked up the skill of "dangling", which is fancier deking that requires greater stickhandling skill. Although fighting is officially prohibited in the rules, it is not an uncommon occurrence at the professional level, and its prevalence has been both a target of criticism and a considerable draw for the sport. At the professional level in North America, fights are unofficially condoned. Enforcers and other players fight to demoralize the opposing players while exciting their own, as well as settling personal scores. A fight will also break out if one of the team's skilled players gets hit hard or someone receives what the team perceives as a dirty hit. The amateur game penalizes fisticuffs more harshly, as a player who receives a fighting major is also assessed at least a 10-minute misconduct penalty (NCAA and some Junior leagues) or a game misconduct penalty and suspension (high school and younger, as well as some casual adult leagues). Crowds seem to enjoy fighting in ice hockey and cheer when a fight erupts. Ice hockey is one of the fastest growing women's sports in the world, with the number of participants increasing by 400 percent from 1995 to 2005. In 2011, Canada had 85,827 women players, the United States 65,609, Finland 4,760, Sweden 3,075 and Switzerland 1,172. While there are not as many organized leagues for women as there are for men, there exist leagues of all levels, including the Canadian Women's Hockey League (CWHL), Western Women's Hockey League, National Women's Hockey League (NWHL), Mid-Atlantic Women's Hockey League, and various European leagues; as well as university teams, national and Olympic teams, and recreational teams. The IIHF holds World Women's Championship tournaments in several divisions; championships are held annually, except that the top flight does not play in Olympic years. The chief difference between women's and men's ice hockey is that body checking is prohibited in women's hockey. After the 1990 Women's World Championship, body checking was eliminated in women's hockey. In current IIHF women's competition, body checking is either a minor or major penalty, decided at the referee's discretion. In addition, players in women's competition are required to wear protective full-face masks. In Canada, ringette has to some extent served as the female counterpart to ice hockey, in the sense that, traditionally, boys have played hockey while girls have played ringette. Women are known to have played the game in the 19th century. Several games were recorded in the 1890s in Ottawa, Ontario, Canada. The women of Lord Stanley's family were known to participate in the game of ice hockey on the outdoor ice rink at Rideau Hall, the residence of Canada's Governor-General. The game developed at first without an organizing body. A tournament in 1902 between Montreal and Trois-Rivieres was billed as the first championship tournament. Several tournaments, such as at the Banff Winter Carnival, were held in the early 20th century, and numerous women's teams such as the Seattle Vamps and Vancouver Amazons existed. Organizations started to develop in the 1920s, such as the Ladies Ontario Hockey Association, and later, the Dominion Women's Amateur Hockey Association. Starting in the 1960s, the game spread to universities. 
Today, the sport is played from youth through adult leagues, and in the universities of North America and internationally. There have been two major professional women's hockey leagues that paid their players: the National Women's Hockey League, with teams in the United States, and the Canadian Women's Hockey League, with teams in Canada, China, and the United States. The first women's world championship tournament, albeit unofficial, was held in 1987 in Toronto, Ontario, Canada. This was followed by the first IIHF World Championship in 1990 in Ottawa. Women's ice hockey was added as a medal sport at the 1998 Winter Olympics in Nagano, Japan. The United States won the gold, Canada won the silver and Finland won the bronze medal. Canada won in 2002, 2006, 2010, and 2014, and also reached the gold medal game in 2018, where it lost in a shootout to the United States, Canada's first loss in a competitive Olympic game since 2002. The United States Hockey League (USHL) welcomed the first female professional ice hockey player in 1969–70, when the Marquette Iron Rangers signed Karen Koch. One woman, Manon Rhéaume, has played in an NHL pre-season game as a goaltender for the Tampa Bay Lightning against the St. Louis Blues. In 2003, Hayley Wickenheiser played with the Kirkkonummi Salamat in the Finnish men's Suomi-sarja league. Several women have competed in North American minor leagues, including Rhéaume, goaltenders Kelly Dyer and Erin Whitten, and defenceman Angela Ruggiero. With interest in women's ice hockey growing, between 2007 and 2010 the number of registered female players worldwide grew from 153,665 to 170,872. Women's hockey is on the rise in almost every part of the world, and there are teams in North America, Europe, Asia, Oceania, Africa and Latin America. The future of international women's ice hockey, and how IIHF member associations could work together, was discussed at the World Hockey Summit in 2010. International Olympic Committee president Jacques Rogge stated that the women's hockey tournament might be eliminated from the Olympics since the event was not competitively balanced and was dominated by Canada and the United States. Team Canada captain Hayley Wickenheiser explained that the talent gap between the North American and European countries was due to the presence of women's professional leagues in North America, along with year-round training facilities. She stated the European players were talented, but their respective national team programs were not given the same level of support as the European men's national teams or the North American women's national teams. She stressed the need for women to have their own professional league, which would be to the benefit of international hockey. The primary women's professional hockey league in North America is the National Women's Hockey League (NWHL), with five teams located in the United States and one in Canada. From 2007 until 2019, the Canadian Women's Hockey League (CWHL) operated with teams in Canada, the United States and China. The CWHL was founded in 2007 and originally consisted of seven teams in Canada, but had several membership changes, including adding a team in the United States in 2010. When the league launched, its players were only compensated for travel and equipment. The league began paying its players a stipend in the 2017–18 season, when it launched its first teams in China. 
For the league's 2018–19 season, there were six teams, consisting of the Calgary Inferno, Les Canadiennes de Montreal, Markham Thunder, Shenzhen KRS Vanke Rays, Toronto Furies, and the Worcester Blades. The CWHL ceased operations in 2019, citing an unsustainable business model. The NWHL was founded in 2015 with four teams in the Northeast United States and was the first North American women's league to pay its players. The league expanded to five teams in 2018 with the addition of the formerly independent Minnesota Whitecaps. The league had conditionally approved Canadian expansion teams in Montreal and Toronto following the dissolution of the CWHL, but a lack of investors postponed further expansion. On April 22, 2020, the NWHL officially announced that Toronto was awarded an expansion team for the 2020–21 season, growing the league to six teams. The six teams in the league are the Boston Pride, Buffalo Beauts, Connecticut Whale, Metropolitan Riveters, Minnesota Whitecaps, and the Toronto Six. The NHL is by far the best attended and most popular ice hockey league in the world, and is among the major professional sports leagues in the United States and Canada. The league's history began after Canada's National Hockey Association decided to disband in 1917; the result was the creation of the National Hockey League with four teams. The league expanded to the United States beginning in 1924 and had as many as 10 teams before contracting to six teams by 1942–43. In 1967, the NHL doubled in size to 12 teams, undertaking one of the greatest expansions in professional sports history. A few years later, in 1972, a new 12-team league, the World Hockey Association (WHA), was formed, and its ensuing rivalry with the NHL caused an escalation in players' salaries. In 1979, the 17-team NHL merged with the WHA, creating a 21-team league. By 2017, the NHL had expanded to 31 teams, and after a realignment in 2013, these teams were divided into two conferences and four divisions. The league is expected to expand to 32 teams by 2021. The American Hockey League (AHL), sometimes referred to as "The A", is the primary developmental professional league for players aspiring to enter the NHL. It comprises 31 teams from the United States and Canada. It is run as a "farm league" to the NHL, with the vast majority of AHL players under contract to an NHL team. The ECHL (called the East Coast Hockey League before the 2003–04 season) is a mid-level minor league in the United States with a few players under contract to NHL or AHL teams. As of 2019, there are three minor professional leagues with no NHL affiliations: the Federal Prospects Hockey League (FPHL), Ligue Nord-Américaine de Hockey (LNAH), and the Southern Professional Hockey League (SPHL). U Sports ice hockey is the highest level of play at the Canadian university level, under the auspices of U Sports, Canada's governing body for university sports. As these players compete at the university level, they must follow the standard five-year eligibility rule. In the United States especially, college hockey is popular, and the best university teams compete in the annual NCAA Men's Ice Hockey Championship. The American Collegiate Hockey Association is composed of college teams at the club level. 
In Canada, the Canadian Hockey League is an umbrella organization comprising three major junior leagues: the Ontario Hockey League, the Western Hockey League, and the Quebec Major Junior Hockey League. It attracts players from Canada, the United States and Europe. Major junior players are considered amateurs, as they are under 21 years old and not paid a salary; however, they do receive a stipend and play a schedule similar to that of a professional league. Typically, the NHL drafts many players directly from the major junior leagues. In the United States, the United States Hockey League (USHL) is the highest junior league. Players in this league are also amateurs and must be under 21 years old, but they do not receive a stipend, which allows them to retain their eligibility for participation in NCAA ice hockey. The Kontinental Hockey League (KHL) is the largest and most popular ice hockey league in Eurasia. The league is the direct successor to the Russian Super League, which in turn was the successor to the Soviet League, the history of which dates back to the Soviet adoption of ice hockey in the 1940s. The KHL was launched in 2008 with clubs predominantly from Russia, but featuring teams from other post-Soviet states. The league expanded beyond the former Soviet countries beginning in the 2011–12 season, with clubs in Croatia and Slovakia. The KHL currently comprises member clubs based in Belarus (1), China (1), Finland (1), Latvia (1), Kazakhstan (1) and Russia (19), for a total of 24. The second division of hockey in Eurasia is the Supreme Hockey League (VHL). This league features 24 teams from Russia and two from Kazakhstan. This league is currently being converted to a farm league for the KHL, similar to the AHL's function in relation to the NHL. The third division is the Russian Hockey League, which features only teams from Russia. The Asia League, an international ice hockey league featuring clubs from China, Japan, South Korea, and the Russian Far East, is the successor to the Japan Ice Hockey League. The highest junior league in Eurasia is the Junior Hockey League (MHL). It features 32 teams from post-Soviet states, predominantly Russia. The second tier of this league is the Junior Hockey League Championships (MHL-B). Several countries in Europe have their own top professional senior leagues. Many future KHL and NHL players start or end their professional careers in these leagues. The National League A in Switzerland, the Swedish Hockey League in Sweden, the Liiga in Finland, and the Czech Extraliga in the Czech Republic are all very popular in their respective countries. Beginning in the 2014–15 season, the Champions Hockey League was launched, a league consisting of first-tier teams from several European countries, running parallel to the teams' domestic leagues. The competition is meant to serve as a Europe-wide ice hockey club championship. The competition is a direct successor to the European Trophy and is related to the 2008–09 tournament of the same name. There are also several annual tournaments for clubs held outside of league play. Pre-season tournaments include the European Trophy, the Tampere Cup and the Pajulahti Cup. One of the oldest international ice hockey competitions for clubs is the Spengler Cup, held every year in Davos, Switzerland, between Christmas and New Year's Day. It was first awarded in 1923 to the Oxford University Ice Hockey Club. 
The Memorial Cup, a competition for junior-level (age 20 and under) clubs, is held annually from a pool of junior championship teams in Canada and the United States. International club competitions organized by the IIHF include the Continental Cup, the Victoria Cup and the European Women's Champions Cup. The World Junior Club Cup is an annual tournament of junior ice hockey clubs representing each of the top junior leagues. The Australian Ice Hockey League and New Zealand Ice Hockey League are represented by nine and five teams, respectively. As of 2012, the two top teams of the previous season from each league compete in the Trans-Tasman Champions League. Ice hockey in Africa is a small but growing sport; while no African ice hockey-playing nation has a domestic national league, there are several regional leagues in South Africa. Ice hockey has been played at the Winter Olympics since 1924 (and was played at the summer games in 1920). Hockey is Canada's national winter sport, and Canadians are extremely passionate about the game. The nation has traditionally done very well at the Olympic Games, winning 6 of the first 7 gold medals. However, by 1956 its amateur club teams and national teams could not compete with the teams of government-supported players from the Soviet Union. The USSR won all but two gold medals from 1956 to 1988. The United States won its first gold medal in 1960. On the way to winning the gold medal at the 1980 Lake Placid Olympics, amateur US college players defeated the heavily favoured Soviet squad—an event known as the "Miracle on Ice" in the United States. Restrictions on professional players were fully dropped at the 1988 Games in Calgary. The NHL agreed to participate ten years later: the 1998 Games saw the full participation of NHL players, as the league suspended operations during the Games, something it continued to do for subsequent Games up until 2018. The 2010 Games in Vancouver were the first played in an NHL city since the inclusion of NHL players, and the first played on NHL-sized ice rinks, which are narrower than the IIHF standard. National teams representing the member federations of the IIHF compete annually in the IIHF Ice Hockey World Championships. Teams are selected from the available players by the individual federations, without restriction on amateur or professional status. Since it is held in the spring, the tournament coincides with the annual NHL Stanley Cup playoffs, so many of the top players are not available to participate. Many of the NHL players who do play in the IIHF tournament come from teams eliminated before the playoffs or in the first round, and federations often hold open spots until the tournament to allow players to join after their club team is eliminated. For many years, the tournament was amateur-only, but this restriction was removed beginning in 1977. The 1972 Summit Series and 1974 Summit Series, two series pitting the best Canadian and Soviet players without IIHF restrictions, were major successes and established a rivalry between Canada and the USSR. In the spirit of best-versus-best without restrictions on amateur or professional status, the series were followed by five Canada Cup tournaments, played in North America. Two NHL versus USSR series were also held: the 1979 Challenge Cup and Rendez-vous '87. The Canada Cup tournament later became the World Cup of Hockey, played in 1996, 2004 and 2016. The United States won in 1996 and Canada won in 2004 and 2016. 
Since the initial women's world championships in 1990, there have been fifteen tournaments. Women's hockey has been played at the Olympics since 1998. The only finals in the women's world championship or Olympics that did not involve both Canada and the United States were the 2006 Winter Olympic final between Canada and Sweden and the 2019 World Championship final between the US and Finland. Other ice hockey tournaments featuring national teams include the World U20 Championship, the World U18 Championships, the World U-17 Hockey Challenge, the World Junior A Challenge, the Ivan Hlinka Memorial Tournament, the World Women's U18 Championships and the 4 Nations Cup. The annual Euro Hockey Tour, an unofficial European championship between the national men's teams of the Czech Republic, Finland, Russia and Sweden, has been played since 1996–97. The attendance record for an ice hockey game was set on December 11, 2010, when the University of Michigan's men's ice hockey team faced cross-state rival Michigan State in an event billed as "The Big Chill at the Big House". The game was played at Michigan's (American) football venue, Michigan Stadium in Ann Arbor, which had a capacity of 109,901 as of the 2010 football season. When UM stopped sales to the public on May 6, 2010, with plans to reserve remaining tickets for students, over 100,000 tickets had been sold for the event. Ultimately, a crowd announced by UM as 113,411, the largest in the stadium's history (including football), saw the homestanding Wolverines win 5–0. "Guinness World Records", using a count of ticketed fans who actually entered the stadium instead of UM's figure of tickets sold, announced a final figure of 104,173. The record was approached but not broken at the 2014 NHL Winter Classic, which was also held at Michigan Stadium, with the Detroit Red Wings as the home team facing the Toronto Maple Leafs, with an announced crowd of 105,491. The record for an NHL Stanley Cup playoff game is 28,183, set on April 23, 1996, at the Thunderdome during a Tampa Bay Lightning – Philadelphia Flyers game. Each country's federation reports its number of registered hockey players, including male, female and junior players; as of October 2019, 38 of the 81 IIHF member countries had more than 1,000 registered players. Pond hockey is a form of ice hockey played generally as pick-up hockey on lakes, ponds and artificial outdoor rinks during the winter. Pond hockey is commonly referred to in hockey circles as shinny. Its rules differ from traditional hockey because there is no hitting and very little shooting, placing a greater emphasis on skating, stickhandling and passing abilities. Since 2002, the World Pond Hockey Championship has been played on Roulston Lake in Plaster Rock, New Brunswick, Canada. Since 2006, the US Pond Hockey Championships have been played in Minneapolis, Minnesota, and the Canadian National Pond Hockey Championships have been played in Huntsville, Ontario. Sledge hockey is an adaptation of ice hockey designed for players who have a physical disability. Players are seated in sleds and use a specialized hockey stick that also helps the player navigate on the ice. The sport was created in Sweden in the early 1960s and is played under rules similar to those of ice hockey. Ice hockey is the official winter sport of Canada. 
Ice hockey, partially because of its popularity as a major professional sport, has been a source of inspiration for numerous films, television episodes and songs in North American popular culture.
https://en.wikipedia.org/wiki?curid=14790
IEEE 802.3 IEEE 802.3 is a working group and a collection of Institute of Electrical and Electronics Engineers (IEEE) standards produced by the working group defining the physical layer and data link layer's media access control (MAC) of wired Ethernet. This is generally a local area network (LAN) technology with some wide area network (WAN) applications. Physical connections are made between nodes and/or infrastructure devices (hubs, switches, routers) by various types of copper or fiber cable. 802.3 is a technology that supports the IEEE 802.1 network architecture. 802.3 also defines the LAN access method using carrier-sense multiple access with collision detection (CSMA/CD).
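As a rough illustration of how CSMA/CD schedules retransmissions, the following C sketch models the truncated binary exponential backoff used by classic half-duplex 802.3 Ethernet: after the nth collision, a station waits a random number of slot times drawn from 0 to 2^min(n,10) − 1, and the frame is discarded after 16 failed attempts. This is a simplified illustrative model, not text from the standard; the function name backoff_slots, the use of rand(), and the printed trace are assumptions of the sketch.

    #include <stdio.h>
    #include <stdlib.h>

    /* Truncated binary exponential backoff, as in classic half-duplex
     * 802.3 Ethernet: after the nth collision the station waits a random
     * number of slot times in 0 .. 2^min(n,10) - 1, and gives up after
     * 16 attempts. */
    #define MAX_ATTEMPTS  16
    #define BACKOFF_LIMIT 10

    /* Returns the number of slot times to wait before retransmission
     * attempt n (n = 1 for the first retry), or -1 to abort. */
    int backoff_slots(int attempt)
    {
        if (attempt >= MAX_ATTEMPTS)
            return -1;                      /* excessive collisions: drop frame */
        int k = attempt < BACKOFF_LIMIT ? attempt : BACKOFF_LIMIT;
        unsigned long range = 1UL << k;     /* 2^k possible waits */
        return (int)(rand() % range);       /* uniform in 0 .. 2^k - 1 */
    }

    int main(void)
    {
        for (int attempt = 1; attempt <= MAX_ATTEMPTS; attempt++) {
            int slots = backoff_slots(attempt);
            if (slots < 0) {
                printf("attempt %2d: abort (excessive collisions)\n", attempt);
                break;
            }
            printf("attempt %2d: wait %d slot times\n", attempt, slots);
        }
        return 0;
    }

Each successive collision doubles the average wait (up to a cap of 1023 slot times), which is what lets many stations share a single collision domain without any central coordination.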
https://en.wikipedia.org/wiki?curid=14791
Integer (computer science) In computer science, an integer is a datum of integral data type, a data type that represents some range of mathematical integers. Integral data types may be of different sizes and may or may not be allowed to contain negative values. Integers are commonly represented in a computer as a group of binary digits (bits). The size of the grouping varies, so the set of integer sizes available varies between different types of computers. Computer hardware, including virtual machines, nearly always provides a way to represent a processor register or memory address as an integer. The "value" of an item with an integral type is the mathematical integer that it corresponds to. Integral types may be "unsigned" (capable of representing only non-negative integers) or "signed" (capable of representing negative integers as well). An integer value is typically specified in the source code of a program as a sequence of digits optionally prefixed with + or −. Some programming languages allow other notations, such as hexadecimal (base 16) or octal (base 8). Some programming languages also permit digit group separators. The "internal representation" of this datum is the way the value is stored in the computer's memory. Unlike mathematical integers, a typical datum in a computer has some minimum and maximum possible value. The most common representation of a positive integer is a string of bits, using the binary numeral system. The order of the memory bytes storing the bits varies; see endianness. The "width" or "precision" of an integral type is the number of bits in its representation. An integral type with "n" bits can encode 2^"n" numbers; for example an unsigned type typically represents the non-negative values 0 through 2^"n" − 1. Other encodings of integer values to bit patterns are sometimes used, for example binary-coded decimal or Gray code, or as printed character codes such as ASCII. There are four well-known ways to represent signed numbers in a binary computing system. The most common is two's complement, which allows a signed integral type with "n" bits to represent numbers from −2^("n"−1) through 2^("n"−1) − 1. Two's complement arithmetic is convenient because there is a perfect one-to-one correspondence between representations and values (in particular, no separate +0 and −0), and because addition, subtraction and multiplication do not need to distinguish between signed and unsigned types. Other possibilities include offset binary, sign-magnitude, and ones' complement. Some computer languages define integer sizes in a machine-independent way; others have varying definitions depending on the underlying processor word size. Not all language implementations define variables of all integer sizes, and defined sizes may not even be distinct in a particular implementation. An integer in one programming language may be a different size in a different language or on a different processor. Different CPUs support different integral data types. Typically, hardware will support both signed and unsigned types, but only a small, fixed set of widths; common processors support widths such as 8, 16, 32 and 64 bits in hardware. High level programming languages provide more possibilities. It is common to have a 'double width' integral type that has twice as many bits as the biggest hardware-supported type. 
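To make those two's-complement ranges concrete, here is a small C sketch that prints the representable values for a few common widths. It is an illustration under stated assumptions: the helper name show_range is invented for the example, and the arithmetic uses 64-bit intermediates, so it is only valid for widths below 64 bits.

    #include <stdio.h>
    #include <stdint.h>

    /* An n-bit two's-complement signed type spans -2^(n-1) .. 2^(n-1)-1;
     * the corresponding unsigned type spans 0 .. 2^n - 1. Computed with
     * 64-bit intermediates, so valid only for n < 64. */
    static void show_range(int bits)
    {
        int64_t  smin = -(INT64_C(1) << (bits - 1));
        int64_t  smax =  (INT64_C(1) << (bits - 1)) - 1;
        uint64_t umax =  (UINT64_C(1) << bits) - 1;
        printf("%2d bits: signed %lld .. %lld, unsigned 0 .. %llu\n",
               bits, (long long)smin, (long long)smax,
               (unsigned long long)umax);
    }

    int main(void)
    {
        show_range(8);    /* -128 .. 127,        0 .. 255   */
        show_range(16);   /* -32768 .. 32767,    0 .. 65535 */
        show_range(32);
        return 0;
    }

Note the asymmetry of the signed range (one more negative value than positive), which is exactly the "no separate +0 and −0" property of two's complement mentioned above.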
Many languages also have "bit-field" types (a specified number of bits, usually constrained to be less than the maximum hardware-supported width) and "range" types (that can represent only the integers in a specified range). Some languages, such as Lisp, Smalltalk, REXX, Haskell, Python, and Raku support "arbitrary precision" integers (also known as "infinite precision integers" or "bignums"). Other languages that do not support this concept as a top-level construct may have libraries available to represent very large numbers using arrays of smaller variables, such as Java's BigInteger class or Perl's "bigint" package. These use as much of the computer's memory as is necessary to store the numbers; however, a computer has only a finite amount of storage, so they too can only represent a finite subset of the mathematical integers. These schemes support very large numbers, for example one kilobyte of memory could be used to store numbers up to 2466 decimal digits long. A Boolean or Flag type is a type that can represent only two values: 0 and 1, usually identified with "false" and "true" respectively. This type can be stored in memory using a single bit, but is often given a full byte for convenience of addressing and speed of access. A four-bit quantity is known as a "nibble" (when eating, being smaller than a "bite") or "nybble" (being a pun on the form of the word "byte"). One nibble corresponds to one digit in hexadecimal and holds one digit or a sign code in binary-coded decimal. The term "byte" initially meant 'the smallest addressable unit of memory'. In the past, 5-, 6-, 7-, 8-, and 9-bit bytes have all been used. There have also been computers that could address individual bits ('bit-addressed machine'), or that could only address 16- or 32-bit quantities ('word-addressed machine'). The term "byte" was usually not used at all in connection with bit- and word-addressed machines. The term "octet" always refers to an 8-bit quantity. It is mostly used in the field of computer networking, where computers with different byte widths might have to communicate. In modern usage "byte" almost invariably means eight bits, since all other sizes have fallen into disuse; thus "byte" has come to be synonymous with "octet". The term 'word' is used for a small group of bits that are handled simultaneously by processors of a particular architecture. The size of a word is thus CPU-specific. Many different word sizes have been used, including 6-, 8-, 12-, 16-, 18-, 24-, 32-, 36-, 39-, 40-, 48-, 60-, and 64-bit. Since it is architectural, the size of a "word" is usually set by the first CPU in a family, rather than the characteristics of a later compatible CPU. The meanings of terms derived from "word", such as "longword", "doubleword", "quadword", and "halfword", also vary with the CPU and OS. Practically all new desktop processors are capable of using 64-bit words, though embedded processors with 8- and 16-bit word size are still common. The 36-bit word length was common in the early days of computers. One important cause of non-portability of software is the incorrect assumption that all computers have the same word size as the computer used by the programmer. For example, if a programmer using the C language incorrectly declares as int a variable that will be used to store values greater than 215−1, the program will fail on computers with 16-bit integers. That variable should have been declared as long, which has at least 32 bits on any computer. 
Programmers may also incorrectly assume that a pointer can be converted to an integer without loss of information, which may work on (some) 32-bit computers, but fail on 64-bit computers with 64-bit pointers and 32-bit integers. A "short integer" can represent a whole number that may take less storage, while having a smaller range, compared with a standard integer on the same machine. In C, it is denoted by short. It is required to be at least 16 bits, and is often smaller than a standard integer, but this is not required. A conforming program can assume that it can safely store values between −(2^15 − 1) and 2^15 − 1, but it may not assume that the range isn't larger. In Java, a short is "always" a 16-bit integer. In the Windows API, the datatype SHORT is defined as a 16-bit signed integer on all machines. A "long integer" can represent a whole number whose range is greater than or equal to that of a standard integer on the same machine. In C, it is denoted by long. It is required to be at least 32 bits, and may or may not be larger than a standard integer. A conforming program can assume that it can safely store values between −(2^31 − 1) and 2^31 − 1, but it may not assume that the range isn't larger. In the C99 version of the C programming language and the C++11 version of C++, a long long type is supported that has double the minimum capacity of the standard long. This type is not supported by compilers that require C code to be compliant with the previous C++ standard, C++03, because the long long type did not exist in C++03. For an ANSI/ISO compliant compiler, the minimum requirements for the specified ranges, that is, −(2^63 − 1) to 2^63 − 1 for signed and 0 to 2^64 − 1 for unsigned, must be fulfilled; however, extending this range is permitted. This can be an issue when exchanging code and data between platforms, or when doing direct hardware access. Thus, there are several sets of headers providing platform-independent exact-width types. The C standard library provides "stdint.h"; this was introduced in C99, and C++11 added the equivalent header.
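A minimal sketch of the exact-width approach, using only types and macros defined by stdint.h and inttypes.h (and assuming the platform provides the exact-width types, as virtually all modern ones do):

#include <stdio.h>
#include <stdint.h>
#include <inttypes.h>

int main(void) {
    int16_t  a = -12345;           /* exactly 16 bits, two's complement */
    uint32_t b = 4000000000u;      /* exactly 32 bits, unsigned */
    int64_t  c = INT64_C(1) << 40; /* exactly 64 bits */
    intptr_t p = (intptr_t)&a;     /* integer wide enough to hold a pointer */
    printf("%" PRId16 " %" PRIu32 " %" PRId64 "\n", a, b, c);
    printf("pointer as integer: %" PRIdPTR "\n", p);
    return 0;
}

Using intptr_t rather than int or long for round-tripped pointers avoids the 64-bit truncation problem described above.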
https://en.wikipedia.org/wiki?curid=14794
Icon An icon (from the Greek "eikṓn" "image", "resemblance") is a religious work of art, most commonly a painting, in the cultures of the Eastern Orthodox Church, Oriental Orthodoxy, the Roman Catholic, and certain Eastern Catholic churches. They are not simply artworks but "an icon is a sacred image used in religious devotion." The most common subjects include Christ, Mary, saints and angels. Although especially associated with portrait-style images concentrating on one or two main figures, the term also covers most religious images in a variety of artistic media produced by Eastern Christianity, including narrative scenes, usually from the Bible or lives of saints. Icons may also be cast in metal, carved in stone, embroidered on cloth, painted on wood, done in mosaic or fresco work, printed on paper or metal, etc. Comparable images from Western Christianity can be classified as "icons", although "iconic" may also be used to describe a static style of devotional image. Eastern Orthodox tradition holds that the production of Christian images dates back to the very early days of Christianity, and that it has been a continuous tradition since then. Modern academic art history considers that, while images may have existed earlier, the tradition can be traced back only as far as the 3rd century, and that the images which survive from Early Christian art often differ greatly from later ones. The icons of later centuries can be linked, often closely, to images from the 5th century onwards, though very few of these survive. Widespread destruction of images occurred during the Byzantine Iconoclasm of 726–842, although this did settle permanently the question of the appropriateness of images. Since then icons have had a great continuity of style and subject; far greater than in the icons of the Western church. At the same time there has been change and development. Pre-Christian religions had produced and used art works, but Christian tradition dating from the 8th century identifies Luke the Evangelist as the first icon painter. Aside from the legend that Pilate had made an image of Christ, the 4th-century Eusebius of Caesarea, in his "Church History", provides a more substantial reference to a "first" icon of Jesus. He relates that King Abgar of Edessa (died 50 CE) sent a letter to Jesus at Jerusalem, asking Jesus to come and heal him of an illness. This version of the Abgar story does not mention an image, but a later account found in the Syriac "Doctrine of Addai" ( 400 ?) mentions a painted image of Jesus in the story; and even later, in the 6th-century account given by Evagrius Scholasticus, the painted image transforms into an image that miraculously appeared on a towel when Christ pressed the cloth to his wet face. Further legends relate that the cloth remained in Edessa until the 10th century, when it was taken to Constantinople. It went missing in 1204 when Crusaders sacked Constantinople, but by then numerous copies had firmly established its iconic type. The 4th-century Christian Aelius Lampridius produced the earliest known written records of Christian images treated like icons (in a pagan or Gnostic context) in his "Life of Alexander Severus" (xxix) that formed part of the "Augustan History". According to Lampridius, the emperor Alexander Severus (), himself not a Christian, had kept a domestic chapel for the veneration of images of deified emperors, of portraits of his ancestors, and of Christ, Apollonius, Orpheus and Abraham. 
Saint Irenaeus (c. 130–202), in his "Against Heresies" (1:25;6), says scornfully of the Gnostic Carpocratians: "They also possess images, some of them painted, and others formed from different kinds of material; while they maintain that a likeness of Christ was made by Pilate at that time when Jesus lived among them. They crown these images, and set them up along with the images of the philosophers of the world, that is to say, with the images of Pythagoras, and Plato, and Aristotle, and the rest. They have also other modes of honouring these images, after the same manner of the Gentiles [pagans]". On the other hand, Irenaeus does not speak critically of icons or portraits in a general sense—only of certain gnostic sectarians' use of icons. Another criticism of image veneration appears in the non-canonical 2nd-century "Acts of John" (generally considered a gnostic work), in which the Apostle John discovers that one of his followers has had a portrait made of him, and is venerating it: (27) "...he [John] went into the bedchamber, and saw the portrait of an old man crowned with garlands, and lamps and altars set before it. And he called him and said: Lycomedes, what do you mean by this matter of the portrait? Can it be one of thy gods that is painted here? For I see that you are still living in heathen fashion." Later in the passage John says, "But this that you have now done is childish and imperfect: you have drawn a dead likeness of the dead." At least some of the hierarchy of the Christian churches still strictly opposed icons in the early 4th century. At the Spanish non-ecumenical Synod of Elvira (c. 305) bishops concluded, "Pictures are not to be placed in churches, so that they do not become objects of worship and adoration". Bishop Epiphanius of Salamis wrote his letter 51 to John, Bishop of Jerusalem (c. 394), in which he recounted how he tore down an image in a church and admonished the other bishop that such images are "opposed ... to our religion". Elsewhere in his "Church History", Eusebius reports seeing what he took to be portraits of Jesus, Peter and Paul, and also mentions a bronze statue at Banias / Paneas under Mount Hermon, of which he wrote, "They say that this statue is an image of Jesus" ("H.E." 7:18); further, he relates that locals regarded the image as a memorial of the healing of the woman with an issue of blood by Jesus (Luke 8:43–48), because it depicted a standing man wearing a double cloak and with arm outstretched, and a woman kneeling before him with arms reaching out as if in supplication. John Francis Wilson suggests the possibility that this refers to a pagan bronze statue whose true identity had been forgotten; some have thought it to represent Aesculapius, the Greek god of healing, but the description of the standing figure and the woman kneeling in supplication precisely matches images found on coins depicting the bearded emperor Hadrian (r. 117–138) reaching out to a female figure—symbolizing a province—kneeling before him. When asked by Constantia (Emperor Constantine's half-sister) for an image of Jesus, Eusebius denied the request, replying: "To depict purely the human form of Christ before its transformation, on the other hand, is to break the commandment of God and to fall into pagan error." Hence Jaroslav Pelikan calls Eusebius "the father of iconoclasm". After the emperor Constantine I extended official toleration of Christianity within the Roman Empire in 313, huge numbers of pagans became converts.
This period of Christianization probably saw the use of Christian images become very widespread among the faithful, though with great differences from pagan habits. Robin Lane Fox states, "By the early fifth century, we know of the ownership of private icons of saints; by c. 480–500, we can be sure that the inside of a saint's shrine would be adorned with images and votive portraits, a practice which had probably begun earlier." When Constantine himself (r. 306–337) apparently converted to Christianity, the majority of his subjects remained pagans. The Roman Imperial cult of the divinity of the emperor, expressed through the traditional burning of candles and the offering of incense to the emperor's image, was tolerated for a period because it would have been politically dangerous to attempt to suppress it. Indeed, in the 5th century the courts of justice and municipal buildings of the empire still honoured the portrait of the reigning emperor in this way. In 425 Philostorgius, an allegedly Arian Christian, charged the Orthodox Christians in Constantinople with idolatry because they still honored the image of the emperor Constantine the Great in this way. Dix notes that this occurred more than a century before we find the first reference to a similar honouring of the image of Christ or of His apostles or saints, but that it would seem a natural progression for the image of Christ, the King of Heaven and Earth, to be paid similar veneration as that given to the earthly Roman emperor. However, the Orthodox, Eastern Catholics, and other groups insist on explicitly distinguishing the veneration of icons from the worship of idols by pagans. See further below on the doctrine of veneration as opposed to worship. After the adoption of Christianity as the only permissible Roman state religion under Theodosius I, Christian art began to change not only in quality and sophistication, but also in nature. This was in no small part due to Christians being free for the first time to express their faith openly without persecution from the state, in addition to the faith spreading to the non-poor segments of society. Paintings of martyrs and their feats began to appear, and early writers commented on their lifelike effect, one of the elements a few Christian writers criticized in pagan art—the ability to imitate life. The writers mostly criticized pagan works of art for pointing to false gods, thus encouraging idolatry. Statues in the round were avoided as being too close to the principal artistic focus of pagan cult practices, as they have continued to be (with some small-scale exceptions) throughout the history of Eastern Christianity. Nilus of Sinai (d. c. 430), in his "Letter to Heliodorus Silentiarius", records a miracle in which St. Plato of Ankyra appeared to a Christian in a dream. The Saint was recognized because the young man had often seen his portrait. This recognition of a religious apparition from likeness to an image was also a characteristic of pagan pious accounts of appearances of gods to humans, and was a regular "topos" in hagiography. One critical recipient of a vision from Saint Demetrius of Thessaloniki apparently specified that the saint resembled the "more ancient" images of him—presumably the 7th-century mosaics still in Hagios Demetrios. Another, an African bishop, had been rescued from Arab slavery by a young soldier called Demetrios, who told him to go to his house in Thessaloniki.
Having discovered that most young soldiers in the city seemed to be called Demetrios, he gave up and went to the largest church in the city, to find his rescuer on the wall. During this period the church began to discourage all non-religious human images—the Emperor and donor figures counting as religious. This became largely effective, so that most of the population would only ever see religious images and those of the ruling class. The word icon referred to any and all images, not just religious ones, but there was barely a need for a separate word for these. It is in a context attributed to the 5th century that the first mention of an image of Mary painted from life appears, though earlier paintings on catacomb walls bear resemblance to modern icons of Mary. Theodorus Lector, in his 6th-century "History of the Church" 1:1, stated that Eudokia (wife of emperor Theodosius II, d. 460) sent an image of the "Mother of God", named Icon of the Hodegetria, from Jerusalem to Pulcheria, daughter of Arcadius, the former emperor and father of Theodosius II. The image was specified to have been "painted by the Apostle Luke." Margherita Guarducci relates a tradition that the original icon of Mary attributed to Luke, sent by Eudokia to Pulcheria from Palestine, was a large circular icon only of her head. When the icon arrived in Constantinople it was fitted in as the head into a very large rectangular icon of her holding the Christ child, and it is this composite icon that became the one historically known as the Hodegetria. She further states another tradition that when the last Latin Emperor of Constantinople, Baldwin II, fled Constantinople in 1261, he took this original circular portion of the icon with him. This remained in the possession of the Angevin dynasty, who had it likewise inserted into a much larger image of Mary and the Christ child, which is presently enshrined above the high altar of the Benedictine Abbey church of Montevergine. Unfortunately, over the subsequent centuries this icon has been subjected to repeated repainting, so that it is difficult to determine what the original image of Mary's face would have looked like. However, Guarducci also states that in 1950 an ancient image of Mary at the Church of Santa Francesca Romana was determined to be a very exact, but reverse, mirror image of the original circular icon that was made in the 5th century and brought to Rome, where it has remained until the present. In later tradition the number of icons of Mary attributed to Luke would greatly multiply; the Salus Populi Romani, the Theotokos of Vladimir, the Theotokos Iverskaya of Mount Athos, the Theotokos of Tikhvin, the Theotokos of Smolensk and the Black Madonna of Częstochowa are examples, and another is in the cathedral on St Thomas Mount, which is believed to be one of the seven painted by St. Luke the Evangelist and brought to India by St. Thomas. Ethiopia has at least seven more. Bissera V. Pentcheva concludes, "The myth [of Luke painting an icon] was invented in order to support the legitimacy of icon veneration during the Iconoclast controversy [8th and 9th centuries].
By claiming the existence of a portrait of the Theotokos painted during her lifetime by the evangelist Luke, the perpetrators of this fiction fabricated evidence for the apostolic origins and divine approval of images." In the period before and during the Iconoclastic Controversy, stories attributing the creation of icons to the New Testament period greatly increased, with several apostles and even the Virgin herself believed to have acted as the artist or commissioner of images (also embroidered, in the case of the Virgin). There was a continuing opposition to images and their misuse within Christianity from very early times. "Whenever images threatened to gain undue influence within the church, theologians have sought to strip them of their power". Further, "there is no century between the fourth and the eighth in which there is not some evidence of opposition to images even within the Church". Nonetheless, popular favor for icons guaranteed their continued existence, while no systematic apologia for or against icons, or doctrinal authorization or condemnation of icons, yet existed. The use of icons was seriously challenged by Byzantine Imperial authority in the 8th century. Though by this time opposition to images was strongly entrenched in Judaism and Islam, attribution of the impetus toward an iconoclastic movement in Eastern Orthodoxy to Muslims or Jews "seems to have been highly exaggerated, both by contemporaries and by modern scholars". Though significant in the history of religious doctrine, the Byzantine controversy over images is not seen as of primary importance in Byzantine history. "Few historians still hold it to have been the greatest issue of the period..." The Iconoclastic Period began when images were banned by Emperor Leo III the Isaurian sometime between 726 and 730. Under his son Constantine V, a council forbidding image veneration was held at Hieria near Constantinople in 754. Image veneration was later reinstated by the Empress Regent Irene, under whom another council was held, reversing the decisions of the previous iconoclast council and taking its title as the Seventh Ecumenical Council. The council anathematized all who held to iconoclasm, i.e. those who held that veneration of images constitutes idolatry. The ban was then enforced again by Leo V in 815, and finally icon veneration was decisively restored by the Empress Regent Theodora in 843. From then on all Byzantine coins had a religious image or symbol on the reverse, usually an image of Christ for larger denominations, with the head of the Emperor on the obverse, reinforcing the bond of the state and the divine order. The tradition of "acheiropoieta" (literally "not made by hand") accrued to icons that are alleged to have come into existence miraculously, not by a human painter. Such images functioned as powerful relics as well as icons, and their images were naturally seen as especially authoritative as to the true appearance of the subject: naturally and especially because of the reluctance to accept mere human productions as embodying anything of the divine, a commonplace of Christian deprecation of man-made "idols". Like icons believed to be painted directly from the live subject, they therefore acted as important references for other images in the tradition. Beside the developed legend of the "mandylion" or Image of Edessa was the tale of the Veil of Veronica, whose very name signifies "true icon" or "true image", the fear of a "false image" remaining strong.
Although there are earlier records of their use, no panel icons earlier than the few from the 6th century preserved at the Greek Orthodox Saint Catherine's Monastery in Egypt survive, as the other examples in Rome have all been drastically over-painted. The surviving evidence for the earliest depictions of Christ, Mary and saints therefore comes from wall-paintings, mosaics and some carvings. They are realistic in appearance, in contrast to the later stylization. They are broadly similar in style, though often much superior in quality, to the mummy portraits done in wax (encaustic) and found at Fayum in Egypt. As we may judge from such items, the first depictions of Jesus were generic rather than portrait images, generally representing him as a beardless young man. It was some time before the earliest examples of the long-haired, bearded face that was later to become standardized as the image of Jesus appeared. When they did begin to appear, there was still variation. Augustine of Hippo (354–430) said that no one knew the appearance of Jesus or that of Mary. However, Augustine was not a resident of the Holy Land and therefore was not familiar with the local populations and their oral traditions. Gradually, paintings of Jesus took on characteristics of portrait images. At this time the manner of depicting Jesus was not yet uniform, and there was some controversy over which of the two most common icons was to be favored. The first or "Semitic" form showed Jesus with short and "frizzy" hair; the second showed a bearded Jesus with hair parted in the middle, the manner in which the god Zeus was depicted. Theodorus Lector remarked that of the two, the one with short and frizzy hair was "more authentic". To support his assertion, he relates a story (excerpted by John of Damascus) that a pagan commissioned to paint an image of Jesus used the "Zeus" form instead of the "Semitic" form, and that as punishment his hands withered. Though their development was gradual, we can date the full-blown appearance and general ecclesiastical (as opposed to simply popular or local) acceptance of Christian images as venerated and miracle-working objects to the 6th century, when, as Hans Belting writes, "we first hear of the church's use of religious images." "As we reach the second half of the sixth century, we find that images are attracting direct veneration and some of them are credited with the performance of miracles", Cyril Mango writes. "In the post-Justinianic period the icon assumes an ever increasing role in popular devotion, and there is a proliferation of miracle stories connected with icons, some of them rather shocking to our eyes". However, the earlier references by Eusebius and Irenaeus indicate veneration of images and reported miracles associated with them as early as the 2nd century. In the icons of Eastern Orthodoxy, and of the Early Medieval West, very little room is made for artistic license. Almost everything within the image has a symbolic aspect. Christ, the saints, and the angels all have halos. Angels (and often John the Baptist) have wings because they are messengers. Figures have consistent facial appearances, hold attributes personal to them, and use a few conventional poses. Colour plays an important role as well. Gold represents the radiance of Heaven; red, divine life. Blue is the color of human life; white is the Uncreated Light of God, used only for the resurrection and transfiguration of Christ.
In icons of Jesus and Mary, Jesus wears a red undergarment with a blue outer garment (God become human), and Mary wears a blue undergarment with a red overgarment (a human granted gifts by God); thus the doctrine of deification is conveyed by icons. Letters are symbols too. Most icons incorporate some calligraphic text naming the person or event depicted. Even this is often presented in a stylized manner. In the Eastern Orthodox Christian tradition there are reports of particular, wonderworking icons that exude myrrh (fragrant, healing oil), or perform miracles upon petition by believers. When such reports are verified by the Orthodox hierarchy, they are understood as miracles performed by God through the prayers of the saint, rather than being magical properties of the painted wood itself. Theologically, all icons are considered to be sacred, and are miraculous by nature, being a means of spiritual communion between the heavenly and earthly realms. However, it is not uncommon for specific icons to be characterised as "miracle-working", meaning that God has chosen to glorify them by working miracles through them. Such icons are often given particular names (especially those of the Virgin Mary), and even taken from city to city where believers gather to venerate them and pray before them. Islands like that of Tinos are renowned for possessing such "miraculous" icons, and are visited every year by thousands of pilgrims. The Eastern Orthodox view of the origin of icons is generally quite different from that of most secular scholars and from some in contemporary Roman Catholic circles: "The Orthodox Church maintains and teaches that the sacred image has existed from the beginning of Christianity", Léonid Ouspensky has written. Accounts that some non-Orthodox writers consider legendary are accepted as history within Eastern Orthodoxy, because they are a part of church tradition. Thus accounts such as that of the miraculous "Image Not Made by Hands", and the weeping and moving "Mother of God of the Sign" of Novgorod are accepted as fact: "Church Tradition tells us, for example, of the existence of an Icon of the Savior during His lifetime (the "Icon-Made-Without-Hands") and of Icons of the Most-Holy Theotokos [Mary] immediately after Him." Eastern Orthodoxy further teaches that "a clear understanding of the importance of Icons" was part of the church from its very beginning, and has never changed, although explanations of their importance may have developed over time. This is because icon painting is rooted in the theology of the Incarnation (Christ being the "eikon" of God), which did not change, though its subsequent clarification within the Church occurred over the period of the first seven Ecumenical Councils. Also, icons served as tools of edification for the illiterate faithful during most of the history of Christendom. Thus, icons are words in painting; they refer to the history of salvation and to its manifestation in concrete persons. In the Orthodox Church "icons have always been understood as a visible gospel, as a testimony to the great things given man by God the incarnate Logos". In the Council of 860 it was stated that "all that is uttered in words written in syllables is also proclaimed in the language of colors". Eastern Orthodox find the first instance of an image or icon in the Bible when God made man in His own image (Septuagint Greek "eikona"), in Genesis 1:26–27.
In Exodus, God commanded that the Israelites not make any graven image; but soon afterwards, he commanded that they make graven images of cherubim and other like things, both as statues and woven on tapestries. Later, Solomon included still more such imagery when he built the first temple. Eastern Orthodox believe these qualify as icons, in that they were visible images depicting heavenly beings and, in the case of the cherubim, used to indirectly indicate God's presence above the Ark. In the Book of Numbers it is written that God told Moses to make a bronze serpent, "Nehushtan", and hold it up, so that anyone looking at the snake would be healed of their snakebites. In John 3, Jesus refers to the same serpent, saying that he must be lifted up in the same way that the serpent was. John of Damascus also regarded the brazen serpent as an icon. Further, Jesus Christ himself is called the "image of the invisible God" in Colossians 1:15, and is therefore in one sense an icon. As people are also made in God's image, people are also considered to be living icons, and are therefore "censed" along with painted icons during Orthodox prayer services. According to John of Damascus, anyone who tries to destroy icons "is the enemy of Christ, the Holy Mother of God and the saints, and is the defender of the Devil and his demons." This is because the theology behind icons is closely tied to the Incarnational theology of the humanity and divinity of Jesus, so that attacks on icons typically have the effect of undermining or attacking the Incarnation of Jesus himself as elucidated in the Ecumenical Councils. Basil of Caesarea, in his writing "On the Holy Spirit", says: "The honor paid to the image passes to the prototype". He also illustrates the concept by saying, "If I point to a statue of Caesar and ask you 'Who is that?', your answer would properly be, 'It is Caesar.' When you say such you do not mean that the stone itself is Caesar, but rather, the name and honor you ascribe to the statue passes over to the original, the archetype, Caesar himself." So it is with an icon. Thus to kiss an icon of Christ, in the Eastern Orthodox view, is to show love towards Christ Jesus himself, not the mere wood and paint making up the physical substance of the icon. Worship of the icon as somehow entirely separate from its prototype is expressly forbidden by the Seventh Ecumenical Council. Icons are often illuminated with a candle or jar of oil with a wick. (Beeswax for candles and olive oil for oil lamps are preferred because they burn very cleanly, although other materials are sometimes used.) The illumination of religious images with lamps or candles is an ancient practice pre-dating Christianity. Of the icon painting tradition that developed in Byzantium, with Constantinople as the chief city, we have only a few icons from the 11th century and none preceding them, in part because of the Iconoclastic reforms during which many were destroyed or lost, and also because of plundering by the Republic of Venice in 1204 during the Fourth Crusade, and finally the Fall of Constantinople in 1453. It was only in the Komnenian period (1081–1185) that the cult of the icon became widespread in the Byzantine world, partly on account of the dearth of richer materials (such as mosaics, ivory, and vitreous enamels), but also because an "iconostasis", a special screen for icons, was introduced then in ecclesiastical practice. The style of the time was severe, hieratic and distant.
In the late Komnenian period this severity softened, and emotion, formerly avoided, entered icon painting. Major monuments for this change include the murals at Daphni Monastery (c. 1100) and the Church of St. Panteleimon near Skopje (1164). The Theotokos of Vladimir (c. 1115) is probably the most representative example of the new trend towards spirituality and emotion. The tendency toward emotionalism in icons continued in the Palaiologan period, which began in 1261. Palaiologan art reached its pinnacle in mosaics such as those of Chora Church. In the last half of the 14th century, Palaiologan saints were painted in an exaggerated manner, very slim and in contorted positions, in a style known as Palaiologan Mannerism. After 1453, the Byzantine tradition was carried on in regions previously influenced by its religion and culture—in the Balkans, Russia, and other Slavic countries, Georgia and Armenia in the Caucasus, and among Eastern Orthodox minorities in the Islamic world. In the Greek-speaking world, Crete, ruled by Venice until the mid-17th century, was an important centre of painted icons as home of the Cretan School, exporting many to Europe. Crete was under Venetian control from 1204 and became a thriving center of art, with eventually a "Scuola di San Luca", or organized painters' guild, the Guild of Saint Luke, on Western lines. Cretan painting was heavily patronized both by Catholics of Venetian territories and by the Eastern Orthodox. For ease of transport, Cretan painters specialized in panel paintings, and developed the ability to work in many styles to fit the taste of various patrons. El Greco, who moved to Venice after establishing his reputation in Crete, is the most famous artist of the school, and continued to use many Byzantine conventions in his works. In 1669 the city of Heraklion, on Crete, which at one time boasted at least 120 painters, finally fell to the Turks, and from that time Greek icon painting went into a decline, with a revival attempted in the 20th century by art reformers such as Photis Kontoglou, who emphasized a return to earlier styles. Russian icons are typically paintings on wood, often small, though some in churches and monasteries may be as large as a table top. Many religious homes in Russia have icons hanging on the wall in the "krasny ugol"—the "red" corner (see Icon corner). There is a rich history and elaborate religious symbolism associated with icons. In Russian churches, the nave is typically separated from the sanctuary by an "iconostasis", a wall of icons. The use and making of icons entered Kievan Rus' following its conversion to Orthodox Christianity from the Eastern Roman (Byzantine) Empire in 988 AD. As a general rule, these icons strictly followed models and formulas hallowed by usage, some of which had originated in Constantinople. As time passed, the Russians—notably Andrei Rublev and Dionisius—widened the vocabulary of iconic types and styles far beyond anything found elsewhere. The personal, improvisatory and creative traditions of Western European religious art were largely lacking in Russia before the 17th century, when Simon Ushakov's painting became strongly influenced by religious paintings and engravings from Protestant as well as Catholic Europe. In the mid-17th century, changes in liturgy and practice instituted by Patriarch Nikon of Moscow resulted in a split in the Russian Orthodox Church.
The traditionalists, the persecuted "Old Ritualists" or "Old Believers", continued the traditional stylization of icons, while the State Church modified its practice. From that time icons began to be painted not only in the traditional stylized and nonrealistic mode, but also in a mixture of Russian stylization and Western European realism, and in a Western European manner very much like that of Catholic religious art of the time. The Stroganov School and the icons from Nevyansk rank among the last important schools of Russian icon-painting. In Romania, icons painted as reversed images behind glass and set in frames were common in the 19th century and are still made. The process is known as reverse glass painting. "In the Transylvanian countryside, the expensive icons on panels imported from Moldavia, Wallachia, and Mt. Athos were gradually replaced by small, locally produced icons on glass, which were much less expensive and thus accessible to the Transylvanian peasants[.]" The earliest historical records about icons in Serbia dates back to the period of Nemanjić dynasty. One of the notable schools of Serb icons was active in the Bay of Kotor from the 17th century to the 19th century. Trojeručica meaning "Three-handed Theotokos" is the most important icon of the Serbian Orthodox Church and main icon of Mount Athos. The Coptic Orthodox Church of Alexandria and Oriental Orthodoxy also have distinctive, living icon painting traditions. Coptic icons have their origin in the Hellenistic art of Egyptian Late Antiquity, as exemplified by the Fayum mummy portraits. Beginning in the 4th century, churches painted their walls and made icons to reflect an authentic expression of their faith. The Aleppo School was a school of icon-painting, founded by the priest Yusuf al-Musawwir (also known as Joseph the Painter) and active in Aleppo, which was then a part of the Ottoman Empire, between at least 1645 and 1777. Although the word "icon" is not used in Western Christianity, there are religious works of art which were largely patterned on Byzantine works, and equally conventional in composition and depiction. Until the 13th century, "icon"-like portraits followed East pattern—although very few survive from this early period. From the 13th century, the western tradition came slowly to allow the artist far more flexibility, and a more realist approach to the figures. If only because there was a much smaller number of skilled artists, the quantity of works of art, in the sense of panel paintings, was much smaller in the West, and in most Western settings a single diptych as an altarpiece, or in a domestic room, probably stood in place of the larger collections typical of Orthodox "icon corners". Only in the 15th century did production of painted works of art begin to approach Eastern levels, supplemented by mass-produced imports from the Cretan School. In this century, the use of "icon"-like portraits in the West was enormously increased by the introduction of old master prints on paper, mostly woodcuts which were produced in vast numbers (although hardly any survive). They were mostly sold, hand-coloured, by churches, and the smallest sizes (often only an inch high) were affordable even by peasants, who glued or pinned them straight onto a wall. 
With the Reformation, after an initial uncertainty among early Lutherans, who painted a few "icon"-like depictions of leading Reformers and continued to paint scenes from Scripture, Protestants came down firmly against icon-like portraits, especially larger ones, even of Christ. Many Protestants found these "idolatrous". The Catholic Church accepted the decrees of the iconodule Seventh Ecumenical Council regarding images. There is some minor difference, however, in the Catholic attitude to images from that of the Orthodox. Following Gregory the Great, Catholics emphasize the role of images as the "Biblia Pauperum", the "Bible of the Poor", from which those who could not read could nonetheless learn. Catholics also, however, share the same viewpoint with the Orthodox when it comes to image veneration, believing that whenever approached, sacred images are to be reverenced. Though using both flat wooden panel and stretched canvas paintings, Catholics traditionally have also favored images in the form of three-dimensional statuary, whereas in the East, statuary is much less widely employed. A joint Lutheran–Orthodox statement made in the 7th Plenary of the Lutheran–Orthodox Joint Commission, in July 1993 in Helsinki, reaffirmed the ecumenical council decisions on the nature of Christ and the veneration of images.
https://en.wikipedia.org/wiki?curid=14800
Icon (programming language) Icon is a very high-level programming language featuring goal-directed execution and many facilities for managing strings and textual patterns. It is related to SNOBOL and SL5, string processing languages. Icon is not object-oriented, but an object-oriented extension called Idol was developed in 1996 which eventually became Unicon. The Icon language is derived from the ALGOL class of structured programming languages, and thus has syntax similar to C or Pascal. Icon is most similar to Pascal, using := syntax for assignments, the procedure keyword and similar syntax. On the other hand, Icon uses C-style brackets for structuring execution groups, and programs start by running a procedure called main. In many ways Icon also shares features with most scripting languages (as well as SNOBOL and SL5, from which they were taken): variables do not have to be declared, types are cast automatically, and numbers can be converted to strings and back automatically. Another feature common to many scripting languages, but not all, is the lack of a line-ending character; in Icon, lines not ended by a semicolon get ended by an implied semicolon if it makes sense. Procedures are the basic building blocks of Icon programs. Although they use Pascal naming, they work more like C functions and can return values; there is no function keyword in Icon. One of Icon's key concepts is that control structures are based on the "success" or "failure" of expressions, rather than on boolean logic, as in most other programming languages. This feature derives directly from SNOBOL, in which any pattern match and/or replacement operation could be followed by success and/or failure clauses that specified a statement label to be branched to under the requisite condition. Under the goal-directed branching model, a simple comparison like if a < b does not mean "if the operations to the right evaluate to true", as it would in most languages; instead, it means something more like "if the operations to the right succeed". In this case the < operator succeeds if the comparison is true, so the end result is the same. In addition, the < operator returns its second argument if it succeeds, allowing things like if a < b < c, a common type of comparison that in most languages must be written as a conjunction of two inequalities like a < b && b < c. Icon uses success or failure for all flow control, so this simple code: if a := read() then write(a) will copy one line of standard input to standard output. It will work even if the read() causes an error, for instance, if the file does not exist. In that case the statement a := read() will fail, and write will simply not be called. Success and failure are passed "up" through functions, meaning that a failure inside a nested function will cause the functions calling it to fail as well. For instance, here is a program that copies an entire file: while write(read()) When the read() command fails, at the end of file for instance, the failure will be passed up the call chain, and write() will fail as well. The while, being a control structure, stops on failure. A similar example written in pseudocode (using syntax close to Java) might read: try { while ((a = readLine()) != null) write(a); } catch (Exception e) { /* handle the error */ } This case needs two comparisons: one for end of file (EOF, the null test) and another for all other errors (the catch clause). Since Java does not allow exceptions to be compared as logic elements, as under Icon, the lengthy try/catch syntax must be used instead. Try blocks also impose a performance penalty, even if no exception is thrown, a distributed cost that Icon avoids.
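For comparison, a C rendering of the same file copy (a sketch, not from the original article) makes the two separate checks explicit: fgets returns NULL both at end of file and on error, so the two cases must be told apart afterwards with ferror.

#include <stdio.h>

int main(void) {
    char line[4096];
    while (fgets(line, sizeof line, stdin) != NULL)
        fputs(line, stdout);
    if (ferror(stdin)) { /* a genuine read error... */
        perror("read");
        return 1;
    }
    return 0;            /* ...as opposed to ordinary end of file */
}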
Icon refers to this concept as "goal-directed execution", referring to the way that execution continues until some goal is reached. In the example above the goal is to read the entire file; the read command succeeds when information has been read, and fails when it hasn't. The goal is thus coded directly in the language, instead of by checking return codes or similar constructs. Expressions in Icon often return a single value, for instance, will evaluate and succeed if the value of x is less than 5, or else fail. However, many expressions do not "immediately" return success or failure, returning values in the meantime. This drives the examples with and ; causes to continue to return values until it fails. This is a key concept in Icon, known as "generators". Generators drive much of the loop functionality in the language, but does so without the need for an explicit loop comparing values at each iteration. Within the parlance of Icon, the evaluation of an expression or function produces a "result sequence". A result sequence contains all the possible values that can be generated by the expression or function. When the result sequence is exhausted the expression or function fails. Iteration over the result sequence is achieved either implicitly via Icon's goal directed evaluation or explicitly via the clause. Icon includes several generator-builders. The "alternator" syntax allows a series of items to be generated in sequence until one fails: can generate "1", "hello", and "5" if x is less than 5. Alternators can be read as "or" in many cases, for instance: will write out the value of y if it is smaller than x "or" 5. Internally Icon checks every value from left to right until one succeeds or the list empties and it returns a failure. Functions will not be called unless evaluating their parameters succeeds, so this example can be shortened to: Another simple generator is , which generates lists of integers; will call ten times. The "bang syntax" generates every item of a list; will output each character of aString on a new line. This concept is powerful for string operations. Most languages include a function known as or that returns the location of one string within another. For example: This code will return 4, the position of the first occurrence of the word "the" (assuming the indices start at 0). To get the next instance of "the" an alternate form must be used, the 5 at the end saying it should look from position 5 on. So to extract all the occurrences of "the", a loop must be used: Under Icon the function is a generator, and will return the next instance of the string each time it is resumed before failing when it reaches the end of the string. The same code can be written: Of course there are times where one wants to find a string after some point in input, for instance, if scanning a text file containing data in multiple columns. Goal-directed execution works here as well: The position will only be returned if "the" appears after position 5, the comparison will fail otherwise. Comparisons that succeed return the right-hand result, so it is important to put the find on the right-hand side of the comparison. If it were written: then "5" would be written instead of the result of . Icon adds several control structures for looping through generators. The operator is similar to , looping through every item returned by a generator and exiting on failure: The syntax actually injects values into the function in a fashion similar to blocks under Smalltalk. 
For instance, the above loop can be re-written this way: every write(i to j) Generators can be defined as procedures using the suspend keyword, for example: procedure findOdd(pattern, theString); every i := find(pattern, theString) do if i % 2 = 1 then suspend i; end This example loops over theString using find to look for pattern. When one is found, and the position is odd, the location is returned from the function with suspend. Unlike return, suspend memorizes the state of the generator, allowing it to pick up where it left off on the next iteration. Icon has features to make working with strings easier. Most notable is the "scanning" system, which repeatedly calls functions on a string: s ? write(find("the")) is a short form of the examples shown earlier. In this case the "subject" of the find function is placed outside the parameters, in front of the question mark. Icon function signatures identify the subject parameter so that it can be hoisted in this fashion. Substrings can be extracted from a string by using a range specification within brackets. A range specification can return a point to a single character, or a slice of the string. Strings can be indexed from either the right or the left. Positions within a string are defined to be between the characters, 1 A 2 B 3 C 4, and can be specified from the right, −3 A −2 B −1 C 0. For a string s := "ABC", for example, s[2] is "B", s[1:3] is "AB", and s[2+:2] is "BC", where the last example shows using a length instead of an ending position. The subscripting specification can be used as an lvalue within an expression. This can be used to insert strings into another string or delete parts of a string. Icon's subscript indices are between the elements. Given the string s := "ABCDEFG", the indexes are: 1 A 2 B 3 C 4 D 5 E 6 F 7 G 8. The slice s[3:5] is the string between the indices 3 and 5, which is the string "CD". For example, the assignment s[3:5] := "xy" replaces the slice "CD" with "xy", leaving s as "ABxyEFG". Icon also has syntax to build lists (or "arrays"): aCat := ["muffins", "tabby", 2002, 8] The items within a list can be of any type, including other structures. To build larger lists, Icon includes the list function; aList := list(10, "word") generates a list containing 10 copies of "word". Like arrays in other languages, Icon allows items to be looked up by position, e.g., weight := aCat[4]. As with strings, the indices are between the elements, and a slice of a list can be obtained by specifying a range, e.g., aCat[2:4] produces the list ["tabby", 2002]. Unlike for strings, a slice of a list is not an lvalue. The "bang syntax" enumerates the elements; every write(!aCat) will print out four lines, each with one element. Icon includes the stack-like functions push and pop to allow arrays to form the bases of stacks and queues. Icon also includes functionality for sets and associative arrays with "tables": symbols := table(0); symbols["there"] := 1; symbols["here"] := 2 This code creates a table that will use zero as the default value of any unknown key. It then adds two items into it, with the keys "there" and "here", and values 1 and 2. One of the powerful features of Icon is string scanning. The string scanning operator, ?, saves the current string scanning environment and creates a new string scanning environment. The string scanning environment consists of two keyword variables, &subject and &pos, where &subject is the string being scanned, and &pos is the "cursor" or current position within the subject string. For example, "this is a string" ? write("subject=[", &subject, "] pos=[", &pos, "]") would produce subject=[this is a string] pos=[1] Built-in and user-defined functions can be used to move around within the string being scanned. Many of the built-in functions default to working on &subject and &pos (for example, the find function). The following, for example, will write all blank-delimited "words" in a string: s ? while not pos(0) do { tab(many(' ')); word := tab(upto(' ') | 0); write(word) } More complex code integrates generators and string scanning within the language; in such code the conjunction idiom expr1 & expr2 is common, since a conjunction returns the value of the last expression.
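Returning to the find example, a C rendering of the same search (a sketch using the standard strstr function, not taken from the article) shows the bookkeeping that Icon's generator hides: the resume position must be carried by hand in a pointer.

#include <stdio.h>
#include <string.h>

int main(void) {
    const char *s = "the cat and the dog";
    const char *p = s;
    /* C analogue of Icon's: every write(find("the", s)) */
    while ((p = strstr(p, "the")) != NULL) {
        printf("%ld\n", (long)(p - s)); /* 0-based position of this match */
        p += 1;                         /* resume just past this match */
    }
    return 0;
}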
The definitive work is "The Icon Programming Language" (third edition) by Griswold and Griswold, . It is out of print but can be downloaded as a PDF. Icon also has co-expressions, providing non-local exits for program execution. See "The Icon Programming language" and also Shamim Mohamed's article "Co-expressions in Icon".
https://en.wikipedia.org/wiki?curid=14801
Iconology Iconology is a method of interpretation in cultural history and the history of the visual arts used by Aby Warburg, Erwin Panofsky and their followers that uncovers the cultural, social, and historical background of themes and subjects in the visual arts. Though Panofsky differentiated between iconology and iconography, the distinction is not very widely followed, "and they have never been given definitions accepted by all iconographers and iconologists". Few 21st-century authors continue to use the term "iconology" consistently, and instead use iconography to cover both areas of scholarship. To those who use the term, iconology is derived from synthesis rather than scattered analysis and examines symbolic meaning on more than its face value by reconciling it with its historical context and with the artist's body of work – in contrast to the widely descriptive iconography, which, as described by Panofsky, is an approach to studying the content and meaning of works of art that is primarily focused on classifying, establishing dates, provenance and other necessary fundamental knowledge concerning the subject matter of an artwork that is needed for further interpretation. Panofsky's "use of iconology as the principal tool of art analysis brought him critics." For instance, in 1946, Jan Gerrit Van Gelder "criticized Panofsky's iconology as putting too much emphasis on the symbolic content of the work of art, neglecting its formal aspects and the work as a unity of form and content." Furthermore, iconology is mostly avoided by social historians who do not accept the theoretical dogmatism in the work of Panofsky. Erwin Panofsky defines iconography as "a known principle in the known world", while iconology is "an iconography turned interpretive". According to his view, iconology tries to reveal the underlying principles that form the basic attitude of a nation, a period, a class, a religious or philosophical perspective, which is modulated by one personality and condensed into one work. According to Roelof van Straten, iconology "can explain why an artist or patron chose a particular subject at a specific location and time and represented it in a certain way. An iconological investigation should concentrate on the social-historical, not art-historical, influences and values that the artist might not have consciously brought into play but are nevertheless present. The artwork is primarily seen as a document of its time." Warburg used the term "iconography" in his early research, replacing it in 1908 with "iconology" in his particular method of visual interpretation called "critical iconology", which focused on the tracing of motifs through different cultures and visual forms. In 1932, Panofsky published a seminal article introducing a three-step method of visual interpretation dealing with (1) primary or natural subject matter; (2) secondary or conventional subject matter, i.e. iconography; (3) tertiary or intrinsic meaning or content, i.e. iconology. Whereas iconography analyses the world of images, stories and allegories and requires knowledge of literary sources, an understanding of the history of types, and of how themes and concepts were expressed by objects and events under different historical conditions, iconology interprets intrinsic meaning or content and the world of symbolical values by using "synthetic intuition".
The interpreter is aware of the essential tendencies of the human mind as conditioned by psychology and world view; he analyses the history of cultural symptoms or symbols, or how tendencies of the human mind were expressed by specific themes due to different historical conditions. Moreover, when understanding the work of art as a document of a specific civilization, or of a certain religious attitude therein, the work of art becomes a symptom of something else, which expresses itself in a variety of other symptoms. Interpreting these symbolical values, which can be unknown to, or different from, the artist's intention, is the object of iconology. Panofsky emphasized that "iconology can be done when there are no originals to look at and nothing but artificial light to work in." According to Ernst Gombrich, "the emerging discipline of iconology ... must ultimately do for the image what linguistics has done for the word." However, Michael Camille is of the opinion that "though Panofsky's concept of iconology has been very influential in the humanities and is quite effective when applied to Renaissance art, it is still problematic when applied to art from periods before and after." In 1952, Creighton Gilbert added another suggestion for a useful meaning of the word "iconology". According to his view, iconology was not the actual investigation of the work of art but rather the result of this investigation. The Austrian art historian Hans Sedlmayr differentiated between "sachliche" and "methodische" iconology. "Sachliche" iconology refers to the "general meaning of an individual painting or of an artistic complex (church, palace, monument) as seen and explained with reference to the ideas which take shape in them." In contrast, "methodische" iconology is the "integral iconography which accounts for the changes and development in the representations". In "Iconology: Images, Text, Ideology" (1986), W.J.T. Mitchell writes that iconology is a study of "what to say about images", concerned with the description and interpretation of visual art, and also a study of "what images say" – the ways in which they seem to speak for themselves by persuading, telling stories, or describing. He pleads for a postlinguistic, postsemiotic "iconic turn", emphasizing the role of "non-linguistic symbol systems". Instead of just pointing out the difference between the material (pictorial or artistic) images, "he pays attention to the dialectic relationship between material images and mental images". According to Dennise Bartelo and Robert Morton, the term "iconology" can also be used for characterizing "a movement toward seeing connections across all the language processes" and the idea about "multiple levels and forms used to communicate meaning" in order to get "the total picture" of learning. "Being both literate in the traditional sense and visually literate are the true mark of a well-educated human." For several years, new approaches to iconology have developed in the theory of images. This is the case with what Jean-Michel Durafour, a French philosopher and theorist of cinema, proposed to call "econology", a biological approach to images as forms of life, crossing iconology, ecology and the sciences of nature. In an econological regime, the image ("eikon") self-speciates, that is to say, it self-iconicizes with others and eco-iconicizes with them its iconic habitat ("oikos").
Iconology, mainly Warburgian iconology, is thus merged with a conception of the relations between natural beings inherited, among others (Arne Næss, etc.), from the writings of Kinji Imanishi. For Imanishi, living beings are subjects. Or, more precisely, the environment and the living being are just one. One of the main consequences is that the "specity", the living individual, "self-eco-speciates its place of life" ("Freedom in Evolution"). As far as the images are concerned: "If the living species self-specify, the images self-iconicize. This is not a tautology. The images update some of their iconic virtualities. They live in the midst of other images, past or present, but also future (those are only human classifications), which they have relations with. They self-iconicize in an iconic environment which they interact with, and which in particular makes them the images they are. Or more precisely, insofar as images have an active part: "the images self-eco-iconicize their iconic environment"." "Studies in Iconology" is the title of a book by Erwin Panofsky on humanistic themes in the art of the Renaissance, which was first published in 1939. It is also the name of a peer-reviewed series of books started in 2014 under the editorship of Barbara Baert and published by Peeters international academic publishers, Leuven, Belgium, addressing the deeper meaning of the visual medium throughout human history in the fields of philosophy, art history, theology and cultural anthropology.
https://en.wikipedia.org/wiki?curid=14802
List of Indian massacres In the history of the European colonization of the Americas, an atrocity termed "Indian massacre" is a specific incident wherein a group of people (military, mob or other) deliberately kills a significant number of relatively defenseless people — usually civilian noncombatants — or summarily executes prisoners of war. The term may refer to either the killing of people of European descent by Native Americans and First Nations or to the killing of Native American and First Nation peoples by people of European descent and/or the military. "Indian massacre" is a phrase whose use and definition has evolved and expanded over time. The phrase was initially used by European colonists to describe attacks by indigenous Americans which resulted in mass colonial casualties. While similar attacks by colonists on Indian villages were called "raids" or "battles", successful Indian attacks on white settlements or military posts were routinely termed "massacres". Knowing very little about the native inhabitants of the American frontier, the colonists were deeply fearful, and often, European Americans who had rarely – or never – seen a Native American read Indian atrocity stories in popular literature and newspapers. Emphasis was placed on the depredations of "murderous savages" in their information about Indians, and as the migrants headed further west, they frequently feared the Indians they would encounter. The phrase eventually became commonly used also to describe mass killings of American Indians. Killings described as "massacres" often had an element of indiscriminate targeting, barbarism, or genocidal intent. According to one historian, "Any discussion of genocide must, of course, eventually consider the so-called Indian Wars", the term commonly used for U.S. Army campaigns to subjugate Indian nations of the American West beginning in the 1860s. In an older historiography, key events in this history were narrated as battles. Since the late 20th century, it has become more common for scholars to refer to certain of these events as massacres, especially if there were large numbers of women and children as victims. This includes the Colorado territorial militia's slaughter of Cheyenne at Sand Creek (1864), and the US army's slaughter of Shoshone at Bear River (1863), Blackfeet on the Marias River (1870), and Lakota at Wounded Knee (1890). Some scholars have begun referring to these events as "genocidal massacres," defined as the annihilation of a portion of a larger group, sometimes to provide a lesson to the larger group. It is difficult to determine the total number of people who died as a result of "Indian massacres". In "The Wild Frontier: Atrocities during the American-Indian War from Jamestown Colony to Wounded Knee", lawyer William M. Osborn compiled a list of alleged and actual atrocities in what would eventually become the continental United States, from first contact in 1511 until 1890. His criteria for inclusion were the intentional and indiscriminate murder, torture, or mutilation of civilians, the wounded, and prisoners. His list included 7,193 people who died from atrocities perpetrated by those of European descent, and 9,156 people who died from atrocities perpetrated by Native Americans. In "An American Genocide, The United States and the California Catastrophe, 1846–1873", historian Benjamin Madley recorded the numbers of killings of California Indians between 1846 and 1873. 
He found evidence that during this period, at least 9,400 to 16,000 California Indians were killed by non-Indians. Most of these killings occurred in what he said were more than 370 massacres (defined by him as the "intentional killing of five or more disarmed combatants or largely unarmed noncombatants, including women, children, and prisoners, whether in the context of a battle or otherwise"). This is a listing of some of the events reported then or referred to now as "Indian massacre". This list contains only incidents that occurred in Canada or the United States, or territory presently part of the United States.
https://en.wikipedia.org/wiki?curid=14804
Islamic calendar The Islamic calendar, also known as the Hijri, Lunar Hijri, Muslim or Arabic calendar, is a lunar calendar consisting of 12 lunar months in a year of 354 or 355 days. It is used to determine the proper days of Islamic holidays and rituals, such as the annual period of fasting and the proper time for the Hajj. The civil calendar of almost all countries where the religion is predominantly Muslim is the Gregorian calendar, with Syriac month-names used in the Levant and Mesopotamia (Iraq, Syria, Jordan, Lebanon and Palestine). Notable exceptions to this rule are Iran and Afghanistan, which use the Solar Hijri calendar. Rents, wages and similar regular commitments are generally paid by the civil calendar. The Islamic calendar employs the Hijri era, whose epoch was established as the Islamic New Year of 622 AD/CE. During that year, Muhammad and his followers migrated from Mecca to Medina and established the first Muslim community ("ummah"), an event commemorated as the Hijra. In the West, dates in this era are usually denoted AH (Latin "anno Hegirae", "in the year of the Hijra") in parallel with the Christian (AD), Common (CE) and Jewish eras (AM). In Muslim countries, it is also sometimes denoted as H from its Arabic form. In English, years prior to the Hijra are reckoned as BH ("Before the Hijra"). The current Islamic year is 1441 AH. In the Gregorian calendar, 1441 AH runs from approximately 31 August 2019 to 20 August 2020. For central Arabia, especially Mecca, there is a lack of epigraphical evidence, but details are found in the writings of Muslim authors of the Abbasid era. Inscriptions of the ancient South Arabian calendars reveal the use of a number of local calendars. At least some of these South Arabian calendars followed the lunisolar system. Both al-Biruni and al-Mas'udi suggest that the ancient Arabs used the same month names as the Muslims, though they also record other month names used by the pre-Islamic Arabs. The Islamic tradition is unanimous in stating that Arabs of Tihamah, Hejaz, and Najd distinguished between two types of months, permitted ("ḥalāl") and forbidden ("ḥarām") months. The forbidden months were four months during which fighting is forbidden, listed as Rajab and the three months around the pilgrimage season, Dhu al-Qa‘dah, Dhu al-Hijjah, and Muharram. A similar if not identical concept to the forbidden months is also attested by Procopius, where he describes an armistice that the Eastern Arabs of the Lakhmid ruler al-Mundhir respected for two months in the summer solstice of 541 AD/CE. However, Muslim historians do not link these months to a particular season. The Qur'an links the four forbidden months with "Nasī’", a word that literally means "postponement". According to Muslim tradition, the decision of postponement was administered by the tribe of Kinanah, by a man known as the "al-Qalammas" of Kinanah and his descendants (pl. "qalāmisa"). Different interpretations of the concept of "Nasī’" have been proposed. Some scholars, both Muslim and Western, maintain that the pre-Islamic calendar used in central Arabia was a purely lunar calendar similar to the modern Islamic calendar. According to this view, "Nasī’" is related to the pre-Islamic practices of the Meccan Arabs, where they would alter the distribution of the forbidden months within a given year without implying a calendar manipulation. This interpretation is supported by Arab historians and lexicographers such as Ibn Hisham, Ibn Manzur, and the corpus of Qur'anic exegesis. 
This is corroborated by an early Sabaic inscription, where a religious ritual was "postponed" ("ns'w") due to war. According to the context of this inscription, the verb "ns'" has nothing to do with intercalation, but only with moving religious events within the calendar itself. The similarity between the religious concept of this ancient inscription and the Qur'an suggests that non-calendaring postponement is also the Qur'anic meaning of "Nasī’". The "Encyclopaedia of Islam" concludes: "The Arabic system of [Nasī’] can only have been intended to move the Hajj and the fairs associated with it in the vicinity of Mecca to a suitable season of the year. It was not intended to establish a fixed calendar to be generally observed." The term "fixed calendar" is generally understood to refer to the non-intercalated calendar. Others concur that it was originally a lunar calendar, but suggest that about 200 years before the Hijra it was transformed into a lunisolar calendar containing an intercalary month added from time to time to keep the pilgrimage within the season of the year when merchandise was most abundant. This interpretation was first proposed by the medieval Muslim astrologer and astronomer Abu Ma'shar al-Balkhi, and later by al-Biruni, al-Mas'udi, and some western scholars. This interpretation considers "Nasī’" to be a synonym of the Arabic word for "intercalation" ("kabīsa"). The Arabs, according to one explanation mentioned by Abu Ma'shar, learned of this type of intercalation from the Jews. The Jewish "Nasi" was the official who decided when to intercalate the Jewish calendar. Some sources say that the Arabs followed the Jewish practice and intercalated seven months over nineteen years, or else that they intercalated nine months over 24 years; there is, however, no consensus among scholars on this issue. Postponement ("Nasī’") of one ritual in a particular circumstance does not imply alteration of the sequence of months, and scholars agree that this did not happen. Al-Biruni also says this did not happen: the festivals were kept within their season by intercalation every second or third year of a month between Dhu al-Hijjah and Muharram. He also says that, in terms of the fixed calendar that was not introduced until 10 AH (632 AD/CE), the first intercalation was, for example, of a month between Dhu al-Hijjah and Muharram, the second of a month between Muharram and Safar, the third of a month between Safar and Rabi' I, and so on. The intercalations were arranged so that there were seven of them every nineteen years. The notice of intercalation was issued at the pilgrimage, the next month would be "Nasī’" and Muharram would follow. If, on the other hand, the names relate to the intercalated rather than the fixed calendar, the second intercalation might be, for example, of a month between Muharram and Safar allowing for the first intercalation, and the third intercalation of a month between Safar and Rabi' I allowing for the two preceding intercalations, and so on. The time for the intercalation to move from the beginning of the year to the end (twelve intercalations) is the time it takes the fixed calendar to revolve once through the seasons (about 32 1/2 tropical years). There are two major drawbacks of such a system, which would explain why it is not known ever to have been used anywhere in the world. 
First, it cannot be regulated by means of a cycle (the only cycles known in antiquity were the octaeteris (3 intercalations in 8 years) and the enneadecaeteris (7 intercalations in 19 years)). Secondly, without a cycle it is difficult to establish from the number of the year (a) if it is intercalary and (b) if it is intercalary, where exactly in the year the intercalation is located. Although some scholars (see list above) claim that the holy months were shuffled about for convenience without the use of intercalation, there is no documentary record of the festivals of any of the holy months being observed in any month other than those they are now observed in. The Qur'an (sura 9.37) only refers to the "postponement" of a sacred month. If they were shuffled as suggested, one would expect there to be a prohibition against "anticipation" as well. If the festivities of the sacred months were kept in season by moving them into later months, they would move through the whole twelve months in only 33 years. Had this happened, at least one writer would have mentioned it. Sura 9.36 states "Verily, the number of months with Allah is twelve months" and sura 9.37 refers to "adjusting the number of months". Such adjustment can only be effected by intercalation. There are a number of indications that the intercalated calendar was similar to the Jewish calendar, whose year began in the spring. There are clues in the names of the months themselves: In the intercalated calendar's last year (AD/CE 632), Dhu al-Hijjah corresponded to March. The Battle of the Trench in Shawwal and Dhu'l Qi'dah of AH 5 coincided with "harsh winter weather". Military campaigns clustered round Ramadan, when the summer heat had dissipated, and all fighting was forbidden during Rajab, at the height of summer. The invasion of Tabuk in Rajab AH 9 was hampered by "too much hot weather" and "drought". In AH 1 Muhammad noted the Jews of Yathrib observing a festival when he arrived on Monday, 8 Rabi' I. Rabi' I is the third month and if it coincided with the third month of the Jewish calendar the festival would have been the Feast of Weeks, which is observed on the 6th and 7th days of that month. In the tenth year of the Hijra, as documented in the Qur'an (Surah At-Tawbah (9):36–37), Muslims believe God revealed the "prohibition of the Nasī'". The prohibition of Nasī' would presumably have been announced when the intercalated month had returned to its position just before the month of Nasī' began. If Nasī' meant intercalation, then the number and the position of the intercalary months between AH 1 and AH 10 are uncertain; western calendar dates commonly cited for key events in early Islam such as the Hijra, the Battle of Badr, the Battle of Uhud and the Battle of the Trench should be viewed with caution as they might be in error by one, two, three or even four lunar months. This prohibition was mentioned by Muhammad during the farewell sermon which was delivered on 9 Dhu al-Hijjah AH 10 (Julian date Friday 6 March, 632 AD/CE) on Mount Arafat during the farewell pilgrimage to Mecca. The three successive sacred (forbidden) months mentioned by Prophet Muhammad (months in which battles are forbidden) are Dhu al-Qa'dah, Dhu al-Hijjah, and Muharram, months 11, 12, and 1 respectively. The single forbidden month is Rajab, month 7. These months were considered forbidden both within the new Islamic calendar and within the old pagan Meccan calendar. The Islamic days, like those in the Hebrew and Bahá'í calendars, begin at sunset. 
The Christian liturgical day, kept in monasteries, begins with vespers in the evening, in line with the other Abrahamic traditions. Christian and planetary weekdays begin at the following midnight. Muslims gather for worship at a mosque at noon on "gathering day", which corresponds with Friday. Thus "gathering day" is often regarded as the weekly day off. This is frequently made official, with many Muslim countries adopting Friday and Saturday (e.g., Egypt, Saudi Arabia) or Thursday and Friday as official weekends, during which offices are closed; other countries (e.g., Iran) choose to make Friday alone a day of rest. A few others (e.g., Turkey, Pakistan, Morocco, Nigeria) have adopted the Saturday-Sunday weekend while making Friday a working day with a long midday break to allow time off for worship. Four of the twelve Hijri months are considered sacred: Rajab (7), and the three consecutive months of Dhū al-Qa'dah (11), Dhu al-Ḥijjah (12) and Muḥarram (1). As the lunar calendar lags behind the solar calendar by about ten days every Gregorian year, months of the Islamic calendar fall in different parts of the Gregorian calendar each year. The cycle repeats every 33 lunar years. Each month of the Islamic calendar commences on the birth of the new lunar cycle. Traditionally this is based on actual observation of the moon's crescent ("hilal") marking the end of the previous lunar cycle and hence the previous month, thereby beginning the new month. Consequently, each month can have 29 or 30 days depending on the visibility of the moon, astronomical positioning of the earth and weather conditions. However, certain sects and groups, most notably Bohra Muslims (namely Alavis, Dawoodis and Sulaymanis) and Shia Ismaili Muslims, use a tabular Islamic calendar (see section below) in which odd-numbered months have thirty days (and also the twelfth month in a leap year) and even months have 29. According to numerous Hadiths, 'Ramadan' is one of the names of God in Islam, and as such it is prohibited to say only "Ramadan" in reference to the calendar month; it is necessary to say the "month of Ramadan", as reported in Sunni, Shia and Zaydi Hadiths. In pre-Islamic Arabia, it was customary to identify a year after a major event which took place in it. Thus, according to Islamic tradition, Abraha, governor of Yemen, then a province of the Christian Kingdom of Aksum (Ethiopia), attempted to destroy the Kaaba with an army which included several elephants. The raid was unsuccessful, but that year became known as the "Year of the Elephant", during which Muhammad was born (sura al-Fil). Most equate this to the year 570 AD/CE, but a minority use 571 AD/CE. The first ten years of the Hijra were not numbered, but were named after events in the life of Muhammad according to Abū Rayḥān al-Bīrūnī. In AH 17 (638 AD/CE), Abu Musa Ashaari, one of the officials of the Caliph Umar in Basrah, complained about the absence of any years on the correspondence he received from Umar, making it difficult for him to determine which instructions were most recent. This report convinced Umar of the need to introduce an era for Muslims. After debating the issue with his counsellors, he decided that the first year should be the year of Muhammad's arrival at Medina (known as Yathrib before Muhammad's arrival). Uthman ibn Affan then suggested that the months begin with Muharram, in line with the established custom of the Arabs at that time. 
The years of the Islamic calendar thus began with the month of Muharram in the year of Muhammad's arrival at the city of Medina, even though the actual emigration took place in Safar and Rabi' I of the intercalated calendar, two months before the commencement of Muharram in the new fixed calendar. Because of the Hijra, the calendar was named the Hijri calendar. F. A. Shamsi (1984) postulated that the Arabic calendar was never intercalated. According to him, the first day of the first month of the new fixed Islamic calendar (1 Muharram AH 1) was no different from what was observed at the time. The day the Prophet moved from Quba' to Medina was originally 26 Rabi' I on the pre-Islamic calendar. 1 Muharram of the new fixed calendar corresponded to Friday, 16 July 622 AD/CE, the equivalent civil tabular date (same daylight period) in the Julian calendar. The Islamic day began at the preceding sunset on the evening of 15 July. This Julian date (16 July) was determined by medieval Muslim astronomers by projecting back in time their own tabular Islamic calendar, which had alternating 30- and 29-day months in each lunar year plus eleven leap days every 30 years. For example, al-Biruni mentioned this Julian date in the year 1000 AD/CE. Although not used by either medieval Muslim astronomers or modern scholars to determine the Islamic epoch, the thin crescent moon would have also first become visible (assuming clouds did not obscure it) shortly after the preceding sunset on the evening of 15 July, 1.5 days after the associated dark moon (astronomical new moon) on the morning of 14 July. Though Cook and Crone in "Hagarism" cite a coin from AH 17, the first surviving attested use of a Hijri calendar date alongside a date in another calendar (Coptic) is on a papyrus from Egypt in AH 22, PERF 558. Due to the Islamic calendar's reliance on certain variable methods of observation to determine its month-start-dates, these dates sometimes vary slightly from the month-start-dates of the astronomical lunar calendar, which are based directly on astronomical calculations. Still, the Islamic calendar seldom varies by more than three days from the astronomical-lunar-calendar system, and roughly approximates it. Both the Islamic calendar and the astronomical lunar calendar take no account of the solar year in their calculations, and thus both of these strictly lunar-based calendar systems have no ability to reckon the timing of the four seasons of the year. In the astronomical-lunar-calendar system, a year of 12 lunar months is 354.37 days long. In this calendar system, lunar months begin precisely at the time of the monthly "conjunction", when the Moon is located most directly between the Earth and the Sun. The month is defined as the average duration of a revolution of the Moon around the Earth (29.53 days). By convention, months of 30 days and 29 days succeed each other, adding up over two successive months to 59 full days. This leaves only a small monthly variation of 44 minutes to account for, which adds up to a total of 24 hours (i.e., the equivalent of one full day) in 2.73 years. To settle accounts, it is sufficient to add one day every three years to the lunar calendar, in the same way that one adds one day to the Gregorian calendar every four years. The technical details of the adjustment are described in Tabular Islamic calendar. The Islamic calendar, however, is based on a different set of conventions being used for the determination of the month-start-dates. 
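Before turning to those observational conventions, the drift figures just quoted are easy to check. The short Python sketch below is only a sanity check on the arithmetic above; the precise synodic-month length of 29.530589 days is a standard astronomical value supplied here, not taken from the text.

# Sanity check on the lunar arithmetic quoted above: a 30-day/29-day month
# pair falls slightly short of two true lunations, and the shortfall adds up.
SYNODIC_MONTH = 29.530589                  # mean lunation in days (assumed standard value)

lunar_year = 12 * SYNODIC_MONTH            # ~354.37 days, as stated above
deficit = SYNODIC_MONTH - 29.5             # shortfall per calendar month, in days
print(round(lunar_year, 2))                # 354.37
print(round(deficit * 24 * 60))            # ~44 minutes lost per month
print(round(1 / deficit / 12, 2))          # one full day lost every ~2.72 lunar years
print(round(30 * (lunar_year - 354), 2))   # ~11.01 extra days per 30 years

The last line shows why a 30-year cycle with eleven leap days keeps an arithmetical lunar calendar in step with the mean lunation.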
Each month still has either 29 or 30 days, but due to the variable method of observations employed, there is usually no discernible order in the sequencing of 29- and 30-day month lengths. Traditionally, the first day of each month is the day (beginning at sunset) of the first sighting of the hilal (crescent moon) shortly after sunset. If the hilal is not observed immediately after the 29th day of a month (either because clouds block its view or because the western sky is still too bright when the moon sets), then the day that begins at that sunset is the 30th. Such a sighting has to be made by one or more trustworthy men testifying before a committee of Muslim leaders. Determining the most likely day that the hilal could be observed was a motivation for Muslim interest in astronomy, which put Islam in the forefront of that science for many centuries. Still, because both lunar reckoning systems are ultimately based on the lunar cycle itself, the two systems still roughly correspond to one another, never being more than three days out of synchronisation with one another. This traditional practice for the determination of the start-date of the month is still followed in the overwhelming majority of Muslim countries. Each Islamic state proceeds with its own monthly observation of the new moon (or, failing that, awaits the completion of 30 days) before declaring the beginning of a new month on its territory. However, the lunar crescent becomes visible only some 17 hours after the conjunction, and only subject to the existence of a number of favourable conditions relative to weather, time, geographic location, as well as various astronomical parameters. Because the moon sets progressively later than the sun as one goes west, with a corresponding increase in its "age" since conjunction, western Muslim countries may, under favorable conditions, observe the new moon one day earlier than eastern Muslim countries. Due to the interplay of all these factors, the beginning of each month differs from one Muslim country to another, during the 48-hour period following the conjunction. The information provided by the calendar in any country does not extend beyond the current month. A number of Muslim countries try to overcome some of these difficulties by applying different astronomy-related rules to determine the beginning of months. Thus, Malaysia, Indonesia, and a few others begin each month at sunset on the first day that the moon sets after the sun (moonset after sunset). In Egypt, the month begins at sunset on the first day that the moon sets at least five minutes after the sun. A detailed analysis of the available data shows, however, that there are major discrepancies between what countries say they do on this subject, and what they actually do. In some instances, what a country says it does is impossible. Due to the somewhat variable nature of the Islamic calendar, in most Muslim countries, the Islamic calendar is used primarily for religious purposes, while the solar-based Gregorian calendar is still used primarily for matters of commerce and agriculture. If the Islamic calendar were prepared using astronomical calculations, Muslims throughout the Muslim world could use it to meet all their needs, the way they use the Gregorian calendar today. However, there are divergent views on whether it is licit to do so. 
A majority of theologians oppose the use of calculations (beyond the constraint that each month must be not less than 29 nor more than 30 days) on the grounds that the latter would not conform with Muhammad's recommendation to observe the new moon of Ramadan and Shawwal in order to determine the beginning of these months. However, some jurists see no contradiction between Muhammad's teachings and the use of calculations to determine the beginnings of lunar months. They consider that Muhammad's recommendation was adapted to the culture of the times, and should not be confused with the acts of worship. Thus the jurists Ahmad Muhammad Shakir and Yusuf al-Qaradawi both endorsed the use of calculations to determine the beginning of all months of the Islamic calendar, in 1939 and 2004 respectively. So did the Fiqh Council of North America (FCNA) in 2006 and the European Council for Fatwa and Research (ECFR) in 2007. The major Muslim associations of France also announced in 2012 that they would henceforth use a calendar based on astronomical calculations, taking into account the possibility of crescent sighting anywhere on Earth. However, shortly after the official adoption of this rule by the French Council of the Muslim Faith (CFCM) in 2013, the new leadership of the association decided, on the eve of Ramadan 2013, to follow the Saudi announcement rather than to apply the rule just adopted. This resulted in a division of the Muslim community of France, with some members following the new rule, and others following the Saudi announcement. Isma'ili-Taiyebi Bohras, who have the institution of the "da'i al-mutlaq", follow the tabular Islamic calendar (see section below), prepared on the basis of astronomical calculations since the days of the Fatimid imams. Turkish Muslims use an Islamic calendar which is calculated several years in advance (currently up to 1444 AH/2022 CE) by the Turkish Presidency of Religious Affairs (Diyanet İşleri Başkanlığı). From 1 Muharrem 1400 AH (21 November 1979) until 29 Zilhicce 1435 (24 October 2014) the computed Turkish lunar calendar was based on the following rule: "The lunar month is assumed to begin on the evening when, within some region of the terrestrial globe, the computed centre of the lunar crescent at local sunset is more than 5° above the local horizon and (geocentrically) more than 8° from the Sun." In the current rule the (computed) lunar crescent has to be above the local horizon of Ankara at sunset. Saudi Arabia uses the sighting method to determine the beginning of each month of the Hijri calendar. Since AH 1419 (1998/99), several official hilal sighting committees have been set up by the government to determine the first visual sighting of the lunar crescent at the beginning of each lunar month. Nevertheless, the religious authorities also allow the testimony of less experienced observers and thus often announce the sighting of the lunar crescent on a date when none of the official committees could see it. The country also uses the Umm al-Qura calendar, based on astronomical calculations, but this is restricted to administrative purposes. The parameters used in the establishment of this calendar underwent significant changes over the past decade. Before AH 1420 (before 18 April 1999), if the moon's age at sunset in Riyadh was at least 12 hours, then the day "ending" at that sunset was the first day of the month. 
This often caused the Saudis to celebrate holy days one or even two days before other predominantly Muslim countries, including the dates for the Hajj, which can only be dated using Saudi dates because it is performed in Mecca. For AH 1420–22, if moonset occurred after sunset at Mecca, then the day beginning at that sunset was the first day of a Saudi month, essentially the same rule used by Malaysia, Indonesia, and others (except for the location from which the hilal was observed). Since the beginning of AH 1423 (16 March 2002), the rule has been clarified a little by requiring the geocentric conjunction of the sun and moon to occur before sunset, in addition to requiring moonset to occur after sunset at Mecca. This ensures that the moon has moved past the sun by sunset, even though the sky may still be too bright immediately before moonset to actually see the crescent. In 2007, the Islamic Society of North America, the "Fiqh" Council of North America and the European Council for "Fatwa" and Research announced that they would henceforth use a calendar based on calculations using the same parameters as the "Umm al-Qura" calendar to determine (well in advance) the beginning of all lunar months (and therefore the days associated with all religious observances). This was intended as a first step on the way to unify, at some future time, Muslims' calendars throughout the world. Since 1 October 2016, as a cost-cutting measure, Saudi Arabia has paid the monthly salaries of government employees according to the Gregorian calendar rather than the Islamic calendar. The Solar Hijri calendar is a solar calendar used in Iran and Afghanistan which counts its years from the Hijra or migration of Muhammad from Mecca to Medina in 622 AD/CE. The Tabular Islamic calendar is a rule-based variation of the Islamic calendar, in which months are worked out by arithmetic rules rather than by observation or astronomical calculation. It has a 30-year cycle with 11 leap years of 355 days and 19 years of 354 days. In the long term, it is accurate to one day in about 2,500 solar years or 2,570 lunar years. It also deviates up to about one or two days in the short term. Microsoft uses the "Kuwaiti algorithm", a variant of the tabular Islamic calendar, to convert Gregorian dates to the Islamic ones. Microsoft claimed that the variant is based on a statistical analysis of historical data from Kuwait; however, it matches a known tabular calendar. Important dates in the Islamic (Hijri) year include observances shared by all Muslims as well as days considered important predominantly for Shia Muslims. Conversions may be made by using the Tabular Islamic calendar, or, for greatest accuracy (one day in 15,186 years), via the Jewish calendar. Theoretically, the days of the months correspond in both calendars if the displacements which are a feature of the Jewish system are ignored. The table gives, for nineteen years, the Muslim month which corresponds to the first Jewish month. This table may be extended since every nineteen years the Muslim month number increases by seven. When it goes above twelve, subtract twelve and add one to the year AH. From 412 AD/CE to 632 AD/CE inclusive the month number is 1 and the calculation gives the month correct to a month or so. 622 AD/CE corresponds to BH 1 and AH 1. For earlier years, year BH = (623 or 622) − year AD/CE. An example calculation: What is the civil date and year AH of the first day of the first month in the year 20875 AD/CE? 
We first find the Muslim month number corresponding to the first month of the Jewish year which begins in 20874 AD/CE. Dividing 20874 by 19 gives quotient 1098 and remainder 12. Dividing 2026 by 19 gives quotient 106 and remainder 12. 2026 is chosen because it gives the same remainder on division by 19 as 20874. The two years are therefore (1098−106)=992×19 years apart. The Muslim month number corresponding to the first Jewish month is therefore 992×7=6944 higher than in 2026. To convert into years and months divide by twelve – 6944/12=578 years and 8 months. Adding, we get 1447y 10m (the tabulated value for 2026) + 20874y − 2026y + 578y 8m = 20874y 6m. Therefore, the first month of the Jewish year beginning in 20874 AD/CE corresponds to the sixth month of the Muslim year AH 20874. The worked example in Conversion between Jewish and civil dates shows that the civil date of the first day of this month (ignoring the displacements) is Friday, 14 June. The year AH 20875 will therefore begin seven months later, on the first day of the eighth Jewish month, which the worked example shows to be 7 January, 20875 AD/CE (again ignoring the displacements). The date given by this method, being calculated, may differ by a day from the actual date, which is determined by observation. A reading of the section which follows will show that the year AH 20875 is wholly contained within the year 20875 AD/CE, and also that in the Gregorian calendar this correspondence occurs one year earlier. The reason for the discrepancy is that the Gregorian year (like the Julian, though less so) is slightly too long, so the Gregorian date for a given AH date will be earlier and the Muslim calendar catches up sooner. An Islamic year will be entirely within a Gregorian year of the same number in the year 20874, after which year the number of the Islamic year will always be greater than the number of the concurrent civil year. The Islamic calendar year of 1429 occurred entirely within the civil calendar year of 2008. Such years occur once every 33 or 34 Islamic years (32 or 33 civil years). Because a Hijri or Islamic lunar year is between 10 and 12 days shorter than a civil year, it begins 10–12 days earlier in the civil year following the civil year in which the previous Hijri year began. Once every 33 or 34 Hijri years, or once every 32 or 33 civil years, the beginning of a Hijri year (1 Muharram) coincides with one of the first ten days of January. Subsequent Hijri New Years move backward through the civil year back to the beginning of January again, passing through each civil month from December to January. The Islamic calendar is now used primarily for religious purposes, and for official dating of public events and documents in Muslim countries. Because of its nature as a purely lunar calendar, it cannot be used for agricultural purposes and historically Islamic communities have used other calendars for this purpose: the Egyptian calendar was formerly widespread in Islamic countries, and the Iranian calendar and the 1789 Ottoman calendar (a modified Julian calendar) were also used for agriculture in their countries. In the Levant and Iraq the Aramaic names of the Babylonian calendar are still used for all secular matters. In the Maghreb, Berber farmers in the countryside still use the Julian calendar for agrarian purposes. These local solar calendars have receded in importance with the near-universal adoption of the Gregorian calendar for civil purposes. Saudi Arabia uses the lunar Islamic calendar. 
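The tabular Islamic calendar mentioned above (the basis of the "Kuwaiti algorithm") can be made concrete in a few lines. The Python sketch below is an illustration, not any particular authority's implementation: the epoch, Julian day number 1948440 for Friday, 16 July 622 AD/CE (Julian), follows from the text, while the specific leap-year pattern {2, 5, 7, 10, 13, 16, 18, 21, 24, 26, 29} is the most widely used convention for the 11 leap years per 30-year cycle, and the Fliegel-Van Flandern routine for printing Gregorian dates is a standard algorithm; both are assumptions here.

# Tabular Islamic calendar: months alternate 30 and 29 days; 11 of every
# 30 years receive a leap day. The leap-year pattern is an assumed convention.
LEAP_YEARS = {2, 5, 7, 10, 13, 16, 18, 21, 24, 26, 29}
EPOCH_JDN = 1948440      # Julian day number of 1 Muharram AH 1 (16 July 622, Julian)

def is_leap(year_ah):
    return year_ah % 30 in LEAP_YEARS

def islamic_to_jdn(year_ah, month, day):
    """Julian day number of a tabular Islamic (civil-epoch) date."""
    prior_years = 354 * (year_ah - 1) + (3 + 11 * year_ah) // 30   # 354/355-day years
    prior_months = 29 * (month - 1) + month // 2                   # 30, 29, 30, 29, ...
    return EPOCH_JDN + prior_years + prior_months + day - 1

def jdn_to_gregorian(jdn):
    """Fliegel-Van Flandern conversion to the (proleptic) Gregorian calendar."""
    l = jdn + 68569
    n = 4 * l // 146097
    l -= (146097 * n + 3) // 4
    i = 4000 * (l + 1) // 1461001
    l += 31 - 1461 * i // 4
    j = 80 * l // 2447
    d = l - 2447 * j // 80
    l = j // 11
    return 100 * (n - 49) + i + l, j + 2 - 12 * l, d   # (year, month, day)

assert sum(is_leap(y) for y in range(1, 31)) == 11     # 11 leap years per cycle
assert islamic_to_jdn(1, 1, 1) == EPOCH_JDN            # the epoch itself
print(jdn_to_gregorian(islamic_to_jdn(1441, 1, 1)))    # (2019, 9, 1)

Under these assumptions the computed 1 Muharram AH 1441 lands on 1 September 2019, one day after the approximate 31 August date quoted earlier in the article, which is exactly the sort of one-day disagreement between arithmetical schemes and sighting-based dates that this section describes.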
In Indonesia, the Javanese calendar, created by Sultan Agung in 1633, combines elements of the Islamic and pre-Islamic Saka calendars. British author Nicholas Hagger writes that after seizing control of Libya, Muammar Gaddafi "declared" on 1 December 1978 "that the Muslim calendar should start with the death of the prophet Mohammed in 632 rather than the hijra (Mohammed's 'emigration' from Mecca to Medina) in 622". This put the country ten solar years behind the standard Muslim calendar. However, according to the 2006 "Encyclopedia of the Developing World", "More confusing still is Qaddafi's unique Libyan calendar, which counts the years from the Prophet's birth, or sometimes from his death. The months July and August, named after Julius and Augustus Caesar, are now Nasser and Hannibal respectively." Reflecting on a 2001 visit to the country, American reporter Neil MacFarquhar observed, "Life in Libya was so unpredictable that people weren't even sure what year it was. The year of my visit was officially 1369. But just two years earlier Libyans had been living through 1429. No one could quite name for me the day the count changed, especially since both remained in play. ... Event organizers threw up their hands and put the Western year in parentheses somewhere in their announcements."
https://en.wikipedia.org/wiki?curid=14810
Interquartile range In descriptive statistics, the interquartile range (IQR), also called the midspread, middle 50%, or H-spread, is a measure of statistical dispersion, being equal to the difference between the 75th and 25th percentiles, or between the upper and lower quartiles, IQR = Q3 − Q1. In other words, the IQR is the first quartile subtracted from the third quartile; these quartiles can be clearly seen on a box plot of the data. It is a trimmed estimator, defined as the 25% trimmed range, and is a commonly used robust measure of scale. The IQR is a measure of variability, based on dividing a data set into quartiles. Quartiles divide a rank-ordered data set into four equal parts. The values that separate the parts are called the first, second, and third quartiles, denoted by Q1, Q2, and Q3, respectively. Unlike the total range, the interquartile range has a breakdown point of 25%, and is thus often preferred to the total range. The IQR is used to build box plots, simple graphical representations of a probability distribution. The IQR is also used in business as a marker for income rates. For a symmetric distribution (where the median equals the midhinge, the average of the first and third quartiles), half the IQR equals the median absolute deviation (MAD). The median is the corresponding measure of central tendency. The IQR can be used to identify outliers (see below). The IQR of a set of values is calculated as the difference between the upper and lower quartiles, Q3 and Q1. Each quartile is a median calculated as follows: given an even 2n or odd 2n+1 number of values, the first quartile Q1 is the median of the n smallest values and the third quartile Q3 is the median of the n largest values; the second quartile Q2 is the same as the ordinary median. For an example data set of 13 values (an odd number of entries), the quartiles work out to Q1 = 31 and Q3 = 119, so the interquartile range is IQR = Q3 − Q1 = 119 − 31 = 88. In a box plot of such data, the 1.5 × IQR whiskers can be uneven in length. The interquartile range of a continuous distribution can be calculated by integrating the probability density function (which yields the cumulative distribution function—any other means of calculating the CDF will also work). The lower quartile, Q1, is a number such that the integral of the PDF from −∞ to Q1 equals 0.25, while the upper quartile, Q3, is a number such that the integral from −∞ to Q3 equals 0.75; in terms of the CDF, the quartiles are Q1 = CDF−1(0.25) and Q3 = CDF−1(0.75), where CDF−1 is the quantile function. The IQR, mean, and standard deviation of a population P can be used in a simple test of whether or not P is normally distributed, or Gaussian. If P is normally distributed, then the standard score of the first quartile, z1, is −0.67, and the standard score of the third quartile, z3, is +0.67. Given mean = X and standard deviation = σ for P, if P is normally distributed, the first quartile is X − 0.67σ and the third quartile is X + 0.67σ. If the actual values of the first or third quartiles differ substantially from these calculated values, P is not normally distributed. However, a normal distribution can be trivially perturbed so that its Q1 and Q3 standard scores remain at −0.67 and +0.67 while the distribution is no longer normal (so the above test would produce a false positive); a better test of normality, such as a Q–Q plot, would be indicated here. The interquartile range is often used to find outliers in data. Outliers here are defined as observations that fall below Q1 − 1.5 IQR or above Q3 + 1.5 IQR. 
In a boxplot, the highest and lowest occurring values within this limit are indicated by "whiskers" of the box (frequently with an additional bar at the end of the whisker), and any outliers are shown as individual points.
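As a quick illustration of the definitions above, here is a minimal Python sketch. The 13-value data set is hypothetical, chosen so that the quartiles come out at the Q1 = 31 and Q3 = 119 cited in the example; everything else follows the median-of-halves rule, the 1.5 × IQR fences, and the ±0.67 standard-score check described in the article.

# Quartiles by the "median of halves" rule, the IQR, the 1.5*IQR outlier
# fences, and the rough +/-0.67 sigma normality check described above.
from statistics import median, mean, pstdev

# Hypothetical 13-value data set (an odd number of entries, 2n+1 with n = 6),
# chosen so that Q1 = 31 and Q3 = 119 as in the example quoted above.
data = sorted([7, 7, 31, 31, 47, 75, 87, 115, 116, 119, 119, 155, 177])

n = len(data) // 2
q1 = median(data[:n])            # median of the n smallest values
q3 = median(data[-n:])           # median of the n largest values
iqr = q3 - q1
print(q1, q3, iqr)               # 31.0 119.0 88.0

# Outliers: observations below Q1 - 1.5*IQR or above Q3 + 1.5*IQR.
lo, hi = q1 - 1.5 * iqr, q3 + 1.5 * iqr
print([x for x in data if x < lo or x > hi])     # [] -- none in this data set

# Normality check: for Gaussian data the quartiles sit near mean -/+ 0.67*sigma.
m, s = mean(data), pstdev(data)
print(round((q1 - m) / s, 2), round((q3 - m) / s, 2))

A large departure of the last two printed standard scores from −0.67 and +0.67 would suggest, as noted above, that the data are not normally distributed.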
https://en.wikipedia.org/wiki?curid=14812
Indiana Jones (character) Dr. Henry Walton "Indiana" Jones, Jr. is the title character and protagonist of the "Indiana Jones" franchise. George Lucas created the character in homage to the action heroes of 1930s film serials. The character first appeared in the 1981 film "Raiders of the Lost Ark", to be followed by "Indiana Jones and the Temple of Doom" in 1984, "Indiana Jones and the Last Crusade" in 1989, "The Young Indiana Jones Chronicles" from 1992 to 1996, and "Indiana Jones and the Kingdom of the Crystal Skull" in 2008. The character is also featured in novels, comics, video games, and other media. Jones is also featured in several Disney theme park rides, including the Indiana Jones Adventure, Indiana Jones et le Temple du Péril, Indiana Jones Adventure: Temple of the Crystal Skull, and "Epic Stunt Spectacular!" attractions. Jones is most famously portrayed by Harrison Ford and has also been portrayed by River Phoenix (as the young Jones in "The Last Crusade") and in the television series "The Young Indiana Jones Chronicles" by Corey Carrier, Sean Patrick Flanery, and George Hall. Doug Lee has supplied the voice of Jones for two LucasArts video games, "Indiana Jones and the Fate of Atlantis" and "Indiana Jones and the Infernal Machine", David Esch supplied his voice for "Indiana Jones and the Emperor's Tomb", and John Armstrong for "Indiana Jones and the Staff of Kings". Jones is characterized by his iconic accoutrements (bullwhip, fedora, satchel, and leather jacket), wry, witty and sarcastic sense of humor, deep knowledge of ancient civilizations and languages, and fear of snakes. Since his first appearance in "Raiders of the Lost Ark", Indiana Jones has become one of cinema's most famous characters. In 2003, the American Film Institute ranked him the second-greatest film hero of all time. He was also named the greatest movie character by "Empire" magazine. "Entertainment Weekly" ranked Indiana 2nd on their list of The All-Time Coolest Heroes in Pop Culture. "Premiere" magazine also placed Indiana at number 7 on their list of The 100 Greatest Movie Characters of All Time. A native of Princeton, New Jersey, Indiana Jones was introduced as a tenured professor of archaeology in the 1981 film "Raiders of the Lost Ark", set in 1936. The character is an adventurer reminiscent of the 1930s film serial treasure hunters and pulp action heroes. His research is funded by Marshall College (a fictional school named after producer Frank Marshall), where he is a professor of archaeology. He studied under the Egyptologist and archaeologist Abner Ravenwood at the Oriental Institute at the University of Chicago. In this first adventure, he is pitted against Nazis commissioned by Hitler to recover artifacts of great power from the Old Testament (see Nazi archaeology). In consequence, Dr. Jones travels the world to prevent them from recovering the Ark of the Covenant (see also Biblical archaeology). He is aided by Marion Ravenwood and Sallah. The Nazis are led by Jones's archrival, a Nazi-sympathizing French archaeologist named René Belloq, and Arnold Toht, a sinister Gestapo agent. In the 1984 prequel, "Indiana Jones and the Temple of Doom", set in 1935, Jones travels to India and attempts to free enslaved children and the three Sankara stones from the bloodthirsty Thuggee cult. He is aided by Short Round, a boy played by Jonathan Ke Quan, and is accompanied by singer Willie Scott (Kate Capshaw). The prequel is not as centered on archaeology as "Raiders of the Lost Ark" and is considerably darker. 
The third film, 1989's "Indiana Jones and the Last Crusade", set in 1938, returned to the formula of the original, reintroducing characters such as Sallah and Marcus Brody, a scene from Professor Jones's classroom (he now teaches at Barnett College), the globe trotting element of multiple locations, and the return of the infamous Nazi mystics, this time trying to find the Holy Grail. The film's introduction, set in 1912, provided some backstory to the character, specifically the origin of his fear of snakes, his use of a bullwhip, the scar on his chin, and his hat; the film's epilogue also reveals that "Indiana" is not Jones's first name, but a nickname he took from the family dog. The film was a buddy movie of sorts, teaming Jones with his father, Henry Jones, Sr., often to comical effect. Although Lucas intended to make five Indiana Jones films, "Indiana Jones and the Last Crusade" was the last for over eighteen years, as he could not think of a good plot element to drive the next installment. The 2008 film, "Indiana Jones and the Kingdom of the Crystal Skull", is the latest film in the series. Set in 1957, 19 years after the third film, it pits an older, wiser Indiana Jones against Soviet agents bent on harnessing the power of an extraterrestrial device discovered in South America. Jones is aided in his adventure by his former lover, Marion Ravenwood (Karen Allen), and her son—a young greaser named Henry "Mutt" Williams (Shia LaBeouf), later revealed to be Jones's unknown child. There were rumors that Harrison Ford would not return for any future installments and LaBeouf would take over the Indy franchise. This film also reveals that Jones was recruited by the Office of Strategic Services during World War II, attaining the rank of colonel in the United States Army, and implies very strongly that in 1947 he was forced to investigate the Roswell UFO incident and became involved in affairs related to Hangar 51. He is tasked with conducting covert operations with MI6 agent George McHale against the Soviet Union. In March 2016, Disney announced a fifth "Indiana Jones" film in development, with Ford and Spielberg set to return to the franchise. Initially set for release on July 10, 2020, the film's release date was pushed back to July 9, 2021 due to production issues, then further pushed back to July 29, 2022 due to a reshuffle in Disney's release schedule caused by the COVID-19 pandemic. Indiana Jones is featured at several Walt Disney theme park attractions. The Indiana Jones Adventure attractions at Disneyland and Tokyo DisneySea ("Temple of the Forbidden Eye" and "Temple of the Crystal Skull," respectively) place Indy at the forefront of two similar archaeological discoveries. These two temples each contain a wrathful deity who threatens the guests who ride through in World War II troop transports. The attractions, some of the most expensive of their kind at the time, opened in 1995 and 2001, respectively, with sole design credit attributed to Walt Disney Imagineering. Ford was approached to reprise his role as Indiana Jones, but negotiations to secure Ford's participation ultimately broke down in December 1994, for reasons that remain unknown. Instead, Dave Temple provided the voice of Jones. Ford's physical likeness has nonetheless been used in subsequent audio-animatronic figures for the attractions. 
Disneyland Paris also features an Indiana Jones-titled ride where people speed off through ancient ruins in a runaway mine wagon similar to that found in "Indiana Jones and the Temple of Doom". "Indiana Jones and the Temple of Peril" is a looping roller coaster engineered by Intamin, designed by Walt Disney Imagineering, and opened in 1993. The "Indiana Jones Epic Stunt Spectacular!" is a live show that has been presented in the Disney's Hollywood Studios theme park of the Walt Disney World Resort with few changes since the park's 1989 opening, as Disney-MGM Studios. The 25-minute show presents various stunts framed in the context of a feature film production, and recruits members of the audience to participate in the show. Stunt artists in the show re-create and ultimately reveal some of the secrets of the stunts of the "Raiders of the Lost Ark" films, including the well-known "running-from-the-boulder" scene. Stunt performer Anislav Varbanov was fatally injured in August 2009 while rehearsing the popular show. Also formerly at Disney's Hollywood Studios, an audio-animatronic Indiana Jones appeared in another attraction, during The Great Movie Ride's "Raiders of the Lost Ark" segment. Indy also appears in the 2004 Dark Horse Comics story "Into the Great Unknown", collected in "Star Wars Tales Volume 5". In this non-canon story bringing together two of Harrison Ford's best-known roles, Indy and Short Round discover a crash-landed "Millennium Falcon" in the Pacific Northwest, along with Han Solo's skeleton and the realization that a rumored nearby Sasquatch is in fact Chewbacca. Indy also appears in a series of Marvel Comics. The four Indiana Jones film scripts were novelized and published in the time-frame of the films' initial releases. "Raiders of the Lost Ark" was novelized by Campbell Black based on the script by Lawrence Kasdan that was based on the story by George Lucas and Philip Kaufman and published in April 1981 by Ballantine Books; "Indiana Jones and the Temple of Doom" was novelized by James Kahn and based on the script by Willard Huyck & Gloria Katz that was based on the story by George Lucas and published May 1984 by Ballantine Books; "Indiana Jones and the Last Crusade" was novelized by Rob MacGregor based on the script by Jeffrey Boam that was based on a story by George Lucas and Menno Meyjes and published June 1989 by Ballantine Books. Nearly 20 years later "Indiana Jones and the Kingdom of the Crystal Skull" was novelized by James Rollins based on the script by David Koepp based on the story by George Lucas and Jeff Nathanson and published May 2008 by Ballantine Books. In addition, in 2008 to accompany the release of "Kingdom of the Crystal Skull", Scholastic Books published juvenile novelizations of the four scripts written, successively in the order above, by Ryder Windham, Suzanne Weyn, Ryder Windham, and James Luceno. All these books have been reprinted, with "Raiders of the Lost Ark" being retitled "Indiana Jones and the Raiders of the Lost Ark". While these are the principal titles and authors, there are numerous other volumes derived from the four film properties. From February 1991 through February 1999, twelve original Indiana Jones-themed adult novels were licensed by Lucasfilm, Ltd. and written by three genre authors of the period. Ten years afterward, a thirteenth original novel was added, also written by a popular genre author. The first twelve were published by Bantam Books; the last by Ballantine Books in 2009. 
(See Indiana Jones (franchise) for broad descriptions of these original adult novels.) From 1992 to 1996, George Lucas executive-produced a television series named "The Young Indiana Jones Chronicles", aimed mainly at teenagers and children, which showed many of the important events and historical figures of the early 20th century through the prism of Indiana Jones's life. The show initially featured the formula of an elderly (93 to 94 years of age) Indiana Jones played by George Hall introducing a story from his youth by way of an anecdote: the main part of the episode then featured an adventure with either a young adult Indy (16 to 21 years of age) played by Sean Patrick Flanery or a child Indy (8 to 11 years) played by Corey Carrier. One episode, "Young Indiana Jones and the Mystery of the Blues", is bookended by Harrison Ford as Indiana Jones, rather than Hall. Later episodes and telemovies did not have this bookend format. The bulk of the series centers on the young adult Indiana Jones and his activities during World War I as a 16- to 17-year-old soldier in the Belgian Army and then as an intelligence officer and spy seconded to French intelligence. The child Indy episodes follow the boy's travels around the globe as he accompanies his parents on his father's worldwide lecture tour from 1908 to 1910. The show provided some backstory for the films, as well as new information regarding the character. Indiana Jones was born July 1, 1899, and his middle name is Walton (Lucas's middle name). It is also mentioned that he had a sister called Suzie who died of fever as an infant, and that he eventually has a daughter and grandchildren who appear in some episode introductions and epilogues. His relationship with his father, first introduced in "Indiana Jones and the Last Crusade", was further fleshed out with stories about his travels with his father as a young boy. Indy damages or loses his right eye sometime between the events in 1957 and the early 1990s, when the "Old Indy" segments take place, as the elderly Indiana Jones wears an eyepatch. In 1999, Lucas removed the episode introductions and epilogues by George Hall for the VHS and DVD releases, and re-edited the episodes into chronologically ordered feature-length stories. The series title was also changed to "The Adventures of Young Indiana Jones". The character has appeared in several officially licensed games, beginning with adaptations of "Raiders of the Lost Ark", "Indiana Jones and the Temple of Doom", two adaptations of "Indiana Jones and the Last Crusade" (one with purely action mechanics, one with an adventure- and puzzle-based structure) and "Indiana Jones's Greatest Adventures", which included the storylines from all three of the original films. Following this, the games branched off into original storylines with "Indiana Jones in the Lost Kingdom", "Indiana Jones and the Fate of Atlantis", "Indiana Jones and the Infernal Machine", "Indiana Jones and the Emperor's Tomb" and "Indiana Jones and the Staff of Kings". "Emperor's Tomb" sets up Jones's companion Wu Han and the search for Nurhaci's ashes seen at the beginning of "Temple of Doom". The first two games were developed by Hal Barwood and starred Doug Lee as the voice of Indiana Jones; "Emperor's Tomb" had David Esch fill the role and "Staff of Kings" starred John Armstrong. "Indiana Jones and the Infernal Machine" was the first Indy-based game presented in three dimensions, as opposed to the 8-bit, side-scrolling games that preceded it. 
There is also a small game from LucasArts, "Indiana Jones and His Desktop Adventures". A video game was made for young Indy called "Young Indiana Jones and the Instruments of Chaos", as well as a video game version of "The Young Indiana Jones Chronicles". Two Lego Indiana Jones games have also been released. "Lego Indiana Jones: The Original Adventures" was released in 2008 and follows the plots of the first three films. It was followed by "Lego Indiana Jones 2: The Adventure Continues" in late 2009. The sequel includes an abbreviated reprise of the first three films, but focuses on the plot of "Indiana Jones and the Kingdom of the Crystal Skull". Social gaming company Zynga introduced Indiana Jones to their "Adventure World" game in late 2011. "Indiana" Jones's full name is Dr. Henry Walton Jones Jr., and his nickname is often shortened to "Indy". In his role as a college professor of archaeology, Jones is scholarly and learned in a tweed suit, lecturing on ancient civilizations. When the opportunity arises to recover important artifacts, Dr. Jones transforms into "Indiana," a "non-superhero superhero" image he has concocted for himself. Producer Frank Marshall said, "Indy [is] a fallible character. He makes mistakes and gets hurt. ... That's the other thing people like: He's a real character, not a character with superpowers." Spielberg said there "was the willingness to allow our leading man to get hurt and to express his pain and to get his mad out and to take pratfalls and sometimes be the butt of his own jokes. I mean, Indiana Jones is not a perfect hero, and his imperfections, I think, make the audience feel that, with a little more exercise and a little more courage, they could be just like him." According to Spielberg biographer Douglas Brode, Indiana created his heroic figure so as to escape the dullness of teaching at a school. Both of Indiana's personas reject one another in philosophy, creating a duality. Harrison Ford said the fun of playing the character was that Indiana is both a romantic and a cynic, while scholars have analyzed Indiana as having traits of a lone wolf; a man on a quest; a noble treasure hunter; a hardboiled detective; a human superhero; and an American patriot. Like many characters in his films, Jones has some autobiographical elements of Spielberg. Indiana lacks a proper father figure because of his strained relationship with his father, Henry Senior. His own contained anger is misdirected towards Professor Abner Ravenwood, his mentor at the University of Chicago, leading to a strained relationship with Marion Ravenwood. The teenage Indiana bases his own look on a figure from the prologue of "Indiana Jones and the Last Crusade", after being given his hat. Marcus Brody acts as Indiana's positive role model at the college. Indiana's own insecurities are made worse by the absence of his mother. In "Indiana Jones and the Temple of Doom", he becomes a father figure to Willie Scott and Short Round in order to survive; he is rescued from Kali's evil by Short Round's dedication. Indiana also saves many enslaved children. Indiana uses his knowledge of Shiva to defeat Mola Ram. In "Raiders of the Lost Ark", he is wise enough to close his eyes in the presence of God in the Ark of the Covenant. By contrast, his rival Rene Belloq is killed for having the audacity to try to communicate directly with God. In the prologue of "Indiana Jones and the Last Crusade", Jones is seen as a teenager, establishing his look when given a fedora hat. Indiana's intentions are revealed as prosocial, as he believes artifacts "belong in a museum." 
In the film's climax, Indiana undergoes "literal" tests of faith to retrieve the Grail and save his father's life. He also remembers Jesus as a historical figure, a humble carpenter, rather than an exalted figure, when he recognizes the simple nature and tarnished appearance of the real Grail amongst a large assortment of much more ornately decorated ones. Henry Senior rescues his son from falling to his death when reaching for the fallen Grail, telling him to "let it go"; Indiana thereby overcomes his mercenary nature. "The Young Indiana Jones Chronicles" explains how Indiana becomes solitary and less idealistic following his service in World War I. In "Indiana Jones and the Kingdom of the Crystal Skull", Jones is older and wiser, whereas his sidekicks Mutt and Mac are youthfully arrogant and greedy, respectively. Indiana Jones is modeled after the strong-jawed heroes of the matinée serials and pulp magazines that George Lucas and Steven Spielberg enjoyed in their childhoods (such as the Republic Pictures serials, and the Doc Savage series). Sir H. Rider Haggard's safari guide/big game hunter Allan Quatermain of "King Solomon's Mines" is a notable template for Jones. The two friends first discussed the project in Hawaii around the time of the release of the first "Star Wars" film. Spielberg told Lucas how he wanted his next project to be something fun, like a "James Bond" film (this would later be referenced when they cast Sean Connery as Henry Jones Sr.). According to sources, Lucas responded to the effect that he had something "even better", or that he'd "got that beat." One of the possible bases for Indiana Jones is Professor Challenger, created by Sir Arthur Conan Doyle in 1912 for his novel "The Lost World". Challenger was based on Doyle's physiology professor, William Rutherford; like Jones, Challenger was an adventuring academic, albeit a zoologist/anthropologist rather than an archaeologist. Another important influence on the development of the character of Indiana Jones is the Disney character Scrooge McDuck. Carl Barks created Scrooge in 1947 as a one-off relation for Donald Duck in the latter's self-titled comic book. Barks realized that the character had more potential, so a separate "Uncle Scrooge" comic book series, full of exciting and strange adventures in the company of his duck nephews, was developed. The "Uncle Scrooge" series strongly influenced George Lucas, and this appreciation of Scrooge as an adventurer shaped the development of Jones: the prologue of "Raiders of the Lost Ark" contains an homage to Barks' Scrooge adventure "The Seven Cities of Cibola", published in "Uncle Scrooge" #7 from September 1954. The homage in the film takes the form of playfully mimicking the removal-of-the-statuette-from-its-pedestal and the falling-stone sequences of the comic book. The character was originally named Indiana Smith, after an Alaskan Malamute called Indiana that Lucas owned in the 1970s and on which he based the Star Wars character Chewbacca. Spielberg disliked the name Smith, and Lucas casually suggested Jones as an alternative. The "Last Crusade" script references the name's origin, with Jones's father revealing his son's birth name to be Henry and explaining that "we named the "dog" Indiana", to his son's chagrin. Some have posited that C.L. Moore's science fiction character Northwest Smith may also have influenced Lucas and Spielberg in their naming choice.
Lucas has said on various occasions that Sean Connery's portrayal of British secret agent James Bond was one of the primary inspirations for Jones, a reason Connery was chosen for the role of Indiana's father in "Indiana Jones and the Last Crusade". Spielberg earned the rank of Eagle Scout, and Ford the Life Scout badge, in their youth, which inspired them to portray Indiana Jones as a Life Scout at age 13 in "The Last Crusade". Many people are said to be the real-life inspiration for the Indiana Jones character, although none of the following have been confirmed as inspirations by Lucas or Spielberg. Some suggestions are listed here in alphabetical order by last name: At the request of Spielberg and Lucas, the costume designer gave the character a distinctive silhouette through the styling of the hat; after examining many hats, the designers chose a tall-crowned, wide-brimmed fedora. As a documentary about "Raiders" pointed out, the hat served a practical purpose. Following the lead of the old "B"-movies that inspired the "Indiana Jones" series, the fedora hid the actor's face sufficiently to allow doubles to perform the more dangerous stunts seamlessly. Examples in "Raiders" include the wider-angle shot of Indy and Marion crashing a statue through a wall, and Indy sliding under a fast-moving vehicle from front to back. Thus it was necessary for the hat to stay in place much of the time. The hat became so iconic that the filmmakers would remove it only for a very good reason or a joke. If it ever fell off during a take, filming would have to stop to put it back on. In jest, Ford put a stapler against his head to stop his hat from falling off when a documentary crew visited during shooting of "Indiana Jones and the Last Crusade". This created the urban legend that Ford stapled the hat to his head. Any time Indy's hat accidentally came off as part of the storyline (blown off by the wind, knocked off, etc.) and seemed almost irretrievable, the filmmakers made sure Indy and his hat were reunited, regardless of the implausibility of its return. Although other hats were also used throughout the films, the general style and profile remained the same. Elements of the outfit include: The fedora and leather jacket from "Indiana Jones and the Last Crusade" are on display at the Smithsonian Institution's American History Museum in Washington, D.C. The collecting of props and clothing from the films has become a thriving hobby for some aficionados of the franchise. Jones's whip was the third most popular film weapon, as shown by a 2008 poll held by 20th Century Fox, which surveyed approximately two thousand film fans. Originally, Spielberg suggested Harrison Ford; Lucas resisted the idea, since he had already cast the actor in "American Graffiti", "Star Wars" and "The Empire Strikes Back", and did not want Ford to become known as his "Bobby De Niro" (in reference to the fact that fellow director Martin Scorsese regularly casts Robert De Niro in his films). During an intensive casting process, Lucas and Spielberg auditioned many actors, and finally cast Tom Selleck as Indiana Jones. Shortly afterward, pre-production began in earnest on "Raiders of the Lost Ark". CBS refused to release Selleck from his contractual commitment to "Magnum, P.I.", forcing him to turn down the role. Shooting for the film was expected to overlap with the pilot for "Magnum, P.I.", but filming of the pilot episode was ultimately delayed, so Selleck could in fact have done both.
Subsequently, Peter Coyote and Tim Matheson both auditioned for the role. After Spielberg suggested Ford again, Lucas relented, and Ford was cast in the role less than three weeks before filming began. The industry magazine "Archaeology" named eight past and present archaeologists whom they felt "embodied [Jones'] spirit" as recipients of the Indy Spirit Awards in 2008. That same year Ford himself was elected to the Board of Directors of the Archaeological Institute of America. Commenting that "understanding the past can only help us in dealing with the present and the future," Ford was praised by the association's president for his character's "significant role in stimulating the public's interest in archaeological exploration." Jones is perhaps the most influential depiction of an archaeologist in film. Since the release of "Raiders of the Lost Ark" in 1981, the very idea of archaeology and archaeologists has fundamentally shifted. Prior to the film's release, the stereotypical image of an archaeologist was that of an older, lackluster professor type. In early films involving archaeologists, the archaeologists were portrayed as victims who needed to be rescued by a more masculine or heroic figure. After 1981, the stereotypical archaeologist was instead thought of as an adventurer consistently engaged in fieldwork. Archaeologist Anne Pyburn described the influence of Indiana Jones as elitist and sexist, and argued that the film series had caused new discoveries in the field of archaeology to become oversimplified and overhyped in an attempt to gain public interest, which negatively influences archaeology as a whole. Eric Powell, an editor with the magazine "Archaeology", said "O.K., fine, the movie romanticizes what we do", and that "Indy may be a horrible archeologist, but he's a great diplomat for archeology. I think we'll see a spike in kids who want to become archeologists". Kevin McGeough, associate professor of archaeology, describes the original archaeological criticism of the film as missing the point: "dramatic interest is what is at issue, and it is unlikely that film will change in order to promote and foster better archaeological techniques". While himself an homage to various prior adventurers, aspects of Indiana Jones also directly influenced some subsequent characterizations:
https://en.wikipedia.org/wiki?curid=14814
Irreducible fraction An irreducible fraction (or fraction in lowest terms, simplest form or reduced fraction) is a fraction in which the numerator and denominator are integers that have no other common divisors than 1 (and −1, when negative numbers are considered). In other words, a fraction a⁄b is irreducible if and only if "a" and "b" are coprime, that is, if "a" and "b" have a greatest common divisor of 1. In higher mathematics, "irreducible fraction" may also refer to rational fractions such that the numerator and the denominator are coprime polynomials. Every positive rational number can be represented as an irreducible fraction in exactly one way. An equivalent definition is sometimes useful: if "a", "b" are integers, then the fraction "a"⁄"b" is irreducible if and only if there is no other equal fraction "c"⁄"d" such that |"c"| < |"a"| or |"d"| < |"b"|, where |"a"| means the absolute value of "a". (Two fractions "a"⁄"b" and "c"⁄"d" are "equal" or "equivalent" if and only if "ad" = "bc".) For example, 1⁄4, 5⁄6, and −101⁄100 are all irreducible fractions. On the other hand, 2⁄4 is reducible since it is equal in value to 1⁄2, and the numerator of 1⁄2 is less than the numerator of 2⁄4. A fraction that is reducible can be reduced by dividing both the numerator and denominator by a common factor. It can be fully reduced to lowest terms if both are divided by their greatest common divisor. In order to find the greatest common divisor, the Euclidean algorithm or prime factorization can be used. The Euclidean algorithm is commonly preferred because it allows one to reduce fractions with numerators and denominators too large to be easily factored. Consider, for example, 120⁄90 = 12⁄9 = 4⁄3. In the first step both numbers were divided by 10, which is a factor common to both 120 and 90. In the second step, they were divided by 3. The final result, 4⁄3, is an irreducible fraction because 4 and 3 have no common factors other than 1. The original fraction could have also been reduced in a single step by using the greatest common divisor of 90 and 120, which is 30 (that is, gcd(90,120) = 30). As 120 = 4 × 30 and 90 = 3 × 30, one gets 120⁄90 = 4⁄3. Which method is faster "by hand" depends on the fraction and the ease with which common factors are spotted. In case a denominator and numerator remain that are too large to ensure they are coprime by inspection, a greatest common divisor computation is needed anyway to ensure the fraction is actually irreducible. Every rational number has a "unique" representation as an irreducible fraction with a positive denominator (however "a"⁄"b" = (−"a")⁄(−"b"), although both are irreducible). Uniqueness is a consequence of the unique prime factorization of integers, since "a"⁄"b" = "c"⁄"d" implies "ad" = "bc", and so both sides of the latter must share the same prime factorization, yet "a" and "b" share no prime factors, so the set of prime factors of "a" (with multiplicity) is a subset of those of "c" and vice versa, meaning "a" = "c" and "b" = "d". The fact that any rational number has a unique representation as an irreducible fraction is utilized in various proofs of the irrationality of the square root of 2 and of other irrational numbers. For example, one proof notes that if the square root of 2 could be represented as a ratio of integers, then it would have in particular the fully reduced representation "a"⁄"b" where "a" and "b" are the smallest possible; but given that "a"⁄"b" equals the square root of 2, so does (2"b" − "a")⁄("a" − "b") (since cross-multiplying this with "a"⁄"b" shows that they are equal).
Since the latter is a ratio of smaller integers, this is a contradiction, so the premise that the square root of two has a representation as the ratio of two integers is false. The notion of irreducible fraction generalizes to the field of fractions of any unique factorization domain: any element of such a field can be written as a fraction in which denominator and numerator are coprime, by dividing both by their greatest common divisor. This applies notably to rational expressions over a field. The irreducible fraction for a given element is unique up to multiplication of denominator and numerator by the same invertible element. In the case of the rational numbers this means that any number has two irreducible fractions, related by a change of sign of both numerator and denominator; this ambiguity can be removed by requiring the denominator to be positive. In the case of rational functions the denominator could similarly be required to be a monic polynomial.
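Returning to the integer case, the reduction procedure described above is mechanical enough to sketch in code. The following minimal Python sketch (the function name is illustrative, not taken from any source cited here) reduces a fraction to lowest terms with a greatest common divisor computation and normalizes the sign so that the denominator is positive, yielding the unique canonical representative:

    from math import gcd

    def reduce_fraction(numerator: int, denominator: int) -> tuple:
        """Return the irreducible fraction equal to numerator/denominator,
        with a positive denominator (the unique canonical representative)."""
        if denominator == 0:
            raise ZeroDivisionError("denominator must be nonzero")
        g = gcd(numerator, denominator)  # math.gcd ignores signs, returns a non-negative value
        numerator //= g
        denominator //= g
        if denominator < 0:  # normalize: 4/-3 becomes -4/3
            numerator, denominator = -numerator, -denominator
        return numerator, denominator

    # Example from the text: gcd(120, 90) = 30, so 120/90 reduces to 4/3.
    assert reduce_fraction(120, 90) == (4, 3)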
https://en.wikipedia.org/wiki?curid=14822
Isomorphism class In mathematics, an isomorphism class is a collection of mathematical objects isomorphic to each other. Isomorphism classes are often used when the exact identity of the elements of a set is considered irrelevant and only the structure of the mathematical object is studied. Examples of this are ordinals and graphs. However, there are circumstances in which the isomorphism class of an object conceals vital internal information about it; consider these examples:
https://en.wikipedia.org/wiki?curid=14826
Isomorphism In mathematics, an isomorphism is a mapping between two structures of the same type that can be reversed by an inverse mapping. Two mathematical structures are isomorphic if an isomorphism exists between them. The word isomorphism is derived from the Ancient Greek: ἴσος "isos" "equal", and μορφή "morphe" "form" or "shape". The interest in isomorphisms lies in the fact that two isomorphic objects have the same properties (excluding further information such as additional structure or names of objects). Thus isomorphic structures cannot be distinguished from the point of view of structure only, and may be identified. In mathematical jargon, one says that two objects are "the same up to an isomorphism". An automorphism is an isomorphism from a structure to itself. An isomorphism between two structures is a canonical isomorphism if there is only one isomorphism between the two structures (as is the case for solutions of a universal property), or if the isomorphism is much more natural (in some sense) than other isomorphisms. For example, for every prime number "p", all fields with "p" elements are canonically isomorphic, with a unique isomorphism. The isomorphism theorems provide canonical isomorphisms that are not unique. The term "isomorphism" is mainly used for algebraic structures. In this case, mappings are called homomorphisms, and a homomorphism is an isomorphism if and only if it is bijective. In various areas of mathematics, isomorphisms have received specialized names, depending on the type of structure under consideration. For example: Category theory, which can be viewed as a formalization of the concept of mapping between structures, provides a language that may be used to unify the approach to these different aspects of the basic idea. In geometry, isomorphisms and automorphisms are often called transformations, for example rigid transformations, affine transformations, projective transformations. Let (R+, ×) be the multiplicative group of positive real numbers, and let (R, +) be the additive group of real numbers. The logarithm function log: R+ → R satisfies log("xy") = log "x" + log "y" for all "x", "y" in R+, so it is a group homomorphism. The exponential function exp: R → R+ satisfies exp("x" + "y") = (exp "x")(exp "y") for all "x", "y" in R, so it too is a homomorphism. The identities log(exp "x") = "x" and exp(log "y") = "y" show that log and exp are inverses of each other. Since log is a homomorphism that has an inverse that is also a homomorphism, log is an isomorphism of groups. Because log is an isomorphism, it translates multiplication of positive real numbers into addition of real numbers. This facility makes it possible to multiply real numbers using a ruler and a table of logarithms, or using a slide rule with a logarithmic scale. Consider the group Z6, the integers from 0 to 5 with addition modulo 6. Also consider the group Z2 × Z3, the ordered pairs where the "x" coordinates can be 0 or 1, and the "y" coordinates can be 0, 1, or 2, where addition in the "x"-coordinate is modulo 2 and addition in the "y"-coordinate is modulo 3. These structures are isomorphic under addition, under the following scheme: (0,0) ↦ 0, (1,1) ↦ 1, (0,2) ↦ 2, (1,0) ↦ 3, (0,1) ↦ 4, (1,2) ↦ 5, or in general ("a","b") ↦ (3"a" + 4"b") mod 6. For example, (1,1) + (1,0) = (0,1), which translates in the other system as 1 + 3 = 4. Even though these two groups "look" different in that the sets contain different elements, they are indeed isomorphic: their structures are exactly the same.
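Because the groups involved are finite, the isomorphism claim above can be checked exhaustively. The following Python sketch (an illustration only; the names are ours) verifies that ("a","b") ↦ (3"a" + 4"b") mod 6 is a bijection from Z2 × Z3 to Z6 that preserves addition:

    from itertools import product

    def phi(pair):
        """Map an element (a, b) of Z2 x Z3 to an element of Z6."""
        a, b = pair
        return (3 * a + 4 * b) % 6

    elements = list(product(range(2), range(3)))  # the six elements of Z2 x Z3

    # Bijective: the six images are exactly 0, 1, ..., 5 with no repeats.
    assert sorted(phi(p) for p in elements) == list(range(6))

    # Homomorphism: phi(x + y) = (phi(x) + phi(y)) mod 6, where x + y is
    # computed componentwise, modulo 2 and modulo 3 respectively.
    for (a1, b1), (a2, b2) in product(elements, repeat=2):
        left = phi(((a1 + a2) % 2, (b1 + b2) % 3))
        right = (phi((a1, b1)) + phi((a2, b2))) % 6
        assert left == right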
More generally, the direct product of two cyclic groups Z"m" and Z"n" is isomorphic to Z"mn" if and only if "m" and "n" are coprime, per the Chinese remainder theorem. If one object consists of a set "X" with a binary relation R and the other object consists of a set "Y" with a binary relation S, then an isomorphism from "X" to "Y" is a bijective function ƒ : "X" → "Y" such that S(ƒ("u"), ƒ("v")) if and only if R("u", "v"). S is reflexive, irreflexive, symmetric, antisymmetric, asymmetric, transitive, total, trichotomous, a partial order, total order, well-order, strict weak order, total preorder (weak order), an equivalence relation, or a relation with any other special properties, if and only if R is. For example, if R is an ordering ≤ and S an ordering ⊑, then an isomorphism from "X" to "Y" is a bijective function ƒ such that ƒ("u") ⊑ ƒ("v") if and only if "u" ≤ "v". Such an isomorphism is called an "order isomorphism" or (less commonly) an "isotone isomorphism". If "X" = "Y", then this is a relation-preserving automorphism. In abstract algebra, two basic isomorphisms are defined: Just as the automorphisms of an algebraic structure form a group, the isomorphisms between two algebras sharing a common structure form a heap. Letting a particular isomorphism identify the two structures turns this heap into a group. In mathematical analysis, the Laplace transform is an isomorphism mapping hard differential equations into easier algebraic equations. In graph theory, an isomorphism between two graphs "G" and "H" is a bijective map "f" from the vertices of "G" to the vertices of "H" that preserves the "edge structure" in the sense that there is an edge from vertex "u" to vertex "v" in "G" if and only if there is an edge from ƒ("u") to ƒ("v") in "H". See graph isomorphism. In mathematical analysis, an isomorphism between two Hilbert spaces is a bijection preserving addition, scalar multiplication, and inner product. In early theories of logical atomism, the formal relationship between facts and true propositions was theorized by Bertrand Russell and Ludwig Wittgenstein to be isomorphic. An example of this line of thinking can be found in Russell's "Introduction to Mathematical Philosophy". In cybernetics, the good regulator or Conant–Ashby theorem is stated "Every good regulator of a system must be a model of that system". Whether regulated or self-regulating, an isomorphism is required between the regulator and processing parts of the system. In category theory, given a category "C", an isomorphism is a morphism ƒ : "a" → "b" that has an inverse morphism "g" : "b" → "a", that is, ƒ"g" = 1"b" and "g"ƒ = 1"a". For example, a bijective linear map is an isomorphism between vector spaces, and a bijective continuous function whose inverse is also continuous is an isomorphism between topological spaces, called a homeomorphism. In a concrete category (that is, a category whose objects are sets (perhaps with extra structure) and whose morphisms are structure-preserving functions), such as the category of topological spaces or categories of algebraic objects like groups, rings, and modules, an isomorphism must be bijective on the underlying sets. In algebraic categories (specifically, categories of varieties in the sense of universal algebra), an isomorphism is the same as a homomorphism which is bijective on underlying sets. However, there are concrete categories in which bijective morphisms are not necessarily isomorphisms (such as the category of topological spaces). In certain areas of mathematics, notably category theory, it is valuable to distinguish between "equality" on the one hand and "isomorphism" on the other.
Equality is when two objects are exactly the same, and everything that is true about one object is true about the other, while an isomorphism implies everything that is true about a designated part of one object's structure is true about the other's. For example, the sets {"x" ∈ Z | "x"² < 2} and {−1, 0, 1} are "equal"; they are merely different representations of the same subset of the integers: the first an intensional one (in set-builder notation), and the second extensional (by explicit enumeration). By contrast, the sets {"A","B","C"} and {1,2,3} are not "equal": the first has elements that are letters, while the second has elements that are numbers. These are isomorphic as sets, since finite sets are determined up to isomorphism by their cardinality (number of elements) and these both have three elements, but there are many choices of isomorphism; one isomorphism is {"A" ↦ 1, "B" ↦ 2, "C" ↦ 3} while another is {"A" ↦ 3, "B" ↦ 2, "C" ↦ 1}, and no one isomorphism is intrinsically better than any other. On this view and in this sense, these two sets are not equal because one cannot consider them "identical": one can choose an isomorphism between them, but that is a weaker claim than identity, and valid only in the context of the chosen isomorphism. Sometimes the isomorphisms can seem obvious and compelling, but are still not equalities. As a simple example, the genealogical relationships among Joe, John, and Bobby Kennedy are, in a real sense, the same as those among the American football quarterbacks in the Manning family: Archie, Peyton, and Eli. The father-son pairings and the elder-brother-younger-brother pairings correspond perfectly. That similarity between the two family structures illustrates the origin of the word "isomorphism" (Greek "iso"-, "same," and -"morph", "form" or "shape"). But because the Kennedys are not the same people as the Mannings, the two genealogical structures are merely isomorphic and not equal. Another example is more formal and more directly illustrates the motivation for distinguishing equality from isomorphism: the distinction between a finite-dimensional vector space "V" and its dual space "V"* of linear maps from "V" to its field of scalars "K". These spaces have the same dimension, and thus are isomorphic as abstract vector spaces (since algebraically, vector spaces are classified by dimension, just as sets are classified by cardinality), but there is no "natural" choice of isomorphism "V" → "V"*. If one chooses a basis for "V", then this yields an isomorphism: for all "u", "v" ∈ "V", "v" ↦ φ"v", where φ"v"("u") = "v"T"u". This corresponds to transforming a column vector (element of "V") to a row vector (element of "V"*) by transpose, but a different choice of basis gives a different isomorphism: the isomorphism "depends on the choice of basis". More subtly, there "is" a map from a vector space "V" to its double dual "V"** that does not depend on the choice of basis: for all "v" ∈ "V" and φ ∈ "V"*, "v" ↦ x"v", where x"v"(φ) = φ("v"). This leads to a third notion, that of a natural isomorphism: while "V" and "V"** are different sets, there is a "natural" choice of isomorphism between them. This intuitive notion of "an isomorphism that does not depend on an arbitrary choice" is formalized in the notion of a natural transformation; briefly, that one may "consistently" identify, or more generally map from, a finite-dimensional vector space to its double dual, "V" → "V"**, for "any" vector space in a consistent way. Formalizing this intuition is a motivation for the development of category theory. However, there is a case where the distinction between natural isomorphism and equality is usually not made. That is for the objects that may be characterized by a universal property.
In fact, there is a unique isomorphism, necessarily natural, between two objects sharing the same universal property. A typical example is the set of real numbers, which may be defined through infinite decimal expansion, infinite binary expansion, Cauchy sequences, Dedekind cuts and many other ways. Formally these constructions define different objects, which are all solutions of the same universal property. As these objects have exactly the same properties, one may forget the method of construction and consider them as equal. This is what everybody does when talking of ""the" set of the real numbers". The same occurs with quotient spaces: they are commonly constructed as sets of equivalence classes. However, talking of sets of sets may be counterintuitive, and quotient spaces are commonly considered as a pair of a set of undetermined objects, often called "points", and a surjective map onto this set. If one wishes to draw a distinction between an arbitrary isomorphism (one that depends on a choice) and a natural isomorphism (one that can be done consistently), one may write ≈ for an unnatural isomorphism and ≅ for a natural isomorphism, as in "V" ≈ "V"* and "V" ≅ "V"**. This convention is not universally followed, and authors who wish to distinguish between unnatural isomorphisms and natural isomorphisms will generally explicitly state the distinction. Generally, saying that two objects are "equal" is reserved for when there is a notion of a larger (ambient) space that these objects live in. Most often, one speaks of equality of two subsets of a given set (as in the integer set example above), but not of two objects abstractly presented. For example, the 2-dimensional unit sphere in 3-dimensional space, which can be presented as the one-point compactification of the complex plane C ∪ {∞} "or" as the complex projective line (a quotient space), gives three different descriptions of a mathematical object, all of which are isomorphic, but not "equal" because they are not all subsets of a single space: the first is a subset of R3, the second is C plus an additional point, and the third is a subquotient of C2. In the context of category theory, objects are usually at most isomorphic; indeed, a motivation for the development of category theory was showing that different constructions in homology theory yielded equivalent (isomorphic) groups. Given maps between two objects "X" and "Y", however, one asks if they are equal or not (they are both elements of the set Hom("X", "Y"), hence equality is the proper relationship), particularly in commutative diagrams.
https://en.wikipedia.org/wiki?curid=14828
Intergovernmental organization An intergovernmental organization (IGO) or international organization is an organization composed primarily of sovereign states (referred to as "member states"), or of other intergovernmental organizations. IGOs are established by a treaty that acts as a charter creating the group. Treaties are formed when lawful representatives (governments) of several states go through a ratification process, providing the IGO with an international legal personality. Intergovernmental organizations are an important aspect of public international law. Intergovernmental organizations in a legal sense should be distinguished from simple groupings or coalitions of states, such as the G7 or the Quartet. Such groups or associations have not been founded by a constituent document and exist only as task groups. Intergovernmental organizations must also be distinguished from treaties. Many treaties (such as the North American Free Trade Agreement, or the General Agreement on Tariffs and Trade before the establishment of the World Trade Organization) do not establish an organization and instead rely purely on the parties for their administration, becoming legally recognized as an "ad hoc" commission. Other treaties have established an administrative apparatus which was not deemed to have been granted international legal personality. Intergovernmental organizations differ in function, membership, and membership criteria. They have various goals and scopes, often outlined in the treaty or charter. Some IGOs developed to fulfill a need for a neutral forum for debate or negotiation to resolve disputes. Others developed to pursue mutual interests with unified aims: to preserve peace through conflict resolution and better international relations, to promote international cooperation on matters such as environmental protection, to promote human rights, to promote social development (education, health care), to render humanitarian aid, and to promote economic development. Some are more general in scope (the United Nations) while others may have subject-specific missions (such as Interpol or the International Telecommunication Union and other standards organizations). Common types include: While treaties, alliances, and multilateral conferences had existed for centuries, IGOs only began to be established in the 19th century. The first regional international organization was the Central Commission for Navigation on the Rhine, initiated in the aftermath of the Napoleonic Wars. The first international organization of a global nature was the International Telegraph Union (the future International Telecommunication Union), which was founded by the signing of the International Telegraph Convention by 20 countries in May 1865. The ITU served as a model for other international organizations, such as the Universal Postal Union (1874), and for the League of Nations, which emerged following World War I as an institution designed to foster collective security in order to sustain peace; its successor is the United Nations. Held and McGrew counted thousands of IGOs worldwide in 2002, and this number continues to rise. This may be attributed to globalization, which increases and encourages co-operation among and within states, and which has also provided easier means for IGO growth as a result of increased international relations. This is seen economically, politically, militarily, as well as on the domestic level. Economically, IGOs gain material and non-material resources for economic prosperity.
IGOs also provide more political stability within the state and among differing states. Military alliances are also formed by establishing common standards in order to ensure the security of the members and to ward off outside threats. Lastly, membership has encouraged autocratic states to develop into democracies in order to form effective internal governments. There are several different reasons a state may choose membership in an intergovernmental organization. But there are also reasons membership may be rejected. Reasons for participation: Reasons for rejecting membership: Intergovernmental organizations are provided with privileges and immunities that are intended to ensure their independent and effective functioning. They are specified in the treaties that give rise to the organization (such as the Convention on the Privileges and Immunities of the United Nations and the Agreement on the Privileges and Immunities of the International Criminal Court), which are normally supplemented by further multinational agreements and national regulations (for example the "International Organizations Immunities Act" in the United States). The organizations are thereby immune from the jurisdiction of national courts. Rather than by national jurisdiction, legal accountability is intended to be ensured by legal mechanisms that are internal to the intergovernmental organization itself and by access to administrative tribunals. In the course of many court cases where private parties tried to pursue claims against international organizations, there has been a gradual realization that alternative means of dispute settlement are required, as states have fundamental human rights obligations to provide plaintiffs with access to court in view of their right to a fair trial. Otherwise, the organizations' immunities may be put in question in national and international courts. Some organizations hold proceedings before tribunals relating to their organization to be confidential, and in some instances have threatened disciplinary action should an employee disclose any of the relevant information. Such confidentiality has been criticized as a lack of transparency. The immunities also extend to employment law. In this regard, immunity from national jurisdiction necessitates that reasonable alternative means are available to effectively protect employees' rights; in this context, a first-instance Dutch court considered an estimated duration of proceedings before the Administrative Tribunal of the International Labour Organization of 15 years to be too long. These are some of the strengths and weaknesses of IGOs. Strengths: Weaknesses: They can be deemed unfair, as countries with a higher percentage of voting power have the right to veto any decision that is not in their favor, leaving the smaller countries powerless.
https://en.wikipedia.org/wiki?curid=14832
International Telecommunication Union The International Telecommunication Union (ITU; also UIT, from its French name, Union internationale des télécommunications), originally the International Telegraph Union, is a specialized agency of the United Nations that is responsible for issues that concern information and communication technologies. It is the oldest global international organization. The ITU coordinates the shared global use of the radio spectrum, promotes international cooperation in assigning satellite orbits, works to improve telecommunication infrastructure in the developing world, and assists in the development and coordination of worldwide technical standards. The ITU is also active in the areas of broadband Internet, latest-generation wireless technologies, aeronautical and maritime navigation, radio astronomy, satellite-based meteorology, fixed-mobile convergence, Internet access, data, voice, TV broadcasting, and next-generation networks. The ITU began as the International Telegraph Union, which drafted the earliest international standards and regulations governing international telegraph networks. The development of the telegraph in the early 19th century changed the way people communicated on the local and international levels. Between 1849 and 1865, a series of bilateral and regional agreements among Western European states attempted to standardize international communications. By 1865 it was agreed that a comprehensive agreement was needed in order to create a framework that would standardize telegraphy equipment, set uniform operating instructions, and lay down common international tariff and accounting rules. Between 1 March and 17 May 1865, the French Government hosted delegations from 20 European states at the first International Telegraph Conference in Paris. This meeting culminated in the International Telegraph Convention which was signed on 17 May 1865. As a result of the 1865 Conference, the International Telegraph Union, the predecessor to the modern ITU, was founded as the first international standards organization. The Union was tasked with implementing basic principles for international telegraphy. This included: the use of Morse code as the international telegraph alphabet, the protection of the secrecy of correspondence, and the right of everyone to use international telegraphy. Another predecessor to the modern ITU, the International Radiotelegraph Union, was established in 1906 at the first International Radiotelegraph Convention in Berlin. The conference was attended by representatives of 29 nations and culminated in the International Radiotelegraph Convention. An annex to the convention eventually became known as the radio regulations. At the conference it was also decided that the Bureau of the International Telegraph Union would also act as the conference's central administrator. Between 3 September and 10 December 1932, a joint conference of the International Telegraph Union and the International Radiotelegraph Union convened in order to merge the two organizations into a single entity, the International Telecommunication Union. The Conference decided that the Telegraph Convention of 1875 and the Radiotelegraph Convention of 1927 were to be combined into a single convention, the International Telecommunication Convention, embracing the three fields of telegraphy, telephony and radio. On 15 November 1947, an agreement between ITU and the newly created United Nations recognized the ITU as the specialized agency for global telecommunications.
This agreement entered into force on 1 January 1949, officially making the ITU an organ of the United Nations. The ITU comprises three Sectors, each managing a different aspect of the matters handled by the Union, as well as ITU Telecom. The sectors were created during the restructuring of ITU at its 1992 Plenipotentiary Conference. A permanent General Secretariat, headed by the Secretary-General, manages the day-to-day work of the Union and its sectors. The basic texts of the ITU are adopted by the ITU Plenipotentiary Conference. The founding document of the ITU was the 1865 International Telegraph Convention, which has since been amended several times and is now entitled the "Constitution and Convention of the International Telecommunication Union". In addition to the Constitution and Convention, the consolidated basic texts include the Optional Protocol on the settlement of disputes, the Decisions, Resolutions and Recommendations in force, as well as the General Rules of Conferences, Assemblies and Meetings of the Union. The Plenipotentiary Conference is the supreme organ of the ITU. It is composed of all 193 ITU Members and meets every four years. The Conference determines the policies, direction and activities of the Union, and elects the members of other ITU organs. While the Plenipotentiary Conference is the Union's main decision-making body, the ITU Council acts as the Union's governing body in the interval between Plenipotentiary Conferences. It meets every year. It is composed of 48 members and works to ensure the smooth operation of the Union, as well as to consider broad telecommunication policy issues. Its members are as follows: The mission of the Secretariat is to provide high-quality and efficient services to the membership of the Union. It is tasked with the administrative and budgetary planning of the Union, as well as with monitoring compliance with ITU regulations, and it publishes the results of the work of the ITU. The Secretariat is headed by a Secretary-General who is responsible for the overall management of the Union, and acts as its legal representative. The Secretary-General is elected by the Plenipotentiary Conference for four-year terms. On 23 October 2014, Houlin Zhao was elected as the 19th Secretary-General of the ITU at the Plenipotentiary Conference in Busan. His four-year mandate started on 1 January 2015, and he was formally inaugurated on 15 January 2015. He was re-elected on 1 November 2018 during the 2018 Plenipotentiary Conference in Dubai.
https://en.wikipedia.org/wiki?curid=14836
Internet Message Access Protocol In computing, the Internet Message Access Protocol (IMAP) is an Internet standard protocol used by email clients to retrieve email messages from a mail server over a TCP/IP connection. IMAP is defined by RFC 3501. IMAP was designed with the goal of permitting complete management of an email box by multiple email clients; therefore, clients generally leave messages on the server until the user explicitly deletes them. An IMAP server typically listens on port number 143. IMAP over SSL (IMAPS) is assigned the port number 993. Virtually all modern e-mail clients and servers support IMAP, which, along with the earlier POP3 (Post Office Protocol), is one of the two most prevalent standard protocols for email retrieval. Many webmail service providers such as Gmail, Outlook.com and Yahoo! Mail also provide support for both IMAP and POP3. The Internet Message Access Protocol is an Application Layer Internet protocol that allows an e-mail client to access email on a remote mail server. The current version is defined by RFC 3501. An IMAP server typically listens on well-known port 143, while IMAP over SSL (IMAPS) uses 993. Incoming email messages are sent to an email server that stores messages in the recipient's email box. The user retrieves the messages with an email client that uses one of a number of email retrieval protocols. While some clients and servers preferentially use vendor-specific, proprietary protocols, almost all support POP and IMAP for retrieving email, allowing users a free choice among many e-mail clients, such as Pegasus Mail or Mozilla Thunderbird, to access these servers, and allowing the clients to be used with other servers. Email clients using IMAP generally leave messages on the server until the user explicitly deletes them. This and other characteristics of IMAP operation allow multiple clients to manage the same mailbox. Most email clients support IMAP in addition to Post Office Protocol (POP) to retrieve messages. IMAP offers access to the mail storage. Clients may store local copies of the messages, but these are considered to be a temporary cache. IMAP was designed by Mark Crispin in 1986 as a remote access mailbox protocol, in contrast to the widely used POP, a protocol for simply retrieving the contents of a mailbox. It went through a number of iterations before the current VERSION 4rev1 (IMAP4), as detailed below: The original "Interim Mail Access Protocol" was implemented as a Xerox Lisp machine client and a TOPS-20 server. No copies of the original interim protocol specification or its software exist. Although some of its commands and responses were similar to IMAP2, the interim protocol lacked command/response tagging and thus its syntax was incompatible with all other versions of IMAP. The interim protocol was quickly replaced by the "Interactive Mail Access Protocol" (IMAP2), defined in RFC 1064 (in 1988) and later updated by RFC 1176 (in 1990). IMAP2 introduced the command/response tagging and was the first publicly distributed version. IMAP3 is an extremely rare variant of IMAP. It was published as RFC 1203 in 1991. It was written specifically as a counter proposal to RFC 1176, which itself proposed modifications to IMAP2. IMAP3 was never accepted by the marketplace. The IESG reclassified RFC1203 "Interactive Mail Access Protocol - Version 3" as a Historic protocol in 1993. The IMAP Working Group used RFC1176 (IMAP2) rather than RFC1203 (IMAP3) as its starting point.
With the advent of MIME, IMAP2 was extended to support MIME body structures and add mailbox management functionality (create, delete, rename, message upload) that was absent from IMAP2. This experimental revision was called IMAP2bis; its specification was never published in non-draft form. An internet draft of IMAP2bis was published by the IETF IMAP Working Group in October 1993. This draft was based upon the following earlier specifications: unpublished "IMAP2bis.TXT" document, RFC1176, and RFC1064 (IMAP2). The "IMAP2bis.TXT" draft documented the state of extensions to IMAP2 as of December 1992. Early versions of Pine were widely distributed with IMAP2bis support (Pine 4.00 and later supports IMAP4rev1). An IMAP Working Group formed in the IETF in the early 1990s took over responsibility for the IMAP2bis design. The IMAP WG decided to rename IMAP2bis to IMAP4 to avoid confusion. When using POP, clients typically connect to the e-mail server briefly, only as long as it takes to download new messages. When using IMAP4, clients often stay connected as long as the user interface is active and download message content on demand. For users with many or large messages, this IMAP4 usage pattern can result in faster response times. The POP protocol requires the currently connected client to be the only client connected to the mailbox. In contrast, the IMAP protocol specifically allows simultaneous access by multiple clients and provides mechanisms for clients to detect changes made to the mailbox by other, concurrently connected, clients. See for example RFC3501 section 5.2 which specifically cites "simultaneous access to the same mailbox by multiple agents" as an example. Usually all Internet e-mail is transmitted in MIME format, allowing messages to have a tree structure where the leaf nodes are any of a variety of single part content types and the non-leaf nodes are any of a variety of multipart types. The IMAP4 protocol allows clients to retrieve any of the individual MIME parts separately and also to retrieve portions of either individual parts or the entire message. These mechanisms allow clients to retrieve the text portion of a message without retrieving attached files or to stream content as it is being fetched. Through the use of flags defined in the IMAP4 protocol, clients can keep track of message state: for example, whether or not the message has been read, replied to, or deleted. These flags are stored on the server, so different clients accessing the same mailbox at different times can detect state changes made by other clients. POP provides no mechanism for clients to store such state information on the server so if a single user accesses a mailbox with two different POP clients (at different times), state information—such as whether a message has been accessed—cannot be synchronized between the clients. The IMAP4 protocol supports both predefined system flags and client-defined keywords. System flags indicate state information such as whether a message has been read. Keywords, which are not supported by all IMAP servers, allow messages to be given one or more tags whose meaning is up to the client. IMAP keywords should not be confused with proprietary labels of web-based e-mail services which are sometimes translated into IMAP folders by the corresponding proprietary servers. IMAP4 clients can create, rename, and/or delete mailboxes (usually presented to the user as folders) on the server, and copy messages between mailboxes. 
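The flag, partial-fetch, and mailbox operations described above map directly onto client libraries. As a minimal sketch, Python's standard-library imaplib module can exercise them; the host name, credentials, and folder name below are placeholders, and error handling is omitted:

    import imaplib

    # Connect over TLS on the IMAPS port (993); host and credentials are placeholders.
    with imaplib.IMAP4_SSL("imap.example.org") as client:
        client.login("user@example.org", "app-password")
        client.select("INBOX")  # mailboxes are usually presented to users as folders

        # Ask the server to search, rather than downloading the whole mailbox.
        status, data = client.search(None, "UNSEEN")
        for num in data[0].split():
            # Fetch only the header part of the message's MIME structure.
            status, parts = client.fetch(num, "(BODY.PEEK[HEADER])")
            # Set the system flag \Seen; other clients will observe the change.
            client.store(num, "+FLAGS", "\\Seen")

        # Mailbox management: create a folder on the server.
        client.create("Archive")
    # Leaving the with-block issues LOGOUT and closes the connection.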
Multiple mailbox support also allows servers to provide access to shared and public folders. The "IMAP4 Access Control List (ACL) Extension" (RFC 4314) may be used to regulate access rights. IMAP4 provides a mechanism for a client to ask the server to search for messages meeting a variety of criteria. This mechanism avoids requiring clients to download every message in the mailbox in order to perform these searches. Reflecting the experience of earlier Internet protocols, IMAP4 defines an explicit mechanism by which it may be extended. Many IMAP4 extensions to the base protocol have been proposed and are in common use. IMAP2bis did not have an extension mechanism, and POP now has one defined by RFC 2449. While IMAP remedies many of the shortcomings of POP, this inherently introduces additional complexity. Much of this complexity (e.g., multiple clients accessing the same mailbox at the same time) is compensated for by server-side workarounds such as Maildir or database backends. The IMAP specification has been criticised for being insufficiently strict and allowing behaviours that effectively negate its usefulness. For instance, the specification states that each message stored on the server has a "unique id" to allow the clients to identify messages they have already seen between sessions. However, the specification also allows these UIDs to be invalidated with no restrictions, practically defeating their purpose. Unless the mail storage and searching algorithms on the server are carefully implemented, a client can potentially consume large amounts of server resources when searching massive mailboxes. IMAP4 clients need to maintain a TCP/IP connection to the IMAP server in order to be notified of the arrival of new mail. Notification of mail arrival is done through in-band signaling, which contributes somewhat to the complexity of client-side IMAP protocol handling. A private proposal, push IMAP, would extend IMAP to implement push e-mail by sending the entire message instead of just a notification. However, push IMAP has not been generally accepted and current IETF work has addressed the problem in other ways (see the Lemonade Profile for more information). Unlike some proprietary protocols which combine sending and retrieval operations, sending a message and saving a copy in a server-side folder with a base-level IMAP client requires transmitting the message content twice, once to SMTP for delivery and a second time to IMAP to store in a sent mail folder. This is addressed by a set of extensions defined by the IETF Lemonade Profile for mobile devices: URLAUTH (RFC 4467) and CATENATE (RFC 4469) in IMAP, and BURL (RFC 4468) in SMTP-SUBMISSION. In addition to this, Courier Mail Server offers a non-standard method of sending using IMAP by copying an outgoing message to a dedicated outbox folder. To cryptographically protect IMAP connections, IMAPS on TCP port 993 can be used, which utilizes TLS. As of RFC 8314, this is the recommended mechanism. Alternatively, STARTTLS can be used to provide secure communications between the MUA and the MSA or MTA implementing the SMTP protocol. This is an example IMAP connection as taken from RFC 3501 section 8:
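The example itself did not survive in this copy of the text. The following condensed transcript is in the spirit of the dialog shown in RFC 3501 section 8 (C: lines are sent by the client, S: lines by the server) and is abridged rather than quoted verbatim; it illustrates the command/response tagging (a001, a002, ...) introduced with IMAP2:

    S: * OK IMAP4rev1 Service Ready
    C: a001 LOGIN mrc secret
    S: a001 OK LOGIN completed
    C: a002 SELECT INBOX
    S: * 18 EXISTS
    S: * FLAGS (\Answered \Flagged \Deleted \Seen \Draft)
    S: * 2 RECENT
    S: a002 OK [READ-WRITE] SELECT completed
    C: a003 FETCH 12 BODY[HEADER]
    S: * 12 FETCH (BODY[HEADER] {342}
    S: ...header lines elided...
    S: )
    S: a003 OK FETCH completed
    C: a004 LOGOUT
    S: * BYE IMAP4rev1 server terminating connection
    S: a004 OK LOGOUT completed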
https://en.wikipedia.org/wiki?curid=14837
Inertial frame of reference An inertial frame of reference in classical physics and special relativity possesses the property that in this frame of reference a body with zero net force acting upon it does not accelerate; that is, such a body is at rest or moving at a constant velocity. An inertial frame of reference can be defined in analytical terms as a frame of reference that describes time and space homogeneously, isotropically, and in a time-independent manner. Conceptually, the physics of a system in an inertial frame have no causes external to the system. An inertial frame of reference may also be called an inertial reference frame, inertial frame, Galilean reference frame, or inertial space. All inertial frames are in a state of constant, rectilinear motion with respect to one another; an accelerometer moving with any of them would detect zero acceleration. Measurements in one inertial frame can be converted to measurements in another by a simple transformation (the Galilean transformation in Newtonian physics and the Lorentz transformation in special relativity). In general relativity, in any region small enough for the curvature of spacetime and tidal forces to be negligible, one can find a set of inertial frames that approximately describe that region. In a non-inertial reference frame in classical physics and special relativity, the physics of a system vary depending on the acceleration of that frame with respect to an inertial frame, and the usual physical forces must be supplemented by fictitious forces. In contrast, systems in general relativity do not have external causes, because of the principle of geodesic motion. In classical physics, for example, a ball dropped towards the ground does not go exactly straight down because the Earth is rotating, which means the frame of reference of an observer on Earth is not inertial. The physics must account for the Coriolis effect, in this case thought of as a force, to predict the horizontal motion. Another example of such a fictitious force associated with rotating reference frames is the centrifugal effect, or centrifugal force. The motion of a body can only be described relative to something else: other bodies, observers, or a set of space-time coordinates. These are called frames of reference. If the coordinates are chosen badly, the laws of motion may be more complex than necessary. For example, suppose a free body that has no external forces acting on it is at rest at some instant. In many coordinate systems, it would begin to move at the next instant, even though there are no forces on it. However, a frame of reference can always be chosen in which it remains stationary. Similarly, if space is not described uniformly or time independently, a coordinate system could describe the simple flight of a free body in space as a complicated zig-zag in its coordinate system. Indeed, an intuitive summary of inertial frames can be given: in an inertial reference frame, the laws of mechanics take their simplest form. In an inertial frame, Newton's first law, the "law of inertia", is satisfied: Any free motion has a constant magnitude and direction. Newton's second law for a particle takes the form F = "m"a, with F the net force (a vector), "m" the mass of a particle and a the acceleration of the particle (also a vector) which would be measured by an observer at rest in the frame. The force F is the vector sum of all "real" forces on the particle, such as electromagnetic, gravitational, nuclear and so forth.
In contrast, Newton's second law in a rotating frame of reference, rotating at angular rate "Ω" about an axis, takes the form F′ = "m"a, which looks the same as in an inertial frame, but now the force F′ is the resultant not only of F, but also of additional terms: F′ = F − 2"m"Ω × v"B" − "m"Ω × (Ω × x"B") − "m"(dΩ/dt) × x"B", where the angular rotation of the frame is expressed by the vector Ω pointing in the direction of the axis of rotation, and with magnitude equal to the angular rate of rotation "Ω", symbol × denotes the vector cross product, vector x"B" locates the body and vector v"B" is the velocity of the body according to a rotating observer (different from the velocity seen by the inertial observer). The extra terms in the force F′ are the "fictitious" forces for this frame, whose causes are external to the system in the frame. The first extra term is the Coriolis force −2"m"Ω × v"B", the second the centrifugal force −"m"Ω × (Ω × x"B"), and the third the Euler force −"m"(dΩ/dt) × x"B". These terms all have these properties: they vanish when "Ω" = 0; that is, they are zero for an inertial frame (which, of course, does not rotate); they take on a different magnitude and direction in every rotating frame, depending upon its particular value of Ω; they are ubiquitous in the rotating frame (affect every particle, regardless of circumstance); and they have no apparent source in identifiable physical sources, in particular, matter. Also, fictitious forces do not drop off with distance (unlike, for example, nuclear forces or electrical forces). For example, the centrifugal force that appears to emanate from the axis of rotation in a rotating frame increases with distance from the axis. All observers agree on the real forces, F; only non-inertial observers need fictitious forces. The laws of physics in the inertial frame are simpler because unnecessary forces are not present. In Newton's time the fixed stars were invoked as a reference frame, supposedly at rest relative to absolute space. In reference frames that were either at rest with respect to the fixed stars or in uniform translation relative to these stars, Newton's laws of motion were supposed to hold. In contrast, in frames accelerating with respect to the fixed stars, an important case being frames rotating relative to the fixed stars, the laws of motion did not hold in their simplest form, but had to be supplemented by the addition of fictitious forces, for example, the Coriolis force and the centrifugal force. Two experiments were devised by Newton to demonstrate how these forces could be discovered, thereby revealing to an observer that they were not in an inertial frame: the example of the tension in the cord linking two spheres rotating about their center of gravity, and the example of the curvature of the surface of water in a rotating bucket. In both cases, application of Newton's second law would not work for the rotating observer without invoking centrifugal and Coriolis forces to account for their observations (tension in the case of the spheres; parabolic water surface in the case of the rotating bucket). As we now know, the fixed stars are not fixed. Those that reside in the Milky Way turn with the galaxy, exhibiting proper motions. Those that are outside our galaxy (such as nebulae once mistaken for stars) participate in their own motion as well, partly due to expansion of the universe, and partly due to peculiar velocities. The Andromeda Galaxy is on a collision course with the Milky Way at a speed of 117 km/s.
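To make the extra rotating-frame terms above concrete, the following Python sketch (a toy illustration, not drawn from any cited source) evaluates the Coriolis, centrifugal, and Euler contributions for a body moving near the surface of the rotating Earth, with Ω taken along the z-axis:

    import numpy as np

    def fictitious_forces(m, omega, x, v, domega_dt=np.zeros(3)):
        """Fictitious-force terms seen in a frame rotating with angular
        velocity omega: Coriolis, centrifugal, and Euler (in newtons)."""
        coriolis = -2.0 * m * np.cross(omega, v)
        centrifugal = -m * np.cross(omega, np.cross(omega, x))
        euler = -m * np.cross(domega_dt, x)
        return coriolis, centrifugal, euler

    # Earth turns once per sidereal day (~86164 s) about the z-axis.
    omega = np.array([0.0, 0.0, 2.0 * np.pi / 86164.0])  # rad/s
    x = np.array([6.371e6, 0.0, 0.0])  # body on the equator, metres from the axis
    v = np.array([0.0, 100.0, 0.0])    # moving east at 100 m/s in the rotating frame
    coriolis, centrifugal, euler = fictitious_forces(1.0, omega, x, v)
    # For a 1 kg mass, |centrifugal| is about 0.034 N, directed away from the
    # axis, and it grows with distance from the axis, as noted above.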
The concept of inertial frames of reference is no longer tied to either the fixed stars or to absolute space. Rather, the identification of an inertial frame is based upon the simplicity of the laws of physics in the frame. In particular, the absence of fictitious forces is their identifying property. In practice, although not a requirement, using a frame of reference based upon the fixed stars as though it were an inertial frame of reference introduces very little discrepancy. For example, the centrifugal acceleration of the Earth because of its rotation about the Sun is about thirty million times greater than that of the Sun about the galactic center. To illustrate further, consider the question: "Does our Universe rotate?" To answer, we might attempt to explain the shape of the Milky Way galaxy using the laws of physics, although other observations might be more definitive; that is, they might provide larger discrepancies or less measurement uncertainty, like the anisotropy of the microwave background radiation or Big Bang nucleosynthesis. The flatness of the Milky Way depends on its rate of rotation in an inertial frame of reference. If we attribute its apparent rate of rotation entirely to rotation in an inertial frame, a different "flatness" is predicted than if we suppose part of this rotation actually is due to rotation of the universe and should not be included in the rotation of the galaxy itself. Based upon the laws of physics, a model is set up in which one parameter is the rate of rotation of the Universe. If the laws of physics agree more accurately with observations in a model with rotation than without it, we are inclined to select the best-fit value for rotation, subject to all other pertinent experimental observations. If no value of the rotation parameter is successful and the theory is not within observational error, a modification of physical law is considered; for example, dark matter is invoked to explain the galactic rotation curve. So far, observations show any rotation of the universe is very slow, no faster than once every 60·10¹² years (10⁻¹³ rad/yr), and debate persists over whether there is "any" rotation. However, if rotation were found, interpretation of observations in a frame tied to the universe would have to be corrected for the fictitious forces inherent in such rotation in classical physics and special relativity, or interpreted as the curvature of spacetime and the motion of matter along the geodesics in general relativity. When quantum effects are important, there are additional conceptual complications that arise in quantum reference frames. According to the first postulate of special relativity, all physical laws take their simplest form in an inertial frame, and there exist multiple inertial frames interrelated by uniform translation. This simplicity manifests in that inertial frames have self-contained physics without the need for external causes, while physics in non-inertial frames have external causes. The principle of simplicity can be used within Newtonian physics as well as in special relativity; see Nagel and also Blagojević. In practical terms, the equivalence of inertial reference frames means that scientists within a box moving uniformly cannot determine their absolute velocity by any experiment. Otherwise, the differences would set up an absolute standard reference frame.
According to this definition, supplemented with the constancy of the speed of light, inertial frames of reference transform among themselves according to the Poincaré group of symmetry transformations, of which the Lorentz transformations are a subgroup. In Newtonian mechanics, which can be viewed as a limiting case of special relativity in which the speed of light is infinite, inertial frames of reference are related by the Galilean group of symmetries. Newton posited an absolute space considered well approximated by a frame of reference stationary relative to the fixed stars. An inertial frame was then one in uniform translation relative to absolute space. However, some scientists (called "relativists" by Mach), even at the time of Newton, felt that absolute space was a defect of the formulation, and should be replaced. Indeed, the expression "inertial frame of reference" (German: "Inertialsystem") was coined by Ludwig Lange in 1885, to replace Newton's definitions of "absolute space and time" by a more operational definition. As translated by Iro, Lange proposed such an operational definition; a discussion of Lange's proposal can be found in Mach. The inadequacy of the notion of "absolute space" in Newtonian mechanics is spelled out by Blagojević. The utility of operational definitions was carried much further in the special theory of relativity. Some historical background, including Lange's definition, is provided by DiSalle. Within the realm of Newtonian mechanics, an inertial frame of reference, or inertial reference frame, is one in which Newton's first law of motion is valid. However, the principle of special relativity generalizes the notion of inertial frame to include all physical laws, not simply Newton's first law. Newton viewed the first law as valid in any reference frame that is in uniform motion relative to the fixed stars; that is, neither rotating nor accelerating relative to the stars. Today the notion of "absolute space" is abandoned, and an inertial frame in the field of classical mechanics is defined as one in which Newton's first law of motion holds. Hence, with respect to an inertial frame, an object or body accelerates only when a physical force is applied, and (following Newton's first law of motion), in the absence of a net force, a body at rest will remain at rest and a body in motion will continue to move uniformly—that is, in a straight line and at constant speed. Newtonian inertial frames transform among each other according to the Galilean group of symmetries. If this rule is interpreted as saying that straight-line motion is an indication of zero net force, the rule does not identify inertial reference frames, because straight-line motion can be observed in a variety of frames. If the rule is interpreted as defining an inertial frame, then we have to be able to determine when zero net force is applied. The problem was summarized by Einstein. There are several approaches to this issue. One approach is to argue that all real forces drop off with distance from their sources in a known manner, so we have only to be sure that a body is far enough away from all sources to ensure that no force is present. A possible issue with this approach is the historically long-lived view that the distant universe might affect matters (Mach's principle). Another approach is to identify all real sources for real forces and account for them.
A possible issue with this approach is that we might miss something, or account for its influence inappropriately, perhaps, again, due to Mach's principle and an incomplete understanding of the universe. A third approach is to look at the way the forces transform when we shift reference frames. Fictitious forces, those that arise due to the acceleration of a frame, disappear in inertial frames, and have complicated rules of transformation in general cases. On the basis of the universality of physical law and the requirement for frames where the laws are most simply expressed, inertial frames are distinguished by the absence of such fictitious forces. Newton enunciated a principle of relativity himself in one of his corollaries to the laws of motion. This principle differs from the special principle in two ways: first, it is restricted to mechanics, and second, it makes no mention of simplicity. It shares with the special principle the invariance of the form of the description among mutually translating reference frames. The role of fictitious forces in classifying reference frames is pursued further below. Inertial and non-inertial reference frames can be distinguished by the absence or presence of fictitious forces, as explained shortly. The presence of fictitious forces indicates the physical laws are not the simplest laws available, so, in terms of the special principle of relativity, a frame where fictitious forces are present is not an inertial frame: bodies in non-inertial reference frames are subject to so-called "fictitious" forces (pseudo-forces); that is, forces that result from the acceleration of the reference frame itself and not from any physical force acting on the body. Examples of fictitious forces are the centrifugal force and the Coriolis force in rotating reference frames. How, then, are "fictitious" forces to be separated from "real" forces? It is hard to apply the Newtonian definition of an inertial frame without this separation. For example, consider a stationary object in an inertial frame. Being at rest, no net force is applied. But in a frame rotating about a fixed axis, the object appears to move in a circle, and is subject to centripetal force (which is made up of the Coriolis force and the centrifugal force). How can we decide that the rotating frame is a non-inertial frame? There are two approaches to this resolution: one approach is to look for the origin of the fictitious forces (the Coriolis force and the centrifugal force). We will find there are no sources for these forces, no associated force carriers, no originating bodies. A second approach is to look at a variety of frames of reference. For any inertial frame, the Coriolis force and the centrifugal force disappear, so application of the principle of special relativity would identify these frames where the forces disappear as sharing the same and the simplest physical laws, and hence rule that the rotating frame is not an inertial frame. Newton examined this problem himself using rotating spheres. He pointed out that if the spheres are not rotating, the tension in the tying string is measured as zero in every frame of reference. If the spheres only appear to rotate (that is, we are watching stationary spheres from a rotating frame), the zero tension in the string is accounted for by observing that the centripetal force is supplied by the centrifugal and Coriolis forces in combination, so no tension is needed.
If the spheres really are rotating, the tension observed is exactly the centripetal force required by the circular motion. Thus, measurement of the tension in the string identifies the inertial frame: it is the one where the tension in the string provides exactly the centripetal force demanded by the motion as it is observed in that frame, and not a different value. That is, the inertial frame is the one where the fictitious forces vanish. So much for fictitious forces due to rotation. However, for linear acceleration, Newton expressed the idea that straight-line accelerations held in common are undetectable. This principle generalizes the notion of an inertial frame. For example, an observer confined in a free-falling lift will assert that he himself is a valid inertial frame, even if he is accelerating under gravity, so long as he has no knowledge about anything outside the lift. So, strictly speaking, an inertial frame is a relative concept. With this in mind, we can define inertial frames collectively as a set of frames which are stationary or moving at constant velocity with respect to each other, so that a single inertial frame is defined as an element of this set. For these ideas to apply, everything observed in the frame has to be subject to a base-line, common acceleration shared by the frame itself. That situation would apply, for example, to the free-falling lift above, where all objects are subject to the same gravitational acceleration, and the lift itself accelerates at the same rate. Inertial navigation systems use a cluster of gyroscopes and accelerometers to determine accelerations relative to inertial space. After a gyroscope is spun up in a particular orientation in inertial space, the law of conservation of angular momentum requires that it retain that orientation as long as no external forces are applied to it. Three orthogonal gyroscopes establish an inertial reference frame, and the accelerometers measure acceleration relative to that frame. The accelerations, along with a clock, can then be used to calculate the change in position. Thus, inertial navigation is a form of dead reckoning that requires no external input, and therefore cannot be jammed by any external signal source. A gyrocompass, employed for navigation of seagoing vessels, finds true (geographic) north. It does so not by sensing the Earth's magnetic field, but by using inertial space as its reference. The outer casing of the gyrocompass device is held in such a way that it remains aligned with the local plumb line. When the gyroscope wheel inside the gyrocompass device is spun up, the way the gyroscope wheel is suspended causes the gyroscope wheel to gradually align its spinning axis with the Earth's axis. Alignment with the Earth's axis is the only direction for which the gyroscope's spinning axis can be stationary with respect to the Earth and not be required to change direction with respect to inertial space. After being spun up, a gyrocompass can reach the direction of alignment with the Earth's axis in as little as a quarter of an hour. Classical theories that use the Galilean transformation postulate the equivalence of all inertial reference frames. Some theories may even postulate the existence of a privileged frame which provides absolute space and absolute time.
The Galilean transformation transforms coordinates from one inertial reference frame, S, to another, S′, by simple addition or subtraction of coordinates: r′ = r − r0 − vt, t′ = t − t0, where r0 and t0 represent shifts in the origin of space and time, and v is the relative velocity of the two inertial reference frames. Under Galilean transformations, the time t2 − t1 between two events is the same for all reference frames and the distance between two simultaneous events (or, equivalently, the length of any object, |r2 − r1|) is also the same. Einstein's theory of special relativity, like Newtonian mechanics, postulates the equivalence of all inertial reference frames. However, because special relativity postulates that the speed of light in free space is invariant, the transformation between inertial frames is the Lorentz transformation, not the Galilean transformation which is used in Newtonian mechanics. The invariance of the speed of light leads to counter-intuitive phenomena, such as time dilation and length contraction, and the relativity of simultaneity, which have been extensively verified experimentally. The Lorentz transformation reduces to the Galilean transformation as the speed of light approaches infinity or as the relative velocity between frames approaches zero. General relativity is based upon the principle of equivalence: the principle that inertial and gravitational mass are equivalent. This idea was introduced in Einstein's 1907 article "Principle of Relativity and Gravitation" and later developed in 1911. Support for this principle is found in the Eötvös experiment, which determines whether the ratio of inertial to gravitational mass is the same for all bodies, regardless of size or composition. To date no difference has been found to a few parts in 10¹¹. For some discussion of the subtleties of the Eötvös experiment, such as the local mass distribution around the experimental site (including a quip about the mass of Eötvös himself), see Franklin. Einstein's general theory modifies the distinction between nominally "inertial" and "noninertial" effects by replacing special relativity's "flat" Minkowski space with a metric that produces non-zero curvature. In general relativity, the principle of inertia is replaced with the principle of geodesic motion, whereby objects move in a way dictated by the curvature of spacetime. As a consequence of this curvature, it is not a given in general relativity that inertial objects moving at a particular rate with respect to each other will continue to do so. This phenomenon of geodesic deviation means that inertial frames of reference do not exist globally as they do in Newtonian mechanics and special relativity. However, the general theory reduces to the special theory over sufficiently small regions of spacetime, where curvature effects become less important and the earlier inertial frame arguments can come back into play. Consequently, modern special relativity is now sometimes described as only a "local theory". "Local" can encompass, for example, the entire Milky Way galaxy: the astronomer Karl Schwarzschild observed the motion of pairs of stars orbiting each other. He found that the orbits of the two stars of such a system lie in a plane, and the perihelion of the orbits of the two stars remains pointing in the same direction with respect to the Solar System. Schwarzschild pointed out that this was invariably seen: the direction of the angular momentum of all observed double star systems remains fixed with respect to the direction of the angular momentum of the Solar System.
These observations allowed him to conclude that inertial frames inside the galaxy do not rotate with respect to one another, and that the space of the Milky Way is approximately Galilean or Minkowskian.
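To make the relation between the two transformations discussed above concrete, here is a minimal Python sketch of a one-dimensional boost; the event coordinates and boost velocities are invented for illustration. At everyday speeds the two transformations agree closely, and the differences grow as the relative velocity approaches the speed of light, as described earlier:

import math

C = 299_792_458.0  # speed of light, m/s

def galilean(x, t, v):
    # Newtonian boost to a frame moving at velocity v along +x; time is absolute
    return x - v * t, t

def lorentz(x, t, v):
    # The corresponding special-relativistic boost
    gamma = 1.0 / math.sqrt(1.0 - (v / C) ** 2)
    return gamma * (x - v * t), gamma * (t - v * x / C ** 2)

x, t = 1000.0, 1.0                  # an event: 1 km from the origin, 1 s later
for v in (30.0, 3.0e5, 0.5 * C):    # 30 m/s, 300 km/s, half of lightspeed
    xg, tg = galilean(x, t, v)
    xl, tl = lorentz(x, t, v)
    print(f"v={v:.3g} m/s: Galilean x'={xg:.6g}, Lorentz x'={xl:.6g}, t'={tl:.9g}")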
https://en.wikipedia.org/wiki?curid=14838
Illuminati: New World Order Illuminati: New World Order ("INWO") is an out-of-print collectible card game (CCG) that was released in 1994 by Steve Jackson Games, based on their original boxed game Illuminati, which in turn was inspired by the 1975 book "The Illuminatus! Trilogy" by Robert Anton Wilson and Robert Shea. An OMNI sealed-deck league patterned after the Atlas Games model was also developed. Players attempt to achieve World Domination by utilizing the powers of their chosen Illuminati (the Adepts of Hermes, the Bavarian Illuminati, the Bermuda Triangle, the Discordian Society, the Gnomes of Zürich, The Network, the Servants of Cthulhu, Shangri-La, and the UFOs). The first player to control a predetermined number of Organizations (usually twelve in a standard game) has achieved the Basic Goal and can claim victory. Controllable Organizations include: groups such as the Men in Black, the CIA, and the Boy Sprouts; Personalities such as Diana, Princess of Wales, Saddam Hussein, Ross Perot or Björne (the purple dinosaur); and Places like Japan, California, Canada, and the Moonbase. Many Organization names are spoofs of real organizations, presumably altered to avoid lawsuits. Other ways to achieve victory include: destroying your rival Illuminati by capturing or destroying the last Organization in their Power Structure; and/or fulfilling a Special Goal before your opponent(s) can. Cards come in three main types: Illuminati cards, Plot cards, and Group cards. Illuminati and Plot cards both feature an illustration of a puppeteer's hand in a blue color scheme on the rear side, whereas Group cards feature a puppet on a string in a red color scheme. Each Illuminati card represents a different Illuminated organization at the center of each player's Power Structure. They have Power, a Special Goal, and an appropriate Special Ability. Their power flows outwards into the Groups they control via Control Arrows. Plot cards provide the bulk of the game's narrative structure, allowing players to go beyond - or even break - the rules of the game as described in the World Domination Handbook. Plot cards are identified by their overall blue color scheme (border, and/or title color). Included among the general Plots are several special types, including "Assassinations" and "Disasters" (for delivering insults to the various Personalities and Places in play), "GOAL" (special goals that can lead to surprise victories), and "New World Order" cards (a set of conditions that affect all players, typically overridden when replacement "New World Order" cards are brought into play). Group cards represent the power elite in charge of the named organization. There are two main types of Group: Organizations and Resources. Organizations are identified by their overall red color scheme (border and/or title). There are three main types of Organization: regular Organizations, People, and Places. They all feature Power, Resistance, Special Abilities, Alignments, Attributes, and Control Arrows (an inward arrow, and 0-3 outward arrows). Just like their Illuminati masters, Organizations can launch and defend against a variety of attacks. Provided that the attacking Organization has a free, outward-pointing Control Arrow, players can increase the size of their Power Structure via successful Attacks to Control, a mathematically determined method employed whenever a player wants to capture an Organization from their own hand, or from a rival player's Power Structure. 
Unless the attack is Privileged (only the target and attacker can be involved), all players can aid or undermine the attack. Attacks to Destroy follow a similar game mechanic, but result in the Organization's removal from the Power Structure, after which it is immediately discarded. The outcome of all Attacks is determined by a dice roll. Other ways to introduce Organizations into the Power Structure include playing Plots, spending Action Tokens to bring Groups into play, or using free moves, each at appropriate times during the play cycle. Resources represent the custodians of a variety of objects, ranging from gadgets to artefacts (such as The Shroud of Turin, Flying Saucers, and ELIZA). They are identified by their overall purple color scheme (border and/or title). Resources are introduced into play by spending Action Tokens, or by using free moves during appropriate moments in the play cycle. They go alongside the Power Structure of the player's Illuminati, and bestow a useful Special Ability or comparable benefit. In the June 1995 edition of "Dragon" (Issue 218), Rick Swan warned that it was a complex game: "Owing to the unconventional mechanics, even experienced gamers may have trouble at first." But he gave the game a perfect rating of 6 out of 6, saying, "Resolute players who scrutinize the rules and grind their way through a few practice rounds will discover why "Illuminati" has been so durable. Not only is it an inspired concept, it's an enlightening treatise on the fine art of backstabbing. What more could you ask from a deck of cards?" In the September 1996 edition of "Arcane" (Issue 4), Steve Faragher rated the "Assassins" expansion set 9 out of 10 overall, saying, "With the introduction of "Assassins", it now appears to have [...] a little more game balance for tournament play. A good thing indeed." "INWO" won the Origins Award for "Best Card Game" in 1997.
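For readers curious how such a dice-based resolution might look in code, here is a toy Python sketch. It is a simplified illustration only; the threshold formula and modifiers are invented assumptions, not the published rules from the World Domination Handbook. It captures the general shape of the mechanic described above, comparing attacker Power against target Resistance plus situational bonuses and then rolling two dice:

import random

def attack_to_control(power, resistance, bonuses=0, rng=random):
    # Toy model of an INWO-style Attack to Control (simplified assumption,
    # not the official rules): the attack total is attacker Power minus
    # target Resistance plus any bonuses, and the attack succeeds when a
    # roll of two six-sided dice is less than or equal to that total.
    total = power - resistance + bonuses
    roll = rng.randint(1, 6) + rng.randint(1, 6)
    return roll <= total, roll, total

captured, roll, total = attack_to_control(power=8, resistance=4, bonuses=2)
print(f"needed <= {total}, rolled {roll}: {'captured' if captured else 'failed'}")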
https://en.wikipedia.org/wiki?curid=14840
Interstellar travel Interstellar travel is crewed or uncrewed travel between stars or planetary systems. Interstellar travel would be much more difficult than interplanetary spaceflight. Whereas the distances between the planets in the Solar System are less than 30 astronomical units (AU), the distances between stars are typically hundreds of thousands of AU, and usually expressed in light-years. Because of the vastness of those distances, practical interstellar travel based on known physics would need to occur at a high percentage of the speed of light; even then, travel times would be long, from decades to perhaps millennia or longer. The speeds required for interstellar travel in a human lifetime far exceed what current methods of spacecraft propulsion can provide. Even with a hypothetically perfectly efficient propulsion system, the kinetic energy corresponding to those speeds is enormous by today's standards of energy development. Moreover, collisions between the spacecraft and cosmic dust and gas can produce very dangerous effects, both for the passengers and for the spacecraft itself. A number of strategies have been proposed to deal with these problems, ranging from giant arks that would carry entire societies and ecosystems, to microscopic space probes. Many different spacecraft propulsion systems have been proposed to give spacecraft the required speeds, including nuclear propulsion, beam-powered propulsion, and methods based on speculative physics. For both crewed and uncrewed interstellar travel, considerable technological and economic challenges need to be met. Even the most optimistic views about interstellar travel see it as only being feasible decades from now. However, in spite of the challenges, if or when interstellar travel is realized, a wide range of scientific benefits is expected. Most interstellar travel concepts require a developed space logistics system capable of moving millions of tonnes to a construction / operating location, and most would require gigawatt-scale power for construction or operation (such as Starwisp or light-sail type concepts). Such a system could grow organically if space-based solar power became a significant component of Earth's energy mix. Consumer demand for a multi-terawatt system would automatically create the necessary multi-million-ton-per-year logistical system. Distances between the planets in the Solar System are often measured in astronomical units (AU), defined as the average distance between the Sun and Earth, some 150 million kilometres. Venus, the closest other planet to Earth, is (at closest approach) 0.28 AU away. Neptune, the farthest planet from the Sun, is 29.8 AU away. As of January 25, 2020, Voyager 1, the farthest human-made object from Earth, is 148.7 AU away. The closest known star, Proxima Centauri, is approximately 4.24 light-years (about 268,000 AU) away, or over 9,000 times farther away than Neptune. Because of this, distances between stars are usually expressed in light-years (defined as the distance that light travels in vacuum in one Julian year) or in parsecs (one parsec is 3.26 ly, the distance at which stellar parallax is exactly one arcsecond, hence the name). Light in a vacuum travels around 300,000 kilometres per second, so 1 light-year is about 9.46 trillion kilometres, or 63,241 AU. Proxima Centauri, the nearest (albeit not naked-eye visible) star, is 4.243 light-years away. Another way of understanding the vastness of interstellar distances is by scaling: one of the closest stars to the Sun, Alpha Centauri A (a Sun-like star), can be pictured by scaling down the Earth–Sun distance to one metre.
On this scale, the distance to Alpha Centauri A would be about 276 kilometres. The fastest outward-bound spacecraft yet sent, Voyager 1, has covered 1/600 of a light-year in 30 years and is currently moving at 1/18,000 the speed of light. At this rate, a journey to Proxima Centauri would take 80,000 years. A significant factor contributing to the difficulty is the energy that must be supplied to obtain a reasonable travel time. A lower bound for the required energy is the kinetic energy E = ½mv², where m is the final mass. If deceleration on arrival is desired and cannot be achieved by any means other than the engines of the ship, then the lower bound for the required energy is doubled, to mv². The velocity for a crewed round trip of a few decades to even the nearest star is several thousand times greater than those of present space vehicles. This means that, due to the v² term in the kinetic energy formula, millions of times as much energy is required. Accelerating one ton to one-tenth of the speed of light requires at least 4.5×10¹⁷ joules, or 125 terawatt-hours (world energy consumption in 2008 was 143,851 terawatt-hours), without factoring in the efficiency of the propulsion mechanism; a back-of-envelope check of this figure appears below. This energy has to be generated onboard from stored fuel, harvested from the interstellar medium, or projected over immense distances. A knowledge of the properties of the interstellar gas and dust through which the vehicle must pass is essential for the design of any interstellar space mission. A major issue with traveling at extremely high speeds is that interstellar dust may cause considerable damage to the craft, due to the high relative speeds and large kinetic energies involved. Various shielding methods to mitigate this problem have been proposed. Larger objects (such as macroscopic dust grains) are far less common, but would be much more destructive. The risks of impacting such objects, and methods of mitigating these risks, have been discussed in the literature, but many unknowns remain and, owing to the inhomogeneous distribution of interstellar matter around the Sun, will depend on the direction travelled. Although a high-density interstellar medium may cause difficulties for many interstellar travel concepts, interstellar ramjets and some proposed concepts for decelerating interstellar spacecraft would actually benefit from a denser interstellar medium. The crew of an interstellar ship would face several significant hazards, including the psychological effects of long-term isolation, the effects of exposure to ionizing radiation, and the physiological effects of weightlessness on the muscles, joints, bones, immune system, and eyes. There also exists the risk of impact by micrometeoroids and other space debris. These risks represent challenges that have yet to be overcome. The physicist Robert L. Forward has argued that an interstellar mission that cannot be completed within 50 years should not be started at all. Instead, assuming that a civilization is still on an increasing curve of propulsion system velocity and has not yet reached the limit, the resources should be invested in designing a better propulsion system. This is because a slow spacecraft would probably be passed by another mission sent later with more advanced propulsion (the incessant obsolescence postulate). On the other hand, Andrew Kennedy has shown that if one calculates the journey time to a given destination while allowing the travel speed to grow with time (even exponentially), there is a clear minimum in the total time (waiting time plus travel time) to that destination from now.
Voyages undertaken before the minimum will be overtaken by those that leave at the minimum, whereas voyages that leave after the minimum will never overtake those that left at the minimum. There are 59 known stellar systems within 40 light-years of the Sun, containing 81 visible stars. Several of these could be considered prime targets for interstellar missions; existing and near-term astronomical technology is capable of finding planetary systems around these objects, increasing their potential for exploration. Slow interstellar missions based on current and near-future propulsion technologies are associated with trip times starting from about one hundred years to thousands of years. These missions consist of sending a robotic probe to a nearby star for exploration, similar to interplanetary probes such as those used in the Voyager program. By taking along no crew, the cost and complexity of the mission are significantly reduced, although hardware lifetime remains a significant issue alongside obtaining a reasonable speed of travel. Proposed concepts include Project Daedalus, Project Icarus, Project Dragonfly, Project Longshot, and more recently Breakthrough Starshot. Near-lightspeed nano spacecraft might be possible in the near future, built on existing microchip technology with newly developed nanoscale thrusters. Researchers at the University of Michigan are developing thrusters that use nanoparticles as propellant. Their technology is called "nanoparticle field extraction thruster", or nanoFET. These devices act like small particle accelerators shooting conductive nanoparticles out into space. Michio Kaku, a theoretical physicist, has suggested that clouds of "smart dust" be sent to the stars, which may become possible with advances in nanotechnology. Kaku also notes that, because very small probes are easily deflected by magnetic fields, micrometeorites and other hazards, a large number of nanoprobes would need to be sent to ensure that at least one survives the journey and reaches the destination. Given the light weight of these probes, it would take much less energy to accelerate them. With onboard solar cells, they could continually accelerate using solar power. One can envision a day when a fleet of millions or even billions of these particles swarms to distant stars at nearly the speed of light and relays signals back to Earth through a vast interstellar communication network. As a near-term solution, small, laser-propelled interstellar probes, based on current CubeSat technology, were proposed in the context of Project Dragonfly. In crewed missions, the duration of a slow interstellar journey presents a major obstacle, and existing concepts deal with this problem in different ways. They can be distinguished by the "state" in which humans are transported on board the spacecraft. A generation ship (or world ship) is a type of interstellar ark in which the crew that arrives at the destination is descended from those who started the journey. Generation ships are not currently feasible because of the difficulty of constructing a ship of the enormous required scale and the great biological and sociological problems that life aboard such a ship raises. Scientists and writers have postulated various techniques for suspended animation. These include human hibernation and cryonic preservation. Although neither is currently practical, they offer the possibility of sleeper ships in which the passengers lie inert for the long duration of the voyage.
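As flagged earlier, the headline energy figure is easy to verify. The following Python sketch applies the classical kinetic-energy lower bound from the passage above; it ignores propulsion efficiency and relativistic corrections, both of which only make matters worse:

C = 299_792_458.0   # speed of light, m/s
m = 1000.0          # kg: "one ton"
v = 0.1 * C         # one-tenth of the speed of light

ke_joules = 0.5 * m * v ** 2     # classical lower bound: about 4.5e17 J
ke_twh = ke_joules / 3.6e15      # 1 terawatt-hour = 3.6e15 J
print(f"{ke_joules:.2e} J = {ke_twh:.0f} TWh")   # -> ~4.49e17 J, ~125 TWh

# Doubling for deceleration at the target gives ~250 TWh. For comparison,
# 2008 world energy consumption was 143,851 TWh, so this is roughly 0.1%
# of a year's world energy use for a single ton of final mass.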
A robotic interstellar mission carrying some number of frozen early-stage human embryos is another theoretical possibility. This method of space colonization requires, among other things, the development of an artificial uterus, the prior detection of a habitable terrestrial planet, and advances in the field of fully autonomous mobile robots and educational robots that would replace human parents. Interstellar space is not completely empty; it contains trillions of icy bodies ranging from small asteroids (the Oort cloud) to possible rogue planets. There may be ways to take advantage of these resources for a good part of an interstellar trip, slowly hopping from body to body or setting up waystations along the way. If a spaceship could average 10 percent of light speed (and decelerate at the destination, for human-crewed missions), this would be enough to reach Proxima Centauri in forty years. Several propulsion concepts have been proposed that might be eventually developed to accomplish this (see the propulsion discussion below), but none of them is ready for near-term (few decades) development at acceptable cost. Physicists generally believe faster-than-light travel is impossible. Relativistic time dilation allows a traveler to experience time more slowly the closer their speed is to the speed of light. This apparent slowing becomes noticeable when velocities above 80% of the speed of light are attained. Clocks aboard an interstellar ship would run slower than Earth clocks, so if a ship's engines were capable of continuously generating around 1 g of acceleration (which is comfortable for humans), the ship could reach almost anywhere in the galaxy and return to Earth within 40 years ship-time. Upon return, there would be a difference between the time elapsed on the astronaut's ship and the time elapsed on Earth. For example, a spaceship could travel to a star 32 light-years away, initially accelerating at a constant 1.03 g (i.e. 10.1 m/s²) for 1.32 years (ship time), then stopping its engines and coasting for the next 17.3 years (ship time) at a constant speed, then decelerating again for 1.32 ship-years, and coming to a stop at the destination. After a short visit, the astronaut could return to Earth the same way. After the full round trip, the clocks on board the ship show that 40 years have passed, but according to those on Earth, the ship comes back 76 years after launch. From the viewpoint of the astronaut, onboard clocks seem to be running normally. The star ahead seems to be approaching at a speed of 0.87 light-years per ship-year. The universe would appear contracted along the direction of travel to half the size it had when the ship was at rest; the distance between that star and the Sun would seem to be 16 light-years as measured by the astronaut. At higher speeds, the time on board will run even slower, so the astronaut could travel to the center of the Milky Way (30,000 light-years from Earth) and back in 40 years ship-time. But the speed according to Earth clocks will always be less than 1 light-year per Earth year, so, when back home, the astronaut will find that more than 60 thousand years will have passed on Earth. Regardless of how it is achieved, a propulsion system that could produce acceleration continuously from departure to arrival would be the fastest method of travel.
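The itinerary above can be checked against the standard relativistic-rocket relations. The Python sketch below works in units of years and light-years (c = 1) and takes the cruise speed implied by the quoted contracted distance (16 light-years is half of 32, so the cruise Lorentz factor is about 2); with those inputs it reproduces the roughly 40 ship-years versus 76 Earth-years quoted above, with phase durations close to, though not exactly matching, the rounded figures in the text:

import math

G_LY_YR2 = 1.032           # 1 g expressed in light-years per year squared
A = 1.03 * G_LY_YR2        # proper acceleration of the ship
D = 32.0                   # one-way distance, light-years
BETA = 0.866               # cruise speed (Lorentz factor ~2, per the text)

rapidity = math.atanh(BETA)
tau_burn = rapidity / A                     # ship time per accel/decel phase
t_burn = math.sinh(rapidity) / A            # Earth time per phase
x_burn = (math.cosh(rapidity) - 1.0) / A    # distance covered per phase
gamma = math.cosh(rapidity)

x_coast = D - 2.0 * x_burn                  # distance covered while coasting
t_coast = x_coast / BETA                    # Earth time while coasting
tau_coast = t_coast / gamma                 # ship time while coasting

ship_round = 2.0 * (2.0 * tau_burn + tau_coast)
earth_round = 2.0 * (2.0 * t_burn + t_coast)
print(f"ship clock: {ship_round:.1f} yr, Earth clock: {earth_round:.1f} yr")
# -> about 40 ship-years versus about 76 Earth-years, as quoted above.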
A constant acceleration journey is one where the propulsion system accelerates the ship at a constant rate for the first half of the journey, and then decelerates for the second half, so that it arrives at the destination stationary relative to where it began. If this were performed with an acceleration similar to that experienced at the Earth's surface, it would have the added advantage of producing artificial "gravity" for the crew. Supplying the energy required, however, would be prohibitively expensive with current technology. From the perspective of a planetary observer, the ship will appear to accelerate steadily at first, but then more gradually as it approaches the speed of light (which it cannot exceed). It will undergo hyperbolic motion. The ship will be close to the speed of light after about a year of accelerating and remain at that speed until it brakes for the end of the journey. From the perspective of an onboard observer, the crew will feel a gravitational field opposite the engine's acceleration, and the universe ahead will appear to fall in that field, undergoing hyperbolic motion. As part of this, distances between objects in the direction of the ship's motion will gradually contract until the ship begins to decelerate, at which time an onboard observer's experience of the gravitational field will be reversed. When the ship reaches its destination, if it were to exchange a message with its origin planet, it would find that less time had elapsed on board than had elapsed for the planetary observer, due to time dilation and length contraction. The result is an impressively fast journey for the crew. All rocket concepts are limited by the rocket equation, Δv = ve ln(M0/M1), which sets the characteristic velocity available as a function of exhaust velocity ve and mass ratio, the ratio of initial (M0, including fuel) to final (M1, fuel depleted) mass; a short numerical sketch follows this passage. Very high specific power, the ratio of thrust to total vehicle mass, is required to reach interstellar targets within sub-century time-frames. Some heat transfer is inevitable and a tremendous heating load must be adequately handled. Thus, for interstellar rocket concepts of all technologies, a key engineering problem (seldom explicitly discussed) is limiting the heat transfer from the exhaust stream back into the vehicle. An example of electric propulsion is the ion engine, used by spacecraft such as Dawn. In an ion engine, electric power is used to create charged particles of the propellant, usually the gas xenon, and accelerate them to extremely high velocities. The exhaust velocity of conventional rockets is limited by the chemical energy stored in the fuel's molecular bonds, which caps it at about 5 km/s. They produce a high thrust (about 10⁶ N), but they have a low specific impulse, and that limits their top speed. By contrast, ion engines have low thrust, but the top speed in principle is limited only by the electrical power available on the spacecraft and the gas ions being accelerated. The exhaust speed of the charged particles ranges from 15 km/s to 35 km/s. Nuclear-electric or plasma engines, operating for long periods at low thrust and powered by fission reactors, have the potential to reach speeds much greater than chemically powered vehicles or nuclear-thermal rockets. Such vehicles probably have the potential to power solar system exploration with reasonable trip times within the current century. Because of their low-thrust propulsion, they would be limited to off-planet, deep-space operation.
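As promised above, here is a minimal Python sketch of the rocket-equation limit. It evaluates the classical Tsiolkovsky relation for the exhaust velocities quoted in this passage (about 5 km/s chemical, up to 35 km/s ion) with an illustrative mass ratio of 10, a number chosen purely for the example:

import math

def delta_v(v_exhaust_km_s, mass_ratio):
    # Tsiolkovsky rocket equation: delta-v = v_e * ln(M0 / M1)
    return v_exhaust_km_s * math.log(mass_ratio)

for name, ve in (("chemical (~5 km/s)", 5.0), ("ion (~35 km/s)", 35.0)):
    print(f"{name}: delta-v = {delta_v(ve, 10.0):.1f} km/s")
# chemical: ~11.5 km/s, ion: ~80.6 km/s. Both fall far below the ~30,000 km/s
# (0.1 c) scale discussed for interstellar flight, which is why enormous mass
# ratios, staging, or external power are invoked in what follows.
# As a rule of thumb, an exhaust stream carrying a fraction eps of the fuel's
# mass-energy has v_e ~ c * sqrt(2 * eps): eps ~ 0.001 (fission) gives
# ~0.045 c, and eps ~ 0.004 (fusion) gives ~0.09 c, consistent with the
# percentages quoted later in this article.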
Electrically powered spacecraft propulsion drawing on a portable power source, say a nuclear reactor, produces only small accelerations, and it would take centuries to reach, for example, 15% of the velocity of light, making it unsuitable for interstellar flight during a single human lifetime. Fission-fragment rockets use nuclear fission to create high-speed jets of fission fragments, which are ejected at speeds of up to a few percent of the speed of light. With fission, the energy output is approximately 0.1% of the total mass-energy of the reactor fuel, which limits the effective exhaust velocity to about 5% of the velocity of light. For maximum velocity, the reaction mass should optimally consist of fission products, the "ash" of the primary energy source, so that no extra reaction mass need be accounted for in the mass ratio. Based on work in the late 1950s to the early 1960s, it has been technically possible to build spaceships with nuclear pulse propulsion engines, i.e. driven by a series of nuclear explosions. This propulsion system offers the prospect of very high specific impulse (space travel's equivalent of fuel economy) and high specific power. Project Orion team member Freeman Dyson proposed in 1968 an interstellar spacecraft using nuclear pulse propulsion that used pure deuterium fusion detonations with a very high fuel-burnup fraction. He computed an exhaust velocity of 15,000 km/s and a 100,000-tonne space vehicle able to achieve a 20,000 km/s delta-v, allowing a flight time to Alpha Centauri of 130 years. Later studies indicate that the top cruise velocity that can theoretically be achieved by a Teller–Ulam thermonuclear unit powered Orion starship, assuming no fuel is saved for slowing back down, is about 8% to 10% of the speed of light (0.08–0.1c). An atomic (fission) Orion can achieve perhaps 3%–5% of the speed of light. A nuclear pulse drive starship powered by fusion-antimatter catalyzed nuclear pulse propulsion units would be similarly in the 10% range, and pure matter-antimatter annihilation rockets would be theoretically capable of obtaining a velocity between 50% and 80% of the speed of light. In each case, saving fuel for slowing down halves the maximum speed. The concept of using a magnetic sail to decelerate the spacecraft as it approaches its destination has been discussed as an alternative to using propellant; this would allow the ship to travel near the maximum theoretical velocity. Alternative designs utilizing similar principles include Project Longshot, Project Daedalus, and Mini-Mag Orion. The principle of external nuclear pulse propulsion to maximize survivable power has remained common among serious concepts for interstellar flight without external power beaming and for very high-performance interplanetary flight. In the 1970s the nuclear pulse propulsion concept was further refined by Project Daedalus through the use of externally triggered inertial confinement fusion, in this case producing fusion explosions by compressing fusion fuel pellets with high-powered electron beams. Since then, lasers, ion beams, neutral particle beams and hyper-kinetic projectiles have been suggested to produce nuclear pulses for propulsion purposes. A current impediment to the development of "any" nuclear-explosion-powered spacecraft is the 1963 Partial Test Ban Treaty, which includes a prohibition on the detonation of any nuclear devices (even non-weapon based) in outer space.
This treaty would, therefore, need to be renegotiated, although a project on the scale of an interstellar mission using currently foreseeable technology would probably require international cooperation on at least the scale of the International Space Station. Another issue to be considered would be the g-forces imparted to a rapidly accelerated spacecraft, cargo, and passengers inside (see Inertia negation). Fusion rocket starships, powered by nuclear fusion reactions, should conceivably be able to reach speeds of the order of 10% of that of light, based on energy considerations alone. In theory, a large number of stages could push a vehicle arbitrarily close to the speed of light. These would "burn" such light-element fuels as deuterium, tritium, ³He, ¹¹B, and ⁷Li. Because fusion yields about 0.3–0.9% of the mass of the nuclear fuel as released energy, it is energetically more favorable than fission, which releases <0.1% of the fuel's mass-energy. The maximum exhaust velocities potentially energetically available are correspondingly higher than for fission, typically 4–10% of c. However, the most easily achievable fusion reactions release a large fraction of their energy as high-energy neutrons, which are a significant source of energy loss. Thus, although these concepts seem to offer the best (nearest-term) prospects for travel to the nearest stars within a (long) human lifetime, they still involve massive technological and engineering difficulties, which may turn out to be intractable for decades or centuries. Early studies include Project Daedalus, performed by the British Interplanetary Society in 1973–1978, and Project Longshot, a student project sponsored by NASA and the US Naval Academy, completed in 1988. Another fairly detailed vehicle system, "Discovery II", designed and optimized for crewed Solar System exploration, based on the D–³He reaction but using hydrogen as reaction mass, has been described by a team from NASA's Glenn Research Center. It achieves characteristic velocities of >300 km/s with an acceleration of ~1.7×10⁻³ g, an initial ship mass of ~1,700 metric tons, and a payload fraction above 10%. Although these are still far short of the requirements for interstellar travel on human timescales, the study seems to represent a reasonable benchmark towards what may be approachable within several decades, which is not impossibly beyond the current state of the art. Based on the concept's 2.2% burnup fraction, it could achieve a pure fusion-product exhaust velocity of ~3,000 km/s. An antimatter rocket would have a far higher energy density and specific impulse than any other proposed class of rocket. If energy resources and efficient production methods are found to make antimatter in the quantities required and store it safely, it would be theoretically possible to reach speeds of several tens of percent that of light. Whether antimatter propulsion could lead to the higher speeds (>90% that of light) at which relativistic time dilation would become more noticeable, thus making time pass at a slower rate for the travelers as perceived by an outside observer, is doubtful owing to the large quantity of antimatter that would be required. Assuming that production and storage of antimatter become feasible, two further issues need to be considered.
First, in the annihilation of antimatter, much of the energy is lost as high-energy gamma radiation, and especially also as neutrinos, so that only about 40% of mc² would actually be available if the antimatter were simply allowed to annihilate into radiation thermally. Even so, the energy available for propulsion would be substantially higher than the ~1% of mc² yield of nuclear fusion, the next-best rival candidate. Second, heat transfer from the exhaust to the vehicle seems likely to transfer enormous wasted energy into the ship (e.g. for 0.1 g ship acceleration, approaching 0.3 trillion watts per ton of ship mass), considering the large fraction of the energy that goes into penetrating gamma rays. Even assuming shielding was provided to protect the payload (and passengers on a crewed vehicle), some of the energy would inevitably heat the vehicle, and may thereby prove a limiting factor if useful accelerations are to be achieved. More recently, Friedwardt Winterberg proposed that a matter-antimatter GeV gamma-ray laser photon rocket is possible by a relativistic proton-antiproton pinch discharge, where the recoil from the laser beam is transmitted by the Mössbauer effect to the spacecraft. Rockets deriving their power from external sources, such as a laser, could replace their internal energy source with an energy collector, potentially reducing the mass of the ship greatly and allowing much higher travel speeds. Geoffrey A. Landis has proposed an interstellar probe with energy supplied by an external laser from a base station powering an ion thruster. A problem with all traditional rocket propulsion methods is that the spacecraft would need to carry its fuel with it, thus making it very massive, in accordance with the rocket equation. Several concepts attempt to escape from this problem: a radio frequency (RF) resonant cavity thruster is a device that is claimed to be a spacecraft thruster. In 2016, the Advanced Propulsion Physics Laboratory at NASA reported observing a small apparent thrust from one such test, a result not since replicated. One of the designs is called EMDrive. In December 2002, Satellite Propulsion Research Ltd described a working prototype with an alleged total thrust of about 0.02 newtons powered by an 850 W cavity magnetron. The device could operate for only a few dozen seconds before the magnetron failed, due to overheating. The latest test of the EMDrive concluded that it does not work. Proposed in 2019 by NASA scientist Dr. David Burns, the helical engine concept would use a particle accelerator to accelerate particles to near the speed of light. Since particles traveling at such speeds acquire more mass, it is suggested that this mass change could create acceleration. According to Burns, the spacecraft could theoretically reach 99% of the speed of light. In 1960, Robert W. Bussard proposed the Bussard ramjet, a fusion rocket in which a huge scoop would collect the diffuse hydrogen in interstellar space, "burn" it on the fly using a proton–proton chain reaction, and expel it out of the back. Later calculations with more accurate estimates suggest that the thrust generated would be less than the drag caused by any conceivable scoop design. Yet the idea is attractive because the fuel would be collected "en route" (commensurate with the concept of energy harvesting), so the craft could theoretically accelerate to near the speed of light. The limitation is due to the fact that the reaction can only accelerate the propellant to 0.12c.
Thus the drag of scooping up interstellar gas and the thrust of accelerating that same gas to 0.12c would cancel once the ship itself reaches 0.12c, preventing further acceleration. A light sail or magnetic sail powered by a massive laser or particle accelerator in the home star system could potentially reach even greater speeds than rocket- or pulse-propulsion methods, because it would not need to carry its own reaction mass and therefore would only need to accelerate the craft's payload. Robert L. Forward proposed a means for decelerating an interstellar light sail in the destination star system without requiring a laser array to be present in that system. In this scheme, a smaller secondary sail is deployed to the rear of the spacecraft, whereas the large primary sail is detached from the craft to keep moving forward on its own. Light is reflected from the large primary sail to the secondary sail, which is used to decelerate the secondary sail and the spacecraft payload. In 2002, Geoffrey A. Landis of NASA's Glenn Research Center also proposed a laser-powered propulsion sail ship that would host a diamond sail (a few nanometers thick) powered with the use of solar energy. Under this proposal, the interstellar ship would, theoretically, be able to reach 10 percent of the speed of light. A magnetic sail could also decelerate at its destination without depending on carried fuel or a driving beam in the destination system, by interacting with the plasma found in the solar wind of the destination star and the interstellar medium. The physicist Robert L. Forward, and later Heller, Hippke and Kervella, have worked out example concepts using beamed laser propulsion. Achieving start-stop interstellar trip times of less than a human lifetime requires mass ratios of between 1,000 and 1,000,000, even for the nearer stars. This could be achieved by multi-staged vehicles on a vast scale. Alternatively, large linear accelerators could propel fuel to fission-propelled space vehicles, avoiding the limitations of the rocket equation. Scientists and authors have postulated a number of ways by which it might be possible to surpass the speed of light, but even the most serious-minded of these are highly speculative. It is also debatable whether faster-than-light travel is physically possible, in part because of causality concerns: travel faster than light may, under certain conditions, permit travel backwards in time within the context of special relativity. Proposed mechanisms for faster-than-light travel within the theory of general relativity require the existence of exotic matter, and it is not known whether this could be produced in sufficient quantity. In physics, the Alcubierre drive is based on an argument, within the framework of general relativity and without the introduction of wormholes, that it is possible to modify spacetime in a way that allows a spaceship to travel with an arbitrarily large speed by a local expansion of spacetime behind the spaceship and an opposite contraction in front of it. Nevertheless, this concept would require the spaceship to incorporate a region of exotic matter, or the hypothetical concept of negative mass. A theoretical idea for enabling interstellar travel is to propel a starship by creating an artificial black hole and using a parabolic reflector to reflect its Hawking radiation. Although beyond current technological capabilities, a black hole starship offers some advantages compared to other possible methods.
Getting the black hole to act as a power source and engine also requires a way to convert the Hawking radiation into energy and thrust. One potential method involves placing the hole at the focal point of a parabolic reflector attached to the ship, creating forward thrust. A slightly easier, but less efficient, method would involve simply absorbing all the gamma radiation heading towards the fore of the ship to push it onwards, and letting the rest shoot out the back. Wormholes are conjectural distortions in spacetime that theorists postulate could connect two arbitrary points in the universe, across an Einstein–Rosen bridge. It is not known whether wormholes are possible in practice. Although there are solutions to the Einstein equation of general relativity that allow for wormholes, all of the currently known solutions involve some assumption, for example the existence of negative mass, which may be unphysical. However, Cramer et al. argue that such wormholes might have been created in the early universe, stabilized by cosmic strings. The general theory of wormholes is discussed by Visser in the book "Lorentzian Wormholes". The Enzmann starship, as detailed by G. Harry Stine in the October 1973 issue of "Analog", was a design for a future starship based on the ideas of Robert Duncan-Enzmann. The spacecraft as proposed would use a 12,000,000-ton ball of frozen deuterium to power 12–24 thermonuclear pulse propulsion units. Twice as long as the Empire State Building is tall, and assembled in orbit, the spacecraft was part of a larger project preceded by interstellar probes and telescopic observation of target star systems. Project Hyperion, one of the projects of Icarus Interstellar, has looked into various feasibility issues of crewed interstellar travel. Its members continue to publish on crewed interstellar travel in collaboration with the Initiative for Interstellar Studies. NASA has been researching interstellar travel since its formation, translating important foreign-language papers and conducting early studies on applying fusion propulsion, in the 1960s, and laser propulsion, in the 1970s, to interstellar travel. The NASA Breakthrough Propulsion Physics Program (terminated in FY 2003 after a 6-year, $1.2-million study, because "No breakthroughs appear imminent.") identified some breakthroughs that are needed for interstellar travel to be possible. Geoffrey A. Landis of NASA's Glenn Research Center states that a laser-powered interstellar sail ship could possibly be launched within 50 years, using new methods of space travel. "I think that ultimately we're going to do it, it's just a question of when and who," Landis said in an interview. Rockets are too slow to send humans on interstellar missions. Instead, he envisions interstellar craft with extensive sails, propelled by laser light to about one-tenth the speed of light. It would take such a ship about 43 years to reach Alpha Centauri if it passed through the system without stopping. Slowing down to stop at Alpha Centauri could increase the trip to 100 years, whereas a journey without slowing down raises the issue of making sufficiently accurate and useful observations and measurements during a fly-by. The 100 Year Starship (100YSS) is the name of the overall effort that will, over the next century, work toward achieving interstellar travel.
The 100 Year Starship (100YSS) is the name of the overall effort that will, over the next century, work toward achieving interstellar travel. The associated 100 Year Starship study was a one-year project to assess the attributes of, and lay the groundwork for, an organization that could carry the 100 Year Starship vision forward. Harold ("Sonny") White from NASA's Johnson Space Center is a member of Icarus Interstellar, the nonprofit foundation whose mission is to realize interstellar flight before the year 2100. At the 2012 meeting of 100YSS, he reported using a laser to try to warp spacetime by 1 part in 10 million, with the aim of helping to make interstellar travel possible. A few organisations dedicated to interstellar propulsion research and advocacy exist worldwide. These are still in their infancy, but are already backed by a membership of a wide variety of scientists, students and professionals.

The energy requirements make interstellar travel very difficult. It has been reported that at the 2008 Joint Propulsion Conference, multiple experts opined that it was improbable that humans would ever explore beyond the Solar System. Brice N. Cassenti, an associate professor with the Department of Engineering and Science at Rensselaer Polytechnic Institute, stated that at least 100 times the total energy output of the entire world [in a given year] would be required to send a probe to the nearest star. Astrophysicist Sten Odenwald stated that the basic problem is that, through intensive studies of thousands of detected exoplanets, most of the closest destinations within 50 light years do not yield Earth-like planets in their stars' habitable zones. Given the multitrillion-dollar expense of some of the proposed technologies, travelers would have to spend up to 200 years traveling at 20% of the speed of light to reach the best known destinations. Moreover, once the travelers arrive at their destination (by any means), they will not be able to travel down to the surface of the target world and set up a colony unless the atmosphere is non-lethal. The prospect of making such a journey, only to spend the rest of the colony's life inside a sealed habitat and venturing outside in a spacesuit, may eliminate many prospective targets from the list.

Moving at a speed close to the speed of light and encountering even a tiny stationary object like a grain of sand would have fatal consequences. For example, a gram of matter moving at 90% of the speed of light carries a kinetic energy corresponding to a small nuclear bomb (around 30 kt of TNT).
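A minimal check of that kinetic-energy figure, using the relativistic formula KE = (γ − 1)mc²; the TNT conversion factor of 4.184 × 10¹² J per kiloton is a standard value, not stated in the text:

```python
import math

# Relativistic kinetic energy of 1 gram at 90% of the speed of light,
# expressed in kilotons of TNT (1 kt TNT = 4.184e12 J).

c = 2.998e8                                # speed of light, m/s
m = 1e-3                                   # 1 gram, kg
v = 0.9 * c

gamma = 1 / math.sqrt(1 - (v / c) ** 2)    # Lorentz factor, ~2.29
ke_joules = (gamma - 1) * m * c ** 2       # relativistic kinetic energy, ~1.16e14 J
print(ke_joules / 4.184e12)                # ~27.8 kt TNT, i.e. roughly 30 kt
```

The classical ½mv² formula would underestimate this by roughly a factor of three; at 0.9c the Lorentz factor already matters.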
Explorative high-speed missions to Alpha Centauri, as planned by the Breakthrough Starshot initiative, are projected to be realizable within the 21st century. It is alternatively possible to plan for uncrewed slow-cruising missions taking millennia to arrive. These probes would not be for human benefit, in the sense that one cannot foresee whether anybody on Earth would still be interested in the science data transmitted back. An example would be the Genesis mission, which aims to bring unicellular life, in the spirit of directed panspermia, to habitable but otherwise barren planets. Comparatively slow-cruising Genesis probes, travelling at a small fraction of the speed of light, could be decelerated using a magnetic sail. Uncrewed missions not for human benefit would hence be feasible.

In February 2017, NASA announced that its Spitzer Space Telescope had revealed seven Earth-size planets in the TRAPPIST-1 system, orbiting an ultra-cool dwarf star 40 light-years away from our solar system. Three of these planets are firmly located in the habitable zone, the area around the parent star where a rocky planet is most likely to have liquid water. The discovery set a new record for the greatest number of habitable-zone planets found around a single star outside our solar system. All seven of these planets could have liquid water – the key to life as we know it – under the right atmospheric conditions, but the chances are highest with the three in the habitable zone.
https://en.wikipedia.org/wiki?curid=14843
Interior Gateway Routing Protocol Interior Gateway Routing Protocol (IGRP) is a distance-vector interior gateway protocol (IGP) developed by Cisco. It is used by routers to exchange routing data within an autonomous system. IGRP is a proprietary protocol. IGRP was created in part to overcome the limitations of RIP (a maximum hop count of only 15, and a single routing metric) when used within large networks. IGRP supports multiple metrics for each route, including bandwidth, delay, load, and reliability; to compare two routes, these metrics are combined into a single composite metric using a formula that can be adjusted through pre-set constants. By default, the IGRP composite metric is the sum of the segment delays and a term derived from the lowest segment bandwidth (see the sketch below). The maximum configurable hop count of IGRP-routed packets is 255 (default 100), and routing updates are broadcast every 90 seconds (by default). IGRP uses protocol number 9 for communication.

IGRP is considered a classful routing protocol. Because the protocol has no field for a subnet mask, the router assumes that all subnetwork addresses within the same Class A, Class B, or Class C network have the same subnet mask as the subnet mask configured for the interfaces in question. This contrasts with classless routing protocols, which can use variable-length subnet masks. Classful protocols have become less popular as they are wasteful of IP address space.

In order to address the issues of address space and other factors, Cisco created EIGRP (Enhanced Interior Gateway Routing Protocol). EIGRP adds support for VLSM (variable-length subnet masking) and adds the Diffusing Update Algorithm (DUAL) in order to improve routing and provide a loop-free environment. EIGRP has completely replaced IGRP, making IGRP an obsolete routing protocol. In Cisco IOS versions 12.3 and greater, IGRP is completely unsupported. In the new Cisco CCNA curriculum (version 4), IGRP is mentioned only briefly, as an "obsolete protocol".
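A minimal sketch of that composite metric, assuming the commonly documented Cisco formulation with tunable constants K1–K5 (defaults K1 = K3 = 1 and K2 = K4 = K5 = 0, which reduce the metric to scaled inverse bandwidth plus summed delay); the function `igrp_metric` and its parameter names are illustrative, not a real router API:

```python
# IGRP-style composite metric (assumed conventional formulation).
# Bandwidth term: 10^7 divided by the slowest link's bandwidth in kbit/s.
# Delay term: sum of per-segment delays, in units of 10 microseconds.

def igrp_metric(min_bandwidth_kbps, delays_usec,
                load=1, reliability=255,
                k1=1, k2=0, k3=1, k4=0, k5=0):
    bandwidth = 10_000_000 // min_bandwidth_kbps   # inverse of slowest link
    delay = sum(delays_usec) // 10                 # delay in 10-usec units
    metric = k1 * bandwidth + (k2 * bandwidth) // (256 - load) + k3 * delay
    if k5 != 0:                                    # reliability term only when K5 is set
        metric = metric * k5 // (reliability + k4)
    return metric

# Two-hop path: slowest link is a 1544 kbit/s T1; per-segment delays of
# 20000 and 1000 microseconds.
print(igrp_metric(1544, [20000, 1000]))            # 6476 + 2100 = 8576
```

With the default constants, a route through one slow link is penalized by the large inverse-bandwidth term no matter how fast its other segments are, which is exactly the "lowest segment bandwidth" behaviour described above.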
https://en.wikipedia.org/wiki?curid=14844
Indo-European languages The Indo-European languages are a large language family native to western Eurasia. It comprises most of the languages of Europe together with those of the northern Indian subcontinent and the Iranian Plateau. A few of these languages, such as English, have expanded through colonialism in the modern period and are now spoken across all continents. The Indo-European family is divided into several branches or sub-families, the largest of which are the Indo-Iranian, Germanic, Romance, and Balto-Slavic groups. The most populous individual languages within them are Spanish, English, Hindustani (Hindi/Urdu), Portuguese, Bengali, Punjabi, and Russian, each with over 100 million speakers. German, French, Marathi, Italian, and Persian have more than 50 million each. In total, 46% of the world's population (3.2 billion) speaks an Indo-European language as a first language, by far the highest share of any language family. There are about 445 living Indo-European languages, according to the estimate by "Ethnologue", with over two thirds (313) of them belonging to the Indo-Iranian branch.

All Indo-European languages are descendants of a single prehistoric language, reconstructed as Proto-Indo-European, spoken sometime in the Neolithic era. Its precise geographical location, the Indo-European "urheimat", is unknown and has been the object of many competing hypotheses. By the time the first written records appeared, Indo-European had already evolved into numerous languages spoken across much of Europe and south-west Asia. Written evidence of Indo-European appeared during the Bronze Age in the form of Mycenaean Greek and the Anatolian languages, Hittite and Luwian. The oldest records are isolated Hittite words and names – interspersed in texts that are otherwise in the unrelated Old Assyrian language, a Semitic language – found in the texts of the Assyrian colony of Kültepe in eastern Anatolia in the 20th century BC. Although no older written records of the original Proto-Indo-Europeans remain, some aspects of their culture and religion can be reconstructed from later evidence in the daughter cultures.

The Indo-European family is significant to the field of historical linguistics as it possesses the second-longest recorded history of any known family, after the Afroasiatic family in the form of the Egyptian language and the Semitic languages. The analysis of the family relationships between the Indo-European languages and the reconstruction of their common source was central to the development of the methodology of historical linguistics as an academic discipline in the 19th century. The Indo-European family is not known to be linked to any other language family through any more distant genetic relationship, although several disputed proposals to that effect have been made. During the nineteenth century, the linguistic concept of Indo-European languages was frequently used interchangeably with the racial concepts of Aryan and Japhetite.

In the 16th century, European visitors to the Indian subcontinent began to notice similarities among Indo-Aryan, Iranian, and European languages. In 1583, English Jesuit missionary and Konkani scholar Thomas Stephens wrote a letter from Goa to his brother (not published until the 20th century) in which he noted similarities between Indian languages and Greek and Latin. Another account was made by Filippo Sassetti, a merchant born in Florence in 1540, who travelled to the Indian subcontinent.
Writing in 1585, he noted some word similarities between Sanskrit and Italian (these included "devaḥ"/"dio" "God", "sarpaḥ"/"serpe" "serpent", "sapta"/"sette" "seven", "aṣṭa"/"otto" "eight", and "nava"/"nove" "nine"). However, neither Stephens' nor Sassetti's observations led to further scholarly inquiry.

In 1647, Dutch linguist and scholar Marcus Zuerius van Boxhorn noted the similarity among certain Asian and European languages and theorized that they were derived from a primitive common language which he called Scythian. He included in his hypothesis Dutch, Albanian, Greek, Latin, Persian, and German, later adding Slavic, Celtic, and Baltic languages. However, Van Boxhorn's suggestions did not become widely known and did not stimulate further research. Ottoman Turkish traveler Evliya Çelebi visited Vienna in 1665–1666 as part of a diplomatic mission and noted a few similarities between words in German and in Persian. Gaston Coeurdoux and others made observations of the same type. Coeurdoux made a thorough comparison of Sanskrit, Latin and Greek conjugations in the late 1760s to suggest a relationship among them. Meanwhile, Mikhail Lomonosov compared different language groups, including Slavic, Baltic ("Kurlandic"), Iranian ("Medic"), Finnish, Chinese, "Hottentot" (Khoekhoe), and others, noting that related languages (including Latin, Greek, German and Russian) must have separated in antiquity from common ancestors.

The hypothesis reappeared in 1786 when Sir William Jones first lectured on the striking similarities among three of the oldest languages known in his time: Latin, Greek, and Sanskrit, to which he tentatively added Gothic, Celtic, and Persian, though his classification contained some inaccuracies and omissions. In one of the most famous quotations in linguistics, delivered in a lecture to the Asiatic Society of Bengal in 1786, Jones made a prescient statement conjecturing the existence of an earlier ancestor language, which he called "a common source" but did not name.

Thomas Young first used the term "Indo-European" in 1813, deriving it from the geographical extremes of the language family: from Western Europe to North India. A synonym is "Indo-Germanic" ("Idg." or "IdG."), specifying the family's southeasternmost and northwesternmost branches. This first appeared in French ("indo-germanique") in 1810 in the work of Conrad Malte-Brun; in most languages this term is now dated or less common than "Indo-European", although in German "indogermanisch" remains the standard scientific term. A number of other synonymous terms have also been used.

Franz Bopp wrote in 1816 "On the conjugational system of the Sanskrit language compared with that of Greek, Latin, Persian and Germanic" and between 1833 and 1852 he wrote "Comparative Grammar". This marks the beginning of Indo-European studies as an academic discipline. The classical phase of Indo-European comparative linguistics leads from this work to August Schleicher's 1861 "Compendium" and up to Karl Brugmann's "Grundriss", published in the 1880s. Brugmann's neogrammarian re-evaluation of the field and Ferdinand de Saussure's development of the laryngeal theory may be considered the beginning of "modern" Indo-European studies.
The generation of Indo-Europeanists active in the last third of the 20th century (such as Calvert Watkins, Jochem Schindler, and Helmut Rix) developed a better understanding of morphology and of ablaut in the wake of Kuryłowicz's 1956 "Apophony in Indo-European"; Kuryłowicz had in 1927 pointed out the existence of the Hittite consonant ḫ. Kuryłowicz's discovery supported Ferdinand de Saussure's 1879 proposal of the existence of "coefficients sonantiques", elements de Saussure reconstructed to account for vowel length alternations in Indo-European languages. This led to the so-called laryngeal theory, a major step forward in Indo-European linguistics and a confirmation of de Saussure's theory.

The Indo-European language family comprises ten major branches: Albanian, Anatolian, Armenian, Balto-Slavic, Celtic, Germanic, Hellenic, Indo-Iranian, Italic, and Tocharian. In addition to these classical ten branches, several extinct and little-known languages and language groups have existed or are proposed to have existed.

Membership of languages in the Indo-European language family is determined by genealogical relationships, meaning that all members are presumed descendants of a common ancestor, Proto-Indo-European. Membership in the various branches, groups and subgroups of Indo-European is also genealogical, but here the defining factors are "shared innovations" among various languages, suggesting a common ancestor that split off from other Indo-European groups. For example, what makes the Germanic languages a branch of Indo-European is that much of their structure and phonology can be stated in rules that apply to all of them. Many of their common features are presumed innovations that took place in Proto-Germanic, the source of all the Germanic languages.

The "tree model" is considered an appropriate representation of the genealogical history of a language family if communities do not remain in contact after their languages have started to diverge. In this case, subgroups defined by shared innovations form a nested pattern. The tree model is not appropriate in cases where languages remain in contact as they diversify; in such cases subgroups may overlap, and the "wave model" is a more accurate representation. Most approaches to Indo-European subgrouping to date have assumed that the tree model is by and large valid for Indo-European; however, there is also a long tradition of wave-model approaches.

In addition to genealogical changes, many of the early changes in Indo-European languages can be attributed to language contact. It has been asserted, for example, that many of the more striking features shared by Italic languages (Latin, Oscan, Umbrian, etc.) might well be areal features. More certainly, very similar-looking alterations in the systems of long vowels in the West Germanic languages greatly postdate any possible notion of a proto-language innovation (and cannot readily be regarded as "areal", either, because English and continental West Germanic were not a linguistic area). In a similar vein, there are many similar innovations in Germanic and Balto-Slavic that are far more likely areal features than traceable to a common proto-language, such as the uniform development of a high vowel (*"u" in the case of Germanic, *"i/u" in the case of Baltic and Slavic) before the PIE syllabic resonants *"ṛ, *ḷ, *ṃ, *ṇ", unique to these two groups among IE languages, which is in agreement with the wave model. The Balkan sprachbund even features areal convergence among members of very different branches.
An extension to the Ringe–Warnow model of language evolution suggests that early IE featured limited contact between distinct lineages, with only the Germanic subfamily exhibiting a less treelike behaviour as it acquired some characteristics from neighbours early in its evolution. The internal diversification of especially West Germanic is cited to have been radically non-treelike.

Specialists have postulated the existence of higher-order subgroups such as Italo-Celtic, Graeco-Armenian, Graeco-Aryan or Graeco-Armeno-Aryan, and Balto-Slavo-Germanic. However, unlike the ten traditional branches, these are all controversial to a greater or lesser degree. The Italo-Celtic subgroup was at one point uncontroversial, considered by Antoine Meillet to be even better established than Balto-Slavic. The main lines of evidence included the genitive suffix "-ī"; the superlative suffix "-m̥mo"; the change of /p/ to /kʷ/ before another /kʷ/ in the same word (as in *"penkʷe" > *"kʷenkʷe" > Latin "quīnque", Old Irish "cóic"); and the subjunctive morpheme "-ā-". This evidence was prominently challenged by Calvert Watkins, while Michael Weiss has argued for the subgroup.

Evidence for a relationship between Greek and Armenian includes the regular change of the second laryngeal to "a" at the beginnings of words, as well as terms for "woman" and "sheep". Greek and Indo-Iranian share innovations mainly in verbal morphology and patterns of nominal derivation. Relations have also been proposed between Phrygian and Greek, and between Thracian and Armenian. Some fundamental shared features, like the aorist (a verb form denoting action without reference to duration or completion) having the perfect active particle -s fixed to the stem, link this group closer to Anatolian languages and Tocharian. Shared features with Balto-Slavic languages, on the other hand (especially present and preterit formations), might be due to later contacts.

The Indo-Hittite hypothesis proposes that the Indo-European language family consists of two main branches: one represented by the Anatolian languages and another branch encompassing all other Indo-European languages. Features that separate Anatolian from all other branches of Indo-European (such as the gender or the verb system) have been interpreted alternately as archaic debris or as innovations due to prolonged isolation. Points proffered in favour of the Indo-Hittite hypothesis are the (non-universal) Indo-European agricultural terminology in Anatolia and the preservation of laryngeals. However, in general this hypothesis is considered to attribute too much weight to the Anatolian evidence. According to another view, the Anatolian subgroup left the Indo-European parent language comparatively late, approximately at the same time as Indo-Iranian and later than the Greek or Armenian divisions. A third view, especially prevalent in the so-called French school of Indo-European studies, holds that extant similarities in non-satem languages in general—including Anatolian—might be due to their peripheral location in the Indo-European language area and to early separation, rather than indicating a special ancestral relationship. Hans J. Holm, based on lexical calculations, arrives at a picture roughly replicating the general scholarly opinion and refuting the Indo-Hittite hypothesis.

The division of the Indo-European languages into satem and centum groups was put forward by Peter von Bradke in 1890, although Karl Brugmann had proposed a similar type of division in 1886.
In the satem languages, which include the Balto-Slavic and Indo-Iranian branches, as well as (in most respects) Albanian and Armenian, the reconstructed Proto-Indo-European palatovelars remained distinct and were fricativized, while the labiovelars merged with the 'plain velars'. In the centum languages, the palatovelars merged with the plain velars, while the labiovelars remained distinct. The results of these alternative developments are exemplified by the words for "hundred" in Avestan ("satem") and Latin ("centum")—the initial palatovelar developed into a fricative in the former, but became an ordinary velar in the latter.

Rather than being a genealogical separation, the centum–satem division is commonly seen as resulting from innovative changes that spread across PIE dialect-branches over a particular geographical area; the centum–satem isogloss intersects a number of other isoglosses that mark distinctions between features in the early IE branches. It may be that the centum branches in fact reflect the original state of affairs in PIE, and that only the satem branches shared a set of innovations, which affected all but the peripheral areas of the PIE dialect continuum. Kortlandt proposes that the ancestors of Balts and Slavs took part in satemization before being drawn later into the western Indo-European sphere.

Some linguists propose that Indo-European languages form part of one of several hypothetical macrofamilies, ranging from relatively small pairings to far larger groupings. However, these theories remain highly controversial and are not accepted by most linguists in the field. Objections to such groupings are not based on any theoretical claim about the likely historical existence or non-existence of such macrofamilies; it is entirely reasonable to suppose that they might have existed. The serious difficulty lies in identifying the details of actual relationships between language families, because it is very hard to find concrete evidence that transcends chance resemblance, or is not equally likely explained as being due to borrowing (including Wanderwörter, which can travel very long distances). Because the signal-to-noise ratio in historical linguistics declines over time, at great enough time-depths it becomes open to reasonable doubt that one can even distinguish between signal and noise.

The proposed Proto-Indo-European language (PIE) is the reconstructed common ancestor of the Indo-European languages, spoken by the Proto-Indo-Europeans. From the 1960s, knowledge of Anatolian became certain enough to establish its relationship to PIE. Using the method of internal reconstruction, an earlier stage, called Pre-Proto-Indo-European, has been proposed. PIE was an inflected language, in which the grammatical relationships between words were signaled through inflectional morphemes (usually endings). The roots of PIE are basic morphemes carrying a lexical meaning. By the addition of suffixes, they form stems, and by the addition of endings, these form grammatically inflected words (nouns or verbs). The reconstructed Indo-European verb system is complex and, like the noun, exhibits a system of ablaut.

The diversification of the parent language into the attested branches of daughter languages is historically unattested. The timeline of the evolution of the various daughter languages, on the other hand, is mostly undisputed, quite regardless of the question of Indo-European origins.
Using a mathematical analysis borrowed from evolutionary biology, Don Ringe and Tandy Warnow have proposed an evolutionary tree of Indo-European branches, and David Anthony has proposed a sequence of separations among the branches, which may be continued from 1500 BC onward.

In reconstructing the history of the Indo-European languages and the form of the Proto-Indo-European language, some languages have been of particular importance. These generally include the ancient Indo-European languages that are both well-attested and documented at an early date, although some languages from later periods are important if they are particularly linguistically conservative (most notably, Lithuanian). Early poetry is of special significance because of the rigid poetic meter normally employed, which makes it possible to reconstruct a number of features (e.g. vowel length) that were either unwritten or corrupted in the process of transmission down to the earliest extant written manuscripts. Beyond the most important of these sources, other primary and secondary sources are of lesser value, whether because of poor attestation or because of extensive phonological changes combined with relatively limited attestation.

As the Proto-Indo-European (PIE) language broke up, its sound system diverged as well, changing according to various sound laws evidenced in the daughter languages. PIE is normally reconstructed with a complex system of 15 stop consonants, including an unusual three-way phonation (voicing) distinction between voiceless, voiced and "voiced aspirated" (i.e. breathy voiced) stops, and a three-way distinction among velar consonants ("k"-type sounds) between "palatal" "ḱ ǵ ǵh", "plain velar" "k g gh" and labiovelar "kʷ gʷ gʷh". (The correctness of the terms "palatal" and "plain velar" is disputed; see Proto-Indo-European phonology.) All daughter languages have reduced the number of distinctions among these sounds, often in divergent ways; English, as one of the Germanic languages, underwent several major changes of this kind.

None of the daughter-language families (except possibly Anatolian, particularly Luvian) reflect the plain velar stops differently from the other two series, and there is even a certain amount of dispute whether this series existed at all in PIE. The major distinction between "centum" and "satem" languages corresponds to the outcome of the PIE plain velars. The three-way PIE distinction between voiceless, voiced and voiced aspirated stops is considered extremely unusual from the perspective of linguistic typology—particularly in the existence of voiced aspirated stops without a corresponding series of voiceless aspirated stops. None of the various daughter-language families continue it unchanged, and numerous "solutions" to the apparently unstable PIE situation are attested; a number of other notable changes affected consonants as well. The basic outcomes of PIE consonants in the most important daughter languages for the purposes of reconstruction are catalogued in Indo-European sound laws.

Comparison of the conjugations of the thematic present indicative of the verbal root *"bʰer-", the source of the English verb "to bear", with its reflexes in various early attested IE languages and their modern descendants or relatives shows that all of these languages had an inflectional verb system at the early stage.
While similarities are still visible between the modern descendants and relatives of these ancient languages, the differences have increased over time. Some IE languages have moved from synthetic verb systems to largely periphrastic systems, and some of these verbs have undergone a change in meaning as well.

Today, Indo-European languages are spoken by 3.2 billion native speakers across all inhabited continents, the largest number by far for any recognised language family. Of the 20 languages with the largest numbers of native speakers according to "Ethnologue", 10 are Indo-European: Spanish, English, Hindustani, Portuguese, Bengali, Russian, Punjabi, German, French, and Marathi, accounting for over 1.7 billion native speakers. Additionally, hundreds of millions of persons worldwide study Indo-European languages as secondary or tertiary languages, including in cultures which have completely different language families and historical backgrounds—there are between 600 million and one billion L2 learners of English alone.

The success of the language family, including the large number of speakers and the vast portions of the Earth that they inhabit, is due to several factors. The ancient Indo-European migrations and widespread dissemination of Indo-European culture throughout Eurasia, including that of the Proto-Indo-Europeans themselves, and that of their daughter cultures including the Indo-Aryans, Iranian peoples, Celts, Greeks, Romans, Germanic peoples, and Slavs, led to these peoples' branches of the language family already taking a dominant foothold in virtually all of Eurasia except for swathes of the Near East, North and East Asia, replacing many (but not all) of the previously spoken pre-Indo-European languages of this extensive area. However, Semitic languages remain dominant in much of the Middle East and North Africa, and Caucasian languages in much of the Caucasus region. Similarly, in Europe and the Urals, Uralic languages (such as Hungarian, Finnish, and Estonian) remain, as does Basque, a pre-Indo-European isolate.

Despite being unaware of their common linguistic origin, diverse groups of Indo-European speakers continued to culturally dominate and often replace the indigenous languages of the western two-thirds of Eurasia. By the beginning of the Common Era, Indo-European peoples controlled almost the entirety of this area: the Celts in western and central Europe, the Romans in southern Europe, the Germanic peoples in northern Europe, the Slavs in eastern Europe, the Iranian peoples in most of western and central Asia and parts of eastern Europe, and the Indo-Aryan peoples in the Indian subcontinent, with the Tocharians inhabiting the Indo-European frontier in western China. By the medieval period, only the Semitic, Dravidian, Caucasian, and Uralic languages, and the language isolate Basque remained of the (relatively) indigenous languages of Europe and the western half of Asia.

Despite medieval invasions by Eurasian nomads, a group to which the Proto-Indo-Europeans had once belonged, Indo-European expansion reached another peak in the early modern period with the dramatic increase in the population of the Indian subcontinent and European expansionism throughout the globe during the Age of Discovery, as well as the continued replacement and assimilation of surrounding non-Indo-European languages and peoples due to increased state centralization and nationalism.
These trends compounded throughout the modern period due to the general global population growth and the results of European colonization of the Western Hemisphere and Oceania, leading to an explosion in the number of Indo-European speakers as well as the territories inhabited by them. Due to colonization and the modern dominance of Indo-European languages in the fields of politics, global science, technology, education, finance, and sports, even many modern countries whose populations largely speak non-Indo-European languages have Indo-European languages as official languages, and the majority of the global population speaks at least one Indo-European language. The overwhelming majority of languages used on the Internet are Indo-European, with English continuing to lead the group; English in general has in many respects become the "lingua franca" of global communication.
https://en.wikipedia.org/wiki?curid=14848
Illinois Illinois is a state in the Midwestern and Great Lakes regions of the United States. It has the fifth-largest gross domestic product (GDP), the sixth-largest population, and the 25th-largest land area of all U.S. states. Illinois has been noted as a microcosm of the entire United States. With Chicago in northeastern Illinois, small industrial cities and immense agricultural productivity in the north and center of the state, and natural resources such as coal, timber, and petroleum in the south, Illinois has a diverse economic base and is a major transportation hub. Chicagoland, Chicago's metropolitan area, encompasses about 65% of the state's population. The Port of Chicago connects the state to international ports via two main routes: from the Great Lakes, via the Saint Lawrence Seaway, to the Atlantic Ocean; and from the Great Lakes to the Mississippi River, via the Illinois River, through the Illinois Waterway. The Mississippi River, the Ohio River, and the Wabash River form parts of the boundaries of Illinois. For decades, Chicago's O'Hare International Airport has been ranked as one of the world's busiest airports. Illinois has long had a reputation as a bellwether both in social and cultural terms and, through the 1980s, in politics.

The capital of Illinois is Springfield, which is located in the central part of the state. Although today Illinois's largest population center is in its northeast, the state's European population grew first in the west as the French settled lands near the Mississippi River, when the region was known as Illinois Country and was part of New France. Following the American Revolutionary War, American settlers began arriving from Kentucky in the 1780s via the Ohio River, and the population grew from south to north. In 1818, Illinois achieved statehood. Following increased commercial activity in the Great Lakes after the construction of the Erie Canal, Chicago was incorporated in the 1830s on the banks of the Chicago River, at one of the few natural harbors on the southern section of Lake Michigan. John Deere's invention of the self-scouring steel plow turned Illinois's rich prairie into some of the world's most productive and valuable farmland, attracting immigrant farmers from Germany and Sweden. The Illinois and Michigan Canal (1848) made transportation between the Great Lakes and the Mississippi River valley faster and cheaper, and new railroads carried immigrants to new homes in the country's west and shipped commodity crops to the nation's east. The state became a transportation hub for the nation.

By 1900, the growth of industrial jobs in the northern cities and coal mining in the central and southern areas attracted immigrants from Eastern and Southern Europe. Illinois was an important manufacturing center during both world wars. The Great Migration from the South established a large community of African Americans in the state, including in Chicago, who founded the city's famous jazz and blues cultures. Chicago, the center of the Chicago Metropolitan Area, is now recognized as a global city. The most populous metropolitan areas outside the Chicago area include Metro East (part of Greater St. Louis), Peoria, and Rockford. Three U.S. presidents have been elected while living in Illinois: Abraham Lincoln, Ulysses S. Grant, and Barack Obama. Additionally, Ronald Reagan, whose political career was based in California, was born and raised in the state.
Today, Illinois honors Lincoln with its official state slogan "Land of Lincoln", which has been displayed on its license plates since 1954. The state is the site of the Abraham Lincoln Presidential Library and Museum in Springfield and the future home of the Barack Obama Presidential Center in Chicago.

"Illinois" is the modern spelling for the early French Catholic missionaries and explorers' name for the Illinois Native Americans, a name that was spelled in many different ways in the early records. American scholars previously thought the name "Illinois" meant "man" or "men" in the Miami-Illinois language, with the original "iliniwek" transformed via French into Illinois. This etymology is not supported by the Illinois language, as the word for "man" is "ireniwa", and the plural of "man" is "ireniwaki". The name "Illiniwek" has also been said to mean "tribe of superior men", which is a false etymology. The name "Illinois" instead derives from the Miami-Illinois verb "irenwe·wa"—"he speaks the regular way". This was taken into the Ojibwe language, perhaps in the Ottawa dialect, and modified into "ilinwe·" (pluralized as "ilinwe·k"). The French borrowed these forms, changing the /we/ ending to spell it as "-ois", a transliteration for its pronunciation in the French of that time. The current spelling form, "Illinois", began to appear in the early 1670s, when French colonists had settled in the western area. The Illinois' name for themselves, as attested in all three of the French missionary-period dictionaries of Illinois, was "Inoka", of unknown meaning and unrelated to the other terms.

American Indians of successive cultures lived along the waterways of the Illinois area for thousands of years before the arrival of Europeans. The Koster Site has been excavated and demonstrates 7,000 years of continuous habitation. Cahokia, the largest regional chiefdom and urban center of the Pre-Columbian Mississippian culture, was located near present-day Collinsville, Illinois. Its people built an urban complex of more than 100 platform and burial mounds, a plaza larger than 35 football fields, and a woodhenge of sacred cedar, all in a planned design expressing the culture's cosmology. Monks Mound, the center of the site, is the largest Pre-Columbian structure north of the Valley of Mexico, and was topped by a large structure whose peak rose well above the level of the surrounding plaza. The finely crafted ornaments and tools recovered by archaeologists at Cahokia include elaborate ceramics, finely sculptured stonework, carefully embossed and engraved copper and mica sheets, and one funeral blanket for an important chief fashioned from 20,000 shell beads. These artifacts indicate that Cahokia was truly an urban center, with clustered housing, markets, and specialists in toolmaking, hide dressing, potting, jewelry making, shell engraving, weaving and salt making. The civilization vanished in the 15th century for unknown reasons, but historians and archeologists have speculated that the people depleted the area of resources. Many indigenous tribes engaged in constant warfare. According to Suzanne Austin Alchon, "At one site in the central Illinois River valley, one third of all adults died as a result of violent injuries." The next major power in the region was the Illinois Confederation or Illini, a political alliance.
As the Illini declined during the Beaver Wars era, members of the Algonquian-speaking Potawatomi, Miami, Sauk, and other tribes, including the Fox (Mesquakie), Ioway, Kickapoo, Mascouten, Piankashaw, Shawnee, Wea, and Winnebago (Ho-Chunk), came into the area from the east and north around the Great Lakes.

French explorers Jacques Marquette and Louis Jolliet explored the Illinois River in 1673. Marquette soon after founded a mission at the Grand Village of the Illinois in Illinois Country. In 1680, French explorers under René-Robert Cavelier, Sieur de La Salle and Henri de Tonti constructed a fort at the site of present-day Peoria, and in 1682, a fort atop Starved Rock in today's Starved Rock State Park. French Canadians ("Canadiens") came south to settle, particularly along the Mississippi River, and Illinois was part first of New France, and then of La Louisiane until 1763, when it passed to the British with their defeat of France in the Seven Years' War. The small French settlements continued, although many French migrated west to Ste. Genevieve and St. Louis, Missouri, to evade British rule. A few British soldiers were posted in Illinois, but few British or American settlers moved there, as the Crown made it part of the territory reserved for Indians west of the Appalachians, and then part of the British Province of Quebec.

In 1778, George Rogers Clark claimed Illinois County for Virginia. In a compromise, Virginia (and other states that made various claims) ceded the area to the new United States in the 1780s, and it became part of the Northwest Territory, administered by the federal government and later organized as states. The Illinois-Wabash Company was an early claimant to much of Illinois. The Illinois Territory was created on February 3, 1809, with its capital at Kaskaskia, an early French settlement.

During the discussions leading up to Illinois's admission to the Union, the proposed northern boundary of the state was moved twice. The original provisions of the Northwest Ordinance had specified a boundary that would have been tangent to the southern tip of Lake Michigan. Such a boundary would have left Illinois with no shoreline on Lake Michigan at all. However, as Indiana had successfully been granted a northern extension of its boundary to provide it with a usable lakefront, the original bill for Illinois statehood, submitted to Congress on January 23, 1818, stipulated a northern border at the same latitude as Indiana's, which is defined as 10 miles north of the southernmost extremity of Lake Michigan. The Illinois delegate, Nathaniel Pope, wanted more, however, and lobbied to have the boundary moved further north. The final bill passed by Congress included an amendment to shift the border to 42° 30' north, well to the north of the Indiana northern border. This shift added substantial territory to the state, including the lead mining region near Galena. More importantly, it added nearly 50 miles of Lake Michigan shoreline and the Chicago River. Pope and others envisioned a canal that would connect the Chicago and Illinois rivers and thus connect the Great Lakes to the Mississippi.

In 1818, Illinois became the 21st U.S. state. The capital remained at Kaskaskia, headquartered in a small building rented by the state. In 1819, Vandalia became the capital, and over the next 18 years, three separate buildings were built to serve successively as the capitol building.
In 1837, the state legislators representing Sangamon County, under the leadership of state representative Abraham Lincoln, succeeded in having the capital moved to Springfield, where a fifth capitol building was constructed. A sixth capitol building was erected in 1867, and it continues to serve as the Illinois capitol today.

Though it was ostensibly a "free state", there was nonetheless slavery in Illinois. The ethnic French had owned black slaves since the 1720s, and American settlers had already brought slaves into the area from Kentucky. Slavery was nominally banned by the Northwest Ordinance, but that was not enforced for those already holding slaves. When Illinois became a sovereign state in 1818, the Ordinance no longer applied, and about 900 slaves were held in the state. As the southern part of the state, later known as "Egypt" or "Little Egypt", was largely settled by migrants from the South, the section was hostile to free blacks. Settlers were allowed to bring slaves with them for labor, but, in 1822, state residents voted against making slavery legal. Still, most residents opposed allowing free blacks as permanent residents. Some settlers brought in slaves seasonally or as house servants. The Illinois Constitution of 1848 was written with a provision for exclusionary laws to be passed. In 1853, John A. Logan helped pass a law to prohibit all African Americans, including freedmen, from settling in the state.

The winter of 1830–1831 is called the "Winter of the Deep Snow"; a sudden, deep snowfall blanketed the state, making travel impossible for the rest of the winter, and many travelers perished. Several severe winters followed, including the "Winter of the Sudden Freeze". On December 20, 1836, a fast-moving cold front passed through, freezing puddles in minutes and killing many travelers who could not reach shelter. The adverse weather resulted in crop failures in the northern part of the state. The southern part of the state shipped food north, and this may have contributed to its name, "Little Egypt", after the Biblical story of Joseph in Egypt supplying grain to his brothers.

In 1832, the Black Hawk War was fought in Illinois and present-day Wisconsin between the United States and the Sauk, Fox (Meskwaki), and Kickapoo Indian tribes. It marked the end of Indian resistance to white settlement in the Chicago region. The Indians had been forced to leave their homes and move to Iowa in 1831; when they attempted to return, they were attacked and eventually defeated by U.S. militia. The survivors were forced back to Iowa.

By 1839, the Latter Day Saints had founded a utopian city called Nauvoo. Located in Hancock County along the Mississippi River, Nauvoo flourished, and soon rivaled Chicago for the position of the state's largest city. But in 1844, the Latter Day Saint movement's founder Joseph Smith was killed in the Carthage Jail, about 30 miles away from Nauvoo. Following a succession crisis, Brigham Young led most Latter Day Saints out of Illinois in a mass exodus to present-day Utah; after close to six years of rapid development, Nauvoo rapidly declined.

After it was established in 1833, Chicago gained prominence as a Great Lakes port, then as an Illinois and Michigan Canal port after 1848, and as a rail hub soon afterward. By 1857, Chicago was Illinois's largest city. With the tremendous growth of mines and factories in the state in the 19th century, Illinois was fertile ground for the formation of labor unions in the United States.
In 1847, after lobbying by Dorothea L. Dix, Illinois became one of the first states to establish a system of state-supported treatment of mental illness and disabilities, replacing local almshouses. Dix came into this effort after having met J. O. King, a Jacksonville, Illinois businessman, who invited her to Illinois, where he had been working to build an asylum for the insane. With the lobbying expertise of Dix, plans for the Jacksonville State Hospital (now known as the Jacksonville Developmental Center) were signed into law on March 1, 1847.

During the American Civil War, Illinois ranked fourth in men who served (more than 250,000) in the Union Army, a figure surpassed by only New York, Pennsylvania, and Ohio. Beginning with President Abraham Lincoln's first call for troops and continuing throughout the war, Illinois mustered 150 infantry regiments, which were numbered from the 7th to the 156th regiments. Seventeen cavalry regiments were also gathered, as well as two light artillery regiments. The town of Cairo, at the southern tip of the state at the confluence of the Mississippi and Ohio Rivers, served as a strategically important supply base and training center for the Union army. For several months, both General Grant and Admiral Foote had headquarters in Cairo.

During the Civil War, and even more so afterwards, Chicago's population skyrocketed, which increased its prominence. The Pullman Strike and Haymarket Riot, in particular, greatly influenced the development of the American labor movement. From Sunday, October 8, 1871, until Tuesday, October 10, 1871, the Great Chicago Fire burned in downtown Chicago, destroying a large swath of the city.

At the turn of the 20th century, Illinois had a population of nearly 5 million. Many people from other parts of the country were attracted to the state by employment caused by the expanding industrial base. Whites were 98% of the state's population. Bolstered by continued immigration from southern and eastern Europe, and by the African-American Great Migration from the South, Illinois grew and emerged as one of the most important states in the union. By the end of the century, the population had reached 12.4 million.

The Century of Progress World's Fair was held in Chicago in 1933. Oil strikes in Marion County and Crawford County led to a boom in 1937, and by 1939, Illinois ranked fourth in U.S. oil production. Illinois manufactured 6.1 percent of total United States military armaments produced during World War II, ranking seventh among the 48 states. Chicago became an ocean port with the opening of the Saint Lawrence Seaway in 1959. The seaway and the Illinois Waterway connected Chicago to both the Mississippi River and the Atlantic Ocean. In 1955, Ray Kroc opened the first McDonald's franchise in Des Plaines (which still exists as a museum, with a working McDonald's across the street).

Illinois had a prominent role in the emergence of the nuclear age. In 1942, as part of the Manhattan Project, the University of Chicago conducted the first sustained nuclear chain reaction. In 1957, Argonne National Laboratory, near Chicago, activated the first experimental nuclear power generating system in the United States. By 1960, the first privately financed nuclear plant in the United States, Dresden 1, was dedicated near Morris. In 1967, Fermilab, a national nuclear research facility near Batavia, opened a particle accelerator, which was the world's largest for over 40 years.
With eleven plants currently operating, Illinois leads all states in the amount of electricity generated from nuclear power.

In 1961, Illinois became the first state in the nation to adopt the recommendation of the American Law Institute and pass a comprehensive criminal code revision that repealed the law against sodomy. The code also abrogated common-law crimes and established an age of consent of 18. The state's fourth constitution was adopted in 1970, replacing the 1870 document. The first Farm Aid concert was held in Champaign in 1985 to benefit American farmers. The worst upper Mississippi River flood of the century, the Great Flood of 1993, inundated many towns and thousands of acres of farmland. On August 28, 2017, Illinois Governor Bruce Rauner signed a bill into law that prohibited state and local police from arresting anyone solely due to their immigration status or due to federal detainers. Some fellow Republicans criticized Rauner for his action, claiming the bill made Illinois a sanctuary state.

Illinois is located in the Midwest region of the United States and is one of the eight states (together with the Canadian province of Ontario) in the Great Lakes region of North America. Illinois's eastern border with Indiana consists of a north–south line at 87° 31′ 30″ west longitude in Lake Michigan at the north, to the Wabash River in the south above Post Vincennes. The Wabash River continues as the eastern/southeastern border with Indiana until the Wabash enters the Ohio River. This marks the beginning of Illinois's southern border with Kentucky, which runs along the northern shoreline of the Ohio River. Most of the western border with Missouri and Iowa is the Mississippi River; Kaskaskia is an exclave of Illinois, lying west of the Mississippi and reachable only from Missouri. The state's northern border with Wisconsin is fixed at 42° 30′ north latitude. The northeastern border of Illinois lies in Lake Michigan, within which Illinois shares a water boundary with the state of Michigan, as well as Wisconsin and Indiana.

Though Illinois lies entirely in the Interior Plains, it does have some minor variation in its elevation. In extreme northwestern Illinois, the Driftless Area, a region of unglaciated and therefore higher and more rugged topography, occupies a small part of the state. Southern Illinois includes the hilly areas around the Shawnee National Forest. Charles Mound, located in the Driftless region, has the state's highest natural elevation above sea level. Other highlands include the Shawnee Hills in the south, and there is varying topography along its rivers; the Illinois River bisects the state northeast to southwest. The floodplain on the Mississippi River from Alton to the Kaskaskia River is known as the American Bottom.

Illinois has three major geographical divisions. Northern Illinois is dominated by the Chicago metropolitan area, or Chicagoland, which comprises the city of Chicago, its suburbs, and the adjoining exurban area into which the metropolis is expanding. As defined by the federal government, the Chicago metro area includes several counties in Illinois, Indiana, and Wisconsin, and has a population of over 9.8 million. Chicago itself is a cosmopolitan city, densely populated, industrialized, the transportation hub of the nation, and settled by a wide variety of ethnic groups. The city of Rockford, Illinois's third-largest city and the center of the state's fourth-largest metropolitan area, sits along Interstates 39 and 90, northwest of Chicago.
The Quad Cities region, located along the Mississippi River in northern Illinois, had a population of 381,342 in 2011.

The midsection of Illinois is the second major division, called Central Illinois. It is an area of mainly prairie, known as the Heart of Illinois, and is characterized by small towns and medium–small cities. The western section (west of the Illinois River) was originally part of the Military Tract of 1812 and forms the conspicuous western bulge of the state. Agriculture, particularly corn and soybeans, as well as educational institutions and manufacturing centers, figure prominently in Central Illinois. Cities include Peoria; Springfield, the state capital; Quincy; Decatur; Bloomington-Normal; and Champaign-Urbana.

The third division is Southern Illinois, comprising the area south of U.S. Route 50, including Little Egypt, near the juncture of the Mississippi River and Ohio River. Southern Illinois is the site of the ancient city of Cahokia, as well as the site of the first state capital at Kaskaskia, which today is separated from the rest of the state by the Mississippi River. This region has a somewhat warmer winter climate, a different variety of crops (including some cotton farming in the past), more rugged topography (due to the area remaining unglaciated during the Illinoian Stage, unlike most of the rest of the state), as well as small-scale oil deposits and coal mining. The Illinois suburbs of St. Louis, such as East St. Louis, are located in this region, and collectively, they are known as the Metro-East. The other somewhat significant concentration of population in Southern Illinois is the Carbondale-Marion-Herrin, Illinois Combined Statistical Area centered on Carbondale and Marion, a two-county area that is home to 123,272 residents. A portion of southeastern Illinois is part of the extended Evansville, Indiana, Metro Area, locally referred to as the Tri-State with Indiana and Kentucky. Seven Illinois counties are in the area.

In addition to these three, largely latitudinally defined divisions, all of the region outside the Chicago metropolitan area is often called "downstate" Illinois. This term is flexible, but is generally meant to mean everything outside the influence of the Chicago area. Thus, some cities in "Northern" Illinois, such as DeKalb, which is west of Chicago, and Rockford—which is actually north of Chicago—are sometimes incorrectly considered to be 'downstate'.

Illinois has a climate that varies widely throughout the year. Because of its nearly 400-mile span between its northernmost and southernmost extremes, as well as its mid-continental situation, most of Illinois has a humid continental climate (Köppen climate classification "Dfa"), with hot, humid summers and cold winters. The southern part of the state, from about Carbondale southward, has a humid subtropical climate (Köppen "Cfa"), with more moderate winters. Average yearly precipitation for Illinois is greatest at the southern tip and decreases toward the northern portion of the state, while normal annual snowfall is heaviest in the Chicago area and lightest in the south. The all-time high temperature was 117 °F (47 °C), recorded on July 14, 1954, at East St. Louis, and the all-time low temperature was −38 °F (−39 °C), recorded on January 31, 2019, during the January 2019 North American cold wave at a weather station near Mount Carroll, and confirmed on March 5, 2019. This broke the previous record, recorded on January 5, 1999, near Congerville.
Prior to the Mount Carroll record, a lower reading had been recorded on January 15, 2009, at Rochelle, but at a weather station not subjected to the same quality control as official records. Illinois averages approximately 51 days of thunderstorm activity a year, which ranks somewhat above average in the number of thunderstorm days for the United States. Illinois is vulnerable to tornadoes, with an average of 35 occurring annually, giving much of the state a high tornado frequency per unit area. While tornadoes are no more powerful in Illinois than in other states, some of Tornado Alley's deadliest tornadoes on record have occurred in the state. The Tri-State Tornado of 1925 killed 695 people in three states; 613 of the victims died in Illinois.

The United States Census Bureau estimates that the population of Illinois was 12,671,821 in 2019, moving it from the fifth-largest state to the sixth-largest state (losing out to Pennsylvania). Illinois's population declined by 69,259 people from July 2018 to July 2019, the worst decline of any state in the U.S. in raw terms. This includes a natural increase since the last census of 462,146 people (i.e., 1,438,187 births minus 976,041 deaths) and a decrease due to net migration of 622,928 people. Immigration resulted in a net increase of 242,945 people, and migration from within the U.S. resulted in a net decrease of 865,873 people. Illinois is the most populous state in the Midwest region. Chicago, the third-most populous city in the United States, is the center of the Chicago metropolitan area, or Chicagoland, as this area is nicknamed. Although Chicagoland comprises only 9% of the land area of the state, it contains 65% of the state's residents.

According to the 2010 Census, 15.8% of the total population was of Hispanic or Latino origin (they may be of any race). According to 2018 U.S. Census Bureau estimates, Illinois's population was 71.7% White (60.9% Non-Hispanic White), 5.6% Asian, 5.6% Some Other Race, 14.1% Black or African American, 0.3% Native American and Alaskan Native, 0.1% Pacific Islander, and 2.7% from two or more races. The White population continues to remain the largest racial category in Illinois, as Hispanics primarily identify as White (62.2%), with others identifying as Some Other Race (31.2%), Multiracial (3.9%), Black (1.5%), American Indian and Alaskan Native (0.8%), Asian (0.3%), and Hawaiian and Pacific Islander (0.1%). By ethnicity, 17.3% of the total population is Hispanic-Latino (of any race) and 82.7% is Non-Hispanic (of any race). If treated as a separate category, Hispanics are the largest minority group in Illinois. The state's most populous ethnic group, non-Hispanic whites, has declined from 83.5% in 1970 to 60.9% in 2018. An estimated 49.4% of Illinois's population younger than age 1 were minorities (note: children born to white Hispanics or to a sole full or partial minority parent are counted as minorities).

According to 2007 estimates from the U.S. Census Bureau, there were 1,768,518 foreign-born inhabitants of the state, or 13.8% of the population, with 48.4% from Latin America, 24.6% from Asia, 22.8% from Europe, 2.9% from Africa, 1.2% from Canada, and 0.2% from Oceania. Of the foreign-born population, 43.7% were naturalized U.S. citizens, and 56.3% were not U.S. citizens. In 2007, 6.9% of Illinois's population was reported as under age 5, 24.9% as under age 18, and 12.1% as age 65 and over. Females made up approximately 50.7% of the population.
According to the 2007 estimates, 21.1% of the population had German ancestry, 13.3% Irish, 8% British, 7.9% Polish, 6.4% Italian, 4.6% listed themselves as American, 2.4% Swedish, 2.2% French (other than Basque), 1.6% Dutch, and 1.4% Norwegian. Illinois also has large numbers of African Americans and Latinos (mostly Mexicans and Puerto Ricans).

Chicago, along the shores of Lake Michigan, is the nation's third-largest city. In 2000, 23.3% of Illinois's population lived in the city of Chicago, 43.3% in Cook County, and 65.6% in the counties of the Chicago metropolitan area: Will, DuPage, Kane, Lake, and McHenry counties, as well as Cook County. The remaining population lives in the smaller cities and rural areas that dot the state's plains. As of 2000, the state's center of population was in Grundy County, northeast of the village of Mazon.

Chicago is the largest city in the state and the third-most populous city in the United States, with a 2010 population of 2,695,598. The U.S. Census Bureau currently lists seven other cities with populations over 100,000 within Illinois, based upon its official 2010 populations. Aurora, a Chicago satellite town, eclipsed Rockford for the title of second-most populous city in Illinois; its 2010 population was 197,899. Rockford, at 152,871, is the third-largest city in the state and the largest not located within the Chicago suburbs. Joliet, located in metropolitan Chicago, is the fourth-largest city in the state, with a population of 147,433. Naperville, a suburb of Chicago, is fifth with 141,853; Naperville and Aurora share a boundary along Illinois Route 59. Springfield, the state's capital, comes in as sixth-most populous with 117,352 residents. Peoria, which decades ago was the second-most populous city in the state, is seventh with 115,007. The eighth-largest and final city in the 100,000 club is Elgin, a northwest suburb of Chicago, with a 2010 population of 108,188.

The most populous city in the state south of Springfield is Belleville, with 44,478 people at the 2010 census. It is located in the Illinois portion of Greater St. Louis (often called the Metro-East area), which has a rapidly growing population of over 700,000. Other major urban areas include the Champaign-Urbana Metropolitan Area, with a combined population of almost 230,000 people; the Illinois portion of the Quad Cities area, with about 215,000 people; and the Bloomington-Normal area, with a combined population of over 165,000.

The official language of Illinois is English, although between 1923 and 1969 state law gave official status to "the American language". Nearly 80% of people in Illinois speak English natively, and most of the rest speak it fluently as a second language. A number of dialects of American English are spoken, ranging from Inland Northern American English and African-American English around Chicago, to Midland American English in Central Illinois, to Southern American English in the far south. Over 20% of Illinoisans speak a language other than English at home, of which Spanish is by far the most widespread, spoken by more than 12% of the total population. A sizeable number of Polish speakers live in the Chicago metropolitan area.
Illinois Country French has mostly gone extinct in Illinois, although it is still celebrated in the French Colonial Historic District.

Roman Catholics constitute the single largest religious denomination in Illinois; they are heavily concentrated in and around Chicago and account for nearly 30% of the state's population. However, taken together as a group, the various Protestant denominations comprise a greater percentage of the state's population than do Catholics. In 2010, Catholics in Illinois numbered 3,648,907. The largest Protestant denominations were the United Methodist Church, with 314,461 members, and the Southern Baptist Convention, with 283,519. Illinois has one of the largest concentrations of Missouri Synod Lutherans in the United States.

Illinois played an important role in the early Latter Day Saint movement, with Nauvoo, Illinois, becoming a gathering place for Mormons in the early 1840s. Nauvoo was the location of the succession crisis, which led to the separation of the Mormon movement into several Latter Day Saint sects. The Church of Jesus Christ of Latter-day Saints, the largest of the sects to emerge from the Mormon schism, has more than 55,000 adherents in Illinois today.

A significant number of adherents of other Abrahamic faiths can be found in Illinois. Largely concentrated in the Chicago metropolitan area, followers of the Muslim, Bahá'í, and Jewish religions all call the state home. Muslims constituted the largest non-Christian group, with 359,264 adherents; Illinois has the largest concentration of Muslims by state in the country, with 2,800 Muslims per 100,000 citizens. The largest and oldest surviving Bahá'í House of Worship in the world is located in Wilmette, Illinois, and is the center of that religion's worship in North America. The Chicago area has a very large Jewish community, particularly in the suburbs of Skokie, Buffalo Grove, Highland Park, and surrounding suburbs; former Chicago Mayor Rahm Emanuel was the Windy City's first Jewish mayor. Chicago is also home to very large populations of Hindus, Sikhs, Jains, and Buddhists.

The gross state product for Illinois was estimated to be approximately $900 billion in 2019. The state's 2019 per capita gross state product was estimated to be around $72,000. As of February 2019, the unemployment rate in Illinois was 4.2%. Illinois's minimum wage will rise to $15 per hour by 2025, making it one of the highest in the nation.

Illinois's major agricultural outputs are corn, soybeans, hogs, cattle, dairy products, and wheat. In most years, Illinois is either the first- or second-highest producing state for soybeans, with a harvest of 427.7 million bushels (11.64 million metric tons) in 2008, after Iowa's production of 444.82 million bushels (12.11 million metric tons). Illinois ranks second in U.S. corn production, with more than 1.5 billion bushels produced annually. With a production capacity of 1.5 billion gallons per year, Illinois is a top producer of ethanol, ranking third in the United States in 2011. Illinois is a leader in food manufacturing and meat processing. Although Chicago may no longer be "Hog Butcher for the World", the Chicago area remains a global center for food manufacture and meat processing, with many plants, processing houses, and distribution facilities concentrated in the area of the former Union Stock Yards. Illinois also produces wine, and the state is home to two American viticultural areas.
In the area of the Meeting of the Great Rivers Scenic Byway, peaches and apples are grown. The German immigrants from agricultural backgrounds who settled in Illinois in the mid- to late 19th century are in part responsible for the profusion of fruit orchards in that area of the state. Illinois's universities are actively researching alternative agricultural products as alternative crops.

Illinois is one of the nation's manufacturing leaders, boasting annual value added by manufacturing of over $107 billion in 2006. Illinois is ranked as the fourth-most productive manufacturing state in the country, behind California, Texas, and Ohio. About three-quarters of the state's manufacturers are located in the Northeastern Opportunity Return Region, with 38 percent of Illinois's approximately 18,900 manufacturing plants located in Cook County. As of 2006, the leading manufacturing industries in Illinois, based upon value added, were chemical manufacturing ($18.3 billion), machinery manufacturing ($13.4 billion), food manufacturing ($12.9 billion), fabricated metal products ($11.5 billion), transportation equipment ($7.4 billion), plastics and rubber products ($7.0 billion), and computer and electronic products ($6.1 billion).

By the early 2000s, Illinois's economy had moved toward a dependence on high-value-added services, such as financial trading, higher education, law, logistics, and medicine. In some cases, these services clustered around institutions that hearkened back to Illinois's earlier economies. For example, the Chicago Mercantile Exchange, a trading exchange for global derivatives, had begun its life as an agricultural futures market. Other important non-manufacturing industries include publishing, tourism, and energy production and distribution. Venture capitalists invested a total of approximately $62 billion in the U.S. economy in 2016, of which Illinois-based companies received approximately $1.1 billion. Similarly, in FY 2016 the federal government spent $461 billion on contracts in the U.S., of which Illinois-based companies received approximately $8.7 billion.

Illinois is a net importer of fuels for energy, despite large coal resources and some minor oil production. Illinois exports electricity, ranking fifth among states in electricity production and seventh in electricity consumption. The coal industry of Illinois has its origins in the middle 19th century, when entrepreneurs such as Jacob Loose discovered coal in locations such as Sangamon County. Jacob Bunn contributed to the development of the Illinois coal industry and was a founder and owner of the Western Coal & Mining Company of Illinois. About 68% of Illinois has coal-bearing strata of the Pennsylvanian geologic period. According to the Illinois State Geological Survey, 211 billion tons of bituminous coal are estimated to lie under the surface, with a total heating value greater than that of the estimated oil deposits in the Arabian Peninsula. However, this coal has a high sulfur content, which causes acid rain unless special equipment is used to reduce sulfur dioxide emissions. Many Illinois power plants are not equipped to burn high-sulfur coal. In 1999, Illinois produced 40.4 million tons of coal, but only 17 million tons (42%) of Illinois coal was consumed in Illinois. Most of the coal produced in Illinois is exported to other states and countries.
In 2008, Illinois exported three million tons of coal and was projected to export nine million tons in 2011, as demand for energy grows in places such as China, India, and elsewhere in Asia and Europe. Illinois ranks third in the nation in recoverable coal reserves at producing mines. Most of the coal produced in Illinois is exported to other states, while much of the coal burned for power in Illinois (21 million tons in 1998) is mined in the Powder River Basin of Wyoming. Mattoon was chosen as the site for the Department of Energy's FutureGen project, a 275-megawatt experimental zero-emission coal-burning power plant to which the DOE awarded a second round of funding; in 2010, after a number of setbacks, the city of Mattoon backed out of the project.

Illinois is a leading refiner of petroleum in the American Midwest, with a combined crude oil distillation capacity of nearly 900,000 barrels per day. However, Illinois has very limited proved crude oil reserves, accounting for less than 1% of the U.S. total. Residential heating is 81% natural gas, compared to less than 1% heating oil. Illinois is ranked 14th in oil production among states, with a daily output of approximately 28,000 barrels in 2005.

Nuclear power arguably began in Illinois with Chicago Pile-1, the world's first artificial self-sustaining nuclear chain reaction, achieved in the world's first nuclear reactor, built on the University of Chicago campus. There are six operating nuclear power plants in Illinois: Braidwood, Byron, Clinton, Dresden, LaSalle, and Quad Cities. With the exception of the single-unit Clinton plant, each of these facilities has two reactors. Three reactors have been permanently shut down and are in various stages of decommissioning: Dresden-1, Zion-1, and Zion-2. Illinois ranked first in the nation in 2010 in both nuclear capacity and nuclear generation, and generation from its nuclear power plants accounted for 12 percent of the nation's total. In 2007, 48% of Illinois's electricity was generated using nuclear power. The Morris Operation is the only de facto high-level radioactive waste storage site in the United States.

Illinois has seen growing interest in the use of wind power for electrical generation. Most of Illinois was rated in 2009 as "marginal or fair" for wind energy production by the U.S. Department of Energy, with some western sections rated "good" and parts of the south rated "poor". These ratings were for wind turbines with 50-meter (164 ft) hub heights; newer wind turbines are taller, enabling them to reach stronger winds farther from the ground, so more areas of Illinois have become prospective wind farm sites. As of September 2009, Illinois had 1,116.06 MW of installed wind power nameplate capacity, with another 741.9 MW under construction. Illinois ranked ninth among U.S. states in installed wind power capacity and sixteenth in potential capacity. Large wind farms in Illinois include Twin Groves, Rail Splitter, EcoGrove, and Mendota Hills. As of 2007, wind energy represented only 1.7% of Illinois's energy production, and it was estimated that wind power could provide 5–10% of the state's energy needs. The Illinois General Assembly also mandated in 2007 that by 2025, 25% of all electricity generated in Illinois is to come from renewable resources.

Illinois is ranked second in corn production among U.S. states, and Illinois corn is used to produce 40% of the ethanol consumed in the United States. The Archer Daniels Midland corporation in Decatur, Illinois, is the world's leading producer of ethanol from corn.
The National Corn-to-Ethanol Research Center (NCERC), the world's only facility dedicated to researching the ways and means of converting corn (maize) to ethanol, is located on the campus of Southern Illinois University Edwardsville. The University of Illinois at Urbana–Champaign is one of the partners in the Energy Biosciences Institute (EBI), a $500 million biofuels research project funded by petroleum giant BP.

Tax is collected by the Illinois Department of Revenue. State income tax is calculated by multiplying net income by a flat rate. In 1990, that rate was set at 3%, but in 2010 the General Assembly voted for a temporary increase in the rate to 5%; the new rate went into effect on January 1, 2011. The personal income tax rate partially sunset on January 1, 2015, to 3.75%, while the corporate income tax fell to 5.25%. Illinois failed to pass a budget from 2015 to 2017; after the 736-day budget impasse, a budget was passed when lawmakers overrode Governor Bruce Rauner's veto. This budget raised the personal income tax rate to 4.95% and the corporate rate to 7%. There are two rates for state sales tax: 6.25% for general merchandise and 1% for qualifying food, drugs, and medical appliances. The property tax is a major source of tax revenue for local government taxing districts; it is a local, not state, tax, imposed by local government taxing districts, which include counties, townships, municipalities, school districts, and special taxation districts. The property tax in Illinois is imposed only on real property.

On May 1, 2019, the Illinois Senate voted to approve a constitutional amendment to change from a flat tax rate to a graduated rate; the measure then passed the House in a 73–44 vote, having needed 71 votes to pass, and was approved by Governor J.B. Pritzker on May 27, 2019. It was scheduled for a vote on the 2020 general election ballot, where it would require 60 percent voter approval; taxpayers making over $250,000 would be affected. The package also includes $100 million for property tax relief.

As of 2017, Chicago had the highest state and local sales tax rate among U.S. cities with populations above 200,000, at 10.25%. Illinois has the second-highest real estate tax rate in the nation, at 2.31%, behind only New Jersey at 2.44%. Toll roads are a de facto user tax on the citizens of and visitors to the state of Illinois; Illinois ranks seventh in toll road mileage among the 11 states with the most miles of toll roads, at 282.1 miles. Chicago ranks fourth among the most expensive toll roads in America by the mile, with the Chicago Skyway charging 51.2 cents per mile. Illinois also has the 11th-highest gasoline tax by state, at 37.5 cents per gallon.

Illinois has numerous museums; the greatest concentration is in Chicago, where several museums rank among the best in the world. These include the John G. Shedd Aquarium, the Field Museum of Natural History, the Art Institute of Chicago, the Adler Planetarium, and the Museum of Science and Industry. The modern Abraham Lincoln Presidential Library and Museum in Springfield is the largest and most attended presidential library in the country. The Illinois State Museum boasts a collection of 13.5 million objects that tell the story of Illinois life, land, people, and art; the ISM is among only 5% of the nation's museums that are accredited by the American Alliance of Museums. Other historical museums in the state include the Polish Museum of America in Chicago; Magnolia Manor in Cairo; the Easley Pioneer Museum in Ipava; the Elihu Benjamin Washburne and Ulysses S.
Grant Homes, both in Galena; and the Chanute Air Museum, located on the former Chanute Air Force Base in Rantoul.

The Chicago metropolitan area also hosts two major zoos. The very large Brookfield Zoo, located about ten miles west of the city center in suburban Brookfield, contains more than 2,300 animals and covers 216 acres (87 ha). The Lincoln Park Zoo is located in Lincoln Park on Chicago's North Side, a few miles north of the Loop; the zoo covers over 35 acres (14 ha) within the park.

Illinois is a leader in music education, having hosted the Midwest Clinic International Band and Orchestra Conference since 1946, as well as being home to the Illinois Music Educators Association (IMEA), one of the largest professional music educators' organizations in the country. Each summer since 2004, Southern Illinois University Carbondale has played host to the Southern Illinois Music Festival, which presents dozens of performances throughout the region; past featured artists include the Eroica Trio and violinist David Kim.

Chicago, in the northeast corner of the state, is a major center for music in the midwestern United States, where distinctive forms of blues (greatly responsible for the later development of rock and roll) and house music, a genre of electronic dance music, were developed. The Great Migration of poor black workers from the South into the industrial cities brought traditional jazz and blues music to the city, resulting in Chicago blues and "Chicago-style" Dixieland jazz. Notable blues artists included Muddy Waters, Junior Wells, Howlin' Wolf, and both Sonny Boy Williamsons; jazz greats included Nat King Cole, Gene Ammons, Benny Goodman, and Bud Freeman. Chicago is also well known for its soul music. In the early 1930s, gospel music began to gain popularity in Chicago due to Thomas A. Dorsey's contributions at Pilgrim Baptist Church. In the 1980s and 1990s, heavy rock, punk, and hip hop also became popular in Chicago. Orchestras in Chicago include the Chicago Symphony Orchestra, the Lyric Opera of Chicago, and the Chicago Sinfonietta.

John Hughes, who moved from Grosse Pointe to Northbrook, based many of his films in Chicago and its suburbs. Ferris Bueller's Day Off, Home Alone, The Breakfast Club, and several of his other films take place in the fictional Shermer, Illinois (the original name of Northbrook was Shermerville, and Hughes's high school, Glenbrook North High School, is on Shermer Road). Locations used in his films include Glenbrook North, the former Maine North High School, the Ben Rose House in Highland Park, and the famous Home Alone house in Winnetka, Illinois.

As Chicago is one of the United States' major metropolises, all major sports leagues have teams headquartered there, and many minor league teams also call Illinois their home. The state features 13 athletic programs that compete in NCAA Division I, the highest level of U.S. college sports. The two most prominent are the Illinois Fighting Illini and the Northwestern Wildcats, both members of the Big Ten Conference and the only ones competing in one of the so-called "Power Five" conferences. The Fighting Illini football team has won five national championships and three Rose Bowl Games, while the men's basketball team has won 17 conference championships and played in five Final Fours. The Wildcats, meanwhile, have won eight football conference championships and one Rose Bowl Game.
The Northern Illinois Huskies, from DeKalb, compete in the Mid-American Conference, in which they have won four conference championships; they also earned a bid to the Orange Bowl and produced Heisman candidate quarterback Jordan Lynch. The Huskies are the state's only other team competing in the Football Bowl Subdivision, the top level of NCAA football.

Four schools have football programs that compete in the second level of Division I football, the Football Championship Subdivision. The Illinois State Redbirds (Normal, adjacent to Bloomington) and the Southern Illinois Salukis (the latter representing Southern Illinois University's main campus in Carbondale) are members of the Missouri Valley Conference (MVC) for non-football sports and the Missouri Valley Football Conference (MVFC). The Western Illinois Leathernecks (Macomb) are full members of the Summit League, which does not sponsor football, and also compete in the MVFC. The Eastern Illinois Panthers (Charleston) are members of the Ohio Valley Conference (OVC).

The city of Chicago is home to four Division I programs that do not sponsor football. The DePaul Blue Demons, with main campuses in Lincoln Park and the Loop, are members of the Big East Conference. The Loyola Ramblers, with their main campus straddling the Edgewater and Rogers Park community areas on the city's far north side, compete in the MVC. The UIC Flames, from the Near West Side next to the Loop, are in the Horizon League. The Chicago State Cougars, from the city's south side, compete in the Western Athletic Conference. Finally, two non-football Division I programs are located downstate: the Bradley Braves (Peoria) are MVC members, and the SIU Edwardsville Cougars (in the Metro-East region across the Mississippi River from St. Louis) compete in the OVC.

The city was formerly home to several other teams that either failed to survive or belonged to leagues that folded. The NFL's Arizona Cardinals, who currently play in the Phoenix suburb of Glendale, Arizona, played in Chicago as the Chicago Cardinals until moving to St. Louis, Missouri, after the 1959 season. An NBA expansion team known as the Chicago Packers in 1961–1962, and as the Chicago Zephyrs the following year, moved to Baltimore after the 1962–1963 season; the franchise is now known as the Washington Wizards.

The Peoria Chiefs and Kane County Cougars are minor league baseball teams affiliated with MLB. The Schaumburg Boomers and Lake County Fielders are members of the North American League, and the Southern Illinois Miners, Gateway Grizzlies, Joliet Slammers, Windy City ThunderBolts, and Normal CornBelters belong to the Frontier League. In addition to the Chicago Wolves, the AHL also has the Rockford IceHogs, which serve as the AHL affiliate of the Chicago Blackhawks. The second incarnation of the Peoria Rivermen plays in the SPHL.

Motor racing oval tracks at the Chicagoland Speedway in Joliet, the Chicago Motor Speedway in Cicero, and the Gateway International Raceway in Madison, near St. Louis, have hosted NASCAR, CART, and IRL races, while the Sports Car Club of America, among other national and regional road racing clubs, has visited the Autobahn Country Club in Joliet, the Blackhawk Farms Raceway in South Beloit, and the former Meadowdale International Raceway in Carpentersville. Illinois also has several short tracks and dragstrips. The dragstrip at Gateway International Raceway and the Route 66 Raceway, which sits on the same property as the Chicagoland Speedway, both host NHRA drag races.
Illinois features several golf courses, such as Olympia Fields, Medinah, Midlothian, Cog Hill, and Conway Farms, which have often hosted the BMW Championship, the Western Open, and the Women's Western Open. The state has also hosted 13 editions of the U.S. Open (most recently at Olympia Fields in 2003), six editions of the PGA Championship (most recently at Medinah in 2006), three editions of the U.S. Women's Open (most recently at The Merit Club), the 2009 Solheim Cup (at Rich Harvest Farms), and the 2012 Ryder Cup (at Medinah). The John Deere Classic is a regular PGA Tour event that has been played in the Quad Cities since 1971, and the Encompass Championship has been a Champions Tour event since 2013. The State Farm Classic was an LPGA Tour event from 1976 to 2011.

The Illinois state parks system began in 1908 with what is now Fort Massac State Park, the first park in a system that has grown to encompass more than 60 parks and about the same number of recreational and wildlife areas. Areas under the protection of the National Park Service include the Illinois and Michigan Canal National Heritage Corridor near Lockport, the Lewis and Clark National Historic Trail, the Lincoln Home National Historic Site in Springfield, the Mormon Pioneer National Historic Trail, the Trail of Tears National Historic Trail, the American Discovery Trail, and the Pullman National Monument. The federal government also manages the Shawnee National Forest and the Midewin National Tallgrass Prairie.

The government of Illinois, under the Constitution of Illinois, has three branches: executive, legislative, and judicial. The executive branch is split into several statewide elected offices, with the governor as chief executive. Legislative functions are granted to the Illinois General Assembly. The judiciary is composed of the Supreme Court and lower courts.

The Illinois General Assembly is the state legislature, composed of the 118-member Illinois House of Representatives and the 59-member Illinois Senate. Members of the General Assembly are elected in each even-numbered year. The "Illinois Compiled Statutes" (ILCS) are the codified statutes of a general and permanent nature. The executive branch is composed of six elected officers and their offices, as well as numerous other departments. The six elected officers are: Governor, Lieutenant Governor, Attorney General, Secretary of State, Comptroller, and Treasurer. The government of Illinois has numerous departments, agencies, boards, and commissions, but the so-called code departments provide most of the state's services. The Judiciary of Illinois is the unified court system of the state; it consists of the Supreme Court, the Appellate Court, and the Circuit Courts, with the Supreme Court overseeing the administration of the court system.

The administrative divisions of Illinois are counties, townships, precincts, cities, towns, villages, and special-purpose districts. The basic subdivisions of Illinois are the 102 counties; 85 of the 102 are in turn divided into townships and precincts. Municipal governments are the cities, villages, and incorporated towns. Some localities possess "home rule", which allows them to govern themselves to a certain extent.

Illinois is a Democratic stronghold, though historically it was a political swing state, with near-parity existing between the Republican and Democratic parties.
However, in recent elections the Democratic Party has gained ground, and Illinois has come to be seen as a solidly "blue" state in presidential campaigns. Votes from Chicago and most of Cook County have long been strongly Democratic, while the "collar counties" (the suburbs surrounding Cook County) can be seen as moderate voting districts. College towns like Carbondale, Champaign, and Normal also lean Democratic. Republicans continue to prevail in the rural areas of northern and central Illinois, as well as in southern Illinois outside of East St. Louis.

From 1920 until 1972, Illinois was carried by the victor of each of the 14 presidential elections in that span. Indeed, the state was long seen as a national bellwether, supporting the winner of every election in the 20th century except those of 1916 and 1976. Since then, Illinois has trended toward the Democratic Party, voting for its presidential candidates in the last six elections; in 2000, George W. Bush became the first Republican to win the presidency without carrying either Illinois or Vermont. Local politician and Chicago resident Barack Obama easily won the state's 21 electoral votes in 2008, with 61.9% of the vote. In 2010, incumbent governor Pat Quinn was re-elected with 47% of the vote, while Republican Mark Kirk was elected to the Senate with 48% of the vote. In 2012, President Obama easily carried Illinois again, with 58% to Republican candidate Mitt Romney's 41%. In 2014, Republican Bruce Rauner defeated Governor Quinn 50% to 46% to become Illinois's first Republican governor in 12 years, sworn in on January 12, 2015, while Democratic senator Dick Durbin was re-elected with 53% of the vote. In 2016, Hillary Clinton carried Illinois with 55% of the vote, and Tammy Duckworth defeated incumbent Mark Kirk 54% to 40%; George W. Bush and Donald Trump remain the only Republican presidential candidates to have won without carrying either Illinois or Vermont. In 2018, Democrat J.B. Pritzker defeated the incumbent Bruce Rauner for the governorship with 54% of the vote.

Politics in the state have been infamous for highly visible corruption cases, as well as for crusading reformers such as governors Adlai Stevenson and James R. Thompson. In 2006, former governor George Ryan was convicted of racketeering and bribery, leading to a six-and-a-half-year prison sentence. In 2008, then-Governor Rod Blagojevich was served with a criminal complaint on corruption charges stemming from allegations that he had conspired to sell the Senate seat vacated by President Barack Obama to the highest bidder. Blagojevich was impeached and convicted by the legislature, resulting in his removal from office, and on December 7, 2011, he was sentenced to 14 years in prison on those charges, as well as for perjury while testifying during the case, totaling 18 convictions. In the late 20th century, Congressman Dan Rostenkowski was imprisoned for mail fraud; former governor and federal judge Otto Kerner, Jr. was imprisoned for bribery; Secretary of State Paul Powell was investigated and found to have gained great wealth through bribes; and State Auditor of Public Accounts (Comptroller) Orville Hodge was imprisoned for embezzlement. In 1912, William Lorimer, the GOP boss of Chicago, was expelled from the U.S. Senate for bribery, and in 1921, Governor Len Small was found to have defrauded the state of a million dollars.

Illinois has shown a strong presence in presidential elections.
Three presidents have claimed Illinois as their political base when running for president: Abraham Lincoln, Ulysses S. Grant, and, most recently, Barack Obama. Lincoln was born in Kentucky but moved to Illinois at age 21. He served in the General Assembly and represented the 7th congressional district in the U.S. House of Representatives before his election to the presidency in 1860. Ulysses S. Grant was born in Ohio and had a military career that precluded settling down, but on the eve of the Civil War, and approaching middle age, he moved to Illinois and used the state as his home and political base when running for president. Barack Obama was born in Hawaii and made Illinois his home after graduating from law school; he later represented Illinois in the U.S. Senate and was elected president in 2008, running from his Illinois base.

Ronald Reagan was born in Tampico, Illinois, raised in Dixon, and educated at Eureka College, outside Peoria. Reagan moved to California in his young adulthood, became an actor, and later served as Governor of California before being elected president. Hillary Clinton was born and raised in the suburbs of Chicago and became the first woman to represent a major political party in the general election for the U.S. presidency; Clinton, however, ran from a political base in New York State.

Nine African-Americans have served as members of the United States Senate. Three of them have represented Illinois, the most of any single state: Carol Moseley-Braun, Barack Obama, and Roland Burris, who was appointed to replace Obama after his election to the presidency. Moseley-Braun was the first African-American woman to become a U.S. Senator.

Three families from Illinois have played particularly prominent roles in the Democratic Party, gaining both statewide and national fame. The Stevenson family, initially rooted in central Illinois and later based in the Chicago metropolitan area, has provided four generations of Illinois officeholders. The Daley family's power base was in Chicago. The Pritzker family is based in Chicago and has played important roles in both the private and public sectors.

The Illinois State Board of Education (ISBE) is autonomous of the governor and the state legislature and administers public education in the state. Local municipalities and their respective school districts operate individual public schools, but the ISBE audits the performance of public schools with the Illinois School Report Card. The ISBE also makes recommendations to state leaders concerning education spending and policies. Education is compulsory from ages 7 to 17 in Illinois. Schools are commonly, but not exclusively, divided into three tiers of primary and secondary education: elementary school, middle school or junior high school, and high school. District territories are often complex in structure: many areas in the state are located in two school districts, one for high school and the other for elementary and middle schools, and such districts do not necessarily share boundaries. A given high school may have several elementary districts that feed into it, yet some of those feeder districts may themselves feed into multiple high school districts.

Using the criteria established by the Carnegie Foundation for the Advancement of Teaching, there are eleven "National Universities" in the state.
Six of these rank in the "first tier" (that is, the top quartile) among the top 500 National Universities in the United States, as determined by the "U.S. News & World Report" rankings: the University of Chicago (3), Northwestern University (10), the University of Illinois at Urbana–Champaign (41), Loyola University Chicago (89), the Illinois Institute of Technology (108), and DePaul University (123); they are followed by the University of Illinois at Chicago (129), Illinois State University (149), Southern Illinois University Carbondale (153), and Northern Illinois University (194). The University of Chicago is consistently ranked among the world's top ten universities on various independent university rankings, and its Booth School of Business, along with Northwestern's Kellogg School of Management, consistently ranks within the top five graduate business schools in the country and the top ten globally. The University of Illinois at Urbana–Champaign is often ranked among the best engineering schools in the world and in the United States. Illinois also has more than twenty additional accredited four-year universities, both public and private, and dozens of small liberal arts colleges across the state. Additionally, Illinois supports 49 public community colleges in the Illinois Community College System.

Because of its central location and its proximity to the Rust Belt and Grain Belt, Illinois is a national crossroads for air, auto, rail, and truck traffic. From 1962 until 1998, Chicago's O'Hare International Airport (ORD) was the busiest airport in the world, measured both in total flights and in passengers. Although it was surpassed by Atlanta's Hartsfield in 1998 (Chicago splits its air traffic between O'Hare and Midway airports, while Atlanta uses only one airport), O'Hare, with 59.3 million domestic passengers and 11.4 million international passengers in 2008, consistently remains one of the two or three busiest airports globally, and in some years still ranks number one in total flights. It is a major hub for both United Airlines and American Airlines, and a major airport expansion project is currently underway. Midway Airport (MDW), which had been the busiest airport in the world until it was supplanted by O'Hare in 1962, is now the secondary airport in the Chicago metropolitan area and still ranks as one of the nation's busiest. Midway is a major hub for Southwest Airlines and serves many other carriers as well; it handled 17.3 million domestic and international passengers in 2008.

Illinois has an extensive passenger and freight rail transportation network. Chicago is a national Amtrak hub, and in-state passengers are served by Amtrak's Illinois Service, featuring the Chicago to Carbondale "Illini" and "Saluki", the Chicago to Quincy "Carl Sandburg" and "Illinois Zephyr", and the Chicago to St. Louis "Lincoln Service". Trackwork is currently underway on the Chicago–St. Louis line to bring the maximum speed up to 110 mph (180 km/h), which would reduce the trip time by an hour and a half. Nearly every North American railway meets at Chicago, making it the largest and most active rail hub in the country. Extensive rail transit is provided in the city proper and some immediate suburbs by the Chicago Transit Authority's 'L' system, and one of the largest suburban commuter rail systems in the United States, operated by Metra, uses existing rail lines to provide direct commuter rail access for hundreds of suburbs to the city and beyond.
In addition to the state's rail lines, the Mississippi River and Illinois River provide major transportation routes for the state's agricultural interests, and Lake Michigan gives Illinois access to the Atlantic Ocean by way of the Saint Lawrence Seaway.

The Interstate Highways in Illinois are all segments of the Interstate Highway System that are owned and maintained by the state. With 13, Illinois has the most primary (two-digit) interstates passing through it of any of the 50 states. Illinois also ranks third among the states in total interstate mileage, after California and Texas, both much larger states in area. The major U.S. Interstate highways crossing the state are Interstate 24 (I-24), I-39, I-41, I-55, I-57, I-64, I-70, I-72, I-74, I-80, I-88, I-90, and I-94.

The Illinois Department of Transportation (IDOT) is responsible for maintaining the U.S. Highways in Illinois. The system in Illinois consists of 21 primary highways. Among the U.S. highways that pass through the state, the primary ones are: US 6, US 12, US 14, US 20, US 24, US 30, US 34, US 36, US 40, US 41, US 45, US 50, US 51, US 52, US 54, US 60, US 62, and US 67.
https://en.wikipedia.org/wiki?curid=14849
Ian Murdock Ian Ashley Murdock (28 April 1973 – 28 December 2015) was an American software engineer, known for being the founder of the Debian project and of Progeny Linux Systems, a commercial Linux company.

Although Murdock's parents were both from Southern Indiana, he was born in Konstanz, West Germany, on 28 April 1973, where his father was pursuing postdoctoral research. The family returned to the United States in 1975, and Murdock grew up in Lafayette, Indiana, beginning in 1977 when his father became a professor of entomology at Purdue University. Murdock graduated from Harrison High School in 1991 and then earned his bachelor's degree in computer science from Purdue in 1996.

While a college student, Murdock founded the Debian project in August 1993 and wrote the Debian Manifesto in January 1994. Murdock conceived Debian as a Linux distribution that embraced open design, contributions, and support from the free software community. He named Debian after his then-girlfriend (later wife) Debra Lynn and himself (Deb and Ian). They later married, had three children, and divorced in January 2008.

In January 2006, Murdock was appointed Chief Technology Officer of the Free Standards Group and elected chair of the Linux Standard Base workgroup. He continued as CTO of the Linux Foundation when that group was formed from the merger of the Free Standards Group and the Open Source Development Labs. Murdock left the Linux Foundation to join Sun Microsystems in March 2007 to lead Project Indiana, which he described as "taking the lesson that Linux has brought to the operating system and providing that for Solaris": making a full OpenSolaris distribution with GNOME and userland tools from GNU, plus a network-based package management system. From March 2007 to February 2010, he was Vice President of Emerging Platforms at Sun, until the company merged with Oracle and he resigned his position. From 2011 until 2015, Murdock was Vice President of Platform and Developer Community at Salesforce Marketing Cloud, based in Indianapolis. From November 2015 until his death, Murdock worked for Docker, Inc.

Murdock died on 28 December 2015 in San Francisco. Though initially no cause of death was released, in July 2016 it was announced that his death had been ruled a suicide; the police confirmed that the cause of death was asphyxiation from hanging, by means of a vacuum cleaner electrical cord. The last tweets from Murdock's Twitter account first announced that he would commit suicide, then said he would not. He reported having been accused of assault on a police officer after having been himself assaulted by the police, then declared an intent to devote his life to opposing police abuse. His Twitter account was taken down shortly afterwards. The San Francisco police confirmed that he had been detained, saying he matched the description in a reported attempted break-in and appeared to be drunk; they stated that he became violent and was ultimately taken to jail on suspicion of four misdemeanor counts. They added that he did not appear to be suicidal and was medically examined prior to release. Later, police returned on reports of a possible suicide, and the city medical examiner's office confirmed Murdock was found dead.
https://en.wikipedia.org/wiki?curid=14851
Inner product space In linear algebra, an inner product space is a vector space with an additional structure called an inner product. This additional structure associates each pair of vectors in the space with a scalar quantity known as the inner product of the vectors. Inner products allow the rigorous introduction of intuitive geometrical notions such as the length of a vector or the angle between two vectors. They also provide the means of defining orthogonality between vectors (zero inner product). Inner product spaces generalize Euclidean spaces (in which the inner product is the dot product, also known as the scalar product) to vector spaces of any (possibly infinite) dimension, and are studied in functional analysis. The first usage of the concept of a vector space with an inner product is due to Giuseppe Peano, in 1898.

An inner product naturally induces an associated norm, $\|x\| = \sqrt{\langle x, x \rangle}$, so an inner product space is also a normed vector space. A complete space with an inner product is called a Hilbert space. An (incomplete) space with an inner product is called a pre-Hilbert space, since its completion with respect to the norm induced by the inner product is a Hilbert space. Inner product spaces over the field of complex numbers are sometimes referred to as unitary spaces.

In this article, the field of scalars denoted $F$ is either the field of real numbers $\mathbb{R}$ or the field of complex numbers $\mathbb{C}$. Formally, an inner product space is a vector space $V$ over the field $F$ together with an "inner product", i.e., a map $\langle \cdot, \cdot \rangle \colon V \times V \to F$ that satisfies the following three properties for all vectors $x, y, z \in V$ and all scalars $a \in F$:
1. Conjugate symmetry: $\langle x, y \rangle = \overline{\langle y, x \rangle}$.
2. Linearity in the first argument: $\langle ax, y \rangle = a \langle x, y \rangle$ and $\langle x + y, z \rangle = \langle x, z \rangle + \langle y, z \rangle$.
3. Positive-definiteness: $\langle x, x \rangle > 0$ for all $x \neq 0$.

If the positive-definite condition is replaced by merely requiring that $\langle x, x \rangle \geq 0$ for all $x$, then one obtains the definition of a "positive semi-definite Hermitian form". A positive semi-definite Hermitian form $\langle \cdot, \cdot \rangle$ is an inner product if and only if, for all $x$, $\langle x, x \rangle = 0$ implies $x = 0$. Positive-definiteness and linearity, respectively, ensure that $\langle x, x \rangle = 0 \Rightarrow x = 0$ and $\langle 0, 0 \rangle = 0$.

Notice that conjugate symmetry implies that $\langle x, x \rangle$ is real for all $x$, since we have $\langle x, x \rangle = \overline{\langle x, x \rangle}$. Conjugate symmetry and linearity in the first variable imply
$$\langle x, a y \rangle = \overline{\langle a y, x \rangle} = \bar{a}\, \overline{\langle y, x \rangle} = \bar{a} \langle x, y \rangle, \qquad \langle x, y + z \rangle = \langle x, y \rangle + \langle x, z \rangle;$$
that is, conjugate linearity in the second argument. So, an inner product is a sesquilinear form. Conjugate symmetry is also called Hermitian symmetry, and a conjugate-symmetric sesquilinear form is called a "Hermitian form". While the above axioms are more mathematically economical, a compact verbal definition of an inner product is a "positive-definite Hermitian form".

This important generalization of the familiar square expansion follows:
$$\langle x + y, x + y \rangle = \langle x, x \rangle + \langle x, y \rangle + \langle y, x \rangle + \langle y, y \rangle.$$
The properties $\langle x + y, z \rangle = \langle x, z \rangle + \langle y, z \rangle$ and $\langle x, y + z \rangle = \langle x, y \rangle + \langle x, z \rangle$, constituents of the above linearity in the first and second argument, are otherwise known as "additivity".

In the case of $F = \mathbb{R}$, conjugate symmetry reduces to symmetry, and sesquilinearity reduces to bilinearity. So, an inner product on a real vector space is a "positive-definite symmetric bilinear form". That is, $\langle x, y \rangle = \langle y, x \rangle$, and the binomial expansion becomes
$$\langle x + y, x + y \rangle = \langle x, x \rangle + 2\langle x, y \rangle + \langle y, y \rangle.$$
A common special case of the inner product, the scalar product or dot product, is written with a centered dot $x \cdot y$. Some authors, especially in physics and matrix algebra, prefer to define the inner product and the sesquilinear form with linearity in the second argument rather than the first. Then the first argument becomes conjugate linear, rather than the second.
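Before turning to that alternative convention, a concrete check of the axioms may help. The following worked example (a standard illustration, not drawn from this article's sources) verifies all three axioms for the usual inner product on $\mathbb{C}^n$ under the first-argument-linear convention used here:

```latex
% The standard inner product on C^n, checked against the three axioms.
\langle x, y \rangle = \sum_{k=1}^{n} x_k \overline{y_k}
% Conjugate symmetry:
\overline{\langle y, x \rangle}
  = \overline{\textstyle\sum_k y_k \overline{x_k}}
  = \textstyle\sum_k \overline{y_k}\, x_k
  = \langle x, y \rangle
% Linearity in the first argument:
\langle a x + z, y \rangle
  = \textstyle\sum_k (a x_k + z_k) \overline{y_k}
  = a \langle x, y \rangle + \langle z, y \rangle
% Positive-definiteness:
\langle x, x \rangle = \textstyle\sum_k |x_k|^2 > 0 \quad \text{for } x \neq 0
% Induced norm, e.g. for x = (1, i) in C^2:
\|x\| = \sqrt{\langle x, x \rangle} = \sqrt{1 \cdot 1 + i \cdot (-i)} = \sqrt{2}
```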
In those disciplines we would write the product $\langle x, y \rangle$ as $\langle y \mid x \rangle$ (the bra–ket notation of quantum mechanics), respectively $y^\dagger x$ (dot product as a case of the convention of forming the matrix product $AB$ as the dot products of rows of $A$ with columns of $B$). Here the kets and columns are identified with the vectors of $V$, and the bras and rows with the linear functionals (covectors) of the dual space $V^*$, with conjugacy associated with duality. This reverse order is now occasionally followed in the more abstract literature, taking $\langle x, y \rangle$ to be conjugate linear in $x$ rather than $y$. A few instead find a middle ground by recognizing both $\langle \cdot, \cdot \rangle$ and $(\cdot, \cdot)$ as distinct notations differing only in which argument is conjugate linear.

There are various technical reasons why it is necessary to restrict the base field to $\mathbb{R}$ and $\mathbb{C}$ in the definition. Briefly, the base field has to contain an ordered subfield in order for non-negativity to make sense, and therefore has to have characteristic equal to 0 (since any ordered field has to have such characteristic). This immediately excludes finite fields. The base field has to have additional structure, such as a distinguished automorphism. More generally, any quadratically closed subfield of $\mathbb{R}$ or $\mathbb{C}$ will suffice for this purpose, e.g., the algebraic numbers or the constructible numbers. However, in these cases, when it is a proper subfield (i.e., neither $\mathbb{R}$ nor $\mathbb{C}$), even finite-dimensional inner product spaces will fail to be metrically complete. In contrast, all finite-dimensional inner product spaces over $\mathbb{R}$ or $\mathbb{C}$, such as those used in quantum computation, are automatically metrically complete and hence Hilbert spaces. In some cases we need to consider non-negative "semi-definite" sesquilinear forms; this means that $\langle x, x \rangle$ is only required to be non-negative. We show how to treat these below.

A simple example is the real numbers with the standard multiplication as the inner product: $\langle x, y \rangle = xy$. More generally, the real $n$-space $\mathbb{R}^n$ with the dot product
$$\langle x, y \rangle = x \cdot y = x^{\mathsf{T}} y = \sum_{k=1}^{n} x_k y_k,$$
where $x^{\mathsf{T}}$ is the transpose of $x$, is an inner product space, an example of a Euclidean vector space. The general form of an inner product on $\mathbb{C}^n$ is known as the Hermitian form and is given by
$$\langle x, y \rangle = y^\dagger \mathbf{M} x,$$
where $\mathbf{M}$ is any Hermitian positive-definite matrix and $y^\dagger$ is the conjugate transpose of $y$. For the real case, this corresponds to the dot product of the results of directionally different scaling of the two vectors, with positive scale factors and orthogonal directions of scaling. Up to an orthogonal transformation, it is a weighted-sum version of the dot product, with positive weights.

The article on Hilbert spaces has several examples of inner product spaces wherein the metric induced by the inner product yields a complete metric space. An example of an inner product which induces an incomplete metric occurs with the space $C[a, b]$ of continuous complex-valued functions $f$ and $g$ on the interval $[a, b]$. The inner product is
$$\langle f, g \rangle = \int_a^b f(t)\, \overline{g(t)} \, dt.$$
This space is not complete; consider, for example, for the interval $[-1, 1]$, the sequence of continuous "step" functions $\{f_k\}_k$ defined by
$$f_k(t) = \begin{cases} 0 & t \in [-1, 0] \\ kt & t \in (0, 1/k) \\ 1 & t \in [1/k, 1]. \end{cases}$$
This sequence is a Cauchy sequence for the norm induced by the preceding inner product, which does not converge to a "continuous" function.

For real random variables $X$ and $Y$, the expected value of their product, $\langle X, Y \rangle = \mathbb{E}[XY]$, is an inner product. In this case, $\langle X, X \rangle = 0$ if and only if $\Pr(X = 0) = 1$ (i.e., $X = 0$ almost surely). This definition of expectation as inner product can be extended to random vectors as well. For real square matrices of the same size, $\langle A, B \rangle := \operatorname{tr}(AB^{\mathsf{T}})$, with transpose as conjugation ($\langle A, B \rangle = \langle B^{\mathsf{T}}, A^{\mathsf{T}} \rangle$), is an inner product.
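The Hermitian-form example above lends itself to a quick numerical sanity check. The following minimal NumPy sketch is our own illustration (the matrix construction and the names are ours, not from any source cited here); it builds a random Hermitian positive-definite $\mathbf{M}$ and verifies the three axioms on random vectors:

```python
# A minimal numerical sketch of the Hermitian form <x, y> = y* M x.
import numpy as np

rng = np.random.default_rng(0)

# Build a Hermitian positive-definite M as A*A + I.
A = rng.normal(size=(3, 3)) + 1j * rng.normal(size=(3, 3))
M = A.conj().T @ A + np.eye(3)

def inner(x, y, M=M):
    """<x, y> = y* M x: linear in x, conjugate-linear in y."""
    return y.conj() @ M @ x

x = rng.normal(size=3) + 1j * rng.normal(size=3)
y = rng.normal(size=3) + 1j * rng.normal(size=3)
a = 2 - 1j

# Conjugate symmetry: <x, y> == conj(<y, x>).
assert np.isclose(inner(x, y), np.conj(inner(y, x)))
# Linearity in the first argument: <a x + y, y> == a <x, y> + <y, y>.
assert np.isclose(inner(a * x + y, y), a * inner(x, y) + inner(y, y))
# Positive-definiteness: <x, x> is real and positive for x != 0.
assert inner(x, x).real > 0 and np.isclose(inner(x, x).imag, 0)
```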
On an inner product space, or more generally a vector space with a nondegenerate form (hence an isomorphism $V \to V^*$), vectors can be sent to covectors (in coordinates, via transpose), so one can take the inner product and outer product of two vectors, not simply of a vector and a covector.

Inner product spaces are normed vector spaces for the norm defined by $\|x\| = \sqrt{\langle x, x \rangle}$. As for every normed vector space, an inner product space is a metric space, for the distance defined by $d(x, y) = \|y - x\|$. Directly from the axioms of the inner product, one can prove that the axioms of a norm are satisfied, as well as a number of further properties.

Let $V$ be a finite-dimensional inner product space of dimension $n$. Recall that every basis of $V$ consists of exactly $n$ linearly independent vectors. Using the Gram–Schmidt process, we may start with an arbitrary basis and transform it into an orthonormal basis, that is, into a basis in which all the elements are orthogonal and have unit norm (a short computational sketch of the process is given below). In symbols, a basis $\{e_1, \ldots, e_n\}$ is orthonormal if $\langle e_i, e_j \rangle = 0$ for every $i \neq j$ and $\langle e_i, e_i \rangle = 1$ for each $i$.

This definition of orthonormal basis generalizes to the case of infinite-dimensional inner product spaces in the following way. Let $V$ be any inner product space. Then a collection $E \subseteq V$ is a "basis" for $V$ if the subspace of $V$ generated by finite linear combinations of elements of $E$ is dense in $V$ (in the norm induced by the inner product). We say that $E$ is an "orthonormal basis" for $V$ if it is a basis and $\langle e, e' \rangle = 0$ for all distinct $e, e' \in E$ and $\langle e, e \rangle = 1$ for each $e \in E$. Using an infinite-dimensional analog of the Gram–Schmidt process, one may show:

Theorem. Any separable inner product space has an orthonormal basis.

Using the Hausdorff maximal principle and the fact that in a complete inner product space orthogonal projection onto linear subspaces is well-defined, one may also show:

Theorem. Any complete inner product space has an orthonormal basis.

The two previous theorems raise the question of whether all inner product spaces have an orthonormal basis. The answer, it turns out, is negative. This is a non-trivial result; a proof may be found in Halmos's "A Hilbert Space Problem Book" (see the references).

Parseval's identity leads immediately to the following theorem:

Theorem. Let $V$ be a separable inner product space and $\{e_k\}_k$ an orthonormal basis of $V$. Then the map $x \mapsto \{\langle x, e_k \rangle\}_k$ is an isometric linear map $V \to \ell^2$ with a dense image.

This theorem can be regarded as an abstract form of Fourier series, in which an arbitrary orthonormal basis plays the role of the sequence of trigonometric polynomials. Note that the underlying index set can be taken to be any countable set (and in fact any set whatsoever, provided $\ell^2$ is defined appropriately, as is explained in the article Hilbert space). In particular, we obtain the following result in the theory of Fourier series:

Theorem. Let $V$ be the inner product space $C[-\pi, \pi]$. Then the sequence (indexed on the set of all integers) of continuous functions
$$e_k(t) = \frac{e^{ikt}}{\sqrt{2\pi}}$$
is an orthonormal basis of the space $C[-\pi, \pi]$ with the $L^2$ inner product. The mapping
$$x \mapsto \left\{ \frac{1}{\sqrt{2\pi}} \int_{-\pi}^{\pi} x(t)\, e^{-ikt} \, dt \right\}_k$$
is an isometric linear map with dense image.

Orthogonality of the sequence follows immediately from the fact that if $k \neq j$, then
$$\int_{-\pi}^{\pi} e^{i(k-j)t} \, dt = 0.$$
Normality of the sequence is by design; that is, the coefficients are chosen so that the norm comes out to 1. Finally, the fact that the sequence has a dense algebraic span, in the "inner product norm", follows from the fact that the sequence has a dense algebraic span, this time in the space of continuous periodic functions on $[-\pi, \pi]$ with the uniform norm. This is the content of the Weierstrass theorem on the uniform density of trigonometric polynomials.
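The Gram–Schmidt process referred to above can be sketched in a few lines of code. This is our own minimal illustration (the function name and the tolerance are ours), using the convention $\langle x, y \rangle = \sum_k x_k \overline{y_k}$:

```python
# A compact sketch of the Gram-Schmidt process for vectors in C^n
# with the standard inner product <x, y> = sum_k x_k conj(y_k).
import numpy as np

def gram_schmidt(vectors):
    """Turn a list of vectors into an orthonormal list spanning the same space."""
    basis = []
    for v in vectors:
        w = v.astype(complex)
        for e in basis:
            # Subtract the component of w along each accepted basis vector;
            # since e is a unit vector, that component is <w, e> e.
            w = w - (w @ e.conj()) * e
        norm = np.sqrt((w @ w.conj()).real)
        if norm > 1e-12:            # skip (numerically) dependent vectors
            basis.append(w / norm)
    return basis

vecs = np.array([[1.0, 1.0, 0.0], [1.0, 0.0, 1.0], [0.0, 1.0, 1.0]])
E = gram_schmidt(vecs)
# Orthonormality check: <e_i, e_j> = 1 if i == j, else 0.
G = np.array([[ei @ ej.conj() for ej in E] for ei in E])
assert np.allclose(G, np.eye(len(E)))
```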
Several types of linear maps from an inner product space $V$ to an inner product space $W$ are of relevance. From the point of view of inner product space theory, there is no need to distinguish between two spaces which are isometrically isomorphic. The spectral theorem provides a canonical form for symmetric, unitary, and, more generally, normal operators on finite-dimensional inner product spaces. A generalization of the spectral theorem holds for continuous normal operators in Hilbert spaces.

Any of the axioms of an inner product may be weakened, yielding generalized notions. The generalizations that are closest to inner products occur where bilinearity and conjugate symmetry are retained, but positive-definiteness is weakened. If $V$ is a vector space and $\langle \cdot, \cdot \rangle$ a semi-definite sesquilinear form, then the function $\|x\| = \sqrt{\langle x, x \rangle}$ makes sense and satisfies all the properties of a norm except that $\|x\| = 0$ does not imply $x = 0$ (such a functional is then called a semi-norm). We can produce an inner product space by considering the quotient $W = V / \{x : \|x\| = 0\}$; the sesquilinear form factors through $W$. This construction is used in numerous contexts. The Gelfand–Naimark–Segal construction is a particularly important example of the use of this technique. Another example is the representation of semi-definite kernels on arbitrary sets.

Alternatively, one may require that the pairing be a nondegenerate form, meaning that for all non-zero $x$ there exists some $y$ such that $\langle x, y \rangle \neq 0$, though $y$ need not equal $x$; in other words, the induced map to the dual space $V \to V^*$ is injective. This generalization is important in differential geometry: a manifold whose tangent spaces have an inner product is a Riemannian manifold, while if the form is merely nondegenerate and conjugate symmetric, the manifold is a pseudo-Riemannian manifold. By Sylvester's law of inertia, just as every inner product is similar to the dot product with positive weights on a set of vectors, every nondegenerate conjugate symmetric form is similar to the dot product with "nonzero" weights on a set of vectors, and the numbers of positive and negative weights are called respectively the positive index and negative index. The product of vectors in Minkowski space is an example of an indefinite inner product, although, technically speaking, it is not an inner product according to the standard definition above. Minkowski space has four dimensions and indices 3 and 1 (the assignment of "+" and "−" to them differs depending on conventions). Purely algebraic statements (ones that do not use positivity) usually rely only on the nondegeneracy (the injective homomorphism $V \to V^*$) and thus hold more generally.

The term "inner product" is opposed to outer product, which is a slightly more general opposite. Simply, in coordinates, the inner product is the product of a $1 \times n$ covector with an $n \times 1$ vector, yielding a $1 \times 1$ matrix (a scalar), while the outer product is the product of an $m \times 1$ vector with a $1 \times n$ covector, yielding an $m \times n$ matrix. Note that the outer product is defined for different dimensions, while the inner product requires the same dimension. If the dimensions are the same, then the inner product is the "trace" of the outer product (trace only being properly defined for square matrices). In a quip: "inner is horizontal times vertical and shrinks down, outer is vertical times horizontal and expands out".
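In coordinates, the contrast just described, including the trace identity, is easy to verify numerically. A minimal NumPy sketch of our own (using the convention that the second argument is conjugated):

```python
# Sketch of the inner/outer product contrast described above, in coordinates.
import numpy as np

v = np.array([1.0, 2.0, 3.0])
w = np.array([4.0, 5.0, 6.0])

inner = w.conj() @ v              # covector times vector: a scalar (1x1)
outer = np.outer(v, w.conj())     # vector times covector: an n x n matrix

# For equal dimensions, the inner product is the trace of the outer product.
assert np.isclose(np.trace(outer), inner)
```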
More abstractly, the outer product is the bilinear map $W \times V^* \to \operatorname{Hom}(V, W)$ sending a vector and a covector to a rank 1 linear transformation (a simple tensor of type (1, 1)), while the inner product is the bilinear evaluation map $V^* \times V \to F$ given by evaluating a covector on a vector; the order of the domain vector spaces here reflects the covector/vector distinction. The inner product and outer product should not be confused with the interior product and exterior product, which are instead operations on vector fields and differential forms, or more generally on the exterior algebra. As a further complication, in geometric algebra the inner product and the "exterior" (Grassmann) product are combined in the geometric product (the Clifford product in a Clifford algebra): the inner product sends two vectors (1-vectors) to a scalar (a 0-vector), while the exterior product sends two vectors to a bivector (2-vector). In this context the exterior product is usually called the "outer product" (alternatively, "wedge product"), and the inner product is more correctly called a "scalar" product, as the nondegenerate quadratic form in question need not be positive definite (need not be an inner product).
https://en.wikipedia.org/wiki?curid=14856
Iain Banks Iain Banks (16 February 1954 – 9 June 2013) was a Scottish author. He wrote mainstream fiction under the name Iain Banks and science fiction as Iain M. Banks, adding the initial of his adopted middle name, Menzies. After the publication and success of "The Wasp Factory" (1984), Banks began to write on a full-time basis. His first science fiction book, "Consider Phlebas", was released in 1987, marking the start of the Culture series. His books have been adapted for theatre, radio and television. In 2008, "The Times" named Banks in their list of "The 50 greatest British writers since 1945". In April 2013, Banks announced that he had inoperable cancer and was unlikely to live beyond a year. He died on 9 June 2013. Banks was born in Dunfermline, Fife, to a mother who was a professional ice skater and a father who was an officer in the Admiralty. An only child, Banks lived in North Queensferry until the age of nine, near the naval dockyards in Rosyth where his father was based. Banks's family then moved to Gourock due to the requirements of his father's work. After someone introduced him to science fiction by giving him "Kemlo and the Zones of Silence", he continued reading the Kemlo series, which made him want to write science fiction himself. After attending Gourock and Greenock High Schools, Banks studied English, philosophy and psychology at the University of Stirling (1972–1975). After graduation Banks chose a succession of jobs that left him free to write in the evenings. These posts supported his writing throughout his twenties and allowed him to take long breaks between contracts, during which time he travelled through Europe and North America. During this period he worked as an IBM 'Expediter Analyser' (a kind of procurement clerk), a testing technician for the British Steel Corporation and a costing clerk for a law firm in London's Chancery Lane. Banks decided to become a writer at the age of 11. He completed his first novel, "The Hungarian Lift-Jet", at 16 and his second, "TTR" (aka "The Tashkent Rambler"), during his first year at Stirling University in 1972. Though he considered himself primarily a science fiction author, his lack of success at being published as such led him to pursue mainstream fiction, resulting in his first published novel, "The Wasp Factory" (1984), which appeared when he was thirty. After its publication and success, Banks began to write full-time. His editor at Macmillan, James Hale, advised him to write one book a year and Banks agreed to this schedule. His second novel, "Walking on Glass", was published in 1985. "The Bridge" followed in 1986, and "Espedair Street", published in 1987, was later broadcast as a series on BBC Radio 4. His first published science fiction book, "Consider Phlebas", was released in 1987, the first of several novels in the acclaimed Culture series. Banks cited Robert A. Heinlein, Isaac Asimov, Arthur C. Clarke, Brian Aldiss, M. John Harrison and Dan Simmons as literary influences. "The Crow Road", published in 1992, was adapted as a BBC television series. Banks continued to write both science fiction and mainstream novels, with his final novel, "The Quarry", published in June 2013, the month of his death. Banks published work under two names. His parents had intended to name him "Iain Menzies Banks", but his father made a mistake when registering the birth and "Iain Banks" became the officially registered name.
Despite this error, Banks used the middle name and submitted "The Wasp Factory" for publication as "Iain M. Banks". Banks's editor inquired about the possibility of omitting the 'M' as it appeared "too fussy" and the potential existed for confusion with Rosie M. Banks, a romantic novelist in the Jeeves novels by P. G. Wodehouse; Banks agreed to the omission. After three mainstream novels, Banks's publishers agreed to publish his first science fiction (SF) novel, "Consider Phlebas". To create a distinction between the mainstream and SF novels, Banks suggested the return of the 'M' to his name, and it was used in all of his science fiction works. By his death in June 2013 Banks had published 26 novels; his twenty-seventh, "The Quarry", was published posthumously. His final work, a collection of poetry, was released in February 2015. In an interview in January 2013, he also mentioned he had the plot idea for another novel in the Culture series, which would most likely have been his next book and was planned for publication in 2014. He wrote in different categories, but enjoyed writing science fiction the most. In September 2012 Banks was revealed as one of the Guests of Honour at the 2014 World Science Fiction Convention, Loncon 3. Banks was the subject of "The Strange Worlds of Iain Banks", a 1997 "South Bank Show" television documentary that examined his mainstream writing, and was also an in-studio guest for the final episode of Marc Riley's "Rocket Science" radio show, broadcast on BBC Radio 6 Music. An audio version of "The Business", set to contemporary music arranged by Paul Oakenfold, was broadcast in October 1999 on Galaxy FM as the tenth of the Urban Soundtracks. A radio adaptation of Banks's "The State of the Art" was broadcast on BBC Radio 4 in 2009; the adaptation was written by Paul Cornell and the production was directed and produced by Nadia Molinari. In 1998 "Espedair Street" was dramatised as a serial for Radio 4, presented by Paul Gambaccini in the style of a Radio 1 documentary. In 2011 Banks was featured on the BBC Radio 4 programme "Saturday Live". Banks reaffirmed his atheism during his "Saturday Live" appearance, explaining that death is an important "part of the totality of life" and should be treated realistically, instead of feared. Banks appeared on the BBC television programme "Question Time", a show that features political discussion. In 2006 Banks captained a team of writers to victory in a special series of BBC Two's "University Challenge: The Professionals". Banks also won a 2006 edition of BBC One's "Celebrity Mastermind"; the author selected "Malt whisky and the distilleries of Scotland" as his specialist subject. His final interview was with Kirsty Wark and was broadcast as "Iain Banks: Raw Spirit" on BBC2 Scotland on Wednesday 12 June 2013. BBC One Scotland and BBC2 broadcast an adaptation of his novel "Stonemouth" in June 2015. Banks was involved in the theatre production "The Curse of Iain Banks", written by Maxton Walker and performed at the Edinburgh Fringe festival in 1999. Banks collaborated frequently with the play's soundtrack composer, Gary Lloyd, including on a collection of songs they co-composed in tribute to the fictional band Frozen Gold from Banks's novel "Espedair Street". Lloyd also composed the score for a spoken word and musical production of the Banks novel "The Bridge", which Banks himself voiced and which featured a cast of forty musicians (released on CD by Codex Records in 1996).
Lloyd recorded Banks for inclusion in the play as a disembodied voice appearing as himself in one of the cast members' dreams. Lloyd explained his collaboration with Banks on their first versions of "Espedair Street" (subsequent versions are dated between 2005 and 2013) in a "Guardian" article prior to the opening of "The Curse of Iain Banks": When he [Banks] first played them to me, I think he was worried that they might not be up to scratch (some of them dated back to 1973 and had never been heard). He needn't have worried. They're fantastic. We're slaving away to get the songs to the stage where we can go into the studio and make a demo. Iain bashes out melodies on his state-of-the-art Apple Mac in Edinburgh and sends them down to me in Chester where I put them onto my Atari. Banks's political position has been described as "left of centre", and he was an Honorary Associate of the National Secular Society and a Distinguished Supporter of the Humanist Society Scotland. As a signatory to the Declaration of Calton Hill, he was an open supporter of Scottish independence. In November 2012, Banks supported the campaign group that emerged from the Radical Independence Conference that was held during that month. Banks explained that the Scottish independence movement was motivated by co-operation, saying its supporters "just seem to be more communitarian than the consensus expressed by the UK population as a whole". In late 2004, Banks was a member of a group of British politicians and media figures who campaigned to have Prime Minister Tony Blair impeached following the 2003 invasion of Iraq. In protest he cut up his passport and posted it to 10 Downing Street; in a "Socialist Review" interview, Banks explained that his passport protest occurred after he "abandoned the idea of crashing my Land Rover through the gates of Fife dockyard, after spotting the guys armed with machine guns." Banks relayed his concerns about the invasion of Iraq in his book "Raw Spirit", and the principal protagonist (Alban McGill) in the novel "The Steep Approach to Garbadale" confronts another character with arguments of a similar nature. In 2010 Banks called for a cultural and educational boycott of Israel after the Gaza flotilla raid incident. In a letter to "The Guardian" newspaper, Banks stated that he had instructed his agent to turn down any further book translation deals with Israeli publishers: Appeals to reason, international law, U.N. resolutions and simple human decency mean, it is now obvious, nothing to Israel ... I would urge all writers, artists and others in the creative arts, as well as those academics engaging in joint educational projects with Israeli institutions, to consider doing everything they can to convince Israel of its moral degradation and ethical isolation, preferably by simply having nothing more to do with this outlaw state. An extract from Banks's contribution to the written collection "Generation Palestine: Voices from the Boycott, Divestment and Sanctions Movement", entitled "Our People", was published in "The Guardian" in the wake of the author's cancer revelation. The extract relays the author's support for the Boycott, Divestment and Sanctions (BDS) campaign, launched by Palestinian civil society in 2005 and aimed at pressuring Israel until it complies with what the campaign holds to be international law and Palestinian rights, and applies the lessons of Banks's experience of South Africa's apartheid era.
The continuation of Banks's boycott of Israeli publishers for the sale of the rights to his novels was also confirmed in the extract, and Banks further explained, "I don't buy Israeli-sourced products or food, and my partner and I try to support Palestinian-sourced products wherever possible." In 2002, Banks endorsed the Scottish Socialist Party. Banks met his first wife, Annie, in London before the 1984 release of his first book. The couple lived in Faversham in the south of England, then split up in 1988. Banks returned to Edinburgh and dated another woman for two years until she left him. Iain and Annie reconciled a year later, moved to Fife, and married in Hawaii in 1992; in 2007, after 15 years of marriage, they announced their separation. In 1998 Banks was in a near-fatal accident when his car rolled off the road. In February 2007, Banks sold his extensive car collection, including a 3.2-litre Porsche Boxster, a Porsche 911 Turbo, a 3.8-litre Jaguar Mark II, a 5-litre BMW M5 and a daily-use diesel Land Rover Defender whose power he had boosted by about 50%. Banks exchanged all of the vehicles for a Lexus RX 400h hybrid – later replaced by a diesel Toyota Yaris – and said in the future he would fly only in emergencies. In April 2012 Banks became the "Acting Honorary Non-Executive Figurehead President Elect pro tem (trainee)" of the Science Fiction Book Club based in London. The title was his own creation, and on 3 October 2012 Banks accepted a T-shirt decorated with it. Banks lived in North Queensferry, on the north side of the Firth of Forth, with Adele Hartley, an author and the founder of the Dead by Dawn film festival. Banks and Hartley commenced their relationship in 2006, and married on 29 March 2013 after he asked her to "do me the honour of becoming my widow". On 3 April 2013, Banks announced on his website, and on a site set up by him and some friends, that he had been diagnosed with terminal cancer of the gallbladder and was unlikely to live beyond a year. In his announcement, Banks stated that he would be withdrawing from all public engagements and that "The Quarry" would be his last novel. The dates of publication of "The Quarry" were brought forward at Banks's request, to 20 June 2013 in the UK and 25 June 2013 in the US and Canada. Banks died on 9 June 2013. Banks's publisher stated that the author was "an irreplaceable part of the literary world", a sentiment reaffirmed by Ken MacLeod, a fellow Scottish author and friend of Banks since secondary school, who observed that Banks's death "left a large gap in the Scottish literary scene as well as the wider English-speaking world." British author Charles Stross wrote that "One of the giants of 20th and 21st century Scottish literature has left the building." Authors including Neil Gaiman, Ian Rankin, Alastair Reynolds, and David Brin also paid tribute to Banks, in their blogs and elsewhere. The asteroid 5099 Iainbanks was named after him shortly after his death. On 23 January 2015, SpaceX's CEO Elon Musk named two of the company's autonomous spaceport drone ships "Just Read The Instructions" and "Of Course I Still Love You", after ships from Banks's novel "The Player of Games". Another, "A Shortfall of Gravitas", began construction in 2018; the name is a reference to the ship "Experiencing A Significant Gravitas Shortfall", first mentioned in "Look to Windward". On 13 May 2019, the Five Deeps Expedition achieved the deepest ocean dive on record in the "DSV Limiting Factor"; the support ship was named "DSSV Pressure Drop".
The financial sponsor behind the Limiting Factor's design and construction, Victor Vescovo, is a great admirer of the science fiction genre and in particular of the Culture series; "Limiting Factor" and "Pressure Drop" are two of the ship names in the series. Iain Banks received numerous literary awards and nominations. Banks's non-SF work comprises fourteen novels and one non-fiction book. Many of his novels contain elements of autobiography, and feature various locations in his native Scotland. "Raw Spirit" (subtitled "In Search of the Perfect Dram") is a travel book recounting Banks's visits to the distilleries of Scotland in search of the finest whisky, including his musings on other subjects such as cars and politics. Banks wrote fourteen SF novels, ten of which were part of the Culture series, and a short story collection called "The State of the Art" (1991), which includes some stories set in the same universe. These works focus on characters that are usually on the margins of the Culture, a post-scarcity anarchist utopia. The same universe contains sentient artificial intelligences and other civilizations, with which the Culture sometimes enters into conflict. Banks also wrote introductions for works by other writers.
https://en.wikipedia.org/wiki?curid=14858
Incunable An incunable, or sometimes incunabulum (plural incunables or incunabula, respectively), is a book, pamphlet, or broadside printed in Europe before the 16th century. Incunabula are not manuscripts, which are documents written by hand. There are about 30,000 distinct known incunable editions extant, but the probable number of surviving copies in Germany alone is estimated at around 125,000. Through statistical analysis, it is estimated that the number of lost editions is at least 20,000. Incunable is the anglicised form of "incunabulum", reconstructed singular of Latin "incunabula", which meant "swaddling clothes" or "cradle", and which metaphorically could and can refer to "the earliest stages or first traces in the development of anything". A former term for incunable is fifteener, meaning a "fifteenth-century edition". The term "incunabula" as a printing term was first used by the Dutch physician and humanist Hadrianus Iunius (Adriaan de Jonghe, 1511–1575) and appears in a passage from his posthumous work "Batavia" (written in 1569): Hadrianus Iunius, "Batavia", [...], [Lugduni Batavorum], ex officina Plantiniana, apud Franciscum Raphelengium, 1588, p. 256 l. 3: «inter prima artis [typographicae] incunabula» ("the first infancy of printing"), a term to which he arbitrarily set an end date of 1500, which still stands as a convention. Only by a misunderstanding was Bernhard von Mallinckrodt (1591–1664) considered to be the inventor of this meaning of "incunabula"; the identical passage is found in his Latin pamphlet "De ortu ac progressu artis typographicae" ("On the rise and progress of the typographic art", Cologne, 1640): Bernardus a Mallinkrot, "De ortu ac progressu artis typographicae dissertatio historica", [...], Coloniae Agrippinae, apud Ioannem Kinchium, 1640 (in frontispiece: 1639), p. 29 l. 16: «inter prima artis [typographicae] incunabula», within a long passage of several pages, which he (correctly) quotes entirely in italic characters (that is, between quotation marks), referring to the name of the author and the work cited: «Primus istorum [...] Hadrianus Iunius est, cuius integrum locum, ex "Batavia" eius, operae pretium est adscribere; [...]. Ita igitur Iunius» (ibid., p. 27 ll. 27–32, followed by the long passage, «Redeo → sordes», ibid., p. 27, l. 32 – p. 33 l. 32 [= "Batavia", p. 253 l. 28 – p. 258 l. 21]). So there is only one source; the other is a quotation. The term "incunabula" came to denote the printed books themselves in the late 17th century. John Evelyn, in moving the Arundel Manuscripts to the Royal Society in August 1678, remarked of the printed books among the manuscripts: "The printed books, being of the oldest impressions, are not the less valuable; I esteem them almost equal to MSS." The convenient but arbitrarily chosen end date for identifying a printed book as an incunable does not reflect any notable developments in the printing process, and many books printed for a number of years after 1500 continued to be visually indistinguishable from incunables. "Post-incunable" typically refers to books printed after 1500 up to another arbitrary end date such as 1520 or 1540. From around this period the dating of any edition becomes easier, as the practice of printers including information such as the place and year of printing became more widespread.
There are two types of "incunabula" in printing: the block book, printed from a single carved or sculpted wooden block for each page, employing the same process as the woodcut in art (these may be called "xylographic"); and the "typographic book", made with individual pieces of cast-metal movable type on a printing press. Many authors reserve the term "incunabula" for the latter kind only. The spread of printing to cities both in the north and in Italy ensured that there was great variety in the texts chosen for printing and the styles in which they appeared. Many early typefaces were modelled on local forms of writing or derived from the various European forms of Gothic script, but there were also some derived from documentary scripts (such as most of Caxton's types), and, particularly in Italy, types modelled on handwritten scripts and calligraphy employed by humanists. Printers congregated in urban centres where there were scholars, ecclesiastics, lawyers, and nobles and professionals who formed their major customer base. Standard works in Latin inherited from the medieval tradition formed the bulk of the earliest printed works, but as books became cheaper, vernacular works (or translations into vernaculars of standard works) began to appear. The most famous "incunabula" include two from Mainz, the Gutenberg Bible of 1455 and the "Peregrinatio in terram sanctam" of 1486, printed and illustrated by Erhard Reuwich; the "Nuremberg Chronicle" written by Hartmann Schedel and printed by Anton Koberger in 1493; and the "Hypnerotomachia Poliphili" printed by Aldus Manutius with important illustrations by an unknown artist. Other printers of incunabula were Günther Zainer of Augsburg, Johannes Mentelin and Heinrich Eggestein of Strasbourg, Heinrich Gran of Haguenau and William Caxton of Bruges and London. The first incunable to have woodcut illustrations was Ulrich Boner's "Der Edelstein", printed by Albrecht Pfister in Bamberg in 1461. Many incunabula are undated, needing complex bibliographical analysis to place them correctly. The post-incunabula period marks a time of development during which the printed book evolved fully as a mature artefact with a standard format. After c. 1540 books tended to conform to a template that included the author, title-page, date, seller, and place of printing. This makes it much easier to identify any particular edition. As noted above, the "end date" for identifying a printed book as an incunable is convenient but was chosen arbitrarily; it does not reflect any notable developments in the printing process around the year 1500. Books printed for a number of years after 1500 continued to look much like incunables, with the notable exception of the small format books printed in italic type introduced by Aldus Manutius in 1501. The term post-incunable is sometimes used to refer to books printed "after 1500—how long after, the experts have not yet agreed." For books printed in the UK, the term generally covers 1501–1520, and for books printed in mainland Europe, 1501–1540. The data in this section were derived from the Incunabula Short-Title Catalogue (ISTC). The number of printing towns and cities stands at 282. These are situated in some 18 countries in terms of present-day boundaries. In descending order of the number of editions printed in each, these are: Italy, Germany, France, Netherlands, Switzerland, Spain, Belgium, England, Austria, the Czech Republic, Portugal, Poland, Sweden, Denmark, Turkey, Croatia, Montenegro, and Hungary (see diagram). 
The 20 main 15th-century printing locations can be ranked by the number of editions printed in each; as with all data in this section, exact figures are given but should be treated as close estimates (the total number of editions recorded in ISTC as of May 2013 is 28,395). The 18 languages that incunabula are printed in, in descending order, are: Latin, German, Italian, French, Dutch, Spanish, English, Hebrew, Catalan, Czech, Greek, Church Slavonic, Portuguese, Swedish, Breton, Danish, Frisian and Sardinian. Only about one edition in ten (i.e. just over 3,000) has any illustrations, woodcuts or metalcuts. The "commonest" incunable is Schedel's "Nuremberg Chronicle" ("Liber Chronicarum") of 1493, with c. 1,250 surviving copies (it is also the most heavily illustrated). Many incunabula are unique, but on average about 18 copies survive of each. This makes the Gutenberg Bible, at 48 or 49 known copies, a relatively common (though extremely valuable) edition. Counting extant incunabula is complicated by the fact that most libraries consider a single volume of a multi-volume work as a separate item, as well as fragments or copies lacking more than half the total leaves. A complete incunable may consist of anything from a single slip to ten volumes. In terms of format, the 29,000-odd editions comprise: 2,000 broadsides, 9,000 folios, 15,000 quartos, 3,000 octavos, 18 12mos, 230 16mos, 20 32mos, and 3 64mos. ISTC at present cites 528 extant copies of books printed by Caxton, which together with 128 fragments makes 656 in total, though many are broadsides or very imperfect (incomplete). Apart from migration to mainly North American and Japanese universities, there has been little movement of incunabula in the last five centuries. None were printed in the Southern Hemisphere, which appears to hold fewer than 2,000 copies; about 97.75% remain north of the equator. However, many incunabula are sold at auction or through the rare book trade every year. The British Library's Incunabula Short Title Catalogue now records over 29,000 titles, of which around 27,400 are incunabula editions (not all unique works). Studies of incunabula began in the 17th century. Michel Maittaire (1667–1747) and Georg Wolfgang Panzer (1729–1805) arranged printed material chronologically in annals format, and in the first half of the 19th century Ludwig Hain published "Repertorium bibliographicum", a checklist of incunabula arranged alphabetically by author; "Hain numbers" are still a reference point. Hain was expanded in subsequent editions by Walter A. Copinger and Dietrich Reichling, but it is being superseded by the authoritative modern listing, a German catalogue, the "Gesamtkatalog der Wiegendrucke", which has been under way since 1925 and is still being compiled at the Staatsbibliothek zu Berlin. North American holdings were listed by Frederick R. Goff, and a worldwide union catalogue is provided by the Incunabula Short Title Catalogue. Numerous libraries worldwide hold notable collections of incunabula.
https://en.wikipedia.org/wiki?curid=14863
Isotropy Isotropy is uniformity in all orientations; the word is derived from the Greek "isos" (ἴσος, "equal") and "tropos" (τρόπος, "way"). Precise definitions depend on the subject area. Exceptions, or inequalities, are frequently indicated by the prefix "an", hence "anisotropy". "Anisotropy" is also used to describe situations where properties vary systematically, dependent on direction. Isotropic radiation has the same intensity regardless of the direction of measurement, and an isotropic field exerts the same action regardless of how the test particle is oriented. Within mathematics, "isotropy" has a few different meanings. In the study of mechanical properties of materials, "isotropic" means having identical values of a property in all directions. This definition is also used in geology and mineralogy. Glass and metals are examples of isotropic materials. Common anisotropic materials include wood, because its material properties are different parallel and perpendicular to the grain, and layered rocks such as slate. Isotropic materials are useful since they are easier to shape and their behavior is easier to predict. Anisotropic materials can be tailored to the forces an object is expected to experience. For example, the fibers in carbon fiber materials and rebars in reinforced concrete are oriented to withstand tension. In industrial processes, such as etching steps, isotropic means that the process proceeds at the same rate regardless of direction. Simple chemical reaction and removal of a substrate by an acid, a solvent or a reactive gas is often very close to isotropic. Conversely, anisotropic means that the attack rate on the substrate is higher in a certain direction. Anisotropic etch processes, where the vertical etch rate is high but the lateral etch rate is very small, are essential in the microfabrication of integrated circuits and MEMS devices. An isotropic antenna is an idealized "radiating element" used as a reference: an antenna that broadcasts power equally (calculated by the Poynting vector) in all directions. The gain of an arbitrary antenna is usually reported in decibels relative to an isotropic antenna, and is expressed as dBi or dB(i); a brief worked example follows.
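The worked example promised above is a minimal sketch in Python; the antenna and its power ratio are hypothetical, chosen only to illustrate the dBi conversion:

    import math

    def gain_dbi(power_ratio: float) -> float:
        """Convert a power ratio relative to an isotropic radiator into dBi."""
        return 10.0 * math.log10(power_ratio)

    # A hypothetical antenna radiating 4x the power density of an isotropic
    # antenna in its favoured direction has a gain of about 6.02 dBi.
    print(round(gain_dbi(4.0), 2))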
https://en.wikipedia.org/wiki?curid=14865
International Mathematical Union The International Mathematical Union (IMU) is an international non-governmental organization devoted to international cooperation in the field of mathematics across the world. It is a member of the International Science Council (ISC) and supports the International Congress of Mathematicians. Its members are national mathematics organizations from more than 80 countries. The objectives of the International Mathematical Union (IMU) are: promoting international cooperation in mathematics; supporting and assisting the International Congress of Mathematicians (ICM) and other international scientific meetings and conferences; acknowledging outstanding research contributions to mathematics through the awarding of scientific prizes; and encouraging and supporting other international mathematical activities considered likely to contribute to the development of mathematical science in any of its aspects, whether pure, applied, or educational. The IMU was established in 1920 but dissolved in September 1932; it was then re-established de facto in 1950 at the Constitutive Convention in New York and de jure on September 10, 1951, when ten countries had become members. The last milestone was the General Assembly in March 1952 in Rome, Italy, where the activities of the new IMU were inaugurated and the first Executive Committee, President and various commissions were elected. In 1952 the IMU was also readmitted to the ICSU. The past president of the Union is Shigefumi Mori (2015–2018). The current president is Carlos Kenig. At the 16th meeting of the IMU General Assembly in Bangalore, India, in August 2010, Berlin was chosen as the location of the permanent office of the IMU, which was opened on January 1, 2011, and is hosted by the Weierstrass Institute for Applied Analysis and Stochastics (WIAS), an institute of the Gottfried Wilhelm Leibniz Scientific Community, with about 120 scientists engaging in mathematical research applied to complex problems in industry and commerce. IMU has a close relationship to mathematics education through its International Commission on Mathematical Instruction (ICMI). This commission is organized similarly to IMU, with its own Executive Committee and General Assembly. Developing countries are a high priority for the IMU, and a significant percentage of its budget, including grants received from individuals, mathematical societies, foundations, and funding agencies, is spent on activities for developing countries. Since 2011 this has been coordinated by the Commission for Developing Countries (CDC). The Committee for Women in Mathematics (CWM) is concerned with issues related to women in mathematics worldwide. It organizes the World Meeting for Women in Mathematics, (WM)², as a satellite event of the ICM. The International Commission on the History of Mathematics (ICHM) is operated jointly by the IMU and the Division of the History of Science (DHS) of the International Union of History and Philosophy of Science (IUHPS). The Committee on Electronic Information and Communication (CEIC) advises IMU on matters concerning mathematical information, communication, and publishing. The scientific prizes awarded by the IMU are deemed to be the highest distinctions in the mathematical world.
The opening ceremony of the International Congress of Mathematicians (ICM) is where the awards are presented: the Fields Medals (two to four medals, awarded since 1936), the Rolf Nevanlinna Prize (since 1982), the Carl Friedrich Gauss Prize (since 2006), and the Chern Medal Award (since 2010). The IMU's members are Member Countries, and each member country is represented through an Adhering Organization, which may be its principal academy, a mathematical society, its research council or some other institution or association of institutions, or an appropriate agency of its government. A country starting to develop its mathematical culture and interested in building links to mathematicians all over the world is invited to join IMU as an Associate Member. For the purpose of facilitating jointly sponsored activities and jointly pursuing the objectives of the IMU, multinational mathematical societies and professional societies can join IMU as Affiliate Members. Every four years the IMU membership gathers in a General Assembly (GA), which consists of delegates appointed by the Adhering Organizations, together with the members of the Executive Committee. All important decisions are made at the GA, including the election of the officers, the establishment of commissions, the approval of the budget, and any changes to the statutes and by-laws. The International Mathematical Union is administered by an Executive Committee (EC), which conducts the business of the Union. The EC consists of the President, two Vice-Presidents, the Secretary, six Members-at-Large, all elected for a term of four years, and the Past President. The EC is responsible for all policy matters and for tasks such as choosing the members of the ICM Program Committee and various prize committees. Every two months IMU publishes an electronic newsletter, "IMU-Net", that aims to improve communication between IMU and the worldwide mathematical community by reporting on decisions and recommendations of the Union, major international mathematical events and developments, and other topics of general mathematical interest. IMU Bulletins are published annually with the aim of informing IMU's members about the Union's current activities. In 2009 IMU published the document "Best Current Practices for Journals". The IMU took its first organized steps towards the promotion of mathematics in developing countries in the early 1970s and has since then supported various activities. In 2010 IMU formed the Commission for Developing Countries (CDC), which brings together all of the past and current initiatives in support of mathematics and mathematicians in the developing world. IMU also supports the International Commission on Mathematical Instruction (ICMI) with its programmes, exhibits and workshops in emerging countries, especially in Asia and Africa. IMU released a report in 2008, "Mathematics in Africa: Challenges and Opportunities", on the current state of mathematics in Africa and on opportunities for new initiatives to support mathematical development. In 2014, the IMU's Commission for Developing Countries (CDC) released an update of the report. Additionally, reports about mathematics in Latin America and the Caribbean and in South East Asia were published. In July 2014 IMU released the report "The International Mathematical Union in the Developing World: Past, Present and Future".
In 2014, the IMU held a day-long symposium prior to the opening of the International Congress of Mathematicians (ICM), entitled "Mathematics in Emerging Nations: Achievements and Opportunities" (MENAO). Approximately 260 participants from around the world, including representatives of embassies, scientific institutions, private business and foundations, attended this session. Attendees heard inspiring stories of individual mathematicians and specific developing nations. List of presidents of the International Mathematical Union from 1952 to the present:

1952–1954: Marshall Harvey Stone (vice: Émile Borel, Erich Kamke)
1955–1958: Heinz Hopf (vice: Arnaud Denjoy, W. V. D. Hodge)
1959–1962: Rolf Nevanlinna (vice: Pavel Alexandrov, Marston Morse)
1963–1966: Georges de Rham (vice: Henri Cartan, Kazimierz Kuratowski)
1967–1970: Henri Cartan (vice: Mikhail Lavrentyev, Deane Montgomery)
1971–1974: K. S. Chandrasekharan (vice: Abraham Adrian Albert, Lev Pontryagin)
1975–1978: Deane Montgomery (vice: J. W. S. Cassels, Miron Nicolescu, Gheorghe Vrânceanu)
1979–1982: Lennart Carleson (vice: Masayoshi Nagata, Yuri Vasilyevich Prokhorov)
1983–1986: Jürgen Moser (vice: Ludvig Faddeev, Jean-Pierre Serre)
1987–1990: Ludvig Faddeev (vice: Walter Feit, Lars Hörmander)
1991–1994: Jacques-Louis Lions (vice: John H. Coates, David Mumford)
1995–1998: David Mumford (vice: Vladimir Arnold, Albrecht Dold)
1999–2002: Jacob Palis (vice: Simon Donaldson, Shigefumi Mori)
2003–2006: John M. Ball (vice: Jean-Michel Bismut, Masaki Kashiwara)
2007–2010: László Lovász (vice: Zhi-Ming Ma, Claudio Procesi)
2011–2014: Ingrid Daubechies (vice: Christiane Rousseau, Marcelo Viana)
2015–2018: Shigefumi Mori (vice: Alicia Dickenstein, Vaughan Jones)
2019–2022: Carlos Kenig (vice: Nalini Joshi, Loyiso Nongxa)
https://en.wikipedia.org/wiki?curid=14868
International Council for Science The International Council for Science (ICSU, after its former name, International Council of Scientific Unions) was an international non-governmental organization devoted to international cooperation in the advancement of science. Its members were national scientific bodies and international scientific unions. In July 2018, the ICSU merged with the International Social Science Council (ISSC) to form the International Science Council (ISC). In 2017, the ICSU comprised 122 multi-disciplinary National Scientific Members, Associates and Observers representing 142 countries and 31 international, disciplinary Scientific Unions. ICSU also had 22 Scientific Associates. The ICSU's mission was to strengthen international science for the benefit of society. To do this, the ICSU mobilized the knowledge and resources of the international scientific community. Its activities focused on three areas: International Research Collaboration, Science for Policy, and Universality of Science. In July 2018, the ICSU became the International Science Council (ISC). The ICSU itself was one of the oldest non-governmental organizations in the world, representing the evolution and expansion of two earlier bodies known as the International Association of Academies (IAA; 1899–1914) and the International Research Council (IRC; 1919–1931). In 1998, Members agreed that the Council's composition and activities would be better reflected by modifying the name from the International Council of Scientific Unions to the International Council for Science, while its rich history and strong identity would be well served by retaining the existing acronym, ICSU. The ICSU Principle of Universality of Science states: "the free and responsible practice of science is fundamental to scientific advancement and human and environmental well-being. Such practice, in all its aspects, requires freedom of movement, association, expression and communication for scientists, as well as equitable access to data, information, and other resources for research. It requires responsibility at all levels to carry out and communicate scientific work with integrity, respect, fairness, trustworthiness, and transparency, recognising its benefits and possible harms. In advocating the free and responsible practice of science, ICSU promotes equitable opportunities for access to science and its benefits, and opposes discrimination based on such factors as ethnic origin, religion, citizenship, language, political or other opinion, sex, gender identity, sexual orientation, disability, or age." Adherence to this Principle was a condition of ICSU membership. The Committee on Freedom and Responsibility in the conduct of Science (CFRS) "serves as the guardian of the Principle and undertakes a variety of actions to defend scientific freedoms and promote integrity and responsibility." The ICSU Secretariat (20 staff in 2012) in Paris ensured the day-to-day planning and operations under the guidance of an elected Executive Board. Three Policy Committees – the Committee on Scientific Planning and Review (CSPR), the Committee on Freedom and Responsibility in the conduct of Science (CFRS) and the Committee on Finance – assisted the Executive Board in its work, and a General Assembly of all Members was convened every three years. ICSU had three Regional Offices: Africa, Asia and the Pacific, and Latin America and the Caribbean. The principal source of ICSU's finances was the contributions it received from its members.
Other sources of income were the framework contracts from UNESCO (United Nations Educational, Scientific and Cultural Organization) and grants and contracts from United Nations bodies, foundations and agencies, which were used to support the scientific activities of the ICSU Unions and interdisciplinary bodies.
https://en.wikipedia.org/wiki?curid=14869
International Union of Pure and Applied Chemistry The International Union of Pure and Applied Chemistry (IUPAC) is an international federation of National Adhering Organizations that represents chemists in individual countries. It is a member of the International Science Council (ISC). IUPAC is registered in Zürich, Switzerland, and the administrative office, known as the "IUPAC Secretariat", is in Research Triangle Park, North Carolina, United States. This administrative office is headed by IUPAC's executive director, currently Lynn Soby. IUPAC was established in 1919 as the successor of the International Congress of Applied Chemistry for the advancement of chemistry. Its members, the National Adhering Organizations, can be national chemistry societies, national academies of sciences, or other bodies representing chemists. There are fifty-four National Adhering Organizations and three Associate National Adhering Organizations. IUPAC's Inter-divisional Committee on Nomenclature and Symbols (IUPAC nomenclature) is the recognized world authority in developing standards for the naming of the chemical elements and compounds. Since its creation, IUPAC has been run by many different committees with different responsibilities. These committees run different projects, which include standardizing nomenclature, finding ways to bring chemistry to the world, and publishing works. IUPAC is best known for its work standardizing nomenclature in chemistry and other fields of science, but it has publications in many fields, including chemistry, biology and physics. Some important work IUPAC has done in these fields includes standardizing nucleotide base sequence code names; publishing books for environmental scientists, chemists, and physicists; and improving education in science. IUPAC is also known for standardizing the atomic weights of the elements through one of its oldest standing committees, the Commission on Isotopic Abundances and Atomic Weights (CIAAW). The need for an international standard for chemistry was first addressed in 1860 by a committee headed by the German scientist Friedrich August Kekulé von Stradonitz. This committee convened the first international conference to create an international naming system for organic compounds. The ideas that were formulated at that conference evolved into the official IUPAC nomenclature of organic chemistry. IUPAC stands as a legacy of this meeting, making it one of the most important historical international collaborations of chemistry societies. Since this time, IUPAC has been the official organization held with the responsibility of updating and maintaining official organic nomenclature. IUPAC as such was established in 1919. One notable country excluded from this early IUPAC was Germany, a result of prejudice towards Germans by the Allied powers after World War I. Germany was finally admitted to IUPAC in 1929, though Nazi Germany was later removed during World War II. During World War II, IUPAC was affiliated with the Allied powers but had little involvement in the war effort itself. After the war, East and West Germany were readmitted to IUPAC in 1973. Since World War II, IUPAC has been focused on standardizing nomenclature and methods in science without interruption. In 2016, IUPAC denounced the use of chlorine as a chemical weapon.
The organization pointed out its concerns in a letter to Ahmet Üzümcü, the director of the Organisation for the Prohibition of Chemical Weapons (OPCW), regarding the practice of using chlorine as a weapon in Syria, among other locations. The letter stated, "Our organizations deplore the use of chlorine in this manner. The indiscriminate attacks, possibly carried out by a member state of the Chemical Weapons Convention (CWC), is of concern to chemical scientists and engineers around the globe and we stand ready to support your mission of implementing the CWC." According to the CWC, "the use, stockpiling, distribution, development or storage of any chemical weapons is forbidden by any of the 192 state party signatories." IUPAC is governed by several committees that all have different responsibilities. The committees are as follows: Bureau, CHEMRAWN (Chem Research Applied to World Needs) Committee, Committee on Chemistry Education, Committee on Chemistry and Industry, Committee on Printed and Electronic Publications, Evaluation Committee, Executive Committee, Finance Committee, Interdivisional Committee on Terminology, Nomenclature and Symbols, Project Committee, and Pure and Applied Chemistry Editorial Advisory Board. Each committee is made up of members of different National Adhering Organizations from different countries, arranged in a steering hierarchy. The IUPAC committees have a long history of officially naming organic and inorganic compounds. IUPAC nomenclature is developed so that any compound can be named under one set of standardized rules, avoiding duplicate names. The first publication on IUPAC nomenclature of organic compounds was "A Guide to IUPAC Nomenclature of Organic Compounds" in 1900, which contained information from the International Congress of Applied Chemistry. IUPAC organic nomenclature has three basic parts: the substituents, the carbon chain length, and the chemical ending. The substituents are any functional groups attached to the main carbon chain. The main carbon chain is the longest possible continuous chain. The chemical ending denotes what type of molecule it is. For example, the ending "ane" denotes a single-bonded carbon chain, as in "hexane" (C6H14). Another example of IUPAC organic nomenclature is cyclohexanol. Basic IUPAC inorganic nomenclature has two main parts: the cation and the anion. The cation is the name for the positively charged ion and the anion is the name for the negatively charged ion. An example of IUPAC nomenclature of inorganic chemistry is potassium chlorate (KClO3). IUPAC also has a system for giving codes to identify amino acids and nucleotide bases. IUPAC needed a coding system that could represent long sequences of amino acids, allowing such sequences to be compared in the search for homologies. These codes can consist of either a one-letter code or a three-letter code, and they make it easier and shorter to write down the amino acid sequences that make up proteins. The nucleotide bases are made up of purines (adenine and guanine) and pyrimidines (cytosine and thymine or uracil). These nucleotide bases make up DNA and RNA, and the base codes make written representations of an organism's genome much more compact and easier to read. IUPAC defines codes for 24 amino acids plus three special codes; a small illustrative subset appears in the sketch below. The "Experimental Thermodynamics" book series covers many topics in the fields of thermodynamics. IUPAC color-codes its books in order to make each publication distinguishable.
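The sketch referred to above is a minimal Python illustration of the amino acid coding scheme; the five entries shown are a small, well-known subset of the IUPAC codes, not the full table of 24 plus special codes:

    # A small subset of the IUPAC amino acid codes (three-letter -> one-letter).
    THREE_TO_ONE = {
        "Ala": "A",  # alanine
        "Gly": "G",  # glycine
        "Leu": "L",  # leucine
        "Lys": "K",  # lysine
        "Trp": "W",  # tryptophan
    }

    def compress(residues):
        """Rewrite a list of three-letter residue codes as a one-letter string."""
        return "".join(THREE_TO_ONE[r] for r in residues)

    print(compress(["Gly", "Ala", "Trp"]))  # prints "GAW"

The one-letter form is what makes long protein sequences compact enough to compare by eye or by machine, which is the motivation the article describes.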
IUPAC and UNESCO were the lead organizations coordinating events for the International Year of Chemistry, which took place in 2011. The International Year of Chemistry was originally proposed by IUPAC at the general assembly in Turin, Italy, and the motion was adopted by UNESCO at a meeting in 2008. The main objectives of the International Year of Chemistry were to increase public appreciation of chemistry and to generate interest in the world of chemistry. The event was also held to encourage young people to get involved and contribute to chemistry, and to honour the ways in which chemistry has improved everyone's way of life. IUPAC Presidents are elected by the IUPAC Council during the General Assembly; presidents have served since the Union's inception in 1919.
https://en.wikipedia.org/wiki?curid=14870
International Hydrographic Organization The International Hydrographic Organization (IHO) is an intergovernmental organization representing hydrography. In October 2019 the IHO comprised 93 Member States. A principal aim of the IHO is to ensure that the world's seas, oceans and navigable waters are properly surveyed and charted. It does this through the setting of international standards, the co-ordination of the endeavours of the world's national hydrographic offices, and through its capacity building programme. The IHO enjoys observer status at the United Nations, where it is the recognised competent authority on hydrographic surveying and nautical charting. When referring to hydrography and nautical charting in Conventions and similar Instruments, it is the IHO standards and specifications that are normally used. The IHO was established in 1921 as the International Hydrographic Bureau (IHB). The present name was adopted in 1970, as part of a new international Convention on the IHO adopted by the then member nations. The former name International Hydrographic Bureau was retained to describe the IHO secretariat until 8 November 2016, when a revision to the Convention on the IHO entered into force. Thereafter the IHB became known as the "IHO Secretariat", comprising an elected Secretary-General and two supporting Directors, together with a small permanent staff (17 as of August 2019), at the Organization's headquarters in Monaco. During the 19th century, many maritime nations established hydrographic offices to provide means for improving the navigation of naval and merchant vessels by providing nautical publications, nautical charts, and other navigational services. There were substantial differences in hydrographic procedures, charts, and publications. In 1889, an International Maritime Conference was held at Washington, D.C., and it was proposed to establish a "permanent international commission". Similar proposals were made at the sessions of the International Congress of Navigation held at Saint Petersburg in 1908 and the International Maritime Conference held at Saint Petersburg in 1912. In 1919, the national Hydrographers of Great Britain and France cooperated in taking the necessary steps to convene an international conference of Hydrographers. London was selected as the most suitable place for this conference, and on 24 July 1919, the First International Conference opened, attended by the Hydrographers of 24 nations. The object of the conference was "To consider the advisability of all maritime nations adopting similar methods in preparation, construction, and production of their charts and all hydrographic publications; of rendering the results in the most convenient form to enable them to be readily used; of instituting a prompt system of mutual exchange of hydrographic information between all countries; and of providing an opportunity to consultations and discussions to be carried out on hydrographic subjects generally by the hydrographic experts of the world." This is still the major purpose of the IHO. As a result of the 1919 Conference, a permanent organization was formed and statutes for its operations were prepared. The IHB, now the IHO, began its activities in 1921 with 18 nations as members. The Principality of Monaco was selected as the seat of the organization as a result of the offer of Albert I of Monaco to provide suitable accommodation for the Bureau in the Principality. The IHO develops hydrographic and nautical charting standards.
These standards are subsequently adopted and used by its 91 member countries (as of August 2019) and others in their surveys, nautical charts, and publications. The almost universal use of the IHO standards means that the products and services provided by the world's national hydrographic and oceanographic offices are consistent and recognisable by seafarers and other users. Much has been done in the field of standardisation since the IHO was founded. The IHO has encouraged the formation of Regional Hydrographic Commissions (RHCs). Each RHC coordinates the national surveying and charting activities of countries within its region and acts as a forum to address other matters of common hydrographic interest. The 15 RHCs plus the IHO Hydrographic Commission on Antarctica effectively cover the world. The IHO, in partnership with the Intergovernmental Oceanographic Commission, directs the General Bathymetric Chart of the Oceans programme. Most IHO publications, including the standards, guidelines and associated documents such as the "International Hydrographic Review", "International Hydrographic Bulletin", the "Hydrographic Dictionary" and the "Year Book", are available to the general public free of charge from the IHO website. The IHO publishes the international standards related to charting and hydrography, including S-57, the "IHO Transfer Standard for Digital Hydrographic Data", the encoding standard used primarily for electronic navigational charts. In 2010, the IHO introduced a new, contemporary hydrographic geospatial standard for modelling marine data and information, known as S-100. S-100 and any dependent product specifications are underpinned by an online registry accessible via the IHO website. S-100 is aligned with the ISO 19100 series of geographic standards, making it fully compatible with contemporary geospatial data standards. Because S-100 is based on ISO 19100, it can be used by other data providers for their maritime-related (non-hydrographic) data and information. Various data and information providers from both the government and private sector are now using S-100 as part of the implementation of the e-Navigation concept that has been endorsed by the UN International Maritime Organization.
https://en.wikipedia.org/wiki?curid=14871
IBM mainframe IBM mainframes are large computer systems produced by IBM since 1952. During the 1960s and 1970s, IBM dominated the large computer market. Current mainframe computers in IBM's line of business computers are developments of the basic design of the IBM System/360. From 1952 into the late 1960s, IBM manufactured and marketed several large computer models, known as the IBM 700/7000 series. The first-generation 700s were based on vacuum tubes, while the later, second-generation 7000s used transistors. These machines established IBM's dominance in electronic data processing ("EDP"). IBM had two model categories: one (701, 704, 709, 7030, 7090, 7094, 7040, 7044) for engineering and scientific use, and one (702, 705, 705-II, 705-III, 7080, 7070, 7072, 7074, 7010) for commercial or data processing use. The two categories, scientific and commercial, generally used common peripherals but had completely different instruction sets, and there were incompatibilities even within each category. IBM initially sold its computers without any software, expecting customers to write their own; programs were manually initiated, one at a time. Later, IBM provided compilers for the newly developed higher-level programming languages Fortran, COMTRAN and later COBOL. The first operating systems for IBM computers were written by IBM customers who did not wish to have their very expensive machines (US$2 million in the mid-1950s) sitting idle while operators set up jobs manually. These first operating systems were essentially scheduled work queues. It is generally thought that the first operating system used for real work was GM-NAA I/O, produced by General Motors' Research division in 1956. IBM enhanced one of GM-NAA I/O's successors, the SHARE Operating System, and provided it to customers under the name IBSYS. As software became more complex and important, the cost of supporting it on so many different designs became burdensome, and this was one of the factors which led IBM to develop System/360 and its operating systems. The second-generation (transistor-based) products were a mainstay of IBM's business, and IBM continued to make them for several years after the introduction of the System/360. (Some IBM 7094s remained in service into the 1980s.) Prior to System/360, IBM also sold computers smaller in scale that were not considered mainframes, though they were still bulky and expensive by modern standards. IBM had difficulty getting customers to upgrade from the smaller machines to the mainframes because so much software had to be rewritten. The 7010 was introduced in 1962 as a mainframe-sized 1410. The later Systems 360 and 370 could emulate the 1400 machines. A desk-size machine with a different instruction set, the IBM 1130, was released concurrently with the System/360 to address the niche occupied by the 1620. It used the same EBCDIC character encoding as the 360 and was mostly programmed in Fortran, which was relatively easy to adapt to larger machines when necessary. IBM also introduced smaller machines after the S/360; "midrange computer" is a designation used by IBM for a class of computer systems which fall in between mainframes and microcomputers. All that changed with the announcement of the System/360 (S/360) in April 1964. The System/360 was a single series of compatible models for both commercial and scientific use. The number "360" suggested a "360-degree", or all-around, computer system.
System/360 incorporated features which had previously been present only on either the commercial line (such as decimal arithmetic and byte addressing) or the engineering and scientific line (such as floating-point arithmetic). Some of the arithmetic units and addressing features were optional on some models of the System/360. However, models were upward compatible and most were also downward compatible. The System/360 was also the first computer in wide use to include dedicated hardware provisions for the use of operating systems. Among these were separate supervisor and application modes, with instructions reserved to the supervisor, as well as built-in memory protection facilities. Hardware memory protection was provided to protect the operating system from the user programs (tasks) and user tasks from each other. The new machine also had a larger address space than the older mainframes: 24 bits addressing 8-bit bytes versus a typical 18 bits addressing 36-bit words (see the worked comparison after this paragraph). The smaller models in the System/360 line (e.g. the 360/30) were intended to replace the 1400 series while providing an easier upgrade path to the larger 360s. To smooth the transition from the second generation to the new line, IBM used the 360's microprogramming capability to emulate the more popular older models. Thus 360/30s with this added-cost feature could run 1401 programs, and the larger 360/65s could run 7094 programs. To run old programs, the 360 had to be halted and restarted in emulation mode. Many customers kept using their old software, and one of the features of the later System/370 was the ability to switch to emulation mode and back under operating system control. Operating systems for the System/360 family included OS/360 (with PCP, MFT, and MVT), BOS/360, TOS/360, and DOS/360. The System/360 later evolved into the System/370, the System/390, and the 64-bit zSeries, System z, and zEnterprise machines. System/370 introduced virtual memory capabilities in all models other than the very first System/370 models; the OS/VS1 variant of OS/360 MFT, the OS/VS2 (SVS) variant of OS/360 MVT, and the DOS/VS variant of DOS/360 were introduced to use the virtual memory capabilities, followed by MVS, which, unlike the earlier virtual-memory operating systems, ran separate programs in separate address spaces rather than running all programs in a single virtual address space. The virtual memory capabilities also allowed the system to support virtual machines; the VM/370 hypervisor would run one or more virtual machines running either standard System/360 or System/370 operating systems or the single-user Conversational Monitor System (CMS). A time-sharing VM system could run multiple virtual machines, one per user, with each virtual machine running an instance of CMS. The different processors on current IBM mainframes include the central processor (CP), the Integrated Facility for Linux (IFL), the z Integrated Information Processor (zIIP), the z Application Assist Processor (zAAP), the Integrated Coupling Facility (ICF), and the System Assist Processor (SAP). The zSeries family, introduced in 2000 with the z900, included IBM's newly designed 64-bit z/Architecture. These processors are essentially identical hardware, distinguished for software cost control: all but the CP are restricted such that they cannot be used to run arbitrary operating systems, and thus do not count in software licensing costs (which are typically based on the number of CPs). There are other supporting processors typically installed inside mainframes, such as cryptographic accelerators (CryptoExpress), the OSA-Express networking processor, and FICON Express disk I/O processors.
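To make the address-space comparison mentioned above concrete, here is a short worked calculation using only the figures quoted in the text (24-bit byte addressing versus 18-bit addressing of 36-bit words):

```python
# Worked comparison of the address spaces described above.
s360_bytes = 2**24               # System/360: 16,777,216 bytes = 16 MiB
older_words = 2**18              # older design: 262,144 words
older_bits = older_words * 36    # 9,437,184 bits ~ 1.125 MiB equivalent

print(f"System/360: {s360_bytes:,} bytes ({s360_bytes / 2**20:.0f} MiB)")
print(f"Older mainframe: {older_words:,} words "
      f"(~{older_bits / 8 / 2**20:.3f} MiB equivalent)")
```

In other words, the 360's 16 MiB byte-addressable space offered roughly fourteen times the raw capacity of a typical 18-bit, 36-bit-word predecessor.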
Software to allow users to run "traditional" workloads on zIIPs and zAAPs was briefly marketed by Neon Enterprise Software as "zPrime" but was withdrawn from the market in 2011 after a lawsuit by IBM. The primary operating systems in use on current IBM mainframes include z/OS (which followed MVS/ESA and OS/390 in the OS/360 lineage), z/VM (which followed VM/ESA and VM/XA in the CP-40 lineage), z/VSE (which is in the DOS/360 lineage), z/TPF (a successor of Airlines Control Program), and Linux on IBM Z distributions such as SUSE Linux Enterprise Server. A few systems run MUSIC/SP or UTS (a mainframe UNIX). In October 2008, Sine Nomine Associates introduced OpenSolaris on System z. Current IBM mainframes run all the major enterprise transaction processing environments and databases, including CICS, IMS, WebSphere Application Server, DB2, and Oracle. In many cases these software subsystems can run on more than one mainframe operating system. There are software-based emulators for the System/370, System/390, and System z hardware, including FLEX-ES, which runs under UnixWare or Linux, and the freely available Hercules, which runs under Linux, FreeBSD, Solaris, macOS, and Microsoft Windows. IBM offers an emulator called zPDT (System z Personal Development Tool) which runs on Linux on x86-64 machines.
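As a rough illustration of how such an emulator is configured, a minimal Hercules configuration file might look like the sketch below. Every value shown is an illustrative assumption rather than a tested or recommended setup; Hercules reads statements of this general form from a configuration file (conventionally hercules.cnf).

```
# Illustrative Hercules configuration sketch (assumed values throughout)
CPUSERIAL 000611
CPUMODEL  3090
MAINSIZE  64
NUMCPU    1
ARCHMODE  ESA/390
CNSLPORT  3270

# Device definitions: device address, device type, backing host file
000C 3505 jobs.rdr
000E 1403 print.txt
0009 3215-C /
```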
https://en.wikipedia.org/wiki?curid=14872
Iowa State University Iowa State University of Science and Technology (Iowa State University or Iowa State) is a public land-grant and space-grant research university in Ames, Iowa. It is the largest university in the state of Iowa and the third largest university in the Big 12 athletic conference. Iowa State is classified among "R1: Doctoral Universities – Very high research activity." It is also a member of the Association of American Universities (AAU). Founded in 1858 and coeducational from its start, Iowa State became the nation's first designated land-grant institution when the Iowa Legislature accepted the provisions of the 1862 Morrill Act on September 11, 1862, making Iowa the first state in the nation to do so. Iowa State's academic offerings are administered today through eight colleges, including the graduate college, that offer over 100 bachelor's degree programs, 112 master's degree programs, and 83 at the Ph.D. level, plus a professional degree program in Veterinary Medicine. Iowa State University's athletic teams, the Cyclones, compete in Division I of the NCAA and are a founding member of the Big 12. The Cyclones field 16 varsity teams and have won numerous NCAA national championships. In 1856, the Iowa General Assembly enacted legislation to establish the Iowa Agricultural College and Model Farm. This institution (now Iowa State University) was officially established on March 22, 1858, by the General Assembly. Story County was chosen as the location on June 21, 1859, beating proposals from Johnson, Kossuth, Marshall and Polk counties. The original farm was purchased for a cost of $5,379. Iowa was the first state in the nation to accept the provisions of the Morrill Act of 1862. Iowa subsequently designated Iowa State as the land-grant college on March 29, 1864. From the start, Iowa Agricultural College focused on the ideals that higher education should be accessible to all and that the university should teach liberal and practical subjects. These ideals are integral to the land-grant university. The institution was coeducational from the first preparatory class admitted in 1868. The formal admitting of students began the following year, and the first graduating class of 1872 consisted of 24 men and two women. The Farm House, the first building on the Iowa State campus, was completed in 1861 before the campus was occupied by students or classrooms. It became the home of the superintendent of the Model Farm and, in later years, the deans of Agriculture, including Seaman Knapp and "Tama Jim" Wilson. Iowa State's first president, Adonijah Welch, briefly stayed at the Farm House and penned his inaugural speech in a second-floor bedroom. The college's first farm tenants primed the land for agricultural experimentation. The Iowa Experiment Station was one of the university's prominent features. Practical courses of instruction were taught, including one designed to give a general training for the career of a farmer. Courses in mechanical, civil, electrical, and mining engineering were also part of the curriculum. In 1870, President Welch and I. P. Roberts, professor of agriculture, held three-day farmers' institutes at Cedar Falls, Council Bluffs, Washington, and Muscatine. These became the earliest institutes held off-campus by a land grant institution and were the forerunners of 20th century extension. In 1872, the first courses were given in domestic economy (home economics, family and consumer sciences) and were taught by Mary B. Welch, the president's wife.
Iowa State became the first land-grant university in the nation to offer training in domestic economy for college credit. In 1879, the School of Veterinary Science was organized, the first state veterinary college in the United States (although veterinary courses had been taught since the beginning of the College). This was originally a two-year course leading to a diploma. The veterinary course of study contained classes in zoology, botany, anatomy of domestic animals, veterinary obstetrics, and sanitary science. William M. Beardshear was appointed President of Iowa State in 1891. During his tenure, Iowa Agricultural College truly came of age. Beardshear developed new agricultural programs and was instrumental in hiring premier faculty members such as Anson Marston, Louis B. Spinney, J.B. Weems, Perry G. Holden, and Maria Roberts. He also expanded the university administration, and added Morrill Hall (1891), the Campanile (1899), Old Botany (now Carrie Chapman Catt Hall) (1892), and Margaret Hall (1895) to the campus, all of which stand today. In his honor, Iowa State named its central administrative building (Central Building) after Beardshear in 1925. In 1898, reflecting the school's growth during his tenure, it was renamed Iowa State College of Agriculture and Mechanic Arts, or Iowa State for short. Today, Beardshear Hall holds the offices of the President, Vice-President, Treasurer, Secretary, Registrar, Provost, and student financial aid. Catt Hall is named after alumna and famed suffragette Carrie Chapman Catt, and is the home of the College of Liberal Arts and Sciences. In 1912 Iowa State had its first Homecoming celebration. The idea was first proposed by Professor Samuel Beyer, the college's “patron saint of athletics,” who suggested that Iowa State inaugurate a celebration for alumni during the annual football game against rival University of Iowa. Iowa State's new president, Raymond A. Pearson, liked the idea and issued a special invitation to alumni two weeks prior to the event: “We need you, we must have you. Come and see what a school you have made in Iowa State College. Find a way.” In October 2012 Iowa State marked its 100th Homecoming with a "CYtennial" Celebration. Iowa State celebrated its first VEISHEA on May 11–13, 1922. Wallace McKee (class of 1922) served as the first chairman of the Central Committee and Frank D. Paine (professor of electrical engineering) chose the name, based on the first letters of Iowa State's colleges: Veterinary Medicine, Engineering, Industrial Science, Home Economics, and Agriculture. VEISHEA grew to become the largest student-run festival in the nation. The Statistical Laboratory was established in 1933, with George W. Snedecor, professor of mathematics, as the first director. It was the first research and consulting institute of its kind in the country. While attempting to develop a faster method of computation, mathematics and physics professor John Vincent Atanasoff conceptualized the basic tenets of what would become the world's first electronic digital computer, the Atanasoff-Berry Computer (ABC), during a drive to Illinois in 1937. These included the use of a binary system of arithmetic, the separation of computer and memory functions, and regenerative drum memory, among others. The 1939 prototype was constructed with graduate student Clifford Berry in the basement of the Physics Building.
During World War II, Iowa State was one of 131 colleges and universities nationally that took part in the V-12 Navy College Training Program, which offered students a path to a Navy commission. On July 4, 1959, the college was officially renamed Iowa State University of Science and Technology. However, the short-form name "Iowa State University" is used even in official documents such as diplomas. Official names given to the university's divisions were the College of Agriculture, College of Engineering, College of Home Economics, College of Sciences and Humanities, and College of Veterinary Medicine. Iowa State's eight colleges today offer more than 100 undergraduate majors and 200 fields of study leading to graduate and professional degrees. The academic program at ISU includes a liberal arts education and some of the world's leading research in the biological and physical sciences. Breakthroughs at Iowa State span the areas of human, social, economic, and environmental sustainability; new materials and processes for biomedical as well as industrial applications; nutrition, health, and wellness for humans and animals; transportation and infrastructure; food safety and security; plant and animal sciences; information and decision sciences; and renewable energies. The focus on technology has led directly to many research patents and inventions, including the first binary computer (the ABC), Maytag blue cheese, and the round hay baler, among many others. The university has grown considerably from its roots as an agricultural college and model farm and is recognized internationally today for its comprehensive research programs. It set a new record for enrollment in the fall of 2015 with 36,001 students. Iowa State University is organized into eight colleges and two schools that offer 100 bachelor's degree programs, 112 master's programs, and 83 Ph.D. programs, including one professional degree program in Veterinary Medicine. ISU is home to two schools: the Greenlee School of Journalism and Mass Communication and the School of Education. Classified as one of Carnegie's "R1: Doctoral Universities - Very High Research Activity," Iowa State receives nearly $300 million in research grants each year. The university is one of 62 elected members of the Association of American Universities, an organization composed of the most highly ranked public and private research universities in the U.S. and Canada. In 2016–17, Iowa State University became one of only fifty-four institutions in the U.S. to have earned the "Innovation and Economic Prosperity University" designation from the Association of Public and Land-grant Universities. The agriculture and forestry programs are consistently ranked in the top 20 in the world by QS. Among engineering specialties at schools whose highest degree is a doctorate, Iowa State's biological/agricultural engineering program is ranked first, and the mechanical and civil engineering programs are ranked 9th and 16th nationally, by "U.S. News & World Report." Almost all of the engineering specialties at ISU are ranked in the top 30 nationally. ISU's chemistry and physics programs are considered to be some of the best in the world and are ranked in the top 100 globally and the top 50 nationally. ISU's Greenlee School of Journalism and Mass Communication is one of the top journalism schools in the country and is notable for being among the first group of accredited journalism and mass communication programs.
Greenlee is also cited as one of the leading JMC research programs in the nation, ranked 23rd in a publication by the AEJMC. The National Science Foundation ranks ISU 78th in the nation in total research and development expenditures and 94th in research and development expenditures for science and engineering. ISU currently ranks second nationally both in licenses and options executed on its intellectual property and in licenses and options that yield income. In 2016, ISU's landscape architecture program was ranked as the 10th best undergraduate program in the nation, and architecture as the 18th best. The W. Robert and Ellen Sorge Parks Library contains over 2.6 million books and subscribes to more than 98,600 journal titles. Named for W. Robert Parks (1915–2003), the 11th president of Iowa State University, and his wife, Ellen Sorge Parks, the original library was built in 1925, with three subsequent additions made in 1961, 1969, and 1983. The library was dedicated and named after W. Robert and Ellen Sorge Parks in 1984. Parks Library provides extensive research collections, services, and information literacy instruction for all students. Facilities consist of the main Parks Library, the e-Library, the Veterinary Medical Library, two subject-oriented reading rooms (design and mathematics), and a remote library storage building. The Library's extensive collections include electronic and print resources that support research and study for all undergraduate and graduate programs. Nationally recognized collections support the basic and applied fields of biological and physical sciences. The Parks Library includes four public service desks: the Learning Connections Center, the Circulation Desk, the Media Center (including Maps, Media, Microforms, and Course Reserve collections), and Special Collections. The Library's instruction program includes a required undergraduate information literacy course as well as a wide variety of subject-based seminars on the effective use of Library resources for undergraduate and graduate students. The e-Library, accessed through the Internet, provides access to local and Web-based resources including electronic journals and books, local collections, online indexes, electronic course reserves and guides, and a broad range of subject research guides. Surrounding the first floor lobby staircase in Parks Library are eight mural panels designed by Iowa artist Grant Wood. As with "Breaking the Prairie Sod", the Iowa State University mural Wood painted two years later, Wood borrowed his theme for "When Tillage Begins Other Arts Follow" from a speech on agriculture delivered by Daniel Webster in 1840 at the State House in Boston. Webster said, “When tillage begins, other arts follow. The farmers therefore are the founders of human civilization.” Wood had planned to create seventeen mural panels for the library, but only the eleven devoted to agriculture and the practical arts were completed. The final six, which would have hung in the main reading room (now the Periodical Room) and were to have depicted the fine arts, were never begun. The university has an Intensive English and Orientation Program (IEOP) for foreign students. Students whose native language is not English can take IEOP courses to improve their English proficiency to help them succeed at university-level study. IEOP course content also helps students prepare for English proficiency exams, like the TOEFL and IELTS.
IEOP classes include grammar, reading, writing, oral communication, and business English, as well as various bridge classes. Iowa State is the birthplace of the first electronic digital computer, starting the world's computer technology revolution. Invented by mathematics and physics professor John Atanasoff and engineering graduate student Clifford Berry during 1937–42, the Atanasoff-Berry Computer pioneered important elements of modern computing. On October 19, 1973, U.S. Federal Judge Earl R. Larson signed his decision following a lengthy court trial which declared the ENIAC patent of Mauchly and Eckert invalid and named Atanasoff the inventor of the electronic digital computer, the Atanasoff-Berry Computer or the ABC. An ABC team consisting of Ames Laboratory and Iowa State engineers, technicians, researchers, and students unveiled a working replica of the Atanasoff-Berry Computer in 1997, which can be seen on display on campus in the Durham Computation Center. The Extension Service traces its roots to farmers' institutes developed at Iowa State in the late 19th century. Committed to community, Iowa State pioneered the outreach mission of being a land-grant college through creation of the first Extension Service in 1902. In 1906, the Iowa Legislature enacted the Agricultural Extension Act, making funds available for demonstration projects. It is believed this was the first specific legislation establishing state extension work, for which Iowa State assumed responsibility. The national extension program was created in 1914 based heavily on the Iowa State model. Iowa State is widely known for VEISHEA, an annual education and entertainment festival that was held on campus each spring. The name VEISHEA was derived from the initials of ISU's five original colleges as the university existed when the festival was founded in 1922: Veterinary Medicine, Engineering, Industrial Science, Home Economics, and Agriculture. VEISHEA was the largest student-run festival in the nation, bringing in tens of thousands of visitors to the campus each year. The celebration featured an annual parade and many open-house demonstrations of the university facilities and departments. Campus organizations exhibited products and technologies, and held fund raisers for various charity groups. In addition, VEISHEA brought speakers, lecturers, and entertainers to Iowa State, and throughout its more than eight-decade history it hosted such distinguished guests as Bob Hope, John Wayne, Presidents Harry Truman, Ronald Reagan, and Lyndon Johnson, and performers Diana Ross, Billy Joel, Sonny and Cher, The Who, The Goo Goo Dolls, Bobby V, and The Black Eyed Peas. The 2007 VEISHEA festivities marked the start of Iowa State's year-long sesquicentennial celebration. On August 8, 2014, President Steven Leath announced that VEISHEA would no longer be an annual event at Iowa State and the name VEISHEA would be retired. Iowa State played a role in the development of the atomic bomb during World War II as part of the Manhattan Project, a research and development program begun in 1942 under the Army Corps of Engineers. The process to produce large quantities of high-purity uranium metal became known as the Ames process. One-third of the uranium metal used in the world's first controlled nuclear chain reaction was produced at Iowa State under the direction of Frank Spedding and Harley Wilhelm. The Ames Project received the Army/Navy E Award for Excellence in Production on October 12, 1945, for its work with metallic uranium as a vital war material.
Today, ISU is the only university in the United States that has a U.S. Department of Energy research laboratory physically located on its campus. Operated by Iowa State, the Ames Laboratory is one of ten national DOE Office of Science research laboratories. ISU research for the government provided Ames Laboratory its start in the 1940s with the development of a highly efficient process for producing high-purity uranium for atomic energy. Today, Ames Laboratory continues its leading status in current materials research and focuses diverse fundamental and applied research strengths upon issues of national concern, cultivates research talent, and develops and transfers technologies to improve industrial competitiveness and enhance U.S. economic security. Ames Laboratory employs more than 430 full- and part-time employees, including more than 250 scientists and engineers. Students make up more than 20 percent of the paid workforce. The Ames Laboratory is the U.S. home to 2011 Nobel Prize in Chemistry winner Dan Shechtman and is intensely engaged with the international scientific community, including hosting a large number of international visitors each year. The ISU Research Park is a 230-acre development with over 270,000 square feet of building space located just south of the Iowa State campus in Ames. Though closely connected with the university, the research park operates independently to help tenants reach their proprietary goals, linking technology creation, business formation, and development assistance with established technology firms and the marketplace. The ISU Research Park Corporation was established in 1987 as a not-for-profit, independent corporation operating under a board of directors appointed by Iowa State University and the ISU Foundation. The corporation manages both the Research Park and incubator programs. Iowa State is involved in a number of other significant research and creative endeavors, multidisciplinary collaboration, technology transfer, and strategies addressing real-world problems. In 2010, the Biorenewables Research Laboratory opened in a LEED-Gold certified building that complements and helps replace labs and offices across Iowa State and promotes interdisciplinary, systems-level research and collaboration. The Lab houses the Bioeconomy Institute, the Biobased Industry Center, and the National Science Foundation Engineering Research Center for Biorenewable Chemicals, a partnership of six universities as well as the Max Planck Society in Germany and the Technical University of Denmark. The Engineering Teaching and Research Complex was built in 1999 and is home to Stanley and Helen Howe Hall and Gary and Donna Hoover Hall. The complex is occupied by the Virtual Reality Applications Center (VRAC), Center for Industrial Research and Service (CIRAS), Department of Aerospace Engineering and Engineering Mechanics, Department of Materials Science and Engineering, Engineering Computer Support Services, Engineering Distance Education, and Iowa Space Grant Consortium. The complex also contains one of the world's few six-sided immersive virtual reality labs (C6), as well as the 240-seat, 3D-capable Alliant Energy Lee Liu Auditorium, the Multimodal Experience Testbed and Laboratory (METaL), and the User Experience Lab (UX Lab).
These facilities support the research of more than 50 faculty and 200 graduate, undergraduate, and postdoctoral students. There is also the Iowa State University Northeast Research Farm in Nashua. Iowa State's campus contains over 160 buildings. Several buildings, as well as the Marston Water Tower, are listed on the National Register of Historic Places. The central campus includes trees, plants, and classically designed buildings. The landscape's most dominant feature is the central lawn, which was listed as a "medallion site" by the American Society of Landscape Architects in 1999, one of only three central campuses designated as such. The other two were Harvard University and the University of Virginia. Thomas Gaines, in "The Campus As a Work of Art", proclaimed the Iowa State campus to be one of the twenty-five most beautiful campuses in the country. Gaines noted Iowa State's park-like expanse of central campus, and the use of trees and shrubbery to draw together ISU's varied building architecture. Over decades, campus buildings, including the Campanile, Beardshear Hall, and Curtiss Hall, circled and preserved the central lawn, creating a space where students study, relax, and socialize. The campanile was constructed during 1897-1898 as a memorial to Margaret MacDonald Stanton, Iowa State's first dean of women, who died on July 25, 1895. The tower is located on ISU's central campus, just north of the Memorial Union. The site was selected by Margaret's husband, Edgar W. Stanton, with the help of then-university president William M. Beardshear. The campanile stands on a 16 by 16 foot (5 by 5 m) base, and cost $6,510.20 to construct. The campanile is widely seen as one of the major symbols of Iowa State University. It is featured prominently on the university's official ring and the university's mace, and is also the subject of the university's alma mater, "The Bells of Iowa State". Lake LaVerne is named for Dr. LaVerne W. Noyes, an 1872 alumnus who also donated the funds to see that Alumni Hall could be completed after sitting unfinished and unused from 1905 to 1907. The lake, a gift from Dr. Noyes in 1916, is located west of the Memorial Union and south of Alumni Hall, Carver Hall, and Music Hall. Lake LaVerne is the home of two mute swans named Sir Lancelot and Elaine, donated to Iowa State by VEISHEA in 1935. In 1944, 1970, and 1971, cygnets (baby swans) made their home on Lake LaVerne. Previously, Sir Lancelot and Elaine were trumpeter swans, but the trumpeters were too aggressive and in 1999 were replaced with two mute swans. In early spring 2003, Lake LaVerne welcomed its current mute swan duo. In support of Iowa Department of Natural Resources efforts to re-establish trumpeter swans in Iowa, university officials avoided bringing breeding pairs of male and female mute swans to Iowa State, which means the current Sir Lancelot and Elaine are both female. Iowa State has maintained a horticulture garden since 1914. Reiman Gardens is the third location for these gardens. Today's gardens began in 1993 with a gift from Bobbi and Roy Reiman. Construction began in 1994 and the Gardens were officially dedicated on September 16, 1995. Reiman Gardens has since grown to become a site consisting of a dozen distinct garden areas, an indoor conservatory and an indoor butterfly "wing", butterfly emergence cases, a gift shop, and several supporting greenhouses.
Located immediately south of Jack Trice Stadium on the ISU campus, Reiman Gardens is a year-round facility that has become one of the most visited attractions in central Iowa. The Gardens has received a number of national, state, and local awards since its opening, and its rose gardens are particularly noteworthy. It was honored with the President's Award in 2000 by All American Rose Selections, Inc., an award presented to one public garden in the United States each year for superior rose maintenance and display: “For contributing to the public interest in rose growing through its efforts in maintaining an outstanding public rose garden.” The University Museums consist of the Brunnier Art Museum, Farm House Museum, the Art on Campus Program, the Christian Petersen Art Museum, and the Elizabeth and Byron Anderson Sculpture Garden. The Museums include a multitude of unique exhibits, each promoting an understanding and enjoyment of the visual arts while fostering interaction among the arts, sciences, and technology. The Brunnier Art Museum, Iowa's only accredited museum emphasizing a decorative arts collection, is one of the nation's few museums located within a performing arts and conference complex, the Iowa State Center. Founded in 1975, the museum is named after its benefactors, Iowa State alumnus Henry J. Brunnier and his wife Ann. The decorative arts collection they donated, called the Brunnier Collection, is extensive, consisting of ceramics, glass, dolls, ivory, jade, and enameled metals. Other fine and decorative art objects from the University Art Collection include prints, paintings, sculptures, textiles, carpets, wood objects, lacquered pieces, silver, and furniture. Eight to twelve changing exhibitions each year, along with permanent collection exhibitions, provide educational opportunities for all ages, from learning the history of a quilt hand-stitched over 100 years ago to discovering how scientists analyze the physical properties of artists' materials, such as glass or stone. Lectures, receptions, conferences, university classes, panel discussions, gallery walks, and gallery talks are presented to assist with further interpretation of objects. Located near the center of the Iowa State campus, the Farm House Museum sits as a monument to early Iowa State history and culture as well as a National Historic Landmark. As the first building on campus, the Farm House was built in 1860 before campus was occupied by students or even classrooms. The college's first farm tenants primed the land for agricultural experimentation. This early practice led to Iowa State Agricultural College and Model Farm opening its doors to Iowa students for free in 1869 under the Morrill Act (or Land-grant Act) of 1862. Many prominent figures have made the Farm House their home throughout its 150 years of use. The first president of the College, Adonijah Welch, briefly stayed at the Farm House and even wrote his inaugural speech in a bedroom on the second floor. James “Tama Jim” Wilson resided for much of the 1890s with his family at the Farm House until he joined President William McKinley's cabinet as U.S. Secretary of Agriculture. Agriculture Dean Charles Curtiss and his young family replaced Wilson and became the longest-term residents of the Farm House. In 1976, over 110 years after the initial construction, the Farm House became a museum after much time and effort were put into restoring the early beauty of the modest farm home.
Today, faculty, students, and community members can enjoy the museum while honoring its significance in shaping a nationally recognized land-grant university. The museum boasts a large collection of 19th- and early 20th-century decorative arts, furnishings, and material culture reflecting Iowa State and Iowa heritage. Objects include furnishings from Carrie Chapman Catt and Charles Curtiss, a wide variety of quilts, a modest collection of textiles and apparel, and various china and glassware items. As with many sites on the Iowa State University campus, the Farm House Museum has a few old myths and legends associated with it. There are rumors of a ghost changing silverware and dinnerware, unexplained rattling furniture, and curtains that have opened seemingly by themselves. The Farm House Museum is a unique on-campus educational resource providing a changing environment of exhibitions among the historical permanent collection objects that are on display. A walk through the Farm House Museum immerses visitors in the Victorian era (1860-1910) as well as exhibits colorful Iowa and local Ames history. Iowa State is home to one of the largest campus public art programs in the United States. Over 2,000 works of public art, including 600 by significant national and international artists, are located across campus in buildings, courtyards, open spaces, and offices. The traditional public art program began during the Depression in the 1930s, when Iowa State College's President Raymond Hughes envisioned that "the arts would enrich and provide substantial intellectual exploration into our college curricula." Hughes invited Grant Wood to create the Library's agricultural murals that speak to the founding of Iowa and Iowa State College and Model Farm. He also offered Christian Petersen a one-semester sculptor residency to design and build the fountain and bas-relief at the Dairy Industry Building. In 1955, 21 years later, Petersen retired, having created 12 major sculptures for the campus and hundreds of small studio sculptures. The Art on Campus Collection is a campus-wide resource of over 2,000 public works of art. Programs, receptions, dedications, university classes, Wednesday Walks, and educational tours are presented on a regular basis to enhance visual literacy and aesthetic appreciation of this diverse collection. The Christian Petersen Art Museum in Morrill Hall is named for the nation's first permanent campus artist-in-residence, Christian Petersen, who sculpted and taught at Iowa State from 1934 through 1955, and is considered the founding artist of the Art on Campus Collection. Named for Justin Smith Morrill, who created the Morrill Land-Grant Colleges Act, Morrill Hall was completed in 1891. Originally constructed to serve as a library, museum, and chapel, its original uses are engraved in the exterior stonework on the east side. The building was vacated in 1996 when it was determined to be unsafe, and it was listed in the National Register of Historic Places the same year. In 2005, $9 million was raised to renovate the building and convert it into a museum. Completed and reopened in March 2007, Morrill Hall is home to the Christian Petersen Art Museum. As part of University Museums, the Christian Petersen Art Museum at Morrill Hall is the home of the Christian Petersen Art Collection, the Art on Campus Program, the University Museums' Visual Literacy and Learning Program, and the Contemporary Changing Art Exhibitions Program.
Located within the Christian Petersen Art Museum are the Lyle and Nancy Campbell Art Gallery, the Roy and Bobbi Reiman Public Art Studio Gallery, the Margaret Davidson Center for the Study of the Art on Campus Collection, the Edith D. and Torsten E. Lagerstrom Loaned Collections Center, and the Neva M. Petersen Visual Learning Gallery. University Museums shares the James R. and Barbara R. Palmer Small Objects Classroom in Morrill Hall. The Elizabeth and Byron Anderson Sculpture Garden is located by the Christian Petersen Art Museum at historic Morrill Hall. The sculpture garden design incorporates sculptures, a gathering arena, and sidewalks and pathways. Planted with perennials, ground cover, shrubs, and flowering trees, the landscape design provides a distinctive setting for important works of 20th and 21st century sculpture, primarily American. Ranging from forty-four inches to nearly nine feet in height, and cast in bronze and other metals, these works of art represent the richly diverse character of modern and contemporary sculpture. The sculpture garden is adjacent to Iowa State's central campus. Adonijah Welch, ISU's first president, envisioned a picturesque campus with a winding road encircling the college's majestic buildings, vast lawns of green grass, many varieties of trees sprinkled throughout to provide shade, and shrubbery and flowers for fragrance. Today, the central lawn continues to be an iconic place for all Iowa Staters, and enjoys national acclaim as one of the most beautiful campuses in the country. The new Elizabeth and Byron Anderson Sculpture Garden further enhances the beauty of Iowa State. Iowa State's composting facility is capable of processing over 10,000 tons of organic waste every year. The school's $3 million revolving loan fund lends money for energy efficiency and conservation projects on campus. In the 2011 College Sustainability Report Card issued by the Sustainable Endowments Institute, the university received a B grade. Iowa State operates 20 on-campus residence halls. The residence halls are divided into geographical areas. The Union Drive Association (UDA) consists of four residence halls located on the west side of campus, including Friley Hall, which has been declared one of the largest residence halls in the country. The Richardson Court Association (RCA) consists of 12 residence halls on the east side of campus. The Towers Residence Association (TRA) halls are located south of the main campus. Two of the four towers, Knapp and Storms Halls, were imploded in 2005; however, Wallace and Wilson Halls still stand. Buchanan Hall and Geoffroy Hall are nominally considered part of the RCA, despite their distance from the other buildings. ISU operates two apartment complexes for upperclassmen, Frederiksen Court and SUV Apartments. The governing body for ISU students is ISU Student Government. The ISU Student Government is composed of a president, vice president, finance director, a cabinet appointed by the president, a clerk appointed by the vice president, senators representing each college and residence area at the university, a nine-member judicial branch, and an election commission. ISU has over 800 student organizations on campus that represent a variety of interests. Organizations are supported by Iowa State's Student Activities Center. Many student organization offices are housed in the Memorial Union.
The Memorial Union at Iowa State University opened in September 1928 and is currently home to a number of university departments and student organizations, a bowling alley, the University Book Store, and the Hotel Memorial Union. The original building was designed by architect William T. Proudfoot. The building employs a classical style of architecture reflecting Greek and Roman influences. The building's design specifically complements the designs of the major buildings surrounding the University's Central Campus area: Beardshear Hall to the west, Curtiss Hall to the east, and MacKay Hall to the north. The style utilizes columns with Corinthian capitals, Palladian windows, triangular pediments, and formally balanced facades. Designed to be a living memorial for ISU students lost in World War I, the building includes a solemn memorial hall, named the Gold Star Room, in which the names of ISU's dead from World War I, World War II, the Korean War, the Vietnam War, and the War on Terrorism are engraved in marble. Symbolically, the hall was built directly over a library (the Browsing Library) and a small chapel, the symbolism being that no country would ever send its young men to die in a war for a noble cause without a solid foundation on both education (the library) and religion (the chapel). Renovations and additions have continued through the years to include elevators, bowling lanes, a parking ramp, a book store, a food court, and additional wings. The Choral Division of the Department of Music and Theater at Iowa State University consists of over 400 choristers in four main ensembles – the "Iowa State Singers", "Cantamus", the "Iowa Statesmen", and "Lyrica" – and multiple small ensembles, including three a cappella groups: "Count Me In" (female), "Shy of a Dozen" (male), and "Hymn and Her" (co-ed). ISU is home to an active Greek community. There are 50 chapters that involve 14.6 percent of undergraduate students. Collectively, fraternity and sorority members have raised over $82,000 for philanthropies and committed 31,416 hours to community service. In 2006, the ISU Greek community was named the best large Greek community in the Midwest. The ISU Greek community has received multiple Jellison and Sutherland Awards from the Association for Fraternal Leadership and Values, formerly the Mid-American Greek Council Association. These awards recognize the top Greek communities in the Midwest. The first fraternity, Delta Tau Delta, was established at Iowa State in 1875, six years after the first graduating class entered Iowa State. The first sorority, I.C. Sorocis, was established only two years later, in 1877. I.C. Sorocis later became a chapter of the first national sorority at Iowa State, Pi Beta Phi. Anti-Greek rioting occurred in 1888. As reported in "The Des Moines Register", "The anti-secret society men of the college met in a mob last night about 11 o'clock in front of the society rooms in chemical and physical hall, determined to break up a joint meeting of three secret societies." In 1891, President William Beardshear banned students from joining secret college fraternities, resulting in the eventual closing of all formerly established fraternities. President Storms lifted the ban in 1904.
Following the lifting of the fraternity ban, the first thirteen national fraternities (IFC) installed on the Iowa State campus between 1904 and 1913 were, in order, Sigma Nu, Sigma Alpha Epsilon, Beta Theta Pi, Phi Gamma Delta, Alpha Tau Omega, Kappa Sigma, Theta Xi, Acacia, Phi Sigma Kappa, Delta Tau Delta, Pi Kappa Alpha, and Phi Delta Theta. Though some have suspended their chapters at various times, eleven of the original thirteen fraternities were active in 2008. Many of these chapters had existed on campus as local fraternities prior to 1904, before being reorganized as national fraternities. In the spring of 2014, it was announced that Alpha Phi sorority would be coming to Iowa State in the fall of 2014, with Delta Gamma sorority following in the near future. The "Iowa State Daily" is the university's student newspaper. The "Daily" has its roots in a news sheet titled the "Clipper", which was started in the spring of 1890 by a group of students at Iowa Agricultural College led by F.E. Davidson. The "Clipper" soon led to the creation of the "Iowa Agricultural College Student", and the beginnings of what would one day become the "Iowa State Daily". It was awarded the 2016 Best All-Around Daily Student Newspaper by the Society of Professional Journalists. 88.5 KURE is the university's student-run radio station. Programming for KURE includes ISU sports coverage, talk shows, the annual quiz contest Kaleidoquiz, and various music genres. ISUtv is the university's student-run television station. It is housed in the former WOI-TV station that was established in 1950. The student organization of ISUtv has many programs, including Newswatch, a twice-weekly news spot; Cyclone InCyders, the campus sports show; Fortnightly News, a satirical/comedy program; and Cy's Eyes on the Skies, a twice-weekly weather show. The "Cyclones" name dates back to 1895. That year, Iowa suffered an unusually high number of devastating cyclones (as tornadoes were called at the time). In September, Iowa Agricultural College's football team traveled to Northwestern University and defeated that team by a score of 36–0. The next day, the "Chicago Tribune"'s headline read "Struck by a Cyclone: It Comes from Iowa and Devastates Evanston Town." The article began, "Northwestern might as well have tried to play football with an Iowa cyclone as with the Iowa team it met yesterday." The nickname stuck. The school colors are cardinal and gold. The mascot is Cy the Cardinal, introduced in 1954. Since a cyclone was determined to be difficult to depict in costume, the cardinal was chosen in reference to the school colors. A contest was held to select a name for the mascot, with the name Cy being chosen as the winner. The Iowa State Cyclones are a member of the Big 12 Conference and compete in NCAA Division I Football Bowl Subdivision (FBS), fielding 16 varsity teams in 12 sports. The Cyclones also compete in, and are a founding member of, the Central States Collegiate Hockey League of the American Collegiate Hockey Association. Iowa State's intrastate archrival is the University of Iowa, with whom it competes annually for the Iowa Corn Cy-Hawk Series trophy, an annual athletic competition between the two schools. Sponsored by the Iowa Corn Growers Association, the competition includes all head-to-head regular season competitions between the two rival universities in all sports.
Football first made its way onto the Iowa State campus in 1878 as a recreational sport, but it was not until 1892 that Iowa State organized its first team to represent the school in football. In 1894, college president William M. Beardshear spearheaded the foundation of an athletic association to officially sanction Iowa State football teams. The 1894 team finished with a 6-1 mark. The Cyclones compete each year for traveling trophies. Since 1977, Iowa State and Iowa have competed annually for the Cy-Hawk Trophy. Iowa State also competes in an annual rivalry game against Kansas State known as Farmageddon and against former conference foe Missouri for the Telephone Trophy. The Cyclones play their home games at Jack Trice Stadium, named after Jack Trice, ISU's first African-American athlete and also the first and only Iowa State athlete to die from injuries sustained during athletic competition. Trice died three days after his first game playing for Iowa State against Minnesota in Minneapolis on October 6, 1923. Suffering from a broken collarbone early in the game, he continued to play until he was trampled by a group of Minnesota players. It is disputed whether he was trampled purposely or by accident. The stadium was named in his honor in 1997 and is the only NCAA Division I-A stadium named after an African-American. Jack Trice Stadium, formerly known as Cyclone Stadium, opened on September 20, 1975, with a win against the United States Air Force Academy. In men's basketball, hopes of a return of "Hilton Magic" (the famed home-court atmosphere of Hilton Coliseum) took a boost with the hiring of ISU alumnus, Ames native, and fan favorite Fred Hoiberg as head coach in April 2010. Hoiberg ("The Mayor") played three seasons under legendary coach Johnny Orr and one season under future Chicago Bulls coach Tim Floyd during his standout collegiate career as a Cyclone (1991–95). Orr laid the foundation of success in men's basketball upon his arrival from Michigan in 1980 and is credited with building Hilton Magic. Besides Hoiberg, other Cyclone greats played for Orr and brought winning seasons, including Jeff Grayer, Barry Stevens, and walk-on Jeff Hornacek. The 1985-86 Cyclones were one of the most memorable teams. Orr coached the team to second place in the Big Eight and produced one of his greatest career wins, a victory over his former team and No. 2 seed Michigan in the second round of the NCAA tournament. Under coaches Floyd (1995–98) and Larry Eustachy (1998–2003), Iowa State achieved even greater success. Floyd took the Cyclones to the Sweet Sixteen in 1997, and Eustachy led ISU to two consecutive Big 12 regular season conference titles in 1999-2000 and 2000–01, plus the conference tournament title in 2000. Seeded No. 2 in the 2000 NCAA tournament, Eustachy and the Cyclones defeated UCLA in the Sweet Sixteen before falling to Michigan State, the eventual NCAA champion, in the regional finals by a score of 75-64 (the differential representing the Spartans' narrowest margin of victory in the tournament). Standouts Marcus Fizer and Jamaal Tinsley were scoring leaders for the Cyclones, who finished the season 32–5. Tinsley returned to lead the Cyclones the following year with another conference title and No. 2 seed, but ISU finished the season with a 25-6 overall record after a stunning loss to No. 15 seed Hampton in the first round.
In 2011–12, Hoiberg's Cyclones finished third in the Big 12 and returned to the NCAA Tournament, dethroning defending national champion Connecticut, 77–64, in the second round before losing in the Round of 32 to top-seeded Kentucky. All-Big 12 First Team selection Royce White led the Cyclones with 38 points and 22 rebounds in the two contests, ending the season at 23–11. The 2013-14 campaign turned out to be another highly successful season. Iowa State went 28–8, won the Big 12 Tournament, and advanced to the Sweet Sixteen by beating North Carolina in the second round of the NCAA Tournament. The Cyclones finished 11–7 in Big 12 play, finishing in a tie for third in the league standings, and beat a school-record nine teams (9-3) that were ranked in the Associated Press top 25. The Cyclones opened the season 14–0, breaking the school record for consecutive wins. Melvin Ejim was named the Big 12 Player of the Year and an All-American by five organizations. Deandre Kane was named the Big 12 Tournament's most valuable player. On June 8, 2015, Steve Prohm took over as head basketball coach, replacing Hoiberg, who left to take the head coaching position with the Chicago Bulls. In his first season with the Cyclones, Prohm secured a No. 4 seed in the Midwest region, where the Cyclones advanced to the Sweet Sixteen before falling to top-seeded Virginia, 84–71. In 2017, Iowa State stunned third-ranked Kansas, 92–89, in overtime, snapping KU's 54-game home winning streak, before winning the 2017 Big 12 Men's Basketball Tournament, its third conference championship in four years, defeating West Virginia in the final. Of Iowa State's 19 NCAA Tournament appearances, the Cyclones have reached the Sweet Sixteen six times (1944, 1986, 1997, 2000, 2014, 2016), made two appearances in the Elite Eight (1944, 2000), and reached the Final Four once, in 1944. Iowa State is known for having one of the most successful women's basketball programs in the nation. Since the founding of the Big 12, Coach Bill Fennelly and the Cyclones have won three conference titles (one regular season, two tournament) and have advanced to the Sweet Sixteen five times (1999–2001, 2009, 2010) and the Elite Eight twice (1999, 2009) in the NCAA Tournament. The team has one of the largest fan bases in the nation, with attendance figures ranked third in the nation in 2009, 2010, and 2012. In volleyball, coach Christy Johnson-Lynch led the 2012 Cyclones to a fifth straight 20-win season and fifth NCAA regional semifinal appearance in six seasons, leading Iowa State to a 22-8 (13-3 Big 12) overall record and second-place finish in the conference. The Cyclones finished the season with seven wins over top-25 teams, including a victory over the No. 1 Nebraska Cornhuskers, Iowa State's first-ever win over a top-ranked opponent, in addition to handing the 2012 conference and NCAA champion Texas Longhorns their only Big 12 Conference loss. In 2011, Iowa State finished the season 25-6 (13-3 Big 12), placing second in the league, with a final national ranking of eighth. 2011 is only the second season in which an Iowa State volleyball team has ever recorded 25 wins. The Cyclones beat No. 9 Florida during the season in Gainesville, the program's sixth win over a top-10 team in Cyclone history. In 2009, Iowa State finished the season second in the Big 12 behind Texas with a 27–5 record and ranked No. 6, its highest ever national finish. Johnson-Lynch is the fastest Iowa State coach to clinch 100 victories.
In 2011, she became the school's winningest volleyball coach when her team defeated the Texas Tech Red Raiders, her 136th coaching victory, in straight sets. The ISU wrestling program has captured the NCAA wrestling tournament title eight times between 1928 and 1987, and won the Big 12 Conference Tournament three consecutive years, 2007–2009. On February 7, 2010, the Cyclones became the first collegiate wrestling program to record its 1,000th dual win in program history by defeating the Arizona State Sun Devils, 30–10, in Tempe, Arizona. In 2002, under former NCAA champion and Olympian coach Bobby Douglas, Iowa State became the first school to produce a four-time, undefeated NCAA Division I champion, Cael Sanderson (widely considered to be among the best college wrestlers ever), who also took the gold medal at the 2004 Olympic Games in Athens, Greece. Dan Gable, another legendary ISU wrestler, is famous for having lost only one match in his entire Iowa State collegiate career (his last) and for winning gold at the 1972 Olympics in Munich, Germany, while not giving up a single point. In 2013, Iowa State hosted its eighth NCAA Wrestling Championships. The Cyclones hosted the first NCAA championships in 1928. In February 2017, former Virginia Tech coach and 2016 NWCA Coach of the Year Kevin Dresser was introduced as the new Cyclone wrestling coach, replacing Kevin Jackson.
https://en.wikipedia.org/wiki?curid=14875
International Astronomical Union The International Astronomical Union (IAU; French: "Union astronomique internationale", UAI) is an international association of professional astronomers, at the PhD level and beyond, active in professional research and education in astronomy. Among other activities, it acts as the recognized authority for assigning designations and names to celestial bodies (stars, planets, asteroids, etc.) and any surface features on them. The IAU is a member of the International Science Council (ISC). Its main objective is to promote and safeguard the science of astronomy in all its aspects through international cooperation. The IAU maintains friendly relations with organizations that include amateur astronomers in their membership. The IAU has its head office on the second floor of the "Institut d'Astrophysique de Paris" in the 14th arrondissement of Paris. The organisation has many working groups; examples include the Working Group for Planetary System Nomenclature (WGPSN), which maintains the astronomical naming conventions and planetary nomenclature for planetary bodies, and the Working Group on Star Names (WGSN), which catalogues and standardizes proper names for stars. The IAU is also responsible for the system of astronomical telegrams, which are produced and distributed on its behalf by the Central Bureau for Astronomical Telegrams. The Minor Planet Center also operates under the IAU and is a "clearinghouse" for all non-planetary or non-moon bodies in the Solar System. The IAU was founded on 28 July 1919, at the Constitutive Assembly of the International Research Council (now the International Science Council) held in Brussels, Belgium. Two subsidiaries of the IAU were also created at this assembly: the "International Time Commission", seated at the International Time Bureau in Paris, France, and the "International Central Bureau of Astronomical Telegrams", initially seated in Copenhagen, Denmark. The seven initial member states were Belgium, Canada, France, Great Britain, Greece, Japan, and the United States, soon to be followed by Italy and Mexico. The first executive committee consisted of Benjamin Baillaud (President, France), Alfred Fowler (General Secretary, UK), and four vice presidents: William Campbell (USA), Frank Dyson (UK), Georges Lecointe (Belgium), and Annibale Riccò (Italy). Thirty-two Commissions (referred to initially as Standing Committees) were appointed at the Brussels meeting and focused on topics ranging from relativity to minor planets. The reports of these 32 Commissions formed the main substance of the first General Assembly, which took place in Rome, Italy, 2–10 May 1922. By the end of the first General Assembly, ten additional nations (Australia, Brazil, Czecho-Slovakia, Denmark, the Netherlands, Norway, Poland, Romania, South Africa, and Spain) had joined the Union, bringing the total membership to 19 countries. Although the Union was officially formed eight months after the end of World War I, international collaboration in astronomy had been strong in the pre-war era (e.g., the Astronomische Gesellschaft Katalog projects since 1868, the Astrographic Catalogue since 1887, and the International Union for Solar research since 1904). The first 50 years of the Union's history are well documented. Subsequent history is recorded in the form of reminiscences of past IAU Presidents and General Secretaries. Twelve of the fourteen past General Secretaries in the period 1964-2006 contributed their recollections of the Union's history in IAU Information Bulletin No. 100.
Six past IAU Presidents in the period 1976–2003 also contributed their recollections in IAU Information Bulletin No. 104. As of 1 August 2019, the IAU includes a total of 13,701 "individual members", who are professional astronomers from 102 countries worldwide. 81.7% of all individual members are male, while 18.3% are female, among them the union's former president, Mexican astronomer Silvia Torres-Peimbert. Membership also includes 82 "national members", professional astronomical communities representing their country's affiliation with the IAU. National members include the Australian Academy of Science, the Chinese Astronomical Society, the French Academy of Sciences, the Indian National Science Academy, the National Academies (United States), the National Research Foundation of South Africa, the National Scientific and Technical Research Council (Argentina), KACST (Saudi Arabia), the Council of German Observatories, the Royal Astronomical Society (United Kingdom), the Royal Astronomical Society of New Zealand, the Royal Swedish Academy of Sciences, the Russian Academy of Sciences, and the Science Council of Japan, among many others. The sovereign body of the IAU is its "General Assembly", which comprises all members. The Assembly determines IAU policy, approves the Statutes and By-Laws of the Union (and amendments proposed thereto) and elects various committees. The right to vote on matters brought before the Assembly varies according to the type of business under discussion. The Statutes divide such business into two categories: matters of a primarily scientific nature, on which the individual members vote, and all other matters (such as budget and procedural questions), on which the national members vote. On budget matters (which fall into the second category), votes are weighted according to the relative subscription levels of the national members. A second-category vote requires a turnout of at least two-thirds of national members to be valid. An absolute majority is sufficient for approval in any vote, except for Statute revision, which requires a two-thirds majority. An equality of votes is resolved by the vote of the President of the Union.
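These voting rules amount to a small decision procedure, sketched below in Python. This is a hypothetical illustration only: the "Ballot" structure, the function name, and the treatment of weights are invented for this sketch, and the actual Statutes contain procedural detail that is not modelled here.

from dataclasses import dataclass
from typing import List

@dataclass
class Ballot:
    weight: int      # a national member's subscription-based voting weight
    in_favour: bool

def second_category_vote(ballots: List[Ballot], total_national_members: int,
                         statute_revision: bool = False,
                         president_in_favour: bool = False) -> bool:
    # A second-category vote is valid only if at least two-thirds of the
    # national members take part.
    if len(ballots) * 3 < total_national_members * 2:
        raise ValueError("turnout below two-thirds; the vote is invalid")
    votes_for = sum(b.weight for b in ballots if b.in_favour)
    votes_against = sum(b.weight for b in ballots if not b.in_favour)
    # Statute revision requires a two-thirds majority; any other vote
    # passes on an absolute majority.
    if statute_revision:
        return votes_for * 3 >= (votes_for + votes_against) * 2
    # An equality of votes is resolved by the President's vote.
    if votes_for == votes_against:
        return president_in_favour
    return votes_for > votes_against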
Since 1922, the IAU General Assembly has met every three years, except for the period between 1938 and 1948, due to World War II. After a Polish request in 1967, and by a controversial decision of the then President of the IAU, an "Extraordinary IAU General Assembly" was held in September 1973 in Warsaw, Poland, to commemorate the 500th anniversary of the birth of Nicolaus Copernicus, soon after the regular 1973 GA had been held in Sydney, Australia. Commission 46 is a Committee of the Executive Committee of the IAU, playing a special role in the discussion of astronomy development with governments and scientific academies. The IAU is affiliated with the International Council of Scientific Unions (ICSU), a non-governmental organization representing a global membership that includes both national scientific bodies and international scientific unions. These bodies often encourage countries to become members of the IAU. The Commission further seeks the development and improvement of astronomical education. Part of Commission 46 is the Teaching Astronomy for Development (TAD) program, which operates in countries where there is currently very little astronomical education. Another program, the Galileo Teacher Training Program (GTTP), is a project of the International Year of Astronomy 2009; its partners include Hands-On Universe, which concentrates resources on education activities for children and schools designed to advance sustainable global development. GTTP is also concerned with the effective use and transfer of astronomy education tools and resources into classroom science curricula. A strategic plan for the period 2010–2020 has been published. In 2004 the IAU contracted with Cambridge University Press to publish the "Proceedings of the International Astronomical Union". In 2007, the Communicating Astronomy with the Public Journal Working Group prepared a study assessing the feasibility of the "Communicating Astronomy with the Public Journal" ("CAP Journal").
https://en.wikipedia.org/wiki?curid=14878
International Criminal Court The International Criminal Court (ICC or ICCt) is an intergovernmental organization and international tribunal that sits in The Hague, Netherlands. The ICC has jurisdiction to prosecute individuals for the international crimes of genocide, crimes against humanity, war crimes, and the crime of aggression. It is intended to complement existing national judicial systems, and it may therefore exercise its jurisdiction only when national courts are unwilling or unable to prosecute criminals. The ICC lacks universal territorial jurisdiction and may only investigate and prosecute crimes committed within member states, crimes committed by nationals of member states, or crimes in situations referred to the Court by the United Nations Security Council. The ICC began functioning on 1 July 2002, the date that the Rome Statute entered into force. The Rome Statute is a multilateral treaty that serves as the ICC's foundational and governing document. States which become party to the Rome Statute become member states of the ICC. As of November 2019, there are 123 ICC member states; a further 42 states have neither signed nor become party to the Rome Statute. The ICC has four principal organs: the Presidency, the Judicial Divisions, the Office of the Prosecutor, and the Registry. The President is the most senior judge, chosen by his or her peers in the Judicial Division, which hears cases before the Court. The Office of the Prosecutor is headed by the Prosecutor, who investigates crimes and initiates criminal proceedings before the Judicial Division. The Registry is headed by the Registrar and is charged with managing all the administrative functions of the ICC, including the headquarters, detention unit, and public defense office. The Office of the Prosecutor has opened 12 official investigations and is also conducting an additional nine preliminary examinations. Thus far, 45 individuals have been indicted by the ICC, including Ugandan rebel leader Joseph Kony, former Sudanese president Omar al-Bashir, Kenyan president Uhuru Kenyatta, Libyan leader Muammar Gaddafi, Ivorian president Laurent Gbagbo, and DR Congo vice-president Jean-Pierre Bemba. The ICC has faced a number of criticisms from states and civil society, including objections about its jurisdiction, accusations of bias, questioning of the fairness of its case-selection and trial procedures, and doubts about its effectiveness. The establishment of an international tribunal to judge political leaders accused of international crimes was first proposed during the Paris Peace Conference in 1919, following the First World War, by the Commission of Responsibilities. The issue was addressed again at a conference held in Geneva under the auspices of the League of Nations in 1937, which resulted in the conclusion of the first convention stipulating the establishment of a permanent international court to try acts of international terrorism. The convention was signed by 13 states, but none ratified it and the convention never entered into force. Following the Second World War, the allied powers established two "ad hoc" tribunals to prosecute Axis leaders accused of war crimes. The International Military Tribunal, which sat in Nuremberg, prosecuted German leaders, while the International Military Tribunal for the Far East in Tokyo prosecuted Japanese leaders. In 1948 the United Nations General Assembly first recognised the need for a permanent international court to deal with atrocities of the kind prosecuted after the Second World War.
At the request of the General Assembly, the International Law Commission (ILC) drafted two statutes by the early 1950s, but these were shelved during the Cold War, which made the establishment of an international criminal court politically unrealistic. Benjamin B. Ferencz, an investigator of Nazi war crimes after the Second World War and the Chief Prosecutor for the United States Army at the Einsatzgruppen Trial, became a vocal advocate of the establishment of international rule of law and of an international criminal court. In his first book, published in 1975 and entitled "Defining International Aggression: The Search for World Peace", he advocated for the establishment of such a court. A second major advocate was Robert Kurt Woetzel, who co-edited "Toward a Feasible International Criminal Court" in 1970 and created the Foundation for the Establishment of an International Criminal Court in 1971. In June 1989 the Prime Minister of Trinidad and Tobago, A. N. R. Robinson, revived the idea of a permanent international criminal court by proposing the creation of such a court to deal with the illegal drug trade. Following Trinidad and Tobago's proposal, the General Assembly tasked the ILC with once again drafting a statute for a permanent court. While work began on the draft, the United Nations Security Council established two "ad hoc" tribunals in the early 1990s: the International Criminal Tribunal for the former Yugoslavia, created in 1993 in response to large-scale atrocities committed by armed forces during the Yugoslav Wars, and the International Criminal Tribunal for Rwanda, created in 1994 following the Rwandan genocide. The creation of these tribunals further highlighted to many the need for a permanent international criminal court. In 1994, the ILC presented its final draft statute for the International Criminal Court to the General Assembly and recommended that a conference be convened to negotiate a treaty that would serve as the Court's statute. To consider major substantive issues in the draft statute, the General Assembly established the Ad Hoc Committee on the Establishment of an International Criminal Court, which met twice in 1995. After considering the Committee's report, the General Assembly created the Preparatory Committee on the Establishment of the ICC to prepare a consolidated draft text. From 1996 to 1998, six sessions of the Preparatory Committee were held at the United Nations headquarters in New York City, during which NGOs provided input and attended meetings under the umbrella organisation of the Coalition for the International Criminal Court (CICC). In January 1998, the Bureau and coordinators of the Preparatory Committee convened for an Inter-Sessional meeting in Zutphen in the Netherlands to consolidate and restructure the draft articles into a single draft. Finally, the General Assembly convened a conference in Rome in June 1998, with the aim of finalizing the treaty to serve as the Court's statute. On 17 July 1998, the Rome Statute of the International Criminal Court was adopted by a vote of 120 to 7, with 21 countries abstaining. The seven countries that voted against the treaty were China, Iraq, Israel, Libya, Qatar, the United States, and Yemen. Israel's opposition to the treaty stemmed from the inclusion of "the action of transferring population into occupied territory" in the list of war crimes. Following 60 ratifications, the Rome Statute entered into force on 1 July 2002 and the International Criminal Court was formally established.
The first bench of 18 judges was elected by the Assembly of States Parties in February 2003. They were sworn in at the inaugural session of the Court on 11 March 2003. The Court issued its first arrest warrants on 8 July 2005, and the first pre-trial hearings were held in 2006. The Court issued its first judgment in 2012 when it found Congolese rebel leader Thomas Lubanga Dyilo guilty of war crimes related to using child soldiers. In 2010 the states parties of the Rome Statute held the first Review Conference of the Rome Statute of the International Criminal Court in Kampala, Uganda. The Review Conference led to the adoption of two resolutions that amended the crimes under the jurisdiction of the Court. Resolution 5 amended Article 8 on war crimes, criminalizing the use of certain kinds of weapons in non-international conflicts whose use was already forbidden in international conflicts. Resolution 6, pursuant to Article 5(2) of the Statute, provided the definition and a procedure for jurisdiction over the crime of aggression. During the administration of Barack Obama, US opposition to the ICC evolved to "positive engagement," although no effort was made to ratify the Rome Statute. The current administration of Donald Trump is considerably more hostile to the Court, threatening ICC judges and staff with prosecution and financial sanctions in US courts, as well as imposing visa bans, in response to any investigation of American nationals in connection with alleged crimes and atrocities perpetrated by the US in Afghanistan. The threat included sanctions against any of the over 120 countries that have ratified the Rome Statute, should they cooperate in the process. Following the imposition of sanctions on 11 June 2020 by the Trump administration, the court branded the sanctions an "attack against the interests of victims of atrocity crimes" and an "unacceptable attempt to interfere with the rule of law". The UN also regretted the effect sanctions may have on trials and investigations under way, saying its independence must be protected. In October 2016, after repeated claims that the court was biased against African states, Burundi, South Africa and the Gambia announced their withdrawals from the Rome Statute. However, following Gambia's presidential election later that year, which ended the long rule of Yahya Jammeh, Gambia rescinded its withdrawal notification. A decision by the High Court of South Africa in early 2017 ruled that withdrawal would be unconstitutional, prompting the South African government to inform the UN that it was revoking its decision to withdraw. In November 2017, Fatou Bensouda advised the court to consider seeking charges for human rights abuses committed during the War in Afghanistan, such as alleged rapes and tortures by the United States Armed Forces and the Central Intelligence Agency, crimes against humanity committed by the Taliban, and war crimes committed by the Afghan National Security Forces. John Bolton, National Security Advisor of the United States, stated that the ICC had no jurisdiction over the US, which did not ratify the Rome Statute. In 2020, overturning the previous decision not to proceed, senior judges at the ICC authorized an investigation into the alleged war crimes in Afghanistan. However, in June 2020, the decision to proceed led the Trump administration to mount an economic and legal attack on the court. "The US government has reason to doubt the honesty of the ICC.
The Department of Justice has received substantial credible information that raises serious concerns about a long history of financial corruption and malfeasance at the highest levels of the office of the prosecutor," Attorney General William Barr said. The ICC responded with a statement expressing "profound regret at the announcement of further threats and coercive actions". "These attacks constitute an escalation and an unacceptable attempt to interfere with the rule of law and the Court's judicial proceedings," the statement said. "They are announced with the declared aim of influencing the actions of ICC officials in the context of the court's independent and objective investigations and impartial judicial proceedings." Following the announcement that the ICC would open a preliminary investigation on the Philippines in connection to its escalating drug war, President Rodrigo Duterte announced on 14 March 2018 that the Philippines would begin the process of withdrawing from the Rome Statute, completing the process on 17 March 2019. The ICC pointed out that it retained jurisdiction over the Philippines during the period when it was a state party to the Rome Statute, from November 2011 to March 2019. The ICC is governed by the Assembly of States Parties, which is made up of the states that are party to the Rome Statute. The Assembly elects officials of the Court, approves its budget, and adopts amendments to the Rome Statute. The Court itself, however, is composed of four organs: the Presidency, the Judicial Divisions, the Office of the Prosecutor, and the Registry. The Court's management oversight and legislative body, the Assembly of States Parties, consists of one representative from each state party. Each state party has one vote and "every effort" has to be made to reach decisions by consensus. If consensus cannot be reached, decisions are made by vote. The Assembly is presided over by a president and two vice-presidents, who are elected by the members to three-year terms. The Assembly meets in full session once a year, alternating between New York and The Hague, and may also hold special sessions where circumstances require. Sessions are open to observer states and non-governmental organizations. The Assembly elects the judges and prosecutors, decides the Court's budget, adopts important texts (such as the Rules of Procedure and Evidence), and provides management oversight to the other organs of the Court. Article 46 of the Rome Statute allows the Assembly to remove from office a judge or prosecutor who "is found to have committed serious misconduct or a serious breach of his or her duties" or "is unable to exercise the functions required by this Statute". The states parties cannot interfere with the judicial functions of the Court. Disputes concerning individual cases are settled by the Judicial Divisions. In 2010, Kampala, Uganda hosted the Assembly's Rome Statute Review Conference. The Court has four organs: the Presidency, the Judicial Division, the Office of the Prosecutor, and the Registry. The Presidency is responsible for the proper administration of the Court (apart from the Office of the Prosecutor). It comprises the President and the First and Second Vice-Presidents—three judges of the Court who are elected to the Presidency by their fellow judges for a maximum of two three-year terms. The current president is Chile Eboe-Osuji, who was elected on 11 March 2018, succeeding Silvia Fernández de Gurmendi (the Court's first female president).
The Judicial Divisions consist of the 18 judges of the Court, organized into three chambers—the Pre-Trial Chamber, Trial Chamber and Appeals Chamber—which carry out the judicial functions of the Court. Judges are elected to the Court by the Assembly of States Parties. They serve nine-year terms and are not generally eligible for re-election. All judges must be nationals of states parties to the Rome Statute, and no two judges may be nationals of the same state. They must be "persons of high moral character, impartiality and integrity who possess the qualifications required in their respective States for appointment to the highest judicial offices". The Prosecutor or any person being investigated or prosecuted may request the disqualification of a judge from "any case in which his or her impartiality might reasonably be doubted on any ground". Any request for the disqualification of a judge from a particular case is decided by an absolute majority of the other judges. A judge may be removed from office if he or she "is found to have committed serious misconduct or a serious breach of his or her duties" or is unable to exercise his or her functions. The removal of a judge requires both a two-thirds majority of the other judges and a two-thirds majority of the states parties. The Office of the Prosecutor (OTP) is responsible for conducting investigations and prosecutions. It is headed by the Chief Prosecutor, who is assisted by one or more Deputy Prosecutors. The Rome Statute provides that the Office of the Prosecutor shall act independently; as such, no member of the Office may seek or act on instructions from any external source, such as states, international organisations, non-governmental organisations or individuals. The Prosecutor may open an investigation under three circumstances: when a situation is referred to him or her by a state party; when a situation is referred by the United Nations Security Council; or when the Pre-Trial Chamber authorises an investigation initiated by the Prosecutor "proprio motu" on the basis of information received from other sources. Any person being investigated or prosecuted may request the disqualification of a prosecutor from any case "in which their impartiality might reasonably be doubted on any ground". Requests for the disqualification of prosecutors are decided by the Appeals Chamber. A prosecutor may be removed from office by an absolute majority of the states parties if he or she "is found to have committed serious misconduct or a serious breach of his or her duties" or is unable to exercise his or her functions. However, critics of the Court argue that there are "insufficient checks and balances on the authority of the ICC prosecutor and judges" and "insufficient protection against politicized prosecutions or other abuses". Luis Moreno Ocampo, chief ICC prosecutor, stressed in 2011 the importance of politics in prosecutions: "You cannot say al-Bashir is in London, arrest him. You need a political agreement." Henry Kissinger says the checks and balances are so weak that the prosecutor "has virtually unlimited discretion in practice". Since 16 June 2012, the Prosecutor has been Fatou Bensouda of Gambia, who was elected to a nine-year term on 12 December 2011. Her predecessor, Luis Moreno Ocampo of Argentina, had been in office from 2003 to 2012. A policy paper is a document occasionally published by the Office of the Prosecutor setting out the considerations given to the topics on which the Office focuses and, often, the criteria for case selection. While a policy paper does not give the Court jurisdiction over a new category of crimes, it indicates what the Office of the Prosecutor will consider when selecting cases in the upcoming term of service.
OTP's policy papers are subject to revision. In the policy paper published in September 2016, it was announced that the International Criminal Court would focus on environmental crimes when selecting cases. According to this document, the Office will give particular consideration to prosecuting Rome Statute crimes that are committed by means of, or that result in, "inter alia, the destruction of the environment, the illegal exploitation of natural resources or the illegal dispossession of land". This has been interpreted as a major shift towards environmental crimes and a move with significant effects. The Registry is responsible for the non-judicial aspects of the administration and servicing of the Court. This includes, among other things, "the administration of legal aid matters, court management, victims and witnesses matters, defence counsel, detention unit, and the traditional services provided by administrations in international organisations, such as finance, translation, building management, procurement and personnel". The Registry is headed by the Registrar, who is elected by the judges to a five-year term. The previous Registrar was Herman von Hebel, who was elected on 8 March 2013. The current Registrar is Peter Lewis, who was elected on 28 March 2018. The Rome Statute requires that several criteria exist in a particular case before an individual can be prosecuted by the Court. The Statute contains three jurisdictional requirements and three admissibility requirements. All criteria must be met for a case to proceed. The three jurisdictional requirements are (1) subject-matter jurisdiction (what acts constitute crimes), (2) territorial or personal jurisdiction (where the crimes were committed or who committed them), and (3) temporal jurisdiction (when the crimes were committed). The process to establish the Court's jurisdiction may be "triggered" by any one of three possible sources: (1) a State party, (2) the Security Council or (3) the Prosecutor. It is then up to the Prosecutor, acting "proprio motu" ("of his or her own motion"), to initiate an investigation under the requirements of Article 15 of the Rome Statute. The procedure is slightly different when referred by a State Party or the Security Council, in which cases the Prosecutor does not need the authorization of the Pre-Trial Chamber to initiate the investigation. Where there is a reasonable basis to proceed, it is mandatory for the Prosecutor to initiate an investigation. The factors listed in Article 53 considered for reasonable basis include whether the case would be admissible, and whether there are substantial reasons to believe that an investigation would not serve the interests of justice (the latter stipulates balancing against the gravity of the crime and the interests of the victims). The Court's subject-matter jurisdiction comprises the crimes for which individuals can be prosecuted. Individuals can only be prosecuted for crimes that are listed in the Statute. The primary crimes are listed in article 5 of the Statute and defined in later articles: genocide (defined in article 6), crimes against humanity (defined in article 7), war crimes (defined in article 8), and crimes of aggression (defined in article 8 "bis"; see below). In addition, article 70 defines "offences against the administration of justice", which is a fifth category of crime for which individuals can be prosecuted.
Article 6 defines the crime of genocide as "acts committed with intent to destroy, in whole or in part, a national, ethnical, racial or religious group". There are five such acts which constitute crimes of genocide under article 6. The definition of these crimes is identical to those contained within the Convention on the Prevention and Punishment of the Crime of Genocide of 1948. In the "Akayesu" case, the International Criminal Tribunal for Rwanda concluded that directly and publicly inciting others to commit genocide is in itself constitutive of a crime. Article 7 defines crimes against humanity as acts "committed as part of a widespread or systematic attack directed against any civilian population, with knowledge of the attack". The article lists 16 such individual crimes. Article 8 defines war crimes depending on whether an armed conflict is either international (which generally means it is fought between states) or non-international (which generally means that it is fought between non-state actors, such as rebel groups, or between a state and such non-state actors). In total there are 74 war crimes listed in article 8. The most serious crimes, however, are those that constitute either grave breaches of the Geneva Conventions of 1949, which only apply to international conflicts, or serious violations of article 3 common to the Geneva Conventions of 1949, which apply to non-international conflicts. There are 11 crimes which constitute grave breaches of the Geneva Conventions and which are applicable only to international armed conflicts, and seven crimes which constitute serious violations of article 3 common to the Geneva Conventions and which are applicable only to non-international armed conflicts. Additionally, there are 56 other crimes defined by article 8: 35 that apply to international armed conflicts and 21 that apply to non-international armed conflicts. Such crimes include attacking civilians or civilian objects, attacking peacekeepers, causing excessive incidental death or damage, transferring populations into occupied territories, treacherously killing or wounding, denying quarter, pillaging, employing poison, using expanding bullets, rape and other forms of sexual violence, and conscripting or using child soldiers. Article 8 "bis" defines crimes of aggression. The Statute originally provided that the Court could not exercise its jurisdiction over the crime of aggression until such time as the states parties agreed on a definition of the crime and set out the conditions under which it could be prosecuted. Such an amendment was adopted at the first review conference of the ICC in Kampala, Uganda, in June 2010. However, this amendment specified that the ICC would not be allowed to exercise jurisdiction over the crime of aggression until two further conditions had been satisfied: (1) the amendment has entered into force for 30 states parties and (2) on or after 1 January 2017, the Assembly of States Parties has voted in favor of allowing the Court to exercise jurisdiction. On 26 June 2016 the first condition was satisfied and the state parties voted in favor of allowing the Court to exercise jurisdiction on 14 December 2017. The Court's jurisdiction to prosecute crimes of aggression was accordingly activated on 17 July 2018.
The Statute, as amended, defines the crime of aggression as "the planning, preparation, initiation or execution, by a person in a position effectively to exercise control over or to direct the political or military action of a State, of an act of aggression which, by its character, gravity and scale, constitutes a manifest violation of the Charter of the United Nations." The Statute defines an "act of aggression" as "the use of armed force by a State against the sovereignty, territorial integrity or political independence of another State, or in any other manner inconsistent with the Charter of the United Nations." The article also contains a list of seven acts of aggression, which are identical to those in United Nations General Assembly Resolution 3314 of 1974 and cover acts committed by one state against another state. Article 70 criminalizes certain intentional acts which interfere with investigations and proceedings before the Court, including giving false testimony, presenting false evidence, corruptly influencing a witness or official of the Court, retaliating against an official of the Court, and soliciting or accepting bribes as an official of the Court. For an individual to be prosecuted by the Court, either territorial or personal jurisdiction must exist. Therefore, an individual can only be prosecuted if he or she has either (1) committed a crime within the territorial jurisdiction of the Court or (2) committed a crime while being a national of a state that is within the territorial jurisdiction of the Court. The territorial jurisdiction of the Court includes the territory, registered vessels, and registered aircraft of states which have either (1) become party to the Rome Statute or (2) accepted the Court's jurisdiction by filing a declaration with the Court. In situations that are referred to the Court by the United Nations Security Council, the territorial jurisdiction is defined by the Security Council, which may be more expansive than the Court's normal territorial jurisdiction. For example, if the Security Council refers a situation that took place in the territory of a state that has both not become party to the Rome Statute and not lodged a declaration with the Court, the Court will still be able to prosecute crimes that occurred within that state. The personal jurisdiction of the Court extends to all natural persons who commit crimes, regardless of where they are located or where the crimes were committed, as long as those individuals are nationals of either (1) states that are party to the Rome Statute or (2) states that have accepted the Court's jurisdiction by filing a declaration with the Court. As with territorial jurisdiction, the personal jurisdiction can be expanded by the Security Council if it refers a situation to the Court. Temporal jurisdiction is the time period over which the Court can exercise its powers. No statute of limitations applies to any of the crimes defined in the Statute. However, the Court's jurisdiction is not completely retroactive. Individuals can only be prosecuted for crimes that took place on or after 1 July 2002, which is the date that the Rome Statute entered into force. If a state became party to the Statute, and therefore a member of the Court, after 1 July 2002, then the Court cannot exercise jurisdiction prior to the membership date for certain cases.
For example, if the Statute entered into force for a state on 1 January 2003, the Court could only exercise temporal jurisdiction over crimes that took place in that state or were committed by a national of that state on or after 1 January 2003. To initiate an investigation, the Prosecutor must (1) have a "reasonable basis to believe that a crime within the jurisdiction of the Court has been or is being committed", (2) be satisfied that the investigation would be consistent with the principle of complementarity, and (3) be satisfied that the investigation serves the interests of justice. The principle of complementarity means that the Court will only prosecute an individual if states are unwilling or unable to prosecute. Therefore, if legitimate national investigations or proceedings into crimes have taken place or are ongoing, the Court will not initiate proceedings. This principle applies regardless of the outcome of national proceedings. Even if an investigation is closed without any criminal charges being filed or if an accused person is acquitted by a national court, the Court will not prosecute an individual for the crime in question so long as it is satisfied that the national proceedings were legitimate. However, the actual application of the complementarity principle has recently come under theoretical scrutiny. The Court will only initiate proceedings if a crime is of "sufficient gravity to justify further action by the Court". The Prosecutor will initiate an investigation unless there are "substantial reasons to believe that an investigation would not serve the interests of justice" when "[t]aking into account the gravity of the crime and the interests of victims". Furthermore, even if an investigation has been initiated and there are substantial facts to warrant a prosecution and no other admissibility issues, the Prosecutor must determine whether a prosecution would serve the interests of justice "taking into account all the circumstances, including the gravity of the crime, the interests of victims and the age or infirmity of the alleged perpetrator, and his or her role in the alleged crime". The Court has jurisdiction over natural persons. A person who commits a crime within the jurisdiction of the Court is individually responsible and liable for punishment in accordance with the Rome Statute. In accordance with the Rome Statute, a person shall be criminally responsible and liable for punishment for a crime within the jurisdiction of the Court if that person: commits such a crime, whether as an individual, jointly with another or through another person, regardless of whether that other person is criminally responsible; orders, solicits or induces the commission of such a crime which in fact occurs or is attempted; for the purpose of facilitating the commission of such a crime, aids, abets or otherwise assists in its commission or its attempted commission, including providing the means for its commission; in any other way contributes to the commission or attempted commission of such a crime by a group of persons acting with a common purpose; in respect of the crime of genocide, directly and publicly incites others to commit genocide; or attempts to commit such a crime by taking action that commences its execution by means of a substantial step, but the crime does not occur because of circumstances independent of the person's intentions.
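Taken together, the jurisdictional and admissibility tests described above behave like a layered decision procedure, sketched below in Python. This is a deliberately simplified, hypothetical model: the "Situation" structure and every field name are invented for illustration, and the real analysis (complementarity and gravity in particular) turns on legal judgment that cannot be reduced to boolean flags.

from dataclasses import dataclass
from datetime import date
from typing import Optional

# The Rome Statute entered into force on 1 July 2002; the Court cannot
# reach conduct before this date, although no statute of limitations
# applies after it.
ROME_STATUTE_IN_FORCE = date(2002, 7, 1)

# Article 5 crimes (article 70 offences against the administration of
# justice are omitted here for brevity).
CORE_CRIMES = {"genocide", "crimes against humanity", "war crimes", "aggression"}

@dataclass
class Situation:
    crime: str                          # alleged crime category
    committed_on: date                  # date of the alleged conduct
    on_member_territory: bool           # conduct on a state party's territory, vessel or aircraft
    suspect_is_member_national: bool    # suspect is a national of a state party
    unsc_referral: bool                 # situation referred by the UN Security Council
    state_joined_on: Optional[date]     # when the relevant state became a party, if ever
    genuine_national_proceedings: bool  # a state is genuinely investigating or prosecuting
    sufficient_gravity: bool            # the case meets the gravity threshold
    serves_interests_of_justice: bool   # the Article 53 interests-of-justice assessment

def case_may_proceed(s: Situation) -> bool:
    # (1) Subject-matter jurisdiction: only the crimes listed in the Statute.
    if s.crime not in CORE_CRIMES:
        return False
    # (2) Territorial or personal jurisdiction; a Security Council referral
    # can supply jurisdiction on its own (as in Darfur and Libya).
    if not (s.on_member_territory or s.suspect_is_member_national or s.unsc_referral):
        return False
    # (3) Temporal jurisdiction: never before 1 July 2002, and (absent a
    # referral) not before the relevant state became a party.
    if s.committed_on < ROME_STATUTE_IN_FORCE:
        return False
    if (not s.unsc_referral and s.state_joined_on is not None
            and s.committed_on < s.state_joined_on):
        return False
    # Admissibility: complementarity, gravity and the interests of justice.
    return (not s.genuine_national_proceedings
            and s.sufficient_gravity
            and s.serves_interests_of_justice)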
Trials are conducted under a hybrid common law and civil law judicial system, but it has been argued that the procedural orientation and character of the court is still evolving. A majority of the three judges present, as triers of fact, may reach a decision, which must include a full and reasoned statement. Trials are supposed to be public, but proceedings are often closed, and such exceptions to a public trial have not been enumerated in detail. "In camera" proceedings are allowed for protection of witnesses or defendants as well as for confidential or sensitive evidence. Hearsay and other indirect evidence is not generally prohibited, but it has been argued the court is guided by hearsay exceptions which are prominent in common law systems. There is no subpoena or other means to compel witnesses to come before the court, although the court has some powers, such as the imposition of fines, to compel the testimony of those who choose to come before it. The Rome Statute provides that all persons are presumed innocent until proven guilty beyond reasonable doubt, and establishes certain rights of the accused and persons during investigations. These include the right to be fully informed of the charges against him or her; the right to have a lawyer appointed, free of charge; the right to a speedy trial; and the right to examine the witnesses against him or her. To ensure "equality of arms" between defence and prosecution teams, the ICC has established an independent Office of Public Counsel for the Defence (OPCD) to provide logistical support, advice and information to defendants and their counsel. The OPCD also helps to safeguard the rights of the accused during the initial stages of an investigation. However, Thomas Lubanga's defence team say they were given a smaller budget than the Prosecutor and that evidence and witness statements were slow to arrive. One of the great innovations of the Statute of the International Criminal Court and its Rules of Procedure and Evidence is the series of rights granted to victims. For the first time in the history of international criminal justice, victims have the possibility under the Statute to present their views and observations before the Court. Participation before the Court may occur at various stages of proceedings and may take different forms, although it will be up to the judges to give directions as to the timing and manner of participation. Participation in the Court's proceedings will in most cases take place through a legal representative and will be conducted "in a manner which is not prejudicial or inconsistent with the rights of the accused and a fair and impartial trial". The victim-based provisions within the Rome Statute provide victims with the opportunity to have their voices heard and to obtain, where appropriate, some form of reparation for their suffering. It is this attempted balance between retributive and restorative justice that, it is hoped, will enable the ICC not only to bring criminals to justice but also to help the victims themselves obtain some form of justice. Justice for victims before the ICC comprises both procedural and substantive justice, by allowing them to participate and present their views and interests, so that they can help to shape truth, justice and reparations outcomes of the Court. Article 43(6) establishes a Victims and Witnesses Unit to provide "protective measures and security arrangements, counseling and other appropriate assistance for witnesses, victims who appear before the Court, and others who are at risk on account of testimony given by such witnesses." Article 68 sets out procedures for the "Protection of the victims and witnesses and their participation in the proceedings."
The Court has also established an Office of Public Counsel for Victims, to provide support and assistance to victims and their legal representatives. The ICC does not have its own witness protection program, but rather must rely on national programs to keep witnesses safe. Victims before the International Criminal Court can also claim reparations under Article 75 of the Rome Statute. Reparations can only be claimed when a defendant is convicted, and they are awarded at the discretion of the Court's judges. So far the Court has ordered reparations against Thomas Lubanga. Reparations can include compensation, restitution and rehabilitation, but other forms of reparations may be appropriate for individual, collective or community victims. Article 79 of the Rome Statute establishes a Trust Fund to provide assistance to victims in a situation before a reparation order is made, or to support reparations to victims and their families if the convicted person has no money. One of the principles of international law is that a treaty does not create either obligations or rights for third states without their consent, and this is also enshrined in the 1969 Vienna Convention on the Law of Treaties. The co-operation of non-party states with the ICC is envisioned by the Rome Statute of the International Criminal Court to be of a voluntary nature. However, even states that have not acceded to the Rome Statute might still be subject to an obligation to co-operate with the ICC in certain cases. When a case is referred to the ICC by the UN Security Council, all UN member states are obliged to co-operate, since its decisions are binding on all of them. Also, there is an obligation to respect and ensure respect for international humanitarian law, which stems from the Geneva Conventions and Additional Protocol I and reflects the absolute nature of international humanitarian law. Although the wording of the Conventions might not be precise as to what steps have to be taken, it has been argued that it at least requires non-party states to make an effort not to block actions of the ICC in response to serious violations of those Conventions. In relation to co-operation in investigation and evidence gathering, it is implied from the Rome Statute that the consent of a non-party state is a prerequisite for the ICC Prosecutor to conduct an investigation within its territory, and it seems that it is even more necessary for the Prosecutor to observe any reasonable conditions raised by that state, since such restrictions exist for states party to the Statute. Taking into account the experience of the International Criminal Tribunal for the former Yugoslavia (which worked with the principle of primacy, instead of complementarity) in relation to co-operation, some scholars have expressed pessimism as to the possibility of the ICC obtaining the co-operation of non-party states. As for the actions that the ICC can take towards non-party states that do not co-operate, the Rome Statute stipulates that the Court may inform the Assembly of States Parties or, when the matter was referred by it, the Security Council, when a non-party state refuses to co-operate after having entered into an "ad hoc" arrangement or an agreement with the Court. It is unclear to what extent the ICC is compatible with reconciliation processes that grant amnesty to human rights abusers as part of agreements to end conflict.
Article 16 of the Rome Statute allows the Security Council to prevent the Court from investigating or prosecuting a case, and Article 53 allows the Prosecutor the discretion not to initiate an investigation if he or she believes that "an investigation would not serve the interests of justice". Former ICC president Philippe Kirsch has said that "some limited amnesties may be compatible" with a country's obligations genuinely to investigate or prosecute under the Statute. It is sometimes argued that amnesties are necessary to allow the peaceful transfer of power from abusive regimes. By denying states the right to offer amnesty to human rights abusers, the International Criminal Court may make it more difficult to negotiate an end to conflict and a transition to democracy. For example, the outstanding arrest warrants for four leaders of the Lord's Resistance Army are regarded by some as an obstacle to ending the insurgency in Uganda. Czech politician Marek Benda argues that "the ICC as a deterrent will in our view only mean the worst dictators will try to retain power at all costs". However, the United Nations and the International Committee of the Red Cross maintain that granting amnesty to those accused of war crimes and other serious crimes is a violation of international law. The official seat of the Court is in The Hague, Netherlands, but its proceedings may take place anywhere. The Court moved into its first permanent premises in The Hague, located at Oude Waalsdorperweg 10, on 14 December 2015. Part of The Hague's International Zone, which also contains the Peace Palace, Europol, Eurojust, ICTY, OPCW and The Hague World Forum, the court facilities are situated on the site of the "Alexanderkazerne", a former military barracks, adjacent to the dune landscape on the northern edge of the city. The ICC's detention centre is a short distance away. The land and financing for the new construction were provided by the Netherlands. In addition, the host state organised and financed the architectural design competition, which started at the end of 2008. Three architects were chosen by an international jury from a total of 171 applicants to enter into further negotiations. The Danish firm schmidt hammer lassen was ultimately selected to design the new premises, since its design met all the ICC criteria, such as design quality, sustainability, functionality and costs. Demolition of the barracks started in November 2011 and was completed in August 2012. In October 2012 the tendering procedure for the General Contractor was completed and the combination of Visser & Smit Bouw and Boele & van Eesteren ("Courtys") was selected. The building has a compact footprint and consists of six connected building volumes with a garden motif. The tallest volume, with a green facade, placed in the middle of the design, is the Court Tower, which accommodates three courtrooms. The rest of the building's volumes accommodate the offices of the different organs of the ICC. Until late 2015, the ICC was housed in interim premises in The Hague provided by the Netherlands. Formerly belonging to KPN, the provisional headquarters were located at Maanweg 174 in the east-central portion of the city. The ICC's detention centre accommodates both those convicted by the court and serving sentences and suspects detained pending the outcome of their trial. It comprises twelve cells on the premises of the Scheveningen branch of the Haaglanden Penal Institution, The Hague, close to the ICC's new headquarters in the Alexanderkazerne.
Suspects held by the International Criminal Tribunal for the former Yugoslavia are held in the same prison and share some facilities, like the fitness room, but have no contact with suspects held by the ICC. The ICC maintains a liaison office in New York and field offices in places where it conducts its activities. As of 18 October 2007, the Court had field offices in Kampala, Kinshasa, Bunia, Abéché and Bangui. The ICC is financed by contributions from the states parties. The amount payable by each state party is determined using the same method as the United Nations: each state's contribution is based on the country's capacity to pay, which reflects factors such as national income and population. The maximum amount a single country can pay in any year is limited to 22% of the Court's budget; Japan paid this amount in 2008. The Court spent €80.5 million in 2007. The Assembly of States Parties approved a budget of €90.4 million for 2008, €101.2 million for 2009, and €141.6 million for 2017. The ICC's staff consists of about 800 persons from approximately 100 states.
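As a rough numerical illustration of the capped, capacity-based assessment just described, the sketch below scales invented capacity-to-pay shares to a budget while enforcing the 22% ceiling. The function, the redistribution rule and all figures are hypothetical, and the real UN scale-of-assessments methodology is considerably more involved (for one thing, it re-applies the cap iteratively).

def assess_contributions(capacity_shares, budget, cap=0.22):
    """Scale capacity-to-pay shares (summing to 1.0) to a budget,
    holding any state at the cap and redistributing the excess
    proportionally among the uncapped states."""
    capped = {s for s, v in capacity_shares.items() if v > cap}
    excess = sum(capacity_shares[s] - cap for s in capped)
    uncapped_total = sum(v for s, v in capacity_shares.items() if s not in capped)
    result = {}
    for s, v in capacity_shares.items():
        share = cap if s in capped else v + excess * v / uncapped_total
        result[s] = round(share * budget, 1)
    return result

# With the approved 2017 budget of 141.6 (million euros) and invented shares:
print(assess_contributions({"A": 0.40, "B": 0.15, "C": 0.15, "D": 0.15, "E": 0.15}, 141.6))
# -> {'A': 31.2, 'B': 27.6, 'C': 27.6, 'D': 27.6, 'E': 27.6}; state A is held to 22%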
To date, the Prosecutor has opened investigations in 12 situations: Burundi; two in the Central African Republic; Côte d'Ivoire; Darfur, Sudan; the Democratic Republic of the Congo; Georgia; Kenya; Libya; Mali; Uganda; and Bangladesh/Myanmar. Additionally, the Office of the Prosecutor is conducting preliminary examinations in nine situations: Afghanistan; Colombia; Guinea; Iraq / the United Kingdom; Nigeria; Palestine; the Philippines; Ukraine; and Venezuela. The Court's Pre-Trial Chambers have publicly indicted 45 individuals. The "Lubanga" and "Katanga-Chui" trials in the situation of the DR Congo are concluded. Mr Lubanga and Mr Katanga were convicted and sentenced to 14 and 12 years imprisonment, respectively, whereas Mr Chui was acquitted. The "Bemba" trial in the Central African Republic situation is concluded. Mr Bemba was convicted on two counts of crimes against humanity and three counts of war crimes. This marked the first time the ICC convicted someone of sexual violence, as rape was added to his conviction. Trials in the "Ntaganda" case (DR Congo), the "Bemba et al." case and the "Laurent Gbagbo-Blé Goudé" trial in the Côte d'Ivoire situation are ongoing. The "Banda" trial in the situation of Darfur, Sudan, was scheduled to begin in 2014 but the start date was vacated. Charges against Dominic Ongwen in the Uganda situation and Ahmed al-Faqi in the Mali situation have been confirmed; both are awaiting their trials. Unlike the International Court of Justice, the ICC is legally independent from the United Nations. However, the Rome Statute grants certain powers to the United Nations Security Council, which limits its functional independence. Article 13 allows the Security Council to refer to the Court situations that would not otherwise fall under the Court's jurisdiction (as it did in relation to the situations in Darfur and Libya, which the Court could not otherwise have prosecuted as neither Sudan nor Libya are state parties). Article 16 allows the Security Council to require the Court to defer from investigating a case for a period of 12 months. Such a deferral may be renewed indefinitely by the Security Council. This sort of arrangement gives the ICC some of the advantages inherent in the organs of the United Nations, such as using the enforcement powers of the Security Council, but it also creates a risk of being tainted with the political controversies of the Security Council. The Court cooperates with the UN in many different areas, including the exchange of information and logistical support. The Court reports to the UN each year on its activities, and some meetings of the Assembly of States Parties are held at UN facilities. The relationship between the Court and the UN is governed by a "Relationship Agreement between the International Criminal Court and the United Nations". During the 1970s and 1980s, international human rights and humanitarian nongovernmental organizations (NGOs) began to proliferate at exponential rates. Concurrently, the quest to find a way to punish international crimes shifted from being the exclusive responsibility of legal experts to being shared with international human rights activism. NGOs helped birth the ICC through advocacy and championing the prosecution of perpetrators of crimes against humanity. NGOs closely monitor the organization's declarations and actions, ensuring that the work that is being executed on behalf of the ICC is fulfilling its objectives and responsibilities to civil society. According to Benjamin Schiff, "From the Statute Conference onward, the relationship between the ICC and the NGOs has probably been closer, more consistent, and more vital to the Court than have analogous relations between NGOs and any other international organization." There are a number of NGOs working on a variety of issues related to the ICC. The NGO Coalition for the International Criminal Court has served as a sort of umbrella for NGOs to coordinate with each other on similar objectives related to the ICC. The CICC has 2,500 member organizations in 150 different countries. The original steering committee included representatives from the World Federalist Movement, the International Commission of Jurists, Amnesty International, the Lawyers Committee for Human Rights, Human Rights Watch, Parliamentarians for Global Action, and No Peace Without Justice. Today, many of the NGOs with which the ICC cooperates are members of the CICC. These organizations come from a range of backgrounds, spanning from major international NGOs such as Human Rights Watch and Amnesty International, to smaller, more local organizations focused on peace and justice missions. Many work closely with states, such as the International Criminal Law Network, founded and predominantly funded by the Hague municipality and the Dutch Ministries of Defense and Foreign Affairs. The CICC also claims organizations that are themselves federations, such as the International Federation of Human Rights Leagues (FIDH). CICC members subscribe to three principles that permit them to work under the umbrella of the CICC, so long as their objectives match them. The NGOs that work under the CICC do not normally pursue agendas exclusive to the work of the Court; rather, they may work for broader causes, such as general human rights issues, victims' rights, gender rights, rule of law, conflict mediation, and peace. The CICC coordinates their efforts to improve the efficiency of NGOs' contributions to the Court and to pool their influence on major common issues. From the ICC side, it has been useful to have the CICC channel NGO contacts with the Court so that its officials do not have to interact individually with thousands of separate organizations. NGOs have been crucial to the evolution of the ICC, as they assisted in the creation of the normative climate that urged states to seriously consider the Court's formation.
Their legal experts helped shape the Statute, while their lobbying efforts built support for it. They advocate Statute ratification globally and work at expert and political levels within member states for passage of necessary domestic legislation. NGOs are strongly represented at meetings of the Assembly of States Parties, and they use the ASP meetings to press for decisions promoting their priorities. Many of these NGOs have reasonable access to important officials at the ICC because of their involvement during the Statute process. They are engaged in monitoring, commenting upon, and assisting in the ICC's activities. The ICC often depends on NGOs to interact with local populations. The Registry Public Information Office personnel and Victims Participation and Reparations Section officials hold seminars for local leaders, professionals and the media to spread the word about the Court. These are the kinds of events that are often hosted or organized by local NGOs. Because there can be challenges with determining which of these NGOs are legitimate, CICC regional representatives often have the ability to help screen and identify trustworthy organizations. However, NGOs are also "sources of criticism, exhortation and pressure upon" the ICC. The ICC depends heavily on NGOs for its operations. Although NGOs and states cannot directly impact the judicial nucleus of the organization, they can impart information on crimes, can help locate victims and witnesses, and can promote and organize victim participation. NGOs outwardly comment on the Court's operations, "push for expansion of its activities especially in the new justice areas of outreach in conflict areas, in victims' participation and reparations, and in upholding due-process standards and defense 'equality of arms' and so implicitly set an agenda for the future evolution of the ICC." The relatively uninterrupted progression of NGO involvement with the ICC may mean that NGOs have become repositories of more institutional historical knowledge about the ICC than its national representatives, and have greater expertise than some of the organization's employees themselves. While NGOs look to mold the ICC to satisfy the interests and priorities that they have worked for since the early 1990s, they unavoidably press against the limits imposed upon the ICC by the states that are members of the organization. NGOs can pursue their own mandates, irrespective of whether they are compatible with those of other NGOs, while the ICC must respond to the complexities of its own mandate as well as those of the states and NGOs. Another issue has been that NGOs possess "exaggerated senses of their ownership over the organization and, having been vital to and successful in promoting the Court, were not managing to redefine their roles to permit the Court its necessary independence." Additionally, because such a gap exists between the large human rights organizations and the smaller peace-oriented organizations, it is difficult for ICC officials to manage and satisfy all of the NGOs they work with. "ICC officials recognize that the NGOs pursue their own agendas, and that they will seek to pressure the ICC in the direction of their own priorities rather than necessarily understanding or being fully sympathetic to the myriad constraints and pressures under which the Court operates." Both the ICC and the NGO community avoid criticizing each other publicly or vehemently, although NGOs have released advisory and cautionary messages regarding the ICC.
They avoid taking stances that could potentially give the Court's adversaries, particularly the US, more motive to berate the organization. The ICC has been accused of bias and of being a tool of Western imperialism, only punishing leaders from small, weak states while ignoring crimes committed by richer and more powerful states. This sentiment has been expressed particularly by African leaders due to an alleged disproportionate focus of the Court on Africa, while it claims to have a global mandate; until January 2016, all nine situations which the ICC had been investigating were in African countries. The prosecution of Kenyan Deputy President William Ruto and President Uhuru Kenyatta (both charged before coming into office) led to the Kenyan parliament passing a motion calling for Kenya's withdrawal from the ICC, and the country called on the other 33 African states party to the ICC to withdraw their support, an issue which was discussed at a special African Union (AU) summit in October 2013. Though the ICC has denied the charge of disproportionately targeting African leaders, and claims to stand up for victims wherever they may be, Kenya was not alone in criticising the ICC. Sudanese President Omar al-Bashir visited Kenya, South Africa, China, Nigeria, Saudi Arabia, United Arab Emirates, Egypt, Ethiopia, Qatar and several other countries despite an outstanding ICC warrant for his arrest but was not arrested; he said that the charges against him were "exaggerated" and that the ICC was a part of a "Western plot" against him. Ivory Coast's government opted not to transfer former first lady Simone Gbagbo to the court but to instead try her at home. Rwanda's ambassador to the African Union, Joseph Nsengimana, argued that "It is not only the case of Kenya. We have seen international justice become more and more a political matter." Ugandan President Yoweri Museveni accused the ICC of "mishandling complex African issues." Ethiopian Prime Minister Hailemariam Desalegn, at the time AU chairman, told the UN General Assembly at the General debate of the sixty-eighth session of the United Nations General Assembly: "The manner in which the ICC has been operating has left a very bad impression in Africa. It is totally unacceptable." South African President Jacob Zuma said the perceptions of the ICC as "unreasonable" led to the calling of the special AU summit on 13 October 2015. Botswana is a notable supporter of the ICC in Africa. At the summit, the AU did not endorse the proposal for a collective withdrawal from the ICC due to lack of support for the idea. However, the summit did conclude that serving heads of state should not be put on trial and that the Kenyan cases should be deferred. Ethiopian Foreign Minister Tedros Adhanom said: "We have rejected the double standard that the ICC is applying in dispensing international justice." Despite these calls, the ICC went ahead with requiring William Ruto to attend his trial. The UNSC was then asked to consider deferring the trials of Kenyatta and Ruto for a year, but this was rejected. In November, the ICC's Assembly of States Parties responded to Kenya's calls for an exemption for sitting heads of state by agreeing to consider amendments to the Rome Statute to address the concerns. On 7 October 2016, Burundi announced that it would leave the ICC, after the court began investigating political violence in that nation.
In the subsequent two weeks, South Africa and Gambia also announced their intention to leave the court, with Kenya and Namibia reportedly also considering departure. All three nations cited the fact that all 39 people indicted by the court over its history had been African and that the court had made no effort to investigate war crimes tied to the 2003 invasion of Iraq. However, following Gambia's presidential election later that year, which ended the long rule of Yahya Jammeh, Gambia rescinded its withdrawal notification. The High Court of South Africa ruled on 2 February 2017 that the South African government's notice to withdraw was unconstitutional and invalid. On 7 March 2017 the South African government formally revoked its intention to withdraw; however, the ruling ANC revealed on 5 July 2017 that its intention to withdraw stood. The United States Department of State argues that there are "insufficient checks and balances on the authority of the ICC prosecutor and judges" and "insufficient protection against politicized prosecutions or other abuses". The current United States law on the ICC is the American Service-Members' Protection Act (ASPA), 116 Stat. 820. The ASPA authorizes the President of the United States to use "all means necessary and appropriate to bring about the release of any U.S. or allied personnel being detained or imprisoned by, on behalf of, or at the request of the International Criminal Court." This authorization has led the act to be nicknamed the "Hague Invasion Act", because freeing U.S. citizens by force might be possible only through military action. On 10 September 2018, John R. Bolton, in his first major address as U.S. National Security Advisor, reiterated that the ICC lacks checks and balances, exercises "jurisdiction over crimes that have disputed and ambiguous definitions," and has failed to "deter and punish atrocity crimes." The ICC, said Bolton, is "superfluous" given that "domestic judicial systems already hold American citizens to the highest legal and ethical standards." He added that the U.S. would do everything "to protect our citizens" should the ICC attempt to prosecute U.S. servicemen over alleged detainee abuse in Afghanistan. In that event, ICC judges and prosecutors would be barred from entering the U.S., their funds in the U.S. would be sanctioned and the U.S. "will prosecute them in the US criminal system. We will do the same for any company or state that assists an ICC investigation of Americans", Bolton said. He also criticized Palestinian efforts to bring Israel before the ICC over allegations of human rights abuses in the West Bank and Gaza. The ICC responded that it would continue to investigate war crimes undeterred. On 11 June 2020, Mike Pompeo and U.S. President Donald Trump announced sanctions on ICC officials and employees, as well as their families, involved in investigating alleged crimes against humanity committed by U.S. armed forces in Afghanistan. This move was widely criticized by human rights groups. Concerning the independent Office of Public Counsel for the Defence (OPCD), Thomas Lubanga's defence team said they were given a smaller budget than the Prosecutor and that evidence and witness statements were slow to arrive. The ICC also faces practical limitations. Human Rights Watch (HRW) reported that the ICC's prosecutor team took no account of the roles played by the governments in the conflicts in Uganda, Rwanda, and Congo.
This led to flawed investigations, because the ICC reached its conclusions without considering the governments' positions and actions in those conflicts. Research suggests that prosecutions of leaders in the ICC make dictators less likely to peacefully step down. It is also argued that justice is a means to peace: "As a result, the ICC has been used as a means of intervention in ongoing conflicts with the expectation that the indictments, arrests, and trials of elite perpetrators have deterrence and preventive effects for atrocity crimes. Despite these legitimate intentions and great expectations, there is little evidence of the efficacy of justice as a means to peace". That the ICC cannot mount successful cases without state cooperation is problematic for several reasons. It means that the ICC acts inconsistently in its selection of cases, is prevented from taking on hard cases and loses legitimacy. It also gives the ICC less deterrent value, as potential perpetrators of war crimes know that they can avoid ICC judgment by taking over government and refusing to cooperate. The fundamental principle of complementarity of the ICC Rome Statute is often taken for granted in the legal analysis of international criminal law and its jurisprudence. The thorny issue of the principle's actual application first arose in 2008, when William Schabas published his influential paper. However, despite Schabas' theoretical impact, no substantive research was conducted by other scholars on this issue for quite some time. In June 2017, Victor Tsilonis advanced the same criticism, reinforced by events, the practices of the Office of the Prosecutor and ICC cases, in the Essays in Honour of Nestor Courakis. His paper argues that the Al-Senussi case is arguably the first instance of the complementarity principle's actual implementation, eleven years after the ratification of the Rome Statute of the International Criminal Court. On the other hand, the Chief Prosecutor, Fatou Bensouda, recently invoked the principle of complementarity in the situation between Russia and Georgia in the Ossetia region. Moreover, following the threats of certain African states (initially Burundi, Gambia and South Africa) to withdraw their ratifications, Bensouda again referred to the principle of complementarity as a core principle of the ICC's jurisdiction and focused more extensively on the principle's application in the latest Office of the Prosecutor's Report on Preliminary Examination Activities 2016. Some advocates have suggested that the ICC go "beyond complementarity" and systematically support national capacity for prosecutions. They argue that national prosecutions, where possible, are more cost-effective, preferable to victims and sustainable.
https://en.wikipedia.org/wiki?curid=14880
Iberian Peninsula The Iberian Peninsula, also known as Iberia, is located in the southwest corner of Europe, defining the westernmost edge of Eurasia. The peninsula is principally divided between Spain and Portugal, comprising most of their territory, as well as a small area of Southern France, Andorra and the British overseas territory of Gibraltar. With an area of approximately 583,254 km2 and a population of roughly 53 million, it is the second largest European peninsula by area, after the Scandinavian Peninsula. The word "Iberia" is a noun adapted from the Latin word "Hiberia" originating in the Ancient Greek word Ἰβηρία, used by Greek geographers under the rule of the Roman Empire to refer to what is known today in English as the Iberian Peninsula. At that time, the name did not describe a single geographical entity or a distinct population; the same name was used for the Kingdom of Kartli in the Caucasus, the core region of what would become the Kingdom of Georgia. It was Strabo who first reported the delineation of "Iberia" from Gaul ("Keltikē") by the Pyrenees and included the entire land mass southwest (he says "west") from there. With the fall of the Roman Empire and the establishment of the new Castilian language in Spain, the word "Iberia" carried on the Roman word "Hiberia" and the Greek word "Ἰβηρία". The ancient Greeks reached the Iberian Peninsula, of which they had heard from the Phoenicians, by voyaging westward on the Mediterranean. Hecataeus of Miletus was the first known to use the term "Iberia", which he wrote about circa 500 BC. Herodotus of Halicarnassus says of the Phocaeans that "it was they who made the Greeks acquainted with […] Iberia." According to Strabo, prior historians used "Iberia" to mean the country "this side of the Ἶβηρος" as far north as the river Rhône in France, but in his day they set the Pyrenees as the limit. Polybius respects that limit, but identifies Iberia as the Mediterranean side as far south as Gibraltar, with the Atlantic side having no name. Elsewhere he says that Saguntum is "on the seaward foot of the range of hills connecting Iberia and Celtiberia." Strabo refers to the Carretanians as people "of the Iberian stock" living in the Pyrenees, who are distinct from either Celts or Celtiberians. According to Charles Ebel, the ancient sources in both Latin and Greek use Hispania and Hiberia (Greek: Iberia) as synonyms. The confusion of the words arose from an overlap in political and geographic perspectives. The Latin word "Hiberia", similar to the Greek "Iberia", literally translates to "land of the Hiberians". This word was derived from the river Ebro, which the Romans called "Hiberus". "Hiber" (Iberian) was thus used as a term for peoples living near the river Ebro. The first mention in Roman literature was by the annalist poet Ennius in 200 BC. Virgil refers to the "Ipacatos Hiberos" ("restless Iberi") in his Georgics. The Roman geographers and other prose writers from the time of the late Roman Republic called the entire peninsula "Hispania". In Greek and Roman antiquity, the name "Hesperia" was used for both the Italian and Iberian Peninsula; in the latter case "Hesperia Ultima" (referring to its position in the far west) appears as a form of disambiguation from the former among Roman writers. As they became politically interested in the former Carthaginian territories, the Romans began to use the names "Hispania Citerior" and "Hispania Ulterior" for 'near' and 'far' Hispania.
At the time Hispania was made up of three Roman provinces: Hispania Baetica, Hispania Tarraconensis, and Hispania Lusitania. Strabo says that the Romans use "Hispania" and "Iberia" synonymously, distinguishing between the "near" northern and the "far" southern provinces. (The name "Iberia" was ambiguous, being also the name of the Kingdom of Iberia in the Caucasus.) Whatever languages may generally have been spoken on the peninsula soon gave way to Latin, except for that of the Vascones, which was preserved as a language isolate by the barrier of the Pyrenees. The modern phrase "Iberian Peninsula" was coined by the French geographer Jean-Baptiste Bory de Saint-Vincent in his 1823 work "Guide du Voyageur en Espagne". Prior to that date, geographers had used the terms "Spanish Peninsula" or "Pyrenaean Peninsula". The Iberian Peninsula has always been associated with the River Ebro (Ibēros in ancient Greek and Ibērus or Hibērus in Latin). The association was so well known it was hardly necessary to state; for example, Ibēria was the country "this side of the Ibērus" in Strabo. Pliny goes so far as to assert that the Greeks had called "the whole of Spain" Hiberia because of the Hiberus River. The river appears in the Ebro Treaty of 226 BC between Rome and Carthage, setting the limit of Carthaginian interest at the Ebro. The fullest description of the treaty, given in Appian, uses Ibērus. With reference to this border, Polybius states that the "native name" is "Ibēr", apparently the original word, stripped of its Greek or Latin "-os" or "-us" termination. The early range of these natives, which geographers and historians place from the present southern Spain to the present southern France along the Mediterranean coast, is marked by instances of a readable script expressing a yet unknown language, dubbed "Iberian". Whether this was the native name or was given to them by the Greeks for their residence near the Ebro remains unknown. Credence in Polybius imposes certain limitations on etymologizing: if the language remains unknown, the meanings of the words, including Iber, must also remain unknown. In modern Basque, the word "ibar" means "valley" or "watered meadow", while "ibai" means "river", but there is no proof relating the etymology of the Ebro River with these Basque names. The Iberian Peninsula has been inhabited for at least 1.2 million years, as remains found at sites in the Atapuerca Mountains demonstrate. Among these sites is the cave of Gran Dolina, where six hominin skeletons, dated between 780,000 and one million years ago, were found in 1994. Experts have debated whether these skeletons belong to the species "Homo erectus", "Homo heidelbergensis", or a new species called "Homo antecessor". Around 200,000 BP, during the Lower Paleolithic period, Neanderthals first entered the Iberian Peninsula. Around 70,000 BP, during the Middle Paleolithic period, the last glacial event began and the Neanderthal Mousterian culture was established. Around 37,000 BP, during the Upper Paleolithic, the Neanderthal Châtelperronian cultural period began. Emanating from Southern France, this culture extended into the north of the peninsula. It continued to exist until around 30,000 BP, when the Neanderthals faced extinction. About 40,000 years ago, anatomically modern humans entered the Iberian Peninsula from Southern France.
Here, this genetically homogeneous population (characterized by the M173 mutation in the Y chromosome) developed the M343 mutation, giving rise to Haplogroup R1b, still the most common in modern Portuguese and Spanish males. On the Iberian Peninsula, modern humans developed a series of different cultures, such as the Aurignacian, Gravettian, Solutrean and Magdalenian cultures, some of them characterized by the complex forms of the art of the Upper Paleolithic. During the Neolithic expansion, various megalithic cultures developed in the Iberian Peninsula. An open-sea navigation culture from the eastern Mediterranean, called the Cardium culture, also extended its influence to the eastern coasts of the peninsula, possibly as early as the 5th millennium BC. These people may have had some relation to the subsequent development of the Iberian civilization. In the Chalcolithic (c. 3000 BC), a series of complex cultures developed that would give rise to the peninsula's first civilizations and to extensive exchange networks reaching to the Baltic, Middle East and North Africa. Around 2800–2700 BC, the Beaker culture, which produced the "Maritime Bell Beaker", probably originated in the vibrant copper-using communities of the Tagus estuary in Portugal and spread from there to many parts of western Europe. Bronze Age cultures developed beginning c. 1800 BC, when the civilization of Los Millares was followed by that of El Argar. From this centre, bronze technology spread to other cultures like the Bronze of Levante, South-Western Iberian Bronze and Las Cogotas. In the Late Bronze Age, the urban civilisation of Tartessos developed in the area of modern western Andalusia, characterized by Phoenician influence and using the Southwest Paleohispanic script for its Tartessian language, not related to the Iberian language. Early in the first millennium BC, several waves of Pre-Celts and Celts migrated from Central Europe, thus partially changing the peninsula's ethnic landscape to Indo-European-speaking in its northern and western regions. In Northwestern Iberia (modern Northern Portugal, Asturias and Galicia), a Celtic culture developed, the Castro culture, with a large number of hill forts and some fortified cities. By the Iron Age, starting in the 7th century BC, the Iberian Peninsula consisted of complex agrarian and urban civilizations, either Pre-Celtic or Celtic (such as the Lusitanians, Celtiberians, Gallaeci, Astures, Celtici and others), the cultures of the Iberians in the eastern and southern zones and the culture of the Aquitanians in the western portion of the Pyrenees. As early as the 12th century BC, the Phoenicians, a thalassocratic civilization originally from the East Mediterranean, began to explore the coastline of the peninsula, interacting with the metal-rich communities in the southwest of the peninsula (known at the time as the semi-mythical Tartessos). Around 1100 BC, Phoenician merchants founded the trading colony of Gadir or Gades (modern day Cádiz). The Phoenicians established a permanent trading port in the Gadir colony circa 800 BC in response to the increasing demand for silver from the Assyrian Empire. The seafaring Phoenicians, Greeks and Carthaginians successively settled along the Mediterranean coast and founded trading colonies there over a period of several centuries. In the 8th century BC, the first Greek colonies, such as Emporion (modern Empúries), were founded along the Mediterranean coast on the east, leaving the south coast to the Phoenicians.
The Greeks coined the name Iberia, after the river Iber (Ebro). In the sixth century BC, the Carthaginians arrived in the peninsula while struggling with the Greeks for control of the Western Mediterranean. Their most important colony was Carthago Nova (modern-day Cartagena, Spain). In 218 BC, during the Second Punic War against the Carthaginians, the first Roman troops occupied the Iberian Peninsula; however, it was not until the reign of Augustus that the peninsula was fully annexed, after 200 years of war with the Celts and Iberians. The result was the creation of the province of Hispania. It was divided into Hispania Ulterior and Hispania Citerior during the late Roman Republic, and during the Roman Empire, it was divided into Hispania Tarraconensis in the northeast, Hispania Baetica in the south and Lusitania in the southwest. Hispania supplied the Roman Empire with silver, food, olive oil, wine, and metal. The emperors Trajan, Hadrian, Marcus Aurelius, and Theodosius I, the philosopher Seneca the Younger, and the poets Martial and Lucan were born to families living on the Iberian Peninsula. During their 600-year occupation of the Iberian Peninsula, the Romans introduced the Latin language, which influenced many of the languages that exist today in the Iberian Peninsula. In the early fifth century, Germanic peoples occupied the peninsula, namely the Suebi, the Vandals (Silingi and Hasdingi) and their allies, the Alans. Only the kingdom of the Suebi (Quadi and Marcomanni) would endure after the arrival of another wave of Germanic invaders, the Visigoths, who occupied all of the Iberian Peninsula and expelled or partially integrated the Vandals and the Alans. The Visigoths eventually occupied the Suebi kingdom and its capital city, Bracara (modern day Braga), in 584–585. They would also occupy the Byzantine province of Spania (552–624) in the south of the peninsula and the Balearic Islands. In 711, a Muslim army conquered the Visigothic Kingdom in Hispania. Under Tariq ibn Ziyad, the Islamic army landed at Gibraltar and, in an eight-year campaign, occupied all except the northern kingdoms of the Iberian Peninsula in the Umayyad conquest of Hispania. Al-Andalus (tr. "al-ʾAndalūs", possibly "Land of the Vandals") is the Arabic name given to Muslim Iberia. The Muslim conquerors were Arabs and Berbers; following the conquest, conversion and arabization of the Hispano-Roman population took place ("muwalladum" or "Muladi"). After a long process, spurred on in the 9th and 10th centuries, the majority of the population in Al-Andalus eventually converted to Islam. The Muslims were referred to by the generic name "Moors". The Muslim population was divided by ethnicity (Arabs, Berbers, Muladi), and the supremacy of Arabs over the rest of the groups was a recurrent cause of strife, rivalry and hatred, particularly between Arabs and Berbers. Arab elites could be further divided into the Yemenites (first wave) and the Syrians (second wave). Christians and Jews were allowed to live as part of a stratified society under the "dhimmah" system, although Jews became very important in certain fields. Some Christians migrated to the Northern Christian kingdoms, while those who stayed in Al-Andalus progressively arabised and became known as "musta'arab" (mozarabs). The slave population comprised the "Ṣaqāliba" (literally meaning "Slavs", although they were slaves of generic European origin) as well as Sudanese slaves.
The Umayyad rulers faced a major Berber Revolt in the early 740s; the uprising originally broke out in North Africa (Tangier) and later spread across the peninsula. Following the Abbasid takeover from the Umayyads and the shift of the economic centre of the Islamic Caliphate from Damascus to Baghdad, the western province of al-Andalus was marginalised and ultimately became politically autonomous as an independent emirate in 756, ruled by one of the last surviving Umayyad royals, Abd al-Rahman I. Al-Andalus became a center of culture and learning, especially during the Caliphate of Córdoba. The Caliphate reached the height of its power under the rule of Abd-ar-Rahman III and his successor al-Hakam II, becoming then, in the view of Jaime Vicens Vives, "the most powerful state in Europe". Abd-ar-Rahman III also managed to expand the clout of Al-Andalus across the Strait of Gibraltar, waging war against the Fatimid Empire, as did his successor. Between the 8th and 12th centuries, Al-Andalus enjoyed a notable urban vitality, both through the growth of the preexisting cities and through the founding of new ones: Córdoba reached a population of 100,000 by the 10th century, Toledo 30,000 by the 11th century and Seville 80,000 by the 12th century. During the Middle Ages, the north of the peninsula housed many small Christian polities, including the Kingdom of Castile, the Kingdom of Aragon, the Kingdom of Navarre, the Kingdom of León and the Kingdom of Portugal, as well as a number of counties that spawned from the Carolingian Marca Hispanica. Christian and Muslim polities fought one another and formed shifting alliances among themselves. The Christian kingdoms progressively expanded south, taking over Muslim territory in what is historiographically known as the "Reconquista" (the latter concept has, however, been noted as a product of the claim to a pre-existing Spanish Catholic nation, and it would not necessarily convey adequately "the complexity of centuries of warring and other more peaceable interactions between Muslim and Christian kingdoms in medieval Iberia between 711 and 1492"). The Caliphate of Córdoba descended into a period of upheaval and civil war (the Fitna of al-Andalus) and collapsed in the early 11th century, spawning a series of ephemeral statelets, the "taifas". Until the mid-11th century, most of the territorial expansion southwards of the Kingdom of Asturias/León was carried out through a policy of agricultural colonization rather than through military operations; then, profiting from the feebleness of the taifa principalities, Ferdinand I of León seized Lamego and Viseu (1057–1058) and Coimbra (1064) away from the Taifa of Badajoz (at times at war with the Taifa of Seville). Meanwhile, in the same year Coimbra was conquered, in the northeastern part of the Iberian Peninsula the Kingdom of Aragon took Barbastro from the Hudid Taifa of Lérida as part of an international expedition sanctioned by Pope Alexander II. Most critically, Alfonso VI of León-Castile conquered Toledo and its wider taifa in 1085, in what was seen as a critical event at the time, entailing also a huge territorial expansion, advancing from the Sistema Central to La Mancha. In 1086, following the siege of Zaragoza by Alfonso VI of León-Castile, the Almoravids, religious zealots originally from the deserts of the Maghreb, landed in the Iberian Peninsula, and, having inflicted a serious defeat on Alfonso VI at the battle of Zalaca, began to seize control of the remaining taifas.
The Almoravids in the Iberian Peninsula progressively relaxed strict observance of their faith and treated both Jews and Mozarabs harshly, facing uprisings across the peninsula, initially in the western part. The Almohads, another North-African Muslim sect of Masmuda Berber origin who had previously undermined the Almoravid rule south of the Strait of Gibraltar, first entered the peninsula in 1146. Somewhat counter to the trend in other parts of the Latin West since the 10th century, the period comprising the 11th to 13th centuries was not one of weakening monarchical power in the Christian kingdoms. The relatively novel concept of "frontier" (Sp: "frontera"), already reported in Aragon by the second half of the 11th century, became widespread in the Christian Iberian kingdoms by the beginning of the 13th century, in relation to the more or less conflictual border with Muslim lands. By the beginning of the 13th century, a power reorientation took place in the Iberian Peninsula (parallel to the Christian expansion in Southern Iberia and the increasing commercial impetus of Christian powers across the Mediterranean), and to a large extent, trade-wise, the Iberian Peninsula reoriented towards the north, away from the Muslim world. During the Middle Ages, the monarchs of Castile and León, from Alfonso V and Alfonso VI (crowned "Hispaniae Imperator") to Alfonso X and Alfonso XI, tended to embrace an imperial ideal based on a dual Christian and Jewish ideology. Merchants from Genoa and Pisa were conducting intense trading activity in Catalonia already by the 12th century, and later in Portugal. From the 13th century, the Crown of Aragon expanded overseas; led by Catalans, it attained an overseas empire in the Western Mediterranean, with a presence on Mediterranean islands such as the Balearics, Sicily and Sardinia, and even conquering Naples in the mid-15th century. Genoese merchants invested heavily in the Iberian commercial enterprise, with Lisbon becoming, according to Virgínia Rau, the "great centre of Genoese trade" in the early 14th century. The Portuguese would later detach their trade to some extent from Genoese influence. The Nasrid Kingdom of Granada, neighbouring the Strait of Gibraltar and founded upon a vassalage relationship with the Crown of Castile, also insinuated itself into the European mercantile network, with its ports fostering intense trading relations with the Genoese as well, but also with the Catalans, and to a lesser extent, with the Venetians, the Florentines, and the Portuguese. Between 1275 and 1340, Granada became involved in the "crisis of the Strait", and was caught in a complex geopolitical struggle ("a kaleidoscope of alliances") with multiple powers vying for dominance of the Western Mediterranean, complicated by the unstable relations of Muslim Granada with the Marinid Sultanate. The conflict reached a climax in the 1340 Battle of Río Salado, when, this time in alliance with Granada, the Marinid Sultan (and Caliph pretender) Abu al-Hasan Ali ibn Othman made the last Marinid attempt to set up a power base in the Iberian Peninsula. The resounding Muslim defeat by an alliance of Castile and Portugal, with naval support from Aragon and Genoa, had lasting consequences: it ensured Christian supremacy over the Iberian Peninsula and the preeminence of Christian fleets in the Western Mediterranean. The 1348–1350 bubonic plague devastated large parts of the Iberian Peninsula, leading to a sudden economic halt.
Many settlements in northern Castile and Catalonia were abandoned. An additional consequence of the plague in the Iberian realms was the start of hostility and outright violence towards religious minorities, particularly the Jews. The 14th century was a period of great upheaval in the Iberian realms. After the death of Peter the Cruel of Castile (reigned 1350–69), the House of Trastámara succeeded to the throne in the person of Peter's half brother, Henry II (reigned 1369–79). In the kingdom of Aragón, following the death without heirs of John I (reigned 1387–96) and Martin I (reigned 1396–1410), a prince of the House of Trastámara, Ferdinand I (reigned 1412–16), succeeded to the Aragonese throne. The Hundred Years' War also spilled over into the Iberian Peninsula, with Castile particularly taking a role in the conflict by providing key naval support to France that helped lead to that nation's eventual victory. After the accession of Henry III to the throne of Castile, the populace, exasperated by the preponderance of Jewish influence, perpetrated a massacre of Jews at Toledo. In 1391, mobs went from town to town throughout Castile and Aragon, killing an estimated 50,000 Jews, or even as many as 100,000, according to Jane Gerber. Women and children were sold as slaves to Muslims, and many synagogues were converted into churches. According to Hasdai Crescas, about 70 Jewish communities were destroyed. During the 15th century, Portugal, which had ended its southwards territorial expansion across the Iberian Peninsula in 1249 with the conquest of the Algarve, initiated an overseas expansion in parallel to the rise of the House of Aviz, conquering Ceuta (1415), arriving at Porto Santo (1418), Madeira and the Azores, and establishing additional outposts along the North-African coast. During the Late Middle Ages, the Jews acquired considerable power and influence in Castile and Aragon. Throughout the Late Middle Ages, the Crown of Aragon took part in the Mediterranean slave trade, with Barcelona (already in the 14th century), Valencia (particularly in the 15th century) and, to a lesser extent, Palma de Mallorca (since the 13th century) becoming dynamic centres in this regard, involving chiefly eastern and Muslim peoples. Castile engaged in this economic activity later, chiefly by joining the incipient Atlantic slave trade in sub-Saharan people driven by Portugal (Lisbon being the largest slave centre in Western Europe) since the mid-15th century, with Seville becoming another key hub for the slave trade. As the conquest of the Nasrid kingdom of Granada advanced, the seizure of Málaga added another notable slave centre for the Crown of Castile. By the end of the 15th century (1490) the Iberian kingdoms (including here the Balearic Islands) had an estimated population of 6.525 million (Crown of Castile, 4.3 million; Portugal, 1.0 million; Principality of Catalonia, 0.3 million; Kingdom of Valencia, 0.255 million; Kingdom of Granada, 0.25 million; Kingdom of Aragon, 0.25 million; Kingdom of Navarre, 0.12 million and the Kingdom of Mallorca, 0.05 million). For three decades in the 15th century, the "Hermandad de las Marismas", the trading association formed by the ports of Castile along the Cantabrian coast, resembling in some ways the Hanseatic League, fought against the latter, which was an ally of England and a rival of Castile in political and economic terms. Castile sought to claim the Gulf of Biscay as its own.
In 1419, the powerful Castilian navy thoroughly defeated a Hanseatic fleet at La Rochelle. In the late 15th century, the imperial ambition of the Iberian powers was pushed to new heights by the Catholic Monarchs in Castile and Aragon, and by Manuel I in Portugal. The last Muslim stronghold, Granada, was conquered by a combined Castilian and Aragonese force in 1492. As many as 100,000 Moors died or were enslaved in the military campaign, while 200,000 fled to North Africa. Muslims and Jews throughout the period were treated with varying degrees of tolerance in the different Christian kingdoms. After the fall of Granada, all Muslims and Jews were ordered to convert to Christianity or face expulsion—as many as 200,000 Jews were expelled from Spain. Historian Henry Kamen estimates that some 25,000 Jews died en route from Spain. The Jews were also expelled from Sicily and Sardinia, which were under Aragonese rule, and an estimated 37,000 to 100,000 Jews left. In 1497, King Manuel I of Portugal forced all Jews in his kingdom to convert or leave. That same year he expelled all Muslims that were not slaves, and in 1502 the Catholic Monarchs followed suit, imposing the choice of conversion to Christianity or exile and loss of property. Many Jews and Muslims fled to North Africa and the Ottoman Empire, while others publicly converted to Christianity and became known respectively as Marranos and Moriscos (after the old term "Moors"). However, many of these continued to practice their religion in secret. The Moriscos revolted several times and were ultimately forcibly expelled from Spain in the early 17th century. From 1609 to 1614, over 300,000 Moriscos were sent on ships to North Africa and other locations, and, of this figure, around 50,000 died resisting the expulsion, and 60,000 died on the journey. The shift of relative supremacy from Portugal to the Hispanic Monarchy in the late 15th century has been described as one of the few cases of avoidance of the Thucydides Trap. Challenging the conventions about the advent of modernity, Immanuel Wallerstein pushed back the origins of capitalist modernity to the Iberian expansion of the 15th century. During the 16th century Spain created a vast empire in the Americas, with a state monopoly in Seville becoming the center of the ensuing transatlantic trade, based on bullion. Iberian imperialism, starting with the Portuguese establishment of routes to Asia and the subsequent transatlantic trade with the New World by Spaniards and Portuguese (along with the Dutch, English and French), precipitated the economic decline of the Italian Peninsula. The 16th century was one of population growth with increased pressure over resources; in the case of the Iberian Peninsula a part of the population moved to the Americas, while Jews and Moriscos were banished, relocating to other places in the Mediterranean Basin. Most of the Moriscos remained in Spain after the Morisco revolt in Las Alpujarras during the mid-16th century, but roughly 300,000 of them were expelled from the country in 1609–1614 and emigrated "en masse" to North Africa. In 1580, after the political crisis that followed the 1578 death of King Sebastian, Portugal became a dynastic composite entity of the Habsburg Monarchy; thus, the whole peninsula was united politically during the period known as the Iberian Union (1580–1640).
During the reign of Philip II of Spain (I of Portugal), the Councils of Portugal, Italy, Flanders and Burgundy were added to the group of counselling institutions of the Hispanic Monarchy, to which the Councils of Castile, Aragon, Indies, Chamber of Castile, Inquisition, Orders, and Crusade already belonged, defining the organization of the Royal court that underpinned the system of councils through which the empire operated. During the Iberian union, the "first great wave" of the transatlantic slave trade happened, according to Enriqueta Vila Villar, as the new markets opened by the unification gave thrust to the slave trade. By 1600, the percentage of urban population for "Spain" was roughly 11.4%, while for Portugal the urban population was estimated at 14.1%; both were above the 7.6% European average of the time (exceeded only by the Low Countries and the Italian Peninsula). Some striking differences appeared among the different Iberian realms. Castile, extending across 60% of the territory of the peninsula and holding 80% of its population, was a rather urbanised country, yet with a widespread distribution of cities. Meanwhile, the urban population in the Crown of Aragon was highly concentrated in a handful of cities: Zaragoza (Kingdom of Aragon), Barcelona (Principality of Catalonia), and, to a lesser extent in the Kingdom of Valencia, Valencia, Alicante and Orihuela. The case of Portugal presented a hypertrophied capital, Lisbon (which greatly increased its population during the 16th century, from 56,000–60,000 inhabitants in 1527 to roughly 120,000 by the third quarter of the century), with its demographic dynamism stimulated by the Asian trade, followed at a great distance by Porto and Évora (each accounting for roughly 12,500 inhabitants). Throughout most of the 16th century, both Lisbon and Seville were among Western Europe's largest and most dynamic cities. The 17th century has largely been considered a very negative period for the Iberian economies, seen as a time of recession, crisis or even decline, with urban dynamism chiefly moving to Northern Europe. A dismantling of the inner city network in the Castilian plateau took place during this period (with a parallel accumulation of economic activity in the capital, Madrid), with only New Castile resisting recession in the interior. Regarding the Atlantic façade of Castile, aside from the severing of trade with Northern Europe, inter-regional trade with other regions of the Iberian Peninsula also suffered to some extent. In Aragon, which suffered from similar problems to Castile's, the expulsion of the Moriscos from the Kingdom of Valencia in 1609 aggravated the recession. Silk turned from a domestic industry into a raw commodity to be exported. However, the crisis was uneven (affecting the centre of the peninsula for longer), as both Portugal and the Mediterranean coastline recovered in the later part of the century, fuelling sustained growth. The aftermath of the intermittent 1640–1668 Portuguese Restoration War brought the House of Braganza as the new ruling dynasty in the Portuguese territories across the world (bar Ceuta), putting an end to the Iberian Union.
Despite both Portugal and Spain starting their path towards modernization with the liberal revolutions of the first half of the 19th century, this process was, concerning structural changes in the geographical distribution of the population, relatively tame compared to what took place after World War II in the Iberian Peninsula, when strong urban development ran in parallel to substantial rural flight patterns. The Iberian Peninsula is the westernmost of the three major southern European peninsulas—the Iberian, Italian, and Balkan. It is bordered on the southeast and east by the Mediterranean Sea, and on the north, west, and southwest by the Atlantic Ocean. The Pyrenees mountains are situated along the northeast edge of the peninsula, where it adjoins the rest of Europe. Its southern tip is very close to the northwest coast of Africa, separated from it by the Strait of Gibraltar and the Mediterranean Sea. The Iberian Peninsula encompasses 583,254 km2 and has very contrasting and uneven relief. The mountain ranges of the Iberian Peninsula are mainly distributed from west to east, and in some cases reach altitudes of approximately 3,000 m above mean sea level, resulting in the region having the second highest mean altitude (637 m) in Western Europe. The Iberian Peninsula extends from the southernmost extremity at Punta de Tarifa to the northernmost extremity at Punta de Estaca de Bares, a north–south distance obtained from the span of latitude and the degree length of a meridian, and from the westernmost extremity at Cabo da Roca to the easternmost extremity at Cap de Creus, an east–west distance obtained from the span of longitude and the degree length along the 40° N parallel; a sketch of this degree-length arithmetic follows below. The irregular, roughly octagonal shape of the peninsula contained within this spherical quadrangle was compared to an ox-hide by the geographer Strabo. About three quarters of that rough octagon is the Meseta Central, a vast plateau ranging from 610 to 760 m in altitude. It is located approximately in the centre, staggered slightly to the east and tilted slightly toward the west (the conventional centre of the Iberian Peninsula has long been considered Getafe, just south of Madrid). It is ringed by mountains and contains the sources of most of the rivers, which find their way through gaps in the mountain barriers on all sides. The coastline of the Iberian Peninsula is divided between the Mediterranean side and the Atlantic side. The coast has been inundated over time, with sea levels having risen from a minimum well below today's level at the Last Glacial Maximum (LGM) to the current level by 4,000 years BP. The coastal shelf created by sedimentation during that time remains below the surface; however, it was never very extensive on the Atlantic side, as the continental shelf drops rather steeply into the depths, leaving only a narrow strip of shelf along much of the Atlantic coast. At the isobath marking its edge, the shelf drops off into the deep ocean. The submarine topography of the coastal waters of the Iberian Peninsula has been studied extensively in the process of drilling for oil. Ultimately, the shelf drops into the Bay of Biscay on the north (an abyss), the Iberian abyssal plain on the west, and the Tagus abyssal plain to the south. In the north, between the continental shelf and the abyss, is an extension called the Galicia Bank, a plateau that also contains the Porto, Vigo, and Vasco da Gama seamounts, which form the Galicia interior basin.
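The degree-length arithmetic referenced above can be made concrete with a minimal sketch in Python. The coordinates and degree lengths below are approximate values assumed for illustration, not figures from the text: a spherical-Earth degree of latitude of about 111 km, with a degree of longitude shrinking by the cosine of the latitude, here evaluated at 40° N as in the passage.

```python
import math

# Assumed approximate coordinates of the peninsula's extremities
# (illustrative values, not taken from the source text):
LAT_SOUTH = 36.00   # Punta de Tarifa, degrees N
LAT_NORTH = 43.79   # Punta de Estaca de Bares, degrees N
LON_WEST = -9.50    # Cabo da Roca, degrees E (negative = west)
LON_EAST = 3.32     # Cap de Creus, degrees E

KM_PER_DEG_LAT = 111.0  # roughly constant along a meridian

# North-south extent: latitude span times the meridian degree length.
north_south_km = (LAT_NORTH - LAT_SOUTH) * KM_PER_DEG_LAT

# East-west extent: a degree of longitude spans cos(latitude) times the
# meridian degree length; the text evaluates it along the 40 N parallel.
km_per_deg_lon = KM_PER_DEG_LAT * math.cos(math.radians(40.0))
east_west_km = (LON_EAST - LON_WEST) * km_per_deg_lon

print(f"North-south extent: ~{north_south_km:.0f} km")
print(f"East-west extent at 40 N: ~{east_west_km:.0f} km")
```

With these assumed inputs the sketch yields roughly 865 km north to south and roughly 1,090 km east to west, which is the kind of estimate the passage derives from degree lengths.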
The southern border of these features is marked by Nazaré Canyon, which splits the continental shelf and leads directly into the abyss. The major rivers flow through the wide valleys between the mountain systems. These are the Ebro, Douro, Tagus, Guadiana and Guadalquivir. All rivers in the Iberian Peninsula are subject to seasonal variations in flow. The Tagus is the longest river on the peninsula and, like the Douro, flows westwards with its lower course in Portugal. The Guadiana river bends southwards and forms the border between Spain and Portugal in the last stretch of its course. The terrain of the Iberian Peninsula is largely mountainous. The Iberian Peninsula contains rocks of every geological period from the Ediacaran to the Recent, and almost every kind of rock is represented. World-class mineral deposits can also be found there. The core of the Iberian Peninsula consists of a Hercynian cratonic block known as the Iberian Massif. On the northeast, this is bounded by the Pyrenean fold belt, and on the southeast it is bounded by the Baetic System. These two fold chains are part of the Alpine belt. To the west, the peninsula is delimited by the continental boundary formed by the magma-poor opening of the Atlantic Ocean. The Hercynian Foldbelt is mostly buried by Mesozoic and Tertiary cover rocks to the east, but nevertheless outcrops through the Sistema Ibérico and the Catalan Mediterranean System. The Iberian Peninsula features one of the largest lithium deposit belts in Europe (an otherwise relatively scarce resource in the continent), scattered along zones of the Iberian Massif. Also in the Iberian Massif, and similarly to other Hercynian blocks in Europe, the peninsula hosts some uranium deposits, largely located in the Central Iberian Zone unit. The Iberian Pyrite Belt, located in the SW quadrant of the Peninsula, ranks among the most important volcanogenic massive sulphide districts on Earth, and it has been exploited for millennia. The Iberian Peninsula's location and topography, as well as the effects of large atmospheric circulation patterns, induce a NW to SE gradient of yearly precipitation (roughly from 2,000 mm to 300 mm). The Iberian Peninsula has two dominant climate types. One of these is the oceanic climate seen in the Atlantic coastal region, resulting in even temperatures throughout the year and relatively cool summers. However, most of Portugal and Spain have a Mediterranean climate, with precipitation and temperatures varying with latitude and position relative to the sea. There are also more localized semi-arid climates in central Spain, with temperatures resembling a more continental Mediterranean climate. Other extreme cases include highland alpine climates, such as in the Sierra Nevada, and areas of extremely low precipitation with desert or semi-arid climates, such as the Almería, Murcia and southern Alicante areas. The hottest temperatures in Europe are found in the Spanish and Portuguese interior, with Córdoba recording some of the highest July averages. The Spanish Mediterranean coast usually averages somewhat lower in summer. In sharp contrast, A Coruña, at the northern tip of Galicia, has a markedly cooler summer daytime high average. This cool and wet summer climate is replicated throughout most of the northern coastline. Winter temperatures are more consistent throughout the peninsula, although frosts are common in the Spanish interior, even though daytime highs are usually above the freezing point.
In Portugal, the warmest winters in the country are found in the Algarve, very similar to those of Huelva in Spain, while most of the Portuguese Atlantic coast has cool and humid winters, similar to Galicia. The current political configuration of the Iberian Peninsula comprises the bulk of Spain and Portugal, the whole microstate of Andorra, a small part of the French department of Pyrénées-Orientales (the French Cerdagne) and the British Overseas Territory of Gibraltar. French Cerdagne is on the south side of the Pyrenees mountain range, which runs along the border between Spain and France. For example, the Segre river, which runs west and then south to meet the Ebro, has its source on the French side. The Pyrenees range is often considered the northeastern boundary of the Iberian Peninsula, although the French coastline curves away from the rest of Europe north of the range. Regarding Spain and Portugal, this chiefly excludes the Macaronesian archipelagos (Azores and Madeira vis-à-vis Portugal and the Canary Islands vis-à-vis Spain), the Balearic Islands (Spain), and the Spanish territories in North Africa (most conspicuously the cities of Ceuta and Melilla), as well as unpopulated islets and rocks. The Iberian city network is dominated by three international metropolises (Madrid, Barcelona and Lisbon) and four regional metropolises (Valencia, Seville, Porto and Bilbao). The relatively weak integration of the network favours a competitive relationship between the different centres. Among these metropolises, Madrid stands out within the global urban hierarchy in terms of its status as a major service centre and enjoys the greatest degree of connectivity. According to Eurostat (2019), several metropolitan regions have a population of over one million. The woodlands of the Iberian Peninsula are distinct ecosystems. Although the various regions are each characterized by distinct vegetation, there are some similarities across the peninsula. While the borders between these regions are not clearly defined, there is a mutual influence that makes it very hard to establish boundaries, and some species find their optimal habitat in the intermediate areas. The endangered Iberian lynx ("Lynx pardinus") is a symbol of the Iberian mediterranean forest and of the fauna of the Iberian Peninsula altogether. The Iberian Peninsula is an important stopover on the East Atlantic flyway for birds migrating from northern Europe to Africa. For example, curlew sandpipers rest in the region of the Bay of Cádiz. In addition to the birds migrating through, some seven million wading birds from the north spend the winter in the estuaries and wetlands of the Iberian Peninsula, mainly at locations on the Atlantic coast. In Galicia are Ría de Arousa (a home of grey plover), Ría de Ortigueira, Ría de Corme and Ría de Laxe. In Portugal, the Aveiro Lagoon hosts "Recurvirostra avosetta", the common ringed plover, grey plover and little stint. Ribatejo Province on the Tagus supports "Recurvirostra avosetta", grey plover, dunlin, bar-tailed godwit and common redshank. In the Sado Estuary are dunlin, Eurasian curlew, grey plover and common redshank. The Algarve hosts red knot, common greenshank and turnstone. The Guadalquivir Marshes region of Andalusia and the Salinas de Cádiz are especially rich in wintering wading birds: Kentish plover, common ringed plover, sanderling, and black-tailed godwit in addition to the others.
And finally, the Ebro delta is home to all the species mentioned above. With the sole exception of Basque, which is of unknown origin, all modern Iberian languages descend from Vulgar Latin and belong to the Western Romance languages. Throughout history (and pre-history), many different languages have been spoken in the Iberian Peninsula, contributing to the formation and differentiation of the contemporary languages of Iberia; however, most of them have become extinct or fallen into disuse. Basque is the only non-Indo-European surviving language in Iberia and Western Europe. In modern times, Spanish (the official language of Spain, spoken by the entire 45 million population in the country, and the native language of about 36 million in Europe), Portuguese (the official language of Portugal, with a population over 10 million), Catalan (over 7 million speakers in Europe, 3.4 million with Catalan as first language), Galician (understood by 93% of the 1.5 million Galician population) and Basque (around 1 million speakers) are the most widely spoken languages in the Iberian Peninsula. Spanish and Portuguese have expanded beyond Iberia to the rest of the world, becoming global languages. Other minority Romance languages with some degree of recognition include the several varieties of Astur-Leonese, collectively amounting to about 0.6 million speakers, and Aragonese (spoken by barely 8% of the 130,000 people inhabiting Alto Aragón). Both Spain and Portugal have traditionally used a non-standard rail gauge (the 1,668 mm Iberian gauge) since the construction of the first railroads in the 19th century. Spain has progressively introduced the 1,435 mm standard gauge in its new high-speed rail network (one of the most extensive in the world), inaugurated in 1992 with the Madrid–Seville line, followed, to name a few, by the Madrid–Barcelona (2008), Madrid–Valencia (2010), an Alicante branch of the latter (2013) and the connection to France of the Barcelona line. Portugal, however, suspended all its high-speed rail projects in the wake of the 2008 financial crisis, putting an end for the time being to the possibility of a high-speed rail connection between Lisbon, Porto and Madrid. Handicapped by a mountainous range (the Pyrenees) hindering the connection to the rest of Europe, Spain (and subsidiarily Portugal) has only two meaningful rail connections to France capable of carrying freight, located at both ends of the mountain range. An international rail line across the Central Pyrenees linking Zaragoza and the French city of Pau through a tunnel existed in the past; however, an accident in the French part destroyed a stretch of the railroad in 1970, and the Canfranc Station has been a cul-de-sac since then. There are four points connecting the Portuguese and Spanish rail networks: Valença do Minho–Tui, Vilar Formoso–Fuentes de Oñoro, Marvão-Beirã–Valencia de Alcántara and Elvas–Badajoz. The development (as part of a Europe-wide effort) of the Central, Mediterranean and Atlantic rail corridors is expected to improve the competitiveness of the ports of Tarragona, Valencia, Sagunto, Bilbao, Santander, Sines and Algeciras vis-à-vis the rest of Europe and the world. In 1980, Morocco and Spain started a joint study on the feasibility of a fixed link (tunnel or bridge) across the Strait of Gibraltar, possibly through a connection with Cape Malabata. Years of studies have, however, made no real progress thus far.
A transit point for many submarine cables, the Fibre-optic Link Around the Globe, Europe India Gateway, and the SEA-ME-WE 3 feature landing stations in the Iberian Peninsula. The West Africa Cable System, Main One, SAT-3/WASC and Africa Coast to Europe also land in Portugal. MAREA, a high-capacity transatlantic communications cable, connects the north of the Iberian Peninsula (Bilbao) to North America (Virginia), while EllaLink is an upcoming high-capacity communications cable expected to connect the Peninsula (Sines) to South America, and the mammoth 2Africa project intends to connect the peninsula to the United Kingdom, Europe and Africa (via Portugal and Barcelona) by 2023–24. Two gas pipelines, the Pedro Duran Farell pipeline and (more recently) the Medgaz (running from Morocco and Algeria, respectively), link the Maghreb and the Iberian Peninsula, providing Spain with Algerian natural gas. Major industries include mining, tourism, small farms, and fishing. Because the coast is so long, fishing is popular, especially for sardines, tuna and anchovies. Most of the mining occurs in the Pyrenees mountains. Commodities mined include iron, gold, coal, lead, silver, zinc, and salt. Regarding their role in the global economy, both the microstate of Andorra and the British Overseas Territory of Gibraltar have been described as tax havens. The Galician region of Spain, in the north-west of the Iberian Peninsula, became one of the biggest entry points of cocaine in Europe, on a par with the Dutch ports. Hashish is smuggled from Morocco via the Strait of Gibraltar.
https://en.wikipedia.org/wiki?curid=14883
Martin Luther King Jr. Martin Luther King Jr. (born Michael King Jr.; January 15, 1929 – April 4, 1968) was an African American minister and activist who became the most visible spokesperson and leader in the civil rights movement from 1955 until his assassination in 1968. King is best known for advancing civil rights through nonviolence and civil disobedience, inspired by his Christian beliefs and the nonviolent activism of Mahatma Gandhi. King led the 1955 Montgomery bus boycott and later became the first president of the Southern Christian Leadership Conference (SCLC). As president of the SCLC, he then led an unsuccessful 1962 struggle against segregation in Albany, Georgia, and helped organize the nonviolent 1963 protests in Birmingham, Alabama. He helped organize the 1963 March on Washington, where he delivered his famous "I Have a Dream" speech on the steps of the Lincoln Memorial. On October 14, 1964, King won the Nobel Peace Prize for combating racial inequality through nonviolent resistance. In 1965, he helped organize the Selma to Montgomery marches. In his final years, he expanded his focus to include opposition towards poverty, capitalism, and the Vietnam War. FBI Director J. Edgar Hoover considered him a radical and made him an object of the FBI's COINTELPRO from 1963 on. FBI agents investigated him for possible communist ties, recorded his extramarital liaisons and reported on them to government officials, and, in 1964, mailed King a threatening anonymous letter, which he interpreted as an attempt to make him commit suicide. King was planning a national occupation of Washington, D.C., to be called the Poor People's Campaign, when he was assassinated on April 4 in Memphis, Tennessee. His death was followed by riots in many U.S. cities. Allegations that James Earl Ray, the man convicted of killing King, had been framed or acted in concert with government agents persisted for decades after the shooting. King was posthumously awarded the Presidential Medal of Freedom and the Congressional Gold Medal. Martin Luther King Jr. Day was established as a holiday in cities and states throughout the United States beginning in 1971; the holiday was enacted at the federal level by legislation signed by President Ronald Reagan in 1986. Hundreds of streets in the U.S. have been renamed in his honor, and a county in Washington was rededicated for him. The Martin Luther King Jr. Memorial on the National Mall in Washington, D.C., was dedicated in 2011. Since the late 2010s, activists have made efforts on Martin Luther King Jr. Day to reclaim King's legacy. King was born "Michael King Jr." on January 15, 1929, in Atlanta, Georgia, the second of three children of the Reverend Michael King Sr. and Alberta King (née Williams). King's mother named him Michael, which was entered onto the birth certificate by the attending physician. King Sr. stated that "Michael" was a mistake by the physician. King's older sister is Christine King Farris and his younger brother was A.D. King. King's maternal grandfather Adam Daniel Williams, who was a minister in rural Georgia, moved to Atlanta in 1893, and became pastor of the Ebenezer Baptist Church the following year. Williams was of African-Irish descent. Williams married Jennie Celeste Parks, who gave birth to King's mother, Alberta. King's father was born to sharecroppers, James Albert and Delia King of Stockbridge, Georgia. In his adolescent years, King Sr. left his parents' farm and walked to Atlanta, where he attained a high school education. King Sr.
then enrolled in Morehouse College and studied to enter the ministry. King Sr. and Alberta began dating in 1920, and married on November 25, 1926. Until Jennie's death in 1941, they lived together on the second floor of her parents' two-story Victorian house, where King was born. Shortly after marrying Alberta, King Sr. became assistant pastor of the Ebenezer Baptist Church. Adam Daniel Williams died of a stroke in the spring of 1931. That fall, King's father took over the role of pastor at the church, where he would in time raise the attendance from six hundred to several thousand. In 1934, the church sent King Sr. on a multinational trip to Rome, Tunisia, Egypt, Jerusalem, Bethlehem, and then Berlin for the meeting of the Baptist World Alliance (BWA). The trip ended with visits to sites in Berlin associated with the Protestant Reformation leader, Martin Luther. While there, Michael King Sr. witnessed the rise of Nazism. In reaction, the BWA conference issued a resolution which stated, "This Congress deplores and condemns as a violation of the law of God the Heavenly Father, all racial animosity, and every form of oppression or unfair discrimination toward the Jews, toward coloured people, or toward subject races in any part of the world." He returned home in August 1934, and in that same year began referring to himself as Martin Luther King Sr., and to his son as Martin Luther King Jr. King's birth certificate was altered to read "Martin Luther King Jr." on July 23, 1957, when he was 28 years old. At his childhood home, King and his two siblings would read aloud Biblical scripture as instructed by their father. After dinners there, King's grandmother Jennie, whom he affectionately referred to as "Mama", would tell lively stories from the Bible to her grandchildren. King's father would regularly use whippings to discipline his children. At times, King Sr. would also have his children whip each other. King's father later remarked, "[King] was the most peculiar child whenever you whipped him. He'd stand there, and the tears would run down, and he'd never cry." Once, when King witnessed his brother A.D. emotionally upset his sister Christine, he took a telephone and knocked A.D. out with it. When he and his brother were playing at their home, A.D. slid down a banister and collided with their grandmother, Jennie, causing her to fall down unresponsive. King, believing her dead, blamed himself and attempted suicide by jumping from a second-story window. Upon hearing that his grandmother was alive, King rose and left the ground where he had fallen. King became friends with a white boy whose father owned a business across the street from his family's home. In September 1935, when the boys were about six years old, they started school. King had to attend a school for black children, Younge Street Elementary School, while his close playmate went to a separate school for white children only. Soon afterwards, the parents of the white boy stopped allowing King to play with their son, stating to him "we are white, and you are colored". When King relayed what had happened to his parents, they had a long discussion with him about the history of slavery and racism in America. Upon learning of the hatred, violence and oppression that black people had faced in the U.S., King would later state that he was "determined to hate every white person". His parents instructed him that it was his Christian duty to love everyone. King witnessed his father stand up against segregation and various forms of discrimination.
Once, when stopped by a police officer who referred to him as "boy", King Sr. replied sharply that his son was the boy and that he himself was a man. When King's father took him into a shoe store in downtown Atlanta, the clerk told them they needed to sit in the back. King's father refused, stating "we'll either buy shoes sitting here or we won't buy any shoes at all", before taking King and leaving the store. He told King afterwards, "I don't care how long I have to live with this system, I will never accept it." In 1936, King's father led hundreds of African-Americans in a civil rights march to the city hall in Atlanta to protest voting rights discrimination. King later remarked that King Sr. was "a real father" to him. King memorized and sang hymns, and recited verses from the Bible, by the time he was five years old. Over the next year, he began to go to church events with his mother and sing hymns while she played piano. His favorite hymn to sing was "I Want to Be More and More Like Jesus"; he moved attendees with his singing. King later became a member of the junior choir in his church. King enjoyed opera, and played the piano. As he grew up, King garnered a large vocabulary from reading dictionaries and consistently used his expanding lexicon. He got into physical altercations with boys in his neighborhood, but often used his command of words to defuse fights. King showed a lack of interest in grammar and spelling, a trait he carried throughout his life. In 1939, King sang as a member of his church choir in slave costume, for the all-white audience at the Atlanta premiere of the film "Gone with the Wind". On May 18, 1941, when King had snuck away from studying at home to watch a parade, he was informed that something had happened to his maternal grandmother. Upon returning home, he found out that she had suffered a heart attack and died while being transported to a hospital. He took the death very hard, and believed that his deception in going to see the parade may have been responsible for God taking her. King jumped out of a second-story window at his home, but again survived an attempt to kill himself. In his bedroom, King's father told him that he shouldn't blame himself for her death, and that she had been called home to God as part of God's plan which could not be changed. King struggled with this, and could not fully believe that his parents knew where his grandmother had gone. Shortly thereafter, King's father decided to move the family to a two-story brick home on a hill that overlooked downtown Atlanta. In his adolescent years, he initially felt resentment against whites due to the "racial humiliation" that he, his family, and his neighbors often had to endure in the segregated South. In 1942, when King was 13 years old, he became the youngest assistant manager of a newspaper delivery station for the "Atlanta Journal". That year, King skipped the ninth grade and was enrolled in Booker T. Washington High School. The high school was the only one in the city for African American students. It had been formed after local black leaders, including King's grandfather Williams, urged the city government of Atlanta to create it. King became known for his public-speaking ability and was part of the school's debate team. During his junior year, he won first prize in an oratorical contest sponsored by the Negro Elks Club in Dublin, Georgia. In his speech he stated, "black America still wears chains. The finest negro is at the mercy of the meanest white man."
On the ride home to Atlanta by bus, he and his teacher were ordered by the driver to stand so that white passengers could sit down. King initially refused but complied after his teacher told him that he would be breaking the law if he did not submit. King later described the incident as "the angriest I have ever been in my life." King was initially skeptical of many of Christianity's claims. At the age of 13, he denied the bodily resurrection of Jesus during Sunday school. At this point, he stated, "doubts began to spring forth unrelentingly." He concurrently found himself unable to identify with the emotional displays and gestures people would make at his church, and started to wonder if he would ever attain personal satisfaction from religion. During King's junior year in high school, Morehouse College—a respected historically black college—announced that it would accept any high school juniors who could pass its entrance exam. At that time, many students had abandoned further studies to enlist in World War II. Due to this, Morehouse was eager to fill its classrooms. At the age of 15, King passed the exam and entered Morehouse. He played freshman football there. The summer before his last year at Morehouse, in 1947, the 18-year-old King chose to enter the ministry. Throughout his time in college, King studied under the mentorship of its president, Baptist minister Benjamin Mays, whom he would later credit with being his "spiritual mentor." King had concluded that the church offered the most assuring way to answer "an inner urge to serve humanity." His "inner urge" had begun developing, and he made peace with the Baptist Church, as he believed he would be a "rational" minister with sermons that were "a respectful force for ideas, even social protest." King graduated from Morehouse with a bachelor of arts (BA) in sociology in 1948, aged nineteen. King enrolled in Crozer Theological Seminary in Chester, Pennsylvania. King's father fully supported his decision to continue his education and made arrangements for King to work with J. Pius Barbour, a family friend who pastored at Calvary Baptist Church in Chester. King became known as one of the "Sons of Calvary", an honor he shared with William Augustus Jones Jr. and Samuel D. Proctor, who both went on to become well-known preachers in the black church. While attending Crozer, King was joined by Walter McCall, a former classmate at Morehouse. At Crozer, King was elected president of the student body. The African-American students of Crozer for the most part conducted their social activity on Edwards Street. King became fond of the street because a classmate had an aunt who prepared collard greens for them, which they both relished. King once reproved another student for keeping beer in his room, saying they had a shared responsibility as African Americans to bear "the burdens of the Negro race." For a time, he was interested in Walter Rauschenbusch's "social gospel." In his third year at Crozer, King became romantically involved with the white daughter of an immigrant German woman who worked as a cook in the cafeteria. The woman had been involved with a professor prior to her relationship with King. King planned to marry her, but friends advised against it, saying that an interracial marriage would provoke animosity from both blacks and whites, potentially damaging his chances of ever pastoring a church in the South.
King tearfully told a friend that he could not endure his mother's pain over the marriage and broke the relationship off six months later. He continued to have lingering feelings toward the woman he left; one friend was quoted as saying, "He never recovered." King graduated with a B.Div. degree in 1951. King began doctoral studies in systematic theology at Boston University. While pursuing doctoral studies, King worked as an assistant minister at Boston's historic Twelfth Baptist Church with Rev. William Hunter Hester. Hester was an old friend of King's father, and was an important influence on King. In Boston, King befriended a small cadre of local ministers his age, and sometimes guest pastored at their churches, including the Reverend Michael Haynes, associate pastor at Twelfth Baptist Church in Roxbury (and younger brother of jazz drummer Roy Haynes). The young men often held bull sessions in their various apartments, discussing theology, sermon style, and social issues. At the age of 25 in 1954, King was called as pastor of the Dexter Avenue Baptist Church in Montgomery, Alabama. King received his Ph.D. degree on June 5, 1955, with a dissertation (initially supervised by Edgar S. Brightman and, upon the latter's death, by Lotan Harold DeWolf) titled "A Comparison of the Conceptions of God in the Thinking of Paul Tillich and Henry Nelson Wieman." An academic inquiry in October 1991 concluded that portions of his doctoral dissertation had been plagiarized and that he had acted improperly. However, the committee also concluded that "no thought should be given to the revocation of Dr. King's doctoral degree," an action that the panel said would serve no purpose. The committee found that the dissertation still "makes an intelligent contribution to scholarship." A letter is now attached to the copy of King's dissertation held in the university library, noting that numerous passages were included without the appropriate quotations and citations of sources. Significant debate exists on how to interpret King's plagiarism. While studying at Boston University, he asked a friend from Atlanta named Mary Powell, who was a student at the New England Conservatory of Music, if she knew any nice Southern girls. Powell asked fellow student Coretta Scott if she was interested in meeting a Southern friend studying divinity. Scott was not interested in dating preachers, but eventually agreed to allow Martin to telephone her based on Powell's description and vouching. On their first phone call, King told Scott "I am like Napoleon at Waterloo before your charms," to which she replied "You haven't even met me." They went out for dates in his green Chevy. After the second date, King was certain Scott possessed the qualities he sought in a wife. As an undergraduate at Antioch College, she had been an activist; Carol and Rod Serling were among her schoolmates there. King married Coretta Scott on June 18, 1953, on the lawn of her parents' house in her hometown of Heiberger, Alabama. They became the parents of four children: Yolanda King (1955–2007), Martin Luther King III (b. 1957), Dexter Scott King (b. 1961), and Bernice King (b. 1963). During their marriage, King limited Coretta's role in the civil rights movement, expecting her to be a housewife and mother. In March 1955, Claudette Colvin—a fifteen-year-old black schoolgirl in Montgomery—refused to give up her bus seat to a white man in violation of Jim Crow laws, local laws in the Southern United States that enforced racial segregation.
King was on the committee from the Montgomery African-American community that looked into the case; E. D. Nixon and Clifford Durr decided to wait for a better case to pursue because the incident involved a minor. Nine months later, on December 1, 1955, a similar incident occurred when Rosa Parks was arrested for refusing to give up her seat on a city bus. The two incidents led to the Montgomery bus boycott, which was urged and planned by Nixon and led by King. The boycott lasted for 385 days, and the situation became so tense that King's house was bombed. King was arrested during this campaign, which concluded with a United States District Court ruling in "Browder v. Gayle" that ended racial segregation on all Montgomery public buses. King's role in the bus boycott transformed him into a national figure and the best-known spokesman of the civil rights movement. In 1957, King, Ralph Abernathy, Fred Shuttlesworth, Joseph Lowery, and other civil rights activists founded the Southern Christian Leadership Conference (SCLC). The group was created to harness the moral authority and organizing power of black churches to conduct nonviolent protests in the service of civil rights reform. The group was inspired by the crusades of evangelist Billy Graham, who befriended King, as well as the national organizing of the group In Friendship, founded by King allies Stanley Levison and Ella Baker. King led the SCLC until his death. The SCLC's 1957 Prayer Pilgrimage for Freedom was the first time King addressed a national audience. Other civil rights leaders involved in the SCLC with King included: James Bevel, Allen Johnson, Curtis W. Harris, Walter E. Fauntroy, C. T. Vivian, Andrew Young, The Freedom Singers, Cleveland Robinson, Randolph Blackwell, Annie Bell Robinson Devine, Charles Kenzie Steele, Alfred Daniel Williams King, Benjamin Hooks, Aaron Henry and Bayard Rustin. On September 20, 1958, King was signing copies of his book "Stride Toward Freedom" in Blumstein's department store in Harlem when he narrowly escaped death. Izola Curry—a mentally ill black woman who thought that King was conspiring against her with communists—stabbed him in the chest with a letter opener. King underwent emergency surgery with three doctors: Aubre de Lambert Maynard, Emil Naclerio and John W. V. Cordice; he remained hospitalized for several weeks. Curry was later found mentally incompetent to stand trial. In 1959, King published a short book called "The Measure of A Man", which contained his sermons "What is Man?" and "The Dimensions of a Complete Life." The sermons argued for man's need for God's love and criticized the racial injustices of Western civilization. Harry Wachtel joined King's legal advisor Clarence B. Jones in defending four ministers of the SCLC in the libel case "New York Times Co. v. Sullivan"; the case was litigated in reference to the newspaper advertisement "Heed Their Rising Voices". Wachtel founded a tax-exempt fund to cover the expenses of the suit and to assist the nonviolent civil rights movement through a more effective means of fundraising. This organization was named the "Gandhi Society for Human Rights." King served as honorary president for the group. He was displeased with the pace at which President Kennedy was addressing the issue of segregation. In 1962, King and the Gandhi Society produced a document that called on the President to follow in the footsteps of Abraham Lincoln and issue an executive order to deliver a blow for civil rights as a kind of Second Emancipation Proclamation.
Kennedy did not execute the order. The FBI was under written directive from Attorney General Robert F. Kennedy when it began tapping King's telephone line in the fall of 1963. Kennedy was concerned that public allegations of communists in the SCLC would derail the administration's civil rights initiatives. He warned King to discontinue these associations and later felt compelled to issue the written directive that authorized the FBI to wiretap King and other SCLC leaders. FBI Director J. Edgar Hoover feared the civil rights movement and investigated the allegations of communist infiltration. When no evidence emerged to support this, the FBI used the incidental details caught on tape over the next five years in attempts, under its COINTELPRO program, to force King out of his leadership position. King believed that organized, nonviolent protest against the system of southern segregation known as Jim Crow laws would lead to extensive media coverage of the struggle for black equality and voting rights. Journalistic accounts and televised footage of the daily deprivation and indignities suffered by Southern blacks, and of segregationist violence and harassment of civil rights workers and marchers, produced a wave of sympathetic public opinion that convinced the majority of Americans that the civil rights movement was the most important issue in American politics in the early 1960s. King organized and led marches for blacks' right to vote, desegregation, labor rights, and other basic civil rights. Most of these rights were successfully enacted into the law of the United States with the passage of the Civil Rights Act of 1964 and the 1965 Voting Rights Act. King and the SCLC put into practice many of the principles of the Christian Left and applied the tactics of nonviolent protest with great success by strategically choosing the method of protest and the places in which protests were carried out. There were often dramatic stand-offs with segregationist authorities, who sometimes turned violent. King was criticized by other black leaders during the course of his participation in the civil rights movement. This included opposition by more militant blacks such as Nation of Islam member Malcolm X. Student Nonviolent Coordinating Committee founder Ella Baker regarded King as a charismatic media figure who lost touch with the grassroots of the movement as he became close to elite figures like Nelson Rockefeller. Stokely Carmichael, a protégé of Baker's, became a black separatist and disagreed with King's plea for racial integration because he considered it an insult to a uniquely African-American culture. The Albany Movement was a desegregation coalition formed in Albany, Georgia, in November 1961. In December, King and the SCLC became involved. The movement mobilized thousands of citizens for a broad-front nonviolent attack on every aspect of segregation within the city and attracted nationwide attention. When King first visited on December 15, 1961, he "had planned to stay a day or so and return home after giving counsel." The following day he was swept up in a mass arrest of peaceful demonstrators, and he declined bail until the city made concessions. According to King, "that agreement was dishonored and violated by the city" after he left town. King returned in July 1962 and was given the option of forty-five days in jail or a $178 fine; he chose jail. Three days into his sentence, Police Chief Laurie Pritchett discreetly arranged for King's fine to be paid and ordered his release.
"We had witnessed persons being kicked off lunch counter stools ... ejected from churches ... and thrown into jail ... But for the first time, we witnessed being kicked out of jail." It was later acknowledged by the King Center that Billy Graham was the one who bailed King out of jail during this time. After nearly a year of intense activism with few tangible results, the movement began to deteriorate. King requested a halt to all demonstrations and a "Day of Penance" to promote nonviolence and maintain the moral high ground. Divisions within the black community and the canny, low-key response by local government defeated efforts. Though the Albany effort proved a key lesson in tactics for King and the national civil rights movement, the national media was highly critical of King's role in the defeat, and the SCLC's lack of results contributed to a growing gulf between the organization and the more radical SNCC. After Albany, King sought to choose engagements for the SCLC in which he could control the circumstances, rather than entering into pre-existing situations. In April 1963, the SCLC began a campaign against racial segregation and economic injustice in Birmingham, Alabama. The campaign used nonviolent but intentionally confrontational tactics, developed in part by Rev. Wyatt Tee Walker. Black people in Birmingham, organizing with the SCLC, occupied public spaces with marches and sit-ins, openly violating laws that they considered unjust. King's intent was to provoke mass arrests and "create a situation so crisis-packed that it will inevitably open the door to negotiation." The campaign's early volunteers did not succeed in shutting down the city, or in drawing media attention to the police's actions. Over the concerns of an uncertain King, SCLC strategist James Bevel changed the course of the campaign by recruiting children and young adults to join in the demonstrations. "Newsweek" called this strategy a Children's Crusade. During the protests, the Birmingham Police Department, led by Eugene "Bull" Connor, used high-pressure water jets and police dogs against protesters, including children. Footage of the police response was broadcast on national television news and dominated the nation's attention, shocking many white Americans and consolidating black Americans behind the movement. Not all of the demonstrators were peaceful, despite the avowed intentions of the SCLC. In some cases, bystanders attacked the police, who responded with force. King and the SCLC were criticized for putting children in harm's way. But the campaign was a success: Connor lost his job, the "Jim Crow" signs came down, and public places became more open to blacks. King's reputation improved immensely. King was arrested and jailed early in the campaign—his 13th arrest out of 29. From his cell, he composed the now-famous Letter from Birmingham Jail that responds to calls on the movement to pursue legal channels for social change. King argues that the crisis of racism is too urgent, and the current system too entrenched: "We know through painful experience that freedom is never voluntarily given by the oppressor; it must be demanded by the oppressed." He points out that the Boston Tea Party, a celebrated act of rebellion in the American colonies, was illegal civil disobedience, and that, conversely, "everything Adolf Hitler did in Germany was 'legal'." Walter Reuther, president of the United Auto Workers, arranged for $160,000 to bail out King and his fellow protestors. 
In March 1964, King and the SCLC joined forces with Robert Hayling's then-controversial movement in St. Augustine, Florida. Hayling's group had been affiliated with the NAACP but was forced out of the organization for advocating armed self-defense alongside nonviolent tactics. However, the pacifist SCLC accepted them. King and the SCLC worked to bring white Northern activists to St. Augustine, including a delegation of rabbis and the 72-year-old mother of the governor of Massachusetts, all of whom were arrested. During June, the movement marched nightly through the city, "often facing counter demonstrations by the Klan, and provoking violence that garnered national media attention." Hundreds of the marchers were arrested and jailed. During the course of this movement, the Civil Rights Act of 1964 was passed. In December 1964, King and the SCLC joined forces with the Student Nonviolent Coordinating Committee (SNCC) in Selma, Alabama, where the SNCC had been working on voter registration for several months. A local judge issued an injunction that barred any gathering of three or more people affiliated with the SNCC, SCLC, DCVL (Dallas County Voters League), or any of 41 named civil rights leaders. This injunction temporarily halted civil rights activity until King defied it by speaking at Brown Chapel on January 2, 1965. During the 1965 march to Montgomery, Alabama, violence by state police and others against the peaceful marchers resulted in much publicity, which made Alabama's racism visible nationwide. On February 6, 1964, King delivered the inaugural speech of a lecture series initiated at the New School called "The American Race Crisis." No audio record of his speech has been found, but in August 2013, almost 50 years later, the school discovered an audiotape with 15 minutes of a question-and-answer session that followed King's address. In these remarks, King referred to a conversation he had recently had with Jawaharlal Nehru in which he compared the sad condition of many African Americans to that of India's untouchables. King, representing the SCLC, was among the leaders of the "Big Six" civil rights organizations who were instrumental in the organization of the March on Washington for Jobs and Freedom, which took place on August 28, 1963. The other leaders and organizations comprising the Big Six were Roy Wilkins from the National Association for the Advancement of Colored People; Whitney Young, National Urban League; A. Philip Randolph, Brotherhood of Sleeping Car Porters; John Lewis, SNCC; and James L. Farmer Jr., of the Congress of Racial Equality. Bayard Rustin's open homosexuality, support of democratic socialism, and his former ties to the Communist Party USA caused many white and African-American leaders to demand that King distance himself from Rustin, which King agreed to do. However, he did collaborate with Rustin on the 1963 March on Washington, for which Rustin was the primary logistical and strategic organizer. For King, this role was another that courted controversy, since he was one of the key figures who acceded to the wishes of United States President John F. Kennedy in changing the focus of the march. Kennedy initially opposed the march outright, because he was concerned it would negatively impact the drive for passage of civil rights legislation. However, the organizers were firm that the march would proceed. With the march going forward, the Kennedys decided it was important to work to ensure its success. President Kennedy was concerned the turnout would be less than 100,000.
Therefore, he enlisted the aid of additional church leaders and Walter Reuther, president of the United Automobile Workers, to help mobilize demonstrators for the cause. The march originally was conceived as an event to dramatize the desperate condition of blacks in the southern U.S. and an opportunity to place organizers' concerns and grievances squarely before the seat of power in the nation's capital. Organizers intended to denounce the federal government for its failure to safeguard the civil rights and physical safety of civil rights workers and blacks. The group acquiesced to presidential pressure and influence, and the event ultimately took on a far less strident tone. As a result, some civil rights activists felt it presented an inaccurate, sanitized pageant of racial harmony; Malcolm X called it the "Farce on Washington", and the Nation of Islam forbade its members from attending the march. The march made specific demands: an end to racial segregation in public schools; meaningful civil rights legislation, including a law prohibiting racial discrimination in employment; protection of civil rights workers from police brutality; a $2 minimum wage for all workers; and self-government for Washington, D.C., then governed by congressional committee. Despite tensions, the march was a resounding success. More than a quarter of a million people of diverse ethnicities attended the event, sprawling from the steps of the Lincoln Memorial onto the National Mall and around the reflecting pool. At the time, it was the largest gathering of protesters in Washington, D.C.'s history. King delivered a 17-minute speech, later known as "I Have a Dream". In the speech's most famous passage, King departed from his prepared text, possibly at the prompting of Mahalia Jackson, who shouted behind him, "Tell them about the dream!" "I Have a Dream" came to be regarded as one of the finest speeches in the history of American oratory. The March, and especially King's speech, helped put civil rights at the top of the agenda of reformers in the United States and facilitated passage of the Civil Rights Act of 1964. The original typewritten copy of the speech, including King's handwritten notes on it, was discovered in 1984 to be in the hands of George Raveling, the first African-American basketball coach of the University of Iowa. In 1963, Raveling, then 26 years old, was standing near the podium, and immediately after the oration, impulsively asked King if he could have his copy of the speech. He got it. Acting on James Bevel's call for a march from Selma to Montgomery, King, Bevel, and the SCLC, in partial collaboration with SNCC, attempted to organize the march to the state capital. The first attempt to march on March 7, 1965, was aborted because of mob and police violence against the demonstrators. This day has become known as Bloody Sunday and was a major turning point in the effort to gain public support for the civil rights movement. It was the clearest demonstration up to that time of the dramatic potential of King's nonviolence strategy. King, however, was not present. On March 5, King met with officials in the Johnson Administration in order to request an injunction against any prosecution of the demonstrators. He did not attend the march due to church duties, but he later wrote, "If I had any idea that the state troopers would use the kind of brutality they did, I would have felt compelled to give up my church duties altogether to lead the line."
Footage of police brutality against the protesters was broadcast extensively and aroused national public outrage. King next attempted to organize a march for March 9. The SCLC petitioned for an injunction in federal court against the State of Alabama; this was denied and the judge issued an order blocking the march until after a hearing. Nonetheless, King led marchers on March 9 to the Edmund Pettus Bridge in Selma, then held a short prayer session before turning the marchers around and asking them to disperse so as not to violate the court order. The unexpected ending of this second march aroused the surprise and anger of many within the local movement. The march finally went ahead fully on March 25, 1965. At the conclusion of the march on the steps of the state capitol, King delivered a speech that became known as "How Long, Not Long." In it, King stated that equal rights for African Americans could not be far away, "because the arc of the moral universe is long, but it bends toward justice" and "you shall reap what you sow". In 1966, after several successes in the South, King, Bevel, and others in the civil rights organizations took the movement to the North, with Chicago as their first destination. King and Ralph Abernathy, both from the middle class, moved into a building at 1550 S. Hamlin Avenue, in the slums of North Lawndale on Chicago's West Side, as an educational experience and to demonstrate their support and empathy for the poor. The SCLC formed a coalition with the Coordinating Council of Community Organizations (CCCO), an organization founded by Albert Raby, and the combined organizations' efforts were fostered under the aegis of the Chicago Freedom Movement. During that spring, several white couple/black couple tests of real estate offices uncovered racial steering: discriminatory processing of housing requests by couples who were exact matches in income, background, number of children, and other attributes. Several larger marches were planned and executed: in Bogan, Belmont Cragin, Jefferson Park, Evergreen Park (a suburb southwest of Chicago), Gage Park, Marquette Park, and others. King later stated and Abernathy wrote that the movement received a worse reception in Chicago than in the South. Marches, especially the one through Marquette Park on August 5, 1966, were met by thrown bottles and screaming throngs. Rioting seemed very possible. King's beliefs militated against his staging a violent event, and he negotiated an agreement with Mayor Richard J. Daley to cancel a march in order to avoid the violence that he feared would result. King was hit by a brick during one march, but continued to lead marches in the face of personal danger. When King and his allies returned to the South, they left Jesse Jackson, a seminary student who had previously joined the movement in the South, in charge of their organization. Jackson continued their struggle for civil rights by organizing the Operation Breadbasket movement that targeted chain stores that did not deal fairly with blacks. A 1967 CIA document declassified in 2017 downplayed King's role in the "black militant situation" in Chicago, with a source stating that King "sought at least constructive, positive projects." King had long opposed American involvement in the Vietnam War, but he at first avoided the topic in public speeches so that criticism of President Johnson's policies would not interfere with his civil rights goals.
At the urging of SCLC's former Director of Direct Action and now the head of the Spring Mobilization Committee to End the War in Vietnam, James Bevel, and inspired by the outspokenness of Muhammad Ali, King eventually agreed to publicly oppose the war as opposition was growing among the American public. During an April 4, 1967, appearance at the New York City Riverside Church—exactly one year before his death—King delivered a speech titled "Beyond Vietnam: A Time to Break Silence." He spoke strongly against the U.S.'s role in the war, arguing that the U.S. was in Vietnam "to occupy it as an American colony" and calling the U.S. government "the greatest purveyor of violence in the world today." He connected the war with economic injustice, arguing that the country needed serious moral change. King opposed the Vietnam War because it took money and resources that could have been spent on social welfare at home. The United States Congress was spending more and more on the military and less and less on anti-poverty programs at the same time. He summed up this aspect by saying, "A nation that continues year after year to spend more money on military defense than on programs of social uplift is approaching spiritual death." He stated that North Vietnam "did not begin to send in any large number of supplies or men until American forces had arrived in the tens of thousands", and accused the U.S. of having killed a million Vietnamese, "mostly children." King also criticized American opposition to North Vietnam's land reforms. King's opposition cost him significant support among white allies, including President Johnson, Billy Graham, union leaders and powerful publishers. "The press is being stacked against me", King said, complaining of what he described as a double standard that applauded his nonviolence at home, but deplored it when applied "toward little brown Vietnamese children." "Life" magazine called the speech "demagogic slander that sounded like a script for Radio Hanoi", and "The Washington Post" declared that King had "diminished his usefulness to his cause, his country, his people." The "Beyond Vietnam" speech reflected King's evolving political advocacy in his later years, which paralleled the teachings of the progressive Highlander Research and Education Center, with which he was affiliated. King began to speak of the need for fundamental changes in the political and economic life of the nation, and more frequently expressed his opposition to the war and his desire to see a redistribution of resources to correct racial and economic injustice. He guarded his language in public to avoid being linked to communism by his enemies, but in private he sometimes spoke of his support for democratic socialism. In a 1952 letter to Coretta Scott, he said: "I imagine you already know that I am much more socialistic in my economic theory than capitalistic ..." In one speech, he stated that "something is wrong with capitalism" and claimed, "There must be a better distribution of wealth, and maybe America must move toward a democratic socialism." King had read Marx while at Morehouse; while he rejected "traditional capitalism", he also rejected communism because of its "materialistic interpretation of history" that denied religion, its "ethical relativism", and its "political totalitarianism." King stated in "Beyond Vietnam" that "true compassion is more than flinging a coin to a beggar ... it comes to see that an edifice which produces beggars needs restructuring."
King quoted a United States official who said that from Vietnam to Latin America, the country was "on the wrong side of a world revolution." King condemned America's "alliance with the landed gentry of Latin America", and said that the U.S. should support "the shirtless and barefoot people" in the Third World rather than suppressing their attempts at revolution. King's stance on Vietnam encouraged Allard K. Lowenstein, William Sloane Coffin and Norman Thomas, with the support of anti-war Democrats, to attempt to persuade King to run against President Johnson in the 1968 United States presidential election. King contemplated but ultimately decided against the proposal on the grounds that he felt uneasy with politics and considered himself better suited for his morally unambiguous role as an activist. On April 15, 1967, King participated in and spoke at an anti-war march from Manhattan's Central Park to the United Nations. The march was organized by the Spring Mobilization Committee to End the War in Vietnam and initiated by its chairman, James Bevel. At the U.N., King brought up issues of civil rights and the draft. Seeing an opportunity to unite civil rights activists and anti-war activists, Bevel convinced King to become even more active in the anti-war effort. Despite his growing public opposition towards the Vietnam War, King was not fond of the hippie culture which developed from the anti-war movement, a dislike he voiced in his 1967 Massey Lecture. On January 13, 1968 (the day after President Johnson's State of the Union Address), King called for a large march on Washington against "one of history's most cruel and senseless wars." Thích Nhất Hạnh was an influential Vietnamese Buddhist who taught at Princeton University and Columbia University. He had written King a letter in 1965 entitled "In Search of the Enemy of Man". It was during his 1966 stay in the US that Nhất Hạnh met with King and urged him to publicly denounce the Vietnam War. In 1967, King gave his famous speech at the Riverside Church in New York City, his first to publicly question U.S. involvement in Vietnam. Later that year, King nominated Nhất Hạnh for the 1967 Nobel Peace Prize. In his nomination, King said, "I do not personally know of anyone more worthy of [this prize] than this gentle monk from Vietnam. His ideas for peace, if applied, would build a monument to ecumenism, to world brotherhood, to humanity". In 1968, King and the SCLC organized the "Poor People's Campaign" to address issues of economic justice. King traveled the country to assemble "a multiracial army of the poor" that would march on Washington to engage in nonviolent civil disobedience at the Capitol until Congress created an "economic bill of rights" for poor Americans. The campaign was preceded by King's final book, "Where Do We Go from Here: Chaos or Community?", which laid out his view of how to address social issues and poverty. King quoted from Henry George's book "Progress and Poverty", particularly in support of a guaranteed basic income. The campaign culminated in a march on Washington, D.C., demanding economic aid to the poorest communities of the United States. King and the SCLC called on the government to invest in rebuilding America's cities. He felt that Congress had shown "hostility to the poor" by spending "military funds with alacrity and generosity." He contrasted this with the situation faced by poor Americans, claiming that Congress had merely provided "poverty funds with miserliness."
His vision was for change that was more revolutionary than mere reform: he cited systematic flaws of "racism, poverty, militarism and materialism", and argued that "reconstruction of society itself is the real issue to be faced." The Poor People's Campaign was controversial even within the civil rights movement. Rustin resigned from the march, stating that the goals of the campaign were too broad, that its demands were unrealizable, and that he thought these campaigns would accelerate backlash and repression against poor and black Americans. The plan to set up a shantytown in Washington, D.C., was carried out soon after the April 4 assassination. Criticism of King's plan was subdued in the wake of his death, and the SCLC received an unprecedented wave of donations for the purpose of carrying it out. The campaign officially began in Memphis, on May 2, at the motel where King was murdered. Thousands of demonstrators arrived on the National Mall and stayed for six weeks, establishing a camp they called "Resurrection City." On March 29, 1968, King went to Memphis, Tennessee, in support of the black sanitary public works employees, who were represented by AFSCME Local 1733. The workers had been on strike since March 12 for higher wages and better treatment. In one incident, black street repairmen received pay for two hours when they were sent home because of bad weather, but white employees were paid for the full day. On April 3, King addressed a rally and delivered his "I've Been to the Mountaintop" address at Mason Temple, the world headquarters of the Church of God in Christ. King's flight to Memphis had been delayed by a bomb threat against his plane. King addressed the bomb threat in the prophetic peroration of the last speech of his life. King was booked in Room 306 at the Lorraine Motel (owned by Walter Bailey) in Memphis. Ralph Abernathy, who was present at the assassination, testified to the United States House Select Committee on Assassinations that King and his entourage stayed at Room 306 so often that it was known as the "King-Abernathy suite." According to Jesse Jackson, who was present, King's last words on the balcony before his assassination were spoken to musician Ben Branch, who was scheduled to perform that night at an event King was attending: "Ben, make sure you play 'Take My Hand, Precious Lord' in the meeting tonight. Play it real pretty." King was fatally shot by James Earl Ray at 6:01 p.m., April 4, 1968, as he stood on the motel's second-floor balcony. The bullet entered through his right cheek, smashing his jaw, then traveled down his spinal cord before lodging in his shoulder. Abernathy heard the shot from inside the motel room and ran to the balcony to find King on the floor. Jackson stated after the shooting that he cradled King's head as King lay on the balcony, but this account was disputed by other colleagues of King; Jackson later changed his statement to say that he had "reached out" for King. After emergency chest surgery, King died at St. Joseph's Hospital at 7:05 p.m. According to biographer Taylor Branch, King's autopsy revealed that though only 39 years old, he "had the heart of a 60 year old", which Branch attributed to the stress of 13 years in the civil rights movement. King is buried within Martin Luther King Jr. National Historical Park. The assassination led to a nationwide wave of race riots in Washington, D.C., Chicago, Baltimore, Louisville, Kansas City, and dozens of other cities. Presidential candidate Robert F.
Kennedy was on his way to Indianapolis for a campaign rally when he was informed of King's death. He gave a short, improvised speech to the gathering of supporters, informing them of the tragedy and urging them to continue King's ideal of nonviolence. The following day, he delivered a prepared response in Cleveland. James Farmer Jr. and other civil rights leaders also called for nonviolent action, while the more militant Stokely Carmichael called for a more forceful response. The city of Memphis quickly settled the strike on terms favorable to the sanitation workers. President Lyndon B. Johnson declared April 7 a national day of mourning for the civil rights leader. Vice President Hubert Humphrey attended King's funeral on behalf of the President, as there were fears that Johnson's presence might incite protests and perhaps violence. At his widow's request, King's last sermon at Ebenezer Baptist Church was played at the funeral, a recording of his "Drum Major" sermon, given on February 4, 1968. In that sermon, King made a request that at his funeral no mention of his awards and honors be made, but that it be said that he tried to "feed the hungry", "clothe the naked", "be right on the [Vietnam] war question", and "love and serve humanity." His good friend Mahalia Jackson sang his favorite hymn, "Take My Hand, Precious Lord", at the funeral. Two months after King's death, James Earl Ray—a fugitive following a previous prison escape—was captured at London Heathrow Airport while trying to leave England on a false Canadian passport. He was using the alias Ramon George Sneyd on his way to white-ruled Rhodesia. Ray was quickly extradited to Tennessee and charged with King's murder. He confessed to the assassination on March 10, 1969, though he recanted this confession three days later. On the advice of his attorney Percy Foreman, Ray pleaded guilty to avoid a trial conviction and thus the possibility of receiving the death penalty. He was sentenced to a 99-year prison term. Ray later claimed that a man he met in Montreal, Quebec, with the alias "Raoul" was involved and that the assassination was the result of a conspiracy. He spent the remainder of his life attempting, unsuccessfully, to withdraw his guilty plea and secure the trial he never had. Ray died in 1998 at age 70. Ray's lawyers maintained he was a scapegoat, similar to the way that John F. Kennedy's assassin Lee Harvey Oswald is seen by conspiracy theorists. Supporters of this assertion said that Ray's confession was given under pressure and that he had been threatened with the death penalty. They admitted that Ray was a thief and burglar, but claimed that he had no record of committing violent crimes with a weapon. However, prison records in different U.S. cities have shown that he was incarcerated on numerous occasions for charges of armed robbery. In a 2008 interview with CNN, Jerry Ray, the younger brother of James Earl Ray, claimed that James was smart and was sometimes able to get away with armed robbery. Jerry Ray said that he had assisted his brother on one such robbery. "I never been with nobody as bold as he is," Jerry said. "He just walked in and put that gun on somebody, it was just like it's an everyday thing." Those suspecting a conspiracy in the assassination point to two successive ballistics tests, which proved only that a rifle similar to Ray's Remington Gamemaster had been the murder weapon; the tests did not implicate Ray's specific rifle.
Witnesses near King at the moment of his death said that the shot came from another location. They said that it came from behind thick shrubbery near the boarding house—which had been cut away in the days following the assassination—and not from the boarding house window. However, Ray's fingerprints were found on various objects (a rifle, a pair of binoculars, articles of clothing, a newspaper) that were left in the bathroom where it was determined the gunfire came from. An examination of the rifle containing Ray's fingerprints determined that at least one shot was fired from the firearm at the time of the assassination. In 1997, King's son Dexter Scott King met with Ray, and publicly supported Ray's efforts to obtain a new trial. Two years later, King's widow Coretta Scott King and the couple's children won a wrongful death claim against Loyd Jowers and "other unknown co-conspirators." Jowers claimed to have received $100,000 to arrange King's assassination. The jury of six whites and six blacks found in favor of the King family, finding Jowers to be complicit in a conspiracy against King and that government agencies were party to the assassination. William F. Pepper represented the King family in the trial. In 2000, the U.S. Department of Justice completed its investigation into Jowers' claims but did not find evidence to support the conspiracy allegations. The investigation report recommended no further investigation unless new reliable facts were presented. A sister of Jowers admitted that he had fabricated the story so he could make $300,000 from selling it; she had corroborated his story in order to get some money to pay her income tax. In 2002, "The New York Times" reported that a church minister, Rev. Ronald Denton Wilson, claimed his father, Henry Clay Wilson—not James Earl Ray—assassinated King. He stated, "It wasn't a racist thing; he thought Martin Luther King was connected with communism, and he wanted to get him out of the way." Wilson provided no evidence to back up his claims. King researchers David Garrow and Gerald Posner disagreed with William F. Pepper's claims that the government killed King. In 2003, Pepper published a book about the long investigation and trial, as well as his representation of James Earl Ray in his bid for a trial, laying out the evidence and criticizing other accounts. King's friend and colleague James Bevel also disputed the argument that Ray acted alone, stating, "There is no way a ten-cent white boy could develop a plan to kill a million-dollar black man." In 2004, Jesse Jackson likewise stated that he did not believe Ray had acted alone. King's main legacy was to secure progress on civil rights in the U.S. Just days after King's assassination, Congress passed the Civil Rights Act of 1968. Title VIII of the Act, commonly known as the Fair Housing Act, prohibited discrimination in housing and housing-related transactions on the basis of race, religion, or national origin (later expanded to include sex, familial status, and disability). This legislation was seen as a tribute to King's struggle in his final years to combat residential discrimination in the U.S. Internationally, King's legacy includes influences on the Black Consciousness Movement and civil rights movement in South Africa. King's work was cited by, and served as an inspiration for, South African leader Albert Lutuli, who fought for racial justice in his country and was later awarded the Nobel Peace Prize.
The day following King's assassination, school teacher Jane Elliott conducted her first "Blue Eyes/Brown Eyes" exercise with her class of elementary school students in Riceville, Iowa. Her purpose was to help them understand King's death as it related to racism, something they little understood as they lived in a predominantly white community. King has become a national icon in the history of American liberalism and American progressivism. King also influenced Irish politician and activist John Hume. Hume, the former leader of the Social Democratic and Labour Party, cited King's legacy as quintessential to the Northern Irish civil rights movement and the signing of the Good Friday Agreement, calling him "one of my great heroes of the century." King's wife Coretta Scott King followed in her husband's footsteps and was active in matters of social justice and civil rights until her death in 2006. The same year that Martin Luther King was assassinated, she established the King Center in Atlanta, Georgia, dedicated to preserving his legacy and the work of championing nonviolent conflict resolution and tolerance worldwide. Their son, Dexter King, serves as the center's chairman. Daughter Yolanda King, who died in 2007, was a motivational speaker, author and founder of Higher Ground Productions, an organization specializing in diversity training. Even within the King family, members disagree about his religious and political views regarding gay, lesbian, bisexual and transgender people. King's widow Coretta publicly said that she believed her husband would have supported gay rights. However, his youngest child, Bernice King, has said publicly that he would have been opposed to gay marriage. On February 4, 1968, at the Ebenezer Baptist Church, King spoke about how he wished to be remembered after his death. Beginning in 1971, cities such as St. Louis, Missouri, and states established annual holidays to honor King. At the White House Rose Garden on November 2, 1983, President Ronald Reagan signed a bill creating a federal holiday to honor King. Observed for the first time on January 20, 1986, it is called Martin Luther King Jr. Day. Following President George H. W. Bush's 1992 proclamation, the holiday is observed on the third Monday of January each year, near the time of King's birthday. On January 17, 2000, for the first time, Martin Luther King Jr. Day was officially observed in all fifty U.S. states. Arizona (1992), New Hampshire (1999) and Utah (2000) were the last three states to recognize the holiday. Utah previously celebrated the holiday at the same time but under the name Human Rights Day. King is remembered as a martyr by the Episcopal Church in the United States of America with an annual feast day on the anniversary of his death, April 4. The Evangelical Lutheran Church in America commemorates King liturgically on the anniversary of his birth, January 15. In the United Kingdom, the Northumbria and Newcastle Universities Martin Luther King Peace Committee exists to honor King's legacy, as represented by his final visit to the UK to receive an honorary degree from Newcastle University in 1967. The Peace Committee operates out of the chaplaincies of the city's two universities, Northumbria and Newcastle, both of which remain centres for the study of Martin Luther King and the US civil rights movement. Inspired by King's vision, it undertakes a range of activities across the UK as it seeks to "build cultures of peace."
In 2017, Newcastle University unveiled a bronze statue of King to celebrate the 50th anniversary of his honorary doctorate ceremony. The Students Union also voted to rename their bar 'Luthers'. On June 25, 2019, "The New York Times Magazine" listed Martin Luther King Jr. among hundreds of artists whose material was reportedly destroyed in the 2008 Universal Studios fire. As a Christian minister, King's main influences were Jesus Christ and the Christian gospels, which he would almost always quote in his religious meetings, speeches at church, and in public discourses. King's faith was strongly based in Jesus' commandments to love your neighbor as yourself, to love God above all, and to love your enemies, praying for them and blessing them. His nonviolent thought was also based in the injunction to "turn the other cheek" in the Sermon on the Mount, and Jesus' teaching of putting the sword back into its place (Matthew 26:52). In his famous Letter from Birmingham Jail, King urged action consistent with what he described as Jesus' "extremist" love, and also quoted numerous other Christian pacifist authors, as was usual for him. King's private writings show that he rejected biblical literalism; he described the Bible as "mythological," doubted that Jesus was born of a virgin and did not believe that the story of Jonah and the whale was true. Veteran African-American civil rights activist Bayard Rustin was King's first regular advisor on nonviolence. King was also advised by the white activists Harris Wofford and Glenn Smiley. Rustin and Smiley came from the Christian pacifist tradition, and Wofford and Rustin both studied Mahatma Gandhi's teachings. Rustin had applied nonviolence with the Journey of Reconciliation campaign in the 1940s, and Wofford had been promoting Gandhism to Southern blacks since the early 1950s. King had initially known little about Gandhi and rarely used the term "nonviolence" during his early years of activism in the early 1950s. King initially believed in and practiced self-defense, even keeping guns in his household as a means of defense against possible attackers. The pacifists guided King by showing him the alternative of nonviolent resistance, arguing that this would be a better means to accomplish his goals of civil rights than self-defense. King then vowed to no longer personally use arms. In the aftermath of the boycott, King wrote "Stride Toward Freedom", which included the chapter "Pilgrimage to Nonviolence." King outlined his understanding of nonviolence, which seeks to win an opponent to friendship, rather than to humiliate or defeat him. The chapter draws from an address by Wofford, with Rustin and Stanley Levison also providing guidance and ghostwriting. King was inspired by Gandhi and his success with nonviolent activism, and as a theology student, King described Gandhi as one of the "individuals who greatly reveal the working of the Spirit of God". King had "for a long time ... wanted to take a trip to India." With assistance from Harris Wofford, the American Friends Service Committee, and other supporters, he was able to fund the journey in April 1959. The trip to India affected King, deepening his understanding of nonviolent resistance and his commitment to America's struggle for civil rights.
In a radio address made during his final evening in India, King reflected, "Since being in India, I am more convinced than ever before that the method of nonviolent resistance is the most potent weapon available to oppressed people in their struggle for justice and human dignity." King's admiration of Gandhi's nonviolence did not diminish in later years. He went so far as to hold up his example when receiving the Nobel Peace Prize in 1964, hailing the "successful precedent" of using nonviolence "in a magnificent way by Mohandas K. Gandhi to challenge the might of the British Empire ... He struggled only with the weapons of truth, soul force, non-injury and courage." Another influence on King's nonviolent method was Henry David Thoreau's essay "On Civil Disobedience" and its theme of refusing to cooperate with an evil system. He was also greatly influenced by the works of Protestant theologians Reinhold Niebuhr and Paul Tillich, and said that Walter Rauschenbusch's "Christianity and the Social Crisis" left an "indelible imprint" on his thinking by giving him a theological grounding for his social concerns. King was moved by Rauschenbusch's vision of Christians spreading social unrest in "perpetual but friendly conflict" with the state, simultaneously critiquing it and calling it to act as an instrument of justice. He was apparently unaware of the American tradition of Christian pacifism exemplified by Adin Ballou and William Lloyd Garrison. King frequently referred to Jesus' Sermon on the Mount as central to his work. King also sometimes used the concept of "agape" (brotherly Christian love); however, after 1960, he ceased employing it in his writings. Even after renouncing his personal use of guns, King had a complex relationship with the phenomenon of self-defense in the movement. He publicly discouraged it as a widespread practice, but acknowledged that it was sometimes necessary. Throughout his career King was frequently protected by other civil rights activists who carried arms, such as Colonel Stone Johnson, Robert Hayling, and the Deacons for Defense and Justice. King was an avid supporter of Native American rights, and Native Americans were in turn active supporters of his civil rights movement. In fact, the Native American Rights Fund (NARF) was patterned after the NAACP's Legal Defense and Education Fund. The National Indian Youth Council (NIYC) was especially supportive of King's campaigns, particularly the Poor People's Campaign in 1968. In his book "Why We Can't Wait", King writes: Our nation was born in genocide when it embraced the doctrine that the original American, the Indian, was an inferior race. Even before there were large numbers of Negroes on our shores, the scar of racial hatred had already disfigured colonial society. From the sixteenth century forward, blood flowed in battles over racial supremacy. We are perhaps the only nation which tried as a matter of national policy to wipe out its indigenous population. Moreover, we elevated that tragic experience into a noble crusade. Indeed, even today we have not permitted ourselves to reject or to feel remorse for this shameful episode. Our literature, our films, our drama, our folklore all exalt it. King assisted Native American people in south Alabama in the late 1950s. At that time the remaining Creek in Alabama were trying to completely desegregate schools in their area.
The South had many egregious racial problems: in this case, light-complexioned Native children were allowed to ride school buses to previously all-white schools, while dark-skinned Native children from the same band were barred from riding the same buses. Tribal leaders, upon hearing of King's desegregation campaign in Birmingham, Alabama, contacted him for assistance. He promptly responded and, through his intervention, the problem was quickly resolved. In September 1959, King flew from Los Angeles, California, to Tucson, Arizona. After giving a speech at the University of Arizona on the ideals of using nonviolent methods in creating social change, he put into words his belief that one must not use force in this struggle "but match the violence of his opponents with his suffering." King then went to Southside Presbyterian, a predominantly Native American church, and was fascinated by their photos. On the spur of the moment, King wanted to go to an Indian reservation to meet the people, so Reverend Casper Glenn took him to the Papago Indian Reservation. At the reservation, King met with all the tribal leaders and others on the reservation, then ate with them. King then visited another Presbyterian church near the reservation, and preached there, attracting a Native American crowd. He later returned to Old Pueblo in March 1962, where he preached again to a Native American congregation and then went on to give another speech at the University of Arizona. King would continue to attract the attention of Native Americans throughout the civil rights movement. During the 1963 March on Washington there was a sizable Native American contingent, including many from South Dakota and many from the Navajo nation. Native Americans were also active participants in the Poor People's Campaign in 1968. King, along with the broader civil rights movement, was a major inspiration for the Native American rights movement of the 1960s and many of its leaders. John Echohawk, a member of the Pawnee tribe and the executive director and one of the founders of the Native American Rights Fund, stated: Inspired by Dr. King, who was advancing the civil rights agenda of equality under the laws of this country, we thought that we could also use the laws to advance our Indianship, to live as tribes in our territories governed by our own laws under the principles of tribal sovereignty that had been with us ever since 1831. We believed that we could fight for a policy of self-determination that was consistent with U.S. law and that we could govern our own affairs, define our own ways and continue to survive in this society. As the leader of the SCLC, King maintained a policy of not publicly endorsing a U.S. political party or candidate: "I feel someone must remain in the position of non-alignment, so that he can look objectively at both parties and be the conscience of both—not the servant or master of either." In a 1958 interview, he expressed his view that neither party was perfect, saying, "I don't think the Republican party is a party full of the almighty God nor is the Democratic party. They both have weaknesses ... And I'm not inextricably bound to either party." King did praise Democratic Senator Paul Douglas of Illinois as being the "greatest of all senators" because of his fierce advocacy for civil rights causes over the years.
King critiqued both parties' performance on promoting racial equality. Although King never publicly supported a political party or candidate for president, in a letter to a civil rights supporter in October 1956 he said that he had not decided whether he would vote for Adlai Stevenson II or Dwight D. Eisenhower in the 1956 presidential election, but that "In the past I always voted the Democratic ticket." In his autobiography, King says that in 1960 he privately voted for Democratic candidate John F. Kennedy: "I felt that Kennedy would make the best president. I never came out with an endorsement. My father did, but I never made one." King adds that he likely would have made an exception to his non-endorsement policy for a second Kennedy term, saying "Had President Kennedy lived, I would probably have endorsed him in 1964." In 1964, King urged his supporters "and all people of goodwill" to vote against Republican Senator Barry Goldwater for president, saying that his election "would be a tragedy, and certainly suicidal almost, for the nation and the world." King supported the ideals of democratic socialism, although he was reluctant to speak directly of this support due to the anti-communist sentiment being projected throughout the United States at the time, and the association of socialism with communism. King believed that capitalism could not adequately provide the basic necessities of many American people, particularly the African-American community. King stated that black Americans, as well as other disadvantaged Americans, should be compensated for historical wrongs. In an interview conducted for "Playboy" in 1965, he said that granting black Americans only equality could not realistically close the economic gap between them and whites. King said that he did not seek a full restitution of wages lost to slavery, which he believed impossible, but proposed a government compensatory program of $50 billion over ten years to all disadvantaged groups. He posited that "the money spent would be more than amply justified by the benefits that would accrue to the nation through a spectacular decline in school dropouts, family breakups, crime rates, illegitimacy, swollen relief rolls, rioting and other social evils." He presented this idea as an application of the common law regarding settlement of unpaid labor, but clarified that he felt that the money should not be spent exclusively on blacks. He stated, "It should benefit the disadvantaged of "all" races." King was awarded the Planned Parenthood Federation of America's Margaret Sanger Award on May 5, 1966. Actress Nichelle Nichols planned to leave the science-fiction television series "Star Trek" in 1967 after its first season, wanting to return to musical theater. She changed her mind after talking to King, who was a fan of the show. King explained that her character signified a future of greater racial harmony and cooperation. King told Nichols, "You are our image of where we're going, you're 300 years from now, and that means that's where we are and it takes place now. Keep doing what you're doing, you are our inspiration." As Nichols recounted, ""Star Trek" was one of the only shows that [King] and his wife Coretta would allow their little children to watch. And I thanked him and I told him I was leaving the show. All the smile came off his face. And he said, 'Don't you understand for the first time we're seen as we should be seen. You don't have a black role. You have an equal role.'"
For his part, the series' creator, Gene Roddenberry, was deeply moved upon learning of King's support. FBI director J. Edgar Hoover personally ordered surveillance of King, with the intent to undermine his power as a civil rights leader. The Church Committee, a 1975 investigation by the U.S. Congress, found that "From December 1963 until his death in 1968, Martin Luther King Jr. was the target of an intensive campaign by the Federal Bureau of Investigation to 'neutralize' him as an effective civil rights leader." In the fall of 1963, the FBI received authorization from Attorney General Robert F. Kennedy to proceed with wiretapping of King's phone lines, purportedly due to his association with Stanley Levison. The Bureau informed President John F. Kennedy. He and his brother unsuccessfully tried to persuade King to dissociate himself from Levison, a New York lawyer who had been involved with the Communist Party USA. Although Robert Kennedy only gave written approval for limited wiretapping of King's telephone lines "on a trial basis, for a month or so", Hoover extended the clearance so his men were "unshackled" to look for evidence in any areas of King's life they deemed worthy. The Bureau placed wiretaps on the home and office phone lines of both Levison and King, and bugged King's rooms in hotels as he traveled across the country. In 1967, Hoover listed the SCLC as a black nationalist hate group, with the instructions: "No opportunity should be missed to exploit through counterintelligence techniques the organizational and personal conflicts of the leaderships of the groups ... to insure the targeted group is disrupted, ridiculed, or discredited." In a secret operation code-named "Minaret", the National Security Agency monitored the communications of leading Americans, including King, who were critical of the U.S. war in Vietnam. A review by the NSA itself concluded that Minaret was "disreputable if not outright illegal." For years, Hoover had been suspicious of the potential influence of communists in social movements such as labor unions and civil rights. Hoover directed the FBI to track King in 1957, and the SCLC when it was established. Due to the relationship between King and Stanley Levison, the FBI feared Levison was working as an "agent of influence" over King, in spite of its own reports in 1963 that Levison had left the Party and was no longer associated in business dealings with them. Another King lieutenant, Jack O'Dell, was also linked to the Communist Party by sworn testimony before the House Un-American Activities Committee (HUAC). Despite the extensive surveillance conducted, by 1976 the FBI had acknowledged that it had not obtained any evidence that King himself or the SCLC were actually involved with any communist organizations. For his part, King adamantly denied having any connections to communism. In a 1965 "Playboy" interview, he stated that "there are as many Communists in this freedom movement as there are Eskimos in Florida." He argued that Hoover was "following the path of appeasement of political powers in the South" and that his concern for communist infiltration of the civil rights movement was meant to "aid and abet the salacious claims of southern racists and the extreme right-wing elements." Hoover did not believe King's pledge of innocence and replied by saying that King was "the most notorious liar in the country."
After King gave his "I Have A Dream" speech during the March on Washington on August 28, 1963, the FBI described King as "the most dangerous and effective Negro leader in the country." It alleged that he was "knowingly, willingly and regularly cooperating with and taking guidance from communists." The attempts to prove that King was a communist were related to the feeling of many segregationists that blacks in the South were content with the status quo, but had been stirred up by "communists" and "outside agitators." As context, the civil rights movement of the 1950s and '60s arose from activism within the black community dating back to before World War I. King said that "the Negro revolution is a genuine revolution, born from the same womb that produces all massive social upheavals—the womb of intolerable conditions and unendurable situations." CIA files declassified in 2017 revealed that the agency was investigating possible links between King and communism after a "Washington Post" article dated November 4, 1964 claimed he was invited to the Soviet Union and that Ralph Abernathy, as spokesman for King, refused to comment on the source of the invitation. Mail belonging to King and other civil rights activists was intercepted by the CIA program HTLINGUAL. Having concluded that King was dangerous due to communist infiltration, the FBI attempted to discredit King through revelations regarding his private life. FBI surveillance of King, some of it since made public, attempted to demonstrate that he also had numerous extramarital affairs. Lyndon B. Johnson once said that King was a "hypocritical preacher". In his 1989 autobiography "And the Walls Came Tumbling Down", Ralph Abernathy stated that King had a "weakness for women", although they "all understood and believed in the biblical prohibition against sex outside of marriage. It was just that he had a particularly difficult time with that temptation." In a later interview, Abernathy said that he only wrote the term "womanizing", that he did not specifically say King had extramarital sex, and that the infidelities King had were emotional rather than sexual. Abernathy criticized the media for sensationalizing the statements he wrote about King's affairs, such as the allegation that he admitted in his book that King had a sexual affair the night before he was assassinated. In his original wording, Abernathy had stated that he saw King coming out of his room with a woman when he awoke the next morning and later said that "he may have been in there discussing and debating and trying to get her to go along with the movement, I don't know." In his 1986 book "Bearing the Cross", David Garrow wrote about a number of extramarital affairs, including one woman King saw almost daily. According to Garrow, "that relationship ... increasingly became the emotional centerpiece of King's life, but it did not eliminate the incidental couplings ... of King's travels." He alleged that King explained his extramarital affairs as "a form of anxiety reduction." Garrow asserted that King's supposed promiscuity caused him "painful and at times overwhelming guilt." King's wife Coretta appeared to have accepted his affairs with equanimity, saying once that "all that other business just doesn't have a place in the very high level relationship we enjoyed."
Shortly after "Bearing the Cross" was released, civil rights author Howell Raines gave the book a positive review but opined that Garrow's allegations about King's sex life were "sensational" and stated that Garrow was "amassing facts rather than analyzing them." The FBI distributed reports regarding such affairs to the executive branch, friendly reporters, potential coalition partners and funding sources of the SCLC, and King's family. The bureau also sent anonymous letters to King threatening to reveal information if he did not cease his civil rights work. The FBI–King suicide letter sent to King just before he received the Nobel Peace Prize read, in part: The American public, the church organizations that have been helping—Protestants, Catholics and Jews will know you for what you are—an evil beast. So will others who have backed you. You are done. King, there is only one thing left for you to do. You know what it is. You have just 34 days in which to do (this exact number has been selected for a specific reason, it has definite practical significant ). You are done. There is but one way out for you. You better take it before your filthy fraudulent self is bared to the nation. The letter was accompanied by a tape recording—excerpted from FBI wiretaps—of several of King's extramarital liaisons. King interpreted this package as an attempt to drive him to suicide, although William Sullivan, head of the Domestic Intelligence Division at the time, argued that it may have only been intended to "convince Dr. King to resign from the SCLC." King refused to give in to the FBI's threats. In 1977, Judge John Lewis Smith Jr. ordered all known copies of the recorded audiotapes and written transcripts resulting from the FBI's electronic surveillance of King between 1963 and 1968 to be held in the National Archives and sealed from public access until 2027. In May 2019, FBI files emerged indicating that King "looked on, laughed and offered advice" as one of his friends raped a woman. His biographer, David Garrow, wrote that "the suggestion... that he either actively tolerated or personally employed violence against any woman, even while drunk, poses so fundamental a challenge to his historical stature as to require the most complete and extensive historical review possible". These allegations sparked a heated debate among historians. Clayborne Carson, Martin Luther King biographer and overseer of the Dr. King records at Stanford University states that he came to the opposite conclusion of Garrow saying "None of this is new. Garrow is talking about a recently added summary of a transcript of a 1964 recording from the Willard Hotel that others, including Mrs. King, have said they did not hear Martin's voice on it. The added summary was four layers removed from the actual recording. This supposedly new information comes from an anonymous source in a single paragraph in an FBI report. You have to ask how could anyone conclude King looked at a rape from an audio recording in a room where he was not present." Carson bases his position of Coretta Scott King's memoirs where she states "I set up our reel-to-reel recorder and listened. I have read scores of reports talking about the scurrilous activities of my husband but once again, there was nothing at all incriminating on the tape. It was a social event with people laughing and telling dirty jokes. But I did not hear Martin's voice on it, and there was nothing about sex or anything else resembling the lies J. Edgar and the FBI were spreading." 
The tapes that could confirm or refute the allegation are scheduled to be declassified in 2027. A fire station was located across from the Lorraine Motel, next to the boarding house in which James Earl Ray was staying. Police officers were stationed in the fire station to keep King under surveillance. Agents were watching King at the time he was shot. Immediately following the shooting, officers rushed out of the station to the motel. Marrell McCollough, an undercover police officer, was the first person to administer first aid to King. The antagonism between King and the FBI, the lack of an all points bulletin to find the killer, and the police presence nearby led to speculation that the FBI was involved in the assassination. King was awarded at least fifty honorary degrees from colleges and universities. On October 14, 1964, King became the (at the time) youngest winner of the Nobel Peace Prize, which was awarded to him for leading nonviolent resistance to racial prejudice in the U.S. In 1965, he was awarded the American Liberties Medallion by the American Jewish Committee for his "exceptional advancement of the principles of human liberty." In his acceptance remarks, King said, "Freedom is one thing. You have it all or you are not free." In 1957, he was awarded the Spingarn Medal from the NAACP. Two years later, he won the Anisfield-Wolf Book Award for his book "Stride Toward Freedom: The Montgomery Story". In 1966, the Planned Parenthood Federation of America awarded King the Margaret Sanger Award for "his courageous resistance to bigotry and his lifelong dedication to the advancement of social justice and human dignity." Also in 1966, King was elected as a fellow of the American Academy of Arts and Sciences. In November 1967 he made a 24-hour trip to the United Kingdom to receive an honorary degree from Newcastle University, being the first African-American to be so honoured by Newcastle. In a moving impromptu acceptance speech, he said: There are three urgent and indeed great problems that we face not only in the United States of America but all over the world today. That is the problem of racism, the problem of poverty and the problem of war. In addition to being nominated for three Grammy Awards, the civil rights leader posthumously won the Grammy for Best Spoken Word Recording in 1971 for "Why I Oppose The War In Vietnam". In 1977, the Presidential Medal of Freedom was posthumously awarded to King by President Jimmy Carter. The citation read: Martin Luther King Jr. was the conscience of his generation. He gazed upon the great wall of segregation and saw that the power of love could bring it down. From the pain and exhaustion of his fight to fulfill the promises of our founding fathers for our humblest citizens, he wrung his eloquent statement of his dream for America. He made our nation stronger because he made it better. His dream sustains us yet. King and his wife were also awarded the Congressional Gold Medal in 2004. King was second in Gallup's List of Most Widely Admired People of the 20th Century. In 1963, he was named "Time" Person of the Year, and in 2000, he was voted sixth in an online "Person of the Century" poll by the same magazine. King placed third in the Greatest American contest conducted by the Discovery Channel and AOL. On April 20, 2016, Treasury Secretary Jacob Lew announced that the $5, $10, and $20 bills would all undergo redesign prior to 2020.
Lew said that while Lincoln would remain on the front of the $5 bill, the reverse would be redesigned to depict various historical events that had occurred at the Lincoln Memorial. Among the planned designs are images from King's "I Have a Dream" speech and the 1939 concert by opera singer Marian Anderson.
https://en.wikipedia.org/wiki?curid=20076
Martin Luther King (disambiguation) Martin Luther King Jr. (1929–1968) was a minister and civil rights activist. Martin Luther King may also refer to:
https://en.wikipedia.org/wiki?curid=20079
Marino Marini (sculptor) Marino Marini (27 February 1901 – 6 August 1980) was an Italian sculptor. He attended the Accademia di Belle Arti in Florence in 1917. Although he never abandoned painting, Marini devoted himself primarily to sculpture from about 1922. From this time his work was influenced by Etruscan art and the sculpture of Arturo Martini. Marini succeeded Martini as professor at the Scuola d’Arte di Villa Reale in Monza, near Milan, in 1929, a position he retained until 1940. During this period, Marini traveled frequently to Paris, where he associated with Massimo Campigli, Giorgio de Chirico, Alberto Magnelli, and Filippo Tibertelli de Pisis. In 1936 he moved to Tenero-Locarno, in Ticino Canton, Switzerland; during the following few years the artist often visited Zürich and Basel, where he became a friend of Alberto Giacometti, Germaine Richier, and Fritz Wotruba. In 1936, he received the Prize of the Quadriennale of Rome. In 1938, he married Mercedes Pedrazzini. He accepted a professorship in sculpture at the Accademia di Belle Arti di Brera, Milan, in 1940. In 1943, he went into exile in Switzerland, exhibiting in Basel, Bern, and Zurich. In 1946, the artist settled permanently in Milan. He participated in the 'Twentieth-Century Italian Art' show at the Museum of Modern Art in New York City in 1944. Curt Valentin began exhibiting Marini's work at his Buchholz Gallery in New York in 1950, on which occasion the sculptor visited the city and met Jean Arp, Max Beckmann, Alexander Calder, Lyonel Feininger, and Jacques Lipchitz. On his return to Europe, he stopped in London, where the Hanover Gallery had organized a solo show of his work, and there met Henry Moore. In 1951 a Marini exhibition traveled from the Kestner-Gesellschaft Hannover to the Kunstverein in Hamburg and the Haus der Kunst of Munich. He was awarded the Grand Prize for Sculpture at the Venice Biennale in 1952 and the Feltrinelli Prize at the Accademia dei Lincei in Rome in 1954. One of his monumental sculptures was installed in The Hague in 1959. Retrospectives of Marini's work took place at the Kunsthaus Zürich in 1962 and at the Palazzo Venezia in Rome in 1966. His paintings were exhibited for the first time at Toninelli Arte Moderna in Milan in 1963–64. In 1973 a permanent installation of his work opened at the Galleria d’Arte Moderna in Milan, and in 1978 a Marini show was presented at the National Museum of Modern Art in Tokyo. There is a museum dedicated to his work in Florence (in the former church of San Pancrazio). His work may also be found in museums such as the Civic Gallery of Modern Art in Milan, the Tate Collection, "The Angel of the City" at the Peggy Guggenheim Collection, Venice, the Norton Simon Museum, Museum de Fundatie and the Hirshhorn Museum and Sculpture Garden in Washington, D.C. Marini developed several themes in sculpture: equestrian, Pomonas (nudes), portraits, and circus figures. He drew on traditions of Etruscan and Northern European sculpture in developing these themes. His aim was to develop mythical images by interpreting classical themes in light of modern concerns and techniques. Marini is particularly famous for his series of stylised equestrian statues, which feature a man with outstretched arms on a horse. The evolution of the horse and rider as a subject in Marini's works reflects the artist's response to the changing context of the modern world. This theme appeared in his work in 1936.
At first the proportions of horse and rider are slender and both are "poised, formal, and calm." By the next year the horse is depicted rearing and the rider gesturing. By 1940 the forms are simpler and more archaic in spirit; the proportions squatter. After World War II, in the late 1940s, the horse is planted, immobile, with neck extended, ears pinned back, mouth open. An example, in the Peggy Guggenheim Collection in Venice, is "The Angel of the City," depicting "affirmation and charged strength associated explicitly with sexual potency." In later works, the rider is, increasingly, oblivious of his mount, "involved in his own visions or anxieties." In the artist's final work, the rider is unseated as the horse falls to the ground in an "apocalyptic image of lost control" which parallels Marini's growing despair for the future of the world. Marini is buried at the Cimitero Comunale of Pistoia, Toscana, Italy.
https://en.wikipedia.org/wiki?curid=20080
Modular arithmetic In mathematics, modular arithmetic is a system of arithmetic for integers, where numbers "wrap around" when reaching a certain value, called the modulus. The modern approach to modular arithmetic was developed by Carl Friedrich Gauss in his book "Disquisitiones Arithmeticae", published in 1801. A familiar use of modular arithmetic is in the 12-hour clock, in which the day is divided into two 12-hour periods. If the time is 7:00 now, then 8 hours later it will be 3:00. Simple addition would result in 7 + 8 = 15, but clocks "wrap around" every 12 hours. Because the hour number starts over after it reaches 12, this is arithmetic "modulo" 12. In terms of the definition below, 15 is "congruent" to 3 modulo 12, so "15:00" on a 24-hour clock is displayed "3:00" on a 12-hour clock. Given an integer n > 1, called a modulus, two integers a and b are said to be congruent modulo n if n is a divisor of their difference, that is, if there is an integer k such that a − b = kn. Congruence modulo n is a congruence relation, meaning that it is an equivalence relation that is compatible with the operations of addition, subtraction, and multiplication. Congruence modulo n is denoted a ≡ b (mod n). The parentheses mean that (mod n) applies to the entire equation, not just to the right-hand side (here b). This notation is not to be confused with the notation b mod n (without parentheses), which refers to the modulo operation: b mod n denotes the unique integer r such that 0 ≤ r < n and r ≡ b (mod n). The congruence relation may be rewritten as a = kn + b, explicitly showing its relationship with Euclidean division. However, b need not be the remainder of the division of a by n. More precisely, what the statement a ≡ b (mod n) asserts is that a and b have the same remainder when divided by n. That is, a = pn + r and b = qn + r, where 0 ≤ r < n is the common remainder. Subtracting these two expressions, we recover the previous relation, a − b = kn, by setting k = p − q. For example, 38 ≡ 14 (mod 12) because 38 − 14 = 24, which is a multiple of 12, or, equivalently, because both 38 and 14 have the same remainder 2 when divided by 12. The definition of congruence also applies to negative values: for example, 2 ≡ −3 (mod 5) and −8 ≡ 7 (mod 5). The congruence relation satisfies all the conditions of an equivalence relation: reflexivity (a ≡ a (mod n)), symmetry (if a ≡ b (mod n), then b ≡ a (mod n)), and transitivity (if a ≡ b (mod n) and b ≡ c (mod n), then a ≡ c (mod n)). If a1 ≡ b1 (mod n) and a2 ≡ b2 (mod n), or if a ≡ b (mod n), then: a + k ≡ b + k (mod n) for any integer k; ka ≡ kb (mod n) for any integer k; a1 + a2 ≡ b1 + b2 (mod n); a1 − a2 ≡ b1 − b2 (mod n); a1a2 ≡ b1b2 (mod n); a^k ≡ b^k (mod n) for any non-negative integer k; and p(a) ≡ p(b) (mod n) for any polynomial p(x) with integer coefficients. If ka ≡ kb (mod n), then it is false, in general, that a ≡ b (mod n). However, one has: if ka ≡ kb (mod kn), then a ≡ b (mod n). For cancellation of common terms, we have the following rules: if ka ≡ kb (mod n) and k is coprime with n, then a ≡ b (mod n); and, more generally, if ka ≡ kb (mod n), then a ≡ b (mod n/gcd(k, n)). The modular multiplicative inverse is defined by the following rules: an integer denoted a^(−1) with a·a^(−1) ≡ 1 (mod n) exists if and only if a is coprime with n; and if a ≡ b (mod n) and a^(−1) exists, then a^(−1) ≡ b^(−1) (mod n). The multiplicative inverse x ≡ a^(−1) (mod n) may be efficiently computed by solving Bézout's equation ax + ny = 1 for x and y using the Extended Euclidean algorithm. In particular, if p is a prime number, then a is coprime with p for every a such that 0 < a < p; thus a multiplicative inverse exists for all a not congruent to zero modulo p. Some of the more advanced properties of congruence relations include Fermat's little theorem, Euler's theorem, Wilson's theorem, and the Chinese remainder theorem. Like any congruence relation, congruence modulo n is an equivalence relation, and the equivalence class of the integer a, denoted by [a]_n, is the set {..., a − 2n, a − n, a, a + n, a + 2n, ...}. This set, consisting of the integers congruent to a modulo n, is called the congruence class or residue class or simply residue of the integer a, modulo n. When the modulus n is known from the context, that residue may also be denoted [a]. Each residue class modulo n may be represented by any one of its members, although we usually represent each residue class by the smallest nonnegative integer which belongs to that class (since this is the proper remainder which results from division). Any two members of different residue classes modulo n are incongruent modulo n. Furthermore, every integer belongs to one and only one residue class modulo n. The set of integers {0, 1, 2, ..., n − 1} is called the least residue system modulo n.
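Returning to the multiplicative inverse discussed above, a minimal C sketch may help. This is our own illustration, not code from the article: the names egcd and mod_inverse are assumptions, and the sketch simply solves Bézout's equation with the extended Euclidean algorithm as the text describes, normalizing the result because C's % operator can return a negative remainder.

#include <stdio.h>

/* Extended Euclidean algorithm: returns gcd(a, b) and fills in
   x, y such that a*x + b*y = gcd(a, b). */
long egcd(long a, long b, long *x, long *y)
{
    if (b == 0) { *x = 1; *y = 0; return a; }
    long x1, y1;
    long g = egcd(b, a % b, &x1, &y1);
    *x = y1;
    *y = x1 - (a / b) * y1;
    return g;
}

/* Inverse of a modulo n, or -1 if none exists (gcd(a, n) != 1). */
long mod_inverse(long a, long n)
{
    long x, y;
    if (egcd(a, n, &x, &y) != 1) return -1;
    return ((x % n) + n) % n;   /* C's % may be negative; normalize to 0..n-1 */
}

int main(void)
{
    /* 3 * 5 = 15 ≡ 1 (mod 7), so the inverse of 3 modulo 7 is 5. */
    printf("%ld\n", mod_inverse(3, 7));
    return 0;
}

For a prime modulus, as noted above, every nonzero class has such an inverse, which is what makes the ring described next a field in that case.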
Any set of n integers, no two of which are congruent modulo n, is called a complete residue system modulo n. The least residue system is a complete residue system, and a complete residue system is simply a set containing precisely one representative of each residue class modulo n. The least residue system modulo 4 is {0, 1, 2, 3}. Some other complete residue systems modulo 4 are: {1, 2, 3, 4}, {13, 14, 15, 16}, and {−2, −1, 0, 1}. Some sets which are "not" complete residue systems modulo 4 are: {−5, 0, 6, 22}, since 6 and 22 are congruent modulo 4, and {5, 15}, since a complete residue system modulo 4 must contain exactly four pairwise incongruent members. Any set of φ(n) integers that are relatively prime to n and that are mutually incongruent modulo n, where φ(n) denotes Euler's totient function, is called a reduced residue system modulo n. The set {5, 15} from the example above, for instance, is a reduced residue system modulo 4. The set of all congruence classes of the integers for a modulus n is called the ring of integers modulo n, and is denoted Z/nZ, Z/n, or Z_n. The notation Z_n is, however, not recommended because it can be confused with the set of n-adic integers. The ring Z/nZ is fundamental to various branches of mathematics (see Applications below). The set is defined for n > 0 as Z/nZ = {[0], [1], [2], ..., [n − 1]}. We define addition, subtraction, and multiplication on Z/nZ by the following rules: [a] + [b] = [a + b], [a] − [b] = [a − b], and [a][b] = [ab]. The verification that this is a proper definition uses the properties given before. In this way, Z/nZ becomes a commutative ring. For example, in the ring Z/24Z, we have [12] + [21] = [9], as in the arithmetic for the 24-hour clock. We use the notation Z/nZ because this is the quotient ring of Z by the ideal nZ containing all integers divisible by n; in the special case n = 0, the ideal 0Z is the singleton set {0}. Thus Z/nZ is a field when nZ is a maximal ideal, that is, when n is prime. This can also be constructed from the group Z under the addition operation alone: the residue class [a] is the group coset a + nZ of a in the quotient group Z/nZ, a cyclic group. Rather than excluding the special case n = 0, it is more useful to include Z/0Z (which, as mentioned before, is isomorphic to the ring Z of integers), for example, when discussing the characteristic of a ring. The ring of integers modulo n is a finite field if and only if n is prime (this ensures that every nonzero element has a multiplicative inverse). If n = p^k is a prime power with k > 1, there exists a unique (up to isomorphism) finite field GF(p^k) with p^k elements, but this is "not" Z/nZ, which fails to be a field because it has zero-divisors. We denote the multiplicative subgroup of the modular integers by (Z/nZ)^×. This consists of the classes [a] for a coprime to n, which are precisely the classes possessing a multiplicative inverse. This forms a commutative group under multiplication, with order φ(n). In theoretical mathematics, modular arithmetic is one of the foundations of number theory, touching on almost every aspect of its study, and it is also used extensively in group theory, ring theory, knot theory, and abstract algebra. In applied mathematics, it is used in computer algebra, cryptography, computer science, chemistry and the visual and musical arts. A very practical application is to calculate checksums within serial number identifiers. For example, the International Standard Book Number (ISBN) system uses modulo 11 (for 10-digit ISBNs) or modulo 10 (for 13-digit ISBNs) arithmetic for error detection. Likewise, International Bank Account Numbers (IBANs) make use of modulo 97 arithmetic to spot user input errors in bank account numbers.
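To make the checksum idea concrete, here is a small C sketch of the ISBN-10 rule just mentioned. It is our own illustration rather than code from the article: a 10-character ISBN is valid when its digits, weighted 10 down to 1 and with a final 'X' standing for 10, sum to 0 modulo 11.

#include <stdio.h>
#include <string.h>

/* Returns 1 if the 10-character string is a valid ISBN-10.
   The weighted digit sum must be ≡ 0 (mod 11); 'X' in the
   last position represents the value 10. */
int isbn10_valid(const char *isbn)
{
    if (strlen(isbn) != 10) return 0;
    int sum = 0;
    for (int i = 0; i < 10; i++) {
        int d;
        if (isbn[i] >= '0' && isbn[i] <= '9') d = isbn[i] - '0';
        else if (isbn[i] == 'X' && i == 9)    d = 10;
        else return 0;
        sum += (10 - i) * d;   /* weights run 10, 9, ..., 1 */
    }
    return sum % 11 == 0;
}

int main(void)
{
    printf("%d\n", isbn10_valid("0306406152"));  /* 1: valid       */
    printf("%d\n", isbn10_valid("0306406151"));  /* 0: digit error */
    return 0;
}

Because 11 is prime, a single wrong digit or a transposition of two adjacent digits changes the weighted sum by a nonzero amount modulo 11, which is why the scheme detects those errors.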
In chemistry, the last digit of the CAS registry number (a unique identifying number for each chemical compound) is a check digit, which is calculated by taking the last digit of the first two parts of the CAS registry number times 1, the previous digit times 2, the previous digit times 3 etc., adding all these up and computing the sum modulo 10. In cryptography, modular arithmetic directly underpins public key systems such as RSA and Diffie–Hellman, provides the finite fields which underlie elliptic curves, and is used in a variety of symmetric key algorithms including the Advanced Encryption Standard (AES), International Data Encryption Algorithm (IDEA), and RC4. RSA and Diffie–Hellman use modular exponentiation. In computer algebra, modular arithmetic is commonly used to limit the size of integer coefficients in intermediate calculations and data. It is used in polynomial factorization, a problem for which all known efficient algorithms use modular arithmetic. It is used by the most efficient implementations of polynomial greatest common divisor, exact linear algebra and Gröbner basis algorithms over the integers and the rational numbers. In computer science, modular arithmetic is often applied in bitwise operations and other operations involving fixed-width, cyclic data structures. The modulo operation, as implemented in many programming languages and calculators, is an application of modular arithmetic that is often used in this context. The logical operator XOR sums 2 bits, modulo 2. In music, arithmetic modulo 12 is used in the consideration of the system of twelve-tone equal temperament, where octave and enharmonic equivalency occurs (that is, pitches in a 1∶2 or 2∶1 ratio are equivalent, and C-sharp is considered the same as D-flat). The method of casting out nines offers a quick check of decimal arithmetic computations performed by hand. It is based on modular arithmetic modulo 9, and specifically on the crucial property that 10 ≡ 1 (mod 9). Arithmetic modulo 7 is used in algorithms that determine the day of the week for a given date. In particular, Zeller's congruence and the Doomsday algorithm make heavy use of modulo-7 arithmetic; a sketch of the former follows. More generally, modular arithmetic also has application in disciplines such as law (see, for example, apportionment), economics (see, for example, game theory) and other areas of the social sciences, where proportional division and allocation of resources play a central part of the analysis. Since modular arithmetic has such a wide range of applications, it is important to know how hard it is to solve a system of congruences. A linear system of congruences can be solved in polynomial time with a form of Gaussian elimination; for details see the linear congruence theorem. Algorithms, such as Montgomery reduction, also exist to allow simple arithmetic operations, such as multiplication and exponentiation modulo n, to be performed efficiently on large numbers. Some operations, like finding a discrete logarithm or a quadratic congruence, appear to be as hard as integer factorization and thus are a starting point for cryptographic algorithms and encryption. These problems might be NP-intermediate. Solving a system of non-linear modular arithmetic equations is NP-complete.
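As one worked instance of the modulo-7 day-of-week arithmetic mentioned above, here is a short C sketch of Zeller's congruence for the Gregorian calendar. The function name and layout are our own assumptions; the formula itself is the standard one, in which January and February are treated as months 13 and 14 of the previous year.

#include <stdio.h>

/* Zeller's congruence (Gregorian): 0 = Saturday, 1 = Sunday, ..., 6 = Friday. */
int day_of_week(int day, int month, int year)
{
    if (month < 3) {    /* January and February become months 13 and 14 ... */
        month += 12;    /* ... of the previous year */
        year -= 1;
    }
    int k = year % 100; /* year within the century */
    int j = year / 100; /* zero-based century */
    return (day + 13 * (month + 1) / 5 + k + k / 4 + j / 4 + 5 * j) % 7;
}

int main(void)
{
    /* 1 January 2000 fell on a Saturday, so this prints 0. */
    printf("%d\n", day_of_week(1, 1, 2000));
    return 0;
}

Every term of the formula matters only modulo 7, which is why the whole calendar calculation collapses into a single modulo-7 expression.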
Below are three reasonably fast C functions, two for performing modular multiplication and one for modular exponentiation on unsigned integers not larger than 63 bits, without overflow of the intermediate operations. An algorithmic way to compute a · b (mod m): uint64_t mul_mod(uint64_t a, uint64_t b, uint64_t m). On computer architectures where an extended precision format with at least 64 bits of mantissa is available (such as the long double type of most x86 C compilers), the following routine is faster, by employing the trick that, by hardware, floating-point multiplication results in the most significant bits of the product kept, while integer multiplication results in the least significant bits kept: uint64_t mul_mod(uint64_t a, uint64_t b, uint64_t m). Below is a C function for performing modular exponentiation, that uses the mul_mod function implemented above. An algorithmic way to compute a^b (mod m): uint64_t pow_mod(uint64_t a, uint64_t b, uint64_t m). However, for all above routines to work, m must not exceed 63 bits.
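The bodies of the routines named above did not survive in this copy of the text, so the following is only a reconstruction sketch of the two integer-only functions, not the article's original code (the long double variant is omitted). mul_mod keeps every intermediate value below 2^64 by adding a shifted copy of a for each set bit of b; pow_mod layers square-and-multiply on top of it.

#include <stdint.h>

/* (a * b) mod m without overflow, assuming m < 2^63. */
uint64_t mul_mod(uint64_t a, uint64_t b, uint64_t m)
{
    uint64_t r = 0;
    a %= m;
    while (b) {
        if (b & 1) {
            r += a;                 /* r, a < m < 2^63, so no overflow */
            if (r >= m) r -= m;
        }
        a <<= 1;                    /* doubling stays below 2^64 */
        if (a >= m) a -= m;
        b >>= 1;
    }
    return r;
}

/* (a ^ b) mod m by square-and-multiply, built on mul_mod. */
uint64_t pow_mod(uint64_t a, uint64_t b, uint64_t m)
{
    uint64_t r = 1 % m;             /* 1 % m also handles m == 1 */
    a %= m;
    while (b) {
        if (b & 1) r = mul_mod(r, a, m);
        a = mul_mod(a, a, m);
        b >>= 1;
    }
    return r;
}

This double-and-add structure needs O(log b) steps and never forms the full 128-bit product, which is what lets it stay within 64-bit arithmetic.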
https://en.wikipedia.org/wiki?curid=20087
Myriad A myriad (from Ancient Greek) is technically the number ten thousand; in that sense, the term is used almost exclusively in translations from Greek, Latin, Korean, or Chinese, or when talking about ancient Greek numbers. More generally, a myriad may be an indefinitely large number of things. The Aegean numerals of the Minoan and Mycenaean civilizations included a single unit to denote tens of thousands. It was written with a symbol composed of a circle with four dashes 𐄫. In Classical Greek numerals, a myriad was written as a capital mu: Μ, as lower case letters did not exist in Ancient Greece. To distinguish this numeral from letters, it was sometimes given an overbar. Multiples were written above this sign, so that, for example, 4,582 written above it would equal 4,582 × 10,000, or 45,820,000. The etymology of the word "myriad" itself is uncertain: it has been variously connected to PIE "*meu-" ("damp") in reference to the waves of the sea and to Greek "myrmex" ("ant") in reference to their swarms. The largest number named in Ancient Greek was the myriad myriad, or hundred million. In his "Sand Reckoner", Archimedes of Syracuse used this quantity as the basis for a numeration system of large powers of ten, which he used to count grains of sand. In English, "myriad" is most commonly used to mean "some large but unspecified number". It may be either an adjective or a noun: both "there are myriad people outside" and "there is a myriad of people outside" are in use. (There are small differences: the former could imply that it is a "diverse" group of people; the latter does not usually but could possibly indicate a group of exactly ten thousand.) The "Merriam-Webster Dictionary" notes that confusion over the use of myriad as a noun "seems to reflect a mistaken belief that the word was originally and is still properly only an adjective ... however, the noun is in fact the older form, dating to the 16th century. The noun "myriad" has appeared in the works of such writers as Milton (plural 'myriads') and Thoreau ('a myriad of'), and it continues to occur frequently in reputable English." "Myriad" is also infrequently used in English as the specific number 10,000. Owing to the possible confusion with the generic meaning of "large quantity", however, this is generally restricted to translation of other languages like ancient Greek, Chinese, and Hindi, where numbers may be grouped into sets of 10,000 (myriads). Such use permits the translator to remain closer to the original text and avoid repeated and unwieldy mentions of "tens of thousands": for example, "the original number of the crews supplied by the several nations I find to have been twenty-four myriads" and "What is the distance between one bridge and another? Twelve myriads of parasangs". Most European languages include variations of "myriad" with similar meanings to the English word. Additionally, the prefix "myria-", indicating multiplication times ten thousand (×10⁴), was part of the original metric system adopted by France in 1795. Although it was not retained after the 11th CGPM conference in 1960, "myriameter" is sometimes still encountered as a translation of the Scandinavian mile (Swedish & Norwegian: "mil") of 10 km, or in some classifications of wavelengths as the adjective "myriametric". The "myriagramme" (10 kg) was a French approximation of the avoirdupois "quartier", and the "myriaton" appears in Isaac Asimov's "Foundation" novel trilogy.
In Modern Greek, the word "myriad" is rarely used to denote 10,000, but a million is "ekatommyrio" ("lit." 'hundred myriad') and a thousand million is "disekatommyrio" ("lit." 'twice hundred myriad'). In East Asia, the traditional numeral systems of China, Korea, and Japan are all decimal-based but grouped into ten thousands rather than thousands. The character for myriad is 萬 in traditional script and 万 in simplified form, in both mainland China and Japan. The pronunciation varies within China and abroad: "wàn" (Mandarin), "wan5" (Hakka), "bān" (Minnan), "maan6" (Cantonese), "man" (Japanese and Korean), and "vạn" (Vietnamese). Vietnam is peculiar within the Sinosphere in largely rejecting Chinese numerals in favor of its own: "vạn" is less common than the native "mười nghìn" ("ten thousand"), and its numerals are grouped in threes. Because of the East Asian grouping into fours, higher orders of numbers are provided by the powers of 10,000 rather than 1,000: in China, 10,000² was 萬萬 in ancient texts but is now called 億 and sometimes written as 1,0000,0000; 10,000³ is 兆, or 1,0000,0000,0000; 10,000⁴ is 京, or 1,0000,0000,0000,0000; and so on. Conversely, Chinese, Japanese, and Korean generally do not have native words for powers of one thousand: what is called "one million" in English is "百萬" (100 myriad) in the Sinosphere, and "one billion" in English is "十億" (ten myllion) or "十萬萬" (ten myriad myriad) in the Sinosphere. Unusually, Vietnam employs its former translation of 兆, "một triệu", to mean 1,000,000 rather than the Chinese figure. Similarly, the PRC government has adapted the word 兆 to mean the scientific prefix mega-, but transliterations are used instead for giga-, tera-, and other larger prefixes. This has caused confusion in areas closely related to the PRC, such as Hong Kong and Macau, where 兆 is still largely used to mean 10,000³.
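As a purely illustrative aside, not drawn from the source, the difference between Western thousands-grouping and the myriad grouping described above can be shown mechanically; the helper below is our own sketch.

#include <stdio.h>

/* Print n with a separator every `group` digits from the right:
   group = 3 gives Western grouping, group = 4 gives myriad grouping. */
void print_grouped(unsigned long long n, int group)
{
    char digits[32];
    int len = snprintf(digits, sizeof digits, "%llu", n);
    for (int i = 0; i < len; i++) {
        putchar(digits[i]);
        int from_right = len - 1 - i;
        if (from_right > 0 && from_right % group == 0) putchar(',');
    }
    putchar('\n');
}

int main(void)
{
    print_grouped(100000000ULL, 3);  /* 100,000,000: one hundred million      */
    print_grouped(100000000ULL, 4);  /* 1,0000,0000: one myriad myriad (10^8) */
    return 0;
}

Read in fours, 1,0000,0000 is one unit of the second myriad power, which is exactly how the East Asian number names above are formed.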
https://en.wikipedia.org/wiki?curid=20088
Mohamed Al-Fayed Mohamed Al-Fayed (born Mohamed Fayed; 27 January 1933) is an Egyptian businessman. Fayed's business interests include ownership of the Hôtel Ritz Paris and formerly the Harrods department store. Al-Fayed sold Fulham F.C. to Shahid Khan in 2013. Fayed had a son, Dodi, from his first marriage, to Samira Khashoggi, which lasted from 1954 to 1956. Dodi died in a car crash in Paris with Diana, Princess of Wales, on 31 August 1997. Fayed later married Finnish socialite and former model Heini Wathén in 1985, with whom he has four children: Jasmine, Karim, Camilla, and Omar. In 2013, Fayed's wealth was estimated at US$1.4 billion, making him the 1,031st-richest person in the world. He was born Mohamed Fayed on 27 January 1933, in Roshdy, Alexandria, Egypt, the eldest son of an Egyptian primary school teacher. Fayed has five siblings: Ali, Ashraf, Salah, Soaad, and Safia. Ali and Salah have been his business colleagues. He was married for two years, from 1954 to 1956, to Samira Khashoggi. Fayed worked with his wife's brother, Saudi Arabian arms dealer and businessman Adnan Khashoggi. Some time in the early 1970s, he began using "Al-Fayed" rather than "Fayed". His brothers Ali and Salah began to follow suit at the time of their acquisition of the House of Fraser in the 1980s, though by the late 1980s, both had reverted to calling themselves simply "Fayed". Some have assumed that Fayed's addition of "Al-" to his name was to imply aristocratic origins, like "de" in French or "von" in German, though "Al-" does not have the same social connotations in Arabic. This assumption led to "Private Eye" nicknaming him the "Phoney Pharaoh". Fayed and his brothers founded a shipping company in Egypt before moving its headquarters to Genoa, Italy, with offices in London. Around 1964, Fayed entered a close relationship with the Haitian leader François 'Papa Doc' Duvalier and became interested in the construction of a Fayed-Duvalier oil refinery in Haiti. He also associated with the geologist George de Mohrenschildt. Fayed terminated his stay in Haiti six months later when a sample of "crude oil" provided by Haitian associates proved to be low-grade molasses. It was then that Fayed moved to England, where he lived in central London. In the mid-1960s, Fayed met the ruler of Dubai, Sheikh Mohammed bin Rashid Al Maktoum, who entrusted Fayed with helping transform Dubai, where he set up IMS (International Marine Services) in 1968. Fayed introduced British companies like the Costain Group (of which he became a director and 30 percent shareholder), Bernard Sunley & Sons and Taylor Woodrow to the Emirate to carry out the required construction work. He also became a financial adviser to the then Sultan of Brunei, Omar Ali Saifuddien III, in 1966. He briefly joined the board of the mining conglomerate Lonrho in 1975 but left after a disagreement. In 1979, Fayed bought The Ritz hotel in Paris, France for US$30 million. In 1984, Fayed and his brothers purchased a 30 percent stake in House of Fraser, a group that included the famous London store Harrods, from Roland 'Tiny' Rowland, the head of Lonrho. In 1985, he and his brothers bought the remaining 70 percent of House of Fraser for £615m. Rowland claimed the Fayed brothers lied about their background and wealth, and he put pressure on the government to investigate them. A Department of Trade and Industry (DTI) inquiry into the Fayeds was launched.
The DTI's subsequent report was critical, but no action was taken against the Fayeds, and while many believed the contents of the report, others felt it was politically motivated. In 1998, Rowland accused Fayed of stealing papers and jewels from his Harrods safe deposit box. Fayed was arrested, but the charges were dropped. Rowland died in 1998. Fayed settled the dispute with a payment to his widow; he also sued the Metropolitan Police for false arrest in 2002, but lost the case. In 1994, House of Fraser went public, but Fayed retained the private ownership of Harrods. He re-launched the humour publication "Punch" in 1996, but it folded again in 2002. Al-Fayed applied unsuccessfully for British citizenship twice – once in 1994 and again in 1999. It was suggested that the feud with Rowland contributed to Fayed's being refused British citizenship the first time. In 1994, in what became known as the cash-for-questions affair, Al-Fayed revealed the names of MPs he had paid to ask questions in parliament on his behalf, but who had failed to declare their fees. It saw the Conservative MPs Neil Hamilton and Tim Smith leave the government in disgrace, and a Committee on Standards in Public Life established to prevent such corruption occurring again. Fayed also revealed that the cabinet minister Jonathan Aitken had stayed for free at the Ritz Hotel in Paris at the same time as a group of Saudi arms dealers, leading to Aitken's subsequent unsuccessful libel case and imprisonment for perjury. During this period, from 1988 to February 1998, Al-Fayed's spokesman was Michael Cole, a former BBC journalist, although Cole's PR work for Al-Fayed did not cease in 1998. Hamilton lost a subsequent libel action against Al-Fayed in December 1999 and an appeal against the verdict in December 2000. The former MP has always denied that he was paid by Al-Fayed for asking questions in parliament. Hamilton's libel action related to a Channel 4 "Dispatches" documentary broadcast on 16 January 1997 in which Al-Fayed made claims that the MP had received up to £110,000 in cash and other gratuities for asking parliamentary questions. Hamilton's basis for his appeal was that the original verdict was invalid because Al-Fayed had paid £10,000 for documents stolen from the dustbins of Hamilton's legal representatives by Benjamin Pell. In 2003, Fayed moved from Surrey, UK, to Switzerland, alleging a breach in an agreement with the Inland Revenue. In 2005, he moved back to Britain, saying that he "regards Britain as home". He moored a yacht in Monaco called the "Sokar" prior to selling it in 2014. After previously denying that Harrods was for sale, Fayed sold the store to Qatar Holdings, the sovereign wealth fund of the country of Qatar, on 10 May 2010. A fortnight previously, Fayed had stated that "People approach us from Kuwait, Saudi Arabia, Qatar. Fair enough. But I put two fingers up to them. It is not for sale. This is not Marks and Spencer or Sainsbury's. It is a special place that gives people pleasure. There is only one Mecca." Harrods was sold for £1.5 billion. Fayed later revealed in an interview that he decided to sell Harrods following the difficulty in getting his dividend approved by the trustee of the Harrods pension fund. Fayed said "I'm here every day, I can't take my profit because I have to take a permission of those bloody idiots. I say is this right? Is this logic? Somebody like me? I run a business and I need to take bloody fucking trustee's permission to take my profit".
Fayed was appointed honorary chairman of Harrods, a position he was scheduled to hold for at least six months. In 1972, Fayed purchased the Balnagown estate in Easter Ross, northern Scotland. From an initial twelve acres, Al-Fayed has since built the estate up to sixty-five thousand acres. Al-Fayed has invested more than £20 million in the estate, restored the 14th-century pink Balnagown Castle, and created a tourist accommodation business. The Highlands of Scotland tourist board awarded Al-Fayed the "Freedom of the Highlands" in 2002, in recognition of his "outstanding contribution and commitment to the highlands." As an Egyptian with links to Scotland, Al-Fayed was intrigued enough to fund a 2008 reprint of the 15th-century chronicle "Scotichronicon" by Walter Bower. The "Scotichronicon" describes how Scota, a sister of the Egyptian Pharaoh Tutankhamen, fled her family and landed in Scotland, bringing with her the Stone of Scone. According to the chronicle, Scotland was later named in her honour. The tale is disputed by modern historians. Al-Fayed later declared that "The Scots are originally Egyptians and that's the truth." In 2009, Al-Fayed revealed that he was a supporter of Scottish independence from the United Kingdom, announcing to the Scots that "It's time for you to waken up and detach yourselves from the English and their terrible politicians...whatever help is needed for Scotland to regain its independence, I will provide it...when you Scots regain your freedom, I am ready to be your president." Fayed set up the Al Fayed Charitable Foundation in 1987, aiming to help children with life-limiting conditions and children living in poverty. The charity works mainly with charities and hospices for disabled and neglected children in the UK, Thailand and Mongolia. Some of the charities with which it works include Francis House Hospice in Manchester, Great Ormond Street Hospital and ChildLine. In 1998, Al-Fayed bought Princess Diana's old boarding school in Kent and helped found the New School at West Heath for children with additional needs and mental health problems. In 2011, Mohamed Al-Fayed's daughter Camilla, who has worked as an ambassador for the charity for eight years, opened the newly refurbished Zoe’s Place baby hospice in West Derby, Liverpool. Al-Fayed bought the freehold of the West London professional football club Fulham F.C. for £6.25 million in 1997. The club was purchased via Bill Muddyman's Muddyman Group. His long-term aim was that Fulham would become an FA Premier League side within five years. In 2001, Fulham won the First Division (now the Football League Championship) under manager Jean Tigana, taking 100 points and scoring over 100 goals in the season. This meant that Al-Fayed had achieved his objective of Fulham becoming a Premier League club a year ahead of schedule. By 2002, Fulham were competing in European football, winning the Intertoto Cup and challenging in the UEFA Cup. Fulham reached the final of the 2009–10 UEFA Europa League and continued to play in the Premier League throughout Al-Fayed's tenure as owner (which ended in 2013). Fulham temporarily left Craven Cottage while it was being upgraded to meet modern safety standards. There were fears that Fulham would not return to the Cottage after it was revealed that Al-Fayed had sold the first right to build on the ground to a property development firm.
Fulham lost a legal case against former manager Tigana in 2004 after Al-Fayed had wrongly alleged that Tigana had overpaid more than £7m for new players and had negotiated transfers in secret. In 2009, Al-Fayed said that he was in favour of a wage cap for footballers, and criticised the management of The Football Association and Premier League as "run by donkeys who don't understand business, who are dazzled by money." A statue of the American entertainer Michael Jackson was unveiled by Al-Fayed in April 2011 at Craven Cottage. In 1999, Jackson had attended a league game against Wigan Athletic at the stadium. Following criticisms of the statue, Al-Fayed said "If some stupid fans don't understand and appreciate such a gift this guy gave to the world they can go to hell. I don't want them to be fans." The statue was removed by the club's new owners in 2013; Al-Fayed blamed the club's subsequent relegation from the Premier League on the 'bad luck' brought by its removal. Al-Fayed then donated the statue to the National Football Museum. In March 2019, the statue was removed from the museum due to the backlash against Jackson caused by the accusations against him in the documentary "Leaving Neverland". Under Al-Fayed, Fulham F.C. was owned by Mafco Holdings, based in the tax haven of Bermuda. Mafco Holdings is owned by Al-Fayed and his family. By 2011, Al-Fayed had loaned Fulham F.C. £187 million in interest-free loans. In July 2013, it was announced that Al-Fayed had sold the club to Pakistani-American businessman Shahid Khan, who owns the NFL's Jacksonville Jaguars. Lady Diana Spencer was born in 1961, and married the heir to the British throne, Charles, Prince of Wales, in 1981, becoming the Princess of Wales. Diana was an international celebrity and a frequent visitor to Harrods in the 1980s. Al-Fayed and Dodi first met Diana and Charles when they were introduced at a polo tournament in July 1986 that had been sponsored by Harrods. Diana and Charles divorced in 1996. Diana was hosted by Al-Fayed in the south of France in mid-1997, with her two sons, the Princes William and Harry. For the holiday, Fayed bought a 195 ft yacht, the "Jonikal" (later renamed the "Sokar"). Dodi and Diana later began a private cruise on the "Jonikal", and paparazzi photographs of the couple in an embrace were published. Diana's friend, the journalist Richard Kay, confirmed that Diana was involved in "her first serious romance" since her divorce. Dodi and Diana went on a second private cruise on the "Jonikal" in the third week of August, and returned from Sardinia to Paris on 30 August. The couple privately dined at the Ritz later that day, after the behaviour of the press caused them to cancel a restaurant reservation; they then planned to spend the night at Dodi's apartment near the Arc de Triomphe. In an attempt to deceive the paparazzi, a decoy car left the front of the hotel, while Diana and Dodi departed at speed in a Mercedes-Benz S280 driven by chauffeur Henri Paul from the rear of the hotel. Five minutes later, the car crashed in the Pont de l'Alma tunnel, killing Paul and Dodi. Diana died later in hospital. Fayed arrived in Paris the next day and viewed Dodi's body, which was returned to Britain for an Islamic funeral. From February 1998, Al-Fayed maintained that the crash was a result of a conspiracy, and later contended that the crash was orchestrated by MI6 on the instructions of Prince Philip, Duke of Edinburgh.
His claims that the crash was a result of a conspiracy were dismissed by a French judicial investigation, but Fayed appealed against this verdict. A libel action was brought against Al-Fayed by Neil Hamilton (see above). The British Operation Paget, a Metropolitan Police inquiry that concluded in 2006, also found no evidence of a conspiracy. To Operation Paget, Al-Fayed made 175 "conspiracy claims". An inquest headed by Lord Justice Scott Baker into the deaths of Diana and Dodi began at the Royal Courts of Justice, London, on 2 October 2007 and lasted for six months. It was a continuation of the original inquest that had begun in 2004. At the Scott Baker inquest, Fayed accused the Duke of Edinburgh, the Prince of Wales, Diana's sister Lady Sarah McCorquodale, and numerous others of plotting to kill the Princess of Wales. Their motive, he claimed, was that they could not tolerate the idea of the Princess marrying a Muslim. Al-Fayed had first claimed to the "Daily Express" in May 2001 that the Princess was pregnant, and that he was the only person who had been told of this news. Witnesses at the inquest who said the Princess was not pregnant, and could not have been, were part of the conspiracy according to Al-Fayed. Fayed's testimony at the inquest was roundly condemned in the press as being farcical. Members of the British Government's Intelligence and Security Committee accused Fayed of turning the inquest into a 'circus' and called for it to be ended prematurely. Lawyers representing Al-Fayed later accepted at the inquest that there was no direct evidence that either the Duke of Edinburgh or MI6 had been involved in any murder conspiracy involving Diana or Dodi. A few days before Al-Fayed's appearance, John Macnamara, a former senior detective at Scotland Yard and Al-Fayed's investigator for five years from 1997, was forced to admit on 14 February 2008 that he had no evidence to suggest foul play, except for the assertions Al-Fayed had made to him. His admissions also related to the lack of evidence for Al-Fayed's claims about the alleged pregnancy of the Princess and the couple's supposed engagement. The jury verdict, given on 7 April 2008, was that Diana and Dodi had been "unlawfully killed" through the grossly negligent driving of chauffeur Henri Paul, who was intoxicated, and the pursuing vehicles. Lawyers for Al-Fayed also accepted that there was no evidence to support the assertion that Diana was illegally embalmed in order to cover up a pregnancy, a "pregnancy" that they accepted could not be established by any medical evidence. They also accepted that there was no evidence to support the assertion that the French emergency and medical services had played any role in a conspiracy to harm Diana. Following the Baker inquest, Al-Fayed said that he was abandoning his campaign to prove that Diana and Dodi were murdered in a conspiracy, and said that he would accept the verdict of the jury. Al-Fayed financially supported "Unlawful Killing" (2011), a documentary film presenting his version of events. The film was not formally released as a result of legal problems. Al-Fayed has been accused by multiple women of sexual harassment and assault. Young women applying for employment at Harrods were often subjected to HIV tests and gynaecological examinations. Some of these women were then selected to spend the weekend with Al-Fayed in Paris.
In her profile of Al-Fayed for "Vanity Fair", Maureen Orth described how, according to former employees, "Fayed regularly walked the store on the lookout for young, attractive women to work in his office. Those who rebuffed him would often be subjected to crude, humiliating comments about their appearance or dress...A dozen ex-employees I spoke with said that Fayed would chase secretaries around the office and sometimes try to stuff money down women's blouses". In 1994, Hermínia da Silva quit her job as a nanny at Al-Fayed's home in Oxted. Silva had made accusations that she had been sexually harassed by Al-Fayed, and she was subsequently arrested by detectives and held overnight in cells following a complaint of theft by an employee of Al-Fayed's. She was later released without charge after officers concluded she had not stolen anything. Al-Fayed eventually settled with her out of court, and she was awarded £12,000. Al-Fayed was interviewed under caution by the Metropolitan Police after an allegation of sexual assault against a 15-year-old schoolgirl in October 2008. The case was dropped by the Crown Prosecution Service after it found that there was no realistic chance of conviction owing to conflicting statements. In December 1997, the ITV current affairs programme "The Big Story" broadcast testimonies from a number of former Harrods employees who spoke of how women were routinely sexually harassed by Al-Fayed. A December 2017 episode of Channel 4's "Dispatches" programme alleged that Al-Fayed had sexually harassed three Harrods employees and attempted to "groom" them; one of the women was aged 17 at the time. Cheska Hill-Wood, now in her 40s, waived her right to anonymity to be interviewed for the programme. The programme alleged that Al-Fayed targeted young employees over a 13-year period.
https://en.wikipedia.org/wiki?curid=20089
Marmite Marmite is a food spread made from yeast extract, invented by the German scientist Justus von Liebig and originally made in the United Kingdom. It is a by-product of beer brewing and is produced by the Dutch-British company Unilever. Other similar products include the Australian Vegemite (the name of which is derived from that of Marmite), the Swiss Cenovis, the Brazilian Cenovit and the German Vitam-R. Marmite has been manufactured in New Zealand since 1919 under licence, but with a different recipe (see "Marmite (New Zealand)"). That product is the only one sold as Marmite in Australasia and the Pacific, whereas elsewhere in the world the European version predominates. The product is notable as a vegan source of B vitamins, including supplemental vitamin B12. South Africa also produces both a bottled, long-life cheese spread and a fresh full-cream cheese spread, each flavoured with Marmite. Marmite is a sticky, dark brown food paste with a distinctive, powerful flavour and is extremely salty. This distinctive taste is represented in the marketing slogan: "Love it or hate it." Such is its prominence in British popular culture that the product's name is often used as a metaphor for something that is an acquired taste or tends to polarise opinions. Marmite is a commonly used ingredient in dishes as a flavouring, as it is particularly rich in umami due to its very high levels of glutamate (1960 mg/100 g). The image on the front of the jar shows a "marmite", a French term for a large, covered earthenware or metal cooking pot. Marmite was originally supplied in earthenware pots but since the 1920s has been sold in glass jars. The product that was to become Marmite was invented during the late 19th century when the German scientist Justus von Liebig discovered that brewer's yeast could be concentrated, bottled and eaten. During 1902, the Marmite Food Extract Company was formed in Burton upon Trent, Staffordshire, England, with Marmite as its main product and Burton as the site of the first factory. The by-product yeast needed for the paste was supplied by Bass Brewery. By 1907, the product had become successful enough to warrant construction of a second factory at Camberwell Green in London. The discovery of vitamins in 1912 was a boost for Marmite, as the spread is a rich source of the vitamin B complex; with the vitamin B1 deficiency beri-beri being common during World War I, the spread became more popular. British troops during World War I were issued Marmite as part of their rations. During the 1930s, Marmite was used by the English scientist Lucy Wills to successfully treat a form of anaemia in mill workers in Bombay. She later identified folic acid as the active ingredient. Marmite was used to treat malnutrition among Suriya-Mal workers during the 1934–35 malaria epidemic in Sri Lanka. During the Second World War, when the spread was rationed, housewives were encouraged to spread Marmite thinly and to "use it sparingly just now". During 1990, Marmite Limited, which had become a subsidiary of Bovril Limited, was bought by CPC International Inc, which changed its name to Best Foods Inc during 1998. Best Foods Inc subsequently merged with Unilever during 2000, and Marmite is now a trademark owned by Unilever. There are a number of similar yeast products available in other countries; these products are not directly connected to the original Marmite recipe and brand. The Australian product Vegemite is distributed in many countries, and AussieMite is sold in Australia.
Other products include OzeMite, which is made by Dick Smith Foods; Cenovis, a Swiss spread; and Vegex, an autolyzed yeast product available in the United States since 1913. Marmite is traditionally eaten as a savoury spread on bread, toast, savoury biscuits or crackers, and other similar baked products. Owing to its concentrated taste, it is often spread thinly with butter or margarine. Marmite can also be made into a savoury hot drink by adding one teaspoon to a mug of hot water, much like Oxo and Bovril. Marmite is also commonly used to enrich casseroles and stews. Marmite is paired with cheese, such as in a cheese sandwich, and has been used as an additional flavouring in Mini Cheddars, a cheese-flavoured biscuit snack. Similarly, it is one of the flavours of Walkers Crisps; is sold as a flavouring on rice cakes; and is available in the form of Marmite Biscuits. Starbucks in the UK has a cheese and Marmite panini on its menu. Marmite has been used as an ingredient in cocktails, including the Marmite Cocktail and the Marmite Gold Rush. While the process is secret, the general method for making yeast extract on a commercial scale is to add salt to a suspension of yeast, making the solution hypertonic, which results in the cells shrivelling; this triggers autolysis, during which the yeast self-destructs. The dying yeast cells are then heated to complete their breakdown, and since yeast cells have thick cell walls which would detract from the smoothness of the end product, the husks are sieved out. As with other yeast extracts, Marmite contains free glutamic acid, which is analogous to monosodium glutamate. Presently, the main ingredients of Marmite are glutamic acid-rich yeast extract, with lesser quantities of salt, vegetable extract, spice extracts and celery extracts, although the precise composition is a trade secret. Some sources suggest that vitamin B12 is not naturally found in yeast extract, and is only present because it is added to Marmite during manufacture. Others indicate that "Brewer's and nutritional yeast extracts are also a rich source of B vitamins, particularly B-1, B-2, B-3, B-6, B-12, and folic acid." Marmite is rich in B vitamins, including thiamin (B1), riboflavin (B2), niacin (B3), folic acid (B9) and vitamin B12. The sodium content of the spread is high and has caused concern, although it is the amount per serving rather than the percentage in bulk Marmite that is relevant. The main ingredient of Marmite is yeast extract, which contains a high concentration of glutamic acid. Marmite is not gluten-free, as it is made with wheat, and although it is thoroughly washed, it may contain small quantities of gluten. Marmite should be avoided by anyone taking an MAOI antidepressant, such as phenelzine (Nardil) or tranylcypromine (Parnate), as yeast extracts interact adversely with these types of medication due to their tyramine content. Marmite is presently fortified with added vitamins; this resulted in a temporary ban in Denmark, which disallows fortified foodstuffs until they have been tested. The Danish Veterinary and Food Administration stated during 2015 that Marmite had not been banned in the country, but that fortified foods need to be tested for safety and approved before they can be marketed there. During 2014, suppliers applied for a risk assessment to be performed, after which Marmite became available in Denmark.
Marmite's publicity campaigns initially emphasised the spread's healthy nature, extolling it as "The growing up spread you never grow out of". The first major Marmite advertising campaign began during the 1930s, with characters whose faces incorporated the word "good". Soon afterwards, the increasing awareness of vitamins was used in Marmite advertising, with slogans proclaiming that "A small quantity added to the daily diet will ensure you and your family are taking sufficient vitamin B to keep nerves, brain, and digestion in proper working order". During the 1980s, the spread was advertised with the slogan "My mate, Marmite", chanted in television commercials by an army platoon. The spread had been a standard vitamin supplement for British-based German POWs during the Second World War. By the 1990s, Marmite's distinctive and powerful flavour had earned it as many detractors as it had fans, and it was known for producing a polarised "love/hate" reaction amongst consumers. For many years television advertisements for Marmite featured the song "Low Rider" by the band War, with the lyrics changed to the phrase "My Mate, Marmite". Marmite began a "Love it or Hate it" campaign during October 1996, and this resulted in the coining of the phrases "Marmite effect" and "Marmite reaction" for anything which provoked controversy. On 22 April 2010, Unilever threatened legal action against the British National Party for using a jar of Marmite and the "love it or hate it" slogan in their television advertisements. Because of the local product of the same name, British Marmite is sold under the name "Our Mate" in Australia and New Zealand; New Zealand Marmite uses the name "NZ-Mite" elsewhere. In Denmark, food safety legislation dictates that foodstuffs that contain added vitamins can only be sold by retailers which have been licensed by the Veterinary and Food Administration. During May 2011, the company that imports the product to Denmark revealed that it was not licensed and had therefore stopped selling the product; this resulted in widespread but inaccurate reports by the British media that Marmite had been banned by the Danish authorities. On 24 January 2014, the Canadian Food Inspection Agency was noted, in a Canadian Broadcasting Corporation story, as preparing to stop the sale of Marmite, as well as Vegemite and Ovaltine, in Canada because they were enriched with vitamins and minerals which were not listed by Canadian food regulations. The agency said the products were not a health hazard. The CFIA later specified that these specific items had been seized because they were not the versions that are formulated for sale in Canada and which satisfied all Canadian food regulations; Canadian versions of Marmite and the other products would still be permitted to be sold in Canada. Marmite is manufactured under licence in South Africa by Pioneer Foods in its traditional form. South Africa also produces a bottled, long-life Marmite-flavoured cheese spread, which is extremely popular in that country; it is light in texture and contains a hint of Marmite, making it more palatable to Marmite novices. In addition, Lancewood, a major dairy producer in South Africa, manufactures a fresh full-cream cheese spread flavoured with Marmite, especially for those who prefer fresh dairy over the long-life variety. Also immensely popular in South African baking circles is a sweet and savoury tea cake topped with a hint of Marmite butter and grated cheese. During 2002, a 100th anniversary jar was released.
During February 2007, Marmite produced a limited edition Guinness Marmite of 300,000 250 g jars of their yeast extract with 30% Guinness yeast, giving it a noticeable hint of "Guinness" flavour. During January 2008, Champagne Marmite was released for Valentine's Day, with a limited-edition production of 600,000 units initially released exclusively to Selfridges of London. The product had 0.3% champagne added to the recipe, and a modified heart-shaped label with "I love you" in place of the logo. During 2009, a limited edition Marston's Pedigree Marmite was released to celebrate the 2009 Ashes cricket test series. During March 2010, a super-strong blend called Marmite Extra Old, abbreviated as XO, was launched in the UK. Normal Marmite contains a mix of lager, bitter and ale varieties of yeast, sourced from breweries throughout the UK. However, as lagers have a lighter, sweeter taste, residue from this product is not used in Marmite XO; only residue from traditional bitters and ales is blended, to ensure the stronger taste. Marmite XO is matured for 28 days, four times longer than usual. Continuing the tradition of different coloured caps for special editions, Marmite XO's cap is black. During April 2012, a special edition jar commemorating the Diamond Jubilee of Queen Elizabeth II was released. With the product renamed "Ma'amite", the redesigned label featured a colour scheme based upon the Union Jack, with the marmite-and-spoon logo replaced by a gold crown and a red rather than yellow cap. The front label also declares "Made with 100% British Yeast". Coinciding with the 110th anniversary of the brand, production was limited to 300,000 jars. For Christmas 2012, a gold limited edition was released, containing edible gold-coloured flecks. Marmite chocolate is also available. During 2015, the Marmite Summer of Love Special Edition featured a flower power themed label; this special edition's blend had a lighter taste, made using 100% lager yeast. In July 2019, Marmite XO returned due to popular demand.
https://en.wikipedia.org/wiki?curid=20090
Mabon ap Modron Mabon ap Modron is a prominent figure from Welsh literature and mythology, the son of Modron and a member of Arthur's war band. Both he and his mother were likely deities in origin, descending from a divine mother–son pair. His name is related to the Romano-British god Maponos, whose name means "Great Son"; Modron, in turn, is likely related to the Gaulish goddess Dea Matrona. He is often equated with the Demetian hero Pryderi fab Pwyll, and may be associated with the minor Arthurian character Mabon ab Mellt. The name "Mabon" is derived from the Common Brittonic and Gaulish deity name "Maponos" ("Great Son"), from the Proto-Celtic root "*makwo-" ("son"). Similarly, Modron is derived from the name of the Brittonic and Gaulish deity "Mātronā", meaning "Great Mother", from Proto-Celtic "*mātīr" ("mother"). Culhwch's father, King Cilydd, the son of Celyddon, loses his wife Goleuddydd after a difficult childbirth. When he remarries, the young Culhwch rejects his stepmother's attempt to pair him with his new stepsister. Offended, the new queen puts a curse on him so that he can marry no one besides the beautiful Olwen, daughter of the giant Ysbaddaden. Though he has never seen her, Culhwch becomes infatuated with her, but his father warns him that he will never find her without the aid of his famous cousin Arthur. The young man immediately sets off to seek his kinsman. He finds him at his court in Celliwig in Cornwall and asks for support and assistance. Cai is the first knight to volunteer to assist Culhwch in his quest, promising to stand by his side until Olwen is found. A further five knights join them in their mission. They travel onwards until they come across the "fairest of the castles of the world", and meet Ysbaddaden's shepherd brother, Custennin. They learn that the castle belongs to Ysbaddaden, and that he stripped Custennin of his lands and murdered the shepherd's twenty-three children out of cruelty. Custennin sets up a meeting between Culhwch and Olwen, and the maiden agrees to lead Culhwch and his companions to Ysbaddaden's castle; Cai pledges to protect the twenty-fourth son, Goreu, with his life. The knights attack the castle by stealth, killing the nine porters and the nine watchdogs, and enter the giant's hall. Upon their arrival, Ysbaddaden attempts to kill Culhwch with a poison dart, but is outwitted and wounded, first by Bedwyr, then by the enchanter Menw, and finally by Culhwch himself. Eventually, Ysbaddaden relents, and agrees to give Culhwch his daughter on the condition that he completes a number of impossible tasks ("anoethau"), including hunting the Twrch Trwyth and recovering the exalted prisoner, Mabon son of Modron, the only man able to hunt with the dog Drudwyn, in turn the only dog who can track the Twrch Trwyth. Arthur and his men learn that Mabon was stolen from his mother's arms when he was three nights old, and question the world's oldest and wisest animals about his whereabouts, until they are led to the salmon of Llyn Llyw, the oldest animal of them all. The enormous salmon carries Arthur's men Cai and Bedwyr downstream to Mabon's prison in Gloucester; they hear him through the walls, singing a lamentation for his fate. The rest of Arthur's men launch an assault on the front of the prison, while Cai and Bedwyr sneak in the back and rescue Mabon. He subsequently plays a key role in the hunt for the Twrch Trwyth.
One of the earliest direct references to Mabon can be found in the tenth-century poem "Pa Gur", in which Arthur recounts the feats and achievements of his knights so as to gain entrance to a fortress guarded by Glewlwyd Gafaelfawr, the eponymous porter. The poem relates that Mabon fab Mydron (a misspelling of Modron) is one of Arthur's followers, and describes him as a "servant to Uther Pendragon". A second figure, Mabon fab Mellt, is described as having "stained the grass with blood". Mabon further appears in the medieval tale "The Dream of Rhonabwy", in which he fights alongside Arthur at the Battle of Badon and is described as one of the king's chief advisors. Mabon is almost certainly related to the continental Arthurian figures Mabonagrain, Mabuz, Nabon le Noir and Maboun.
https://en.wikipedia.org/wiki?curid=20092
Microwave Microwaves are a form of electromagnetic radiation with wavelengths ranging from about one meter to one millimeter, corresponding to frequencies between 300 MHz and 300 GHz. Different sources define different frequency ranges as microwaves; the above broad definition includes both UHF and EHF (millimeter wave) bands. A more common definition in radio-frequency engineering is the range between 1 and 100 GHz (wavelengths between 0.3 m and 3 mm). In all cases, microwaves include the entire SHF band (3 to 30 GHz, or 10 to 1 cm) at minimum. Frequencies in the microwave range are often referred to by their IEEE radar band designations: S, C, X, Ku, K, or Ka band, or by similar NATO or EU designations. The prefix "micro-" in "microwave" is not meant to suggest a wavelength in the micrometer range. Rather, it indicates that microwaves are "small" (having shorter wavelengths), compared to the radio waves used prior to microwave technology. The boundaries between far infrared, terahertz radiation, microwaves, and ultra-high-frequency radio waves are fairly arbitrary and are used variously between different fields of study. Microwaves travel by line-of-sight; unlike lower frequency radio waves, they do not diffract around hills, follow the earth's surface as ground waves, or reflect from the ionosphere, so terrestrial microwave communication links are limited by the visual horizon to about 40 miles (64 km). At the high end of the band they are absorbed by gases in the atmosphere, limiting practical communication distances to around a kilometer. Microwaves are widely used in modern technology, for example in point-to-point communication links, wireless networks, microwave radio relay networks, radar, satellite and spacecraft communication, medical diathermy and cancer treatment, remote sensing, radio astronomy, particle accelerators, spectroscopy, industrial heating, collision avoidance systems, garage door openers and keyless entry systems, and for cooking food in microwave ovens. Microwaves occupy a place in the electromagnetic spectrum with frequency above ordinary radio waves and below infrared light. In descriptions of the electromagnetic spectrum, some sources classify microwaves as radio waves, a subset of the radio wave band, while others classify microwaves and radio waves as distinct types of radiation; this is an arbitrary distinction. Microwaves travel solely by line-of-sight paths; unlike lower frequency radio waves, they do not travel as ground waves which follow the contour of the Earth, or reflect off the ionosphere (skywaves). Although at the low end of the band they can pass through building walls enough for useful reception, usually rights of way cleared to the first Fresnel zone are required. Therefore, on the surface of the Earth, microwave communication links are limited by the visual horizon to about 40 miles (64 km). Microwaves are absorbed by moisture in the atmosphere, and the attenuation increases with frequency, becoming a significant factor (rain fade) at the high end of the band. Beginning at about 40 GHz, atmospheric gases also begin to absorb microwaves, so above this frequency microwave transmission is limited to a few kilometers. A spectral band structure causes absorption peaks at specific frequencies. Above 100 GHz, the absorption of electromagnetic radiation by Earth's atmosphere is so great that it is in effect opaque, until the atmosphere becomes transparent again in the so-called infrared and optical window frequency ranges.
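The band limits quoted above are linked by the relation λ = c/f. A minimal sketch of the conversion (plain Python, for illustration only, using the values quoted in this article):

```python
# Illustrative sketch: convert between frequency and free-space wavelength.
C = 299_792_458.0  # speed of light in vacuum, m/s

def wavelength_m(freq_hz: float) -> float:
    """Free-space wavelength in meters for a frequency in hertz."""
    return C / freq_hz

def frequency_hz(lam_m: float) -> float:
    """Frequency in hertz for a free-space wavelength in meters."""
    return C / lam_m

# The broad microwave band limits quoted above:
print(frequency_hz(1.0))    # ~3.0e8 Hz:  about 300 MHz at 1 m wavelength
print(frequency_hz(1e-3))   # ~3.0e11 Hz: about 300 GHz at 1 mm wavelength
print(wavelength_m(3e9))    # ~0.1 m: the SHF band spans 10 cm down to 1 cm
```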
In a microwave beam directed at an angle into the sky, a small amount of the power will be randomly scattered as the beam passes through the troposphere. A sensitive receiver beyond the horizon with a high gain antenna focused on that area of the troposphere can pick up the signal. This technique has been used at frequencies between 0.45 and 5 GHz in tropospheric scatter (troposcatter) communication systems to communicate beyond the horizon, at distances up to 300 km. The short wavelengths of microwaves allow omnidirectional antennas for portable devices to be made very small, from 1 to 20 centimeters long, so microwave frequencies are widely used for wireless devices such as cell phones, cordless phones, wireless LAN (Wi-Fi) access for laptops, and Bluetooth earphones. Antennas used include short whip antennas, rubber ducky antennas, sleeve dipoles, patch antennas, and increasingly the printed circuit inverted F antenna (PIFA) used in cell phones. Their short wavelength also allows narrow beams of microwaves to be produced by conveniently small high gain antennas from a half meter to 5 meters in diameter. Therefore, beams of microwaves are used for point-to-point communication links, and for radar. An advantage of narrow beams is that they do not interfere with nearby equipment using the same frequency, allowing frequency reuse by nearby transmitters. Parabolic ("dish") antennas are the most widely used directive antennas at microwave frequencies, but horn antennas, slot antennas and dielectric lens antennas are also used. Flat microstrip antennas are being increasingly used in consumer devices. Another directive antenna practical at microwave frequencies is the phased array, a computer-controlled array of antennas which produces a beam that can be electronically steered in different directions. At microwave frequencies, the transmission lines which are used to carry lower frequency radio waves to and from antennas, such as coaxial cable and parallel wire lines, have excessive power losses, so when low attenuation is required, microwaves are carried by metal pipes called waveguides. Due to the high cost and maintenance requirements of waveguide runs, in many microwave antennas the output stage of the transmitter or the RF front end of the receiver is located at the antenna. The term "microwave" also has a more technical meaning in electromagnetics and circuit theory. Apparatus and techniques may be described qualitatively as "microwave" when the wavelengths of signals are roughly the same as the dimensions of the circuit, so that lumped-element circuit theory is inaccurate, and instead distributed circuit elements and transmission-line theory are more useful methods for design and analysis. As a consequence, practical microwave circuits tend to move away from the discrete resistors, capacitors, and inductors used with lower-frequency radio waves. Open-wire and coaxial transmission lines used at lower frequencies are replaced by waveguides and stripline, and lumped-element tuned circuits are replaced by cavity resonators or resonant stubs. In turn, at even higher frequencies, where the wavelength of the electromagnetic waves becomes small in comparison to the size of the structures used to process them, microwave techniques become inadequate, and the methods of optics are used. High-power microwave sources use specialized vacuum tubes to generate microwaves.
These devices operate on different principles from low-frequency vacuum tubes, using the ballistic motion of electrons in a vacuum under the influence of controlling electric or magnetic fields, and include the magnetron (used in microwave ovens), klystron, traveling-wave tube (TWT), and gyrotron. These devices work in the density modulated mode, rather than the current modulated mode: they work on the basis of clumps of electrons flying ballistically through them, rather than using a continuous stream of electrons. Low-power microwave sources use solid-state devices such as the field-effect transistor (at least at lower frequencies), tunnel diodes, Gunn diodes, and IMPATT diodes. Low-power sources are available as benchtop instruments, rackmount instruments, embeddable modules and in card-level formats. A maser is a device that amplifies microwaves using principles similar to those of the laser, which amplifies higher-frequency light waves. All warm objects emit low level microwave black-body radiation, depending on their temperature, so in meteorology and remote sensing, microwave radiometers are used to measure the temperature of objects or terrain. The sun and other astronomical radio sources such as Cassiopeia A emit low level microwave radiation which carries information about their makeup, which is studied by radio astronomers using receivers called radio telescopes. The cosmic microwave background radiation (CMBR), for example, is a weak microwave noise filling empty space which is a major source of information on cosmology's Big Bang theory of the origin of the Universe.
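The radiometry described above works because, at microwave frequencies and everyday temperatures, thermal emission lies in the Rayleigh–Jeans limit of Planck's law (hf much less than kT), where spectral radiance is directly proportional to temperature: B ≈ 2kTf²/c². A minimal sketch of that relation (standard physics, not tied to any particular instrument):

```python
# Illustrative sketch: Rayleigh-Jeans radiance, the basis of microwave
# radiometry. Valid when h*f << k*T, which holds across the microwave
# band for terrestrial temperatures.
K_B = 1.380649e-23   # Boltzmann constant, J/K
C = 299_792_458.0    # speed of light in vacuum, m/s

def rayleigh_jeans_radiance(freq_hz: float, temp_k: float) -> float:
    """Black-body spectral radiance B = 2*k*T*f^2/c^2, in W/(m^2*Hz*sr)."""
    return 2.0 * K_B * temp_k * freq_hz ** 2 / C ** 2

# Terrain at 290 K observed by a 30 GHz radiometer:
print(rayleigh_jeans_radiance(30e9, 290.0))   # ~8.0e-17 W/(m^2*Hz*sr)
# Radiance scales linearly with T in this limit, which is why radiometers
# report measurements directly as a "brightness temperature".
```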
The WiMAX service offerings that can be carried on the 3.65 GHz band will give business customers another option for connectivity. Metropolitan area network (MAN) protocols, such as WiMAX (Worldwide Interoperability for Microwave Access), are based on standards such as IEEE 802.16, designed to operate between 2 and 11 GHz. Commercial implementations are in the 2.3 GHz, 2.5 GHz, 3.5 GHz and 5.8 GHz ranges. Mobile Broadband Wireless Access (MBWA) protocols based on standards specifications such as IEEE 802.20 or ATIS/ANSI HC-SDMA (such as iBurst) operate between 1.6 and 2.3 GHz to give mobility and in-building penetration characteristics similar to mobile phones but with vastly greater spectral efficiency. Some mobile phone networks, like GSM, use the low-microwave/high-UHF frequencies around 1.9 GHz in the Americas and 1.8 GHz elsewhere. DVB-SH and S-DMB use 1.452 to 1.492 GHz, while proprietary/incompatible satellite radio in the U.S. uses around 2.3 GHz for DARS. Microwave radio is used in broadcasting and telecommunication transmissions because, due to their short wavelength, highly directional antennas are smaller and therefore more practical than they would be at longer wavelengths (lower frequencies). There is also more bandwidth in the microwave spectrum than in the rest of the radio spectrum; the usable bandwidth below 300 MHz is less than 300 MHz, while many GHz can be used above 300 MHz. Typically, microwaves are used in television news to transmit a signal from a remote location to a television station from a specially equipped van. See broadcast auxiliary service (BAS), remote pickup unit (RPU), and studio/transmitter link (STL). Most satellite communications systems operate in the C, X, Ka, or Ku bands of the microwave spectrum. These frequencies allow large bandwidth while avoiding the crowded UHF frequencies and staying below the atmospheric absorption of EHF frequencies. Satellite TV either operates in the C band for the traditional large dish fixed satellite service or the Ku band for direct-broadcast satellite. Military communications run primarily over X or Ku-band links, with the Ka band being used for Milstar. Global Navigation Satellite Systems (GNSS), including the Chinese BeiDou, the American Global Positioning System (introduced in 1978) and the Russian GLONASS, broadcast navigational signals in various bands between about 1.2 GHz and 1.6 GHz. Radar is a radiolocation technique in which a beam of radio waves emitted by a transmitter bounces off an object and returns to a receiver, allowing the location, range, speed, and other characteristics of the object to be determined. The short wavelength of microwaves causes large reflections from objects the size of motor vehicles, ships and aircraft. Also, at these wavelengths, the high gain antennas such as parabolic antennas which are required to produce the narrow beamwidths needed to accurately locate objects are conveniently small, allowing them to be rapidly turned to scan for objects. Therefore, microwave frequencies are the main frequencies used in radar. Microwave radar is widely used for applications such as air traffic control, weather forecasting, navigation of ships, and speed limit enforcement. Long distance radars use the lower microwave frequencies since at the upper end of the band atmospheric absorption limits the range, but millimeter waves are used for short range radar such as collision avoidance systems.
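The advantages sketched above (narrow beams, high gain, workable link budgets) can be checked with standard radio-engineering approximations that are not specific to this article: half-power beamwidth θ ≈ 70°·λ/D, dish gain G ≈ η(πD/λ)², and free-space path loss (4πdf/c)². A sketch assuming a typical 60% aperture efficiency, using a 70 km relay hop like those mentioned above as the example:

```python
import math

C = 299_792_458.0  # speed of light in vacuum, m/s

def beamwidth_deg(freq_hz: float, dish_diameter_m: float) -> float:
    """Half-power beamwidth of a parabolic dish, via the common
    rule of thumb: theta ~ 70 * lambda / D (degrees)."""
    return 70.0 * (C / freq_hz) / dish_diameter_m

def dish_gain_dbi(freq_hz: float, dish_diameter_m: float,
                  efficiency: float = 0.6) -> float:
    """Dish gain G = eta * (pi * D / lambda)^2, expressed in dBi."""
    lam = C / freq_hz
    return 10.0 * math.log10(efficiency * (math.pi * dish_diameter_m / lam) ** 2)

def free_space_path_loss_db(distance_m: float, freq_hz: float) -> float:
    """Free-space path loss (4*pi*d*f/c)^2, expressed in dB."""
    return 20.0 * math.log10(4.0 * math.pi * distance_m * freq_hz / C)

# A 6 GHz relay hop of 70 km with a 3 m dish at each end:
f, d, dish = 6e9, 70e3, 3.0
print(beamwidth_deg(f, dish))         # ~1.2 degrees: little interference
print(dish_gain_dbi(f, dish))         # ~43 dBi per dish
print(free_space_path_loss_db(d, f))  # ~145 dB of free-space loss
# Net loss ~145 - 2*43 = ~59 dB, closable with a few watts of transmit power.
```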
Microwaves emitted by astronomical radio sources (planets, stars, galaxies, and nebulae) are studied in radio astronomy with large dish antennas called radio telescopes. In addition to receiving naturally occurring microwave radiation, radio telescopes have been used in active radar experiments to bounce microwaves off planets in the solar system, to determine the distance to the Moon or map the invisible surface of Venus through cloud cover. A recently completed microwave radio telescope is the Atacama Large Millimeter Array, located at more than 5,000 meters (16,597 ft) altitude in Chile, which observes the universe in the millimetre and submillimetre wavelength ranges. The world's largest ground-based astronomy project to date, it consists of 66 dishes and was built in an international collaboration by Europe, North America, East Asia and Chile. A major recent focus of microwave radio astronomy has been mapping the cosmic microwave background radiation (CMBR) discovered in 1964 by radio astronomers Arno Penzias and Robert Wilson. This faint background radiation, which fills the universe and is almost the same in all directions, is "relic radiation" from the Big Bang, and is one of the few sources of information about conditions in the early universe. Due to the expansion and thus cooling of the Universe, the originally high-energy radiation has been shifted into the microwave region of the radio spectrum. Sufficiently sensitive radio telescopes can detect the CMBR as a faint signal that is not associated with any star, galaxy, or other object. A microwave oven passes microwave radiation at a frequency near 2.45 GHz through food, causing dielectric heating primarily by absorption of the energy in water. Microwave ovens became common kitchen appliances in Western countries in the late 1970s, following the development of less expensive cavity magnetrons. Water in the liquid state possesses many molecular interactions that broaden the absorption peak. In the vapor phase, isolated water molecules absorb at around 22 GHz, almost ten times the frequency of the microwave oven. Microwave heating is used in industrial processes for drying and curing products. Many semiconductor processing techniques use microwaves to generate plasma for such purposes as reactive ion etching and plasma-enhanced chemical vapor deposition (PECVD). Microwaves are used in stellarators and tokamak experimental fusion reactors to help break down the gas into a plasma, and heat it to very high temperatures. The frequency is tuned to the cyclotron resonance of the electrons in the magnetic field, anywhere between 2 and 200 GHz, hence it is often referred to as electron cyclotron resonance heating (ECRH). The upcoming ITER thermonuclear reactor will use up to 20 MW of 170 GHz microwaves. Microwaves can be used to transmit power over long distances, and post-World War II research was done to examine possibilities. NASA worked in the 1970s and early 1980s to research the possibilities of using solar power satellite (SPS) systems with large solar arrays that would beam power down to the Earth's surface via microwaves. Less-than-lethal weaponry exists that uses millimeter waves to heat a thin layer of human skin to an intolerable temperature so as to make the targeted person move away. A two-second burst of the 95 GHz focused beam heats the skin to a temperature of about 54 °C (129 °F) at a depth of 0.4 mm (1/64 inch). The United States Air Force and Marines are currently using this type of active denial system in fixed installations.
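The ECRH tuning described above follows from the electron cyclotron relation f = eB/(2πmₑ), about 28 GHz per tesla of magnetic field; this is a standard plasma-physics formula, and the field values below are purely illustrative rather than a description of any particular reactor:

```python
import math

E_CHARGE = 1.602176634e-19  # elementary charge, C
M_E = 9.1093837015e-31      # electron mass, kg

def electron_cyclotron_freq_hz(b_tesla: float) -> float:
    """Electron cyclotron frequency f = e*B / (2*pi*m_e); ECRH systems
    tune their microwave sources to this frequency or a harmonic of it."""
    return E_CHARGE * b_tesla / (2.0 * math.pi * M_E)

# The rule of thumb is ~28 GHz per tesla of magnetic field:
print(electron_cyclotron_freq_hz(1.0) / 1e9)  # ~28 GHz
print(electron_cyclotron_freq_hz(5.0) / 1e9)  # ~140 GHz, gyrotron territory
```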
Microwave radiation is used in electron paramagnetic resonance (EPR or ESR) spectroscopy, typically in the X-band region (~9 GHz), in conjunction with magnetic fields of around 0.3 T. This technique provides information on unpaired electrons in chemical systems, such as free radicals or transition metal ions such as Cu(II). Microwave radiation is also used to perform rotational spectroscopy and can be combined with electrochemistry, as in microwave enhanced electrochemistry. Bands of frequencies in the microwave spectrum are designated by letters. Unfortunately, there are several incompatible band designation systems, and even within a system the frequency ranges corresponding to some of the letters vary somewhat between different application fields. The letter system had its origin in World War II in a top-secret U.S. classification of bands used in radar sets; this is the origin of the oldest letter system, the IEEE radar bands. One set of microwave frequency band designations has been published by the Radio Society of Great Britain (RSGB); other definitions exist. The term P band is sometimes used for UHF frequencies below the L band but is now obsolete per IEEE Std 521. When radars were first developed at K band during World War II, it was not known that there was a nearby absorption band (due to water vapor and oxygen in the atmosphere). To avoid this problem, the original K band was split into a lower band, Ku, and an upper band, Ka. Microwave frequency can be measured by either electronic or mechanical techniques. Frequency counters or high frequency heterodyne systems can be used. Here the unknown frequency is compared with harmonics of a known lower frequency by use of a low frequency generator, a harmonic generator and a mixer. Accuracy of the measurement is limited by the accuracy and stability of the reference source. Mechanical methods require a tunable resonator such as an absorption wavemeter, which has a known relation between a physical dimension and frequency. In a laboratory setting, Lecher lines can be used to directly measure the wavelength on a transmission line made of parallel wires; the frequency can then be calculated. A similar technique is to use a slotted waveguide or slotted coaxial line to directly measure the wavelength. These devices consist of a probe introduced into the line through a longitudinal slot, so that the probe is free to travel up and down the line. Slotted lines are primarily intended for measurement of the voltage standing wave ratio on the line. However, provided a standing wave is present, they may also be used to measure the distance between the nodes, which is equal to half the wavelength. Precision of this method is limited by the determination of the nodal locations. Microwaves do not contain sufficient energy to chemically change substances by ionization, and so are an example of non-ionizing radiation. The word "radiation" refers to energy radiating from a source and not to radioactivity. It has not been shown conclusively that microwaves (or other non-ionizing electromagnetic radiation) have significant adverse biological effects at low levels. Some, but not all, studies suggest that long-term exposure may have a carcinogenic effect. This is separate from the risks associated with very high-intensity exposure, which can cause heating and burns like any heat source; that is not a property unique to microwaves.
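Two of the points above are easy to verify numerically: in a slotted line, adjacent standing-wave nodes sit half a wavelength apart, so the frequency follows directly from the node spacing; and the non-ionizing claim follows from the photon energy E = hf being thousands of times below the few electronvolts needed to ionize atoms or break chemical bonds. A sketch (standard physics, illustrative values):

```python
H = 6.62607015e-34     # Planck constant, J*s
C = 299_792_458.0      # speed of light in vacuum, m/s
EV = 1.602176634e-19   # joules per electronvolt

def freq_from_node_spacing(node_spacing_m: float) -> float:
    """Slotted-line measurement: adjacent standing-wave nodes are lambda/2
    apart, so f = c / (2 * node spacing). Assumes an air-filled line."""
    return C / (2.0 * node_spacing_m)

def photon_energy_ev(freq_hz: float) -> float:
    """Photon energy E = h*f, converted to electronvolts."""
    return H * freq_hz / EV

# Nodes 15 mm apart imply a 30 mm wavelength, i.e. a 10 GHz signal:
print(freq_from_node_spacing(0.015))   # ~1.0e10 Hz

# Even at 300 GHz a microwave photon carries ~1.2e-3 eV, thousands of
# times less than the several eV needed to ionize atoms:
print(photon_energy_ev(300e9))         # ~0.00124 eV
```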
During World War II, it was observed that individuals in the radiation path of radar installations experienced clicks and buzzing sounds in response to microwave radiation. This microwave auditory effect was thought to be caused by the microwaves inducing an electric current in the hearing centers of the brain. Research by NASA in the 1970s showed it to be caused by thermal expansion in parts of the inner ear. In 1955, Dr. James Lovelock was able to reanimate rats chilled to 0–1 °C using microwave diathermy. When injury from exposure to microwaves occurs, it usually results from dielectric heating induced in the body. Exposure to microwave radiation can produce cataracts by this mechanism, because the microwave heating denatures proteins in the crystalline lens of the eye (in the same way that heat turns egg whites white and opaque). The lens and cornea of the eye are especially vulnerable because they contain no blood vessels that can carry away heat. Exposure to heavy doses of microwave radiation (as from an oven that has been tampered with to allow operation even with the door open) can produce heat damage in other tissues as well, up to and including serious burns that may not be immediately evident because of the tendency for microwaves to heat deeper tissues with higher moisture content. Eleanor R. Adair conducted microwave health research by exposing herself, animals and humans to microwave levels that made them feel warm or even start to sweat and feel quite uncomfortable. She found no adverse health effects other than heat. Microwaves were first generated in the 1890s in some of the earliest radio experiments by physicists who thought of them as a form of "invisible light". James Clerk Maxwell, in his 1873 theory of electromagnetism, now called Maxwell's equations, had predicted that a coupled electric field and magnetic field could travel through space as an electromagnetic wave, and proposed that light consisted of electromagnetic waves of short wavelength. In 1888, German physicist Heinrich Hertz was the first to demonstrate the existence of radio waves, using a primitive spark gap radio transmitter. Hertz and the other early radio researchers were interested in exploring the similarities between radio waves and light waves, to test Maxwell's theory. They concentrated on producing short wavelength radio waves in the UHF and microwave ranges, with which they could duplicate classic optics experiments in their laboratories, using quasioptical components such as prisms and lenses made of paraffin, sulfur and pitch, and wire diffraction gratings, to refract and diffract radio waves like light rays. Hertz produced waves up to 450 MHz; his directional 450 MHz transmitter consisted of a 26 cm brass rod dipole antenna with a spark gap between the ends, suspended at the focal line of a parabolic antenna made of a curved zinc sheet, powered by high voltage pulses from an induction coil. His historic experiments demonstrated that radio waves, like light, exhibited refraction, diffraction, polarization, interference and standing waves, proving that radio waves and light waves were both forms of Maxwell's electromagnetic waves. In 1894, Oliver Lodge and Augusto Righi generated 1.5 and 12 GHz microwaves respectively with small metal ball spark resonators. The same year, Indian physicist Jagadish Chandra Bose was the first person to produce millimeter waves, generating 60 GHz (5 millimeter) microwaves using a 3 mm metal ball spark oscillator.
Bose also invented waveguide and horn antennas for use in his experiments. Russian physicist Pyotr Lebedev generated 50 GHz millimeter waves in 1895. In 1897, Lord Rayleigh solved the mathematical boundary-value problem of electromagnetic waves propagating through conducting tubes and dielectric rods of arbitrary shape, which gave the modes and cutoff frequencies of microwaves propagating through a waveguide. However, since microwaves were limited to line of sight paths, they could not communicate beyond the visual horizon, and the low power of the spark transmitters then in use limited their practical range to a few miles. The subsequent development of radio communication after 1896 employed lower frequencies, which could travel beyond the horizon as ground waves and by reflecting off the ionosphere as skywaves, and microwave frequencies were not further explored at this time. Practical use of microwave frequencies did not occur until the 1940s and 1950s due to a lack of adequate sources, since the triode vacuum tube (valve) electronic oscillator used in radio transmitters could not produce frequencies above a few hundred megahertz due to excessive electron transit time and interelectrode capacitance. By the 1930s, the first low power microwave vacuum tubes had been developed using new principles: the Barkhausen-Kurz tube and the split-anode magnetron. These could generate a few watts of power at frequencies up to a few gigahertz, and were used in the first experiments in communication with microwaves. In 1931, an Anglo-French consortium demonstrated the first experimental microwave relay link, across the English Channel between Dover, UK and Calais, France. The system transmitted telephony, telegraph and facsimile data over bidirectional 1.7 GHz beams with a power of one-half watt, produced by miniature Barkhausen-Kurz tubes at the focus of metal dishes. A word was needed to distinguish these new shorter wavelengths, which had previously been lumped into the "short wave" band, which meant all waves shorter than 200 meters. The terms "quasi-optical waves" and "ultrashort waves" were used briefly, but did not catch on. The first usage of the word "micro-wave" apparently occurred in 1931. The development of radar, mainly in secrecy, before and during World War II resulted in the technological advances which made microwaves practical. Wavelengths in the centimeter range were required to give the small radar antennas which were compact enough to fit on aircraft a narrow enough beamwidth to localize enemy aircraft. It was found that conventional transmission lines used to carry radio waves had excessive power losses at microwave frequencies, and George Southworth at Bell Labs and Wilmer Barrow at MIT independently invented waveguide in 1936. Barrow invented the horn antenna in 1938 as a means to efficiently radiate microwaves into or out of a waveguide. In a microwave receiver, a nonlinear component was needed that would act as a detector and mixer at these frequencies, as vacuum tubes had too much capacitance. To fill this need, researchers resurrected an obsolete technology, the point contact crystal detector (cat's whisker detector), which was used as a demodulator in crystal radios around the turn of the century before vacuum tube receivers. The low capacitance of semiconductor junctions allowed them to function at microwave frequencies.
The first modern silicon and germanium diodes were developed as microwave detectors in the 1930s, and the principles of semiconductor physics learned during their development led to semiconductor electronics after the war. The first powerful sources of microwaves were invented at the beginning of World War II: the klystron tube by Russell and Sigurd Varian at Stanford University in 1937, and the cavity magnetron tube by John Randall and Harry Boot at Birmingham University, UK in 1940. Britain's 1940 decision to share its microwave technology with the US (the Tizard Mission) significantly influenced the outcome of the war. The MIT Radiation Laboratory, established secretly at the Massachusetts Institute of Technology in 1940 to research radar, produced much of the theoretical knowledge necessary to use microwaves. By 1943, 10 centimeter (3 GHz) radar was in use on British and American warplanes. The first microwave relay systems were developed by the Allied military near the end of the war and used for secure battlefield communication networks in the European theater. After World War II, microwaves were rapidly exploited commercially. Due to their high frequency they had a very large information-carrying capacity (bandwidth); a single microwave beam could carry tens of thousands of phone calls. In the 1950s and 1960s, transcontinental microwave relay networks were built in the US and Europe to exchange telephone calls between cities and distribute television programs. In the new television broadcasting industry, from the 1940s microwave dishes were used to transmit backhaul video feeds from mobile production trucks back to the studio, allowing the first remote TV broadcasts. The first communications satellites were launched in the 1960s, which relayed telephone calls and television between widely separated points on Earth using microwave beams. In 1964, Arno Penzias and Robert Woodrow Wilson, while investigating noise in a satellite horn antenna at Bell Labs, Holmdel, New Jersey, discovered cosmic microwave background radiation. Microwave radar became the central technology used in air traffic control, maritime navigation, anti-aircraft defense, ballistic missile detection, and later many other uses. Radar and satellite communication motivated the development of modern microwave antennas: the parabolic antenna (the most common type), cassegrain antenna, lens antenna, slot antenna, and phased array. The ability of short waves to quickly heat materials and cook food had been investigated in the 1930s by I. F. Mouromtseff at Westinghouse, and at the 1933 Chicago World's Fair the company demonstrated cooking meals with a 60 MHz radio transmitter. In 1945, Percy Spencer, an engineer working on radar at Raytheon, noticed that microwave radiation from a magnetron oscillator melted a candy bar in his pocket. He investigated cooking with microwaves and invented the microwave oven, consisting of a magnetron feeding microwaves into a closed metal cavity containing food, which was patented by Raytheon on 8 October 1945. Due to their expense, microwave ovens were initially used in institutional kitchens, but by 1986 roughly 25% of households in the U.S. owned one. Microwave heating became widely used as an industrial process in industries such as plastics fabrication, and as a medical therapy to kill cancer cells in microwave hyperthermia.
The traveling wave tube (TWT), developed in 1943 by Rudolf Kompfner and John Pierce, provided a high-power tunable source of microwaves up to 50 GHz, and became the most widely used microwave tube (besides the ubiquitous magnetron used in microwave ovens). The gyrotron tube family developed in Russia could produce megawatts of power up into millimeter wave frequencies, and is used in industrial heating and plasma research, and to power particle accelerators and nuclear fusion reactors. The development of semiconductor electronics in the 1950s led to the first solid state microwave devices, which worked by a new principle: negative resistance (some of the prewar microwave tubes had also used negative resistance). The feedback oscillators and two-port amplifiers which were used at lower frequencies became unstable at microwave frequencies, and negative resistance oscillators and amplifiers based on one-port devices like diodes worked better. The tunnel diode, invented in 1957 by Japanese physicist Leo Esaki, could produce a few milliwatts of microwave power. Its invention set off a search for better negative resistance semiconductor devices for use as microwave oscillators, resulting in the invention of the IMPATT diode in 1956 by W. T. Read and Ralph L. Johnston and the Gunn diode in 1962 by J. B. Gunn. Diodes are the most widely used microwave sources today. Two low-noise solid state negative resistance microwave amplifiers were developed: the ruby maser, invented in 1953 by Charles H. Townes, James P. Gordon, and H. J. Zeiger, and the varactor parametric amplifier, developed in 1956 by Marion Hines. These were used for low noise microwave receivers in radio telescopes and satellite ground stations. The maser led to the development of atomic clocks, which keep time using a precise microwave frequency emitted by atoms undergoing an electron transition between two energy levels. Negative resistance amplifier circuits required the invention of new nonreciprocal waveguide components, such as circulators, isolators, and directional couplers. In 1969, Kurokawa derived mathematical conditions for stability in negative resistance circuits, which formed the basis of microwave oscillator design. Prior to the 1970s, microwave devices and circuits were bulky and expensive, so microwave frequencies were generally limited to the output stage of transmitters and the RF front end of receivers, and signals were heterodyned to a lower intermediate frequency for processing. The period from the 1970s to the present has seen the development of tiny, inexpensive active solid state microwave components which can be mounted on circuit boards, allowing circuits to perform significant signal processing at microwave frequencies. This has made possible satellite television, cable television, GPS devices, and modern wireless devices such as smartphones, Wi-Fi, and Bluetooth, which connect to networks using microwaves. Microstrip, a type of transmission line usable at microwave frequencies, was invented with printed circuits in the 1950s. The ability to cheaply fabricate a wide range of shapes on printed circuit boards allowed microstrip versions of capacitors, inductors, resonant stubs, splitters, directional couplers, diplexers, filters and antennas to be made, thus allowing compact microwave circuits to be constructed. Transistors that operated at microwave frequencies were developed in the 1970s.
The semiconductor gallium arsenide (GaAs) has a much higher electron mobility than silicon, so devices fabricated with this material can operate at four times the frequency of similar devices made of silicon. Beginning in the 1970s, GaAs was used to make the first microwave transistors, and it has dominated microwave semiconductors ever since. MESFETs (metal-semiconductor field-effect transistors), fast GaAs field effect transistors using Schottky junctions for the gate, were developed starting in 1968; they have reached cutoff frequencies of 100 GHz and are now the most widely used active microwave devices. Another family of transistors with a higher frequency limit is the HEMT (high electron mobility transistor), a field effect transistor made with two different semiconductors, AlGaAs and GaAs, using heterojunction technology; the HBT (heterojunction bipolar transistor) is similar. GaAs can be made semi-insulating, allowing it to be used as a substrate on which circuits containing passive components as well as transistors can be fabricated by lithography. By 1976 this led to the first integrated circuits (ICs) which functioned at microwave frequencies, called monolithic microwave integrated circuits (MMIC). The word "monolithic" was added to distinguish these from microstrip PCB circuits, which were called "microwave integrated circuits" (MIC). Since then, silicon MMICs have also been developed. Today MMICs have become the workhorses of both analog and digital high frequency electronics, enabling the production of single chip microwave receivers, broadband amplifiers, modems, and microprocessors.
https://en.wikipedia.org/wiki?curid=20097