Telecommunications in Singapore The telecommunication infrastructure of Singapore spans the entire city-state. Its development level is high, with the infrastructure accessible from nearly all inhabited parts of the island and to almost all of the population. Today, the country is considered an international telecommunications hub, an achievement driven by Singapore's view that high-quality telecommunications is one of the critical factors supporting its economic growth. After reform initiatives, the Singaporean telecommunication industry became streamlined and largely directed by the government, which viewed such policy as critical in shaping societal preferences and in directing the state's economy. Being able to provide adequate telecommunications services is also critical from the perspective that Singapore's legitimacy as a state rests on its capability to deliver a high standard of living to its citizens. Hence, beginning in the 1970s, the state pursued a three-phase strategy oriented towards developing world-class telecommunications infrastructure capable of delivering high-quality services. The first phase involved the expansion of infrastructure to meet business and societal needs (e.g. service enhancement and the reduction of waiting lists for telephone connections). The second phase involved the integration of telecommunications into the overall state strategy, particularly in banking, financial services, and tourism, with the goal of harnessing telecommunications to secure Singapore's competitive advantage. The National Computer Board was formed during this period to develop and adopt IT applications. In 1986, this agency issued Singapore's comprehensive National Information Technology Plan (NITP). 
By the late 1980s, the third phase had commenced, focusing on bolstering Singapore's international role and on IT 2000, an ambitious plan to encourage new multimedia services, articulated in the promotion of Singapore as "an intelligent island". The government's role in the telecommunication industry is best demonstrated in the case of Singtel, which the state controls through its investment company Temasek Holdings Private Limited. Singtel not only rolls out affordable yet high-quality telecommunication services to the city-state's residents but also pursues initiatives to attract overseas companies to invest in the country. Radio and television stations are all government-owned entities. All eight television channels are owned by MediaCorp; its only competitor, SPH MediaWorks, closed its television channel on 1 January 2005. Due to the proximity of Singapore to Malaysia and Indonesia, almost all radio and television sets in Singapore can pick up broadcast signals from both countries. Private ownership of satellite dishes is banned, but most households have access to the StarHub TV and Singtel IPTV (mio TV) networks. As of 1997, there were 1.3 million televisions in Singapore. All radio stations are operated either by MediaCorp, the SAFRA National Service Association (SAFRA) or SPH UnionWorks. As of 1997, there were 2.5 million radios in Singapore. As of 1998, there were almost 55 million phone lines in Singapore, close to 47 million of which also served other telecommunication devices such as computers and facsimile machines. Underwater telephone cables have been laid to Malaysia, the Philippines and Indonesia. As of January 2018, there are four cellular phone operators in Singapore serving more than 6.4 million cellular phones. As for internet facilities, as of 2009 there are four major internet service providers (ISPs) in Singapore. 
By February 2009, there were more than 4.8 million broadband users in Singapore. However, due to the small market and possible market collusion, there have been rising concerns that the various ISPs' telecommunication infrastructures are highly under-utilised. In July 2015, Liberty Wireless signed an agreement with M1 Limited that allows it to tap M1's mobile network. This enables Liberty Wireless to provide voice, messaging and data services to customers, making it the first Mobile Virtual Network Operator (MVNO) in Singapore to offer a full-service mobile network experience.
Telephones – fixed line:
Telephones – mobile market:
Telephone system: excellent domestic facilities; excellent international service
Domestic: NA
International: submarine cables to several countries and territories including Malaysia (Sabah and Peninsular Malaysia), Indonesia, the Philippines, Hong Kong, Taiwan, and India; satellite earth stations – 2 Intelsat (1 Indian Ocean and 1 Pacific Ocean), and 1 Inmarsat (Pacific Ocean region)
IDD country code: +65
Radio broadcast stations (as of March 2006): AM 0, FM 19, shortwave 5 (source: Asiawaves.Net)
Radios: 2.55 million (1997)
Television broadcast stations (as of March 2020):
Operators:
Singapore has a large number of computer users and most households have computers and Internet access. A survey conducted by the Infocomm Development Authority of Singapore indicated that 78% of households own computers at home and 7 in 10 households have Internet access (2006). The CIA's The World Factbook reports that Singapore has 2.422 million Internet users (2005) and 898,762 Internet hosts (2006).
Internet Service Providers (ISPs): 6 (2019)
Broadband Fiber Internet: While Nucleus Connect is the Operating Company (OpCo) of the NetLink Trust infrastructure, it is not the service provider but rather the company that switches the network over to the respective ISPs. 
Country code (top-level domain): SG
Singapore, a small, densely populated island nation, was a pioneer and continues to be one of the few countries in the world in which broadband internet access is readily available to just about any would-be user anywhere in the country, with connectivity of over 99%. In a government-led initiative to connect the island with a high-speed broadband network using various media such as fibre, DSL and cable, the Singapore ONE project was formally announced in June 1996 and commercially launched in June 1998. By December 1998, Singapore ONE was available nationwide with the completion of the national fibre optic network. Commercial trials for Singapore Telecommunications' (Singtel) ADSL-based "SingTel Magix" service were undertaken in March 1997, before the service was launched that June. Also in June, Singapore Cable Vision commenced trials of its cable modem based services, which were commercially deployed in December 1999. Singtel's ADSL service was subsequently rolled out on a nationwide scale in August 2000. In January 2001, the Broadband Media Association was formed to promote the broadband industry. By April of the same year there were 6 broadband internet providers, with the total number of broadband users exceeding 300,000. Pacific Internet introduced wireless broadband services in October 2001. In 2007, the Infocomm Development Authority (IDA) of Singapore introduced a programme named "Wireless@SG" as part of its Next Generation National Infocomm Infrastructure initiative. Users enjoy free, seamless wireless broadband access, both indoors and outdoors, with speeds of up to 1 Mbit/s at locations with high human traffic. As of June 2007, there were more than 460,000 subscribers and 4,200 hotspots under the Wireless@SG programme. In the same year, M1 introduced its mobile broadband services. With the rise of NetLink Trust, the networks of the operators Singtel and StarHub were to be fully converted to fibre optic by July 2014. 
Optical fibre broadband providers:
Wireless@SG operators (up to 5 Mbit/s):
https://en.wikipedia.org/wiki?curid=27324
Transport in Singapore Transport within Singapore is mainly land-based. Many parts of Singapore, including islands such as Sentosa and Jurong Island, are accessible by road. The other major form of transport within Singapore is rail: the Mass Rapid Transit (MRT), which runs the length and width of Singapore, and the Light Rail Transit (LRT), which runs within a few neighbourhoods. The main island of Singapore is connected to the other islands by ferry services. Two bridges link Singapore to Malaysia: the Causeway and the Second Link. Singapore Changi Airport is a major aviation hub in Asia, and Singapore is a major transshipment port. McKinsey's Urban Transportation report ranks Singapore's transport system the world's best overall based on five criteria: availability, affordability, efficiency, convenience and sustainability. Singapore also has one of the most cost-efficient public transport networks in the world, according to a study by the London consulting firm Credo. With the emergence of driverless vehicles, Singapore is now among the favourite locations for autonomous vehicle development and testing for the big players in the automotive industry. The Mass Rapid Transit, which opened in 1987, is a heavy rail metro system that serves as the major backbone of Singapore's public transport system along with public buses; as of January 2020, the network has 122 stations. The Land Transport Authority, the main planning authority of the MRT, plans to provide a more comprehensive rail transport system by expanding the rail network by the year 2030, with eight in ten households living within a 10-minute walk of an MRT station. The current MRT network consists of six main lines: the North South Line, East West Line, Circle Line and partially-opened Thomson–East Coast Line, operated by SMRT Trains (SMRT Corporation), and the North East Line and Downtown Line, operated by SBS Transit. 
Two more lines, the Jurong Region Line and the Cross Island Line, will open in stages from 2026 and 2029 respectively. In several new towns, automated rubber-tyred light rail transit systems function as feeders to the main MRT network in lieu of feeder buses. The first LRT line, operated by SMRT Light Rail, opened in Bukit Panjang in 1999 to provide a connection to Choa Chu Kang in the neighbouring Choa Chu Kang New Town. Although the line was subsequently hit by over 50 incidents, some of which resulted in several days of system suspension, similar systems, albeit from a different company, were introduced in Sengkang and Punggol in 2003 and 2005 respectively, both operated by SBS Transit. The international railway line to Malaysia is an extension of the Malaysian rail network operated by Keretapi Tanah Melayu (Malayan Railways). Since 1 July 2011, Woodlands Train Checkpoint has served as the southern terminus of the KTM rail network. Previously, KTM trains terminated at Tanjong Pagar railway station in central Singapore. Two more rail links are being planned: the Kuala Lumpur–Singapore High Speed Rail terminating in Jurong East, and the Johor Bahru–Singapore Rapid Transit System between Woodlands North and Bukit Chagar, Johor Bahru. Bus transport forms a significant part of public transport in Singapore, with over 4.0 million rides taken per day on average as of 2019. There are more than 365 scheduled bus services, operated by SBS Transit, SMRT Buses, Tower Transit Singapore and Go-Ahead Singapore. There are also around 5,800 buses, both single-deck and double-deck, currently in operation. Since 2016, the Land Transport Authority has regulated public bus service standards and owned the relevant assets, whereas bus operators bid to run bus services via competitive tendering. Taxicabs are a popular form of public transport in the compact sovereign city-state of Singapore, with fares considered low compared to those in most cities in developed countries. 
Starting fares range from $3.20 to $3.90. As of March 2019, the taxi population had increased to 83,037. Taxis may be flagged down at any time of the day along any public road outside of the Central Business District (CBD). However, increased usage of ridesharing services such as Grab and Gojek has resulted in a decrease in the usage of taxis. As of 2018, there were a total of 957,006 motor vehicles in Singapore, 509,302 of them private cars. Singapore pioneered the modern use of toll roads to enter the most congested city centre area with the Singapore Area Licensing Scheme, which has since been replaced by Electronic Road Pricing, a form of electronic toll collection. Traffic drives on the left, as is typical in Commonwealth countries. The planning, construction and maintenance of the road network is overseen by the Land Transport Authority (LTA), and this extends to the expressways in Singapore. These form key transport arteries between the distinct towns and regional centres as laid out in Singapore's urban planning, with the main purpose of allowing vehicles to travel from satellite towns to the city centre and vice versa in the shortest possible distance. These expressways include: The influence of expressways on Singapore's transport policy developed shortly after independence because of frequent traffic congestion in the central district. The aim was to encourage residential development in other parts of the island and give residents of these new "satellite towns" a convenient link between their homes and their workplaces (which were mostly situated around the city centre). Singapore has two land links to Malaysia. The Johor–Singapore Causeway, built in the 1920s to connect Johor Bahru in Johor, Malaysia to Woodlands in Singapore, carries a road and a railway line. The Tuas Second Link, a bridge further west, was completed in 1996 and links Tuas in Singapore to Tanjung Kupang in Johor. 
Before World War II, rickshaws were an important part of urban public transport. Rickshaws were supplanted by trishaws after the war, as the former were banned in 1947 on humanitarian grounds. The use of trishaws as a means of transport had died out by 1983. However, some trishaws remain, now serving as tourist attractions and taking tourists for a ride around the downtown district. There are six local scheduled service airlines, all of them operating from Singapore Changi Airport, offering scheduled flights to over 70 cities on six continents. The national flag carrier, Singapore Airlines, operates from Changi Airport Terminals 2 and 3. Its subsidiaries, SilkAir and Scoot, operate from Changi Airport Terminal 2. Singapore's budget airline, Jetstar Asia Airways, operates from Changi Airport Terminal 1. Singapore Seletar Airport has also been reopened to the public, with Firefly services operating out of Seletar Airport. The aviation industry is regulated by the Civil Aviation Authority of Singapore, a statutory board of the Singapore government under the Ministry of Transport. An open skies agreement was concluded with the United Kingdom in October 2007, permitting unrestricted services from Singapore by UK carriers. Singapore carriers were allowed to operate domestic UK services as well as services beyond London Heathrow to a number of destinations, including the United States and Canada. Singapore Changi Airport, with its four terminals, is one of the most important air hubs in the region. The international airport is situated at the easternmost tip of the main island and serves 185 cities in 58 countries. With the recent opening of the fourth terminal, Changi is now capable of handling more than 70 million passengers every year. Seletar Airport is Singapore's first civil aviation airport and is primarily used for private aviation. The airport also serves regular commercial flights by Firefly to its Subang Airport hub. 
Limited scheduled commercial flights are also conducted by Berjaya Air to the Malaysian islands of Tioman and Redang. The Singapore Cable Car is a three-station gondola lift system that plies between Mount Faber on the main island of Singapore and the resort island of Sentosa via HarbourFront. Opened in 1974, it was the first aerial ropeway system in the world to span a harbour. The cable car system underwent a revamp that was completed in August 2010. In addition, a similar gondola lift system, the Sentosa Line, has operated within Sentosa since it opened in 2015, linking Siloso Point to Imbiah. The Port of Singapore, run by the port operators PSA International (formerly the Port of Singapore Authority) and Jurong Port, is the world's busiest in terms of shipping tonnage handled: 1.04 billion gross tons were handled in 2004, crossing the one billion mark for the first time in Singapore's maritime history. Singapore also emerged as the top port in terms of cargo tonnage handled, with 393 million tonnes of cargo in the same year, beating the Port of Rotterdam for the first time in the process. In 2019, it handled a total of 626 million tonnes of cargo. In 2018, Singapore was ranked second globally in terms of containerised traffic, with 36.6 million twenty-foot equivalent units (TEUs) handled, and is also the world's busiest hub for transshipment traffic. Additionally, Singapore is the world's largest bunkering hub, with 49.8 million tonnes sold in 2018. In 2007, the Port of Singapore was ranked the world's busiest port, surpassing Hong Kong and Shanghai. The Port of Singapore has also been ranked the Best Seaport in Asia. Water transport within the main island is limited to the River Taxi along the Singapore River. The service was introduced in January 2013 but has seen low ridership. There are also daily scheduled ferry services from the Marina South Pier to the Southern Islands such as Kusu Island and Saint John's Island. 
Singapore Cruise Centre (SCC) runs the Tanah Merah and HarbourFront Ferry Terminals, which are connected by ferry services to the Indonesian Riau Islands of Batam, Bintan and Karimun.
https://en.wikipedia.org/wiki?curid=27325
History of Slovakia This article discusses the history of the territory of Slovakia. The discovery of ancient tools made by the Clactonian technique near Nové Mesto nad Váhom attests that Slovakia's territory was inhabited in the Palaeolithic. Other prehistoric discoveries include Middle Palaeolithic stone tools found near Bojnice and a Neanderthal find at a site near Gánovce. The Gravettian culture was present principally in the river valleys of the Nitra, Hron, Ipeľ and Váh, as far as the city of Žilina, near the foot of the Vihorlat, Inovec and Tribeč mountains, and in the Myjava Mountains. The best known artifact is the Venus of Moravany from Moravany nad Váhom. Neolithic habitation has been found at Želiezovce, Gemer, the Bukové hory massif, the Domica cave and Nitriansky Hrádok. The Bronze Age was marked by the Čakany and Velatice cultures, then the Lusatian culture, followed by the Calenderberg and Hallstatt cultures. The Celts were the first population in the territory of present-day Slovakia who can be identified on the basis of written sources. The first Celtic groups came from the west. Settlements of the La Tène culture indicate that the Celts colonized the lowlands along the river Danube and its tributaries. The local population was either subjected by the Celts or withdrew to the mountainous northern territory. New Celtic groups later arrived from Northern Italy. The Celts initially lived in tiny huts which either formed small villages or were scattered across the countryside. Some of the small hill forts that were built developed into important local economic and administrative centers. For example, the hill fort at Zemplín was a center of iron-working; glass works were unearthed at Liptovská Mara; and local coins were struck at Bratislava and Liptovská Mara. Coins from Bratislava bore inscriptions such as Biatec and Nonnos. 
The fort at Liptovská Mara was also an important center of the cult of the bearers of the Púchov culture of the Northern Carpathians. Burebista, King of the Dacians, invaded the Middle Danube region and subjugated the majority of the local Celtic tribes (the Boii and the Taurisci). Burebista's empire collapsed after he died about 16 years later. Archaeological sites yielding painted ceramics and other artefacts of Dacian provenance suggest that Dacian groups settled among the local Celts in the region of the rivers Bodrog, Hron and Nitra. The spread of the "Púchov culture", associated with the Celtic Cotini, shows that the bearers of that culture started a northward expansion during the same period. The Romans and the Germanic tribes launched their first invasions against the territories along the Middle Danube. Roman legions crossed the Danube near Bratislava under the command of Tiberius to fight against the Germanic Quadi, but the local tribes' rebellion in Pannonia forced the Romans to return. Taking advantage of internal strife, the Romans settled a group of Quadi in the lowlands along the Danube between the rivers Morava and Váh in 21, making Vannius their king. The Germans lived in rectangular houses, rather than square ones, and cremated their dead, placing the ashes in urns. Although the Danube formed the frontier between the Roman Empire and the "Barbaricum", the Romans built small outposts along the left bank of the Danube, for instance at Iža and Devín. During the same period, the Germanic tribes were expanding to the north along the rivers Hron, Ipeľ and Nitra. Roman troops crossed the Danube several times during the Marcomannic Wars between 160 and 180. Emperor Marcus Aurelius wrote the first book of his "Meditations" during a campaign against the Quadi in the region of the Hron River in 172. 
The "Miracle of the Rain", a storm which saved an exhausted Roman army, occurred in the land north of the Danube in 173; Christian authors attributed it to a Christian soldier's prayer. Roman troops crossed the Danube for the last time in 374, during Emperor Valentinian I's campaign against the Quadi, who had allied with the Sarmatians and invaded the Roman province of Pannonia. In the 4th century AD, the Roman Empire could no longer resist the attacks of the neighboring peoples. The empire's frontier started to collapse along the Danube in the 370s. The development of the Hunnic Empire in the Eurasian Steppes forced large groups of Germanic peoples, including the Quadi and the Vandals, to leave their homelands by the Middle Danube and along the upper course of the river Tisza. Their lands were occupied by the Heruli, Scirii, Rugii and other Germanic peoples. However, the Carpathian Basin came to be dominated by the nomadic Huns, and the Germanic peoples became subjects of Attila the Hun. Disputes among Attila's sons caused the disintegration of his empire shortly after his death in 453. The Germanic peoples either regained their independence or, like the Heruli and the Sciri, left the Carpathian Basin. Warriors' graves from the next century yielded large numbers of swords, spears, arrowheads, axes and other weapons. Other archaeological finds, including a glass beaker from Zohor, show that the local inhabitants had close contacts with the Frankish Empire and Scandinavia. Regarding the early history of the Slavs, no Slavic texts or records written by a Slav dating from before the late 9th century are known. The foreign sources (mostly Greek and Latin) about the Slavs are very inconsistent. According to one scholarly theory, the first Slavic groups settled in the eastern region of present-day Slovakia at an early date. The 6th-century Byzantine historian Jordanes wrote that the funeral feast at Attila's burial was called "strava". 
Scholars who identify that word as a Slavic expression say that Jordanes' report proves that Slavs inhabited the Carpathian Basin in the middle of the 5th century. However, according to a concurrent scholarly theory, "strava" may have been a Hunnic term, because no primary source mentions that Slavs were present at Attila's court. Settlements representing a new archaeological horizon, the so-called "Prague-Korchak cultural horizon", appeared along the northernmost fringes of the Carpathian Mountains around 500. Similar settlements were also excavated in the region of the confluence of the Danube and the Morava. "Prague-Korchak" settlements consisted of about 10 semi-sunken huts, each with a stone oven in a corner. The local inhabitants used handmade pottery and cremated their dead. Most historians associate the spread of the "Prague-Korchak" settlements with the expansion of the early Slavs. According to historian Gabriel Fusek, written sources also evidence the presence of Slavs in Central Europe at this time. The 6th-century Byzantine historian Procopius wrote of a group of the Heruli who had "passed through the territory of all of the Sclavenes", or Slavs, during their migration towards the northern "Thule". Procopius's report implies that the Slavs inhabited the region of the river Morava, but its credibility is suspect. Procopius also wrote of an exiled Longobard prince, Hildigis, who first fled to the "Sclaveni" and then to the Gepids, "taking with him not only those of the Longobards who had followed him, but also many of the Sclaveni", in the 540s. According to a scholarly theory, Hildigis most probably mustered his Slavic warriors in the region of the Middle Danube. The Germanic Longobards were expanding towards the Middle Danube in this period. 
Archaeological research shows that the Longobard expansion bypassed virtually the entire territory of Slovakia: they settled only in the most north-western part of the country (Záhorie). Unlike neighbouring Moravia, Slovakia (except Záhorie) did not belong to any Germanic realm at this time. The Longobards and the local Slavs remained separated by the natural border formed by the Little and White Carpathians, respected by both sides according to Ján Steinhübel. He also writes that the Slavs, who remained "an independent third party" in the strained Longobard-Gepid relations, were not interested in conflicts with their Germanic neighbours, but made raids on the faraway Byzantine Empire. The Longobards left the Carpathian Basin for Northern Italy after the invasion of the territory by the Avars in 568. The Avars were a group of nomadic warriors of mixed origin. They conquered the Carpathian Basin, subjugated the local peoples and launched plundering expeditions against the neighboring powers over the following decades. By the time of the Avars' arrival, the Slavs had settled in most of the lands that now form Slovakia, according to historian Stanislav Kirschbaum. Further migration waves strengthened the local Slavic population, as new Slavic groups, pressed by the Avars, crossed the Eastern Carpathians, separating from the Slavs who continued their expansion to the Balkan Peninsula. According to a widely accepted scholarly theory, dialects of the Slovak language still reflect that the Slavs arrived from different directions in the Early Middle Ages. The Czech and Slovak languages share some features with the South Slavic languages, distinguishing them from the other West Slavic languages. According to archaeologist P. M. Barford, these features suggest that the Carpathian Mountains and the Sudetes separated the ancestors of the Slovaks and the Czechs from the Slavs living to the north of those mountains. 
The dialects of Central Slovakia in particular, which "stand out from the continuous chain between the western and eastern dialects", preserved South Slavic features. The 7th-century Frankish "Chronicle of Fredegar" recorded that the Avars employed the Slavs, or Wends, as "Befulci", showing that the Slavs formed special military units in the Avar Khaganate. According to the same chronicle, the Wends rose up in rebellion against their Avar masters and elected a Frankish merchant, Samo, as their king "in the fortieth year of Clothar's reign", that is in 623 or 624. Modern historians agree that the Avars' defeat during the siege of Constantinople in 626 enabled Samo to consolidate his rule. He routed the invading army of Dagobert I, King of the Franks, in the Battle of Wogastisburg in 631 or 632. The realm of Samo, who ruled for 35 years, collapsed soon after his death. Its exact borders cannot be determined, but it must have been located near the confluence of the Danube and the Morava rivers. Historian Richard Marsina places its centre in Lower Austria. A new horizon of mostly hand-made pottery, the so-called "Devínska Nová Ves pottery", appeared between the Middle Danube and the Carpathians. Large inhumation cemeteries yielding such pottery were unearthed at Bratislava, Holiare, Nové Zámky and other places, suggesting that the cemeteries were located near stable settlements. For instance, the cemetery at Devínska Nová Ves, which contained about a thousand inhumation graves and thirty cremations, remained in use for a long period. In the 670s, the new population of the "griffin and tendril" archaeological culture appeared in the Pannonian Basin, expelling Kuber's Bulgars south out of Sirmium (the westernmost part of Kubrat's Onoguria). Shortly afterwards the new Avar-Slav alliance expanded its territories even over the Vienna Basin. The political and cultural development in Slovakia continued along two separate lines. 
Lowland areas in southern Slovakia came under the direct military control of the Avars. The Avars held strategic centers at Devín and Komárno, which were among the most important centers of the khaganate: from Devín the Avars controlled Moravia, and from Komárno they controlled southern Slovakia. By this time, the Avars had begun to adopt a more settled lifestyle. The new period introduced a Slavo-Avar symbiosis and a multi-ethnic Slavo-Avar culture. The Slavs in southern Slovakia adopted a new burial rite (inhumation), jewelry and fashion, and also shared common cemeteries with the Avars. Large Slavo-Avar cemeteries can be found at Devínska Nová Ves and Záhorská Bystrica near Bratislava, and similar cemeteries, proof of direct Avar power, lie south of the line Devín-Nitra-Levice-Želovce-Košice-Šebastovce. North of this line, the Slavs preserved the previous burial rite (cremation, sometimes with tumuli). Natural population increase, together with immigration from the south, led to settlement also in mountain areas. In the 8th century, the Slavs increased their agricultural productivity (with the use of the iron plough) along with further development of crafts. Higher productivity initiated changes in Slavic society, freeing part of the labour previously required for farming and allowing groups of professional warriors to form. The Slavs began to build heavily fortified settlements ("hradisko" - a large grad) protected by strong walls (8–10 m) and trenches (4–7 m wide, 2–3.5 m deep). Among the oldest are Pobedim, Nitra-Martinský Vrch, Majcichov, Spišské Tomášovce and Divinka. Proximity to the Avars encouraged a process of unification and probably also the formation of local military alliances. The archaeological finds from this period (such as an exquisite noble tomb at Blatnica) support the formation of a Slavic upper class in the territory that later became the nucleus of Great Moravia. 
A series of Frankish-Avar wars (788–803) led to the political fall of the khaganate. In 805, the Slavs attacked again. Their offensive was aimed mainly at the centers of Avar power, Devín and Komárno. The Avars were unable to resist the attack and were expelled to the right bank of the Danube. The Slavs from Slovakia probably also participated in further conflicts between the small Slavic dukes and the remaining Avar tarkhans. The "Conversio Bagoariorum et Carantanorum", written around 870, narrates that Moimir, the leader of the Moravians, expelled one Pribina, forcing him to cross (or come up) the Danube and join Radbod, who was the head of the March of Pannonia in the Carolingian Empire from around 830. Radbod presented Pribina to King Louis the German, who ordered that Pribina be instructed in the Christian faith and baptised. Three of the eleven extant copies of the "Conversio" also contain an out-of-context statement which says that Adalram, who was Archbishop of Salzburg between 821 and 836, had once consecrated a church on Pribina's "estate at a place over the Danube called Nitrava". According to a widely accepted scholarly theory, "Nitrava" was identical with Nitra in present-day Slovakia, and the forced unification of Pribina's Principality of Nitra with Mojmir's Moravia gave rise to the development of a new state, "Great Moravia". Between 800 and 832, a group of Slavic hillforts in Slovakia quickly arose and disappeared. Archaeological research has confirmed the fall of several important central hillforts, e.g. Pobedim and Čingov, approximately around the time when Pribina was expelled. The lack of written sources does not allow a final conclusion as to whether these events were caused by internal changes or by Moravian expansion. Pribina may have been the ruler of an independent polity (the Principality of Nitra) or, if Moravian expansion preceded his expulsion, a member of the "Moravian" aristocracy. Other historians write that Pribina's Nitrava cannot be identified with Nitra. 
Charles Bowlus says that a letter written by Theotmar, Archbishop of Salzburg, and his suffragan bishops in about 900 strongly suggests that Nitra was conquered by Svatopluk I of Moravia only in the 870s. However, according to Třeštík, this information can be explained as a reasonable mistake of the Frankish bishops, who knew that the territory had in the past been a separate "regnum" different from Moravia and, because it was ruled by Svatopluk I, incorrectly assumed that he had also conquered it. According to archaeologist Béla Miklós Szőke, no source substantiates either the theory that Pribina was the head of an independent polity or the identification of Nitrava with Nitra. Richard Marsina writes that the Slovak nation emerged in that principality during Pribina's reign. Regarding the 9th century, archaeological research has successfully established a distinction between "9th-century Slavic-Moravian" and "steppe" burial horizons in Slovakia. Moravia emerged along the borders of the Avars' territory. Great Moravia arose around 830 when Mojmír I unified the Slavic tribes settled north of the Danube and extended Moravian supremacy over them. When Mojmír I endeavoured to secede from the supremacy of the king of East Francia in 846, King Louis the German deposed him and assisted Mojmír's nephew, Rastislav (846–870), in acquiring the throne. The new monarch pursued an independent policy: after stopping a Frankish attack in 855, he also sought to weaken the influence of Frankish priests preaching in his realm. Rastislav asked the Byzantine Emperor Michael III to send teachers who would interpret Christianity in the Slavic vernacular. Upon Rastislav's request, two brothers, the Byzantine officials and missionaries Saints Cyril and Methodius, came in 863. Cyril developed the first Slavic alphabet and translated the Gospel into the Old Church Slavonic language. Rastislav was also preoccupied with the security and administration of his state. 
Numerous fortified castles built throughout the country are dated to his reign, and some of them ("e.g.", "Dowina" - Devín Castle) are also mentioned in connection with Rastislav by Frankish chronicles. During Rastislav's reign, the Principality of Nitra was given to his nephew Svätopluk as an appanage. The rebellious prince allied himself with the Franks and overthrew his uncle in 870. Like his predecessor, Svätopluk I (871–894) assumed the title of king ("rex"). During his reign, the Great Moravian Empire reached its greatest territorial extent, when not only present-day Moravia and Slovakia but also present-day northern and central Hungary, Lower Austria, Bohemia, Silesia, Lusatia, southern Poland and northern Serbia belonged to the empire; the exact borders of his domains are still disputed by modern authors. Svätopluk also withstood attacks from the seminomadic Hungarian tribes and the Bulgarian Empire, although sometimes it was he who hired the Hungarians when waging war against East Francia. In 880, Pope John VIII set up an independent ecclesiastical province in Great Moravia with Archbishop Methodius as its head. He also named the German cleric Wiching the Bishop of Nitra. After the death of King Svätopluk in 894, his sons Mojmír II (894–906?) and Svätopluk II succeeded him as the King of Great Moravia and the Prince of Nitra respectively. However, they started to quarrel over domination of the whole empire. Weakened by the internal conflict as well as by constant warfare with Eastern Francia, Great Moravia lost most of its peripheral territories. In the meantime, the Hungarian tribes, having suffered a defeat from the nomadic Pechenegs, left their territories east of the Carpathian Mountains, invaded the Pannonian Basin and started to occupy the territory gradually around 896. 
Their armies' advance may have been promoted by continuous wars among the countries of the region, whose rulers still hired them occasionally to intervene in their struggles. Both Mojmír II and Svätopluk II probably died in battles with the Hungarians between 904 and 907, because their names are not mentioned in written sources after 906. In three battles (4–5 July and 9 August 907) near Brezalauspurc (now Bratislava), the Hungarians routed Bavarian armies. Historians traditionally put this year as the date of the breakup of the Great Moravian Empire. Great Moravia left behind a lasting legacy in Central and Eastern Europe. The Glagolitic script and its successor Cyrillic were disseminated to other Slavic countries, charting a new path in their cultural development. The administrative system of Great Moravia may have influenced the development of the administration of the Kingdom of Hungary. From 895 to 902, the Hungarians (Magyars) progressively imposed their authority on the Pannonian Basin. Although some contemporary sources mention that Great Moravia disappeared without trace and its inhabitants left, archaeological research and toponyms suggest the continuity of the Slavic population in the river valleys of the Inner Western Carpathians. The oldest Hungarian graves in Slovakia are dated to the end of the 9th and the beginning of the 10th century (Medzibodrožie region, Eastern Slovakia). These findings document only a relatively short stay, without direct continuity of settlement. Further findings elsewhere, in the southernmost parts of Slovakia, are dated to 920–925 and consist mainly of graves of the warrior type (isolated graves and smaller groups). Between 930 and 940, larger groups of Magyars began to migrate to the southern parts of today's Slovakia, but did not cross the line Bratislava-Hlohovec-Nitra-Levice-Lučenec-Rimavská Sobota. The territory affected by this early migration covers about 15% of today's Slovakia (7,500 km2). 
Hungarian settlements from these first two waves are not documented in the most fertile regions of the Trnava Board, the Považie north of Hlohovec, Ponitrie north of Nitra and the Eastern Slovak Lowland. The initial confrontation did not have a permanent character, and during the 10th century both populations coexisted. In southern Slovakia, the Hungarians frequently founded their villages close to older Slavic settlements as they abandoned their nomadic lifestyle and settled; they occasionally joined them and used the same cemeteries. In the 11th century, the differences between Slavic and Magyar graves disappeared. Archaeological research has also significantly changed the view of the settlement of the northern parts of the country. In addition to the southern parts and the river valleys of the Nitra and the Váh, a relatively high population density is notable particularly in the Spiš region with the Poprad river valley, and in the Turiec Basin. The Liptov and Zvolen Basins, the Žilina Basin, central Orava and northern Šariš were rather sparsely populated. After the fall of the state, some non-landholding noblemen joined the Hungarian forces and participated in their raids in other parts of Europe. The chroniclers of the early history of the Kingdom of Hungary recorded that the prominent noble families of the kingdom descended either from leaders of the Hungarian tribes or from immigrants, and they did not connect any of them to Great Moravia. Archeological evidence proves that to the north of the line mentioned above, not only did the older settlement structures survive, but so also did the territorial administration led by native magnates. The Great Moravian, or potentially Great Moravian, origin of the clan Hunt-Pázmán ("Hont-Pázmány") has been advanced by some modern scholars. The territory of present-day Slovakia became progressively integrated into the developing state (the future Kingdom of Hungary) in the early 10th century. 
The "Gesta Hungarorum" ("Deeds of the Hungarians") mentions that Huba, head of one of the seven Hungarian tribes, received possessions around Nitra and the Žitava River, while according to the "Gesta Hunnorum et Hungarorum" ("Deeds of the Huns and Hungarians") another tribal leader, Lél, settled down around Hlohovec and, following the Hungarian victory over the Moravians, usually stayed around Nitra. Modern authors also claim that the north-western parts of the Pannonian Basin were occupied by one of the Hungarian tribes. The development of the future Kingdom of Hungary started during the reign of Grand Prince Géza (before 972–997), who expanded his rule over the territories of present-day Slovakia west of the River Garam / Hron. Although he was baptised in or after 972, he never became a convinced Christian, in contrast to his son Stephen, who followed him in 997. Some authors claim that following his marriage to Gisela of Bavaria, Stephen received the "Duchy of Nitra" in appanage from his father. When Géza died, a member of the Árpád dynasty, the pagan Koppány, claimed the succession, but Stephen defeated him with the assistance of his wife's German retinue. A Slovak folk song mentions that "Štefan kral" ("i.e.", King Stephen) could only overcome his pagan opponent with the assistance of Slovak warriors around Bíňa. According to István Bóna, the Slovak song may be a translation of a Hungarian folk song, because in 1664 none of the inhabitants of Bíňa was Slovak. Following his victory, Stephen received a crown from Pope Silvester II and was crowned the first King of Hungary in 1000 or 1001. The Kingdom of Hungary integrated elements of the former Great Moravian state organization. 
On the other hand, historians have not reached a consensus on this subject; "e.g.", it is still being debated whether the formation of the basic unit of administration ("vármegye") in the kingdom followed foreign (Frankish, Bulgarian, Moravian or Ottonian) patterns or whether it was an internal innovation. Stephen (1000/1001–1038) established at least eight counties ("vármegye") on the territories of present-day Slovakia: Abov, Boršod, Esztergom, Hont, Komárno, Nitra, Tekov and Zemplín were probably founded by him. The sparsely populated northern and north-eastern territories of today's Slovakia became the kings' private forests. King Stephen also set up several dioceses in his kingdom; in the 11th century, present-day Slovakia's territories were divided between the Archdiocese of Esztergom (established around 1000) and its suffragan, the Diocese of Eger (founded between 1006 and 1009). Around 1015, Duke Boleslaw I of Poland took some territories of present-day Slovakia east of the River Morava, with the Hungarian King Stephen recapturing these territories in 1018. Following King Stephen's death, his kingdom became involved in internal conflicts among the claimants for his crown, and Henry III, Holy Roman Emperor, also intervened in the struggles. In 1042, the Emperor Henry captured some parts of today's Slovakia east of the River Hron and granted them to King Stephen's cousin, Béla, but following the withdrawal of the Emperor's armies, King Samuel Aba's troops recaptured the territories. In 1048, King Andrew I of Hungary conceded one-third of his kingdom ("Tercia pars regni") in appanage to his brother, Duke Béla. The duke's domains were centered around Nitra and Bihar (in Romanian "Biharea", in present-day Romania). During the following 60 years, the "Tercia pars regni" was governed separately by members of the Árpád dynasty ("i.e.", by the Dukes Géza, Ladislaus, Lampert and Álmos). 
The dukes accepted the kings' supremacy, but some of them (Béla, Géza and Álmos) rebelled against the king in order to acquire the crown and allied themselves with the rulers of the neighbouring countries ("e.g.", the Holy Roman Empire and Bohemia). The history of the "Tercia pars regni" ended in 1107, when King Coloman of Hungary occupied its territories, taking advantage of the pilgrimage of Duke Álmos (his brother) to the Holy Land. When Duke Álmos returned to the kingdom, he tried to reoccupy his former duchy with the military assistance of Henry V, Holy Roman Emperor, but he failed and was obliged to accept the "status quo". In 1241, the Mongols invaded and devastated the north-western parts of the kingdom. In April 1241, the Mongolian army crossed the border with Moravia near Hrozenkov. Trenčín Castle resisted the attack, but nearby places were plundered and some of them were never restored. The Mongols then turned to the south and devastated the regions along the rivers Váh and Nitra. Only the strong castles, "e.g.", Trenčín, Nitra and Fiľakovo, and fortified towns could resist attack. Part of the unprotected population escaped to the mountains and rough terrain, where they built hill forts and camps. The most affected areas were southwestern Slovakia, Lower Pohronie up to Zvolen, and Zemplín. It is estimated that at least a third of the population died from famine and epidemics. Following the withdrawal of the Mongol army, Frederick II, Duke of Austria invaded the country. In July 1242 his army reached Hlohovec, but the Hungarian army, mainly thanks to troops from the Trenčín and Nitra counties, repelled the attack. Bohumír (Bogomer), the župan of Trenčín who played an important role in the suppression of the Austrian units, later led the army sent to help Bolesław V the Chaste (son-in-law of the Hungarian king), who had been attacked by Konrad I of Masovia. The army consisted mainly of soldiers from the ethnic Slovak counties. 
The royal administration of the territory developed gradually during the 11th–13th centuries: new counties were established by the partition of existing ones, or central counties of the kingdom expanded their territory northward (today's Bratislava, Trenčín, Gemer-Malohont and Novohrad), while the kings' private forests were organised into "forest counties" around Zvolen and Šariš Castle. Following the occupation of his brother's duchy, King Coloman set up (or re-established) the third bishopric in present-day Slovakia. Some of the towns in present-day Slovakia were granted special privileges already prior to the Mongol invasion: Trnava (1238), Starý Tekov (1240), Zvolen and Krupina (before 1241). Following the withdrawal of the Mongol troops (1242), several castles were built or strengthened ("e.g.", Komárno, Beckov and Zvolen) on the order of King Béla IV. In addition to a relatively developed network of castles, agglomerations of an urban character became more important. Medieval towns served both economic and defensive purposes. The territory of present-day Slovakia was rich in raw materials like gold, silver, copper, iron and salt, and therefore the mining industry developed gradually in the region. The development of the mining industry and commerce strengthened the position of some settlements, and they received privileges from the kings. The list of towns with the earliest charters contains Spišské Vlachy (1243), Košice (before 1248), Nitra (1248), Banská Štiavnica (1255), Nemecká Ľupča (1263), Komárno (1269), Gelnica (before 1270), Bratislava (1291) and Prešov, Veľký Šariš and Sabinov (all in 1299). The Saxons in Spiš were granted a collective charter (1271) by King Stephen V of Hungary. 
The colonisation of the northern parts of the Kingdom of Hungary continued during the period; Walloon, German, Hungarian and Slavic "guests" ("hospes", as they are called in contemporary documents) arrived in the sparsely populated lands and settled down there. The contemporary documents mention that settlers from Moravia and Bohemia arrived in the western parts of present-day Slovakia, while in the northern and eastern parts, Polish and Ruthenian "guests" settled down. German guests had an important but not exclusive role in the development of towns. Smaller groups of Germans were present already prior to the Mongol invasion, but their immigration gathered significant pace in the 13th–14th centuries. At that time there already existed settlements with a relatively highly developed economy in the territory of present-day Slovakia, but the Germans, who came from economically and administratively more advanced regions, introduced new forms of production and management, a new legal system and culture. The German guests settled in Upper and Lower Spiš, in the mining towns of Central Slovakia and their wide surroundings, and in many localities in Western Slovakia: Bratislava, Trnava and the wine-growing towns in the Malé Karpaty. In the Middle Ages, present-day Slovakia was among the most urbanized regions of the Kingdom of Hungary and was an important cultural and economic base. According to the decree of King Vladislaus II Jagiello (1498), six of the ten most important towns in the kingdom were located in present-day Slovakia: Košice, Bratislava, Bardejov, Prešov, Trnava and Levoča. In 1514, more than half of the royal towns and free mining towns of the kingdom were located in Slovakia. At the end of the Middle Ages, about two hundred other settlements had an urban character from a functional point of view. A first written mention prior to 1500 is available for 2,476 settlements. The mining towns in Slovakia significantly contributed to the economy of the Kingdom of Hungary. 
Around the middle of the 14th century, Kremnica alone produced 400 kg of gold per year. Banská Štiavnica and Banská Bystrica produced a substantial proportion of the silver of the whole kingdom. During the second half of the 14th century, the Kingdom of Hungary produced about 25% of Europe's total output. The towns formed unions and associations to defend their privileges and common interests. The most important unions were the Community of Saxons of Spiš (later reduced and known as the Province of the twenty-four Spiš towns), the Lower Hungarian Mining Towns (mining towns in Central Slovakia), the Pentapolis (an alliance of free royal towns in present-day Eastern Slovakia) and the Upper Hungarian Mining Towns (mining towns in eastern Slovakia, including two mining towns in present-day Hungary). The inhabitants of the privileged towns were mainly of German origin, followed by Slovaks and a smaller number of Hungarians. Royal privileges prove that several families of the developing local nobility ("e.g.", the Zathureczky, Pominorszky and Viszocsányi families) were of Slavic origin. The presence of Jews in several towns ("e.g.", in Bratislava and Pezinok) is also documented at least from the 13th century; the Jews' special status was confirmed by a charter of King Béla IV of Hungary in 1251, but decisions of local synods limited the participation of Jews ("i.e.", they could not hold offices and could not own lands). The Muslims living in the region of Nitra faced similar limitations; they disappeared (perhaps converted to Christianity) by the end of the 13th century. The last decades of the 13th century were characterized by discords within the royal family and among the several groups of the aristocracy. 
The decay of the royal power and the rise of some powerful aristocrats led to the transformation of the administrative system: the counties that had been the basic units of the royal administration ("royal counties") transformed gradually into autonomous administrative units of the local nobility ("noble counties"); however, the local nobility was not able to stop the rise of the oligarchs. Following the Mongol invasion of the kingdom, a competition started among the landowners: each of them endeavored to build a castle, with or without the permission of the king. The competition started a process of differentiation among the noble families, because the nobles who were able to build a castle could also expand their influence over the neighbouring landowners. The conflicts among the members of the royal family also strengthened the power of the aristocrats (who sometimes received whole counties from the kings) and resulted in the formation of around eight huge territories (domains) in the kingdom, governed by powerful aristocrats, by the 1290s. In present-day Slovakia, most of the castles were owned by two powerful aristocrats (Amade Aba and Matthew III Csák) or their followers. Following the extinction of the Árpád dynasty (1301), each of them nominally supported one of the claimants for the throne but, in practice, governed his territories independently. Amade Aba governed the eastern parts of present-day Slovakia from his seat in Gönc. He was killed by Charles Robert of Anjou's assassins at the south gate of Košice in 1311. Matthew III Csák was the "de facto" ruler of the western territories of present-day Slovakia, from his seat at Trenčín. He allied himself with the murdered Amade Aba's sons against Košice, but King Charles I of Hungary, who had managed to acquire the throne against his opponents, gave military assistance to the town, and the royal armies defeated him at the Battle of Rozgony / Rozhanovce in 1312. 
However, the north-western counties remained in his power until his death in 1321, when the royal armies occupied his former castles without resistance. Pressburg (Bratislava) county was "de facto" ruled by the Dukes of Austria from 1301 to 1328, when King Charles I of Hungary reoccupied it. King Charles I strengthened the central power in the kingdom following a 20-year-long period of struggles against his opponents and the oligarchs. He concluded commercial agreements with Kings John of Bohemia and Casimir III of Poland in 1335, which increased the trade on the commercial routes leading from Košice to Kraków and from Žilina (hu. Zsolna) to Brno. The king confirmed the privileges of the 24 "Saxon" towns in Spiš, strengthened the special rights of Prešov and granted town privileges to Smolník (hu. Szomolnok). The towns of present-day Slovakia were still dominated by their German citizens. However, the "Privilegium pro Slavis", dated to 1381, attests notably to nation-building in the wealthy towns: King Louis I gave the Slavs half of the seats in the municipal council of Žilina. Many of the towns ("e.g.", Banská Bystrica, Bratislava, Košice, Kremnica and Trnava) received the status of "free royal cities" "(liberæ regiæ civitates)" and were entitled to send deputies to the assemblies of the Estates of the Kingdom from 1441. In the first half of the 14th century, the population of the regions of the former "forest counties" increased and their territories formed new counties, such as Orava, Liptov, Turiec and Zvolen, in the northern parts of present-day Slovakia. In the region of Spiš, some elements of the population received special privileges: the 24 "Saxon" towns formed an autonomous community, independent of Spiš county, and the "nobles with ten lances" were organised into a special autonomous administrative unit ("seat"). In 1412, King Sigismund mortgaged 13 of the "Saxon" towns to King Władysław II of Poland, so they "de facto" belonged to Poland until 1769. 
From the 1320s, most of the lands of present-day Slovakia were owned by the kings, but prelates and aristocratic families ("e.g.", the Drugeth, Szentgyörgyi and Szécsényi families) also held properties on the territory. In December 1385, the future King Sigismund, who was Queen Mary of Hungary's prince consort at that time, mortgaged the territories of present-day Slovakia west of the Váh River to his cousins, Jobst and Prokop of Moravia; the former held his territories until 1389, while the latter maintained his rule over some of the territories until 1405. King Sigismund (1387–1437) granted vast territories to his followers ("e.g.", to members of the Cillei, Rozgonyi and Perényi families) during his reign; one of his principal advisers, the Pole Stibor of Stiboricz, styled himself "Lord of the whole Váh", referring to his 15 castles around the river. Following the death of King Albert (1439), civil war broke out among the followers of the claimants for the throne. The Dowager Queen Elisabeth hired Czech mercenaries led by Jan Jiskra, who captured several towns on the territory of present-day Slovakia ("e.g.", Kremnica, Levoča and Bardejov) and maintained most of them until 1462, when he surrendered to King Matthias Corvinus. The Ottoman Empire conquered the central part of the Kingdom of Hungary and set up several Ottoman provinces there (see Budin Eyalet, Eğri Eyalet, Uyvar Eyalet). Transylvania became an Ottoman vassal and a base which gave birth to all the anti-Habsburg revolts led by the nobility of the Kingdom of Hungary during the period from 1604 to 1711. The remaining part of the former Kingdom of Hungary, which included much of the present-day territory of Slovakia (except for the southern central regions), northwestern present-day Hungary, northern Croatia and present-day Burgenland, resisted Ottoman conquest and subsequently became a province of the Habsburg Monarchy. 
It continued to be known as the Kingdom of Hungary, but it is referred to by some modern historians as "Royal Hungary". Ferdinand I, Archduke of Austria, was elected king of the Habsburg Kingdom of Hungary. After the conquest of Buda by the Ottomans in 1541, "Pressburg" (the modern-day capital of Slovakia, Bratislava) became, for the period between 1536 and 1784/1848, the capital and coronation city of the Habsburg Kingdom of Hungary. From 1526 to 1830, nineteen Habsburg sovereigns went through coronation ceremonies as Kings and Queens of the Kingdom of Hungary in St. Martin's Cathedral. After the Ottoman invasion, the territories that had been administered by the Kingdom of Hungary became, for almost two centuries, the principal battleground of the Turkish wars. The region suffered due to the wars against the Ottoman expansion. Great loss of life and property occurred during the wars, and the region also practically lost all of its natural riches, especially gold and silver, which went to pay for the costly and difficult combats of an endemic war. In addition, the double taxation of some areas was a common practice, which further worsened the living standards of the declining population of local settlements. During the Ottoman administration, parts of the territory of present-day Slovakia were included in the Ottoman provinces known as the Budin Eyalet, Eğri Eyalet and Uyvar Eyalet. The Uyvar Eyalet had its administrative center in the territory of present-day Slovakia, in the town of Uyvar (Slovak: Nové Zámky). In the second half of the 17th century, Ottoman authority was extended to the eastern part of the Habsburg Kingdom of Hungary, where a vassal Ottoman principality led by prince Imre Thököly was established. After the ousting of the Ottomans from Budin (which later became Budapest) in 1686, that city became the capital of the Habsburg Kingdom of Hungary. 
Despite living under Hungarian, Habsburg and Ottoman administration for several centuries, the Slovak people succeeded in keeping their language and their culture. During the 18th century, the Slovak National Movement emerged, partially inspired by the broader Pan-Slavic movement, with the aim of fostering a sense of national identity among the Slovak people. Advanced mainly by Slovak religious leaders, the movement grew during the 19th century. At the same time, the movement was divided along confessional lines, and various groups had different views on everything from everyday strategy to linguistics. Moreover, Hungarian control remained strict after 1867, and the movement was constrained by the official policy of Magyarization. The first codification of a Slovak literary language, by Anton Bernolák in the 1780s, was based on the dialect of western Slovakia. It was supported mainly by Roman Catholic intellectuals, with their center in Trnava. The Lutheran intellectuals continued to use a Slovakized form of the Czech language. Ján Kollár and Pavel Jozef Šafárik in particular were adherents of Pan-Slavic concepts that stressed the unity of all Slavic peoples. They considered Czechs and Slovaks members of a single nation, and they attempted to draw the languages closer together. In the 1840s, the Protestants split as Ľudovít Štúr developed a literary language based on the dialect of central Slovakia. His followers stressed the separate identity of the Slovak nation and the uniqueness of its language. Štúr's version was finally approved by both the Catholics and the Lutherans in 1847 and, after several reforms, it remains the official Slovak language. In the Hungarian Revolution of 1848, Slovak nationalist leaders took the side of the Austrians in order to promote their separation from the Kingdom of Hungary within the Austrian monarchy. 
The Slovak National Council even took part in the Austrian military campaign by setting up auxiliary troops against the rebel government of the Hungarian Revolution of 1848. In September 1848, it managed to organize a short-lived administration of the captured territories. However, the Slovak troops were later disbanded by the Vienna Imperial Court. On the other hand, tens of thousands of volunteers from the current territory of Slovakia, among them a great number of Slovaks, fought in the Hungarian Army. After the defeat of the Hungarian Revolution, the Hungarian political elite was oppressed by the Austrian authorities, and many participants in the Revolution were executed, imprisoned or forced to emigrate. In 1850, the Kingdom of Hungary was divided into five military districts or provinces, two of which had administrative centers in the territory of present-day Slovakia: the Military District of Pressburg (Bratislava) and the Military District of Košice. The Austrian authorities abolished both provinces in 1860. The Slovak political elite made use of the period of neo-absolutism of the Vienna court and the weakness of the traditional Hungarian elite to promote its national goals. Turz-Sankt Martin (Martin / Túrócszentmárton) became the foremost center of the Slovak National Movement with the foundation of the nationwide cultural association Matica slovenská (1863), the Slovak National Museum, and the Slovak National Party (1871). The heyday of the movement came to a sudden end after 1867, when the Habsburg domains in central Europe underwent a constitutional transformation into the dual monarchy of Austria-Hungary as a result of the Austro-Hungarian Compromise of 1867. The territory of present-day Slovakia was included in the Hungarian part of the dual monarchy, dominated by the Hungarian political elite, which distrusted the Slovak elite because of its Pan-Slavism, separatism and its recent stand against the Hungarian Revolution of 1848. 
Matica was accused of Pan-Slavic separatism and was dissolved by the authorities in 1875; other Slovak institutions (including schools) shared the same fate. New signs of national and political life appeared only at the very end of the 19th century. Slovaks became aware that they needed to ally themselves with others in their struggle. One result of this awareness, the Congress of Oppressed Peoples of the Kingdom of Hungary, held in Budapest in 1895, alarmed the government. In their struggle, Slovaks received a great deal of help from the Czechs. In 1896, the concept of Czecho-Slovak Mutuality was established in Prague to strengthen Czecho-Slovak cooperation and support the secession of Slovaks from the Kingdom of Hungary. At the beginning of the 20th century, growing democratization of political and social life threatened to overwhelm the monarchy. The call for universal suffrage became the main rallying cry. In the Kingdom of Hungary, only 5 percent of inhabitants could vote. Slovaks saw in the trend towards representative democracy a possibility of easing ethnic oppression and a breakthrough into renewed political activity. The Slovak political camp, at the beginning of the century, split into different factions. The leaders of the Slovak National Party, based in Martin, expected the international situation to change in the Slovaks' favor, and they put great store by Russia. The Roman Catholic faction of Slovak politicians, led by Father Andrej Hlinka, focused on small undertakings among the Slovak public and, shortly before the war, established a political party named the Slovak People's Party. The liberal intelligentsia, rallying around the journal "Hlas" ("Voice"), followed a similar political path but attached more importance to Czecho-Slovak cooperation. An independent Social Democratic Party emerged in 1905. The Slovaks achieved some results. 
One of the greatest of these was the electoral success of 1906, when, despite continued oppression, seven Slovaks managed to get seats in the Assembly. This success alarmed the government and intensified measures that Slovaks regarded as oppressive. Magyarization reached its climax with a new education act known as the Apponyi Act, named after education minister Count Albert Apponyi. The new act stipulated that the teaching of the Hungarian language, as one of the subjects, must be included in the curriculum of non-state four-year elementary schools within the framework of compulsory schooling, as a condition for the non-state schools to receive state financing. Non-government organizations such as the Upper Hungary Magyar Educational Society supported Magyarization at a local level. Ethnic tension intensified when 15 Slovaks were killed during a riot on the occasion of the consecration of a new church at Černová / Csernova near Rózsahegy / Ružomberok (see Černová tragedy). The local inhabitants wanted the popular priest and nationalist politician Andrej Hlinka to consecrate their new church. Hlinka had contributed significantly to the construction of the church, but his bishop Alexander Párvy suspended him from his office and from exercising all clerical functions because of Hlinka's involvement in the national movement. This raised a wave of solidarity with Hlinka across all of present-day Slovakia. The villagers tried to achieve a compromise solution: to have the suspension lifted, or to postpone the consecration until the Holy See decided on Hlinka's case. Párvy refused to consent and appointed the ethnic Slovak dean Martin Pazúrik to the task. Pazúrik, like Hlinka, had been active in the election campaign, but he had supported Hungarian and Magyarone politicians and consistently adopted an anti-Slovak attitude. The church was eventually consecrated by force, with police assistance. 
Given where the event occurred, all 15 local gendarmes who participated in the subsequent tragedy were of Slovak origin. In the tense situation, the gendarmes shot dead 15 protesters from a crowd of approximately 300–400 villagers who tried to prevent the priests' convoy from entering their village. All this added to Slovak estrangement from and resistance to Hungarian rule, and the incident drew international attention to the violation of the national rights of non-Hungarian minorities. Before the outbreak of World War I, the idea of Slovak autonomy became part of Archduke Franz Ferdinand's plan for the federalization of the monarchy, developed with the help of the Slovak journalist and politician Milan Hodža. This last realistic attempt to tie Slovakia to Austria-Hungary was abandoned because of the Archduke's assassination, which in turn triggered World War I. After the outbreak of World War I, the Slovak cause took firmer shape in resistance and in the determination to leave the Dual Monarchy and to form an independent republic with the Czechs. The decision originated amongst people of Slovak descent in foreign countries. Slovaks in the United States of America, an especially numerous group, formed a sizable organization. These, and other organizations in Russia and in neutral countries, backed the idea of a Czecho-Slovak republic. Slovaks strongly supported this move. The most important Slovak representative at this time, Milan Rastislav Štefánik, a French citizen of Slovak origin, served as a French general and as a leading representative of the Czecho-Slovak National Council based in Paris. He made a decisive contribution to the success of the Czecho-Slovak cause. Political representatives at home, of all political persuasions, after some hesitation gave their support to the activities of Masaryk, Beneš and Štefánik. During the war the Hungarian authorities increased their harassment of Slovaks, which hindered the nationalist campaign among the inhabitants of the Slovak lands. 
Despite stringent censorship, news of moves abroad towards the establishment of a Czech-Slovak state got through to Slovakia and met with much satisfaction. During World War I (1914–1918), Czechs, Slovaks, and other national groups of Austria-Hungary campaigning for an independent state gained much support from compatriots living abroad. In the turbulent final year of the war, sporadic protest actions took place in Slovakia; politicians held a secret meeting at Liptószentmiklós / Liptovský Mikuláš on 1 May 1918. At the end of the war Austria-Hungary dissolved. The Prague National Committee proclaimed an independent republic of Czechoslovakia on 28 October, and, two days later, the Slovak National Council at Martin acceded to the Prague proclamation. The new republic was to include the Czech lands (Bohemia and Moravia), a small part of Silesia, Slovakia, and Subcarpathian Ruthenia. The new state set up a parliamentary democratic government and established its capital in the Czech city of Prague. As a result of the counter-attack of the Hungarian Red Army in May–June 1919, Czech troops were ousted from the central and eastern parts of present-day Slovakia, where a short-lived puppet Slovak Soviet Republic, with its capital in Prešov, was established. However, the Hungarian army stopped its offensive, and the troops were later withdrawn after the Entente's diplomatic intervention. In the Treaty of Trianon, signed in 1920, the Paris Peace Conference set the southern border of Czechoslovakia south of the Slovak-Hungarian language border for strategic and economic reasons. Consequently, some fully or mostly Hungarian-populated areas were also included in Czechoslovakia. 
According to the 1910 census, which had been manipulated by the ruling Hungarian bureaucracy, the population of the present territory of Slovakia numbered 2,914,143 people, including 1,688,413 (57.9%) speakers of Slovak, 881,320 (30.2%) speakers of Hungarian, 198,405 (6.8%) speakers of German, 103,387 (3.5%) speakers of Ruthenian and 42,618 (1.6%) speakers of other languages. In addition, in Subcarpathian Ruthenia, which was also incorporated into Czechoslovakia in this period, the same census recorded 605,942 people, including 330,010 (54.5%) speakers of Ruthenian, 185,433 (30.6%) speakers of Hungarian, 64,257 (10.6%) speakers of German, 11,668 (1.9%) speakers of Romanian, 6,346 (1%) speakers of Slovak/Czech, and 8,228 (1.4%) speakers of other languages. The Czechoslovak census of 1930 recorded 3,254,189 people in Slovakia, including 2,224,983 (68.4%) Slovaks, 585,434 (17.6%) Hungarians, 154,821 (4.5%) Germans, 120,926 (3.7%) Czechs, 95,359 (2.8%) Rusyns and 72,666 (3%) others. Slovaks, whom the Czechs outnumbered in the Czechoslovak state, differed in many important ways from their Czech neighbors. Slovakia had a more agrarian and less developed economy than the Czech lands, and the majority of Slovaks practised Catholicism while fewer Czechs adhered to established religions. The Slovak people generally had less education and less experience with self-government than the Czechs. These disparities, compounded by centralized governmental control from Prague, produced discontent among the Slovaks with the structure of the new state. Although Czechoslovakia, alone among the east-central European countries, remained a parliamentary democracy from 1918 to 1938, it continued to face minority problems, the most important of which concerned the country's large German population. A significant part of the new Slovak political establishment sought autonomy for Slovakia. 
The movement toward autonomy built up gradually from the 1920s until it culminated in independence in 1939. In the period between the two world wars, the Czechoslovak government attempted to industrialize Slovakia. These efforts did not meet with success, partly because of the Great Depression, the worldwide economic slump of the 1930s. Slovak resentment over perceived economic and political domination by the Czechs led to increasing dissatisfaction with the republic and growing support for ideas of independence. Many Slovaks joined with Father Andrej Hlinka and Jozef Tiso in calls for equality between Czechs and Slovaks and for greater autonomy for Slovakia. In September 1938, France, Italy, the United Kingdom and Nazi Germany concluded the Munich Agreement, which forced Czechoslovakia to cede the predominantly German region known as the Sudetenland to Germany. In November, by the First Vienna Award, Italy and Germany compelled Czechoslovakia (later Slovakia) to cede primarily Hungarian-inhabited southern Slovakia to Hungary. They did this in spite of pro-German official declarations made by Czech and Slovak leaders in October. On 14 March 1939, the Slovak Republic ("Slovenská republika") declared its independence and became a nominally independent state in Central Europe under Nazi German control of its foreign policy and, increasingly, of some aspects of its domestic policy. Jozef Tiso became Prime Minister and later President of the new state. On 15 March, Nazi Germany invaded what remained of Bohemia, Moravia, and Silesia after the Munich Agreement. The Germans established a protectorate over these lands, known as the Protectorate of Bohemia and Moravia. On the same day, Carpatho-Ukraine declared its independence, but Hungary immediately invaded and annexed the Republic of Carpatho-Ukraine. On 23 March, Hungary then occupied some additional disputed parts of the territory of present-day eastern Slovakia, causing the brief Slovak-Hungarian War. 
The nominally independent Slovak Republic went through the early years of the war in relative peace. As an Axis ally, the country took part in the wars against Poland and the Soviet Union. Although its contribution to the German war effort was symbolic, the number of troops involved (approx. 45,000 in the Soviet campaign) was significant in proportion to the population (2.6 million in 1940). Soon after independence, under the authoritarian government of Jozef Tiso, a series of measures aimed against the 90,000 Jews in the country was initiated. The Hlinka Guard began to attack Jews, and the "Jewish Code" was passed in September 1941. Resembling the Nuremberg Laws, the Code required Jews to wear a yellow armband and banned them from intermarriage and many jobs. More than 64,000 Jews lost their livelihood. Between March and October 1942, the state deported approximately 57,000 Jews to the German-occupied part of Poland, where almost all of them were killed in extermination camps. In May 1942, the Slovak Parliament accepted a bill that retroactively legalized the deportations. The deportation of the remaining Jewish population was stopped once the government considered the social problem created by its own policy "resolved". However, 12,600 more Jews were deported by the German forces occupying Slovakia after the Slovak National Uprising in 1944. Around half of them were killed in concentration camps. Other Jews were rounded up and massacred in the country by Slovak collaborators under German command, at Kremnička and Nemecká. Some 10,000 Slovak Jews survived in Slovakia. On 29 August 1944, 60,000 Slovak troops and 18,000 partisans, organized by various underground groups and the Czechoslovak government-in-exile, rose up against the Nazis. The insurrection later became known as the Slovak National Uprising. Slovakia was devastated by the fierce German counter-offensive and occupation, but guerrilla warfare continued even after the end of organized resistance. 
Although ultimately quelled by the German forces, the uprising was an important historical reference point for the Slovak people. It allowed them to end the war as a nation that had contributed to the Allied victory. Later in 1944 the Soviet attacks intensified, and the Red Army, helped by Romanian troops, gradually drove the German army out of Slovak territory. On 4 April 1945, Soviet troops marched into Bratislava, the capital city of the Slovak Republic. The victorious Powers restored Czechoslovakia in 1945 in the wake of World War II, albeit without Carpathian Ruthenia, which Prague ceded to the Soviet Union. The Beneš decrees, adopted as a result of the events of the war, led to the disenfranchisement and persecution of the Hungarian minority in southern Slovakia. The local German minority was expelled; only the population of some villages, such as Chmeľnica, evaded expulsion, though they suffered discrimination against the use of their language. The Czechs and Slovaks held elections in 1946. In Slovakia, the Democratic Party won the elections (62%), but the Czechoslovak Communist Party won in the Czech part of the republic, taking 38% of the total vote in Czechoslovakia, and eventually seized power in February 1948, making the country effectively a satellite state of the Soviet Union. Strict Communist control characterized the next four decades, interrupted only briefly during the so-called Prague Spring of 1968, after Alexander Dubček (a Slovak) became First Secretary of the Central Committee of the Communist Party of Czechoslovakia. Dubček proposed political, social, and economic reforms in his effort to make "socialism with a human face" a reality. Concern among other Warsaw Pact governments that Dubček had gone too far led to the invasion and occupation of Czechoslovakia on 21 August 1968 by Soviet, Hungarian, Bulgarian, East German, and Polish troops. Another Slovak, Gustáv Husák, replaced Dubček as Communist Party leader in April 1969. 
The 1970s and 1980s became known as the period of "normalization", in which the apologists for the 1968 Soviet invasion prevented, as best they could, any opposition to their conservative régime. Political, social, and economic life stagnated. Because the reform movement had had its center in Prague, Slovakia experienced "normalization" less harshly than the Czech lands. In fact, the Slovak Republic saw comparatively high economic growth in the 1970s and 1980s relative to the Czech Republic (and again for most of the period after 1994). The 1970s also saw the development of a dissident movement, especially in the Czech Republic. On 1 January 1977, more than 250 human rights activists signed a manifesto called Charter 77, which criticized the Czechoslovak government for failing to meet its human rights obligations. On 17 November 1989, a series of public protests known as the "Velvet Revolution" began and led to the downfall of Communist Party rule in Czechoslovakia. A transition government formed in December 1989, and the first free elections in Czechoslovakia since 1948 took place in June 1990. In 1992, negotiations on the new federal constitution deadlocked over the issue of Slovak autonomy. In the latter half of 1992, agreement emerged to dissolve Czechoslovakia peacefully. On 1 January 1993, the Czech Republic and the Slovak Republic simultaneously and peacefully came into existence. Both states attained immediate recognition from the United States of America and from their European neighbors. In the days following the "Velvet Revolution", Charter 77 and other groups united to become the Civic Forum, an umbrella group championing bureaucratic reform and civil liberties. Its leader, the playwright and former dissident Václav Havel, won election as President of Czechoslovakia in December 1989. The Slovak counterpart of the Civic Forum, Public Against Violence, expressed the same ideals. 
In the June 1990 elections, Civic Forum and Public Against Violence won landslide victories. They found, however, that although they had successfully completed their primary objective, the overthrow of the communist régime, they were less effective as governing parties. In the 1992 elections, a spectrum of new parties replaced both Civic Forum and Public Against Violence. In the election held in June 1992, Václav Klaus's Civic Democratic Party won in the Czech lands on a platform of economic reform, and Vladimír Mečiar's Movement for a Democratic Slovakia (HZDS) emerged as the leading party in Slovakia, basing its appeal on the fairness of Slovak demands for autonomy. Mečiar and Klaus negotiated the agreement to divide Czechoslovakia, and Mečiar's party, the HZDS, ruled Slovakia for most of its first five years as an independent state, except for a nine-month period in 1994 when, after a vote of no confidence, a reformist government under Prime Minister Jozef Moravčík operated. The first president of newly independent Slovakia, Michal Kováč, promised to make Slovakia "the Switzerland of Eastern Europe". The first prime minister, Mečiar, had served as the prime minister of the Slovak part of Czechoslovakia since 1992. Rudolf Schuster won the presidential election in May 1999. Mečiar's semi-authoritarian government allegedly breached democratic norms and the rule of law before its replacement, after the parliamentary elections of 1998, by a coalition led by Mikuláš Dzurinda. The first Dzurinda government made numerous political and economic reforms that enabled Slovakia to enter the Organisation for Economic Co-operation and Development (OECD), close virtually all chapters in European Union (EU) negotiations, and make itself a strong candidate for accession to the North Atlantic Treaty Organization (NATO). 
However, the popularity of the governing parties declined sharply, and several new parties that earned relatively high levels of support in public opinion polls appeared on the political scene. Mečiar remained the leader (in opposition) of the HZDS, which continued to receive the support of 20% or more of the population during the first Dzurinda government. In the September 2002 parliamentary election, a last-minute surge in support for Prime Minister Dzurinda's Slovak Democratic and Christian Union (SDKÚ) gave him a mandate for a second term. He formed a government with three other center-right parties: the Party of the Hungarian Coalition (SMK), the Christian Democrats (KDH) and the Alliance of the New Citizen (ANO). The coalition won a narrow (three-seat) majority in the parliament. Dzurinda's Second Cabinet (2002–2006) committed to strong NATO and EU integration and pledged to continue the democratic and free-market-oriented reforms begun by the first Dzurinda government. The new coalition had as its main priorities gaining NATO and EU invitations, attracting foreign investment, and reforming social services such as the health-care system. Vladimír Mečiar's Movement for a Democratic Slovakia, which had received about 27% of the vote in 1998 (almost 900,000 votes), received only 19.5% (about 560,000 votes) in 2002 and again went into opposition, unable to find coalition partners. The opposition comprised the HZDS, Smer (led by Róbert Fico), and the Communists, who obtained about 6% of the popular vote. Initially, Slovakia experienced more difficulty than the Czech Republic in developing a modern market economy. Slovakia joined NATO on 29 March 2004 and the EU on 1 May 2004. On 10 October 2005, Slovakia was for the first time elected to a two-year term on the UN Security Council (for 2006–2007). 
The next election took place on 17 June 2006, when the leftist Smer won 29.14% of the popular vote (around 670,000 votes) and formed a coalition with Slota's Slovak National Party and Mečiar's Movement for a Democratic Slovakia. Their opposition comprised the former ruling parties: the SDKÚ, the SMK and the KDH. The election in June 2010 was won by Smer with 34.8%, but Fico was unable to form a government, so a coalition of SDKÚ, KDH, SaS and Most-Híd took over, with Iveta Radičová as the first woman Prime Minister. This government fell after the vote on the European Financial Stability Facility was tied to a confidence vote, as SaS argued that Slovakia should not bail out much richer countries. Smer won the election in 2012 with 44.42%, and Fico formed his Second Cabinet, a single-party government holding 83 of the 150 seats. It officially supported the position of the EU during the Russian military intervention in Ukraine (2014–present) but sometimes doubted the efficiency of EU sanctions against Russia. In autumn 2015, during the European migrant crisis, the leaders of the four Visegrád Group states rejected the EU's proposal to reallocate 120,000 refugees. The next election took place in March 2016; some days later Fico formed his Third Cabinet, composed of four parties. Slovakia's Prime Minister Robert Fico resigned in March 2018 following the largest street protests in decades over the murder of Ján Kuciak, an investigative journalist who had been investigating high-level political corruption linked to organized crime. 
https://en.wikipedia.org/wiki?curid=27328
Geography of Slovakia Slovakia is a landlocked Central European country with mountainous regions in the north and flat terrain in the south. Land use: agricultural land 40.1% (arable land 28.9%; permanent crops 0.4%; permanent pasture 10.8%); forest 40.2%; other 19.7% (2011 est.). Natural resources: lignite, small amounts of iron ore, copper and manganese ore; salt; arable land. Natural hazards: flooding. Environment, international agreements: party to Air Pollution, Air Pollution-Nitrogen Oxides, Air Pollution-Persistent Organic Pollutants, Air Pollution-Sulfur 85, Air Pollution-Sulfur 94, Air Pollution-Volatile Organic Compounds, Antarctic Treaty, Biodiversity, Climate Change, Climate Change-Kyoto Protocol, Desertification, Endangered Species, Environmental Modification, Hazardous Wastes, Law of the Sea, Ozone Layer Protection, Ship Pollution, Wetlands, and Whaling; signed but not ratified: none of the selected agreements. Slovakia lies between 47°44'21" and 49°36'48" northern latitude and between 16°50'56" and 22°33'53" eastern longitude. The northernmost point is near Beskydok, a mountain on the border with Poland near the village of Oravská Polhora in the Beskids. The southernmost point is near the village of Patince on the Danube on the border with Hungary. The westernmost point is on the Morava River near Záhorská Ves on the Austrian border. The easternmost point is close to the summit of Kremenec, a mountain near the village of Nová Sedlica at the meeting point of the Slovak, Polish, and Ukrainian borders. The highest point is the summit of Gerlachovský štít in the High Tatras; the lowest point is the surface of the Bodrog River on the Hungarian border. Of the country's area, 31% is arable land, 17% pastures, 41% forests, and 3% cultivated land. The remaining 8% is mostly covered with human structures and infrastructure, and partly with rocky mountain ridges and other unimproved land. 
Slovakia borders Poland in the north, Ukraine in the east, Hungary in the south, Austria in the south-west, and the Czech Republic in the north-west. The climate is temperate, with cool summers and cold, cloudy, humid winters.
https://en.wikipedia.org/wiki?curid=27329
Demographics of Slovakia This article is about the demographic features of the population of Slovakia, including population density, ethnicity, education level, health of the populace, economic status, religious affiliations and other aspects of the population. The demographic statistics are from the Statistical Office of the SR, unless otherwise indicated. The total fertility rate is the number of children born per woman. It is based on fairly good data for the entire period. Sources: Our World In Data and Gapminder Foundation. Demographic statistics according to the World Population Review in 2019. The following demographic statistics are from the CIA World Factbook. Ethnic groups: Slovak 80.7%, Hungarian 8.5%, Romani 2%, other 1.8% (includes Czech, Ruthenian, Ukrainian, Russian, German, Polish), unspecified 7% (2011 est.). Languages: Slovak (official) 78.6%, Hungarian 9.4%, Roma 2.3%, Ruthenian 1%, other or unspecified 8.8% (2011 est.). Religions: Roman Catholic 62%, Protestant 8.2%, Greek Catholic 3.8%, other or unspecified 12.5%, none 13.4% (2011 est.). Sex ratio: at birth: 1.05 male(s)/female; under 15 years: 1.05 male(s)/female; 15–64 years: 1 male(s)/female; 65 years and over: 0.6 male(s)/female; total population: 0.94 male(s)/female (2011 est.). Immigration to Slovakia is among the lowest in the European Union. Infant mortality: total: 6.47 deaths/1,000 live births; male: 7.54 deaths/1,000 live births; female: 5.34 deaths/1,000 live births (2012 est.). Life expectancy figures from 1950 to 2015 are given in the "UN World Population Prospects". The majority of the 5.4 million inhabitants of Slovakia are Slovak (80.7%). Hungarians are the largest ethnic minority (8.5%) and are concentrated in the southern and eastern regions of Slovakia. Other ethnic groups include Roma (2.0%), Czechs, Croats, Rusyns, Ukrainians, Germans, Poles, Serbs and Jews (about 2,300 remain of the estimated pre-WWII population of 120,000). 
While both international organizations (the United Nations and the World Bank) and the official Slovak statistics office offer population figures for ethnic groups, these figures seldom come close to agreement. Figures for the Roma population (for a variety of reasons) vary between 1% and 10% of the population. In the most recent survey carried out by the Slovak Government's Roma Plenipotentiary, the percentage of Roma was arrived at through interviews with municipality representatives and mayors, who estimated how many Roma live in their jurisdictions. The figure arrived at by this means was in the region of 300,000 (about 5.6%). If the 5.6% figure is accepted, however, the above percentages of Slovaks and Hungarians are correspondingly lower. The official state language is Slovak, and Hungarian is widely spoken in the southern regions. Despite its modern European economy and society, Slovakia has a significant rural element. About 45% of Slovaks live in villages with fewer than 5,000 inhabitants, and 14% in villages with fewer than 1,000. The Slovak constitution guarantees freedom of religion. The majority of Slovak citizens (62%) practice Roman Catholicism; the second-largest group consider themselves atheists (13%). About 6.9% are Protestants, 4.1% Greek Catholics, 2.0% Reformed Christians, 0.9% Orthodox, and 6.4% others (2004 survey).
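The note above, that accepting the survey-based Roma share would lower the reported Slovak and Hungarian shares, can be sketched numerically. This is a back-of-the-envelope illustration only: the redistribution rule (subtracting the difference from the Slovak and Hungarian shares in proportion to their size) is an assumption, not the statistical office's method.

```python
# Illustrative only: recompute ethnic shares if the survey-based Roma
# figure (~300,000 of ~5.4 million) replaces the 2.0% census share.
total = 5_400_000                      # approximate population, from the text
census = {"Slovak": 80.7, "Hungarian": 8.5, "Roma": 2.0}  # census shares, %

survey_roma = 300_000 / total * 100    # ~5.6%, per the Plenipotentiary survey
extra = survey_roma - census["Roma"]   # percentage points to redistribute

# Hypothetical assumption: the "extra" Roma were recorded as Slovak or
# Hungarian in the census, in proportion to those groups' sizes.
pool = census["Slovak"] + census["Hungarian"]
adjusted = {
    "Slovak": census["Slovak"] - extra * census["Slovak"] / pool,
    "Hungarian": census["Hungarian"] - extra * census["Hungarian"] / pool,
    "Roma": survey_roma,
}
print({k: round(v, 1) for k, v in adjusted.items()})
# Under this assumption the Slovak and Hungarian shares drop by roughly
# 3.2 and 0.3 percentage points respectively.
```

Under this toy redistribution, the Slovak share falls to about 77.5% and the Hungarian share to about 8.2%, while the combined total stays fixed.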
https://en.wikipedia.org/wiki?curid=27330
Politics of Slovakia Politics of Slovakia takes place in a framework of a parliamentary representative democratic republic with a multi-party system. Legislative power is vested in the parliament, and in some cases it can also be exercised by the government or directly by citizens. Executive power is exercised by the government, led by the Prime Minister. The judiciary is independent of the executive and the legislature. The President is the head of state. Before the 1989 revolution, Czechoslovakia was a socialist dictatorship ruled by the Communist Party of Czechoslovakia, technically in coalition with the so-called National Front. Before free democratic elections could take place after the revolution, a transitional government was created. In December 1989, President of Czechoslovakia Gustáv Husák swore in the Government of National Understanding, headed by Marián Čalfa, and then abdicated. It consisted of 10 communists and 9 non-communists, and its main goals were to prepare for democratic elections, to establish a market economy in the country, and to start preparing a new constitution. On 8–9 June 1990, the Czechoslovakian parliamentary election of 1990 took place. Čalfa's second government was disbanded on 27 June 1990, when it was replaced by the Government of National Sacrifice, also headed by Marián Čalfa. On 5–6 June 1992, the last elections in Czechoslovakia, the Czechoslovakian parliamentary election of 1992, took place. Čalfa's third government was disbanded on 2 July 1992, when it was replaced by the Caretaker Government of Jan Stráský. The caretaker government was disbanded on 31 December 1992, together with the dissolution of the Czech and Slovak Federative Republic. Because of federalism, immediately after the 1989 revolution two national governments (one for the Czech Republic, one for Slovakia) were also created under the federal Czechoslovak government. 
In Slovakia, the national government was headed by Milan Čič; it was established on 12 December 1989 and disbanded on 26 June 1990. On 8–9 June 1990, the Slovak parliamentary election of 1990 took place together with the federal Czechoslovak elections. Čič's government was followed by the First Government of Vladimír Mečiar (1990–1991), the Government of Ján Čarnogurský (1991–1992) and the Second Government of Vladimír Mečiar (1992–1994). On 5–6 June 1992, the Slovak parliamentary election took place. The Constitution of the Slovak Republic was ratified on 1 September 1992 and became effective on 1 October 1992 (some parts on 1 January 1993). It was amended in September 1998 to allow direct election of the president and again in February 2001 due to EU admission requirements. The civil law system is based on Austro-Hungarian codes. The legal code was modified to comply with the obligations of the Organization for Security and Co-operation in Europe (OSCE) and to expunge Marxist–Leninist legal theory. Slovakia accepts the compulsory jurisdiction of the International Court of Justice, with reservations. The president is the head of state and the formal head of the executive, though with very limited powers. The president is elected by direct, popular vote, under the two-round system, for a five-year term. Following National Council elections, the leader of the majority party or of the majority coalition is usually appointed prime minister by the president. The cabinet, appointed by the president on the recommendation of the prime minister, must receive a majority in the parliament. From July 2006 until July 2010 the coalition consisted of Smer, SNS and HZDS. After the 2010 elections, a coalition was formed by the former opposition parties SDKÚ, KDH and Most–Híd and the newcomer SaS. From 2012 to 2016, after early elections, the whole government consisted of members and nominees of the party SMER-SD, which also had a majority in the parliament. 
The 2016 parliamentary election produced a coalition government of SMER-SD, SNS and Most-Híd. After the 2020 Slovak parliamentary election, the Ordinary People and Independent Personalities party won, and Igor Matovič became Prime Minister. Slovakia's sole constitutional and legislative body is the 150-seat unicameral National Council of the Slovak Republic. Delegates are elected for four-year terms on the basis of proportional representation. The National Council considers and approves the Constitution, constitutional statutes and other legal acts. It also approves the state budget. It elects some officials specified by law, as well as the candidates for the position of Justice of the Constitutional Court of the Slovak Republic and the Prosecutor General. The parliament must approve all important international treaties prior to their ratification. Moreover, it gives consent for the dispatch of military forces outside Slovakia's territory and for the presence of foreign military forces on the territory of the Slovak Republic. The current Chairman of the National Council is Andrej Danko. Suffrage: 18 years of age; universal, equal, and direct suffrage by secret ballot. The president is elected by direct, popular vote, under the two-round system, for a five-year term. Two rounds of the last election were held on March 16 and 30, 2019. Members of the National Council of the Slovak Republic are elected directly for a four-year term, under the proportional representation system. Like the Netherlands, the country forms a single multi-member constituency. Voters may indicate their preferences within the semi-open list. The election threshold is 5%. The latest elections were held on March 5, 2016. The Slovak political scene supports a wide spectrum of political parties, including the communists (KSS) and the nationalists (SNS). New parties frequently arise, and old parties cease to exist or merge. Major parties are members of the European political parties. 
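The electoral mechanics described above, proportional representation in a single nationwide constituency with a 5% threshold, can be sketched in code. This is a simplified Hare-quota largest-remainder illustration with made-up vote counts, not the exact apportionment formula prescribed by Slovak electoral law (which uses its own quota and handles ties and preference votes separately).

```python
# Simplified proportional seat allocation with a 5% threshold.
# Hypothetical illustration; not the statutory Slovak formula.
def allocate(votes: dict[str, int], seats: int,
             threshold: float = 0.05) -> dict[str, int]:
    total = sum(votes.values())
    # Parties below the nationwide threshold receive no seats.
    eligible = {p: v for p, v in votes.items() if v / total >= threshold}
    quota = sum(eligible.values()) / seats       # Hare quota over eligible votes
    result = {p: int(v // quota) for p, v in eligible.items()}
    # Hand out the remaining seats to the largest remainders.
    leftovers = sorted(eligible, key=lambda p: eligible[p] % quota, reverse=True)
    for p in leftovers[: seats - sum(result.values())]:
        result[p] += 1
    return result

# Made-up vote counts: party "D" falls below 5% and is excluded.
print(allocate({"A": 440_000, "B": 350_000, "C": 160_000, "D": 40_000}, 150))
```

With these hypothetical counts, D's votes are discarded and the 150 seats are split among A, B and C in proportion to their shares of the eligible vote, with the one seat left over after the quota division going to the largest remainder.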
Some parties have regional strongholds; for example, SMK is supported mainly by the Hungarian minority living in southern Slovakia. Although the main political cleavage in the 1990s concerned the somewhat authoritarian policy of HZDS, the left-right conflict over economic reforms (principally between Direction – Social Democracy and the Slovak Democratic and Christian Union – Democratic Party) has more recently become the dominant divide in Slovakia's politics. The country's highest appellate forum is the Supreme Court ("Najvyšší súd"), whose judges are elected by the National Council; below it are regional, district, and military courts. In certain cases the law provides for decisions of tribunals of judges to be attended by lay judges from the citizenry. Slovakia also has the Constitutional Court of Slovakia ("Ústavný súd Slovenskej Republiky"), which rules on constitutional issues. The 13 members of this court are appointed by the president from a slate of candidates nominated by Parliament. In 2002 Parliament passed legislation creating a Judicial Council. This 18-member council, composed of judges, law professors, and other legal experts, is now responsible for the nomination of judges. All judges except those of the Constitutional Court are appointed by the president from a list proposed by the Judicial Council. The Council is also responsible for appointing Disciplinary Senates in cases of judicial misconduct. Slovakia is a member of ACCT (observer), Australia Group, BIS, BSEC (observer), CE, CEI, CERN, European Audiovisual Observatory, EAPC, EBRD, EIB, EU, FAO, IAEA, IBRD, ICAO, ICC, ICCt, ICRM, IDA, IEA, IFC, IFRCS, ILO, IMF, IMO, Interpol, IOC, IOM, ISO, ITU, ITUC, MIGA, NAM (guest), NATO, NEA, NSG, OAS (observer), OECD, OPCW, OSCE, PCA, UN, UNAMSIL, UNCTAD, UNDOF, UNESCO, UNFICYP, UNIDO, UNTSO, UPU, Visegrád Group, WCO, WEU (associate partner), WFTU, WHO, WIPO, WMO, WToO, WTO, and ZC.
https://en.wikipedia.org/wiki?curid=27331
Economy of Slovakia The economy of Slovakia has been shaped by Slovakia's becoming an EU member state in 2004 and adopting the euro at the beginning of 2009. Its capital, Bratislava, is the largest financial centre in Slovakia. As of the first quarter of 2018, the unemployment rate was 5.72%. Because the Slovak GDP grew very strongly from 2000 until 2008 – e.g. 10.4% GDP growth in 2007 – the Slovak economy was referred to as the Tatra Tiger. Since the establishment of the Slovak Republic in January 1993, Slovakia has undergone a transition from a centrally planned economy to a free market economy, a process which some observers believe was slowed in the 1994–98 period by the crony capitalism and other fiscal policies of Prime Minister Vladimír Mečiar's government. While economic growth and other fundamentals improved steadily during Mečiar's term, public and private debt and trade deficits also rose, and privatization was uneven. Real annual GDP growth peaked at 6.5% in 1995 but declined to 1.3% in 1999. Two governments of the "liberal-conservative" Prime Minister Mikuláš Dzurinda (1998–2006) pursued policies of macroeconomic stabilization and market-oriented structural reforms. Nearly the entire economy has now been privatized, and foreign investment has picked up. Economic growth exceeded expectations in the early 2000s, despite recession in key export markets. In 2001, policies of macroeconomic stabilization and structural reform led to spiraling unemployment. Unemployment peaked at 19.2% (Eurostat regional indicators) in 2001 and, though it had fallen to 9.8% or 13.5% (depending on the methodology) as of September 2006, it remains a problem. Solid domestic demand boosted economic growth to 4.1% in 2002. Strong export growth, in turn, pushed economic growth to a still-strong 4.2% in 2003 and 5.4% in 2004, despite a downturn in household consumption. A combination of factors produced GDP growth of 6% in 2005. 
Headline consumer price inflation dropped from 26% in 1993 to an average rate of 7.5% in 2004, though this was boosted by hikes in subsidized utilities prices ahead of Slovakia's accession to the European Union. In July 2005, the inflation rate dropped to 2.0%; it was projected at less than 3% for 2005 and 2.5% for 2006. In 2006, Slovakia reached the highest economic growth (8.9%) among the members of the OECD and the third highest in the EU (just behind Estonia and Latvia). The country has had difficulties addressing regional imbalances in wealth and employment: GDP per capita ranges from 188% of the EU average in Bratislava to only 54% in Eastern Slovakia. The development of Slovakia's GDP according to the World Bank: In 2007, Slovakia obtained the highest GDP growth among the members of the OECD and the EU, with a record level of 14.3% in the fourth quarter. In 2014, GDP growth was 2.4%, and in 2015 and 2016 Slovakia's economy grew 3.6% and 3.3% respectively. For 2018, the National Bank of Slovakia predicted GDP growth of 4%. Foreign direct investment (FDI) in Slovakia has increased dramatically. Cheap and skilled labor, a 19% flat tax rate for both businesses and individuals, no dividend taxes, a weak labor code, and a favorable geographical location are Slovakia's main advantages for foreign investors. FDI inflow grew more than 600% from 2000 and cumulatively reached an all-time high of US$17.3 billion, or around $18,000 per capita, by the end of 2006. The total inflow of FDI in 2006 was $2.54 billion. In October 2005, new investment stimuli were introduced – more favorable conditions for IT and research centers, especially those located in the eastern part of the country (where unemployment is higher), intended to bring more added value and not be logistically demanding. Origin of foreign investment, 1996–2005: the Netherlands 24.3%; Germany 19.4%; Austria 14.1%; Italy 7.5%; United States (the 8th-largest investor) 4.0%. 
Top investors by company include Deutsche Telekom (Germany), Neusiedler (Austria), Gaz de France (France), Gazprom (Russia), U.S. Steel (U.S.), MOL (Hungary), ENEL (Italy), and E.ON (Germany). Foreign investment by sector: industry 38.4%; banking and insurance 22.2%; wholesale and retail trade 13.1%; production of electricity, gas and water 10.5%; transport and telecommunications 9.2%. The Slovak service sector grew rapidly during the last 10 years; it now employs about 69% of the population and contributes over 61% of GDP. Slovakia's tourism has been rising in recent years: income doubled from US$640 million in 2001 to US$1.2 billion in 2005. Slovakia became industrialized mostly in the second half of the 20th century. Heavy industry (including coal mining and the production of machinery and steel) was built for strategic reasons, because Slovakia was less exposed to military threat than the western parts of Czechoslovakia. After the end of the Cold War, the importance of industry, and especially of heavy industry, declined. In 2010, industry (including construction) accounted for 35.6% of GDP, compared with 49% in 1990. Nowadays, building on a long-standing tradition and a highly skilled labor force, the main industries with growth potential are the automotive, electronics, mechanical engineering, chemical engineering and information technology sectors. The automotive sector is among the fastest growing in Slovakia, owing to the recent large investments of Volkswagen (Bratislava), Peugeot (Trnava), Kia Motors (Žilina) and, since 2018, Jaguar Land Rover in Nitra. Passenger car production was 1,040,000 units in 2016, which makes Slovakia the largest automobile producer in cars produced per capita. Other big industrial companies include U.S. 
Steel (metallurgy), Slovnaft (oil industry), Samsung Electronics (electronics), Foxconn (electronics), Mondi SCP (paper), Slovalco (aluminum production), Hyundai Mobis (automotive), Continental Matador (automotive) and Whirlpool Corporation. In 2006, machinery accounted for more than half of Slovakia's exports. In 2016, agriculture accounted for 3.6% of GDP (compared to 6.9% in 1993) and occupied about 3.9% of the labor force (down from 10.2% in 1994). Over 40% of the land in Slovakia is cultivated. The southern part of Slovakia (bordering Hungary) is known for its rich farmland; crops grown there include wheat, rye, corn, potatoes, sugar beets, other grains, fruits and sunflowers. Vineyards are concentrated in the Little Carpathians, Tokaj, and other southern regions. The breeding of livestock, including pigs, cattle, sheep, and poultry, is also important. In recent years, service and high-tech-oriented businesses have prospered in Slovakia. Many global companies, including IBM, Dell, Lenovo, AT&T, SAP, Amazon, Johnson Controls, Swiss Re and Accenture, have built outsourcing and service centres in Bratislava and Košice (T-Systems, Cisco Systems, Ness, Deloitte). Slovak IT companies, including ESET, Sygic and Pixel Federation, have headquarters in Bratislava. According to a recent report by the European Commission, Slovakia (along with some other Central and Eastern European economies) is low on the list of EU states in terms of innovation (Slovakia ranks 22nd). Within the EU, it ranks next to last on knowledge creation and last for innovation and entrepreneurship. In the process of transition to a knowledge economy, it particularly lacks investment in education and a broader application of IT. The World Bank urges Slovakia to upgrade its information infrastructure and reform the education system. The OECD states that stronger product market competition would help. In March 2006, the Slovak government introduced new measures to implement the Action Plan for R&D and Innovation. 
The program covers the period from 2006 to 2010. The RDA is expected to launch at least one call for expressions of interest related to this program each year. The annual budget for the program will be set by the RDA. The overall amount available for the program depends on annual national budget resources and is likely to vary from year to year. Following an increase of around 50% in budget resources, the RDA had a total budget of €19.31 million in 2006. The minimum wage in Slovakia is set at €520 per month; the average salary for 2017 was €1,052 per month, and in the Bratislava region in 2017 the average salary was €1,527 per month. As of February 2018, the unemployment rate stood at 5.88%. Slovakia switched its currency from the Slovak koruna (Sk, slovenská koruna) to the euro on 1 January 2009, at a rate of 30.1260 korunas to the euro.
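The fixed changeover rate above can be applied directly; the helper below is a minimal sketch (the function name and two-decimal rounding are my own choices, not from the text).

```python
# Slovakia's euro changeover used an irrevocably fixed rate of
# 30.1260 Sk per euro (effective 1 January 2009).
SKK_PER_EUR = 30.1260

def skk_to_eur(amount_skk: float) -> float:
    """Convert an amount in Slovak korunas to euros at the fixed rate."""
    return round(amount_skk / SKK_PER_EUR, 2)
```

For example, `skk_to_eur(301.26)` yields 10.0, since 301.26 Sk is exactly ten times the per-euro rate.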
https://en.wikipedia.org/wiki?curid=27332
Telecommunications in Slovakia Telecommunications in Slovakia includes fixed and mobile telephones, radio, television, and the Internet. Slovak Telecom Inc. (formerly Slovenské Telekomunikácie, a.s.) was privatised on 18 July 2000. A 51% package of shares was purchased by the German Deutsche Telekom AG for €1 billion (more than 44 billion Sk at the time). The remaining 49% of the shares are still owned by the Slovak government, through the Department of Transport, Construction and Regional Development of the Slovak Republic (34%) and the National Property Fund (15%). Slovak Telecom was rebranded as T-Com in 2003. In 2010 there were more than 100 companies licensed to provide public fixed-line telephone service, although many of these do not offer commercial service to the wider public. The most notable country-wide providers are T-Com, Orange, Dial Telecom, SWAN and UPC. Several regional providers also operate in the market. Many of these offer triple-play services consisting of a fixed-line service, broadband internet access and access to television programmes. The number of triple-play customers has doubled since the service was introduced and currently stands at 78,049 subscribers. Due to strong penetration of the Slovak market by mobile phones, the fixed-line sector has shrunk dramatically in recent years: while there were 1,655,380 fixed lines in use in 1999, their number had decreased by about 40%, to 994,421, by 2010. Mobile communication in Slovakia first became available in the early 1990s, when the first NMT network operator, EuroTel Bratislava, a.s., a subsidiary of the then state-owned Slovenské Telekomunikácie, a.s., began service. EuroTel introduced the first GSM service to the public in 1997. EuroTel was privatised together with its parent company and was rebranded as T-Mobile on 3 May 2005. It is now fully integrated as part of the international T-Mobile brand. The second GSM network operator started its operation on 15 January 1997 under the name GlobTel a.s. 
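The decline in fixed lines can be checked with a one-line percentage-change calculation; the helper below is a generic sketch using the two subscriber figures given in the text.

```python
def pct_change(old: float, new: float) -> float:
    """Percentage change from old to new; negative values mean a decline."""
    return (new - old) / old * 100.0

# Fixed telephone lines in Slovakia, per the figures in the text:
# 1,655,380 lines in 1999 down to 994,421 in 2010.
decline = pct_change(1_655_380, 994_421)  # roughly -40%
```

This confirms the drop is about 40% over the period, i.e. some 660,000 lines lost.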
It was acquired by France Télécom (through Atlas Services Belgium, 100% of shares) and rebranded as Orange Slovensko on 27 March 2002. Telefónica Europe, the third mobile operator in Slovakia, entered the market in February 2007 under the O2 brand. Virtual providers are active in the Slovak market, the most notable being Tesco Mobile (associated with Tesco Stores) and FunFón (a virtual operator associated with a popular FM radio station). Slovakia has one of the highest Internet penetration rates in the world and the highest penetration rate in Central and Eastern Europe. Slovakia has a large number of nationwide ISPs that offer wired broadband Internet connections, including Slovak Telekom, Orange Slovensko and UPC. They offer a range of connections, from ADSL/ADSL2+ and VDSL to fiber optic; ADSL, ADSL2+ or VDSL is available in almost every town and village in Slovakia. There are no government restrictions on access to the Internet, and no reports that the government monitors e-mail or Internet chat rooms without judicial oversight; however, police monitor Web sites containing hate speech and attempt to arrest or fine the authors. The constitution and the law provide for freedom of speech and press. While the government mostly respects these rights in practice, in some instances it limits them to impede criticism and to limit the actions of groups it considers extremist. The law prohibits the defamation of nationalities, punishable by up to three years in prison, and denial of the Holocaust, which carries a sentence of six months to three years in prison. Criminal penalties for defamation are rarely used. The constitution and the law prohibit arbitrary interference with privacy, family, home, or correspondence, and the government generally respects these prohibitions in practice. Police must present a warrant before conducting a search, or within 24 hours afterwards. 
A new draft law under consideration in 2011 would allow the nation's tax office to block web servers that provide online gambling without a Slovak license. Opponents argue that the economic interests served by the law are not sufficient to justify online censorship.
https://en.wikipedia.org/wiki?curid=27333
Transport in Slovakia Transport in Slovakia is possible by rail, road, air or river. Slovakia is a developed Central European country with a well-developed rail network (3,662 km) and a highway system (225.25 km). The main international airport is M. R. Štefánik Airport in the capital, Bratislava. The most important waterway is the river Danube, used by passenger, cargo and freight ships. See: List of airports in Slovakia.
https://en.wikipedia.org/wiki?curid=27334
Slovak Armed Forces The Armed Forces of the Slovak Republic were split off from the Czechoslovak army after the dissolution of Czechoslovakia on January 1, 1993. Slovakia joined NATO on 29 March 2004. From 2006 the army transformed into a fully professional organization, and compulsory military service was abolished. The Slovak armed forces numbered 15,996 uniformed personnel and 3,761 civilians in 2014. The Slovak Air Force, officially the "Air Force of the Armed Forces of the Slovak Republic", has been defending Slovak airspace since 1939. It currently comprises one wing of fighters, one wing of utility helicopters, and one SAM brigade. It operates 20 aircraft and 10 helicopters from three air bases: Malacky Air Base, Sliač Air Base and Prešov Air Base. The Air Force is currently part of the NATO Integrated Air and Missile Defense System (NATINADS). In the future, a Cyber Defence Unit and an SOF training base will be added. Slovakia has 169 military personnel deployed in Cyprus for the UN-led UNFICYP peace support operation. Slovakia committed to increasing the number of its troops in Afghanistan to around 45 men by the end of 2016. Slovakia has 41 troops deployed in Bosnia and Herzegovina for EUFOR Althea. Slovak troops were withdrawn from Kosovo because the Slovak Armed Forces set their priority on the NATO-led mission in Afghanistan. Since the independence of Slovakia in 1993, there have been 60 uniformed personnel deaths in the line of service to the United Nations and NATO (as of April 30, 2018).
https://en.wikipedia.org/wiki?curid=27335
Foreign relations of Slovakia The Slovak Republic has been a member of the European Union since 2004. Slovakia has been an active participant in U.S.- and NATO-led military actions. There is a joint Czech-Slovak peacekeeping force in Kosovo. After the September 11, 2001 terrorist attacks on the United States, the government opened its airspace to coalition planes. In June 2002, Slovakia announced that it would send an engineering brigade to Afghanistan. The Slovak Republic is a member of the United Nations and participates in its specialized agencies. It is a member of the Organization for Security and Cooperation in Europe (OSCE), the World Trade Organization (WTO), and the OECD. It is also part of the Visegrád Four (Slovakia, Hungary, the Czech Republic, and Poland), a forum for discussing areas of common concern. The Slovak Republic and the Czech Republic entered into a customs union upon the division of Czechoslovakia in 1993, which facilitates a relatively free flow of goods and services. The Slovak Republic maintains diplomatic relations with 134 countries, primarily through its Ministry of Foreign Affairs. There are 44 embassies and 35 honorary consulates in Bratislava. Liechtenstein claims restitution of land in Slovakia confiscated from its princely family in 1918 by the then newly established state of Czechoslovakia, the predecessor of the Slovak Republic. The Slovak Republic insists that the power to claim restitution does not go back before February 1948, when the Communists seized power. Slovakia and Liechtenstein established diplomatic relations on 9 December 2009. Bilateral government, legal, technical and economic working-group negotiations continued in 2006 between Slovakia and Hungary over Hungary's completion of its portion of the Gabčíkovo–Nagymaros hydroelectric dam project along the Danube. Slovakia is a transshipment point for Southwest Asian heroin bound for Western Europe and a producer of synthetic drugs for the regional market.
https://en.wikipedia.org/wiki?curid=27336
Steven Soderbergh Steven Andrew Soderbergh (born January 14, 1963) is an American film director, producer, screenwriter, cinematographer, and editor. An early pioneer of modern independent cinema, Soderbergh is an acclaimed and prolific filmmaker. Soderbergh's directorial breakthrough, the indie drama "Sex, Lies, and Videotape" (1989), lifted him into the public spotlight as a notable presence in the film industry. At 26, Soderbergh became the youngest solo director to win the Palme d'Or at the Cannes Film Festival, which brought the film worldwide commercial success along with numerous accolades. His breakthrough took him to Hollywood, where he directed the crime comedy "Out of Sight" (1998), the biopic "Erin Brockovich" (2000) and the crime drama "Traffic" (2000), the last of which won him the Academy Award for Best Director. He found further popular and critical success with the "Ocean's" trilogy and film franchise (2001–18), "Contagion" (2011), "Magic Mike" (2012), "Side Effects" (2013), "Logan Lucky" (2017), and "Unsane" (2018). Although his film career spans a multitude of genres, his cinematic niche centers on psychological, crime, and heist thrillers. His films have grossed over US$2.2 billion worldwide and garnered nine Oscar nominations, winning seven. Soderbergh's films often revolve around familiar concepts used for big-budget Hollywood movies, but approached with an avant-garde arthouse sensibility. They center on the themes of shifting personal identities, vengeance, sexuality, morality, and the human condition. His feature films retain a distinctive cinematography as a result of his liberal use of avant-garde techniques coupled with unconventional film and camera formats. Many of Soderbergh's films are anchored by multi-dimensional storylines with plot twists, nonlinear storytelling, experimental sequencing, suspenseful soundscapes, and third-person vantage points. 
Soderbergh was born on January 14, 1963, in Atlanta, Georgia, to Mary Ann (née Bernard) and Peter Andrew Soderbergh, a university administrator and educator. He has Swedish, Irish, and Italian roots; his paternal grandfather immigrated to the U.S. from Stockholm. As a child, he moved with his family to Charlottesville, Virginia, where he lived during his adolescence, and then to Baton Rouge, Louisiana, where his father became Dean of Education at Louisiana State University (LSU). Soderbergh discovered filmmaking as a teenager and directed short films with Super 8 and 16 mm cameras. He attended the Louisiana State University Laboratory School for high school before graduating and moving to Hollywood to pursue professional filmmaking. In his first jobs he worked as a game show composer and cue card holder; soon after, he found work as a freelance film editor. During this time, he directed the concert video "9012Live" for the rock band Yes in 1985, for which he received a Grammy Award nomination for Best Music Video, Long Form. After Soderbergh returned to Baton Rouge, he wrote the screenplay for "Sex, Lies, and Videotape" on a legal pad during an eight-day cross-country drive. The movie tells the story of a troubled man who videotapes women discussing their lives and sexuality, and his impact on the relationship of a married couple. Soderbergh submitted the film to the 1989 Cannes Film Festival, where it won a variety of awards, including the Palme d'Or. Its critical reception helped it become a worldwide commercial success, grossing $36.7 million on a $1.2 million budget. The film was considered to be the most influential catalyst of the 1990s independent cinema movement. At age 26, Soderbergh became the youngest solo director, and the second-youngest director overall, to win the festival's top award. Movie critic Roger Ebert called Soderbergh the "poster boy of the Sundance generation". 
His relative youth and sudden rise to prominence in the film industry led to his being referred to as a "sensation" and a prodigy. In 2006, the film was selected by the Library of Congress for preservation in the United States National Film Registry, being deemed "culturally, historically, or aesthetically significant", and the American Film Institute nominated it as one of the greatest movies ever made. Soderbergh's directorial debut was followed by a series of low-budget box-office disappointments. In 1991, he directed "Kafka", a biopic of Franz Kafka written by Lem Dobbs and starring Jeremy Irons. The film returned one tenth of its budget and received mixed reviews from critics. Roger Ebert's review stated, "Soderbergh does demonstrate again here that he's a gifted director, however unwise in his choice of project". Two years later, he directed the drama "King of the Hill" (1993), which again met with poor commercial performance, although it fared well with critics. Based on the memoir of writer A. E. Hotchner, the film is set during the Great Depression and follows a young boy (played by Jesse Bradford) struggling to survive on his own in a hotel in St. Louis after his mother falls ill and his father is away on business trips. In 1995, he directed a remake of Robert Siodmak's 1949 film noir "Criss Cross", titled "The Underneath", which grossed $536,020 on a $6.5 million budget and was widely panned by critics, with Rodrigo Perez of "IndieWire" accusing Soderbergh of "throwing himself under the bus." Soderbergh directed "Schizopolis" in 1996, a comedy which he starred in, wrote, composed, and shot as well as directed. The 96-minute film was submitted to the Cannes Film Festival to such a "chilly response" that he reworked the entire introduction and conclusion before releasing it commercially. 
In the movie's introduction, he placed a title page that read: "In the event that you find certain sequences or events confusing, please bear in mind this is your fault, not ours. You will need to see the picture again and again until you understand everything". He starred in "Schizopolis" as Fletcher Munson, a spokesman for a Scientology-esque lifestyle cult, and again as Dr. Jeffrey Korchek, a dentist having an affair with Munson's wife. The film switches languages multiple times mid-scene without subtitles, leaving large parts of it incomprehensible. It was viewed by critics as a "directorial palate cleanse" for Soderbergh. In the months following the debut of "Schizopolis", he released a small, edited version of the Spalding Gray monologue film "Gray's Anatomy". Soderbergh would later refer to "Schizopolis" as his "artistic wake-up call". Soderbergh co-wrote the script for the 1997 horror-thriller "Nightwatch" with Danish filmmaker Ole Bornedal, an American remake of Bornedal's own film of the same name produced in his native country. Soderbergh's reemergence began in 1998 with "Out of Sight", a stylized adaptation of an Elmore Leonard novel, written by Scott Frank and starring George Clooney and Jennifer Lopez. The film was widely praised, though only a moderate box-office success. The critical reception of the movie began a multi-film artistic partnership between Clooney and Soderbergh. Soderbergh followed up on the success of "Out of Sight" by making another crime caper, "The Limey" (1999), from a screenplay by Lem Dobbs and starring Terence Stamp and Peter Fonda. The film was well received and established him within the cinematic niche of thriller and heist films. He ventured into his first biographical film in 2000 when he directed "Erin Brockovich", written by Susannah Grant and starring Julia Roberts in her Oscar-winning role as a single mother taking on a power company in a civil action. 
In late 2000, Soderbergh released "Traffic", a social drama written by Stephen Gaghan and featuring an ensemble cast. "Time" magazine compared him to a baseball player hitting home runs with "Erin Brockovich" and "Traffic". Both films were nominated at the 2001 Academy Awards, making him the first director since Michael Curtiz in 1938 to be nominated for Best Director in the same year for two different films. He was awarded the Academy Award for Best Director for "Traffic" and received best-director nominations at that year's Golden Globe and Directors Guild of America Awards. In early 2001, he was approached to direct a reboot of the 1960s Rat Pack movie "Ocean's 11". After Ted Griffin wrote the screenplay, Soderbergh signed on to direct. The film opened to critical acclaim and widespread commercial success. It quickly became Soderbergh's highest-grossing movie to date, grossing more than $183 million domestically and more than $450 million worldwide. "Rolling Stone" credited the movie with "[spawning] a new era of heist movies". In the same year, Soderbergh made "Full Frontal", shot mostly on digital video in an improvisational style that deliberately blurred the line between which actors were playing characters and which were playing fictionalized versions of themselves. A year later, he was asked by executives at Warner Bros. Studios to direct the psychological thriller "Insomnia" (2002), starring Academy Award winners Al Pacino, Robin Williams, and Hilary Swank. Despite their insistence, Soderbergh wanted the film to go instead to up-and-coming director Christopher Nolan. Before returning to the "Ocean's" series, Soderbergh directed "K Street" (2003), a ten-part political HBO series he co-produced with George Clooney. The series was partially improvised, with each episode produced in the five days prior to airing to take advantage of topical events that could be worked into the fictional narrative. 
Actual political players appeared as themselves, either in cameos or portraying fictionalized versions of themselves, notably James Carville and Mary Matalin. Soderbergh directed "Ocean's Twelve", a sequel to "Ocean's Eleven", in 2004. The second installment received muted critical reviews but was another commercially successful film, grossing $362.7 million on a $110 million budget. Matt Singer of "IndieWire" called it a "Great Sequel About How Hard It Is to Make a Great Sequel." Also in 2004, Soderbergh produced and co-wrote the adapted screenplay for the film "Criminal", a remake of the Argentine film "Nine Queens", with his longtime assistant director Gregory Jacobs, who made his directorial debut with the film. A year later, Soderbergh directed "Bubble" (2005), a $1.6 million film featuring a cast of nonprofessional actors. It opened in selected theaters and on HDNet simultaneously, and four days later on DVD. Industry heads were reportedly watching how the film performed, as its unusual release schedule could have implications for future feature films. Theater owners, who at the time had been suffering from dropping attendance rates, did not welcome so-called "day-and-date" movies; National Association of Theatre Owners chief executive John Fithian indirectly called the film's release model "the biggest threat to the viability of the cinema industry today." Soderbergh's response to such criticism: "I don't think it's going to destroy the movie-going experience any more than the ability to get takeout has destroyed the restaurant business." A romantic drama set in post-war Berlin, "The Good German", starring Cate Blanchett and Clooney, was released in late 2006. The film performed poorly commercially, grossing $5.9 million worldwide against a budget of $32 million. Soderbergh next directed "Ocean's Thirteen", which was released in June 2007 to further commercial success and increased critical acclaim. 
Grossing $311.3 million on an $85 million budget, it is the second-highest-grossing film of his career after the first "Ocean's". The film concluded what would later be known as the "Ocean's" trilogy, a collection of heist movies that would go on to be described as defining a new era of heist films. Soderbergh then directed "Che", which was released in theatres in two parts, titled "The Argentine" and "Guerrilla", and was presented in the main competition of the 2008 Cannes Film Festival on May 22. Benicio del Toro played Argentine guerrilla Ernesto "Che" Guevara in an epic four-hour double bill which looks first at his role in the Cuban Revolution before moving to his campaign and eventual death in Bolivia. Soderbergh shot his feature film "The Girlfriend Experience" in New York in 2008, casting adult film star Sasha Grey as the lead actress, to great attention and controversy. Soderbergh's first film of 2009 was "The Informant!", a black comedy starring Matt Damon as corporate whistleblower Mark Whitacre, who wore a wire for two and a half years for the FBI as a high-level executive at a Fortune 500 company, Archer Daniels Midland (ADM), in one of the largest price-fixing cases in history. The film was released on September 18, 2009. The script was written by Scott Z. Burns, based on Kurt Eichenwald's book "The Informant". The film grossed $41 million on a $22 million budget and received generally favorable reviews from critics. Also in 2009, Soderbergh shot a small improvised film with the cast of the play "The Last Time I Saw Michael Gregg", a comedy about a theatre company staging Chekhov's "Three Sisters". He has stated that he does not want it seen by the public and only intended it for the cast. Soderbergh nearly filmed a feature adaptation of the baseball book "Moneyball", starring Brad Pitt and Demetri Martin. 
The book, by Michael Lewis, tells how Billy Beane, general manager of the Oakland Athletics, used statistical analysis to compensate for a lack of funds, beat the odds, and lead his team to a series of notable wins in 2002. Disagreements between Sony and Soderbergh about revisions to Steven Zaillian's version of the screenplay led to Soderbergh's dismissal from the project only days before filming was to begin in June 2009. In 2010, Soderbergh shot the action thriller "Haywire", starring Gina Carano, Ewan McGregor, Michael Fassbender, and Channing Tatum, which, though shot in early 2010, was not released until January 2012. In the fall of 2010, Soderbergh shot the epic virus thriller "Contagion", written by Scott Z. Burns. With a cast including Matt Damon, Kate Winslet, Gwyneth Paltrow, Laurence Fishburne, Marion Cotillard and Jude Law, the film follows the outbreak of a lethal pandemic across the globe and the efforts of doctors and scientists to discover the cause and develop a cure. Soderbergh premiered it at the 68th Venice Film Festival in Venice, Italy on September 3, 2011, and released it to the general public six days later to commercial success and widespread critical acclaim. It grossed $135.5 million on a $60 million budget; Manohla Dargis of "The New York Times" wrote that this "smart, spooky thriller about a thicket of contemporary plagues—a killer virus, rampaging fear, an unscrupulous blogger—is as ruthlessly effective as the malady at its cool, cool center." In August 2011, Soderbergh served as a second unit director on "The Hunger Games" and filmed much of the District 11 riot scene. In September and October 2011, he shot "Magic Mike", a film starring Channing Tatum, inspired by the actor's experiences working as a male stripper in his youth. Tatum played the title mentor character, while Alex Pettyfer played a character based on Tatum. The film was released on June 29, 2012 to a strong commercial performance and critical acclaim. 
Throughout 2012, Soderbergh announced his intention to retire from feature filmmaking. He stated that "when you reach the point where you're saying, 'If I have to get into a van to do another scout, I'm just going to shoot myself,' it's time to let somebody who's still excited about getting in the van, get in the van." Soderbergh later said that he would retire from filmmaking and begin to explore painting. A few weeks later, he played down his earlier comments, saying a filmmaking "sabbatical" was more accurate. For his then-final feature film, he directed the psychological thriller "Side Effects", starring Jude Law, Rooney Mara, Channing Tatum and Catherine Zeta-Jones. It was shot in April 2012 and released on February 8, 2013. After the film screened at the 63rd Berlin International Film Festival, A. O. Scott of "The New York Times" wrote that Soderbergh "[handled] it brilliantly, serving notice once again that he is a crackerjack genre technician." While promoting "Side Effects" in early 2013, he clarified that he had a five-year plan to transition away from making feature films around his fiftieth birthday. Around that time, he gave a much-publicized speech at the San Francisco International Film Festival detailing the obstacles facing filmmakers in the corporate Hollywood environment. Soderbergh had planned to commence production in early 2012 on a feature version of "The Man from U.N.C.L.E.", also written by Scott Z. Burns. George Clooney was set for the lead role of Napoleon Solo but had to drop out due to a recurring back injury suffered while filming "Syriana". In November 2011 Soderbergh withdrew from the project due to budget and casting conflicts and was eventually replaced by Guy Ritchie. His final televised project before heading into retirement was "Behind the Candelabra". 
Shot in the summer of 2012, it starred Michael Douglas as the legendarily flamboyant pianist Liberace and Matt Damon as his lover Scott Thorson. The film was written by Richard LaGravenese, based on Thorson's book, and produced by HBO Films. It was selected to compete for the Palme d'Or at the 2013 Cannes Film Festival. In May 2013—only months into his retirement—Soderbergh announced that he would direct a 10-part miniseries for Cinemax called "The Knick". The series followed doctors at a fictionalized version of the Knickerbocker Hospital in Manhattan in the early twentieth century. It starred Clive Owen, Andre Holland, Jeremy Bobb, Juliet Rylance, Eve Hewson and Michael Angarano and was filmed in the fall of 2013. It began airing in August 2014 to critical acclaim. After completing the second season, Soderbergh revealed he was finished directing the show and said, "I told them [Cinemax] that I'm going to do the first two years and then we are going to break out the story for seasons 3 and 4 and try and find a filmmaker or filmmakers to do this the way that I did. This is how we want to do this so that every two years, whoever comes on, has the freedom to create their universe." After his work on "The Knick", Soderbergh began working on a variety of personal projects, starting with directing an Off-Broadway play titled "The Library", starring Chloë Grace Moretz, in January 2014. On April 21, 2014, Soderbergh released an alternate cut of Michael Cimino's controversial 1980 Western "Heaven's Gate" on his website. Credited to his pseudonym Mary Ann Bernard and dubbed "The Butcher's Cut", Soderbergh's version runs 108 minutes. On September 22, 2014, he uploaded a black-and-white silent version of "Raiders of the Lost Ark", set to Trent Reznor and Atticus Ross's score for "The Social Network". Its purpose was to study aspects of staging in filmmaking. 
It was announced in June 2014 that Soderbergh would executive-produce a series based on his earlier film "The Girlfriend Experience" for the Starz network, to premiere in 2016. In September 2015, Soderbergh was announced to be directing "Mosaic", a series for HBO. Starring Sharon Stone, it was a dual-media project, released both as an interactive movie app in November 2017 and as a six-part miniseries airing in January 2018. In February 2016, Soderbergh officially came out of retirement to direct a NASCAR heist film, "Logan Lucky", starring Channing Tatum, Adam Driver, and Daniel Craig, among others. The film was produced entirely by Soderbergh, with no studio involved in anything other than theatrical distribution. It was released on August 18, 2017 by Bleecker Street and Fingerprint Releasing, his own distribution and production company. "Logan Lucky" was met with widespread critical acclaim; Matt Zoller Seitz, writing for RogerEbert.com, stated: "The odds seem stacked in "Logan Lucky"'s favor the instant you spot 'Directed by Steven Soderbergh' in the opening credits". In July 2017, it was revealed that Soderbergh had also secretly shot a horror film, "Unsane", using iPhones, starring Claire Foy and Juno Temple. The film was released on March 23, 2018. His use of an iPhone shooting in 4K was considered "inspirational to aspiring filmmakers" for breaking down the perceived costs associated with producing a feature film in the United States. The movie was well received by critics, with Scott Meslow of "GQ" noting its relevance to the modern plight of women in patriarchal societies and calling it a "nerve-jangling modern-day Kafka story". In 2018, Soderbergh directed "High Flying Bird", starring Andre Holland as a sports agent who presents his rookie client with an intriguing and controversial business opportunity during an NBA lockout. 
The film began production in February 2018 and was released on February 8, 2019, by Netflix. Soderbergh's film "The Laundromat" is a political thriller about the international leak of the Panama Papers, written by Scott Z. Burns and based on the book "Secrecy World" by Pulitzer Prize winner Jake Bernstein. It stars Meryl Streep, Gary Oldman, Antonio Banderas, Jeffrey Wright, Matthias Schoenaerts, James Cromwell and Sharon Stone, and premiered at the Venice Film Festival on September 1, 2019 before airing on Netflix. His next film, "Let Them All Talk", is a comedy starring Meryl Streep, Gemma Chan, Dianne Wiest, Candice Bergen and Lucas Hedges. The film was shot in New York and the UK, and aboard an ocean liner. It will premiere in 2020 on HBO Max. Soderbergh is working on a six-part miniseries written by Lem Dobbs about the life of Emin Pasha. He is also scheduled to direct a crime film called "Kill Switch", written by "Mosaic" writer Ed Solomon and starring Don Cheadle, Josh Brolin, and Sebastian Stan. Soderbergh's visual style often emphasizes wealthy urban settings, natural lighting, and fast-paced working environments. Film critic Drew Morton has likened his stylistic approach to that of the French New Wave movement in filmmaking. Soderbergh's experimental style and tendency to reject mainstream film standards stem from his belief that "[filmmakers] are always, in essence, at the beginning of infinity ... there is always another iteration ... always will be." On a technical level, Soderbergh prefers sustained close-ups, tracking shots, jump cuts, and experimental sequencing, and frequently skips establishing shots in favor of audio and alternative visuals. Many of his films are noted for building a milieu of suspense through the use of third-person vantage points and a variety of over-the-shoulder shots. In his film "Contagion" (2011), he used a multi-narrative "hyperlink cinema" style, first established within the "Ocean's" trilogy. 
He is known for marking aesthetic transitions with a variety of colored washes, most notably yellow to symbolize open, socially acceptable situations, while blue washes typically symbolize illegal or socially illicit endeavors. In line with these washes, Soderbergh is liberal in his use of montages, as he believes they are as important to storytelling as dialogue. Soderbergh is known for having a combative relationship with Hollywood and the standards of studio filmmaking. Film critic Roger Ebert commented on this stylistic antagonism: "Every once in a while, perhaps as an exercise in humility, Steven Soderbergh makes a truly inexplicable film... A film so amateurish that only the professionalism of some of the actors makes it watchable... It's the kind of film where you need the director telling you what he meant to do and what went wrong and how the actors screwed up and how there was no money for retakes, etc." In "Ocean's Twelve" (2004), he had actress Julia Roberts play the part of Tess, a character then forced to pose as a fictionalized version of Roberts. During the production of "The Girlfriend Experience" (2009) he cast adult film star Sasha Grey in the lead role. In "Haywire" (2011), Soderbergh cast, and eventually launched the film career of, professional mixed martial arts (MMA) fighter Gina Carano. Soderbergh's "Logan Lucky" (2017) referenced his trilogy by alluding to an "Ocean's 7–11", noting the trilogy's influence on the Southern heist film. Soderbergh's films are built on suspenseful and ambient soundscapes. A primary way he achieves suspense is by introducing audio before visuals in cut scenes, alerting the viewer to a sudden change in tone. His frequent collaborations with composers Cliff Martinez, David Holmes, and most recently Thomas Newman, provide his films with "the thematic and sonic landscapes into which he inserts his characters." 
Soderbergh's early films—on account of his youth and lack of resources—were primarily shot on Super 8 and 16 mm film formats. His feature films have been shot on a diverse range of camera equipment. He filmed all of "The Girlfriend Experience" (2009) on a Red One camera, which has retailed for $4,000—a relatively inexpensive camera for a movie produced for $1.3 million. Soderbergh filmed the entirety of "Unsane" (2018) on an iPhone 7 Plus with its 4K digital camera using the app FiLMiC Pro. He filmed with three rotating iPhones, using a DJI stabilizer to hold the phone in place. In January 2018, he expressed an interest in filming other productions solely with iPhones going forward. He then filmed the entirety of 2019's "High Flying Bird" on an iPhone 8. In addition to directing, he frequently writes the screenplays for his films. Scott Tobias of "The A.V. Club" has described his method of experimental filmmaking as "rigorously conceived, like a mathematician working out a byzantine equation". Starting in 2000 with his film "Traffic", Soderbergh has used various pseudonyms in the opening and closing credits to obscure the fact that he also edits and shoots his own films. When working with actors, Soderbergh prefers a non-intrusive directorial style: "I try and make sure they're OK, and when they're in the zone, I leave them alone. I don't get in their way." This method has attracted repeat performances by many high-profile movie stars, establishing recurring collaborations between them and Soderbergh. Soderbergh's films often center on themes of shifting personal identities, sexuality, and the human condition. Richard Brody of "The New Yorker" stated that Soderbergh is focused on the process of presenting ideas through film rather than their actual realization. In keeping with this, he presents themes that critically evaluate political and corporate institutions such as money and capitalism. 
Film critic A. O. Scott has noted that Soderbergh has a critical interest in exploring the impact capitalist economies have on living an ethical life and the drawbacks associated with materialism. Money is central to many of his movies, as Soderbergh believes it serves as an obsession unrivaled by any other. Starting with "Out of Sight" (1998), Soderbergh's heist films explore themes of vengeance, characters on a mission, and the morality of crime. He is generally said to have a cinematic niche in these types of films. "I've always had an attraction to caper movies, and certainly there are analogies to making a film. You have to put the right crew together, and if you lose, you go to movie jail", the director noted in 2017. When asked to name the films he regards among the best, Soderbergh listed the following eleven, in order: "The 5,000 Fingers of Dr. T." (1953), "All the President's Men" (1976), "Annie Hall" (1977), "Citizen Kane" (1941), "The Conversation" (1974), "The Godfather" (1972), "The Godfather Part II" (1974), "Jaws" (1975), "The Last Picture Show" (1971), "Sunset Boulevard" (1950), and "The Third Man" (1949). His directorial debut, "Sex, Lies, and Videotape" (1989), was influenced by Mike Nichols' 1971 American comedy-drama "Carnal Knowledge". He has said that Peter Yates' 1972 crime comedy "The Hot Rock" inspired the tone of the "Ocean's" films. In 2018, Soderbergh officially launched his Bolivian grape-spirit brand, "Singani 63". In 2014, he had teamed up with Casa Real, a distillery based in Tarija, becoming the sole exporter of the spirit from the mountains of Bolivia. Soderbergh has worked with a variety of actors, composers, and screenwriters throughout his career as a filmmaker. His most prolific collaborators are considered to be George Clooney, Matt Damon, Brad Pitt, Julia Roberts, Don Cheadle, and Channing Tatum. 
Clooney started Section Eight Productions with Soderbergh and as of 2018 remains his most frequent collaborator, followed by Damon. Among those who have won awards for their work with Soderbergh, Roberts won an Academy Award for Best Actress for her lead role in "Erin Brockovich", and Benicio del Toro won an Academy Award for his work in "Traffic" and later starred in "Guerrilla" and "The Argentine". Catherine Zeta-Jones received a Golden Globe nomination for her portrayal of Helena in "Traffic" and reteamed with him for "Ocean's Twelve" and "Side Effects". Actor Joe Chrest worked with Soderbergh prolifically early in his career (1993–2009), starring in a total of eight of his films. Soderbergh has frequently relied on Jerry Weintraub to produce his films. Composer Cliff Martinez has scored ten Soderbergh films, starting with "Sex, Lies, and Videotape" (1989) and ending with "Contagion" (2011). Northern Irish composer David Holmes joined him in 1998 to score "Out of Sight" and rejoined him to score the "Ocean's" trilogy. Soderbergh rejected Holmes' score for his 2006 film "The Good German" but brought him back for subsequent movies, most recently "Logan Lucky" (2017). Starting in 2000, composer Thomas Newman has worked on four Soderbergh films, most recently "Unsane" in 2018. When not cutting his own films, he relies on editor Stephen Mirrione, and he frequently works with screenwriter Scott Z. Burns. Soderbergh is a vocal proponent of preserving artistic merit in the face of Hollywood corporatism. He believes that "cinema is under assault by the studios and, from what I can tell, with the full support of the audience". He claims that he no longer reads reviews of his movies. "After "Traffic" I just stopped completely," says the director. "After winning the LA and New York film critics awards, I really felt like, this can only get worse". 
Soderbergh claims not to be a fan of possessory credits and prefers not to have his name front and center at the start of a film. "The fact that I'm not an identifiable brand is very freeing," says Soderbergh, "because people get tired of brands and they switch brands. I've never had a desire to be out in front of anything, which is why I don't take a possessory credit." On April 5, 2009, Soderbergh appeared before the Foreign Affairs Committee of the House of Representatives and "cited the French initiative in asking lawmakers to deputize the American film industry to pursue copyright pirates," indicating that he supports anti-piracy laws and Internet regulation. Soderbergh is married to Jules Asner, whom he often credits for influencing his female characters. He has a daughter with his first wife, actress Betsy Brantley, and a daughter with Frances Anderson, an Australian woman. Soderbergh lives in New York City. Soderbergh's entire filmography is routinely analyzed and debated by fans, critics, film academics, and other film directors. His early work—particularly his 1989 film "Sex, Lies, and Videotape"—has been noted as foundational to the independent cinema movement. After directing his first film, Soderbergh's relative youth and sudden rise to prominence in the film industry had him referred to as a "sensation", a prodigy, and a poster boy of his generation of independent filmmakers. In 2002, he was elected first Vice President of the Directors Guild of America. After screening "Sex, Lies, and Videotape" at the 1989 Cannes Film Festival, Soderbergh was given the festival's top award, the Palme d'Or. At 26, he was the youngest solo director to win the award; only Louis Malle, who shared the prize with co-director Jacques Cousteau at age 23, was younger. At the 73rd Academy Awards, Soderbergh was nominated twice for Best Director for two separate films, the first occurrence of such an event since 1938. 
He lost for "Erin Brockovich" but won the award for "Traffic". When the same thing happened at the Directors Guild of America Awards, the Associated Press called the category a "Soderbergh vs. Soderbergh" contest. Critical, public and commercial reception to a selection of Soderbergh's directorial feature films is tabulated as of April 14, 2018. As of 2018, Soderbergh's entire feature filmography has grossed over US$2.2 billion worldwide. His entire "Ocean's" trilogy was named among the "75 Best Heist Movies of All Time" by Rotten Tomatoes. His film "Out of Sight" was listed as one of the best movies of the 1990s by "Rolling Stone".
Slovenia Slovenia, officially the Republic of Slovenia (Slovene: "Republika Slovenija", abbr. "RS"), is a country located in Europe at the crossroads of main European cultural and trade routes. It is bordered by Italy to the west, Austria to the north, Hungary to the northeast, Croatia to the southeast, and the Adriatic Sea to the southwest. Slovenia covers 20,271 square kilometres (7,827 sq mi) and has a population of 2.095 million. One of the successor states of the former Yugoslavia, Slovenia is now a parliamentary republic and a member of the European Union, the United Nations, and NATO. The capital and largest city is Ljubljana. Slovenia has a mostly mountainous terrain with a mainly continental climate, with the exception of the Slovene Littoral, which has a sub-Mediterranean climate, and of the Julian Alps in the northwest, which have an Alpine climate. Additionally, the Dinaric Alps and the Pannonian Plain meet on the territory of Slovenia. The country, marked by significant biological diversity, is one of the most water-rich in Europe, with a dense river network, a rich aquifer system, and significant karst underground watercourses. Over half of the territory is covered by forest. The human settlement of Slovenia is dispersed and uneven. Slovenia has historically been a crossroads of Slavic, Germanic, and Romance languages and cultures. Ethnic Slovenes comprise more than 80% of the population. The South Slavic language Slovene is the official language throughout the country. Slovenia is a largely secularized country, but Catholicism and Lutheranism have significantly influenced its culture and identity. The economy of Slovenia is small, open and export-oriented, and is thus strongly influenced by the conditions of its exporting partners' economies. This is especially true of Germany, Slovenia's biggest trade partner. Like most of the developed world, Slovenia was severely hurt by the Eurozone crisis beginning in 2009, but started to recover in 2014. 
The main economic driver for the country is the services industry, followed by manufacturing and construction. Historically, the territory of Slovenia has formed part of many different states, such as the Roman Empire, the Byzantine Empire, the Carolingian Empire, the Holy Roman Empire, the Kingdom of Hungary, the Republic of Venice, the French-administered Illyrian Provinces of the First French Empire of Napoleon I, the Austrian Empire, and Austria-Hungary. In October 1918, the Slovenes exercised self-determination for the first time by co-founding the State of Slovenes, Croats and Serbs. In December 1918 they merged with the Kingdom of Serbia into the Kingdom of Serbs, Croats and Slovenes (renamed the Kingdom of Yugoslavia in 1929). During World War II (1939–1945), Germany, Italy, and Hungary occupied and annexed the territories comprising today's Slovenia (1941–1945), with a tiny area transferred to the Independent State of Croatia, a Nazi puppet state. In 1945 Slovenia became a founding member of the Federal People's Republic of Yugoslavia, renamed the Socialist Federal Republic of Yugoslavia in 1963. In the first years after World War II, the state was allied with the Eastern Bloc, but after the Tito–Stalin split of 1948 it never subscribed to the Warsaw Pact, and in 1961 it became one of the founders of the Non-Aligned Movement. In June 1991, after the introduction of multi-party representative democracy, Slovenia became the first republic to split from Yugoslavia and become an independent sovereign state. In 2004 it entered NATO and the European Union; in 2007 it became the first formerly communist country to join the Eurozone; and in 2010 it joined the OECD, a global association of high-income developed countries. Slovenia is a high-income advanced economy with a very high Human Development Index. It ranks 12th in the inequality-adjusted human development index. 
Slovenia's name means the "Land of the Slavs" in Slovene and other South Slavic languages. It is thus a cognate of the words Slavonia, Slovakia and Slavia. The etymology of the word "Slav" itself remains uncertain. The reconstructed autonym is usually derived from the word "slovo" ("word"), originally denoting "people who speak (the same language)", i.e. people who understand each other. This is in contrast to the Slavic word for the German people, meaning "silent, mute people" (from a Slavic root for "mute, mumbling"). The word "slovo" ("word") and the related "slava" ("glory, fame") and "slukh" ("hearing") originate from a Proto-Indo-European root meaning "be spoken of, glory", cognate with the Ancient Greek word for "fame" (as in the name Pericles) and with the Latin for "be called". The modern Slovene state originates from the session of the Slovene National Liberation Committee (SNOS) held on 19 February 1944, which officially named the state "Federal Slovenia", a unit within the Yugoslav federation. On 20 February 1946, Federal Slovenia was renamed the "People's Republic of Slovenia" ("Ljudska republika Slovenija"). It retained this name until 9 April 1963, when it was changed again, this time to the "Socialist Republic of Slovenia". On 8 March 1990, SR Slovenia removed the prefix "Socialist" from its name, becoming the "Republic of Slovenia"; it remained a part of the SFRY until 25 June 1991. Present-day Slovenia has been inhabited since prehistoric times, with evidence of human habitation from around 250,000 years ago. A pierced cave bear bone, dating from 43,100 ± 700 BP and found in 1995 in the Divje Babe cave near Cerkno, is considered a kind of flute and possibly the oldest musical instrument discovered in the world. In the 1920s and 1930s, artifacts belonging to Cro-Magnons, such as pierced bones, bone points, and a needle, were found by archaeologist Srečko Brodar in Potok Cave. 
In 2002, remains of pile dwellings over 4,500 years old were discovered in the Ljubljana Marshes, now protected as a UNESCO World Heritage Site, along with the Ljubljana Marshes Wooden Wheel, the oldest wooden wheel in the world. It shows that wooden wheels appeared almost simultaneously in Mesopotamia and Europe. In the transition period from the Bronze Age to the Iron Age, the Urnfield culture flourished. Archaeological remains dating from the Hallstatt period have been found, particularly in southeastern Slovenia, among them a number of situlas in Novo Mesto, the "Town of Situlas". In the Iron Age, present-day Slovenia was inhabited by Illyrian and Celtic tribes until the 1st century BC. In Roman times the area of present-day Slovenia was shared between "Venetia et Histria" (region X of Roman Italia in the classification of Augustus) and the provinces of Pannonia and Noricum. The Romans established posts at Emona (Ljubljana), Poetovio (Ptuj), and Celeia (Celje), and constructed trade and military roads that ran across Slovene territory from Italy to Pannonia. In the 5th and 6th centuries, the area was subject to invasions by the Huns and Germanic tribes during their incursions into Italy. Part of the interior was protected by a defensive line of towers and walls called the "Claustra Alpium Iuliarum". A crucial battle between Theodosius I and Eugenius took place in the Vipava Valley in 394. The Slavic tribes migrated to the Alpine area after the westward departure of the Lombards (the last Germanic tribe) in 568 and, under pressure from the Avars, established a Slavic settlement in the Eastern Alps. From 623 to 624, or possibly from 626 onwards, King Samo united the Alpine and Western Slavs against the Avars and Germanic peoples and established what is referred to as Samo's Kingdom. 
After its disintegration following Samo's death in 658 or 659, the ancestors of the Slovenes located in present-day Carinthia formed the independent duchy of Carantania, as well as Carniola, later the Duchy of Carniola. Other parts of present-day Slovenia were again ruled by the Avars before Charlemagne's victory over them in 803. The Carantanians, one of the ancestral groups of the modern Slovenes, particularly the Carinthian Slovenes, were the first Slavic people to accept Christianity. They were mostly Christianized by Irish missionaries, among them Modestus, known as the "Apostle of Carantanians". This process, together with the Christianization of the Bavarians, was later described in the memorandum known as the Conversio Bagoariorum et Carantanorum, which is thought to have overemphasized the role of the Church of Salzburg in the Christianization process over the similar efforts of the Patriarchate of Aquileia. In the mid-8th century, Carantania became a vassal duchy under the rule of the Bavarians, who began spreading Christianity. Three decades later, the Carantanians were incorporated, together with the Bavarians, into the Carolingian Empire. During the same period Carniola, too, came under the Franks and was Christianized from Aquileia. Following the anti-Frankish rebellion of Liudewit at the beginning of the 9th century, the Franks removed the Carantanian princes, replacing them with their own border dukes. Consequently, the Frankish feudal system reached the Slovene territory. After the victory of Emperor Otto I over the Magyars in 955, Slovene territory was divided into a number of border regions of the Holy Roman Empire. Carantania, being the most important, was elevated into the Duchy of Carinthia in 976. 
By the 11th century, the Germanization of what is now Lower Austria effectively isolated the Slovene-inhabited territory from the other western Slavs, speeding up the development of the Slavs of Carantania and of Carniola into an independent Carantanian/Carniolan/Slovene ethnic group. By the late Middle Ages, the historic provinces of Carniola, Styria, Carinthia, Gorizia, Trieste, and Istria had developed from the border regions and been incorporated into the medieval German state. The consolidation and formation of these historical lands took place over a long period between the 11th and 14th centuries and was led by a number of important feudal families, such as the Dukes of Spannheim, the Counts of Gorizia, the Counts of Celje, and, finally, the House of Habsburg. In a parallel process, intensive German colonization significantly diminished the extent of Slovene-speaking areas; by the 15th century, the Slovene ethnic territory had been reduced to its present size. In the 14th century, most of the territory of present-day Slovenia was taken over by the Habsburgs. The Hungarian Záh clan, which administered the territories connecting Slovenia with Slovakia and Moravia, was exterminated in 1330, and the Slovenes permanently lost the connection with their Slovak kinsmen. The Counts of Celje, a feudal family from this area who in 1436 acquired the title of state princes, were the Habsburgs' powerful competitors for some time. This large dynasty, important at a European political level, had its seat in Slovene territory but died out in 1456. Its numerous large estates subsequently became the property of the Habsburgs, who retained control of the area right up until the beginning of the 20th century. The Patria del Friuli ruled present-day western Slovenia until the Venetian takeover in 1420. At the end of the Middle Ages, the Slovene Lands suffered a serious economic and demographic setback because of the Turkish raids. In 1515, a peasant revolt spread across nearly the whole Slovene territory. 
In 1572 and 1573 the Croatian–Slovenian peasant revolt wrought havoc throughout the wider region. Such uprisings, which often met with bloody defeats, continued throughout the 17th century. The Republic of Venice was dissolved by France in 1797, and Venetian Slovenia passed to the Austrian Empire. The Slovene Lands were subsequently part of the French-administered Illyrian Provinces established by Napoleon, the Austrian Empire, and Austria-Hungary. Slovenes inhabited most of Carniola, the southern part of the duchies of Carinthia and Styria, the northern and eastern areas of the Austrian Littoral, as well as Prekmurje in the Kingdom of Hungary. Industrialization was accompanied by the construction of railroads to link cities and markets, but urbanization was limited. Owing to limited opportunities, between 1880 and 1910 there was extensive emigration, and around 300,000 Slovenes (i.e. one in six) emigrated to other countries, mostly to the United States, but also to South America (mainly Argentina), Germany, Egypt, and larger cities in Austria-Hungary, especially Vienna and Graz. The area of the United States with the highest concentration of Slovenian immigrants is Cleveland, Ohio. The other locations in the United States where many Slovenians settled were areas with substantial industrial and mining activity: Pittsburgh, Chicago, Pueblo, Butte, northern Minnesota, and the Salt Lake Valley. Slovene men were valued as workers in the mining industry because of skills they brought from Slovenia. Despite this emigration, the population of Slovenia increased significantly. Literacy was exceptionally high, at 80–90%. The 19th century also saw a revival of culture in the Slovene language, accompanied by a Romantic nationalist quest for cultural and political autonomy. The idea of a United Slovenia, first advanced during the revolutions of 1848, became the common platform of most Slovenian parties and political movements in Austria-Hungary. 
During the same period, Yugoslavism, an ideology stressing the unity of all South Slavic peoples, spread as a reaction to Pan-German nationalism and Italian irredentism. World War I brought heavy casualties to Slovenes, particularly in the twelve Battles of the Isonzo, which took place in present-day Slovenia's western border area with Italy. Hundreds of thousands of Slovene conscripts were drafted into the Austro-Hungarian Army, and over 30,000 of them died. Hundreds of thousands of Slovenes from Gorizia and Gradisca were resettled in refugee camps in Italy and Austria. While the refugees in Austria received decent treatment, the Slovene refugees in Italian camps were treated as state enemies, and several thousand died of malnutrition and disease between 1915 and 1918. Entire areas of the Slovene Littoral were destroyed. The Treaty of Rapallo of 1920 left approximately 327,000 out of the total population of 1.3 million Slovenes in Italy. After the Fascists took power in Italy, these Slovenes were subjected to a policy of violent Fascist Italianization. This caused the mass emigration of Slovenes, especially of the middle class, from the Slovenian Littoral and Trieste to Yugoslavia and South America. Those who remained organized several connected networks of both passive and armed resistance. The best known was the militant anti-fascist organization TIGR, formed in 1927 to fight Fascist oppression of the Slovene and Croat populations in the Julian March. During World War I, the Slovene People's Party had launched a movement for self-determination, demanding the creation of a semi-independent South Slavic state under Habsburg rule. The proposal was picked up by most Slovene parties, and a mass mobilization of Slovene civil society, known as the Declaration Movement, followed.
This demand was rejected by the Austrian political elites; but following the dissolution of the Austro-Hungarian Empire in the aftermath of the First World War, the National Council of Slovenes, Croats and Serbs took power in Zagreb on 6 October 1918. On 29 October, independence was declared by a national gathering in Ljubljana and by the Croatian parliament, establishing the new State of Slovenes, Croats, and Serbs. On 1 December 1918, the State of Slovenes, Croats and Serbs merged with Serbia, becoming part of the new Kingdom of Serbs, Croats, and Slovenes; in 1929 it was renamed the Kingdom of Yugoslavia. The main territory of Slovenia, the most industrialized and westernized part compared to the other, less developed parts of Yugoslavia, became the main center of industrial production: compared to Serbia, for example, Slovenian industrial production was four times greater, and it was 22 times greater than in North Macedonia. The interwar period brought further industrialization to Slovenia, with rapid economic growth in the 1920s, followed by a relatively successful economic adjustment to the 1929 economic crisis and the Great Depression. Following a plebiscite in October 1920, the Slovene-speaking southern part of Carinthia was ceded to Austria. With the Treaty of Trianon, on the other hand, the Kingdom of Yugoslavia was awarded the Slovene-inhabited Prekmurje region, formerly part of Austria-Hungary. Slovenes living in territories that fell under the rule of the neighboring states—Italy, Austria, and Hungary—were subjected to assimilation. Slovenia was the only present-day European nation that was trisected and completely annexed by both Nazi Germany and Fascist Italy during World War II. In addition, the Prekmurje region in the east was annexed to Hungary, and some villages in the Lower Sava Valley were incorporated into the newly created Nazi puppet Independent State of Croatia (NDH).
Axis forces invaded Yugoslavia in April 1941 and defeated the country in a few weeks. The southern part, including Ljubljana, was annexed by Italy, while the Nazis took over the northern and eastern parts of the country. The Nazis had a plan of ethnic cleansing of these areas, and they resettled or expelled the local Slovene civilian population to the puppet states of Nedić's Serbia (7,500) and the NDH (10,000). In addition, some 46,000 Slovenes were expelled to Germany, including children who were separated from their parents and allocated to German families. At the same time, the ethnic Germans in the Gottschee enclave in the Italian annexation zone were resettled to Nazi-controlled areas cleansed of their Slovene population. Around 30,000 to 40,000 Slovene men were drafted into the German Army and sent to the Eastern Front. The Slovene language was banned from education, and its use in public life was limited to the absolute minimum. In south-central Slovenia, annexed by Fascist Italy and renamed the Province of Ljubljana, the Slovenian National Liberation Front was organized in April 1941. Led by the Communist Party, it formed the Slovene Partisan units as part of the Yugoslav Partisans led by the Communist leader Josip Broz Tito. After the resistance started in summer 1941, Italian violence against the Slovene civilian population escalated as well. The Italian authorities deported some 25,000 people, equal to 7.5% of the population of their occupation zone, to concentration camps, the most infamous of which were Rab and Gonars. To counter the Communist-led insurgency, the Italians sponsored local anti-guerrilla units, formed mostly by the local conservative Catholic Slovene population that resented the revolutionary violence of the partisans. After the Italian armistice of September 1943, the Germans took over both the Province of Ljubljana and the Slovenian Littoral, incorporating them into what was known as the Operation Zone of the Adriatic Coastal Region.
They united the Slovene anti-Communist counter-insurgency into the Slovene Home Guard and appointed a puppet regime in the Province of Ljubljana. The anti-Nazi resistance, however, expanded, creating its own administrative structures as the basis for Slovene statehood within a new, federal and socialist Yugoslavia. In 1945, Yugoslavia was liberated by the partisan resistance and soon became a socialist federation known as the People's Federal Republic of Yugoslavia. Slovenia joined the federation as a constituent republic, led by its own pro-Communist leadership. Approximately 8% of the entire Slovene population died during World War II. The small Jewish community, mostly settled in the Prekmurje region, perished in 1944 in the Holocaust of Hungarian Jews. The German-speaking minority, amounting to 2.5% of the Slovenian population before the war, was either expelled or killed in its aftermath. Hundreds of Istrian Italians and Slovenes who opposed communism were killed in the foibe massacres, and more than 25,000 fled or were expelled from Slovenian Istria in the aftermath of the war. Following the re-establishment of Yugoslavia during World War II, Slovenia became part of Federal Yugoslavia. A socialist state was established, but because of the Tito–Stalin split in 1948, economic and personal freedoms were broader than in the Eastern Bloc countries. In 1947, the Slovene Littoral and the western half of Inner Carniola, which had been annexed by Italy after World War I, were joined to Slovenia. After the failure of forced collectivisation, attempted from 1949 to 1953, a policy of gradual economic liberalisation, known as workers' self-management, was introduced under the advice and supervision of the Slovene Marxist theoretician and Communist leader Edvard Kardelj, the main ideologue of the Titoist path to socialism.
Suspected opponents of this policy, both from within and outside the Communist Party, were persecuted, and thousands were sent to Goli otok. The late 1950s saw a policy of liberalisation in the cultural sphere as well, and limited border crossing into neighboring Italy and Austria was allowed again. Until the 1980s, Slovenia enjoyed relatively broad autonomy within the federation. In 1956, Josip Broz Tito, together with other leaders, founded the Non-Aligned Movement. Particularly in the 1950s, Slovenia's economy developed rapidly and was strongly industrialised. After the further economic decentralisation of Yugoslavia in 1965–66, Slovenia's domestic product was 2.5 times the average of the Yugoslav republics. Opposition to the regime was mostly limited to intellectual and literary circles, and became especially vocal after Tito's death in 1980, when the economic and political situation in Yugoslavia became very strained. Political disputes around economic measures were echoed in public sentiment, as many Slovenians felt they were being economically exploited, having to sustain an expensive and inefficient federal administration. In 1987, a group of intellectuals demanded Slovene independence in the 57th edition of the magazine "Nova revija", sparking demands for democratisation and greater Slovenian independence. A mass democratic movement, coordinated by the Committee for the Defence of Human Rights, pushed the Communists in the direction of democratic reforms. In September 1989, numerous constitutional amendments were passed to introduce parliamentary democracy to Slovenia. On 7 March 1990, the Slovenian Assembly changed the official name of the state to the "Republic of Slovenia". In April 1990, the first democratic election in Slovenia took place, and the united opposition movement DEMOS, led by Jože Pučnik, emerged victorious.
The initial revolutionary events in Slovenia pre-dated the Revolutions of 1989 in Eastern Europe by almost a year, but went largely unnoticed by international observers. On 23 December 1990, more than 88% of the electorate voted for a sovereign and independent Slovenia. On 25 June 1991, Slovenia became independent through the passage of appropriate legal documents. On 27 June in the early morning, the Yugoslav People's Army dispatched its forces to prevent further measures for the establishment of a new country, which led to the Ten-Day War. On 7 July, the Brijuni Agreement was signed, implementing a truce and a three-month halt of the enforcement of Slovenia's independence. At the end of the month, the last soldiers of the Yugoslav Army left Slovenia. In December 1991, a new constitution was adopted, followed in 1992 by the laws on denationalisation and privatization. The members of the European Union recognised Slovenia as an independent state on 15 January 1992, and the United Nations accepted it as a member on 22 May 1992. Slovenia joined the European Union on 1 May 2004. Slovenia has one Commissioner in the European Commission, and seven Slovene parliamentarians were elected to the European Parliament at elections on 13 June 2004. In 2004 Slovenia also joined NATO. Slovenia subsequently succeeded in meeting the Maastricht criteria and joined the Eurozone (the first transition country to do so) on 1 January 2007. It was the first post-Communist country to hold the Presidency of the Council of the European Union, for the first six months of 2008. On 21 July 2010, it became a member of the OECD. The disillusionment with domestic socio-economic elites at municipal and national levels was expressed at the 2012–2013 Slovenian protests on a wider scale than in the smaller 15 October 2011 protests. 
In relation to the leading politicians' response to allegations made by the official Commission for the Prevention of Corruption of the Republic of Slovenia, legal experts expressed the need for changes in the system that would limit political arbitrariness. Slovenia is situated in Central and Southeastern Europe touching the Alps and bordering the Mediterranean. It lies between latitudes 45° and 47° N, and longitudes 13° and 17° E. The 15th meridian east almost corresponds to the middle line of the country in the direction west–east. The Geometrical Center of the Republic of Slovenia is located at coordinates 46°07'11.8" N and 14°48'55.2" E. It lies in Slivna in the Municipality of Litija. Slovenia's highest peak is Triglav (); the country's average height above sea level is . Four major European geographic regions meet in Slovenia: the Alps, the Dinarides, the Pannonian Plain, and the Mediterranean. Although on the shore of the Adriatic Sea near the Mediterranean Sea, most of Slovenia is in the Black Sea drainage basin. The Alps—including the Julian Alps, the Kamnik-Savinja Alps and the Karawank chain, as well as the Pohorje massif—dominate Northern Slovenia along its long border with Austria. Slovenia's Adriatic coastline stretches approximately from Italy to Croatia. The term "Karst topography" refers to that of southwestern Slovenia's Karst Plateau, a limestone region of underground rivers, gorges, and caves, between Ljubljana and the Mediterranean. On the Pannonian plain to the East and Northeast, toward the Croatian and Hungarian borders, the landscape is essentially flat. However, the majority of Slovenian terrain is hilly or mountainous, with around 90% of the surface or more above sea level. Over half of the country () is covered by forests. This makes Slovenia the third most forested country in Europe, after Finland and Sweden. The areas are covered mostly by beech, fir-beech and beech-oak forests and have a relatively high production capacity. 
Remnants of primeval forests are still to be found, the largest in the Kočevje area. Grassland covers and fields and gardens (). There are of orchards and of vineyards. Slovenia lies in a rather active seismic zone because of its position on the small Adriatic Plate, which is squeezed between the Eurasian Plate to the north and the African Plate to the south and rotates counter-clockwise. Thus the country is at the junction of three important geotectonic units: the Alps to the north, the Dinaric Alps to the south, and the Pannonian Basin to the east. Scientists have been able to identify 60 destructive earthquakes in the past. Additionally, a network of seismic stations is active throughout the country. Many parts of Slovenia have carbonate bedrock, and an extensive subterranean cave system has developed. The first regionalisations of Slovenia were made by the geographers Anton Melik (1935–1936) and Svetozar Ilešič (1968). A newer regionalisation by Ivan Gams divided Slovenia into macroregions; according to the newest natural geographic regionalisation, the country consists of four macroregions: the Alpine, the Mediterranean, the Dinaric, and the Pannonian landscapes. Macroregions are defined according to major relief units (the Alps, the Pannonian Plain, the Dinaric Mountains) and climate types (sub-Mediterranean, temperate continental, mountain climate). These are often quite interwoven. Protected areas of Slovenia include national parks, regional parks, and nature parks, the largest of which is Triglav National Park. There are 286 Natura 2000 designated protected areas, which comprise 36% of the country's land area, the largest share among European Union states. Additionally, according to Yale University's Environmental Performance Index, Slovenia is considered a "strong performer" in environmental protection efforts. Slovenia is located in temperate latitudes.
The climate is also influenced by the variety of relief and by the influence of the Alps and the Adriatic Sea. In the northeast, the continental climate type, with the greatest difference between winter and summer temperatures, prevails. In the coastal region, there is a sub-Mediterranean climate. The effect of the sea on temperatures is also visible up the Soča Valley, while a severe Alpine climate is present in the high mountain regions. There is a strong interaction between these three climatic systems across most of the country. Precipitation, often coming from the Gulf of Genoa, varies across the country as well, with over in some western regions and dropping down to in Prekmurje. Snow is quite frequent in winter, and the record snow cover in Ljubljana was recorded in 1952 at . Compared to Western Europe, Slovenia is not very windy, because it lies in the slipstream of the Alps. The average wind speeds are lower than in the plains of the nearby countries. Due to the rugged terrain, local vertical winds with daily periods are present. Besides these, there are three winds of particular regional importance: the bora, the jugo, and the foehn. The jugo and the bora are characteristic of the Littoral: whereas the jugo is humid and warm, the bora is usually cold and gusty. The foehn is typical of the Alpine regions in the north of Slovenia. Also generally present in Slovenia are the northeast wind, the southeast wind, and the north wind. The territory of Slovenia belongs mainly (81%) to the Black Sea basin, and a smaller part (19%) to the Adriatic Sea basin. These two parts are divided into smaller units with regard to their central rivers: the Mura River basin, the Drava River basin, the Sava River basin with the Kolpa River basin, and the basin of the Adriatic rivers. In comparison with other developed countries, water quality in Slovenia is considered to be among the highest in Europe.
One of the reasons is undoubtedly that most of the rivers rise on the mountainous territory of Slovenia. But this does not mean that Slovenia has no problems with surface water and groundwater quality, especially in areas with intensive farming. Slovenia signed the Rio Convention on Biological Diversity on 13 June 1992 and became a party to the convention on 9 July 1996. It subsequently produced a National Biodiversity Strategy and Action Plan, which was received by the convention on 30 May 2002. Slovenia is distinguished by an exceptionally wide variety of habitats, due to the contact of geological units and biogeographical regions, and due to human influences. Around 12.5% of the territory is protected with 35.5% in the Natura 2000 ecological network. Despite this, because of pollution and environmental degradation, diversity has been in decline. The biological diversity of the country is high, with 1% of the world's organisms on 0.004% of the Earth's surface area. There are 75 mammal species, among them marmots, Alpine ibex, and chamois. There are numerous deer, roe deer, boar, and hares. The edible dormouse is often found in the Slovenian beech forests. Trapping these animals is a long tradition and is a part of the Slovenian national identity. Some important carnivores include the Eurasian lynx, European wild cats, foxes (especially the red fox), and European jackal. There are hedgehogs, martens, and snakes such as vipers and grass snakes. According to recent estimates, Slovenia has c. 40–60 wolves and about 450 brown bears. Slovenia is home to an exceptionally diverse number of cave species, with a few tens of endemic species. Among the cave vertebrates, the only known one is the olm, living in Karst, Lower Carniola, and White Carniola. The only regular species of cetaceans found in the northern Adriatic sea is the bottlenose dolphin ("Tursiops truncatus"). 
There are a wide variety of birds, such as the tawny owl, the long-eared owl, the eagle owl, hawks, and short-toed eagles. Other birds of prey have been recorded, as well as a growing number of ravens, crows, and magpies migrating into Ljubljana and Maribor, where they thrive. Other birds include black and green woodpeckers and the white stork, which nests mainly in Prekmurje. There are 13 domestic animal breeds native to Slovenia, belonging to eight species (hen, pig, dog, horse, sheep, goat, honey bee, and cattle). Among these are the Karst Shepherd, the Carniolan honeybee, and the Lipizzan horse. They have been preserved both ex situ and in situ. The marble trout or marmorata ("Salmo marmoratus") is an indigenous Slovenian fish. Extensive breeding programmes have been introduced to repopulate the marble trout in lakes and streams invaded by non-indigenous species of trout. Slovenia is also home to the wels catfish. More than 2,400 fungal species have been recorded in Slovenia and, since that figure does not include lichen-forming fungi, the total number of Slovenian fungi already known is undoubtedly much higher; many more remain to be discovered. Slovenia is the third most-forested country in Europe, with 58.3% of the territory covered by forests. The forests are an important natural resource, and logging is kept to a minimum. In the interior of the country are typical Central European forests, predominantly oak and beech. In the mountains, spruce, fir, and pine are more common. Pine trees grow on the Karst Plateau, although only one-third of the region is covered by pine forest. The lime/linden tree, common in Slovenian forests, is a national symbol. The tree line is at . In the Alps, flowers such as "Daphne blagayana", gentians ("Gentiana clusii", "Gentiana froelichi"), "Primula auricula", edelweiss (the symbol of Slovene mountaineering), "Cypripedium calceolus", "Fritillaria meleagris" (snake's head fritillary), and "Pulsatilla grandis" are found.
Slovenia harbors many plants of ethnobotanically useful groups. Of 59 known species of ethnobotanical importance, some, such as "Aconitum napellus", "Cannabis sativa" and "Taxus baccata", are restricted for use under the Official Gazette of the Republic of Slovenia. Slovenia is a parliamentary democratic republic with a multi-party system. The head of state is the president, who is elected by popular vote and has an important integrative role. The president is elected for five years, for at most two consecutive terms. He or she has a mainly representative role and is the commander-in-chief of the Slovenian armed forces. The executive and administrative authority in Slovenia is held by the Government of Slovenia, headed by the Prime Minister and the council of ministers, or cabinet, who are elected by the National Assembly. The legislative authority is held by the bicameral Parliament of Slovenia, characterised by an asymmetric duality. The bulk of power is concentrated in the National Assembly, which consists of ninety members. Of those, 88 are elected by all citizens in a system of proportional representation, whereas two are elected by the registered members of the autochthonous Hungarian and Italian minorities. Elections take place every four years. The National Council, consisting of forty members appointed to represent social, economic, professional, and local interest groups, has limited advisory and control powers. The 1992–2004 period was marked by the rule of the Liberal Democracy of Slovenia, which was responsible for the gradual transition from the Titoist economy to a capitalist market economy. It later attracted much criticism from neo-liberal economists, who demanded a less gradual approach. The party's president Janez Drnovšek, who served as prime minister between 1992 and 2002, was one of the most influential Slovenian politicians of the 1990s, alongside President Milan Kučan (who served between 1990 and 2002).
The 2005–2008 period was characterized by over-enthusiasm after joining the EU. During the first term of Janez Janša's government, for the first time since independence, the Slovenian banks saw their loan-to-deposit ratios veering out of control. There was over-borrowing from foreign banks and then over-crediting of customers, including local business magnates. After the onset of the financial crisis of 2007–2010 and the European sovereign-debt crisis, the left-wing coalition that replaced Janša's government after the 2008 elections had to face the consequences of the 2005–2008 over-borrowing. Attempts to implement reforms that would help economic recovery were met by student protesters, led by a student who later became a member of Janez Janša's SDS, and by the trade unions. The proposed reforms were postponed in a referendum, and the left-wing government was ousted by a vote of no confidence. Janez Janša attributed the boom in spending and over-borrowing to the period of left-wing government; he then proposed harsh austerity reforms which he had previously helped postpone. More generally, some economists hold that both the left and right parties contributed to over-lending and to managerial takeovers, as each bloc tried to establish an economic elite that would support its political forces. Judicial powers in Slovenia are exercised by judges, who are elected by the National Assembly. Judicial power in Slovenia is implemented by courts with general responsibilities and by specialised courts that deal with matters relating to specific legal areas. The State Prosecutor is an independent state authority responsible for prosecuting cases brought against those suspected of committing criminal offences. The Constitutional Court, composed of nine judges elected for nine-year terms, decides on the conformity of laws with the Constitution; all laws and regulations must also conform with the general principles of international law and with ratified international agreements.
Officially, Slovenia is subdivided into 212 municipalities (eleven of which have the status of urban municipalities). The municipalities are the only bodies of local autonomy in Slovenia. Each municipality is headed by a mayor ("župan"), elected every four years by popular vote, and a municipal council ("občinski svet"). In the majority of municipalities, the municipal council is elected through the system of proportional representation; only a few smaller municipalities use the plurality voting system. In the urban municipalities, the municipal councils are called town (or city) councils. Every municipality also has a Head of the Municipal Administration ("načelnik občinske uprave"), appointed by the mayor, who is responsible for the functioning of the local administration. There is no official intermediate unit between the municipalities and the Republic of Slovenia. The 62 administrative districts, officially called "Administrative Units" ("upravne enote"), are only subdivisions of the national government administration and are named after the respective seats of their government offices. They are headed by a Manager of the Unit ("načelnik upravne enote"), appointed by the Minister of Public Administration. Traditional regions were based on the former Habsburg crown lands, which included Carniola, Carinthia, Styria, and the Littoral. Slovenes have historically tended to identify more strongly with the traditional regions of the Slovene Littoral and Prekmurje, and even with traditional (sub)regions such as Upper, Lower and, to a lesser extent, Inner Carniola, than with Carniola as a whole or with Slovenia as a state. The capital city Ljubljana was historically the administrative center of Carniola and belonged to Inner Carniola, except for the Šentvid district, which was in Upper Carniola; this district was also where the border between the German-annexed territory and the Italian Province of Ljubljana ran during the Second World War.
The 12 "statistical regions" have no administrative function and are grouped into two macroregions for the purposes of the regional policy of the European Union. The Slovenian Armed Forces provide military defence independently or within an alliance, in accordance with international agreements. Since conscription was abolished in 2003, they have been organized as a fully professional standing army. The Commander-in-Chief is the President of the Republic of Slovenia, while operational command is in the domain of the Chief of the General Staff of the Slovenian Armed Forces. In 2016, military spending was an estimated 0.91% of the country's GDP. Since joining NATO, the Slovenian Armed Forces have taken a more active part in supporting international peace, participating in peace-support operations and humanitarian activities. Among others, Slovenian soldiers are part of international forces serving in Bosnia and Herzegovina, Kosovo, and Afghanistan. Slovenia has a developed economy and is per capita the richest of the Slavic countries by nominal GDP, and the second richest by GDP (PPP), behind the Czech Republic. Slovenia is also among the top global economies in terms of human capital. At the beginning of 2007, Slovenia became the first new member state to introduce the euro as its currency, replacing the tolar. Since 2010, it has been a member of the Organisation for Economic Co-operation and Development. There are considerable differences in prosperity between the various regions. The economically wealthiest are the Central Slovenia region, which includes the capital Ljubljana, and the western Slovenian regions such as Goriška and Coastal–Karst, while the least wealthy regions are the Mura, the Central Sava, and the Littoral–Inner Carniola. In 2004–06, the Slovenian economy grew on average by nearly 5% a year; in 2007, it expanded by almost 7%. The growth surge was fuelled by debt, particularly among firms, and especially in construction.
The financial crisis of 2007–2010 and the European sovereign-debt crisis had a significant impact on the domestic economy. The construction industry was severely hit in 2010 and 2011. In 2009, Slovenian GDP per capita shrank by 8%, the biggest decline in the European Union after the Baltic countries and Finland. An increasing burden for the Slovenian economy has been its rapidly ageing population. In August 2012, the year-on-year contraction was 0.8%; however, 0.2% growth was recorded in the first quarter relative to the quarter before, after the data was adjusted for the season and working days. The year-on-year contraction has been attributed to the fall in domestic consumption and the slowdown in export growth. The decrease in domestic consumption has been attributed to fiscal austerity, to the freeze on budget expenditure in the final months of 2011, to the failure of the efforts to implement economic reforms, to inappropriate financing, and to the decrease in exports. Due to the effects of the crisis, it was expected that several banks would have to be bailed out with EU funds in 2013; however, the needed capital was covered from the country's own funds. Fiscal measures and legislation aimed at reducing spending, together with several privatisations, supported an economic recovery from 2014 onward. The real economic growth rate was 2.5% in 2016 and accelerated to 5% in 2017. The construction sector has seen a recent increase, and the tourism industry is expected to see continually rising numbers. Slovenia's total national debt rose substantially during the Great Recession and has since been decreasing; at the end of 2018 it amounted to 32,223 million euros, or 70% of GDP. Almost two-thirds of people are employed in services, and over one-third in industry and construction. Slovenia benefits from a well-educated workforce, well-developed infrastructure, and its location at the crossroads of major trade routes.
The level of foreign direct investment (FDI) per capita in Slovenia is one of the lowest in the EU, and the labor productivity and competitiveness of the Slovenian economy are still significantly below the EU average. Taxes are relatively high, the labor market is seen by business interests as inflexible, and industries are losing sales to China, India, and elsewhere. Its high level of openness makes Slovenia extremely sensitive to economic conditions in its main trading partners and to changes in its international price competitiveness. The main industries are motor vehicles, electric and electronic equipment, machinery, pharmaceuticals, and fuels. Examples of major Slovenian companies operating in Slovenia include the home appliance manufacturer Gorenje, the pharmaceutical companies Krka and Lek (a Novartis subsidiary), the oil distribution company Petrol Group, the energy distribution company GEN-I, and Revoz, a manufacturing subsidiary of Renault. In 2018, the net energy production was 12,262 GWh and consumption was 14,501 GWh. Hydroelectric plants produced 4,421 GWh, thermal plants produced 4,049 GWh, and the Krško Nuclear Power Plant produced 2,742 GWh (the 50% share that goes to Slovenia; the other 50% goes to Croatia due to joint ownership). Domestic production covered 84.6% of domestic electricity consumption; this share has been decreasing from year to year, meaning that Slovenia depends more and more on electricity imports. A new 600 MW block of the Šoštanj thermal power plant finished construction and went online in the autumn of 2014. The new 39.5 MW HE Krško hydro power plant was finished in 2013 and has since been the largest sole energy producer, accounting for of the gross energy production in 2018. The 41.5 MW HE Brežice and 30.5 MW HE Mokrice hydro power plants were built on the Sava River in 2018, and the construction of ten more hydropower plants with a cumulative capacity of 338 MW is planned to be finished by 2030.
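The 84.6% coverage figure follows directly from the stated production and consumption totals; as a quick arithmetic sketch in Python (GWh values taken from the text above):

```python
# 2018 electricity balance figures from the text, in GWh
production = 12_262   # net domestic production
consumption = 14_501  # domestic consumption

# share of consumption covered by domestic production
coverage = production / consumption * 100
print(f"{coverage:.1f}%")  # 84.6%, matching the figure in the text
```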
A large pumped-storage hydro power plant, Kozjak on the Drava River, is in the planning stage. At the end of 2018, at least 295 MWp of photovoltaic modules and 31.4 MW of biogas power plants were installed. Compared to 2017, renewable energy sources contributed 5.6 percentage points more to overall energy consumption. There is interest in adding more production from solar and wind sources (subsidy schemes are increasing their economic feasibility), but micro-location siting procedures take an enormous toll on the efficiency of this initiative (the dilemma of nature preservation versus energy production facilities). Slovenia offers tourists a wide variety of natural and cultural amenities, and different forms of tourism have developed. The tourist gravitational area is considerably large, but the tourist market is small. There has been no large-scale tourism and no acute environmental pressures; in 2017, National Geographic Traveller magazine declared Slovenia the country with the world's most sustainable tourism. The nation's capital, Ljubljana, has many important Baroque and Vienna Secession buildings, with several important works by the native-born architect Jože Plečnik and his pupil, the architect Edvard Ravnikar. At the northwestern corner of the country lie the Julian Alps with Lake Bled and the Soča Valley, as well as the nation's highest peak, Mount Triglav, in the middle of Triglav National Park. Other mountain ranges include the Kamnik–Savinja Alps, the Karawanks, and Pohorje, popular with skiers and hikers. The Karst Plateau in the Slovene Littoral gave its name to karst, a landscape shaped by water dissolving the carbonate bedrock, forming caves. The best-known caves are Postojna Cave and the UNESCO-listed Škocjan Caves. The region of Slovenian Istria meets the Adriatic Sea, where the most important historical monument is the Venetian Gothic Mediterranean town of Piran, while the settlement of Portorož attracts crowds in summer. 
The hills around Slovenia's second-largest town, Maribor, are renowned for their wine-making. The northeastern part of the country is rich in spas, with Rogaška Slatina, Radenci, Čatež ob Savi, Dobrna, and Moravske Toplice growing in importance over the last two decades. Other popular tourist destinations include the historic cities of Ptuj and Škofja Loka, and several castles, such as Predjama Castle. Important parts of tourism in Slovenia include congress and gambling tourism. Slovenia has the highest number of casinos per 1,000 inhabitants in the European Union; Perla in Nova Gorica is the largest casino in the region. Most foreign tourists to Slovenia come from the key European markets: Italy, Austria, Germany, Croatia, the Benelux countries, Serbia, Russia and Ukraine, followed by the UK and Ireland. European tourists generate more than 90% of Slovenia's tourist income. In 2016, Slovenia was declared the world's first green country by the Netherlands-based organization Green Destinations; after being named the most sustainable country that year, Slovenia played a prominent role at ITB Berlin in promoting sustainable tourism. Since antiquity, geography has dictated transport routes in Slovenia. Significant mountain ranges, major rivers and proximity to the Danube played roles in the development of the area's transportation corridors. A particular recent advantage lies in the Pan-European transport corridors V (the fastest link between the North Adriatic, and Central and Eastern Europe) and X (linking Central Europe with the Balkans), which give Slovenia a special position in European social, economic and cultural integration and restructuring. Road freight and passenger transport constitutes the largest share of transport in Slovenia, at 80%. Personal cars are much more popular than public road passenger transport, which has declined significantly. Slovenia has a very high highway and motorway density compared to the European Union average. 
The highway system, the construction of which was accelerated after 1994, has slowly but steadily transformed Slovenia into a large conurbation. Other state roads have been deteriorating rapidly because of neglect and the overall increase in traffic. The existing Slovenian railways are out of date and cannot compete with the motorway network, partially as a result of dispersed population settlement. Because of this, and the projected increase in traffic through the port of Koper, which is moved primarily by train, a second track on the Koper–Divača line is in the early stages of construction. For lack of financial assets, maintenance and modernisation of the Slovenian railway network have been neglected. Owing to the out-of-date infrastructure, the share of railway freight transport has been in decline in Slovenia. Railway passenger transport has been recovering after a large drop in the 1990s. The Pan-European railway corridors V and X, and several other major European rail lines, intersect in Slovenia. All international transit trains in Slovenia pass through the Ljubljana Railway Hub. The major Slovenian port is the Port of Koper. It is the largest Northern Adriatic port in terms of container transport, with almost 590,000 TEUs annually and lines to all major world ports, and it is much closer to destinations east of Suez than the ports of Northern Europe. In addition, maritime passenger traffic mostly takes place in Koper. Two smaller ports, used for international passenger transport as well as cargo transport, are located in Izola and Piran; passenger transport mainly takes place with Italy and Croatia. Splošna plovba, the only Slovenian shipping company, transports freight and is active only in foreign ports. Air transport in Slovenia is quite limited, but has grown significantly since 1991. 
Of the three international airports in Slovenia, Ljubljana Jože Pučnik Airport in central Slovenia is the busiest, with connections to many major European destinations. Maribor Edvard Rusjan Airport is located in the eastern part of the country and Portorož Airport in the western part. The state-owned Adria Airways was the largest Slovenian airline; however, in 2019 it declared bankruptcy and ceased operations. Since 2003, several new carriers have entered the market, mainly low-cost airlines. The only Slovenian military airport is the Cerklje ob Krki Air Base in the southeastern part of the country. There are also 12 public airports in Slovenia. With 101 inhabitants per square kilometer (262/sq mi), Slovenia ranks low among the European countries in population density (compared to 402/km2 (1,042/sq mi) for the Netherlands or 195/km2 (505/sq mi) for Italy). The Inner Carniola–Karst Statistical Region has the lowest population density, while the Central Slovenia Statistical Region has the highest. Slovenia is among the European countries with the most pronounced ageing of its population, ascribable to a low birth rate and increasing life expectancy. Almost all Slovenian inhabitants older than 64 are retired, with no significant difference between the genders. The working-age group is diminishing in spite of immigration. The proposal to raise the retirement age from the current 57 for women and 58 for men was rejected in a referendum in 2011. In addition, the difference between the genders regarding life expectancy is still significant. The total fertility rate (TFR) in 2014 was estimated at 1.33 children born per woman, which is lower than the replacement rate of 2.1. The majority of children are born to unmarried women (in 2016, 58.6% of all births were outside of marriage). In 2018, life expectancy at birth was 81.1 years (78.2 years for males and 84 years for females). 
In 2009, the suicide rate in Slovenia was 22 per 100,000 persons per year, which places Slovenia among the highest-ranked European countries in this regard. Nonetheless, from 2000 to 2010 the rate decreased by about 30%. The differences between regions and between the genders are pronounced. Depending on the definition, between 65% and 79% of people live in wider urban areas. According to the OECD definition of rural areas, none of the Slovene statistical regions is predominantly urbanised, that is, having 15% or less of the population living in rural communities. The only large town is the capital, Ljubljana; other, medium-sized, towns include Maribor, Celje, and Kranj. There are 212 municipalities in total, of which eleven are urban municipalities. Hodoš, with 354 inhabitants, is the smallest by population; Odranci, at 6.9 km2, is the smallest by area. The official language in Slovenia is Slovene, a member of the South Slavic language group. In 2002, Slovene was the native language of around 88% of Slovenia's population according to the census, with more than 92% of the Slovenian population speaking it in their home environment. This statistic ranks Slovenia among the most homogeneous countries in the EU in terms of the share of speakers of the predominant mother tongue. Slovene is a highly diverse Slavic language in terms of dialects, with different degrees of mutual intelligibility. Accounts of the number of dialects range from as few as seven (often considered dialect groups or dialect bases that are further subdivided into as many as 50 dialects) to eight or nine in other sources. Hungarian and Italian, spoken by the respective minorities, enjoy the status of official languages in the ethnically mixed regions along the Hungarian and Italian borders, to the extent that even the passports issued in those areas are bilingual. 
In 2002, around 0.2% of the Slovenian population spoke Italian and around 0.4% spoke Hungarian as their native language. Hungarian is co-official with Slovene in 30 settlements in 5 municipalities (of which 3 are officially bilingual). Italian is co-official with Slovene in 25 settlements in 4 municipalities (all of them officially bilingual). Romani, spoken in 2002 as the native language by 0.2% of people, is a legally protected language in Slovenia. Romani speakers mainly belong to the geographically dispersed and marginalized Roma community. German, which used to be the largest minority language in Slovenia prior to World War II (around 4% of the population in 1921), is now the native language of only around 0.08% of the population, the majority of whom are more than 60 years old. Gottscheerish or "Granish", the traditional German dialect of Gottschee County, faces extinction. A significant number of people in Slovenia speak a variant of Serbo-Croatian (Serbian, Croatian, Bosnian, or Montenegrin) as their native language. These are mostly immigrants who moved to Slovenia from other former Yugoslav republics from the 1960s to the late 1980s, and their descendants. In 2002, 0.4% of the Slovenian population declared themselves to be native speakers of Albanian and 0.2% native speakers of Macedonian. Czech, the fourth-largest minority language in Slovenia prior to World War II (after German, Hungarian, and Serbo-Croatian), is now the native language of a few hundred residents of Slovenia. Regarding knowledge of foreign languages, Slovenia ranks among the top European countries. The most-taught foreign languages are English, German, Italian, French and Spanish. 92% of the population between the ages of 25 and 64 spoke at least one foreign language and around 71.8% of them spoke at least two foreign languages, which was the highest percentage in the European Union. 
According to the Eurobarometer survey, the majority of Slovenes could speak Croatian (61%) and English (56%). A reported 42% of Slovenes could speak German, one of the highest percentages outside German-speaking countries. Italian is widely spoken on the Slovenian Coast and in some other areas of the Slovene Littoral. Around 15% of Slovenians can speak Italian, which is, according to the Eurobarometer poll, the third-highest percentage in the European Union, after Italy and Malta. In 2015, about 12% (237,616 people) of the population in Slovenia was born abroad. About 86% of the foreign-born population originated from other countries of the former Yugoslavia, in descending order: Bosnia-Herzegovina, Croatia, Serbia, North Macedonia and Kosovo. By the beginning of 2017, there were about 114,438 people with foreign citizenship residing in the country, making up 5.5% of the total population. Of these foreigners, 76% held citizenship of other countries of the former Yugoslavia (excluding Croatia); an additional 16.4% held EU citizenship and 7.6% held citizenship of other countries. According to the 2002 census, Slovenia's main ethnic group is Slovenes (83%); however, their share of the total population is continuously decreasing due to their relatively low fertility rate. At least 13% (2002) of the population were immigrants from other parts of the former Yugoslavia and their descendants, who have settled mainly in cities and suburbanised areas. Relatively small, but protected by the Constitution of Slovenia, are the Hungarian and Italian ethnic minorities. A special position is held by the autochthonous and geographically dispersed Roma ethnic community. The number of people immigrating into Slovenia rose steadily from 1995 and has increased even more rapidly in recent years. After Slovenia joined the EU in 2004, the annual number of immigrants doubled by 2006 and increased by half again by 2009. 
In 2007, Slovenia had one of the fastest-growing net migration rates in the European Union. As to emigration, between 1880 and 1918 (World War I) many men left Slovenia to work in mining areas in other nations. The United States in particular was a common destination, with the 1910 US Census showing that there were already "183,431 persons in the USA of Slovenian mother tongue". There may have been many more, because a good number, to avoid anti-Slavic prejudice, "identified themselves as Austrians". Favorite localities before 1900 were Minnesota, Wisconsin and Michigan, as well as Omaha, Nebraska; Joliet, Illinois; Cleveland, Ohio; and rural areas of Iowa. After 1910, they settled in Utah (the Bingham copper mine), Colorado (especially Pueblo), and Butte, Montana. These areas first attracted many single men, who often boarded with Slovenian families; once they had found work and saved enough money, the men sent for their wives and families to join them. Before World War II, 97% of the population declared itself Catholic (Roman Rite), around 2.5% Lutheran, and around 0.5% of residents identified themselves as members of other denominations. Catholicism was an important feature of both social and political life in pre-Communist Slovenia. After 1945, the country underwent a process of gradual but steady secularization. After a decade of persecution of religions, the Communist regime adopted a policy of relative tolerance towards churches. After 1990, the Catholic Church regained some of its former influence, but Slovenia remains a largely secularized society. According to the 2002 census, 57.8% of the population is Catholic. In 1991, 71.6% were self-declared Catholics, which means a drop of more than 1% annually. The vast majority of Slovenian Catholics belong to the Latin Rite. A small number of Greek Catholics live in the White Carniola region. 
The newest data, from 2018, shows a resurgence in people identifying as Catholic: membership in the Church has returned to pre-1990 levels, with 73.4% now again identifying as Catholic. Despite the relatively small number of Protestants (less than 1% in 2002), the Protestant legacy is historically significant, given that the Slovene standard language and Slovene literature were established by the Protestant Reformation in the 16th century. Primož Trubar, a theologian in the Lutheran tradition, was one of the most influential Protestant Reformers in Slovenia. Protestantism was extinguished in the Counter-Reformation implemented by the Habsburg dynasty, which controlled the region; it survived only in the easternmost regions, thanks to the protection of Hungarian nobles, who often happened to be Calvinist themselves. Today, a significant Lutheran minority lives in the easternmost region of Prekmurje, where they represent around a fifth of the population and are headed by a bishop with his seat in Murska Sobota. The third-largest denomination, with around 2.2% of the population, is the Eastern Orthodox Church, with most adherents belonging to the Serbian Orthodox Church, while a minority belongs to the Macedonian and other Eastern Orthodox churches. Slovenia has long been home to a Jewish community. Despite the losses suffered during the Holocaust, Judaism still numbers a few hundred adherents, mostly living in Ljubljana, the site of the sole remaining active synagogue in the country. According to the 2002 census, Islam is the second-largest religious denomination in the country, with around 2.4% of the population; most Slovenian Muslims came from Bosnia. In 2002, around 10% of Slovenes declared themselves atheists, another 10% professed no specific denomination, and around 16% chose not to answer the question about their religious affiliation. 
According to the Eurobarometer Poll 2010, 32% of Slovenian citizens responded that "they believe there is a god", whereas 36% answered that "they believe there is some sort of spirit or life force" and 26% that "they do not believe there is any sort of spirit, god, or life force". Slovenia's education ranks as the 12th best in the world and 4th best in the European Union, significantly above the OECD average, according to the Programme for International Student Assessment. Among people aged 25 to 64, 12% have attended higher education, while on average Slovenes have 9.6 years of formal education. According to an OECD report, 83% of adults aged 25–64 have earned the equivalent of a high-school degree, well above the OECD average of 74%; among 25- to 34-year-olds, the rate is 93%. According to the 1991 census, literacy in Slovenia stood at 99.6%. Lifelong learning is also increasing. Responsibility for education oversight at the primary and secondary level in Slovenia lies with the Ministry of Education and Sports. After non-compulsory pre-school education, children enter the nine-year primary school at the age of six. Primary school is divided into three periods of three years each. In the academic year 2006–2007 there were 166,000 pupils enrolled in elementary education and more than 13,225 teachers, giving a ratio of one teacher per 12 pupils and 20 pupils per class. After completing elementary school, nearly all children (more than 98%) go on to secondary education, in either vocational, technical or general secondary programmes (gimnazija). The latter concludes with the matura, the final exam that allows graduates to enter a university. 84% of secondary-school graduates go on to tertiary education. Among the several universities in Slovenia, the best ranked is the University of Ljubljana, which ranks among the top 500, or the top 3%, of the world's best universities according to the ARWU. 
Two other public universities are the University of Maribor in the Styria region and the University of Primorska in the Slovene Littoral. In addition, there is the private University of Nova Gorica and the international EMUNI University. Slovenia's architectural heritage includes 2,500 churches, 1,000 castles, ruins, manor houses, farmhouses, and special structures for drying hay, called hayracks. Four natural and cultural sites in Slovenia are on the UNESCO World Heritage Site list: the Škocjan Caves and their karst landscape are a protected site, as are the old forests in the area of Goteniški Snežnik and Kočevski Rog in southeastern Slovenia; the Idrija mercury mining site is of world importance, as are the prehistoric pile dwellings in the Ljubljana Marshes. The most picturesque church for photographers is the medieval and Baroque building on Bled Island; the castle above the lake is a museum and restaurant with a view. Near Postojna there is the fortress of Predjama Castle, half hidden in a cave. Museums in Ljubljana and elsewhere feature unique items such as the Divje Babe flute and the oldest wheel in the world. Ljubljana has medieval, Baroque, Art Nouveau, and modern architecture. The architect Plečnik's works and his innovative paths and bridges along the Ljubljanica are notable and are on the UNESCO tentative list. Slovenian cuisine is a mixture of Central European cuisine (especially Austrian and Hungarian), Mediterranean cuisine and Balkan cuisine. Historically, Slovenian cuisine was divided into town, farmhouse, cottage, castle, parsonage and monastic cuisines. Due to the variety of Slovenian cultural and natural landscapes, there are more than 40 distinct regional cuisines. Ethnologically, the most characteristic Slovene dishes were one-pot dishes, such as "ričet", Istrian stew, minestrone, and "žganci", a buckwheat spoonbread; in the Prekmurje region there are also "bujta repa" and the "prekmurska gibanica" pastry. 
Prosciutto is known as "pršut" in the Slovene Littoral. The nut roll has become a symbol of Slovenia, especially among the Slovene diaspora in the United States. Soups were added to the traditional one-pot meals, and various kinds of porridge and stew, only in relatively recent history. Each year since 2000, the Festival of Roasted Potatoes has been organized by the "Society for the Recognition of Roasted Potatoes as a Distinct Dish", attracting thousands of visitors. Roasted potatoes, traditionally served in most Slovenian families only on Sundays (preceded by a meat-based soup, such as beef or chicken soup), were depicted on a special edition of postage stamps issued by the Post of Slovenia on 23 November 2012. The best-known sausage is "kranjska klobasa". Historically, the most notable Slovenian ballet dancers and choreographers were Pino Mlakar (1907–2006), who in 1927 graduated from the Rudolf Laban Choreographic Institute, and his future wife, whom he met there, the ballerina Maria Luiza Pia Beatrice Scholz (1908–2000). Together they worked as a leading dancer and choreographer in Dessau (1930–1932), Zürich (1934–1938), and at the State Opera in Munich (1939–1944). Their plan to build a Slovenian dance centre on Rožnik Hill after World War II was supported by the then minister of culture, Ferdo Kozak, but was cancelled by his successor. Pino Mlakar was also a full professor at the Academy for Theatre, Radio, Film and Television (AGRFT) of the University of Ljubljana. Between 1952 and 1954 the couple again led the State Opera ballet in Munich. In the 1930s, Meta Vidmar, a student of Mary Wigman, founded a modern dance school in Ljubljana. A number of music, theatre, film, book, and children's festivals take place in Slovenia each year, including the music festivals Ljubljana Summer Festival and Lent Festival, the stand-up comedy Punch Festival, the children's Pippi Longstocking Festival, and the book festivals Slovene Book Fair and Frankfurt after the Frankfurt. 
The most notable festival of Slovene music was historically the Slovenska popevka festival. Between 1981 and 2000, the Novi Rock festival was notable for bringing rock music across the Iron Curtain from the West to Slovenian, and then Yugoslav, audiences. The long tradition of jazz festivals in Titoist Yugoslavia began with the Ljubljana Jazz Festival, which has been held annually in Slovenia since 1960. Slovene film actors and actresses historically include Ida Kravanja, who played her roles as Ita Rina in early European films, and Metka Bučar. After World War II, one of the most notable film actors was Polde Bibič, who played a number of roles in many films that were well received in Slovenia, including "Don't Cry, Peter" (1964), "On Wings of Paper" (1968), "Kekec's Tricks" (1968), "Flowers in Autumn" (1973), "The Widowhood of Karolina Žašler" (1976), "Heritage" (1986), "Primož Trubar" (1985), and "My Dad, The Socialist Kulak" (1987), many of them directed by Matjaž Klopčič. He also performed in television and radio drama; altogether, Bibič played over 150 theatre and over 30 film roles. Feature and short film production in Slovenia historically includes Karol Grossmann, František Čap, France Štiglic, Igor Pretnar, Jože Pogačnik, Peter Zobec, Matjaž Klopčič, Boštjan Hladnik, Dušan Jovanović, Vitan Mal, Franci Slak, and Karpo Godina as its most established filmmakers. The contemporary film directors Filip Robar-Dorin, Jan Cvitkovič, Damjan Kozole, Janez Lapajne, Mitja Okorn, and Marko Naberšnik are among the representatives of the so-called "Renaissance of Slovenian cinema". Slovene screenwriters who are not film directors include Saša Vuga and Miha Mazzini. Women film directors include Polona Sepe, Hanna A. W. Slak, and Maja Weiss. Today, notable authors include Slavoj Žižek, as well as Boris Pahor, a Nazi concentration camp survivor who opposed Italian Fascism and Titoist Communism. 
The history of Slovene literature began in the 16th century with Primož Trubar and other Protestant Reformers. Poetry in the Slovene language achieved its highest level with the Romantic poet France Prešeren (1800–1849). In the 20th century, Slovene literary fiction went through several periods: the beginning of the century was marked by the authors of Slovene Modernism, including the most influential Slovene writer and playwright, Ivan Cankar; this was followed by expressionism (Srečko Kosovel), avant-gardism (Anton Podbevšek, Ferdo Delak) and social realism (Ciril Kosmač, Prežihov Voranc) before World War II, the poetry of resistance and revolution (Karel Destovnik Kajuh, Matej Bor) during the war, and intimism ("Poems of the Four", 1953), post-war modernism (Edvard Kocbek), and existentialism (Dane Zajc) after the war. Postmodernist authors include Boris A. Novak, Marko Kravos, Drago Jančar, Evald Flisar, Tomaž Šalamun, and Brina Svit. The best-known post-1990 authors are Aleš Debeljak, Miha Mazzini, and Alojz Ihan. There are several literary magazines that publish Slovene prose, poetry, essays, and local literary criticism. The Slovenian Philharmonic, established in 1701 as part of the Academia operosorum Labacensis, is among the oldest such institutions in Europe. The music of Slovenia historically includes numerous musicians and composers, such as the Renaissance composer Jacobus Gallus (1550–1591), who greatly influenced Central European classical music, the Baroque composer Janez Krstnik Dolar (ca. 1620–1673), and the violin virtuoso Giuseppe Tartini. During the medieval era, secular music was as popular as church music, including wandering minnesingers. By the time of the Protestant Reformation in the 16th century, music was used to proselytize. The first Slovenian hymnal, "Eni Psalmi", was published in 1567. This period saw the rise of musicians like Jacobus Gallus and Jurij Slatkonja. 
In 1701, Johann Berthold von Höffer (1667–1718), a nobleman and amateur composer from Ljubljana, founded the Academia Philharmonicorum Labacensis on Italian models, one of the oldest such institutions in Europe. Composers of Slovenian Lieder and art songs include Emil Adamič (1877–1936), Fran Gerbič (1840–1917), Alojz Geržinič (1915–2008), Benjamin Ipavec (1829–1908), Davorin Jenko (1835–1914), Anton Lajovic (1878–1960), Kamilo Mašek (1831–1859), Josip Pavčič (1870–1949), Zorko Prelovec (1887–1939), and Lucijan Marija Škerjanc (1900–1973). In the early 20th century, impressionism spread across Slovenia, soon producing the composers Marij Kogoj and Slavko Osterc. Avant-garde classical music arose in Slovenia in the 1960s, largely due to the work of Uroš Krek, Dane Škerl, Primož Ramovš and Ivo Petrić, who also conducted the Slavko Osterc Ensemble. Jakob Jež, Darijan Božič, Lojze Lebič and Vinko Globokar have since composed enduring works, especially Globokar's opera "L'Armonia". Modern composers include Uroš Rojko, Tomaž Svete, Brina Jež-Brezavšček, Božidar Kantušer and Aldo Kumar. Kumar's "Sonata z igro 12" ("A sonata with a play 12"), a set of variations on a rising chromatic scale, is particularly notable. The Slovene National Opera and Ballet Theatre serves as the national opera and ballet house. Harmony singing is a deeply rooted tradition in Slovenia: songs are sung in at least three parts (four voices), and in some regions in up to eight parts (nine voices). Slovenian folk songs thus usually sound soft and harmonious, and are very seldom in a minor key. Traditional Slovenian folk music is performed on the Styrian harmonica (the oldest type of accordion), fiddle, clarinet, zithers and flute, and by brass bands of the Alpine type. In eastern Slovenia, fiddle and cimbalom bands are called velike goslarije. 
From 1952 on, Slavko Avsenik's band began to appear in broadcasts, films, and concerts all over West Germany, inventing the original "Oberkrainer" country sound that has become the primary vehicle of ethnic musical expression not only in Slovenia, but also in Germany, Austria, Switzerland, and the Benelux countries, spawning hundreds of Alpine orchestras in the process. The band produced nearly 1,000 original compositions, an integral part of the Slovenian-style polka legacy. Many musicians followed in Avsenik's steps, including Lojze Slak. The Slovenska popevka, a specific genre of popular Slovene music, has held a standing in Slovene culture similar to that of the Sanremo Music Festival in Italian culture. Among pop, rock, industrial, and indie musicians, the most popular in Slovenia include Laibach, an early-1980s industrial music group, as well as Siddharta, an alternative rock band formed in 1995. Perpetuum Jazzile is the Slovenian group with the largest international online audience: its official a cappella performance video of "Africa" gathered more than 15 million views on YouTube between its publication in May 2009 and September 2013, earning the group kudos from the song's co-writer, David Paich. Other Slovenian bands include historically popular progressive rock bands from Titoist Yugoslavia, such as Buldožer and Lačni Franz, which inspired later comedy rock bands including Zmelkoow, Slon in Sadež and Mi2. With the exception of Terrafolk, which has made appearances worldwide, other bands, such as Avtomobili, Zaklonišče Prepeva, Šank Rock, Big Foot Mama, Dan D, and Zablujena generacija, are mostly unknown outside the country. Slovenian metal bands include Noctiferia (death metal), Negligence (thrash metal), Naio Ssaion (gothic metal), and Within Destruction (deathcore). 
Slovenian post-World War II singer-songwriters include Frane Milčinski (1914–1988); Tomaž Pengov, whose 1973 album "Odpotovanja" is considered the first singer-songwriter album in the former Yugoslavia; Tomaž Domicelj; Marko Brecelj; Andrej Šifrer; Eva Sršen; Neca Falk; and Jani Kovačič. After 1990, Adi Smolar, Iztok Mlakar, Vita Mavrič, Vlado Kreslin, Zoran Predin, Peter Lovšin, and Magnifico have been popular in Slovenia as well. In addition to the main houses, which include the Slovene National Theatre, Ljubljana, and the Maribor National Drama Theatre, a number of small producers are active in Slovenia, including physical theatre (e.g. Betontanc), street theatre (e.g. Ana Monró Theatre), the theatresports championship Impro League, and improvisational theatre (e.g. IGLU Theatre). A popular form is puppetry, mainly performed in the Ljubljana Puppet Theatre. Theatre has a rich tradition in Slovenia, starting with the first-ever Slovene-language drama performance in 1867. Slovenia's visual arts, architecture, and design are shaped by a number of architects, designers, painters, sculptors, photographers and graphic artists, as well as comics, illustration and conceptual artists. The most prestigious institutions exhibiting works of Slovene visual artists are the National Gallery of Slovenia and the Museum of Modern Art. Modern architecture in Slovenia was introduced by Max Fabiani and, in the mid-war period, Jože Plečnik and Ivan Vurnik. In the second half of the 20th century, national and universal styles were merged by the architect Edvard Ravnikar and the first generation of his students: Milan Mihelič, Stanko Kristl, and Savin Sever. The next generation, mostly still active, includes Marko Mušič, Vojteh Ravnikar, Jurij Kobe and groups of younger architects. A number of conceptual visual art groups formed, including OHO, Group 69, and IRWIN. 
Nowadays, the Slovene visual arts are diverse, based on tradition, reflect the influence of neighboring nations, and are intertwined with modern European movements. The internationally most notable Slovenian design item is the 1952 Rex chair, a Scandinavian design-inspired wooden chair by interior designer Niko Kralj, which in 2012 was given a permanent place in Designmuseum Danmark, the largest museum of design in Scandinavia, and is also included in the collection of the Museum of Modern Art (MoMA) in New York City. An industrial design item that changed the international ski industry is the Elan SCX, by the Elan company. Even before the Elan SCX, Elan skis were depicted in two films: the 1985 James Bond film A View to a Kill, starring Roger Moore, and Working Girl, in which Katharine Parker (Sigourney Weaver) was depicted skiing on "RC ELAN" model skis and poles. The renewal of Slovene sculpture began with Alojz Gangl (1859–1935), who created sculptures for the public monuments of the Carniolan polymath Johann Weikhard von Valvasor and of Valentin Vodnik, the first Slovene poet and journalist, as well as "The Genius of the Theatre" and other statues for the Slovenian National Opera and Ballet Theatre building. The development of sculpture after World War II was led by a number of artists, including the brothers Boris and Zdenko Kalin and Jakob Savinšek, who stayed with figural art. Younger sculptors, for example Janez Boljka, Drago Tršar and particularly Slavko Tihec, moved towards abstract forms. Jakov Brdar and Mirsad Begić returned to human figures. During World War II, numerous graphics were created by Božidar Jakac, who helped establish the post-war Academy of Visual Arts in Ljubljana. In 1917, Hinko Smrekar illustrated Fran Levstik's book about the well-known Slovene folk hero Martin Krpan. 
The children's book illustrators include a number of women, such as Marlenka Stupica, Marija Lucija Stupica, Ančka Gošnik Godec, Marjanca Jemec Božič, and Jelka Reichman. Historically, painting and sculpture in Slovenia were marked in the late 18th and the 19th century by Neoclassicism (Matevž Langus), Biedermeier (Giuseppe Tominz) and Romanticism (Mihael Stroj). The first art exhibition in Slovenia was organized in the late 19th century by Ivana Kobilica, a woman painter who worked in the realist tradition. Impressionist artists include Matej Sternen, Matija Jama, Rihard Jakopič, Ivan Grohar, whose "The Sower" (Slovene: Sejalec) was depicted on the €0.05 Slovenian euro coins, and Franc Berneker, who introduced impressionism to Slovenia. Expressionist painters include Veno Pilon and Tone Kralj, whose picture book, reprinted thirteen times, is now the most recognisable image of the folk hero Martin Krpan. Some of the best known painters of the second half of the 20th century were Zoran Mušič, Gabrijel Stupica and Marij Pregelj. In 1841, Janez Puhar (1814–1864) invented a process for photography on glass, recognized on 17 June 1852 in Paris by the Académie Nationale Agricole, Manufacturière et Commerciale. Gojmir Anton Kos was a notable realist painter and photographer between the First World War and World War II. The first photographer from Slovenia whose work was published by National Geographic magazine is Arne Hodalič. Slovenia is a natural sports venue, with many Slovenians actively practicing sports. A variety of sports are played in Slovenia at a professional level, with top international successes in handball, basketball, volleyball, association football, ice hockey, rowing, swimming, tennis, boxing, climbing, road cycling and athletics. Prior to World War II, gymnastics and fencing were the most popular sports in Slovenia, with champions like Leon Štukelj and Miroslav Cerar gaining Olympic medals for Slovenia. 
Association football gained popularity in the interwar period. After 1945, basketball, handball and volleyball became popular among Slovenians, and from the mid-1970s onward, winter sports did as well. Since 1992, Slovenian sportspeople have won 40 Olympic medals, including seven gold medals, and 22 Paralympic medals, with four golds. Individual sports are also very popular in Slovenia, including tennis and mountaineering, which are two of the most widespread sporting activities in the country. Several Slovenian extreme and endurance sportsmen have gained an international reputation, including the mountaineer Tomaž Humar, the mountain skier Davo Karničar, the ultramarathon swimmer Martin Strel and the ultracyclist Jure Robič. Past and current Slovenian winter sports champions include the Alpine skiers Mateja Svet, Bojan Križaj, Ilka Štuhec and double Olympic gold medalist Tina Maze, the cross-country skier Petra Majdič, and the ski jumpers Primož Peterka and Peter Prevc. Boxing has gained popularity since Dejan Zavec won the IBF welterweight world title in 2009. In cycling, Primož Roglič became the first Slovenian to win a Grand Tour when he won the 2019 Vuelta a España. Prominent team sports in Slovenia include football, basketball, handball, volleyball, and ice hockey. The men's national football team has qualified for one European Championship (2000) and two World Cups (2002 and 2010). Of Slovenian clubs, NK Maribor has played three times in the UEFA Champions League and three times in the UEFA Europa League. The men's national basketball team has participated in 13 EuroBaskets, winning the gold medal in the 2017 edition, and in three FIBA World Championships. Slovenia also hosted EuroBasket 2013. The men's national handball team has qualified for three Olympics, eight IHF World Championships, including a third-place finish in the 2017 edition, and twelve European Championships. 
Slovenia hosted the 2004 European Championship, where the national team won the silver medal. Slovenia's most prominent handball team, RK Celje, won the EHF Champions League in the 2003–04 season. In women's handball, RK Krim won the Champions League in 2001 and 2003. The national volleyball team won the silver medal in the 2015 and 2019 editions of the European Volleyball Championship. The national ice hockey team has played at 27 Ice Hockey World Championships (with 9 appearances in the top division), and participated in the 2014 and 2018 Winter Olympic Games.
History of Slovenia The history of Slovenia chronicles the period of the Slovenian territory from the 5th century BC to the present. In the Early Bronze Age, Proto-Illyrian tribes settled an area stretching from present-day Albania to the city of Trieste. The Slovenian territory was part of the Roman Empire, and it was devastated by barbarian incursions in late Antiquity and the Early Middle Ages, since the main route from the Pannonian Plain to Italy ran through present-day Slovenia. Alpine Slavs, ancestors of modern-day Slovenians, settled the area in the late 6th century AD. The Holy Roman Empire controlled the land for nearly 1,000 years, and between the mid-14th century and 1918 most of Slovenia was under Habsburg rule. In 1918, Slovenes formed Yugoslavia along with Serbs and Croats, while a minority came under Italy. The state of Slovenia was created in 1945 as part of federal Yugoslavia. Slovenia gained its independence from Yugoslavia in June 1991, and is today a member of the European Union and NATO. The earliest signs of human settlement in present-day Slovenia were found in Hell Cave in the Loza Woods near Orehek in Inner Carniola, where two stone tools approximately 250,000 years old were recovered. During the last glacial period, present-day Slovenia was inhabited by Neanderthals; the best-known Neanderthal archaeological site in Slovenia is a cave close to the village of Šebrelje near Cerkno, where the Divje Babe Flute, the oldest known musical instrument in the world, was found in 1995. The world's oldest securely dated wooden wheel and axle were found near the Ljubljana Marshes in 2002. In the transition period from the Bronze Age to the Iron Age, the Urnfield culture flourished. Numerous archaeological remains dating from the Hallstatt period have been found in Slovenia, with important settlements in Most na Soči, Vače, and Šentvid pri Stični. 
Novo Mesto in Lower Carniola, one of the most important archaeological sites of the Hallstatt culture, has been nicknamed the "City of Situlas" after the numerous situlas found in the area. In the Iron Age, present-day Slovenia was inhabited by Illyrian and Celtic tribes until the 1st century BC, when the Romans conquered the region, establishing the provinces of Pannonia and Noricum. What is now western Slovenia was included directly under Roman Italia as part of the X region "Venetia et Histria". Important Roman towns located in present-day Slovenia included Emona, Celeia and Poetovio. Other important settlements were Nauportus, Neviodunum, Haliaetum, Atrans, and Stridon. During the Migration Period, the region suffered the invasions of many barbarian armies, due to its strategic position as the main passage from the Pannonian Plain to the Italian Peninsula. Rome finally abandoned the region at the end of the 4th century. Most cities were destroyed, while the remaining local population moved to the highland areas, establishing fortified towns. In the 5th century, the region was part of the Ostrogothic Kingdom, and was later contested between the Ostrogoths, the Byzantine Empire and the Lombards. The Slavic ancestors of present-day Slovenes settled in the East Alpine area at the end of the 6th century. They came from two directions: from the north (via today's eastern Austria and the Czech Republic), settling in the area of today's Carinthia and western Styria, and from the south (via today's Slavonia), settling in the area of today's central Slovenia. This Slavic tribe, also known as the Alpine Slavs, was subjected to Avar rule before joining the Slavic King Samo's tribal union in 623 AD. After Samo's death, the Slavs of Carniola (in present-day Slovenia) again fell under Avar rule, while the Slavs north of the Karavanke range (in the present-day Austrian regions of Carinthia, Styria and East Tyrol) established the independent principality of Carantania. 
In 745, Carantania and the rest of the Slavic-populated territories of present-day Slovenia, being pressured by newly consolidated Avar power, submitted to Bavarian rule and were, together with the Duchy of Bavaria, incorporated into the Carolingian Empire, while the Carantanians and other Slavs living in present-day Slovenia converted to Christianity. The eastern part of Carantania was ruled again by the Avars between 745 and 795. Carantania retained its internal independence until 818, when the local princes, following the anti-Frankish rebellion of Ljudevit Posavski, were deposed and gradually replaced by a Germanic (primarily Bavarian) ascendancy. Under Emperor Arnulf of Carinthia, Carantania, now ruled by a mixed Bavarian-Slav nobility, briefly emerged as a regional power, but was destroyed by the Hungarian invasions in the late 9th century. Carantania-Carinthia was established again as an autonomous administrative unit in 976, when Emperor Otto I, "the Great", after deposing the Duke of Bavaria, Henry II, "the Quarreller", split the lands held by him and made Carinthia the sixth duchy of the Holy Roman Empire; old Carantania, however, never developed into a unified realm. In the late 10th and early 11th century, primarily because of the Hungarian threat, the south-eastern border region of the German Empire was organized into so-called marches ("marks"), which became the core of the development of the historical Slovenian lands: Carniola, Styria, and western Goriška/Gorizia. The consolidation and formation of the historical Slovenian lands took place over a long period between the 11th and 14th centuries, led by a number of important feudal families, such as the Dukes of Spannheim, the Counts of Gorizia, the Counts of Celje and finally the House of Habsburg. The first mentions of a common Slovene ethnic identity, transcending regional boundaries, date from the 16th century. During the 14th century, most of the Slovene Lands passed under Habsburg rule. 
In the 15th century, Habsburg domination was challenged by the Counts of Celje, but by the end of the century the great majority of Slovene-inhabited territories had been incorporated into the Habsburg Monarchy. Most Slovenes lived in the administrative region known as Inner Austria, forming the majority of the population of the Duchy of Carniola and of the County of Gorizia and Gradisca, as well as of Lower Styria and southern Carinthia. Slovenes also inhabited most of the territory of the Imperial Free City of Trieste, although they represented a minority of its population. In the 16th century, the Protestant Reformation spread throughout the Slovene Lands. During this period, the first books in the Slovene language were written by the Protestant preacher Primož Trubar and his followers, establishing the base for the development of the standard Slovene language. In the second half of the 16th century, numerous books were printed in Slovene, including an integral translation of the Bible by Jurij Dalmatin. During the Counter-Reformation in the late 16th and 17th centuries, led by the Bishop of Ljubljana Thomas Chrön and the Bishop of Seckau, almost all Protestants were expelled from the Slovene Lands (with the exception of Prekmurje). Nevertheless, they left a strong legacy in the tradition of Slovene culture, which was partially incorporated into the Catholic Counter-Reformation in the 17th century. The old Slovene orthography, also known as Bohorič's alphabet, which was developed by the Protestants in the 16th century and remained in use until the mid-19th century, testified to the unbroken tradition of Slovene culture as established in the years of the Protestant Reformation. Between the 15th and the 17th centuries, the Slovene Lands suffered many calamities. Many areas, especially in southern Slovenia, were devastated by the Ottoman–Habsburg wars. 
Many flourishing towns, like Vipavski Križ and Kostanjevica na Krki, were completely destroyed by incursions of the Ottoman army and never recovered. The nobility of the Slovene-inhabited provinces had an important role in the fight against the Ottoman Empire. The Carniolan noblemen's army thus defeated the Ottomans in the Battle of Sisak of 1593, marking the end of the immediate Ottoman threat to the Slovene Lands, although sporadic Ottoman incursions continued well into the 17th century. In the 16th and 17th centuries, the western Slovene regions became the battlefield of the wars between the Habsburg Monarchy and the Venetian Republic, most notably the War of Gradisca, which was largely fought in the Slovene Goriška region. Between the late 15th and early 18th centuries, the Slovene Lands also witnessed many peasant wars, the best known being the Carinthian Peasant Revolt of 1478, the Slovene Peasant Revolt of 1515, the Croatian–Slovene Peasant Revolt of 1573, the Second Slovene Peasant Revolt of 1635, and the Tolmin Peasant Revolt of 1713. The late 17th century was also marked by vivid intellectual and artistic activity. Many Italian Baroque artists, mostly architects and musicians, settled in the Slovene Lands and contributed greatly to the development of the local culture. Artists like Francesco Robba, Andrea Pozzo, Vittore Carpaccio and Giulio Quaglio worked in the Slovenian territory, while scientists such as Johann Weikhard von Valvasor and Johannes Gregorius Thalnitscher contributed to the development of scholarly activity. By the early 18th century, however, the region entered another period of stagnation, which was slowly overcome only by the mid-18th century. Between the early 18th century and the early 19th century, the Slovene Lands experienced a period of peace, with a moderate economic recovery starting from the mid-18th century onward. 
The Adriatic city of Trieste was declared a free port in 1718, boosting economic activity throughout the western parts of the Slovene Lands. The political, administrative and economic reforms of the Habsburg rulers Maria Theresa of Austria and Joseph II improved the economic situation of the peasantry and were well received by the emerging bourgeoisie, which was, however, still weak. In the late 18th century, a process of standardization of the Slovene language began, promoted by Carniolan clergymen like Marko Pohlin and Jurij Japelj. During the same period, peasant writers began using and promoting the Slovene vernacular in the countryside. This popular movement, known as "bukovniki", started among the Carinthian Slovenes as part of a wider revival of Slovene literature. The Slovene cultural tradition was strongly reinforced in the Enlightenment period of the 18th century by the endeavours of the Zois Circle. After two centuries of stagnation, Slovene literature emerged again, most notably in the works of the playwright Anton Tomaž Linhart and the poet Valentin Vodnik. However, German remained the main language of culture, administration and education well into the 19th century. Between 1805 and 1813, the Slovene-settled territory was part of the Illyrian Provinces, an autonomous province of the Napoleonic French Empire, whose capital was established at Ljubljana. Although French rule in the Illyrian Provinces was short-lived, it significantly contributed to greater national self-confidence and awareness of freedoms. Although the French did not entirely abolish the feudal system, their rule familiarised the inhabitants of the Illyrian Provinces in more detail with the achievements of the French Revolution and with contemporary bourgeois society. 
They introduced equality before the law, compulsory military service and a uniform tax system; they also abolished certain tax privileges, introduced a modern administration, separated the powers of the state and the Church, and nationalised the judiciary. In August 1813, Austria declared war on France. Austrian troops led by General Franz Tomassich invaded the Illyrian Provinces. After this short French interlude, all the Slovene Lands were once again included in the Austrian Empire. Slowly, a distinct Slovene national consciousness developed, and the quest for a political unification of all Slovenes became widespread. In the 1820s and 1840s, interest in the Slovene language and folklore grew enormously, with numerous philologists collecting folk songs and advancing the first steps towards a standardization of the language. A small number of Slovene activists, mostly from Styria and Carinthia, embraced the Illyrian movement that started in neighboring Croatia and aimed at uniting all South Slavic peoples. Pan-Slavic and Austro-Slavic ideas also gained importance. However, the intellectual circle around the philologist Matija Čop and the Romantic poet France Prešeren was influential in affirming the idea of Slovene linguistic and cultural individuality, refusing the idea of merging the Slovenes into a wider Slavic nation. In 1848, a mass political and popular movement for a United Slovenia emerged as part of the Spring of Nations movement within the Austrian Empire. Slovene activists demanded the unification of all Slovene-speaking territories in a unified and autonomous Slovene kingdom within the Austrian Empire. Although the project failed, it served as an almost undisputed platform of Slovene political activity in the following decades. Between 1848 and 1918, numerous institutions (including theatres and publishing houses, as well as political, financial and cultural organisations) were founded in the so-called Slovene National Awakening. 
Despite their political and institutional fragmentation and lack of proper political representation, the Slovenes were able to establish a functioning national infrastructure. With the introduction of a constitution granting civil and political liberties in the Austrian Empire in 1860, the Slovene national movement gained force. Despite the internal differentiation between the conservative Old Slovenes and the progressive Young Slovenes, the Slovene nationals defended similar programs, calling for the cultural and political autonomy of the Slovene people. In the late 1860s and early 1870s, a series of mass rallies called "tabori", modeled on the Irish monster meetings, were organized in support of the United Slovenia program. These rallies, attended by thousands of people, proved the allegiance of wider strata of the Slovene population to the ideas of national emancipation. By the end of the 19th century, Slovenes had established a standardized literary language and a thriving civil society. Literacy levels were among the highest in the Austro-Hungarian Empire, and numerous national associations were present at the grassroots level. The idea of a common political entity of all South Slavs, known as Yugoslavia, emerged. From the 1880s on, a fierce culture war between Catholic traditionalists and integralists on one side, and liberals, progressivists and anticlericals on the other, dominated Slovene political and public life, especially in Carniola. During the same period, the growth of industrialization intensified social tensions. Both Socialist and Christian socialist movements mobilized the masses. In 1905, the first Socialist mayor in the Austro-Hungarian Empire was elected in the Slovene mining town of Idrija on the list of the Yugoslav Social Democratic Party. In the same years, the Christian socialist activist Janez Evangelist Krek organized hundreds of workers' and agricultural cooperatives throughout the Slovene countryside. 
At the turn of the 20th century, national struggles in ethnically mixed areas (especially in Carinthia, Trieste and the Lower Styrian towns) dominated the political and social lives of the citizenry. By the 1910s, the national struggles between Slovene and Italian speakers in the Austrian Littoral, and between Slovene and German speakers, overshadowed other political conflicts and brought about a nationalist radicalization on both sides. In the last two decades before World War One, Slovene arts and literature experienced one of their most flourishing periods, with numerous talented modernist authors, painters and architects. The most important authors of this period were Ivan Cankar, Oton Župančič and Dragotin Kette, while Ivan Grohar and Rihard Jakopič were among the most talented Slovene visual artists of the time. At the turn of the 20th century, hundreds of thousands of Slovenes emigrated to other countries, mostly to the United States, but also to South America, Germany, Egypt and the larger cities of the Austro-Hungarian Empire, especially Zagreb and Vienna. It has been calculated that around 300,000 Slovenes emigrated between 1880 and 1910, meaning that one in six Slovenes left their homeland. Such disproportionately high emigration rates resulted in relatively small population growth in the Slovene Lands; compared to other Central European regions, the Slovene Lands lost demographic weight between the late 18th and early 20th century. The period between the 1880s and World War I saw mass emigration from present-day Slovenia to America. The largest group of Slovenes eventually settled in Cleveland, Ohio, and the surrounding area. The second-largest group settled in Chicago, principally on the Lower West Side. Many Slovene immigrants went to southwestern Pennsylvania, southeastern Ohio and the state of West Virginia to work in the coal mines and the lumber industry. 
Some also went to the Pittsburgh or Youngstown, Ohio areas to work in the steel mills, as well as to Minnesota's Iron Range to work in the iron mines. During the First World War, which severely affected Slovenia, in particular through the bloody Isonzo Front and the politics of the great powers that threatened to partition the Slovene territory among several countries (the secret Treaty of London, 1915), Slovenes already tried to secure their national position within a common state unit with the Croats and Serbs in the Habsburg Monarchy. This demand, known as the May Declaration, was made by the Slovene, Croatian and Serbian parliamentarians in the Vienna Parliament in the spring of 1917. The ruling circles of the Habsburg Monarchy initially rejected the request, and subsequent government initiatives for the federalisation of the monarchy (for example, the October manifesto of Emperor Charles) were rejected by most Slovenian politicians, who by then already leaned towards independence. The preservation of the reformed state was defended longest by the former head of the Slovenian People's Party and the last provincial governor of Carniola, Ivan Šusteršič, who had few supporters and little influence. The Slovene People's Party launched a movement for self-determination, demanding the creation of a semi-independent South Slavic state under Habsburg rule. The proposal was picked up by most Slovene parties, and a mass mobilization of Slovene civil society, known as the Declaration Movement, followed. By early 1918, more than 200,000 signatures had been collected in favor of the Slovene People's Party's proposal. During the war, some 500 Slovenes served as volunteers in the Serbian army, while a smaller group, led by Captain Ljudevit Pivko, served as volunteers in the Italian Army. 
In the final year of the war, many predominantly Slovene regiments in the Austro-Hungarian Army staged mutinies against their military leadership; the best-known mutiny of Slovene soldiers was the Judenburg rebellion in May 1918. Following the dissolution of the Austro-Hungarian Empire in the aftermath of World War I, a National Council of Slovenes, Croats and Serbs took power in Zagreb on 6 October 1918. On 29 October, independence was declared by a national gathering in Ljubljana and by the Croatian parliament, declaring the establishment of the new State of Slovenes, Croats and Serbs. On 1 December 1918 the State of Slovenes, Croats and Serbs merged with Serbia, becoming part of the new Kingdom of Serbs, Croats and Slovenes, which was itself renamed the Kingdom of Yugoslavia in 1929. The Slovenes whose territory fell under the rule of the neighboring states of Italy, Austria and Hungary were subjected to policies of assimilation. After the dissolution of the Austro-Hungarian Empire in late 1918, an armed dispute started between the Slovenes and German Austria over the regions of Lower Styria and southern Carinthia. In November 1918, Rudolf Maister seized the city of Maribor and the surrounding areas of Lower Styria in the name of the newly formed Yugoslav state. The Austrian government of Styria refrained from military intervention and also opposed a referendum, knowing that the vast majority of Lower Styria was ethnically Slovenian. Maribor and Lower Styria were awarded to Yugoslavia in the Treaty of Saint-Germain. Around the same time, a group of volunteers led by Franjo Malgaj attempted to take control of southern Carinthia. Fighting in Carinthia lasted from December 1918 to June 1919, when the Slovene volunteers and the regular Serbian Army managed to occupy the city of Klagenfurt. In compliance with the Treaty of Saint-Germain, the Yugoslav forces had to withdraw from Klagenfurt, while a referendum was to be held in other areas of southern Carinthia. 
In October 1920, the majority of the population of southern Carinthia voted to remain in Austria, and only a small portion of the province (around Dravograd and Guštanj) was awarded to the Kingdom of Serbs, Croats and Slovenes. With the Treaty of Trianon, on the other hand, the Kingdom of Yugoslavia was awarded the Slovene-inhabited Prekmurje region, which had belonged to Hungary since the 10th century. In exchange for joining the Allied Powers in the First World War, the Kingdom of Italy, under the secret Treaty of London (1915) and the later Treaty of Rapallo (1920), was granted rule over much of the Slovene territories. These included a quarter of the Slovene ethnic territory, including areas that were exclusively ethnic Slovene. The population of the affected areas was approximately 327,000 out of a total population of 1.3 million Slovenes. In 1921, against the vote of the great majority (70%) of Slovene MPs, a centralist constitution was passed in the Kingdom of Serbs, Croats and Slovenes. Despite this, Slovenes managed to maintain a high level of cultural autonomy, and both the economy and the arts prospered. Slovene politicians participated in almost all Yugoslav governments, and the Slovene conservative leader Anton Korošec briefly served as the only non-Serbian Prime Minister of Yugoslavia in the period between the two world wars. In 1929, the Kingdom of Serbs, Croats and Slovenes was renamed the Kingdom of Yugoslavia. The constitution was abolished and civil liberties suspended, while the centralist pressure intensified. Slovenia was renamed the Drava Banovina. During the whole interwar period, Slovene voters strongly supported the conservative Slovene People's Party, which unsuccessfully fought for the autonomy of Slovenia within a federalized Yugoslavia. In 1935, however, the Slovene People's Party joined the pro-regime Yugoslav Radical Community, opening the space for the development of a left-wing autonomist movement. 
In the 1930s, the economic crisis created fertile ground for the rise of both leftist and rightist radicalism. In 1937, the Communist Party of Slovenia was founded as an autonomous party within the Communist Party of Yugoslavia. Between 1938 and 1941, left-liberal, Christian left and agrarian forces established close relations with members of the illegal Communist party, aiming at establishing a broad anti-Fascist coalition. The main territory of Slovenia, the most industrialized and westernized part of Yugoslavia compared to its other, less developed regions, became a main center of industrial production: compared to Serbia, for example, industrial production in Slovenia was four times greater, and it was twenty-two times greater than in Yugoslav Macedonia. The interwar period brought further industrialization in Slovenia, with rapid economic growth in the 1920s followed by a relatively successful economic adjustment to the 1929 economic crisis. This development, however, affected only certain areas, especially the Ljubljana Basin, the Central Sava Valley, parts of Slovenian Carinthia, and the urban areas around Celje and Maribor. Tourism experienced a period of great expansion, with resort areas like Bled and Rogaška Slatina gaining an international reputation. Elsewhere, agriculture and forestry remained the predominant economic activities. Nevertheless, Slovenia emerged as one of the most prosperous and economically dynamic areas in Yugoslavia, profiting from a large Balkan market. Arts and literature also prospered, as did architecture. The two largest Slovenian cities, Ljubljana and Maribor, underwent an extensive program of urban renewal and modernization. Architects like Jože Plečnik, Ivan Vurnik and Vladimir Šubic introduced modernist architecture to Slovenia. 
With the secret Treaty of London in 1915, the Kingdom of Italy was promised large portions of Austro-Hungarian territory by the Triple Entente in exchange for joining the Entente against the Central Powers in World War I. After the Central Powers were defeated in 1918, Italy went on to annex some of the promised territories after signing the Treaty of Rapallo with the new Kingdom of Serbs, Croats and Slovenes in 1920. These areas, however, included a quarter of the Slovene ethnic territory, and approximately 327,000 out of a total population of 1.3 million Slovenes were annexed by the Kingdom of Italy. The treaty left half a million Slavs (besides Slovenes, also Croats) inside Italy, while only a few hundred Italians remained in the fledgling Yugoslav state. At the end of the 19th century, Trieste was de facto the largest Slovene city, having more Slovene inhabitants than even Ljubljana. After the city was ceded from multi-ethnic Austria, the Italian lower middle class, which felt most threatened by the city's Slovene middle class, sought to make Trieste "città italianissima", committing a series of attacks, led by Black Shirts, on Slovene shops, libraries, lawyers' offices, and the central place of the rival community, the "Narodni dom". Forced Italianization followed, and by the mid-1930s several thousand Slovenes, especially intellectuals from the Trieste region, had emigrated to the Kingdom of Yugoslavia and to South America. The present-day Slovenian municipalities of Idrija, Ajdovščina, Vipava, Kanal, Postojna, Pivka, and Ilirska Bistrica were subjected to forced Italianization. The Slovene minority in Italy (1920–1947) lacked any minority protection under international or domestic law. Clashes between the Italian authorities and Fascist squads on one side, and the local Slovene population on the other, started as early as 1920, culminating in the burning of the Narodni dom, the Slovenian National Hall of Trieste. 
After all Slovene minority organizations in Italy had been suppressed, the militant anti-Fascist organization TIGR was formed in 1927 to fight Fascist violence. Its guerrilla campaign continued throughout the late 1920s and 1930s. When Hungary, Bulgaria and Romania joined the Tripartite Pact in 1940, pressure on Yugoslavia to join as well greatly increased, as Hitler sought to secure his southern flank before launching the attack on the Soviet Union. Yugoslavia's signing of the Tripartite Pact with Germany on March 25, 1941, was followed two days later by a coup led by the aviation general Dušan Simović. Regent Prince Paul was deposed, authority was vested in the young King Peter II, and General Simović took over a provisional government. Yugoslavia thus no longer seemed reliable to Hitler, and on April 6, 1941, without a formal declaration of war, Axis forces invaded the Kingdom of Yugoslavia. The attack began with the bombing of Belgrade, killing 20,000 people. The resistance of the Yugoslav royal army was only token: because of slow mobilization only about half of the conscripts reported for duty, and the army's equipment and doctrine dated from Serbia's Balkan Wars and the First World War. By April 10, German troops had already reached Zagreb, and by April 12, Belgrade. The Italian army launched its attack only on April 11, when Hungary also joined in; by that time the German army was already at Karlovac. The Italian forces advanced in two prongs, one towards Ljubljana and onward via Kočevje, the other through Dalmatia. The German army also attacked out of Bulgaria and easily cut off the Yugoslav army's withdrawal towards the Thessaloniki front. 
Shortly after the attack, the National People's Council was formed under the leadership of Marko Natlačen, who called for a peaceful handover of weapons to the occupying forces. After the capitulation of the Yugoslav army, Hungary took over most of Prekmurje. In 1941, five Slovene settlements were placed under the authority of the NDH: Bregansko selo (now called Slovenska vas), Nova vas near Bregana (now Nova vas near Mokrice), Jesenice na Dolenjskem, Obrežje and Čedem. This territory measured about 20 square kilometers and had about 800 inhabitants at the time. The Italians initially pursued a moderate policy in their occupation zone: bilingualism was tolerated, Italian was introduced into schools only as a subject of instruction, and non-political cultural and sports associations were permitted. In the occupied territory, comprising Ljubljana, Notranjska and Dolenjska with approximately 320,000 inhabitants, Italy established the Province of Ljubljana (Italian: Provincia di Lubiana). After the first successful actions of the resistance in the occupied territory, the Italian authorities changed course and began a program of ethnic cleansing [15]. The execution of this plan led to the expulsion of approximately 35,000 civilians, of whom about 3,500 men, women and children died of hunger and disease in Italian concentration camps in 1942 and 1943 [16]. That this was an attempt at ethnic cleansing follows not only from the very large number of people killed and displaced, but also from the statements and orders of senior Italian officers, and in particular from the content of the notorious 3C circular, signed by General Mario Roatta on March 1, 1942. 
The German occupation regime was the harshest of the three: all Slovene newspapers were banned, German was introduced into schools as the language of instruction, adults were forcibly enrolled in the Styrian Homeland Association and the Carinthian People's Union or their armed sections, and German also became the official language. The occupiers forcibly took away some 600 children who appeared to meet the criteria of the "Aryan race" and assigned them to the Lebensborn organization; they introduced Nazi laws and later began conscripting Slovenes into the German military, contrary to international law. On April 26, 1941, the Anti-Imperialist Front was set up in Ljubljana; renamed the Liberation Front (OF) after the German invasion of the Soviet Union, it began an armed struggle against the occupiers. Its founding groups were the Communist Party of Slovenia, part of the Christian Socialists, the democratic wing of the liberal gymnastics society Sokol, and a group of unaffiliated cultural workers. In memory of this event, April 27 was later designated the Day of Resistance Against the Occupier. In the Volkmerjev prehod in Maribor, on April 29, 1941, two anti-German-minded young men under the leadership of Bojan Ilich set fire to two cars of the German civil administration. This was the first act of anti-occupation resistance in occupied Slovenia, provoked by Hitler's visit to Maribor a few days earlier. The Nazi police arrested about 60 young men, but soon released them because their participation in the arson could not be proven. On June 22, 1941, the main command of the Partisan forces was established and, on the same day, the slogans of the OF liberation movement were published. Subsequently, on November 1, 1941, the Basic Points of the OF were published, whose points 8 and 9 were written under the influence of the Atlantic Charter. 
With the signing of the Dolomite Declaration on March 1, 1943, the leading role in the Liberation Front was taken over by the Communist Party of Slovenia, which after the victorious national liberation struggle assumed all power itself. In 1943, a liberated territory was formed around Kočevje, where the OF convened the Kočevje Assembly (Kočevski zbor), which elected the highest organ of the Slovenian state, adopted a decision on the annexation of the Slovenian Littoral (Primorska), and elected a delegation to the second session of AVNOJ. At the end of the war, the Slovene Partisan army, together with the Yugoslav Army and the Soviet Red Army, freed the entire Slovenian ethnic territory. After the end of the war, the VOS units, under the command of the Communist Party and organized on the Soviet model, carried out extrajudicial killings of civilian and military personnel; up to 600 grave sites have been documented so far throughout Slovenia. Following the re-establishment of Yugoslavia at the end of World War II, Slovenia became part of the Socialist Federal Republic of Yugoslavia, declared on 29 November 1943. A socialist state was established, but because of the Tito–Stalin split, economic and personal freedoms were broader than in the Eastern Bloc. In 1947, Italy ceded most of the Julian March to Yugoslavia, and Slovenia thus regained the Slovenian Littoral. The towns of Koper, Izola, and Piran, Italian-populated urban enclaves, saw mass ethnic-Italian and anti-Communist emigration (part of the Istrian exodus), driven by the Foibe massacres and other acts of revenge for Italian war crimes, and by fear of Communism, which by 1947 had nationalised all private property. The dispute over the port of Trieste remained open until 1954, when the short-lived Free Territory of Trieste was divided between Italy and Yugoslavia, giving Slovenia access to the sea. 
This division was ratified only in 1975 with the Treaty of Osimo, which gave a final legal sanction to Slovenia's long disputed western border. From the 1950s, the Socialist Republic of Slovenia enjoyed a relatively wide autonomy. Between 1945 and 1948, a wave of political repressions took place in Slovenia and in Yugoslavia. Thousands of people were imprisoned for their political beliefs. Several tens of thousands of Slovenes left Slovenia immediately after the war in fear of Communist persecution. Many of them settled in Argentina, which became the core of Slovenian anti-Communist emigration. More than 50,000 more followed in the next decade, frequently for economic reasons, as well as political ones. These later waves of Slovene immigrants mostly settled in Canada and in Australia, but also in other western countries. In 1948, the Tito–Stalin split took place. In the first years following the split, the political repression worsened, as it extended to Communists accused of Stalinism. Hundreds of Slovenes were imprisoned in the concentration camp of Goli Otok, together with thousands of people of other nationalities. Among the show trials that took place in Slovenia between 1945 and 1950, the most important were the Nagode Trial against democratic intellectuals and left liberal activists (1946) and the Dachau trials (1947–1949), where former inmates of Nazi concentration camps were accused of collaboration with the Nazis. Many members of the Roman Catholic clergy suffered persecution. The case of bishop of Ljubljana Anton Vovk, who was doused with gasoline and set on fire by Communist activists during a pastoral visit to Novo Mesto in January 1952, echoed in the western press. Between 1949 and 1953, a forced collectivization was attempted. After its failure, a policy of gradual liberalization was followed. In the late 1950s, Slovenia was the first of the Yugoslav republics to begin a process of relative pluralization. 
A decade of industrialisation was accompanied also by a fervent cultural and literary production with many tensions between the regime and the dissident intellectuals. From the late 1950s onward, dissident circles started to be formed, mostly around short-lived independent journals, such as "Revija 57" (1957–1958), which was the first independent intellectual journal in Yugoslavia and one of the first of this kind in the Communist bloc, and "Perspektive" (1960–1964). Among the most important critical public intellectuals in this period were the sociologist Jože Pučnik, the poet Edvard Kocbek, and the literary historian Dušan Pirjevec. By the late 1960s, the reformist faction gained control of the Slovenian Communist Party, launching a series of reforms aiming at the modernization of Slovenian society and economy. A new economic policy, known as workers' self-management, started to be implemented under the advice and supervision of the main theorist of the Yugoslav Communist Party, the Slovene Edvard Kardelj. In 1973, this trend was stopped by the conservative faction of the Slovenian Communist Party, backed by the Yugoslav federal government. A period known as the "Years of Lead" (Slovene: "svinčena leta") followed. In the 1980s, Slovenia experienced a rise of cultural pluralism. Numerous grass-roots political, artistic and intellectual movements emerged, including the Neue Slowenische Kunst, the Ljubljana school of psychoanalysis, and the "Nova revija" intellectual circle. By the mid-1980s, a reformist faction, led by Milan Kučan, took control of the Slovenian Communist Party, starting a gradual reform towards market socialism and controlled political pluralism. The Yugoslav economic crisis of the 1980s increased the struggles within the Yugoslav Communist regime regarding the appropriate economic measures to be undertaken. 
Slovenia, which had less than 10% of the overall Yugoslav population, produced around a fifth of the country's GDP and a fourth of all Yugoslav exports. The political disputes around economic measures were echoed in public sentiment, as many Slovenes felt they were being economically exploited, having to sustain an expensive and inefficient federal administration. In 1987 and 1988, a series of clashes between the emerging civil society and the Communist regime culminated with the Slovene Spring. In 1987, a group of liberal intellectuals published a manifesto in the alternative journal "Nova revija"; in their so-called Contributions for the Slovenian National Program, they called for democratization and greater independence for Slovenia. Some of the articles openly contemplated Slovenia's independence from Yugoslavia and the establishment of a full-fledged parliamentary democracy. The manifesto was condemned by the Communist authorities, but the authors did not suffer any direct repression, and the journal was not suppressed (although the editorial board was forced to resign). At the end of the same year, a massive strike broke out in the Litostroj manufacturing plant in Ljubljana, which led to the establishment of the first independent trade union in Yugoslavia. The leaders of the strike established an independent political organization, called the Social Democratic Union of Slovenia. Soon afterwards, in mid-May 1988, an independent Peasant Union of Slovenia was organized. Later in the same month, the Yugoslav Army arrested four Slovenian journalists of the alternative magazine "Mladina", accusing them of revealing state secrets. The so-called Ljubljana trial triggered mass protests in Ljubljana and other Slovenian cities. A mass democratic movement, coordinated by the Committee for the Defense of Human Rights, pushed the Communists in the direction of democratic reforms. 
These revolutionary events in Slovenia pre-dated the Revolutions of 1989 in Eastern Europe by almost a year, but went largely unnoticed by international observers. At the same time, the confrontation between the Slovenian Communists and the Serbian Communist Party, dominated by the nationalist leader Slobodan Milošević, became the most important political struggle in Yugoslavia. The poor economic performance of the federation and rising clashes between the different republics created fertile soil for the rise of secessionist ideas among Slovenes, anti-Communists and Communists alike. On 27 September 1989, the Slovenian Assembly adopted numerous amendments to the 1974 constitution, including the abandonment of the League of Communists of Slovenia's monopoly on political power and the affirmation of Slovenia's right to leave Yugoslavia. In 1989, in an operation named "Action North", Slovene police forces, whose members later organized their own veteran organization, prevented several hundred Milošević supporters from holding a so-called Rally of Truth in Ljubljana on 1 December, an attempt to overthrow the Slovenian leadership because of its opposition to Serb centralist policy. The action can be considered the first defensive action for Slovenian independence. On 23 January 1990, the League of Communists of Slovenia, in protest against the domination of the Serb nationalist leadership, walked out of the 14th Congress of the League of Communists of Yugoslavia, which thereby effectively ceased to exist as a national party; it was soon followed by the League of Communists of Croatia. The September 1989 constitutional amendments also introduced parliamentary democracy to Slovenia. On 7 March 1990, the Slovenian Assembly passed Amendment XCI, changing the official name of the state to the Republic of Slovenia, dropping the word "Socialist". The new name has been official since 8 March 1990. 
On 30 December 1989, Slovenia officially opened the spring 1990 elections to opposition parties, thus inaugurating multi-party democracy. The Democratic Opposition of Slovenia (DEMOS), a coalition of democratic political parties, was created by an agreement between the Slovenian Democratic Union, the Social Democrat Alliance of Slovenia, the Slovene Christian Democrats, the Farmers' Alliance and the Greens of Slovenia; its leader was the well-known dissident Jože Pučnik. On 8 April 1990, the first free multiparty parliamentary elections, and the first round of the presidential elections, were held. DEMOS defeated the former Communist party in the parliamentary elections, gathering 54% of the votes. A coalition government led by the Christian Democrat Lojze Peterle was formed and began economic and political reforms that established a market economy and a liberal democratic political system. At the same time, the government pursued the independence of Slovenia from Yugoslavia. Milan Kučan was elected President in the second round of the presidential elections on 22 April 1990, defeating the DEMOS candidate Jože Pučnik. Kučan strongly opposed preserving Yugoslavia by violent means and, after the concept of a loose confederation failed to gain support among the republics of Yugoslavia, favoured a controlled process of non-violent disassociation that would enable the former Yugoslav nations to collaborate on a new, different basis. On 23 December 1990, a referendum on the independence of Slovenia was held, in which more than 88% of Slovenian residents voted for independence from Yugoslavia. Slovenia became independent through the passage of the appropriate acts on 25 June 1991. On the morning of the next day, the short Ten-Day War began, in which the Slovenian forces successfully repelled Yugoslav military intervention. 
In the evening, independence was solemnly proclaimed in Ljubljana by the Speaker of the Parliament, France Bučar. The Ten-Day War lasted until 7 July 1991, when the Brijuni Agreement was concluded, with the European Community acting as mediator, and the Yugoslav People's Army began its withdrawal from Slovenia. On 26 October 1991, the last Yugoslav soldier left Slovenia. On 23 December 1991, the Assembly of the Republic of Slovenia passed a new Constitution, the first Constitution of independent Slovenia. Kučan represented Slovenia at the peace conferences on the former Yugoslavia in The Hague and Brussels, which concluded that the former Yugoslav nations were free to determine their future as independent states. On 22 May 1992, Kučan represented Slovenia as it became a new member of the United Nations. The most important achievement of the DEMOS coalition, however, had been the declaration of independence of Slovenia on 25 June 1991, followed by the Ten-Day War in which the Slovenians repelled Yugoslav military intervention. As a result of internal disagreements, the coalition fell apart in 1992 and was officially dissolved in April 1992 in agreement with all the parties that had composed it. Following the collapse of Lojze Peterle's government, a new coalition government led by Janez Drnovšek was formed, which included several parties of the former DEMOS. Jože Pučnik became vice-president in Drnovšek's cabinet, guaranteeing some continuity in government policies. The first country to recognise Slovenia as an independent country was Croatia, on 26 June 1991. In the second half of 1991, some of the countries formed after the collapse of the Soviet Union recognized Slovenia: the Baltic countries Lithuania, Latvia, and Estonia, as well as Georgia, Ukraine, and Belarus. On 19 December 1991, Iceland and Sweden recognised Slovenia, and Germany passed a resolution on the recognition of Slovenia, which was put into effect alongside that of the European Economic Community (EEC) on 15 January 1992. 
On 13 and 14 January 1992 respectively, the Holy See and San Marino recognised Slovenia. The first overseas countries to recognise Slovenia were Canada and Australia, on 15 and 16 January 1992 respectively. The United States was at first very reserved towards Slovenian independence and recognised Slovenia only on 7 April 1992. The recognition by the EEC was particularly significant for Slovenia, as in December 1991 the EEC had adopted criteria for the international recognition of newly founded countries, which included democracy, respect for human rights, the rule of law, and respect for the rights of national minorities. The recognition of Slovenia therefore indirectly also meant that Slovenia met these criteria. In December 1992, after the independence and international recognition of Slovenia, Kučan was elected as the first President of Slovenia in the 1992 presidential election, with the support of a citizens' list. He won another five-year term in the 1997 election, running again as an independent and again winning a majority in the first round. Drnovšek was the second Prime Minister of independent Slovenia. He was chosen as a compromise candidate and an expert in economic policy, transcending ideological and programmatic divisions between parties. Drnovšek's governments reoriented Slovenia's trade away from Yugoslavia towards the West, and contrary to some other former Communist countries in Eastern Europe, the economic and social transformation followed a gradualist approach. After six months in opposition, from May to autumn 2000, Drnovšek returned to power and helped to arrange the first meeting between George W. Bush and Vladimir Putin, held in Slovenia in 2001. Drnovšek held the position of President of the Republic from 2002 to 2007. During his term, in March 2003, Slovenia held two referendums on joining the EU and NATO. Slovenia joined NATO on 29 March 2004 and the European Union on 1 May 2004. 
Janez Janša was Prime Minister of Slovenia for the first time from November 2004 to November 2008. During his term, marked by over-enthusiasm after joining the EU, Slovenian banks between 2005 and 2008 saw their loan-to-deposit ratios veer out of control: they over-borrowed from foreign banks and then over-extended credit to the private sector, fuelling its unsustainable growth. Danilo Türk held the position of President of the Republic from 2007 to 2012. Borut Pahor was Prime Minister of Slovenia from November 2008 until February 2012. Faced with the global economic crisis, his government proposed economic reforms, but they were rejected by the opposition leader Janez Janša and blocked by referenda in 2011. On the other hand, voters approved an arbitration agreement with Croatia, aimed at resolving the border dispute between the two countries that had emerged after the breakup of Yugoslavia. Pahor has held the position of President since 2012. Janša was Prime Minister of Slovenia for the second time from February 2012 until March 2013. He was replaced by the first woman PM in the history of Slovenia, Alenka Bratušek, after the official anti-corruption agency's "Report on the Parliamentary Parties' Leaders" was issued.
https://en.wikipedia.org/wiki?curid=27339
Geography of Slovenia Slovenia is situated at the crossroads of Central and Southeast Europe, touching the Alps and bordering the Adriatic Sea. The Alps—including the Julian Alps, the Kamnik-Savinja Alps and the Karawank chain, as well as the Pohorje massif—dominate northern Slovenia along its long border with Austria. Slovenia's Adriatic coastline stretches from Italy to Croatia. The part of the country south of the Sava River belongs to the Balkan Peninsula. The term "karst" originated in southwestern Slovenia's Karst Plateau, a limestone region of underground rivers, gorges, and caves between Ljubljana and the Mediterranean. On the Pannonian Plain to the east and northeast, toward the Croatian and Hungarian borders, the landscape is essentially flat. However, the majority of Slovenian terrain is hilly or mountainous, with around 90% of the surface 200 meters or more above sea level. Slovenia lies where Southeast and Central Europe meet, where the Eastern Alps border the Adriatic Sea between Austria and Croatia. The 15th meridian east almost corresponds to the country's middle line in the west–east direction. As for the extreme geographical points of Slovenia, the maximum north–south distance spans 1°28' of latitude and the maximum east–west distance 3°13' of longitude. The geometric centre of Slovenia is marked by the GEOSS point. Since 2016, the geodetic system of Slovenia, with an elevation benchmark of 0 m, has its origin at the Koper tide gauge station; until then, it referred to the Sartorio mole in Trieste (see metres above the Adriatic). The entire Slovenian coastline is located on the Gulf of Trieste; towns along it include Koper, Izola, and Piran. The traditional Slovenian regions are based on the former division of Slovenia into the four Habsburg crown lands (Carniola, Carinthia, Styria, and the Littoral) and their parts; the regions derived from the Littoral are usually considered together as the Littoral Region ("Primorska"). 
White Carniola ("Bela krajina"), otherwise part of Lower Carniola, is usually considered a separate region, as is the Central Sava Valley ("Zasavje"), which is otherwise part of Upper and Lower Carniola and Styria. The Slovenian Littoral has no natural island, but there is a plan to build an artificial one. The climate is humid subtropical (Cfa) on the coast, oceanic (Cfb) in most of Slovenia, continental with mild to hot summers and cold winters (Dfb) on the plateaus and in the mountains of the north, and subpolar (Dfc) to tundra (ET) above the treeline on the highest mountain peaks. Precipitation is high away from the coast, with the spring particularly prone to rainfall. Slovenia's Alps see frequent snowfalls during the winter. The terrain comprises a short coastal strip on the Adriatic Sea, an alpine mountain region adjacent to Italy and Austria, and mixed mountains and valleys with numerous rivers to the east. There is only one natural island in Slovenia: Bled Island in Lake Bled in the country's northwest. Lake Bled and Bled Island are Slovenia's most popular tourist destination. Natural resources include lignite, lead, zinc, building stone, hydropower, and forests. Environmental issues include the Sava River's pollution with domestic and industrial waste, pollution of coastal waters with heavy metals and toxic chemicals, and forest damage near Koper from air pollution (originating at metallurgical and chemical plants) and the resulting acid rain.
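The degree spans given for Slovenia's extreme points can be turned into rough distances with a standard spherical-Earth approximation (one degree of latitude is about 111 km; one degree of longitude shrinks with the cosine of latitude, taken here at an assumed mid-latitude of about 46° N). The kilometre figures below are illustrative estimates computed from the degree spans, not values from the source:

```python
import math

KM_PER_DEG_LAT = 111.2     # approximate length of one degree of latitude, km
MID_LATITUDE = 46.0        # assumed mid-latitude of Slovenia, degrees north

north_south_deg = 1 + 28 / 60   # 1 degree 28 minutes
east_west_deg = 3 + 13 / 60     # 3 degrees 13 minutes

# North-south: latitude degrees convert directly.
ns_km = north_south_deg * KM_PER_DEG_LAT

# East-west: longitude degrees shrink by cos(latitude).
ew_km = east_west_deg * KM_PER_DEG_LAT * math.cos(math.radians(MID_LATITUDE))

print(round(ns_km))  # ~163 km north-south
print(round(ew_km))  # ~248 km east-west
```

These round figures match the order of magnitude one would expect for a country of Slovenia's size.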
https://en.wikipedia.org/wiki?curid=27340
Demographics of Slovenia This article is about the demographic features of the population of Slovenia, including population density, ethnicity, education level, health of the populace, economic status, religious affiliations and other aspects of the population. With 101 inhabitants per square kilometre (262/sq mi), Slovenia ranks low among the European countries in population density (compared with 402/km² (1,042/sq mi) for the Netherlands or 195/km² (505/sq mi) for Italy). The Littoral–Inner Carniola Statistical Region has the lowest population density, while the Central Slovenia Statistical Region has the highest. According to the 2002 census, Slovenia's main ethnic group are Slovenes (83%). At least 13% of the population were immigrants from other parts of the former Yugoslavia, primarily ethnic Bosniaks (Bosnian Muslims), Croats and Serbs, and their descendants; they have settled mainly in cities and suburbanised areas. The Hungarian and Italian national communities are relatively small but protected by the Constitution of Slovenia. A special position is held by the autochthonous and geographically dispersed Roma ethnic community. Slovenia is among the European countries with the most pronounced ageing of its population, ascribable to a low birth rate and increasing life expectancy. Almost all Slovenian inhabitants older than 64 are retired, with no significant difference between the genders. The working-age group is diminishing in spite of immigration. A proposal to raise the retirement age from the current 57 for women and 58 for men was rejected in a referendum in 2011. The difference between the genders regarding life expectancy is also still significant: in 2007, it was 74.6 years for men and 81.8 years for women. In addition, in 2009 the suicide rate in Slovenia was 22 per 100,000 persons per year, which places Slovenia among the highest-ranked European countries in this regard. 
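The paired metric and imperial density figures above can be cross-checked with a few lines of Python (a quick arithmetic sanity check, not part of the source data; a discrepancy of one unit, as for the Netherlands, simply reflects rounding in the underlying per-km² figures):

```python
SQ_KM_PER_SQ_MI = 2.589988  # one statute square mile in square kilometres

def density_per_sq_mi(per_sq_km: float) -> int:
    """Convert a population density from per-km2 to per-sq-mi, rounded."""
    return round(per_sq_km * SQ_KM_PER_SQ_MI)

print(density_per_sq_mi(101))  # Slovenia: 262
print(density_per_sq_mi(195))  # Italy: 505
print(density_per_sq_mi(402))  # Netherlands: 1041 (the text rounds to 1042)
```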
The majority of Slovenia's population are ethnic Slovenes (83.06%). Hungarians and Italians have the status of indigenous minorities under the Constitution of Slovenia, which guarantees them seats in the National Assembly. Most other minority groups, particularly those from other parts of the former Yugoslavia (apart from a part of the autochthonous Serb and Croat communities), relocated after World War II for economic reasons. Around 12.4% of the inhabitants of Slovenia were born abroad. According to data from 2008, there were around 100,000 non-EU citizens living in Slovenia, or around 5% of the overall population of the country. The highest number came from Bosnia and Herzegovina, followed by immigrants from Serbia, Macedonia, Croatia (which has since joined the EU itself) and Kosovo. In April 2019, there were 143,192 foreign citizens living in Slovenia, representing 6.87% of Slovenia's population. The number of people migrating to Slovenia rose steadily from 1995, and the rate of immigration increased year on year, reaching its peak in 2016. After Slovenia joined the EU in 2004, the yearly inflow of immigrants doubled by 2006 and tripled by 2009. In 2007, Slovenia was one of the countries with the fastest-growing net migration rate in the European Union. Traditionally, Slovenes are predominantly Roman Catholic. Before World War II, 97% of Slovenes declared themselves Roman Catholic, around 2.5% were Lutheran, and only around 0.5% belonged to other denominations. Catholicism was an important feature of both social and political life in pre-Communist Slovenia. After 1945, the country underwent a process of gradual but steady secularization. After a decade of severe persecution of religions, the Communist regime adopted a policy of relative tolerance towards the churches, but limited their social functioning. 
After 1990, the Roman Catholic Church regained some of its former influence, but Slovenia remains a largely secularized society. According to the 2002 census, 57.8% of the population is Roman Catholic. As elsewhere in Europe, affiliation with Roman Catholicism is dropping: in 1991, 71.6% were self-declared Catholics, which amounts to a drop of more than one percentage point annually. The vast majority of Slovenian Catholics belong to the Latin Rite. A small number of Greek Catholics live in the White Carniola region. Despite the relatively small number of Protestants (less than 1% in 2002), the Protestant legacy is historically significant, since the foundations of the Slovene standard language and of Slovene literature were laid by the Protestant Reformation in the 16th century. Nowadays, a significant Lutheran minority lives in the easternmost region of Prekmurje, where Lutherans represent around a fifth of the population, headed by a bishop seated in Murska Sobota. Besides these two Christian denominations, a small Jewish community has also been historically present. Despite the losses suffered during the Holocaust, Judaism still numbers a few hundred adherents, mostly living in Ljubljana, the site of the sole remaining active synagogue in the country. According to the 2002 census, Islam is the second-largest religious denomination, with around 2.4% of the population. Most Slovenian Muslims came from Bosnia, Kosovo, and Macedonia. The third-largest denomination, with around 2.2% of the population, is Orthodox Christianity; most adherents belong to the Serbian Orthodox Church, while a minority belongs to the Macedonian and other Orthodox churches. In the 2002 census, around 10% of Slovenes declared themselves atheists, another 10% professed no specific denomination, and around 16% chose not to answer the question about their religious affiliation. 
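The claimed rate of decline in Catholic affiliation follows directly from the two census figures quoted above (a simple arithmetic check, not additional data):

```python
catholic_1991 = 71.6   # % self-declared Catholics, 1991 census
catholic_2002 = 57.8   # % self-declared Catholics, 2002 census

# Average decline per year over the 11 years between the censuses.
drop_per_year = (catholic_1991 - catholic_2002) / (2002 - 1991)
print(round(drop_per_year, 2))  # 1.25 percentage points per year
```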
According to the Eurobarometer Poll 2005, 37% of Slovenian citizens responded that "they believe there is a god", whereas 46% answered that "they believe there is some sort of spirit or life force" and 16% that "they do not believe there is any sort of spirit, god, or life force". The distribution of the residents of Slovenia by religion is the following: Roman Catholic 57.8%, atheist 10.1%, Muslim 2.4%, Orthodox Christian 2.3%, Protestant 0.9%, other and unknown 26.5% (2002). According to the published data from the 2002 Slovenian census, out of a total of 47,488 Muslims (2.4% of the total population), 2,804 Muslims (5.90% of the Muslims in Slovenia) declared themselves ethnic Slovenian Muslims. The official language in Slovenia is Slovene, a member of the South Slavic language group. In 2002, Slovene was the native language of around 88% of Slovenia's population according to the census, with more than 92% of the Slovenian population speaking it in their home environment. This places Slovenia among the most homogeneous countries in the EU in terms of the share of speakers of the predominant mother tongue. Slovene is sometimes characterized as the most diverse Slavic language in terms of dialects, with varying degrees of mutual intelligibility. Accounts of the number of dialects range from as few as seven, often considered dialect groups or dialect bases that are further subdivided into as many as 50 dialects; other sources put the number of dialect groups at eight or nine. 
The distribution of speakers by language is the following: Slovene 87.7%, Serbo-Croatian 8%, Hungarian 0.4%, Albanian 0.4%, Macedonian 0.2%, Romani 0.2%, Italian 0.2%, German 0.1%, other 0.1% (Russian, Czech, Ukrainian, English, Slovak, Polish, Romanian, Turkish, French, Bulgarian, Arabic, Spanish, Dutch, Vlach, Rusyn, Greek, Swedish, Danish or Armenian), unknown 2.7% (2002). The following demographic statistics are from the CIA World Factbook, unless otherwise indicated.
Population: 2,102,678 (July 2020 est.)
Age structure (2011 est.): 0–14 years: 13.4% (male 138,604/female 130,337); 15–64 years: 69.8% (male 703,374/female 692,640); 65 years and over: 16.8% (male 132,096/female 203,068)
Sex ratio (2011 est.): at birth: 1.07 male(s)/female; under 15 years: 1.06 male(s)/female; 15–64 years: 1.01 male(s)/female; 65 years and over: 0.66 male(s)/female; total population: 0.95 male(s)/female
Infant mortality rate: 4.12 deaths/1,000 live births (2010)
Life expectancy at birth (2013 est.): total population: 80 years; male: 77 years; female: 83 years
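As a quick consistency check, the overall sex ratio reported in the statistics above can be recomputed from the male/female counts in the three age groups of the 2011 estimate (a minimal sketch; the counts are exactly those given above):

```python
# Male and female counts by age group (0-14, 15-64, 65+) from the 2011 estimate
males = 138_604 + 703_374 + 132_096
females = 130_337 + 692_640 + 203_068

print(f"total males:   {males:,}")
print(f"total females: {females:,}")
print(f"sex ratio: {males / females:.2f} male(s)/female")  # 0.95, matching the reported figure
```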
https://en.wikipedia.org/wiki?curid=27341
Politics of Slovenia The politics of Slovenia takes place in a framework of a parliamentary representative democratic republic, whereby the Prime Minister of Slovenia is the head of government, and of a multi-party system. Executive power is exercised by the Government of Slovenia. Legislative power is vested in the National Assembly and in minor part in the National Council. The judiciary of Slovenia is independent of the executive and the legislature. As a young independent republic, Slovenia pursued economic stabilization and further political openness, while emphasizing its Western outlook and central European heritage. Today, with a growing regional profile, a participant in the SFOR peacekeeping deployment in Bosnia and the KFOR deployment in Kosovo, and a charter World Trade Organization member, Slovenia plays a role on the world stage quite out of proportion to its small size. From 1998 to 2000, Slovenia occupied a nonpermanent seat on the UN Security Council and in that capacity distinguished itself with a constructive, creative, and consensus-oriented activism. Slovenia has been a member of the United Nations since May 1992 and of the Council of Europe since May 1993. Slovenia signed an association agreement with the European Union in 1996 and is a member of the Central European Free Trade Agreement. Slovenia also is a member of all major international financial institutions (the International Monetary Fund, the World Bank Group, and the European Bank for Reconstruction and Development) as well as 40 other international organizations, among them the World Trade Organization, of which it is a founding member. Since the breakup of the former Yugoslavia, Slovenia has instituted a stable, multi-party, democratic political system, characterized by regular elections, a free press, and an excellent human rights record. However, Slovenia is the only former Communist state that has never carried out lustration. 
Under the Constitution of Slovenia, the country is a parliamentary democracy and a republic. Within its government, power is shared between a directly elected president, a prime minister, and an incompletely bicameral legislature. The legislative body is composed of the 90-member National Assembly, which takes the lead on virtually all legislative issues, and the National Council, a largely advisory body composed of representatives from social, economic, professional, and local interests. The Constitutional Court has the highest power of review of legislation, ensuring its consistency with Slovenia's constitution. Its nine judges are elected for nine-year terms. In 1997, elections were held to elect both a president and representatives to Parliament's upper house, the National Council. Milan Kučan, elected President of the Yugoslav Republic of Slovenia in 1990, led his country to independence in 1991. He was elected the first President of independent Slovenia in 1992 and again in November 1997 by a comfortable margin. Janez Drnovšek of the center-left Liberal Democratic Party of Slovenia (LDS) was reelected Prime Minister in the 15 October 2000 parliamentary elections. Drnovšek's coalition held an almost two-thirds majority in Parliament. The government, like most of the Slovenian polity, shares a common view of the desirability of a close association with the West, specifically of membership in both the European Union and NATO. For all the apparent bitterness that divides the left and right wings, there are few fundamental philosophical differences between them in the area of public policy. Slovenian society is built on consensus, which has converged on a social-democratic model. Political differences tend to have their roots in the roles that groups and individuals played during the years of communist rule and the struggle for independence. 
As the most prosperous republic of the former Yugoslavia, Slovenia emerged from its brief ten-day war of secession in 1991 as an independent nation for the first time in its history. Since that time, the country has made steady but cautious progress toward developing a market economy. Economic reforms introduced shortly after independence led to healthy economic growth. Despite the halting pace of reform and signs of slowing GDP growth today, Slovenians now enjoy the highest per capita income of all the transition economies of central Europe. The Slovenians have pursued internal economic restructuring with caution. The first phase of privatization (socially owned property under the SFRY system) is now complete, and sales of remaining large state holdings are planned for next year. Trade has been diversified toward the West (trade with EU countries made up 66% of total trade in 2000) and the growing markets of central and eastern Europe. Manufacturing accounts for most employment, with machinery and other manufactured products comprising the major exports. Labor force surveys put unemployment at approximately 6.6% (Dec. 2000), with 106,153 registrations for unemployment assistance. Inflation has remained below double-digit levels: 6.1% (1999) and 8.9% (2000). Gross domestic product grew by about 4.8% in 2000 and is expected to post a slightly lower rate of 4.5% in 2001, as export demand lags. The currency is stable, fully convertible, and backed by substantial reserves. The economy provides citizens with a good standard of living. Ten years after independence, Slovenia has made tremendous progress establishing democratic institutions, enshrining respect for human rights, establishing a market economy and adapting its military to Western norms and standards. In contrast to its neighbors, civil tranquility and strong economic growth have marked this period. 
Upon achieving independence, Slovenia offered citizenship to all residents, regardless of ethnicity or origin, avoiding a sectarian trap that has caught out many central European countries. Slovenia willingly accepted refugees from the fighting in Bosnia and has since participated in international stabilization efforts in the region. On the international front, Slovenia has advanced rapidly toward integration into the Euro-Atlantic community of nations. Invited to begin accession negotiations with the European Union in November 1998, Slovenia has achieved two of its primary foreign policy goals: membership in the EU and NATO. Slovenia also participates in the Southeast Europe Cooperation Initiative (SECI). Before accession, Slovenia was firmly committed to achieving NATO membership in the second round of enlargement; it was an active participant in Partnership for Peace (PfP) and sought to demonstrate its preparedness to take on the responsibilities and burdens of membership in the Alliance. The United States looks to Slovenia to play a productive role in continuing security efforts throughout the region. It has done much, contributing to the success of IFOR, SFOR, and efforts in Albania, the Republic of Macedonia, Montenegro, Kosovo, and elsewhere, and has continued to expand its constructive regional engagement. Slovenia is one of the focus countries for the United States' southeast European policy, aimed at reinforcing regional stability and integration. The Slovenian Government is well-positioned to be an influential role model for other southeast European governments at different stages of reform and integration. To these ends, the United States urges Slovenia to maintain momentum on internal economic, political, and legal reforms, while expanding its international cooperation as resources allow. 
Although harmonization with EU law and standards will require great efforts, already underway, the EU accession process will serve to advance Slovenia's structural reform agenda. U.S. and Allied efforts to assist Slovenia's military restructuring and modernization are ongoing. The constitution was adopted on 23 December 1991 and took effect the same day. The president is elected by popular vote for a five-year term. Following National Assembly elections, the leader of the majority party or of a majority coalition is usually nominated as prime minister by the president and elected by the National Assembly. The Council of Ministers is nominated by the prime minister and elected by the National Assembly. The National Assembly ("Državni zbor") has 90 members elected for a four-year term: 88 members elected by proportional representation using the D'Hondt formula, and 2 members elected by the ethnic minorities using the Borda count. The President of the National Assembly of Slovenia is elected by the deputies and requires 46 votes to be elected. Currently, this position is held by Dejan Židan. Slovenia is divided into 212 municipalities, of which 11 are urban municipalities with a greater degree of autonomy. Slovenia is a member of BIS, CCC, CE, CEI, EAPC, EBRD, ECE, EU, FAO, IADB, IAEA, IBRD, ICAO, ICC, ICRM, IDA, IFC, IFRCS, ILO, IMF, IMO, Intelsat (nonsignatory user), Interpol, IOC, IOM (observer), ISO, ITU, NAM (guest), NATO, OPCW, OSCE, PCA, PFP, SECI, UN, UNCTAD, UNESCO, UNFICYP, UNIDO, UNTSO, UPU, WEU (associate partner), WHO, WIPO, WMO, WToO, WTrO
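The D'Hondt formula mentioned above is a highest-averages method: each party's vote total is repeatedly divided by one more than the seats it has already won, and each successive seat goes to the largest quotient. A minimal sketch, with invented party names and vote totals purely for illustration:

```python
def dhondt(votes, seats):
    """Allocate seats with the D'Hondt highest-averages method.

    votes: dict mapping party -> vote count; seats: total seats to fill.
    Ties go to the party listed first (max keeps the first maximum).
    """
    won = {party: 0 for party in votes}
    for _ in range(seats):
        # Current quotient for each party is votes / (seats won + 1);
        # the next seat goes to the party with the largest quotient.
        best = max(votes, key=lambda p: votes[p] / (won[p] + 1))
        won[best] += 1
    return won

# Hypothetical vote totals for an 8-seat constituency:
print(dhondt({"A": 100_000, "B": 80_000, "C": 30_000, "D": 20_000}, 8))
# {'A': 4, 'B': 3, 'C': 1, 'D': 0}
```

Note how the method slightly favours larger parties: party D, with a fifth of party A's votes, wins no seat here even though 8 seats times its vote share would round to one.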
https://en.wikipedia.org/wiki?curid=27342
Thyme Thyme is the herb (dried aerial parts) of some members of the genus "Thymus" of aromatic perennial evergreen herbs in the mint family Lamiaceae. Thymes are relatives of the oregano genus "Origanum". They have culinary, medicinal, and ornamental uses, and the species most commonly cultivated and used for culinary purposes is "Thymus vulgaris". Ancient Egyptians used thyme for embalming. The ancient Greeks used it in their baths and burnt it as incense in their temples, believing it was a source of courage. The spread of thyme throughout Europe was thought to be due to the Romans, as they used it to purify their rooms and to "give an aromatic flavour to cheese and liqueurs". In the European Middle Ages, the herb was placed beneath pillows to aid sleep and ward off nightmares. In this period, women also often gave knights and warriors gifts that included thyme leaves, as it was believed to bring courage to the bearer. Thyme was also used as incense and placed on coffins during funerals, as it was supposed to assure passage into the next life. The name of the genus of fish "Thymallus", first given to the grayling ("T. thymallus", described in the 1758 edition of "Systema Naturae" by the Swedish zoologist Carl Linnaeus), originates from the faint smell of thyme that emanates from the flesh. Thyme is best cultivated in a hot, sunny location with well-drained soil. It is generally planted in the spring, and thereafter grows as a perennial. It can be propagated by seed, cuttings, or dividing rooted sections of the plant. It tolerates drought well. The plants can take deep freezes and are found growing wild on mountain highlands. In some Levantine countries and in Assyria, the condiment "za'atar" (Arabic for both thyme and marjoram) contains many of the essential oils found in thyme. It is a common component of the "bouquet garni" and of "herbes de Provence". Thyme is sold both fresh and dried. While summer-seasonal, fresh greenhouse thyme is often available year-round. 
The fresh form is more flavourful, but also less convenient; storage life is rarely more than a week. However, the fresh form can last many months if carefully frozen. Fresh thyme is commonly sold in bunches of sprigs. A sprig is a single stem snipped from the plant. It is composed of a woody stem with paired leaf or flower clusters ("leaves") spaced apart. A recipe may measure thyme by the bunch (or fraction thereof), or by the sprig, or by the tablespoon or teaspoon. Dried thyme is widely used in Armenia in tisanes (called "urc"). Depending on how it is used in a dish, the whole sprig may be used (e.g., in a "bouquet garni"), or the leaves removed and the stems discarded. Usually, when a recipe specifies "bunch" or "sprig", it means the whole form; when it specifies spoons, it means the leaves. It is perfectly acceptable to substitute dried for whole thyme. Leaves may be removed from stems either by scraping with the back of a knife, or by pulling through the fingers or tines of a fork. Thyme retains its flavour on drying better than many other herbs. Oil of thyme, the essential oil of common thyme ("Thymus vulgaris"), contains 20–54% thymol. Thyme essential oil also contains a range of additional compounds, such as "p"-cymene, myrcene, borneol, and linalool. Thymol, an antiseptic, is an active ingredient in various commercially produced mouthwashes such as Listerine. Before the advent of modern antibiotics, oil of thyme was used to medicate bandages.
https://en.wikipedia.org/wiki?curid=29968
Tea Tea is an aromatic beverage commonly prepared by pouring hot or boiling water over cured or fresh leaves of the "Camellia sinensis", an evergreen shrub (bush) native to East Asia. After water, it is the most widely consumed drink in the world. There are many different types of tea; some, like Darjeeling and Chinese greens, have a cooling, slightly bitter, and astringent flavour, while others have vastly different profiles that include sweet, nutty, floral or grassy notes. Tea has a stimulating effect in humans primarily due to its caffeine content. Tea originated in the region encompassing today's north Burma and southwestern China, where it was used as a medicinal drink by various ethnic groups in the region. An early credible record of tea drinking dates to the 3rd century AD, in a medical text written by Hua Tuo. It was popularised as a recreational drink during the Chinese Tang dynasty, and tea drinking spread to other East Asian countries. Portuguese priests and merchants introduced it to Europe during the 16th century. During the 17th century, drinking tea became fashionable among the English, who started large-scale production and commercialisation of the plant in India. Combined, China and India supplied 62% of the world's tea in 2016. The term herbal tea refers to drinks not made from "Camellia sinensis": infusions of fruit, leaves, or other parts of the plant, such as steeps of rosehip, chamomile, or rooibos. These are sometimes called "tisanes" or "herbal infusions" to prevent confusion with tea made from the tea plant. The Chinese character for tea is 茶, originally written with an extra stroke as 荼 (pronounced "tú", used as a word for a bitter herb), and acquired its current form during the Tang Dynasty. The word is pronounced differently in the different varieties of Chinese, such as "chá" in Mandarin, "zo" and "dzo" in Wu Chinese, and "ta" and "te" in Min Chinese. 
One suggestion is that the different pronunciations may have arisen from the different words for tea in ancient China, for example "tú" (荼) may have given rise to "tê"; historical phonologists however argued that the "cha", "te" and "dzo" all arose from the same root with a reconstructed pronunciation "dra", which changed due to sound shift through the centuries. There were other ancient words for tea, though "ming" (茗) is the only other one still in common use. It has been proposed that the Chinese words for tea, "tu", "cha" and "ming", may have been borrowed from the Austro-Asiatic languages of people who inhabited southwest China; "cha" for example may have been derived from an archaic Austro-Asiatic root *"la", meaning "leaf" (""lá"" in Vietnamese or ""hla?"" in Khmu). Most Chinese languages, such as Mandarin and Cantonese, pronounce it along the lines of "cha", but Hokkien and Teochew Chinese varieties along the Southern coast of China pronounce it like "teh". These two pronunciations have made their separate ways into other languages around the world. Starting in the early 17th century, the Dutch played a dominant role in the early European tea trade via the Dutch East India Company. The Dutch borrowed the word for "tea" ("thee") from Min Chinese, either through trade directly from Hokkien speakers in Formosa where they had established a port, or from Malay traders in Bantam, Java. The Dutch then introduced to other European languages this Min pronunciation for tea, including English "tea", French "thé", Spanish "té", and German "Tee". This pronunciation is also the most common form worldwide. The "Cha" pronunciation came from the Cantonese "chàh" of Guangzhou (Canton), especially through Portuguese traders who settled Macau in the 16th century. The Portuguese adopted the Cantonese pronunciation "chá", and spread it to India. 
However, the Korean and Japanese pronunciations of "cha" were not from Cantonese, but were borrowed into Korean and Japanese during earlier periods of Chinese history. A third form, the increasingly widespread "chai", came from Persian چای "chay". Both the "châ" and "chây" forms are found in Persian dictionaries. They are derived from the Northern Chinese pronunciation of "chá", which passed overland to Central Asia and Persia, where it picked up the Persian grammatical suffix "-yi" before passing on to Russian as чай ("chay"), Arabic as شاي (pronounced "shay" due to the lack of a "ch" sound in Arabic), Urdu as چائے "chay", Hindi as चाय "chāy", Turkish as çay, etc. English has all three forms: "cha" or "char", attested from the 16th century; "tea", from the 17th; and "chai", from the 20th. However, the form "chai" today refers specifically to a black tea mixed with sugar or honey, spices and milk. The few exceptions of words for tea that do not fall into the three broad groups of "te", "cha" and "chai" are languages from the botanical homeland of the tea plant, from which the Chinese words for tea might have been borrowed originally: northeast Myanmar (formerly Burma) and southwest Yunnan. Examples are "la" (meaning tea purchased elsewhere) and "miiem" (wild tea gathered in the hills) from the Wa people, "laphet" in the Burmese language, and "meng" in Lamet meaning "fermented tea leaves", as well as "miang" in the Northern Thai language ("fermented tea"). Tea plants are native to East Asia, and probably originated in the borderlands of north Burma and southwestern China. Chinese (small leaf) type tea ("C. sinensis" var. "sinensis") may have originated in southern China, possibly with hybridization of unknown wild tea relatives. However, since there are no known wild populations of this tea, the precise location of its origin is speculative. Given their genetic differences forming distinct clades, Chinese Assam type tea ("C. sinensis" var. 
"assamica") may have two different parentages – one being found in southern Yunnan (Xishuangbanna, Pu'er City) and the other in western Yunnan (Lincang, Baoshan). Many types of Southern Yunnan assam tea have been hybridized with the closely related species "Camellia taliensis." Unlike Southern Yunnan Assam tea, Western Yunnan Assam tea shares many genetic similarities with Indian Assam type tea (also "C. sinensis" var. "assamica"). Thus, Western Yunnan Assam tea and Indian Assam tea both may have originated from the same parent plant in the area where southwestern China, Indo-Burma, and Tibet meet. However, as the Indian Assam tea shares no haplotypes with Western Yunnan Assam tea, Indian Assam tea is likely to have originated from an independent domestication. Some Indian Assam tea appears to have hybridized with the species "Camellia pubicosta." Assuming a generation of 12 years, Chinese small leaf tea is estimated to have diverged from Assam tea around 22,000 years ago while Chinese Assam tea and Indian Assam tea diverged 2,800 years ago. The divergence of Chinese small leaf tea and Assam tea would correspond to the last glacial maximum. Tea drinking may have begun in the region of Yunnan region, when it was used for medicinal purposes. It is also believed that in Sichuan, "people began to boil tea leaves for consumption into a concentrated liquid without the addition of other leaves or herbs, thereby using tea as a bitter yet stimulating drink, rather than as a medicinal concoction." Chinese legends attribute the invention of tea to the mythical Shennong (in central and northern China) in 2737 BC although evidence suggests that tea drinking may have been introduced from the southwest of China (Sichuan/Yunnan area). The earliest written records of tea come from China. 
The word "tú" 荼 appears in the "Shijing" and other ancient texts to signify a kind of "bitter vegetable" (苦菜), and it is possible that it referred to many different plants such as sowthistle, chicory, or smartweed, as well as tea. In the "Chronicles of Huayang", it was recorded that the Ba people in Sichuan presented "tu" to the Zhou king. The Qin later conquered the state of Ba and its neighbour Shu, and according to the 17th century scholar Gu Yanwu who wrote in "Ri Zhi Lu" (日知錄): "It was after the Qin had taken Shu that they learned how to drink tea." Another possible early reference to tea is found in a letter written by the Qin Dynasty general Liu Kun who requested that some "real tea" to be sent to him. The earliest known physical evidence of tea was discovered in 2016 in the mausoleum of Emperor Jing of Han in Xi'an, indicating that tea from the genus "Camellia" was drunk by Han Dynasty emperors as early as the 2nd century BC. The Han dynasty work, "The Contract for a Youth", written by Wang Bao in 59 BC, contains the first known reference to boiling tea. Among the tasks listed to be undertaken by the youth, the contract states that "he shall boil tea and fill the utensils" and "he shall buy tea at Wuyang". The first record of tea cultivation is also dated to this period (the reign of Emperor Xuan of Han), during which tea was cultivated on Meng Mountain (蒙山) near Chengdu. Another early credible record of tea drinking dates to the third century AD, in a medical text by Hua Tuo, who stated, "to drink bitter t'u constantly makes one think better." However, before the mid-8th century Tang dynasty, tea-drinking was primarily a southern Chinese practice while the main drink in northern China was yogurt. Tea was disdained by the Northern dynasties aristocrats of the Central Plains, who describe it as a "slaves' drink", inferior to yogurt. It became widely popular during the Tang Dynasty, when it was spread to Korea, Japan, and Vietnam. 
The Classic of Tea, a treatise on tea and its preparation, was written by Lu Yu in 762. Through the centuries, a variety of techniques for processing tea, and a number of different forms of tea, were developed. During the Tang dynasty, tea was steamed, then pounded and shaped into cake form, while in the Song dynasty, loose-leaf tea was developed and became popular. During the Yuan and Ming dynasties, unoxidized tea leaves were first pan-fried, then rolled and dried, a process that stops the oxidation that turns the leaves dark, thereby allowing tea to remain green. In the 15th century, oolong tea, in which the leaves were allowed to partially oxidize before pan-frying, was developed. Western tastes, however, favoured the fully oxidized black tea, and the leaves were allowed to oxidize further. Yellow tea was an accidental discovery in the production of green tea during the Ming dynasty, when apparently sloppy practices allowed the leaves to turn yellow, but yielded a different flavour as a result. Tea was first introduced to Western priests and merchants in China during the 16th century, at which time it was termed "chá". The earliest European reference to tea, written as "Chiai", came from "Delle navigationi e viaggi", written by the Venetian Giambattista Ramusio in 1545. The first recorded shipment of tea by a European nation was in 1607, when the Dutch East India Company moved a cargo of tea from Macao to Java; two years later, the Dutch bought the first consignment of tea, from Hirado in Japan, to be shipped to Europe. Tea became a fashionable drink in The Hague in the Netherlands, and the Dutch introduced the drink to Germany, France and across the Atlantic to New Amsterdam (New York). In 1567, Russian people came in contact with tea when the Cossack Atamans Petrov and Yalyshev visited China. In 1638, the Mongolian Khan donated to Tsar Michael I four poods (65–70 kg) of tea. 
According to Jeremiah Curtin, it was possibly in 1636 that Vassili Starkov was sent as envoy to the Altyn Khan. As a gift to the Tsar, he was given 250 pounds of tea. Starkov at first refused, seeing no use for a load of dead leaves, but the Khan insisted. Thus was tea introduced to Russia. In 1679, Russia concluded a treaty on regular tea supplies from China via camel caravan in exchange for furs. Tea is today considered the "de facto" national beverage of Russia. The first record of tea in English came from a letter written by Richard Wickham, who ran an East India Company office in Japan, writing to a merchant in Macao requesting "the best sort of chaw" in 1615. Peter Mundy, a traveller and merchant who came across tea in Fujian in 1637, wrote of ""chaa" – only water with a kind of herb boyled in it". Tea was sold in a coffee house in London in 1657, Samuel Pepys tasted tea in 1660, and Catherine of Braganza took the tea-drinking habit to the English court when she married Charles II in 1662. Tea, however, was not widely consumed in the British Isles until the 18th century, and remained expensive until the latter part of that period. English drinkers preferred to add sugar and milk to black tea, and black tea overtook green tea in popularity in the 1720s. Tea smuggling during the 18th century led to the general public being able to afford and consume tea. The British government removed the tax on tea, thereby eliminating the smuggling trade, by 1785. In Britain and Ireland, tea was initially consumed as a luxury item on special occasions, such as religious festivals, wakes, and domestic work gatherings. The price of tea in Europe fell steadily during the 19th century, especially after Indian tea began to arrive in large quantities; by the late 19th century tea had become an everyday beverage for all levels of society. 
The popularity of tea also informed a number of historical events – the Tea Act of 1773 provoked the Boston Tea Party, which escalated into the American Revolution. The need to address the British trade deficit caused by the trade in tea resulted in the Opium Wars. The Qing Kangxi Emperor had proclaimed that "China was the center of the world, possessing everything they could ever want or need and banned foreign products from being sold in China", decreeing in 1685 "that all goods bought from China must be paid for in silver coin or bullion". Traders from other nations then sought other products, in this case opium, to sell to China to earn back the silver they were required to pay for tea and other commodities. The subsequent attempts by the Chinese government to curtail the trade in opium led to war. Chinese small leaf type tea was introduced into India in 1836 by the British in an attempt to break the Chinese monopoly on tea. In 1841, Archibald Campbell brought seeds of Chinese tea from the Kumaun region and experimented with planting tea in Darjeeling. The Alubari tea garden was opened in 1856, and Darjeeling tea began to be produced. In 1848, Robert Fortune was sent by the East India Company on a mission to China to bring the tea plant back to Great Britain. He began his journey in high secrecy, as his mission occurred in the lull between the Anglo-Chinese First Opium War (1839–1842) and the Second Opium War (1856–1860). The Chinese tea plants he brought back were introduced to the Himalayas, though most did not survive. The British had discovered that a different variety of tea was endemic to Assam and the northeast region of India, where it was used by the local Singpho people; these plants were grown instead of the Chinese tea plant and were subsequently hybridized with Chinese small leaf type tea, as well as likely with closely related wild tea species. 
Using the Chinese planting and cultivation techniques, the British launched a tea industry by offering land in Assam to any European who agreed to cultivate it for export. Tea was originally consumed only by anglicized Indians; however, it became widely popular in India in the 1950s because of a successful advertising campaign by the India Tea Board. "Camellia sinensis" is an evergreen plant that grows mainly in tropical and subtropical climates. Some varieties can also tolerate marine climates and are cultivated as far north as Cornwall in England, Perthshire in Scotland, Washington state in the United States, and Vancouver Island in Canada. In the Southern Hemisphere, tea is grown as far south as Hobart on the Australian island of Tasmania and Waikato in New Zealand. Tea plants are propagated from seed and cuttings; about 4 to 12 years are needed for a plant to bear seed, and about three years before a new plant is ready for harvesting. In addition to a zone 8 climate or warmer, tea plants require at least 127 cm (50 in) of rainfall a year and prefer acidic soils. Many high-quality tea plants are cultivated at elevations of up to above sea level. Though at these heights the plants grow more slowly, they acquire a better flavour. Two principal varieties are used: "Camellia sinensis" var. "sinensis", which is used for most Chinese, Formosan and Japanese teas, and "C. sinensis" var. "assamica", used in Pu-erh and most Indian teas (but not Darjeeling). Within these botanical varieties, many strains and modern clonal varieties are known. Leaf size is the chief criterion for the classification of tea plants, the three primary classifications being Assam type, characterised by the largest leaves; China type, characterised by the smallest leaves; and Cambodian type, characterised by leaves of intermediate size. The Cambod type tea ("C. assamica" subsp. "lasiocalyx") was originally considered a type of Assam tea. 
However, later genetic work showed that it is a hybrid between Chinese small leaf tea and Assam type tea. Darjeeling tea also appears to be a hybrid between Chinese small leaf tea and Assam type tea. A tea plant will grow into a tree of up to if left undisturbed, but cultivated plants are generally pruned to waist height for ease of plucking. Also, the short plants bear more new shoots, which provide new and tender leaves and increase the quality of the tea. Only the top of the mature plant is picked. These buds and leaves are called 'flushes'. A plant will grow a new flush every seven to 15 days during the growing season. Leaves that are slow in development tend to produce better-flavoured teas. Several teas are available from specified flushes; for example, Darjeeling tea is available as first flush (at a premium price), second flush, monsoon and autumn. Assam second flush or "tippy" tea is considered superior to first flush, due to the gold tips that appear on the leaves. Pests of tea include mosquito bugs of the genus "Helopeltis" (which are true bugs, not to be confused with dipteran mosquitoes) that can tatter leaves, so they may be sprayed with insecticides. In addition, there may be Lepidopteran leaf feeders and various tea diseases. Physically speaking, tea has properties of both a solution and a suspension. It is a solution of all the water-soluble compounds that have been extracted from the tea leaves, such as the polyphenols and amino acids, but is a suspension when all of the insoluble components are considered, such as the cellulose in the tea leaves. Caffeine constitutes about 3% of tea's dry weight, translating to between 30 and 90 milligrams per cup depending on the type, brand, and brewing method. A study found that the caffeine content of one gram of black tea ranged from 22–28 mg, while that of one gram of green tea ranged from 11–20 mg, a significant difference. 
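The two caffeine figures above are consistent with each other: at roughly 3% of dry weight, a cup brewed from 1–3 g of dry leaf (an assumed, typical range; the per-cup leaf mass is not given in the text) lands in the stated 30–90 mg window. A quick sketch of the arithmetic:

```python
def caffeine_mg(dry_leaf_g, caffeine_fraction=0.03):
    # caffeine mass = leaf mass x fraction of dry weight, converted to milligrams
    return dry_leaf_g * caffeine_fraction * 1000

for grams in (1, 2, 3):
    print(f"{grams} g leaf -> {caffeine_mg(grams):.0f} mg caffeine")  # 30, 60, 90 mg
```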
Tea also contains small amounts of theobromine and theophylline, which are stimulants, and xanthines similar to caffeine. Black and green teas contain no essential nutrients in significant amounts, with the exception of the dietary mineral manganese, at 0.5 mg per cup or 26% of the Reference Daily Intake (RDI). Fluoride is sometimes present in tea; certain types of "brick tea", made from old leaves and stems, have the highest levels, enough to pose a health risk if much tea is drunk, which has been attributed to high levels of fluoride in soils, acidic soils, and long brewing. The astringency in tea can be attributed to the presence of polyphenols. These are the most abundant compounds in tea leaves, making up 30–40% of their composition. Polyphenols include flavonoids, epigallocatechin gallate (EGCG), and other catechins. It has been suggested that green and black tea may protect against cancer or other diseases such as obesity or Alzheimer's disease, but the compounds found in green tea have not been conclusively demonstrated to have any effect on human diseases. Tea is generally divided into categories based on how it is processed; at least six different types are produced. After picking, the leaves of "C. sinensis" soon begin to wilt and oxidize unless immediately dried. An enzymatic oxidation process triggered by the plant's intracellular enzymes causes the leaves to turn progressively darker as their chlorophyll breaks down and tannins are released. This darkening is stopped at a predetermined stage by heating, which deactivates the enzymes responsible. In the production of black teas, halting by heating is carried out simultaneously with drying. Without careful moisture and temperature control during manufacture and packaging, growth of undesired molds and bacteria may make tea unfit for consumption. 
After basic processing, teas may be altered through additional processing steps before being sold, and are often consumed with additions to the basic tea leaf and water added during preparation or drinking. Examples of additional processing steps that occur before tea is sold are blending, flavouring, scenting, and decaffeination of teas. Examples of additions added at the point of consumption include milk, sugar and lemon. Tea blending is the combination of different teas to achieve the final product. Almost all tea in bags and most loose tea sold in the West is blended. Such teas may combine others from the same cultivation area or several different ones. The aim is to obtain consistency, better taste, higher price, or some combination of the three. Flavoured and scented teas add new aromas and flavours to the base tea. This can be accomplished through directly adding flavouring agents, such as fresh or dried ginger, cloves, mint leaves, cardamom, bergamot (found in Earl Grey), vanilla, and spearmint. Alternatively, because tea easily retains odours, it can be placed in proximity to an aromatic ingredient to absorb its aroma, as in traditional jasmine tea. The addition of milk to tea in Europe was first mentioned in 1680 by the epistolist Madame de Sévigné. Many teas are traditionally drunk with milk in cultures where dairy products are consumed. These include Indian masala chai and British tea blends. These teas tend to be very hearty varieties of black tea which can be tasted through the milk, such as Assams, or the East Friesian blend. Milk is thought to neutralise remaining tannins and reduce acidity. The Han Chinese do not usually drink milk with tea but the Manchus do, and the elite of the Qing Dynasty of the Chinese Empire continued to do so. Hong Kong-style milk tea is based on British colonial habits. Tibetans and other Himalayan peoples traditionally drink tea with milk or yak butter and salt. 
In Eastern European countries, Russia and Italy, tea is commonly served with lemon juice. In Poland, tea is traditionally served with a slice of lemon and is sweetened with either sugar or honey; tea with milk is called a "bawarka" ("Bavarian style") in Polish and is also widely popular. In Australia, tea with milk is known as "white tea". The order of steps in preparing a cup of tea is a much-debated topic, and can vary widely between cultures or even individuals. Some say it is preferable to add the milk to the cup before the tea, as the high temperature of freshly brewed tea can denature the proteins found in fresh milk, similar to the change in taste of UHT milk, resulting in an inferior-tasting beverage. Others insist it is better to add the milk to the cup after the tea, as black tea is often brewed as close to boiling as possible. The addition of milk chills the beverage during the crucial brewing phase, if brewing in a cup rather than using a pot, meaning the delicate flavour of a good tea cannot be fully appreciated. By adding the milk afterwards, it is easier to dissolve sugar in the tea and also to ensure the desired amount of milk is added, as the colour of the tea can be observed. Historically, the order of steps was taken as an indication of class: only those wealthy enough to afford good-quality porcelain would be confident of its being able to cope with being exposed to boiling water unadulterated with milk. Higher temperature difference means faster heat transfer, so the earlier milk is added, the slower the drink cools. A 2007 study published in the "European Heart Journal" found certain beneficial effects of tea may be lost through the addition of milk. Common varieties of black tea include Assam, Nepal, Darjeeling, Nilgiri, Rize, Keemun, and Ceylon teas. Western black teas are usually brewed for about four minutes. In many regions of the world, actively boiling water is used and the tea is often stewed. 
In India, black tea is often boiled for fifteen minutes or longer to make Masala chai, as a strong brew is preferred. Tea is often strained while serving. A food safety management group of the International Organization for Standardization (ISO) has published a standard for preparing a cup of tea (ISO 3103: "Tea – Preparation of liquor for use in sensory tests"), primarily intended for standardizing preparation for comparison and rating purposes. It is defined as 2.0 grams of tea leaves per 100 ml of boiling water, steeped for 6 minutes. In regions of the world that prefer mild beverages, such as the Far East, green tea is steeped in cooler water. Regions such as North Africa or Central Asia prefer a bitter tea, and hotter water is used. In Morocco, green tea is steeped in boiling water for 15 minutes. The container in which green tea is steeped is often warmed beforehand to prevent premature cooling. High-quality green and white teas can have new water added as many as five or more times, depending on variety, at increasingly higher temperatures. Oolong tea is brewed around 82 to 96 °C (185 to 205 °F), with the brewing vessel warmed before pouring the water. Yixing purple clay teapots are the traditional brewing vessel for oolong tea, which can be brewed multiple times from the same leaves and, unlike green tea, seems to improve with reuse. In the southern Chinese and Taiwanese Gongfu tea ceremony, the first brew is discarded, as it is considered a rinse of leaves rather than a proper brew. Pu-erh teas require boiling water for infusion. Some prefer to quickly rinse pu-erh for several seconds with boiling water to remove tea dust which accumulates from the ageing process, then infuse it at the boiling point (100 °C or 212 °F), and allow it to steep from 30 seconds to five minutes. Meaning "spiced tea", masala chai is prepared using black or green tea with milk (in which case it may be called a "latte"), and may be spiced with ginger. 
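The ISO 3103 ratio described above (2.0 g of leaf per 100 ml of boiling water, steeped for 6 minutes) scales linearly with water volume. A minimal sketch of that scaling, alongside an illustrative table of the brewing parameters mentioned in the passage (the dictionary name and structure are my own, not part of any standard):

```python
# Indicative brewing parameters collected from the passage above;
# actual practice varies widely by tradition, so treat this as illustrative.
BREW_PARAMS = {
    # tea type: (water temperature in °C, typical steep note)
    "black (Western)": (100, "about 4 minutes"),
    "oolong": ("82-96", "vessel pre-warmed; leaves reused"),
    "pu-erh": (100, "30 seconds to 5 minutes, after a quick rinse"),
}

def iso3103_leaf_grams(water_ml: float) -> float:
    """Leaf mass for a given water volume under the ISO 3103 ratio
    of 2.0 g per 100 ml."""
    return 2.0 * water_ml / 100.0

print(iso3103_leaf_grams(250))  # a 250 ml cup calls for 5.0 g of leaf
```

Note that ISO 3103 is a sensory-testing standard, not a recommendation for everyday brewing, as the text points out.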
In recent times, there has been a trend in India for Tandoor tea. This tea is prepared by placing the tea in a red-hot "tandoor" (fire oven) and then pouring in the hot milky preparation while it is boiling. While most tea is prepared using hot water, it is also possible to brew a beverage from tea using room temperature or cooled water. This requires a longer steeping time to extract the key components, and produces a different flavour profile. Cold brews use about 1.5 times the tea leaves that would be used for hot steeping, and are refrigerated for 4–10 hours. The process of making cold brew tea is much simpler than that for cold brew coffee. Cold brewing has some disadvantages compared to hot steeping. If the leaves or source water contain unwanted bacteria, they may flourish, whereas using hot water has the benefit of killing most bacteria. This is less of a concern in modern times and developed regions. Cold brewing may also allow for less caffeine to be extracted. The flavour of tea can also be altered by pouring it from different heights, resulting in varying degrees of aeration. The art of elevated pouring is used principally to enhance the flavour of the tea and improve mouthfeel, while cooling the beverage sufficiently for immediate consumption. In Southeast Asia, the practice of pouring tea from a height has been refined further using brewed black tea to which condensed milk is added, the mixture then being poured from a height alternately between matching hand-held vessels several times in quick succession. This creates a tea with entrapped air bubbles and a frothy "head", which is then immediately served in a cup. This beverage, "teh tarik", literally "pulled tea" (which has its origin as a hot Indian tea beverage), has a creamier taste than flat milk tea and is common in the region. Drinking tea is often believed to result in calm alertness; it contains L-theanine, theophylline, and bound caffeine (sometimes called "theine"). Decaffeinated brands are also sold. 
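The cold-brew scaling mentioned above (about 1.5 times the leaf used for hot steeping, refrigerated 4–10 hours) can be sketched as a small calculation. The hot-steep baseline of 2 g per 100 ml is an assumption borrowed from the ISO 3103 ratio, not a figure the cold-brew passage itself gives:

```python
# Assumed hot-steep baseline (the ISO 3103 ratio): 2 g of leaf per 100 ml.
HOT_G_PER_100ML = 2.0
# Cold brews use about 1.5x the leaf of hot steeping, per the text.
COLD_BREW_MULTIPLIER = 1.5

def cold_brew_leaf_grams(water_ml: float) -> float:
    """Approximate leaf mass for a cold brew of the given water volume."""
    return HOT_G_PER_100ML * COLD_BREW_MULTIPLIER * water_ml / 100.0

print(cold_brew_leaf_grams(1000))  # a 1-litre cold brew -> 30.0 g of leaf
```

The steep time (4–10 hours in the refrigerator) is not modelled here, since it is a range rather than a formula.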
While herbal teas are also referred to as tea, most of them do not contain leaves from the tea plant. While tea is the second most consumed beverage on Earth after water, in many cultures it is also consumed at elevated social events, such as the tea party. Tea ceremonies have arisen in different cultures, such as the Chinese and Japanese traditions, each of which employs certain techniques and ritualised protocol of brewing and serving tea for enjoyment in a refined setting. One form of Chinese tea ceremony is the "Gongfu tea ceremony", which typically uses small Yixing clay teapots and oolong tea. In the United Kingdom, 63% of people drink tea daily, and it is perceived as one of Britain's cultural beverages. It is customary for a host to offer tea to guests soon after their arrival. Tea is consumed both at home and outside the home, often in cafés or tea rooms. Afternoon tea with cakes on fine porcelain is a cultural stereotype. In southwest England, many cafés serve a cream tea, consisting of scones, clotted cream, and jam alongside a pot of tea. In some parts of Britain and India, 'tea' may also refer to the evening meal. Ireland, as of 2016, was the second biggest per capita consumer of tea in the world. Local blends are the most popular in Ireland, including Irish breakfast tea, which uses Rwandan, Kenyan and Assam teas. The annual national average of tea consumption in Ireland is 2.7 kg to 4 kg per person. Tea in Ireland is usually taken with milk or sugar, and brewed longer for a stronger taste. Tea is prevalent in most cultures in the Middle East. In Arab culture, tea is a focal point for social gatherings. Turkish tea is an important part of that country's cuisine, and is the most commonly consumed hot drink, despite the country's long history of coffee consumption. 
In 2004 Turkey produced 205,500 tonnes of tea (6.4% of the world's total tea production), which made it one of the largest tea markets in the world, with 120,000 tons being consumed in Turkey, and the rest being exported. In 2010 Turkey had the highest per capita consumption in the world at 2.7 kg. As of 2013, the per-capita consumption of Turkish tea exceeds 10 cups per day and 13.8 kg per year. Tea is grown mostly in Rize Province on the Black Sea coast. In Iranian culture, tea is so widely consumed, it is generally the first thing offered to a household guest. Russia has a long, rich tea history dating to 1638 when tea was introduced to Tsar Michael. Social gatherings were considered incomplete without tea, which was traditionally brewed in a samovar. Today 82% of Russians consume tea daily. In Pakistan, both black and green teas are popular and are known locally as "sabz chai" and "kahwah", respectively. The popular green tea called "kahwah" is often served after every meal in the Pashtun belt of Balochistan and in Khyber Pakhtunkhwa, which is where the Khyber Pass is found. In central and southern Punjab and the metropolitan Sindh region of Pakistan, tea with milk and sugar (sometimes with pistachios, cardamom, etc.), commonly referred to as "chai", is widely consumed. It is the most common beverage of households in the region. In the northern Pakistani regions of Chitral and Gilgit-Baltistan, a salty, buttered Tibetan-style tea is consumed. In the transnational Kashmir region, which straddles the border between India and Pakistan, Kashmiri chai or "noon chai", a pink, creamy tea with pistachios, almonds, cardamom, and sometimes cinnamon, is consumed primarily at special occasions, weddings, and during the winter months when it is sold in many kiosks. Indian tea culture is strong – the drink is the most popular hot beverage in the country. 
It is consumed daily in almost all houses, offered to guests, consumed in high amounts in domestic and official surroundings, and is made with the addition of milk with or without spices, and usually sweetened. At homes it is sometimes served with biscuits to be dipped in the tea and eaten before consuming the tea. More often than not, it is drunk in "doses" of small cups (referred to as "Cutting" chai if sold at street tea vendors) rather than one large cup. On 21 April 2012, the Deputy Chairman of the Planning Commission (India), Montek Singh Ahluwalia, said tea would be declared the national drink by April 2013. The move is expected to boost the tea industry in the country. Speaking on the occasion, Assam Chief Minister Tarun Gogoi said a special package for the tea industry would be announced in the future to ensure its development. The history of tea in India is especially rich. In Burma (Myanmar), tea is consumed not only as a hot drink, but also as sweet tea and green tea, known locally as "laphet-yay" and "laphet-yay-gyan", respectively. Pickled tea leaves, known locally as "laphet", are also a national delicacy. Pickled tea is usually eaten with roasted sesame seeds, crispy fried beans, roasted peanuts and fried garlic chips. In Mali, gunpowder tea is served in series of three, starting with the highest oxidisation or strongest, unsweetened tea, locally referred to as "strong like death", followed by a second serving, where the same tea leaves are boiled again with some sugar added ("pleasant as life"), and a third one, where the same tea leaves are boiled for the third time with yet more sugar added ("sweet as love"). Green tea is the central ingredient of a distinctly Malian custom, the "Grin", an informal social gathering that cuts across social and economic lines, starting in front of family compound gates in the afternoons and extending late into the night, and is widely popular in Bamako and other large urban areas. 
In the United States, 80% of tea is consumed as iced tea. Sweet tea is native to the southeastern US, and is iconic in its cuisine. In 2017, global production of tea was about 6 million tonnes, led by China with 40% and India with 21% of the world total (table). Kenya, Sri Lanka, and Vietnam were other major producers. Tea is the most popular manufactured drink consumed in the world, equaling all others – including coffee, chocolate, soft drinks, and alcohol – combined. Most tea consumed outside East Asia is produced on large plantations in the hilly regions of India and Sri Lanka, and is destined to be sold to large businesses. Opposite this large-scale industrial production are many small "gardens," sometimes minuscule plantations, that produce highly sought-after teas prized by gourmets. These teas are both rare and expensive, and can be compared to some of the most expensive wines in this respect. India is the world's largest tea-drinking nation, although its per capita consumption of tea remains modest. Turkey is the world's greatest per capita consumer of tea. Multiple recent reports have found that most Chinese and Indian teas contain residues of banned toxic pesticides. Tea production in Kenya, Malawi, Rwanda, Tanzania, and Uganda has been reported to make use of child labor according to the U.S. Department of Labor's "List of Goods Produced by Child Labor or Forced Labor" (a report on the worst forms of child labor). Workers who pick and pack tea on plantations in developing countries can face harsh working conditions and may earn below the living wage. A number of bodies independently certify the production of tea. Tea from certified estates can be sold with a certification label on the pack. The most important certification schemes are Rainforest Alliance, Fairtrade, UTZ Certified, and Organic, which also certify other crops such as coffee, cocoa and fruit. 
Rainforest Alliance certified tea is sold by Unilever brands Lipton and PG Tips in Western Europe, Australia and the US. Fairtrade certified tea is sold by a large number of suppliers around the world. UTZ Certified announced a partnership in 2008 with Sara Lee brand Pickwick tea. Production of organic tea has risen since its introduction in 1990 at Rembeng, Kondoli Tea Estate, Assam, and significant quantities of organic tea were being sold by 1999. About 75% of organic tea production is sold in France, Germany, Japan, the United Kingdom, and the United States. In 2013, China – the world's largest producer of tea – exported 325,806 tonnes, or 14% of its total crop. India exported 254,841 tonnes or 20% of its total production. In 2013, the largest importer of tea was the Russian Federation with 173,070 tonnes, followed by the United Kingdom, the United States, and Pakistan. In 1907, American tea merchant Thomas Sullivan began distributing samples of his tea in small bags of Chinese silk with a drawstring. Consumers noticed they could simply leave the tea in the bag and reuse it with fresh tea. However, the potential of this distribution and packaging method would not be fully realised until later on. During World War II, tea was rationed in the United Kingdom. In 1953, after rationing in the UK ended, Tetley launched the tea bag to the UK and it was an immediate success. The "pyramid tea bag" (or sachet), introduced by Lipton and PG Tips/Scottish Blend in 1996, attempts to address one of the connoisseurs' arguments against paper tea bags by way of its three-dimensional tetrahedron shape, which allows more room for tea leaves to expand while steeping. However, some types of pyramid tea bags have been criticised as being environmentally unfriendly, since their synthetic material is not as biodegradable as loose tea leaves and paper tea bags. The tea leaves are packaged loosely in a canister, paper bag, or other container such as a tea chest. 
Some whole teas, such as rolled gunpowder tea leaves, which resist crumbling, are sometimes vacuum-packed for freshness in aluminised packaging for storage and retail. The loose tea must be individually measured for use, allowing for flexibility and flavour control at the expense of convenience. Strainers, tea balls, tea presses, filtered teapots, and infusion bags prevent loose leaves from floating in the tea and over-brewing. A traditional method uses a three-piece lidded teacup called a gaiwan, the lid of which is tilted to decant the tea into a different cup for consumption. Compressed tea (such as pu-erh) is produced for convenience in transport, storage, and ageing. It can usually be stored longer without spoilage than loose leaf tea. Compressed tea is prepared by loosening leaves from the cake using a small knife, and steeping the extracted pieces in water. During the Tang dynasty, as described by Lu Yu, compressed tea was ground into a powder, combined with hot water, and ladled into bowls, resulting in a "frothy" mixture. In the Song dynasty, the tea powder would instead be whisked with hot water in the bowl. Although no longer practiced in China today, the whisking method of preparing powdered tea was transmitted to Japan by Zen Buddhist monks, and is still used to prepare matcha in the Japanese tea ceremony. Compressed tea was the most popular form of tea in China during the Tang dynasty. By the beginning of the Ming dynasty, it had been displaced by loose-leaf tea. It remains popular, however, in the Himalayan countries and Mongolian steppes. In Mongolia, tea bricks were ubiquitous enough to be used as a form of currency. Among Himalayan peoples, compressed tea is consumed by combining it with yak butter and salt to produce butter tea. "Instant tea", similar to freeze-dried instant coffee and an alternative to brewed tea, can be consumed either hot or cold. 
Instant tea was developed in the 1930s, with Nestlé introducing the first commercial product in 1946, while Redi-Tea debuted instant iced tea in 1953. Delicacy of flavour is sacrificed for convenience. Additives such as chai, vanilla, honey or fruit are popular, as is powdered milk. During the Second World War, British and Canadian soldiers were issued an instant tea known as "Compo" in their Composite Ration Packs. These blocks of instant tea, powdered milk, and sugar were not always well received, as Royal Canadian Artillery Gunner George C. Blackburn observed. Canned tea is sold prepared and ready to drink. It was introduced in 1981 in Japan. The first bottled tea was introduced by an Indonesian tea company, PT. Sinar Sosro, in 1969 with the brand name Teh Botol Sosro (or Sosro bottled tea). In 1983, Swiss-based Bischofszell Food Ltd. was the first company to bottle iced tea on an industrial scale. Storage conditions and type determine the shelf life of tea. Black tea's shelf life is greater than green tea's. Some, such as flower teas, may last only a month or so. Others, such as pu-erh, improve with age. To remain fresh and prevent mold, tea needs to be stored away from heat, light, air, and moisture. Tea must be kept at room temperature in an air-tight container. Black tea in a bag within a sealed opaque canister may keep for two years. Green tea deteriorates more rapidly, usually in less than a year. Tightly rolled gunpowder tea leaves keep longer than the more open-leafed Chun Mee tea. Storage life for all teas can be extended by using desiccant or oxygen-absorbing packets, vacuum sealing, or refrigeration in air-tight containers (except green tea, where discreet use of refrigeration or freezing is recommended and temperature variation kept to a minimum).
Tank A tank is an armoured fighting vehicle designed for front-line combat. Tanks have heavy firepower, strong armour, and good battlefield manoeuvrability provided by tracks and a powerful engine; usually their main armament is mounted in a turret. They are a mainstay of modern 20th and 21st century ground forces and a key part of combined arms combat. Modern tanks are versatile mobile land weapon system platforms that mount a large-calibre cannon, called a tank gun, in a rotating gun turret, supplemented by mounted machine guns or other weapons such as anti-tank guided missiles or rockets. They have heavy vehicle armour which provides protection for the crew, the vehicle's weapons and propulsion systems. The use of tracks rather than wheels provides operational mobility which allows the tank to move over rugged terrain and counter adverse conditions such as mud (and be positioned on the battlefield in advantageous locations). These features enable the tank to perform well in a variety of intense combat situations, simultaneously both offensively (with fire from their powerful tank gun) and defensively (due to their near invulnerability to common firearms and good resistance to heavier weapons, all while maintaining the mobility needed to exploit changing tactical situations). Fully integrating tanks into modern military forces spawned a new era of combat: armoured warfare. There are several classes of tanks: some are larger and very heavily armoured with high-calibre guns, while others are smaller, lightly armoured, and equipped with a smaller-calibre, lighter gun. These smaller tanks move over terrain with speed and agility and can perform a reconnaissance role in addition to engaging enemy targets. The smaller, faster tank would not normally engage in battle with a larger, heavily armoured tank, except during a surprise flanking manoeuvre. 
The modern tank is the result of a century of development from the first primitive armoured vehicles, due to improvements in technology such as the internal combustion engine, which allowed the rapid movement of heavy armoured vehicles. As a result of these advances, tanks underwent tremendous shifts in capability in the years since their first appearance. Tanks in World War I were developed separately and simultaneously by Great Britain and France as a means to break the deadlock of trench warfare on the Western Front. The first British prototype, nicknamed Little Willie, was constructed at William Foster & Co. in Lincoln, England in 1915, with leading roles played by Major Walter Gordon Wilson who designed the gearbox and hull, and by William Tritton of William Foster and Co., who designed the track plates. This was a prototype of a new design that would become the British Army's Mark I tank, the first tank used in combat in September 1916 during the Battle of the Somme. The name "tank" was adopted by the British during the early stages of their development, as a security measure to conceal their purpose (see etymology). While the British and French built thousands of tanks in World War I, Germany was unconvinced of the tank's potential, and did not have enough resources, thus it built only twenty. Tanks of the interwar period evolved into the much larger and more powerful designs of World War II. Important new concepts of armoured warfare were developed; the Soviet Union launched the first mass tank/air attack at Khalkhin Gol (Nomonhan) in August 1939, and later developed the T-34, one of the predecessors of the main battle tank. Less than two weeks later, Germany began their large-scale armoured campaigns that would become known as blitzkrieg ("lightning war") – massed concentrations of tanks combined with motorised and mechanised infantry, artillery and air power designed to break through the enemy front and collapse enemy resistance. 
The widespread introduction of high-explosive anti-tank warheads during the second half of World War II led to lightweight infantry-carried anti-tank weapons such as the Panzerfaust, which could destroy some types of tanks. Tanks in the Cold War were designed with these weapons in mind, and led to greatly improved armour types during the 1960s, especially composite armour. Improved engines, transmissions and suspensions allowed tanks of this period to grow larger. Aspects of gun technology changed significantly as well, with advances in shell design and aiming technology. During the Cold War, the main battle tank concept arose and became a key component of modern armies. In the 21st century, with the end of the Cold War and the increasing role of asymmetrical warfare, which contributed to the worldwide spread of cost-effective anti-tank rocket-propelled grenades (RPGs) and their successors, the ability of tanks to operate independently has declined. Modern tanks are more frequently organized into combined arms units which involve the support of infantry, who may accompany the tanks in infantry fighting vehicles, and are supported by reconnaissance or ground-attack aircraft. The tank is the 20th century realization of an ancient concept: that of providing troops with mobile protection and firepower. The internal combustion engine, armour plate, and continuous track were key innovations leading to the invention of the modern tank. Many sources imply that Leonardo da Vinci and H.G. Wells in some way foresaw or "invented" the tank. Leonardo's late 15th century drawings of what some describe as a "tank" show a man-powered, wheeled vehicle with cannons all around it. However, the human crew would not have had enough power to move it over long distances, and the use of animals was problematic in a space so confined. In the 15th century, Jan Žižka built armoured wagons containing cannons and used them effectively in several battles. 
The continuous "caterpillar" track arose from attempts to improve the mobility of wheeled vehicles by spreading their weight, reducing ground pressure, and increasing their traction. Experiments can be traced back as far as the 17th century, and by the late nineteenth century they existed in various recognizable and practical forms in several countries. It is frequently claimed that Richard Lovell Edgeworth created a caterpillar track. It is true that in 1770 he patented a "machine, that should carry and lay down its own road", but this was Edgeworth's choice of words. His own account in his autobiography is of a horse-drawn wooden carriage on eight retractable legs, capable of lifting itself over high walls. The description bears no similarity to a caterpillar track. Armoured trains appeared in the mid-19th century, and various armoured steam and petrol-engined vehicles were also proposed. The machines described in Wells' 1903 short story "The Land Ironclads" are a step closer, insofar as they are armour-plated, have an internal power plant, and are able to cross trenches. Some aspects of the story foresee the tactical use and impact of the tanks that later came into being. However, Wells' vehicles were driven by steam and moved on pedrail wheels, technologies that were already outdated at the time of writing. After seeing British tanks in 1916, Wells denied having "invented" them, writing, "Yet let me state at once that I was not their prime originator. I took up an idea, manipulated it slightly, and handed it on." It is, though, possible that one of the British tank pioneers, Ernest Swinton, was subconsciously or otherwise influenced by Wells' tale. The first combinations of the three principal components of the tank appeared in the decade before World War I. In 1903, Captain Léon René Levavasseur of the French Artillery proposed mounting a field gun in an armoured box on tracks. Major William E. 
Donohue, of the British Army's Mechanical Transport Committee, suggested fixing a gun and armoured shield on a British type of track-driven vehicle. The first armoured car was produced in Austria in 1904. However, all were restricted to rails or reasonably passable terrain. It was the development of a practical caterpillar track that provided the necessary independent, all-terrain mobility. In a memorandum of 1908, Antarctic explorer Robert Falcon Scott presented his view that man-hauling to the South Pole was impossible and that motor traction was needed. Snow vehicles did not yet exist, however, and so his engineer Reginald Skelton developed the idea of a caterpillar track for snow surfaces. These tracked motors were built by the Wolseley Tool and Motor Car Company in Birmingham, tested in Switzerland and Norway, and can be seen in action in Herbert Ponting's 1911 documentary film of Scott's Antarctic Terra Nova Expedition (at minute 50). Scott died during the expedition in 1912, but expedition member and biographer Apsley Cherry-Garrard credited Scott's "motors" with the inspiration for the British World War I tanks, writing: "Scott never knew their true possibilities; for they were the direct ancestors of the 'tanks' in France". In 1911, a Lieutenant Engineer in the Austrian Army, Günther Burstyn, presented to the Austrian and Prussian War Ministries plans for a light, three-man tank with a gun in a revolving turret, the so-called Burstyn-Motorgeschütz. In the same year an Australian civil engineer named Lancelot de Mole submitted a basic design for a tracked, armoured vehicle to the British War Office. In Russia, Vasiliy Mendeleev designed a tracked vehicle containing a large naval gun. All of these ideas were rejected and, by 1914, forgotten (although it was officially acknowledged after the war that de Mole's design was at least the equal of the initial British tanks). 
Various individuals continued to contemplate the use of tracked vehicles for military applications, but by the outbreak of the War no one in a position of responsibility in any army gave much thought to tanks. From late 1914 a small number of middle-ranking British Army officers tried to persuade the War Office and the Government to consider the creation of armoured vehicles. Amongst their suggestions was the use of caterpillar tractors, but although the Army used many such vehicles for towing heavy guns, it could not be persuaded that they could be adapted as armoured vehicles. The consequence was that early tank development in Great Britain was carried out by the Royal Navy. As the result of an approach by Royal Naval Air Service officers who had been operating armoured cars on the Western Front, the First Lord of the Admiralty, Winston Churchill, formed the Landship Committee on 20 February 1915. The Director of Naval Construction for the Royal Navy, Eustace Tennyson d'Eyncourt, was appointed to head the Committee in view of his experience with the engineering methods it was felt might be required; the two other members were naval officers, and a number of industrialists were engaged as consultants. So many people played a part in its long and complicated development that it is not possible to name any individual as the sole inventor of the tank. However, leading roles were played by Lt Walter Gordon Wilson R.N., who designed the gearbox and developed practical tracks, and by William Tritton, whose agricultural machinery company, William Foster & Co. of Lincoln, Lincolnshire, England, built the prototypes. On 22 July 1915, an order was placed to design a machine that could cross a trench 4 ft wide. Secrecy surrounded the project, with the designers locking themselves in a room at the White Hart Hotel in Lincoln.
The committee's first design, Little Willie, ran for the first time in September 1915 and served to develop the form of the track, but an improved design, better able to cross trenches, swiftly followed, and in January 1916 the prototype, nicknamed "Mother", was adopted as the design for future tanks. The first order for tanks was placed on 12 February 1916, and a second on 21 April. Fosters built 37 (all "male"), and Metropolitan Carriage, Wagon, and Finance Company, of Birmingham, 113 (38 "male" and 75 "female"), a total of 150. Production models of "Male" tanks (armed with naval cannon and machine guns) and "Females" (carrying only machine guns) would go on to fight in history's first tank action at the Somme in September 1916. Great Britain produced about 2,600 tanks of various types during the war. The first tank to engage in battle was designated "D1", a British Mark I Male, during the Battle of Flers-Courcelette (part of the wider Somme offensive) on 15 September 1916. Bert Chaney, a nineteen-year-old signaller with the 7th London Territorial Battalion, reported that "three huge mechanical monsters such as [he] had never seen before" rumbled their way onto the battlefield, "frightening the Jerries out of their wits and making them scuttle like frightened rabbits." When the news of the first use of the tanks emerged, Prime Minister David Lloyd George commented publicly on the new weapon. In France, meanwhile, several experimental machines were investigated, but it was a colonel of artillery, J. B. E. Estienne, who in late 1915 directly approached the Commander-in-Chief with detailed plans for a tank on caterpillar tracks. The result was two largely unsatisfactory types of tank, 400 each of the Schneider and Saint-Chamond, both based on the Holt tractor. The following year, the French pioneered the use of a fully rotating 360° turret with the creation of the Renault FT light tank, in which the turret contained the tank's main armament.
In addition to the traversable turret, another innovative feature of the FT was its rear-mounted engine. This pattern, with the gun located in a rotating turret and the engine at the back, has become the standard for most succeeding tanks across the world even to this day. The FT was the most numerous tank of the war; over 3,000 were made by late 1918. Germany fielded very few tanks during World War I, and started development only after encountering British tanks on the Somme. The A7V, the only type made, was introduced in March 1918, with just 20 being produced during the war. The first tank-versus-tank action took place on 24 April 1918 at the Second Battle of Villers-Bretonneux, France, when three British Mark IVs met three German A7Vs. Captured British Mk IVs formed the bulk of Germany's tank forces during World War I; about 35 were in service at any one time. Plans to expand the tank programme were under way when the War ended. The United States Tank Corps used tanks supplied by France and Great Britain during World War I. Production of American-built tanks had just begun when the War came to an end. Italy also manufactured two Fiat 2000s towards the end of the war, too late to see service. Russia independently built and trialled two prototypes early in the War: the tracked, two-man Vezdekhod and the huge Lebedenko; neither went into production. A tracked self-propelled gun was also designed but not produced. Although tank tactics developed rapidly during the war, piecemeal deployments, mechanical problems, and poor mobility limited the military significance of the tank in World War I, and the tank did not fulfil its promise of rendering trench warfare obsolete. Nonetheless, it was clear to military thinkers on both sides that tanks in some way could have a significant role in future conflicts. In the interwar period tanks underwent further mechanical development. In terms of tactics, J.F.C.
Fuller's doctrine of spearhead attacks with massed tank formations was the basis for work by Heinz Guderian in Germany, Percy Hobart in Britain, Adna R. Chaffee, Jr., in the US, Charles de Gaulle in France, and Mikhail Tukhachevsky in the USSR. Liddell Hart held a more moderate view that all arms – cavalry, infantry and artillery – should be mechanized and work together. The British formed the all-arms Experimental Mechanized Force to test the use of tanks with supporting forces. In the Second World War only Germany would initially put the theory into practice on a large scale, and it was their superior tactics and French blunders, not superior weapons, that made the "blitzkrieg" so successful in May 1940. For information regarding tank development in this period, see tank development between the wars. Germany, Italy and the Soviet Union all experimented heavily with tank warfare during their clandestine and "volunteer" involvement in the Spanish Civil War, which saw some of the earliest examples of successful mechanised combined arms, such as when Republican troops, equipped with Soviet-supplied tanks and supported by aircraft, eventually routed Italian troops fighting for the Nationalists in the seven-day Battle of Guadalajara in 1937. However, of the nearly 700 tanks deployed during this conflict, only about 64 on the Nationalist side and 331 on the Republican side were armed with cannon; of those 64, nearly all were World War I-vintage Renault FTs, while the 331 Soviet-supplied machines were of 1930s manufacture and carried 45 mm main guns. The remaining Nationalist tanks were armed only with machine guns. The primary lesson of the war was that machine-gun-armed tanks were inadequate: future tanks needed cannon, together with the heavier armour characteristic of modern designs. The five-month-long war between the Soviet Union and the Japanese 6th Army at Khalkhin Gol (Nomonhan) in 1939 brought home further lessons.
In this conflict, the Soviets fielded over two thousand tanks against roughly 73 cannon-armed tanks deployed by the Japanese; a notable technical difference was that the Japanese armour ran on diesel engines whereas the Soviet tanks used petrol engines. After General Georgy Zhukov inflicted a defeat on the Japanese 6th Army with his massed combined tank and air attack, the Soviets drew a lesson from the fire risk of their petrol engines and incorporated that experience into the diesel-powered T-34 medium tank they fielded during World War II. Prior to World War II, the tactics and strategy of deploying tank forces underwent a revolution. Heinz Guderian, a tactical theoretician who was heavily involved in the formation of the first independent German tank force, said "Where tanks are, the front is", and this concept became a reality in World War II. Guderian's armoured warfare ideas, combined with Germany's existing doctrines of "Bewegungskrieg" ("maneuver warfare") and infiltration tactics from World War I, became the basis of blitzkrieg in the opening stages of World War II. During World War II, the first conflict in which armoured vehicles were critical to battlefield success, the tank and related tactics developed rapidly. Armoured forces proved capable of tactical victory in an unprecedentedly short amount of time, yet new anti-tank weaponry showed that the tank was not invulnerable. During the Invasion of Poland, tanks performed in a more traditional role in close cooperation with infantry units, but in the Battle of France deep independent armoured penetrations were executed by the Germans, a technique later called "blitzkrieg". Blitzkrieg used innovative combined arms tactics and radios in all of the tanks to provide a level of tactical flexibility and power that surpassed that of the Allied armour.
The French Army, with tanks equal or superior to the German tanks in both quality and quantity, employed a linear defensive strategy in which the armoured cavalry units were made subservient to the needs of the infantry armies, covering their entrenchment in Belgium. In addition, the French lacked radios in many of their tanks and headquarters, which limited their ability to respond to German attacks. In accordance with blitzkrieg methods, German tanks bypassed enemy strongpoints and could radio for close air support to destroy them, or leave them to the infantry. A related development, motorized infantry, allowed some of the troops to keep up with the tanks and create highly mobile combined arms forces. The defeat of a major military power within weeks shocked the rest of the world, spurring tank and anti-tank weapon development. The North African Campaign also provided an important battleground for tanks, as the flat, desolate terrain with relatively few obstacles or urban environments was ideal for conducting mobile armoured warfare. However, this battlefield also showed the importance of logistics, especially in an armoured force, as the principal warring armies, the German Afrika Korps and the British Eighth Army, often outpaced their supply trains in repeated attacks and counter-attacks on each other, resulting in complete stalemate. This situation was not resolved until 1942, when, during the Second Battle of El Alamein, the Afrika Korps, crippled by disruptions in its supply lines, had 95% of its tanks destroyed and was forced to retreat by a massively reinforced Eighth Army, the first in a series of defeats that would eventually lead to the surrender of the remaining Axis forces in Tunisia. When Germany launched its invasion of the Soviet Union, Operation Barbarossa, the Soviets had a superior tank design, the T-34.
A lack of preparation for the Axis surprise attack, mechanical problems, poor crew training and incompetent leadership caused the Soviet machines to be surrounded and destroyed in large numbers. However, interference from Adolf Hitler, the geographic scale of the conflict, the dogged resistance of the Soviet combat troops, and the Soviets' massive advantages in manpower and production capability prevented a repeat of the Blitzkrieg of 1940. Despite early successes against the Soviets, the Germans were forced to up-gun their Panzer IVs, and to design and build both the larger and more expensive Tiger heavy tank in 1942, and the Panther medium tank the following year. In doing so, the "Wehrmacht" denied the infantry and other support arms the production priorities that they needed to remain equal partners with the increasingly sophisticated tanks, in turn violating the principle of combined arms they had pioneered. Soviet developments following the invasion included upgunning the T-34, development of self-propelled anti-tank guns such as the SU-152, and deployment of the IS-2 in the closing stages of the war; the T-34 became the most-produced tank of World War II, totalling some 65,000 examples by May 1945. When the United States entered World War II six months later (December 1941), its mass-production capacity, much like that of the Soviets, enabled it to rapidly construct thousands of relatively cheap M4 Sherman medium tanks. A compromise all round, the Sherman was reliable and formed a large part of the Anglo-American ground forces, but in a tank-versus-tank battle was no match for the Panther or Tiger. Numerical and logistical superiority and the successful use of combined arms allowed the Allies to overrun the German forces during the Battle of Normandy.
Upgunned versions with the 76 mm gun M1 and the 17-pounder were introduced to improve the M4's firepower, but concerns about protection remained; despite the apparent armour deficiencies, some 42,000 Shermans were built and delivered to the Allied nations using the type during the war years, a total second only to the T-34. Tank hulls were modified to produce flame tanks, mobile rocket artillery, and combat engineering vehicles for tasks including mine-clearing and bridging. Specialised self-propelled guns, most of which could double as tank destroyers, were also developed by both sides: the Germans with their "Sturmgeschütz", "Panzerjäger" and "Jagdpanzer" vehicles, and the Soviets with their "Samokhodnaya ustanovka" families of AFVs. Such turretless, casemate-style tank destroyers and assault guns were less complex, stripped-down tanks carrying heavy guns that fired only forward. The firepower and low cost of these vehicles made them attractive, but as manufacturing techniques improved and larger turret rings made larger tank guns feasible, the gun turret was recognised as the most effective mounting for the main gun, allowing the vehicle to fire in a direction independent of its movement and enhancing tactical flexibility. During the Cold War, tension between the Warsaw Pact countries and North Atlantic Treaty Organisation (NATO) countries created an arms race that ensured that tank development proceeded largely as it had during World War II. The essence of tank design during the Cold War had been hammered out in the closing stages of World War II. Large turrets, capable suspension systems, greatly improved engines, sloped armour and large-calibre (90 mm and larger) guns were standard. Tank design during the Cold War built on this foundation and included improvements to fire control, gyroscopic gun stabilisation, communications (primarily radio) and crew comfort, and saw the introduction of laser rangefinders and infrared night vision equipment.
Armour technology progressed in an ongoing race against improvements in anti-tank weapons, especially anti-tank guided missiles like the TOW. Medium tanks of World War II evolved into the "main battle tank" (MBT) of the Cold War and took over the majority of tank roles on the battlefield. This gradual transition occurred in the 1950s and 1960s due to anti-tank guided missiles, sabot ammunition and high explosive anti-tank warheads. World War II had shown that the speed of a light tank was no substitute for armour and firepower, and medium tanks were vulnerable to newer weapon technology, rendering them obsolete. In a trend started in World War II, economies of scale led to serial production of progressively upgraded models of all major tanks during the Cold War. For the same reason many upgraded post-World War II tanks and their derivatives (for example, the T-55 and T-72) remain in active service around the world, and even an obsolete tank may be the most formidable weapon on battlefields in many parts of the world. Among the tanks of the 1950s were the British Centurion and Soviet T-54/55, in service from 1946, and the US M48 from 1951. These three vehicles formed the bulk of the armoured forces of NATO and the Warsaw Pact throughout much of the Cold War. Lessons learned from tanks such as the Leopard 1, M48 Patton series, Chieftain, and T-72 led to the contemporary Leopard 2, M1 Abrams, Challenger 2, C1 Ariete, T-90 and Merkava IV. Tanks and anti-tank weapons of the Cold War era saw action in a number of proxy wars like the Korean War, Vietnam War, Indo-Pakistani War of 1971, Soviet–Afghan War and Arab-Israeli conflicts, culminating with the Yom Kippur War. The T-55, for example, has seen action in no fewer than 32 conflicts. In these wars the U.S. or NATO countries and the Soviet Union or China consistently backed opposing forces. Proxy wars were studied by Western and Soviet military analysts and contributed to the Cold War tank development process.
The role of tank-versus-tank combat is diminishing. In urban warfare, tanks work in concert with infantry, with the tanks deployed ahead of the platoon. When engaging enemy infantry, tanks can provide covering fire on the battlefield. Conversely, tanks can spearhead attacks when infantry are deployed in personnel carriers. Tanks were used to spearhead the initial US invasion of Iraq in 2003. As of 2005, 1,100 M1 Abrams tanks had been used by the United States Army in the course of the Iraq War, and they proved to have an unexpectedly high level of vulnerability to roadside bombs. A relatively new type of remotely detonated mine, the explosively formed penetrator, has been used with some success against American armoured vehicles (particularly the Bradley fighting vehicle). However, with upgrades to their rear armour, M1s proved invaluable in fighting insurgents in urban combat, particularly at the Battle of Fallujah, where the US Marines brought in two extra brigades. Britain deployed its Challenger 2 tanks to support its operations in southern Iraq. Israeli Merkava tanks contain features that enable them to support infantry in low intensity conflicts (LIC) and counter-terrorism operations. These features include a rear door and rear corridor, enabling the tank to carry infantry and let them enter and exit safely; the IMI APAM-MP-T multi-purpose ammunition round; advanced C4IS systems; and, most recently, the Trophy active protection system, which protects the tank from shoulder-launched anti-tank weapons. During the Second Intifada further modifications were made, designated as "Merkava Mk. 3d Baz LIC". In terms of firepower, the focus of 2010s-era R&D is on increased detection capability, such as thermal imagers, automated fire control systems for the guns, and increased muzzle energy from the gun to improve range, accuracy and armour penetration. The most mature future gun technology is the electrothermal-chemical gun.
The XM291 electrothermal-chemical tank gun has gone through successful multiple firing sequences on a modified M8 Armored Gun System chassis. To improve tank protection, one field of research involves making the tank invisible to radar by adapting stealth technologies originally designed for aircraft. Improvements to camouflage, and attempts to render the tank invisible through active camouflage that changes according to the tank's surroundings, are being pursued. Research is also ongoing in electromagnetic armour systems to disperse or deflect incoming shaped charges, as well as various forms of active protection systems to prevent incoming projectiles (RPGs, missiles, etc.) from striking the tank. Mobility may be enhanced in future tanks by the use of diesel-electric or turbine-electric series hybrid drives, first used in a primitive, gasoline-engined form with Porsche's "Elefant" German tank destroyer of 1943, improving fuel efficiency while reducing the size and weight of the power plant. Furthermore, advances in gas turbine technology, including the use of advanced recuperators, have allowed for reduction in engine volume and mass to less than 1 m³ and 1 metric ton, respectively, while maintaining fuel efficiency similar to that of a diesel engine. In line with the new doctrine of network-centric warfare, the modern battle tank of the 2010s shows increasing sophistication in its electronics and communication systems. The three traditional factors determining a tank's effectiveness are its "firepower", "protection", and "mobility". Firepower is the ability of a tank's crew to identify, engage, and destroy enemy tanks and other targets using its large-calibre cannon. Protection is the degree to which the tank's armour, profile and camouflage enable the crew to evade detection, protect themselves from enemy fire, and retain vehicle functionality during and after combat.
Mobility includes how well the tank can be transported by rail, sea, or air to the operational staging area; how it moves from the staging area by road or over terrain towards the enemy; and its tactical movement over the battlefield during combat, including the traversing of obstacles and rough terrain. The variations among tank designs have been determined by the way these three fundamental features are blended. For instance, in 1937, French doctrine focused on firepower and protection more than mobility because tanks worked in intimate liaison with the infantry. There was also the development of the heavy cruiser tank, which emphasised armour and firepower in order to challenge Germany's Tiger and Panther tanks. Tanks have been classified by weight, role, or other criteria that have changed over time and place. Classification is determined by the prevailing theories of armoured warfare, which have been altered in turn by rapid advances in technology. No one classification system works across all periods or all nations; in particular, weight-based classification is inconsistent between countries and eras. In World War I, the first tank designs focused on crossing wide trenches, requiring very long and large vehicles, such as the British Mark I; these became classified as heavy tanks. Tanks that fulfilled other combat roles were smaller, like the French Renault FT; these were classified as light tanks or tankettes. Many late-war and inter-war tank designs diverged from these according to new, though mostly untried, concepts for future tank roles and tactics. Tank classifications varied considerably according to each nation's own tank development, such as "cavalry tanks", "fast tanks", and "breakthrough tanks". During World War II, many tank concepts were found unsatisfactory and discarded, mostly leaving the more multi-role tanks; these became easier to classify.
Tank classes based on weight (and the corresponding transport and logistical needs) led to new definitions of heavy and light tank classes, with medium tanks covering the balance of those between. The British maintained cruiser tanks, focused on speed, and infantry tanks that traded speed for more armour. Tank destroyers are tanks or other armoured fighting vehicles specifically designed to defeat enemy tanks. Assault guns are armoured fighting vehicles that combined the roles of infantry tanks and tank destroyers. Some tanks were converted to flame tanks, specializing in close-in attacks on enemy strongholds with flamethrowers. As the war went on, tanks tended to become larger and more powerful, shifting some tank classifications and leading to super-heavy tanks. Experience and technology advances during the Cold War continued to consolidate tank roles. With the worldwide adoption of the modern main battle tank design, which favours a modular universal layout, most other classifications have been dropped from modern terminology. All main battle tanks tend to have a good balance of speed, armour, and firepower, even while technology continues to improve all three. Being fairly large, main battle tanks can be complemented with light tanks, armoured personnel carriers, infantry fighting vehicles or similar relatively lighter armoured fighting vehicles, typically in the roles of armoured reconnaissance, amphibious or air assault operations, or against enemies lacking main battle tanks. The main weapon of modern tanks is typically a single, large-calibre cannon mounted in a fully traversing (rotating) gun turret.
The typical modern tank gun is a smoothbore weapon capable of firing a variety of ammunition: armour-piercing kinetic energy penetrators (KEP), such as armour-piercing discarding sabot (APDS) and armour-piercing fin-stabilised discarding sabot (APFSDS) rounds; high explosive anti-tank (HEAT) and high explosive squash head (HESH) shells; and, in some designs, anti-tank guided missiles (ATGM). These are used to destroy armoured targets, while high explosive (HE) shells are fired at "soft" targets (unarmoured vehicles or troops) or fortifications. Canister shot may be used in close or urban combat situations where the risk of hitting friendly forces with shrapnel from HE rounds is unacceptably high. A gyroscope is used to stabilise the main gun, allowing it to be effectively aimed and fired at the "short halt" or on the move. Modern tank guns are also commonly fitted with insulating thermal jackets to reduce gun-barrel warping caused by uneven thermal expansion, bore evacuators to minimise gun firing fumes entering the crew compartment and sometimes muzzle brakes to minimise the effect of recoil on accuracy and rate of fire. Traditionally, target detection relied on visual identification. This was accomplished from within the tank through telescopic periscopes; often, however, tank commanders would open up the hatch to view the outside surroundings, which improved situational awareness but incurred the penalty of vulnerability to sniper fire. Though several developments in target detection have taken place, these methods are still common practice. In the 2010s, more electronic target detection methods became available. In some cases spotting rifles were used to confirm proper trajectory and range to a target. These spotting rifles were mounted co-axially to the main gun, and fired tracer ammunition ballistically matched to the gun itself.
The gunner would track the movement of the tracer round in flight, and upon impact with a hard surface it would give off a flash and a puff of smoke, after which the main gun was immediately fired. However, this slow method has been mostly superseded by laser rangefinding equipment. Modern tanks also use sophisticated light intensification and thermal imaging equipment to improve fighting capability at night, in poor weather and in smoke. The accuracy of modern tank guns is pushed to the mechanical limit by computerised fire-control systems. A fire-control system uses a laser rangefinder to determine the range to the target, a thermocouple, anemometer and wind vane to correct for weather effects, and a muzzle referencing system to correct for gun-barrel temperature, warping and wear. Two sightings of a target with the rangefinder enable calculation of the target's movement vector. This information is combined with the known movement of the tank and the principles of ballistics to calculate the elevation and aim point that maximise the probability of hitting the target. Usually, tanks carry smaller-calibre armament for short-range defence where fire from the main weapon would be ineffective or wasteful, for example when engaging infantry, light vehicles or close air support aircraft. A typical complement of secondary weapons is a general-purpose machine gun mounted coaxially with the main gun, and a heavier anti-aircraft-capable machine gun on the turret roof. Some tanks also have a hull-mounted machine gun. These weapons are often modified variants of those used by infantry, and so utilise the same kinds of ammunition. The measure of a tank's protection is the combination of its ability to avoid detection (due to having a low profile and through the use of camouflage), its ability to avoid being hit by enemy fire, its resistance to the effects of enemy fire, and its capacity to sustain damage whilst still completing its objective, or at least protecting its crew.
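The two-sighting lead calculation described above can be sketched in a much simplified form: the rangefinder's two position fixes give a target velocity, and a first-order time of flight (range divided by an assumed constant shell speed) gives the aim-off point. The flat-trajectory model and all numbers here are illustrative assumptions, not any real fire-control algorithm, which must also correct for ballistics, weather and barrel wear as described above.

```python
# Simplified lead ("aim-off") calculation from two ranged sightings.
# Assumes a constant-velocity target, a flat trajectory and a constant
# mean shell speed; these are illustrative simplifications only.

def lead_aim_point(p1, p2, dt, shell_speed):
    """Predicted intercept point for a constant-velocity target.

    p1, p2      -- (x, y) target positions from two sightings, metres
    dt          -- time between the sightings, seconds
    shell_speed -- assumed mean shell speed, m/s
    """
    # Target velocity estimated from the two sightings.
    vx = (p2[0] - p1[0]) / dt
    vy = (p2[1] - p1[1]) / dt
    # First-order time of flight: current range / mean shell speed.
    rng = (p2[0] ** 2 + p2[1] ** 2) ** 0.5
    tof = rng / shell_speed
    # Aim where the target will be when the shell arrives.
    return (p2[0] + vx * tof, p2[1] + vy * tof)

# A target 2000 m away crossing at 10 m/s, engaged with a shell
# averaging 1000 m/s, requires aiming roughly 20 m ahead of it.
aim = lead_aim_point((2000.0, 0.0), (2000.0, 10.0), 1.0, 1000.0)
print(aim)
```

A real fire-control computer extends this same vector arithmetic with the tank's own motion and full ballistic corrections, but the underlying idea of predicting the target's future position is the one shown here.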
Protection is achieved by a variety of countermeasures, such as armour plating and reactive defences, as well as more complex ones such as heat-emission reduction. In common with most unit types, tanks are subject to additional hazards in dense wooded and urban combat environments, which largely negate the advantages of the tank's long-range firepower and mobility, limit the crew's detection capabilities and can restrict turret traverse. Despite these disadvantages, tanks retain high survivability against previous-generation rocket-propelled grenades aimed at the most-armoured sections. However, as effective and advanced as armour plating has become, tank survivability against newer-generation tandem-warhead anti-tank missiles is a concern for military planners. Tandem-warhead RPGs use two warheads to fool active protection systems: a dummy warhead fires first to trigger the active defences, with the real warhead following it. For example, the RPG-29, dating from the 1980s, is able to penetrate the frontal hull armour of the Challenger 2 and has also managed to damage an M1 Abrams. Even tanks with advanced armour plating can have their tracks or gear cogs damaged by RPGs, which may render them immobile or hinder their mobility. Despite all of the advances in armour plating, a tank with its hatches open remains vulnerable to Molotov cocktails (gasoline bombs) and grenades. Even a "buttoned up" tank may have components that are vulnerable to Molotov cocktails, such as optics, extra fuel cans and extra ammunition stored on the outside of the tank. A tank avoids detection using the doctrine of countermeasures known as CCD: camouflage (looks the same as the surroundings), concealment (cannot be seen) and deception (looks like something else). Camouflage can include disruptive painted shapes on the tank to break up its distinctive appearance and silhouette. Netting or actual branches from the surrounding landscape are also used.
Prior to the development of infrared technology, tanks were often given a coating of camouflage paint that, depending on environmental region or season, would allow them to blend in with their environment. A tank operating in wooded areas would typically get a green and brown paint job; a tank in a winter environment would get white paint (often mixed with some darker colours); tanks in the desert often get khaki paint jobs. The Russian Nakidka camouflage kit was designed to reduce the optical, thermal, infrared, and radar signatures of a tank, so that acquisition of the tank would be difficult. According to NII Stali, the designers of Nakidka, the kit would reduce the probability of detection in the "visual and near-IR bands by 30%, the thermal band by 2–3 fold, radar band by 6 fold, and radar-thermal band to near-background levels". Concealment can include hiding the tank among trees or digging in the tank by having a combat bulldozer dig out part of a hill, so that much of the tank will be hidden. A tank commander can conceal the tank by using "hull down" approaches when going over upward-sloping hills, so that he or she can look out from the commander's cupola without the distinctive-looking main cannon cresting over the hill. Adopting a turret-down or hull-down position reduces the visible silhouette of a tank as well as providing the added protection of a position in defilade. Working against efforts to avoid detection is the fact that a tank is a large metallic object with a distinctive, angular silhouette that emits copious heat and engine noise. A tank that is operating in cold weather or which needs to use its radio or other communications or target-detecting electronics will need to start its engine regularly to maintain its battery power, which will create engine noise. Consequently, it is difficult to effectively camouflage a tank in the absence of some form of cover or concealment (e.g., woods) it can hide its hull behind.
The tank becomes easier to detect when moving (typically, whenever it is in use) due to the large, distinctive auditory, vibration and thermal signature of its engine and power plant. Tank tracks and dust clouds also betray past or present tank movement. Switched-off tanks are vulnerable to infra-red detection due to differences between the thermal conductivity, and therefore heat dissipation, of the metallic tank and its surroundings. At close range the tank can be detected even when powered down and fully concealed, due to the column of warmer air above the tank and the smell of diesel or gasoline. Thermal blankets slow the rate of heat emission, and some thermal camouflage nets use a mix of materials with differing thermal properties to operate in the infra-red as well as the visible spectrum. Grenade launchers can rapidly deploy a smoke screen that is opaque to infrared light, hiding the tank from the thermal viewer of another tank. In addition to using its own grenade launchers, a tank commander could call in an artillery unit to provide smoke cover. Some tanks can also produce a smoke screen directly. Sometimes camouflage and concealment are used at the same time. For example, a camouflage-painted and branch-covered tank (camouflage) may be hidden behind a hill or in a dug-in emplacement (concealment). Some armoured recovery vehicles (often tracked, tank chassis-based "tow trucks" for tanks) have dummy turrets and cannons, making it less likely that enemy tanks will fire on these vehicles. Some armies have fake "dummy" tanks made of wood which troops can carry into position and hide behind obstacles. These "dummy" tanks may cause the enemy to think that there are more tanks than are actually possessed. To effectively protect the tank and its crew, tank armour must counter a wide variety of antitank threats. 
Protection against kinetic energy penetrators and high explosive anti-tank (HEAT) shells fired by other tanks is of primary importance, but tank armour also aims to protect against infantry mortars, grenades, rocket-propelled grenades, anti-tank guided missiles, anti-tank mines, anti-tank rifles, bombs, direct artillery hits, and (less often) nuclear, biological and chemical threats, any of which could disable or destroy a tank or its crew. Steel armour plate was the earliest type of armour. The Germans pioneered the use of face-hardened steel during World War II, and the Soviets achieved improved protection with sloped armour technology. World War II developments led to the obsolescence of homogeneous steel armour with the advent of shaped-charge warheads, exemplified by the Panzerfaust and bazooka infantry-carried weapons, which were effective despite some early success with spaced armour. Magnetic mines led to the development of anti-magnetic paste and paint. From WWII to the modern era, troops have added improvised armour to tanks while in combat settings, such as sandbags or pieces of old armour plating. British tank researchers took the next step with the development of Chobham armour, or more generally composite armour, incorporating ceramics and plastics in a resin matrix between steel plates, which provided good protection against HEAT weapons. High explosive squash head warheads led to anti-spall armour linings, and kinetic energy penetrators led to the inclusion of exotic materials, such as a matrix of depleted uranium, in composite armour configurations. Reactive armour consists of small explosive-filled metal boxes that detonate when hit by the metallic jet projected by an exploding HEAT warhead, causing their metal plates to disrupt it. Tandem warheads defeat reactive armour by causing the armour to detonate prematurely. 
Modern reactive armour protects itself from tandem warheads by having a thicker front metal plate that prevents the precursor charge from detonating the explosive in the reactive armour. Reactive armour can also reduce the penetrative ability of kinetic energy penetrators by deforming the penetrator with its metal plates, thereby reducing the penetrator's effectiveness against the main armour of the tank. The latest generation of protective measures for tanks are active protection systems. The term "active" is used to contrast these approaches with the armour used as the primary protective approach in earlier tanks. The mobility of a tank is described by its battlefield or tactical mobility, its operational mobility, and its strategic mobility. Tank agility is a function of the weight of the tank (due to its inertia while manoeuvring and its ground pressure), the power output of the installed power plant, and the tank's transmission and track design. In addition, rough terrain effectively limits the tank's speed through the stress it puts on the suspension and the crew. A breakthrough in this area was achieved during World War II, when improved suspension systems were developed that allowed better cross-country performance and limited firing on the move. Systems like the earlier Christie or the later torsion-bar suspension developed by Ferdinand Porsche dramatically improved the tank's cross-country performance and overall mobility. Tanks are highly mobile and able to travel over most types of terrain due to their continuous tracks and advanced suspension. The tracks disperse the weight of the vehicle over a large area, resulting in less ground pressure. A tank can travel at approximately across flat terrain and up to on roads, but due to the mechanical strain this places on the vehicle and the logistical strain on fuel delivery and tank maintenance, these must be considered "burst" speeds that invite mechanical failure of engine and transmission systems. 
Consequently, wheeled tank transporters and rail infrastructure are used wherever possible for long-distance tank transport. The limitations of long-range tank mobility stand in sharp contrast to those of wheeled armoured fighting vehicles. The majority of blitzkrieg operations were conducted at the pedestrian pace of , and that was only achieved on the roads of France. The tank's power plant supplies kinetic energy to move the tank, and electric power via a generator to components such as the turret rotation motors and the tank's electronic systems. The tank power plant has evolved from predominantly petrol and adapted large-displacement aeronautical or automotive engines during World Wars I and II, through diesel engines, to advanced multi-fuel diesel engines, and powerful (per unit weight) but fuel-hungry gas turbines in the T-80 and M1 Abrams. Strategic mobility is the ability of the tanks of an armed force to arrive in a timely, cost-effective, and synchronized fashion. For good strategic mobility, transportability by air is important, which means that weight and volume must be kept within the capabilities of the designated transport aircraft. Nations often stockpile enough tanks to respond to any threat without having to make more, as many sophisticated designs can only be produced at a relatively low rate. The US, for instance, keeps 6,000 MBTs in storage. In the absence of combat engineers, most tanks are limited to fording small rivers. The typical fording depth for MBTs is approximately , being limited by the height of the engine air intake and the driver's position. Modern tanks such as the Russian T-90 and the German Leopard 1 and Leopard 2 can ford to a depth of when properly prepared and equipped with a snorkel to supply air for the crew and engine. 
Tank crews usually have a negative reaction towards deep fording, but it adds considerable scope for surprise and tactical flexibility in water crossing operations by opening new and unexpected avenues of attack. Amphibious tanks are specially designed or adapted for water operations, such as by including snorkels and skirts, but they are rare in modern armies, having been replaced by purpose-built amphibious assault vehicles or armoured personnel carriers in amphibious assaults. Advances such as the EFA mobile bridge and armoured vehicle-launched scissors bridges have also reduced the impediment to tank advance that rivers posed in World War II. Most modern tanks have four crew members, or three if an auto-loader is installed: the commander, gunner, loader (where present) and driver. Historically, crews have varied from just two members to a dozen. First World War tanks were developed with immature technologies; in addition to the crew needed to man the multiple guns and machine guns, up to four crewmen were needed to drive the tank: the driver, acting as the vehicle commander and manning the brakes, drove via orders to his gears-men; a co-driver operated the gearbox and throttle; and two gears-men, one on each track, steered by setting one side or the other to idle, allowing the track on the other side to slew the tank to one side. Pre-World War II French tanks were noted for having a two-man crew, in which the overworked commander had to load and fire the gun in addition to commanding the tank. By World War II, multi-turreted tanks had proved impracticable, and as the single turret on a low hull became the standard design, crews became standardized around four or five members. In those tanks with a fifth crew member, usually three were located in the turret while the fifth was most often seated in the hull next to the driver, and operated the hull machine gun in addition to acting as a co-driver or radio operator. 
Well-designed crew stations, giving proper consideration to comfort and ergonomics, are an important factor in the combat effectiveness of a tank, as they limit fatigue and speed up individual actions. A noted author on the subject of tank design engineering, Richard M Ogorkiewicz, outlined the basic engineering sub-systems that are commonly incorporated into a tank's technological development. To these can be added unit communication systems and electronic anti-tank countermeasures, crew ergonomic and survival systems (including flame suppression), and provision for technological upgrading. Few tank designs have survived their entire service lives without some upgrading or modernisation, particularly during wartime, and some have changed almost beyond recognition, such as the latest Israeli Magach versions. The characteristics of a tank are determined by the performance criteria required of it. The obstacles that must be traversed affect the vehicle's front and rear profiles. The terrain that is expected to be traversed determines the track ground pressure that may be allowed for that particular terrain. Tank design is a compromise between its technological and budgetary constraints and its tactical capability requirements. It is not possible to maximise firepower, protection and mobility simultaneously while incorporating the latest technology and retaining affordability for sufficient procurement quantity to enter production. For example, in the case of tactical capability requirements, increasing protection by adding armour will result in an increase in weight and therefore a decrease in mobility; increasing firepower by installing a larger gun will force the design team to increase armour, and therefore the weight of the tank, in order to retain the same internal volume and ensure crew efficiency during combat. 
In the case of the Abrams MBT, which has good firepower, speed and armour, these advantages are counterbalanced by its engine's notably high fuel consumption, which ultimately reduces its range, and in a larger sense its mobility. Since the Second World War, the economics of tank production (governed by the complexity of manufacture and cost) and the impact of a given tank design on logistics and field maintenance capabilities have also been accepted as important in determining how many tanks a nation can afford to field in its force structure. Some tank designs that were fielded in significant numbers, such as the Tiger I and the M60A2, proved too complex or expensive to manufacture and made unsustainable demands on the logistics services supporting the armed forces. The "affordability of the design" therefore takes precedence over the combat capability requirements. Nowhere was this principle illustrated better than during the Second World War, when two Allied designs, the T-34 and the M4 Sherman, both simple designs that accepted engineering compromises, were used successfully against more sophisticated German designs that were more complex and expensive to produce, and more demanding on the overstretched logistics of the Wehrmacht. Given that a tank crew will spend most of its time occupied with maintenance of the vehicle, engineering simplicity has become the primary constraint on tank design since the Second World War, despite advances in mechanical, electrical and electronics technologies. Since the Second World War, tank development has incorporated experimenting with significant mechanical changes to the tank design while focusing on technological advances in the tank's many subsystems to improve its performance. 
However, a number of novel designs have appeared throughout this period with mixed success, including the Soviet IT-1 and T-64 in firepower, and the Israeli Merkava and Swedish S-tank in protection, while for decades the US's M551 remained the only light tank deployable by parachute. Commanding and coordinating tanks in the field has always been subject to particular problems, especially in the area of communications, but in modern armies these problems have been partially alleviated by networked, integrated systems that enable communications and contribute to enhanced situational awareness. Armoured bulkheads, engine noise, intervening terrain, dust and smoke, and the need to operate "buttoned up" (with hatches closed) are severe detriments to communication and lead to a sense of isolation for small tank units, individual vehicles, and tank crews. During the First World War, radios were not yet portable or robust enough to be mounted in a tank, although Morse code transmitters were installed in some Mark IVs at Cambrai as messaging vehicles. Attaching a field telephone to the rear would become a practice only during the next war. When these methods failed or were unavailable, situation reports were sent back to headquarters by crews releasing carrier pigeons through loopholes or hatches, and communication between vehicles was accomplished using hand signals, handheld semaphore flags (which continued in use in the Red Army/Soviet Army through the Second World War and the Cold War), or by foot or horse-mounted messengers. From the beginning, the German military stressed wireless communications, equipping their combat vehicles with radios and drilling all units to rely on disciplined radio use as a basic element of tactics. This allowed them to respond to developing threats and opportunities during battles, giving the Germans a notable tactical advantage early in the war; even where Allied tanks initially had better firepower and armour, they generally lacked individual radios. 
By mid-war, Western Allied tanks had adopted full use of radios, although Soviet use of radios remained relatively limited. On the modern battlefield, an intercom mounted in the crew helmet provides internal communications and a link to the radio network, and on some tanks an external intercom on the rear of the tank provides communication with co-operating infantry. Radio networks employ radio voice procedure to minimize confusion and "chatter". A recent development in AFV equipment and doctrine is the integration of information from the fire control system, laser rangefinder, Global Positioning System and terrain information via hardened military-specification electronics and a battlefield network, to display information on enemy targets and friendly units on a monitor in the tank. The sensor data can be sourced from nearby tanks, planes, UAVs or, in the future, infantry (such as the US Future Force Warrior project). This improves the tank commander's situational awareness and ability to navigate the battlefield and select and engage targets. In addition to easing the reporting burden by automatically logging all orders and actions, orders are sent via the network with text and graphical overlays. This is known as network-centric warfare by the US, Network Enabled Capability (UK) or the Digital Army Battle Management System צי"ד (Israel). Advanced battle tanks, including the K2 Black Panther, have taken the first major step toward adopting a fully radar-integrated fire control system, which allows the tank to detect other tanks at a greater distance and identify them as friend or foe, while also improving accuracy and the ability to lock onto targets. Situational awareness and communication is one of the four primary MBT functions in the 21st century. To improve the crew's situational awareness, MBTs use all-round-view systems combining augmented reality and artificial intelligence technologies. 
Further advancements in tank defence systems have led to the development of active protection systems. These involve one of two approaches: "soft-kill" systems, which disrupt or decoy the guidance of an incoming projectile, and "hard-kill" systems, which physically intercept and destroy it.

The word "tank" was first applied to the British "landships" in 1915, before they entered service, to keep their nature secret. On 24 December 1915, a meeting took place of the Inter-Departmental Conference (including representatives of the Director of Naval Construction's Committee, the Admiralty, the Ministry of Munitions, and the War Office). Its purpose was to discuss the progress of the plans for what were described as "Caterpillar Machine Gun Destroyers or Land Cruisers." In his autobiography, Albert Gerald Stern (Secretary to the Landships Committee, later head of the Mechanical Warfare Supply Department) says that at that meeting "Mr. (Thomas J.) Macnamara (M.P., and Parliamentary and Financial Secretary to the Admiralty) then suggested, for secrecy's sake, to change the title of the Landships Committee. Mr. d'Eyncourt agreed that it was very desirable to retain secrecy by all means, and proposed to refer to the vessel as a "Water Carrier". In Government offices, committees and departments are always known by their initials. For this reason I, as Secretary, considered the proposed title totally unsuitable. In our search for a synonymous term, we changed the word "Water Carrier" to "Tank," and became the "Tank Supply" or "T.S." Committee. That is how these weapons came to be called Tanks," and incorrectly added, "and the name has now been adopted by all countries in the world." Colonel Ernest Swinton, who was secretary to the meeting, says that he was instructed to find a non-committal word when writing his report of the proceedings. In the evening he discussed it with a fellow officer, Lt-Col Walter Dally Jones, and they chose the word "tank". "That night, in the draft report of the conference, the word 'tank' was employed in its new sense for the first time." 
Swinton's "Notes on the Employment of Tanks", in which he uses the word throughout, was published in January 1916, and in July 1918 "Popular Science Monthly" reported on the origin of the name. D'Eyncourt's account differs from Swinton's and Tritton's, and appears to be an imperfect recollection. He says that the name problem arose "when we shipped the first two vehicles to France the following year" (August 1916), but by that time the name "tank" had been in use for eight months. The tanks were labelled "With Care to Petrograd," but the belief was encouraged that they were a type of snowplough. In saying that the word "tank" was adopted worldwide, Stern was wrong. In France, the second country to use tanks in battle, the word "tank" or "tanque" was adopted initially, but was then, largely at the insistence of Colonel J.B.E. Estienne, rejected in favour of "char d'assaut" ("assault vehicle") or simply "char" ("vehicle"). During World War I, German sources tended to refer to British tanks as "Tanks" and to their own as "Kampfwagen". Later, German tanks became referred to as "Panzer" (lit. "armour"), a shortened form of the full term "Panzerkampfwagen", literally "armoured fighting vehicle". In the Arab world, tanks are called "Dabbāba" (after a type of siege engine). In Italian, a tank is a "carro armato" (lit. "armed wagon"), without reference to its armour. Norway uses the term "stridsvogn" and Sweden the similar "stridsvagn" (lit. "battle wagon", also used for "chariots"), whereas Denmark uses "kampvogn" (lit. "fight wagon"). Finland uses "panssarivaunu" (armoured wagon), although "tankki" is also used colloquially. The Polish name "czołg", derived from the verb "czołgać się" ("to crawl"), reflects the machine's manner of movement and its speed. In Hungarian the tank is called "harckocsi" (combat wagon), although "tank" is also common. 
In Japanese, the term is taken from Chinese, and this term is likewise borrowed into Korean as "jeoncha" (전차/戰車); more recent Chinese literature uses the English-derived 坦克 "tǎnkè" (tank), as opposed to 戰車 "zhànchē" ("battle vehicle"), which was used in earlier days.
https://en.wikipedia.org/wiki?curid=29970
Herbal tea Herbal teas, less commonly called tisanes, are beverages made from the infusion or decoction of herbs, spices, or other plant material in hot water. The term "herbal tea" is often used in contrast to true teas (e.g., black, green, white, yellow, oolong), which are prepared from the cured leaves of the tea plant, "Camellia sinensis". Unlike coffee and true teas (which are also available decaffeinated), most tisanes do not naturally contain caffeine. "Camellia sinensis", the tea plant, has been grown for around 5,000 years. The plant is a member of the family Theaceae, its origins dating back to China and Southeast Asia. According to ancient Chinese legend, the drink was discovered accidentally by the legendary emperor Shen Nong (around 2700 BCE). Legend aside, it is documented that the Chinese have been using herbal tea as a medicine for around 2,000 years. The habitual consumption of tea grew in Asia, and European explorers eventually brought it home to Europe in the 17th century, when tea became a staple of British and Irish culture. Tea is widely consumed all over the world today. Some feel that the term "tisane" is more correct than "herbal tea", or that the latter is even misleading, but most dictionaries record that the word "tea" is also used to refer to other plants besides the tea plant and to beverages made from these other plants. In any case, the term "herbal tea" is well established and much more common than "tisane". The word "tisane" was rare in its modern sense before the 20th century, when it was borrowed in that sense from French. (This is why some people feel it should be pronounced as in French, but the original English pronunciation continues to be more common in US English and especially in UK English.) The word had already existed in late Middle English in the sense of "medicinal drink", having been borrowed from Old French. 
The Old French word came from the Latin word "ptisana", which came from the Ancient Greek word πτισάνη ("ptisánē"), which meant "peeled" barley, in other words pearl barley, and a drink made from this that is similar to modern barley water. While most herbal teas are safe for regular consumption, some herbs have toxic or allergenic effects. Herbal teas can also have different effects from person to person, and this is further compounded by the problem of potential misidentification. The deadly foxglove, for example, can be mistaken for the much more benign (but still relatively toxic to the liver) comfrey. The US does not require herbal teas to have any evidence concerning their efficacy, but does treat them technically as food products and requires that they be safe for consumption. Fruit or fruit-flavored tea is usually acidic and thus may contribute to erosion of tooth enamel. Depending on the source of the herbal ingredients, herbal teas, like any crop, may be contaminated with pesticides or heavy metals. According to Naithani & Kakkar (2004), "all herbal preparations should be checked for toxic chemical residues to allay consumer fears of exposure to known neuro-toxicant pesticides and to aid in promoting global acceptance of these products". In addition to the issues mentioned above, which affect all people, several medicinal herbs are considered abortifacients, and if consumed by a pregnant woman could cause miscarriage. These include common ingredients like nutmeg, mace, papaya, bitter melon, verbena, saffron, slippery elm, and possibly pomegranate. They also include more obscure herbs, like mugwort, rue, pennyroyal, wild carrot, blue cohosh, tansy, and savin. Herbal teas can be made with fresh or dried flowers, fruit, leaves, seeds or roots. They are made by pouring boiling water over the plant parts and letting them steep for a few minutes. The herbal tea is then strained, sweetened if desired, and served. 
Many companies produce herbal tea bags for such infusions. Tisanes can be made from almost any edible plant material, though some plants are far more commonly used than others.
https://en.wikipedia.org/wiki?curid=29972
Turmeric Turmeric is a flowering plant, "Curcuma longa", of the ginger family, Zingiberaceae, the roots of which are used in cooking. The plant is a perennial, rhizomatous, herbaceous plant native to the Indian subcontinent and Southeast Asia that requires temperatures between and a considerable amount of annual rainfall to thrive. Plants are gathered each year for their rhizomes, some for propagation in the following season and some for consumption. The rhizomes are used fresh or boiled in water and dried, after which they are ground into a deep orange-yellow powder commonly used as a coloring and flavoring agent in many Asian cuisines, especially for curries, as well as for dyeing. Turmeric powder has a warm, bitter, black pepper-like flavor and an earthy, mustard-like aroma. It has long been used in Ayurvedic medicine, where it is also known as "haridra". According to the US Food and Drug Administration, there is no high-quality clinical evidence for using turmeric or its constituent curcumin to treat any disease. Turmeric has been used in Asia for thousands of years and is a major part of Ayurveda, Siddha medicine, traditional Chinese medicine, Unani, and the animistic rituals of Austronesian peoples. It was first used as a dye, and later for its supposed properties in folk medicine. The greatest diversity of "Curcuma" species by number alone is in India, at around 40 to 45 species. Thailand, for example, has a comparable 30 to 40 species despite being much smaller than India. Other countries in tropical Asia also have numerous wild species of "Curcuma". Recent studies have also shown that the taxonomy of "Curcuma longa" is problematic, with only the specimens from South India being identifiable as "C. longa". The phylogeny, relationships, intraspecific and interspecific variation, and even the identity of other species and cultivars in other parts of the world still need to be established and validated. 
Various species currently utilized and sold as "turmeric" in other parts of Asia have been shown to belong to several physically similar taxa with overlapping local names. Furthermore, there is linguistic and circumstantial evidence of the spread and use of turmeric by the Austronesian peoples into Oceania and Madagascar. The populations of Polynesia and Micronesia, in particular, never came into contact with India but use turmeric widely for both food and dye, so independent domestication events are also likely. The name possibly derives from Middle English or Early Modern English. It may be of Latin origin, from a phrase meaning "meritorious earth". The name of the genus, "Curcuma", is derived from a Sanskrit word referring to turmeric, used in India since ancient times. Turmeric is a perennial herbaceous plant that reaches up to tall. Highly branched, yellow to orange, cylindrical, aromatic rhizomes are found. The leaves are alternate and arranged in two rows. They are divided into leaf sheath, petiole, and leaf blade. From the leaf sheaths, a false stem is formed. The petiole is long. The simple leaf blades are usually long and rarely up to . They have a width of and are oblong to elliptical, narrowing at the tip. At the top of the inflorescence, stem bracts are present on which no flowers occur; these are white to green and sometimes tinged reddish-purple, and the upper ends are tapered. The hermaphrodite flowers are zygomorphic and threefold. The three sepals are long, fused, and white, and have fluffy hairs; the three calyx teeth are unequal. The three bright-yellow petals are fused into a corolla tube up to long. The three corolla lobes have a length of and are triangular with soft-spiny upper ends. While the median corolla lobe is larger than the two lateral ones, only the median stamen of the inner circle is fertile. The anther is spurred at its base. All other stamens are converted to staminodes. The outer staminodes are shorter than the labellum. 
The labellum is yellowish, with a yellow band in its center; it is obovate, with a length from . The three carpels are fused into a trilobed, sparsely hairy ovary. The fruit capsule opens with three compartments. In East Asia, the flowering time is usually in August. Terminally on the false stem is an inflorescence stem, long, containing many flowers. The bracts are light green and ovate to oblong with a blunt upper end, with a length of . Turmeric powder is about 60–70% carbohydrates, 6–13% water, 6–8% protein, 5–10% fat, 3–7% dietary minerals, 3–7% essential oils, 2–7% dietary fiber, and 1–6% curcuminoids. Phytochemical components of turmeric include diarylheptanoids, a class including numerous curcuminoids such as curcumin, demethoxycurcumin, and bisdemethoxycurcumin. Curcumin constitutes up to 3.14% of assayed commercial samples of turmeric powder (the average was 1.51%); curry powder contains much less (an average of 0.29%). Some 34 essential oils are present in turmeric, among which turmerone, germacrone, atlantone, and zingiberene are major constituents. Turmeric is one of the key ingredients in many Asian dishes, imparting a mustard-like, earthy aroma and a pungent, slightly bitter flavor to foods. It is used mostly in savory dishes, but also in some sweet dishes, such as the cake "sfouf". In India, turmeric leaf is used to prepare special sweet dishes, "patoleo", by layering rice flour and a coconut-jaggery mixture on the leaf, then closing and steaming it in a special utensil ("chondrõ"). Most turmeric is used in the form of rhizome powder to impart a golden yellow color. It is used in many products such as canned beverages, baked products, dairy products, ice cream, yogurt, yellow cakes, orange juice, biscuits, popcorn color, cereals, sauces, and gelatin. It is a principal ingredient in curry powders. Although typically used in its dried, powdered form, turmeric also is used fresh, like ginger. 
It has numerous uses in East Asian recipes, such as a pickle that contains large chunks of fresh soft turmeric. Turmeric is used widely as a spice in South Asian and Middle Eastern cooking. Various Iranian "khoresh" recipes begin with onions caramelized in oil and turmeric. The Moroccan spice mix ras el hanout typically includes turmeric. In South Africa, turmeric is used to give boiled white rice a golden color, known as "geelrys" (yellow rice), traditionally served with bobotie. In Vietnamese cuisine, turmeric powder is used to color and enhance the flavors of certain dishes, such as "bánh xèo", "bánh khọt", and "mi quang". The staple Cambodian curry paste, "kroeung", used in many dishes including "amok", typically contains fresh turmeric. In Indonesia, turmeric leaves are used for the Minang or Padang curry base of Sumatra, such as "rendang", "sate padang", and many other varieties. In the Philippines, turmeric is used in the preparation and cooking of kuning and satay. In Thailand, fresh turmeric rhizomes are used widely in many dishes, in particular in southern Thai cuisine, such as yellow curry and turmeric soup. Turmeric is used in a hot drink called "turmeric latte" or "golden milk" that is made with milk, frequently coconut milk. The turmeric milk drink known as "haldi doodh" ("haldi" means turmeric in Hindi) is a South Asian recipe. Sold in the US and UK, the drink known as "golden mylk" uses nondairy milk and sweetener, and sometimes black pepper after the traditional recipe (which may also use "ghee"). The golden yellow color of turmeric is due to curcumin. It also contains an orange-colored volatile oil. Turmeric makes a poor fabric dye, as it is not very lightfast, but it is commonly used in Indian clothing, such as "saris" and Buddhist monks' robes. It is used to protect food products from sunlight, and is coded as E100 when used as a food additive. The oleoresin is used for oil-containing products. 
A curcumin and polysorbate solution, or curcumin powder dissolved in alcohol, is used for water-containing products. Overcoloring, such as in pickles, relishes, and mustard, is sometimes used to compensate for fading. In combination with annatto (E160b), turmeric has been used to color cheeses, yogurt, dry mixes, salad dressings, winter butter, and margarine. Turmeric is used to give a yellow color to some prepared mustards, canned chicken broths, and other foods, often as a much cheaper replacement for saffron. Turmeric paper, also called curcuma paper or, in German literature, "Curcumapapier", is paper steeped in a tincture of turmeric and allowed to dry. It is used in chemical analysis as an indicator for acidity and alkalinity. The paper is yellow in acidic and neutral solutions and turns brown to reddish-brown in alkaline solutions, with the transition between pH 7.4 and 9.2. Turmeric grows wild in the forests of South and Southeast Asia, where it is collected for use in classical Indian medicine (Siddha or Ayurveda). In Eastern India, the plant is used as one of the nine components of "navapatrika", along with young plantain or banana plant, taro leaves, barley ("jayanti"), wood apple ("bilva"), pomegranate ("darimba"), "Saraca indica", "manaka" ("Arum") or "manakochu", and rice paddy. The Haldi ceremony, called "gaye holud" in Bengal (literally "yellow on the body"), is observed during wedding celebrations of people of Indian culture throughout the Indian subcontinent. In Tamil Nadu and Andhra Pradesh, as part of the Tamil–Telugu marriage ritual, dried turmeric tuber tied with string is used to create a Thali necklace. In western and coastal India, during weddings of Marathi and Konkani people and of Kannada Brahmins, turmeric tubers are tied with string to the couple's wrists during a ceremony, "Kankanabandhana". 
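The acid–base behavior of turmeric paper described above can be sketched as a simple lookup. This is an illustrative sketch only: the thresholds are taken from the transition range (pH 7.4–9.2) quoted in the text, and the function name is hypothetical.

```python
def turmeric_paper_color(ph: float) -> str:
    """Approximate color of turmeric (curcuma) paper at a given pH.

    Yellow below the transition range, reddish-brown above it, and a
    gradual color shift in between (pH 7.4-9.2 per the text).
    """
    if ph < 7.4:
        return "yellow"          # acidic and neutral solutions
    if ph > 9.2:
        return "reddish-brown"   # clearly alkaline solutions
    return "transitional"        # color is shifting between the two


# A strongly alkaline solution turns the paper reddish-brown:
print(turmeric_paper_color(10.0))  # reddish-brown
# A neutral solution leaves it yellow:
print(turmeric_paper_color(7.0))   # yellow
```

This mirrors how the paper is read in practice: only a color change past the transition band indicates alkalinity.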
Friedrich Ratzel reported in "The History of Mankind" in 1896 that in Micronesia, turmeric powder was applied for embellishment of the body, clothing, and utensils, and for ceremonial uses. As turmeric and other spices are commonly sold by weight, the potential exists for powders of toxic, cheaper agents with a similar color to be added, such as lead(II,IV) oxide ("red lead"). These additives give turmeric an orange-red color instead of its native gold-yellow, and such conditions led the US Food and Drug Administration (FDA) to issue import alerts from 2013 to 2019 on turmeric originating in India and Bangladesh. Imported into the United States in 2014 were approximately of turmeric, some of which was used for food coloring, traditional medicine, or dietary supplements. Lead detection in turmeric products led to recalls across the United States, Canada, Japan, Korea, and the United Kingdom through 2016. Lead chromate, a bright yellow chemical compound, was found as an adulterant of turmeric in Bangladesh, where turmeric is used commonly in foods, and contamination levels were up to 500 times higher than the national limit. Researchers identified a chain of sources adulterating the turmeric with lead chromate: from farmers to merchants selling low-grade turmeric roots, to "polishers" who added lead chromate for yellow color enhancement, to wholesalers for market distribution, all unaware of the potential consequences of lead toxicity. Another common adulterant in turmeric, metanil yellow (also known as acid yellow 36), is considered an illegal dye for use in foods by the British Food Standards Agency. Turmeric and curcumin, one of its constituents, have been studied in numerous clinical trials for various human diseases and conditions, but the conclusions have either been uncertain or negative. Claims that curcumin in turmeric may help to reduce inflammation remain unproven.
https://en.wikipedia.org/wiki?curid=29973
Total war Total war, as opposed to limited war, is used to describe warfare that mobilizes all of the resources of society to fight the war, including any and all civilian-associated resources and infrastructure, and gives priority to what the state requires for warfare over the needs of non-combatants. The term emerged in the middle of the 20th century to describe World War I and later World War II, when mass conscription and the conversion of national economies into wartime economies became normal. The "Encyclopædia Britannica" defines total war as "military conflict in which the contenders are willing to make any sacrifice in lives and other resources to obtain a complete victory, as distinguished from limited war." In a total war the differentiation between combatants and non-combatants diminishes due to the capacity of opposing sides to consider nearly every human, including non-combatants, as a resource to be used in the war effort. The phrase "total war" can be traced back to the 1935 publication of German general Erich Ludendorff's World War I memoir, "Der totale Krieg" ("The Total War"). Some authors extend the concept back as far as Carl von Clausewitz's classic work "On War", as "absoluter Krieg" (absolute war), even though he did not use the term; others interpret Clausewitz differently. Total war has also been used to describe the French "guerre à outrance" during the Franco-Prussian War. In his December 24, 1864 letter to his Chief of Staff during the American Civil War, Union general William Tecumseh Sherman wrote that the Union was "not only fighting hostile armies, but a hostile people, and must make old and young, rich and poor, feel the hard hand of war, as well as their organized armies," defending Sherman's March to the Sea, the operation that inflicted widespread destruction of infrastructure in Georgia. United States Air Force General Curtis LeMay updated the concept for the nuclear age. 
In 1949, he first proposed that a total war in the nuclear age would consist of delivering the entire nuclear arsenal in a single overwhelming blow, going as far as "killing a nation". Written by academics at Eastern Michigan University, the "Cengage Advantage Books: World History" textbook claims that while total war "is traditionally associated with the two global wars of the twentieth century... it would seem that instances of total war predate the twentieth century." They write: The Sullivan Expedition of 1779 is considered one of the first modern examples of total warfare. As Indian and Tory forces killed livestock and burned buildings in remote areas (where the devastation was keenly felt), George Washington advised Sullivan to seek "the total destruction and devastation of their settlements and the capture of as many prisoners of every age and sex as possible". The expedition devastated "14 towns and most flourishing crops of corn" in New York but, despite the large-scale destruction, failed to drive the Indians off the land. In his book "The First Total War: Napoleon's Europe and the Birth of Warfare as We Know It", David A. Bell, a professor of French history at Princeton University, argues that the French Revolutionary Wars introduced to mainland Europe some of the first concepts of total war, such as mass conscription. He claims that the new republic found itself threatened by a powerful coalition of European nations and used the entire nation's resources in an unprecedented war effort that included the levée en masse (mass conscription). By August 23, 1793, French front-line forces had grown to some 800,000, with a total of 1.5 million in all services—the first time an army in excess of a million had been mobilized in Western history. During the Russian campaign of 1812, the Russians retreated while destroying infrastructure and agriculture in order to hamper the French and strip them of adequate supplies. 
In the campaign of 1813, Allied forces in the German theater alone amounted to nearly one million, whilst two years later in the Hundred Days a French decree called for the total mobilization of some 2.5 million men (though at most a fifth of this was managed by the time of the French defeat at Waterloo). During the prolonged Peninsular War of 1808–1814, some 300,000 French troops were kept permanently occupied by an enormous and sustained guerrilla insurgency, in addition to several hundred thousand Spanish, Portuguese and British regulars—ultimately French deaths would amount to 300,000 in the Peninsular War alone. One of the features of total war in Britain was the use of government propaganda posters to divert all attention to the war on the home front. Posters were used to influence public opinion about what to eat and what occupations to take, and to change the attitude of support towards the war effort. Even the music hall was used as propaganda, with propaganda songs aimed at recruitment. After the failure of the Battle of Neuve Chapelle, the large British offensive of March 1915, the British Commander-in-Chief, Field Marshal John French, blamed the lack of progress on insufficient and poor-quality artillery shells. This led to the Shell Crisis of 1915, which brought down the Liberal government under the Premiership of H. H. Asquith. He formed a new coalition government dominated by Liberals and appointed David Lloyd George as Minister of Munitions. It was a recognition that the whole economy would have to be geared for war if the Allies were to prevail on the Western Front. Carl Schmitt, a supporter of Nazi Germany, wrote that total war meant "total politics"—authoritarian domestic policies that imposed direct control of the press and economy. In Schmitt's view the total state, which fully directs the mobilization of all social and economic resources to war, is antecedent to total war. 
Scholars consider that the seeds of this total-state concept already existed in the German state of World War I, which exercised full control of the press and other aspects of economic and social life, as espoused in the statement of state ideology known as the "Ideas of 1914". As young men left the farms for the front, domestic food production in Britain and Germany fell. In Britain the response was to import more food, which was done despite the German introduction of unrestricted submarine warfare, and to introduce rationing. The Royal Navy's blockade of German ports prevented Germany from importing food and hastened German capitulation by creating a food crisis in Germany. Almost the whole of Europe and the European colonial empires mobilized to wage World War I. Rationing occurred on the home fronts. Bulgaria went so far as to mobilize a quarter of its population, or 800,000 people, a greater share of its population than any other country during the war. The Second World War was the quintessential total war of modernity. The level of national mobilization of resources on all sides of the conflict, the battlespace being contested, the scale of the armies, navies, and air forces raised through conscription, the active targeting of non-combatants (and non-combatant property), the general disregard for collateral damage, and the unrestricted aims of the belligerents marked total war on an unprecedented and unsurpassed, multicontinental scale. During the first part of the Shōwa era, the government of Imperial Japan launched a string of policies to promote a total war effort against China and the occidental powers and to increase industrial production. Among these were the National Spiritual Mobilization Movement and the Imperial Rule Assistance Association. 
The National Mobilization Law had fifty clauses, which provided for government controls over civilian organizations (including labor unions), nationalization of strategic industries, price controls and rationing, and nationalized the news media. The laws gave the government the authority to use unlimited budgets to subsidize war production, and to compensate manufacturers for losses caused by war-time mobilization. Eighteen of the fifty articles outlined penalties for violators. To improve its production, Shōwa Japan used millions of slave labourers and pressed more than 18 million people in East Asia into forced labor. Before the onset of the Second World War, the United Kingdom drew on its First World War experience to prepare legislation that would allow immediate mobilization of the economy for war, should future hostilities break out. Rationing of most goods and services was introduced, not only for consumers but also for manufacturers. This meant that factories manufacturing products that were irrelevant to the war effort had more appropriate tasks imposed. All artificial light was subject to legal blackouts. Not only were men conscripted into the armed forces from the beginning of the war (something which had not happened until the middle of World War I), but women were also conscripted as Land Girls to aid farmers and the Bevin Boys were conscripted to work down the coal mines. Enormous casualties were expected in bombing raids, so children were evacuated from London and other cities en masse to the countryside for compulsory billeting in households. In the long term this was one of the most profound and longer-lasting social consequences of the whole war for Britain. This is because it mixed up children with the adults of other classes. 
Not only did the middle and upper classes become familiar with the urban squalor suffered by working-class children from the slums, but the children got a chance to see animals and the countryside, often for the first time, and to experience rural life. The use of statistical analysis, by a branch of science which has become known as operational research, to influence military tactics was a departure from anything previously attempted. It was a very powerful tool, but it further dehumanised war, particularly when it suggested strategies which were counter-intuitive. Examples where statistical analysis directly influenced tactics include the work done by Patrick Blackett's team on the optimum size and speed of convoys and the introduction of bomber streams by the Royal Air Force to counter the night-fighter defences of the Kammhuber Line. In contrast, Germany started the war under the concept of Blitzkrieg. Officially, it did not accept that it was in a total war until Joseph Goebbels' Sportpalast speech of 18 February 1943, in which the crowd was told "Totaler Krieg – Kürzester Krieg" ("Total War – Shortest War"). Goebbels and Hitler had spoken in March 1942 about Goebbels' idea to put the entire home front on a war footing. Hitler appeared to accept the concept, but took no action. Goebbels had the support of minister of armaments Albert Speer, economics minister Walther Funk and Robert Ley, head of the German Labour Front, and they pressed Hitler in October 1942 to take action, but Hitler, while outwardly agreeing, continued to dither. Finally, after the holidays in 1942, Hitler sent his powerful personal secretary, Martin Bormann, to discuss the question with Goebbels and Hans Lammers, the head of the Reich Chancellery. As a result, Bormann told Goebbels to go ahead and draw up a draft of the necessary decree, to be signed in January 1943. Hitler signed the decree on 13 January, almost a year after Goebbels first discussed the concept with him. 
The decree set up a steering committee consisting of Bormann, Lammers, and General Wilhelm Keitel to oversee the effort, with Goebbels and Speer as advisors; Goebbels had expected to be one of the triumvirate. Hitler remained aloof from the project, and it was Goebbels and Hermann Göring who gave the "total war" radio address from the Sportpalast the next month, on the 10th anniversary of the Nazis' "seizure of power". The commitment to the doctrine of the short war was a continuing handicap for the Germans; neither plans nor state of mind were adjusted to the idea of a long war until the failure of Operation Barbarossa. A major strategic defeat in the Battle of Moscow forced Speer, as armaments minister, to nationalize German war production and eliminate the worst inefficiencies. Under Speer's direction armament production tripled, not reaching its peak until late 1944. That this was achieved despite the damage caused by the growing Allied strategic bomber offensive is an indication of the degree of industrial under-mobilization in the earlier years. It was because the German economy through most of the war was substantially under-mobilized that it was resilient under air attack. Civilian consumption was high during the early years of the war, and inventories both in industry and in consumers' possession were high. These helped cushion the economy from the effects of bombing. Plant and machinery were plentiful and incompletely used, thus it was comparatively easy to substitute unused or partly used machinery for that which was destroyed. Foreign labour, both slave labour and labour from neighbouring countries that joined the Anti-Comintern Pact with Germany, was used to augment German industrial labour, which was under pressure from conscription into the "Wehrmacht" (Armed Forces). The Soviet Union (USSR) was a command economy which already had an economic and legal system allowing the economy and society to be redirected into fighting a total war. 
The transportation of factories and whole labour forces east of the Urals as the Germans advanced across the USSR in 1941 was an impressive feat of planning. Only those factories which were useful for war production were moved, because of the total war commitment of the Soviet government. The Eastern Front of the European Theatre of World War II encompassed the conflict in central and eastern Europe from June 22, 1941 to May 9, 1945. It was the largest theatre of war in history in terms of numbers of soldiers, equipment and casualties and was notorious for its unprecedented ferocity, destruction, and immense loss of life (see World War II casualties). The fighting involved millions of German, Hungarian, Romanian and Soviet troops along a broad front hundreds of kilometres long. It was by far the deadliest single theatre of World War II. Scholars now believe that some 27 million Soviet citizens died during the war, including at least 8.7 million soldiers who fell in battle against Hitler's armies or died in POW camps. Millions of civilians died from starvation, exposure, atrocities, and massacres. The Axis lost over 5 million soldiers in the east, as well as many thousands of civilians. During the Battle of Stalingrad, newly built T-34 tanks were driven—unpainted because of a paint shortage—from the factory floor straight to the front. This came to symbolise the USSR's commitment to World War II and demonstrated the government's total war policy. The United States underwent an unprecedented mobilization of national resources for the Second World War. Conditions in the United States were not as strained as they were in the United Kingdom or as desperate as they were in the Soviet Union, but the United States greatly curtailed nearly all non-essential activities in its prosecution of the Second World War and redirected nearly all available national resources to the conflict, reaching the point of diminishing returns by late 1944, when the U.S. 
military was unable to find any more males of the correct military age to draft into service. The strategists of the U.S. military looked abroad at the storms brewing on the horizon in Europe and Asia, and began quietly making contingency plans as early as the mid-1930s; new weapons and weapons platforms were designed, and made ready. Following the outbreak of war in Europe and the ongoing aggression in Asia, efforts were stepped up significantly. The collapse of France and the airborne aggression directed at Great Britain unsettled the Americans, who had close relations with both nations, and a peacetime draft was instituted, along with Lend-Lease programs to aid the British, and covert aid was passed to the Chinese as well. American public opinion was still opposed to involvement in the problems of Europe and Asia, however. In 1941, the Soviet Union became the latest nation to be invaded, and the U.S. gave her aid as well. American ships began defending aid convoys to the Allied nations against submarine attacks, and a total trade embargo against the Empire of Japan was instituted to deny its military the raw materials its factories and military forces required to continue its offensive actions in China. In late 1941, Japan's Army-dominated government decided to seize by military force the strategic resources of South-East Asia and Indonesia since the Western powers would not give Japan these goods by trade. Planning for this action included surprise attacks on American and British forces in Hong Kong, the Philippines, Malaya, and the U.S. naval base and warships at Pearl Harbor. In response to these attacks, the U.K. and U.S. declared war on the Empire of Japan the next day. Nazi Germany declared war on the U.S. a few days later, along with Fascist Italy; the U.S. found itself fully involved in a second world war. As the United States began to gear up for a major war, information and propaganda efforts were set in motion. 
Civilians (including children) were encouraged to take part in fat, grease, and scrap-metal collection drives. Many factories making non-essential goods retooled for war production. Levels of industrial productivity previously unheard of were attained during the war; multi-thousand-ton convoy ships were routinely built in a month and a half, and tanks poured out of the former automobile factories. Within a few years of the U.S. entry into the Second World War, nearly every man fit for service, between 18 and 30, had been conscripted into the military "for the duration" of the conflict, and unprecedented numbers of women took up jobs previously held by men. Strict systems of rationing of consumer staples were introduced to redirect productive capacity to war needs. Previously untouched sections of the nation mobilized for the war effort. Academics became technocrats; home-makers became bomb-makers (massive numbers of women worked in heavy industry during the war); union leaders and businessmen became commanders in the massive armies of production. The great scientific communities of the United States were mobilized as never before, and mathematicians, doctors, engineers, and chemists turned their minds to the problems ahead of them. By the war's end a multitude of advances had been made in medicine, physics, engineering, and the other sciences. Even the theoretical physicists, whose theories were not believed to have military applications at the time, were sent far into the Western deserts to work at the Los Alamos National Laboratory on the Manhattan Project that culminated in the Trinity nuclear test and changed the course of history. In the war, the United States lost 407,316 military personnel, but managed to avoid the extensive level of damage to civilian and industrial infrastructure that other participants suffered. The U.S. emerged as one of the two superpowers after the war. After the United States entered World War II, Franklin D. 
Roosevelt declared at the Casablanca Conference to the other Allies and the press that unconditional surrender was the objective of the war against the Axis Powers of Germany, Italy, and Japan. Prior to this declaration, the individual regimes of the Axis Powers could have negotiated an armistice similar to that at the end of World War I and then a conditional surrender when they perceived that the war was lost. The unconditional surrender of the major Axis powers caused a legal problem at the post-war Nuremberg Trials, because the trials appeared to be in conflict with Articles 63 and 64 of the Geneva Convention of 1929. Usually such trials would be held under the auspices of the defeated power's own legal system, as happened with some of the minor Axis powers, for example in the post-World War II Romanian People's Tribunals. To circumvent this, the Allies argued that the major war criminals were captured after the end of the war, so they were not prisoners of war and the Geneva Conventions did not cover them. Further, the collapse of the Axis regimes created a legal condition of total defeat ("debellatio"), so the provisions of the 1907 Hague Convention concerning military occupation were not applicable. Since the end of World War II, no industrial nation has fought such a large, decisive war. This is likely due to the availability of nuclear weapons, whose destructive power and quick deployment render a full mobilization of a country's resources, as in World War II, logistically impractical and strategically irrelevant. Such weapons are developed and maintained with relatively modest peacetime defense budgets. By the end of the 1950s, the ideological stand-off of the Cold War between the Western World and the Soviet Union had resulted in thousands of nuclear weapons being aimed by each side at the other. 
Strategically, this equal balance of destructive power possessed by each side came to be known as Mutually Assured Destruction (MAD): a nuclear attack by one superpower would result in a nuclear counter-strike by the other. This would result in hundreds of millions of deaths in a world where, in words widely attributed to Nikita Khrushchev, "The living will envy the dead". During the Cold War, the two superpowers sought to avoid open conflict between their respective forces, as both sides recognized that such a clash could very easily escalate and quickly involve nuclear weapons. Instead, the superpowers fought each other through their involvement in proxy wars, military buildups, and diplomatic standoffs. In the case of proxy wars, each superpower supported its respective allies in conflicts with forces aligned with the other superpower, such as in the Vietnam War and the Soviet invasion of Afghanistan. During the Yugoslav Wars, NATO conducted strikes against the electrical grid in enemy territory using graphite bombs. Some observers considered this an act of total war, because the power plants feeding the grid were essential to civilian needs such as water purification, and thus the strikes amounted to a direct attack on civilian resources. NATO claimed that the objective of its strikes was to disrupt military infrastructure and communications. Actions that may characterize the post-19th-century concept of total war include:
https://en.wikipedia.org/wiki?curid=29974
Time constraint In law, time constraints are placed on certain actions and filings in the interest of speedy justice, and additionally to prevent the evasion of the ends of justice by waiting until a matter is moot. The penalty for violating a legislative or court-imposed time constraint may be anything from a small fine to judicial determination of an entire case against one's interests. For example, if a complaining party files an action and then fails to cause the papers pertaining thereto to be served on the opposing party within the time established by local rules, and is unable to convince the court that there was good and sufficient reason for the delay, he risks having his action dismissed with prejudice. If the opposing party is served with the papers and fails to respond within the time limit provided for his answer, he risks having the case decided against him by default. If one is aggrieved by the judicial outcome of an action and wishes to appeal, he may be forever barred from doing so if he fails to meet the deadline by which his appeal may be filed. By court order, or by local rule, there may be other time constraints. One may be required to answer interrogatories or a request to produce or other discovery pleadings within a given time. He may be required to give a certain number of days' advance notice before he intends to depose a party or witness. A court may order that there will be only a certain number of weeks or months allowed during which the parties to an action may conduct discovery. There may be a limitation placed upon a deposition, requiring that the party taking it conclude his questioning within a certain number of hours or days.
https://en.wikipedia.org/wiki?curid=29980
Taurus (constellation) Taurus (Latin for "the Bull") is one of the constellations of the zodiac and is located in the Northern celestial hemisphere. Taurus is a large and prominent constellation in the northern hemisphere's winter sky. It is one of the oldest constellations, dating back to at least the Early Bronze Age, when it marked the location of the Sun during the spring equinox. Its importance to the agricultural calendar influenced various bull figures in the mythologies of Ancient Sumer, Akkad, Assyria, Babylon, Egypt, Greece, and Rome. The symbol representing Taurus is ♉ (Unicode), which resembles a bull's head. A number of features exist that are of interest to astronomers. Taurus hosts two of the nearest open clusters to Earth, the Pleiades and the Hyades, both of which are visible to the naked eye. At first magnitude, the red giant Aldebaran is the brightest star in the constellation. In the northwest part of Taurus is the supernova remnant Messier 1, more commonly known as the Crab Nebula. One of the closest regions of active star formation, the Taurus-Auriga complex, crosses into the northern part of the constellation. The variable star T Tauri is the prototype of a class of pre-main-sequence stars. Taurus lies between Aries to the west and Gemini to the east; to the north lie Perseus and Auriga, to the southeast Orion, to the south Eridanus, and to the southwest Cetus. In late November and early December, Taurus reaches opposition (the point opposite the Sun in the sky) and is visible the entire night. By late March, it is setting at sunset and completely disappears behind the Sun's glare from May to July. This constellation forms part of the zodiac and hence is intersected by the ecliptic. This circle across the celestial sphere forms the apparent path of the Sun as the Earth completes its annual orbit. 
As the orbital planes of the Moon and the planets lie near the ecliptic, they can usually be found in the constellation Taurus during some part of each year. The galactic plane of the Milky Way intersects the northeast corner of the constellation, and the galactic anticenter is located near the border between Taurus and Auriga. Taurus is the only constellation crossed by all three of the galactic equator, celestial equator, and ecliptic. A ring-like galactic structure known as Gould's Belt passes through the constellation. The recommended three-letter abbreviation for the constellation, as adopted by the International Astronomical Union in 1922, is "Tau". The official constellation boundaries, as set by Eugène Delporte in 1930, are defined by a polygon of 26 segments. In the equatorial coordinate system, the right ascension coordinates of these borders lie between and , while the declination coordinates are between 31.10° and −1.35°. Because a small part of the constellation lies to the south of the celestial equator, it cannot be a completely circumpolar constellation at any northern latitude. During November, the Taurid meteor shower appears to radiate from the general direction of this constellation. The Beta Taurid meteor shower occurs during the months of June and July in the daytime, and is normally observed using radio techniques. Between 18 and 29 October, both the Northern Taurids and the Southern Taurids are active, though the latter stream is stronger. However, between November 1 and 10, the two streams equalize. The brightest member of this constellation is Aldebaran, an orange-hued, spectral class K5 III giant star. Its name derives from "", Arabic for "the follower", probably from the fact that it follows the Pleiades during the nightly motion of the celestial sphere across the sky. Forming the profile of a bull's face is a "V"- or "K"-shaped asterism of stars. 
This outline is created by prominent members of the Hyades, the nearest distinct open star cluster after the Ursa Major Moving Group. In this profile, Aldebaran forms the bull's bloodshot eye, which has been described as "glaring menacingly at the hunter Orion", a constellation that lies just to the southwest. The Hyades span about 5° of the sky, so that they can only be viewed in their entirety with binoculars or the unaided eye. They include a naked eye double star, Theta Tauri (the proper name of Theta2 Tauri is "Chakumuy"), with a separation of 5.6 arcminutes. In the northeastern quadrant of the Taurus constellation lie the Pleiades (M45), one of the best known open clusters, easily visible to the naked eye. The seven most prominent stars in this cluster are of visual magnitude six or brighter, and so the cluster is also named the "Seven Sisters". However, many more stars are visible with even a modest telescope. Astronomers estimate that the cluster has approximately 500-1,000 stars, all of which are around 100 million years old. However, they vary considerably in type. The most prominent members are large, bright stars, but the cluster also contains many small brown dwarfs and white dwarfs. The cluster is estimated to dissipate in another 250 million years. The Pleiades cluster is classified as a Shapley class c and Trumpler class I 3 r n cluster, indicating that it is irregularly shaped and loose, though concentrated at its center and detached from the star-field. In the northern part of the constellation, to the northwest of the Pleiades, lies the Crystal Ball Nebula, known by its catalogue designation of NGC 1514. This planetary nebula is of historical interest following its discovery by German-born English astronomer William Herschel in 1790. Prior to that time, astronomers had assumed that nebulae were simply unresolved groups of stars. However, Herschel could clearly resolve a star at the center of the nebula that was surrounded by a nebulous cloud of some type. 
In 1864, English astronomer William Huggins used the spectrum of this nebula to deduce that the nebula is a luminous gas, rather than stars. To the east, the two horns of the bull are formed by Beta (β) Tauri and Zeta (ζ) Tauri, two star systems that are separated by 8°. Beta is a white, spectral class B7 III giant star known as "El Nath", which comes from the Arabic phrase for "the butting", as in butting by the horns of the bull. At magnitude 1.65, it is the second brightest star in the constellation, and shares the border with the neighboring constellation of Auriga. As a result, it also bears the designation Gamma Aurigae. Zeta Tauri (the proper name is "Tianguan") is an eclipsing binary star that completes an orbit every 133 days. A degree to the northwest of ζ Tauri is the Crab Nebula (M1), a supernova remnant. This expanding nebula was created by a Type II supernova explosion, which was seen from Earth on July 4, 1054. It was bright enough to be observed during the day and is mentioned in Chinese historical texts. At its peak, the supernova reached magnitude −4, but the nebula is currently magnitude 8.4 and requires a telescope to observe. North American peoples also observed the supernova, as evidenced by a painting on a New Mexican canyon wall and various pieces of pottery that depict the event. However, the remnant itself was not discovered until 1731, when John Bevis found it. The star Lambda (λ) Tauri is an eclipsing binary star. This system consists of a spectral class B3 star being orbited by a less massive class A4 star. The plane of their orbit lies almost along the line of sight to the Earth. Every 3.953 days the system temporarily decreases in brightness by 1.1 magnitudes as the brighter star is partially eclipsed by the dimmer companion. The two stars are separated by only 0.1 astronomical units, so their shapes are modified by mutual tidal interaction. This results in a variation of their net magnitude throughout each orbit. 
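The 1.1-magnitude eclipse depth of Lambda Tauri can be translated into a flux ratio with the standard logarithmic magnitude relation, Δm = 2.5·log₁₀(F₁/F₂). A minimal sketch (the 1.1-magnitude figure comes from the text above; the helper function name is ours):

```python
def magnitude_drop_to_flux_ratio(delta_m: float) -> float:
    """Convert a brightness drop in magnitudes to a flux ratio.

    The magnitude scale is logarithmic: a difference of delta_m
    magnitudes corresponds to a flux ratio of 10**(0.4 * delta_m).
    """
    return 10 ** (0.4 * delta_m)

# Lambda Tauri dims by 1.1 magnitudes every 3.953 days when the
# brighter star is partially eclipsed by its dimmer companion.
ratio = magnitude_drop_to_flux_ratio(1.1)
print(f"The system appears about {ratio:.2f}x fainter at mid-eclipse.")  # ~2.75x
```

So a 1.1-magnitude dip means the system's total light falls to a little over a third of its out-of-eclipse value.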
Located about 1.8° west of Epsilon (ε) Tauri is T Tauri, the prototype of a class of variable stars called T Tauri stars. This star undergoes erratic changes in luminosity, varying between magnitudes 9 and 13 over a period of weeks or months. This is a newly formed stellar object that is just emerging from its envelope of gas and dust, but has not yet become a main sequence star. The surrounding reflection nebula NGC 1555 is illuminated by T Tauri, and thus is also variable in luminosity. To the north lies Kappa Tauri, a visual double star consisting of two A7-type components. The pair have a separation of just 5.6 arcminutes, making them a challenge to split with the naked eye. This constellation includes part of the Taurus-Auriga complex, or Taurus dark clouds, a star-forming region containing sparse, filamentary clouds of gas and dust. This spans a diameter of and contains 35,000 solar masses of material, which is both larger and less massive than the Orion Nebula. At a distance of , this is one of the nearest active star forming regions. Located in this region, about 10° to the northeast of Aldebaran, is NGC 1746, an asterism spanning a width of 45 arcminutes. The identification of the constellation of Taurus with a bull is very old, certainly dating to the Chalcolithic, and perhaps even to the Upper Paleolithic. Michael Rappenglück of the University of Munich believes that Taurus is represented in a cave painting at the Hall of the Bulls in the caves at Lascaux (dated to roughly 15,000 BC), which he believes is accompanied by a depiction of the Pleiades. The name "seven sisters" has been used for the Pleiades in the languages of many cultures, including indigenous groups of Australia, North America and Siberia. This suggests that the name may have a common ancient origin. Taurus marked the point of vernal (spring) equinox in the Chalcolithic and the Early Bronze Age, from about 4000 BC to 1700 BC, after which the equinox point moved into the neighboring constellation Aries. 
The Pleiades were closest to the Sun at vernal equinox around the 23rd century BC. In Babylonian astronomy, the constellation was listed in the MUL.APIN as "The Bull of Heaven". It has been claimed that "when the Babylonians first set up their zodiac, the vernal equinox lay in Taurus." This is in fact incorrect: when the MUL.APIN tablets were compiled in ~1100-700 BC, the vernal equinox was marked by the Babylonian constellation known as "the hired man" (the modern Aries). The Akkadian name was "Alu". In the Old Babylonian "Epic of Gilgamesh", the goddess Ishtar sends Taurus, the Bull of Heaven, to kill Gilgamesh for spurning her advances. Enkidu tears off the bull's hind part and hurls the quarters into the sky where they become the stars we know as Ursa Major and Ursa Minor. Some locate Gilgamesh as the neighboring constellation of Orion, facing Taurus as if in combat, while others identify him with the sun whose rising on the equinox vanquishes the constellation. In early Mesopotamian art, the Bull of Heaven was closely associated with Inanna, the Sumerian goddess of sexual love, fertility, and warfare. One of the oldest depictions shows the bull standing before the goddess' standard; since it has three stars depicted on its back (the cuneiform sign for "star-constellation"), there is good reason to regard this as the constellation later known as Taurus. The same iconic representation of the Heavenly Bull was depicted in the Dendera zodiac, an Egyptian bas-relief carving in a ceiling that depicted the celestial hemisphere using a planisphere. In these ancient cultures, the orientation of the horns was portrayed as upward or backward. This differed from the later Greek depiction where the horns pointed forward. To the Egyptians, the constellation Taurus was a sacred bull that was associated with the renewal of life in spring. When the spring equinox entered Taurus, the constellation would become covered by the Sun in the western sky as spring began. 
This "sacrifice" led to the renewal of the land. To the early Hebrews, Taurus was the first constellation in their zodiac and consequently it was represented by the first letter in their alphabet, Aleph. In 1990, due to the precession of the equinoxes, the position of the Sun on the first day of summer (June 21) crossed the IAU boundary from Gemini into Taurus. The Sun will slowly move through Taurus at a rate of 1° east every 72 years until approximately 2600 AD, at which point it will be in Aries on the first day of summer. In Greek mythology, Taurus was identified with Zeus, who assumed the form of a magnificent white bull to abduct Europa, a legendary Phoenician princess. In illustrations of Greek mythology, only the front portion of this constellation is depicted; this was sometimes explained as Taurus being partly submerged as he carried Europa out to sea. A second Greek myth portrays Taurus as Io, a mistress of Zeus. To hide his lover from his wife Hera, Zeus changed Io into the form of a heifer. Greek mythographer Acusilaus marks the bull Taurus as the same that formed the myth of the Cretan Bull, one of The Twelve Labors of Heracles. Taurus became an important object of worship among the Druids. Their Tauric religious festival was held while the Sun passed through the constellation. Among the Arctic people known as the Inuit, the constellation is called Sakiattiat and the Hyades is Nanurjuk, with the latter representing the spirit of the polar bear. Aldebaran represents the bear, with the remainder of the stars in the Hyades being dogs that are holding the beast at bay. In Buddhism, legends hold that Gautama Buddha was born when the full moon was in Vaisakha, or Taurus. Buddha's birthday is celebrated with the Wesak Festival, or Vesākha, which occurs on the first or second full moon when the Sun is in Taurus. The Sun appears in the constellation Taurus from May 13 to June 21. 
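The quoted drift of 1° east every 72 years follows directly from the precession of the equinoxes: Earth's axis completes a full 360° precessional cycle in roughly 25,800 years. A quick arithmetic check (the cycle length below is a commonly cited round figure, not stated in the text):

```python
# Precessional cycle of Earth's axis, a commonly cited round figure (years).
PRECESSION_PERIOD_YEARS = 25_772

# How long the equinox/solstice points take to drift one degree along the ecliptic.
years_per_degree = PRECESSION_PERIOD_YEARS / 360
print(f"{years_per_degree:.1f} years per degree")  # ~71.6, i.e. about 72
```

This matches the article's figure and, at that rate, the solstice point needs several centuries to cross from one constellation boundary to the next.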
In tropical astrology, the Sun is considered to be in the sign Taurus from April 20 to May 20. The space probe "Pioneer 10" is moving in the direction of this constellation, though it will not near any of its stars for many thousands of years, by which time its power source will be long dead. Several stars in the Hyades star cluster, including Kappa Tauri, were photographed during the total solar eclipse of May 29, 1919, by Arthur Eddington's expedition in Príncipe and by others in Sobral, Brazil. These observations confirmed the bending of light around the Sun predicted by Albert Einstein's general theory of relativity, which he had published in 1915.
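The deflection Eddington's expedition measured is given by general relativity's grazing-ray formula, δ = 4GM/(c²R), twice the value Newtonian gravity predicts. A sketch of the calculation (the physical constants below are standard solar values, not taken from the text):

```python
import math

G = 6.674e-11        # gravitational constant, m^3 kg^-1 s^-2
M_SUN = 1.989e30     # solar mass, kg
C = 2.998e8          # speed of light, m/s
R_SUN = 6.957e8      # solar radius, m (impact parameter for a grazing ray)

# General-relativistic deflection of a light ray grazing the Sun's limb.
deflection_rad = 4 * G * M_SUN / (C**2 * R_SUN)
deflection_arcsec = math.degrees(deflection_rad) * 3600
print(f"{deflection_arcsec:.2f} arcseconds")  # ~1.75, twice the Newtonian value
```

Stars photographed near the eclipsed Sun, such as those in the Hyades, appeared displaced outward by up to this amount, which is what the 1919 plates confirmed.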
https://en.wikipedia.org/wiki?curid=29984
Taco A taco is a traditional Mexican dish consisting of a small hand-sized corn or wheat tortilla topped with a filling. The tortilla is then folded around the filling and eaten by hand. A taco can be made with a variety of fillings, including beef, pork, chicken, seafood, vegetables, and cheese, allowing great versatility and variety. They are often garnished with various condiments, such as salsa, guacamole, or sour cream, and vegetables, such as lettuce, onion, tomatoes, and chiles. Tacos are a common form of antojitos, or Mexican street food, which have spread around the world. Tacos can be contrasted with similar foods such as burritos, which are often much larger and rolled rather than folded; taquitos, which are rolled and fried; or chalupas/tostadas, in which the tortilla is fried before filling. The origins of the taco are not precisely known, and etymologies for the culinary usage of the word are generally theoretical. According to the Real Academia Española, publisher of "Diccionario de la Lengua Española", the word "taco" describes a typical Mexican dish of a maize tortilla folded around food. This meaning of the Spanish word "taco" is a Mexican innovation, but in other dialects "taco" is used to mean "wedge; wad, plug; billiard cue; blowpipe; ramrod; short, stocky person; [or] short, thick piece of wood." In this non-culinary usage, the word "taco" has cognates in other European languages, including the French word "tache" and the English word "tack (nail)." According to one etymological theory, the culinary meaning of "taco" derives from its "plug" meaning as employed among Mexican silver miners, who used explosive charges in plug form consisting of a paper wrapper and gunpowder filling. Indigenous origins for the culinary word "taco" are also proposed. One possibility is that the word derives from the Nahuatl word "tlahco", meaning "half" or "in the middle," in the sense that food would be placed in the middle of a tortilla. 
Furthermore, dishes analogous to the taco were known to have existed in Pre-Columbian society—for example, the Nahuatl word "tlaxcalli" (a type of corn tortilla). The taco predates the arrival of the Spanish in Mexico. There is anthropological evidence that the indigenous people living in the lake region of the Valley of Mexico traditionally ate tacos filled with small fish. Writing at the time of the Spanish conquistadors, Bernal Díaz del Castillo documented the first taco feast enjoyed by Europeans, a meal which Hernán Cortés arranged for his captains in Coyoacán. There are many traditional varieties of tacos. As an accompaniment to tacos, many taco stands will serve whole or sliced red radishes, lime slices, salt, pickled or grilled chilis (hot peppers), and occasionally cucumber slices, or grilled cambray onions. The hard-shell or crispy taco is a tradition that developed in the United States. The most common type of taco in the US is the hard-shell, U-shaped version, first described in a cookbook in 1949. This type of taco is typically served as a crisp-fried corn tortilla filled with seasoned ground beef, cheese, lettuce, and sometimes tomato, onion, salsa, sour cream, and avocado or guacamole. Such tacos are sold by restaurants and by fast food chains, while kits are readily available in most supermarkets. Hard shell tacos are sometimes known as "tacos dorados" ("golden tacos") in Spanish, a name that they share with taquitos. Various sources credit different individuals with the invention of the hard-shell taco, but some form of the dish likely predates all of them. Beginning in the early part of the twentieth century, various types of tacos became popular in the country, especially in Texas and California but also elsewhere. 
By the late 1930s, companies like Ashley Mexican Food and Absolute Mexican Foods were selling appliances and ingredients for cooking hard shell tacos, and the first patents for hard-shell taco cooking appliances were filed in the 1940s. In the mid-1950s, Glen Bell opened Taco Tia, and began selling a simplified version of the tacos being sold by Mexican restaurants in San Bernardino, particularly the "tacos dorados" being sold at the Mitla Cafe, owned by Lucia and Salvador Rodriguez across the street from another of Bell's restaurants. Over the next few years, Bell owned and operated a number of restaurants in southern California including four called El Taco. At this time, Los Angeles was racially segregated, and the tacos sold at Bell's restaurants were many white Americans' first introduction to Mexican food. Bell sold the El Tacos to his partner and built the first Taco Bell in Downey in 1962. Kermit Becky, a former Los Angeles police officer, bought the first Taco Bell franchise from Glen Bell in 1964, and located it in Torrance. The company grew rapidly, and by 1967, the 100th restaurant opened at 400 South Brookhurst in Anaheim. In 1968, its first franchise location east of the Mississippi River opened in Springfield, Ohio. Traditionally, soft-shelled tacos referred to corn tortillas that were cooked to a softer state than a hard taco, usually by grilling or steaming. More recently, the term has come to include flour-tortilla-based tacos mostly from large manufacturers and restaurant chains. In this context, "soft tacos" are tacos made with wheat flour tortillas and filled with the same ingredients as a hard taco. The breakfast taco, found in Tex-Mex cuisine, is a soft corn or flour tortilla filled with meat, eggs, or cheese, and can also contain other ingredients. Some have claimed that Austin, Texas is the home of the breakfast taco. 
However, food writer and "OC Weekly" editor Gustavo Arellano responded that such a statement reflects a common trend of "whitewashed" foodways reporting, noting that predominantly Hispanic San Antonio, Texas "never had to brag about its breakfast taco love—folks there just call it 'breakfast'." Indian tacos, or "Navajo tacos", are made using frybread instead of tortillas. They are commonly eaten at pow-wows, festivals, and other gatherings by and for indigenous people in the United States and Canada. This kind of taco is not known to have been present before the arrival of Europeans in what is now the Southwestern United States. Navajo tradition indicates that frybread came into use in the 1860s when the government forced the tribe to relocate from their homeland in Arizona in a journey known as the Long Walk of the Navajo. It was made from ingredients given to them by the government to supplement their diet since the region couldn’t support growing the agricultural commodities that had been previously used. Since at least 1978, a variation called the "puffy taco" has been popular. "Henry's Puffy Tacos", opened by Henry Lopez in San Antonio, Texas, claims to have invented the variation, in which uncooked corn tortillas (flattened balls of masa dough) are quickly fried in hot oil until they expand and become "puffy". Fillings are similar to hard-shell versions. Restaurants offering this style of taco have since appeared in other Texas cities, as well as in California, where Henry's brother, Arturo Lopez, opened "Arturo's Puffy Taco" in Whittier, not long after Henry's opened. Henry's continues to thrive, managed by the family's second generation. Kits are available at grocery and convenience stores and usually consist of taco shells (corn tortillas already fried in a U-shape), seasoning mix and taco sauce. Commercial vendors for the home market also market soft taco kits with tortillas instead of taco shells. 
The tacodilla contains melted cheese in between the two folded tortillas, thus resembling a quesadilla. In the United States, National Taco Day is celebrated annually on October 4.
https://en.wikipedia.org/wiki?curid=29985
The Penguins The Penguins were an American doo-wop group of the 1950s and early 1960s, best remembered for their only Top 40 hit, "Earth Angel", which was one of the first rhythm and blues hits to cross over to the pop charts. The song peaked at #8 on the "Billboard" Hot 100 chart, but had a three-week run at #1 on the R&B chart; it was later used in the "Back to the Future" movies. The group's tenor was Cleveland Duncan. The original members of The Penguins were Cleveland Duncan (July 23, 1935 – November 7, 2012), Curtis Williams (December 11, 1934 – August 10, 1979), Dexter Tisby (March 10, 1935 – May 2019) and Bruce Tate (January 27, 1937 – June 20, 1973). Duncan and Williams were former classmates at Fremont High School in Los Angeles, California, and Williams had become a member of The Hollywood Flames. In late 1953, they decided to form a new vocal group, and added Tisby and Tate. Their midtempo performance style was a cross between rhythm and blues and rock and roll. Williams brought with him a song, "Earth Angel", on which he had worked with Gaynel Hodge, another member of the Hollywood Flames. The Penguins were one of a number of doo-wop groups of the period named after birds (such as The Orioles, The Flamingos, and The Crows). One of the members smoked Kool cigarettes, which, at the time, had "Willie the Penguin" as its cartoon advertising character. They considered themselves "cool", and accordingly decided to call themselves "The Penguins". Dootone Records released The Penguins' single "Hey Senorita" in late 1954 as the intended A-side, but a radio DJ flipped the record over to the B-side: "Earth Angel" worked its way up to #1 on the "Billboard" R&B chart (the only Penguins song to chart that high), and held that place for three weeks early in 1955. By 1966 the disc had sold four million copies. The Penguins followed up this hit with a Christmas release, "A Christmas Prayer", backed with "Jingle Jangle". 
The Penguins performed at the famed eleventh Cavalcade of Jazz concert, held at Wrigley Field in Los Angeles and produced by Leon Hefflin, Sr., on July 24, 1955. Also featured were Big Jay McNeely, Lionel Hampton and his Orchestra, The Medallions, and James Moody and his Orchestra. Duncan sang lead on "Earth Angel". He reprised his performance a decade later on Frank Zappa's "Memories of El Monte", an elegiac 1963 song in which he suddenly breaks into "Earth Angel" as one of the various songs remembered. El Monte, a city near Los Angeles, had spawned such popular performers as Tony Allan, Marvin & Johnny, and The Shields, as well as the Penguins. Those groups were also emulated as part of Zappa's tribute to the early days of rock and roll. In a common practice of the time, radio stations frequently featured segregated playlists. Thus, "Earth Angel" was simultaneously recorded in 1955 by the white group The Crew-Cuts. The Crew-Cuts cover peaked at #3 on the Hot 100 chart, five spots higher than the Penguins version. The single's success contributed to the Crew-Cuts' own successful career of recording crossover-friendly covers of R&B hits. The songwriting genesis for "Earth Angel" was a matter of some dispute, eventually ending up in a split credit between Penguins baritone Curtis Williams, Gaynel Hodge, and Jesse Belvin. The song had evolved through several Los Angeles area groups, and was based on the "Blue Moon" chord changes that were so popular with many doo-wop groups. The song was influenced by Jesse and Marvin's #2 R&B hit "Dream Girl", which contained many of the same vocal inflections used to great effect in "Earth Angel". The "Will you be mine?" hook in "Earth Angel", which was also the song's subtitle, was borrowed from the #9 R&B hit of the same name by The Swallows. 
The Hollywood Flames had also recorded "I Know" in 1953, a song which has been called a chord-for-chord blueprint for "Earth Angel", and which featured the same Curtis Williams piano intro that Williams himself reused on the Penguins hit. The coda of "Earth Angel", with the repeatedly harmonized word "You-oo... you-oo... you-oo... you-oo", had previously been heard in the Dominoes' #5 R&B cover of "These Foolish Things Remind Me Of You". Coming off the success of "Earth Angel", the Penguins approached Buck Ram to manage them. Ram's primary interest was in managing The Platters, who at that point had no hit singles, but were a profitable touring group. With the Penguins in hand, Ram was able to swing a 2-for-1 deal with Mercury Records, in which the company agreed to take on the Platters as a condition for getting the Penguins (the group that Mercury really wanted). The Platters became the label's more successful act, the Penguins never scoring another hit single. In 1955, Bruce Tate left the group. He was replaced by Randy Jones (who would later sing with the Cadets). During the summer of 1956, Jones and Tisby were briefly out of the group, and were replaced by Ray Brewster and Teddy Harper, respectively. Jones and Tisby returned shortly afterwards. Curtis Williams left in December 1957, with Harper rejoining as his permanent replacement. The Penguins never had another national hit, but their 1957 cover of "Pledge of Love" reached #15 on the R&B chart. The group broke up in 1962. Cleveland Duncan continued recording as "The Penguins", with new member Walter Saulsberry and a backing group, the Viceroys. Later, the group was Duncan, Saulsberry, Vesta and Evelyn King, and Vera Walker. (Duncan and the King sisters had recorded a single as "Cleve Duncan and the Radiants" in 1959.) By the late 1960s, the group was being billed as the "Fabulous Penguins", and featured Duncan, Walker, and new member Rudy Wilson. 
By the 1970s, the members were Duncan, the returning Walter Saulsberry, and new member Glenn Madison, formerly of the Delcos (Indiana). This lineup remained in place until 2012. The group performed on the PBS television special "Doo Wop 50". Duncan, Madison, and Saulsberry also performed with Randy Jones as a guest in 2001. It was planned for Jones to appear with the Penguins the following year, but he suffered a stroke while rehearsing with the group and died shortly thereafter. Jones also performed with the reunited Jacks/Cadets in the 1990s. Duncan died on November 7, 2012, in Los Angeles at the age of 77. The group is mentioned in the Paul Simon song "Rene and Georgette Magritte with Their Dog after the War". The Penguins were inducted into The Vocal Group Hall of Fame in 2004.
https://en.wikipedia.org/wiki?curid=29987
Tenochtitlan Tenochtitlan, also known as Mexica-Tenochtitlan, was a large Mexica altepetl in what is now the center of Mexico City. The exact date of the founding of the city is unclear. The date 13 March 1325 was chosen in 1925 to celebrate the 600th anniversary of the city. The city was built on an island in what was then Lake Texcoco in the Valley of Mexico. The city was the capital of the expanding Aztec Empire in the 15th century until it was captured by the Spanish in 1521. At its peak, it was the largest city in the pre-Columbian Americas. It subsequently became a "cabecera" of the Viceroyalty of New Spain. Today, the ruins of Tenochtitlan are in the historic center of the Mexican capital. The World Heritage Site of Xochimilco contains what remains of the geography (water, boats, floating gardens) of the Mexica capital. Tenochtitlan was one of two Mexica "āltēpetl" (city-states or polities) on the island, the other being Tlatelolco. Traditionally, the name Tenochtitlan was thought to come from Nahuatl ("rock") and ("prickly pear") and is often thought to mean, "Among the prickly pears [growing among] rocks". However, one attestation in the late 16th-century manuscript known as "the Bancroft dialogues" suggests the second vowel was short, so that the true etymology remains uncertain. The city covered an estimated area on the western side of the shallow Lake Texcoco. At the time of the Spanish conquest, Mexico City comprised both Tenochtitlan and Tlatelolco. The city extended from north to south, from the north border of Tlatelolco to the swamps, which by that time were gradually disappearing to the west; the city ended more or less at the present location of . The city was connected to the mainland by bridges and causeways leading to the north, south, and west. The causeways were interrupted by bridges that allowed canoes and other water traffic to pass freely. The bridges could be pulled away, if necessary, to defend the city. 
The city was interlaced with a series of canals, so that all sections of the city could be visited either on foot or via canoe. Lake Texcoco was the largest of five interconnected lakes. Since it formed in an endorheic basin, Lake Texcoco was brackish. During the reign of Moctezuma I, the "levee of " was constructed, reputedly designed by . Estimated to be in length, the levee was completed circa 1453. The levee kept fresh spring-fed water in the waters around Tenochtitlan and kept the brackish waters beyond the dike, to the east. Two double aqueducts, each more than long and made of terracotta, provided the city with fresh water from the springs at . This was intended mainly for cleaning and washing. For drinking, water from mountain springs was preferred. Most of the population liked to bathe twice a day; was said to take four baths a day. According to accounts of Aztec culture, the soap that they most likely used was the root of a plant called ("Saponaria americana"), and to clean their clothes they used the root of ("Agave americana"). Also, the upper classes and pregnant women washed themselves in a , similar to a sauna bath, which is still used in the south of Mexico. This was also popular in other Mesoamerican cultures. The city was divided into four zones, or "camps"; each "camp" was divided into 20 districts ("calpullis", Nahuatl "calpōlli"); and each "calpulli", or 'big house', was crossed by streets or "tlaxilcalli". There were three main streets that crossed the city, each leading to one of the three causeways to the mainland: those of Tepeyac, Iztapalapa, and Tlacopan. Bernal Díaz del Castillo reported that they were wide enough for ten horses. Surrounding the raised causeways were artificial floating gardens with canal waterways and gardens of plants, shrubs, and trees. The "calpullis" were divided by channels used for transportation, with wood bridges that were removed at night. 
The earliest European images of the city were woodcuts published in Augsburg around 1522. Each "calpulli" (from Classical Nahuatl "calpōlli", Nahuatl pronunciation: [kaɬˈpoːlːi], meaning "large house") had its own "tiyanquiztli" (marketplace), but there was also a main marketplace in Tlatelolco – Tenochtitlan's sister city. Cortés estimated it was twice the size of the city of Salamanca with about 60,000 people trading daily. Bernardino de Sahagún provides a more conservative population estimate of 20,000 on ordinary days and 40,000 on feast days. There were also specialized markets in the other central Mexican cities. In the center of the city were the public buildings, temples, and palaces. Inside a walled square, 500 meters to a side, was the ceremonial center. There were about 45 public buildings, including: the Templo Mayor, which was dedicated to the Aztec patron deity Huitzilopochtli and the Rain God Tlaloc; the temple of Quetzalcoatl; the "tlachtli" (ball game court) with the "tzompantli" or rack of skulls; the Sun Temple, which was dedicated to Tonatiuh; the Eagle's House, which was associated with warriors and the ancient power of rulers; the platforms for the gladiatorial sacrifice; and some minor temples. Outside was the palace of Moctezuma with 100 rooms, each with its own bath, for the lords and ambassadors of allies and conquered people. Also located nearby was the "cuicalli", or house of the songs, and the "calmecac". The city had great symmetry. All constructions had to be approved by the "calmimilocatl", a functionary in charge of the city planning. The palace of Moctezuma II also had two houses or zoos, one for birds of prey and another for other birds, reptiles, and mammals. About 300 people were dedicated to the care of the animals. There was also a botanical garden and an aquarium. The aquarium had ten ponds of salt water and ten ponds of fresh water, containing various fish and aquatic birds. 
Places like this also existed in Texcoco, Chapultepec, Huaxtepec (now called Oaxtepec), and Texcotzingo. Tenochtitlan can be considered the most complex society in Mesoamerica in regard to social stratification. The complex system involved many social classes. The "macehualtin" were commoners who lived outside the island city of Tenochtitlan. The "pipiltin" were noblemen who were relatives of leaders and former leaders, and lived in the confines of the island. "Cuauhpipiltin", or eagle nobles, were commoners who impressed the nobles with their martial prowess, and were treated as nobles. "Teteuctin" were the highest class, rulers of various parts of the empire, including the king. "Tlacohtin" were individuals who chose to enslave themselves to pay back a debt; they were not slaves forever and were not treated as badly as typical slaves seen in other ancient civilizations worldwide. Finally, the "pochteca" were merchants who traveled all of Mesoamerica trading. The membership of this class was based on heredity. "Pochteca" could become very rich because they did not pay taxes, but they had to sponsor the ritual feast of Xocotl Huetzi from the wealth that they obtained from their trade expeditions. Status was displayed by the location and type of house where a person lived. Ordinary people lived in houses made of reeds plastered with mud and roofed with thatch. People who were better off had houses of adobe brick with flat roofs. The wealthy had houses of stone masonry with flat roofs; these most likely made up the house complexes that were arranged around an inner court. The higher officials in Tenochtitlan lived in the great palace complexes that made up the city. Adding even more complexity to Aztec social stratification was the "calpolli". "Calpolli", meaning "big house", is a group of families related by either kinship or proximity. These groups consisted of both elite members of Aztec society and commoners. 
Elites provided commoners with arable land and nonagricultural occupations, and commoners performed services for chiefs and gave tribute. Tenochtitlan was the capital of the Mexica civilization, founded in 1325. The state religion of the Mexica civilization awaited the fulfillment of an ancient prophecy: the wandering tribes would find the destined site for a great city whose location would be signaled by an eagle with a snake in its beak perched atop a cactus (Opuntia). The Mexica saw this vision on what was then a small swampy island in Lake Texcoco, a vision that is now immortalized in Mexico's coat of arms and on the Mexican flag. Not deterred by the unfavourable terrain, they set about building their city, using the "chinampa" system (misnamed as "floating gardens") for agriculture and to dry and expand the island. A thriving culture developed, and the Mexica civilization came to dominate other tribes around Mexico. The small natural island was perpetually enlarged as Tenochtitlan grew to become the largest and most powerful city in Mesoamerica. Commercial routes were developed that brought goods from places as far as the Gulf of Mexico, the Pacific Ocean and perhaps even the Inca Empire. After a flood of Lake Texcoco, the city was rebuilt under the rule of Ahuitzotl in a style that made it one of the grandest ever in Mesoamerica. Spanish conquistador Hernán Cortés arrived in Tenochtitlan on 8 November 1519. Although there are no precise figures, the city's population has been estimated at between 200,000 and 400,000 inhabitants, placing Tenochtitlan among the largest cities in the world at that time. Compared to the cities of Europe, only Paris, Venice and Constantinople might have rivaled it. It was five times the size of the London of Henry VIII. In a letter to the Spanish king, Cortés wrote that Tenochtitlan was as large as Seville or Córdoba. 
Cortés's men were in awe at the sight of the splendid city, and many wondered if they were dreaming. Although some popular sources put the number as high as 350,000, the most common estimates of the population are of over 200,000 people. One of the few comprehensive academic surveys of Mesoamerican city and town sizes arrived at a population of 212,500. It is also said that at one time, Moctezuma had rule over an empire of almost five million people in central and southern Mexico, because he had extended his rule to surrounding territories to gain tribute and prisoners to sacrifice to the gods. When Cortés and his men invaded Tenochtitlan, Moctezuma II chose to welcome Cortés as an ambassador rather than risk a war which might quickly be joined by aggrieved indigenous people. As Cortés approached Tenochtitlan, the Tenochcah celebrated Toxcatl. At this event the most prominent warriors of the altepetl would dance in front of a huge statue of Huitzilopochtli. The Spanish leader, Pedro de Alvarado, who was left in charge, worried that the natives planned a surprise attack. He captured three natives and tortured them until they said that this was indeed true. During the festival, the Spaniards came heavily armed and closed off every exit from the courtyard so that no one would escape. This happened during their last days in Tenochtitlan. Nobles lined each side of the city's main causeway, which extended about a league. Walking down the center came Moctezuma II, with two lords at his side, one his brother, the ruler of Iztapalapa. Cortés dismounted and was greeted by the ruler and his lords, but forbidden to touch him. Cortés gave him a necklace of crystals, placing it over his neck. They were then brought to a large house that would serve as their home for their stay in the city. Once they were settled, Moctezuma himself sat down and spoke with Cortés. The great ruler declared that anything that they needed would be theirs to have. 
He was thrilled to have visitors of such stature. Although the Spaniards were seeking gold, Moctezuma expressed that he had very little of the sort, but all of it was to be given to Cortés if he desired it. Soon after arriving in Tenochtitlan, Cortés faced trouble. At the post he had left in Vera Cruz, the officer in charge received a letter from Qualpopoca, the leader of Nueva Almería, asking to become a vassal of the Spaniards. He requested that officials be sent to him so that he could confirm his submission. To reach the province, the officers would have to travel through hostile land. The officer in charge of Vera Cruz decided to send four officers to meet with Qualpopoca. When they arrived, they were captured and two were killed, the other two escaping through the woods. Upon their return to Vera Cruz, the officer in charge was infuriated, and led troops to storm Almería. Here they learned that Moctezuma was supposedly the one who had ordered the officers executed. Back in Tenochtitlan, Cortés detained Moctezuma and questioned him. Though no serious conclusions were reached, this started the relationship between Moctezuma and the Spaniards on a bad note. Cortés subsequently besieged Tenochtitlan for over 90 days, causing a famine; directed the systematic destruction and leveling of the city; and began its rebuilding, despite opposition, with a central area designated for Spanish use (the "traza"). The outer Indian section, now dubbed "San Juan Tenochtitlan", continued to be governed by the previous indigenous elite and was divided into the same subdivisions as before. While the people of Tenochtitlan were celebrating, the more than 60 Spaniards who had been captured were sacrificed while still alive and then eaten by the locals. The skins, feet and hands of captured Spaniards were sent around the country as a warning to other tribes. The people of Tenochtitlan were also exposed to European diseases. 
Symptoms were often delayed for up to ten days, when the infection would spread throughout the body, causing sores, pain, and high fever. People were weak to the point that they could not move, nor obtain food and water. Burial of the dead became difficult to impossible, due to the pervasiveness of the people's illness. The people of Tenochtitlan began to starve and weaken. The death toll rose steadily over the course of the next 60 days. Cortés founded the Spanish capital of Mexico City on the ruins of Tenochtitlan. Despite the extensive damage to the built environment, the site retained symbolic power and legitimacy as the capital of the Aztec empire, which Cortés sought to appropriate. For a time this "ciudad", the highest rank in the Spanish hierarchy of settlement designation, was called Mexico–Tenochtitlan. Charles Gibson devotes the final chapter of his classic work, "The Aztecs Under Spanish Rule", to what he called "The City," with later historians building on his work. The Spaniards established a "cabildo", or town council, which had jurisdiction over the Spanish residents. The Spanish established a Europeans-only zone in the center of the city, an area of 13 blocks in each direction of the central plaza, which was the "traza". Although many native residents died during the siege of Tenochtitlan, the indigenous still had a strong presence in the city, and were settled in two main areas of the island, designated San Juan Tenochtitlan and Santiago Tlatelolco, each with a municipal council that functioned the entire colonial period. San Juan Tenochtitlan was a Spanish administrative creation, which amalgamated four indigenous sections, with each losing territory to the Spanish "traza". The Spanish laid out the streets of the "traza" in a checkerboard pattern, with straight streets and plazas at intervals, whereas the indigenous portions of the city were irregular in layout and built of modest materials. 
In the colonial period both San Juan Tenochtitlan and Santiago Tlatelolco retained jurisdiction over settlements on the mainland that they could draw on for labor and tribute demanded by the Spanish, but increasingly those subordinate settlements were able to gain their autonomy, with their own rulers and a separate relationship with the Spanish rulers. Concern about the health of the indigenous population in early post-conquest Mexico–Tenochtitlan led to the founding of a royal hospital for indigenous residents. There are a number of colonial-era pictorial manuscripts dealing with Tenochtitlan–Tlatelolco, which shed light on litigation between Spaniards and indigenous over property. One surviving account gives information about the war of Tenochtitlan against its neighbor Tlatelolco in 1473 and the Spanish conquest in 1521. Anthropologist Susan Kellogg has studied colonial-era inheritance patterns of Nahuas in Mexico City, using Nahuatl- and Spanish-language testaments. Tenochtitlan's main temple complex, the Templo Mayor, was dismantled and the central district of the Spanish colonial city was constructed on top of it. The great temple was destroyed by the Spanish during the construction of a cathedral. The location of the Templo Mayor was rediscovered in the early 20th century, but major excavations did not take place until 1978–1982, after utility workers came across a massive stone disc depicting the nude dismembered body of the moon goddess Coyolxauhqui. The disc is held at the Templo Mayor Museum. The ruins, constructed over seven periods, were built on top of each other. The resulting weight of the structures caused them to sink into the sediment of Lake Texcoco; the ruins now rest at an angle instead of horizontally. Mexico City's Zócalo, the Plaza de la Constitución, is located at the site of Tenochtitlan's original central plaza and market, and many of the original "calzadas" still correspond to modern city streets. 
The Aztec calendar stone was located in the ruins. It was once located half-way up the great pyramid. This sculpture was carved around 1470 under the rule of King Axayacatl, the predecessor of Tizoc, and is said to tell the history of the Mexicas and to prophesy the future. In August 1987, archaeologists discovered a mix of 1,789 human bones below street level in Mexico City. The burial dates back to the 1480s, i.e., before Cortés's arrival, and lies at the foot of the main temple in the sacred ceremonial precinct of the Aztec capital. The bones are from children, teenagers and adults. A complete skeleton of a young woman was also found at the site.
https://en.wikipedia.org/wiki?curid=29988
Triassic The Triassic is a geologic period and system which spans 50.6 million years from the end of the Permian Period 251.9 million years ago (Mya) to the beginning of the Jurassic Period 201.3 Mya. The Triassic is the first and shortest period of the Mesozoic Era. Both the start and end of the period are marked by major extinction events. The Triassic period is subdivided into three epochs: Early Triassic, Middle Triassic and Late Triassic. The Triassic began in the wake of the Permian–Triassic extinction event, which left the Earth's biosphere impoverished; it was well into the middle of the Triassic before life recovered its former diversity. Therapsids and archosaurs were the chief terrestrial vertebrates during this time. A specialized subgroup of archosaurs, called dinosaurs, first appeared in the Late Triassic but did not become dominant until the succeeding Jurassic Period. The first true mammals, themselves a specialized subgroup of therapsids, also evolved during this period, as well as the first flying vertebrates, the pterosaurs, which, like the dinosaurs, were a specialized subgroup of archosaurs. The vast supercontinent of Pangaea existed until the mid-Triassic, after which it began to gradually rift into two separate landmasses, Laurasia to the north and Gondwana to the south. The global climate during the Triassic was mostly hot and dry, with deserts spanning much of Pangaea's interior. However, the climate shifted and became more humid as Pangaea began to drift apart. The end of the period was marked by yet another major mass extinction, the Triassic–Jurassic extinction event, that wiped out many groups and allowed dinosaurs to assume dominance in the Jurassic. The Triassic was named in 1834 by Friedrich von Alberti, after the three distinct rock layers ("tri" meaning "three") that are found throughout Germany and northwestern Europe—red beds, capped by marine limestone, followed by a series of terrestrial mud- and sandstones—called the "Trias". 
The Triassic is usually separated into Early, Middle, and Late Triassic Epochs, and the corresponding rocks are referred to as Lower, Middle, or Upper Triassic. The faunal stages, from youngest to oldest, are the Rhaetian, Norian and Carnian (Late Triassic); the Ladinian and Anisian (Middle Triassic); and the Olenekian and Induan (Early Triassic). During the Triassic, almost all the Earth's land mass was concentrated into a single supercontinent centered more or less on the equator and spanning from pole to pole, called Pangaea ("all the land"). From the east, along the equator, the Tethys sea penetrated Pangaea, causing the Paleo-Tethys Ocean to be closed. Later in the mid-Triassic a similar sea penetrated along the equator from the west. The remaining shores were surrounded by the world-ocean known as Panthalassa ("all the sea"). All the deep-ocean sediments laid down during the Triassic have disappeared through subduction of oceanic plates; thus, very little is known of the Triassic open ocean. The supercontinent Pangaea was rifting during the Triassic—especially late in that period—but had not yet separated. The first nonmarine sediments in the rift that marks the initial break-up of Pangaea, which separated New Jersey from Morocco, are of Late Triassic age; in the U.S., these thick sediments comprise the Newark Group. Because a super-continental mass has less shoreline compared to one broken up, Triassic marine deposits are globally relatively rare, despite their prominence in Western Europe, where the Triassic was first studied. In North America, for example, marine deposits are limited to a few exposures in the west. Thus Triassic stratigraphy is mostly based on organisms that lived in lagoons and hypersaline environments, such as Estheria crustaceans. At the beginning of the Mesozoic Era, Africa was joined with Earth's other continents in Pangaea. Africa shared the supercontinent's relatively uniform fauna which was dominated by theropods, prosauropods and primitive ornithischians by the close of the Triassic period. 
Late Triassic fossils are found throughout Africa, but are more common in the south than north. The time boundary separating the Permian and Triassic marks the advent of an extinction event with global impact, although African strata from this time period have not been thoroughly studied. During the Triassic, peneplains are thought to have formed in what is now Norway and southern Sweden. Remnants of this peneplain can be traced as a tilted summit accordance in the Swedish West Coast. In northern Norway, Triassic peneplains may have been buried in sediments to be then re-exposed as coastal plains called strandflats. Dating of illite clay from a strandflat of Bømlo, southern Norway, has shown that the landscape there became weathered in Late Triassic times (about 210 million years ago), with the landscape likely also being shaped during that time. At Paleorrota geopark, located in Rio Grande do Sul, Brazil, the Santa Maria and Caturrita Formations are exposed. In these formations, one of the earliest dinosaurs, "Staurikosaurus", as well as the mammal ancestors "Brasilitherium" and "Brasilodon", have been discovered. The Triassic continental interior climate was generally hot and dry, so that typical deposits are red bed sandstones and evaporites. There is no evidence of glaciation at or near either pole; in fact, the polar regions were apparently moist and temperate, providing a climate suitable for forests and vertebrates, including reptiles. Pangaea's large size limited the moderating effect of the global ocean; its continental climate was highly seasonal, with very hot summers and cold winters. The strong contrast between the Pangea supercontinent and the global ocean triggered intense cross-equatorial monsoons. The Triassic may have mostly been a dry period, but evidence exists that it was punctuated by several episodes of increased rainfall in tropical and subtropical latitudes of the Tethys Sea and its surrounding land. 
Sediments and fossils suggestive of a more humid climate are known from the Anisian to Ladinian of the Tethysian domain, and from the Carnian and Rhaetian of a larger area that also includes the Boreal domain (e.g., Svalbard Islands), the North American continent, the South China block and Argentina. The best studied of such episodes of humid climate, and probably the most intense and widespread, was the Carnian Pluvial Event. A 2020 study found bubbles of carbon dioxide in basaltic rocks dating back to the end of the Triassic, and concluded that volcanic activity helped trigger climate change in that period. Three categories of organisms can be distinguished in the Triassic record: survivors from the Permian–Triassic extinction event, new groups which flourished briefly, and other new groups which went on to dominate the Mesozoic Era. On land, the surviving vascular plants included the lycophytes, the dominant cycadophytes, ginkgophyta (represented in modern times by "Ginkgo biloba"), ferns, horsetails and glossopterids. The spermatophytes, or seed plants, came to dominate the terrestrial flora: in the northern hemisphere, conifers, ferns and bennettitales flourished. "Glossopteris" (a seed fern) was the dominant southern hemisphere tree during the Early Triassic period. Before the Permian extinction, Archaeplastida (red and green algae) had been the major marine phytoplankton since about 659–645 million years ago, when they replaced marine planktonic cyanobacteria, which first appeared about 800 million years ago, as the dominant phytoplankton in the oceans. In the Triassic, secondary endosymbiotic algae became the most important plankton. In marine environments, new modern types of corals appeared in the Early Triassic, forming small patches of reefs of modest extent compared to the great reef systems of Devonian or modern times. Serpulids appeared in the Middle Triassic. Microconchids were abundant. 
The shelled cephalopods called ammonites recovered, diversifying from a single line that survived the Permian extinction. The fish fauna was remarkably uniform, suggesting that very few families survived the Permian extinction. There were also many types of marine reptiles. These included the Sauropterygia, which featured pachypleurosaurs and nothosaurs (both common during the Middle Triassic, especially in the Tethys region), placodonts, and the first plesiosaurs. The first of the lizardlike Thalattosauria (askeptosaurs) also appeared, and the highly successful ichthyosaurs, which arose in Early Triassic seas, soon diversified; some eventually developed to huge size during the Late Triassic. Subequatorial saurichthyids and birgeriids have also been described in Early Triassic strata. Several groups of terrestrial fauna appeared in the Triassic period or achieved a new level of evolutionary success during it. The Permian–Triassic extinction devastated terrestrial life. Biodiversity rebounded as the surviving species repopulated empty terrain, but these recoveries were short-lived. Diverse communities with complex food-web structures took 30 million years to reestablish. Temnospondyl amphibians were among those groups that survived the Permian–Triassic extinction; some lineages (e.g. trematosaurs) flourished briefly in the Early Triassic, while others (e.g. capitosaurs) remained successful throughout the whole period, or only came to prominence in the Late Triassic (e.g. "Plagiosaurus", metoposaurs). As for other amphibians, the first Lissamphibia, progenitors of the first frogs, are known from the Early Triassic, but the group as a whole did not become common until the Jurassic, when the temnospondyls had become very rare. 
Most of the Reptiliomorpha, stem-amniotes that gave rise to the amniotes, disappeared in the Triassic, but two water-dwelling groups survived: the Embolomeri, which only survived into the early part of the period, and the Chroniosuchia, which survived until the end of the Triassic. Archosauromorph reptiles, especially archosaurs, progressively replaced the synapsids that had dominated the previous Permian period. "Cynognathus" was the characteristic top predator of the earlier Triassic (Olenekian and Anisian) in Gondwana. Both kannemeyeriid dicynodonts and gomphodont cynodonts remained important herbivores during much of the period, and ecteniniids played a role as large-sized, cursorial predators in the Late Triassic. During the Carnian (early part of the Late Triassic), some advanced cynodonts gave rise to the first mammals. At the same time the Ornithodira, which until then had been small and insignificant, evolved into pterosaurs and a variety of dinosaurs. The Crurotarsi were the other important archosaur clade, and during the Late Triassic these also reached the height of their diversity, with various groups including the phytosaurs, aetosaurs, several distinct lineages of Rauisuchia, and the first crocodylians (the Sphenosuchia). Meanwhile, the stocky herbivorous rhynchosaurs and the small to medium-sized insectivorous or piscivorous Prolacertiformes were important basal archosauromorph groups throughout most of the Triassic. Among other reptiles, the earliest turtles, like "Proganochelys" and "Proterochersis", appeared during the Norian Age (Stage) of the Late Triassic Period. The Lepidosauromorpha, specifically the Sphenodontia, are first found in the fossil record of the earlier Carnian Age. The Procolophonidae were an important group of small lizard-like herbivores. During the Triassic, archosaurs displaced therapsids as the dominant amniotes. 
This "Triassic Takeover" may have contributed to the evolution of mammals by forcing the surviving therapsids and their mammaliaform successors to live as small, mainly nocturnal insectivores. Nocturnal life may have forced the mammaliaforms to develop fur and a higher metabolic rate. The Monte San Giorgio lagerstätte, now in the Lake Lugano region of northern Italy and Switzerland, was in Triassic times a lagoon behind reefs with an anoxic bottom layer, so there were no scavengers and little turbulence to disturb fossilization, a situation that can be compared to the better-known Jurassic Solnhofen Limestone lagerstätte. The remains of fish and various marine reptiles (including the common pachypleurosaur Neusticosaurus, and the bizarre long-necked archosauromorph "Tanystropheus"), along with some terrestrial forms like "Ticinosuchus" and "Macrocnemus", have been recovered from this locality. All these fossils date from the Anisian/Ladinian transition (about 237 million years ago). The Triassic period ended with a mass extinction, which was particularly severe in the oceans; the conodonts disappeared, as did all the marine reptiles except ichthyosaurs and plesiosaurs. Invertebrates like brachiopods, gastropods, and molluscs were severely affected. In the oceans, 22% of marine families and possibly about half of marine genera went missing. Though the end-Triassic extinction event was not equally devastating in all terrestrial ecosystems, several important clades of crurotarsans (large archosaurian reptiles previously grouped together as the thecodonts) disappeared, as did most of the large labyrinthodont amphibians, groups of small reptiles, and some synapsids (except for the proto-mammals). Some of the early, primitive dinosaurs also became extinct, but more adaptive ones survived to evolve into the Jurassic. Surviving plants that went on to dominate the Mesozoic world included modern conifers and cycadeoids. The cause of the Late Triassic extinction is uncertain. 
It was accompanied by huge volcanic eruptions that occurred as the supercontinent Pangaea began to break apart about 202 to 191 million years ago (40Ar/39Ar dates), forming the Central Atlantic Magmatic Province (CAMP), one of the largest known inland volcanic events since the planet had first cooled and stabilized. Other possible but less likely causes for the extinction events include global cooling or even a bolide impact, for which an impact crater containing Manicouagan Reservoir in Quebec, Canada, has been singled out. However, the Manicouagan impact melt has been dated to 214±1 Mya. The date of the Triassic-Jurassic boundary has also been more accurately fixed recently, at about 201.3 Mya. Both dates are gaining accuracy by using more accurate forms of radiometric dating, in particular the decay of uranium to lead in zircons formed at the time of the impact. So, the evidence suggests the Manicouagan impact preceded the end of the Triassic by approximately 10±2 Ma. It could not therefore be the immediate cause of the observed mass extinction. The number of Late Triassic extinctions is disputed. Some studies suggest that there are at least two periods of extinction towards the end of the Triassic, separated by 12 to 17 million years. But arguing against this is a recent study of North American faunas. In the Petrified Forest of northeast Arizona there is a unique sequence of late Carnian-early Norian terrestrial sediments. An analysis in 2002 found no significant change in the paleoenvironment. Phytosaurs, the most common fossils there, experienced a change-over only at the genus level, and the number of species remained the same. Some aetosaurs, the next most common tetrapods, and early dinosaurs, passed through unchanged. However, both phytosaurs and aetosaurs were among the groups of archosaur reptiles completely wiped out by the end-Triassic extinction event. 
It seems likely then that there was some sort of end-Carnian extinction, when several herbivorous archosauromorph groups died out, while the large herbivorous therapsids—the kannemeyeriid dicynodonts and the traversodont cynodonts—were much reduced in the northern half of Pangaea (Laurasia). These extinctions within the Triassic and at its end allowed the dinosaurs to expand into many niches that had become unoccupied. Dinosaurs became increasingly dominant, abundant and diverse, and remained that way for the next 150 million years. The true "Age of Dinosaurs" is during the following Jurassic and Cretaceous periods, rather than the Triassic.
https://en.wikipedia.org/wiki?curid=29989
Titanic Thompson Alvin Clarence Thomas (November 30, 1893 – May 19, 1974) was an American gambler, golfer and hustler better known as Titanic Thompson. Thompson traveled the country wagering at cards, dice games, golf, shooting, billiards, horseshoes and proposition bets of his own devising. As an ambidextrous golfer, card player, marksman and pool shark, his skills and reputation were compared to "Merlin himself". Writer Damon Runyon allegedly based the character Sky Masterson, the gambler-hero of "The Idyll of Miss Sarah Brown" (on which the musical "Guys and Dolls" is based), on Thompson. In 1928, Thompson was involved in a high-stakes poker game that led to the shooting death of New York City crime boss Arnold Rothstein, then called the "crime of the century". The following year he testified in the trial of George McManus, who was charged with Rothstein's murder, but later acquitted. Thomas was born in Monett, Missouri, but raised mainly on a farm in the Ozark Mountains, a few miles from Rogers, Arkansas, 50 miles further south. His mother remarried (following desertion by his father Lee Thomas, a gambler himself). Thomas began conducting his nomadic, lucrative career of hustling in the rural south-central United States about 1908, leaving home at age 16 with less than a dollar in his pocket. Unable to read or write effectively, he had attended school only sporadically, and felt unwelcome in the home of his stepfather. Thomas spent most of his youth developing skills he would use later, such as shooting and understanding odds at card games through marathon dealing of hands. Thomas was drafted in early 1918, several months after the United States entered World War I. Following basic training, where he excelled, he was promoted to the rank of sergeant. Thomas remained stateside, trained younger draftees, and did not see overseas service or combat before the war ended in November 1918, when he was discharged. 
Thomas also taught gambling skills to many of his trainees, and then proceeded to win substantial money from them. He ended the war with more than $50,000 in cash, and used much of this money to buy his mother a house in Monett, Missouri, his birthplace. Later, when Thompson had honed his skills, he became a "road gambler", a traveling hustler who became an underground legend by winning at all manner of propositions, many of them tricky if not outright fraudulent. Among his favorites were: betting he could throw a walnut over a building (he had weighted the hollowed shell with lead beforehand), throwing a large room key into its lock, and moving a road mileage sign before betting that the listed distance to the town was in error. Thompson once bet that he could drive a golf ball 500 yards, using a hickory-shafted club, at a time when an expert player's drive was just over 200 yards. He won by waiting until winter and driving the ball onto a frozen lake, where it bounced past the required distance on the ice. Thompson's partners in "the hustling game" included pool player Minnesota Fats, who considered Titanic a genius, "the greatest action man of all time". Thompson's one weakness, as he admitted, was betting on horse racing, where he lost millions of dollars during his life in failed bets. Blessed with extraordinary eyesight and hand-eye coordination, he was a skilled athlete, crack shot and self-taught golfer good enough to turn professional. Raised in a poor environment far from exclusive golf courses, Thomas did not take up golf seriously until he was in his early thirties, but improved very quickly during an extended stint in San Francisco, where he took lessons from club professionals and honed his skills. From then on he played several times per week for the next 20 years. 
In an era when the top pro golfers would be fortunate to make $30,000 a year, Thomas (who, after a misprint in a New York newspaper, let people think his name was Thompson) could make that much in a week hustling rich country club players. Asked whether he would ever turn professional, he replied, "I could not afford the cut in pay". Hall of Fame golfer Ben Hogan, who traveled with him in the early 1930s for money games, later called Titanic the best shotmaker he ever saw. "He can play right- or left-handed, you can't beat him", said Hogan. One hustle of his was to beat a golfer playing right-handed, and then offer double or nothing to play the course again left-handed as an apparent concession. One thing his opponent usually did not know was that Thomas was naturally left-handed. Thomas' genius was in figuring out the odds on almost any proposition and heavily betting that way. He also had to perform under pressure, and most often did. As he aged, Thompson liked to pick promising young players as his golf partners. Several of these who went on to later PGA Tour stardom included young and unknown Ben Hogan, Ky Laffoon, Herman Keiser and Lee Elder. Other well-known golfers who left behind first-hand documented accounts of their dealings and matches with Thompson included Harvey Penick, Paul Runyan, Byron Nelson and Sam Snead, all of whom were inducted into the World Golf Hall of Fame. Married five times, Thompson fathered three children, all boys, with three different wives. He was also romantically linked with many women. Among his alleged trysts were actresses Myrna Loy and Jean Harlow. He typically married a young woman, lived with her for a few months, then returned to his road hustling, while leaving comfortable housing and financial support for his newly divorced wife. Thompson killed five men. 
The first was in 1910, in rural Arkansas, when a man named Jim Johnson accused him of cheating at dice and threw him off the boat on which they were traveling (and which Thompson had recently won when gambling with its previous owner – a friend of Johnson's). When Thompson climbed back on board, Johnson drew a knife and threatened Thompson's girlfriend, who was also on board. Thompson seized a hammer and struck Johnson several times on the head before throwing him overboard. The unconscious Johnson drowned. Thompson showed no remorse, stating it was Johnson's fault for not being able to swim. The sheriff gave Thompson a choice: stand trial, or hand over the deed to the boat and leave town; Thompson chose the latter. The other four men Thompson killed were shot in self-defense when they tried to rob him of gambling winnings. Two were killed in one incident in St. Louis in 1919 (the local police chief thanked him for killing two wanted bank robbers). The third came in St. Joseph, where Thompson and his hired bodyguard between them shot two men attempting to rob a poker game (again, the victims were known criminals and no charges were pressed). Thompson's last killing came near a country club in Texas in 1932 when he shot a masked figure who was holding him at gunpoint. This turned out to be sixteen-year-old Jimmy Frederick, who had caddied for Thompson earlier that day in a winning match. The dying Frederick confirmed to witnesses that he had been trying to rob Thompson. On November 4, 1928, Arnold Rothstein was murdered, allegedly because he refused to pay his debts from a poker game, held several months earlier, that he believed to have been fixed. This game had been organized by George McManus, who stood trial for the murder the next year, in a proceeding heavily covered by the media. McManus was eventually acquitted due to lack of evidence, and no one else was ever tried for Rothstein's death. 
Thompson had been present at the game, and an active participant in it; and it was he who, in association with one Nate Raymond, allegedly fixed the game, leaving Rothstein with total debts estimated at $500,000. Thompson, who was not present at the shooting, gave evidence at McManus's trial, without revealing his own role in the poker game. Rothstein had stood to recoup his losses through heavy bets on the 1928 election victories of Herbert Hoover (as president) and Franklin Delano Roosevelt (as governor of New York); both results came shortly after Rothstein's death. Thompson later told close friends that he knew the real killer had been Rothstein's bodyguard. In his own story, published in "Sports Illustrated" in 1972, Alvin Thomas, listed as a co-author, gave his own account of the affair. In the 1960s, Thompson settled in Dallas and, although approaching 70 years of age, kept up a good standard of golf, and frequently hustled games at Tenison Park, a municipal golf course, and at posh Glen Lakes Country Club. Mid-decade, Thompson sponsored a young Raymond Floyd, then early in his PGA Tour career but already a winner, in a big money stakes match against Lee Trevino, then an unknown assistant pro, in El Paso, at Trevino's home course. After three days of play, honors and bets were equal, with both players well under par each round. Trevino gained confidence from the match, and within a few years became a Tour star himself, while Floyd's career also ascended. Thompson was honored at the first World Series of Poker in Las Vegas, Nevada, in 1970. He lived out his final years in a nursing home near Dallas. Thompson had made gambling trips with eldest son Tommy for many years, but after his father died, Tommy, who also had become a skilled, successful gambler, gave up gambling for a church ministry and later counseled prisoners, preaching to convince others to stay away from gambling.
https://en.wikipedia.org/wiki?curid=29990
The Shockwave Rider The Shockwave Rider is a science fiction novel by John Brunner, originally published in 1975. It is notable for its hero's use of computer hacking skills to escape pursuit in a dystopian future, and for the coining of the word "worm" to describe a program that propagates itself through a computer network. It also introduces the concept of a "Delphi pool", perhaps derived from the RAND Corporation's Delphi method – a futures market on world events which bears close resemblance to DARPA's controversial and cancelled Policy Analysis Market. The title derives from the futurist work "Future Shock" by Alvin Toffler. The hero is a survivor in a hypothetical world of quickly changing identities, fashions, and lifestyles, where individuals are still controlled and oppressed by a powerful and secretive state apparatus. His highly developed computer skills enable him to use any public telephone to punch in a new identity, thus reinventing himself within hours. As a fugitive, he must do this from time to time to avoid capture. The title is also a metaphor for survival in an uncertain world. The novel shows a dystopian early 21st-century America dominated by computer networks, and is considered by some critics to be an early ancestor of the cyberpunk genre. The hero, Nick Haflinger, is a runaway from Tarnover, a government program intended to find, educate and indoctrinate highly gifted children to further the interests of the state in a future where quantitative analysis backed by the tacit threat of coercion has replaced overt military and economic power as the deciding factor in international competition. In parallel with this, the government has become an oligarchy whose beneficiaries are members of organised crime. Nick's talent extends to programming the network using only a touch-tone telephone. One of his handlers at Tarnover explains that this is like a classical pianist being able to play entire sonatas and concertos from memory. 
However, Nick also has some personality flaws, amounting almost to a death wish. These become manifest in exhibitions of his abilities, revealing his identity to his pursuers. The background to the story includes a massive earthquake laying waste to the San Francisco Bay Area in California. Millions die and millions more are left to live on government handouts. The subsequent economic depression, coupled with the rootlessness enabled by access to online data and strong social pressure to be flexible (the results of corporations wanting highly mobile workforces without strong local ties), results in a fragmentation of society along religious, ethnic, and class lines into what Toffler called "subcults", including what would in 2010 be described as gangs. The equitable distribution of data access and data privacy is a prominent theme in the book; characters who have access to information which is nominally secret enjoy demonstrable economic advantages over others lacking access to such data. In the novel, data privacy is reserved for corporate entities and individuals, who may then conceal wrongdoing; by contrast, normal citizens do not enjoy significant privacy. The world described in the book is dystopian, with laissez-faire economics portrayed as leading to disaster as greed trumps long-term planning. The educational system is dysfunctional, with teachers unable to perform their jobs due to strictures. The only functional educational system seen in the book is portrayed as an enclave, the tightly-controlled Tarnover school. Communities are either walled enclaves of privilege or largely lawless areas entirely lacking protection from corrupt civil authorities. Infrastructure has been allowed to crumble, and characters who reside within "paid avoidance zones" receive compensation from the government in lieu of actual services. 
The novel is set in the weeks following Nick's recapture after several years on the run, alternating between moral arguments with his interrogator, who is trying to discover why the program's star pupil had absconded, and flashbacks of his career. The interrogator is Paul Freeman, a graduate of another secret installation known as Electric Skillet, which focuses on weapons and defence strategy. Although he had initially felt at home at Tarnover, Nick eventually becomes aware of experiments in genetic engineering being performed there. These produce monstrous deformed children who are disposed of when they are no longer needed for study. At this point Nick becomes determined to escape. He studies data processing, steals a personal ID code intended for privileged individuals who wish to live their lives without surveillance, and goes on the run. He uses the stolen computer access code to cover his data trails and create new identities for himself, easily adopting entire new personas. One is the pastor of a popular church, another is a successful computer consultant. In this last role, calling himself Sandy Locke, he becomes the lover of Ina Grierson, a top executive at Ground to Space Industries, a powerful "hypercorp" known to all as G2S. Intending to use the computer facilities at G2S to ensure that his stolen code is still valid, he signs on as a "systems rationalizer" with the company. This brings him into contact with Ina's daughter, Kate, who attracts him despite her plain appearance and simple lifestyle. At the age of 22, Nick's age when he left Tarnover, Kate is a perpetual student at the University of Missouri–Kansas City. She is perceptive enough to penetrate Nick's adopted persona, deeply disturbing him even though she fascinates him. He visits her at home, helping her to clean out some of her possessions, and meeting her tame cougar, Bagheera – the product of her late father's genetic research into intelligence. 
He died shortly after abandoning the research because the government was using it to produce animals for military uses. The 21st-century lifestyle produces a symptom called "overload" in many people, and most, including Nick, take tranquilizers to some degree. However, Nick collapses completely when told that a representative from Tarnover is coming to his promotional interview at G2S. He returns to Kate and confesses that he is not what he seems, asking for her help. She conducts him to one of the "paid avoidance areas" in California, where people are paid to do without the full panoply of modern technology, as an alternative to spending billions to rebuild infrastructure after the earthquake. After Nick risks exposure yet again in one of these places, they move to the least known one, a town called Precipice. Precipice turns out to be a Utopian community of a few thousand people. The nearest comparison would be an agrarian, cottage industry community designed by William Morris. Precipice is also the home of Hearing Aid, an anonymous telephone confession service accessible to anyone in the country. Hearing Aid is also known as the "Ten Nines", after the phone number used to call it: 999-999-9999. People call the service, a human operator answers, and they simply talk while the operator listens. Some rant, others seek sympathy, still others commit suicide while on the phone. Hearing Aid's promise is that nobody else, not even the government, will hear the call. The only response Hearing Aid gives to a caller is "Only I heard that, I hope it helped." Nick and Kate settle into the community. The inhabitants include intelligent dogs that escaped from the projects that Kate's father worked on. These act as companions, guards, nannies, and even lie detectors, using their sense of smell. Nick rewrites the "computer tapeworm" that prevents the calls to Hearing Aid from being monitored. 
While at G2S he became aware of massive backups of data being performed, clearly in anticipation of a major network outage. The Hearing Aid worm is designed to scramble network traffic if attacked, but Nick realises that it could be destroyed if the authorities were prepared for the effects and ready to recover from them. His new worm, which he calls a "phage", cannot be removed without dismantling the entire network. Possibly encouraged by the government, local gangs and tribes raid Precipice, burning down Nick and Kate's house before being overwhelmed by the dogs. Nick, suffering another overload, blames Kate for the incident, since she, following Hearing Aid policy, cut off a call from someone attempting to warn Precipice. He hits her, and then, filled with remorse, leaves the town. He finally reveals his location to the authorities when, encountering one of the "Roman circus" operations which broadcast live fights and other bloody exhibitions to the country, he responds to an "all comers" challenge by the father of the leader of one of the gangs, and cripples him in front of a nationwide audience. At Tarnover, Paul Freeman takes charge of the interrogation. He was the representative whom Nick, as Sandy Locke, was supposed to meet at G2S. Freeman, a tall gaunt African-American, gradually comes to realise that he has more sympathy with Nick's views than his employer's, and eventually absconds himself, giving Nick computer access so that Nick can make his own escape. The precipitating event in this case is Kate's abduction by government agents, who bring her to Tarnover for further questioning and to threaten Nick. With the code he gets from Freeman, he sets up an identity as an Army major, with Kate as his prisoner. Once clear of Tarnover, they disappear together. 
This time around, Nick has another plan, and rather than running and hiding, he and Kate spend a number of months travelling the country, aided by an "invisible college" of academics who are allies or former residents of Precipice. He creates a new "worm" which is designed to destroy all secrecy. (Brunner invented the term "worm" for this program, as a self-replicating program that propagates across a computer network – the word was later adopted by computer researchers as the name for this type of program.) The worm is eventually activated, and the details of all the government's dark secrets (clandestine genetic experimentation that produces crippled children, bribes and kickbacks from corporations, concealed crimes of high public officials) now become accessible from anywhere on the network – in fact, those most affected by a particular crime of a government official are emailed the full details. In place of the old system, Nick has designed the worm to enforce a kind of utilitarian socialism, with people's worth being defined by their roles in society, not their connections in high places. In effect, the network becomes the entire government and financial system, policing income for illegal money, freezing the accounts of criminals, while making sure money (or credit) flows to places where people are in need. This will only happen fully if the results of a plebiscite, again conducted over the network, allow it. In a final atavistic attempt at revenge, the government orders a nuclear strike by a single aircraft from a local Air Force base. Warned by Hearing Aid, Nick is able to penetrate the military computers and manufacture a counter-order to stop the plane just before it reaches the town. The book ends optimistically, with there being no more privileged hiding of information, no more secret conspiracies of the rich and powerful. 
Spider Robinson gave the novel a mixed review, saying that while "the book reads well ..., [i]ndividual sections are often brilliant, [and] the message is incisive and timely", nevertheless "as a story it limps" and many characters, including the main antagonists, "are cut from cardboard". "The New York Times" reviewer Gerald Jonas was even more critical, saying that while Brunner was attempting to write "slice-of-life" fiction about a future society, the result of his arbitrary choices about social details is that "the entire fictional edifice collapses like a house of cards." The novel was written shortly after two pivotal events of the 1970s, the resignation of Richard Nixon and the overthrow of the Chilean President Salvador Allende, which are cited in the novel as examples, in Nixon's case, of a failed attempt by organised crime to suborn the Presidency, and in Allende's, of the consequences of working against multinational commercial interests. Most of the characters live with the feeling that their lives could be turned upside down in an instant because of someone breaking into the data held on the network. They also believe that the network knows more about them than they do about themselves. This is an extension of the sense of paranoia felt by many people in the 1970s, believing themselves to be powerless in the face of political and economic forces over which they had no control. Perception is a recurrent theme in the novel. In particular, Brunner is concerned with perceptual patterns and how they can both help and mislead. Nick projects patterns of behaviour to assume his personas, but Kate has "natural wisdom" which means that she ignores surface patterns to perceive the truth beneath. When they arrive in Precipice, the couple have to abandon their normal "urban pattern" to see the ways in which the town's unique design merges public and private space, along with natural and artificial structures. 
The theme of patterns in perception runs through the entire novel. Future shock arises when reality and change disrupt patterns. People respond by falling into strong patterns within human nature, particularly tribalism. Others try to convince themselves that all change is good, adopting the "plug-in" lifestyle where they feel able to relocate to another city and insert themselves into a new social niche with a minimum of inconvenience. Their mobility is, however, a reflection of the failure of the lifestyle to satisfy them, resulting in more moves. In this world of confusion are also companies specialising in psychological intervention. One such is Anti-Trauma Inc. which is hired to "normalise" children in a process akin to deprogramming, the (often violent) attempt to force people to renounce their association with groups perceived as cults. Anti-Trauma does significant harm to its charges, although as so often happens in Brunner's interconnected society, it also spends much money and time covering up its failures. Brunner's concept of the computer worm was inspired by analogy with the tapeworm, a digestive parasite. A biological tapeworm consists of a head attached to a long train of reproductive segments, each of which can produce more worms when detached. Brunner's "data-net tapeworm" consists of a head followed by other segments, each being some kind of code which has effects on databases and other systems. Several are unleashed in the book. Besides the two Hearing Aid tapeworms, and Nick's ultimate tapeworm, there is a "denunciation tapeworm" created as revenge by a representative of Anti-Trauma Inc. whom Nick insults and curses. At the time, Nick was playing the role of a priest in a revivalist church. The worm's intent was to destroy the church by cancelling all its utility services. Nick in turn sends another worm into the network to destroy that one.
https://en.wikipedia.org/wiki?curid=29991
Turkish language Turkish (), also referred to as Istanbul Turkish ("İstanbul Türkçesi") or Turkey Turkish ("Türkiye Türkçesi"), is the most widely spoken of the Turkic languages, with around 70 to 80 million speakers, mostly in Turkey. Outside its native country, significantly smaller groups of speakers exist in Germany, Austria, Bulgaria, North Macedonia, Northern Cyprus, Greece, the Caucasus, and other parts of Europe and Central Asia. Cyprus has requested that the European Union add Turkish as an official language, even though Turkey is not a member state. To the west, the influence of Ottoman Turkish—the variety of the Turkish language that was used as the administrative and literary language of the Ottoman Empire—spread as the Ottoman Empire expanded. In 1928, as one of Atatürk's Reforms in the early years of the Republic of Turkey, the Ottoman Turkish alphabet was replaced with a Latin alphabet. The distinctive characteristics of the Turkish language are vowel harmony and extensive agglutination. The basic word order of Turkish is subject–object–verb. Turkish has no noun classes or grammatical gender. The language makes use of honorifics and has a strong T–V distinction which distinguishes varying levels of politeness, social distance, age, courtesy or familiarity toward the addressee. The plural second-person pronoun and verb forms are used to refer to a single person out of respect. About 40% of all speakers of Turkic languages are native Turkish speakers. The characteristic features of Turkish, such as vowel harmony, agglutination, and lack of grammatical gender, are universal within the Turkic family. The Turkic family comprises some 30 living languages spoken across Eastern Europe, Central Asia, and Siberia. Turkish is a member of the Oghuz group of languages, a subgroup of the Turkic language family. 
There is a high degree of mutual intelligibility between Turkish and the other Oghuz Turkic languages, including Azerbaijani, Turkmen, Qashqai, Gagauz, and Balkan Gagauz Turkish. The Turkic languages have sometimes been grouped into the controversial Altaic language family. The earliest known Old Turkic inscriptions are the three monumental Orkhon inscriptions found in modern Mongolia. Erected in honour of the prince Kul Tigin and his brother Emperor Bilge Khagan, these date back to the Second Turkic Khaganate. After the discovery and excavation of these monuments and associated stone slabs by Russian archaeologists in the wider area surrounding the Orkhon Valley between 1889 and 1893, it became established that the language of the inscriptions was the Old Turkic language written using the Old Turkic alphabet, which has also been referred to as "Turkic runes" or "runiform" due to a superficial similarity to the Germanic runic alphabets. With the Turkic expansion during the Early Middle Ages (c. 6th–11th centuries), peoples speaking Turkic languages spread across Central Asia, covering a vast geographical region stretching from Siberia all the way to Europe and the Mediterranean. The Seljuqs of the Oghuz Turks, in particular, brought their language, Oghuz—the direct ancestor of today's Turkish language—into Anatolia during the 11th century. Also during the 11th century, an early linguist of the Turkic languages, Mahmud al-Kashgari from the Kara-Khanid Khanate, published the first comprehensive Turkic language dictionary and map of the geographical distribution of Turkic speakers in the "Compendium of the Turkic Dialects" (Ottoman Turkish: "Divânü Lügati't-Türk"). Following the adoption of Islam c. 950 by the Kara-Khanid Khanate and the Seljuq Turks, who are both regarded as the ethnic and cultural ancestors of the Ottomans, the administrative language of these states acquired a large collection of loanwords from Arabic and Persian. 
Turkish literature during the Ottoman period, particularly Divan poetry, was heavily influenced by Persian, including the adoption of poetic meters and a great quantity of imported words. The literary and official language during the Ottoman Empire period (c. 1299–1922) is termed Ottoman Turkish, which was a mixture of Turkish, Persian, and Arabic that differed considerably from, and was largely unintelligible to, the everyday Turkish of the period. The everyday Turkish, known as "kaba Türkçe" or "rough Turkish", spoken by the less-educated lower-class and rural members of society, contained a higher percentage of native vocabulary and served as the basis for the modern Turkish language. After the foundation of the modern state of Turkey and the script reform, the Turkish Language Association (TDK) was established in 1932 under the patronage of Mustafa Kemal Atatürk, with the aim of conducting research on Turkish. One of the tasks of the newly established association was to initiate a language reform to replace loanwords of Arabic and Persian origin with Turkish equivalents. By banning the usage of imported words in the press, the association succeeded in removing several hundred foreign words from the language. While most of the words introduced to the language by the TDK were newly derived from Turkic roots, it also opted for reviving Old Turkish words which had not been used for centuries. Owing to this sudden change in the language, older and younger people in Turkey started to differ in their vocabularies. While the generations born before the 1940s tend to use the older terms of Arabic or Persian origin, the younger generations favor new expressions. It is considered particularly ironic that Atatürk himself, in his lengthy speech to the new Parliament in 1927, used a style of Ottoman which sounded so alien to later listeners that it had to be "translated" three times into modern Turkish: first in 1963, again in 1986, and most recently in 1995. 
The past few decades have seen the continuing work of the TDK to coin new Turkish words to express new concepts and technologies as they enter the language, mostly from English. Many of these new words, particularly information technology terms, have received widespread acceptance. However, the TDK is occasionally criticized for coining words which sound contrived and artificial. Some earlier changes—such as ' to replace ', "political party"—also failed to meet with popular approval (' has been replaced by the French loanword '). Some words restored from Old Turkic have taken on specialized meanings; for example "" (originally meaning "book") is now used to mean "script" in computer science. Some examples of modern Turkish words and the corresponding old loanwords are given in the accompanying table. Turkish is natively spoken by the Turkish people in Turkey and by the Turkish diaspora in some 30 other countries. The Turkish language is mutually intelligible with Azerbaijani and other Oghuz Turkic languages. In particular, Turkish-speaking minorities exist in countries that formerly (in whole or part) belonged to the Ottoman Empire, such as Iraq, Bulgaria, Cyprus, Greece (primarily in Western Thrace), the Republic of North Macedonia, Romania, and Serbia. More than two million Turkish speakers live in Germany; and there are significant Turkish-speaking communities in the United States, France, the Netherlands, Austria, Belgium, Switzerland, and the United Kingdom. Due to the cultural assimilation of Turkish immigrants in host countries, not all ethnic members of the diaspora speak the language with native fluency. In 2005, 93% of the population of Turkey were native speakers of Turkish, about 67 million at the time, with Kurdish languages making up most of the remainder. Turkish is the official language of Turkey and is one of the official languages of Cyprus. Turkish has official status in 38 municipalities in Kosovo, including Mamusha, two in the Republic of North Macedonia and in Kirkuk Governorate in Iraq. 
In Turkey, the regulatory body for Turkish is the Turkish Language Association ("Türk Dil Kurumu" or TDK), which was founded in 1932 under the name "Türk Dili Tetkik Cemiyeti" ("Society for Research on the Turkish Language"). The Turkish Language Association was influenced by the ideology of linguistic purism: indeed one of its primary tasks was the replacement of loanwords and of foreign grammatical constructions with equivalents of Turkish origin. These changes, together with the adoption of the new Turkish alphabet in 1928, shaped the modern Turkish language spoken today. The TDK became an independent body in 1951, with the lifting of the requirement that it should be presided over by the Minister of Education. This status continued until August 1983, when it was again made into a governmental body in the constitution of 1982, following the military coup d'état of 1980. Modern standard Turkish is based on the dialect of Istanbul. This "Istanbul Turkish" ("İstanbul Türkçesi") constitutes the model of written and spoken Turkish, as recommended by Ziya Gökalp, Ömer Seyfettin and others. Dialectal variation persists, in spite of the levelling influence of the standard used in mass media and in the Turkish education system since the 1930s. Academic researchers from Turkey often refer to Turkish dialects as "ağız" or "şive", leading to an ambiguity with the linguistic concept of accent, which is also covered by these words. Several universities, as well as a dedicated work-group of the Turkish Language Association, carry out projects investigating Turkish dialects, and work continues on the compilation and publication of their research as a comprehensive dialect atlas of the Turkish language. Some immigrants to Turkey from Rumelia speak "Rumelice", which includes the distinct dialects of Ludogorie, Dinler, and Adakale, which show the influence of the theorized Balkan sprachbund. "Kıbrıs Türkçesi" is the name for Cypriot Turkish and is spoken by the Turkish Cypriots. 
"Edirne" is the dialect of Edirne. "Ege" is spoken in the Aegean region, with its usage extending to Antalya. The nomadic Yörüks of the Mediterranean Region of Turkey also have their own dialect of Turkish. This group is not to be confused with the Yuruk nomads of Macedonia, Greece, and European Turkey, who speak Balkan Gagauz Turkish. "Güneydoğu" is spoken in the southeast, to the east of Mersin. "Doğu", a dialect in the Eastern Anatolia Region, has a dialect continuum. The Meskhetian Turks who live in Kazakhstan, Azerbaijan and Russia as well as in several Central Asian countries, also speak an Eastern Anatolian dialect of Turkish, originating in the areas of Kars, Ardahan, and Artvin and sharing similarities with Azerbaijani, the language of Azerbaijan. The Central Anatolia Region speaks "Orta Anadolu". "Karadeniz", spoken in the Eastern Black Sea Region and represented primarily by the Trabzon dialect, exhibits substratum influence from Greek in phonology and syntax; it is also known as "Laz dialect" (not to be confused with the Laz language). "Kastamonu" is spoken in Kastamonu and its surrounding areas. Karamanli Turkish is spoken in Greece, where it is called . It is the literary standard for the Karamanlides. At least one source claims Turkish consonants are laryngeally specified three-way fortis-lenis (aspirated/neutral/voiced) like Armenian. The phoneme that is usually referred to as "yumuşak g" ("soft g"), written in Turkish orthography, represents a vowel sequence or a rather weak bilabial approximant between rounded vowels, a weak palatal approximant between unrounded front vowels, and a vowel sequence elsewhere. It never occurs at the beginning of a word or a syllable, but always follows a vowel. When word-final or preceding another consonant, it lengthens the preceding vowel. In native Turkic words, the sounds , , and are in complementary distribution with , , and ; the former set occurs adjacent to front vowels and the latter adjacent to back vowels. 
The distribution of these phonemes is often unpredictable, however, in foreign borrowings and proper nouns. In such words, , , and often occur with back vowels: some examples are given below. Turkish orthography reflects final-obstruent devoicing, a form of consonant mutation whereby a voiced obstruent, such as , is devoiced to at the end of a word or before a consonant, but retains its voicing before a vowel. In loanwords, the voiced equivalent of "/k/" is "/g/"; in native words, it is "/ğ/". This is analogous to languages such as German and Russian, but in the case of Turkish, the spelling is usually made to match the sound. However, in a few cases, such as "ad" 'name' (dative "ada"), the underlying form is retained in the spelling (cf. "at" 'horse', dative "ata"). Other exceptions are "od" 'fire' vs. "ot" 'herb', "sac" 'sheet metal', "saç" 'hair'. Most loanwords, such as "kitap" above, are spelled as pronounced, but a few such as "hac" 'hajj', "şad" 'happy', and "yad" 'strange(r)' also show their underlying forms. Native nouns of two or more syllables that end in "/k/" in dictionary form are nearly all "//ğ//" in underlying form. However, most verbs and monosyllabic nouns are underlyingly "//k//". The vowels of the Turkish language are, in their alphabetical order, , , , , , , , . The Turkish vowel system can be considered as being three-dimensional, where vowels are characterised by how and where they are articulated, focusing on three key features: front and back, rounded and unrounded, and vowel height. Vowels are classified [±back], [±round] and [±high]. The only diphthongs in the language are found in loanwords and may be categorised as falling diphthongs, usually analyzed as a sequence of /j/ and a vowel. Turkish is an agglutinative language where a series of suffixes are added to the stem word; vowel harmony is a phonological process which ensures a smooth flow, requiring as little oral movement as possible. 
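The devoicing alternation described above for pairs like "kitap"/"kitabı" and "at"/"ata" can be sketched as a small rule. This is an illustrative toy only, assuming underlying stems with voiced finals (e.g. //kitab//); the function and variable names are invented for the example, and the many orthographic exceptions noted above ("ad", "hac", "şad", "yad") are ignored.

```python
# A minimal sketch of Turkish final-obstruent devoicing (illustrative only).
# Underlying voiced obstruents surface as voiceless word-finally or before
# a consonant, but keep their voicing before a vowel-initial suffix.

DEVOICE = {"b": "p", "c": "ç", "d": "t", "g": "k", "ğ": "k"}
VOWELS = set("aeıioöuü")

def surface(underlying: str, suffix: str = "") -> str:
    """Return the surface form of an underlying stem plus a suffix."""
    stem = underlying
    # Devoice the final obstruent unless a vowel-initial suffix follows.
    if stem[-1] in DEVOICE and (not suffix or suffix[0] not in VOWELS):
        stem = stem[:-1] + DEVOICE[stem[-1]]
    return stem + suffix

print(surface("kitab"))       # kitap  (bare form, devoiced)
print(surface("kitab", "ı"))  # kitabı (accusative keeps the voiced /b/)
print(surface("at", "a"))     # ata    ('horse' dative; underlying /t/)
```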
Vowel harmony can be viewed as a process of assimilation, whereby following vowels take on the characteristics of the preceding vowel. It may be useful to think of Turkish vowels as two symmetrical sets: the undotted vowels (a, ı, o, u), which are all back vowels articulated at the back of the mouth; and the dotted vowels (e, i, ö, ü), which are articulated at the front of the mouth. The place and manner of articulation of the vowels determine which pattern of vowel harmony a word will adopt. The pattern of vowels is shown in the table above. Grammatical affixes have "a chameleon-like quality" and obey one of two patterns of vowel harmony. Practically, the twofold pattern (also referred to as e-type vowel harmony) means that where the vowel in the word stem is formed at the front of the mouth, the suffix takes the e-form, while if it is formed at the back it takes the a-form. The fourfold pattern (also called the i-type) accounts for rounding as well as for front/back. The following examples, based on the copula "-dir" ("[it] is"), illustrate the principles of i-type vowel harmony in practice: "Türkiye'dir" ("it is Turkey"), "kapıdır" ("it is the door"), but "gündür" ("it is the day"), "paltodur" ("it is the coat"). There are several exceptions to the vowel harmony rules; some rural dialects lack some or all of them. The road sign in the photograph above illustrates several of these features. The rules of vowel harmony may also vary by regional dialect. The dialect of Turkish spoken in the Trabzon region of northeastern Turkey follows the reduced vowel harmony of Old Anatolian Turkish, with the additional complication of two missing vowels (ü and ı); thus there is no palatal harmony. It is likely that "elün" meant "your hand" in Old Anatolian. 
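The fourfold i-type harmony behind the copula examples above can be sketched in a few lines. This is a toy illustration, not a full morphophonological model: the helper names are invented, the "-dir"/"-tir" consonant alternation after voiceless stems is ignored, and so is the apostrophe Turkish orthography requires with proper nouns (as in "Türkiye'dir").

```python
# A minimal sketch of Turkish fourfold (i-type) vowel harmony,
# applied to the copula suffix -dir ("[it] is"). Illustrative only.

FRONT = set("eiöü")    # dotted (front) vowels
ROUNDED = set("oöuü")  # rounded vowels
VOWELS = set("aeıioöuü")

def harmonize_i(stem: str) -> str:
    """Pick the i-type suffix vowel matching the stem's last vowel."""
    last = next(c for c in reversed(stem.lower()) if c in VOWELS)
    if last in FRONT:
        return "ü" if last in ROUNDED else "i"
    return "u" if last in ROUNDED else "ı"

def copula(stem: str) -> str:
    """Attach the copula -dir with the harmonized vowel."""
    return stem + "d" + harmonize_i(stem) + "r"

for word in ("kapı", "gün", "palto"):
    print(copula(word))  # kapıdır, gündür, paltodur
```

The two-way e-type harmony described in the text is just the front/back half of this decision, with rounding ignored.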
While the 2nd person singular possessive would vary between back and front vowel, -ün or -un, as in "elün" for "your hand" and "kitabun" for "your book", the lack of the ü vowel in the Trabzon dialect means -un would be used in both of these cases: "elun" and "kitabun". Word-accent usually falls on the last syllable. There are, however, several exceptions. These include certain loanwords, particularly from Italian and Greek, as well as interjections, certain question words, adverbs (although not adjectives functioning as adverbs), and many proper names. Loanwords are usually accented on the penultimate syllable ("lokanta" "restaurant" or "iskele" "quay"). Proper names are usually accented on the penultimate syllable, as in "İstanbul", but sometimes on the antepenultimate, if the word ends in a cretic rhythm (¯ ˘ ¯ or ¯ ˘ ˘), as in "Ankara". (See Turkish phonology#Place names.) In addition, there are certain suffixes, such as "-le" "with" and the verbal negative particle "-me-/-ma-", which place an accent on the syllable that precedes them, e.g. "kitáp-la" "with the book", "dé-me-mek" "not to say". In some circumstances (for example, in the second half of compound words or when verbs are preceded by an indefinite object) the accent on a word is suppressed and cannot be heard. Turkish has two groups of sentences: verbal and nominal sentences. In a verbal sentence, the predicate is a finite verb, while the predicate in a nominal sentence has either no overt verb or a verb in the form of the copula "ol" or "y" (variants of "be"). Examples of both are given below: The two groups of sentences have different ways of forming negation. A nominal sentence can be negated with the addition of the word "değil". For example, the sentence above would become "Necla öğretmen değil" ('Necla is not a teacher').
However, the verbal sentence requires the addition of the negative suffix "-me" to the verb (the suffix comes after the stem but before the tense): "Necla okula gitmedi" ('Necla did not go to school'). In the case of a verbal sentence, the interrogative clitic "mi" is added after the verb and stands alone, for example "Necla okula gitti mi?" ('Did Necla go to school?'). In the case of a nominal sentence, "mi" comes after the predicate but before the personal ending, so for example "Necla, siz öğretmen misiniz?" ('Necla, are you [formal, plural] a teacher?'). Word order in simple Turkish sentences is generally subject–object–verb (as in Korean and Latin, but unlike English) for verbal sentences, and subject–predicate for nominal sentences. However, as Turkish possesses a case-marking system, and most grammatical relations are shown using morphological markers, the SOV structure often has diminished relevance and may vary. The SOV structure may thus be considered a "pragmatic word order" of the language, one that does not rely on word order for grammatical purposes. Consider the following simple sentence, which demonstrates that the focus in Turkish is on the element that immediately precedes the verb: The postpredicate position signifies what is referred to as background information in Turkish: information that is assumed to be known to both the speaker and the listener, or information that is included in the context. Consider the following examples: There has been some debate among linguists whether Turkish is a subject-prominent (like English) or topic-prominent (like Japanese and Korean) language, with recent scholarship implying that it is indeed both subject- and topic-prominent. This has direct implications for word order, as it is possible for the subject to be included in the verb phrase in Turkish. There can be S/O inversion in sentences where the topic is of greater importance than the subject.
Turkish is an agglutinative language and frequently uses affixes, specifically suffixes, or endings. One word can have many affixes, and these can also be used to create new words, such as creating a verb from a noun, or a noun from a verbal root (see the section on Word formation). Most affixes indicate the grammatical function of the word. The only native prefixes are alliterative intensifying syllables used with adjectives or adverbs: for example "sımsıcak" ("boiling hot" < "sıcak") and "masmavi" ("bright blue" < "mavi"). The extensive use of affixes can give rise to long words, e.g. "Çekoslovakyalılaştıramadıklarımızdanmışsınızcasına", meaning "In the manner of you being one of those that we apparently couldn't manage to convert to Czechoslovakian". While this case is contrived, long words frequently occur in normal Turkish, as in this heading of a newspaper obituary column: "Bayramlaşamadıklarımız" (Bayram [festival]-Recipr-Impot-Partic-Plur-PossPl1; "Those of our number with whom we cannot exchange the season's greetings"). Another example can be seen in the final word of this heading of the online Turkish Spelling Guide ("İmlâ Kılavuzu"): "Dilde birlik, ulusal birliğin vazgeçilemezlerindendir" ("Unity in language is among the indispensables [dispense-Pass-Impot-Plur-PossS3-Abl-Copula] of national unity ~ Linguistic unity is a "sine qua non" of national unity"). There is no definite article in Turkish, but definiteness of the object is implied when the accusative ending is used (see below). Turkish nouns decline by taking case endings. There are six noun cases in Turkish, with all the endings following vowel harmony (shown in the table using the shorthand superscript notation). The plural marker "-ler" ² immediately follows the noun before any case or other affixes (e.g. "köylerin" "of the villages"). The accusative case marker is used only for definite objects; compare "(bir) ağaç gördük" "we saw a tree" with "ağacı gördük" "we saw the tree".
The plural marker "-ler" ² is generally not used when a class or category is meant: "ağaç gördük" can equally well mean "we saw trees [as we walked through the forest]"—as opposed to "ağaçları gördük" "we saw the trees [in question]". The declension of "ağaç" illustrates two important features of Turkish phonology: consonant assimilation in suffixes ("ağaçtan, ağaçta") and voicing of final consonants before vowels ("ağacın, ağaca, ağacı"). Additionally, nouns can take suffixes that assign person: for example "-imiz" 4, "our". With the addition of the copula (for example "-im" 4, "I am") complete sentences can be formed. The interrogative particle "mi" 4 immediately follows the word being questioned: "köye mi?" "[going] to the village?", "ağaç mı?" "[is it a] tree?". The Turkish personal pronouns in the nominative case are "ben" (1s), "sen" (2s), "o" (3s), "biz" (1pl), "siz" (2pl, or 2h), and "onlar" (3pl). They are declined regularly with some exceptions: "benim" (1s gen.); "bizim" (1pl gen.); "bana" (1s dat.); "sana" (2s dat.); and the oblique forms of "o" use the root "on". All other pronouns (reflexive "kendi" and so on) are declined regularly. Two nouns, or groups of nouns, may be joined in either of two ways: The following table illustrates these principles. In some cases the constituents of the compounds are themselves compounds; for clarity these subsidiary compounds are marked with [square brackets]. The suffixes involved in the linking are underlined. Note that if the second noun group already had a possessive suffix (because it is a compound by itself), no further suffix is added. As the last example shows, the qualifying expression may be a substantival sentence rather than a noun or noun group. There is a third way of linking the nouns where both nouns take no suffixes ("takısız tamlama"). However, in this case the first noun acts as an adjective, e.g. "Demir kapı" (iron gate), "elma yanak" ("apple cheek", i.e. red cheek), "kömür göz" ("coal eye", i.e. 
black eye). Turkish adjectives are not declined. However, most adjectives can also be used as nouns, in which case they are declined: e.g. "güzel" ("beautiful") → "güzeller" ("(the) beautiful ones / people"). Used attributively, adjectives precede the nouns they modify. The adjectives "var" ("existent") and "yok" ("non-existent") are used in many cases where English would use "there is" or "have", "e.g." "süt yok" ("there is no milk", "lit." "(the) milk (is) non-existent"); the construction ""noun 1"-GEN "noun 2"-POSS var/yok" can be translated ""noun 1" has/doesn't have "noun 2""; "imparatorun elbisesi yok" "the emperor has no clothes" ("(the) emperor-"of" clothes-"his" non-existent"); "kedimin ayakkabıları yoktu" ("my cat had no shoes", "lit." "cat-"my"-"of" shoe-"plur."-"its" non-existent-"past tense""). Turkish verbs indicate person. They can be made negative, potential ("can"), or impotential ("cannot"). Furthermore, Turkish verbs show tense (present, past, future, and aorist), mood (conditional, imperative, inferential, necessitative, and optative), and aspect. Negation is expressed by the infix "-me²-" immediately following the stem. ("Note". For the sake of simplicity the term "tense" is used here throughout, although for some forms "aspect" or "mood" might be more appropriate.) There are nine simple and twenty compound tenses in Turkish. The nine simple tenses are simple past ("di'li geçmiş"), inferential past ("miş'li geçmiş"), present continuous, simple present (aorist), future, optative, subjunctive, necessitative ("must") and imperative. There are three groups of compound forms. Story ("hikaye") is the witnessed past of the above forms (except command), rumor ("rivayet") is the unwitnessed past of the above forms (except simple past and command), and conditional ("koşul") is the conditional form of the first five basic tenses. In the example below the second person singular of the verb "gitmek" ("go"), stem "gid-/git-", is shown.
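Returning to the noun morphology above, the two processes illustrated by the declension of "ağaç" (suffix-initial consonant assimilation and final-consonant voicing before a vowel) can be sketched as follows. The helper names and simplified consonant sets are assumptions for this toy model, which ignores exceptions such as monosyllables like "at" that keep their voiceless stop.

```python
# Toy model of two processes in Turkish noun declension:
#  - locative -de/-da: the initial d assimilates to t after a voiceless
#    consonant (ağaçta, but köyde)
#  - dative -e/-a: a stem-final voiceless stop voices before the
#    vowel-initial suffix (ağaca)

VOICELESS = set("pçtkfsşh")
VOICE = {"ç": "c", "p": "b", "t": "d", "k": "ğ"}

def harmony_a(stem: str) -> str:
    """Twofold harmony: 'a' after back vowels, 'e' after front vowels."""
    vowels = [c for c in stem if c in "aeıioöuü"]
    return "a" if vowels[-1] in "aıou" else "e"

def locative(stem: str) -> str:
    d = "t" if stem[-1] in VOICELESS else "d"
    return stem + d + harmony_a(stem)

def dative(stem: str) -> str:
    base = stem[:-1] + VOICE[stem[-1]] if stem[-1] in VOICE else stem
    return base + harmony_a(stem)

print(locative("ağaç"))  # ağaçta
print(dative("ağaç"))    # ağaca
print(locative("köy"))   # köyde
```

The same assimilation and voicing rules recur throughout the case paradigm, which is why the six endings can be listed once in superscript shorthand rather than per noun.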
There are also so-called combined verbs, which are created by suffixing certain verb stems (like "bil" or "ver") to the original stem of a verb. "Bil" is the suffix for the sufficiency mood. It is the equivalent of the English auxiliary verbs "able to", "can" or "may". "Ver" is the suffix for the swiftness mood, "kal" for the perpetuity mood and "yaz" for the approach ("almost") mood. Thus, while "gittin" means "you went", "gidebildin" means "you could go" and "gidiverdin" means "you went swiftly". The tenses of the combined verbs are formed the same way as for simple verbs. Turkish verbs have attributive forms, including present, similar to the English present participle (with the ending "-en" ²); future ("-ecek" ²); indirect/inferential past ("-miş" ⁴); and aorist ("-er" ² or "-ir" ⁴). The most important function of some of these attributive verbs is to form modifying phrases equivalent to the relative clauses found in most European languages. The subject of the verb in an "-en" ² form is (possibly implicitly) in the third person (he/she/it/they); this form, when used in a modifying phrase, does not change according to number. The other attributive forms used in these constructions are the future ("-ecek" ²) and an older form ("-dik" ⁴), which covers both present and past meanings. These two forms take "personal endings", which have the same form as the possessive suffixes but indicate the person and possibly number of the subject of the attributive verb; for example, "yediğim" means "what I eat", "yediğin" means "what you eat", and so on. The use of these "personal or relative participles" is illustrated in the following table, in which the examples are presented according to the grammatical case which would be seen in the equivalent English relative clause. The latest (2010) edition of "Büyük Türkçe Sözlük" ("Great Turkish Dictionary"), the official dictionary of the Turkish language published by the Turkish Language Association, contains 616,767 words, expressions, terms and nouns.
The 2005 edition of "Güncel Türkçe Sözlük", the official dictionary of the Turkish language published by the Turkish Language Association, contains 104,481 words, of which about 86% are Turkish and 14% are of foreign origin. Among the most significant foreign contributors to Turkish vocabulary are Arabic, French, Persian, Italian, English, and Greek. Turkish extensively uses agglutination to form new words from nouns and verbal stems. The majority of Turkish words originate from the application of derivative suffixes to a relatively small set of core vocabulary. Turkish obeys certain principles when it comes to suffixation. Most suffixes in Turkish have more than one form, depending on the vowels and consonants in the root: vowel harmony rules apply; consonant-initial suffixes follow the voiced/voiceless character of the consonant in the final unit of the root; and in the case of vowel-initial suffixes an additional consonant may be inserted if the root ends in a vowel, or the suffix may lose its initial vowel. There is also a prescribed order of affixation of suffixes: as a rule of thumb, derivative suffixes precede inflectional suffixes, which are followed by clitics, as can be seen in the example set of words derived from a substantive root below: Another example, starting from a verbal root: New words are also frequently formed by compounding two existing words into a new one, as in German. Compounds can be of two types: bare and (s)I. The bare compounds, both nouns and adjectives, are effectively two words juxtaposed without the addition of suffixes, for example the word for girlfriend, "kızarkadaş" ("kız+arkadaş"), or black pepper, "karabiber" ("kara+biber"). A few examples of compound words are given below: However, the majority of compound words in Turkish are (s)I compounds, which means that the second word will be marked by the 3rd person possessive suffix.
A few such examples are given in the table below (note vowel harmony): Turkish is written using a Latin alphabet introduced in 1928 by Atatürk to replace the Ottoman Turkish alphabet, a version of the Perso-Arabic alphabet. The Ottoman alphabet marked only three different vowels (long "ā", "ū" and "ī") and included several redundant consonants, such as variants of "z" (which were distinguished in Arabic but not in Turkish). The omission of short vowels in the Arabic script was claimed to make it particularly unsuitable for Turkish, which has eight vowels. The reform of the script was an important step in the cultural reforms of the period. The task of preparing the new alphabet and selecting the necessary modifications for sounds specific to Turkish was entrusted to a Language Commission composed of prominent linguists, academics, and writers. The introduction of the new Turkish alphabet was supported by public education centers opened throughout the country, cooperation with publishing companies, and encouragement by Atatürk himself, who toured the country teaching the new letters to the public. As a result, there was a dramatic increase in literacy from its original Third World levels. The Latin alphabet was applied to the Turkish language for educational purposes even before the 20th-century reform. Instances include a 1635 Latin-Albanian dictionary by Frang Bardhi, who also incorporated several sayings in the Turkish language as an appendix to his work (e.g. "alma agatsdan irak duschamas": "An apple does not fall far from its tree"). Turkish now has an alphabet suited to the sounds of the language: the spelling is largely phonemic, with one letter corresponding to each phoneme. Most of the letters are used approximately as in English, the main exceptions being ⟨c⟩, which denotes [dʒ] (⟨j⟩ being used for the [ʒ] found in Persian and European loans); and the undotted ⟨ı⟩, representing [ɯ]. As in German, ⟨ö⟩ and ⟨ü⟩ represent [ø] and [y].
The letter ⟨ğ⟩, in principle, denotes [ɣ] but has the property of lengthening the preceding vowel and assimilating any subsequent vowel. The letters ⟨ş⟩ and ⟨ç⟩ represent [ʃ] and [tʃ], respectively. A circumflex is written over back vowels following ⟨k⟩, ⟨g⟩, or ⟨l⟩ when these consonants represent [c], [ɟ], and [l], almost exclusively in Arabic and Persian loans. The Turkish alphabet consists of 29 letters (q, x, w omitted and ç, ş, ğ, ı, ö, ü added); the complete list is: The specifically Turkish letters and spellings described above are illustrated in this table: "Dostlar Beni Hatırlasın" by Aşık Veysel Şatıroğlu (1894–1973), a minstrel and highly regarded poet in the Turkish folk literature tradition. In the Turkish province of Giresun, the locals in the village of Kuşköy have communicated using a whistled version of Turkish for over 400 years. The region consists of a series of deep valleys, and this unusual mode of communication allows for conversation over distances of up to 5 kilometres. Turkish authorities estimate that there are still around 10,000 people using the whistled language. However, in 2011 UNESCO found whistled Turkish to be a dying language and included it in its intangible cultural heritage list. Since then the local education directorate has introduced it as a course in schools in the region, hoping to revive its use. A study was conducted by a German scientist of Turkish origin, Onur Güntürkün, at Ruhr University, observing 31 "speakers" of "kuş dili" ("bird's tongue") from Kuşköy; he found that the whistled language mirrored the lexical and syntactical structure of the Turkish language. Turkish uses two standardised keyboard layouts, known as Turkish Q (QWERTY) and Turkish F, with Turkish Q being the more common.
https://en.wikipedia.org/wiki?curid=29992
The Shining (novel) The Shining is a horror novel by American author Stephen King. Published in 1977, it is King's third published novel and first hardback bestseller: the success of the book firmly established King as a preeminent author in the horror genre. The setting and characters are influenced by King's personal experiences, including both his visit to The Stanley Hotel in 1974 and his struggle with alcoholism. The novel was adapted into a 1980 film of the same name and was later followed by a sequel, "Doctor Sleep", published in 2013. "The Shining" centers on the life of Jack Torrance, an aspiring writer and recovering alcoholic who accepts a position as the off-season caretaker of the historic Overlook Hotel in the Colorado Rockies. His family accompanies him on this job, including his young son Danny Torrance, who possesses "the shining", an array of psychic abilities that allow Danny to see the hotel's horrific past. Soon after a winter storm leaves them snowbound, the supernatural forces inhabiting the hotel begin to influence Jack's sanity, leaving his wife and son in grave danger. "The Shining" mainly takes place in the fictional Overlook Hotel, an isolated, haunted resort hotel located in the Colorado Rockies. The history of the hotel, which is described in backstory by several characters, includes the deaths of some of its guests and of former winter caretaker Delbert Grady, who "succumbed to cabin fever" and killed his family and himself. Jack Torrance, his wife Wendy, and their five-year-old son Danny move into the hotel after Jack accepts the position as winter caretaker. Jack is an aspiring writer and recovering alcoholic with anger issues which, prior to the story, had caused him to accidentally break Danny's arm and lose his position as a teacher after assaulting a student. Jack hopes that the hotel's seclusion will help him reconnect with his family and give him the motivation needed to work on a play.
Danny, unknown to his parents, possesses psychic abilities referred to as "the shining" that enable him to read minds and experience premonitions as well as clairvoyance. The Torrances arrive at the hotel on closing day and are given a tour by the manager. They meet Dick Hallorann, the chef, who possesses abilities similar to Danny's and helps to explain them to him, giving Hallorann and Danny a special connection. The remaining staff and guests depart the hotel, leaving the Torrances alone in the hotel for the winter. As the Torrances settle in at the Overlook, Danny sees ghosts and frightening visions. Although Danny is close to his parents, he does not tell either of them about his visions because he senses that the care-taking job is important to his father and the family's future. Wendy considers leaving Jack at the Overlook to finish the job by himself; Danny refuses, thinking his father will be happier if they stay. However, Danny soon realizes that his presence in the hotel makes the supernatural activity more powerful, turning echoes of past tragedies into dangerous threats. Apparitions take solid form and the garden's topiary animals come to life. The winter snowfall leaves the Torrances cut off from the outside world in the isolated hotel. The Overlook has difficulty possessing Danny, so it begins to possess Jack by frustrating his need and desire to work and by enticing him with the storied history of the hotel through a scrapbook and records in the basement. Jack starts to develop cabin fever and becomes increasingly unstable, destroying a CB radio and sabotaging a snowmobile, the only two links with the outside world the Torrances had. One day, after a fight with Wendy, Jack finds the hotel's bar fully stocked with liquor despite being previously empty, and witnesses a party at which he meets the ghost of a bartender named Lloyd. He also dances with a young woman ghost who tries to seduce him.
As he gets drunk, the hotel uses the ghost of the former caretaker Grady to urge Jack to kill his wife and son. He initially resists, but the increasing influence of the hotel, combined with Jack's own alcoholism and anger, proves too great. He becomes a monster under the control of the hotel, succumbing to his dark side. Wendy and Danny get the better of Jack after he attacks Wendy, locking him inside the walk-in pantry, but the ghost of Delbert Grady releases him after Jack promises to bring him Danny and to kill Wendy. Jack attacks Wendy with one of the hotel's roque mallets, grievously injuring her, but she escapes to the caretaker's suite and locks herself in the bathroom. Jack attempts to break the door with the mallet, but Wendy slashes his hand with a razor blade to deter him. Meanwhile, Hallorann has received a psychic distress call from Danny while working at a winter resort in Florida. Hallorann rushes back to the Overlook, only to be attacked by the topiary animals and severely injured by Jack. As Jack pursues Danny through the Overlook and corners him on the hotel's top floor, he briefly gains control of himself and implores Danny to run away after Danny stands his ground and denounces Jack as a mask and false face worn by the hotel. The hotel takes control of Jack again, making him violently batter his own face and skull into ruin with the mallet, destroying the last vestiges of Jack and leaving a being controlled by the hotel's own malevolent "manager" personality. Remembering that Jack has neglected to relieve the pressure on the hotel's unstable boiler, Danny informs the hotel that it is about to explode. As Danny, Wendy, and Hallorann flee, the hotel-creature rushes to the basement in an attempt to vent the pressure, but it is too late and the boiler explodes, killing Jack and destroying the Overlook. Fighting off a last attempt by the hotel to possess him, Hallorann guides Danny and Wendy to safety.
The book's epilogue is set during the next summer. Hallorann, who has taken a chef's job at a resort in Maine, comforts Danny over the loss of his father as Wendy recuperates from the injuries Jack inflicted on her. After writing "Carrie" and "'Salem's Lot", which are both set in small towns in King's native Maine, King was looking for a change of pace for the next book. "I wanted to spend a year away from Maine so that my next novel would have a different sort of background." King opened an atlas of the US on the kitchen table and randomly pointed to a location, which turned out to be Boulder, Colorado. On October 30, 1974, King and his wife Tabitha checked into The Stanley Hotel in nearby Estes Park, Colorado. They were the only two guests in the hotel that night. "When we arrived, they were just getting ready to close for the season, and we found ourselves the only guests in the place — with all those long, empty corridors". They checked into room 217, which was said to be haunted. This is where room 217 comes from in the book. Ten years earlier, King had read Ray Bradbury's "The Veldt" and was inspired to someday write a story about a person whose dreams would become real. In 1972, King started a novel entitled "Darkshine", which was to be about a psychic boy in a psychic amusement park, but the idea never came to fruition and he abandoned the book. During the night at the Stanley, this story came back to him. King and his wife had dinner that evening in the grand dining room, totally alone. They were offered one choice for dinner, the only meal still available. Taped orchestral music played in the room and theirs was the only table set for dining. "Except for our table all the chairs were up on the tables. So the music is echoing down the hall, and, I mean, it was like God had put me there to hear that and see those things. And by the time I went to bed that night, I had the whole book in my mind". 
After dinner, his wife decided to turn in, but King took a walk around the empty hotel. He ended up in the bar and was served drinks by a bartender named Grady. "That night I dreamed of my three-year-old son running through the corridors, looking back over his shoulder, eyes wide, screaming. He was being chased by a fire-hose. I woke up with a tremendous jerk, sweating all over, within an inch of falling out of bed. I got up, lit a cigarette, sat in a chair looking out the window at the Rockies, and by the time the cigarette was done, I had the bones of the book firmly set in my mind." "The Shining" was also heavily influenced by Shirley Jackson's "The Haunting of Hill House", Edgar Allan Poe's "The Masque of the Red Death" and "The Fall of the House of Usher", and Robert Marasco's "Burnt Offerings". The story has often been compared to Guy de Maupassant's story "The Inn". Before writing "The Shining", King had written "Roadwork" and "The Body", which were both published later. The first draft of "The Shining" took less than four months to complete, and he was able to publish it before the others. The title was inspired by the John Lennon song "Instant Karma!", which contained the line "We all shine on". Bill Thompson, King's editor at Doubleday, tried to talk him out of "The Shining" because he thought that after writing "Carrie" and "Salem's Lot", he would get "typed" as a horror writer. King considered that a compliment. Originally, there was a prologue titled "Before the Play" that chronicled earlier events in the Overlook's history, as well as an epilogue titled "After the Play", though neither remained part of the published novel. The prologue was later published in "Whispers" magazine in August 1982, and an abridged version appeared in the April 26–May 2, 1997 issue of "TV Guide" to promote the then-upcoming miniseries of "The Shining".
The epilogue was thought to have been lost, but was re-discovered in 2016 as part of an early manuscript version of the novel. Both "Before the Play" and "After the Play" were published as part of the Deluxe Special Edition of "The Shining" by Cemetery Dance Publications in early 2017. On November 19, 2009, during a reading at the Canon Theatre in Toronto, King described to the audience an idea for a sequel to "The Shining". The idea was prompted by the occasional person asking, "Whatever happened to Danny?" The story would follow Danny Torrance, now in his 40s, living in New Hampshire, where he works as an orderly at a hospice and helps terminally ill patients pass away with the aid of some extraordinary powers. Later, on December 1, 2009, King posted a poll on his official website, asking visitors to vote for which book he should write next: "Doctor Sleep" or "The Wind Through the Keyhole". Voting ended on December 31, 2009, and it was revealed that "Doctor Sleep" received 5,861 votes, while "The Wind Through the Keyhole" received 5,812. In 2011, King posted an update confirming that "Doctor Sleep" was in the works and that the plot included a traveling group of psychic vampires called The True Knot. "Doctor Sleep" was published on September 24, 2013. The novel was adapted into a 1980 feature film of the same name, directed by Stanley Kubrick and co-written with Diane Johnson. Although King himself remains disappointed with the adaptation, having criticized its handling of the book's themes and of Wendy's character, it is regarded as one of the greatest horror films ever made. The novel was also adapted into a television miniseries, which premiered in 1997. Stephen King wrote and closely monitored the making of the series to ensure that it followed the novel's narrative. The novel was also adapted into an opera of the same name in 2016. The novel is being adapted into a stage play directed by Ivo van Hove and written by Simon Stephens. A spin-off series titled "Overlook" is in development by J.
J. Abrams and his production company Bad Robot. It will air on HBO Max and will explore tales of the Overlook Hotel.
https://en.wikipedia.org/wiki?curid=29999
Taxi Driver Taxi Driver is a 1976 American neo-noir psychological thriller drama film directed by Martin Scorsese, written by Paul Schrader, and starring Robert De Niro, Jodie Foster, Cybill Shepherd, Harvey Keitel, Peter Boyle, Leonard Harris and Albert Brooks. Set in a decaying and morally bankrupt New York City following the Vietnam War, the film tells the story of Travis Bickle (De Niro), a lonely taxi driver who descends into insanity as he plots to assassinate both the presidential candidate (Harris) for whom the woman (Shepherd) he is infatuated with works, and the pimp (Keitel) of an underage prostitute (Foster) he befriends. A critical and commercial success upon release and nominated for four Academy Awards, including Best Picture, Best Actor (for De Niro) and Best Supporting Actress (for Foster), "Taxi Driver" won the Palme d'Or at the 1976 Cannes Film Festival. The film generated controversy at the time of its release for its depiction of violence and casting of a 12-year-old Foster in the role of a child prostitute. In 2012, "Sight & Sound" named it the 31st-best film ever in its decennial critics' poll, ranked alongside "The Godfather Part II", and the fifth-greatest film of all time on its directors' poll. The film was considered "culturally, historically or aesthetically" significant by the US Library of Congress and was selected for preservation in the National Film Registry in 1994. Travis Bickle is a lonely, depressed 26-year-old honorably discharged U.S. Marine and Vietnam War veteran living in isolation in New York City. Travis takes a job as a night shift taxi driver to cope with his chronic insomnia, driving passengers around the city's boroughs. He frequents the porn theaters on 42nd Street and keeps a diary in which he consciously attempts to include aphorisms, such as "you're only as healthy as you feel." Travis becomes infatuated with Betsy, a campaign volunteer for Senator and presidential candidate Charles Palantine.
After watching her interact with fellow worker Tom through her window, Travis enters to volunteer as a pretext to talk to her, then takes her out for coffee. On a later date, he takes her to see a pornographic film, which offends her, and she ends their budding relationship. His attempts at reconciliation by sending flowers are rebuffed, so he berates her at the campaign office before being kicked out by Tom. Travis is disgusted by the sleaze, dysfunction, and prostitution that he witnesses throughout the city. This worldview is reinforced when an adolescent prostitute and runaway, Iris, is forcibly removed from Travis's taxi by her pimp, Sport, which reminds Travis of the corruption that surrounds him. A similarly influential event occurs when an unhinged passenger gloats to Travis of his intentions to kill his adulterous wife and her lover. Travis confides in fellow taxi driver Wizard about his thoughts, which are beginning to turn violent; however, Wizard assures him that he will be fine, leaving Travis to his own destructive path. In attempting to find an outlet for his frustrations, Travis begins a program of intense physical training. A fellow taxi driver refers him to an illegal gun dealer, "Easy" Andy, from whom Travis buys four handguns. At home, Travis practices drawing his weapons, and modifies one to allow him to hide and quickly deploy it from his sleeve. He also begins attending Palantine's rallies to scope out their security. One night, Travis enters a convenience store moments before an attempted armed robbery, and fatally shoots the robber. The store owner, who happens to be an old acquaintance of Travis's, takes responsibility for the deed, claiming Travis's illegal handgun as his own. Travis seeks out Iris through Sport and twice tries to convince her to stop prostituting herself, an effort which partially convinces her. After a breakfast with Iris, Travis writes and mails a letter to her.
The letter, containing money, states that he will soon be dead, and that Iris should return home. Travis cuts his hair into a mohawk, and attends a public rally where he plans to assassinate Palantine. However, he is scared off after seeing Secret Service agents becoming suspicious of him. That evening, Travis drives to Sport's brothel in the East Village. In the ensuing shootout, which begins outside the brothel and ends in Iris's room, Travis kills Sport and Iris's customer, but is shot multiple times in the process. Iris, hysterical with fear, pleads with Travis to stop the killing. Instead, Travis kills the brothel's bouncer in front of her with his last bullet. Unable to commit suicide after the firefight, Travis slumps on a sofa next to Iris. When the police arrive, he mimes shooting himself with his index finger. Travis' shootout is seen by the police and press as an attempt to rescue Iris from armed gangsters. He is not prosecuted, and is hailed as a local hero in the press. He receives a letter from Iris' father, thanking him for saving her and revealing that she has returned home to Pittsburgh, where she is going to school. After recovering, Travis returns to work, where he encounters Betsy as a fare. Travis drives her home, then refuses to let her pay the fare, driving away with a smile. As Travis drives off, he becomes suddenly agitated after noticing something in his rear-view mirror. The film had a budget of $1.9 million. According to Scorsese, it was Brian De Palma who introduced him to Paul Schrader. In "Scorsese on Scorsese", Scorsese says "Taxi Driver" arose from his feeling that movies are like dreams or drug-induced reveries. He attempted to incubate within the viewer the feeling of being in a limbo state between sleeping and waking. He calls Travis an "avenging angel" floating through the streets of a New York City intended to represent all cities everywhere.
Scorsese calls attention to improvisation in the film, such as in the scene between De Niro and Cybill Shepherd in the coffee shop. The director also cites Alfred Hitchcock's "The Wrong Man" and Jack Hazan's "A Bigger Splash" as inspirations for his camerawork in the movie. In writing the script, Schrader was inspired by the diaries of Arthur Bremer (who shot presidential candidate George Wallace in 1972), by Jean-Paul Sartre's existential novel "Nausea" and John Ford's film "The Searchers". The writer also used himself as inspiration; in a 1981 interview with Tom Snyder on the "Tomorrow" show, Schrader related his experience living in New York City while battling chronic insomnia, which led him to frequent pornographic bookstores and theaters because they remained open all night. Following a divorce and a breakup with a live-in girlfriend, he spent a few weeks living in his car. After visiting a hospital for a stomach ulcer, Schrader wrote the screenplay for "Taxi Driver" in "under a fortnight," recalling that "When I was talking to the nurse, I realised I hadn't spoken to anyone in weeks ... that was when the metaphor of the taxi occurred to me. That is what I was: this person in an iron box, a coffin, floating round the city, but seemingly alone." Schrader decided to make Bickle a Vietnam vet because the national trauma of the war seemed to blend perfectly with Bickle's paranoid psychosis, making his experiences after the war more intense and threatening. In "Scorsese on Scorsese", Scorsese mentions the religious symbolism in the story, comparing Bickle to a saint who wants to cleanse or purge both his mind and his body of weakness. Bickle attempts to kill himself near the end of the movie as a tribute to the samurai's "death with honor" principle. 
When Betsy joins Travis for coffee and pie, she is reminded of a line in Kris Kristofferson's song "The Pilgrim, Chapter 33": "He's a prophet and a pusher, partly truth, partly fiction—a walking contradiction." On their date, Bickle takes her to see a Swedish "sex education" film, which is in fact the American sexploitation film "Sexual Freedom in Denmark" with added Swedish sound. "Taxi Driver" was shot during a New York City summer heat wave and sanitation strike in 1975. The film came into conflict with the MPAA for its violence. Scorsese de-saturated the color in the final shootout, and the film got an R rating. To achieve the atmospheric scenes in Bickle's taxi, the sound men would get in the trunk and Scorsese and his cinematographer, Michael Chapman, would ensconce themselves on the back seat floor and use available light to shoot. Chapman admitted the filming style was greatly influenced by New Wave filmmaker Jean-Luc Godard and his cinematographer Raoul Coutard, because the crew had neither the time nor the money to do "traditional things." When Bickle decides to assassinate Senator Palantine, he cuts his hair into a Mohawk. This detail was suggested by actor Victor Magnotta, a friend of Scorsese, who had a small role as a Secret Service agent and who had served in Vietnam. Scorsese later noted: "He told us that, in Saigon, if you saw a guy with his head shaved—like a little Mohawk—that usually meant that those people were ready to go into a certain Special Forces situation. You didn't even go near them. They were ready to kill." While preparing for his role as Bickle, De Niro was filming Bernardo Bertolucci's "1900" in Italy. According to Boyle, he would "finish shooting on a Friday in Rome ... get on a plane ... [and] fly to New York." De Niro obtained a taxi driver's license, and when on break would pick up a taxi and drive around New York for a couple of weeks, before returning to Rome to resume filming "1900."
De Niro apparently lost 35 pounds and listened repeatedly to a taped reading of the diaries of Arthur Bremer. When he had time off from shooting "1900", De Niro visited an army base in Northern Italy and tape-recorded soldiers from the Midwestern United States, whose accents he thought might be appropriate for Travis's character. Scorsese brought in the film title designer Dan Perri to design the title sequence for "Taxi Driver". Perri had been Scorsese's original choice to design the titles for "Alice Doesn't Live Here Anymore" in 1974, but Warner Bros. would not allow him to hire an unknown designer. By the time "Taxi Driver" was going into production, Perri had established his reputation with his work on "The Exorcist", and Scorsese was now able to hire him. Perri created the opening titles for "Taxi Driver" using second unit footage which he color-treated through a process of film copying and slit-scan, resulting in a highly stylised graphic sequence that evoked the "underbelly" of New York City through lurid colors, glowing neon signs, distorted nocturnal images and deep black levels. Perri went on to design opening titles for a number of major films after this, including "Star Wars" (1977) and "Raging Bull" (1980). Shooting took place on New York City's West Side, at a time when the city was on the brink of bankruptcy. According to producer Michael Phillips, "the whole West Side was bombed out. There really were row after row of condemned buildings and that's what we used to build our sets, were condemned buildings. Now it's fashionable real estate ... But New York and Times Square was shuddering and disgusting. It's just exciting to see the city bounce back and become the great place it is today from where it was then. We didn't know we were documenting what looked like the dying gasp of New York."
Because it was filmed in an actual apartment, the tracking shot over the murder scene at the end took three months of preparation, as the production team had to cut through the ceiling to get it right. The music by Bernard Herrmann was his final score before his death on December 24, 1975, and the film is dedicated to his memory. Robert Barnett of MusicWeb International has said that it contrasts deep, sleazy noises, representing the "scum" that Travis sees all over the city, with the saxophone, played by Tom Scott, which serves as a musical counterpart to Travis, a mellifluously disenchanted troubadour. Barnett also observes that the opposing noises in the soundtrack—gritty little harp figures, hard as shards of steel, as well as a jazz drum kit placing the drama in the city—are indicative of loneliness in the midst of mobs of people. Deep brass and woodwinds are also evident. Barnett heard in the drumbeat a wild-eyed martial air charting the pressure on Bickle, who is increasingly oppressed by the corruption around him, and found that the harp, drum, and saxophone play significant roles in the music. Jackson Browne's "Late for the Sky" is also featured in the film, appearing in a scene where couples are dancing on the program "American Bandstand" as Travis watches on his television. Some critics showed concern over 12-year-old Foster's presence during the climactic shoot-out. Foster said that she was present during the setup and staging of the special effects used during the scene; the entire process was explained and demonstrated for her, step by step. Moreover, Foster said, she was fascinated and entertained by the behind-the-scenes preparation that went into the scene. In addition, before being given the part, Foster was subjected to psychological testing, attending sessions with a UCLA psychiatrist, to ensure that she would not be emotionally scarred by her role, in accordance with California Labor Board requirements monitoring children's welfare on film sets.
Copies of the film distributed for TV broadcast had an unexplained disclaimer added during the closing credits. Additional concerns surrounding Foster's age focus on the role she played as Iris, a prostitute. Years later, she confessed how uncomfortable the treatment of her character was on set. Scorsese did not know how to approach different scenes with the actress. The director relied on Robert De Niro to deliver his directions to the young actress. Foster often expressed how De Niro, in that moment, became a mentor to her, stating that her acting career was highly influenced by the actor's advice during the filming of "Taxi Driver". "Taxi Driver" formed part of the delusional fantasy of John Hinckley Jr. that triggered his attempted assassination of President Ronald Reagan in 1981, an act for which he was found not guilty by reason of insanity. Hinckley stated that his actions were an attempt to impress Foster, on whom he was fixated, by mimicking Travis's mohawked appearance at the Palantine rally. His attorney concluded his defense by playing the movie for the jury. When Scorsese heard about Hinckley's motivation behind his assassination attempt, he temporarily thought about quitting film-making, as the association brought a negative perception of the film. The climactic shoot-out was considered so intensely graphic that the MPAA considered an X rating for the film. To attain an "R" rating, Scorsese had the colors de-saturated, making the brightly colored blood less prominent. In later interviews, Scorsese commented that he was pleased by the color change and considered it an improvement over the originally filmed scene. In the special-edition DVD, Michael Chapman, the film's cinematographer, regrets the decision and the fact that no print with the unmuted colors exists anymore, as the originals had long since deteriorated.
Roger Ebert has written of the film's ending: There has been much discussion about the ending, in which we see newspaper clippings about Travis's "heroism" of saving Iris, and then Betsy gets into his cab and seems to give him admiration instead of her earlier disgust. Is this a fantasy scene? Did Travis survive the shoot-out? Are we experiencing his dying thoughts? Can the sequence be accepted as literally true? ... I am not sure there can be an answer to these questions. The end sequence plays like music, not drama: It completes the story on an emotional, not a literal, level. We end not on carnage but on redemption, which is the goal of so many of Scorsese's characters. James Berardinelli, in his review of the film, argues against the dream or fantasy interpretation, stating: Scorsese and writer Paul Schrader append the perfect conclusion to "Taxi Driver". Steeped in irony, the five-minute epilogue underscores the vagaries of fate. The media builds Bickle into a hero, when, had he been a little quicker drawing his gun against Senator Palantine, he would have been reviled as an assassin. As the film closes, the misanthrope has been embraced as the model citizen—someone who takes on pimps, drug dealers, and mobsters to save one little girl. On the LaserDisc audio commentary, Scorsese acknowledged several critics' interpretation of the film's ending as being Bickle's dying dream. He admits that the last scene of Bickle glancing at an unseen object implies that Bickle might fall into rage and recklessness in the future, and he is like "a ticking time bomb". Writer Paul Schrader confirms this in his commentary on the 30th-anniversary DVD, stating that Travis "is not cured by the movie's end", and that "he's not going to be a hero next time." 
When asked on the website Reddit about the film's ending, Schrader said that it was not to be taken as a dream sequence, but that he envisioned it as returning to the beginning of the film—as if the last frame "could be spliced to the first frame, and the movie started all over again." The film has also been connected with the 1970s wave of vigilante films and has been noted as a more respectable New Hollywood counterpart to the numerous exploitation vigilante films of the decade. However, despite similarities between "Taxi Driver" and the vigilante films of the 1970s, the film has also been explicitly distinguished as not being a vigilante film or not belonging to the 1970s vigilante film wave. The film can be viewed as a spiritual successor to "The Searchers". As Roger Ebert pointed out, both films center on a lonely war veteran who attempts to rescue a young girl who does not want to be saved. Both also portray the main character as someone who is alienated from society and who cannot establish normal relationships with people. It is not clear whether Paul Schrader looked to this film specifically for inspiration, but the similarities are apparent. Some critics have described the film as "neo-noir". It has also been referred to as an antihero film. The film opened at the Coronet Theater in New York City and grossed a house record $68,000 in its first week. It went on to gross $28.3 million in the United States, making it the 17th-highest-grossing film of 1976. Roger Ebert instantly praised it as one of the greatest films he had ever seen, claiming: "Taxi Driver" is a hell, from the opening shot of a cab emerging from stygian clouds of steam to the climactic killing scene in which the camera finally looks straight down. Scorsese wanted to look away from Travis's rejection; we almost want to look away from his life. But he's there, all right, and he's suffering.
It was nominated for four Academy Awards, including Best Picture and Best Actor (De Niro), and received the Palme d'Or at the 1976 Cannes Film Festival. It has been selected for preservation in the United States National Film Registry. The film was chosen by "Time" as one of the 100 best films of all time. Rotten Tomatoes gives the film a score of 96% based on reviews from 89 critics, with an average rating of 9.05/10; the site's consensus states: "A must-see film for movie lovers, this Martin Scorsese masterpiece is as hard-hitting as it is compelling, with Robert De Niro at his best." Metacritic gives the film a score of 94 out of 100, based on reviews from 23 critics, indicating "universal acclaim". The July/August 2009 issue of "Film Comment" polled several critics on the best films to win the Palme d'Or at the Cannes Film Festival. "Taxi Driver" placed first, above films such as "Il Gattopardo", "Viridiana", "Blowup", "The Conversation", "Apocalypse Now", "La Dolce Vita", and "Pulp Fiction". "Taxi Driver" was ranked by the American Film Institute as the 52nd-greatest American film on its AFI's 100 Years...100 Movies (10th Anniversary Edition) list, and Bickle was voted the 30th-greatest villain in a poll by the same organization. "Empire" also ranked him 18th in its "The 100 Greatest Movie Characters" poll, and the film ranks at No. 17 on the magazine's 2008 list of the 500 greatest movies of all time. "Time Out" magazine conducted a poll of the 100 greatest movies set in New York City, which "Taxi Driver" topped. Schrader's screenplay for the film was ranked the 43rd-greatest ever written by the Writers Guild of America. By contrast, Leonard Maltin gave it a rating of only 2 stars and called the film a "gory, cold-blooded story of a sick man's lurid descent into violence" which was "ugly and unredeeming".
AFI's 100 Years... 100 Heroes and Villains – #30 Villain – Travis Bickle
National Film Registry – Inducted in 1994.
"Taxi Driver", "American Gigolo", "Light Sleeper", and "The Walker" make up a series referred to variously as the "Man in a Room" or "Night Worker" films. Screenwriter Paul Schrader (who directed the latter three films) has said that he considers the central characters of the four films to be one character, who has changed as he has aged. The film also influenced the Charles Winkler film "You Talkin' to Me?" The 1994 portrayal of psychopath Albie Kinsella by Robert Carlyle in the British television series "Cracker" was in part inspired by Travis Bickle, and Carlyle's performance has frequently been compared to De Niro's as a result. In the 2012 film "Seven Psychopaths", psychotic Los Angeles actor Billy Bickle (Sam Rockwell) believes himself to be the illegitimate son of Travis Bickle. The vigilante ending inspired Jacques Audiard for his 2015 Palme d'Or-winning film "Dheepan". The French director based the eponymous Tamil Tiger character on the one played by Robert De Niro in order to make him a "real movie hero". The script of "Joker", by Todd Phillips, draws inspiration from "Taxi Driver". De Niro's "You talkin' to me?" speech has become a pop culture mainstay. In 2005, it was ranked number 10 on the American Film Institute's AFI's 100 Years... 100 Movie Quotes. In the relevant scene, the deranged Bickle is looking into a mirror at himself, imagining a confrontation that would give him a chance to draw his gun: "You talkin' to me? You talkin' to me? You talkin' to me? Then who the hell else are you talkin' to? You talkin' to me? Well I'm the only one here. Who the fuck do you think you're talking to?" Scorsese said that he drew inspiration from John Huston's 1967 movie "Reflections in a Golden Eye", in a scene in which Marlon Brando's character is facing the mirror. Screenwriter Paul Schrader does not take credit for the line, saying that his script only read "Travis speaks to himself in the mirror", and that De Niro improvised the dialogue.
However, he went on to say that De Niro's performance was inspired by "an underground New York comedian" he had once seen, possibly including his signature line. Roger Ebert said of the latter part of the phrase, "I'm the only one here", that it was "the truest line in the film... Travis Bickle's desperate need to make some kind of contact somehow—to share or mimic the effortless social interaction he sees all around him, but does not participate in." In his 2009 memoir, saxophonist Clarence Clemons said that De Niro explained the line's origins when Clemons coached De Niro to play the saxophone for the 1977 film "New York, New York". Clemons said that De Niro had seen Bruce Springsteen say the line onstage at a concert as fans were screaming his name, and decided to make the line his own. De Niro repeated the monologue (with some alterations) as Fearless Leader in the 2000 film "The Adventures of Rocky and Bullwinkle". The first Collector's Edition DVD, released in 1999, was a single-disc edition. It contained special features, such as behind-the-scenes footage and several trailers, including one for "Taxi Driver". In 2006, a 30th-anniversary 2-disc Collector's Edition DVD was released. The first disc contains the film itself, two audio commentaries (one by writer Schrader and the other by Professor Robert Kolker), and trailers. This edition also retains some of the special features from the earlier release on the second disc, as well as some newly produced documentary material. A Blu-ray was released on April 5, 2011, to commemorate the film's 35th anniversary. It includes the special features from the previous 2-disc collector's edition, plus an audio commentary by Scorsese released in 1991 for the Criterion Collection, previously released on LaserDisc. As part of the Blu-ray production, Sony gave the film a full 4K digital restoration, which included scanning and cleaning the original negative (removing emulsion dirt and scratches).
Colors were matched to director-approved prints under guidance from Scorsese and director of photography Michael Chapman. An all-new lossless DTS-HD Master Audio 5.1 soundtrack was also made from the original stereo recordings by Scorsese's personal sound team. The restored print premiered in February 2011 at the Berlin Film Festival, and to promote the Blu-ray, Sony also had the print screened at AMC Theatres across the United States on March 19 and 22. In late January 2005, a sequel was announced by De Niro and Scorsese. At a 25th-anniversary screening of "Raging Bull", De Niro talked about the story of an older Travis Bickle being in development. Earlier, in 2000, De Niro had mentioned interest in bringing back the character in conversation with Actors Studio host James Lipton. In November 2013, he revealed that Schrader had done a first draft but that both he and Scorsese thought it was not good enough to proceed with. In 2010, "Variety" reported rumors that Lars von Trier, Scorsese, and De Niro planned to work on a remake of the film with the same restrictions that were used in "The Five Obstructions". In 2014, Paul Schrader said that it was not being made. He said, "It was a terrible idea" and "in Marty's mind, it never was something that should be done."
https://en.wikipedia.org/wiki?curid=30000
Theory of relativity The theory of relativity usually encompasses two interrelated theories by Albert Einstein: special relativity and general relativity. Special relativity applies to all physical phenomena in the absence of gravity. General relativity explains the law of gravitation and its relation to other forces of nature. It applies to the cosmological and astrophysical realm, including astronomy. The theory transformed theoretical physics and astronomy during the 20th century, superseding a 200-year-old theory of mechanics created primarily by Isaac Newton. It introduced concepts including spacetime as a unified entity of space and time, relativity of simultaneity, kinematic and gravitational time dilation, and length contraction. In the field of physics, relativity improved the science of elementary particles and their fundamental interactions, along with ushering in the nuclear age. With relativity, cosmology and astrophysics predicted extraordinary astronomical phenomena such as neutron stars, black holes, and gravitational waves. Albert Einstein published the theory of special relativity in 1905, building on many theoretical results and empirical findings obtained by Albert A. Michelson, Hendrik Lorentz, Henri Poincaré and others. Max Planck, Hermann Minkowski and others did subsequent work. Einstein developed general relativity between 1907 and 1915, with contributions by many others after 1915. The final form of general relativity was published in 1916. The term "theory of relativity" was based on the expression "relative theory" (German: "Relativtheorie") used in 1906 by Planck, who emphasized how the theory uses the principle of relativity. In the discussion section of the same paper, Alfred Bucherer used for the first time the expression "theory of relativity" (German: "Relativitätstheorie"). By the 1920s, the physics community understood and accepted special relativity.
It rapidly became a significant and necessary tool for theorists and experimentalists in the new fields of atomic physics, nuclear physics, and quantum mechanics. By comparison, general relativity did not appear to be as useful, beyond making minor corrections to predictions of Newtonian gravitation theory. It seemed to offer little potential for experimental test, as most of its assertions were on an astronomical scale. Its mathematics seemed difficult and fully understandable only by a small number of people. Around 1960, general relativity became central to physics and astronomy. New mathematical techniques to apply to general relativity streamlined calculations and made its concepts more easily visualized. As astronomical phenomena were discovered, such as quasars (1963), the 3-kelvin microwave background radiation (1965), pulsars (1967), and the first black hole candidates (1981), the theory explained their attributes, and measurement of them further confirmed the theory. Special relativity is a theory of the structure of spacetime. It was introduced in Einstein's 1905 paper "On the Electrodynamics of Moving Bodies" (for the contributions of many other physicists see History of special relativity). Special relativity is based on two postulates which are contradictory in classical mechanics: that the laws of physics are the same for all observers in inertial frames of reference, and that the speed of light in vacuum is the same for all observers, regardless of the motion of the light source. The resultant theory copes with experiment better than classical mechanics. For instance, postulate 2 explains the results of the Michelson–Morley experiment. Moreover, the theory has many surprising and counterintuitive consequences, among them the relativity of simultaneity, time dilation, length contraction, and the equivalence of mass and energy (E = mc²). The defining feature of special relativity is the replacement of the Galilean transformations of classical mechanics by the Lorentz transformations. (See Maxwell's equations of electromagnetism). General relativity is a theory of gravitation developed by Einstein in the years 1907–1915.
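The replacement of the Galilean transformations by the Lorentz transformations can be stated explicitly. For two inertial frames in relative motion with speed v along their common x-axis, the standard form is:

```latex
t' = \gamma\left(t - \frac{vx}{c^{2}}\right), \qquad
x' = \gamma\,(x - vt), \qquad
y' = y, \qquad z' = z,
\qquad \text{where } \gamma = \frac{1}{\sqrt{1 - v^{2}/c^{2}}}.
```

For speeds small compared with c, the Lorentz factor γ approaches 1 and these reduce to the Galilean transformations t' = t, x' = x − vt, which is why Newtonian mechanics works so well at everyday speeds.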
The development of general relativity began with the equivalence principle, under which the states of accelerated motion and being at rest in a gravitational field (for example, when standing on the surface of the Earth) are physically identical. The upshot of this is that free fall is inertial motion: an object in free fall is falling because that is how objects move when there is no force being exerted on them, instead of this being due to the force of gravity as is the case in classical mechanics. This is incompatible with classical mechanics and special relativity because in those theories inertially moving objects cannot accelerate with respect to each other, but objects in free fall do so. To resolve this difficulty Einstein first proposed that spacetime is curved. In 1915, he devised the Einstein field equations which relate the curvature of spacetime with the mass, energy, and any momentum within it. Some of the consequences of general relativity are gravitational time dilation, the gravitational deflection of light, the gravitational redshift of light, and the precession of planetary orbits. Technically, general relativity is a theory of gravitation whose defining feature is its use of the Einstein field equations. The solutions of the field equations are metric tensors which define the geometry of spacetime and how objects move inertially. Einstein stated that the theory of relativity belongs to a class of "principle-theories". As such, it employs an analytic method, which means that the elements of this theory are not based on hypothesis but on empirical discovery. By observing natural processes, we understand their general characteristics, devise mathematical models to describe what we observed, and by analytical means we deduce the necessary conditions that have to be satisfied. Measurement of separate events must satisfy these conditions and match the theory's conclusions. Relativity is a falsifiable theory: It makes predictions that can be tested by experiment.
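The Einstein field equations mentioned above are conventionally written as follows, with g_{μν} the metric tensor, Λ the cosmological constant, T_{μν} the stress–energy tensor describing matter and energy, and the Einstein tensor G_{μν} built from the Ricci curvature:

```latex
G_{\mu\nu} + \Lambda g_{\mu\nu} = \frac{8\pi G}{c^{4}}\, T_{\mu\nu},
\qquad \text{where } G_{\mu\nu} = R_{\mu\nu} - \tfrac{1}{2} R\, g_{\mu\nu}.
```

The left-hand side encodes the geometry of spacetime and the right-hand side its matter-energy content; solving these coupled equations for the metric yields both the shape of spacetime and, through it, how objects move inertially.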
In the case of special relativity, these include the principle of relativity, the constancy of the speed of light, and time dilation. The predictions of special relativity have been confirmed in numerous tests since Einstein published his paper in 1905, but three experiments conducted between 1881 and 1938 were critical to its validation. These are the Michelson–Morley experiment, the Kennedy–Thorndike experiment, and the Ives–Stilwell experiment. Einstein derived the Lorentz transformations from first principles in 1905, but these three experiments allow the transformations to be induced from experimental evidence. Maxwell's equations—the foundation of classical electromagnetism—describe light as a wave that moves with a characteristic velocity. The modern view is that light needs no medium of transmission, but Maxwell and his contemporaries were convinced that light waves were propagated in a medium, analogous to sound propagating in air, and ripples propagating on the surface of a pond. This hypothetical medium was called the luminiferous aether, at rest relative to the "fixed stars" and through which the Earth moves. Fresnel's partial ether dragging hypothesis ruled out the measurement of first-order (v/c) effects, and although observations of second-order (v²/c²) effects were possible in principle, Maxwell thought they were too small to be detected with then-current technology. The Michelson–Morley experiment was designed to detect second-order effects of the "aether wind"—the motion of the aether relative to the earth. Michelson designed an instrument called the Michelson interferometer to accomplish this. The apparatus was more than accurate enough to detect the expected effects, but he obtained a null result when the first experiment was conducted in 1881, and again in 1887. Although the failure to detect an aether wind was a disappointment, the results were accepted by the scientific community.
In an attempt to salvage the aether paradigm, FitzGerald and Lorentz independently created an "ad hoc" hypothesis in which the length of material bodies changes according to their motion through the aether. This was the origin of FitzGerald–Lorentz contraction; their hypothesis had no theoretical basis. The interpretation of the null result of the Michelson–Morley experiment is that the round-trip travel time for light is isotropic (independent of direction), but the result alone is not enough to discount the theory of the aether or validate the predictions of special relativity. While the Michelson–Morley experiment showed that the velocity of light is isotropic, it said nothing about how the magnitude of the velocity changed (if at all) in different inertial frames. The Kennedy–Thorndike experiment was designed to do that, and was first performed in 1932 by Roy Kennedy and Edward Thorndike. They obtained a null result, and concluded that "there is no effect ... unless the velocity of the solar system in space is no more than about half that of the earth in its orbit". That possibility was thought to be too coincidental to provide an acceptable explanation, so from the null result of their experiment it was concluded that the round-trip time for light is the same in all inertial reference frames. The Ives–Stilwell experiment was carried out by Herbert Ives and G.R. Stilwell first in 1938 and with better accuracy in 1941. It was designed to test the transverse Doppler effect—the redshift of light from a moving source in a direction perpendicular to its velocity—which had been predicted by Einstein in 1905. The strategy was to compare observed Doppler shifts with what was predicted by classical theory, and look for a Lorentz factor correction. Such a correction was observed, from which it was concluded that the frequency of a moving atomic clock is altered according to special relativity.
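The Lorentz factor correction sought by Ives and Stilwell follows directly from time dilation: an atomic oscillator moving at speed v ticks slowly by the factor γ, so light received transverse to the motion is redshifted even though the source is neither approaching nor receding:

```latex
f_{\text{obs}} = \frac{f_{0}}{\gamma} = f_{0}\,\sqrt{1 - \frac{v^{2}}{c^{2}}},
\qquad \gamma = \frac{1}{\sqrt{1 - v^{2}/c^{2}}}.
```

Classical Doppler theory predicts no shift at all for purely transverse motion, so any observed shift of this form is a distinctly relativistic signature.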
Those classic experiments have been repeated many times with increased precision. Other experiments include, for instance, relativistic energy and momentum increase at high velocities, experimental testing of time dilation, and modern searches for Lorentz violations. General relativity has also been confirmed many times, the classic experiments being the perihelion precession of Mercury's orbit, the deflection of light by the Sun, and the gravitational redshift of light. Other tests confirmed the equivalence principle and frame dragging. Far from being simply of theoretical interest, relativistic effects are important practical engineering concerns. Satellite-based measurement needs to take into account relativistic effects, as each satellite is in motion relative to an Earth-bound user and is thus in a different frame of reference under the theory of relativity. Global positioning systems such as GPS, GLONASS, and Galileo, must account for all of the relativistic effects, such as the consequences of Earth's gravitational field, in order to work with precision. This is also the case in the high-precision measurement of time. Instruments ranging from electron microscopes to particle accelerators would not work if relativistic considerations were omitted.
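The GPS case can be estimated with a back-of-the-envelope calculation combining the two leading effects: orbital speed slows the satellite clock (special relativity), while the weaker gravitational field at altitude speeds it up (general relativity). The sketch below is illustrative only, using rounded constants and a circular-orbit approximation; real GPS corrections are more involved.

```python
import math

# Approximate constants (SI units)
C = 2.998e8          # speed of light, m/s
GM = 3.986e14        # Earth's gravitational parameter, m^3/s^2
R_EARTH = 6.371e6    # mean Earth radius, m
R_GPS = 2.656e7      # GPS orbital radius (semi-major axis), m
SECONDS_PER_DAY = 86400

def gps_clock_offset_per_day():
    """Net daily offset of a GPS satellite clock relative to a ground clock.

    Positive means the satellite clock runs fast. Combines, to first order:
      - special relativity: orbital speed slows the clock by v^2 / (2 c^2)
      - general relativity: the weaker potential at altitude speeds it up
        by (GM / c^2) * (1/R_earth - 1/r)
    """
    v = math.sqrt(GM / R_GPS)                            # circular orbital speed
    sr_slowdown = v**2 / (2 * C**2)                      # fractional rate (clock slow)
    gr_speedup = GM / C**2 * (1 / R_EARTH - 1 / R_GPS)   # fractional rate (clock fast)
    return (gr_speedup - sr_slowdown) * SECONDS_PER_DAY
```

The net drift works out to roughly +38 microseconds per day; left uncorrected, at the speed of light that error would accumulate to kilometres of ranging error daily, which is why GPS clocks are deliberately offset before launch.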
https://en.wikipedia.org/wiki?curid=30001
Telephone A telephone is a telecommunications device that permits two or more users to conduct a conversation when they are too far apart to be heard directly. A telephone converts sound, typically and most efficiently the human voice, into electronic signals that are transmitted via cables and other communication channels to another telephone which reproduces the sound to the receiving user. The term is derived from the Greek τῆλε ("tēle", "far") and φωνή ("phōnē", "voice"), together meaning "distant voice". A common short form of the term is phone, which has been in use since the early 20th century. In 1876, Alexander Graham Bell was the first to be granted a United States patent for a device that produced clearly intelligible replication of the human voice. This instrument was further developed by many others, and became rapidly indispensable in business, government, and in households. The essential elements of a telephone are a microphone ("transmitter") to speak into and an earphone ("receiver") which reproduces the voice in a distant location. In addition, most telephones contain a "ringer" to announce an incoming telephone call, and a dial or keypad to enter a telephone number when initiating a call to another telephone. The receiver and transmitter are usually built into a handset which is held up to the ear and mouth during conversation. The dial may be located either on the handset or on a base unit to which the handset is connected. The transmitter converts the sound waves to electrical signals which are sent through a telephone network to the receiving telephone, which converts the signals into audible sound in the receiver or sometimes a loudspeaker. Telephones are duplex devices, meaning they permit transmission in both directions simultaneously. The first telephones were directly connected to each other from one customer's office or residence to another customer's location. 
Being impractical beyond just a few customers, these systems were quickly replaced by manually operated centrally located switchboards. These exchanges were soon connected together, eventually forming an automated, worldwide public switched telephone network. For greater mobility, various radio systems were developed for transmission between mobile stations on ships and automobiles in the mid-20th century. Hand-held mobile phones were introduced for personal service starting in 1973. In later decades their analog cellular system evolved into digital networks with greater capability and lower cost. Convergence has given most modern cell phones capabilities far beyond simple voice conversation. Most are smartphones, integrating all mobile communication and many computing needs. A traditional landline telephone system, also known as "plain old telephone service" (POTS), commonly carries both control and audio signals on the same twisted pair ("C" in diagram) of insulated wires, the telephone line. The control and signaling equipment consists of three components, the ringer, the hookswitch, and a dial. The ringer, or beeper, light or other device (A7), alerts the user to incoming calls. The hookswitch signals to the central office that the user has picked up the handset to either answer a call or initiate a call. A dial, if present, is used by the subscriber to transmit a telephone number to the central office when initiating a call. Until the 1960s dials used almost exclusively the rotary technology, which was replaced by dual-tone multi-frequency signaling (DTMF) with pushbutton telephones (A4). A major expense of wire-line telephone service is the outside wire plant. Telephones transmit both the incoming and outgoing speech signals on a single pair of wires. A twisted pair line rejects electromagnetic interference (EMI) and crosstalk better than a single wire or an untwisted pair. 
The strong outgoing speech signal from the microphone (transmitter) does not overpower the weaker incoming speaker (receiver) signal with sidetone because a hybrid coil (A3) and other components compensate the imbalance. The junction box (B) arrests lightning (B2) and adjusts the line's resistance (B1) to maximize the signal power for the line length. Telephones have similar adjustments for inside line lengths (A8). The line voltages are negative compared to earth, to reduce galvanic corrosion. Negative voltage attracts positive metal ions toward the wires. The landline telephone contains a switchhook (A4) and an alerting device, usually a ringer (A7), that remains connected to the phone line whenever the phone is "on hook" (i.e. the switch (A4) is open), and other components which are connected when the phone is "off hook". The off-hook components include a transmitter (microphone, A2), a receiver (speaker, A1), and other circuits for dialing, filtering (A3), and amplification. To place a telephone call, the calling party picks up the telephone's handset, thereby operating a lever which closes the hook switch (A4). This powers the telephone by connecting the transmission hybrid transformer, as well as the transmitter (microphone) and receiver (speaker) to the line. In this off-hook state, the telephone circuitry has a low resistance of typically less than 300 ohms, which causes the flow of direct current (DC) in the line (C) from the telephone exchange. The exchange detects this current, attaches a digit receiver circuit to the line, and sends dial tone to indicate its readiness. On a modern push-button telephone, the caller then presses the number keys to send the telephone number of the destination, the called party. The keys control a tone generator circuit (not shown) that sends DTMF tones to the exchange. A rotary-dial telephone uses pulse dialing, sending electrical pulses, that the exchange counts to decode each digit of the telephone number. 
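The tone generator described above assigns each key one low-group and one high-group frequency and sends both at once. The frequency table below is the standard DTMF assignment; the sample rate, duration, and amplitudes in this sketch are illustrative choices, not values from the text.

```python
import math

# Standard DTMF frequency pairs: (low-group row, high-group column) in Hz.
DTMF = {
    "1": (697, 1209), "2": (697, 1336), "3": (697, 1477),
    "4": (770, 1209), "5": (770, 1336), "6": (770, 1477),
    "7": (852, 1209), "8": (852, 1336), "9": (852, 1477),
    "*": (941, 1209), "0": (941, 1336), "#": (941, 1477),
}

def dtmf_samples(key, duration=0.1, rate=8000):
    """Return audio samples for one key press: the sum of its two sine tones."""
    low, high = DTMF[key]
    return [
        0.5 * math.sin(2 * math.pi * low * n / rate)
        + 0.5 * math.sin(2 * math.pi * high * n / rate)
        for n in range(int(duration * rate))
    ]

print(DTMF["5"])               # (770, 1336)
print(len(dtmf_samples("5")))  # 800 samples: 100 ms at 8 kHz
```

Using two simultaneous tones, neither of which is a harmonic of the other, is what lets the exchange's digit receiver distinguish key presses from speech on the same pair of wires.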
If the called party's line is available, the terminating exchange applies an intermittent alternating current (AC) ringing signal of 40 to 90 volts to alert the called party of the incoming call. If the called party's line is in use, however, the exchange returns a busy signal to the calling party. If the called party's line is in use but subscribes to call waiting service, the exchange sends an intermittent audible tone to the called party to indicate another call. The electromechanical ringer of a telephone (A7) is connected to the line through a capacitor (A6), which blocks direct current and passes the alternating current of the ringing power. The telephone draws no current when it is on hook, while a DC voltage is continually applied to the line. Exchange circuitry (D2) can send an alternating current down the line to activate the ringer and announce an incoming call. In manual service exchange areas, before dial service was installed, telephones had hand-cranked magneto generators to generate a ringing voltage back to the exchange or any other telephone on the same line. When a landline telephone is inactive (on hook), the circuitry at the telephone exchange detects the absence of direct current to indicate that the line is not in use. When a party initiates a call to this line, the exchange sends the ringing signal. When the called party picks up the handset, they actuate a double-circuit switchhook (not shown) which simultaneously disconnects the alerting device and connects the audio circuitry to the line. This, in turn, draws direct current through the line, confirming that the called phone is now active. The exchange circuitry turns off the ring signal, and both telephones are now active and connected through the exchange. The parties may now converse as long as both phones remain off hook. When a party hangs up, placing the handset back on the cradle or hook, direct current ceases in that line, signaling the exchange to disconnect the call. 
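The role of the series capacitor (A6) can be illustrated numerically: an ideal capacitor presents infinite impedance to direct current but a finite impedance to the alternating ringing current. The 0.47 µF capacitance and 20 Hz ringing frequency below are typical assumed values for illustration, not figures from the text.

```python
import math

def capacitive_reactance(freq_hz, cap_farads):
    """|Z| of an ideal capacitor: infinite at DC, 1/(2*pi*f*C) for AC."""
    if freq_hz == 0:
        return math.inf
    return 1.0 / (2 * math.pi * freq_hz * cap_farads)

C_RINGER = 0.47e-6  # assumed 0.47 uF series ringer capacitor (A6)

print(capacitive_reactance(0, C_RINGER))          # inf: DC is blocked entirely
print(round(capacitive_reactance(20, C_RINGER)))  # ~17 kOhm at 20 Hz ringing
```

The finite reactance at ringing frequency lets enough AC through to operate the high-impedance ringer coil, while the on-hook phone still draws zero DC, so the exchange correctly sees the line as idle.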
Calls to parties beyond the local exchange are carried over trunk lines which establish connections between exchanges. In modern telephone networks, fiber-optic cable and digital technology are often employed in such connections. Satellite technology may be used for communication over very long distances. In most landline telephones, the transmitter and receiver (microphone and speaker) are located in the handset, although in a speakerphone these components may be located in the base or in a separate enclosure. Powered by the line, the microphone (A2) produces a modulated electric current which varies its frequency and amplitude in response to the sound waves arriving at its diaphragm. The resulting current is transmitted along the telephone line to the local exchange then on to the other phone (via the local exchange or via a larger network), where it passes through the coil of the receiver (A3). The varying current in the coil produces a corresponding movement of the receiver's diaphragm, reproducing the original sound waves present at the transmitter. Along with the microphone and speaker, additional circuitry is incorporated to prevent the incoming speaker signal and the outgoing microphone signal from interfering with each other. This is accomplished through a hybrid coil (A3). The incoming audio signal passes through a resistor (A8) and the primary winding of the coil (A3) which passes it to the speaker (A1). Since the current path A8 – A3 has a far lower impedance than the microphone (A2), virtually all of the incoming signal passes through it and bypasses the microphone. At the same time the DC voltage across the line causes a DC current which is split between the resistor-coil (A8-A3) branch and the microphone-coil (A2-A3) branch. The DC current through the resistor-coil branch has no effect on the incoming audio signal. 
But the DC current passing through the microphone is turned into AC (in response to voice sounds) which then passes through only the upper branch of the coil's (A3) primary winding, which has far fewer turns than the lower primary winding. This causes a small portion of the microphone output to be fed back to the speaker, while the rest of the AC goes out through the phone line. A lineman's handset is a telephone designed for testing the telephone network, and may be attached directly to aerial lines and other infrastructure components. Before the development of the electric telephone, the term "telephone" was applied to other inventions, and not all early researchers of the electrical device called it "telephone". Perhaps the earliest use of the word for a communications system was the "telephon" created by Gottfried Huth in 1796. Huth proposed an alternative to the optical telegraph of Claude Chappe in which the operators in the signalling towers would shout to each other by means of what he called "speaking tubes", but would now be called giant megaphones. A communication device for sailing vessels called a "telephone" was invented by Captain John Taylor in 1844. This instrument used four air horns to communicate with vessels in foggy weather. Johann Philipp Reis used the term in reference to his invention, commonly known as the Reis telephone, in c. 1860. His device appears to be the first device based on conversion of sound into electrical impulses. The term "telephone" was adopted into the vocabulary of many languages. It is derived from the Greek τῆλε ("tēle", "far") and φωνή ("phōnē", "voice"), together meaning "distant voice". Credit for the invention of the electric telephone is frequently disputed. As with other influential inventions such as radio, television, the light bulb, and the computer, several inventors pioneered experimental work on "voice transmission over a wire" and improved on each other's ideas. 
New controversies over the issue still arise from time to time. Charles Bourseul, Antonio Meucci, Johann Philipp Reis, Alexander Graham Bell, and Elisha Gray, amongst others, have all been credited with the invention of the telephone. Alexander Graham Bell was the first to be awarded a patent for the electric telephone by the United States Patent and Trademark Office (USPTO) in March 1876. Before Bell's patent, experimental telephones transmitted sound much as a telegraph did, using vibrating contacts to send intermittent electrical pulses. Bell found that intermittent currents could produce only crude sounds, and that a continuously fluctuating (undulating) current reproduced speech far better. That fluctuating current became the basis for the working telephone and for Bell's patent. That first patent by Bell was the "master patent" of the telephone, from which other patents for electric telephone devices and features flowed. The Bell patents were forensically victorious and commercially decisive. In 1876, shortly after Bell's patent application, Hungarian engineer Tivadar Puskás proposed the telephone switch, which allowed for the formation of telephone exchanges, and eventually networks. In the United Kingdom "the blower" is used as a slang term for a telephone. The term came from navy slang for a speaking tube. Early telephones were technically diverse. Some used a water microphone, some had a metal diaphragm that induced current in an electromagnet wound around a permanent magnet, and some were dynamic – their diaphragm vibrated a coil of wire in the field of a permanent magnet or the coil vibrated the diaphragm. The sound-powered dynamic variants survived in small numbers through the 20th century in military and maritime applications, where its ability to create its own electrical power was crucial. 
Most, however, used the Edison/Berliner carbon transmitter, which was much louder than the other kinds, even though it required an induction coil, an impedance-matching transformer, to make it compatible with the impedance of the line. The Edison patents kept the Bell monopoly viable into the 20th century, by which time the network was more important than the instrument. Early telephones were locally powered, using either a dynamic transmitter or a local battery to power the transmitter. One of the jobs of outside plant personnel was to visit each telephone periodically to inspect the battery. During the 20th century, telephones powered from the telephone exchange over the same wires that carried the voice signals became common. Early telephones used a single wire for the subscriber's line, with ground return used to complete the circuit (as used in telegraphs). The earliest dynamic telephones also had only one port opening for sound, with the user alternately listening and speaking (or rather, shouting) into the same hole. Sometimes the instruments were operated in pairs at each end, making conversation more convenient but also more expensive. At first, the benefits of a telephone exchange were not exploited. Instead telephones were leased in pairs to a subscriber, who had to arrange for a telegraph contractor to construct a line between them, for example between a home and a shop. Users who wanted the ability to speak to several different locations would need to obtain and set up three or four pairs of telephones. Western Union, already using telegraph exchanges, quickly extended the principle to its telephones in New York City and San Francisco, and Bell was not slow in appreciating the potential. Signalling began in an appropriately primitive manner. The user alerted the other end, or the exchange operator, by whistling into the transmitter. 
Exchange operation soon resulted in telephones being equipped with a bell in a ringer box, first operated over a second wire, and later over the same wire, but with a condenser (capacitor) in series with the bell coil to allow the AC ringer signal through while still blocking DC (keeping the phone "on hook"). Telephones connected to the earliest Strowger switch automatic exchanges had seven wires, one for the knife switch, one for each telegraph key, one for the bell, one for the push-button and two for speaking. Large wall telephones in the early 20th century usually incorporated the bell, and separate bell boxes for desk phones dwindled away in the middle of the century. Rural and other telephones that were not on a common battery exchange had a magneto hand-cranked generator to produce a high voltage alternating signal to ring the bells of other telephones on the line and to alert the operator. Some local farming communities that were not connected to the main networks set up barbed wire telephone lines that exploited the existing system of field fences to transmit the signal. In the 1890s a new smaller style of telephone was introduced, packaged in three parts. The transmitter stood on a stand, known as a "candlestick" for its shape. When not in use, the receiver hung on a hook with a switch in it, known as a "switchhook". Previous telephones required the user to operate a separate switch to connect either the voice or the bell. With the new kind, the user was less likely to leave the phone "off the hook". In phones connected to magneto exchanges, the bell, induction coil, battery and magneto were in a separate bell box or "ringer box". In phones connected to common battery exchanges, the ringer box was installed under a desk, or other out-of-the-way place, since it did not need a battery or magneto. 
Cradle designs were also used at this time, having a handle with the receiver and transmitter attached, now called a handset, separate from the cradle base that housed the magneto crank and other parts. They were larger than the "candlestick" and more popular. Disadvantages of single-wire operation such as crosstalk and hum from nearby AC power wires had already led to the use of twisted pairs and, for long-distance telephones, four-wire circuits. Users at the beginning of the 20th century did not place long-distance calls from their own telephones but made an appointment to use a special soundproofed long-distance telephone booth furnished with the latest technology. What turned out to be the most popular and longest-lasting physical style of telephone was introduced in the early 20th century, including Bell's 202-type desk set. A carbon granule transmitter and electromagnetic receiver were united in a single molded plastic handle, which when not in use sat in a cradle in the base unit. The circuit diagram of the model 202 shows the direct connection of the transmitter to the line, while the receiver was induction coupled. In local battery configurations, when the local loop was too long to provide sufficient current from the exchange, the transmitter was powered by a local battery and inductively coupled, while the receiver was included in the local loop. The coupling transformer and the ringer were mounted in a separate enclosure, called the subscriber set. The dial switch in the base interrupted the line current by repeatedly but very briefly disconnecting the line 1 to 10 times for each digit, and the hook switch (in the center of the circuit diagram) disconnected the line and the transmitter battery while the handset was on the cradle. In the 1930s, telephone sets were developed that combined the bell and induction coil with the desk set, obviating a separate ringer box. 
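The dial's behaviour of interrupting the line 1 to 10 times per digit can be sketched in code. The 10 pulse-per-second rate, 60/40 break/make ratio, and interdigit pause below are typical values assumed for illustration; only the pulse counts themselves follow from the text.

```python
def pulses_for_digit(digit):
    """Rotary pulse count: '1'..'9' send 1..9 pulses, '0' sends 10."""
    n = int(digit)
    return 10 if n == 0 else n

def dial_events(number, pps=10, break_ratio=0.6, interdigit=0.7):
    """Return (state, seconds) loop events for a dialed number.

    Timing values are typical, not taken from the text: 10 pulses per
    second with a 60% break / 40% make ratio, and ~700 ms of closed
    loop between digits so the exchange knows the digit has ended.
    """
    period = 1.0 / pps
    events = []
    for digit in number:
        for _ in range(pulses_for_digit(digit)):
            events.append(("break", period * break_ratio))   # loop opened
            events.append(("make", period * (1 - break_ratio)))  # loop closed
        events.append(("make", interdigit))  # interdigit pause
    return events

print(pulses_for_digit("0"))                                  # 10
print(sum(1 for s, _ in dial_events("90") if s == "break"))   # 9 + 10 = 19
```

The exchange simply counts loop interruptions and uses the longer closed-loop gap to decide where one digit ends and the next begins, which is why dialing too slowly could mis-register a digit.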
The rotary dial, which became commonplace in many areas in the 1930s, enabled customer-dialed service, but some magneto systems remained even into the 1960s. After World War II, the telephone networks saw rapid expansion and more efficient telephone sets, such as the model 500 telephone in the United States, were developed that permitted larger local networks centered around central offices. A breakthrough new technology was the introduction of Touch-Tone signaling using push-button telephones by American Telephone & Telegraph Company (AT&T) in 1963. The invention of the transistor in 1947 dramatically changed the technology used in telephone systems and in the long-distance transmission networks, over the next several decades. Along with the development of stored program control for electronic switching systems, and new transmission technologies, such as pulse-code modulation (PCM), telephony gradually evolved towards digital telephony, which improved the capacity, quality, and cost of the network. The development of digital data communications methods made it possible to digitize voice and transmit it as real-time data across computer networks and the Internet, giving rise to the field of Internet Protocol (IP) telephony, also known as voice over Internet Protocol (VoIP), a term that memorably reflects the methodology. VoIP has proven to be a disruptive technology that is rapidly replacing traditional telephone network infrastructure. By January 2005, up to 10% of telephone subscribers in Japan and South Korea had switched to this digital telephone service. A January 2005 Newsweek article suggested that Internet telephony may be "the next big thing." The technology has spawned a new industry comprising many VoIP companies that offer services to consumers and businesses. IP telephony uses high-bandwidth Internet connections and specialized customer premises equipment to transmit telephone calls via the Internet, or any modern private data network. 
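The pulse-code modulation step mentioned above can be made concrete. Classic digital telephony samples voice at 8 kHz and stores each sample in 8 companded bits, giving the familiar 64 kbit/s channel. The continuous µ-law companding formula in this sketch is the standard one; reducing quantization to a simple rounding step is a deliberate simplification of the real segmented codec.

```python
import math

MU = 255  # standard North American / Japanese mu-law parameter

def mu_law_compress(x):
    """Continuous mu-law companding of a sample x in [-1, 1].

    Boosts quiet signals before uniform quantization, so that 8 bits
    per sample (8000 samples/s * 8 bits = 64 kbit/s) still cover the
    wide dynamic range of speech with acceptable quality.
    """
    return math.copysign(math.log1p(MU * abs(x)) / math.log1p(MU), x)

def quantize8(y):
    """Uniformly quantize a compressed value in [-1, 1] to an 8-bit code."""
    return max(-128, min(127, round(y * 127)))

print(mu_law_compress(0.0))               # 0.0
print(quantize8(mu_law_compress(1.0)))    # 127: full scale uses the top code
print(quantize8(mu_law_compress(0.01)))   # a quiet sample still gets a usable code
```

Without companding, a 1% amplitude sample would round to just one or two codes out of 256; the logarithmic curve spreads the quiet region across many more codes, which is why 8 bits suffice for telephone speech.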
The customer equipment may be an analog telephone adapter (ATA) which translates the signals of a conventional analog telephone to packet-switched IP messages. IP phones combine these functions in a standalone device, while computer softphone applications use the microphone and headset of a personal computer. While traditional analog telephones are typically powered from the central office through the telephone line, digital telephones require a local power supply. Internet-based digital service also requires special provisions to provide the service location to the emergency services when an emergency telephone number is called. In 2002, only 10% of the world's population used mobile phones and by 2005 that percentage had risen to 46%. By the end of 2009, there were a total of nearly 6 billion mobile and fixed-line telephone subscribers worldwide. This included 1.26 billion fixed-line subscribers and 4.6 billion mobile subscribers. The Unicode system provides various code points for graphic symbols used in designating telephone devices, services, or information, for print, signage, and other media.
https://en.wikipedia.org/wiki?curid=30003
Telia Company Telia Company AB is a Swedish multinational telecommunications company and mobile network operator present in Sweden, Finland, Norway, Denmark, Lithuania, Latvia and Estonia. It also runs an international IP backbone network, ranked number one in the world by Dyn, through Telia Carrier. The company is headquartered in Stockholm and its stock is traded on the Stockholm Stock Exchange and on the Helsinki Stock Exchange. Telia Company in its current form was first established as TeliaSonera, as the result of a 2002 merger between the Swedish and Finnish telecommunications companies, Telia and Sonera. This merger followed three years after Telia's failed merger attempt with Norwegian telecommunications company Telenor, now its chief competitor in the Nordic countries. Before privatisation, Telia was a state telephone monopoly. Sonera, on the other hand, had a monopoly only on trunk network calls, while most (c. 75%) of local telecommunication was provided by telephone cooperatives. The separate brands Telia and Sonera continued to be used in the Swedish and Finnish markets respectively until March 2017 when Sonera was rebranded to Telia. Of the stock, 39.5% (31 March 2020) is owned by the Swedish government, and the rest by institutions, companies, and private investors worldwide. The Finnish government (through Solidium) divested from Telia Company in February 2018 when it sold its remaining 3.2% stake. The Swedish Kungl. Telegrafverket (literally: "Royal Telegraph Agency") was founded in 1853, when the first electric telegraph line was established between Stockholm and Uppsala. The private operator Stockholms Allmänna Telefon found an equipment supplier in Lars Magnus Ericsson. In this early competition, Telegrafverket with its brand Rikstelefon was a latecomer. However, by securing a national monopoly on long-distance telephone lines, it was able with time to control and take over the local networks of quickly growing private telephone companies. 
A de facto telephone monopoly position was reached around 1920, and never needed legal sanction. In 1953 the name was modernised to Televerket. On 1 July 1992 this huge government agency's regulating functions were split off into the Swedish Post and Telecom Authority (Post- och telestyrelsen, PTS), with similar functions as the Federal Communications Commission of the United States. The operation of the state radio and TV broadcast network was spun off into a company named Teracom. On 1 July 1993 the remaining telephone and mobile network operator was transformed into a government-owned shareholding company, named Telia AB. At the height of the dot-com bubble, on 13 June 2000, close to one-third of Telia's shares were introduced on the Stockholm Stock Exchange. In the 1980s, Televerket was a pioneering mobile network operator with the NMT system, followed in the 1990s by GSM. Private competition in analogue mobile phone systems had already broken the telephone monopoly, and the growing internet allowed more opportunities for competitors. The most important of Telia's Swedish competitors in these areas has been Tele2. When PTS awarded four licenses for the 3rd generation mobile networks in December 2000, Telia was not among the winners, but later established an agreement to build a 3G network jointly with Tele2 using Tele2's licence. SUNAB was founded as the jointly owned company that would in turn build, own and operate the joint 3G network. In December 2018, Telia in cooperation with Ericsson launched Sweden's first 5G network at KTH Royal Institute of Technology in Stockholm. The history of Sonera dates back to 1917, when Suomen Lennätinlaitos (Finnish Telegraph Agency) was founded. In 1927, the telegraph agency was merged with the Finnish Post to form a new agency, Post and Telegraph Agency. This agency governed all long distance and international calls until 1994, when competitors were allowed to enter the Finnish market. 
In the same year, the Post and Telegraph Agency was divided to form two companies, Suomen Posti Oy (Finnish Post) and Telecom Finland Oy. Telecom Finland then changed its name to Sonera in 1998. During the run-up to the 2006 general election the Swedish liberal-conservative Alliance stated as one of its policy aims to reduce government ownership in commercial entities, and specifically to sell its stake in TeliaSonera. The Alliance went on to win the election and formed a coalition government. After the merger with Sonera, the Swedish State held 46% of the shares and with parliamentary approval the government sold down to 37.3%. Further divestment of TeliaSonera was however presented to the parliament only after the next election in 2010, when the Alliance lost its majority but stayed on as a minority administration. On 16 March 2011 the Alliance administration lost a parliamentary vote on sale of publicly owned commercial entities, including TeliaSonera, when a coalition of all opposition parties - the Left Party, Social Democratic Party, Green Party and Sweden Democrats - united against the Alliance. In the beginning of 2008, TeliaSonera announced measures to save nearly 500 million Euros which would include 2900 redundancies: 2000 from Sweden and 900 from Finland. France Télécom (now Orange S.A.) proposed a 33 billion Euro acquisition offer for TeliaSonera on 5 June 2008, which was promptly rejected by the company's board. On 20 July 2018, Telia Company announced the acquisition proposal of Bonnier Broadcasting Group from Bonnier Group for 9.2 billion SEK (roughly $1 billion), thus owning TV4 AB (commercial television broadcaster in Sweden), MTV Oy (commercial television broadcaster in Finland) and C More Entertainment (pan-Nordic operator of premium television channels). The European Commission approved the deal on 12 November 2019 with certain conditions, and the acquisition was completed on 2 December that year. 
Ahead of the completion of the Bonnier Broadcasting deal, the Telia Company nomination committee proposed on 20 October 2019 that Marie Ehrling be succeeded by Lars-Johan Jarnheimer, the former CEO of Tele2 until 2008 and then-chair of Egmont Media, as the company's board chair. The proposal was approved on 26 November that year, following the extraordinary general meeting. Meanwhile, on 24 October, Telia Company appointed Allison Kirkby, who was CEO of Tele2 from 2015 until 2018 and then president and CEO of TDC, as the company's new president and CEO. Kirkby assumed office on 4 May 2020. Telia Carrier (AS1299; formerly TeliaSonera International Carrier) is a tier 1 carrier. Telia Company is the largest Nordic and Baltic fixed-voice, broadband, and mobile operator by revenue and customer base. It also owns a TV-media operation which includes TV4 in Sweden and MTV in Finland as well as C More. It operates the world's largest and fastest-growing wholesale IP backbone (AS1299). Telia's mobile telephone business in Europe is the market leader in Estonia, Latvia, Lithuania and Sweden, the second-largest operator in Finland and Norway, and the third-largest in Denmark. Telia Company is a 12.25% stakeholder in Roshan, a cellphone network in Afghanistan. In Denmark Telia Company operates a mobile operator (Telia), a mobile virtual network operator (Call Me), and a broadband supplier (Telia). The company started in 1995, the result of a merger between Telia Stofa and TeliaSonera. Telia Mobile is the third-largest operator and is in fierce competition with Telenor, which is number two in the market. Telia was the fourth operator to launch 3G services and is the only operator to have a nationwide EDGE network. Telia Broadband was relaunched in 2008 because of the need for TeliaSonera to offer both mobile and broadband in all of their home markets (Sweden, Norway, Denmark and Finland). Telia Broadband was the first operator to launch digital TV with their broadband at no extra cost. 
Stofa is mainly a cable TV operator, but also supplies broadband via the cable TV network. Telia Company owns 100% of Eesti Telekom. Eesti Telekom is one of the largest telecommunication companies in the Baltic countries and the largest telecommunications company in Estonia. TeliaSonera and the Estonian government reached a deal over the sale of Eesti Telekom in September 2009. On 20 January 2016, Eesti Telekom switched its name to Telia Eesti. Telia Finland is the second largest mobile operator in Finland and also one of the biggest providers of landline telephone and internet services. Before the rebranding on 23 March 2017, Telia was known in Finland under the brands of Sonera and Tele Finland. In September 1999, Sonera became the world's first mobile operator to launch mobile Internet services via Wireless Application Protocol (WAP). Since 2014, Telia Finland and DNA Oyj have jointly deployed a shared 4G LTE network using the 800 MHz (LTE Band 20) "digital dividend" band in remote Northern and Eastern Finland under the "Suomen Yhteisverkko Oy" joint venture. Telia Finland owns 51% of "Suomen Yhteisverkko Oy". TeliaSonera owns 49% of LMT (24.5% as Telia Company AB and 24.5% as Sonera Holding B.V.). TeliaSonera also owns 49% of Lattelecom, which owns 23% of LMT, which owns the "Okarte" and "Amigo" brands. It also owns 100% of Telia Latvija, a business cable operator and data centre operator. TeliaSonera owns 88.15% of Telia Lietuva (Teo LT until 2017), the largest landline phone operator in Lithuania, which recently purchased Omnitel, one of the largest mobile network operators there. Omnitel was previously owned by the TeliaSonera group. In October 2015, TeliaSonera announced the merger of Teo and Omnitel, through the acquisition of Omnitel by Teo. On February 1, 2017, Omnitel and Teo merged under the name of "Telia Lietuva". In Norway Telia first entered after the de-regulation in 1998 as a virtual supplier of fixed telephone and Internet services. 
This was sold to Enitel during the merger attempt with Telenor, but Telia re-entered in 2000 with the purchase of one of the two mobile network operators, NetCom. In 2006 it also bought the virtual mobile provider Chess Communication. On 1 March 2016, NetCom was rebranded as Telia Norge. In July 2018, Telia acquired Get AS and TDC Norway for $2.6 billion. In Sweden, Telia Company operates under the consumer brand Telia and its lower-cost flanker brands Halebop and Fello. On the business side, the Skanova Access and Cygate brands are also used. Telia Sverige is currently the largest mobile phone operator in Sweden, both in terms of revenue and customer base. Main competitors include Tele2, Telenor, 3, Com Hem and Boxer. Telia also owns TV4 Group, which includes TV4 in Sweden and MTV in Finland, and C More Entertainment, after acquiring them in 2019. Telia Company is a minority owner (47%) of Turkcell Holding, which holds 51% of the listed leading mobile operator in Turkey. Turkcell also owns subsidiaries in Belarus (80%) and Ukraine (100%). Telia has been selling off its stakes in companies outside its main region of business. On 15 May 2010, after Azercell went through rebranding, it joined the network of TeliaSonera. On 5 March 2018, Telia confirmed it had sold its stake in Azercell. TeliaSonera purchased a majority stake in Star-Cell in 2008, which was the number-four player in the market at that time. By 2010 it had exited Cambodia after a $100 million write-down and a collapse in subscriber numbers. Star-Cell was subsequently taken over by a more dominant competitor, Smart Mobile. From 2007 to 2018, Telia Company owned 58.55% of the Geocell company, while Turkcell owned the remaining 41.45%. In 2018, Silknet acquired Geocell in full. Telia Company operated in Kazakhstan under the brand Kcell. On 21 December 2018, Kcell was sold to Kazakhtelecom. TeliaSonera owned a majority stake in Ncell, the largest mobile operator in Nepal with US$16.2 billion operating income. 
On 21 December 2015, TeliaSonera announced its exit from Ncell, selling its 60.4 percent stake to the Malaysian telecommunications group Axiata. TeliaSonera exited Nepal without settling billions in capital gains tax claimed by the Nepalese government. Telia Company owned 25.2% of MegaFon, the second-largest mobile phone operator in Russia. In October 2017, Telia Company agreed to sell its entire MegaFon stake for US$1 billion. Telia Company owned a 76.6% holding in the Spanish operator Yoigo until 21 June 2016, when the stake was sold. Telia Company owned 60% of the mobile phone operator Tcell. Tcell is a merger of Somoncom and Indigo Tajikistan; the merger was completed in July 2012. On 27 April 2017, it was confirmed that Tcell had been sold. In February 2020, Telia Company agreed to sell its 100% holding in Moldcell to CG Cell Technologies DAC for a transaction price of US$31.5 million. In five years, Ucell, the Uzbek subsidiary, increased its subscriber base from 400,000 to 9 million (2012). Some former TeliaSonera executives were under preliminary investigation by Swedish prosecutors over allegations of bribery and money laundering associated with the acquisition of their 3G license in Uzbekistan from Takilant Limited, registered in Gibraltar.[30] Under these investigations, involving four Uzbek nationals, hundreds of millions of francs were frozen in Swiss banks.[31] The former executives were acquitted in the first instance in the Swedish legal proceedings in February 2019; the verdict has been appealed. In September 2017, Telia Company announced that a global settlement had been reached with the U.S. Department of Justice (DOJ), the Securities and Exchange Commission (SEC) and the Dutch Public Prosecution Service (Openbaar Ministerie, OM) relating to previously disclosed investigations regarding historical transactions in Uzbekistan. The global resolution ended all known corruption-related investigations or inquiries into Telia Company. 
When Telia and Sonera merged in 2002, TeliaSonera used a simple wordmark as the logo. In 2011, TeliaSonera released its new purple pebble logo for the corporation and its affiliate brands. The pebble was designed by Landor Associates. In 2016, TeliaSonera changed its name to Telia Company and presented an updated pebble brand profile, designed by Wolff Olins, to be used by all Telia brand companies. TeliaSonera has been accused of indirectly supporting dictatorships by allowing them to carry out man-in-the-middle attacks on their citizens. This was disclosed in the Swedish TV show Uppdrag Granskning in 2012. TeliaSonera responded to these allegations with: "This is happening every day in all countries and applies to all operators. We are obliged to comply with the legislation of each country." Further allegations have been presented in Swedish media and elsewhere that TeliaSonera may have illegally, through bribery, acquired licenses in Uzbekistan and Azerbaijan. As a result of internal investigations into these and other potential violations of the company's policies, several senior managers were dismissed from the company. When TeliaSonera exited Nepal, voices were raised in the public debate in Nepal that it had evaded approximately 36 billion Nepalese rupees in capital gains tax owed to the Nepalese government when it sold its stake to Axiata, a Malaysian telecom group, a claim which Telia Company has refuted on several occasions. In that context, Telia was criticized by media (TV) even in Sweden, where its headquarters is located. Also, a group of Nepalese people started a movement, 'No Tax.. No Ncell', to boycott the services of Ncell in Nepal.
https://en.wikipedia.org/wiki?curid=30004
Telefónica Telefónica, S.A. is a Spanish multinational telecommunications company headquartered in Madrid, Spain. It is one of the largest telephone operators and mobile network providers in the world. It provides fixed and mobile telephony, broadband and subscription television, operating in Europe and the Americas. As well as the Telefónica brand, it also trades as Movistar, O2 and Vivo. The company is a component of the Euro Stoxx 50 stock market index. As of May 2017, Telefónica was the 110th-largest company in the world, according to "Forbes". The company was created in Madrid in 1924 as Compañía Telefónica Nacional de España (CTNE), with ITT as one of its major shareholders. In 1945, the state acquired by law a 79.6% share of the company. This stake was diluted by a capital increase in 1967. Until the liberalisation of the telecom market in 1997, Telefónica was the only telephone operator in Spain; it still holds a dominant position (over 75% in 2000). Nowadays, Telefónica is present in more than 20 countries across Europe and the Americas. Telefónica is a fully listed company with more than 1.5 million direct shareholders. Its share capital currently comprises 4,563,996,485 ordinary shares traded on the Spanish stock market (Madrid, Barcelona, Bilbao and Valencia) and on those of London, New York, Lima, and Buenos Aires. Telefónica is the second-largest corporation in Spain, behind the Santander Group. It owns "Telefónica de España", the largest fixed phone and ADSL operator in Spain; "Telefónica Móviles", the largest mobile phone operator in Spain (under the Movistar brand); and Terra Networks, S.A., an Internet subsidiary. As of April 2016, Spain has the most expensive fibre-to-the-home network in Europe. Telefónica was the parent of Telefónica Deutschland, which held two alternative IP carriers. 
The two ISPs, mediaWays and HighwayOne, merged in January 2003 after having been purchased by Telefónica in 2001 and February 2002 respectively. On 26 January 2006, Telefónica completed its £17.7 billion (€25.7 billion) acquisition of the UK-based operator O2, which also provided mobile phone services in Germany under the O2 brand. Following the purchase, Telefónica merged Telefónica Deutschland and O2 Germany to form the current business, Telefónica Germany. Telefónica Germany purchased competitor E-Plus on 1 October 2014. As part of the purchase, Telefónica reduced its stake in the subsidiary to 62.1%. Integration continues as of August 2015, but the now-merged network is Germany's largest by customers. On 31 October 2005, O2 agreed to be taken over by Telefónica, with a cash offer of £17.7 billion, or £2 per share. According to the merger announcement, O2, which provided mobile phone services in the UK, Ireland, Germany and the Isle of Man (uniquely within the O2 group, Manx Telecom also offered fixed-line services), retained its name and continued to be based in the United Kingdom, keeping both the brand and the management team. The merger became unconditional on 23 January 2006 and O2 became a wholly owned subsidiary of Telefónica. Manx Telecom was sold by Telefónica Europe in June 2010. In January 2015, Li Ka-shing entered into talks with Telefónica to buy O2 for around £10.25 billion, aiming to merge it with his subsidiary Three. The acquisition was officially blocked by the European Commission on 11 May 2016, which argued that the merger would reduce consumer choice and lead to a higher cost of services. Telefónica began to seek a stock market flotation of the business instead. On 7 May 2020, Liberty Global, owner of Virgin Media, and Telefónica, owner of O2, announced an agreement to merge their UK businesses in a deal worth £31 billion, forming one of the UK's largest entertainment and telecommunications companies to rival the BT Group. 
In France, since 2011, Telefónica has had a joint venture with the French telecommunications company Bouygues Telecom, part of the Bouygues group, to offer global telecommunication service packages to multinational companies. This cooperation was expanded in June 2015 through the creation of a separate joint-venture company named Telefónica Global Solutions France, with its own marketing and sales teams offering Telefónica and Bouygues Telecom service packages to corporations. Telefónica operates the Movistar mobile phone brand throughout Latin America. In Mexico it occupies a distant second place, and it is the largest operator in Chile, Venezuela, Brazil, and Peru. Telefónica owns "Telefónica de Argentina", the largest fixed-line operator in the country. It provides broadband, local and long-distance telephone services in the southern part of the country as well as the Greater Buenos Aires area. The Telefónica Group has been in the country since 1990. The mobile business is run by Telefónica Móviles through Movistar, a local subsidiary. Telefónica's largest fixed-line operation in South America is in Brazil, where it provides broadband, local and long-distance telephone services in the state of São Paulo, which alone represents the highest GDP of South America. It also owns a majority stake in the Brazilian mobile operator Vivo, having agreed on 28 July 2010 to buy Portugal Telecom's stake in the firm for €7.5 billion, after increasing its original offer by €1.8 billion over three months of incident-rich negotiations. The Telefónica group has been in the country since 1996, when it acquired CRT, a fixed-line and mobile operator in the southern part of the country. That landline division is currently part of Brasil Telecom. In July 1998 it acquired Telesp, the telephony operator of the Telebrás system in the state of São Paulo, forming Telefônica Brasil. 
In 2009, after four big "blackouts" on Telefónica's broadband service "Speedy", ANATEL ordered Telefónica to stop sales of its broadband service until improvements were made to the infrastructure to provide better-quality service. After sales of broadband internet resumed in August 2009, ANATEL expected the company's service investments to keep pace with sales. On 24 July 2010, Telefónica announced that the number of Speedy subscribers had exceeded three million. Telefónica owns "Telefónica Chile", formerly CTC (Compañía de Telecomunicaciones de Chile, formerly known as Compañía de Teléfonos de Chile), which is the biggest fixed-line operator and internet service provider in the country. The Telefónica Group has been in the country since 1989. The mobile business is run by Telefónica Móviles through a local subsidiary. On 25 October 2009, Telefónica Chile changed its name to "Movistar", covering cellphone, landline, satellite TV, and internet services. On 18 April 2006, Telefónica's president César Alierta signed an agreement with the Colombian government to buy 50% plus one share of the state-owned communications company Colombia Telecomunicaciones (TELECOM). With this sale, Telefónica became the largest Colombian land-line operator and also gained an important presence in the local, long-distance and broadband markets. The mobile business is run by Telefónica Móviles through the Movistar brand. It is unknown what will happen with its previously established subsidiary Telefónica Empresas, a merger with TELECOM being the most probable outcome. The company is now known as Telefónica - Telecom. Telefónica signed a contract for 15 years (extendable for 10 additional years) on 12 May 2011 with the government of Costa Rica. It started operations in 2011 under its Movistar branding. In 2000, Telefónica acquired a 26.5% stake in Tricom when it purchased part of the shares Motorola had obtained in 1993. After acquiring 100% of OTECEL S.A. 
(Bellsouth), Telefónica Móviles Ecuador started its operations on 14 October 2004 as Movistar. It offers mobile solutions for the Ecuadorian market and is one of only three mobile operators in Ecuador. Telefónica in Ecuador has offered 3G service since the second half of 2009. After acquiring 100% of Paysandú S.A., Telefónica Guatemala Centro América started its operations in 1998 as Telefónica Movistar, and as just Telefónica for landlines. In 2004, it acquired 100% of BellSouth Guatemala, whose mobile services were based on CDMA technology; that same year, as Telefónica Movistar, it launched national service with GSM/GPRS technology and CDMA 1x EV-DO for data, relaunching mobile operations as Movistar in 2005. It offers mobile solutions for the Guatemalan market and is one of only three mobile operators in Guatemala, alongside the international operators Millicom (Tigo) and América Móvil (Claro). Telefónica Móviles Guatemala (renamed in 2005) has offered services on UMTS/HSPA since June 2009; it was the last operator to launch commercial services on this technology, with coverage in all major cities. Telefónica started its operations in Panama in 2004, when it acquired 100% of BellSouth Panama. Since then it has operated under the Movistar name for mobile services. It migrated from the CDMA technology used by BellSouth to GSM 850. It also offers 3G using UMTS 850 and UMTS 1900. In 2015 it launched LTE, with coverage expanding in Panama City, Arraijan and Chorrera, up to Buenaventura Beach. The Telefónica Group has been in Peru since 1994 and owns the largest fixed-line operator in the country. The local subsidiary offers local, long-distance, and broadband services nationwide. The mobile business is run by Telefónica Móviles through a local subsidiary. The mobile telephone business goes by the name Movistar and competes with the major provider Claro. Its main offices are located in Santa Beatriz on Av. Arequipa 1155. 
Since January 2011, Telefónica has operated in the market under the Movistar brand. Telefónica has a presence in Puerto Rico through Telefónica Empresas, Telefónica Larga Distancia - TLD, Telefónica International Wholesale Services - TIWS (formerly Emergia) and Atento. Telefónica Móviles, through its Movistar brand, had a presence in Puerto Rico until mid-2007, when it sold the Puerto Rico network to a private equity group, which renamed it Open Mobile. In late 2004, Telefónica took over the operations of Telcel BellSouth in Venezuela, at the time the first and largest mobile operator in the South American country. After rebranding as Movistar, its CDMA2000 EV-DO network was progressively replaced by a GSM UMTS 3G network. Telefónica is currently rolling out 4G LTE in the country. Based in Miami, Florida, Telefónica USA, Inc. provides services to U.S.-based multinational companies that have operations in Latin America and Europe. Telefónica USA also operates the KeyCenter™, a data center in Miami built to withstand Category 5 hurricanes, from which the company supports business continuity and IT services for enterprise customers in South Florida. In 2009, China Unicom agreed to a $1 billion cross-holding with Telefónica. In January 2011, the two partners agreed to a further $500 million tie-up in each other. Following completion in late 2011, Telefónica held a 9.7% stake in China Unicom, and China Unicom owned 1.4% of the Spanish firm. In 2018, China Unicom and Telefónica established a new partnership to combine their services and networks in the internet of things, so as to enable their clients to deploy IoT products and services in China, Europe and Latin America with a single global IoT SIM card. In 2005, Telefónica bought "Český Telecom" (Czech Telecom), the former state-owned Czech phone operator, which still dominates the Czech fixed-line market. 
As part of this deal, Telefónica also gained Český Telecom's wholly owned subsidiary Eurotel, one of three mobile phone operators in the Czech Republic. Starting 1 July 2006, both companies were merged into one legal entity and renamed "Telefónica O2 Czech Republic". In 2011, the company was renamed "Telefónica Czech Republic", and in 2013 it was announced that Telefónica would sell its stake in the company to PPF. Under the terms of the sale, the company was allowed to continue to trade under the O2 brand for a maximum of four years. In August 2017, the brand license agreement was extended to 2022, with a 5-year extension to 2027 available. During 2006, Telefónica won the tender to become the third mobile phone operator in Slovakia, under the O2 brand. It began providing services on 2 February 2007 under the name Telefónica O2 Slovakia, s.r.o. It initially launched with only a prepaid service but in mid-2007 began to sell contract phones. The company was sold along with Telefónica Czech Republic to PPF. O2 in Ireland was purchased by Telefónica as part of its acquisition of O2 plc in the UK in 2005. Telefónica Ireland became the second-largest mobile phone operator in Ireland, operating a GSM/EDGE and high-speed HSPA+ wireless broadband network for residential and business customers through its "O2" brand. Telefónica Ireland also provided fixed broadband to business customers. It was announced on 24 June 2013 that Telefónica had agreed to sell its O2 Ireland mobile business for at least €780 million ($1 billion) in cash to Hutchison Whampoa's subsidiary 3. O2 was merged into Hutchison Whampoa's subsidiary Three Ireland in March 2015. Telefónica currently owns 46% of Telco, the holding company that controls 22% of Telecom Italia, Italy's former government-owned telephone company. In late 2013, Telefónica announced its intention to acquire the entirety of Telco by January 2014, potentially becoming Telecom Italia's largest shareholder. 
The plan was, however, challenged by the Brazilian competition authority, since Telefónica and Telecom Italia, with Vivo and TIM respectively, are the two largest telephone companies competing in Brazil. Subsequently, Telefónica confirmed in September 2014 that it intended to sell its shares in Telecom Italia following the purchase of Global Village Telecom (GVT) in Brazil from Vivendi. Telefónica sold its shares in the business to Vivendi as part of the sale of GVT in June 2015. The firm provides fixed, mobile and data telecommunications, digital platforms and ICT services to the B2B sector (MNC, Enterprise, SME and Wholesale) through its Telefónica Business Solutions unit. Customers: Telefónica lists, among others, the following as existing customers: Inditex, Scottish Power, BBVA, Endesa, Ferrovial and FCC. Full network operations: Argentina, Brazil, Chile, Colombia, Costa Rica, Ecuador, El Salvador, Germany, Guatemala, Mexico, Nicaragua, Panama, Peru, Puerto Rico, Spain, United Kingdom, Uruguay and Venezuela. Commercial offices: Austria, Belgium, Bulgaria, China, Czech Republic, Denmark, Estonia, France, Greece, Hungary, Ireland, Italy, Netherlands, Poland, Portugal, Romania, Singapore, Slovakia, Sweden, Switzerland and the USA. Remote operations and network points of presence: Finland, Latvia, Lithuania, Morocco, Norway and Slovenia. Strategic and industrial alliances: China Unicom. It has an extended service reach in 129 additional countries (source: Quarterly Report Jan-Dec 2008, page 9). In football, Telefónica is an official sponsor of several national teams, such as Spain (Movistar+) in Europe and Brazil (Vivo), Mexico, Colombia, Peru and Venezuela in the Americas. O2 is the main sponsor of the England national rugby team. From 2011, Telefónica has sponsored the Spanish UCI ProTour cycling team known as Movistar Team. 
Telefónica, through Movistar, was the title sponsor of Yamaha Motor Racing, a motorcycle racing team in MotoGP, from 2014 to 2018, and was also the title sponsor of Suzuki's factory team from 2000 to 2002, Sito Pons' Honda team from 1997 to 1999, and Fausto Gresini's Honda team from 2003 to 2005. Within Formula One, Telefónica was a major sponsor of the Renault F1 Team until Fernando Alonso's departure to McLaren in 2007, and was title sponsor of the Spanish Grand Prix from 2006 to 2010. Through its acquisition of O2, Telefónica also indirectly sponsored the BMW Sauber F1 Team. "F1 Racing" estimates these sponsorships amount to $18 million, $15 million and $23 million respectively. Telefónica also sponsored the Ford Focus WRC during the 2000-2002 seasons, when the Spanish rally driver Carlos Sainz drove for the team. The livery read "Telefónica Movistar", with stickers on the front bumper, the rear three-quarter panels and the rear spoiler. When Sainz moved to the Citroën team, Telefónica followed and sponsored the Citroën rally team in 2003. Movistar- and Telefónica-sponsored teams contested the round-the-world Volvo Ocean Race in the 2005-06, 2008-09 and 2011-12 events. Telefónica sponsors, through its Movistar brand, the Movistar Riders eSports team. Telefónica has received several fines resulting from convictions over unfair competition, abuse of its position as dominant provider, and antitrust violations, imposed through the Commission of Telecommunications, the European Commission, and Spanish tribunals. As of 2008, Telefónica had two more fines in court, with a value of 793 million euros. On 5 July 2007, the European Commission ordered Telefónica to pay a record antitrust fine of almost €152 million for activities in the Spanish broadband market which, according to European Union competition commissioner Neelie Kroes, "harmed Spanish consumers, Spanish businesses and the Spanish economy as a whole, and by extension Europe's economy". 
Several consumer groups in Spain have reported unnecessary delays in cancelling Telefónica's ADSL service. These consumer groups also claim that services continue to be billed after being cancelled and that service cancellation requests are ignored. This has led Spanish users to organize themselves into consumer groups such as the "Asociación de Internautas" and user communities like "Bandaancha" in order to defend themselves from Telefónica's abuses, and to give support and help to each other in their various complaints about Telefónica's unfair practices. The practices are claimed to include the complex process involved in cancelling lines; these line cancellation procedures are justified by Telefónica as a way of "defending customers against hoaxes". Furthermore, in areas where ADSL lines are scarce, there are also reports of customers who claim to have had their service cancelled or inexplicably transferred to another customer even though they have paid their bills. This practice is considered by some to be used by Telefónica in certain areas of Spain where there are few broadband connections. In February 2010, Telefónica CEO César Alierta said at a meeting in Bilbao, Spain, that his company intended to charge Google and other search engines for the use of its network. Alierta complained that such search engines were benefiting from the platform without contributing to the company's expenses and said that this trend would change in the near future. Additionally, he said that Telefónica would seek to push its own content. Telefónica is a supporter of the Hybrid Broadcast Broadband TV (HbbTV) initiative, which is promoting and establishing an open European standard for hybrid set-top boxes for the reception of broadcast TV and broadband multimedia applications with a single user interface, and has run pilot HbbTV services in Spain. Telefónica's Wayra subsidiary first launched in Latin America and Spain in 2011 to provide seed investment and mentoring to new companies. 
Since its inception, Wayra has backed over 300 companies, including Trustev, Venddo, Cloudwear and NFWare. As of 1 December 2014, the Firefox web browser includes the "Firefox Hello" WebRTC feature, which allows real-time voice and video online chats. "Firefox Hello" was co-developed with and is powered by Telefónica. Telefónica Dynamic Services offers mobile money using Sybase 365 mobile wallet systems, with a service centre based in Tel Aviv. In September 2017, Nokia and Telefónica signed an agreement to evaluate technologies enabling an efficient network evolution to 5G in line with Telefónica's business objectives. On 12 May 2017, Telefónica's computer network was critically affected by the WannaCry ransomware attack.
https://en.wikipedia.org/wiki?curid=30005
The Silence of the Lambs (film) The Silence of the Lambs is a 1991 American psychological horror film directed by Jonathan Demme from a screenplay written by Ted Tally, adapted from Thomas Harris' 1988 novel of the same name. The film stars Jodie Foster, Anthony Hopkins, Scott Glenn, Ted Levine, and Anthony Heald. In the film, Clarice Starling, a young FBI trainee, seeks the advice of the imprisoned Dr. Hannibal Lecter, a brilliant psychiatrist and cannibalistic serial killer, to apprehend another serial killer, known only as "Buffalo Bill", who skins his female victims' corpses. The novel was Harris' first to feature the character of Starling and his second to feature Lecter, and the film was the second adaptation of a Harris novel to feature Lecter, preceded by the Michael Mann-directed "Manhunter" (1986). "The Silence of the Lambs" was released on February 14, 1991, and grossed $272.7 million worldwide against its $19 million budget, becoming the fifth-highest-grossing film of 1991 worldwide. The film premiered at the 41st Berlin International Film Festival, where it competed for the Golden Bear, while Demme received the Silver Bear for Best Director. Critically acclaimed upon release, it became only the third film (the other two being "It Happened One Night" (1934) and "One Flew Over the Cuckoo's Nest" (1975)) to win Academy Awards in all the top five categories: Best Picture, Best Director, Best Actor, Best Actress, and Best Adapted Screenplay. It is also the first, and to date only, Best Picture winner widely considered to be a horror film, and one of only six such films to be nominated in the category. It is regularly cited by critics, film directors and audiences alike as one of the greatest and most influential films of all time. In 2018, "Empire" ranked it 48th on their list of the 500 greatest movies of all time. 
The American Film Institute ranked it as the fifth-greatest thriller film of all time, while the characters Clarice Starling and Hannibal Lecter were ranked as the greatest film heroine and villain, respectively. The film is considered "culturally, historically or aesthetically" significant by the U.S. Library of Congress and was selected for preservation in the National Film Registry in 2011. A sequel titled "Hannibal" was released in 2001, in which Hopkins reprised his role. It was followed by two prequels: "Red Dragon" (2002) and "Hannibal Rising" (2007). FBI trainee Clarice Starling is pulled from her training at the FBI Academy at Quantico, Virginia, by Jack Crawford of the Bureau's Behavioral Science Unit. He assigns her to interview Hannibal Lecter, a former psychiatrist and incarcerated cannibalistic serial killer, whose insight might prove useful in the pursuit of a psychopathic serial killer nicknamed "Buffalo Bill," who kills young women and then removes the skin from their bodies. Starling travels to the Baltimore State Hospital for the Criminally Insane, where she is led by Frederick Chilton to Lecter's solitary quarters. Although initially pleasant and courteous, Lecter grows impatient with Starling's attempts at "dissecting" him and rebuffs her. As she is leaving, a prisoner named Miggs flicks semen at her. Lecter, who considers this act "unspeakably ugly," calls Starling back and tells her to seek out an old patient of his. This leads her to a storage shed, where she discovers a man's severed head with a death's head moth lodged in its throat. She returns to Lecter, who tells her that the man is linked to Buffalo Bill. He offers to profile Buffalo Bill on the condition that he may be transferred away from Chilton, whom he detests. Buffalo Bill abducts a senator's daughter, Catherine Martin. 
Crawford authorizes Starling to offer Lecter a fake deal, promising a prison transfer if he provides information that helps them find Buffalo Bill and rescue Catherine. Instead, Lecter demands a "quid pro quo" from Starling, offering clues about Buffalo Bill in exchange for personal information. Starling tells Lecter about the murder of her father when she was ten years old. Chilton secretly records the conversation and reveals Starling's deceit before offering Lecter a deal of Chilton's own making. Lecter agrees and is flown to Memphis, where he verbally torments Senator Ruth Martin, and gives her misleading information on Buffalo Bill, including the name "Louis Friend." Starling notices that "Louis Friend" is an anagram of "iron sulfide"—fool's gold. She visits Lecter, who is now being held in a cage-like cell in a Tennessee courthouse, and asks for the truth. Lecter tells her that all the information she needs is contained in the case file. Rather than give her the real name, he insists that they continue their "quid pro quo" and she recounts a traumatic childhood incident where she was awakened by the sound of spring lambs being slaughtered on a relative's farm in Montana. Starling admits that she still sometimes wakes thinking she can hear lambs screaming, and Lecter speculates that she is motivated to save Catherine in the hope that it will end the nightmares. Lecter gives her back the case files on Buffalo Bill after their conversation is interrupted by Chilton and the police, who escort her from the building. Later that evening, Lecter kills his guards, escapes from his cell, and disappears. Starling analyzes Lecter's annotations to the case files and realizes that Buffalo Bill knew his first victim personally. Starling travels to the victim's hometown and discovers that Buffalo Bill was a tailor, with dresses and dress patterns identical to the patches of skin removed from each of his victims. 
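The "Louis Friend" clue hinges on simple letter rearrangement: the name contains exactly the letters of "iron sulfide" (pyrite, or fool's gold). This can be checked mechanically; the helper below is a hypothetical illustration, not anything from the film or novel:

```python
def is_anagram(a: str, b: str) -> bool:
    """True if the two phrases use exactly the same letters,
    ignoring case, spaces and punctuation."""
    letters = lambda s: sorted(ch for ch in s.lower() if ch.isalpha())
    return letters(a) == letters(b)

print(is_anagram("Louis Friend", "iron sulfide"))  # prints True
```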
She telephones Crawford to inform him that Buffalo Bill is trying to form a "woman suit" out of real skin, but Crawford is already en route to make an arrest, having cross-referenced Lecter's notes with hospital archives and finding an autogynephilic man named Jame Gumb, who once applied unsuccessfully for a sex-change operation, believing himself to be a transsexual. Starling continues interviewing friends of Buffalo Bill's first victim in Ohio, while Crawford leads an FBI HRT team to Gumb's address in Illinois. The house in Illinois is empty, and Starling is led to the house of "Jack Gordon," whom she realizes is actually Jame Gumb, again by finding a death's head moth. She pursues him into his multi-room basement, where she discovers that Catherine is still alive, but trapped in a dry well. After turning off the basement lights, Gumb stalks Starling in the dark with night-vision goggles, but gives his position away when he cocks his revolver. Starling reacts just in time and fires all of her rounds, killing Gumb. Some time later, at the FBI Academy graduation party, Starling receives a phone call from Lecter, who is at an airport in Bimini. He assures her that he does not plan to pursue her and asks her to return the favor, which she says she cannot do. Lecter then hangs up the phone, saying that he is "having an old friend for dinner," and starts following a newly arrived Chilton before disappearing into the crowd. "The Silence of the Lambs" is based on Thomas Harris's 1988 novel of the same name and is the second film to feature the character Hannibal Lecter following the 1986 film "Manhunter". Prior to the novel's release, Orion Pictures partnered with Gene Hackman to bring the novel to the big screen. With Hackman set to direct and possibly star in the role of Crawford, negotiations were made to split the $500,000 cost of rights between Hackman and the studio. 
In addition to securing the rights to the novel, producers also had to acquire the rights to the name "Hannibal Lecter," which were owned by "Manhunter" producer Dino De Laurentiis. Owing to the financial failure of the earlier film, De Laurentiis lent the character rights to Orion Pictures for free. In November 1987, Ted Tally was brought on to write the adaptation; Tally had previously crossed paths with Harris many times, and his interest in adapting "The Silence of the Lambs" originated from receiving an advance copy of the book from Harris himself. When Tally was about halfway through the first draft, Hackman withdrew from the project and financing fell through. However, Orion Pictures co-founder Mike Medavoy urged Tally to keep writing while the studio itself took care of financing and searched for a replacement director. Orion Pictures subsequently sought director Jonathan Demme to helm the project. With the screenplay not yet completed, Demme signed on after reading the novel. From there, the project quickly took off, as Tally explained, "[Demme] read my first draft not long after it was finished, and we met, then I was just startled by the speed of things. We met in May 1989 and were shooting in November. I don't remember any big revisions." Jodie Foster was interested in playing the role of Clarice Starling immediately after reading the novel. However, in spite of the fact that Foster had just won an Academy Award for her performance in "The Accused" (1988), Demme was not convinced that she was right for the role. Demme, who had just collaborated with Michelle Pfeiffer on "Married to the Mob" (1988), made her his first choice for the role of Starling, but Pfeiffer turned it down, later saying, "It was a difficult decision, but I got nervous about the subject matter." Still not convinced, Demme approached Meg Ryan, who also turned the role down because of its gruesome themes, and then Laura Dern, whom the studio doubted was a bankable enough choice. 
As a result, Foster was awarded the role due to her passion towards the character. For the role of Dr. Hannibal Lecter, Demme originally approached Sean Connery. After the actor turned it down, Anthony Hopkins was then offered the role based on his performance in "The Elephant Man" (1980). Other actors considered for the role included Al Pacino, Robert De Niro, Dustin Hoffman, Derek Jacobi and Daniel Day-Lewis. The mask Hopkins wore became an iconic symbol for the film. It was created by Ed Cubberly, of Frenchtown, New Jersey, who had made numerous masks for NHL goalies. Gene Hackman was originally cast to play Jack Crawford, the Agent-in-Charge of the Behavioral Science Unit of the FBI in Quantico, Virginia, but he found the script "too violent." Scott Glenn was then cast in the role. In preparation for the role, Glenn met with John E. Douglas. Douglas gave Glenn a tour of the Quantico facility and also played for him an audio tape containing various recordings that serial killers Lawrence Bittaker and Roy Norris had made of themselves raping and torturing a 16-year-old girl. According to Douglas, Glenn wept as he listened to the recordings and even changed his liberal stance on the death penalty. Principal photography on "The Silence of the Lambs" began on November 15, 1989, and wrapped on March 1, 1990. Filming primarily took place in and around Pittsburgh, Pennsylvania, with some scenes shot in nearby northern West Virginia. The Victorian home in Perryopolis, Pennsylvania, used as Buffalo Bill's home in the film went up for sale in August 2015 for $300,000. The home sat on the market for nearly a year before finally selling for $195,000. The exterior of the Western Center near Canonsburg, Pennsylvania, served as the setting for the Baltimore State Hospital for the Criminally Insane. In what was a rare act of cooperation at the time, the FBI allowed scenes to be filmed at the FBI Academy in Quantico; some FBI staff members even acted in bit parts. 
The musical score for "The Silence of the Lambs" was composed by Howard Shore, who would also go on to collaborate with Demme on "Philadelphia". Recorded in Munich during the latter half of the summer of 1990, the score was performed by the Munich Symphony Orchestra. "I tried to write in a way that goes right into the fabric of the movie," explained Shore on his approach. "I tried to make the music just fit in. When you watch the movie you are not aware of the music. You get your feelings from all elements simultaneously, lighting, cinematography, costumes, acting, music. Jonathan Demme was very specific about the music." A soundtrack album was released by MCA Records on February 5, 1991. Music from the film was later used in the trailers for its 2001 sequel, "Hannibal". "The Silence of the Lambs" was released on February 14, 1991, grossing $14 million during its opening weekend. At the time it closed on October 10, 1991, the film had grossed $131 million domestically with a total worldwide gross of $273 million. It was the fifth-highest-grossing film of 1991 worldwide. The film was released on VHS on August 27, 2002 and on DVD on March 6, 2001 by MGM. "The Silence of the Lambs" was a sleeper hit that gradually gained widespread success and critical acclaim. Foster, Hopkins, and Levine garnered much acclaim for their performances. Review aggregator Rotten Tomatoes reports that 96% of 97 film critics have given the film a positive review, with an average rating of 8.87/10. The website's critical consensus reads: "Director Jonathan Demme's smart, taut thriller teeters on the edge between psychological study and all-out horror, and benefits greatly from stellar performances by Anthony Hopkins and Jodie Foster." Metacritic, another review aggregator, assigned the film a weighted average score of 85 out of 100, based on 19 reviews from mainstream critics, indicating "universal acclaim". 
Audiences polled by CinemaScore gave the film an average grade of "A–" on an A+ to F scale. Roger Ebert of the "Chicago Sun-Times", specifically mentioned the "terrifying qualities" of Hannibal Lecter. Ebert later added the film to his list of "The Great Movies", recognizing the film as a "horror masterpiece" alongside such classics as "Nosferatu", "Psycho", and "Halloween". However, the film is also notable for being one of two multi-Academy Award winners (the other being "Unforgiven") disapproved of by Ebert's colleague, Gene Siskel. Writing for "Chicago Tribune", Siskel said, "Foster's character, who is appealing, is dwarfed by the monsters she is after. I'd rather see her work on another case." The film won the Big Five Academy Awards: Best Picture, Best Director (Demme), Best Actor (Hopkins), Best Actress (Foster), and Best Adapted Screenplay (Ted Tally), making it only the third film in history to accomplish that feat. It was also nominated for Best Sound (Tom Fleischman and Christopher Newman) and Best Film Editing, but lost to "" and "JFK", respectively. Other awards include being named Best Film by the National Board of Review of Motion Pictures, CHI Awards and PEO Awards. Demme won the Silver Bear for Best Director at the 41st Berlin International Film Festival and was nominated for the Golden Globe Award for Best Director. The film was nominated for the Grand Prix of the Belgian Film Critics Association. It was also nominated for the British Academy Film Award for Best Film. Screenwriter Ted Tally received an Edgar Award for Best Motion Picture Screenplay. The film was awarded Best Horror Film of the Year during the 2nd Horror Hall of Fame telecast, with Vincent Price presenting the award to the film's executive producer Gary Goetzman. In 1998, the film was listed as one of the 100 greatest films in the past 100 years by the American Film Institute. 
In 2006, at the Key Art Awards, the original poster for "The Silence of the Lambs" was named best film poster "of the past 35 years". "The Silence of the Lambs" placed seventh on Bravo's "The 100 Scariest Movie Moments" for Lecter's escape scene. The American Film Institute named Hannibal Lecter (as portrayed by Hopkins) the number one film villain of all time and Clarice Starling (as portrayed by Foster) the sixth-greatest film hero of all time. In 2011, ABC aired a prime-time special, "", that counted down the best films chosen by fans based on results of a poll conducted by ABC and "People" magazine. "The Silence of the Lambs" was selected as the best suspense/thriller and Dr. Hannibal Lecter was selected as the fourth-greatest film character. The film and its characters have also appeared in several AFI "100 Years" lists. In 2015, "Entertainment Weekly"s 25th anniversary year, it included "The Silence of the Lambs" in its list of the 25 best movies made since the magazine's beginning. Upon its release, "The Silence of the Lambs" was criticized by members of the LGBT community for its portrayal of Buffalo Bill as bisexual and transgender, although Bill's sexual orientation is never explicitly stated and Lecter expressly states Bill is "not really transsexual." In response to the critiques, Demme replied that Buffalo Bill "wasn't a gay character. He was a tormented man who hated himself and wished he was a woman because that would have made him as far away from himself as he possibly could be." Demme added that he "came to realize that there is a tremendous absence of positive gay characters in movies". Much of the criticism was directed towards Foster, who critics alleged was herself a lesbian. In a 1992 interview with "Playboy" magazine, the feminist and women's rights advocate Betty Friedan stated: "I thought it was absolutely outrageous that "The Silence of the Lambs" won four Oscars. […] I'm not saying that the movie shouldn't have been shown. 
I'm not denying the movie was an artistic triumph, but it was about the evisceration, the skinning alive of women. That is what I find offensive. Not the "Playboy" centerfold."
https://en.wikipedia.org/wiki?curid=30006
The Matrix The Matrix is a 1999 science fiction action film written and directed by the Wachowskis. It stars Keanu Reeves, Laurence Fishburne, and Carrie-Anne Moss, with Hugo Weaving and Joe Pantoliano in supporting roles, and is the first installment in the "Matrix" franchise. It depicts a dystopian future in which humanity is unknowingly trapped inside a simulated reality, the Matrix, created by intelligent machines to distract humans while using their bodies as an energy source. When computer programmer Thomas Anderson, under the hacker alias "Neo", uncovers the truth, he "is drawn into a rebellion against the machines" along with other people who have been freed from the Matrix. "The Matrix" is an example of the cyberpunk subgenre of science fiction. The Wachowskis' approach to action scenes was influenced by Japanese animation and martial arts films, and the film's use of fight choreographers and wire fu techniques from Hong Kong action cinema influenced subsequent Hollywood action film productions. The film popularized the "bullet time" visual effect, by which the action within a shot progresses in slow motion while the camera appears to move at normal speed, allowing the sped-up movements of certain characters to be perceived normally. It was also influential for its impact on superhero films. While some critics have praised the film for its handling of difficult subjects, others have said the deeper themes are largely overshadowed by its action scenes. "The Matrix" was first released in the United States on March 31, 1999, and grossed over $460 million worldwide. It was well received by many critics and won four Academy Awards, as well as other accolades, including BAFTA Awards and Saturn Awards. "The Matrix" was praised for its innovative visual effects, action sequences, cinematography and entertainment value. The film is considered to be among the best science fiction films of all time, and was added to the National Film Registry for preservation in 2012. 
The success of the film led to the release of two feature film sequels in 2003, "The Matrix Reloaded" and "The Matrix Revolutions", which were also written and directed by the Wachowskis. The "Matrix" franchise was further expanded through the production of comic books, video games and animated short films, with which the Wachowskis were heavily involved. The franchise has also inspired books and theories expanding on some of the religious and philosophical ideas alluded to in the films. A fourth film is scheduled for release on April 1, 2022. At an abandoned hotel within a major city, a woman (later revealed to be Trinity) is cornered by a police squad but overpowers them with superhuman abilities. She flees, pursued by the police and a group of mysterious suited Agents capable of similar superhuman feats. She answers a ringing public telephone and vanishes an instant before the Agents crash a truck into the booth. Computer programmer Thomas Anderson, known in the hacking scene by his alias "Neo", feels something is wrong with the world and is puzzled by repeated online encounters with the phrase "the Matrix". Trinity contacts him and tells him a man named Morpheus has the answers he seeks. A team of Agents and police, led by Agent Smith, arrives at Neo's workplace searching for him. Despite Morpheus's attempt to guide Neo to safety via telephone, Neo is captured and coerced into helping the Agents locate Morpheus, whom they regard as a "known terrorist". Undeterred, Neo later meets Morpheus, who offers him a choice between two pills; red to reveal the truth about the Matrix, and blue to return him to his former life. After Neo swallows the red pill, his reality falls apart, and he awakens in a liquid-filled pod among countless others attached to an elaborate electrical system. He is retrieved and brought aboard Morpheus's hovercraft, the "Nebuchadnezzar". As Neo recuperates from a lifetime of physical inactivity from the pod, Morpheus explains the truth. 
In the early 21st century, there was a war between humans and intelligent machines. When humans blocked the machines' access to solar energy, the machines harvested the humans' bioelectric power, keeping them pacified in the Matrix, a shared simulated reality modeled after the world as it existed at the end of the 20th century. The machines have taken over the world; the city of Zion is the last refuge of free humans. Morpheus and his crew are a group of rebels who hack into the Matrix to "unplug" enslaved humans and recruit them; their understanding of the Matrix's simulated nature enables them to bend its physical laws. Morpheus warns Neo that death within the Matrix kills the physical body, and the Agents he met are powerful sentient computer programs that eliminate threats to the system, while machines called Sentinels destroy rebels in the real world. Neo's prowess during virtual combat training lends credibility to Morpheus's belief that Neo is "the One", an especially powerful human prophesied to free humanity and end the war. The group enters the Matrix to visit the Oracle, the prophet who predicted the emergence of the One. She suggests to Neo that he is not the One and warns that he will have to choose between Morpheus's life and his own. The group is ambushed by Agents and tactical police, tipped by Cypher, a disgruntled crew member who betrays Morpheus in exchange for a comfortable life in the Matrix. Morpheus allows himself to be captured so the rest of the crew can escape. Cypher exits the Matrix first and murders several crew members as they lie defenseless in the real world. Before he can kill Neo, Cypher is killed by Tank, a crewman whom he only wounded. In the Matrix, the Agents interrogate Morpheus to learn his access codes to the mainframe computer in Zion. Tank proposes killing Morpheus to prevent this, but Neo resolves to return to the Matrix to rescue Morpheus, as prophesied by the Oracle; Trinity insists she accompany him. 
While rescuing Morpheus, Neo gains confidence in his abilities, performing feats comparable to the Agents'. Morpheus and Trinity exit the Matrix, but Smith ambushes and kills Neo before he can leave. As a group of Sentinels attack the "Nebuchadnezzar", Trinity whispers to Neo that he cannot be dead, because she loves him and the Oracle told her she would fall in love with the One. She kisses Neo and he revives with newfound power to perceive and control the Matrix. He effortlessly defeats Smith, and leaves the Matrix just in time for the ship's electromagnetic pulse to disable the Sentinels. Later, Neo makes a telephone call inside the Matrix, promising the machines that he will show their prisoners "a world where anything is possible". He hangs up and flies into the sky. In 1994, the Wachowskis presented the script for the film "Assassins" to Warner Bros. Pictures. After Lorenzo di Bonaventura, the president of production of the company at the time, read the script, he decided to buy rights to it and included two more pictures, "Bound" and "The Matrix", in the contract. The first movie the Wachowskis directed, "Bound", then became a critical success. Using this momentum, they later asked to direct "The Matrix". In 1996 the Wachowskis pitched the role of Neo to Will Smith. Smith explained on his YouTube channel that the idea was for him to be Neo, while Morpheus was to be played by Val Kilmer. He later explained that he did not quite understand the concept and he turned down the role to instead film "Wild Wild West". Producer Joel Silver soon joined the project. Although the project had key supporters like Silver and Di Bonaventura to influence the company, "The Matrix" was still a huge investment for Warner Bros, which had to invest $60 million to create a movie with deep philosophical ideas and difficult special effects. 
The Wachowskis therefore hired underground comic book artists Geof Darrow and Steve Skroce to draw a 600-page, shot-by-shot storyboard for the entire film. The storyboard eventually earned the studio's approval, and it was decided to film in Australia to make the most of the budget. Soon, "The Matrix" became a co-production of Warner Bros. and Village Roadshow Pictures. The cast were required to be able to understand and explain "The Matrix". French philosopher Jean Baudrillard's "Simulacra and Simulation" was required reading for most of the principal cast and crew. Reeves stated that the Wachowskis had him read "Simulacra and Simulation", Kevin Kelly's "", and Dylan Evans’s ideas on evolutionary psychology even before they opened up the script, and eventually he was able to explain all the philosophical nuances involved. Moss commented that she had difficulty with this process. The directors had long been admirers of Hong Kong action cinema, so they decided to hire the Chinese martial arts choreographer and film director Yuen Woo-ping to work on fight scenes. To prepare for the wire fu, the actors had to train hard for several months. The Wachowskis first scheduled four months for training. Yuen was optimistic but then began to worry when he realized how unfit the actors were. Yuen let their body style develop and then worked with each actor's strength. He built on Reeves's diligence, Fishburne's resilience, Weaving's precision, and Moss's feminine grace. Yuen designed Moss's moves to suit her deftness and lightness. Prior to the pre-production, Reeves suffered a two-level fusion of his cervical spine which had begun to cause paralysis in his legs, requiring him to undergo neck surgery. He was still recovering by the time of pre-production, but he insisted on training, so Yuen let him practice punches and lighter moves. Reeves trained hard and even requested training on days off. 
However, the surgery still made him unable to kick for two out of four months of training. As a result, Reeves did not kick much in the film. Weaving had to undergo hip surgery after he sustained an injury during the training process. In the film, the code that composes the Matrix itself is frequently represented as downward-flowing green characters. This code uses a custom typeface designed by Simon Whiteley, which includes mirror images of half-width kana characters and Western Latin letters and numerals. In a 2017 interview at CNET, he attributed the design to his wife, who is from Japan, and added, "I like to tell everybody that The Matrix's code is made out of Japanese sushi recipes". The color green reflects the green tint commonly used on early monochrome computer monitors. Lynne Cartwright, the Visual Effects Supervisor at Animal Logic, supervised the creation of the film's opening title sequence, as well as the general look of the Matrix code throughout the film, in collaboration with Lindsay Fleay and Justen Marshall. The portrayal resembles the opening credits of the 1995 Japanese cyberpunk film, "Ghost in the Shell", which had a strong influence on the "Matrix" series (see below). It was also used in the subsequent films, on the related website, and in the game "", and its drop-down effect is reflected in the design of some posters for the "Matrix" series. The code received the Runner-up Award in the 1999 Jesse Garson Award for In-film typography or opening credit sequence. "The Matrix"s production designer, Owen Paterson, used methods to distinguish the "real world" and the Matrix in a pervasive way. The production design team generally placed a bias towards the Matrix code's distinctive green color in scenes set within the simulation, whereas there is an emphasis on the color blue during scenes set in the "real world". 
In addition, the Matrix scenes' sets were slightly more decayed, monolithic, and grid-like, to convey the cold, logical and artificial nature of that environment. For the "real world", the actors' hair was less styled, their clothing had more textile content, and the cinematographers used longer lenses to soften the backgrounds and emphasize the actors. The "Nebuchadnezzar" was designed to have a patched-up look, instead of clean, cold and sterile space ship interior sets as used on films like "Star Trek". The wires were made visible to show the ship's working internals, and each composition was carefully designed to convey the ship as "a marriage between Man and Machine". For the scene when Neo wakes up in the pod connected to the Matrix, the pod was constructed to look dirty, used, and sinister. During the testing of a breathing mechanism in the pod, the tester suffered hypothermia in under eight minutes, so the pod had to be heated. Kym Barrett, costume designer, said that she defined the characters and their environment by their costume. For example, Reeves' office costume was designed for Thomas Anderson to look uncomfortable, disheveled, and out of place. Barrett sometimes used three types of fabric for each costume, and also had to consider the practicality of the acting. The actors needed to perform martial art actions in their costume, hang upside-down without people seeing up their clothing, and be able to work the wires while strapped into the harnesses. For Trinity, Barrett experimented with how each fabric absorbed and reflected different types of light, and was eventually able to make Trinity's costume mercury-like and oil-slick to suit the character. For the Agents, their costume was designed to create a secret service, undercover look, resembling the film "JFK" and classic men in black. The sunglasses, a staple of the film's esthetics, were commissioned for the film by designer Richard Walker from sunglass maker Blinde Design. 
All but a few scenes were filmed at Fox Studios in Sydney, and in the city itself, although recognizable landmarks were not included in order to maintain the impression of a generic American city. The filming helped establish New South Wales as a major film production center. Filming began in March 1998 and wrapped in August 1998; principal photography took 118 days. Because of Reeves' neck injury, some of the action scenes had to be rescheduled to wait for his full recovery. As a result, the filming began with scenes that did not require much physical exertion, such as the scene in Thomas Anderson's office, the interrogation room, or the car ride in which Neo is taken to see the Oracle. Locations for these scenes included Martin Place's fountain in Sydney, half-way between it and the adjacent Colonial Building, and the Colonial Building itself. During the scene set on a government building rooftop, the team filmed extra footage of Neo dodging bullets in case the bullet time process did not work. The bullet-time fight scene was filmed on the roof of Symantec Corporation building in Kent Street, opposite Sussex Street. Moss performed the shots featuring Trinity at the beginning of the film and all the wire stunts herself. The rooftop set that Trinity uses to escape from Agent Brown early in the film was left over from the production of "Dark City", which has prompted comments due to the thematic similarities of the films. During the rehearsal of the lobby scene, in which Trinity runs on a wall, Moss injured her leg and was ultimately unable to film the shot in one take. She stated that she was under a lot of pressure at the time and was devastated when she realized that she would be unable to do it. The dojo set was built well before the actual filming. During the filming of these action sequences, there was significant physical contact between the actors, earning them bruises. 
Because of Reeves's injury and his insufficient training with wires prior to the filming, he was unable to perform the triple kicks satisfactorily and became frustrated with himself, causing the scene to be postponed. The scene was shot successfully a few days later, with Reeves using only three takes. Yuen altered the choreography and made the actors pull their punches in the last sequence of the scene, creating a training feel. The filmmakers originally planned to shoot the subway scene in an actual subway station, but the complexity of the fight and related wire work required shooting the scene on a set. The set was built around an existing train storage facility, which had real train tracks. While filming the scene in which Neo slammed Smith into the ceiling, Chad Stahelski, Reeves' stunt double, sustained several injuries, including broken ribs and knees and a dislocated shoulder. Another stuntman was injured by a hydraulic puller during a shot where Neo was slammed into a booth. The office building in which Smith interrogated Morpheus was a large set, and the outside view from inside the building was a large, three-story-high cyclorama. The helicopter was a full-scale, lightweight mock-up suspended by a wire rope and operated by a tilting mechanism mounted to the studio roof beams. The helicopter had a real minigun side-mounted to it, which was set to cycle at half its regular firing rate of 3,000 rounds per minute. To prepare for the scene in which Neo wakes up in a pod, Reeves lost 15 pounds and shaved his whole body to give Neo an emaciated look. The scene in which Neo fell into the sewer system concluded the principal photography. According to "The Art of the Matrix", at least one filmed scene and a variety of short pieces of action were omitted from the final cut of the film. The film is known for popularizing a visual effect known as "bullet time", which allows a shot to progress in slow-motion while the camera appears to move through the scene at normal speed. 
Bullet time has been described as "a visual analogy for privileged moments of consciousness within the Matrix", and throughout the film, the effect is used to illustrate characters' exertion of control over time and space. The Wachowskis first imagined an action sequence that slowed time while the camera pivoted rapidly around the subjects, and proposed the effect in their screenplay for the film. When John Gaeta read the script, he pleaded with an effects producer at Mass.Illusion to let him work on the project, and created a prototype that led to him becoming the film's visual effects supervisor. The method used for creating these effects involved a technically expanded version of an old art photography technique known as time-slice photography, in which an array of cameras are placed around an object and triggered simultaneously. Each camera captures a still picture, contributing one frame to the video sequence, which creates the effect of "virtual camera movement"; the illusion of a viewpoint moving around an object that appears frozen in time. The bullet time effect is similar but slightly more complicated, incorporating temporal motion so that rather than appearing totally frozen, the scene progresses in slow and variable motion. The cameras' positions and exposures were previsualized using a 3D simulation. Instead of firing the cameras simultaneously, the visual effect team fired the cameras fractions of a second after each other, so that each camera could capture the action as it progressed, creating a super slow-motion effect. When the frames were put together, the resulting slow-motion effects reached a frame frequency of 12,000 per second, as opposed to the normal 24 frames per second of film. Standard movie cameras were placed at the ends of the array to pick up the normal speed action before and after. 
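The staggered-trigger arithmetic described above can be sketched in a few lines of Python. This is an illustrative sketch only: the 12,000 and 24 frames-per-second figures come from the text, while the camera count and the function names are assumptions made for the example.

```python
# Illustrative sketch of time-slice trigger timing (not production code).
# Frame rates are taken from the article; the camera count is assumed.

def trigger_offsets(num_cameras: int, effective_fps: float) -> list[float]:
    """Fire each still camera a fixed fraction of a second after the
    previous one, so the array samples the action at `effective_fps`."""
    interval = 1.0 / effective_fps          # seconds between triggers
    return [i * interval for i in range(num_cameras)]

def slowdown_factor(effective_fps: float, playback_fps: float = 24.0) -> float:
    """How much slower the action appears when the captured frames are
    played back at normal film speed."""
    return effective_fps / playback_fps

offsets = trigger_offsets(num_cameras=120, effective_fps=12_000)
print(offsets[1])               # gap between adjacent cameras: 1/12000 s
print(slowdown_factor(12_000))  # 500.0 - action appears 500x slower at 24 fps
```

Playing frames captured at an effective 12,000 per second back at the standard 24 frames per second is what yields the extreme slow motion, while the spatial spacing of the cameras supplies the apparent camera movement.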
Because the cameras circle the subject almost completely in most of the sequences, computer technology was used to edit out the cameras that appeared in the background on the other side. To create backgrounds, Gaeta hired George Borshukov, who created 3D models based on the geometry of buildings and used the photographs of the buildings themselves as texture. The photo-realistic surroundings generated by this method were incorporated into the bullet time scene, and algorithms based on optical flow were used to interpolate between the still images to produce a fluent dynamic motion; the computer-generated "lead in" and "lead out" slides were filled in between frames in sequence to get an illusion of orbiting the scene. Manex Visual Effects used a cluster farm running the Unix-like operating system FreeBSD to render many of the film's visual effects. Manex also handled creature effects, such as Sentinels and machines in real world scenes; Animal Logic created the code hallway and the exploding Agent at the end of the film. DFilm managed scenes that required heavy use of digital compositing, such as Neo's jump off a skyscraper and the helicopter crash into a building. The ripple effect in the latter scene was created digitally, but the shot also included practical elements, and months of extensive research were needed to find the correct kind of glass and explosives to use. The scene was shot by colliding a quarter-scale helicopter mock-up into a glass wall wired to concentric rings of explosives; the explosives were then triggered in sequence from the center outward, to create a wave of exploding glass. The photogrammetric and image-based computer-generated background approaches in "The Matrix"s bullet time evolved into innovations unveiled in the sequels "The Matrix Reloaded" and "The Matrix Revolutions". 
The method of using real photographs of buildings as texture for 3D models eventually led the visual effect team to digitize all data, such as scenes, characters' motions and expressions. It also led to the development of "Universal Capture", a process which samples and stores facial details and expressions at high resolution. With these highly detailed collected data, the team were able to create virtual cinematography in which characters, locations, and events can all be created digitally and viewed through virtual cameras, eliminating the restrictions of real cameras. Dane A. Davis was responsible for creating the sound effects for the film. The fight scene sound effects, such as the whipping sounds of punches, were created by recording thin metal rods being whipped through the air and then editing the recordings. The sound of the pod containing a human baby closing required almost fifty sounds put together. The film's score was composed by Don Davis. He noted that mirrors appear frequently in the film: reflections of the blue and red pills are seen in Morpheus's glasses; Neo's capture by Agents is viewed through the rear-view mirror of Trinity's Triumph Speed Triple motorcycle; Neo observes a broken mirror mending itself; reflections warp as a spoon is bent; the reflection of a helicopter is visible as it approaches a skyscraper. Davis focused on this theme of reflections when creating his score, alternating between sections of the orchestra and attempting to incorporate contrapuntal ideas. Davis' score combines orchestral, choral and synthesizer elements; the balance between these elements varies depending on whether humans or machines are the dominant subject of a given scene. In addition to Davis' score, "The Matrix" soundtrack also features music from acts such as Rammstein, Rob Dougan, Rage Against the Machine, Propellerheads, Ministry, Lunatic Calm, Deftones, Monster Magnet, The Prodigy, Rob Zombie, Meat Beat Manifesto, and Marilyn Manson. 
The film earned $171,479,930 (37.0%) in the United States and Canada and $292,037,453 (63.0%) in other countries, for a worldwide total of $463,517,383. In North America, it became the fifth highest grossing film of 1999 and the highest grossing R-rated film of 1999. Worldwide it was the fourth highest grossing film of the year. It was placed 122nd on the list of highest grossing films of all time, and the second highest grossing film in the "Matrix" franchise after "The Matrix Reloaded" ($742.1 million). "The Matrix" was praised by many critics, as well as filmmakers, and authors of science fiction, especially for its "spectacular action" scenes and its "groundbreaking special effects". Some have described "The Matrix" as one of the best science fiction films of all time; "Entertainment Weekly" called "The Matrix" "the most influential action movie of the generation". There have also been those, including philosopher William Irwin, who have suggested that the film explores significant philosophical and spiritual themes. Review aggregator Rotten Tomatoes reported that 87% of reviews were positive, with a weighted average score of 7.7/10 based upon a sample of 151 reviews. The site's critical consensus reads, "Thanks to the Wachowskis' imaginative vision, "The Matrix" is a smartly crafted combination of spectacular action and groundbreaking special effects". At Metacritic, which assigns a rating out of 100 to reviews from mainstream critics, the film received a score of 73 based on 35 reviews, indicating "generally favorable reviews." Audiences polled by CinemaScore gave the film an average grade of "A-" on an A+ to F scale. It ranked 323rd among critics, and 546th among directors, in the 2012 "Sight & Sound" polls of the greatest films ever made. Philip Strick commented in "Sight & Sound" that if the Wachowskis "claim no originality of message, they are startling innovators of method," praising the film's details and its "broadside of astonishing images". 
Roger Ebert gave the film three stars out of four; he praised the film's visuals and premise, but disliked the third act's focus on action. Similarly, "Time Out" praised the "entertainingly ingenious" switches between different realities, Hugo Weaving's "engagingly odd" performance, and the film's cinematography and production design, but concluded, "the promising premise is steadily wasted as the film turns into a fairly routine action pic ... yet another slice of overlong, high concept hokum." Jonathan Rosenbaum of the "Chicago Reader" reviewed the film negatively, criticizing it as "simpleminded fun for roughly the first hour, until the movie becomes overwhelmed by its many sources ... There's not much humor to keep it all life-size, and by the final stretch it's become bloated, mechanical, and tiresome." Ian Nathan of "Empire" described Carrie-Anne Moss as "a major find", praised the "surreal visual highs" enabled by the bullet time (or "flo-mo") effect, and described the film as "technically mind-blowing, style merged perfectly with content and just so damn cool". Nathan remarked that although the film's "looney plot" would not stand up to scrutiny, that was not a big flaw because ""The Matrix" is about pure experience". Maitland McDonagh said in her review for "TV Guide" that the Wachowskis' "through-the-looking-glass plot... manages to work surprisingly well on a number of levels: as a dystopian sci-fi thriller, as a brilliant excuse for the film's lavish and hyperkinetic fight scenes, and as a pretty compelling call to the dead-above-the-eyeballs masses to unite and cast off their chains... This dazzling pop allegory is steeped in a dark, pulpy sensibility that transcends nostalgic pastiche and stands firmly on its own merits." "Salon"s reviewer Andrew O'Hehir acknowledged that although "The Matrix" is a fundamentally immature and unoriginal film ("It lacks anything like adult emotion... 
all this pseudo-spiritual hokum, along with the over-ramped onslaught of special effects—some of them quite amazing—will hold 14-year-old boys in rapture, not to mention those of us of all ages and genders who still harbor a 14-year-old boy somewhere inside"), he concluded, "as in "Bound", there's an appealing scope and daring to the Wachowskis' work, and their eagerness for more plot twists and more crazy images becomes increasingly infectious. In a limited and profoundly geeky sense, this might be an important and generous film. The Wachowskis have little feeling for character or human interaction, but their passion for "movies"—for making them, watching them, inhabiting their world—is pure and deep." Filmmakers and science fiction creators alike generally took a complimentary perspective of "The Matrix". William Gibson, a key figure in cyberpunk fiction, called the film "an innocent delight I hadn't felt in a long time," and stated, "Neo is my favourite-ever science fiction hero, absolutely." Joss Whedon called the film "my number one" and praised its storytelling, structure and depth, concluding, "It works on whatever level you want to bring to it." Darren Aronofsky commented, "I walked out of "The Matrix" ... and I was thinking, 'What kind of science fiction movie can people make now?' The [Wachowskis] basically took all the great sci-fi ideas of the 20th century and rolled them into a delicious pop culture sandwich that everyone on the planet devoured." M. Night Shyamalan expressed admiration for the Wachowskis, stating, "Whatever you think of "The Matrix", every shot is there because of the passion they have! You can see they argued it out!". Simon Pegg said that "The Matrix" provided "the excitement and satisfaction that "" failed to inspire. "The Matrix" seemed fresh and cool and visually breathtaking; making wonderful, intelligent use of CGI to augment the on-screen action, striking a perfect balance of the real and the hyperreal. 
It was possibly the coolest film I had ever seen." Quentin Tarantino counted "The Matrix" as one of his twenty favorite movies from 1992 to 2009. James Cameron called it "one of the most profoundly fresh science fiction films ever made". Christopher Nolan described it as "an incredibly palpable mainstream phenomenon that made people think, Hey, what if this isn't real?". Chad Stahelski, who had been a stunt double on "The Matrix" prior to directing Reeves in the "John Wick" series, acknowledged the film's strong influence on the "Wick" films, and commented, ""The Matrix" literally changed the industry. The influx of martial-arts choreographers and fight coordinators now make more, and are more prevalent and powerful in the industry, than stunt coordinators. "The Matrix" revolutionized that. Today, action movies want their big sequences designed around the fights." "The Matrix" received Academy Awards for film editing, sound effects editing, visual effects, and sound. The filmmakers were competing against other films with established franchises, like "", yet they won all four of their nominations. "The Matrix" also received BAFTA awards for Best Sound and Best Achievement in Special Visual Effects, in addition to nominations in the cinematography, production design and editing categories. In 1999, it won Saturn Awards for Best Science Fiction Film and Best Direction. The film's mainstream success led to the making of two sequels, "The Matrix Reloaded" and "The Matrix Revolutions", both directed by the Wachowskis. These were filmed back-to-back in one shoot and released on separate dates in 2003. The first film's introductory tale is succeeded by the story of the impending attack on the human enclave of Zion by a vast machine army. The sequels also incorporate longer and more ambitious action scenes, as well as improvements in bullet time and other visual effects. 
Also released was "The Animatrix", a collection of nine animated short films, many of which were created in the same Japanese animation style that was a strong influence on the live action trilogy. "The Animatrix" was overseen and approved by the Wachowskis, who only wrote four of the segments themselves but did not direct any of them; much of the project was developed by notable figures from the world of anime. The franchise also contains three video games: "Enter the Matrix" (2003), which contains footage shot specifically for the game and chronicles events taking place before and during "The Matrix Reloaded"; "The Matrix Online" (2004), an MMORPG which continued the story beyond "The Matrix Revolutions"; and "" (2005), which focuses on Neo's journey through the trilogy of films. The franchise also includes "The Matrix Comics", a series of comics and short stories set in the world of "The Matrix", written and illustrated by figures from the comics industry. Most of the comics were originally presented for free on the official "Matrix" website; they were later republished, along with some new material, in two printed trade paperback volumes, called "The Matrix Comics, Vol 1 and Vol 2". In March 2017, Warner Bros. was in early stages of developing a relaunch of the franchise with Zak Penn in talks to write a treatment and interest in getting Michael B. Jordan attached to star. According to "The Hollywood Reporter" neither the Wachowskis nor Joel Silver were involved with the endeavor, although the studio would like to get at minimum the blessing of the Wachowskis. On August 20, 2019, Warner Bros. Pictures Group chairman Toby Emmerich officially announced that a fourth Matrix movie was in the works, with Keanu Reeves and Carrie-Anne Moss set to reprise their roles as Neo and Trinity respectively. 
"The Matrix" was released on Laserdisc in its original aspect ratio of on September 21, 1999 in the US from Warner Home Video as well as in a cropped 1.33:1 aspect ratio in Hong Kong from ERA Home Entertainment. It was also released on VHS in both fullscreen and widescreen formats followed on , 1999. After its DVD release, it was the first DVD to sell more than one million copies in the US, and went on to be the first to sell more than three million copies in the US. By , 2003, one month after "The Matrix Reloaded" DVD was released, the sales of "The Matrix" DVD had exceeded 30 million copies. The Ultimate Matrix Collection was released on HD DVD on , 2007 and on Blu-ray on , 2008. The film was also released standalone in a 10th anniversary edition Blu-ray in the Digibook format on , 2009, 10 years to the day after the film was released theatrically. In 2010, the film had another DVD release along with the two sequels as "The Complete Matrix Trilogy". It was also released on 4K HDR Blu-ray on May 22, 2018. The film as part of "The Matrix Trilogy" was released on 4K Ultra HD Blu-Ray on October 30, 2018. "The Matrix" draws from and alludes to numerous cinematic and literary works, and concepts from mythology, religion and philosophy, including the ideas of Buddhism, Christianity, Gnosticism, Hinduism, and Judaism. The pods in which the machines keep humans have been compared to images in the 1927 film "Metropolis", and the work of M. C. Escher. and can be seen in "Welcome to Paradox" Episode 4 "News from D Street" from a 1986 short story of the same name by Andrew Weiner which aired on September 7, 1998 on the SYFY Channel and has a remarkably similar concept. In this episode the hero is unaware he is living in virtual reality until he is told so by "the code man" who created the simulation and enters it knowingly. 
The Wachowskis have described Stanley Kubrick's 1968 film "" as a formative cinematic influence, and as a major inspiration on the visual style they aimed for when making "The Matrix". Reviewers have also commented on similarities between "The Matrix" and other late-1990s films such as "Strange Days", "Dark City", and "The Truman Show". The similarity of the film's central concept to a device in the long-running series "Doctor Who" has also been noted. As in the film, the Matrix of that series (introduced in the 1976 serial "The Deadly Assassin") is a massive computer system which one enters using a device connecting to the head, allowing users to see representations of the real world and change its laws of physics; but if killed there, they will die in reality. The action scenes of "The Matrix" were also strongly influenced by live-action films such as those of director John Woo. The martial arts sequences were inspired by "Fist of Legend", a critically acclaimed 1995 martial arts film starring Jet Li. The fight scenes in "Fist of Legend" led to the hiring of Yuen as fight choreographer. The Wachowskis' approach to action scenes drew upon their admiration for Japanese animation such as "Ninja Scroll" and "Akira". Director Mamoru Oshii's 1995 animated film "Ghost in the Shell" was a particularly strong influence; producer Joel Silver has stated that the Wachowskis first described their intentions for "The Matrix" by showing him that anime and saying, "We wanna do that for real". Mitsuhisa Ishikawa of Production I.G, which produced "Ghost in the Shell", noted that the anime's high-quality visuals were a strong source of inspiration for the Wachowskis. He also commented, "...cyberpunk films are very difficult to describe to a third person. I'd imagine that "The Matrix" is the kind of film that was very difficult to draw up a written proposal for to take to film studios". 
He stated that since "Ghost in the Shell" had gained recognition in America, the Wachowskis used it as a "promotional tool". In "The Matrix", a copy of Jean Baudrillard's philosophical work "Simulacra and Simulation", which was published in French in 1981, is visible on-screen as "the book used to conceal disks", and Morpheus quotes the phrase "desert of the real" from it. "The book was required reading for" the actors prior to filming. However, Baudrillard himself said that "The Matrix" misunderstands and distorts his work. Some interpreters of "The Matrix" mention Baudrillard's philosophy to support their claim "that the [film] is an allegory for contemporary experience in a heavily commercialized, media-driven society, especially in developed countries". "The influence of [Baudrillard] was brought to the public's attention through the writings of art historians such as Griselda Pollock and film theorists such as Heinz-Peter Schwerfel". In addition to Baudrillard, the Wachowskis were also significantly influenced by Kevin Kelly's "", and Dylan Evans’s ideas on evolutionary psychology. The film makes several references to Lewis Carroll's "Alice's Adventures in Wonderland". Comparisons have also been made to Grant Morrison's comic series "The Invisibles", with Morrison describing it in 2011 as "(it) seemed to me (to be) my own combination of ideas enacted on the screen". Comparisons have also been made between "The Matrix" and the books of Carlos Castaneda. "The Matrix" belongs to the cyberpunk genre of science fiction, and draws from earlier works in the genre such as the 1984 novel "Neuromancer" by William Gibson. For example, the film's use of the term "Matrix" is adopted from Gibson's novel, though L. P. 
Davies had already used the term "Matrix" fifteen years earlier for a similar concept in his 1969 novel "The White Room" ("It had been tried in the States some years earlier, but their 'matrix' as they called it hadn't been strong enough to hold the fictional character in place"). After watching "The Matrix", Gibson commented that the way that the film's creators had drawn from existing cyberpunk works was "exactly the kind of creative cultural osmosis" he had relied upon in his own writing; however, he noted that the film's Gnostic themes distinguished it from "Neuromancer", and believed that "The Matrix" was thematically closer to the work of science fiction author Philip K. Dick, particularly Dick's speculative "Exegesis". Other writers have also commented on the similarities between "The Matrix" and Dick's work; one example of such influence is Philip K. Dick's 1977 conference speech, in which he stated: "We are living in a computer-programmed reality, and the only clue we have to it is when some variable is changed, and some alteration in our reality occurs". It has been suggested by philosopher William Irwin that the idea of the "Matrix" – a generated reality invented by malicious machines – is an allusion to Descartes' "First Meditation", and his idea of an evil demon. The Meditation hypothesizes that the perceived world might be a comprehensive illusion created to deceive us. The same premise can be found in Hilary Putnam's brain in a vat scenario proposed in the 1980s. A connection between the premise of "The Matrix" and Plato's Allegory of the Cave has also been suggested. The allegory is related to Plato's theory of Forms, which holds that the true essence of an object is not what we perceive with our senses, but rather its quality, and that most people perceive only the shadow of the object and are thus limited to false perception. 
The philosophy of Immanuel Kant has also been claimed as another influence on the film, and in particular how individuals within the Matrix interact with one another and with the system. Kant states in his "Critique of Pure Reason" that people come to know and explore our world through synthetic means (language, etc.), and thus this makes it rather difficult to discern truth from falsely perceived views. This means people are their own agents of deceit, and so in order for them to know truth, they must choose to openly pursue truth. This idea can be examined in Agent Smith's monologue about the first version of the Matrix, which was designed as a human utopia, a perfect world without suffering and with total happiness. Agent Smith explains that, "it was a disaster. No one accepted the program. Entire crops [of people] were lost." The machines had to amend their choice of programming in order to make people subservient to them, and so they conceived the Matrix in the image of the world in 1999. The world in 1999 was far from a utopia, but still humans accepted this over the suffering-less utopia. According to William Irwin this is Kantian, because the machines wished to impose a perfect world on humans in an attempt to keep people content, so that they would remain completely submissive to the machines, both consciously and subconsciously, but humans were not easy to make content. Andrew Godoski sees allusions to Christ, including Neo's "virgin birth", his doubt in himself, the prophecy of his coming, along with many other Christian references. Amongst these possible allusions, it is suggested that the name of the character Trinity refers to Christianity's doctrine of the Trinity. It has also been noted that the character Morpheus paraphrases the Chinese taoist philosopher Zhuangzi when he asks Neo, "Have you ever had a dream, Neo, that you were so sure was real? What if you weren't able to wake from that dream? 
How would you know the difference between the real world and the dream world?" Matrixism is a fan-based religion inspired by the film, known as "the Matrix religion". Years after the release of "The Matrix", both the Wachowskis came out as transgender women, and some have seen transgender themes in the film. The red pill has been compared with red estrogen pills. Morpheus's description of the Matrix giving you a sense that something is fundamentally wrong, "like a splinter in your mind", has been compared to gender dysphoria. Also, in the original script, Switch was a woman in the Matrix and a man in the real world, but this idea was dropped. In a 2016 GLAAD Media Awards speech, Lilly Wachowski said "There's a critical eye being cast back on Lana and I's work through the lens of our transness. This is a cool thing because it's an excellent reminder that art is never static." "The Matrix" had a strong effect on action filmmaking in Hollywood. The film's incorporation of wire fu techniques, including the involvement of fight choreographer Yuen Woo-ping and other personnel with a background in Hong Kong action cinema, affected the approaches to fight scenes taken by subsequent Hollywood action films, moving them towards more Eastern approaches. The success of "The Matrix" created high demand for those choreographers and their techniques from other filmmakers, who wanted fights of similar sophistication: for example, wire work was employed in "X-Men" (2000) and "Charlie's Angels" (2000), and Yuen Woo-ping's brother Yuen Cheung-Yan was choreographer on "Daredevil" (2003). "The Matrix"s Asian approach to action scenes also created an audience for Asian action films such as "Crouching Tiger, Hidden Dragon" (2000) that they might not otherwise have had. Following "The Matrix", films made abundant use of slow-motion, spinning cameras, and, often, the bullet time effect of a character freezing or slowing down and the camera dollying around them. 
The ability to slow down time enough to distinguish the motion of bullets was used as a central gameplay mechanic of several video games, including "Max Payne", in which the feature was explicitly referred to as "bullet time". It was also the defining game mechanic of the game "Superhot" and its sequels. "The Matrix"s signature special effect, and other aspects of the film, have been parodied numerous times, in comedy films such as "" (1999), "Scary Movie" (2000), "Shrek" (2001), "Kung Pow! Enter the Fist" (2002), "Lastikman" (2003); "Marx Reloaded" in which the relationship between Neo and Morpheus is represented as an imaginary encounter between Karl Marx and Leon Trotsky; and in video games such as "Conker's Bad Fur Day". It also inspired films featuring a black-clad hero, a sexy yet deadly heroine, and bullets ripping slowly through the air; these included "Charlie's Angels" (2000) featuring Cameron Diaz floating through the air while the cameras flo-mo around her; "Equilibrium" (2002), starring Christian Bale, whose character wore long black leather coats like Reeves' Neo; "Night Watch" (2004), a Russian megahit heavily influenced by "The Matrix" and directed by Timur Bekmambetov, who later made "Wanted" (2008), which also features bullets ripping through air; and "Inception" (2010), which centers on a team of sharply dressed rogues who enter a wildly malleable alternate reality by "wiring in". The original "Tron" (1982) paved the way for "The Matrix", and "The Matrix", in turn, inspired Disney to make its own Matrix with a "Tron" sequel, "" (2010). Also, the film's lobby shootout sequence was recreated in the 2002 Indian action comedy "Awara Paagal Deewana". Carrie-Anne Moss asserted that prior to being cast in "The Matrix", she had "no career". It launched Moss into international recognition and transformed her career; in a "New York Daily News" interview, she stated, ""The Matrix" gave me so many opportunities. 
Everything I've done since then has been because of that experience. It gave me so much". The film also created one of the most devoted movie fan-followings since "Star Wars". The combined success of the "Matrix" trilogy, the "Lord of the Rings" films and the "Star Wars" prequels made Hollywood interested in creating trilogies. Stephen Dowling from the BBC noted that "The Matrix"s success in taking complex philosophical ideas and presenting them in ways palatable for impressionable minds might be its most influential aspect. "The Matrix" was also influential for its impact on superhero films. John Kenneth Muir in "The Encyclopedia of Superheroes on Film and Television" called the film a "revolutionary" reimagination of movie visuals, paving the way for the visuals of later superhero films, and credits it with helping to "make comic-book superheroes hip" and effectively demonstrating the concept of "faster than a speeding bullet" with its bullet time effect. Adam Sternbergh of "Vulture.com" credits "The Matrix" with reinventing and setting the template for modern superhero blockbusters, and inspiring the superhero renaissance in the early 21st century. In 2001, "The Matrix" placed 66th in the American Film Institute's "100 Years...100 Thrills" list. In 2007, "Entertainment Weekly" called "The Matrix" the best piece of science-fiction media of the past 25 years. In 2009, the film was ranked 39th on "Empire"s reader-, actor- and critic-voted list of "The 500 Greatest Movies of All Time". "The Matrix" was voted as the fourth best sci-fi film in the 2011 list "", based on a poll conducted by ABC and "People". In 2012, the film was selected for preservation in the National Film Registry by the Library of Congress for being "culturally, historically, and aesthetically significant."
https://en.wikipedia.org/wiki?curid=30007
Telegraphy
Telegraphy is the long-distance transmission of textual messages where the sender uses symbolic codes, known to the recipient, rather than a physical exchange of an object bearing the message. Thus flag semaphore is a method of telegraphy, whereas pigeon post is not. Ancient signalling systems, although sometimes quite extensive and sophisticated as in China, were generally not capable of transmitting arbitrary text messages. Possible messages were fixed and predetermined and such systems are thus not true telegraphs. The earliest true telegraph put into widespread use was the optical telegraph of Claude Chappe, invented in the late 18th century. The system was extensively used in France, and European countries controlled by France, during the Napoleonic era. The electric telegraph started to replace the optical telegraph in the mid-19th century. It was first taken up in Britain in the form of the Cooke and Wheatstone telegraph, initially used mostly as an aid to railway signalling. This was quickly followed by a different system developed in the United States by Samuel Morse. The electric telegraph was slower to develop in France due to the established optical telegraph system, but an electrical telegraph was put into use with a code compatible with the Chappe optical telegraph. The Morse system was adopted as the international standard in 1865, using a modified Morse code developed in Germany. The heliograph is a telegraph system using reflected sunlight for signalling. It was mainly used in areas where the electrical telegraph had not been established and generally uses the same code. The most extensive heliograph network established was in Arizona and New Mexico during the Apache Wars. The heliograph was standard military equipment as late as World War II. Wireless telegraphy developed in the early 20th century. 
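The defining property of a true telegraph, as described above, is an agreed symbolic code that can express arbitrary text. As a rough illustration, the sketch below encodes a message using a small subset of the International Morse code (the letter-to-symbol table is standard Morse; the encoder itself is only an illustrative sketch, not from the article):

```python
# A subset of the International Morse code: dots and dashes per letter.
MORSE = {
    "A": ".-",   "B": "-...", "E": ".",    "H": "....",
    "I": "..",   "L": ".-..", "O": "---",  "S": "...",
    "T": "-",
}

def to_morse(text):
    """Encode text: letters separated by a space, words by ' / '."""
    return " / ".join(
        " ".join(MORSE[ch] for ch in word if ch in MORSE)
        for word in text.upper().split()
    )

print(to_morse("SOS"))    # ... --- ...
print(to_morse("HELLO"))  # .... . .-.. .-.. ---
```

Because any letter (and hence any message) can be spelled out, such a code meets the "arbitrary text" criterion that beacon fires and fixed signal sets do not.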
Wireless telegraphy became important for maritime use, and was a competitor to electrical telegraphy using submarine telegraph cables in international communications. Telegrams became a popular means of sending messages once telegraph prices had fallen sufficiently. Traffic became high enough to spur the development of automated systems—teleprinters and punched tape transmission. These systems led to new telegraph codes, starting with the Baudot code. However, telegrams were never able to compete with the letter post on price, and competition from the telephone, which removed their speed advantage, drove the telegraph into decline from 1920 onwards. The few remaining telegraph applications were largely taken over by alternatives on the internet towards the end of the 20th century. The word "telegraph" (from Ancient Greek: τῆλε, "têle", "at a distance" and γράφειν, "gráphein", "to write") was first coined by the French inventor of the Semaphore telegraph, Claude Chappe, who also coined the word "semaphore". A "telegraph" is a device for transmitting and receiving messages over long distances, i.e., for telegraphy. The word "telegraph" alone now generally refers to an electrical telegraph. Wireless telegraphy is transmission of messages over radio with telegraphic codes. Contrary to the extensive definition used by Chappe, Morse argued that the term "telegraph" can strictly be applied only to systems that transmit "and" record messages at a distance. This is to be distinguished from "semaphore", which merely transmits messages. Smoke signals, for instance, are to be considered semaphore, not telegraph. According to Morse, telegraph dates only from 1832 when Pavel Schilling invented one of the earliest electrical telegraphs. A telegraph message sent by an electrical telegraph operator or telegrapher using Morse code (or a printing telegraph operator using plain text) was known as a "telegram". 
A "cablegram" was a message sent by a submarine telegraph cable, often shortened to a "cable" or a "wire". Later, a "Telex" was a message sent by a Telex network, a switched network of teleprinters similar to a telephone network. A "wirephoto" or "wire picture" was a newspaper picture that was sent from a remote location by a facsimile telegraph. A "diplomatic telegram", also known as a diplomatic cable, is the term given to a confidential communication between a diplomatic mission and the foreign ministry of its parent country. These continue to be called telegrams or cables regardless of the method used for transmission. Passing messages by signalling over distance is an ancient practice. One of the oldest examples is the signal towers of the Great Wall of China. In , signals could be sent by beacon fires or drum beats. By complex flag signalling had developed, and by the Han dynasty (200 BC–220 AD) signallers had a choice of lights, flags, or gunshots to send signals. By the Tang dynasty (618–907) a message could be sent 700 miles in 24 hours. The Ming dynasty (1368–1644) added artillery to the possible signals. While the signalling was complex (for instance, different-coloured flags could be used to indicate enemy strength), only predetermined messages could be sent. The Chinese signalling system extended well beyond the Great Wall. Signal towers away from the wall were used to give early warning of an attack. Others were built even further out as part of the protection of trade routes, especially the Silk Road. Signal fires were widely used in Europe and elsewhere for military purposes. The Roman army made frequent use of them, as did their enemies, and the remains of some of the stations still exist. Few details have been recorded of European/Mediterranean signalling systems and the possible messages. One of the few for which details are known is a system invented by Aeneas Tacticus (4th century BC). 
Tacticus's system had water-filled pots at the two signal stations, which were drained in synchronisation. Markings on a floating scale indicated which message was being sent or received. Signals sent by means of torches indicated when to start and stop draining to keep the synchronisation. None of the signalling systems discussed above are true telegraphs in the sense of a system that can transmit arbitrary messages over arbitrary distances. Lines of signalling relay stations can send messages to any required distance, but all these systems are limited to one extent or another in the range of messages that they can send. A system like flag semaphore, with an alphabetic code, can certainly send any given message, but the system is designed for short-range communication between two persons. An engine order telegraph, used to send instructions from the bridge of a ship to the engine room, fails to meet both criteria; it has a limited distance and very simple message set. There was only one ancient signalling system described that "does" meet these criteria. That was a system using the Polybius square to encode an alphabet. Polybius (2nd century BC) suggested using two successive groups of torches to identify the coordinates of the letter of the alphabet being transmitted: the number of torches held up in each group signalled the row and column of the grid square containing the letter. There is no definite record of the system ever being used, but there are several passages in ancient texts that some think are suggestive. Holzmann and Pehrson, for instance, suggest that Livy is describing its use by Philip V of Macedon in 207 BC during the First Macedonian War. Nothing else that could be described as a true telegraph existed until the 17th century. Possibly the first alphabetic telegraph code in the modern era is due to Franz Kessler who published his work in 1616. Kessler used a lamp placed inside a barrel with a moveable shutter operated by the signaller. 
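The Polybius square scheme can be sketched in a few lines. This is an illustrative reconstruction using the conventional 5x5 Latin-alphabet layout (with I and J sharing a cell); each letter becomes two torch counts, the row and then the column:

```python
# Conventional 5x5 Polybius square; I and J share a cell.
SQUARE = [
    "ABCDE",
    "FGHIK",
    "LMNOP",
    "QRSTU",
    "VWXYZ",
]

def encode(message):
    """Return 1-based (row, column) torch counts for each letter."""
    signals = []
    for ch in message.upper():
        if ch == "J":  # I and J share a square
            ch = "I"
        for row, letters in enumerate(SQUARE, start=1):
            col = letters.find(ch)
            if col != -1:
                signals.append((row, col + 1))
                break
    return signals

def decode(signals):
    return "".join(SQUARE[r - 1][c - 1] for r, c in signals)

print(encode("HELP"))          # [(2, 3), (1, 5), (3, 1), (3, 5)]
print(decode(encode("HELP")))  # HELP
```

Because the two torch groups can pick out any cell of the grid, the scheme can spell any message letter by letter, which is why it qualifies as a true telegraph where fixed beacon codes do not.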
The signals were observed at a distance with the newly invented telescope. In several places around the world, a system of passing messages from village to village using drum beats was developed. This was particularly highly developed in Africa. At the time of its European discovery, the speed of message transmission by drum was faster than that of any existing European optical telegraph system. The African drum system was not alphabetical. Rather, the drum beats followed the tones of the language. This made messages highly ambiguous, and context was important for their correct interpretation. An optical telegraph is a telegraph consisting of a line of stations in towers or natural high points which signal to each other by means of shutters or paddles. Signalling by means of indicator pointers was called "semaphore". Early proposals for an optical telegraph system were made to the Royal Society by Robert Hooke in 1684 and were first implemented on an experimental level by Sir Richard Lovell Edgeworth in 1767. The first successful optical telegraph network was invented by Claude Chappe and operated in France from 1793 to 1846. The two most extensive systems were Chappe's in France, with branches into neighbouring countries, and the system of Abraham Niclas Edelcrantz in Sweden. During 1790–1795, at the height of the French Revolution, France needed a swift and reliable communication system to thwart the war efforts of its enemies. In 1790, the Chappe brothers set about devising a system of communication that would allow the central government to receive intelligence and to transmit orders in the shortest possible time. On 2 March 1791, at 11 am, they sent the message "si vous réussissez, vous serez bientôt couverts de gloire" (If you succeed, you will soon bask in glory) between Brulon and Parce. Their first design used a combination of black and white panels, clocks, telescopes, and codebooks to send the message. 
In 1792, Claude was appointed "Ingénieur-Télégraphiste" and charged with establishing a line of stations between Paris and Lille. It was used to carry dispatches for the war between France and Austria. In 1794, it brought news of a French capture of Condé-sur-l'Escaut from the Austrians less than an hour after it occurred. The Prussian system was put into effect in the 1830s. However, such systems were highly dependent on good weather and daylight to work, and even then could accommodate only about two words per minute. The last commercial semaphore link ceased operation in Sweden in 1880. As of 1895, France still operated coastal commercial semaphore telegraph stations for ship-to-shore communication. Early ideas for an electric telegraph included a 1753 proposal using electrostatic deflections of pith balls, and proposals for electrochemical bubbles in acid by Campillo in 1804 and von Sömmering in 1809. The first experimental system over a substantial distance, built by Ronalds in 1816, was electrostatic. Ronalds offered his invention to the British Admiralty, but it was rejected as unnecessary, the existing optical telegraph connecting the Admiralty in London to their main fleet base in Portsmouth being deemed adequate for their purposes. As late as 1844, after the electrical telegraph had come into use, the Admiralty's optical telegraph was still used, although it was accepted that poor weather ruled it out on many days of the year. France had an extensive optical telegraph dating from Napoleonic times and was even slower to take up electrical systems. Eventually, electrostatic telegraphs were abandoned in favour of electromagnetic systems. An early experimental system (Schilling, 1832) led to a proposal to establish a telegraph between St Petersburg and Kronstadt, but it was never completed. 
The first operative electric telegraph (Gauss and Weber, 1833) connected Göttingen Observatory to the Institute of Physics about 1 km away during experimental investigations of the geomagnetic field. The first commercial telegraph was by Cooke and Wheatstone following their English patent of 10 June 1837. It was demonstrated on the London and Birmingham Railway in July of the same year. In July 1839, a five-needle, five-wire system was installed to provide signalling over a record distance of 21 km on a section of the Great Western Railway between London Paddington station and West Drayton. However, in trying to get railway companies to take up his telegraph more widely for railway signalling, Cooke was rejected several times in favour of the more familiar, but shorter range, steam-powered pneumatic signalling. Even when his telegraph was taken up, it was considered experimental and the company backed out of a plan to finance extending the telegraph line out to Slough. However, this led to a breakthrough for the electric telegraph, as up to this point the Great Western had insisted on exclusive use and refused Cooke permission to open public telegraph offices. Cooke extended the line at his own expense and agreed that the railway could have free use of it in exchange for the right to open it up to the public. Most of the early electrical systems required multiple wires (Ronalds' system was an exception), but the system developed in the United States by Morse and Vail was a single-wire system. This was the system that first used the soon-to-become-ubiquitous Morse code. By 1844, the Morse system connected Baltimore to Washington, and by 1861 the west coast of the continent was connected to the east coast. The Cooke and Wheatstone telegraph, in a series of improvements, also ended up with a one-wire system, but still using their own code and needle displays. The electric telegraph quickly became a means of more general communication. 
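The soon-to-become-ubiquitous Morse code mentioned above is a variable-length code: common letters get short symbols. A minimal encoder can make this concrete; the table below is a small subset of International Morse Code chosen for illustration.

```python
# Minimal International Morse Code encoder (subset of the full table).
# Note the variable symbol lengths: the common letters E and T get the
# shortest codes, while rarer letters get longer ones.

MORSE = {
    "A": ".-",  "E": ".",   "I": "..",  "M": "--",
    "O": "---", "R": ".-.", "S": "...", "T": "-",
}

def to_morse(text):
    """Encode letters in the subset table; spaces separate letters,
    ' / ' separates words."""
    words = []
    for word in text.upper().split():
        words.append(" ".join(MORSE[c] for c in word if c in MORSE))
    return " / ".join(words)

if __name__ == "__main__":
    print(to_morse("MORSE"))  # -- --- .-. ... .
```

The variable lengths make Morse efficient for a human operator keying by hand, but, as the later discussion of the Baudot code shows, awkward for machines, which prefer every character to take the same number of signal elements.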
The Morse system was officially adopted as the standard for continental European telegraphy in 1851 with a revised code, which later became the basis of International Morse Code. However, Great Britain and the British Empire continued to use the Cooke and Wheatstone system, in some places as late as the 1930s. Likewise, the United States continued to use American Morse code internally, requiring translation operators skilled in both codes for international messages. Railway signal telegraphy was developed in Britain from the 1840s onward. It was used to manage railway traffic and to prevent accidents as part of the railway signalling system. On 12 June 1837 Cooke and Wheatstone were awarded a patent for an electric telegraph. This was demonstrated between Euston railway station—where Wheatstone was located—and the engine house at Camden Town—where Cooke was stationed, together with Robert Stephenson, the London and Birmingham Railway line's chief engineer. The messages were for the operation of the rope-haulage system for pulling trains up the 1 in 77 bank. The world's first permanent railway telegraph was completed in July 1839 between London Paddington and West Drayton on the Great Western Railway with an electric telegraph using a four-needle system. The concept of a signalling "block" system was proposed by Cooke in 1842. Railway signal telegraphy did not change in essence from Cooke's initial concept for more than a century. In this system each line of railway was divided into sections or blocks of several miles length. Entry to and exit from the block was to be authorised by electric telegraph and signalled by the line-side semaphore signals, so that only a single train could occupy the rails. In Cooke's original system, a single-needle telegraph was adapted to indicate just two messages: "Line Clear" and "Line Blocked". The signaller would adjust his line-side signals accordingly. 
As first implemented in 1844 each station had as many needles as there were stations on the line, giving a complete picture of the traffic. As lines expanded, a sequence of pairs of single-needle instruments were adopted, one pair for each block in each direction. Wigwag is a form of flag signalling using a single flag. Unlike most forms of flag signalling, which are used over relatively short distances, wigwag is designed to maximise the distance covered—up to 20 miles in some cases. Wigwag achieved this by using a large flag—a single flag can be held with both hands unlike flag semaphore which has a flag in each hand—and using motions rather than positions as its symbols since motions are more easily seen. It was invented by US Army surgeon Albert J. Myer in the 1850s who later became the first head of the Signal Corps. Wigwag was used extensively during the American Civil War where it filled a gap left by the electrical telegraph. Although the electrical telegraph had been in use for more than a decade, the network did not yet reach everywhere and portable, ruggedized equipment suitable for military use was not immediately available. Permanent or semi-permanent stations were established during the war, some of them towers of enormous height and the system for a while could be described as a communications network. A heliograph is a telegraph that transmits messages by flashing sunlight with a mirror, usually using Morse code. The idea for a telegraph of this type was first proposed as a modification of surveying equipment (Gauss, 1821). Various uses of mirrors were made for communication in the following years, mostly for military purposes, but the first device to become widely used was a heliograph with a moveable mirror (Mance, 1869). The system was used by the French during the 1870–71 siege of Paris, with night-time signalling using kerosene lamps as the source of light. 
An improved version (Begbie, 1870) was used by the British military in many colonial wars, including the Anglo-Zulu War (1879). At some point, a Morse key was added to the apparatus to give the operator the same degree of control as in the electric telegraph. Another type of heliograph was the heliostat fitted with a Colomb shutter. The heliostat was essentially a surveying instrument with a fixed mirror and so could not transmit a code by itself. The term "heliostat" is sometimes used as a synonym for "heliograph" because of this origin. The Colomb shutter (Bolton and Colomb, 1862) was originally invented to enable the transmission of Morse code by signal lamp between Royal Navy ships at sea. The heliograph was heavily used by Nelson A. Miles in Arizona and New Mexico after he took over command (1886) of the fight against Geronimo and other Apache bands in the Apache Wars. Miles had previously set up the first heliograph line in the US between Fort Keogh and Fort Custer in Montana. He used the heliograph to fill in vast, thinly populated areas that were not covered by the electric telegraph. Twenty-six stations covered an area 200 by 300 miles. In a test of the system, a message was relayed 400 miles in four hours. Miles' enemies used smoke signals and flashes of sunlight from metal, but lacked a sophisticated telegraph code. The heliograph was ideal for use in the American Southwest due to its clear air and mountainous terrain on which stations could be located. It was found necessary to lengthen the Morse dash (which is much shorter in American Morse code than in modern International Morse code) to aid in differentiating it from the Morse dot. Use of the heliograph declined from 1915 onwards, but remained in service in Britain and British Commonwealth countries for some time. Australian forces used the heliograph as late as 1942 in the Western Desert Campaign of World War II. Some form of heliograph was used by the mujahideen in the Soviet–Afghan War (1979–89). 
A teleprinter is a telegraph machine that can send messages from a typewriter-like keyboard and print incoming messages in readable text, with no need for the operators to be trained in the telegraph code used on the line. It developed from various earlier printing telegraphs and resulted in improved transmission speeds. The Morse telegraph (1837) was originally conceived as a system marking indentations on paper tape. A chemical telegraph making blue marks improved the speed of recording (Bain, 1846), but was retarded by a patent challenge from Morse. The first true printing telegraph (that is, printing in plain text) used a spinning wheel of types in the manner of a daisy wheel printer (House, 1846, improved by Hughes, 1855). The system was adopted by Western Union. Early teleprinters used the Baudot code, a five-bit sequential binary code. This was a telegraph code developed for use on the French telegraph using a five-key keyboard (Baudot, 1874). Teleprinters generated the same code from a full alphanumeric keyboard. A feature of the Baudot code, and subsequent telegraph codes, was that, unlike Morse code, every character had a code of the same length, making it more machine-friendly. The Baudot code was used on the earliest ticker tape machines (Calahan, 1867), a system for mass distributing stock price information. In a punched-tape system, the message is first typed onto punched tape using the code of the telegraph system—Morse code for instance. It is then, either immediately or at some later time, run through a transmission machine which sends the message to the telegraph network. Multiple messages can be sequentially recorded on the same run of tape. The advantage of doing this is that messages can be sent at a steady, fast rate, making maximum use of the available telegraph lines. 
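The machine-friendliness of a fixed-length code like Baudot's can be shown with a short sketch. Note that the 5-bit values below simply number the letters A–Z in order for illustration; they are not the historical Baudot character assignments.

```python
# Sketch contrasting a fixed-length 5-bit code with variable-length Morse.
# The letter values are illustrative (A=1 ... Z=26), not the real Baudot table.

def five_bit(ch):
    """Encode a letter A-Z as a 5-bit binary string."""
    return format(ord(ch.upper()) - ord("A") + 1, "05b")

def frame(text):
    """Concatenate letters into one bit stream, 5 bits per character."""
    return "".join(five_bit(c) for c in text if c.isalpha())

def unframe(bits):
    """Because every character is exactly 5 bits, the receiver can split
    the stream mechanically, with no inter-character gaps needed."""
    chunks = [bits[i:i + 5] for i in range(0, len(bits), 5)]
    return "".join(chr(int(b, 2) + ord("A") - 1) for b in chunks)

if __name__ == "__main__":
    stream = frame("HELLO")
    print(stream)           # 25 bits, exactly 5 per letter
    print(unframe(stream))  # round-trips back to HELLO
```

The fixed width is what made such codes suitable for teleprinters and punched tape: a machine can cut the stream every five units without interpreting the content, whereas Morse requires detecting symbol and letter spacing.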
The economic advantage of doing this is greatest on long, busy routes where the cost of the extra step of preparing the tape is outweighed by the cost of providing more telegraph lines. The first machine to use punched tape was Bain's teleprinter (Bain, 1843), but the system saw only limited use. Later versions of Bain's system achieved speeds up to 1000 words per minute, far faster than a human operator could achieve. The first widely used system (Wheatstone, 1858) was first put into service with the British General Post Office in 1867. A novel feature of the Wheatstone system was the use of bipolar encoding. That is, both positive and negative polarity voltages were used. Bipolar encoding has several advantages, one of which is that it permits duplex communication. The Wheatstone tape reader was capable of a speed of 400 words per minute. A worldwide communication network meant that telegraph cables would have to be laid across oceans. On land, cables could be run uninsulated, suspended from poles. Underwater, a good insulator that was both flexible and capable of resisting the ingress of seawater was required, and at first this was not available. A solution presented itself with gutta-percha, a natural rubber from the "Palaquium gutta" tree, after William Montgomerie sent samples to London from Singapore in 1843. The new material was tested by Michael Faraday, and in 1845 Wheatstone suggested that it should be used on the cable planned between Dover and Calais by John Watkins Brett. The idea was proved viable when the South Eastern Railway company successfully tested a two-mile gutta-percha insulated cable with telegraph messages to a ship off the coast of Folkestone. The cable to France was laid in 1850 but was almost immediately severed by a French fishing vessel. It was relaid the next year and connections to Ireland and the Low Countries soon followed. Getting a cable across the Atlantic Ocean proved much more difficult. 
The Atlantic Telegraph Company, formed in London in 1856, had several failed attempts. A cable laid in 1858 worked poorly for a few days (sometimes taking all day to send a message despite the use of the highly sensitive mirror galvanometer developed by William Thomson, the future Lord Kelvin) before being destroyed by the application of too high a voltage. Its failure and slow speed of transmission prompted Thomson and Oliver Heaviside to find better mathematical descriptions of long transmission lines. The company finally succeeded in 1866 with an improved cable laid by SS "Great Eastern", the largest ship of its day, designed by Isambard Kingdom Brunel. An overland telegraph from Britain to India was first connected in 1866 but was unreliable, so a submarine telegraph cable was connected in 1870. Several telegraph companies were combined to form the "Eastern Telegraph Company" in 1872. Australia was first linked to the rest of the world in October 1872 by a submarine telegraph cable at Darwin. From the 1850s until well into the 20th century, British submarine cable systems dominated the world system. This was set out as a formal strategic goal, which became known as the All Red Line. In 1896, there were thirty cable-laying ships in the world and twenty-four of them were owned by British companies. In 1892, British companies owned and operated two-thirds of the world's cables, and by 1923 their share was still 42.7 percent. During World War I, Britain's telegraph communications were almost completely uninterrupted, while it was able to quickly cut Germany's cables worldwide. In 1843, Scottish inventor Alexander Bain invented a device that could be considered the first facsimile machine. He called his invention a "recording telegraph". Bain's telegraph was able to transmit images by electrical wires. Frederick Bakewell made several improvements on Bain's design and demonstrated a telefax machine. 
In 1855, an Italian abbot, Giovanni Caselli, also created an electric telegraph that could transmit images. Caselli called his invention the "Pantelegraph". The Pantelegraph was successfully tested and approved for a telegraph line between Paris and Lyon. In 1881, English inventor Shelford Bidwell constructed the "scanning phototelegraph", the first telefax machine to scan any two-dimensional original, not requiring manual plotting or drawing. Around 1900, German physicist Arthur Korn invented a phototelegraphy apparatus that became widespread in continental Europe, especially after a widely noticed transmission of a wanted-person photograph from Paris to London in 1908, and remained in use until the wider distribution of the radiofax. Its main competitors were first the "Bélinographe" by Édouard Belin and then, from the 1930s, the "Hellschreiber", invented in 1929 by German inventor Rudolf Hell, a pioneer in mechanical image scanning and transmission. The late 1880s through to the 1890s saw the discovery and then development of a newly understood phenomenon into a form of wireless telegraphy, called "Hertzian wave" wireless telegraphy, radiotelegraphy, or (later) simply "radio". Between 1886 and 1888, Heinrich Rudolf Hertz published the results of his experiments where he was able to transmit electromagnetic waves (radio waves) through the air, proving James Clerk Maxwell's 1873 theory of electromagnetic radiation. Many scientists and inventors experimented with this new phenomenon, but the general consensus was that these new waves (similar to light) would be just as short range as light, and, therefore, useless for long range communication. At the end of 1894, the young Italian inventor Guglielmo Marconi began working on the idea of building a commercial wireless telegraphy system based on the use of Hertzian waves (radio waves), a line of inquiry that he noted other inventors did not seem to be pursuing. 
Building on the ideas of previous scientists and inventors, Marconi re-engineered their apparatus by trial and error, attempting to build a radio-based wireless telegraphic system that would function the same as wired telegraphy. He worked on the system through 1895 in his lab and then in field tests, making improvements to extend its range. After many breakthroughs, including applying the wired telegraphy concept of grounding the transmitter and receiver, Marconi was able, by early 1896, to transmit radio far beyond the short ranges that had been predicted. Having failed to interest the Italian government, the 22-year-old inventor brought his telegraphy system to Britain in 1896 and met William Preece, a Welshman, who was a major figure in the field and Chief Engineer of the General Post Office. A series of demonstrations for the British government followed—by March 1897, Marconi had transmitted Morse code signals across Salisbury Plain. On 13 May 1897, Marconi, assisted by George Kemp, a Cardiff Post Office engineer, transmitted the first wireless signals over water to Lavernock (near Penarth in Wales) from Flat Holm. The message sent was "ARE YOU READY". From his Fraserburgh base, he transmitted the first long-distance, cross-country wireless signal to Poldhu in Cornwall. His star rising, he was soon sending signals across the English Channel (1899), from shore to ship (1899) and finally across the Atlantic (1901). A study of these demonstrations of radio, with scientists trying to work out how a phenomenon predicted to have a short range could transmit "over the horizon", led to the discovery of a radio reflecting layer in the Earth's atmosphere in 1902, later called the ionosphere. Radiotelegraphy proved effective for rescue work in sea disasters by enabling effective communication between ships and from ship to shore. 
In 1904, Marconi began the first commercial service to transmit nightly news summaries to subscribing ships, which could incorporate them into their on-board newspapers. A regular transatlantic radio-telegraph service was finally begun on 17 October 1907. Notably, Marconi's apparatus was used to help rescue efforts after the sinking of "Titanic". Britain's postmaster-general summed up, referring to the "Titanic" disaster, "Those who have been saved, have been saved through one man, Mr. Marconi...and his marvellous invention." A telegram service is a company or public entity that delivers telegraphed messages directly to the recipient. Telegram services were not inaugurated until electric telegraphy became available. Earlier optical systems were largely limited to official government and military purposes. Historically, telegrams were sent between a network of interconnected telegraph offices. A person visiting a local telegraph office paid by the word to have a message telegraphed to another office and delivered to the addressee on a paper form. Messages sent by telegraph could be delivered faster than mail, and even in the telephone age, the telegram remained popular for social and business correspondence. At their peak in 1929, an estimated 200 million telegrams were sent. Telegram services still operate in much of the world (see worldwide use of telegrams by country), but e-mail and text messaging have rendered telegrams obsolete in many countries, and the number of telegrams sent annually has been declining rapidly since the 1980s. Where telegram services still exist, the transmission method between offices is no longer by telegraph, but by telex or IP link. As telegrams have been traditionally charged by the word, messages were often abbreviated to pack information into the smallest possible number of words, in what came to be called "telegram style". 
The average length of a telegram in the 1900s in the US was 11.93 words; more than half of the messages were 10 words or fewer. According to another study, the mean length of the telegrams sent in the UK before 1950 was 14.6 words or 78.8 characters. For German telegrams, the mean length was 11.5 words or 72.4 characters. At the end of the 19th century, the average length of a German telegram was calculated as 14.2 words. Telex (TELegraph EXchange) was a public switched network of teleprinters. It used rotary-telephone-style pulse dialling for automatic routing through the network. It initially used the Baudot code for messages. Telex development began in Germany in 1926, becoming an operational service in 1933 run by the Reichspost (Reich postal service). It had a speed of 50 baud—approximately 66 words per minute. Up to 25 telex channels could share a single long-distance telephone channel by using voice frequency telegraphy multiplexing, making telex the least expensive method of reliable long-distance communication. Telex was introduced into Canada in July 1957, and into the United States in 1958. A new code, ASCII, was introduced in 1963 by the American Standards Association. ASCII was a 7-bit code and could thus support a larger number of characters than Baudot. In particular, ASCII supported upper and lower case, whereas Baudot was upper case only. Telegraph use began to decline permanently around 1920. The decline began with the growth of the use of the telephone. Ironically, the invention of the telephone grew out of the development of the harmonic telegraph, a device that was supposed to increase the efficiency of telegraph transmission and improve the profits of telegraph companies. Western Union gave up their patent battle with Alexander Graham Bell because they believed the telephone was not a threat to their telegraph business. The Bell Telephone Company was formed in 1877 and had 230 subscribers, which grew to 30,000 by 1880. 
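The telex and code-size figures quoted above can be checked with simple arithmetic. The framing of 7.5 signal units per character (1 start, 5 data, 1.5 stop) and six characters per word (five letters plus a space) are the conventional assumptions for telex speed calculations.

```python
# Back-of-the-envelope check of the figures in the text: a 50-baud telex
# channel, and the character repertoires of 5-bit Baudot vs 7-bit ASCII.
# Assumes the conventional telex framing of 7.5 signal units per character
# (1 start + 5 data + 1.5 stop) and 6 characters per average word.

BAUD = 50
UNITS_PER_CHAR = 7.5
CHARS_PER_WORD = 6

chars_per_second = BAUD / UNITS_PER_CHAR
words_per_minute = chars_per_second / CHARS_PER_WORD * 60
print(round(words_per_minute, 1))  # 66.7, i.e. the "approximately 66" quoted

# Raw code-point capacity of each code (before Baudot's shift characters):
print(2 ** 5)  # Baudot: 32 combinations
print(2 ** 7)  # ASCII: 128 combinations
```

The 32 raw Baudot combinations were stretched by letters/figures shift characters, but even so could not cover both upper and lower case, which is why ASCII's 128 code points were a significant step up.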
By 1886 there were a quarter of a million phones worldwide, and nearly 2 million by 1900. The decline was briefly postponed by the rise of special occasion congratulatory telegrams. Traffic continued to grow between 1867 and 1893 despite the introduction of the telephone in this period, but by 1900 the telegraph was definitely in decline. There was a brief resurgence in telegraphy during World War I, but the decline continued as the world entered the Great Depression years of the 1930s. Telegraph lines continued to be an important means of distributing news feeds from news agencies by teleprinter machine until the rise of the internet in the 1990s. For Western Union, one service remained highly profitable—the wire transfer of money. This service kept Western Union in business long after the telegraph had ceased to be important. The telegraph freed communication from the time constraints of postal mail and revolutionized the global economy and society. By the end of the 19th century, the telegraph was becoming an increasingly common medium of communication for ordinary people. The telegraph isolated the message (information) from the physical movement of objects or the process. There was some fear of the new technology. According to author Allan J. Kimmel, some people "feared that the telegraph would erode the quality of public discourse through the transmission of irrelevant, context-free information." Henry David Thoreau thought of the transatlantic cable that "...perchance the first news that will leak through into the broad flapping American ear will be that Princess Adelaide has the whooping cough." Kimmel says these fears anticipate many of the characteristics of the modern internet age. Initially, the telegraph was expensive to use, so it was mostly limited to businesses that could use it to improve profits. The telegraph had an enormous effect on three industries: finance, newspapers, and railways. 
Telegraphy facilitated the growth of organizations "in the railroads, consolidated financial and commodity markets, and reduced information costs within and between firms". In the US, there were 200 to 300 stock exchanges before the telegraph, but most of these were unnecessary and unprofitable once the telegraph made financial transactions at a distance easy and drove down transaction costs. This immense growth in the business sectors influenced society to embrace the use of telegrams once the cost had fallen. Worldwide telegraphy changed the gathering of information for news reporting. Journalists were using the telegraph for war reporting as early as 1846, when the Mexican–American War broke out. News agencies, such as the Associated Press, were formed for the purpose of reporting news by telegraph. Messages and information would now travel far and wide, and the telegraph demanded a language "stripped of the local, the regional; and colloquial", to better facilitate a worldwide media language. Media language had to be standardized, which led to the gradual disappearance of different forms of speech and styles of journalism and storytelling. The spread of the railways created a need for an accurate standard time to replace local arbitrary standards based on local noon. The means of achieving this synchronisation was the telegraph. This emphasis on precise time has led to major societal changes such as the concept of the time value of money. The shortage of men to work as telegraph operators in the American Civil War opened up to women the opportunity of a well-paid skilled job. The economic impact of the telegraph was not much studied by economic historians until parallels started to be drawn with the rise of the internet, even though the electric telegraph was as important as the invention of printing in this respect. According to economist Ronnie J. 
Phillips, the reason for this may be that institutional economists paid more attention to advances that required greater capital investment. The investment required to build railways, for instance, is orders of magnitude greater than that for the telegraph. The optical telegraph was quickly forgotten once it went out of service. While it was in operation, it was very familiar to the public across Europe. Examples appear in many paintings of the period. Poems include "Le Télégraphe" by Victor Hugo, and the collection "Telegrafen: Optisk kalender för 1858" is dedicated to the telegraph. In novels, the telegraph is a major component in "Lucien Leuwen" by Stendhal, and it features in "The Count of Monte Cristo" by Alexandre Dumas. Joseph Chudy's 1796 opera, "Der Telegraph oder die Fernschreibmaschine", was written to publicise Chudy's telegraph (a binary code with five lamps) when it became clear that Chappe's design was being taken up. Rudyard Kipling wrote a poem in praise of submarine telegraph cables; "And a new Word runs between: whispering, 'Let us be one!'" Kipling's poem represented a widespread idea in the late nineteenth century that international telegraphy (and new technology in general) would bring peace and mutual understanding to the world. The "Post" greeted the first submarine telegraph cable connecting America and Britain in the same celebratory spirit. Numerous newspapers and news outlets in various countries, such as "The Daily Telegraph" in Britain, "The Telegraph" in India, "De Telegraaf" in the Netherlands, and the Jewish Telegraphic Agency in the US, were given names which include the word "telegraph" due to their having received news by means of electric telegraphy. Some of these names are retained even though different means of news acquisition are now used.
https://en.wikipedia.org/wiki?curid=30010
Transistor A transistor is a semiconductor device used to amplify or switch electronic signals and electrical power. It is composed of semiconductor material, usually with at least three terminals for connection to an external circuit. A voltage or current applied to one pair of the transistor's terminals controls the current through another pair of terminals. Because the controlled (output) power can be higher than the controlling (input) power, a transistor can amplify a signal. Today, some transistors are packaged individually, but many more are found embedded in integrated circuits. Austro-Hungarian physicist Julius Edgar Lilienfeld proposed the concept of a field-effect transistor in 1926, but it was not possible to actually construct a working device at that time. The first working device to be built was a point-contact transistor invented in 1947 by American physicists John Bardeen and Walter Brattain while working under William Shockley at Bell Labs. They shared the 1956 Nobel Prize in Physics for their achievement. The most widely used transistor is the MOSFET (metal–oxide–semiconductor field-effect transistor), also known as the MOS transistor, which was invented by Mohamed Atalla with Dawon Kahng at Bell Labs in 1959. The MOSFET was the first truly compact transistor that could be miniaturised and mass-produced for a wide range of uses. Transistors revolutionized the field of electronics, and paved the way for smaller and cheaper radios, calculators, and computers, among other things. The first transistor and the MOSFET are on the list of IEEE milestones in electronics. The MOSFET is the fundamental building block of modern electronic devices, and is ubiquitous in modern electronic systems. An estimated total of 13 sextillion MOSFETs have been manufactured between 1960 and 2018 (at least 99.9% of all transistors), making the MOSFET the most widely manufactured device in history. 
Most transistors are made from very pure silicon, and some from germanium, but certain other semiconductor materials are sometimes used. A transistor may have only one kind of charge carrier, as in a field-effect transistor, or two kinds of charge carriers, as in bipolar junction transistor devices. Compared with the vacuum tube, transistors are generally smaller and require less power to operate. Certain vacuum tubes have advantages over transistors at very high operating frequencies or high operating voltages. Many types of transistors are made to standardized specifications by multiple manufacturers. The thermionic triode, a vacuum tube invented in 1907, enabled amplified radio technology and long-distance telephony. The triode, however, was a fragile device that consumed a substantial amount of power. In 1909, physicist William Eccles discovered the crystal diode oscillator. Austro-Hungarian physicist Julius Edgar Lilienfeld filed a patent for a field-effect transistor (FET) in Canada in 1925, which was intended to be a solid-state replacement for the triode. Lilienfeld also filed identical patents in the United States in 1926 and 1928. However, Lilienfeld did not publish any research articles about his devices nor did his patents cite any specific examples of a working prototype. Because the production of high-quality semiconductor materials was still decades away, Lilienfeld's solid-state amplifier ideas would not have found practical use in the 1920s and 1930s, even if such a device had been built. In 1934, German inventor Oskar Heil patented a similar device in Europe. From November 17, 1947, to December 23, 1947, John Bardeen and Walter Brattain at AT&T's Bell Labs in Murray Hill, New Jersey, performed experiments and observed that when two gold point contacts were applied to a crystal of germanium, a signal was produced with the output power greater than the input.
Solid State Physics Group leader William Shockley saw the potential in this, and over the next few months worked to greatly expand the knowledge of semiconductors. The term "transistor" was coined by John R. Pierce as a contraction of the term "transresistance". According to Lillian Hoddeson and Vicki Daitch, authors of a biography of John Bardeen, Shockley had proposed that Bell Labs' first patent for a transistor should be based on the field-effect and that he be named as the inventor. Having unearthed Lilienfeld's patents that went into obscurity years earlier, lawyers at Bell Labs advised against Shockley's proposal because the idea of a field-effect transistor that used an electric field as a "grid" was not new. Instead, what Bardeen, Brattain, and Shockley invented in 1947 was the first point-contact transistor. In acknowledgement of this accomplishment, Shockley, Bardeen, and Brattain were jointly awarded the 1956 Nobel Prize in Physics "for their researches on semiconductors and their discovery of the transistor effect". Shockley's research team initially attempted to build a field-effect transistor (FET), by trying to modulate the conductivity of a semiconductor, but was unsuccessful, mainly due to problems with the surface states, the dangling bond, and the germanium and copper compound materials. In the course of trying to understand the mysterious reasons behind their failure to build a working FET, they were led instead to invent the bipolar point-contact and junction transistors. In 1948, the point-contact transistor was independently invented by German physicists Herbert Mataré and Heinrich Welker while working at the "Compagnie des Freins et Signaux", a Westinghouse subsidiary located in Paris. Mataré had previous experience in developing crystal rectifiers from silicon and germanium in the German radar effort during World War II. Using this knowledge, he began researching the phenomenon of "interference" in 1947.
By June 1948, witnessing currents flowing through point-contacts, Mataré produced consistent results using samples of germanium produced by Welker, similar to what Bardeen and Brattain had accomplished earlier in December 1947. Realizing that Bell Labs' scientists had already invented the transistor before them, the company rushed to get its "transistron" into production for amplified use in France's telephone network and filed its first transistor patent application on August 13, 1948. The first bipolar junction transistors were invented by Bell Labs' William Shockley, who applied for a patent (2,569,347) on June 26, 1948. On April 12, 1950, Bell Labs chemists Gordon Teal and Morgan Sparks successfully produced a working bipolar NPN junction amplifying germanium transistor. Bell Labs announced the discovery of this new "sandwich" transistor in a press release on July 4, 1951. The first high-frequency transistor was the surface-barrier germanium transistor developed by Philco in 1953, capable of operating at much higher frequencies than earlier junction transistors. These were made by etching depressions into an N-type germanium base from both sides with jets of Indium(III) sulfate until it was a few ten-thousandths of an inch thick. Indium electroplated into the depressions formed the collector and emitter. The first "prototype" pocket transistor radio was shown by INTERMETALL (a company founded by Herbert Mataré in 1952) at the "Internationale Funkausstellung Düsseldorf" between August 29, 1953 and September 6, 1953. The first "production" pocket transistor radio was the Regency TR-1, released in October 1954. Produced as a joint venture between the Regency Division of Industrial Development Engineering Associates, I.D.E.A. and Texas Instruments of Dallas, Texas, the TR-1 was manufactured in Indianapolis, Indiana. It was a near pocket-sized radio featuring four transistors and one germanium diode. The industrial design was outsourced to the Chicago firm of Painter, Teague and Petertil.
It was initially released in one of six different colours: black, ivory, mandarin red, cloud grey, mahogany and olive green. Other colours soon followed. The first "production" all-transistor car radio was developed by Chrysler and Philco corporations and was announced in the April 28, 1955 edition of "The Wall Street Journal". Chrysler had made the all-transistor car radio, Mopar model 914HR, available as an option starting in fall 1955 for its new line of 1956 Chrysler and Imperial cars, which first hit the dealership showroom floors on October 21, 1955. The Sony TR-63, released in 1957, was the first mass-produced transistor radio, leading to the mass-market penetration of transistor radios. The TR-63 went on to sell seven million units worldwide by the mid-1960s. Sony's success with transistor radios led to transistors replacing vacuum tubes as the dominant electronic technology in the late 1950s. The first working silicon transistor was developed at Bell Labs on January 26, 1954 by Morris Tanenbaum. The first commercial silicon transistor was produced by Texas Instruments in 1954. This was the work of Gordon Teal, an expert in growing crystals of high purity, who had previously worked at Bell Labs. Semiconductor companies initially focused on junction transistors in the early years of the semiconductor industry. However, the junction transistor was a relatively bulky device that was difficult to manufacture on a mass-production basis, which limited it to a number of specialised applications. Field-effect transistors (FETs) were theorized as potential alternatives to junction transistors, but researchers could not get FETs to work properly, largely due to the troublesome surface state barrier that prevented the external electric field from penetrating into the material.
In the 1950s, Egyptian engineer Mohamed Atalla investigated the surface properties of silicon semiconductors at Bell Labs, where he proposed a new method of semiconductor device fabrication, coating a silicon wafer with an insulating layer of silicon oxide so that electricity could reliably penetrate to the conducting silicon below, overcoming the surface states that prevented electricity from reaching the semiconducting layer. This is known as surface passivation, a method that became critical to the semiconductor industry as it later made possible the mass-production of silicon integrated circuits. He presented his findings in 1957. Building on his surface passivation method, he developed the metal–oxide–semiconductor (MOS) process. He proposed the MOS process could be used to build the first working silicon FET, which he began working on building with the help of his Korean colleague Dawon Kahng. The metal–oxide–semiconductor field-effect transistor (MOSFET), also known as the MOS transistor, was invented by Mohamed Atalla and Dawon Kahng in 1959. The MOSFET was the first truly compact transistor that could be miniaturised and mass-produced for a wide range of uses. With its high scalability, and much lower power consumption and higher density than bipolar junction transistors, the MOSFET made it possible to build high-density integrated circuits, allowing the integration of more than 10,000 transistors in a single IC. CMOS (complementary MOS) was invented by Chih-Tang Sah and Frank Wanlass at Fairchild Semiconductor in 1963. The first report of a floating-gate MOSFET was made by Dawon Kahng and Simon Sze in 1967. A double-gate MOSFET was first demonstrated in 1984 by Electrotechnical Laboratory researchers Toshihiro Sekigawa and Yutaka Hayashi. FinFET (fin field-effect transistor), a type of 3D non-planar multi-gate MOSFET, originated from the research of Digh Hisamoto and his team at Hitachi Central Research Laboratory in 1989. 
Transistors are the key active components in practically all modern electronics. Many thus consider the transistor to be one of the greatest inventions of the 20th century. The MOSFET (metal–oxide–semiconductor field-effect transistor), also known as the MOS transistor, is by far the most widely used transistor, used in applications ranging from computers and electronics to communications technology such as smartphones. The MOSFET has been considered to be the most important transistor, possibly the most important invention in electronics, and the birth of modern electronics. The MOS transistor has been the fundamental building block of modern digital electronics since the late 20th century, paving the way for the digital age. The US Patent and Trademark Office calls it a "groundbreaking invention that transformed life and culture around the world". Its importance in today's society rests on its ability to be mass-produced using a highly automated process (semiconductor device fabrication) that achieves astonishingly low per-transistor costs. The invention of the first transistor at Bell Labs was named an IEEE Milestone in 2009. The list of IEEE Milestones also includes the inventions of the junction transistor in 1948 and the MOSFET in 1959. Although several companies each produce over a billion individually packaged (known as "discrete") MOS transistors every year, the vast majority of transistors are now produced in integrated circuits (often shortened to "IC", "microchips" or simply "chips"), along with diodes, resistors, capacitors and other electronic components, to produce complete electronic circuits. A logic gate consists of up to about twenty transistors whereas an advanced microprocessor, as of 2009, can use as many as 3 billion transistors (MOSFETs). "About 60 million transistors were built in 2002… for [each] man, woman, and child on Earth." The MOS transistor is the most widely manufactured device in history. 
As of 2013, billions of transistors are manufactured every day, nearly all of which are MOSFET devices. Between 1960 and 2018, an estimated total of 13 sextillion MOS transistors have been manufactured, accounting for at least 99.9% of all transistors. The transistor's low cost, flexibility, and reliability have made it a ubiquitous device. Transistorized mechatronic circuits have replaced electromechanical devices in controlling appliances and machinery. It is often easier and cheaper to use a standard microcontroller and write a computer program to carry out a control function than to design an equivalent mechanical system to control that same function. A transistor can use a small signal applied between one pair of its terminals to control a much larger signal at another pair of terminals. This property is called gain. It can produce a stronger output signal, a voltage or current, which is proportional to a weaker input signal and thus, it can act as an amplifier. Alternatively, the transistor can be used to turn current on or off in a circuit as an electrically controlled switch, where the amount of current is determined by other circuit elements. There are two types of transistors, which have slight differences in how they are used in a circuit. A "bipolar transistor" has terminals labeled base, collector, and emitter. A small current at the base terminal (that is, flowing between the base and the emitter) can control or switch a much larger current between the collector and emitter terminals. For a "field-effect transistor", the terminals are labeled gate, source, and drain, and a voltage at the gate can control a current between source and drain. The image represents a typical bipolar transistor in a circuit. Charge will flow between emitter and collector terminals depending on the current in the base.
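The gain relationship described above can be illustrated with a short numeric sketch. The β value here is an assumed typical figure for a small-signal device, not a datasheet value for any real part:

```python
# Sketch of bipolar transistor current gain: the collector current is
# approximately beta times the (much smaller) base current.
# beta = 100 is an illustrative assumption, not a property of a real device.
def collector_current(base_current_a, beta=100):
    """Return the approximate collector current in amperes."""
    return beta * base_current_a

ib = 50e-6  # 50 microamperes flowing into the base
ic = collector_current(ib)
print(f"{ib * 1e6:.0f} uA of base current controls {ic * 1e3:.1f} mA of collector current")
```

A 50 µA base current thus controls a collector current one hundred times larger, which is the sense in which the device amplifies.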
Because internally the base and emitter connections behave like a semiconductor diode, a voltage drop develops between base and emitter while the base current exists. The amount of this voltage depends on the material the transistor is made from, and is referred to as "V"BE. Transistors are commonly used in digital circuits as electronic switches which can be either in an "on" or "off" state, both for high-power applications such as switched-mode power supplies and for low-power applications such as logic gates. Important parameters for this application include the current switched, the voltage handled, and the switching speed, characterised by the rise and fall times. In a grounded-emitter transistor circuit, such as the light-switch circuit shown, as the base voltage rises, the emitter and collector currents rise exponentially. The collector voltage drops because of reduced resistance from collector to emitter. If the voltage difference between the collector and emitter were zero (or near zero), the collector current would be limited only by the load resistance (light bulb) and the supply voltage. This is called "saturation" because current is flowing from collector to emitter freely. When saturated, the switch is said to be "on". Providing sufficient base drive current is a key problem in the use of bipolar transistors as switches. The transistor provides current gain, allowing a relatively large current in the collector to be switched by a much smaller current into the base terminal. The ratio of these currents varies depending on the type of transistor, and even for a particular type, varies depending on the collector current. In the example light-switch circuit shown, the resistor is chosen to provide enough base current to ensure the transistor will be saturated. 
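The base-resistor sizing described above can be sketched as a small calculation. All component values below are illustrative assumptions for a hypothetical light-switch circuit, not values from the figure:

```python
# Sketch of sizing the base resistor for a saturated transistor switch.
# All values are illustrative assumptions for a hypothetical circuit.
V_SUPPLY = 5.0   # voltage driving the base resistor (V)
V_BE = 0.7       # typical silicon base-emitter drop (V)
I_LOAD = 0.1     # collector (load) current to switch: 100 mA
BETA_MIN = 50    # assumed worst-case current gain
OVERDRIVE = 5    # extra base drive to guarantee saturation

# Base current needed, with overdrive to ensure the transistor saturates
# even when the actual gain is at its minimum.
i_base = OVERDRIVE * I_LOAD / BETA_MIN
# Ohm's law across the base resistor (supply minus the base-emitter drop).
r_base = (V_SUPPLY - V_BE) / i_base
print(f"Base current: {i_base * 1e3:.0f} mA, base resistor: {r_base:.0f} ohms")
```

The overdrive factor reflects the point made above: because β varies with device and collector current, the resistor is chosen so that even a worst-case transistor receives enough base current to saturate.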
In a switching circuit, the idea is to simulate, as near as possible, the ideal switch having the properties of open circuit when off, short circuit when on, and an instantaneous transition between the two states. Parameters are chosen such that the "off" output is limited to leakage currents too small to affect connected circuitry, the resistance of the transistor in the "on" state is too small to affect circuitry, and the transition between the two states is fast enough not to have a detrimental effect. The common-emitter amplifier is designed so that a small change in voltage ("V"in) changes the small current through the base of the transistor whose current amplification combined with the properties of the circuit means that small swings in "V"in produce large changes in "V"out. Various configurations of single transistor amplifier are possible, with some providing current gain, some voltage gain, and some both. From mobile phones to televisions, vast numbers of products include amplifiers for sound reproduction, radio transmission, and signal processing. The first discrete-transistor audio amplifiers barely supplied a few hundred milliwatts, but power and audio fidelity gradually increased as better transistors became available and amplifier architecture evolved. Modern transistor audio amplifiers of up to a few hundred watts are common and relatively inexpensive. Before transistors were developed, vacuum (electron) tubes (or in the UK "thermionic valves" or just "valves") were the main active components in electronic equipment. The key advantages that have allowed transistors to replace vacuum tubes in most applications are their smaller size and lower power consumption, although transistors have limitations of their own, such as poorer performance than certain vacuum tubes at very high operating frequencies or voltages. Transistors are categorized by semiconductor material, structure, polarity, power rating, operating frequency, and intended application; hence, a particular transistor may be described as a "silicon, surface-mount, BJT, n–p–n, low-power, high-frequency switch". A popular way to remember which symbol represents which type of transistor is to look at the arrow and how it is arranged.
Within an NPN transistor symbol, the arrow will Not Point iN. Conversely, within the PNP symbol you see that the arrow Points iN Proudly. The "field-effect transistor", sometimes called a "unipolar transistor", uses either electrons (in "n-channel FET") or holes (in "p-channel FET") for conduction. The four terminals of the FET are named "source", "gate", "drain", and "body" ("substrate"). On most FETs, the body is connected to the source inside the package, and this will be assumed for the following description. In a FET, the drain-to-source current flows via a conducting channel that connects the "source" region to the "drain" region. The conductivity is varied by the electric field that is produced when a voltage is applied between the gate and source terminals, hence the current flowing between the drain and source is controlled by the voltage applied between the gate and source. As the gate–source voltage ("V"GS) is increased, the drain–source current ("I"DS) increases exponentially for "V"GS below threshold, and then at a roughly quadratic rate ("I"DS ∝ ("V"GS − "V"T)²) (where "V"T is the threshold voltage at which drain current begins) in the "space-charge-limited" region above threshold. A quadratic behavior is not observed in modern devices, for example, at the 65 nm technology node. For low noise at narrow bandwidth the higher input resistance of the FET is advantageous. FETs are divided into two families: "junction FET" (JFET) and "insulated gate FET" (IGFET). The IGFET is more commonly known as a "metal–oxide–semiconductor FET" (MOSFET), reflecting its original construction from layers of metal (the gate), oxide (the insulation), and semiconductor. Unlike IGFETs, the JFET gate forms a p–n diode with the channel which lies between the source and drain. Functionally, this makes the n-channel JFET the solid-state equivalent of the vacuum tube triode which, similarly, forms a diode between its grid and cathode.
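The quadratic above-threshold relation mentioned above can be sketched with the ideal square-law model. The threshold voltage and transconductance parameter below are illustrative assumptions, and the exponential subthreshold region is ignored:

```python
# Simplified square-law model of MOSFET drain current above threshold,
# following I_DS ∝ (V_GS − V_T)² as described in the text.
# v_t and k are illustrative assumptions, not values for a real device.
def drain_current(v_gs, v_t=1.0, k=2e-3):
    """Saturation-region drain current (A) in the ideal square-law model."""
    if v_gs <= v_t:
        return 0.0  # below threshold: channel off (subthreshold leakage ignored)
    return k * (v_gs - v_t) ** 2

for v in (0.5, 1.5, 2.0, 3.0):
    print(f"V_GS = {v} V -> I_DS = {drain_current(v) * 1e3:.2f} mA")
```

As the text notes, real short-channel devices (e.g. at the 65 nm node) deviate from this idealized quadratic behavior, so the model is only a first approximation.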
Also, both devices operate in the "depletion mode", they both have a high input impedance, and they both conduct current under the control of an input voltage. Metal–semiconductor FETs (MESFETs) are JFETs in which the reverse biased p–n junction is replaced by a metal–semiconductor junction. These, and the HEMTs (high-electron-mobility transistors, or HFETs), in which a two-dimensional electron gas with very high carrier mobility is used for charge transport, are especially suitable for use at very high frequencies (several GHz). FETs are further divided into "depletion-mode" and "enhancement-mode" types, depending on whether the channel is turned on or off with zero gate-to-source voltage. For enhancement mode, the channel is off at zero bias, and a gate potential can "enhance" the conduction. For the depletion mode, the channel is on at zero bias, and a gate potential (of the opposite polarity) can "deplete" the channel, reducing conduction. For either mode, a more positive gate voltage corresponds to a higher current for n-channel devices and a lower current for p-channel devices. Nearly all JFETs are depletion-mode because the diode junctions would forward bias and conduct if they were enhancement-mode devices, while most IGFETs are enhancement-mode types. The metal–oxide–semiconductor field-effect transistor (MOSFET, MOS-FET, or MOS FET), also known as the metal–oxide–silicon transistor (MOS transistor, or MOS), is a type of field-effect transistor that is fabricated by the controlled oxidation of a semiconductor, typically silicon. It has an insulated gate, whose voltage determines the conductivity of the device. This ability to change conductivity with the amount of applied voltage can be used for amplifying or switching electronic signals. The MOSFET is by far the most common transistor, and the basic building block of most modern electronics. The MOSFET accounts for 99.9% of all transistors in the world. 
Bipolar transistors are so named because they conduct by using both majority and minority carriers. The bipolar junction transistor, the first type of transistor to be mass-produced, is a combination of two junction diodes, and is formed of either a thin layer of p-type semiconductor sandwiched between two n-type semiconductors (an n–p–n transistor), or a thin layer of n-type semiconductor sandwiched between two p-type semiconductors (a p–n–p transistor). This construction produces two p–n junctions: a base–emitter junction and a base–collector junction, separated by a thin region of semiconductor known as the base region. (Two junction diodes wired together without sharing an intervening semiconducting region will not make a transistor). BJTs have three terminals, corresponding to the three layers of semiconductor—an "emitter", a "base", and a "collector". They are useful in amplifiers because the currents at the emitter and collector are controllable by a relatively small base current. In an n–p–n transistor operating in the active region, the emitter–base junction is forward biased (electrons and holes recombine at the junction), the base–collector junction is reverse biased (electrons and holes are formed at, and move away from, the junction), and electrons are injected into the base region. Because the base is narrow, most of these electrons will diffuse into the reverse-biased base–collector junction and be swept into the collector; perhaps one-hundredth of the electrons will recombine in the base, which is the dominant mechanism in the base current. In addition, because the base is lightly doped (in comparison to the emitter and collector regions), recombination rates are low, permitting more carriers to diffuse across the base region. By controlling the number of electrons that can leave the base, the number of electrons entering the collector can be controlled. Collector current is approximately β (common-emitter current gain) times the base current.
It is typically greater than 100 for small-signal transistors but can be smaller in transistors designed for high-power applications. Unlike the field-effect transistor (see below), the BJT is a low-input-impedance device. Also, as the base–emitter voltage ("V"BE) is increased the base–emitter current and hence the collector–emitter current ("I"CE) increase exponentially according to the Shockley diode model and the Ebers-Moll model. Because of this exponential relationship, the BJT has a higher transconductance than the FET. Bipolar transistors can be made to conduct by exposure to light, because absorption of photons in the base region generates a photocurrent that acts as a base current; the collector current is approximately β times the photocurrent. Devices designed for this purpose have a transparent window in the package and are called phototransistors. The MOSFET is by far the most widely used transistor for both digital circuits as well as analog circuits, accounting for 99.9% of all transistors in the world. The bipolar junction transistor (BJT) was previously the most commonly used transistor during the 1950s to 1960s. Even after MOSFETs became widely available in the 1970s, the BJT remained the transistor of choice for many analog circuits such as amplifiers because of their greater linearity, up until MOSFET devices (such as power MOSFETs, LDMOS and RF CMOS) replaced them for most power electronic applications in the 1980s. In integrated circuits, the desirable properties of MOSFETs allowed them to capture nearly all market share for digital circuits in the 1970s. Discrete MOSFETs (typically power MOSFETs) can be applied in transistor applications, including analog circuits, voltage regulators, amplifiers, power transmitters and motor drivers. The types of some transistors can be parsed from the part number. There are three major semiconductor naming standards. In each, the alphanumeric prefix provides clues to type of the device. 
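The exponential "V"BE-to-current relationship noted above can be sketched with the Shockley model. The saturation current and thermal voltage below are illustrative round numbers, not data for any real device:

```python
import math

# Sketch of the exponential V_BE -> I_C relationship (Shockley model).
# I_S and V_T are illustrative assumptions: I_S is a typical order of
# magnitude for a small silicon device, V_T is kT/q near room temperature.
I_S = 1e-14   # saturation current (A)
V_T = 0.0258  # thermal voltage at ~300 K (V)

def ic_from_vbe(v_be):
    """Collector current grows exponentially with base-emitter voltage."""
    return I_S * math.exp(v_be / V_T)

# A ~60 mV increase in V_BE raises the current roughly tenfold.
print(ic_from_vbe(0.60), ic_from_vbe(0.66))
```

This steep exponential dependence is why, as the text says, the BJT has a higher transconductance than the FET at comparable currents.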
The "JIS-C-7012" specification for transistor part numbers starts with "2S", e.g. 2SD965, but sometimes the "2S" prefix is not marked on the package – a 2SD965 might only be marked "D965"; a 2SC1815 might be listed by a supplier as simply "C1815". This series sometimes has suffixes (such as "R", "O", "BL", standing for "red", "orange", "blue", etc.) to denote variants, such as tighter "h"FE (gain) groupings. The Pro Electron standard, the European Electronic Component Manufacturers Association part numbering scheme, begins with two letters: the first gives the semiconductor type (A for germanium, B for silicon, and C for materials like GaAs); the second letter denotes the intended use (A for diode, C for general-purpose transistor, etc.). A 3-digit sequence number (or one letter then two digits, for industrial types) follows. With early devices this indicated the case type. Suffixes may be used, with a letter (e.g. "C" often means high "h"FE, such as in: BC549C) or other codes may follow to show gain (e.g. BC327-25) or voltage rating (e.g. BUK854-800A). The more common prefixes are: The JEDEC "EIA370" transistor device numbers usually start with "2N", indicating a three-terminal device (dual-gate field-effect transistors are four-terminal devices, so begin with 3N), then a 2, 3 or 4-digit sequential number with no significance as to device properties (although early devices with low numbers tend to be germanium). For example, 2N3055 is a silicon n–p–n power transistor, 2N1301 is a p–n–p germanium switching transistor. A letter suffix (such as "A") is sometimes used to indicate a newer variant, but rarely gain groupings. Manufacturers of devices may have their own proprietary numbering system, for example CK722. Since devices are second-sourced, a manufacturer's prefix (like "MPF" in MPF102, which originally would denote a Motorola FET) now is an unreliable indicator of who made the device. 
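The prefix conventions just described can be sketched as a small classifier. The rules below cover only the cases named in the text (and not, for example, abbreviated markings such as "C1815" with the "2S" omitted), so this is a rough illustration rather than a complete parser:

```python
import re

# Rough classifier for the transistor numbering schemes described above.
# Only the prefixes named in the text are handled; abbreviated package
# markings and many real-world variants are deliberately out of scope.
def naming_scheme(part_number):
    if re.match(r"2S[A-DJK]", part_number):
        return "JIS-C-7012"          # e.g. 2SC1815, 2SD965, 2SJ176
    if re.match(r"[2-4]N\d", part_number):
        return "JEDEC"               # e.g. 2N3055, 2N1301
    if re.match(r"[ABC][A-Z]\d", part_number):
        return "Pro Electron"        # e.g. BC549C, BUK854-800A
    return "proprietary or unknown"  # e.g. CK722, MPF102

for pn in ("2SC1815", "2N3055", "BC549C", "MPF102"):
    print(pn, "->", naming_scheme(pn))
```

The check order matters: a JIS "2S…" number would otherwise also match the JEDEC "2N-style digit" pattern's broader family, so the more specific prefix is tested first.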
Some proprietary naming schemes adopt parts of other naming schemes, for example a PN2222A is a (possibly Fairchild Semiconductor) 2N2222A in a plastic case (but a PN108 is a plastic version of a BC108, not a 2N108, while the PN100 is unrelated to other xx100 devices). Military part numbers sometimes are assigned their own codes, such as the British Military CV Naming System. Manufacturers buying large numbers of similar parts may have them supplied with "house numbers", identifying a particular purchasing specification and not necessarily a device with a standardized registered number. For example, an HP part 1854,0053 is a (JEDEC) 2N2218 transistor which is also assigned the CV number CV7763. With so many independent naming schemes, and the abbreviation of part numbers when printed on the devices, ambiguity sometimes occurs. For example, two different devices may be marked "J176" (one the J176 low-power JFET, the other the higher-powered MOSFET 2SJ176). As older "through-hole" transistors are given surface-mount packaged counterparts, they tend to be assigned many different part numbers because manufacturers have their own systems to cope with the variety in pinout arrangements and options for dual or matched n–p–n + p–n–p devices in one pack. So even when the original device (such as a 2N3904) may have been assigned by a standards authority, and well known by engineers over the years, the new versions are far from standardized in their naming. The first BJTs were made from germanium (Ge). Silicon (Si) types currently predominate but certain advanced microwave and high-performance versions now employ the "compound semiconductor" material gallium arsenide (GaAs) and the "semiconductor alloy" silicon germanium (SiGe). Single element semiconductor material (Ge and Si) is described as "elemental". Rough parameters for the most common semiconductor materials used to make transistors are given in the adjacent table.
These parameters will vary with increase in temperature, electric field, impurity level, strain, and sundry other factors. The "junction forward voltage" is the voltage applied to the emitter–base junction of a BJT in order to make the base conduct a specified current. The current increases exponentially as the junction forward voltage is increased. The values given in the table are typical for a current of 1 mA (the same values apply to semiconductor diodes). The lower the junction forward voltage the better, as this means that less power is required to "drive" the transistor. The junction forward voltage for a given current decreases with increase in temperature. For a typical silicon junction the change is −2.1 mV/°C. In some circuits special compensating elements (sensistors) must be used to compensate for such changes. The density of mobile carriers in the channel of a MOSFET is a function of the electric field forming the channel and of various other phenomena such as the impurity level in the channel. Some impurities, called dopants, are introduced deliberately in making a MOSFET, to control the MOSFET electrical behavior. The "electron mobility" and "hole mobility" columns show the average speed that electrons and holes diffuse through the semiconductor material with an electric field of 1 volt per meter applied across the material. In general, the higher the electron mobility the faster the transistor can operate. The table indicates that Ge is a better material than Si in this respect. However, Ge has four major shortcomings compared to silicon and gallium arsenide. Because the electron mobility is higher than the hole mobility for all semiconductor materials, a given bipolar n–p–n transistor tends to be swifter than an equivalent p–n–p transistor. GaAs has the highest electron mobility of the three semiconductors. It is for this reason that GaAs is used in high-frequency applications.
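The −2.1 mV/°C temperature coefficient quoted above lends itself to a short worked example. The 25 °C baseline voltage used here is an illustrative assumption:

```python
# Worked example of the -2.1 mV/°C silicon junction-voltage drift
# quoted in the text. The 0.65 V baseline at 25 °C is an illustrative
# assumption, not a value from the table.
TEMPCO = -2.1e-3  # volts per degree Celsius, typical silicon junction

def v_be_at(temp_c, v_be_25c=0.65):
    """Estimate the junction forward voltage at a given temperature."""
    return v_be_25c + TEMPCO * (temp_c - 25.0)

# A 50 °C rise lowers the forward voltage by about 105 mV.
print(f"V_BE at 75 C: {v_be_at(75.0) * 1e3:.0f} mV")
```

Drifts of this magnitude are why, as the text notes, some circuits need compensating elements such as sensistors.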
A relatively recent FET development, the "high-electron-mobility transistor" (HEMT), has a heterostructure (junction between different semiconductor materials) of aluminium gallium arsenide (AlGaAs)-gallium arsenide (GaAs) which has twice the electron mobility of a GaAs-metal barrier junction. Because of their high speed and low noise, HEMTs are used in satellite receivers working at frequencies around 12 GHz. HEMTs based on gallium nitride and aluminium gallium nitride (AlGaN/GaN HEMTs) provide a still higher electron mobility and are being developed for various applications. 'Max. junction temperature' values represent a cross section taken from various manufacturers' data sheets. This temperature should not be exceeded or the transistor may be damaged. 'Al–Si junction' refers to the high-speed (aluminum–silicon) metal–semiconductor barrier diode, commonly known as a Schottky diode. This is included in the table because some silicon power IGFETs have a parasitic reverse Schottky diode formed between the source and drain as part of the fabrication process. This diode can be a nuisance, but sometimes it is used in the circuit. Discrete transistors can be individually packaged transistors or unpackaged transistor chips (dice). Transistors come in many different semiconductor packages (see image). The two main categories are "through-hole" (or "leaded"), and "surface-mount", also known as "surface-mount device" (SMD). The "ball grid array" (BGA) is the latest surface-mount package (currently only for large integrated circuits). It has solder "balls" on the underside in place of leads. Because they are smaller and have shorter interconnections, SMDs have better high-frequency characteristics but lower power rating. Transistor packages are made of glass, metal, ceramic, or plastic. The package often dictates the power rating and frequency characteristics. Power transistors have larger packages that can be clamped to heat sinks for enhanced cooling. 
Additionally, most power transistors have the collector or drain physically connected to the metal enclosure. At the other extreme, some surface-mount "microwave" transistors are as small as grains of sand. Often a given transistor type is available in several packages. Transistor packages are mainly standardized, but the assignment of a transistor's functions to the terminals is not: different transistor types may assign different functions to the package's terminals. Even for the same transistor type the terminal assignment can vary (normally indicated by a suffix letter to the part number, e.g. BC212L and BC212K). Nowadays most transistors come in a wide range of SMT packages; by comparison, the list of available through-hole packages is relatively small. The most common through-hole transistor packages, in alphabetical order, are: ATV, E-line, MRT, HRT, SC-43, SC-72, TO-3, TO-18, TO-39, TO-92, TO-126, TO-220, TO-247, TO-251, TO-262, ZTX851. Unpackaged transistor chips (dice) may be assembled into hybrid devices. The IBM SLT module of the 1960s is one example of such a hybrid circuit module, using glass-passivated transistor (and diode) dice. Other packaging techniques for discrete transistors as chips include Direct Chip Attach (DCA) and Chip On Board (COB). Researchers have made several kinds of flexible transistors, including organic field-effect transistors. Flexible transistors are useful in some kinds of flexible displays and other flexible electronics.
https://en.wikipedia.org/wiki?curid=30011
Time Time is the indefinite continued progress of existence and events that occur in an apparently irreversible succession from the past, through the present, into the future. It is a component quantity of various measurements used to sequence events, to compare the duration of events or the intervals between them, and to quantify rates of change of quantities in material reality or in the conscious experience. Time is often referred to as a fourth dimension, along with three spatial dimensions. Time has long been an important subject of study in religion, philosophy, and science, but defining it in a manner applicable to all fields without circularity has consistently eluded scholars. Nevertheless, diverse fields such as business, industry, sports, the sciences, and the performing arts all incorporate some notion of time into their respective measuring systems.
https://en.wikipedia.org/wiki?curid=30012
Tifinagh Tifinagh (; in Tamazight Latin: ; in Neo-Tifinagh: ; in Tuareg Tifinagh: or ) is an abjad script used to write the Tamazight languages. Neo-Tifinagh, a modern alphabetical derivative of the traditional script, was reintroduced in the 20th century. A slightly modified version of the traditional script, called "Tifinagh IRCAM", is used in a number of Moroccan elementary schools to teach the Berber language to children, as well as in a number of publications. Tifinagh or Libyc was widely used in antiquity by speakers of Libyc languages throughout North Africa and on the Canary Islands. Some authors believe it to be attested from as far back as the 2nd millennium BC to the present time. Most scholars consider the script to be of local origin, although a relationship with the Punic or Phoenician alphabet has also been suggested. An alternative suggestion, by Helmut Satzinger, is that its origin is to be seen in the Ancient South Arabian script. The ancient Tifinagh script was a pure abjad; it had no vowels. Gemination was not marked. The writing was usually from the bottom to the top, although right-to-left, and even other orders, were also found. The letters would take different forms when written vertically than when they were written horizontally. There are four known variants: Eastern Libyc, Western Libyc, Bu Njem Libyc and Saharan Libyc. The eastern variant covers approximately the north-west of Tunisia as well as eastern Algeria; the western limit of its use is placed to the east of Sétif, although inscriptions of the eastern type can exceptionally be found in Kabylia. It shows a clear Phoenician influence. It is the best-deciphered variant, owing to the discovery of several Numidian bilingual inscriptions in Libyan and Punic (notably at Dougga in Tunisia). 
Researcher Lionel Galand maintains that there are two versions of Eastern Libyc: one used for monuments, which he called the Dougga script, and one for funerary stelae, which is Eastern Libyc proper. The latter contains only 23 letters, which agrees with observations made by the historian Fabius Planciades Fulgentius. In the Dougga script, 22 of the 24 letters have been deciphered so far. The western variant covers Morocco and the western half of Algeria (the country populated by the Mauri), as well as the Canary Islands. It is more archaic and shows no Phoenician influence. Its inscriptions are fewer and generally shorter and rougher. A characteristic of this alphabet is that it includes additional signs, unknown in the eastern variant, whose values could not be determined. Some of these characters are identical to letters of the Tuareg alphabet. There are graffiti, discovered at Bou Njem (the ancient Gholaia in Libya) on the wall of an old monument, that date from the 3rd century. The writing is horizontal and is made up of nine inscriptions. This variant was heavily influenced by Latin, to the point of constituting a special alphabet. The Saharan variant was widespread in pre-Saharan and Saharan Libya, the territory of the Gaetuli and Garamantes, where it was used by the inhabitants to engrave their messages. It is mostly unknown and badly located. The Libyco-Berber script is used today in the form of Tifinagh to write the Tuareg languages, which belong to the Berber branch of the Afroasiatic family. Early uses of the script have been found on rock art and in various tombs. Among these is the 1,500-year-old monumental tomb of the Tuareg queen Tin Hinan, where vestiges of a Tifinagh inscription have been found on one of its walls. 
According to historians, the Tuareg are "an entirely oral society in which memory and oral communication perform all the functions which reading and writing have in a literate society… The Tifinagh are used primarily for games and puzzles, short graffiti and brief messages." Occasionally, the script has been used to write other neighbouring languages such as Tagdalt, which is a Northern Songhay language and not a member of the Afroasiatic family. Common forms of the letters are illustrated at left, including various ligatures of "t" and "n". Gemination, though phonemic, is not indicated in Tifinagh. The letter "t", +, is often combined with a preceding letter to form an orthographic ligature. Most of the letters have more than one common form, including mirror-images of the forms shown here. When the letters "l" and "n" are adjacent to themselves or to each other, the second is offset, either by inclining, lowering, raising, or shortening it. For example, since the letter "l" is a double line, codice_1, and "n" a single line, codice_2, the sequence "nn" may be written codice_3 to differentiate it from "l". Similarly, "ln" is codice_4, "nl" codice_5, "ll" codice_6, "nnn" codice_7, etc. Traditionally, the Tifinagh script does not indicate vowels except word-finally, where a single dot stands for any vowel. In some areas, Arabic vowel diacritics are combined with Tifinagh letters to transcribe vowels, or "y, w" may be used for long "ī" and "ū". Neo-Tifinagh is the modern fully alphabetic script developed from earlier forms of Tifinagh. It is written left to right. Until recently, virtually no books or websites were published in this alphabet, with activists favouring the Latin (or, more rarely, Arabic) scripts for serious use; however, it is extremely popular for symbolic use, with many books and websites written in a different script featuring logos or title pages using Neo-Tifinagh. In Morocco, use of Neo-Tifinagh was suppressed until recently. 
The Moroccan state arrested and imprisoned people using this script during the 1980s and 1990s. In 2003, however, the king took a "neutral" position between the claims of Latin script and Arabic script by adopting Neo-Tifinagh; as a result, books are beginning to be published in this script, and it is taught in some schools. However, many independent Berber-language publications are still published using the Berber Latin alphabet. Outside Morocco, it has no official status. In Algeria, almost all Berber publications use the Berber Latin Alphabet. The Algerian Black Spring was partly caused by the repression of Berber languages. In Libya, the government of Muammar Gaddafi consistently banned Tifinagh from being used in public contexts such as store displays and banners. After the Libyan Civil War, the National Transitional Council has shown an openness towards the Berber languages. The rebel Libya TV, based in Qatar, has included the Berber language and the Tifinagh alphabet in some of its programming. The following are the letters and a few ligatures of traditional Tifinagh and Neo-Tifinagh: Tifinagh was added to the Unicode Standard in March 2005, with the release of version 4.1. The Unicode block range for Tifinagh is U+2D30–U+2D7F:
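The Unicode block cited above (U+2D30–U+2D7F, added in version 4.1) can be enumerated directly from the standard library's character database. A minimal sketch (the helper name is ours); not every code point in the block is assigned, so unassigned points are skipped:

```python
# Sketch: enumerating the Tifinagh Unicode block (U+2D30-U+2D7F) using the
# standard library. unicodedata.name() is queried with a default so that
# unassigned code points in the block are skipped rather than raising.
import unicodedata

TIFINAGH_START, TIFINAGH_END = 0x2D30, 0x2D7F

def tifinagh_letters():
    """Yield (code point, character, name) for assigned points in the block."""
    for cp in range(TIFINAGH_START, TIFINAGH_END + 1):
        ch = chr(cp)
        name = unicodedata.name(ch, None)
        if name is not None:
            yield cp, ch, name

letters = list(tifinagh_letters())
# The first assigned point is U+2D30, TIFINAGH LETTER YA.
print(f"U+{letters[0][0]:04X} {letters[0][2]}")
```

This requires a Python build whose Unicode database includes version 4.1 or later, which is true of any modern interpreter.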
https://en.wikipedia.org/wiki?curid=30015
Turkic languages The Turkic languages are a language family of at least 35 documented languages, spoken by the Turkic peoples of Eurasia from Eastern Europe, the Caucasus, Central Asia and West Asia all the way to North Asia (particularly in Siberia) and East Asia. The Turkic languages originated in a region of East Asia spanning Western China to Mongolia, where Proto-Turkic is thought to have been spoken, from where they expanded to Central Asia and farther west during the first millennium. Turkic languages are spoken as a native language by some 170 million people, and the total number of Turkic speakers, including second-language speakers, is over 200 million. The Turkic language with the greatest number of speakers is Turkish, spoken mainly in Anatolia and the Balkans; its native speakers account for about 40% of all Turkic speakers. Characteristic features such as vowel harmony, agglutination, and lack of grammatical gender are universal within the Turkic family. "It seems strange that a language elaborated by the rude and nomad tribes of Central Asia," wrote the British explorer Robert Barkley Shaw in his 1875 "Sketch of the Turki Language", "should present ... an example of symmetry such as few of the more cultivated forms of speech can boast." There is a high degree of mutual intelligibility among the various Oghuz languages, which include Turkish, Azerbaijani, Turkmen, Qashqai, Gagauz, Balkan Gagauz Turkish and Oghuz-influenced Crimean Tatar. Although methods of classification vary, the Turkic languages are usually considered to be divided into two branches: Oghur, the only surviving member of which is Chuvash, and Common Turkic, which includes all other Turkic languages, including the Oghuz subbranch. Languages belonging to the Kipchak subbranch also share a high degree of mutual intelligibility among themselves. 
Qazaq and Qirghiz may be better seen as mutually intelligible dialects of a single language that are regarded as separate languages for sociopolitical reasons. They differ mainly phonetically, while the lexicon and grammar are much the same, although both have standardized written forms that may differ in some ways. Until the 20th century, both languages used a common written form of Chaghatay Turki. Turkic languages show some similarities with the Mongolic, Tungusic, Koreanic and Japonic languages. These similarities led some linguists to propose an Altaic language family, though this proposal is widely rejected by historical linguists. Apparent similarities with the Uralic languages even caused these families to be regarded as one for a long time under the Ural-Altaic hypothesis. However, there has not been sufficient evidence to conclude the existence of either of these macrofamilies, the shared characteristics between the languages being attributed presently to extensive prehistoric language contact. Turkic languages are null-subject languages, have vowel harmony and extensive agglutination by means of suffixes and postpositions, and lack grammatical articles, noun classes, and grammatical gender. Subject–object–verb word order is universal within the family. The root of a word typically consists of one, two or three consonants. The homeland of the Turkic peoples and their language is suggested to be somewhere between the Transcaspian steppe and Northeastern Asia (Manchuria), with genetic evidence pointing to the region near South Siberia and Mongolia as the "Inner Asian Homeland" of the Turkic ethnicity. Similarly, several linguists, including Juha Janhunen, Roger Blench and Matthew Spriggs, suggest that modern-day Mongolia is the homeland of the early Turkic language. 
Extensive contact took place between Proto-Turks and Proto-Mongols approximately during the first millennium BC; the shared cultural tradition between the two Eurasian nomadic groups is called the "Turco-Mongol" tradition. The two groups shared a similar religious system, Tengrism, and there exists a multitude of evident loanwords between Turkic languages and Mongolic languages. Although the loans were bidirectional, today Turkic loanwords constitute the largest foreign component in Mongolian vocabulary. Some lexical and extensive typological similarities between Turkic and the nearby Tungusic and Mongolic families, as well as the Korean and Japonic families (all formerly widely considered to be part of the so-called Altaic language family), have in more recent years instead been attributed to prehistoric contact among the group, sometimes referred to as the Northeast Asian sprachbund. A more recent (circa first millennium BCE) contact among "core Altaic" (Turkic, Mongolic, and Tungusic) is distinguished from this, owing to the existence of definitive common words that appear to have been mostly borrowed from Turkic into Mongolic, and later from Mongolic into Tungusic: Turkic borrowings into Mongolic significantly outnumber Mongolic borrowings into Turkic, and Turkic and Tungusic do not share any words that do not also exist in Mongolic. Alexander Vovin (2004, 2010) notes that Old Turkic had borrowed some words from the Ruan-ruan language (the language of the Rouran Khaganate), which Vovin considers to be an extinct non-Altaic language that is possibly a Yeniseian language or not related to any modern-day language. Turkic languages also show some Chinese loanwords that point to early contact during the time of Proto-Turkic. Robbeets (et al. 2015 and et al. 
2017) suggest that the homeland of the Turkic languages was somewhere in Manchuria, close to the Mongolic, Tungusic and Koreanic homeland (including the ancestor of Japonic), and that these languages share a common "Transeurasian" origin. More evidence for the proposed ancestral "Transeurasian" origin was presented by Nelson et al. 2020 and Li et al. 2020. The first established records of the Turkic languages are the eighth century AD Orkhon inscriptions by the Göktürks, recording the Old Turkic language, which were discovered in 1889 in the Orkhon Valley in Mongolia. The "Compendium of the Turkic Dialects" ("Divânü Lügati't-Türk"), written during the 11th century AD by Kaşgarlı Mahmud of the Kara-Khanid Khanate, constitutes an early linguistic treatment of the family. The "Compendium" is the first comprehensive dictionary of the Turkic languages and also includes the first known map of the Turkic speakers' geographical distribution. It mainly pertains to the Southwestern branch of the family. The Codex Cumanicus (12th–13th centuries AD) concerning the Northwestern branch is another early linguistic manual, between the Kipchak language and Latin, used by the Catholic missionaries sent to the Western Cumans inhabiting a region corresponding to present-day Hungary and Romania. The earliest records of the language spoken by Volga Bulgars, the parent to today's Chuvash language, are dated to the 13th–14th centuries AD. With the Turkic expansion during the Early Middle Ages (c. 6th–11th centuries AD), Turkic languages, in the course of just a few centuries, spread across Central Asia, from Siberia to the Mediterranean. Various terminologies from the Turkic languages have passed into Persian, Hindustani, Russian, Chinese, and to a lesser extent, Arabic. The geographical distribution of Turkic-speaking peoples across Eurasia since the Ottoman era ranges from the North-East of Siberia to Turkey in the West. (See picture in the box on the right above.) 
For centuries, the Turkic-speaking peoples have migrated extensively and intermingled continuously, and their languages have been influenced mutually and through contact with the surrounding languages, especially the Iranian, Slavic, and Mongolic languages. This has obscured the historical developments within each language and/or language group, and as a result, there exist several systems to classify the Turkic languages. The modern genetic classification schemes for Turkic are still largely indebted to Samoilovich (1922). The Turkic languages may be divided into six branches: In this classification, Oghur Turkic is also referred to as Lir-Turkic, and the other branches are subsumed under the title of Shaz-Turkic or Common Turkic. It is not clear when these two major types of Turkic can be assumed to have actually diverged. With less certainty, the Southwestern, Northwestern, Southeastern and Oghur groups may further be summarized as West Turkic, and the Northeastern, Kyrgyz-Kipchak and Arghu (Khalaj) groups as East Turkic. Geographically and linguistically, the languages of the Northwestern and Southeastern subgroups belong to the central Turkic languages, while the Northeastern and Khalaj languages are the so-called peripheral languages. Hruschka et al. (2014) use computational phylogenetic methods to calculate a tree of Turkic based on phonological sound changes. The following isoglosses are traditionally used in the classification of the Turkic languages: Additional isoglosses include: *In the standard Istanbul dialect of Turkish, the "ğ" in "dağ" and "dağlı" is not realized as a consonant, but as a slight lengthening of the preceding vowel. The following table is based upon the classification scheme presented by Lars Johanson (1998). The following is a brief comparison of cognates among the basic vocabulary across the Turkic language family (about 60 words). 
Empty cells do not necessarily imply that a particular language lacks a word to describe the concept, but rather that the word for the concept in that language may be formed from another stem and is not a cognate with the other words in the row, or that a loanword is used in its place. Also, there may be shifts in meaning from one language to another, and so the "Common meaning" given is only approximate. In some cases the form given is found only in some dialects of the language, or a loanword is much more common (e.g. in Turkish, the preferred word for "fire" is the Persian-derived "ateş", whereas the native "od" is dead). Forms are given in native Latin orthographies unless otherwise noted. The Turkic language family is currently regarded as one of the world's primary language families. Turkic is one of the main members of the controversial Altaic language family. There are some other theories about an external relationship, but none of them is generally accepted. The possibility of a genetic relation between Turkic and Korean, independently from Altaic, is suggested by some linguists. The linguist Kabak (2004) of the University of Würzburg states that Turkic and Korean share similar phonology as well as morphology. Yong-Sŏng Li (2014) suggests that there are several cognates between Turkic and Old Korean. He states that these supposed cognates can be useful for reconstructing the early Turkic language. According to him, words related to nature, earth and ruling, but especially to the sky and stars, seem to be cognates. The linguist Choi suggested already in 1996 a close relationship between Turkic and Korean regardless of any Altaic connections. Many historians also point out a close non-linguistic relationship between Turkic peoples and Koreans. Especially close were the relations between the Göktürks and Goguryeo. Some linguists suggested a relation to Uralic languages, especially to the Ugric languages. 
This view is rejected and seen as obsolete by mainstream linguists. Similarities are because of language contact and borrowings mostly from Turkic into Ugric languages. Stachowski (2015) states that any relation between Turkic and Uralic must be a contact one.
https://en.wikipedia.org/wiki?curid=30018
The Sound of Music The Sound of Music is a musical with music by Richard Rodgers, lyrics by Oscar Hammerstein II, and a book by Howard Lindsay and Russel Crouse. It is based on the 1949 memoir of Maria von Trapp, "The Story of the Trapp Family Singers". Set in Austria on the eve of the "Anschluss" in 1938, the musical tells the story of Maria, who takes a job as governess to a large family while she decides whether to become a nun. She falls in love with the children, and eventually their widowed father, Captain von Trapp. He is ordered to accept a commission in the German navy, but he opposes the Nazis. He and Maria decide on a plan to flee Austria with the children. Many songs from the musical have become standards, such as "Edelweiss", "My Favorite Things", "Climb Ev'ry Mountain", "Do-Re-Mi", and the title song "The Sound of Music". The original Broadway production, starring Mary Martin and Theodore Bikel, opened in 1959 and won five Tony Awards, including Best Musical, out of nine nominations. The first London production opened at the Palace Theatre in 1961. The show has enjoyed numerous productions and revivals since then. It was adapted as a 1965 film musical starring Julie Andrews and Christopher Plummer, which won five Academy Awards. "The Sound of Music" was the last musical written by Rodgers and Hammerstein; Oscar Hammerstein died of stomach cancer nine months after the Broadway premiere. After viewing "The Trapp Family", a 1956 West German film about the von Trapp family, and its 1958 sequel (""), stage director Vincent J. Donehue thought that the project would be perfect for his friend Mary Martin; Broadway producers Leland Hayward and Richard Halliday (Martin's husband) agreed. The producers originally envisioned a non-musical play that would be written by Lindsay and Crouse and that would feature songs from the repertoire of the Trapp Family Singers. Then they decided to add an original song or two, perhaps by Rodgers and Hammerstein. 
But it was soon agreed that the project should feature all new songs and be a musical rather than a play. Details of the history of the von Trapp family were altered for the musical. The real Georg von Trapp did live with his family in a villa in Aigen, a suburb of Salzburg. He wrote to the Nonnberg Abbey in 1926 asking for a nun to help tutor his sick daughter, and the Mother Abbess sent Maria. His wife had died in 1922. The real Maria and Georg married at the Nonnberg Abbey in 1927. Lindsay and Crouse altered the story so that Maria was governess to all of the children, whose names and ages were changed, as was Maria's original surname (the show used "Rainer" instead of "Kutschera"). The von Trapps spent some years in Austria after Maria and the Captain married and he was offered a commission in Germany's navy. Since von Trapp opposed the Nazis by that time, the family left Austria after the "Anschluss", going by train to Italy and then traveling on to London and the United States. To make the story more dramatic, Lindsay and Crouse had the family, soon after Maria's and the Captain's wedding, escape over the mountains to Switzerland on foot. In Salzburg, Austria, just before World War II, nuns from Nonnberg Abbey sing the "Dixit Dominus". One of the postulants, Maria Rainer, is on the nearby mountainside, regretting leaving the beautiful hills ("The Sound of Music"). She returns late to the abbey where the Mother Abbess and the other nuns have been considering what to do about the free-spirit ("Maria"). Maria explains her lateness, saying she was raised on that mountain, and apologizes for singing in the garden without permission. The Mother Abbess joins her in song ("My Favorite Things"). The Mother Abbess tells her that she should spend some time outside the abbey to decide whether she is suited for the monastic life. She will act as the governess to the seven children of a widower, Austro-Hungarian Navy submarine Captain Georg von Trapp. 
Maria arrives at the villa of Captain von Trapp. He explains her duties and summons the children with a boatswain's call. They march in, clad in uniforms. He teaches her their individual signals on the call, but she openly disapproves of this militaristic approach. Alone with them, she breaks through their wariness and teaches them the basics of music ("Do-Re-Mi"). Rolf, a young messenger, delivers a telegram and then meets with the oldest child, Liesl, outside the villa. He claims he knows what is right for her because he is a year older than she ("Sixteen Going on Seventeen"). They kiss, and he runs off, leaving her squealing with joy. Meanwhile, the housekeeper, Frau Schmidt, gives Maria material to make new clothes, as Maria had given all her possessions to the poor. Maria sees Liesl slipping in through the window, wet from a sudden thunderstorm, but agrees to keep her secret. The other children are frightened by the storm. Maria sings "The Lonely Goatherd" to distract them. Captain von Trapp arrives a month later from Vienna with Baroness Elsa Schräder and Max Detweiler. Elsa tells Max that something is preventing the Captain from marrying her. He opines that only poor people have the time for great romances ("How Can Love Survive"). Rolf enters, looking for Liesl, and greets them with "Heil". The Captain orders him away, saying that he is Austrian, not German. Maria and the children leapfrog in, wearing play-clothes that she made from the old drapes in her room. Infuriated, the Captain sends them off to change. She tells him that they need him to love them, and he angrily orders her back to the abbey. As she apologizes, they hear the children singing "The Sound of Music", which she had taught them, to welcome Elsa Schräder. He joins in and embraces them. Alone with Maria, he asks her to stay, thanking her for bringing music back into his house. Elsa is suspicious of her until she explains that she will be returning to the abbey in September. 
The Captain gives a party to introduce Elsa, and guests argue over the Nazi German "Anschluss" (annexation) of Austria. Kurt asks Maria to teach him to dance the Ländler. When he fails to negotiate a complicated figure, the Captain steps in to demonstrate. He and Maria dance until they come face-to-face; and she breaks away, embarrassed and confused. Discussing the expected marriage between Elsa and the Captain, Brigitta tells Maria that she thinks Maria and the Captain are really in love with each other. Elsa asks the Captain to allow the children to say goodnight to the guests with a song, "So Long, Farewell". Max is amazed at their talent and wants them for the Kaltzberg Festival, which he is organizing. The guests leave for the dining room, and Maria slips out the front door with her luggage. At the abbey, Maria says that she is ready to take her monastic vows; but the Mother Abbess realizes that she is running away from her feelings. She tells her to face the Captain and discover if they love each other, and tells her to search for and find the life she was meant to live ("Climb Ev'ry Mountain"). Max teaches the children how to sing on stage. When the Captain tries to lead them, they complain that he is not doing it as Maria did. He tells them that he has asked Elsa to marry him. They try to cheer themselves up by singing "My Favorite Things" but are unsuccessful until they hear Maria singing on her way to rejoin them. Learning of the wedding plans, she decides to stay only until the Captain can arrange for another governess. Max and Elsa argue with the Captain about the imminent "Anschluss", trying to convince him that it is inevitable ("No Way to Stop It"). When he refuses to compromise on his opposition to it, Elsa breaks off the engagement. Alone, the Captain and Maria finally admit their love, desiring only to be "An Ordinary Couple". As they marry, the nuns reprise "Maria" against the wedding processional. 
While Maria and the Captain are on their honeymoon, Max prepares the children to perform at the Kaltzberg Festival. Herr Zeller, the "Gauleiter" of the region, demands to know why they are not flying the flag of the Third Reich now that the "Anschluss" has occurred. The Captain and Maria return early from their honeymoon before the Festival. In view of the Nazi German occupation, the Captain decides the children should not sing at the event. Max argues that they would sing for Austria, but the Captain points out that it no longer exists. Maria and Liesl discuss romantic love; Maria predicts that in a few years Liesl will be married ("Sixteen Going on Seventeen (Reprise)"). Rolf enters with a telegram that offers the Captain a commission in the German Navy, and Liesl is upset to discover that Rolf is now a committed Nazi. The Captain consults Maria and decides that they must secretly flee Austria. German Admiral von Schreiber arrives to find out why Captain von Trapp has not answered the telegram. He explains that the German Navy holds him in high regard, offers him the commission, and tells him to report immediately to Bremerhaven to assume command. Maria says that he cannot leave immediately, as they are all singing in the Festival concert; and the Admiral agrees to wait. At the concert, after the von Trapps sing an elaborate reprise of "Do-Re-Mi", Max brings out the Captain's guitar. Captain von Trapp sings "Edelweiss", as a goodbye to his homeland, while using Austria's national flower as a symbol to declare his loyalty to the country. Max asks for an encore and announces that this is the von Trapp family's last chance to sing together, as the honor guard waits to escort the Captain to his new command. While the judges decide on the prizes, the von Trapps sing "So Long, Farewell", leaving the stage in small groups. Max then announces the runners-up, stalling as much as possible. 
When he announces that the first prize goes to the von Trapps and they do not appear, the Nazis start a search. The family hides at the Abbey, and Sister Margaretta tells them that the borders have been closed. Rolf comes upon them and calls his lieutenant, but after seeing Liesl he changes his mind and tells him they aren't there. The Nazis leave, and the von Trapps flee over the Alps as the nuns reprise "Climb Ev'ry Mountain". Sources: IBDB and Guidetomusicaltheatre.com. "The Sound of Music" premiered at New Haven's Shubert Theatre, where it played an eight-performance tryout in October and November 1959 before another short tryout in Boston. The musical then opened on Broadway at the Lunt-Fontanne Theatre on November 16, 1959, moved to the Mark Hellinger Theatre on November 6, 1962, and closed on June 15, 1963, after 1,443 performances. The director was Vincent J. Donehue, and the choreographer was Joe Layton. The original cast included Mary Martin as Maria, Theodore Bikel as Captain Georg von Trapp, Patricia Neway as Mother Abbess, Kurt Kasznar as Max Detweiler, Marion Marlowe as Elsa Schräder, Brian Davies as Rolf and Lauri Peters as Liesl. Patricia Brooks, June Card and Tatiana Troyanos were ensemble members in the original production. The show tied for the Tony Award for Best Musical with "Fiorello!". Other awards included Martin for Best Actress in a Musical, Neway for Best Featured Actress, Best Scenic Design (Oliver Smith) and Best Conductor and Musical Director (Frederick Dvonch). Bikel and Kasznar were nominated for acting awards, and Donehue was nominated for his direction. The entire children's cast was nominated in the Best Featured Actress category as a single nominee, even though two of the children were boys. Martha Wright replaced Martin in the role of Maria on Broadway in October 1961, followed by Karen Gantz in July 1962, Jeannie Carson in August 1962 and Nancy Dussault in September 1962. 
Jon Voight, who eventually married co-star Lauri Peters, was a replacement for Rolf. The national tour starred Florence Henderson as Maria and Beatrice Krebs as Mother Abbess. It opened at the Grand Riviera Theater, Detroit, on February 27, 1961, and closed November 23, 1963, at the O'Keefe Centre, Toronto. Henderson was succeeded by Barbara Meister in June 1962. Theodore Bikel was not satisfied playing the Captain: the role offered limited singing, and he disliked playing the same part over and over again. In his autobiography, he writes: "I promised myself then that if I could afford it, I would never do a run as long as that again." The original Broadway cast album sold three million copies. The musical premiered in London's West End at the Palace Theatre on May 18, 1961, and ran for 2,385 performances. It was directed by Jerome Whyte and used the original New York choreography, supervised by Joe Layton, and the original sets designed by Oliver Smith. The cast included Jean Bayless as Maria (later succeeded by Sonia Rees), Roger Dann as Captain von Trapp, Constance Shacklock as Mother Abbess, Eunice Gayson as Elsa Schrader, Harold Kasket as Max Detweiler, Barbara Brown as Liesl, Nicholas Bennett as Rolf and Olive Gilbert as Sister Margaretta. In 1981, at producer Ross Taylor's urging, Petula Clark agreed to star in a revival of the show at the Apollo Victoria Theatre in London's West End. Michael Jayston played Captain von Trapp, Honor Blackman was the Baroness and June Bronhill the Mother Abbess. Other notable cast members included Helen Anker, John Bennett and Martina Grant. Despite her misgivings that, at age 49, she was too old to play the role convincingly, Clark opened to unanimous rave reviews and the largest advance sale in the history of British theatre at that time. Maria von Trapp, who attended the opening night performance, described Clark as "the best" Maria ever. Clark extended her initial six-month contract to thirteen months. 
Playing to 101 percent of seating capacity, the show set the highest attendance figure for a single week (October 26–31, 1981) of any British musical production in history (as recorded in "The Guinness Book of Theatre"). It was the first stage production to incorporate the two additional songs ("Something Good" and "I Have Confidence") that Richard Rodgers composed for the film version. "My Favorite Things" had a similar context to the film version, while the short verse "A Bell is No Bell" was extended into a full-length song for Maria and the Mother Abbess. "The Lonely Goatherd" was set in a new scene at a village fair. The cast recording of this production was the first to be recorded digitally. It was released on CD for the first time in 2010 by the UK label Pet Sounds and included two bonus tracks from the original single issued by Epic to promote the production. Director Susan H. Schulman staged the first Broadway revival of "The Sound of Music", with Rebecca Luker as Maria and Michael Siberry as Captain von Trapp. It also featured Patti Cohenour as Mother Abbess, Jan Maxwell as Elsa Schrader, Fred Applegate as Max Detweiler, Dashiell Eaves as Rolf, Patricia Conolly as Frau Schmidt and Laura Benanti, in her Broadway debut, as Luker's understudy. Later, Luker and Siberry were replaced by Richard Chamberlain as the Captain and Benanti as Maria. Lou Taylor Pucci made his Broadway debut as the understudy for Kurt von Trapp. The production opened on March 12, 1998, at the Martin Beck Theatre, and closed on June 20, 1999, after 533 performances. This production was nominated for a Tony Award for Best Revival of a Musical. It then toured in North America. An Andrew Lloyd Webber production opened on November 15, 2006, at the London Palladium and ran until February 2009, produced by Live Nation's David Ian and Jeremy Sams. 
Following failed negotiations with Hollywood star Scarlett Johansson, the role of Maria was cast through a UK talent search reality TV show called "How Do You Solve a Problem like Maria?" The talent show was produced by (and starred) Andrew Lloyd Webber and featured presenter/comedian Graham Norton and a judging panel of David Ian, John Barrowman and Zoe Tyler. Connie Fisher was selected by public voting as the winner of the show. In early 2007, Fisher suffered from a heavy cold that prevented her from performing for two weeks. To prevent further disruptions, an alternate Maria, Aoife Mulholland, a fellow contestant on "How Do You Solve a Problem like Maria?", played Maria on Monday evenings and Wednesday matinee performances. Simon Shepherd was originally cast as Captain von Trapp, but after two preview performances he was withdrawn from the production, and Alexander Hanson moved into the role in time for the official opening date along with Lesley Garrett as the Mother Abbess. After Garrett left, Margaret Preece took the role. The cast also featured Lauren Ward as the Baroness, Ian Gelder as Max, Sophie Bould as Liesl, and Neil McDermott as Rolf. Other notable replacements have included Simon Burke and Simon MacCorkindale as the Captain and newcomer Amy Lennox as Liesl. Summer Strallen replaced Fisher in February 2008, with Mulholland portraying Maria on Monday evenings and Wednesday matinees. The revival received enthusiastic reviews, especially for Fisher, Preece, Bould and Garrett. A cast recording of the London Palladium cast was released. The production closed on February 21, 2009, after a run of over two years and was followed by a UK national tour, described below. The first Australian production opened at Melbourne's Princess Theatre in 1961 and ran for three years. The production was directed by Charles Hickman, with musical numbers staged by Ernest Parham. 
The cast included June Bronhill as Maria, Peter Graves as Captain von Trapp and Rosina Raisbeck as Mother Abbess. A touring company then played for years, with Vanessa Lee (Graves' wife) in the role of Maria. The cast recording made in 1961 marked the first time a major overseas production featuring Australian artists was transferred to disc. A Puerto Rican production, performed in English, opened at the Tapia Theatre in San Juan under the direction of Pablo Cabrera in 1966. It starred Camille Carrión as María and Raúl Dávila as Captain Von Trapp, and it featured a young Johanna Rosaly as Liesl. In 1968, the production transferred to the Teatro de la Zarzuela in Madrid, Spain, where it was performed in Spanish with Carrión reprising the role of María, Alfredo Mayo as Captain Von Trapp and Roberto Rey as Max. In 1988, the Moon Troupe of Takarazuka Revue performed the musical at the Bow Hall (Takarazuka, Hyōgo). Harukaze Hitomi and Gou Mayuka starred. A 1990 New York City Opera production, directed by Oscar Hammerstein II's son, James, featured Debby Boone as Maria, Laurence Guittard as Captain von Trapp, and Werner Klemperer as Max. In the 1993 Stockholm production, Carola Häggkvist played Maria and Tommy Körberg played Captain von Trapp. An Australian revival played in the Lyric Theatre, Sydney, New South Wales, from November 1999 to February 2000. Lisa McCune played Maria, John Waters was Captain von Trapp, Bert Newton was Max, Eilene Hannan was Mother Abbess and Rachel Marley was Marta. This production was based on the 1998 Broadway revival staging. The production then toured until February 2001, in Melbourne, Brisbane, Perth and Adelaide. Rachael Beck took over as Maria in Perth and Adelaide, and Rob Guest took over as Captain von Trapp in Perth. An Austrian production, performed in German, premiered in 2005 at the Volksoper Wien. It was directed and choreographed by Renaud Doucet. 
The cast included Sandra Pires as Maria, Kurt Schreibmayer and Michael Kraus as von Trapp, with Heidi Brunner as Mother Abbess. As of 2012, the production was still in the repertoire of the Volksoper with 12–20 performances per season. The Salzburg Marionette Theatre has toured extensively with its version, which features the recorded voices of Broadway singers such as Christiane Noll as Maria. The tour began in Dallas, Texas, in 2007 and continued in Salzburg in 2008. The director is Richard Hamburger. In 2010, the production was given in Paris, France, with dialogue in French and the songs in English. In 2008, a Brazilian production with Kiara Sasso as Maria and Herson Capri as the Captain played in Rio de Janeiro and São Paulo, and a Dutch production was mounted with Wieneke Remmers as Maria, directed by John Yost. Andrew Lloyd Webber, David Ian and David Mirvish presented "The Sound of Music" at the Princess of Wales Theatre in Toronto from 2008 to 2010. The role of Maria was chosen by the public through a television show, "How Do You Solve a Problem Like Maria?", which was produced by Lloyd Webber and Ian and aired in mid-2008. Elicia MacKenzie won and played the role six times a week, while the runner-up in the TV show, Janna Polzin, played Maria twice a week. Captain von Trapp was played by Burke Moses. The show ran for more than 500 performances. It was Toronto's longest-running revival. A UK tour began in 2009 and visited more than two dozen cities before ending in 2011. The original cast included Connie Fisher as Maria, Michael Praed as Captain von Trapp and Margaret Preece as the Mother Abbess. Kirsty Malpass was the alternate Maria. Jason Donovan assumed the role of Captain Von Trapp, and Verity Rushworth took over as Maria, in early 2011. Lesley Garrett reprised her role as Mother Abbess for the tour's final engagement in Wimbledon in October 2011. A production ran at the Ópera-Citi theater in Buenos Aires, Argentina in 2011. 
The cast included Laura Conforte as Maria and Diego Ramos as Captain Von Trapp. A Spanish national tour began in November 2011 at the Auditorio de Tenerife in Santa Cruz de Tenerife in the Canary Islands. The tour visited 29 Spanish cities, spending one year in Madrid's Gran Vía at the Teatro Coliseum, and one season at the Tívoli Theatre in Barcelona. It was directed by Jaime Azpilicueta and starred Silvia Luchetti as Maria and Carlos J. Benito as Captain Von Trapp. A production was mounted at the Open Air Theatre, Regent's Park from July to September 2013. It starred Charlotte Wakefield as Maria, with Michael Xavier as Captain von Trapp and Caroline Keiff as Elsa. It received enthusiastic reviews and became the highest-grossing production ever at the theatre. In 2014, the show was nominated for Best Musical Revival at the Laurence Olivier Awards and Wakefield was nominated for Best Actress in a Musical. A brief South Korean production played in 2014, as did a South African production at the Artscape in Cape Town and at the Teatro at Montecasino based on Lloyd Webber and Ian's London Palladium production. The same year, a Spanish-language translation opened at Teatro de la Universidad in San Juan, under the direction of Edgar García. It starred Lourdes Robles as Maria and Braulio Castillo as Captain Von Trapp, with Dagmar as Elsa. A production (in Thai: "มนต์รักเพลงสวรรค์") ran at Muangthai ratchadalai Theatre, Bangkok, Thailand, in April 2015 in the Thai language. The production replaced the song "An Ordinary Couple" with "Something Good". A North American tour, directed by Jack O'Brien and choreographed by Danny Mefford, began at the Ahmanson Theatre in Los Angeles in September 2015. The tour is scheduled to run until at least July 2017. Kerstin Anderson plays Maria, with Ben Davis as Capt. von Trapp, Kyla Carter as Gretl von Trapp and Ashley Brown as Mother Abbess. The production has received warm reviews. 
A UK tour produced by Bill Kenwright began in 2015 and toured into 2016. It was directed by Martin Connor and starred Lucy O'Byrne as Maria. A 2016 Australian tour of the Lloyd Webber production, directed by Sams, included stops in Sydney, Brisbane, Melbourne and Adelaide. The cast included Cameron Daddo as Captain Von Trapp, Marina Prior as Baroness Schraeder and Lorraine Bayly as Frau Schmidt. The choreographer was Arlene Phillips. On March 2, 1965, 20th Century Fox released a film adaptation of the musical starring Julie Andrews as Maria Rainer and Christopher Plummer as Captain Georg von Trapp. It was produced and directed by Robert Wise with the screenplay adaptation written by Ernest Lehman. Two songs were written by Rodgers specifically for the film, "I Have Confidence" and "Something Good". The film won five Oscars at the 38th Academy Awards, including Best Picture. A live televised production of the musical aired twice in December 2013 on NBC. It was directed by Beth McCarthy-Miller and Rob Ashford. Carrie Underwood starred as Maria Rainer, with Stephen Moyer as Captain von Trapp, Christian Borle as Max, Laura Benanti as Elsa, and Audra McDonald as the Mother Abbess. The production was released on DVD the same month. British network ITV presented a live version of its own on December 20, 2015. It starred Kara Tointon as Maria, Julian Ovenden as Captain von Trapp, Katherine Kelly as Baroness Schraeder and Alexander Armstrong as Max. Most reviews of the original Broadway production were favorable. Richard Watts, Jr. of the "New York Post" stated that the show had a "strangely gentle charm that is wonderfully endearing. "The Sound of Music" strives for nothing in the way of smash effects, substituting instead a kind of gracious and unpretentious simplicity." The "New York World-Telegram and Sun" pronounced "The Sound of Music" "the loveliest musical imaginable. It places Rodgers and Hammerstein back in top form as melodist and lyricist. 
The Lindsay-Crouse dialogue is vibrant and amusing in a plot that rises to genuine excitement." The "New York Journal American"'s review opined that "The Sound of Music" is "the most mature product of the team ... it seemed to me to be the full ripening of these two extraordinary talents". Brooks Atkinson of "The New York Times" gave a mixed assessment. He praised Mary Martin's performance, saying "she still has the same common touch ... same sharp features, goodwill, and glowing personality that makes music sound intimate and familiar" and stated that "the best of the "Sound of Music" is Rodgers and Hammerstein in good form". However, he said, the libretto "has the hackneyed look of the musical theatre replaced with "Oklahoma!" in 1943. It is disappointing to see the American musical stage succumbing to the clichés of operetta." Walter Kerr's review in the "New York Herald Tribune" was unfavorable: "Before "The Sound of Music" is halfway through its promising chores it becomes not only too sweet for words but almost too sweet for music", stating that the "evening suffer(s) from little children". Columbia Masterworks recorded the original Broadway cast album a week after the show's 1959 opening. The album was the label's first deluxe package in a gatefold jacket, priced $1 higher than previous cast albums. It was #1 on Billboard's best-selling albums chart for 16 weeks in 1960. It was released on CD by Sony in the Columbia Broadway Masterworks series. In 1959, singer Patti Page recorded the title song from the show for Mercury Records on the day that the musical opened on Broadway. Since it was recorded a week before the original Broadway cast album, Page was the first artist to record any song from the musical. She featured the song on her TV show, "The Patti Page Olds Show", helping to popularize the musical. The 1961 London production was recorded by EMI and released on the HMV label and later re-issued on CD in 1997, on the Broadway Angel label. 
The 1965 film soundtrack was released by RCA Victor and is one of the most successful soundtrack albums in history, having sold over 20 million copies worldwide. Recent CD editions incorporate musical material from the film that would not fit on the original LP. The label has also issued the soundtrack in German, Italian, Spanish and French editions. RCA Victor also released an album of the 1998 Broadway revival produced by Hallmark Entertainment and featuring the full revival cast, including Rebecca Luker, Michael Siberry, Jan Maxwell and Fred Applegate. The Telarc label made a studio cast recording of "The Sound of Music", with the Cincinnati Pops Orchestra conducted by Erich Kunzel (1987). The lead roles went to opera stars: Frederica von Stade as Maria, Håkan Hagegård as Captain von Trapp, and Eileen Farrell as the Mother Abbess. The recording "includes both the two new songs written for the film version and the three Broadway songs they replace, as well as a previously unrecorded verse of "An Ordinary Couple"". The 2006 London revival was recorded and has been released on the Decca Broadway label. There have been numerous studio cast albums and foreign cast albums issued, though many have only received regional distribution. According to the cast album database, there are 62 recordings of the score that have been issued over the years. The soundtrack from the 2013 NBC television production starring Carrie Underwood and Stephen Moyer was released on CD and digital download in December 2013 on the Sony Masterworks label. Also featured on the album are Audra McDonald, Laura Benanti and Christian Borle.
https://en.wikipedia.org/wiki?curid=30019
Trip hop Trip hop (sometimes used synonymously with "downtempo") is a musical genre that originated in the early 1990s in the United Kingdom, especially Bristol. It has been described as "a fusion of hip hop and electronica until neither genre is recognizable", and may incorporate a variety of styles, including funk, dub, soul, psychedelia, R&B, and house, as well as other forms of electronic music. Trip hop can be highly experimental. Deriving from later idioms of acid house, the term was first used by the British music media to describe the more experimental variant of breakbeat emerging from the Bristol Sound scene in the early 1990s, which contained influences of soul, funk, and jazz. It was pioneered by acts like Massive Attack, Tricky, and Portishead. Trip hop achieved commercial success in the 1990s, and has been described as "Europe's alternative choice in the second half of the '90s." Common musical aesthetics include a bass-heavy drumbeat, often built on slowed-down breakbeat samples similar to standard 1990s hip hop beats, giving the genre a more psychedelic and mainstream feel. Vocals in trip hop are oftentimes female and feature characteristics of various singing styles including R&B, jazz and rock. The female-dominant vocals of trip hop may be partially attributable to the influence of genres such as jazz and early R&B, in which female vocalists were more common. However, there are notable exceptions: Massive Attack and Groove Armada collaborated with male and female vocalists, Tricky often features vocally in his own productions along with Martina Topley-Bird, and Chris Corner provided vocals for later albums with Sneaker Pimps. Trip hop is also known for its melancholic sound. This may be partly due to the fact that several acts were inspired by post-punk bands; Tricky and Massive Attack both covered and sampled songs by Siouxsie and the Banshees and The Cure. 
Tricky opened his second album "Nearly God" with a version of "Tattoo", a proto-trip-hop song by Siouxsie and the Banshees initially recorded in 1983. Trip hop tracks often incorporate Rhodes pianos, saxophones, trumpets, and flutes, and may employ unconventional instruments such as the theremin and Mellotron. Trip hop differs from hip hop in theme and overall tone. Instead of gangsta rap with its hard-hitting lyrics, trip hop offers more aural atmospherics with instrumental hip hop, turntable scratching, and breakbeat rhythms. Regarded in some ways as a 1990s update of fusion, trip hop may be said to "transcend" the hardcore rap styles and lyrics with atmospheric overtones to create a more mellow tempo. The term "trip-hop" first appeared in print in June 1994. Andy Pemberton, a music journalist writing for "Mixmag", used it to describe "In/Flux", a single by American producer DJ Shadow and UK act RPM, with the latter signed to Mo' Wax Records. In Bristol, hip hop began to seep into the consciousness of a subculture already well-schooled in Jamaican forms of music. DJs, MCs, b-boys and graffiti artists grouped together into informal soundsystems. Like the pioneering Bronx crews of DJs Kool Herc, Afrika Bambaataa and Grandmaster Flash, the soundsystems provided party music for public spaces, often in the economically deprived council estates from which some of their members originated. Bristol's soundsystem DJs, drawing heavily on Jamaican dub music, typically used a laid-back, slow and heavy drum beat ("down tempo"). Bristol's Wild Bunch crew became one of the soundsystems to put a local spin on the international phenomenon, helping to birth Bristol's signature sound of trip hop, often termed "the Bristol Sound". 
The Wild Bunch and its associates included, at various times, the MC Adrian "Tricky Kid" Thaws, the graffiti artist and lyricist Robert "3D" Del Naja, producer Jonny Dollar and the DJs Nellee Hooper, Andrew "Mushroom" Vowles and Grant "Daddy G" Marshall. As the hip hop scene matured in Bristol and musical trends evolved further toward acid jazz and house in the late 1980s, the golden era of the soundsystem began to end. The Wild Bunch signed a record deal and evolved into Massive Attack, a core collective of 3D, Mushroom and Daddy G, with significant contributions from Tricky Kid (soon shortened to Tricky), Dollar, and Hooper on production duties, along with a rotating cast of other vocalists. Another influence came from Gary Clail's Tackhead soundsystem. Clail often worked with Mark Stewart, former singer of The Pop Group. Stewart experimented with his band Mark Stewart & the Maffia, which consisted of New York session musicians Skip McDonald, Doug Wimbish, and Keith LeBlanc, who had been part of the house band for the Sugarhill Records label. Produced by Adrian Sherwood, the music combined hip hop with experimental rock and dub and sounded like an early version of what later became trip hop. In 1993, Kirsty MacColl released "Angel", one of the first examples of the genre crossing over to pop, a hybrid that dominated the charts toward the end of the 1990s. Massive Attack's first album "Blue Lines" was released in 1991 to huge success in the United Kingdom. "Blue Lines" was seen widely as the first major manifestation of a uniquely British hip hop movement, but the album's hit single "Unfinished Sympathy" and several other tracks, while their rhythms were largely sample-based, were not seen as hip hop songs in any conventional sense. 
R&B singer Shara Nelson featured on the orchestral "Unfinished Sympathy", produced by Dollar, and Jamaican dancehall star Horace Andy provided vocals on several other tracks, as he would throughout Massive Attack's career. Massive Attack released their second album, "Protection", in 1994. Although Tricky stayed on in a lesser role, and Hooper again produced, the fertile dance music scene of the early 1990s had informed the record, and it was seen as an even more significant shift away from the Wild Bunch era. In the June 1994 issue of UK magazine "Mixmag", music journalist Andy Pemberton used the term "trip hop" to describe the hip hop instrumental "In/Flux", a 1993 single by San Francisco's DJ Shadow, and other similar tracks released on the Mo' Wax label and being played in London clubs at the time. "In/Flux", with its mixed-up bpms, spoken-word samples, strings, melodies, bizarre noises, prominent bass, and slow beats, gave the listener the impression they were on a musical trip, according to Pemberton. Soon, however, Massive Attack's dubby, jazzy, psychedelic, electronic textures, rooted in hip hop sampling technique but taking flight into many styles, were described by journalists as the template of the eponymous genre. In 1993, Icelandic musician Björk released "Debut", produced by Wild Bunch member Nellee Hooper. The album, although rooted in four-on-the-floor house music, contained elements of trip hop and is credited as one of the first albums to introduce electronic dance music into mainstream pop. She had been in contact with London's underground electronic music scene and was romantically involved with trip hop musician Tricky. Björk embraced trip hop even more with her 1995 album "Post" by collaborating with Tricky and Howie B. "Homogenic", her 1997 album, has been described as a pinnacle of trip hop music. 1994 and 1995 saw trip hop near the peak of its popularity, with artists such as Howie B and Earthling making significant contributions. 
Ninja Tune, the independent record label founded by the Coldcut duo, would significantly influence the trip-hop sound in London and beyond with breakthrough artists DJ Food, 9 Lazy 9, Up, Bustle & Out, Funki Porcini and The Herbaliser, among others. The period also marked the debut of two acts who, along with Massive Attack, would define the Bristol scene for years to come. In 1994 Portishead, a trio comprising singer Beth Gibbons, Geoff Barrow, and Adrian Utley, released their debut album "Dummy". Their background differed from Massive Attack in many ways: one of Portishead's primary influences was 1960s and 1970s film soundtrack LPs. Nevertheless, Portishead shared the scratchy, jazz-sample-based aesthetic of early Massive Attack (whom Barrow had briefly worked with during the recording of "Blue Lines"), and the sullen, fragile vocals of Gibbons also brought them wide acclaim. In 1995, "Dummy" was awarded the Mercury Music Prize as the best British album of the year, giving trip-hop as a genre its greatest exposure yet. Portishead's music, seen as cutting edge in its film-noir feel and stylish, yet emotional appropriations of past sounds, was also widely imitated, causing the band to recoil from the trip-hop label they had inadvertently helped popularize. Tricky also released his debut solo album "Maxinquaye" in 1995, to great critical acclaim. The album was produced largely in collaboration with Mark Saunders. Tricky employed whispered, often abstract stream-of-consciousness murmuring, remote from the gangsta-rap braggadocio of the mid-1990s US hip hop scene. Even more unusually, however, many of the solo songs on "Maxinquaye" featured little of Tricky's own voice: his then-lover, Martina Topley-Bird, sang them, including her reimagining of Public Enemy's militant 1988 rap "Black Steel in the Hour of Chaos", while other songs were male-female duets dealing with sex and love in oblique ways, over beds of sometimes dissonant samples. 
Within a year Tricky had released two more full-length albums which were considered even more challenging, without finding the same popularity as his Bristol contemporaries Massive Attack and Portishead. Through his brief collaborations with Björk, however, he also exerted influence closer to the pop and alternative rock mainstream, and he developed a large cult fan-base. Musician Poe released her 1995 debut "Hello", an album that featured trip-hop elements, to critical praise. Although not as popular in the United States, bands like Portishead and Sneaker Pimps saw moderate airplay on alternative-rock stations across the country. After the initial success of trip hop in the mid-1990s, "post-trip-hop" artists emerged, including Baby Fox, Bowery Electric, Esthero, Morcheeba, Sneaker Pimps, Anomie Belle, Alpha, Jaianto, Mudville, Cibo Matto and Lamb. These artists incorporated trip hop into other genres, including ambient, soul, IDM, industrial, dubstep, breakbeat, drum and bass, acid jazz, and new-age. The first printed use of the term "post-trip hop" was in an October 2002 article in "The Independent", and was used to describe the band Second Person. Trip hop has also influenced artists in other genres, including Gorillaz, Emancipator, Nine Inch Nails, Travis, Queens of the Stone Age, How to Destroy Angels, Beth Orton, The Flaming Lips, Beck, and Deftones. Several tracks on Australian pop singer Kylie Minogue's 1997 album "Impossible Princess" also displayed a trip hop influence. Various prominent artists and groups, such as Janet Jackson, Kylie Minogue, Madonna, Björk, and Radiohead, have also been influenced by the genre. Trip hop has spawned several subgenres, including illbient (dub-based trip hop which combines ambient and industrial hip hop). Trip hop continued to influence notable artists in the 2000s. Norwegian avant-garde band Ulver incorporated trip hop in their ambient/electronic/jazzy album "Perdition City". 
Atmospheric rock band Antimatter included some trip hop elements in their first two albums. Australian composer Rob Dougan offered a mix of trip hop beats, orchestral music and electronics. RJD2 began his career as a DJ, but in 2001, began releasing albums under El-P's Def Jux label. Zero 7's album "Simple Things", and, in particular, its lead single "Destiny", was regarded highly by underground listeners and achieved significant popularity. In 2006, Gotye released his second studio album, "Like Drawing Blood". The songs on the album featured down-tempo hip-hop beats and dub-style bass reminiscent of trip hop. Hip hop groups Zion I and the Dub Pistols also displayed heavy trip hop influence. Norwegian singer and songwriter Kate Havnevik is a classically trained musician, but also incorporates trip hop into her work. Many producers who were not explicitly trip-hop artists also displayed its influence during the early 2000s. Daniel Nakamura, aka Dan The Automator, released two albums that were heavily inspired by trip hop. His 2000 album "Deltron 3030" was a concept album about a rapper from the future, portrayed by Del Tha Funkee Homosapien. 2001 saw the release of "Music to Make Love to Your Old Lady By" by his side project Lovage, with special guests Mike Patton, Prince Paul, Maseo, Damon Albarn, and Afrika Bambaataa. British producer Fatboy Slim's breakthrough album, "Halfway Between the Gutter and the Stars", was his most commercially successful release. Notable 2010 releases include Massive Attack's "Heligoland", their first studio album in seven years, and Dutch's "A Bright Cold Day", which was met with positive reviews, including a 7/10 score from inyourspeakers.com. The latter group's members include Jedi Mind Tricks producer Stoupe the Enemy of Mankind. 
DJ Shadow's "The Less You Know, the Better" was released in 2011 after a highly publicised unveiling of songs, including appearances on Zane Lowe's BBC Radio 1 show and previews at a performance in Antwerp in August 2010. The album was met with "generally favorable reviews" on Metacritic, with some criticising Shadow's lack of originality. Sam Richards of "NME" felt that the album sounded "like the work of a man struggling to recall his motivations for making music in the first place." Geoff Barrow's album titled "»" was released in 2012 and received high scores from journalists, including an 8/10 from NME and Spin magazine. Lana Del Rey released her second album, "Born to Die", in 2012, which contained a string of trip hop ballads. The album topped the charts in eleven countries, including Australia, France, Germany, and the United Kingdom; it has sold 3.4 million copies worldwide as of 2013 according to the International Federation of the Phonographic Industry.
https://en.wikipedia.org/wiki?curid=30021
Tycho Brahe Tycho Brahe (born Tyge Ottesen Brahe; 14 December 1546 – 24 October 1601) was a Danish nobleman, astronomer, and writer known for his accurate and comprehensive astronomical observations. He was born in the then Danish peninsula of Scania. Tycho was well known in his lifetime as an astronomer, astrologer, and alchemist. He has been described as "the first competent mind in modern astronomy to feel ardently the passion for exact empirical facts". Most of his observations were more accurate than the best available observations at the time. An heir to several of Denmark's principal noble families, Tycho received a comprehensive education. He took an interest in astronomy and in the creation of more accurate instruments of measurement. As an astronomer, Tycho worked to combine what he saw as the geometrical benefits of the Copernican system with the philosophical benefits of the Ptolemaic system into his own model of the universe, the Tychonic system. His system correctly saw the Moon as orbiting Earth, and the planets as orbiting the Sun, but erroneously considered the Sun to be orbiting the Earth. Furthermore, he was the last of the major naked-eye astronomers, working without telescopes for his observations. In his "De nova stella" ("On the New Star") of 1573, he refuted the Aristotelian belief in an unchanging celestial realm. His precise measurements indicated that "new stars" (stellae novae, now known as supernovae), in particular that of 1572, lacked the parallax expected in sublunar phenomena and were therefore not tailless comets in the atmosphere as previously believed but were above the atmosphere and beyond the Moon. Using similar measurements, he showed that comets were also not atmospheric phenomena, as previously thought, and must pass through the supposedly immutable celestial spheres. 
King Frederick II granted Tycho an estate on the island of Hven and the funding to build Uraniborg, an early research institute, where he built large astronomical instruments and took many careful measurements, and later Stjerneborg, underground, when he discovered that his instruments in Uraniborg were not sufficiently steady. On the island (where he behaved autocratically toward the residents) he founded manufactories, such as a paper mill, to provide material for printing his results. After disagreements with the new Danish king, Christian IV, in 1597, Tycho went into exile. He was invited by the Bohemian king and Holy Roman Emperor Rudolph II to Prague, where he became the official imperial astronomer. He built an observatory at Benátky nad Jizerou. There, from 1600 until his death in 1601, he was assisted by Johannes Kepler, who later used Tycho's astronomical data to develop his three laws of planetary motion. Tycho's body has been exhumed twice, in 1901 and 2010, to examine the circumstances of his death and to identify the material from which his artificial nose was made. The conclusion was that his death was likely caused by a burst bladder, and not by poisoning, as had been suggested, and that the artificial nose was more likely made of brass than silver or gold, as some had believed in his time. Tycho Brahe was born as heir to several of Denmark's most influential noble families and in addition to his immediate ancestry with the Brahe and the Bille families, he also counted the Rud, Trolle, Ulfstand, and Rosenkrantz families among his ancestors. Both of his grandfathers and all of his great grandfathers had served as members of the Danish king's Privy Council. His paternal grandfather and namesake Thyge Brahe was the lord of Tosterup Castle in Scania and died in battle during the 1523 Siege of Malmö during the Lutheran Reformation Wars. 
His maternal grandfather Claus Bille, lord to Bohus Castle and a second cousin of Swedish king Gustav Vasa, participated in the Stockholm Bloodbath on the side of the Danish king against the Swedish nobles. Tycho's father Otte Brahe, a royal Privy Councilor (like his own father), married Beate Bille, who was herself a powerful figure at the Danish court, holding several royal land titles. Both parents are buried under the floor of Kågeröd Church, four kilometres east of Knutstorp. Tycho was born at his family's ancestral seat of Knutstorp Castle (Danish: "Knudstrup borg"; Swedish: "Knutstorps borg"), about eight kilometres north of Svalöv in then Danish Scania. He was the oldest of 12 siblings, 8 of whom lived to adulthood, including Steen Brahe. His twin brother died before being baptized. Tycho later wrote an ode in Latin to his dead twin, which was printed in 1572 as his first published work. An epitaph, originally from Knutstorp but now on a plaque near the church door, shows the whole family, including Tycho as a boy. When he was only two years old, Tycho was taken away to be raised by his uncle Jørgen Thygesen Brahe and his wife Inger Oxe (sister to Peder Oxe, Steward of the Realm), who were childless. It is unclear why Otte Brahe reached this arrangement with his brother, but Tycho was the only one of his siblings not to be raised by his mother at Knutstorp. Instead, Tycho was raised at Jørgen Brahe's estate at Tosterup and at Tranekær on the island of Langeland, and later at Næsbyhoved Castle near Odense, and later again at the Castle of Nykøbing on the island of Falster. Tycho later wrote that Jørgen Brahe "raised me and generously provided for me during his life until my eighteenth year; he always treated me as his own son and made me his heir". From ages 6 to 12, Tycho attended Latin school, probably in Nykøbing. At age 12, on 19 April 1559, Tycho began studies at the University of Copenhagen. 
There, following his uncle's wishes, he studied law, but also studied a variety of other subjects and became interested in astronomy. At the University, Aristotle was a staple of scientific theory, and Tycho likely received a thorough training in Aristotelian physics and cosmology. He experienced the solar eclipse of 21 August 1560, and was greatly impressed by the fact that it had been predicted, although the prediction based on current observational data was a day off. He realized that more accurate observations would be the key to making more exact predictions. He purchased an ephemeris and books on astronomy, including Johannes de Sacrobosco's "De sphaera mundi", Petrus Apianus's "Cosmographia seu descriptio totius orbis" and Regiomontanus's "De triangulis omnimodis". Jørgen Thygesen Brahe, however, wanted Tycho to educate himself in order to become a civil servant, and sent him on a study tour of Europe in early 1562. The 15-year-old Tycho was given the 19-year-old Anders Sørensen Vedel as a mentor, whom he eventually talked into allowing him to pursue astronomy during the tour. Vedel and his pupil left Copenhagen in February 1562. On 24 March, they arrived in Leipzig, where they matriculated at the Lutheran Leipzig University. In 1563, he observed a conjunction of Jupiter and Saturn, and noticed that the Copernican and Ptolemaic tables used to predict the conjunction were inaccurate. This led him to realize that progress in astronomy required systematic, rigorous observation, night after night, using the most accurate instruments obtainable. He began maintaining detailed journals of all his astronomical observations. In this period, he combined the study of astronomy with astrology, casting horoscopes for famous personalities. 
When Tycho and Vedel returned from Leipzig in 1565, Denmark was at war with Sweden, and as vice-admiral of the Danish fleet, Jørgen Brahe had become a national hero for having participated in the sinking of the Swedish warship "Mars" during the First battle of Öland (1564). Shortly after Tycho's arrival in Denmark, Jørgen Brahe was defeated in the Action of 4 June 1565, and shortly afterwards died of a fever. Stories have it that he contracted pneumonia after a night of drinking with the Danish King Frederick II, when the king fell into the water in a Copenhagen canal and Brahe jumped in after him. Brahe's possessions passed on to his wife Inger Oxe, who regarded Tycho with special fondness. In 1566, Tycho left to study at the University of Rostock. Here, he studied with professors of medicine at the university's famous medical school and became interested in medical alchemy and botanical medicine. On 29 December 1566, at the age of 20, Tycho lost part of his nose in a sword duel with a fellow Danish nobleman, his third cousin Manderup Parsberg. The two had drunkenly quarreled over who was the superior mathematician at an engagement party at the home of Professor Lucas Bachmeister on 10 December. After nearly quarreling again on 29 December, they resolved their feud with a duel in the dark. Though the two were later reconciled, the duel left Tycho without the bridge of his nose and with a broad scar across his forehead. He received the best possible care at the university and wore a prosthetic nose for the rest of his life. It was kept in place with paste or glue and was said to be made of silver and gold. In November 2012, Danish and Czech researchers reported that the prosthetic was actually made of brass, after chemically analyzing a small bone sample from the nose of the body exhumed in 2010. Prosthetics made of gold and silver were mostly worn for special occasions rather than for everyday wear. 
In April 1567, Tycho returned home from his travels, with a firm intention of becoming an astrologer. Although he had been expected to go into politics and the law, like most of his kinsmen, and although Denmark was still at war with Sweden, his family supported his decision to dedicate himself to the sciences. His father wanted him to take up law, but Tycho was allowed to travel to Rostock and then to Augsburg (where he built a great quadrant), Basel, and Freiburg. In 1568, he was appointed a canon at the Cathedral of Roskilde, a largely honorary position that would allow him to focus on his studies. At the end of 1570, he was informed of his father's ill health, so he returned to Knutstorp Castle, where his father died on 9 May 1571. The war was over, and the Danish lords soon returned to prosperity. Soon, another uncle, Steen Bille, helped him build an observatory and alchemical laboratory at Herrevad Abbey. Tycho was acknowledged by King Frederick II, who proposed to him that an observatory be built to better study the night sky. After Tycho accepted this proposal, Uraniborg was built on Hven, a remote island in the Sound (Øresund) near Copenhagen, and it made a name for itself as the most promising observatory in Europe at the time. Towards the end of 1571, Tycho fell in love with Kirsten, daughter of Jørgen Hansen, the Lutheran minister in Knudstrup. As she was a commoner, Tycho never formally married her, since if he did he would lose his noble privileges. However, Danish law permitted morganatic marriage, which meant that a nobleman and a common woman could live together openly as husband and wife for three years, after which their alliance became a legally binding marriage. However, each would maintain their social status, and any children they had together would be considered commoners, with no rights to titles, landholdings, coat of arms, or even their father's noble name. 
While King Frederick respected Tycho's choice of wife, himself having been unable to marry the woman he loved, many of Tycho's family members disagreed, and many churchmen would continue to hold the lack of a divinely sanctioned marriage against him. Kirsten Jørgensdatter gave birth to their first daughter, Kirstine (named after Tycho's late sister), on 12 October 1573. Kirstine died from the plague in 1576, and Tycho wrote a heartfelt elegy for her tombstone. In 1574, they moved to Copenhagen, where their daughter Magdalene was born, and later the family followed him into exile. Kirsten and Tycho lived together for almost thirty years, until Tycho's death. Together, they had eight children, six of whom lived to adulthood. On 11 November 1572, Tycho observed (from Herrevad Abbey) a very bright star, now numbered SN 1572, which had unexpectedly appeared in the constellation Cassiopeia. Because it had been maintained since antiquity that the world beyond the Moon's orbit was eternally unchangeable (celestial immutability was a fundamental axiom of the Aristotelian world-view), other observers held that the phenomenon was something in the terrestrial sphere below the Moon. However, Tycho observed that the object showed no daily parallax against the background of the fixed stars. This implied that it was at least farther away than the Moon and those planets that do show such parallax. He also found that the object did not change its position relative to the fixed stars over several months, as all planets did in their periodic orbital motions, even the outer planets, for which no daily parallax was detectable. This suggested that it was not even a planet, but a fixed star in the stellar sphere beyond all the planets. In 1573, he published a small book, "De nova stella", thereby coining the term nova for a "new" star (we now classify this star as a supernova and know that it is 7500 light-years from Earth). 
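The force of Tycho's parallax argument can be illustrated with a rough modern calculation (a hypothetical sketch using present-day values, not Tycho's own figures): an object's maximum diurnal parallax shrinks with distance, so anything at the Moon's distance should shift by about a degree against the fixed stars as Earth's rotation carries the observer across roughly one Earth radius, while a far more distant object shows no shift detectable by naked-eye instruments.

```python
import math

EARTH_RADIUS_KM = 6371.0  # modern mean radius, used here only for illustration


def diurnal_parallax_arcmin(distance_km: float) -> float:
    """Maximum diurnal parallax, in arcminutes, of an object at the given
    distance: the apparent shift as Earth's rotation displaces the
    observer by about one Earth radius."""
    return math.degrees(math.asin(EARTH_RADIUS_KM / distance_km)) * 60.0


# The Moon (~384,400 km away) shifts by roughly 57 arcminutes, easily
# detectable at Tycho's roughly arcminute-level accuracy.
moon_shift = diurnal_parallax_arcmin(384_400)

# The new star of 1572 showed no measurable shift at all, so it had to
# lie far beyond the Moon, against Aristotelian celestial immutability.
```

The same logic, applied to the comet of 1577, placed comets beyond the Moon as well.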
This discovery was decisive for his choice of astronomy as a profession. Tycho was strongly critical of those who dismissed the implications of the astronomical appearance, writing in the preface to "De nova stella": "O crassa ingenia. O caecos coeli spectatores" ("Oh thick wits. Oh blind watchers of the sky"). The publication of his discovery made him a well-known name among scientists across Europe. Tycho continued with his detailed observations, often assisted by his first assistant and student, his younger sister Sophie Brahe. In 1574, Tycho published the observations made in 1572 from his first observatory at Herrevad Abbey. He then started lecturing on astronomy, but gave it up and left Denmark in spring 1575 to tour abroad. He first visited William IV, Landgrave of Hesse-Kassel's observatory at Kassel, then went on to Frankfurt, Basel and Venice, where he acted as an agent for the Danish king, contacting artisans and craftsmen whom the king wanted to work on his new palace at Elsinore. Upon his return, the King wished to repay Tycho's service by offering him a position worthy of his family; he offered him a choice of lordships of militarily and economically important estates, such as the castles of Hammershus or Helsingborg. But Tycho was reluctant to take up a position as a lord of the realm, preferring to focus on his science. He wrote to his friend Johannes Pratensis: "I did not want to take possession of any of the castles our benevolent king so graciously offered me. I am displeased with society here, customary forms and the whole rubbish". Tycho secretly began to plan a move to Basel, wishing to participate in the burgeoning academic and scientific life there. But the King heard of Tycho's plans, and, desiring to keep the distinguished scientist, offered Tycho the island of Hven in Øresund and funding to set up an observatory. 
Until then, Hven had been property directly under the Crown, and the 50 families on the island considered themselves to be freeholding farmers, but with Tycho's appointment as Feudal Lord of Hven, this changed. Tycho took control of agricultural planning, requiring the peasants to cultivate twice as much as they had done before, and he also exacted corvée labor from the peasants for the construction of his new castle. The peasants complained about Tycho's excessive taxation and took him to court. The court established Tycho's right to levy taxes and labor, and the result was a contract detailing the mutual obligations of lord and peasants on the island. Tycho envisioned his castle Uraniborg as a temple dedicated to the muses of arts and sciences, rather than as a military fortress; indeed, it was named after Urania, the muse of astronomy. Construction began in 1576 (with a laboratory for his alchemical experiments in the cellar). Uraniborg was inspired by the Venetian architect Andrea Palladio, and was one of the first buildings in northern Europe to show influence from Italian renaissance architecture. When he realized that the towers of Uraniborg were not adequate as observatories because of the instruments' exposure to the elements and the movement of the building, he then constructed a second underground observatory at nearby Stjerneborg in 1581. The basement included an alchemical laboratory with 16 furnaces for conducting distillations and other chemical experiments. Unusually for the time, Tycho established Uraniborg as a research centre, where almost 100 students and artisans worked from 1576 to 1597. Uraniborg also contained a printing press and a paper mill, both among the first in Scandinavia, enabling Tycho to publish his own manuscripts, on locally made paper with his own watermark. He created a system of ponds and canals to run the wheels of the paper mill. 
Over the years he worked on Uraniborg, Tycho was assisted by a number of students and protégés, many of whom went on to their own careers in astronomy: among them were Christian Sørensen Longomontanus, later one of the main proponents of the Tychonic model and Tycho's replacement as royal Danish astronomer; Peder Flemløse; Elias Olsen Morsing; and Cort Aslakssøn. Tycho's instrument-maker Hans Crol also formed part of the scientific community on the island. He observed the great comet that was visible in the Northern sky from November 1577 to January 1578. Within Lutheranism, it was commonly believed that celestial objects like comets were powerful portents, announcing the coming apocalypse, and in addition to Tycho's observations, several Danish amateur astronomers observed the object and published prophecies of impending doom. He was able to determine that the comet's distance from Earth was much greater than the distance of the Moon, so that the comet could not have originated in the "earthly sphere", confirming his prior anti-Aristotelian conclusions about the fixed nature of the sky beyond the Moon. He also realized that the comet's tail was always pointing away from the Sun. He calculated its diameter, mass, and the length of its tail, and speculated about the material it was made of. At this point, he had not yet broken with Copernican theory, and observing the comet inspired him to try to develop an alternative to the Copernican model in which the Earth was immobile. The second half of his manuscript about the comet dealt with the astrological and apocalyptic aspects of the comet; he rejected the prophecies of his competitors, instead making his own predictions of dire political events in the near future. Among his predictions was bloodshed in Moscow and the imminent fall of Ivan the Terrible by 1583. The support that Tycho received from the Crown was substantial, amounting to 1% of the annual total revenue at one point in the 1580s. 
Tycho often held large social gatherings in his castle. Pierre Gassendi wrote that Tycho also had a tame elk (moose) and that his mentor the Landgrave Wilhelm of Hesse-Kassel (Hesse-Cassel) asked whether there was an animal faster than a deer. Tycho replied that there was none, but he could send his tame elk. When Wilhelm replied he would accept one in exchange for a horse, Tycho replied with the sad news that the elk had just died on a visit to entertain a nobleman at Landskrona. Apparently, during dinner, the elk had drunk a lot of beer, fallen down the stairs, and died. Among the many noble visitors to Hven was James VI of Scotland, who married the Danish princess Anne. After his visit to Hven in 1590, he wrote a poem comparing Tycho with Apollon and Phaethon. As part of Tycho's duties to the Crown in exchange for his estate, he fulfilled the functions of a royal astrologer. At the beginning of each year, he had to present an Almanac to the court, predicting the influence of the stars on the political and economic prospects of the year. And at the birth of each prince, he prepared their horoscopes, predicting their fates. He also worked as a cartographer with his former tutor Anders Sørensen Vedel on mapping out all of the Danish realm. An ally of the king and friendly with Queen Sophie (both his mother Beate Bille and adoptive mother Inger Oxe had been her court maids), he secured a promise from the King that ownership of Hven and Uraniborg would pass to his heirs. In 1588, Tycho's royal benefactor died, and a volume of Tycho's great two-volume work "Astronomiae Instauratae Progymnasmata" ("Introduction to the New Astronomy") was published. 
The first volume, devoted to the new star of 1572, was not ready, because the reduction of the observations of 1572–3 involved much research to correct the stars' positions for refraction, precession, the motion of the Sun etc., and was not completed in Tycho's lifetime (it was published in Prague in 1602/03), but the second volume, titled "De Mundi Aetherei Recentioribus Phaenomenis Liber Secundus" ("Second Book About Recent Phenomena in the Celestial World") and devoted to the comet of 1577, was printed at Uraniborg and some copies were issued in 1588. Besides the comet observations, it included an account of Tycho's system of the world. The third volume was intended to treat the comets of 1580 and following years in a similar manner, but it was never published, nor even written, though a great deal of material about the comet of 1585 was put together and first published in 1845 with the observations of this comet. While at Uraniborg, Tycho maintained correspondence with scientists and astronomers across Europe. He inquired about other astronomers' observations and shared his own technological advances to help them achieve more accurate observations. Thus, his correspondence was crucial to his research. Often, correspondence was not just private communication between scholars, but also a way to disseminate results and arguments and to build progress and scientific consensus. Through correspondence, Tycho was involved in several personal disputes with critics of his theories. Prominent among them were John Craig, a Scottish physician who was a strong believer in the authority of the Aristotelian worldview, and Nicolaus Reimers Baer, known as Ursus, an astronomer at the Imperial court in Prague, whom Tycho accused of having plagiarized his cosmological model. Craig refused to accept Tycho's conclusion that the comet of 1577 had to be located within the aetherial sphere rather than within the atmosphere of Earth. 
Craig tried to contradict Tycho by using his own observations of the comet and by questioning his methodology. Tycho published an "apologia" (a defense) of his conclusions, in which he provided additional arguments and condemned Craig's ideas in strong language as incompetent. Another dispute concerned the mathematician Paul Wittich, who, after staying on Hven in 1580, taught Count Wilhelm of Kassel and his astronomer Christoph Rothmann to build copies of Tycho's instruments without permission from Tycho. In turn, Craig, who had studied with Wittich, accused Tycho of minimizing Wittich's role in developing some of the trigonometric methods used by Tycho. In his dealings with these disputes, Tycho made sure to leverage his support in the scientific community, by publishing and disseminating his own answers and arguments. When Frederick died in 1588, his son and heir Christian IV was only 11 years old. A regency council was appointed to rule for the young prince-elect until his coronation in 1596. The head of the council (Steward of the Realm) was Christoffer Valkendorff, who disliked Tycho after a conflict between them, and hence Tycho's influence at the Danish court steadily declined. Feeling that his legacy on Hven was in peril, he approached the Dowager Queen Sophie and asked her to affirm in writing her late husband's promise to endow Hven to Tycho's heirs. Nonetheless, he realized that the young king was more interested in war than in science, and was of no mind to keep his father's promise. King Christian IV followed a policy of curbing the power of the nobility by confiscating their estates to minimize their income bases, and by accusing nobles of misusing their offices and of heresies against the Lutheran church. Tycho, who was known to sympathize with the Philippists (followers of Philip Melanchthon), was among the nobles who fell out of favor with the new king. 
The king's unfavorable disposition towards Tycho was likely also a result of efforts by several of his enemies at court to turn the king against him. Tycho's enemies included, in addition to Valkendorff, the king's doctor Peter Severinus, who also had personal gripes with Tycho, and several gnesio-Lutheran bishops who suspected Tycho of heresy, a suspicion motivated by his known Philippist sympathies, his pursuits in medicine and alchemy (both of which he practiced without the church's approval), and his prohibiting the local priest on Hven from including the exorcism in the baptismal ritual. Among the accusations raised against Tycho were his failure to adequately maintain the royal chapel at Roskilde, and his harshness and exploitation of the Hven peasantry. Tycho became even more inclined to leave when a mob of commoners, possibly incited by his enemies at court, rioted in front of his house in Copenhagen. Tycho left Hven in 1597, bringing some of his instruments with him to Copenhagen and entrusting others to a caretaker on the island. Shortly before leaving, he completed his star catalogue, giving the positions of 1,000 stars. After some unsuccessful attempts to influence the king to let him return, including displaying his instruments on the city wall, he finally acquiesced to exile, and he wrote his most famous poem, "Elegy to Dania", in which he chided Denmark for not appreciating his genius. The instruments he had used in Uraniborg and Stjerneborg were depicted and described in detail in his book "Astronomiae instauratae mechanica" or "Instruments for the restoration of astronomy", first published in 1598. The King sent two envoys to Hven to describe the instruments left behind by Tycho. Unversed in astronomy, the envoys reported to the king that the large mechanical contraptions, such as his large quadrant and sextant, were "useless and even harmful". 
From 1597 to 1598, he spent a year at the castle of his friend Heinrich Rantzau in Wandesburg outside Hamburg, and then the family moved for a while to Wittenberg, where they stayed in the former home of Philip Melanchthon. In 1599, he obtained the sponsorship of Rudolf II, Holy Roman Emperor, and moved to Prague as Imperial Court Astronomer. Tycho built a new observatory in a castle in Benátky nad Jizerou, 50 km from Prague, and worked there for one year. The emperor then brought him back to Prague, where he stayed until his death. At the imperial court, even Tycho's wife and children were treated like nobility, which they had never been at the Danish court. Tycho received financial support from several nobles in addition to the emperor, including Oldrich Desiderius Pruskowsky von Pruskow, to whom he dedicated his famous "Mechanica". In return for their support, Tycho's duties included preparing astrological charts and predictions for his patrons at events such as births, weather forecasting, and astrological interpretations of significant astronomical events, such as the supernova of 1572 (sometimes called Tycho's supernova) and the Great Comet of 1577. In Prague, Tycho worked closely with Johannes Kepler, his assistant. Kepler was a convinced Copernican, and considered Tycho's model to be mistaken, derived as it was from a simple "inversion" of the Sun's and Earth's positions in the Copernican model. Together, the two worked on a new star catalogue based on Tycho's accurate positions; this catalogue became the "Rudolphine Tables". Also at the court in Prague was the mathematician Nicolaus Reimers (Ursus), with whom Tycho had previously corresponded, and who, like Tycho, had developed a geo-heliocentric planetary model, which Tycho considered to have been plagiarized from his own. 
Kepler had previously spoken highly of Ursus, but now found himself in the problematic position of being employed by Tycho and having to defend his employer against Ursus' accusations, even though he disagreed with both of their planetary models. In 1600, he finished the tract "Apologia pro Tychone contra Ursum" (defense of Tycho against Ursus). Kepler had great respect for Tycho's methods and the accuracy of his observations, and considered him to be the new Hipparchus, who would provide the foundation for a restoration of the science of astronomy. Tycho suddenly contracted a bladder or kidney ailment after attending a banquet in Prague, and died eleven days later, on 24 October 1601, at the age of 54. It is also said that Tycho had been suffering from an illness that he attempted to treat himself, using his skills in alchemy, and that the failed treatment may instead have contributed to his death. According to Kepler's first-hand account, Tycho had refused to leave the banquet to relieve himself because it would have been a breach of etiquette. After he returned home, he was no longer able to urinate, except eventually in very small quantities and with excruciating pain. The night before he died, he suffered from a delirium during which he was frequently heard to exclaim that he hoped he would not seem to have lived in vain. Before dying, he urged Kepler to finish the "Rudolphine Tables" and expressed the hope that he would do so by adopting Tycho's own planetary system, rather than that of Copernicus. It was reported that Tycho had written his own epitaph, "He lived like a sage and died like a fool." A contemporary physician attributed his death to a kidney stone, but no kidney stones were found during an autopsy performed after his body was exhumed in 1901, and the 20th-century medical assessment is that his death more likely resulted from uremia. 
Investigations in the 1990s suggested that Tycho may not have died from urinary problems, but instead from mercury poisoning. It was speculated that he had been intentionally poisoned. The two main suspects were his assistant, Johannes Kepler, whose motive would have been to gain access to Tycho's laboratory and chemicals, and his cousin, Erik Brahe, acting on the order of friend-turned-enemy Christian IV, because of rumors that Tycho had had an affair with Christian's mother. In February 2010, the Prague city authorities approved a request by Danish scientists to exhume the remains, and in November 2010 a group of Czech and Danish scientists from Aarhus University collected bone, hair and clothing samples for analysis. The scientists, led by Dr Jens Vellev, analyzed Tycho's beard hair once again. The team reported in November 2012 that not only was there not enough mercury present to substantiate murder, but that there were no lethal levels of any poisons present. The team's conclusion was that "it is impossible that Tycho Brahe could have been murdered". The findings were confirmed by scientists from the University of Rostock, who examined a sample of Tycho's beard hairs that had been taken in 1901. Although traces of mercury were found, these were present only in the outer scales. Therefore, mercury poisoning as the cause of death was ruled out, while the study suggests that the accumulation of mercury may have come from the "precipitation of mercury dust from the air during [Tycho's] long-term alchemistic activities". The hair samples contained gold at 20–100 times the natural concentration until two months before his death. Tycho is buried in the Church of Our Lady before Týn, in Old Town Square near the Prague Astronomical Clock. Tycho's view of science was driven by his passion for accurate observations, and the quest for improved instruments of measurement drove his life's work. 
Tycho was the last major astronomer to work without the aid of a telescope, soon to be turned skyward by Galileo Galilei and others. Given the limitations of the naked eye for making accurate observations, he devoted many of his efforts to improving the accuracy of the existing types of instrument, the sextant and the quadrant. He designed larger versions of these instruments, which allowed him to achieve much higher accuracy. Because of the accuracy of his instruments, he quickly realized the influence of wind and the movement of buildings, and instead opted to mount his instruments underground, directly on the bedrock. Tycho's observations of stellar and planetary positions were noteworthy both for their accuracy and their quantity. With an accuracy approaching one arcminute, his celestial positions were much more accurate than those of any predecessor or contemporary — about five times as accurate as the observations of the contemporary astronomer Wilhelm of Hesse. One modern analysis of Tycho's Star Catalog D asserts: "In it, Tycho achieved, on a mass scale, a precision far beyond that of earlier catalogers. Cat D represents an unprecedented confluence of skills: instrumental, observational, & computational—all of which combined to enable Tycho to place most of his hundreds of recorded stars to an accuracy of ordermag 1'!" He aspired to a level of accuracy in his estimated positions of celestial bodies of being consistently within an arcminute of their real celestial locations, and claimed to have achieved this level. But in fact many of the stellar positions in his star catalogues were less accurate than that. The median error for the stellar positions in his final published catalog was about 1.5', indicating that only half of the entries were more accurate than that, with an overall mean error in each coordinate of around 2'. 
Although the stellar observations as recorded in his observational logs were more accurate, varying from 32.3" to 48.8" for different instruments, systematic errors of as much as 3' were introduced into some of the stellar positions Tycho published in his star catalog, due, for instance, to his application of an erroneous ancient value of parallax and his neglect of polestar refraction. Incorrect transcription in the final published star catalogue, by scribes in Tycho's employ, was the source of even larger errors, sometimes by many degrees. Due to atmospheric refraction, celestial objects appear at a greater altitude than their true one, an effect greatest near the horizon, and one of Tycho's most important innovations was that he worked out and published the very first tables for the systematic correction of this source of error. But, as advanced as they were, his tables allowed for no solar refraction above 45° of altitude, and no stellar refraction above 20°. To perform the huge number of multiplications needed to produce much of his astronomical data, Tycho relied heavily on the then-new technique of "prosthaphaeresis", an algorithm for computing products via trigonometric identities that predated logarithms. Although Tycho admired Copernicus and was the first to teach his theory in Denmark, he was unable to reconcile Copernican theory with the basic laws of Aristotelian physics, which he considered foundational. He was also critical of the observational data that Copernicus had built his theory on, which he correctly considered to have a high margin of error. Instead, Tycho proposed a "geo-heliocentric" system in which the Sun and Moon orbited the Earth, while the other planets orbited the Sun. Tycho's system had many of the same observational and computational advantages as Copernicus' system, and both systems could also accommodate the phases of Venus, although Galileo had yet to observe them. 
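The prosthaphaeresis technique mentioned above can be sketched in modern terms. This is an illustrative reconstruction, not Tycho's own procedure: it uses the identity cos(x)·cos(y) = [cos(x−y) + cos(x+y)]/2, which historically let a computer replace one multiplication with table lookups and an addition.

```python
import math

def prosthaphaeresis_product(a, b):
    """Compute a*b via the identity cos(x)cos(y) = (cos(x-y) + cos(x+y)) / 2.

    Historically, one looked up x = arccos(a) and y = arccos(b) in a trig
    table, then averaged two further table lookups, trading a multiplication
    for additions and lookups (a precursor to logarithms).
    """
    x = math.acos(a)  # requires |a| <= 1; larger factors were rescaled first
    y = math.acos(b)
    return (math.cos(x - y) + math.cos(x + y)) / 2

print(prosthaphaeresis_product(0.25, 0.5))  # 0.125, exact up to floating point
```

Because the identity is exact, the only error in practice came from the precision of the trigonometric tables consulted.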
Tycho's system provided a safe position for astronomers who were dissatisfied with older models but were reluctant to accept heliocentrism and the Earth's motion. It gained a considerable following after 1616 when Rome declared that the heliocentric model was contrary to both philosophy and Scripture, and could be discussed only as a computational convenience that had no connection to fact. Tycho's system also offered a major innovation: while both the purely geocentric model and the heliocentric model as set forth by Copernicus relied on the idea of transparent rotating crystalline spheres to carry the planets in their orbits, Tycho eliminated the spheres entirely. Kepler, as well as other Copernican astronomers, tried to persuade Tycho to adopt the heliocentric model of the Solar System, but he remained unconvinced. According to Tycho, the idea of a rotating and revolving Earth would be "in violation not only of all physical truth but also of the authority of Holy Scripture, which ought to be paramount." With respect to physics, Tycho held that the Earth was just too sluggish and heavy to be continuously in motion. According to the accepted Aristotelian physics of the time, the heavens (whose motions and cycles were continuous and unending) were made of "Aether" or "Quintessence"; this substance, not found on Earth, was light, strong, unchanging, and its natural state was circular motion. By contrast, the Earth (where objects seem to have motion only when moved) and things on it were composed of substances that were heavy and whose natural state was rest. Accordingly, Tycho said the Earth was a "lazy" body that was not readily moved. 
Thus, while Tycho acknowledged that the daily rising and setting of the Sun and stars could be explained by the Earth's rotation, as Copernicus had said, he maintained that such a fast motion could not belong to the Earth, a body very heavy and dense and opaque, but rather must belong to the sky itself, whose form and subtle and constant matter are better suited to perpetual motion, however fast. With respect to the stars, Tycho also believed that, if the Earth orbited the Sun annually, there should be an observable stellar parallax over any period of six months, during which the angular orientation of a given star would change owing to Earth's changing position. (This parallax does exist, but is so small it was not detected until 1838, when Friedrich Bessel discovered a parallax of 0.314 arcseconds for the star 61 Cygni.) The Copernican explanation for this lack of parallax was that the stars were at such a great distance from Earth that Earth's orbit was almost insignificant by comparison. However, Tycho noted that this explanation introduced another problem: stars as seen by the naked eye appear small, but of some size, with more prominent stars such as Vega appearing larger than lesser stars such as Polaris, which in turn appear larger than many others. Tycho had determined that a typical star measured approximately a minute of arc in size, with more prominent ones being two or three times as large. In writing to Christoph Rothmann, a Copernican astronomer, Tycho used basic geometry to show that, assuming a small parallax that just escaped detection, the distance to the stars in the Copernican system would have to be 700 times greater than the distance from the Sun to Saturn. Moreover, the only way the stars could be so distant and still appear the sizes they do in the sky would be if even average stars were gigantic, at least as big as the orbit of the Earth, and of course vastly larger than the Sun. And, Tycho said, the more prominent stars would have to be even larger still. 
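Tycho's size argument can be checked with a small-angle calculation. This sketch substitutes the modern Sun–Saturn distance of about 9.5 AU (an assumption; Tycho's own planetary distances differed): a star at 700 times that distance, subtending his typical apparent stellar diameter of one arcminute, would indeed have to be about as large as the Earth's orbit.

```python
import math

SUN_SATURN_AU = 9.5                  # modern Sun-Saturn distance in AU (assumed stand-in)
distance_au = 700 * SUN_SATURN_AU    # Tycho's minimum stellar distance in the Copernican system

theta_rad = math.radians(1 / 60)     # 1 arcminute: Tycho's typical apparent stellar diameter

# Small-angle approximation: physical diameter = distance * angular diameter
diameter_au = distance_au * theta_rad
print(round(diameter_au, 2))  # ~1.93 AU, comparable to the 2 AU diameter of Earth's orbit
```

The apparent sizes Tycho measured were in fact spurious diffraction effects of the naked eye, but granting them, his geometry was sound.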
And what if the parallax was even smaller than anyone thought, so the stars were yet more distant? Then they would all have to be even larger still. Tycho said: "Deduce these things geometrically if you like, and you will see how many absurdities (not to mention others) accompany this assumption [of the motion of the earth] by inference." Copernicans offered a religious response to Tycho's geometry: titanic, distant stars might seem unreasonable, but they were not, for the Creator could make his creations that large if He wanted. In fact, Rothmann responded to this argument of Tycho's by saying: "[W]hat is so absurd about [an average star] having size equal to the whole [orbit of the Earth]? What of this is contrary to divine will, or is impossible by divine Nature, or is inadmissible by infinite Nature? These things must be entirely demonstrated by you, if you will wish to infer from here anything of the absurd. These things that vulgar sorts see as absurd at first glance are not easily charged with absurdity, for in fact divine Sapience and Majesty is far greater than they understand. Grant the vastness of the Universe and the sizes of the stars to be as great as you like — these will still bear no proportion to the infinite Creator. It reckons that the greater the king, so much greater and larger the palace befitting his majesty. So how great a palace do you reckon is fitting to GOD?". Religion played a role in Tycho's geocentrism also – he cited the authority of scripture in portraying the Earth as being at rest. He rarely used Biblical arguments alone (to him they were a secondary objection to the idea of Earth's motion) and over time he came to focus on scientific arguments, but he did take Biblical arguments seriously. Tycho's 1587 geo-heliocentric model differed from those of other geo-heliocentric astronomers, such as Paul Wittich, Reimarus Ursus, Helisaeus Roeslin and David Origanus, in that the orbits of Mars and the Sun intersected. 
This was because Tycho had come to believe that the distance of Mars from the Earth at opposition (that is, when Mars is on the opposite side of the sky from the Sun) was less than that of the Sun from the Earth. Tycho believed this because he came to believe that Mars had a greater daily parallax than the Sun. But in 1584, in a letter to a fellow astronomer, Brucaeus, he had claimed that Mars had been further than the Sun at the opposition of 1582, because he had observed that Mars had little or no daily parallax. He said he had therefore rejected Copernicus's model because it predicted Mars would be at only two-thirds the distance of the Sun. He apparently later changed his mind, concluding that Mars at opposition was indeed nearer the Earth than the Sun, though apparently without any valid observational evidence of a discernible Martian parallax. Such intersecting Martian and solar orbits meant that there could be no solid rotating celestial spheres, because they could not possibly interpenetrate. Arguably, this conclusion was independently supported by his finding that the comet of 1577 was superlunary: it showed less daily parallax than the Moon and thus must pass through any celestial spheres in its transit. Tycho's distinctive contributions to lunar theory include his discovery of the variation of the Moon's longitude. This represents the largest inequality of longitude after the equation of the center and the evection. He also discovered librations in the inclination of the plane of the lunar orbit, relative to the ecliptic (which is not a constant of about 5° as had been believed before him, but fluctuates through a range of over a quarter of a degree), and accompanying oscillations in the longitude of the lunar node. These represent perturbations in the Moon's ecliptic latitude. 
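As a modern aside (using present-day values unavailable to Tycho), the daily (diurnal) parallax he was trying to measure is the angular shift caused by the observer being displaced by roughly one Earth radius as the Earth turns. For both Mars at opposition and the Sun it falls far below the roughly one-arcminute precision of his instruments, which is consistent with his finding "little or no daily parallax":

```python
import math

EARTH_RADIUS_KM = 6371.0
AU_KM = 1.496e8  # kilometres per astronomical unit

def diurnal_parallax_arcsec(distance_au):
    """Horizontal diurnal parallax: the angle subtended by one Earth radius
    at the body's distance (small-angle approximation)."""
    theta_rad = EARTH_RADIUS_KM / (distance_au * AU_KM)
    return math.degrees(theta_rad) * 3600

# Mars at a typical opposition (~0.52 AU) versus the Sun (1 AU)
print(round(diurnal_parallax_arcsec(0.52)))  # ~17 arcseconds
print(round(diurnal_parallax_arcsec(1.0)))   # ~9 arcseconds
```

Both angles are well under 60 arcseconds, so the question Tycho was trying to settle observationally was, in fact, beyond the reach of pre-telescopic instruments.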
Tycho's lunar theory doubled the number of distinct lunar inequalities, relative to those anciently known, and reduced the discrepancies of lunar theory to about a fifth of their previous amounts. It was published posthumously by Kepler in 1602, and Kepler's own derivative form appears in Kepler's "Rudolphine Tables" of 1627. Kepler used Tycho's records of the motion of Mars to deduce laws of planetary motion, enabling calculation of astronomical tables with unprecedented accuracy (the "Rudolphine Tables") and providing powerful support for a heliocentric model of the solar system. Galileo's 1610 telescopic discovery that Venus shows a full set of phases refuted the pure geocentric Ptolemaic model. Thereafter, 17th-century astronomy mostly converted to geo-heliocentric planetary models that could explain these phases just as well as the heliocentric model could, but without the latter's disadvantage of the failure to detect any annual stellar parallax that Tycho and others regarded as refuting it. The three main geo-heliocentric models were the Tychonic; the Capellan, with just Mercury and Venus orbiting the Sun, as favoured by Francis Bacon, for example; and the extended Capellan model of Riccioli, with Mars also orbiting the Sun whilst Saturn and Jupiter orbit the fixed Earth. But the Tychonic model was probably the most popular, albeit probably in what was known as 'the semi-Tychonic' version with a daily rotating Earth. This model was advocated by Tycho's ex-assistant and disciple Longomontanus in his 1622 "Astronomia Danica", the intended completion of Tycho's planetary model using his observational data, which was regarded as the canonical statement of the complete Tychonic planetary system. Longomontanus' work was published in several editions and used by many subsequent astronomers, and through him the Tychonic system was adopted by astronomers as far away as China. 
The ardent anti-heliocentric French astronomer Jean-Baptiste Morin devised a Tychonic planetary model with elliptical orbits, published in 1650 in a simplified, Tychonic version of the "Rudolphine Tables". Another geocentric French astronomer, Jacques du Chevreul, rejected Tycho's observations, including his description of the heavens and the theory that Mars was below the Sun. Some acceptance of the Tychonic system persisted through the 17th century and in places until the early 18th century; it was supported (after a 1633 decree about the Copernican controversy) by "a flood of pro-Tycho literature" of Jesuit origin. Among pro-Tycho Jesuits, Ignace Pardies declared in 1691 that it was still the commonly accepted system, and Francesco Blanchinus reiterated that as late as 1728. Persistence of the Tychonic system, especially in Catholic countries, has been attributed to its satisfaction of a need (relative to Catholic doctrine) for "a safe synthesis of ancient and modern". After 1670, even many Jesuit writers only thinly disguised their Copernicanism. But in Germany, the Netherlands, and England, the Tychonic system "vanished from the literature much earlier". James Bradley's discovery of stellar aberration, published in 1729, eventually gave direct evidence excluding the possibility of all forms of geocentrism, including Tycho's. Stellar aberration could only be satisfactorily explained on the basis that the Earth is in annual orbit around the Sun, with an orbital velocity that combines with the finite speed of the light coming from an observed star or planet to affect the apparent direction of the body observed. Tycho also worked in medicine and alchemy. He was strongly influenced by Paracelsus, who considered the human body to be directly influenced by celestial bodies. 
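Bradley's aberration argument can also be put numerically (with modern values; Bradley's own figure was close to 20 arcseconds): to first order, the apparent angular displacement of a star is simply the ratio of the Earth's orbital speed to the speed of light.

```python
import math

ORBITAL_SPEED_MS = 29.78e3   # Earth's mean orbital speed, m/s (modern value)
LIGHT_SPEED_MS = 2.998e8     # speed of light, m/s

# First-order stellar aberration: theta ~ v / c (in radians)
theta_rad = ORBITAL_SPEED_MS / LIGHT_SPEED_MS
theta_arcsec = math.degrees(theta_rad) * 3600
print(round(theta_arcsec, 1))  # ~20.5 arcseconds
```

An effect of this size was within reach of 18th-century instruments, unlike annual parallax, whose largest value (for Alpha Centauri) is under one arcsecond; this is why aberration settled the question a century before parallax was measured.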
The Paracelsian view of man as a microcosm, and of astrology as the science tying together the celestial and bodily universes, was also shared by Philip Melanchthon, and was precisely one of the points of contention between Melanchthon and Luther, and hence between the Philippists and the Gnesio-Lutherans. For Tycho there was a close connection between empiricism and natural science on one hand and religion and astrology on the other. Using his large herbal garden at Uraniborg, Tycho produced several recipes for herbal medicines, using them to treat illnesses such as fever and plague. In his own time, Tycho was also famous for his contributions to medicine; his herbal medicines were in use as late as the 1900s. The expression "Tycho Brahe days", in Scandinavian folklore, refers to a number of "unlucky days" that were featured in many almanacs beginning in the 1700s, but which have no direct connection to Tycho or his work. Whether because he realized that astrology was not an empirical science, or because he feared religious repercussions, Tycho seems to have had a somewhat ambiguous relation to his own astrological work. For example, two of his more astrological treatises, one on weather predictions and an almanac, were published in the names of his assistants, in spite of the fact that he worked on them personally. Some scholars have argued that he lost faith in horoscope astrology over the course of his career, and others that he simply changed his public communication on the topic as he realized that connections with astrology could influence the reception of his empirical astronomical work. The first biography of Tycho, which was also the first full-length biography of any scientist, was written by Pierre Gassendi in 1654. In 1779, Tycho de Hoffmann wrote of Tycho's life in his history of the Brahe family. In 1913, Dreyer published Tycho's collected works, facilitating further research. 
Early modern scholarship on Tycho tended to see the shortcomings of his astronomical model, painting him as a mystic recalcitrant in accepting the Copernican revolution and valuing mostly his observations, which allowed Kepler to formulate his laws of planetary movement. Especially in Danish scholarship, Tycho was depicted as a mediocre scholar and a traitor to the nation, perhaps because of the important role in Danish historiography of Christian IV as a warrior king. In the second half of the 20th century, scholars began reevaluating his significance, and studies by Kristian Peder Moesgaard, Owen Gingerich, Robert Westman, Victor E. Thoren, and John R. Christianson focused on his contributions to science, demonstrating that while he admired Copernicus he was simply unable to reconcile his basic theory of physics with the Copernican view. Christianson's work showed the influence of Tycho's Uraniborg as a training center for scientists who, after studying with Tycho, went on to make contributions in various scientific fields. Although Tycho's planetary model was soon discredited, his astronomical observations were an essential contribution to the scientific revolution. The traditional view of Tycho is that he was primarily an empiricist who set new standards for precise and objective measurements. This appraisal originated in Pierre Gassendi's 1654 biography, "Tychonis Brahe, equitis Dani, astronomorum coryphaei, vita". It was furthered by Johann Dreyer's biography in 1890, which was long the most influential work on Tycho. According to historian of science Helge Kragh, this assessment grew out of Gassendi's opposition to Aristotelianism and Cartesianism, and fails to account for the diversity of Tycho's activities. Tycho's discovery of the new star was the inspiration for Edgar Allan Poe's poem "Al Aaraaf". In 1998, "Sky & Telescope" magazine published an article by Donald W. Olson, Marilynn S. Olson and Russell L. 
Doescher arguing, in part, that Tycho's supernova was also the same "star that's westward from the pole" in Shakespeare's "Hamlet". Tycho is directly referenced in Sarah Williams' poem "The Old Astronomer": "Reach me down my Tycho Brahé,—I would know him when we meet". The poem's oft-quoted lines, though, come later: "Though my soul may set in darkness, it will rise in perfect light; / I have loved the stars too truly to be fearful of the night." Alfred Noyes also wrote a long biographical poem in honor of Brahe. The lunar crater Tycho is named in his honour, as is the crater Tycho Brahe on Mars and the minor planet 1677 Tycho Brahe in the asteroid belt. The bright supernova SN 1572 is also known as Tycho's Nova, and the Tycho Brahe Planetarium in Copenhagen is also named after him, as is the palm genus Brahea.
https://en.wikipedia.org/wiki?curid=30027
The A-Team The A-Team is an American action-adventure television series that ran on NBC from 1983 to 1987 about former members of a fictitious United States Army Special Forces unit. The members, after being court-martialed "for a crime they didn't commit", escaped from military prison and, while still on the run, worked as soldiers of fortune. The series was created by Stephen J. Cannell and Frank Lupo. A feature film based on the series was released by 20th Century Fox in 2010. "The A-Team" was created by writers and producers Stephen J. Cannell and Frank Lupo at the behest of Brandon Tartikoff, NBC's Entertainment president. Cannell was fired from ABC in the early 1980s, after failing to produce a hit show for the network, and was hired by NBC; his first project was "The A-Team." Brandon Tartikoff pitched the series to Cannell as a combination of "The Dirty Dozen", "", "The Magnificent Seven", "Mad Max" and "Hill Street Blues", with "Mr. T driving the car". "The A-Team" was not generally expected to become a hit, although Stephen J. Cannell has said that George Peppard suggested it would be a huge hit "before we ever turned on a camera". The show became very popular; the first regular episode, which aired after Super Bowl XVII on January 30, 1983, reached 26.4% of the television audience, placing fourth in the top 10 Nielsen-rated shows. The show remains prominent in popular culture for its cartoonish violence (in which people were seldom seriously hurt, despite the frequent use of automatic weapons), formulaic episodes, its characters' ability to form weaponry and vehicles out of old parts, and its distinctive theme tune. The show boosted the career of Mr. T, who portrayed the character of B. A. Baracus, around whom the show was initially conceived. Some of the show's catchphrases, such as "I love it when a plan comes together", "Hannibal's on the jazz", and "I ain't gettin' on no plane!" have also made their way onto T-shirts and other merchandise. 
The term "A-Team" is a nickname coined for U.S. Special Forces' Operational Detachments Alpha (ODA) during the Vietnam War. In a 2003 Yahoo! survey of 1,000 television viewers, "The A-Team" was voted the "oldie" television show viewers would most like to see revived, beating out such popular television series from the 1980s as "The Dukes of Hazzard" and "Knight Rider". "The A-Team" is a naturally episodic show, with few overarching stories beyond the characters' continuing motivation to clear their names, few references to events in past episodes, and a recognizable and steady episode structure. In describing the ratings drop that occurred during the show's fourth season, reviewer Gold Burt points to this structure as a leading cause of the decreased popularity, "because the same basic plot had been used over and over again for the past four seasons with the same predictable outcome". Similarly, reporter Adrian Lee called the plots "stunningly simple" in a 2006 article for "The Express" (UK newspaper), citing such recurring elements as "BA's fear of flying, and outlandish finales when the team fashioned weapons from household items". The show became emblematic of this kind of "fit-for-TV warfare" due to its depiction of high-octane combat scenes, with lethal weapons, wherein the participants (with the notable exception of General Fulbright) are never killed and rarely seriously injured (see also the On-screen violence section). As the television ratings of "The A-Team" fell dramatically during the fourth season, the format was changed for the show's final season in 1986–87 in a bid to win back viewers. After years on the run from the authorities, the A-Team is finally apprehended by the military. General Hunt Stockwell (Robert Vaughn), a mysterious CIA operative, proposes that they work for him, in exchange for which he will arrange their pardons upon successful completion of several suicide missions. 
To do so, the A-Team must first escape from their captivity. With the help of a new character, Frankie "Dishpan Man" Santana, Stockwell fakes their deaths before a military firing squad. The new status of the A-Team, no longer working for themselves, remained for the duration of the fifth season, while Eddie Velez and Robert Vaughn received star billing along with the principal cast. The missions that the team had to perform in season five were somewhat reminiscent of "", and based more around political espionage than beating local thugs, usually taking place in foreign countries; they included successfully overthrowing an island dictator, rescuing a scientist from East Germany, and recovering top secret Star Wars defense information from Soviet hands. These changes proved unsuccessful with viewers, however, and ratings continued to decline. Only 13 episodes aired in the fifth season. In what was supposed to be the final episode, "The Grey Team" (although "Without Reservations" was broadcast on NBC as the last first-run episode in March 1987), Hannibal, after being misled by Stockwell one time too many, tells him that the team will no longer work for him. At the end, the team discusses what they will do if they get their pardons, and it is implied that they would continue doing what they were doing as the A-Team. The character of Howling Mad Murdock can be seen in the final scene wearing a T-shirt that says, "Fini". During the Vietnam War, the A-Team were members of the 5th Special Forces Group (see the episode "West Coast Turnaround"). In the episode "Bad Time on the Border", Colonel John "Hannibal" Smith (George Peppard) indicated that the A-Team were "ex-Green Berets". During the Vietnam War, the A-Team's commanding officer, Colonel Morrison, gave them orders to rob the Bank of Hanoi to help bring the war to an end. 
They succeeded in their mission, but on their return to base four days after the end of the war, they discovered that Morrison had been killed by the Viet Cong, and that his headquarters had been burned to the ground. This meant that the proof that the A-Team members were acting under orders had been destroyed. They were arrested, and imprisoned at Fort Bragg, from which they quickly escaped before standing trial. The origin of the A-Team is directly linked to the Vietnam War, during which the team formed. The show's introduction in the first four seasons mentions this, accompanied by images of soldiers coming out of a helicopter in an area resembling a forest or jungle. Besides this, "The A-Team" would occasionally feature an episode in which the team came across an old ally or enemy from those war days. For example, the first season's final episode "A Nice Place To Visit" revolved around the team traveling to a small town to honor a fallen comrade, ending with them avenging his death, and in season two's "Recipe For Heavy Bread", a chance encounter leads the team to meet both the POW cook who helped them during the war and the American officer who sold his unit out. An article in the "New Statesman" (UK), published shortly after the premiere of "The A-Team" in the United Kingdom, also pointed out "The A-Team's" connection to the Vietnam War, characterizing the show as an idealization of the war and an example of it slowly becoming accepted and assimilated into American culture. One of the team's primary antagonists, Col. Roderick Decker (Lance LeGault), had his past linked back to the Vietnam War, in which he and Hannibal had come to fisticuffs in "the DOOM Club" (Da Nang Open Officers' Mess). At other times, members of the team would refer back to a certain tactic used during the War, which would be relevant to the team's present predicament. 
Often, Hannibal would refer to such a tactic, after which the other members of the team would complain about its failure during the War. This was also used to refer to some of Face's past accomplishments in scamming items for the team, such as in the first-season episode "Holiday In The Hills", in which Murdock fondly remembers Face being able to secure a '53 Cadillac while in the Vietnam jungle. The team's ties to the Vietnam War were referred to again in the fourth-season finale, "The Sound of Thunder", in which the team is introduced to Tia (Tia Carrere), a war orphan and daughter of fourth-season antagonist General Fulbright. Returning to Vietnam, Fulbright is shot in the back and gives his last words as he dies. The 2006 documentary "Bring Back The A-Team" joked that the scene lasted seven and a half minutes, but his death actually took a little over a minute. His murderer, a Vietnamese colonel, is killed in retaliation. Tia then returns with the team to the United States ("see also: casting"). This episode is notable for having one of the show's few truly serious dramatic moments, with each team member privately reminiscing on their war experiences, intercut with news footage from the war with Barry McGuire's "Eve of Destruction" playing in the background. The show's ties to the Vietnam War are fully dealt with in the opening arc of the fifth season, dubbed "The Court-Martial (Part 1–3)", in which the team is finally court-martialed for the robbery of the Bank of Hanoi. The character of Roderick Decker makes a return on the witness stand, and various newly introduced characters from the A-Team's past also make appearances. The team, after a string of setbacks, decides to plead guilty to the crime and they are sentenced to be executed. They escape this fate and come to work for General Hunt Stockwell, leading into the remainder of the fifth season. 
The show ran for five seasons on the NBC television network, from January 23, 1983 to December 30, 1986 (with one additional, previously unbroadcast episode shown on March 8, 1987), for a total of 98 episodes. "The A-Team" revolves around the four members of a former commando outfit, now mercenaries. Their leader is Lieutenant Colonel/Colonel John "Hannibal" Smith (George Peppard), whose plans tend to be unorthodox, but effective. Lieutenant Templeton Peck (Dirk Benedict; Tim Dunigan appeared as Templeton Peck in the pilot), usually called "Face" or "Faceman", is a smooth-talking con man who serves as the team's appropriator of vehicles and other useful items, as well as the team's second-in-command. The team's pilot is Captain H.M. "Howling Mad" Murdock (Dwight Schultz), who has been declared insane and lives in a Veterans' Administration mental institution for the show's first four seasons. Finally, there is the team's strong man, mechanic and Sergeant First Class Bosco "B.A.", or "Bad Attitude", Baracus (Mr. T). The team belonged to the 5th Special Forces, as seen in the left-shoulder patch on Hannibal's uniform in the episode "A Nice Place To Visit". A patch on the right shoulder of Hannibal's uniform in that episode indicates he belonged to the 101st Airborne during a prior combat assignment, but that patch was replaced by the 1st Air Cavalry Division patch in the episode "Trial by Fire". Under Army uniform conventions, the patch worn on the left sleeve indicates the wearer's current assignment, and in "A Nice Place to Visit" it shows that the team was assigned to the Special Forces, with an Airborne tab above the shoulder patch. Their berets in that episode are green and bear the insignia of the 5th Special Forces in Vietnam. In the episode "West Coast Turnaround", Hannibal stated they were with the 5th Special Forces Group. Then, in the episode "Bad Time on the Border", Hannibal refers to his friends as "ex-Green Berets". 
Though the name they have adopted comes from the "A-Teams", the nickname coined for Special Forces Operational Detachments Alpha, these detachments usually consisted of "twelve" members; whether the four were considered a "detachment" of their own or had once had eight compatriots who were killed in action was never revealed. In the episode "A Nice Place to Visit", Ray Brenner is stated to have been a Major and part of Hannibal's team in Vietnam. For its first season and the first half of the second season, the team was assisted by reporter Amy Amanda Allen (Melinda Culea). In the second half of the second season, Allen was replaced by fellow reporter Tawnia Baker (Marla Heasley). The character of Tia (Tia Carrere), a Vietnam war orphan now living in the United States, was meant to join the Team in the fifth season, but she was replaced by Frankie Santana (Eddie Velez), who served as the team's special effects expert. Velez was added to the opening credits of the fifth season after its second episode. During their adventures, the A-Team was constantly met by opposition from the Military Police. In the show's first season, the MPs were led by Colonel Francis Lynch (William Lucking), but he was replaced for the second, third, and early fourth seasons by Colonel Roderick Decker (Lance LeGault) and his aide Captain Crane (Carl Franklin). Lynch returned for one episode in the show's third season ("Showdown!") but was not seen again. Decker was also briefly replaced by a Colonel Briggs (Charles Napier) in the third season for one episode ("Fire") when LeGault was unavailable, but returned shortly after. For the latter portion of the show's fourth season, the team was hunted by General Harlan "Bull" Fulbright (Jack Ging), who would later hire the A-Team to find Tia in the season four finale, during which Fulbright was killed. 
The fifth season introduced General Hunt Stockwell (Robert Vaughn) who, while serving as the team's primary antagonist, was also the team's boss and joined them on several missions. He was often assisted by Carla (Judith Ledford, sometimes credited as Judy Ledford). John "Hannibal" Smith is a master of disguise. His most used disguise (seen onscreen only in the pilot episode) is Mr. Lee, the dry cleaner. This is one of the final parts of the client screening process, as he tells the client where to go to make full contact with the A-Team. He dresses most often in a tan safari jacket and black leather gloves, and is constantly seen smoking a cigar. Hannibal carries either a Browning Hi-Power, Colt M1911A1 or a Smith & Wesson Model 39 as a sidearm, most often "Mexican carried", although he uses a holster when on missions. His catchphrase is "I love it when a plan comes together", and he is often said, usually by B.A., to be "on the jazz" when caught up in the thrill of completing a mission. Templeton "Faceman" Peck is a master of the persuasive arts. The team's scrounger, he can get virtually anything he sets his mind to, usually exploiting women with sympathy-appeal and flirtation. He grew up an orphan, and is not without integrity, as stated by Murdock in the episode "Family Reunion": "He would rip the shirt off his back for you, and then scam one for himself." Faceman is also the A-Team's accountant. He dresses suavely, often appearing in suits. Faceman carries a Colt Lawman Mk III revolver for protection, and drives a custom white 1984 Corvette with red trim. Bosco "B.A." (Bad Attitude) Baracus is the muscle for the A-Team, able to perform exceptional feats of strength. He is also the team's mechanic. Baracus affects a dislike for Murdock, calling him a "crazy fool", but his true feelings of friendship are revealed when he prevents Murdock from drowning in his desire to live like a fish. 
Baracus also has a deep fear of flying, and the others usually have to trick and/or knock him out to get him on an aircraft. It is very rare for Baracus to be awake while flying, and even rarer for him actually to consent to it; when he does, he goes into a catatonic state. Baracus generally wears overalls and leopard- or tiger-print shirts in the early seasons, and wears a green jumpsuit in the later seasons. He is almost always seen with many gold chains and rings on every finger, and also wears a weightlifting belt. Baracus always wears his hair in a mohawk-like cut. He drives a customized black GMC van that acts as the team's usual mode of transport. "H.M. "Howling Mad" Murdock" is the A-Team's pilot, and can fly any kind of aircraft with considerable skill. However, due to a helicopter crash in Vietnam, Murdock apparently went insane. He lives in the mental wing of a Veterans' Hospital. Whenever the rest of the team requires a pilot, they have to break him out of the hospital, generally using Faceman to do so. In Seasons 1–4, Murdock has a different pet, imaginary friend, or alter ego in each episode. Whenever one of his pets or imaginary friends is killed by an enemy, Murdock snaps and takes revenge (but never kills). Many times, when Baracus is mad at Murdock for being crazy, Hannibal will side with Murdock in a sympathetic way. Once he is discharged from the hospital in Season 5, Murdock has a different job each episode. Murdock usually wears a leather flight jacket, a baseball cap, and basketball sneakers. Although the part of Face was written by Frank Lupo and Stephen J. Cannell with Dirk Benedict in mind, NBC insisted that the part be played by another actor. Therefore, in the pilot, Face was portrayed by Tim Dunigan, who was later replaced by Dirk Benedict with the comment that Dunigan was "too tall and too young". According to Dunigan: "I look even younger on camera than I am.
So it was difficult to accept me as a veteran of the Vietnam War, which ended when I was a sophomore in high school." Tia Carrere was intended to join the principal cast of the show in its fifth season after appearing in the season four finale, providing a tie to the team's inception during the war. Unfortunately for this plan, Carrere was under contract to "General Hospital", which prevented her from joining "The A-Team", and her character was abruptly dropped as a result. According to Mr. T's account in "Bring Back... The A-Team" in 2006, the role of B. A. Baracus was written specifically for him. This is corroborated by Stephen J. Cannell's own account of the initial concept proposed by Tartikoff. James Coburn, who co-starred in "The Magnificent Seven", was considered for the role of Hannibal in "The A-Team", while George Peppard (Hannibal) was the original consideration for the role of Vin (played by Steve McQueen instead) in "The Magnificent Seven"; Robert Vaughn, for his part, had actually appeared in that film. The show also featured many notable guest stars. During the show's first three seasons, "The A-Team" managed to pull in 17% to 20% of American households on average. The first regular episode ("Children of Jamestown") reached 26.4% of the television-watching audience, placing fourth in the top 10 rated shows, according to the Nielsen ratings. By March, "The A-Team", now in its regular Tuesday timeslot, dropped to the eighth spot, but still rated 20.5%. During the sweeps week in May of that year, "The A-Team" dropped again but remained steady at 18.5%, and rose to 18.8% during the second week of May sweeps. These were the highest ratings NBC had achieved in five years. During the second season, the ratings continued to soar, reaching third place in the twenty highest-rated programs, behind "Dallas" and "Simon & Simon", in January (mid-season), while during the third season, it was beaten out only by two other NBC shows, including "The Cosby Show".
The fourth season saw "The A-Team" experience a dramatic fall in ratings, as it started to lose its position while overall television viewership increased. As such, its ratings, while stable, were comparatively lower. The season premiere ranked a 17.4% (a 26% audience share in that timeslot) on the Nielsen rating scale, but ratings quickly declined afterwards. In October, "The A-Team" had fallen to the 19th spot, and by Super Bowl night it had fallen further to 29th, on the night on which the show had originally scored its first hit three years earlier. For the remainder of its fourth season, "The A-Team" managed to hang around the 20th spot, far from the top-10 position it had enjoyed during its first three seasons. After four years on Tuesdays, NBC decided to move "The A-Team" to a new timeslot on Fridays for what would be its final season. Ratings continued to drop, and after seven episodes, "The A-Team" fell out of the top 50 altogether with a 13.3 Nielsen rating. In November 1986, NBC cancelled the series, declining to order the last nine episodes of what would have been a 22-episode season. The series has achieved cult status through heavy syndication in the U.S. and internationally. It has also remained popular overseas, such as in the United Kingdom, where it was first shown in July 1983. It has aired on the satellite and cable channel Esquire Network. The series was to begin airing on NBC-TV's OTA digital subchannel network, Cozi TV, in January 2016. Forces TV has shown the series every weekday since October 17, 2016. The series has been airing in Spanish on Telemundo-TV's OTA digital subchannel network, TeleXitos, since December 2014. The series is currently available through Starz, ELREY, Tubi, and COZI TV. "The A-Team" has been broadcast all over the world; international response has been varied. In 1984, the main cast members of "The A-Team", George Peppard, Mr. T, Dirk Benedict and Dwight Schultz, were invited to the Netherlands.
George Peppard was the first to receive the invitation and thus thought the invitation applied only to him. When the other cast members were also invited, Peppard declined, leaving only Mr. T, Benedict and Schultz to visit the Netherlands. The immense turn-out for the stars was unforeseen, and they were forced to leave early as a security measure. A video featuring the attending actors was released, in which Dwight Schultz apologized and thanked everyone who had attended. In Australia, "The A-Team" was broadcast on Channel Ten, and since 2010, 7mate has been showing reruns of the show. The show was broadcast in New Zealand on TV2. In Brazil, the series was broadcast on SBT from 1984 to 1989, later moving to Rede Globo in the early 1990s. In the UK, the program was shown on ITV, starting on Friday, July 22, 1983; when it returned for its second run (resuming mid-second season), it moved to Saturday evenings. The series continued to be repeated on ITV until 1994. The series was later repeated on UK Gold from 1997 through 2007 at various times. It was also repeated on Bravo from 1997 to 1999, and returned to that channel in 2008 until the channel's closure in 2011. In 2017, the digital channel Spike began showing the series from the beginning; Channel 5 also repeated it in 2017. Although ratings soared during its early seasons, many television critics described the show as largely cartoonish and wrote the series off. Most reviews focused on the acting and the formulaic nature of the episodes, most prominently the absence of actual killing in a show about Vietnam War veterans. The show was, however, a huge hit in Italy from the mid-1980s to the 1990s. The violence presented in "The A-Team" is highly sanitized. People do not bleed or bruise when hit (though they might develop a limp or require a sling), nor do the members of the A-Team kill people. The results of violence were only ever presented when required by the script.
In almost every car crash there is a short take showing the occupants of the vehicle climbing out of the mangled or burning wreck, even in helicopter crashes. However, more of these types of takes were dropped near the end of the fourth season. According to Stephen J. Cannell, this part of the show became a running joke for the writing staff, and they would at times test the limits of realism on purpose. The show has been described as cartoonish and likened to "Tom and Jerry". Dean P. of the "Courier-Mail" described the violence in the show as "hypocritical", writing that "the morality of giving the impression that a hail of bullets does no-one any harm is ignored. After all, Tom and Jerry survived all sorts of mayhem for years with no ill-effects." Television reviewer Ric Meyers joked that the A-Team used "antineutron bullets—they destroy property for miles around, but never harm a human being". According to certain estimates, an episode of the A-Team contained up to 46 violent acts. Stephen J. Cannell, co-creator of the show, responded: "They were determined to make a point, and we were too big a target to resist. Cartoon violence is a scapegoat issue." For a time, "The A-Team's" status as a hit show remained strong, but it ultimately lost out to more family-oriented shows such as "The Cosby Show", "Who's the Boss?" and "Growing Pains". John J. O'Connor of "The New York Times" wrote in a 1986 article that "...a substantial number of viewers, if the ratings in recent months are to be believed, are clearly fed up with mindless violence of the car-chasing, fist-slugging variety". During its tenure, the show was occasionally criticized for being sexist. These critiques were based on the notion that most female roles on the show were either a lead-in to the episode's plot, the recipient of Face's affections, or both.
The only two regular female members of the cast, Melinda Culea (season 1 and the first half of season 2) and Marla Heasley (the latter half of season 2), did not have long tenures with the show. Both Culea and Heasley had been brought in by the network and producers to stem these critiques, in the hope that a female character would properly balance the otherwise all-male cast. Culea was fired during the second season because of creative differences between her and the show's writers; she wanted more lines and more action scenes. Culea's character of Amy Allen suddenly disappeared between two episodes, and was only briefly referred to once in the episode "In Plane Sight", and a couple of times in "The Battle of Bel Air", in which she was said to have taken a correspondent job overseas (in Jakarta, Indonesia). The latter episode also introduced Heasley's character, Tawnia Baker. The new character was also an assisting reporter, but with a more fragile and seductive quality. Ultimately, she was written out of the show at the start of the third season when the network determined that a female cast member was not necessary. Tawnia left the team on-screen, choosing to marry and move out of Los Angeles. As Marla Heasley recounts in "Bring Back... The A-Team" (May 18, 2006), although sexism was not prevalent on the set per se, there was a sense that a female character was not necessary on the show. On her first day on set, George Peppard took her aside and told her: "We don't want you on the show. None of the guys want you here. The only reason you're here is because the network and the producers want you. For some reason they think they need a girl." The interview continues with Heasley noting that on her last day of work Peppard took her aside again, saying: "I'm sorry that this is your last day, but remember what I said the very first day, that we didn't want a girl, has nothing to do with you. You were very professional, but no reason to have a girl."
In "Bring Back... the A-Team", Dirk Benedict also remarked: "It was a guy's show. It was male driven. It was written by guys. It was directed by guys. It was acted by guys. It's about what guys do. We talked the way guys talked. We were the boss. We were the God. We smoked when we wanted. We shot guns when we wanted. We kissed the girls and made them cry... when we wanted. It was the last truly masculine show." The 1983 GMC Vandura van used by the A-Team, with its characteristic red stripe, black and red turbine mag wheels, and rooftop spoiler, has become an enduring pop culture icon. The customized 1994 Chevrolet G20 used in the A-Team movie was also on display at the 2010 New York International Auto Show. A number of devices were seen in the back of the van in different episodes, including a mini printing press ("Pros and Cons"), an audio surveillance recording device ("A Small and Deadly War"), Hannibal's disguise kits in various episodes, and a gun storage locker. Early examples of the van had a red GMC logo on the front grille, and an additional GMC logo on the rear left door. Early in the second season, these logos were blacked out, although GMC continued to supply vans and receive a credit in the closing credits of each episode. The van was almost all black, as the section above the red stripe was metallic gray. The angle of the rear spoiler can also be seen to vary on different examples of the van within the series. Additionally, some versions of the van have a sunroof, whereas others, typically those used for stunts, do not. This led to continuity errors in some episodes, such as in the third season's "The Bells of St. Mary's", in a scene where Face jumps from a building onto the roof of a van with no sunroof but, moments later, in an interior studio shot, climbs in through the sunroof. The huge success of the series saw a vast array of merchandise, including toys and snacks, released both in America and internationally.
There were several sets of trading cards and stickers; action figures of the characters were produced by Galoob, as well as vehicles, including B.A.'s van and Face's Corvette (available in several different sizes), along with items such as helicopters, trucks and jeeps to fit in with the line, from model car manufacturer Ertl. Other items available included jigsaw puzzles; View-Master reels produced by View-Master International, containing 21 3-D pictures (over three reels) of the second-season "A-Team" story "When You Comin' Back, Range Rider?" (available both as a pack of reels and as a "gift set" with a 3-D viewer); an electric race-car track with A-Team vehicle covers instead of normal cars; and a TYCO-produced train set with various accessories and pieces themed for the A-Team look. The set includes a Baldwin shark-nose engine painted up like the van and a matching caboose. Following the original cancellation of the series, further merchandise has appeared as the series has achieved cult status, including an "A-Team" van by "Hot Wheels". In 2016, Lego released a pack that includes a B.A. Baracus minifigure and a constructible van; the pack unlocks additional "A-Team" themed content in the video game "Lego Dimensions", including all four team members as playable characters. Marvel Comics produced a three-issue "A-Team" comic book series, which was later reprinted as a trade paperback. Similarly, in the United Kingdom, an "A-Team" comic strip appeared for several years in the 1980s as part of the children's television magazine and comic "Look-In", to tie in with the British run of the series. It was preceded, though, by a short run in the final year (1984) of "TV Comic", drawn by Jim Eldridge. Several novels were based on the series, the first six published in America by Dell and in Britain by Target Books; the last four were only published in Britain. The first six are credited to Charles Heath.
The books are generally found in paperback form, although hardback copies (with different cover artwork) were also released. In the United Kingdom from 1985 to 1988, four annuals were produced, each consisting of text and comic strip stories, puzzles, and photos of the show's stars, with a further one produced by Marvel Comics consisting of several reprinted comic strips, released in 1989/1990. A Panini set of stickers, which adapted six TV episodes (from the first and early second season) using shots from the episodes, could be stuck into an accompanying book, with text under each inserted sticker to narrate the story. The original main theme composed by Mike Post and Pete Carpenter (in a performance credited to Post) was released on the vinyl LP Mike Post – Television Theme Songs (Elektra Records E1-60028Y, 1982) and again on the Mike Post – Mike Post LP (RCA Records AFL1-5183, 1984), both long out of print; however, this was not the same version of the theme as heard on-screen. The theme, as heard on seasons two through four (including the opening narration and sound effects), was also released on TVT's "Television's Greatest Hits: 70s and 80s". A 7-inch single of the song credited to Post was released on RCA in 1984. The French version of the song had lyrics, which mirrored the spoken description of the show in the English opening credits. The theme has been ranked among the best TV themes ever written, with TV weatherman Al Roker sharing that opinion and using the song to "get jazzed up" in the morning. Though no original music other than the theme has been released, in 1984 EMI issued an album of re-recorded material from the series conducted by Daniel Caine (reissued by Silva Screen on compact disc in 1999, SILVAD 3509). The series was co-produced by former actor John Ashley, who also provided the opening narration.
During its time, "The A-Team" was nominated for three Emmy Awards: in 1983 (Outstanding Film Sound Mixing for a Series) for the pilot episode, in 1984 (Outstanding Film Sound Mixing for a Series) for the episode "When You Comin' Back, Range Rider?" and in 1987 (Outstanding Sound Editing for a Series) for the episode "Firing Line". The show featured professional wrestlers such as Hulk Hogan, Professor Toru Tanaka, Ricky "The Dragon" Steamboat, The Dynamite Kid, Bobby "The Brain" Heenan, Davey Boy Smith (The British Bulldog), Big John Studd and Greg "The Hammer" Valentine, in most cases playing themselves. In the episode "Body Slam", which featured Hogan, wrestling interviewer and announcer "Mean" Gene Okerlund also appeared. In addition, the music video for John Cena's "Bad, Bad Man" (on Cena's "You Can't See Me" album) featured the Chain Gang as a three-man A-Team—Cena as Hannibal, plus Cena's cousin Tha Trademarc as Howling Mad and Bumpy Knuckles as B.A. In early episodes the team used Colt AR-15 SP1 semi-automatic rifles (with automatic sound effects, simulating the M16), while in later seasons they used the Ruger Mini-14 and, on rare occasions, the selective-fire AC-556K variant of the Mini-14. Hannibal is also seen using an M60 machine gun (which he called "Baby") in some episodes, as well as a Micro-Uzi. MAC-11s with parts added to simulate the Uzi appear in at least two early episodes. Hannibal's sidearms are either a nickel-plated Smith & Wesson Model 59 or a stainless steel Smith & Wesson Model 639. Unusually, in the episode "Black Day At Bad Rock" he is seen carrying a Browning Hi-Power. Face's usual sidearm is a Colt Lawman Mk III, though he does use Smith & Wesson revolvers in later seasons. Many antagonists and members of the team are seen using 1911s as well. Starting from Season 4, the then-exotic Steyr AUG bullpup rifle also became prominent in the series.
"So many different firearms were used in the 1980s hit 'The A-Team' that it's impossible to list them all. For five seasons, the wrongly accused foursome used rifles, handguns, submachine guns and shotguns to bring justice for the little guy while trying to stay out of jail. Regardless of the number of explosions or rounds fired, nobody ever got seriously hurt except for the occasional flesh wound of a team member." As a result, the "American Rifleman" declared "The A-Team" the Number One Show on Television to regularly feature firearms. Universal Studios has released all five seasons of "The A-Team" on DVD in Region 1, 2, and 4. In Region 2, a complete series set entitled "The A-Team--The Ultimate Collection" was released on October 8, 2007. A complete series set was released in Region 1 on June 8, 2010. The set includes 25 discs packaged in a replica of the A-Team's signature black van from the show. The complete series set was released in Region 4 on November 3, 2010. All 5 seasons were re-released in Region 2 with new packaging on June 21, 2010. The series has been remastered and was released on Blu-ray disc in the United Kingdom on October 17, 2016. On May 18, 2006, Channel 4 in the UK attempted to reunite the surviving cast members of "The A-Team" for the show "Bring Back..." in an episode titled "Bring Back...The A-Team". Justin Lee Collins presented the challenge, securing interviews and appearances from Dirk Benedict, Dwight Schultz, Marla Heasley, Jack Ging, series co-creator Stephen Cannell, and Mr. T. Collins eventually managed to bring together Benedict, Schultz, Heasley, Ging and Cannell, along with William Lucking, Lance LeGault, and George Peppard's son, Christian. Mr. T was unable to make the meeting, which took place in the Friar's Club in Beverly Hills, but he did manage to appear on the show for a brief talk with Collins. A feature film based on "The A-Team" was released on June 11, 2010, and was produced by 20th Century Fox. 
Both Dirk Benedict (Face) and Dwight Schultz (Murdock) made brief cameo appearances in the film (as a prisoner using a sunbed and a psychiatrist overseeing Murdock's shock therapy, respectively); because of timing issues, these scenes were moved to the end of the credits. They were later reinserted for the extended cut of the film. In September 2015, Fox announced that it was developing a reboot "A-Team" series, with Chris Morgan as executive producer, Cannell's daughter Tawnia McKiernan also attached, and Albert Kim writing. The team is to be made up of both male and female characters.
https://en.wikipedia.org/wiki?curid=30028
Terry Pratchett Sir Terence David John Pratchett (28 April 1948 – 12 March 2015) was an English humorist, satirist, and author of fantasy novels, especially comical works. He is best known for his "Discworld" series of 41 novels. Pratchett's first novel, "The Carpet People", was published in 1971. The first "Discworld" novel, "The Colour of Magic", was published in 1983, after which Pratchett wrote an average of two books a year. His 2011 "Discworld" novel "Snuff" became the third-fastest-selling hardback adult-readership novel since records began in the UK, selling 55,000 copies in the first three days. The final "Discworld" novel, "The Shepherd's Crown", was published in August 2015, five months after his death. Pratchett, with more than 85 million books sold worldwide in 37 languages, was the UK's best-selling author of the 1990s. He was appointed Officer of the Order of the British Empire (OBE) in 1998 and was knighted for services to literature in the 2009 New Year Honours. In 2001 he won the annual Carnegie Medal for "The Amazing Maurice and his Educated Rodents", the first "Discworld" book marketed for children. He received the World Fantasy Award for Life Achievement in 2010. In December 2007, Pratchett announced that he had been diagnosed with early-onset Alzheimer's disease. He later made a substantial public donation to the Alzheimer's Research Trust, filmed a television programme chronicling his experiences with the condition for the BBC, and became a patron for Alzheimer's Research UK. Pratchett died on 12 March 2015, aged 66. Pratchett was born on 28 April 1948 in Beaconsfield in Buckinghamshire, England (where he attended Holtspur School), the only child of David (1921–2006) and Eileen Pratchett (1922–2010), of Hay-on-Wye. 
His family moved to Bridgwater, Somerset, briefly in 1957, following which he passed his eleven plus exam in 1959, earning a place at High Wycombe Technical High School (now John Hampden Grammar School) where he was a key member of the debating society and wrote stories for the school magazine. Pratchett described himself as a "non-descript student" and, in his "Who's Who" entry, credits his education to the Beaconsfield Public Library. His maternal grandparents came from Ireland. His early interests included astronomy. He collected Brooke Bond tea cards about space, owned a telescope and wanted to be an astronomer, but lacked the necessary mathematical skills. He developed an interest in reading science fiction and began attending science fiction conventions from about 1963–1964, but stopped a few years later when he got his first job as a trainee journalist at the local paper. His early reading included the works of H. G. Wells, Arthur Conan Doyle, and "every book you really ought to read", which he later regarded as "getting an education". Pratchett published his first short story, "Business Rivals", in the High Wycombe Technical School magazine in 1962. It is the tale of a man named Crucible who finds the Devil in his flat in a cloud of sulphurous smoke. "The Hades Business", which was published in the school magazine when he was 13, was published commercially when he was 15. Pratchett earned five O-levels and started A-level courses in Art, English and History. His initial career choice was journalism and he left school at 17, in 1965, to start an apprenticeship with Arthur Church, the editor of the "Bucks Free Press". In this position he wrote, among other things, over eighty stories for the "Children's Circle" section under the name Uncle Jim. Two of these episodic stories contain characters found in his novel "The Carpet People" (1971). 
While on day release from his apprenticeship he finished his A-Level in English and took the National Council for the Training of Journalists proficiency course where he received the highest marks of his group. Pratchett had his writing breakthrough in 1968 when he interviewed Peter Bander van Duren, co-director of a small publishing company, Colin Smythe Ltd. During the meeting, Pratchett mentioned he had written a manuscript, "The Carpet People". Colin Smythe Ltd published the book in 1971, with illustrations by the author. The book received strong, although few, reviews and was followed by the science fiction novels "The Dark Side of the Sun" (10 May 1976) and "Strata" (15 June 1981). After various positions in journalism, in 1980 Pratchett became Press Officer for the Central Electricity Generating Board (CEGB) in an area that covered four nuclear power stations. He later joked that he had demonstrated "impeccable timing" by making this career change so soon after the Three Mile Island nuclear accident in Pennsylvania, US, and said he would "write a book about my experiences, if I thought anyone would believe it". The first Discworld novel, "The Colour of Magic", was published in hardback by Colin Smythe Ltd in 1983. The paperback edition was published by Corgi, an imprint of Transworld, in 1985. Pratchett's popularity increased when the BBC's "Woman's Hour" broadcast "The Colour of Magic" as a serial in six parts, and later "Equal Rites". Subsequently, the hardback rights were taken by the publishing house Victor Gollancz Ltd, which remained Pratchett's publisher until 1997, Colin Smythe having become Pratchett's agent. Pratchett was the first fantasy author published by Gollancz. Pratchett gave up working for the CEGB to make his living through writing in 1987, after finishing the fourth Discworld novel, "Mort". His sales increased quickly and many of his books occupied top places on the best-seller list; he was the UK's best-selling author of the 1990s. 
According to "The Times", Pratchett was the top-selling and highest earning UK author in 1996. Some of his books have been published by Doubleday, another Transworld imprint. In the US, Pratchett is published by HarperCollins. According to the "Bookseller's Pocket Yearbook" (2005), in 2003 Pratchett's UK sales amounted to 3.4% of the fiction market by hardback sales and 3.8% by value, putting him in second place behind J. K. Rowling (6% and 5.6%, respectively), while in the paperback sales list Pratchett came 5th with 1.2% and 1.3% by value (behind James Patterson (1.9% and 1.7%), Alexander McCall Smith, John Grisham, and J. R. R. Tolkien). His sales in the UK alone are more than 2.5 million copies a year. Pratchett married Lyn Purves at the Congregational Church, Gerrards Cross, on 5 October 1968, and they moved to Rowberrow, Somerset, in 1970. Their daughter Rhianna Pratchett, who is also a writer, was born there in 1976. In 1993, the family moved to Broad Chalke, a village west of Salisbury, Wiltshire. He listed his recreations as "writing, walking, computers, life". He described himself as a humanist and was a Distinguished Supporter of Humanists UK (formerly known as the British Humanist Association) and an Honorary Associate of the National Secular Society. He was the patron of the Friends of High Wycombe Library. In 2013 he gave a talk at Beaconsfield Library which he had visited as a child and donated the income from the event to it. On a number of occasions he also visited his former school to speak to the students and look around. Pratchett was well known for his penchant for wearing large, black fedora hats, as seen on the inside back covers of most of his books. His style has been described as "more that of urban cowboy than city gent." Concern for the future of civilisation prompted him to install five kilowatts of photovoltaic cells (for solar energy) at his house. 
Having been interested in astronomy since childhood, he had an observatory built in his garden. An asteroid (127005 Pratchett) is named after him. On 31 December 2008, it was announced that Pratchett would be knighted (as a Knight Bachelor) in the Queen's 2009 New Year Honours. He formally received the accolade at Buckingham Palace on 18 February 2009. Afterwards he said, "You can't ask a fantasy writer not to want a knighthood. You know, for two pins I'd get myself a horse and a sword." In late 2009, he made himself a sword, with help from friends. He told a "Times Higher Education" interviewer that "At the end of last year I made my own sword. I dug out the iron ore from a field about 10 miles away – I was helped by interested friends. We lugged 80 kilos of iron ore, used clay from the garden and straw to make a kiln, and lit the kiln with wildfire by making it with a bow." Colin Smythe, his longtime friend and agent, donated some pieces of meteoric iron: "'Thunderbolt iron' has a special place in magic and we put that in the smelt, and I remember when we sawed the iron apart it looked like silver. Everything about it I touched, handled and so forth ... And everything was as it should have been, it seemed to me." In 2013, Pratchett was named Humanist of the Year by the British Humanist Association (now known as Humanists UK) for his campaign to fund research into Alzheimer's, his contribution to the right-to-die public debate and his humanist values. In August 2007, Pratchett was misdiagnosed as having had a minor stroke a few years before, which doctors believed had damaged the right side of his brain. In December 2007, he announced that he had been newly diagnosed with early-onset Alzheimer's disease, which had been responsible for the "stroke". He had a rare form of posterior cortical atrophy (PCA), a disease in which areas at the back of the brain begin to shrink and shrivel.
Describing the diagnosis as an "embuggerance" in a radio interview, Pratchett appealed to people to "keep things cheerful" and proclaimed that "we are taking it fairly philosophically down here and possibly with a mild optimism." He stated he felt he had time for "at least a few more books yet", and added that while he understood the impulse to ask "is there anything I can do?", in this case he would only entertain such offers from "very high-end experts in brain chemistry." Discussing his diagnosis at the Bath Literature Festival in early 2008, Pratchett revealed that by then he found it too difficult to write dedications when signing books. In his later years Pratchett wrote by dictating to his assistant, Rob Wilkins, or by using speech recognition software. In March 2008, Pratchett announced he would donate US$1,000,000 (about £494,000) to the Alzheimer's Research Trust, and that he was shocked "to find out that funding for Alzheimer's research is just 3% of that to find cancer cures." He said: "I am, along with many others, scrabbling to stay ahead long enough to be there when the cure comes along." In April 2008, Pratchett worked with the BBC to make a two-part documentary series about his illness, "Terry Pratchett: Living With Alzheimer's". The first part was broadcast on BBC Two on 4 February 2009, drawing 2.6 million viewers and a 10.4% audience share. The second, broadcast on 11 February 2009, drew 1.72 million viewers and a 6.8% audience share. The documentary won a BAFTA award in the Factual Series category. Pratchett also made an appearance on "The One Show" on 15 May 2008, talking about his condition. He was the subject and interviewee of the 20 May 2008 edition of "On the Ropes" (Radio 4), discussing Alzheimer's and how it had affected his life. 
On 8 June 2008, news reports indicated that Pratchett had had an experience which he described thus: "It is just possible that once you have got past all the gods that we have created with big beards and many human traits, just beyond all that, on the other side of physics, there just may be the ordered structure from which everything flows" and "I don't actually believe in anyone who could have put that in my head". He went into further detail on "Front Row", in which he was asked if this was a shift in his beliefs: "A shift in me in the sense I heard my father talk to me when I was in the garden one day. But I'm absolutely certain that what I heard was my memories of my father. An engram, or something in my head ... This is not about God, but somewhere around there is where gods come from." On 26 November 2008, Pratchett met the Prime Minister, Gordon Brown, and asked for an increase in dementia research funding. Pratchett tested a prototype device intended to address his condition; its ability to alter the course of the illness was met with scepticism from Alzheimer's researchers. In an article published in mid-2009, Pratchett stated that he wished to die by assisted suicide (although he disliked that term) before his disease progressed to a critical point. He later said he felt "it should be possible for someone stricken with a serious and ultimately fatal illness to choose to die peacefully with medical help, rather than suffer." Pratchett was selected to give the 2010 BBC Richard Dimbleby Lecture, entitled "Shaking Hands With Death", broadcast on 1 February 2010. Pratchett introduced his lecture on the topic of assisted death, but the main text was read by his friend Tony Robinson because Pratchett's condition made it difficult for him to read. In June 2011 Pratchett presented a one-off BBC television documentary, "Terry Pratchett: Choosing to Die", about assisted suicide. It won the Best Documentary award at the Scottish BAFTAs in November 2011. 
In September 2012, Pratchett stated: "I have to tell you that I thought I'd be a lot worse than this by now, and so did my specialist." In the same interview, he stated that the cognitive part of his mind was "untouched" and his symptoms were physical (normal for PCA). However, in July 2014 he cancelled his appearance at the biennial International Discworld Convention, saying: "the Embuggerance is finally catching up with me, along with other age-related ailments". Pratchett died at his home on the morning of 12 March 2015 from Alzheimer's. "The Telegraph" reported an unidentified source as saying that despite his previous discussion of assisted suicide, his death had been natural. After Pratchett's death, his assistant, Rob Wilkins, posted a final message from the official Terry Pratchett Twitter account, written in small capitals as a reference to how the character of Death speaks in Pratchett's works. Public figures who paid tribute include British Prime Minister David Cameron, comedian Ricky Gervais, and authors Ursula K. Le Guin, Terry Brooks, Margaret Atwood, George R. R. Martin, and Neil Gaiman. Pratchett was memorialised in graffiti in East London. The video game companies Frontier Developments and Valve added elements to their games named after him. Users of the social news site Reddit organised a tribute by which an HTTP header, "X-Clacks-Overhead", was added to web sites' responses, a reference to the Discworld novel "Going Postal", in which "the clacks" (Discworld's equivalent to a telegraph) are programmed to repeat the name of their creator's deceased son; the sentiment in the novel is that no one is ever forgotten as long as their name is still spoken. A June 2015 web server survey reported that approximately 84,000 websites had been configured with the header. Pratchett's humanist funeral service was held on 25 March 2015. Pratchett started to use computers for writing as soon as they were available to him. 
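The tribute can be reproduced on any web stack that allows custom response headers. As an illustrative sketch (not part of the original campaign's code), the following Python WSGI middleware appends the header, "X-Clacks-Overhead: GNU Terry Pratchett", as used by the fan campaign, to every response; the `hello` app is a hypothetical demonstration application.

```python
def clacks_middleware(app):
    """Wrap a WSGI application so every response carries the tribute header."""
    def wrapped(environ, start_response):
        def start_with_clacks(status, headers, exc_info=None):
            # Append the campaign's header to whatever headers the app set.
            extended = list(headers) + [("X-Clacks-Overhead", "GNU Terry Pratchett")]
            return start_response(status, extended, exc_info)
        return app(environ, start_with_clacks)
    return wrapped

def hello(environ, start_response):
    # Hypothetical minimal WSGI app, used only to demonstrate the wrapper.
    start_response("200 OK", [("Content-Type", "text/plain")])
    return [b"hello"]

app = clacks_middleware(hello)
```

Because the middleware only extends the header list passed to `start_response`, it composes with any existing WSGI application without altering its status code or body.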
His first computer was a Sinclair ZX81; the first computer he used properly for writing was an Amstrad CPC 464, later replaced by a PC. Pratchett was one of the first authors to routinely use the Internet to communicate with fans, and was a contributor to the Usenet newsgroup alt.fan.pratchett from 1992. However, he did not consider the Internet a hobby, just another "thing to use". He had many computers in his house, with a bank of six monitors rigged up to ease writing. When he travelled, he always took a portable computer with him to write. His background as a journalist and press officer led him to be concerned by 1995 about the spread of fake news on the Internet. In an interview with Bill Gates, founder of Microsoft, he said "Let’s say I call myself the Institute for Something-or-other and I decide to promote a spurious treatise saying the Jews were entirely responsible for the second world war and the Holocaust didn’t happen, and it goes out there on the Internet and is available on the same terms as any piece of historical research which has undergone peer review and so on. There’s a kind of parity of esteem of information on the net. It’s all there: there’s no way of finding out whether this stuff has any bottom to it or whether someone has just made it up". Gates was optimistic and disagreed, saying that authorities on the Net would index and check facts and reputations in a much more sophisticated way than in print. But it was Pratchett who had "accurately predicted how the Internet would propagate and legitimise fake news". His experiments with computer upgrades are reflected in "Hex". Pratchett was also an avid video game player, and collaborated in the creation of a number of game adaptations of his books. He favoured games that are "intelligent and have some depth", citing "Half-Life 2" and fan missions from "Thief" as examples. 
The red army in "Interesting Times" alludes to the game "Lemmings", and when asked about this connection, Pratchett stated "Merely because the red army can fight, dig, march and climb and is controlled by little icons? Can't imagine how anyone thought that ... Not only did I wipe "Lemmings" from my hard disk, I overwrote it so I couldn't get it back." Additionally, he played "The Elder Scrolls IV: Oblivion", which he described as "wonderful", and used many of its non-combat-oriented, fan-made mods. Pratchett wrote dialogue for a mod for "Oblivion" which added a Nord companion named Vilja. He also worked on a similar mod for "The Elder Scrolls V: Skyrim" which featured Vilja's "great-great-granddaughter". He is said to have enjoyed playing the first "Tomb Raider" game. Pratchett had a fascination with natural history that he referred to many times, and he owned a greenhouse full of carnivorous plants. He described them in the biographical notes on the dust jackets of some of his books, and elsewhere, as "not as interesting as people think". By the time of "Carpe Jugulum", his account was that the plants had taken over the greenhouse and that he avoided going in. In 1995, a fossil sea-turtle from the Eocene epoch of New Zealand was named "Psephophorus terrypratchetti" in his honour by the palaeontologist Richard Köhler. In 2016, Pratchett fans petitioned the International Union of Pure and Applied Chemistry (IUPAC) to name chemical element 117, temporarily called "ununseptium", as "octarine" with the proposed symbol Oc (pronounced "ook"). The final name chosen for element 117 was "tennessine" with the symbol Ts. Pratchett was a trustee of the Orangutan Foundation but was pessimistic about the animal's future. His activities included visiting Borneo with a Channel 4 film crew to make an episode of "Jungle Quest" in 1995, seeing orangutans in their natural habitat. 
Following Pratchett's lead, fan events such as the Discworld Conventions have adopted Orangutan Foundation as their nominated charity, which has been acknowledged by the foundation. One of Pratchett's most popular fictional characters, the Librarian of the Unseen University's Library, is a wizard who was transformed into an orangutan in a magical accident and decides to remain in that condition as it is so convenient for his work. Pratchett had an observatory in his back garden and was a keen astronomer from childhood. He made an appearance on the BBC programme "The Sky at Night". Pratchett sponsored a biennial award for unpublished science fiction novelists, the Terry Pratchett First Novel Award. The prize is a publishing contract with his publishers Transworld. In 2011 the award was won jointly by David Logan for "Half Sick of Shadows" and Michael Logan for "Apocalypse Cow". In 2013 the award was won by Alexander Maskill for "The Hive". In 2015, Pratchett's estate announced an in-perpetuity endowment to the University of South Australia. The Sir Terry Pratchett Memorial Scholarship supports a Masters scholarship at the University's Hawke Research Institute. Pratchett received a knighthood for "services to literature" in the 2009 UK New Year Honours list. He was previously appointed Officer of the Order of the British Empire, also for "services to literature", in 1998. Following this, Pratchett commented in the Ansible SF/fan newsletter, "I suspect the 'services to literature' consisted of refraining from trying to write any," but added, "Still, I cannot help feeling mightily chuffed about it." Pratchett was the British Book Awards' 'Fantasy and Science Fiction Author of the Year' for 1994. Pratchett won the British Science Fiction Award in 1989 for his novel "Pyramids", and a Locus Award for Best Fantasy Novel in 2008 for "Making Money". 
Pratchett was awarded ten honorary doctorates: the University of Warwick in 1999, the University of Portsmouth in 2001, the University of Bath in 2003, the University of Bristol in 2004, Buckinghamshire New University in 2008, the University of Dublin in 2008, Bradford University in 2009, the University of Winchester in 2009, The Open University in 2013 for his contribution to Public Service, and his last, from the University of South Australia, in May 2014. Pratchett won the 2001 Carnegie Medal from the British librarians, recognising "The Amazing Maurice and His Educated Rodents" as the year's best children's book published in the UK. "Night Watch" won the 2003 Prometheus Award for best libertarian novel. In 2003, the BBC conducted The Big Read to identify the "Nation's Best-loved Novel" and finally published a ranked list of the "Top 200". Pratchett's highest-ranking novel was "Mort", at number 65, but he and Charles Dickens were the only authors with five novels in the Top 100 (four of Pratchett's were from the "Discworld" series). He also led all authors with fifteen novels in the Top 200. Three of the five "Discworld" novels that centre on the "trainee witch" Tiffany Aching won the annual Locus Award for Best Young Adult Book, in 2004, 2005 and 2007. In 2005, "Going Postal" was shortlisted for the Hugo Award for Best Novel; however, Pratchett recused himself, stating that stress over the award would mar his enjoyment of Worldcon. Pratchett received the NESFA Skylark Award in 2009 and the World Fantasy Award for Life Achievement in 2010. In 2011 he won the Margaret A. Edwards Award from the American Library Association, a lifetime honour for "significant and lasting contribution to young adult literature". The librarians cited nine Discworld novels published from 1983 to 2004 and observed that "Pratchett's tales of Discworld have won over generations of teen readers with intelligence, heart, and undeniable wit. 
Comic adventures that fondly mock the fantasy genre, the Discworld novels expose the hypocrisies of contemporary society in an intricate, ever-expanding universe. With satisfyingly multilayered plots, Pratchett's humor honors the intelligence of the reader. Teens eagerly lose themselves in a universe with no maps." He was made an adjunct Professor in the School of English at Trinity College Dublin in 2010, with a role in postgraduate education in creative writing and popular literature. "I Shall Wear Midnight" won the 2010 Andre Norton Award for Young Adult Science Fiction and Fantasy presented by the Science Fiction and Fantasy Writers of America (SFWA) as a part of the Nebula Award ceremony. In 2016, SFWA announced that Sir Terry would be the recipient of the Kate Wilhelm Solstice Award, presented at the 2016 SFWA Nebula Conference. Pratchett's "Discworld" novels have led to dedicated conventions, the first in Manchester in 1996, then worldwide, often with the author as guest of honour. Publication of a new novel was sometimes accompanied by an international book signing tour; queues were known to stretch outside the bookshop as the author continued to sign books well after the intended finishing time. His fans were not restricted by age or gender, and he received a large amount of fan mail from them. Pratchett enjoyed meeting fans and hearing what they thought about his books, saying that since he was well paid for his novels, his fans were "everything" to him. Pratchett said that to write, you must read extensively, both inside and outside your chosen genre and to the point of "overflow". He advised that writing is hard work, and that writers must "make grammar, punctuation and spelling a part of your life." However, Pratchett enjoyed writing, regarding its monetary rewards as "an unavoidable consequence", rather than the reason for writing. 
Although during his early career he wrote for the sci-fi and horror genres, Pratchett later focused almost entirely on fantasy, and said: "It is easier to bend the universe around the story." In the acceptance speech for his Carnegie Medal he said: "Fantasy isn't just about wizards and silly wands. It's about seeing the world from new directions", pointing to J. K. Rowling's "Harry Potter" novels and J. R. R. Tolkien's "The Lord of the Rings". In the same speech, he acknowledged benefits of these works for the genre. Pratchett believed he owed "a debt to the science fiction/fantasy genre which he grew up out of" and disliked the term "magical realism" which, he said, is "like a polite way of saying you write fantasy and is more acceptable to certain people ... who, on the whole, do not care that much." He expressed annoyance that fantasy is "unregarded as a literary form", arguing that it "is the oldest form of fiction"; he described himself as "infuriated" when novels containing science fiction or fantasy ideas were not regarded as part of those genres. He debated this issue with novelist A. S. Byatt and critic Terry Eagleton, arguing that fantasy is fundamental to the way we understand the world and therefore an integral aspect of all fiction. On 31 July 2005, Pratchett criticised media coverage of "Harry Potter" author J. K. Rowling, commenting that certain members of the media seemed to think that "the continued elevation of J. K. Rowling can be achieved only at the expense of other writers". Pratchett later denied claims that this was a swipe at Rowling, and said that he was not making claims of plagiarism, but was pointing out the "shared heritage" of the fantasy genre. Pratchett also posted on the "Harry Potter" newsgroup about a media-covered exchange of views with her. Pratchett was known for a distinctive writing style that included a number of characteristic hallmarks. 
One example is his use of footnotes, which usually involve a comic departure from the narrative or a commentary on the narrative, and occasionally have footnotes of their own. Pratchett's earliest Discworld novels were written largely to parody classic sword-and-sorcery fiction (and occasionally science-fiction); as the series progressed, Pratchett dispensed with parody almost entirely, and the Discworld series evolved into straightforward (though still comedic) satire. Pratchett had a tendency to avoid using chapters, arguing in a Book Sense interview that "life does not happen in regular chapters, nor do movies, and Homer did not write in chapters", adding "I'm blessed if I know what function they serve in books for adults." However, there were exceptions; "Going Postal" and "Making Money" and several of his books for younger readers are divided into chapters. Pratchett offered explanations for his sporadic use of chapters; in the young adult titles, he said that he must use chapters because '[his] editor screams until [he] does', but otherwise felt that they were an unnecessary 'stopping point' that got in the way of the narrative. Characters, place names, and titles in Pratchett's books often contain puns, allusions and cultural references. Some characters are parodies of well-known characters: for example, Pratchett's character Cohen the Barbarian, also called Ghengiz Cohen, is a parody of Conan the Barbarian and Genghis Khan, and his character Leonard of Quirm is a parody of Leonardo da Vinci. Another hallmark of his writing was the use of capitalised dialogue without quotation marks, used to indicate the character of Death communicating telepathically into a character's mind. 
Other characters or types of characters were given similarly distinctive ways of speaking, such as the auditors of reality not having quotation marks around the words they speak, Ankh-Morpork grocers never using punctuation correctly, and Golems Capitalising Each Word In Everything They Say. Also, common spelling mistakes were used to indicate a person's level of literacy. Pratchett made up a new colour, octarine, a 'fluorescent greenish-yellow-purple', which is the eighth colour in the "Discworld" spectrum—the colour of magic. Indeed, the number eight itself is regarded in the Discworld as being a magical number; for example, the eighth son of an eighth son will be a wizard, and his eighth son will be a "sourcerer", an extremely powerful user of magic with abilities far beyond what most wizards usually achieve (which is one reason why wizards are not allowed to have children). Discworld novels often included a modern innovation and its introduction to the world's medieval setting, such as a public police force ("Guards! Guards!"), guns ("Men at Arms"), submarines ("Jingo"), cinema ("Moving Pictures"), investigative journalism ("The Truth"), the postage stamp ("Going Postal"), modern banking ("Making Money"), and the steam engine ("Raising Steam"). The "clacks", the tower-to-tower semaphore system that sprang up in later novels, is a mechanical optical telegraph of the kind created by the Chappe brothers and employed during the French Revolution, predating wired electric telegraphy, with all the change and turmoil that such an advance implies. The resulting social upheaval driven by these changes serves as the setting for the main story. Pratchett made no secret of outside influences on his work: they were a major source of his humour. He imported numerous characters from classic literature, popular culture and ancient history, always adding an unexpected twist. 
Pratchett was a crime novel fan, which was reflected in frequent appearances of the Ankh-Morpork City Watch in the "Discworld" series. Pratchett was an only child, and his characters are often without siblings. Pratchett explained, "In fiction, only-children are the interesting ones". Pratchett's earliest inspirations were "The Wind in the Willows" by Kenneth Grahame, and the works of Isaac Asimov and Arthur C. Clarke. His literary influences were P.G. Wodehouse, Tom Sharpe, Jerome K. Jerome, Roy Lewis, Alan Coren, G. K. Chesterton, and Mark Twain. While Pratchett's UK publishing history remained quite stable, his relationships with international publishers were turbulent (especially in America). He changed German publishers after an advertisement for Maggi soup appeared in the middle of the German-language version of "Pyramids". Pratchett began writing the Discworld series in 1983 to "have fun with some of the cliches" and it is a humorous and often satirical sequence of stories set in the colourful fantasy Discworld universe. The series contains various story arcs (or sub-series), and a number of free-standing stories. All are set in an abundance of locations in the same detailed and unified world, such as the Unseen University and 'The Drum/Broken Drum/Mended Drum' public house in the twin city Ankh-Morpork, or places in the various continents, regions and countries on the Disc. Characters and locations reappear throughout the series, variously taking major and minor roles. The Discworld itself is described as a large disc resting on the backs of four giant elephants, all supported by the giant turtle Great A'Tuin as it swims its way through space. The books are essentially in chronological order, and advancements can be seen in the development of the Discworld civilisations, such as the creation of paper money in Ankh-Morpork. 
Many of the novels in Pratchett's Discworld series parody real-world subjects such as film making, newspaper publishing, rock and roll music, religion, philosophy, Ancient Greece, Egyptian history, the Gulf War, Australia, university politics, trade unions, and the financial world. Pratchett also wove parody into the stories themselves, taking in Ingmar Bergman films, numerous characters from fiction, science fiction and fantasy, and various bureaucratic and ruling systems. Pratchett wrote or collaborated on a number of Discworld books that are not novels in themselves but serve to accompany the series. "The Discworld Companion", written with Stephen Briggs, is an encyclopaedic guide to Discworld. The third edition was renamed "The New Discworld Companion", and was published in 2003. The fourth and most recent edition of the companion, "Turtle Recall", was published on 18 October 2012. Briggs also collaborated with Pratchett on a series of fictional Discworld "mapps". The first, "The Streets of Ankh-Morpork" (1993), was illustrated by Stephen Player. It was followed by "The Discworld Mapp" (1995), also illustrated by Player, which comprises a large, comprehensive map of the Discworld itself with a small booklet that contains short biographies of the Disc's prominent explorers and their discoveries. Two further "mapps" have been released, focusing on particular regions of the Disc: Lancre, and Death's Domain. Between 1997 and 2015, ten Discworld Diaries were published as collaborations with Briggs or the Discworld Emporium. Pratchett and Tina Hannan collaborated on "Nanny Ogg's Cookbook" (1999). The design of this cookbook, illustrated by Paul Kidby, was based on the traditional "Mrs Beeton's Book of Household Management", but with humorous recipes. Pratchett and Bernard Pearson collaborated on "The Discworld Almanak", for the Year of the Prawn, with illustration by Paul Kidby, Pearson and Sheila Watkins. 
Collections of Discworld-related art have also been released in book form. "The Pratchett Portfolio" (1996) and "The Art of Discworld" (2004) are collections of paintings of major Discworld characters by Paul Kidby, with details added by Pratchett on the characters' origins. Pratchett's first book for very young children, "Where's My Cow?", was published in 2005. Illustrated by Melvyn Grant, it is a realisation of the short story Sam Vimes reads to his child in "Thud!". "The Unseen University Cut Out Book", developed with Alan Bately and Bernard Pearson, was published in 2006. The book contains cut-out templates of seven of the major buildings in the Unseen University. Following on from the release of Sky's adaptation of "Hogfather", "Terry Pratchett's Hogfather, The Illustrated Screenplay" was released in 2006. It was written by Vadim Jean and "mucked about with by Terry Pratchett". It contains the final shooting script, pictures from the film and additional illustrations by Stephen Player. It was published by Gollancz. Pratchett and the Discworld Emporium published "The Compleat Ankh-Morpork City Guide" in 2012, which combined a trade directory, gazetteer, laws and ordinances together with a fully revised city map with artwork by Bernard Pearson, Ian Mitchell and Peter Dennis; it was followed by "The Compleat Discworld Atlas" (2015), a much enlarged and updated version of the original 'Mapp' and gazetteer, with information on all the countries of the Discworld. A number of publications have been released on the back of Pratchett's novels with the participation of the Discworld Emporium. Pratchett resisted mapping the Discworld for quite some time, noting that a firmly designed map restricts narrative possibility (i.e., with a map, fans would complain if he placed a building on the wrong street, but without one, he could adjust the geography to fit the story). 
Pratchett wrote four "Science of Discworld" books in collaboration with the professor of mathematics Ian Stewart and the reproductive biologist Jack Cohen, both of the University of Warwick: "The Science of Discworld" (1999), "The Science of Discworld II: The Globe" (2002), "The Science of Discworld III: Darwin's Watch" (2005), and "The Science of Discworld IV: Judgement Day" (2013). All four books have chapters that alternate between fiction and non-fiction: the fictional chapters are set within the Discworld universe, where characters observe, and experiment on, a universe with the same physics as ours. The non-fiction chapters (written by Stewart and Cohen) explain the science behind the fictional events. In 1999, Pratchett appointed both Cohen and Stewart as "Honorary Wizards of the Unseen University" at the same ceremony at which the University of Warwick awarded him an honorary degree. Pratchett collaborated with the folklorist Dr Jacqueline Simpson on "The Folklore of Discworld" (2008), a study of the relationship between many of the persons, places and events described in the Discworld books and their counterparts in myths, legends, fairy tales and folk customs on Earth. Pratchett's first two adult novels, "The Dark Side of the Sun" (1976) and "Strata" (1981), were both science fiction, the latter taking place partly on a disc-shaped world. Subsequent to these, Pratchett mostly concentrated on his "Discworld" series and novels for children, with two exceptions: "Good Omens" (1990), a collaboration with Neil Gaiman (which was nominated for both Locus and World Fantasy Awards in 1991), a humorous story about the Apocalypse set on Earth, and "Nation" (2008), a book for young adults. After writing "Good Omens", Pratchett began to work with Larry Niven on a book that would become "Rainbow Mars"; Niven eventually completed the book on his own, but states in the afterword that a number of Pratchett's ideas remained in the finished version. Pratchett also collaborated with British science fiction author Stephen Baxter on a parallel earth series. 
The first novel, entitled "The Long Earth", was released on 21 June 2012. A second novel, "The Long War", was released on 18 June 2013. "The Long Mars" was published in 2014. The fourth book in the series, "The Long Utopia", was published in June 2015, and the fifth, "The Long Cosmos", in June 2016. In 2012, the first volume of Pratchett's collected short fiction was published under the title "A Blink of the Screen". In 2014, a similar collection was published of Pratchett's non-fiction, entitled "A Slip of the Keyboard". Pratchett's first children's novel was also his first published novel: "The Carpet People" in 1971, which Pratchett substantially rewrote and re-released in 1992. The next, "Truckers" (1988), was the first in "The Nome Trilogy" of novels for young readers (also known as "The Bromeliad Trilogy"), about small gnome-like creatures called "Nomes", and the trilogy continued in "Diggers" (1990) and "Wings" (1990). Subsequently, Pratchett wrote the "Johnny Maxwell" trilogy, about the adventures of a boy called Johnny Maxwell and his friends, comprising "Only You Can Save Mankind" (1992), "Johnny and the Dead" (1993) and "Johnny and the Bomb" (1996). "Nation" (2008) marked his return to the non-Discworld children's novel, and this was followed in 2012 by "Dodger", a children's novel set in Victorian London. On 21 November 2013, Doubleday Children's released Pratchett's "Jack Dodger's Guide to London". In September 2014, an anthology of children's stories written by Pratchett and illustrated by Mark Beech, "Dragons at Crumbling Castle", was published. This was followed by another collection, "The Witch's Vacuum Cleaner", also illustrated by Mark Beech, in 2016. A third volume, "Father Christmas's Fake Beard", was released in 2017. According to Pratchett's assistant Rob Wilkins, Pratchett left "an awful lot" of unfinished writing, "10 titles I know of and fragments from many other bits and pieces." 
In the past, Pratchett himself mentioned at least two texts, "Scouting for Trolls", and a "Discworld" novel centering on a new character. The notes left behind outline ideas about "how the old folk of the Twilight Canyons solve the mystery of a missing treasure and defeat the rise of a Dark Lord despite their failing memories", "the secret of the crystal cave and the carnivorous plants in the Dark Incontinent", about Constable Feeney of the Watch, first introduced in "Snuff", involving how he "solves a whodunnit among the congenitally decent and honest goblins", and on a second book about Amazing Maurice from "The Amazing Maurice and His Educated Rodents". Pratchett's daughter is the current custodian of the Discworld franchise, and has stated on several occasions that she has no plans to publish any of her father's unfinished work, or to continue the Discworld on her own. Pratchett had told Neil Gaiman that anything that he had been working on at the time of his death should be put in the middle of a road and then destroyed by a steamroller crushing it. On 25 August 2017 Rob Wilkins, who manages the Pratchett estate, fulfilled this wish by destroying Terry Pratchett's computer hard-drive. He did this by running it over with a steamroller called "Lord Jericho" at the Great Dorset Steam Fair. Five graphic novels of Pratchett's work have been released. The first two, originally published in the US, were adaptations of "The Colour of Magic" and "The Light Fantastic" and illustrated by Steven Ross (with Joe Bennett on the latter). The second two, published in the UK, were adaptations of "Mort" (subtitled "A Discworld Big Comic") and "Guards! Guards!", both illustrated by Graham Higgins and adapted by Stephen Briggs. The graphic novels of "The Colour of Magic" and "The Light Fantastic" were republished by Doubleday on 2 June 2008. An adaption of "Small Gods" illustrated by Ray Friesen was published on 28 July 2016. 
Pratchett held back from "Discworld" feature films; though the rights to a number of his books have been sold, no films have yet been made, although various books have been adapted into feature-length television dramas (see Television section below). Pratchett had a number of radio adaptations on BBC Radio 4: "The Colour of Magic", "Equal Rites" (on "Woman's Hour"), "Only You Can Save Mankind", "Guards! Guards!", "Wyrd Sisters", "Mort", and "Small Gods" have all been dramatised as serials, as was "Night Watch" in early 2008, and "The Amazing Maurice and his Educated Rodents" as a 90-minute play. A four-part BBC Radio 4 adaptation of "Eric" by Robin Brooks started on 6 March 2013. "Guards! Guards!" was adapted as a one-hour audio drama by the Atlanta Radio Theatre Company and performed live at Dragon*Con in 2001. In 2014, a six-part adaptation of "Good Omens" aired on BBC Radio 4, and featured cameos by both Terry Pratchett and Neil Gaiman. "Truckers" was adapted as a stop motion animation series for Thames Television by Cosgrove Hall Films in 1992. "Johnny and the Dead" was made into a TV serial for Children's ITV on ITV in 1995. "Wyrd Sisters" and "Soul Music" were adapted as animated cartoon series by Cosgrove Hall for Channel 4 in 1996; illustrated screenplays of these were published in 1998 and 1997 respectively. In January 2006, BBC One aired a three-part adaptation of "Johnny and the Bomb". A two-part, feature-length version of "Hogfather" starring David Jason and the voice of Ian Richardson was first aired on Sky One in the United Kingdom in December 2006, and on ION Television in the US in 2007. Pratchett had previously opposed live-action films of "Discworld" because of his negative experiences with Hollywood film-makers, but changed his mind when he saw how enthusiastic and cooperative the director Vadim Jean and producer Rod Brown were. 
A two-part, feature-length adaptation of "The Colour of Magic" and its sequel "The Light Fantastic" aired during Easter 2008 on Sky One. A third adaptation, "Going Postal", was aired at the end of May 2010. The Sky adaptations notably feature the author in cameo roles. He is also credited as having "mucked about" with these adaptations. In 2012, Pratchett founded a television production company of his own, Narrativia, which holds the rights to his works and which is developing a television series, "The Watch", based on the Ankh-Morpork City Watch. In 2016, Neil Gaiman stated that Terry had given him his blessing to go forward with an adaptation of "Good Omens" if he so wished. The "Good Omens" miniseries was released as a six-part series on Amazon Prime in May 2019 and was broadcast on the BBC after its Amazon release. Twenty-one of Pratchett's novels have been adapted as plays by Stephen Briggs and published in book form. They were first produced by the Studio Theatre Club in Abingdon, Oxfordshire. They include adaptations of "The Truth", "Maskerade", "Mort", "Wyrd Sisters" and "Guards! Guards!" Stage adaptations of Discworld novels have been performed on every continent, including Antarctica. In addition, "Lords & Ladies" has been adapted for the stage by Irana Brown, and "Pyramids" was adapted for the stage by Suzi Holyoake in 1999 and had a week-long theatre run in the UK. In 2002, an adaptation of "Truckers" was produced as a co-production between Harrogate Theatre, the Belgrade Theatre Coventry and the Theatre Royal, Bury St Edmunds. It was adapted by Bob Eaton and directed by Rob Swain. The play toured many venues in the UK between 15 March and 29 June 2002. A version of "Eric" adapted for the stage by Scott Harrison and Lee Harris was produced and performed by The Dreaming Theatre Company in June/July 2003 inside Clifford's Tower, the 700-year-old castle keep in York. 
It was revived in 2004 in a tour of England along with Robert Rankin's "The Antipope". In 2004, a musical adaptation of "Only You Can Save Mankind" premiered at the Edinburgh Festival, with music by Leighton James House and book and lyrics by Shaun McKenna. In January 2009, the National Theatre announced that its annual winter family production in 2009 would be a theatrical adaptation of Pratchett's novel "Nation". The novel was adapted by playwright Mark Ravenhill and directed by Melly Still. The production premiered at the Olivier Theatre on 24 November, and ran until 28 March 2010. It was broadcast to cinemas around the world on 30 January 2010. Pratchett worked with Youth Music Theatre UK to bring adaptations of both "Mort" and "Soul Music" to the stage. In August 2014, an adaptation of "Soul Music" was performed at the Rose Theatre, Kingston. "GURPS Discworld" (Steve Jackson Games, 1998) and "GURPS Discworld Also" (Steve Jackson Games, 2001) are role-playing source books written by Terry Pratchett and Phil Masters, which offer insights into the workings of the Discworld. The first of these two books was re-released in September 2002 under the name "The Discworld Roleplaying Game", with art by Paul Kidby. The Discworld universe has also been used as the basis for a number of video games on a range of formats, such as the Sega Saturn, the Sony PlayStation, the Philips CD-i, and the 3DO, as well as DOS and Windows-based PCs; so far, five notable games relating to Discworld have been published. A collection of essays about his writings is compiled in the book "Terry Pratchett: Guilty of Literature", edited by Andrew M. Butler, Edward James and Farah Mendlesohn, published by Science Fiction Foundation in 2000. A second, expanded edition was published by Old Earth Books in 2004. Andrew M. Butler wrote the "Pocket Essentials Guide to Terry Pratchett", published in 2001. 
"Writers Uncovered: Terry Pratchett" is a biography for young readers by Vic Parker, published by Heinemann Library in 2006. "Terry Pratchett: Back In Black", a BBC docudrama based on Pratchett's life, was broadcast in February 2017 and starred Paul Kaye as Pratchett. Neil Gaiman was involved with the project, which used Pratchett's own words. Pratchett's long-term assistant Rob Wilkins stated that Terry was working on this documentary before he died, and according to the BBC, finishing it would "show the author was still having the last laugh".
https://en.wikipedia.org/wiki?curid=30029
Treaty of Versailles The Treaty of Versailles was the most important of the peace treaties that brought World War I to an end. The Treaty ended the state of war between Germany and the Allied Powers. It was signed on 28 June 1919 in Versailles, exactly five years after the assassination of Archduke Franz Ferdinand, which had directly led to the war. The other Central Powers on the German side signed separate treaties. Although the armistice, signed on 11 November 1918, ended the actual fighting, it took six months of Allied negotiations at the Paris Peace Conference to conclude the peace treaty. The treaty was registered by the Secretariat of the League of Nations on 21 October 1919. Of the many provisions in the treaty, one of the most important and controversial required "Germany [to] accept the responsibility of Germany and her allies for causing all the loss and damage" during the war (the other members of the Central Powers signed treaties containing similar articles). This article, Article 231, later became known as the War Guilt clause. The treaty required Germany to disarm, make ample territorial concessions, and pay reparations to certain countries that had formed the Entente powers. In 1921 the total cost of these reparations was assessed at 132 billion marks (then $31.4 billion or £6.6 billion, roughly equivalent to US$442 billion or UK£284 billion in 2020). At the time economists, notably John Maynard Keynes (a British delegate to the Paris Peace Conference), predicted that the treaty was too harsh—a "Carthaginian peace"—and said the reparations figure was excessive and counter-productive, views that, since then, have been the subject of ongoing debate by historians and economists. On the other hand, prominent figures on the Allied side, such as French Marshal Ferdinand Foch, criticized the treaty for treating Germany too leniently. 
The result of these competing and sometimes conflicting goals among the victors was a compromise that left no one satisfied, and, in particular, Germany was neither pacified nor conciliated, nor was it permanently weakened. The problems that arose from the treaty would lead to the Locarno Treaties, which improved relations between Germany and the other European powers, and the re-negotiation of the reparation system resulting in the Dawes Plan, the Young Plan, and the indefinite postponement of reparations at the Lausanne Conference of 1932. The treaty has sometimes been cited as a cause of World War II: although its actual impact was not as severe as feared, its terms led to great resentment in Germany, which powered the rise of the Nazi Party. Although it is often referred to as the "Versailles Conference", only the actual signing of the treaty took place at the historic palace. Most of the negotiations were in Paris, with the "Big Four" meetings taking place generally at the French Ministry of Foreign Affairs on the Quai d'Orsay. On 28 June 1914, the heir to the throne of Austria-Hungary, Archduke Franz Ferdinand of Austria, was assassinated by a Serbian nationalist. This caused a rapidly escalating July Crisis, resulting in Austria-Hungary declaring war on Serbia, followed quickly by the entry of most European powers into the First World War. Two alliances faced off, the Central Powers (led by Germany) and the Triple Entente (led by Britain, France and Russia). Other countries entered as fighting raged widely across Europe, as well as the Middle East, Africa and Asia. In 1917, two revolutions occurred within the Russian Empire. In March 1918, the new Bolshevik government under Vladimir Lenin signed the Treaty of Brest-Litovsk, which was highly favourable to Germany. Sensing victory before American armies could be ready, Germany now shifted forces to the Western Front and tried to overwhelm the Allies. It failed. 
Instead the Allies won decisively on the battlefield and forced an armistice in November 1918 that resembled a surrender. On 6 April 1917, the United States entered the war against the Central Powers. The motives were twofold: German submarine warfare against merchant ships trading with France and Britain, which led to the sinking of the RMS "Lusitania" and the loss of 128 American lives; and the interception of the German Zimmermann Telegram, urging Mexico to declare war against the United States. The American war aim was to detach the war from nationalistic disputes and ambitions after the Bolshevik disclosure of secret treaties between the Allies. The existence of these treaties tended to discredit Allied claims that Germany was the sole power with aggressive ambitions. On 8 January 1918, President Woodrow Wilson issued the nation's postwar goals, the Fourteen Points. It outlined a policy of free trade, open agreements, and democracy. While the term itself was not used, self-determination was assumed. It called for a negotiated end to the war, international disarmament, the withdrawal of the Central Powers from occupied territories, the creation of a Polish state, the redrawing of Europe's borders along ethnic lines, and the formation of a League of Nations to guarantee the political independence and territorial integrity of all states. It called for a just and democratic peace uncompromised by territorial annexation. The Fourteen Points were based on the research of the Inquiry, a team of about 150 advisors led by foreign-policy advisor Edward M. House, into the topics likely to arise in the expected peace conference. After the Central Powers launched Operation Faustschlag on the Eastern Front, the new Soviet Government of Russia signed the Treaty of Brest-Litovsk with Germany on 3 March 1918. This treaty ended the war between Russia and the Central Powers and ceded extensive territory and 62 million people to German control. 
This loss equated to a third of the Russian population, a quarter of its territory, around a third of the country's arable land, three-quarters of its coal and iron, a third of its factories (totalling 54 percent of the nation's industrial capacity), and a quarter of its railroads. During the autumn of 1918, the Central Powers began to collapse. Desertion rates within the German army began to increase, and civilian strikes drastically reduced war production. On the Western Front, the Allied forces launched the Hundred Days Offensive and decisively defeated the German western armies. Sailors of the Imperial German Navy at Kiel mutinied, prompting uprisings across Germany that became known as the German Revolution. The German government tried to obtain a peace settlement based on the Fourteen Points, and maintained it was on this basis that it surrendered. Following negotiations, the Allied powers and Germany signed an armistice, which came into effect on 11 November while German forces were still positioned in France and Belgium. The terms of the armistice called for an immediate evacuation of German troops from occupied Belgium, France, and Luxembourg within fifteen days. In addition, it established that Allied forces would occupy the Rhineland. In late 1918, Allied troops entered Germany and began the occupation. Both Germany and Great Britain were dependent on imports of food and raw materials, most of which had to be shipped across the Atlantic Ocean. The Blockade of Germany (1914–1919) was a naval operation conducted by the Allied Powers to stop the supply of raw materials and foodstuffs reaching the Central Powers. The German "Kaiserliche Marine" was mainly restricted to the German Bight and used commerce raiders and unrestricted submarine warfare for a counter-blockade. 
The German Board of Public Health stated in December 1918 that 763,000 civilians had died during the Allied blockade, although an academic study in 1928 put the death toll at 424,000. Talks between the Allies to establish a common negotiating position started on 18 January 1919, in the "Salle de l'Horloge" at the French Foreign Ministry on the Quai d'Orsay in Paris. Initially, 70 delegates from 27 nations participated in the negotiations. Russia was excluded due to its signing of a separate peace (the Treaty of Brest-Litovsk) and early withdrawal from the war. Furthermore, German negotiators were excluded to deny them an opportunity to divide the Allies diplomatically. Initially, a "Council of Ten" (comprising two delegates each from Britain, France, the United States, Italy, and Japan) met officially to decide the peace terms. This council was replaced by the "Council of Five", formed from each country's foreign ministers, to discuss minor matters. French Prime Minister Georges Clemenceau, Italian Prime Minister Vittorio Emanuele Orlando, British Prime Minister David Lloyd George, and United States President Woodrow Wilson formed the "Big Four" (at one point becoming the "Big Three" following the temporary withdrawal of Vittorio Emanuele Orlando). These four men met in 145 closed sessions to make all the major decisions, which were later ratified by the entire assembly. The minor powers attended a weekly "Plenary Conference" that discussed issues in a general forum but made no decisions. These members formed over 50 commissions that made various recommendations, many of which were incorporated into the final text of the treaty. 
France had lost some 1.3 million soldiers, including a quarter of French men aged 18 to 30, and had been more physically damaged than any other nation: in the so-called "zone rouge" (Red Zone), the most industrialized region and the source of most coal and iron ore in the north-east had been devastated, and in the final days of the war mines had been flooded and railways, bridges and factories destroyed. Clemenceau intended to ensure the security of France by weakening Germany economically, militarily and territorially, and by supplanting Germany as the leading producer of steel in Europe. British economist and Versailles negotiator John Maynard Keynes summarized this position as attempting to "set the clock back and undo what, since 1870, the progress of Germany had accomplished." Clemenceau told Wilson: "America is far away, protected by the ocean. Not even Napoleon himself could touch England. You are both sheltered; we are not". The French wanted a frontier on the Rhine, to protect France from a German invasion and compensate for French demographic and economic inferiority. American and British representatives refused the French claim and, after two months of negotiations, the French accepted a British pledge to provide an immediate alliance with France if Germany attacked again, and Wilson agreed to put a similar proposal to the Senate. Clemenceau had told the Chamber of Deputies, in December 1918, that his goal was to maintain an alliance with both countries. Clemenceau accepted the offer, in return for an occupation of the Rhineland for fifteen years and a German commitment to demilitarise the Rhineland. French negotiators required reparations, to make Germany pay for the destruction induced throughout the war and to decrease German strength. The French also wanted the iron ore and coal of the Saar Valley, by annexation to France. 
The French were willing to accept a smaller amount of reparations than the Americans would concede, and Clemenceau was willing to discuss German capacity to pay with the German delegation before the final settlement was drafted. In April and May 1919, the French and Germans held separate talks on mutually acceptable arrangements on issues like reparation, reconstruction and industrial collaboration. France, along with the British Dominions and Belgium, opposed mandates and favored annexation of former German colonies. Britain had suffered heavy financial costs but little physical devastation during the war. However, the British wartime coalition was re-elected during the so-called Coupon election at the end of 1918, with a policy of squeezing Germany "'til the pips squeak". Public opinion favoured a "just peace", which would force Germany to pay reparations and be unable to repeat the aggression of 1914, although those of a "liberal and advanced opinion" shared Wilson's ideal of a peace of reconciliation. In private, Lloyd George opposed revenge and attempted to compromise between Clemenceau's demands and the Fourteen Points, because Europe would eventually have to reconcile with Germany. Lloyd George wanted terms of reparation that would not cripple the German economy, so that Germany would remain a viable economic power and trading partner. By arguing that British war pensions and widows' allowances should be included in the German reparation sum, Lloyd George ensured that a large amount would go to the British Empire. Lloyd George also intended to maintain a European balance of power to thwart a French attempt to establish itself as the dominant European power. A revived Germany would be a counterweight to France and a deterrent to Bolshevik Russia. 
Lloyd George also wanted to neutralize the German navy to keep the Royal Navy the greatest naval power in the world, and to dismantle the German colonial empire, with several of its territorial possessions ceded to Britain and others established as League of Nations mandates, a position opposed by the Dominions. Prior to the American entry into the war, Wilson had talked of a 'peace without victory'. This position fluctuated following the US entry into the war. Wilson spoke of the German aggressors, with whom there could be no compromised peace. However, on 8 January 1918, Wilson delivered a speech (known as the Fourteen Points) that declared the American peace objectives: the rebuilding of the European economy, self-determination of European and Middle Eastern ethnic groups, the promotion of free trade, the creation of appropriate mandates for former colonies, and above all, the creation of a powerful League of Nations that would ensure the peace. The aim of the latter was to provide a forum to revise the peace treaties as needed, and deal with problems that arose as a result of the peace and the rise of new states. Wilson brought along top intellectuals as advisors to the American peace delegation, and the overall American position echoed the Fourteen Points. Wilson firmly opposed harsh treatment of Germany. While the British and French wanted largely to annex the German colonial empire, Wilson saw that as a violation of the fundamental principles of justice and human rights of the native populations, and favored them having the right of self-determination via the creation of mandates. The promoted idea called for the major powers to act as disinterested trustees over a region, aiding the native populations until they could govern themselves. 
In spite of this position, and in order to ensure that Japan did not refuse to join the League of Nations, Wilson favored turning over the former German colony of Shandong, in Eastern China, to Japan rather than returning the area to Chinese control. Further confounding the Americans was US internal partisan politics. In November 1918, the Republican Party won the Senate election by a slim margin. Wilson, a Democrat, refused to include prominent Republicans in the American delegation, which made his efforts seem partisan and contributed to a risk of political defeat at home. Vittorio Emanuele Orlando and his foreign minister Sidney Sonnino, an Anglican of British origins, worked primarily to secure the partition of the Habsburg Empire, and their attitude towards Germany was not as hostile. Generally speaking, Sonnino was in line with the British position while Orlando favored a compromise between Clemenceau and Wilson. Within the negotiations for the Treaty of Versailles, Orlando obtained certain results, such as the permanent membership of Italy in the security council of the League of Nations and a promised transfer of British Jubaland and the French Aozou strip to the Italian colonies of Somalia and Libya respectively. Italian nationalists, however, saw World War I as a mutilated victory for what they considered to be the little territorial gain achieved in the other treaties directly impacting Italy's borders. Orlando was ultimately forced to abandon the conference and resign. Orlando refused to see the war as a mutilated victory, replying to nationalists calling for a greater expansion that "Italy today is a great state...on par with the great historic and contemporary states. This is, for me, our main and principal expansion." Francesco Saverio Nitti took Orlando's place in signing the Treaty of Versailles. In June 1919, the Allies declared that war would resume if the German government did not sign the treaty they had agreed to among themselves. 
The government headed by Philipp Scheidemann was unable to agree on a common position, and Scheidemann himself resigned rather than agree to sign the treaty. Gustav Bauer, the head of the new government, sent a telegram stating his intention to sign the treaty if certain articles were withdrawn, including Articles 227, 230 and 231. In response, the Allies issued an ultimatum stating that Germany would have to accept the treaty or face an invasion of Allied forces across the Rhine within 24 hours. On 23 June, Bauer capitulated and sent a second telegram with a confirmation that a German delegation would arrive shortly to sign the treaty. On 28 June 1919, the fifth anniversary of the assassination of Archduke Franz Ferdinand (the immediate impetus for the war), the peace treaty was signed. The treaty had clauses ranging from war crimes, the prohibition on the merging of the Republic of German Austria with Germany without the consent of the League of Nations, and freedom of navigation on major European rivers, to the returning of a Koran to the king of Hedjaz. The treaty stripped Germany of 65,000 km2 (25,000 sq mi) of territory and 7 million people. It also required Germany to give up the gains made via the Treaty of Brest-Litovsk and grant independence to the protectorates that had been established. In Western Europe Germany was required to recognize Belgian sovereignty over Moresnet and cede control of the Eupen-Malmedy area. Within six months of the transfer, Belgium was required to conduct a plebiscite on whether the citizens of the region wanted to remain under Belgian sovereignty or return to German control, communicate the results to the League of Nations and abide by the League's decision. To compensate for the destruction of French coal mines, Germany was to cede the output of the Saar coalmines to France and control of the Saar to the League of Nations for 15 years; a plebiscite would then be held to decide sovereignty. 
The treaty restored the provinces of Alsace-Lorraine to France by rescinding the treaties of Versailles and Frankfurt of 1871 as they pertained to this issue. France was able to make the claim that the provinces of Alsace-Lorraine were indeed part of France and not part of Germany by disclosing a letter, provided by Empress Eugénie, that the Prussian King had sent her, in which William I wrote that the territories of Alsace-Lorraine were requested by Germany for the sole purpose of national defense and not to expand German territory. The sovereignty of Schleswig-Holstein was to be resolved by a plebiscite to be held at a future time (see Schleswig Plebiscites). In Central Europe Germany was to recognize the independence of Czechoslovakia (which had actually been controlled by Austria) and cede parts of the province of Upper Silesia. Germany had to recognize the independence of Poland and renounce "all rights and title over the territory". Portions of Upper Silesia were to be ceded to Poland, with the future of the rest of the province to be decided by plebiscite. The border would be fixed with regard to the vote and to the geographical and economic conditions of each locality. The province of Posen (now Poznań), which had come under Polish control during the Greater Poland Uprising, was also to be ceded to Poland. Pomerelia (Eastern Pomerania), on historical and ethnic grounds, was transferred to Poland so that the new state could have access to the sea, and became known as the Polish Corridor. The sovereignty of part of southern East Prussia was to be decided via plebiscite, while the East Prussian Soldau area, which was astride the rail line between Warsaw and Danzig, was transferred to Poland outright without a plebiscite. A substantial area was thus granted to Poland at the expense of Germany. Memel was to be ceded to the Allied and Associated powers, for disposal according to their wishes. 
Germany was to cede the city of Danzig and its hinterland, including the delta of the Vistula River on the Baltic Sea, for the League of Nations to establish the Free City of Danzig. Article 119 of the treaty required Germany to renounce sovereignty over former colonies, and Article 22 converted the territories into League of Nations mandates under the control of Allied states. Togoland and German Kamerun (Cameroon) were transferred to France. Ruanda and Urundi were allocated to Belgium, whereas German South-West Africa went to South Africa and Britain obtained German East Africa. As compensation for the German invasion of Portuguese Africa, Portugal was granted the Kionga Triangle, a sliver of German East Africa in northern Mozambique. Article 156 of the treaty transferred German concessions in Shandong, China, to Japan, not to China. Japan was granted all German possessions in the Pacific north of the equator, and those south of the equator went to Australia, except for German Samoa, which was taken by New Zealand. The treaty was comprehensive and complex in the restrictions imposed upon the post-war German armed forces (the "Reichswehr"). The provisions were intended to make the "Reichswehr" incapable of offensive action and to encourage international disarmament. Germany was to demobilize sufficient soldiers by 31 March 1920 to leave an army of no more than 100,000 men in a maximum of seven infantry and three cavalry divisions. The treaty laid down the organisation of the divisions and support units, and the General Staff was to be dissolved. Military schools for officer training were limited to three, one school per arm, and conscription was abolished. Private soldiers and non-commissioned officers were to be retained for at least twelve years and officers for a minimum of 25 years, with former officers being forbidden to attend military exercises. To prevent Germany from building up a large cadre of trained men, the number of men allowed to leave early was limited. 
The number of civilian staff supporting the army was reduced and the police force was reduced to its pre-war size, with increases limited to population increases; paramilitary forces were forbidden. The Rhineland was to be demilitarized, all fortifications in the Rhineland and east of the river were to be demolished and new construction was forbidden. Military structures and fortifications on the islands of Heligoland and Düne were to be destroyed. Germany was prohibited from the arms trade, limits were imposed on the type and quantity of weapons, and the manufacture or stockpiling of chemical weapons, armoured cars, tanks and military aircraft was prohibited. The German navy was allowed six pre-dreadnought battleships and was limited to a maximum of six light cruisers (not exceeding 6,000 long tons), twelve destroyers (not exceeding 800 long tons) and twelve torpedo boats (not exceeding 200 long tons), and was forbidden to possess submarines. The manpower of the navy was not to exceed 15,000 men, including manning for the fleet, coast defences, signal stations, administration, other land services, and officers and men of all grades and corps. The number of officers and warrant officers was not allowed to exceed 1,500. Germany surrendered eight battleships, eight light cruisers, forty-two destroyers, and fifty torpedo boats for decommissioning. Thirty-two auxiliary ships were to be disarmed and converted to merchant use. Article 198 prohibited Germany from having an air force, including naval air forces, and required Germany to hand over all aerial-related materials. In conjunction, Germany was forbidden to manufacture or import aircraft or related material for a period of six months following the signing of the treaty. In Article 231 Germany accepted responsibility for the losses and damages caused by the war "as a consequence of the ... aggression of Germany and her allies." 
The treaty required Germany to compensate the Allied powers, and it also established an Allied "Reparation Commission" to determine the exact amount which Germany would pay and the form that such payment would take. The commission was required to "give to the German Government a just opportunity to be heard", and to submit its conclusions by 1 May 1921. In the interim, the treaty required Germany to pay an equivalent of 20 billion gold marks ($5 billion) in gold, commodities, ships, securities or other forms. The money would help to pay for Allied occupation costs and buy food and raw materials for Germany. To ensure compliance, the Rhineland and bridgeheads east of the Rhine were to be occupied by Allied troops for fifteen years. If Germany had not committed aggression, a staged withdrawal would take place; after five years, the Cologne bridgehead and the territory north of a line along the Ruhr would be evacuated. After ten years, the bridgehead at Coblenz and the territories to the north would be evacuated, and after fifteen years the remaining Allied forces would be withdrawn. If Germany reneged on the treaty obligations, the bridgeheads would be reoccupied immediately. Part I of the treaty, as per all the treaties signed during the Paris Peace Conference, was the Covenant of the League of Nations, which provided for the creation of the League, an organization for the arbitration of international disputes. 
Part XIII organized the establishment of the International Labour Office, to regulate hours of work, including a maximum working day and week; the regulation of the labour supply; the prevention of unemployment; the provision of a living wage; the protection of the worker against sickness, disease and injury arising out of his employment; the protection of children, young persons and women; provision for old age and injury; protection of the interests of workers when employed abroad; recognition of the principle of freedom of association; the organization of vocational and technical education; and other measures. The treaty also called for the signatories to sign or ratify the International Opium Convention. The delegates of the Commonwealth and the British Government had mixed thoughts on the treaty, with some seeing the French policy as greedy and vindictive. Lloyd George and his private secretary Philip Kerr believed in the treaty, although they also felt that the French would keep Europe in a constant state of turmoil by attempting to enforce it. Delegate Harold Nicolson wrote "are we making a good peace?", while General Jan Smuts (a member of the South African delegation) wrote to Lloyd George, before the signing, that the treaty was unstable and declared "Are we in our sober senses or suffering from shellshock? What has become of Wilson's 14 points?" He wanted the Germans not to be made to sign at the "point of the bayonet". Smuts issued a statement condemning the treaty and regretting that the promises of "a new international order and a fairer, better world are not written in this treaty". Lord Robert Cecil said that many within the Foreign Office were disappointed by the treaty. The treaty received widespread approval from the general public. Bernadotte Schmitt wrote that the "average Englishman ... thought Germany got only what it deserved" as a result of the treaty. However, public opinion changed as German complaints mounted. 
Former Prime Minister Ramsay MacDonald, following the German re-militarisation of the Rhineland in 1936, stated that he was "pleased" that the treaty was "vanishing", expressing his hope that the French had been taught a "severe lesson". The Treaty of Versailles was an important step in the status of the British Dominions under international law. Australia, Canada, New Zealand and South Africa had each made significant contributions to the British war effort, but as separate countries, rather than as British colonies. India also made a substantial troop contribution, although under direct British control, unlike the Dominions. The four Dominions and India all signed the treaty separately from Britain, a clear recognition by the international community that the Dominions were no longer British colonies. "Their status defied exact analysis by both international and constitutional lawyers, but it was clear that they were no longer regarded simply as colonies of Britain." By signing the treaty individually, the four Dominions and India also became founding members of the League of Nations in their own right, rather than simply as part of the British Empire. The signing of the treaty was met with roars of approval, singing, and dancing from a crowd outside the Palace of Versailles. In Paris proper, people rejoiced at the official end of the war, the return of Alsace and Lorraine to France, and that Germany had agreed to pay reparations. While France ratified the treaty and was active in the League, the jubilant mood soon gave way to a political backlash for Clemenceau. The French Right saw the treaty as too lenient and as failing to achieve all of France's demands. Left-wing politicians attacked the treaty and Clemenceau for being too harsh (condemnation of the treaty became a ritual for politicians remarking on French foreign affairs, as late as August 1939). Marshal Ferdinand Foch stated "this (treaty) is not peace. 
It is an armistice for twenty years." His criticism reflected the failure to annex the Rhineland and the compromising of French security for the benefit of the United States and Britain. When Clemenceau stood for election as President of France in January 1920, he was defeated. Reaction in Italy to the treaty was extremely negative. The country had suffered high casualties, yet failed to achieve most of its major war goals, notably gaining control of the Dalmatian coast and Fiume. President Wilson rejected Italy's claims on the basis of "national self-determination". For their part, Britain and France—who had been forced in the war's latter stages to divert their own troops to the Italian front to stave off collapse—were disinclined to support Italy's position at the peace conference. Differences in negotiating strategy between Premier Vittorio Orlando and Foreign Minister Sidney Sonnino further undermined Italy's position at the conference. A furious Orlando suffered a nervous collapse and at one point walked out of the conference (though he later returned). He lost his position as prime minister just a week before the treaty was scheduled to be signed, effectively ending his active political career. Anger and dismay over the treaty's provisions helped pave the way for the establishment of Benito Mussolini's dictatorship three years later. Portugal entered the war on the Allied side in 1916 primarily to ensure the security of its African colonies, which were threatened with seizure by both Britain and Germany. To this extent, she succeeded in her war aims. The treaty recognized Portuguese sovereignty over these areas and awarded her small portions of Germany's bordering overseas colonies. Otherwise, Portugal gained little at the peace conference. Her promised share of German reparations never materialized, and a seat she coveted on the executive council of the new League of Nations went instead to Spain—which had remained neutral in the war. 
In the end, Portugal ratified the treaty, but got little out of the war, which cost more than 8,000 Portuguese troops and as many as 100,000 of her African colonial subjects their lives. After the Versailles conference, Democratic President Woodrow Wilson claimed that "at last the world knows America as the savior of the world!" However, the Republican Party, led by Henry Cabot Lodge, controlled the US Senate after the election of 1918, and the senators were divided into multiple positions on the Versailles question. It proved possible to build a majority coalition, but impossible to build the two-thirds coalition needed to pass a treaty. A discontented bloc of 12–18 "Irreconcilables", mostly Republicans but also representatives of the Irish and German Democrats, fiercely opposed the treaty. One bloc of Democrats strongly supported the Versailles Treaty, even with reservations added by Lodge. A second group of Democrats supported the treaty but followed Wilson in opposing any amendments or reservations. The largest bloc, led by Senator Lodge, comprised a majority of the Republicans. They wanted a treaty with reservations, especially on Article 10, which involved the power of the League of Nations to make war without a vote by the US Congress. All of the Irreconcilables were bitter enemies of President Wilson, and he launched a nationwide speaking tour in the summer of 1919 to refute them. However, Wilson collapsed midway through the tour with a serious stroke that effectively ended his ability to lead the fight for ratification. The closest the treaty came to passage was on 19 November 1919, when Lodge and his Republicans formed a coalition with the pro-treaty Democrats and came close to a two-thirds majority for a treaty with reservations; but Wilson rejected this compromise, and enough Democrats followed his lead to end the chances of ratification permanently. 
Among the American public as a whole, the Irish Catholics and the German Americans were intensely opposed to the treaty, saying it favored the British. After Wilson's presidency, his successor Republican President Warren G. Harding continued American opposition to the formation of the League of Nations. Congress subsequently passed the Knox–Porter Resolution bringing a formal end to hostilities between the United States and the Central Powers. It was signed into law by President Harding on 2 July 1921. Soon after, the US–German Peace Treaty of 1921 was signed in Berlin on 25 August 1921, and two similar treaties were signed with Austria and Hungary on 24 and 29 August 1921, in Vienna and Budapest respectively. Wilson's former friend Edward Mandell House, present at the negotiations, wrote in his diary on 29 June 1919: I am leaving Paris, after eight fateful months, with conflicting emotions. Looking at the conference in retrospect, there is much to approve and yet much to regret. It is easy to say what should have been done, but more difficult to have found a way of doing it. To those who are saying that the treaty is bad and should never have been made and that it will involve Europe in infinite difficulties in its enforcement, I feel like admitting it. But I would also say in reply that empires cannot be shattered, and new states raised upon their ruins without disturbance. To create new boundaries is to create new troubles. The one follows the other. While I should have preferred a different peace, I doubt very much whether it could have been made, for the ingredients required for such a peace as I would have were lacking at Paris. Many in China felt betrayed as the German territory in China was handed to Japan. Wellington Koo refused to sign the treaty, and China was the only nation at the Paris Peace Conference that did not sign the Treaty of Versailles at the signing ceremony. 
The sense of betrayal led to great demonstrations in China, such as the May Fourth Movement. There was immense dissatisfaction with Duan Qirui's government, which had secretly negotiated with the Japanese in order to secure loans to fund its military campaigns against the south. On 12 June 1919, the Chinese cabinet was forced to resign and the government instructed its delegation at Versailles not to sign the treaty. As a result, relations with the West deteriorated. On 29 April, the German delegation under the leadership of the Foreign Minister Ulrich Graf von Brockdorff-Rantzau arrived in Versailles. On 7 May, when faced with the conditions dictated by the victors, including the so-called "War Guilt Clause", von Brockdorff-Rantzau replied to Clemenceau, Wilson and Lloyd George: "We know the full brunt of hate that confronts us here. You demand from us to confess we were the only guilty party of war; such a confession in my mouth would be a lie." Because Germany was not allowed to take part in the negotiations, the German government issued a protest against what it considered to be unfair demands, and a "violation of honour", soon afterwards withdrawing from the proceedings of the peace conference. Germans of all political shades denounced the treaty—particularly the provision that blamed Germany for starting the war—as an insult to the nation's honor. They referred to the treaty as "the Diktat", since its terms were presented to Germany on a take-it-or-leave-it basis. Germany's first democratically elected head of government, Philipp Scheidemann, resigned rather than sign the treaty. In a passionate speech before the National Assembly on 12 May 1919, he denounced the treaty as a "murderous plan". After Scheidemann's resignation, a new coalition government was formed under Gustav Bauer. President Friedrich Ebert knew that Germany was in an impossible situation. 
Although he shared his countrymen's disgust with the treaty, he was sober enough to consider the possibility that the government would not be in a position to reject it. He believed that if Germany refused to sign the treaty, the Allies would invade Germany from the west—and there was no guarantee that the army would be able to make a stand in the event of an invasion. With this in mind, he asked Field Marshal Paul von Hindenburg if the army was capable of any meaningful resistance in the event the Allies resumed the war. If there was even the slightest chance that the army could hold out, Ebert intended to recommend against ratifying the treaty. Hindenburg—after prodding from his chief of staff, Wilhelm Groener—concluded the army could not resume the war even on a limited scale. However, rather than inform Ebert himself, he had Groener inform the government that the army would be in an untenable position in the event of renewed hostilities. Upon receiving this, the new government recommended signing the treaty. The National Assembly voted in favour of signing the treaty by 237 to 138, with five abstentions (there were 421 delegates in total). This result was wired to Clemenceau just hours before the deadline. Foreign minister Hermann Müller and colonial minister Johannes Bell travelled to Versailles to sign the treaty on behalf of Germany. The treaty was signed on 28 June 1919 and ratified by the National Assembly on 9 July by a vote of 209 to 116. The Japanese felt they had been treated unfairly by the Allies, notably by the United States, France, and Great Britain, as they received what they saw as insufficient reward for their efforts against the German Empire during the course of the war. Japan had attempted to include its Racial Equality Proposal in the treaty, which would have required racial equality among members of the League of Nations. 
The amendment had broad support, but was effectively declined when it was rejected by the United States and Australia. Japanese nationalism grew in response to growing mistrust of the Western powers. As a result, Japan became alienated among the world powers, allowing it to pursue its own strategic interests in Asia and the Pacific. On 5 May 1921, the Reparation Commission established the London Schedule of Payments and a final reparation sum of 132 billion gold marks to be demanded of all the Central Powers. This was the public assessment of what the Central Powers combined could pay, and was also a compromise between Belgian, British, and French demands and assessments. Furthermore, the Commission recognized that the Central Powers could pay little and that the burden would fall upon Germany. As a result, the sum was split into different categories, of which Germany was only required to pay 50 billion gold marks; this was the genuine assessment of the Commission of what Germany could pay, and it allowed the Allied powers to save face with the public by presenting a higher figure. Furthermore, payments made between 1919 and 1921 were taken into account, reducing the sum to 41 billion gold marks. In order to meet this sum, Germany could pay in cash or in kind: coal, timber, chemical dyes, pharmaceuticals, livestock, agricultural machines, construction materials, and factory machinery. Germany's assistance with the restoration of the university library of Leuven, which was destroyed by the Germans on 25 August 1914, was also credited towards the sum. Territorial changes imposed by the treaty were also factored in. The payment schedule required an initial payment within twenty-five days, followed by annual payments, plus 26 per cent of the value of German exports. The German Government was to issue bonds at five per cent interest and set up a sinking fund of one per cent to support the payment of reparations. In February and March 1920, the Schleswig Plebiscites were held. 
The people of Schleswig were presented with only two choices: Danish or German sovereignty. The northern, Danish-speaking area voted for Denmark while the southern, German-speaking area voted for Germany, resulting in the province being partitioned. The East Prussia plebiscite was held on 11 July 1920. Turnout was high, and the overwhelming majority of the population voted to remain with Germany. Further plebiscites were held in Eupen, Malmedy, and Prussian Moresnet. On 20 September 1920, the League of Nations allotted these territories to Belgium. These latter plebiscites were followed by a boundary commission in 1922, and the new Belgian-German border was recognized by the German Government on 15 December 1923. The transfer of the Hultschin area of Silesia to Czechoslovakia was completed on 3 February 1921. Following the implementation of the treaty, Upper Silesia was initially governed by Britain, France, and Italy. Between 1919 and 1921, three major outbreaks of violence took place between German and Polish civilians, resulting in German and Polish military forces also becoming involved. In March 1921, the Inter-Allied Commission held the Upper Silesia plebiscite, which was peaceful despite the previous violence. The plebiscite resulted in a majority of the population voting for the province to remain part of Germany. Following the vote, the League of Nations debated the future of the province. In 1922, Upper Silesia was partitioned: Oppeln, in the north-west, remained with Germany while the Silesia Province, in the south-east, was transferred to Poland. Memel remained under the authority of the League of Nations, with a French military garrison, until January 1923. On 9 January 1923, Lithuanian forces invaded the territory during the Klaipėda Revolt. The French garrison withdrew, and in February the Allies agreed to attach Memel as an "autonomous territory" to Lithuania. 
On 8 May 1924, after negotiations between the Lithuanian Government and the Conference of Ambassadors and action by the League of Nations, the annexation of Memel was ratified. Lithuania accepted the Memel Statute, a power-sharing arrangement to protect non-Lithuanians in the territory and its autonomous status, while responsibility for the territory remained with the great powers. The League of Nations mediated between the Germans and Lithuanians on a local level, helping the power-sharing arrangement last until 1939. On 13 January 1935, 15 years after the Saar Basin had been placed under the protection of the League of Nations, a plebiscite was held to determine the future of the area. The overwhelming majority of ballots were cast in favour of union with Germany, with the remainder split between the status quo and union with France. The region returned to German sovereignty on 1 March 1935. When the result was announced, thousands of residents, including refugees from Germany, fled to France. In late 1918, American, Belgian, British, and French troops entered the Rhineland to enforce the armistice. Prior to the treaty, the occupation force stood at roughly 740,000 men. Following the signing of the peace treaty, the numbers drastically decreased, and by 1926 the occupation force numbered only 76,000 men. As part of the 1929 negotiations that would become the Young Plan, Gustav Stresemann and Aristide Briand negotiated the early withdrawal of Allied forces from the Rhineland. On 30 June 1930, after speeches and the lowering of flags, the last troops of the Anglo-French-Belgian occupation force withdrew from Germany. Belgium maintained an occupation force of roughly 10,000 troops throughout the initial years. This figure fell to 7,102 by 1926, and continued to fall as a result of diplomatic developments. The British Second Army, with some 275,000 veteran soldiers, entered Germany in late 1918. In March 1919, this force became the British Army of the Rhine (BAOR). 
The total number of troops committed to the occupation rapidly dwindled as veteran soldiers were demobilized and replaced by inexperienced men who had finished basic training following the cessation of hostilities. By 1920, the BAOR consisted of only 40,594 men, and the following year it had been further reduced to 12,421. The size of the BAOR fluctuated over the following years, but never rose above 9,000 men. The British did not adhere to all obligated territorial withdrawals as dictated by Versailles, on account of Germany not meeting her own treaty obligations. A complete withdrawal was considered, but rejected in order to maintain a presence to continue acting as a check on French ambitions and to prevent the establishment of an autonomous Rhineland Republic. The French Army of the Rhine was initially 250,000 men strong, including at its peak 40,000 African colonial troops ("Troupes coloniales"). By 1923, the French occupation force had decreased to roughly 130,000 men, including 27,126 African troops. The troop numbers peaked again at 250,000 during the occupation of the Ruhr, before decreasing to 60,000 men by 1926. Germans viewed the use of French colonial troops as a deliberate act of humiliation, and used their presence to create a propaganda campaign dubbed the "Black Shame". This campaign lasted throughout the 1920s and 1930s, although it peaked in 1920 and 1921. For example, a 1921 German Government memo detailed 300 acts of violence by colonial troops, which included 65 murders and 170 sexual offenses. Historical consensus is that the charges were exaggerated for political and propaganda purposes, and that the colonial troops behaved far better than their white counterparts. An estimated 500–800 "Rhineland Bastards" were born as a result of fraternization between colonial troops and German women; these children were later persecuted. The United States Third Army entered Germany in late 1918. 
In June 1919, the Third Army demobilized, and by 1920 the US occupation force had been sharply reduced. Wilson reduced the garrison further prior to the inauguration of Warren G. Harding in 1921. On 7 January 1923, after the Franco–Belgian occupation of the Ruhr, the US Senate voted to withdraw the remaining force. On 24 January, the American garrison started its withdrawal from the Rhineland, with the final troops leaving in early February. The German economy was so weak that only a small percentage of reparations was paid in hard currency. Nonetheless, even the payment of this small percentage of the original reparations (132 billion gold marks) still placed a significant burden on the German economy. Although the causes of the devastating post-war hyperinflation are complex and disputed, Germans blamed the near-collapse of their economy on the treaty, and some economists estimated that the reparations accounted for as much as one-third of the hyperinflation. In March 1921, French and Belgian troops occupied Duisburg, Düsseldorf, and other areas which formed part of the demilitarized Rhineland under the Treaty of Versailles. In January 1923, French and Belgian forces occupied the rest of the Ruhr area as a reprisal after Germany failed to fulfill reparation payments demanded by the Versailles Treaty. The German government answered with "passive resistance", which meant that coal miners and railway workers refused to obey any instructions from the occupation forces. Production and transportation came to a standstill, but the financial consequences contributed to German hyperinflation and completely ruined public finances in Germany. Consequently, passive resistance was called off in late 1923. The end of passive resistance in the Ruhr allowed Germany to undertake a currency reform and to negotiate the Dawes Plan, which led to the withdrawal of French and Belgian troops from the Ruhr Area in 1925. 
In 1920, the head of the "Reichswehr", Hans von Seeckt, clandestinely re-established the General Staff by expanding the "Truppenamt" (Troop Office), purportedly a human resources section of the army. In March, German troops entered the Rhineland under the guise of attempting to quell possible unrest by communists, and in doing so violated the demilitarized zone. In response, French troops advanced further into Germany until the German troops withdrew. German officials conspired systematically to evade the clauses of the treaty by failing to meet disarmament deadlines, refusing Allied officials access to military facilities, and maintaining and hiding weapon production. As the treaty did not ban German companies from producing war material outside of Germany, companies moved to the Netherlands, Switzerland, and Sweden. Bofors was bought by Krupp, and in 1921 German troops were sent to Sweden to test weapons. The establishment of diplomatic ties with the Soviet Union, via the Genoa Conference and Treaty of Rapallo, was also used to circumvent the Treaty of Versailles. Publicly, these diplomatic exchanges were largely concerned with trade and future economic cooperation. However, secret military clauses were included that allowed Germany to develop weapons inside the Soviet Union, and to establish three training areas for aviation, chemical and tank warfare. In 1923, the British newspaper The Times made several claims about the state of the German armed forces: that they held equipment for far more men than the treaty permitted, that army staff were being transferred to civilian positions in order to obscure their real duties, and that the German police force was being militarized through the exploitation of the Krümper system. The Weimar Government also funded domestic rearmament programs, which were covertly financed with money camouflaged in "X-budgets" on top of the disclosed military budget. By 1925, German companies had begun to design tanks and modern artillery. 
During the year, over half of Chinese arms imports were German. In January 1927, following the withdrawal of the Allied disarmament committee, Krupp ramped up production of armor plate and artillery, and production increased so that by 1937 military exports had grown substantially. Production was not the only violation: "volunteers" were rapidly passed through the army to create a pool of trained reserves, and paramilitary organizations were encouraged, along with the illegally militarized police. Non-commissioned officers (NCOs) were not limited by the treaty; this loophole was exploited, and as a result the number of NCOs was vastly in excess of the number needed by the "Reichswehr". In December 1931, the "Reichswehr" finalized a second rearmament plan that called for large sums to be spent over the following five years: this program sought to provide Germany the capability of creating and supplying a defensive force of 21 divisions supported by aircraft, artillery, and tanks. It coincided with a parallel programme that planned additional industrial infrastructure able to permanently maintain this force. As these programs did not require an expansion of the military, they were nominally legal. On 7 November 1932, the Reich Minister of Defense, Kurt von Schleicher, authorized the illegal "Umbau" Plan for a standing army of 21 divisions based on professional soldiers and a large militia. Later in the year, at the World Disarmament Conference, Germany withdrew to force France and Britain to accept German equality of status. London attempted to get Germany to return with the promise of all nations maintaining equality in armaments and security. The British later proposed and agreed to an increase in the size of the "Reichswehr", and for Germany to have an air force half the size of the French. A reduction of the French Army was also negotiated. 
In October 1933, following the rise of Adolf Hitler and the founding of the Nazi regime, Germany withdrew from the League of Nations and the World Disarmament Conference. In March 1935, Germany reintroduced conscription, followed by an open rearmament programme and the official unveiling of the Luftwaffe (air force), and signed the Anglo-German Naval Agreement that allowed a surface fleet 35 per cent the size of the Royal Navy. The resulting rearmament programmes were allotted billions of Reichsmarks over an eight-year period. On 7 March 1936, German troops entered and remilitarized the Rhineland. On 12 March 1938, following German pressure that brought about the collapse of the Austrian Government, German troops crossed into Austria, and the following day Hitler announced the Anschluss: the annexation of Austria by Germany. The following year, on 23 March 1939, Germany annexed Memel from Lithuania. Historians are split on the impact of the treaty. Some saw it as a good solution in a difficult time; others saw it as a disastrous measure that would anger the Germans to seek revenge. The actual impact of the treaty is also disputed. In his book "The Economic Consequences of the Peace", John Maynard Keynes referred to the Treaty of Versailles as a "Carthaginian peace", a misguided attempt to destroy Germany on behalf of French revanchism, rather than to follow the fairer principles for a lasting peace set out in President Woodrow Wilson's Fourteen Points, which Germany had accepted at the armistice. He stated: "I believe that the campaign for securing out of Germany the general costs of the war was one of the most serious acts of political unwisdom for which our statesmen have ever been responsible." Keynes had been the principal representative of the British Treasury at the Paris Peace Conference, and used in his passionate book arguments that he and others (including some US officials) had used at Paris. 
He believed the sums being asked of Germany in reparations were many times more than it was possible for Germany to pay, and that these would produce drastic instability. French economist Étienne Mantoux disputed that analysis. During the 1940s, Mantoux wrote a posthumously published book titled "The Carthaginian Peace, or the Economic Consequences of Mr. Keynes" in an attempt to rebut Keynes' claims. More recently, economists have argued that the restriction of Germany to a small army saved it so much money that it could afford the reparations payments. It has been argued (for instance by historian Gerhard Weinberg in his book "A World at Arms") that the treaty was in fact quite advantageous to Germany. The Bismarckian Reich was maintained as a political unit instead of being broken up, and Germany largely escaped post-war military occupation (in contrast to the situation following World War II). In a 1995 essay, Weinberg noted that with the disappearance of Austria-Hungary and with Russia withdrawn from Europe, Germany was now the dominant power in Eastern Europe. The British military historian Correlli Barnett claimed that the Treaty of Versailles was "extremely lenient in comparison with the peace terms that Germany herself, when she was expecting to win the war, had had in mind to impose on the Allies". Furthermore, he claimed, it was "hardly a slap on the wrist" when contrasted with the Treaty of Brest-Litovsk that Germany had imposed on a defeated Russian SFSR in March 1918, which had taken away a third of Russia's population (albeit mostly of non-Russian ethnicity), one-half of Russia's industrial undertakings and nine-tenths of Russia's coal mines, coupled with an indemnity of six billion marks. Eventually, even under the "cruel" terms of the Treaty of Versailles, Germany's economy had been restored to its pre-war status. Barnett also claims that, in strategic terms, Germany was in fact in a superior position following the treaty than she had been in 1914. 
Germany's eastern frontiers faced Russia and Austria, which had both in the past balanced German power. Barnett asserts that its post-war eastern borders were safer, because the former Austrian Empire fractured after the war into smaller, weaker states, Russia was wracked by revolution and civil war, and the newly restored Poland was no match for even a defeated Germany. In the West, Germany was balanced only by France and Belgium, both of which were smaller in population and less economically vibrant than Germany. Barnett concludes by saying that instead of weakening Germany, the treaty "much enhanced" German power. Britain and France should have (according to Barnett) "divided and permanently weakened" Germany by undoing Bismarck's work and partitioning Germany into smaller, weaker states so it could never have disrupted the peace of Europe again. By failing to do this, and therefore not solving the problem of German power and restoring the equilibrium of Europe, Britain "had failed in her main purpose in taking part in the Great War". The British historian of modern Germany, Richard J. Evans, wrote that during the war the German right was committed to an annexationist program which aimed at Germany annexing most of Europe and Africa. Consequently, any peace treaty that did not leave Germany as the conqueror would be unacceptable to them. Short of allowing Germany to keep all the conquests of the Treaty of Brest-Litovsk, Evans argued that there was nothing that could have been done to persuade the German right to accept Versailles. Evans further noted that the parties of the Weimar Coalition, namely the Social Democratic Party of Germany (SPD), the social liberal German Democratic Party (DDP) and the Christian democratic Centre Party, were all equally opposed to Versailles, and it is false to claim, as some historians have, that opposition to Versailles also equalled opposition to the Weimar Republic. 
Finally, Evans argued that it is untrue that Versailles caused the premature end of the Republic, instead contending that it was the Great Depression of the early 1930s that put an end to German democracy. He also argued that Versailles was not the "main cause" of National Socialism and that the German economy was "only marginally influenced by the impact of reparations". Ewa Thompson points out that the treaty allowed numerous nations in Central and Eastern Europe to liberate themselves from oppressive German rule, a fact that is often neglected by Western historiography, which is more interested in understanding the German point of view. In nations that found themselves free as a result of the treaty—such as the Poles or Czechs—it is seen as a symbol of recognition of wrongs committed against small nations by their much larger aggressive neighbours. Resentment caused by the treaty sowed fertile psychological ground for the eventual rise of the Nazi Party. But the German-born Australian historian Jürgen Tampke argued that it was "a perfidious distortion of history" to argue that the terms prevented the growth of democracy in Germany and aided the growth of the Nazi party, saying that its terms were not as punitive as often held and that German hyperinflation in the 1920s was partly a deliberate policy to minimise the cost of reparations. As an example of the arguments against the "Versaillerdiktat" he quotes Elizabeth Wiskemann, who heard two officers' widows in Wiesbaden complaining that "with their stocks of linen depleted they had to have their linen washed once a fortnight (every two weeks) instead of once a month!" The German historian Detlev Peukert wrote that Versailles was far from the impossible peace that most Germans claimed it was during the interwar period, and though not without flaws was actually quite reasonable to Germany. 
Rather, Peukert argued that it was widely believed in Germany that Versailles was a totally unreasonable treaty, and it was this "perception" rather than the "reality" of the Versailles treaty that mattered. Peukert noted that because of the "millenarian hopes" created in Germany during World War I, when for a time it appeared that Germany was on the verge of conquering all of Europe, any peace treaty the Allies of World War I imposed on the defeated "German Reich" was bound to create a nationalist backlash, and there was nothing the Allies could have done to avoid that backlash. Having noted that much, Peukert commented that the policy of rapprochement with the Western powers that Gustav Stresemann carried out between 1923 and 1929 was a constructive policy that might have allowed Germany to play a more positive role in Europe, and that it was not true that German democracy was doomed to die in 1919 because of Versailles. Finally, Peukert argued that it was the Great Depression and the turn to a nationalist policy of autarky within Germany at the same time that finished off the Weimar Republic, not the Treaty of Versailles. French historian Raymond Cartier states that millions of Germans in the Sudetenland and in Posen-West Prussia were placed under foreign rule in a hostile environment, where harassment and violation of rights by authorities were documented. Cartier asserts that, out of 1,058,000 Germans in Posen-West Prussia in 1921, 758,867 fled their homelands within five years due to Polish harassment. These sharpening ethnic conflicts would lead to public demands to reattach the annexed territory in 1938 and become a pretext for Hitler's annexations of Czechoslovakia and parts of Poland. 
According to David Stevenson, since the opening of French archives, most commentators have remarked on French restraint and reasonableness at the conference, though Stevenson notes that "[t]he jury is still out", and that "there have been signs that the pendulum of judgement is swinging back the other way." The Treaty of Versailles resulted in the creation of several thousand miles of new boundaries, with maps playing a central role in the negotiations at Paris. The plebiscites initiated due to the treaty have drawn much comment. Historian Robert Peckham wrote that the issue of Schleswig "was premised on a gross simplification of the region's history. ... Versailles ignored any possibility of there being a third way: the kind of compact represented by the Swiss Federation; a bilingual or even trilingual Schleswig-Holsteinian state" or other options such as "a Schleswigian state in a loose confederation with Denmark or Germany, or an autonomous region under the protection of the League of Nations." With regard to the East Prussia plebiscite, historian Richard Blanke wrote that "no other contested ethnic group has ever, under un-coerced conditions, issued so one-sided a statement of its national preference". Richard Debo wrote "both Berlin and Warsaw believed the Soviet invasion of Poland had influenced the East Prussian plebiscites. Poland appeared so close to collapse that even Polish voters had cast their ballots for Germany". With regard to the Silesian plebiscite, Blanke observed "given that the electorate was at least 60% Polish-speaking, this means that about one 'Pole' in three voted for Germany" and "most Polish observers and historians" have concluded that the outcome of the plebiscite was due to "unfair German advantages of incumbency and socio-economic position". 
Blanke alleged "coercion of various kinds even in the face of an allied occupation regime" occurred, and that Germany granted votes to those "who had been born in Upper Silesia but no longer resided there". Blanke concluded that despite these protests "there is plenty of other evidence, including Reichstag election results both before and after 1921 and the large-scale emigration of Polish-speaking Upper Silesians to Germany after 1945, that their identification with Germany in 1921 was neither exceptional nor temporary" and "here was a large population of Germans and Poles—not coincidentally, of the same Catholic religion—that not only shared the same living space but also came in many cases to see themselves as members of the same national community". Prince Eustachy Sapieha, the Polish Minister of Foreign Affairs, alleged that Soviet Russia "appeared to be intentionally delaying negotiations" to end the Polish-Soviet War "with the object of influencing the Upper Silesian plebiscite". Once the region was partitioned, both "Germany and Poland attempted to 'cleanse' their shares of Upper Silesia" via oppression, resulting in Germans migrating to Germany and Poles migrating to Poland. Despite the oppression and migration, Opole Silesia "remained ethnically mixed." Frank Russell wrote that, with regard to the Saar plebiscite, the inhabitants "were not terrorized at the polls" and the "totalitarian [Nazi] German regime was not distasteful to most of the Saar inhabitants and that they preferred it even to an efficient, economical, and benevolent international rule." When the outcome of the vote became known, 4,100 residents (including 800 refugees who had previously fled Germany) fled over the border into France. During the formulation of the treaty, the British wanted Germany to abolish conscription but be allowed to maintain a volunteer army. 
The French wanted Germany to maintain a conscript army of up to 200,000 men in order to justify their own maintenance of a similar force. Thus the treaty's allowance of 100,000 volunteers was a compromise between the British and French positions. Germany, on the other hand, saw the terms as leaving them defenseless against any potential enemy. Bernadotte Everly Schmitt wrote that "there is no reason to believe that the Allied governments were insincere when they stated at the beginning of Part V of the Treaty ... that in order to facilitate a general reduction of the armament of all nations, Germany was to be required to disarm first." A lack of American ratification of the treaty or joining the League of Nations left France unwilling to disarm, which resulted in a German desire to rearm. Schmitt argued "had the four Allies remained united, they could have forced Germany really to disarm, and the German will and capacity to resist other provisions of the treaty would have correspondingly diminished." Max Hantke and Mark Spoerer wrote "military and economic historians [have] found that the German military only insignificantly exceeded the limits" of the treaty prior to 1933. Adam Tooze concurred, and wrote "To put this in perspective, annual military spending by the Weimar Republic was counted not in the billions but in the hundreds of millions of Reichsmarks"; for example, the Weimar Republic's 1931 programme (spread over five years) compared to the Nazi Government's 1933 plan of annual spending. P. M. H. Bell argued that the British Government was aware of later Weimar rearming, and lent public respectability to the German efforts by not opposing them, an opinion shared by Churchill. 
Norman Davies wrote that "a curious oversight" of the military restrictions was that they "did not include rockets in its list of prohibited weapons", which gave Wernher von Braun a field to research in, eventually resulting in "his break [that] came in 1943" and the development of the V-2 rocket. The Treaty created much resentment in Germany, which was exploited by Adolf Hitler in his rise to power at the helm of Nazi Germany. Central to this was belief in the stab-in-the-back myth, which held that the German army had not lost the war and had been betrayed by the Weimar Republic, which negotiated an unnecessary surrender. The Great Depression exacerbated the issue and led to a collapse of the German economy. Though the treaty may not have caused the crash, it was a convenient scapegoat. Germans viewed the treaty as a humiliation, and eagerly listened to Hitler's oratory, which blamed the treaty for Germany's ills. Hitler promised to reverse the depredations of the Allied powers and recover Germany's lost territory and pride, which has led to the treaty being cited as a cause of World War II.
https://en.wikipedia.org/wiki?curid=30030
Mort Mort is a fantasy novel by British writer Terry Pratchett. Published in 1987, it is the fourth "Discworld" novel and the first to focus on the character Death, who had appeared only as a side character in the previous novels. The title is the name of its main character and is also a play on words: in French, "mort" means "death". The French language edition is titled "Mortimer". In the BBC's 2003 Big Read contest, viewers voted on the "Nation's Best-loved Book"; "Mort" was among the Top 100 and chosen as the most popular of Pratchett's novels. In 2004, Pratchett stated that "Mort" was the first Discworld novel with which he was "pleased", stating that in previous books, the plot had existed to support the jokes, but that in "Mort", the plot was integral. As a teenager, Mort has a personality and temperament that make him unsuited to the family farming business. Mort's father Lezek takes him to a local hiring fair in the hope that Mort will land an apprenticeship; not only would this provide a job for his son, but it would also make his son's propensity for thinking into someone else's problem. Just before the last stroke of midnight, Death arrives and takes Mort on as an apprentice (though his father thinks he has been apprenticed to an undertaker). Death takes Mort to his domain, where he meets Death's elderly manservant Albert and his adopted daughter Ysabell. Mort later accompanies Death as he travels to collect the soul of a king, who is due to be assassinated by the scheming Duke of Sto Helit. After Mort unsuccessfully tries to prevent the assassination, Death warns him that all deaths are predetermined and that he cannot interfere with fate. Later on, Death assigns Mort to collect the soul of Princess Keli, daughter of the murdered king, but he instead kills the assassin the Duke had sent after her. Keli lives, but shortly after the assassin's death people begin acting as if she had died without knowing why, such as playing a solemn song. 
She soon finds that the rest of the world no longer acknowledges her existence at all unless she confronts them, and even then only in a confused manner which is forgotten immediately after. She subsequently employs the wizard Igneous Cutwell, who is able to see her because he is trained to see things that are invisible to normal people (like Death), to make her existence clear to the public. Mort eventually discovers that his actions have created an alternate reality in which Keli lives, but he also learns that it is being overridden by the original reality and will eventually cease to exist, killing Keli. While consulting Cutwell, Mort sees a picture of Unseen University's founder, Alberto Malich, noting that he bears a resemblance to Albert. Mort and Ysabell travel into the Stack, a library in Death's domain that holds the biographies of everyone who has ever lived, in order to investigate Albert, eventually discovering that he is indeed Malich. They further learn that Malich had feared monsters waiting for him in the afterlife, and performed a reversed version of the Rite of AshkEnte in the hope of keeping Death away from him. However, the spell backfired and sent him to Death's side, where he has remained in order to put off his demise. During this time, Death, yearning to experience what being human is like, travels to Ankh-Morpork to indulge in new experiences, including getting drunk, dancing, gambling and finding a job. Mort in turn starts to become more like Death, adopting his mannerisms and aspects of his personality, while his own is slowly overridden. Death's absence forces Mort to collect the next two souls, who are located on separate parts of the Disc and due to die on the same night that the alternate reality will be destroyed. Before he and Ysabell leave to collect the souls, Mort uses the part of Death within him to force Albert to provide a spell that will slow down the alternate reality's destruction. 
After Mort and Ysabell leave, Albert returns to Unseen University, under the identity of Malich. His eagerness to live on the Disc is reinvigorated during this time, and he has the wizards perform the Rite of AshkEnte in the hope of finally escaping Death's grasp. The ritual summons both Death and the part of Death that had been taking Mort over, restoring him to normal. Unaware of Albert's treachery, Death takes him back into his service, the Librarian preventing the wizard's escape. Mort and Ysabell travel to Keli's palace, where the princess and Cutwell have organised a hasty coronation ceremony in the hope that Keli can be crowned queen before the alternate reality is destroyed. With the reality now too small for Albert's spell, Mort and Ysabell save Keli and Cutwell from being destroyed with the alternate reality. They return to Death's domain to find a furious Death waiting for them, the latter having learned of Mort's actions from Albert. Death dismisses Mort and attempts to take the souls of Keli and Cutwell, but Mort challenges him to a duel for them. Though Death eventually wins the duel, he spares Mort's life and sends him back to the Disc. Death convinces the gods to change the original reality so that Keli rules in place of the Duke, who was inadvertently killed during Death and Mort's duel. Mort and Ysabell – who have fallen in love over the course of the story – get married, and are made Duke and Duchess of Sto Helit by Keli, while Cutwell is made the Master of the Queen's Bedchamber. Death attends Mort and Ysabell's reception, where he warns Mort that he will have to make sure that the original Duke's destiny is fulfilled, and presents him with the alternate reality he created, now shrunk to the size of a large pearl, before the two part on amicable terms. Stephen Briggs adapted the novel for the stage in 1992. The novel was adapted as a graphic novel, "Mort: The Big Comic", published in 1994. 
The novel has been adapted by Robin Brooks for BBC Radio 4. Narrated by Anton Lesser, with Geoffrey Whitehead as Death, Carl Prekopp as Mort, Clare Corbett as Ysabell and Alice Hart as Princess Keli, the programme was first broadcast in four parts in mid-2004 and has been repeated frequently, most recently on Radio 4 Extra. On 15 December 2007 a German-language stage musical adaptation premiered in Hamburg. An English musical adaptation of "Mort" was presented in Guildford in August 2008 by Youth Music Theatre UK. The adaptation was by Jenifer Toksvig (sister of Sandi Toksvig) and the composer was Dominic Haslam. A new production, directed by Luke Sheppard, was staged at the Greenwich Theatre in 2011. The play "Adler", seen in the manga "Beastars", appears to be based on "Mort". After the film "The Princess and the Frog", Disney animators John Musker and Ron Clements planned that their next project would be an animated film version of "Mort", but their failure to obtain the film rights prevented them from continuing with the project.
https://en.wikipedia.org/wiki?curid=30033
Tim Berners-Lee Sir Timothy John Berners-Lee (born 8 June 1955), also known as TimBL, is an English engineer and computer scientist best known as the inventor of the World Wide Web. He is a Professorial Fellow of Computer Science at the University of Oxford and a professor at the Massachusetts Institute of Technology (MIT). Berners-Lee proposed an information management system on 12 March 1989, then implemented the first successful communication between a Hypertext Transfer Protocol (HTTP) client and server via the internet in mid-November. Berners-Lee is the director of the World Wide Web Consortium (W3C), which oversees the continued development of the Web. He is also the founder of the World Wide Web Foundation and is a senior researcher and holder of the 3Com founders chair at the MIT Computer Science and Artificial Intelligence Laboratory (CSAIL). He is a director of the Web Science Research Initiative (WSRI) and a member of the advisory board of the MIT Center for Collective Intelligence. In 2011, he was named as a member of the board of trustees of the Ford Foundation. He is a founder and president of the Open Data Institute and is currently an advisor at social network MeWe. In 2004, Berners-Lee was knighted by Queen Elizabeth II for his pioneering work. In April 2009, he was elected a Foreign Associate of the National Academy of Sciences. He was named in "Time" magazine's list of the 100 Most Important People of the 20th century and has received a number of other accolades for his invention. He was honoured as the "Inventor of the World Wide Web" during the 2012 Summer Olympics opening ceremony, in which he appeared working with a vintage NeXT Computer at the London Olympic Stadium. He tweeted "This is for everyone", which appeared in LCD lights attached to the chairs of the audience. He received the 2016 Turing Award "for inventing the World Wide Web, the first web browser, and the fundamental protocols and algorithms allowing the Web to scale". 
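The client-server exchange mentioned above can be illustrated with a toy version of the original protocol (retrospectively called HTTP/0.9): the client sends a single "GET /path" line and the server replies with raw HTML, with no status line and no headers. This is a hedged sketch in modern Python, not Berners-Lee's actual code; the page contents, helper names, and port handling are invented for the example.

```python
# Toy model of the original Web exchange ("HTTP/0.9"): one-line request,
# raw-HTML response, connection closed afterwards. Illustrative only.
import socket
import threading

PAGE = b"<html><body>Hello, Web</body></html>"  # invented page content

def serve_once(server_sock):
    """Accept a single connection and answer a one-line GET request."""
    conn, _ = server_sock.accept()
    request = conn.recv(1024).decode("ascii")   # e.g. "GET /index.html\r\n"
    if request.startswith("GET "):
        conn.sendall(PAGE)                      # HTTP/0.9: body only, no headers
    conn.close()

def fetch(host, port, path):
    """Minimal client: send the request line, read until the server closes."""
    with socket.create_connection((host, port)) as s:
        s.sendall(f"GET {path}\r\n".encode("ascii"))
        chunks = []
        while (data := s.recv(1024)):
            chunks.append(data)
    return b"".join(chunks)

server = socket.socket()
server.bind(("127.0.0.1", 0))                   # let the OS pick a free port
server.listen(1)
port = server.getsockname()[1]
threading.Thread(target=serve_once, args=(server,)).start()
response = fetch("127.0.0.1", port, "/index.html")
server.close()
print(response.decode("ascii"))
```

The absence of headers and status codes is part of why the earliest servers could be so small; content types and response codes arrived later with HTTP/1.0.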
Berners-Lee was born on 8 June 1955 in London, England, the eldest of the four children of Mary Lee Woods and Conway Berners-Lee; his brother Mike is an expert on greenhouse gases. His parents were computer scientists who worked on the first commercially built computer, the Ferranti Mark 1. He attended Sheen Mount Primary School, and then went on to attend south west London's Emanuel School from 1969 to 1973, at the time a direct grant grammar school, which became an independent school in 1975. A keen trainspotter as a child, he learnt about electronics from tinkering with a model railway. He studied at The Queen's College, Oxford, from 1973 to 1976, where he received a first-class bachelor of arts degree in physics. While at university, Berners-Lee made a computer out of an old television set, which he bought from a repair shop. After graduation, Berners-Lee worked as an engineer at the telecommunications company Plessey in Poole, Dorset. In 1978, he joined D. G. Nash in Ferndown, Dorset, where he helped create type-setting software for printers. Berners-Lee worked as an independent contractor at CERN from June to December 1980. While in Geneva, he proposed a project based on the concept of hypertext, to facilitate sharing and updating information among researchers. To demonstrate it, he built a prototype system named ENQUIRE. After leaving CERN in late 1980, he went to work at John Poole's Image Computer Systems, Ltd, in Bournemouth, Dorset. He ran the company's technical side for three years. The project he worked on was a "real-time remote procedure call" which gave him experience in computer networking. In 1984, he returned to CERN as a fellow. In 1989, CERN was the largest internet node in Europe, and Berners-Lee saw an opportunity to join hypertext with the internet: Berners-Lee wrote his proposal in March 1989 and, in 1990, redistributed it. It then was accepted by his manager, Mike Sendall, who called his proposals 'vague, but exciting'. 
He used similar ideas to those underlying the ENQUIRE system to create the World Wide Web, for which he designed and built the first Web browser. His software also functioned as an editor (called WorldWideWeb, running on the NeXTSTEP operating system), and the first Web server, CERN HTTPd (short for Hypertext Transfer Protocol daemon). Berners-Lee published the first web site, which described the project itself, on 20 December 1990; it was available to the Internet from the CERN network. The site provided an explanation of what the World Wide Web was, and how people could use a browser and set up a web server, as well as how to get started with one's own website. In a list of 80 cultural moments that shaped the world, chosen by a panel of 25 eminent scientists, academics, writers, and world leaders, the invention of the World Wide Web was ranked number one, with the entry stating, "The fastest growing communications medium of all time, the internet has changed the shape of modern life forever. We can connect with each other instantly, all over the world". In 1994, Berners-Lee founded the W3C at the Massachusetts Institute of Technology. It comprised various companies that were willing to create standards and recommendations to improve the quality of the Web. Berners-Lee made his idea available freely, with no patent and no royalties due. The World Wide Web Consortium decided that its standards should be based on royalty-free technology, so that they could easily be adopted by anyone. In 2001, Berners-Lee became a patron of the East Dorset Heritage Trust, having previously lived in Colehill in Wimborne, East Dorset. In December 2004, he accepted a chair in computer science at the School of Electronics and Computer Science, University of Southampton, Hampshire, to work on the Semantic Web. In a "Times" article in October 2009, Berners-Lee admitted that the initial pair of slashes ("//") in a web address were "unnecessary". 
He told the newspaper that he could easily have designed web addresses without the slashes. "There you go, it seemed like a good idea at the time", he said in his lighthearted apology. In June 2009, then-British prime minister Gordon Brown announced that Berners-Lee would work with the UK government to help make data more open and accessible on the Web, building on the work of the Power of Information Task Force. Berners-Lee and Professor Nigel Shadbolt are the two key figures behind data.gov.uk, a UK government project to open up almost all data acquired for official purposes for free re-use. Commenting on the opening up of Ordnance Survey data in April 2010, Berners-Lee said: "The changes signal a wider cultural change in government based on an assumption that information should be in the public domain unless there is a good reason not to—not the other way around." He went on to say: "Greater openness, accountability and transparency in Government will give people greater choice and make it easier for individuals to get more directly involved in issues that matter to them." In November 2009, Berners-Lee launched the World Wide Web Foundation (WWWF) in order to campaign to "advance the Web to empower humanity by launching transformative programs that build local capacity to leverage the Web as a medium for positive change." Berners-Lee is one of the pioneer voices in favour of net neutrality, and has expressed the view that ISPs should supply "connectivity with no strings attached", and should neither control nor monitor the browsing activities of customers without their expressed consent. He advocates the idea that net neutrality is a kind of human network right: "Threats to the internet, such as companies or governments that interfere with or snoop on internet traffic, compromise basic human network rights." Berners-Lee participated in an open letter to the US Federal Communications Commission (FCC). 
He and 20 other Internet pioneers urged the FCC to cancel a vote on 14 December 2017 to uphold net neutrality. The letter was addressed to Senator Roger Wicker, Senator Brian Schatz, Representative Marsha Blackburn and Representative Michael F. Doyle. Berners-Lee joined the board of advisors of start-up State.com, based in London. As of May 2012, Berners-Lee is president of the Open Data Institute, which he co-founded with Nigel Shadbolt in 2012. The Alliance for Affordable Internet (A4AI) was launched in October 2013 and Berners-Lee is leading the coalition of public and private organisations that includes Google, Facebook, Intel, and Microsoft. The A4AI seeks to make internet access more affordable so that access is broadened in the developing world, where only 31% of people are online. Berners-Lee will work with those aiming to decrease internet access prices so that they fall below the UN Broadband Commission's worldwide target of 5% of monthly income. Berners-Lee holds the founders chair in Computer Science at the Massachusetts Institute of Technology, where he heads the Decentralized Information Group and is leading Solid, a joint project with the Qatar Computing Research Institute that aims to radically change the way Web applications work today, resulting in true data ownership as well as improved privacy. In October 2016, he joined the Department of Computer Science at Oxford University as a professorial research fellow and as a fellow of Christ Church, one of the Oxford colleges. From the mid-2010s Berners-Lee initially remained neutral on the emerging Encrypted Media Extensions (EME) proposal, with its controversial Digital Rights Management (DRM) implications. In March 2017 he felt he had to take a position, which was to support the EME proposal. He argued for EME's virtues while noting that DRM was inevitable. As W3C director he went on to approve the finalised specification in July 2017. 
His stance was opposed by some, including the Electronic Frontier Foundation (EFF), the anti-DRM campaign Defective by Design and the Free Software Foundation. Concerns raised included that the proposal ran against the internet's open philosophy in favour of commercial interests, and the risk of users being forced to use a particular web browser to view specific DRM content. The EFF raised a formal appeal, which did not succeed, and the EME specification became a formal W3C recommendation in September 2017. On 30 September 2018, Berners-Lee announced his new open-source startup Inrupt to fuel a commercial ecosystem around the Solid project, which aims to give users more control over their personal data and lets users choose where the data goes, who is allowed to see certain elements and which apps are allowed to see that data. In November 2019, at the Internet Governance Forum in Berlin, Berners-Lee and the WWWF launched "Contract for the Web", a campaign initiative to persuade governments, companies and citizens to commit to nine principles to stop "misuse", with the warning that "if we don't act now, and act together, to prevent the web being misused by those who want to exploit, divide and undermine, we are at risk of squandering [its potential for good]". Berners-Lee has received many awards and honours. He was knighted by Queen Elizabeth II in the 2004 New Year Honours "for services to the global development of the internet", and was invested formally on 16 July 2004. On 13 June 2007, he was appointed to the Order of Merit (OM), an order restricted to 24 (living) members. Bestowing membership of the Order of Merit is within the personal purview of the Queen, and does not require recommendation by ministers or the Prime Minister. He was elected a Fellow of the Royal Society (FRS) in 2001. He has been conferred honorary degrees from a number of universities around the world, including Manchester (his parents worked on the Manchester Mark 1 in the 1940s), Harvard and Yale. 
In 2012, Berners-Lee was among the British cultural icons selected by artist Sir Peter Blake to appear in a new version of his most famous artwork – the Beatles' "Sgt. Pepper's Lonely Hearts Club Band" album cover – created to mark his 80th birthday and to celebrate the British cultural figures of his life that he most admires. In 2013, he was awarded the inaugural Queen Elizabeth Prize for Engineering. On 4 April 2017, he received the 2016 ACM Turing Award "for inventing the World Wide Web, the first web browser, and the fundamental protocols and algorithms allowing the Web to scale". Berners-Lee married Nancy Carlson, an American computer programmer, in 1990. She was also working in Switzerland, at the World Health Organization. They had two children and divorced in 2011. In 2014, he married Rosemary Leith at the Chapel Royal, St. James's Palace in London. Leith is a Canadian internet and banking entrepreneur and a founding director of Berners-Lee's World Wide Web Foundation. The couple also collaborate on venture capital to support artificial intelligence companies. Berners-Lee was raised as an Anglican, but he turned away from religion in his youth. After he became a parent, he became a Unitarian Universalist (UU). When asked whether he believes in God, he stated: "Not in the sense of most people, I'm atheist and Unitarian Universalist".
https://en.wikipedia.org/wiki?curid=30034
The Legend of Zelda "The Legend of Zelda" is an action-adventure game series created by the Japanese game designers Shigeru Miyamoto and Takashi Tezuka and published by Nintendo. The series centers on various incarnations of Link, the player character and chief protagonist. Link is often given the task of rescuing Princess Zelda and the kingdom of Hyrule from Ganon, an evil warlord turned demon who is the principal antagonist of the series; however, other settings and antagonists have appeared in several games. The plots commonly involve the Triforce, a relic representing the virtues of Courage, Wisdom and Power that together are omnipotent. The protagonist in each game is usually a different incarnation of Link, with a few exceptions. Since the original "Legend of Zelda" was released in 1986, the series has expanded to include 19 entries on all of Nintendo's major game consoles, as well as a number of spin-offs. An American animated TV series based on the games aired in 1989 and individual manga adaptations commissioned by Nintendo have been produced in Japan since 1997. "The Legend of Zelda" is one of Nintendo's most prominent and successful franchises; several of its entries are considered to be among the greatest video games of all time. "The Legend of Zelda" games feature a mix of puzzles, action, adventure/battle gameplay, and exploration. These elements have remained constant throughout the series, but with refinements and additions featured in each new game. Later games in the series also include stealth gameplay, where the player must avoid enemies while proceeding through a level, as well as racing elements. Although the games can be beaten with a minimal amount of exploration and side quests, the player is frequently rewarded with helpful items or increased abilities for solving puzzles or exploring hidden areas. 
Some items are consistent and appear many times throughout the series (such as bombs and bomb flowers, which can be used both as weapons and to open blocked or hidden doorways; boomerangs, which can kill or paralyze enemies; keys for locked doors; magic swords, shields, and bows and arrows), while others are unique to a single game. Though the games contain many role-playing elements ("Zelda II: The Adventure of Link" is the only one to include an experience system), they emphasize straightforward hack and slash-style combat over the strategic, turn-based or active time combat of series like "Final Fantasy". The games' role-playing elements, however, have led to much debate over whether or not the "Zelda" games should be classified as action role-playing games, a genre on which the series has had a strong influence. Every game in the main "Zelda" series has consisted of three principal areas: an overworld which connects all other areas, in which movement is multidirectional, allowing the player some degree of freedom of action; areas of interaction with other characters (merely caves or hidden rooms in the first game, but expanding to entire towns and cities in subsequent games) in which the player gains special items or advice, can purchase equipment or complete side quests; and dungeons, areas of labyrinthine layout, usually underground, comprising a wide range of difficult enemies, bosses, and items. Each dungeon usually has one major item inside, which can be essential for solving many of the puzzles within that dungeon and often plays a crucial role in defeating that dungeon's boss, as well as progressing through the game. In nearly every "Zelda" game, navigating a dungeon is aided by locating a map, which reveals its layout, and a magic compass, which reveals the location of significant and smaller items such as keys and equipment. In later games, the dungeon includes a special "big key" that will unlock the door to battle the dungeon's boss enemy and open the item chest. 
In most "Zelda" games, the player's HP or life meter is represented by a line of hearts, each heart typically representing two hit points. At the start of the game the player has only three hearts, but can increase the maximum by finding heart-shaped crystals called "Heart Containers". Full Heart Containers are usually received at the end of dungeons, dropped by dungeon bosses. Smaller "Pieces of Heart" are awarded for completing side quests or found hidden around the game world in various places, and a certain number of them (usually four) forms a full Heart Container. Health can be replenished by picking up hearts left by defeated enemies or destroyed objects, consuming items such as potions or food, or going to a Great Fairy Fountain to have the Great Fairy heal Link completely. Occasionally the player will find fairies hidden in specific locations; these fairies can either heal Link immediately or be kept in empty bottles, where they will revive him the next time he falls. The games pioneered a number of features that became industry standards. The original "Legend of Zelda" was the first console game with a save function that enabled players to stop playing and then resume later. "The Legend of Zelda: Ocarina of Time" introduced a targeting system that let the player lock the camera onto an enemy or friendly NPC, which simplified 3D combat. Games in "The Legend of Zelda" series frequently feature in-game musical instruments, particularly in musical puzzles, which are widespread. Often, instruments trigger game events: for example, the recorder in "The Legend of Zelda" can reveal secret areas, as well as warp Link to dungeon entrances. This music-based warping also appears in "A Link to the Past" and "Link's Awakening". In "", playing instruments is a core part of the game, and the player must play the instrument through the game controller to succeed.
"Ocarina of Time" is "[one of the] first contemporary non-dance title[s] to feature music-making as part of its gameplay", using music as a heuristic device and requiring the player to utilise songs to progress in the game – a game mechanic that is also present in "". "The Legend of Zelda Theme" is a recurring piece of music that was created for the first game of the franchise. The composer and sound director of the series, Koji Kondo, initially planned to use Maurice Ravel's "Boléro" as the game's title theme, but was forced to change it when he learned, late in the game's development cycle, that the copyright for the orchestral piece had not yet expired. As a result, Kondo wrote a new arrangement of the overworld theme within one day. The "Zelda Theme" has topped ScrewAttack's "Top Ten Videogame Themes Ever" list. Up until "Breath of the Wild", the "Legend of Zelda" series avoided using voice acting in speaking roles, relying instead on written dialogue. Series producer Eiji Aonuma previously stated that as Link is entirely mute, having the other characters speak while Link remains silent "would be off-putting". Instead of theme music for different locations, "Breath of the Wild" plays natural ambience around the player as main sounds, in addition to some minimalist piano music. "The Legend of Zelda" was principally inspired by Shigeru Miyamoto's "explorations" as a young boy in the hillsides, forests, and caves surrounding his childhood home in Sonobe, Japan where he ventured into forests with secluded lakes, caves, and rural villages. According to Miyamoto, one of his most memorable experiences was the discovery of a cave entrance in the middle of the woods. After some hesitation, he apprehensively entered the cave, and explored its depths with the aid of a lantern. Miyamoto has referred to the creation of the "Zelda" games as an attempt to bring to life a "miniature garden" for players to play with in each game of the series. 
Hearing of American novelist, socialite and painter Zelda Fitzgerald, Miyamoto thought the name sounded "pleasant and significant". Paying tribute, he chose to name the princess after her and titled the game "The Legend of Zelda". Link and the fairy were inspired by Peter Pan and Tinker Bell. The Master Sword was inspired by Excalibur of Arthurian legend, which has roots in the Welsh collection the Mabinogion. "The Legend of Zelda" takes place predominantly in a medieval Western Europe-inspired fantasy world called Hyrule, which has developed a deep history and wide geography over the series' many releases. Much of the backstory of the creation of Hyrule was revealed in the games "", "", "", "", "", and "". Hyrule's principal inhabitants are pointy-eared, elf-like humanoids called Hylians, who include the player character, Link, and the eponymous princess, Zelda. According to the in-game backstories, the world of Hyrule was created by the three golden goddesses: Din, Farore, and Nayru. Before departing, the goddesses left a sacred artifact called the Triforce, which could grant powers to its user. It physically manifests as three golden triangles, each embodying one of the goddesses' virtues: Power, Courage, and Wisdom. However, because the Triforce has no will of its own and cannot judge between good and evil, it will grant any wish indiscriminately. Because of this, it was placed within an alternate world called the "Sacred Realm" or the "Golden Land" until someone worthy of its power, with the virtues of Power, Wisdom, and Courage balanced in their heart, could obtain it in its entirety. If a person's heart is not balanced, only the piece of the Triforce embodying the virtue that person most believes in stays with them, while the remainder seeks out others. To master the Triforce as a whole, the user must therefore find the individuals holding the other pieces and reunite them.
The Sacred Realm can itself be affected by the hearts of those who enter it: those who are pure will make it a paradise, while those who are evil will transform it into a dark realm. In "Skyward Sword", the Triforce was sought by a demon king named Demise, and after a long battle, Demise was sealed away within the Temple of the goddess Hylia, guardian of the Triforce. Hylia, placing the Hylians on a floating island (called Skyloft) in the sky to protect them, orchestrated a means to stop the demon from escaping: creating the Goddess Sword (which later becomes the Master Sword) for her chosen hero and discarding her divinity to be reborn among the people of Skyloft. In time, Zelda and Link (the reborn Hylia and her predestined warrior) enacted the goddess' plan and Demise was destroyed. However, Demise vowed that his rage would be reborn and forever plague those descended from Link and Zelda. That prophecy came to fruition when Ganondorf's attempt to seize the Triforce scattered it, leaving him with the Triforce of Power. The Triforce of Wisdom ended up with the Hylian princesses descended from Zelda, each named after her, while the Triforce of Courage passed across generations to a youth named Link. While the Triforces of Power and Wisdom have been part of the series since the original "The Legend of Zelda", the Triforce of Courage was first introduced in "Zelda II: The Adventure of Link", where Link obtains it at the end of his quest. The Triforce, or even a piece of it, is not always held as a whole. In "The Wind Waker", for example, Link must find all the pieces (called Triforce Shards) of the Triforce of Courage before he can return to Hyrule. Even in the original "The Legend of Zelda", Zelda breaks her Triforce of Wisdom into eight pieces for Link to find before she is captured by Ganon. The fictional universe established by the "Zelda" games sets the stage for each adventure.
Some games take place in different lands with their own backstories: Termina serves as a parallel world to Hyrule, other settings are kingdoms connected to it, and Koholint is an island far away from Hyrule that appears to be part of a dream. The chronology of the "Legend of Zelda" series was a subject of much debate among fans until an official timeline was released within the "" collector's book, first released in Japan in December 2011. Prior to its release, producers confirmed the existence of a confidential document that connected all the games, and certain materials and developer statements had partially established an official timeline of the released installments. "" is a direct sequel to the original "The Legend of Zelda", and takes place several years later. The third game, "", is a prequel to the first two games, and is directly followed by "Link's Awakening". "" is a prequel that takes the story many centuries back; according to character designer Satoru Takizawa, it was meant to implicitly tell the story of the Imprisoning War from the manual of "A Link to the Past", with "Majora's Mask" directly following its ending. "Skyward Sword" is in turn a prequel to "Ocarina of Time". "Twilight Princess" is set more than 100 years after "Ocarina of Time". "The Wind Waker" is parallel, taking place in the other timeline branch more than a century after the adult era of "Ocarina of Time". "Phantom Hourglass" is a continuation of the story from "The Wind Waker", and is followed by "Spirit Tracks", which is set about 100 years later on a supercontinent far away from the setting of "The Wind Waker". At the time of its release, "" for the Game Boy Advance was considered the oldest tale in the series' chronology, with "Four Swords Adventures" set sometime after its events. "The Minish Cap" precedes the two games, telling of the origins of the villain Vaati and the creation of the Four Sword. "" takes place six generations after "A Link to the Past".
Important events that occur in the game include the Triforce being reunited and Ganon being resurrected. Nintendo's 2011 timeline posits that following "Ocarina of Time", the timeline splits into three alternate routes: in one, Link fails to defeat Ganon, leading into the Imprisoning War and "A Link to the Past", "Oracle of Seasons" and "Oracle of Ages", "Link's Awakening", "The Legend of Zelda" and "The Adventure of Link". In the second and third, Link is successful, leading to a timeline split between his childhood (when Zelda sends him back in time so he can use the wisdom he has gained to warn the Zelda of the past of Hyrule's horrifying fate) and adulthood (where the Zelda of the future lives on to try to rebuild the kingdom). His childhood continues with "Majora's Mask", followed by "Twilight Princess" and "Four Swords Adventures". The timeline from his adult life continues into "Wind Waker", "Phantom Hourglass" and "Spirit Tracks". In the early 2000s, Nintendo of America released a timeline on the official website of the series, which interpreted all stories up to the "Oracle" games as the adventures of a single protagonist named Link. At one point, translator Dan Owsen and his coworkers at Nintendo of America had conceived another complete timeline and intended to make it available online; however, the Japanese series developers rejected the idea so that the timeline would be kept open to the players' imagination. In 2018, Nintendo revealed that "Breath of the Wild" officially takes place after all previous games in the series (without specifying a connection to any of the three timelines), and moved "Link's Awakening" to take place before "Oracle of Seasons" and "Oracle of Ages". The central protagonist of "The Legend of Zelda" series, Link is the name of various young men who characteristically wear a green tunic and a pointed cap, and who are the bearers of the Triforce of Courage.
In most games, the player can give Link a different name before the start of the adventure, and he will be referred to by that name throughout by the non-player characters (NPCs). The various Links each have a special title, such as "Hero of Time", "Hero of the Winds" or "Hero chosen by the gods". Like many silent protagonists in video games, Link does not speak, only producing grunts, yells, or similar sounds. Although the player never sees his dialogue, in-game characters reference it second-hand, showing that he is not, in fact, mute. Link is presented as a silent protagonist so that players can imagine for themselves how their Link would answer other characters, rather than hearing scripted responses. Princess Zelda is the princess of Hyrule and the guardian of the Triforce of Wisdom. The name is shared by many of her female ancestors and descendants. While most games require Link to save Zelda from Ganon, she sometimes plays a supporting role in battle, using magical powers and weapons such as Light Arrows to aid Link. With the exception of the CD-i games (which were not official Nintendo games), she was not playable in the main series until "Spirit Tracks", where she becomes a spirit and can possess a Phantom Knight controlled by the player. Zelda appears under various other aliases and alter egos, including Sheik (in "") and Tetra (in "" and ""). In "Skyward Sword", it is revealed that the Zelda of that game is a reincarnation of the goddess Hylia, whose power flows through the royal bloodline. The name "Zelda" derives from the American novelist Zelda Fitzgerald. Ganon, also known as Ganondorf in his humanoid form, is the main antagonist and the final boss in the majority of "The Legend of Zelda" games. In the series, Ganondorf is the leader of a race of desert brigands called the Gerudo, which consists entirely of female warriors save for one man born every hundred years.
He is significantly taller than other human NPCs, but his appearance varies between games, often taking the form of a monstrous anthropomorphic boar. His specific motives vary from game to game, but most often he kidnaps Princess Zelda and seeks domination of Hyrule, and presumably the world beyond it. To this end, he seeks the Triforce, a powerful magical relic. He often possesses a portion of the Triforce called the Triforce of Power, which gives him great strength. However, it is often not enough to accomplish his ends, leading him to hunt the remaining Triforce pieces. Unlike Link, Zelda, and most other recurring characters, he is actually the same person in every game, with the exception of "Four Swords Adventures", where he is a reincarnation of the original. The battles with him differ in each game, and he fights using different styles. The game "" indicates that Ganon is a reincarnation of an evil deity known as Demise. "The Legend of Zelda", the first game of the series, was first released in Japan on February 21, 1986, on the Famicom Disk System. A cartridge version, using battery-backed memory, was released in the United States on August 22, 1987, and in Europe on November 27, 1987. The game features a "Second Quest", accessible either upon completing the game or by registering one's name as "ZELDA" when starting a new quest. The Second Quest features different dungeons and item placement, and more difficult enemies. The second game, "", was released for the Famicom Disk System in Japan on January 14, 1987, and for the Nintendo Entertainment System in Europe in November 1988 and in North America in December 1988. The game exchanged the top-down perspective for side-scrolling (though the top-down point of view was retained for overworld areas), and introduced RPG elements (such as experience points) not used previously or since in the series.
"The Legend of Zelda" and "Zelda II" were released in gold-coloured cartridges instead of the console's regular grey cartridges. Both were re-released in the final years of the Nintendo Entertainment System with grey cartridges. Four years later, "" returned to the top-down view (under a 3/4 perspective), and added the concept of an alternate dimension, the Dark World. The game was released for the Super NES on November 21, 1991. It was later re-released for the Game Boy Advance on March 14, 2003, in North America, on a cartridge with "", the first multiplayer "Zelda", and then through Nintendo's Virtual Console service on January 22, 2007. In addition, both this game (unchanged, except for being converted into a downloadable format) and an exclusive "loosely based" sequel (which used the same game engine) called "BS Zelda no Densetsu Inishie no Sekiban" were released on the Satellaview in Japan on March 2, 1997, and March 30, 1997, respectively. In 1994, near the end of the Famicom's lifespan, the original Famicom game was re-released in cartridge format. A modified version, "BS Zelda no Densetsu", was released for the Super Famicom's satellite-based expansion, Satellaview, on August 6, 1995, in Japan. A second Satellaview game, "BS Zelda no Densetsu MAP2" was released for the Satellaview on December 30, 1995. Both games featured rearranged dungeons, an altered overworld, and new voice-acted plot-lines. The next game, "", is the first "Zelda" for Nintendo's Game Boy handheld, and the first set outside Hyrule and to exclude Princess Zelda. It was released in 1993, and re-released, in full color, as a launch game for the Game Boy Color in 1998 as "Link's Awakening DX". This re-release features additions such as an extra color-based dungeon and a photo shop that allows interaction with the Game Boy Printer. After a five-year hiatus, the series made the transition to 3D with "" for the Nintendo 64, which was released in November 1998. 
This game, initially known as "Zelda 64", retains the core gameplay of the previous 2D games, and was very successful commercially and critically. It is considered by many critics and gamers to be the best video game of all time, ranking highly on IGN and EGM's "greatest games of all time" lists and receiving perfect scores from several video game publications. In February 2006, it was ranked by "Nintendo Power" as the best game released for a Nintendo console. The game was originally developed for the poorly selling, Japan-only Nintendo 64DD, but was ported to cartridge format when the 64DD hardware was delayed. A new gameplay mechanic, lock-on targeting (called "Z-targeting" after the controller button used), focuses the camera on a nearby target and alters the player's actions relative to that target, allowing precise sword fighting in a 3D space. The game makes heavy use of context-sensitive button play, which enabled the player to control various actions of Link with a single button on the Nintendo 64's game pad. Each action was handled slightly differently, but all used the 'A' button: for instance, standing next to a block and pressing 'A' made Link grab it (enabling him to push or pull it), while moving forwards into a block and pressing 'A' made Link climb it. The 'B' button was used only as an attack button. The game featured the first appearance of Link's horse, Epona, allowing Link to travel quickly across land and fire arrows from horseback. Those who preordered the game received a gold-coloured cartridge in a limited edition box with a golden plastic card affixed, reading "Collector's Edition". At some stores where this "Collector's Edition" quickly sold out, customers instead received a small, rare Zelda pin bearing the sword and shield emblem with "Zelda" written on it; very few of these are known to remain.
"Ocarina of Time" was re-released on the GameCube in 2002, when it was offered as a pre-order incentive for "" in the U.S., Canada and Japan. Europe continued to receive it free in every copy of "", except for the discounted Player's Choice version. It includes what is widely believed to be the remnants of "Ura Zelda", a cancelled 64DD expansion for "Ocarina of Time" from early in development. Named "", this version adds revamped, more difficult dungeon layouts. "Ocarina of Time" was included as part of the "" for the GameCube in 2003, and later became available through the Wii's Virtual Console service. In 2011, Nintendo released a new version of the game in stereoscopic 3D for the Nintendo 3DS, "". In July 2015, Nintendo re-released it for the Wii U Virtual Console. "Ocarina of Time"'s follow-up, "", was released in April 2000. It uses the same 3D game engine as the previous game, and adds a time-based concept in which Link, the protagonist, relives the events of three days as many times as needed to complete the game's objectives. It was originally called "Zelda Gaiden", a Japanese title that translates as "Zelda Side Story". Gameplay changed significantly: in addition to the time limit, Link can use masks to transform into creatures with unique abilities. While "Majora's Mask" retains the graphical style of "Ocarina of Time", it is also a departure, particularly in its atmosphere; it features motion blur, unlike its predecessor. The game is darker, dealing with death and tragedy in a manner not previously seen in the series, and has a sense of impending doom, as a large moon slowly descends upon the land of Termina to destroy all life. All copies of "" are gold cartridges. A limited "Collector's Edition" lenticular cartridge label was offered as a pre-order incentive; copies of the game that are not collector's editions feature a normal sticker cartridge label.
"Majora's Mask" is included in the "", and is available on the Virtual Console, as well as in a 3D port for the portable 3DS console. The next two games, , were released simultaneously for the Game Boy Color and interact using passwords or a Game Link Cable. After one game has been completed, the player is given a password that allows the other game to be played as a sequel. They were developed by Flagship in conjunction with Nintendo, with supervision from Miyamoto. After the team experimented with porting the original "The Legend of Zelda" to the Game Boy Color, they decided to make an original trilogy to be called the "Triforce Series". When the password system linking the three games proved too troublesome, the concept was reduced to two games at Miyamoto's suggestion. These two games became "Oracle of Ages", which is more puzzle-based, and "Oracle of Seasons", which is more action-oriented. When Nintendo revealed the GameCube on August 24, 2000, the day before Nintendo's SpaceWorld 2000 exposition, a software demonstration showed a realistically styled real-time duel between Ganondorf and Link. Fans and the media speculated that the battle might be from a "Zelda" game in development. At SpaceWorld 2001, Nintendo showed a cel-shaded "Zelda" game, later released as "" in December 2002. Due to the poor reception of that showing, nothing further was shown until a playable demonstration was ready. Miyamoto felt "The Wind Waker" would "extend "Zelda"'s reach to all ages". The gameplay centres on controlling wind with a baton called the "Wind Waker" and sailing a small boat around an island-filled ocean, retaining gameplay mechanics similar to the previous 3D games in the series. Following the release of "The Wind Waker" came "The Legend of Zelda: Collector's Edition", which included the original "The Legend of Zelda", "Zelda II", "Ocarina of Time", "Majora's Mask", and a demo of "The Wind Waker".
GameSpot noted that this version of "Majora's Mask" suffered from a choppier frame rate and inconsistencies in the audio. This compilation was never sold commercially, and originally could only be obtained by purchasing a GameCube bundled with the disc (in North America, Europe and Australia), by registering a GameCube and two games at Nintendo.com, or by subscribing or renewing a subscription to "Nintendo Power" (in North America) or Club Nintendo in Sweden. In the UK, 1000 copies were made available through the Club Nintendo Stars Catalogue program. After these were quickly claimed, Nintendo gave a copy to customers who mailed in proofs of purchase from select GameCube games. The next game released in the series was "" for the GameCube, released in early 2004 in Japan and America, and in January 2005 in Europe. Based on the handheld "", "Four Swords Adventures" was another deviation from previous "Zelda" gameplay, focusing on level-based and multiplayer gameplay. The game contains 24 levels and a map screen; there is no connecting overworld. For multiplayer features, each player must use a Game Boy Advance system linked to the GameCube via a Nintendo GameCube – Game Boy Advance link cable. The game features a single-player campaign, in which using a Game Boy Advance is optional. "Four Swords Adventures" includes two gameplay modes: "Hyrulean Adventure", with a plot and gameplay similar to other "Zelda" games, and "Shadow Battle", in which multiple Links, played by multiple players, battle each other. The Japanese and Korean versions include an exclusive third segment, "Navi Trackers" (originally designed as the stand-alone game "Tetra's Trackers"), which contains spoken dialogue for most of the characters, unlike other games in "The Legend of Zelda" series. In November 2004 in Japan and Europe, and January 2005 in America, Nintendo released "" for the Game Boy Advance. In "The Minish Cap", Link can shrink in size using a mystical, sentient hat named Ezlo.
While shrunk, he can see previously explored parts of a dungeon from a different perspective, and enter areas through otherwise-impassable openings. In November 2006, "" was released as the first "Zelda" game on the Wii, and later, in December 2006, as the last official Nintendo game for the GameCube, the console for which it was originally developed. The Wii version features a mirrored world, where everything that is in the west on the GameCube is in the east on the Wii, and vice versa; the display is mirrored in order to make Link right-handed, so that use of the Wii Remote feels more natural. The game chronicles the struggle of an older Link to rid Hyrule of the troubles of the encroaching "Twilight Realm", a mysterious force that appears around the kingdom. When he enters this realm, he is transformed into a wolf and loses the ability to use his sword, shield or other items, but gains other abilities, such as sharpened senses, from his new form. "Twilight Princess" includes an incarnation of Link's horse, Epona, for fast transportation, and features mounted battle scenarios, including boss battles, that were not seen in previous games. "Twilight Princess" departed from the cel shading of "The Wind Waker" in favour of more detailed textures, giving the game a darker atmosphere and a more adult feel than previous games. At the 2006 Game Developers Conference, a trailer for "" for the Nintendo DS was shown. It revealed traditional top-down "Zelda" gameplay optimised for the DS's features, with a cel-shaded 3D graphical style similar to "The Wind Waker". At E3 2006, Nintendo confirmed the game's status as a direct sequel to "The Wind Waker", and released an extensive playable demo, including a multiplayer mode with "capture the flag" elements. "Phantom Hourglass" was released on June 23, 2007, in Japan, October 1, 2007, in North America and October 19, 2007, in Europe. The next "Legend of Zelda" for the DS, "", was released in December 2009.
In this game, the 'spirit tracks', railroads which chain an ancient evil, are disappearing from Hyrule. Zelda and Link travel to the 'Spirit Tower' (the ethereal point of convergence for the tracks) to find out why, but villains steal Zelda's body to resurrect the Demon King. Left a disembodied spirit, Zelda can be seen only by Link (and a certain few sages). Together they go on a quest to restore the spirit tracks, defeat the Demon King, and return Zelda to her body. The game uses a modified version of the "Phantom Hourglass" engine; its most notable new feature is that the Phantom Guardians seen in "Phantom Hourglass" are, through a series of events, periodically controllable. It was the first time in the series that Link and Zelda worked together on a quest. In April 2008, Miyamoto stated that "the "Zelda" team is forming again to work on new games". Miyamoto clarified in July that the "Zelda" team had been working on a new "Zelda" game for the Wii. In January 2010, Nintendo president Satoru Iwata stated that the game would be coming out at some time in 2010, and confirmed that it would make use of the Wii's MotionPlus feature, which had been announced too late to be integrated into the Wii release of "Twilight Princess". The game's subtitle was announced at E3 2010 as "Skyward Sword", but its release was delayed to 2011. The game, the earliest in the "Legend of Zelda" timeline, reveals the origins of Hyrule, Ganon and many elements featured in previous games. It was released on November 20, 2011; the first run included a 25th Anniversary CD of fully orchestrated music from various "Zelda" games, including "Skyward Sword".
In addition, Nintendo celebrated the 25th anniversary of "The Legend of Zelda" series by releasing a "Zelda" game for each of its current consoles in 2011: "Link's Awakening" on the 3DS Virtual Console on June 7, "Ocarina of Time 3D" for the 3DS in mid-June, "Four Swords Anniversary Edition" from September 28, 2011, to February 20, 2012, as a free DSiWare download, and "Skyward Sword" for the Wii, which was released on November 18, 2011, in Europe; on November 20, 2011, in the United States; and on November 24, 2011, in Australia. A limited edition "Zelda" 25th anniversary 3DS was released on December 1, 2011, in Australia. "", a remaster of the original GameCube game, was released by Nintendo on September 20, 2013, digitally on the Nintendo eShop in North America, with a retail release on September 26 in Japan, October 4 in North America and Europe, and October 5 in Australia. A month later, Nintendo released "" for the Nintendo 3DS, which takes place in the same setting as "A Link to the Past". Nintendo released a second 3DS version, "", in North America and Europe on February 13, 2015, and in Japan and Australia on February 14, 2015. At E3 2015, Nintendo announced "", a cooperative multiplayer game released for the 3DS in October 2015. "", a high-definition remastering of "Twilight Princess", was released for the Wii U in March 2016. Nintendo had showcased a demo reel at E3 2011, which depicted Link fighting a monster in HD. In January 2013, Nintendo revealed that a new "Legend of Zelda" game was being planned for the Wii U. The game was officially teased at E3 2014, and was scheduled to be released in 2015. However, in March 2015, the game was delayed to 2016. In April 2016, the game was delayed again to 2017; it was also announced that it would be simultaneously released on the Wii U and Nintendo Switch. At E3 2016, the game was showcased under the title "". The game was released on March 3, 2017.
In February 2019, Nintendo announced a remake that would be released for the Nintendo Switch later that year, on September 20, 2019. On June 11, 2019, Nintendo announced a sequel to "" during its Nintendo Direct E3 2019 presentation. A series of video games was developed and released for the Philips CD-i in the early 1990s as a product of a compromise between Philips and Nintendo, after the companies failed to develop a CD-ROM peripheral for the Super NES. Created independently, with no observation by or influence from Nintendo, the games are , together with "Zelda's Adventure". Nintendo never acknowledged them in the "Zelda" timeline, and they are considered to be in a separate, self-contained canon. These games are widely regarded as the worst installments in the series. Three "Zelda"-themed LCD games were created between 1989 and 1992. The "Zelda" version of Nintendo's Game & Watch series was released first, in August 1989, as a dual-screen handheld electronic game similar in appearance to the later Nintendo DS. It was re-released in 1998 as a Toymax, Inc. Mini Classic and was later included as an unlockable extra in "Game & Watch Gallery 4", a 2002 compilation for the Game Boy Advance. While the Game & Watch "Zelda" was developed in-house by Nintendo, the subsequent two LCD games were developed by third parties under license from Nintendo. In October 1989, "The Legend of Zelda" was developed by Nelsonic as part of its Game Watch line; this game was an actual digital watch with primitive gameplay based on the original "Legend of Zelda". In 1992, Epoch Co. developed "" for its Barcode Battler II console. The game employed card-scanning technology similar to the later-released Nintendo e-Reader. Throughout the lifespan of "The Legend of Zelda" series, a number of games (including main series games as well as re-releases and spin-offs) in varying states of completeness have had their releases cancelled.
Perhaps the earliest of these was Gottlieb's "The Legend of Zelda Pinball Machine" (cancelled 1993). After Gottlieb secured a license from Nintendo to produce two Nintendo-franchise-based pinball machines, pinball designer Jon Norris was tasked with designing the table. Before it was completed, Gottlieb decided to repurpose the game with an "American Gladiators" theme. Licensing for this version ultimately fell through, and the game was released as simply "Gladiators" (November 1993). In 1998, Nintendo cancelled "The Legend of Zelda: Ocarina of Time Ura". Originally intended as an expansion disk for "" on the Nintendo 64DD, poor sales figures for the 64DD system led Nintendo to cancel its plans for the release. In 2002, Nintendo released a bonus disc called "". It contained emulated versions of "Ocarina of Time" and "Ocarina of Time Master Quest" with a number of modifications originally planned for release in "Ocarina of Time Ura", including GUI textures and text modified to reflect the GameCube. In 2001, Capcom, which had been developing the game under license from Nintendo, cancelled the release of "The Legend of Zelda: Mystical Seed of Courage" for the Game Boy Color. Working with a Capcom team, Yoshiki Okamoto was originally tasked with designing a series of three "Zelda" games for the Game Boy Color. Referred to as the "Triforce Series", the games were known as "The Legend of Zelda: The Mysterious Acorn: Chapter of Power", "Chapter of Wisdom", and "Chapter of Courage" in Japan and "The Legend of Zelda: Mystical Seed of Power", "Mystical Seed of Wisdom", and "Mystical Seed of Courage" in the US. The games were to interact using a password system, but the limitations of this system and the difficulty of coordinating three games proved too complicated, so the team scaled back to two games at Miyamoto's suggestion. "" was adapted from "Mystical Seed of Power", "" was adapted from "Mystical Seed of Wisdom", and "Mystical Seed of Courage" was cancelled. 
Before its 2006 release, both Link and Samus (from the "Metroid" series) were planned as playable characters in the Wii version of "". However, they were cut from the final release because they were not Marvel characters. In 2011, an unnamed "Zelda" 25th Anniversary Compilation was cancelled. To celebrate the 25th anniversary of the series, Nintendo of America originally had planned to release a compilation of games for the Wii, similar to the collector's edition disc released for the GameCube in 2003. However, Nintendo of Japan's president Satoru Iwata and Shigeru Miyamoto decided against releasing it, believing it would be too similar to the Super Mario 25th Anniversary collection released in 2010. As the franchise has grown in popularity, several games have been released that are set within or star a minor character from the universe of "The Legend of Zelda" but are not directly connected to the main "The Legend of Zelda" series. Both map versions of the game "BS Zelda no Densetsu" for the Satellaview (released in August and December 1995) could be considered spin-offs because they star the "Hero of Light" (portrayed by either the Satellaview's male or female avatar) rather than Link as the protagonist of Hyrule. A third Satellaview game released in March 1997, "BS Zelda no Densetsu Inishie no Sekiban" ("BS The Legend of Zelda: Ancient Stone Tablets"), could also be considered a spin-off for the same reason. Other spin-off games include "Freshly-Picked Tingle's Rosy Rupeeland" for the Nintendo DS, an RPG released in September 2006 in Japan (and in summer 2007 in the UK) starring the supporting character Tingle. A second Tingle game, "Tingle's Balloon Fight DS" for the Nintendo DS, is an arcade-style platformer released in April 2007 only in Japan and available solely to Platinum Club Nintendo members. 
In addition to games in which Link does not star as the protagonist, games such as the shooter "Link's Crossbow Training" for the Wii have been considered spin-offs due to the lack of a traditional "Save Hyrule" plotline. Released in November 2007 as a bundle with the Wii Zapper, the game allows players to assume the identity of Link as he progresses through a series of tests to perfect his crossbow marksmanship. "Color Changing Tingle's Love Balloon Trip" was released in Japan in 2009 as a sequel to "Freshly-Picked Tingle's Rosy Rupeeland". "Hyrule Warriors", a crossover game combining the setting of Nintendo's "The Legend of Zelda" series with the gameplay of Tecmo Koei's "Dynasty Warriors" series, was announced for the Wii U video game system in December 2013 and was released in North America in September 2014. "Hyrule Warriors Legends", a version for the Nintendo 3DS containing more content and gameplay modifications, was released in March 2016. To commemorate the launch of the My Nintendo loyalty program in March 2016, Nintendo released "My Nintendo Picross: The Legend of Zelda: Twilight Princess", a Picross puzzle game developed by Jupiter for download on the Nintendo 3DS. "Cadence of Hyrule", developed by Brace Yourself Games and released on June 13, 2019, is an officially licensed crossover of "Zelda" with "Crypt of the NecroDancer". The "Legend of Zelda" series has crossed over into other Nintendo and third-party video games, most prominently in the "Super Smash Bros." series of fighting games published by Nintendo. Link appears as a fighter in "Super Smash Bros." for the Nintendo 64, the first entry in the series, and is part of the roster in all subsequent releases. Zelda (who can also transform into Sheik), Ganondorf, and Young Link (the child version of Link from "Ocarina of Time") were added to the player roster for "Super Smash Bros. 
Melee" and appeared in all subsequent releases, except for Young Link, who was replaced by Toon Link from "The Wind Waker" in "Super Smash Bros. Brawl" and "Super Smash Bros. for Nintendo 3DS and Wii U"; both Young Link and Toon Link appear in the fifth installment, "Super Smash Bros. Ultimate". Other elements from the series, such as locations and items, are also included throughout the "Smash Bros." series. Outside of the series, Nintendo allowed the use of Link as a playable character exclusively in the GameCube release of Namco's fighting game "Soulcalibur II". The "Legend of Zelda" series has received outstanding levels of acclaim from critics and the public. "", "", "", and "" have each received a perfect 40/40 score (10/10 from each of four reviewers) from the Japanese magazine "Famitsu", making "Zelda" one of the few series with multiple perfect scores. "Ocarina of Time" was even listed by "Guinness World Records" as the highest-rated video game in history, citing its Metacritic score of 99 out of 100. "Computer and Video Games" awarded "The Wind Waker" and "" a score of 10/10. "" won a Gold Award from "Electronic Gaming Monthly". In "Nintendo Power"'s Top 200 countdown in 2004, "Ocarina of Time" took first place, and seven other "Zelda" games placed in the top 40. "Twilight Princess" was named Game of the Year by "X-Play", "GameTrailers", "1UP", "Electronic Gaming Monthly", "Spacey Awards", "Game Informer", "GameSpy", "Nintendo Power", "IGN", and many other websites. The editors of the review aggregator websites GameRankings, IGN and Metacritic have all given "Ocarina of Time" their highest aggregate scores. "Game Informer" has awarded "The Wind Waker", "Twilight Princess", "Skyward Sword", "A Link Between Worlds" and "Breath of the Wild" scores of 10/10. "" was named DS Game of the Year by IGN and "GameSpy". 
Airing December 10, 2011, Spike TV's annual Video Game Awards gave the series the first-ever "Hall of Fame Award", which Miyamoto accepted in person. "" and its use of melodic themes to identify different game regions has been called a reverse of Richard Wagner's use of leitmotifs to identify characters and themes. "Ocarina of Time" was so well received that sales of real ocarinas increased. IGN praised the music of "Majora's Mask" for its brilliance despite its heavy use of MIDI. It has been ranked the seventh-greatest game by "Electronic Gaming Monthly", whereas "Ocarina of Time" was ranked eighth. The series won GameFAQs' "Best Series Ever" competition. The "Legend of Zelda" franchise has sold over 90 million copies, with the original "The Legend of Zelda" being the fourth best-selling NES game of all time. The series was ranked as the 64th top game (collectively) by "Next Generation" in 1996. In 1999, "Next Generation" listed the "Zelda" series as number 1 on their "Top 50 Games of All Time", commenting that, "With incredible level and dungeon design, Shigeru Miyamoto's "Zelda" series has always had more gameplay in its pinky finger than most other titles have in their entire bodies." According to the British film magazine "Empire", with "the most vividly-realised world and the most varied game-play of any game on any console, "Zelda" is a solid bet for the best game series ever." Multiple members of the game industry have expressed how "Zelda" games have impacted them. Rockstar Games co-founder and "Grand Theft Auto" director Dan Houser stated, "Anyone who makes 3-D games who says they've not borrowed something from "Mario" or "Zelda" [on the Nintendo 64] is lying." Rockstar co-founder and "Grand Theft Auto" director Sam Houser also cited the influence of "Zelda", describing "Grand Theft Auto III" as "Zelda meets Goodfellas". 
"Ōkami" director Hideki Kamiya (Capcom, PlatinumGames) stated that he was influenced by "The Legend of Zelda" series in developing the game, citing "" as his favorite game of all time. "Soul Reaver" and "Uncharted" director Amy Hennig (Crystal Dynamics, Naughty Dog) cited "Zelda" as inspiration for the "Legacy of Kain" series, noting "A Link to the Past"'s influence on "Blood Omen" and "Ocarina of Time"'s influence on "Soul Reaver". "Soul Reaver" and "Uncharted" creator Richard Lemarchand (Crystal Dynamics, Naughty Dog) cited "A Link to the Past"'s approach to combining gameplay with storytelling as inspiration for "Soul Reaver". "Wing Commander" and "Star Citizen" director Chris Roberts (Origin Systems, Cloud Imperium Games) cited "Zelda" as an influence on his action role-playing game "Times of Lore". "Souls" creator Hidetaka Miyazaki (FromSoftware) named "A Link to the Past" as one of his favorite role-playing video games. According to Miyazaki, ""The Legend of Zelda" became a sort of textbook for 3D action games." "Ico" director Fumito Ueda (Team Ico) cited "Zelda" as an influence on "Shadow of the Colossus". "Fable" series director Peter Molyneux (Lionhead Studios, Microsoft Studios) stated that "" is one of his favorite games. "I just feel it's jaw-dropping and its use of the hardware was brilliant. And I've played that game through several times," he said to TechRadar. "Darksiders" director David Adams (Vigil Games) cited "Zelda" as an influence on his work. "Prince of Persia" and "Assassin's Creed" director Raphael Lacoste (Ubisoft) cited "The Wind Waker" as an influence on "". CD Projekt Red ("The Witcher", "Cyberpunk 2077") cited the "Zelda" series as an influence on "The Witcher" series, including "". Alex Hall cited the series as the primary influence on their "Ben Drowned" web serial and web series. 
"Final Fantasy" and "The 3rd Birthday" director Hajime Tabata (Square Enix) cited "Ocarina of Time" as inspiration for the seamless open world of "Final Fantasy XV". A 13-episode American animated TV series, adapted by DiC and distributed by Viacom Enterprises, aired in 1989. The animated "Zelda" shorts were broadcast each Friday, instead of the usual "Super Mario Bros." cartoon, which aired during the rest of the week. The series loosely follows the two NES "Zelda" games (the original "The Legend of Zelda" and "The Adventure of Link"), mixing settings and characters from those games with original creations. The show's older incarnations of both Link and Zelda appear in various episodes of "" during its second season. Valiant Comics released a short series of comics featuring characters and settings from the "Zelda" cartoon as part of their "Nintendo Comics System" line. Manga adaptations of many entries in the series, including "A Link to the Past", "Ocarina of Time", "Majora's Mask", "Oracle of Seasons" and "Oracle of Ages", "Four Swords Adventures", "The Minish Cap", and "Phantom Hourglass", have been produced under license from Nintendo, mostly in Japan. These adaptations do not strictly follow the plots of the games on which they are based and may contain additional story elements. A number of official books, novels, and gamebooks have been released based on the series as well. The earliest was "Moblin's Magic Spear", published in 1989 by Western Publishing under their Golden Books Family Entertainment division and written by Jack C. Harris. It is set sometime during the events of the first game. Two gamebooks were published as part of the "Nintendo Adventure Books" series by Archway, both of which were written by Matt Wayne. The first was "The Crystal Trap" (which focuses more on Zelda) and the second was "The Shadow Prince". Both were released in 1992. A novel based on "Ocarina of Time" was released in 1999, written by Jason R. Rich and published by Sybex Inc. 
under their "Pathways to Adventure" series. Another two gamebooks were released as part of the "You Decide on the Adventure" series published by Scholastic. The first book was based on "Oracle of Seasons" and was released in 2001. The second, based on "Oracle of Ages", was released in 2002. Both were written by Craig Wessel. In 2006, Scholastic released a novel as part of their "Nintendo Heroes" series, "Link and the Portal of Doom". It was written by Tracey West and is set shortly after the events of "Ocarina of Time". In 2011, to coincide with the 25th anniversary of the series, an art book was published exclusively in Japan under the name "" by Shogakukan. It contains concept art from the series's conception to the release of "Skyward Sword" in 2011 and multiple essays about the production of the games, as well as an overarching timeline of the series. It also includes a prequel manga to "Skyward Sword" by the Zelda manga duo Akira Himekawa. The book received an international release by publisher Dark Horse Comics on January 29, 2013; it took the number one spot on Amazon's sales chart, displacing E. L. James's "Fifty Shades of Grey" trilogy. Dark Horse released "The Legend of Zelda: Art & Artifacts", a follow-up art book to "Hyrule Historia" containing additional artwork and interviews, in North America on February 21, 2017, and in Europe on February 23, 2017. Held in Cologne, Germany, on September 23, 2010, the video game music concert "Symphonic Legends" focused on music from Nintendo games, among them "The Legend of Zelda". Following an intermission, the second half of the concert was devoted entirely to an expansive symphonic poem based on the series. The 35-minute epic tells the story of Link's evolution from child to hero. To celebrate the 25th anniversary of the series in 2011, Nintendo commissioned an original symphony, "". 
The show was originally performed in the fall of 2011 in Los Angeles and consists of live performances of much of the music from the series. It has since been scheduled for 18 further shows throughout the United States and Canada. Nintendo released a CD, "The Legend of Zelda 25th Anniversary Special Orchestra CD". Featuring eight tracks from live performances of the symphony, the CD is included with the special edition of "The Legend of Zelda: Skyward Sword" for the Wii. Nintendo later celebrated the 30th anniversary of "The Legend of Zelda" with an album released in Japan in February 2017. In 2007, Imagi Animation Studios, which had provided the animation for "TMNT" and "Astro Boy", created a pitch reel for a computer-animated "The Legend of Zelda" movie. Nintendo declined the studio's offer, mindful of the failure of the 1993 live-action film adaptation of "Super Mario Bros." In 2013, Aonuma stated that, if development of a film began, the company would want to use the opportunity to embrace audience interaction in some capacity. A "The Legend of Zelda"-themed "Monopoly" board game was released in the United States on September 15, 2014. A "Clue" board game in the style of "The Legend of Zelda" series was released in June 2017. A "UNO"-styled "The Legend of Zelda" game was announced in February 2018 for release exclusively at GameStop in North America.
https://en.wikipedia.org/wiki?curid=30035
Tor Nørretranders Tor Nørretranders (born June 20, 1955) is a Danish author of popular science. He was born in Copenhagen, Denmark. His books and lectures have primarily focused on light popular science and its role in society, often with Nørretranders' own advice about how society should integrate new scientific findings. He introduced the notion of exformation in his book "The User Illusion". Tor Nørretranders' mother is Yvonne Levy (born 1920) and his father was Bjarne Nørretranders (1922–1986). Tor Nørretranders graduated from "Det frie gymnasium" in 1973 and earned a cand.techn.soc. degree from Roskilde University in 1982, specializing in environmental planning and its theoretical scientific basis. He lives north of Copenhagen with his wife Rikke Ulk and three children.
https://en.wikipedia.org/wiki?curid=30036
Puerto Rico Puerto Rico (; abbreviated PR), officially the Commonwealth of Puerto Rico () and in previous centuries called Porto Rico in English, is an unincorporated territory of the United States located in the northeast Caribbean Sea, approximately southeast of Miami, Florida. Puerto Rico is an archipelago among the Greater Antilles located between the Dominican Republic and the U.S. Virgin Islands; it includes the eponymous main island and several smaller islands, such as Mona, Culebra, and Vieques. The capital and most populous city is San Juan. The territory's total population is approximately 3.2 million, more than that of 20 U.S. states. Spanish and English are the official languages of the executive branch of government, though Spanish predominates. Originally populated by the indigenous Taíno people, Puerto Rico was colonized by Spain following the arrival of Christopher Columbus in 1493. It was contested by various other European powers but remained a Spanish possession for the next four centuries. The island's cultural and demographic landscapes were shaped by the displacement and assimilation of the native population, the forced migration of African slaves, and settlement primarily from the Canary Islands and Andalusia. In the Spanish Empire, Puerto Rico played a secondary but strategic role compared to wealthier colonies like Peru and New Spain. By the late 19th century, a distinct Puerto Rican identity began to emerge, based on a unique creole Hispanic culture and language that combined indigenous, African, and European elements. In 1898, following the Spanish–American War, the United States acquired Puerto Rico, which remains an unincorporated territorial possession, making it the world's oldest colony. Puerto Ricans have been citizens of the United States since 1917, and can move freely between the island and the mainland. As it is not a state, Puerto Rico does not have a vote in the U.S. 
Congress, which governs the territory with full jurisdiction under the Puerto Rico Federal Relations Act of 1950. Puerto Rico's sole congressional representation is through one non-voting member of the House called a Resident Commissioner. As residents of a U.S. territory, American citizens in Puerto Rico are disenfranchised at the national level, do not vote for the president or vice president of the U.S., and in most cases do not pay federal income tax. Congress approved a local constitution in 1952, allowing U.S. citizens of the territory to elect a governor. Puerto Rico's future political status has consistently been a matter of significant debate. By Latin American standards, Puerto Rico has the highest GDP per capita and the most developed and competitive economy; however, its poverty rate is higher than that of the poorest U.S. state, and the territory struggles with chronically large debt, considerable unemployment, and a high rate of emigration. The 21st century has seen several major challenges, including a government-debt crisis and devastation by Hurricane Maria. Puerto Rico is Spanish for "rich port". Puerto Ricans often call the island "Borinquen", a derivation of "Borikén", its indigenous Taíno name, which means "Land of the Valiant Lord". Terms derived from that name are commonly used to identify someone of Puerto Rican heritage. The island is also popularly known by a Spanish nickname meaning "the island of enchantment". Columbus named the island San Juan Bautista, in honor of Saint John the Baptist, while the capital city was given a name meaning "Rich Port City". Eventually traders and other maritime visitors came to refer to the entire island as Puerto Rico, while San Juan became the name used for the main trading/shipping port and the capital city. The island's name was changed to "Porto Rico" by the United States after the Treaty of Paris of 1898. The anglicized name was used by the U.S. government and private enterprises. 
The name was changed back to Puerto Rico by a joint resolution in Congress introduced by Félix Córdova Dávila in 1931. The official name of the entity in Spanish is ("free associated state of Puerto Rico"), while its official English name is Commonwealth of Puerto Rico. The ancient history of the archipelago which is now Puerto Rico is not well known. Unlike other indigenous cultures in the New World (Aztec, Maya and Inca), which left behind abundant archaeological and physical evidence of their societies, scant artifacts and evidence remain of Puerto Rico's indigenous population. Scarce archaeological findings and early Spanish accounts from the colonial era constitute all that is known about them. The first comprehensive book on the history of Puerto Rico was written by Fray Íñigo Abbad y Lasierra in 1786, nearly three centuries after the first Spaniards landed on the island. The first known settlers were the Ortoiroid people, an Archaic Period culture of Amerindian hunters and fishermen who migrated from the South American mainland. Some scholars suggest their settlement dates back about 4,000 years. An archaeological dig in 1990 on the island of Vieques found the remains of a man, designated as the "Puerto Ferro Man", which were dated to around 2000 BC. The Ortoiroid were displaced by the Saladoid, a culture from the same region that arrived on the island between 430 and 250 BC. The Igneri tribe migrated to Puerto Rico between 120 and 400 AD from the region of the Orinoco river in northern South America. The Arcaico (Archaic) and Igneri co-existed on the island between the 4th and 10th centuries. Between the 7th and 11th centuries, the Taíno culture developed on the island. By approximately 1000 AD, it had become dominant. At the time of Columbus' arrival, an estimated 30,000 to 60,000 Taíno Amerindians, led by the "cacique" (chief) Agüeybaná, inhabited the island. They called it "Boriken", meaning "the great land of the valiant and noble Lord". 
The natives lived in small villages, each led by a cacique. They subsisted by hunting and fishing, done generally by men, as well as by the women's gathering and processing of indigenous cassava root and fruit. This way of life lasted until Columbus arrived in 1493. When Columbus arrived in Puerto Rico during his second voyage on November 19, 1493, the island was inhabited by the Taíno. They called it "Borikén" ("Borinquen" in Spanish transliteration). Columbus named the island San Juan Bautista, in honor of Saint John the Baptist. Having reported the findings of his first voyage, Columbus brought with him this time a letter from King Ferdinand, empowered by a papal bull that authorized any course of action necessary for the expansion of the Spanish Empire and the Christian faith. Juan Ponce de León, a lieutenant under Columbus, founded the first Spanish settlement, Caparra, on August 8, 1508. He later served as the first governor of the island. Eventually, traders and other maritime visitors came to refer to the entire island as Puerto Rico, and San Juan became the name of the main trading/shipping port. At the beginning of the 16th century, the Spanish began to colonize the island. Despite the Laws of Burgos of 1512 and other decrees for the protection of the indigenous population, some Taíno were forced into the encomienda system of compulsory labor in the early years of colonization. The population suffered extremely high fatalities from epidemics of European infectious diseases. In 1520, King Charles I of Spain issued a royal decree collectively emancipating the remaining Taíno population. By that time, the Taíno people were few in number. Enslaved Africans had already begun to be imported to compensate for the loss of native labor, but their numbers were proportionate to the diminished commercial interest Spain soon began to demonstrate for the island colony. 
Other nearby islands, like Cuba, Hispaniola, and Guadeloupe, attracted more of the slave trade than Puerto Rico, probably because of greater agricultural interests in those islands, on which colonists had developed large sugar plantations and had the capital to invest in the Atlantic slave trade. From the beginning of the colony, the colonial administration relied heavily on the industry of enslaved Africans and creole blacks for public works and defenses, primarily in coastal ports and cities, where the tiny colonial population had hunkered down. With no significant industries or large-scale agricultural production as yet, enslaved and free communities lodged around the few littoral settlements, particularly around San Juan, also forming lasting Afro-creole communities. Meanwhile, in the island's interior, there developed a mixed and independent peasantry that relied on a subsistence economy. This mostly unsupervised population supplied villages and settlements with foodstuffs and, in relative isolation, set the pattern for what later would be known as the Puerto Rican Jíbaro culture. By the end of the 16th century, the Spanish Empire was diminishing and, in the face of increasing raids from European competitors, the colonial administration throughout the Americas fell into a "bunker mentality". Imperial strategists and urban planners redesigned port settlements into military posts with the objective of protecting Spanish territorial claims and ensuring the safe passage of the king's silver-laden Atlantic Fleet to the Iberian Peninsula. San Juan served as an important port-of-call for ships driven across the Atlantic by its powerful trade winds. West Indies convoys linked Spain to the island, sailing between Cádiz and the Spanish West Indies. The colony's seat of government was on the forested Islet of San Juan, which for a time became one of the most heavily fortified settlements in the Spanish Caribbean, earning the name of the "Walled City". 
The islet is still dotted with the various forts and walls, such as La Fortaleza, Castillo San Felipe del Morro, and Castillo San Cristóbal, designed to protect the population and the strategic Port of San Juan from raids by Spain's European competitors. In 1625, in the Battle of San Juan, the Dutch commander Boudewijn Hendricksz tested the defenses' limits like no one before. Learning from Francis Drake's previous failures there, he circumvented the cannons of the castle of San Felipe del Morro and quickly brought his 17 ships into San Juan Bay. He then occupied the port and attacked the city while the population hurried for shelter behind the Morro's moat and high battlements. Historians consider this event the worst attack on San Juan. Though the Dutch set the village on fire, they failed to conquer the Morro, and its batteries pounded their troops and ships until Hendricksz deemed the cause lost. Hendricksz's expedition eventually helped propel a fortification frenzy: construction of defenses for San Cristóbal Hill was soon ordered so as to prevent the landing of invaders out of reach of the Morro's artillery. Urban planning responded to the needs of keeping the colony in Spanish hands. During the late 16th and early 17th centuries, Spain concentrated its colonial efforts on the more prosperous mainland North, Central, and South American colonies. With the advent of the lively Bourbon Dynasty in Spain in the 1700s, the island of Puerto Rico began a gradual shift toward greater imperial attention. More roads began connecting previously isolated inland settlements to coastal cities, and coastal settlements like Arecibo, Mayagüez, and Ponce began acquiring an importance of their own, separate from San Juan. By the end of the 18th century, merchant ships from an array of nationalities threatened the tight regulations of the mercantilist system, which turned each colony solely toward the European metropole and limited contact with other nations. U.S. 
ships came to surpass Spanish trade, and with this also came the exploitation of the island's natural resources. Slavers, which had made but few stops on the island before, began selling more enslaved Africans to growing sugar and coffee plantations. The increasing number of Atlantic wars in which the Caribbean islands played major roles, like the War of Jenkins' Ear, the Seven Years' War, and the Atlantic Revolutions, ensured Puerto Rico's growing esteem in Madrid's eyes. On April 17, 1797, Sir Ralph Abercromby's fleet invaded the island with a force of 6,000–13,000 men, including German soldiers and Royal Marines, carried aboard 60 to 64 ships. Fierce fighting with Spanish troops continued for the next several days. Both sides suffered heavy losses. On Sunday, April 30, the British ceased their attack and began their retreat from San Juan. By the time independence movements in the larger Spanish colonies gained success, new waves of loyal creole immigrants began to arrive in Puerto Rico, helping to tilt the island's political balance toward the Crown. In 1809, to secure its political bond with the island and in the midst of the European Peninsular War, the Supreme Central Junta based in Cádiz recognized Puerto Rico as an overseas province of Spain. This gave island residents the right to elect representatives to the recently convened Spanish parliament (Cádiz Cortes), with representation equal to that of the mainland Iberian, Mediterranean (Balearic Islands), and Atlantic maritime (Canary Islands) Spanish provinces. Ramón Power y Giralt, the first Spanish parliamentary representative from the island of Puerto Rico, died after serving a three-year term in the Cortes. These parliamentary and constitutional reforms were in force from 1810 to 1814, and again from 1820 to 1823. They were twice reversed during the restoration of the traditional monarchy by Ferdinand VII. 
Immigration and commercial trade reforms in the 19th century increased the island's ethnic European population and economy and expanded the Spanish cultural and social imprint on the local character of the island. Minor slave revolts had occurred on the island throughout the years, the revolt planned and organized by Marcos Xiorro in 1821 being the most important. Even though the conspiracy was unsuccessful, Xiorro achieved legendary status and is part of Puerto Rico's folklore. In the early 19th century, Puerto Rico spawned an independence movement that, due to harsh persecution by the Spanish authorities, convened on the island of St. Thomas. The movement was largely inspired by the ideals of Simón Bolívar in establishing a United Provinces of New Granada and Venezuela that would have included Puerto Rico and Cuba. Among the influential members of this movement were Brigadier General Antonio Valero de Bernabé and María de las Mercedes Barbudo. The movement was discovered, and Governor Miguel de la Torre had its members imprisoned or exiled. With the increasingly rapid growth of independent former Spanish colonies in the South and Central American states in the first part of the 19th century, the Spanish Crown considered Puerto Rico and Cuba of strategic importance. To increase its hold on its last two New World colonies, the Spanish Crown revived the Royal Decree of Graces of 1815, as a result of which 450,000 immigrants, mainly Spaniards, settled on the island in the period up until the American conquest. Printed in three languages (Spanish, English, and French), the decree was intended to also attract non-Spanish Europeans, with the hope that the independence movements would lose their popularity if new settlers had stronger ties to the Crown. Hundreds of non-Spanish families, mainly from Corsica, France, Germany, Ireland, Italy, and Scotland, also immigrated to the island. 
Free land was offered as an incentive to those who wanted to populate the two islands, on the condition that they swear their loyalty to the Spanish Crown and allegiance to the Roman Catholic Church. The offer was very successful, and European immigration continued even after 1898. Puerto Rico still receives Spanish and European immigration. Poverty and political estrangement with Spain led to a small but significant uprising in 1868 known as the "Grito de Lares". It began in the rural town of Lares, but was subdued when rebels moved to the neighboring town of San Sebastián. Leaders of this independence movement included Ramón Emeterio Betances, considered the "father" of the Puerto Rican independence movement, and other political figures such as Segundo Ruiz Belvis. Slavery was abolished in Puerto Rico in 1873, "with provisions for periods of apprenticeship". Leaders of "El Grito de Lares" went into exile in New York City. Many joined the Puerto Rican Revolutionary Committee, founded on December 8, 1895, and continued their quest for Puerto Rican independence. In 1897, Antonio Mattei Lluberas and the local leaders of the independence movement in Yauco organized another uprising, which became known as the "Intentona de Yauco". They raised what they called the Puerto Rican flag, which was later adopted as the national flag. The local conservative political factions opposed independence. Rumors of the planned event spread to the local Spanish authorities, who acted swiftly and put an end to what would be the last major uprising on the island against Spanish colonial rule. In 1897, Luis Muñoz Rivera and others persuaded the liberal Spanish government to agree to grant limited self-government to the island by royal decree in the Autonomic Charter, including a bicameral legislature. In 1898, Puerto Rico's first, but short-lived, quasi-autonomous government was organized as an "overseas province" of Spain.
This bilaterally agreed-upon charter maintained a governor appointed by the King of Spain – who held the power to annul any legislative decision – and a partially elected parliamentary structure. In February, Governor-General Manuel Macías inaugurated the new government under the Autonomic Charter. General elections were held in March, and the new government began to function on July 17, 1898. In 1890, Captain Alfred Thayer Mahan, a member of the Naval War Board and a leading U.S. strategic thinker, published a book titled "The Influence of Sea Power upon History" in which he argued for the establishment of a large and powerful navy modeled after the British Royal Navy. Part of his strategy called for the acquisition of colonies in the Caribbean, which would serve as coaling and naval stations. They would serve as strategic points of defense with the construction of a canal through the Isthmus of Panama, to allow easier passage of ships between the Atlantic and Pacific oceans. William H. Seward, the former Secretary of State under presidents Abraham Lincoln and Andrew Johnson, had also stressed the importance of building a canal in Honduras, Nicaragua or Panama. He suggested that the United States annex the Dominican Republic and purchase Puerto Rico and Cuba. The U.S. Senate did not approve his annexation proposal, and Spain rejected the U.S. purchase offer for Puerto Rico and Cuba. Since 1894, the United States Naval War College had been developing contingency plans for a war with Spain. By 1896, the U.S. Office of Naval Intelligence had prepared a plan that included military operations in Puerto Rican waters. Except for one 1895 plan, which recommended annexation of the island then named "Isle of Pines" (later renamed Isla de la Juventud), a recommendation dropped in later planning, plans developed for attacks on Spanish territories were intended as support operations against Spain's forces in and around Cuba. Recent research suggests that the U.S.
did consider Puerto Rico valuable as a naval station, and recognized that it and Cuba generated lucrative crops of sugar – a valuable commercial commodity which the United States lacked before the development of its own sugar beet industry. On July 25, 1898, during the Spanish–American War, the U.S. invaded Puerto Rico with a landing at Guánica. After the U.S. victory in the war, Spain ceded Puerto Rico, along with the Philippines and Guam, then under Spanish sovereignty, to the U.S. under the Treaty of Paris, which went into effect on April 11, 1899. Spain relinquished sovereignty over Cuba, but did not cede it to the U.S. The United States and Puerto Rico thus began a long-standing metropolis-colony relationship. In the early 20th century, Puerto Rico was ruled by the military, with officials, including the governor, appointed by the president of the United States. The Foraker Act of 1900 gave Puerto Rico a certain amount of civilian popular government, including a popularly elected House of Representatives. The upper house and governor were appointed by the United States. Its judicial system was reformed to bring it into conformity with the American legal system; a Puerto Rico Supreme Court and a United States District Court for the territory were established. The island was also authorized a non-voting member of Congress, with the title of "Resident Commissioner", who was appointed. In addition, this Act extended all U.S. laws "not locally inapplicable" to Puerto Rico, specifying, in particular, exemption from U.S. Internal Revenue laws. The Act empowered the civil government to legislate on "all matters of legislative character not locally inapplicable", including the power to modify and repeal any laws then in existence in Puerto Rico, though the U.S. Congress retained the power to annul acts of the Puerto Rico legislature. During an address to the Puerto Rican legislature in 1906, President Theodore Roosevelt recommended that Puerto Ricans become U.S. citizens.
In 1914, the Puerto Rican House of Delegates voted unanimously in favor of independence from the United States, but this was rejected by the U.S. Congress as "unconstitutional" and in violation of the 1900 Foraker Act. In 1917, the U.S. Congress passed the Jones–Shafroth Act (popularly known as the Jones Act), which granted Puerto Ricans born on or after April 25, 1898, U.S. citizenship. Opponents, including all of the Puerto Rican House of Delegates (who voted unanimously against it), claimed that the U.S. imposed citizenship in order to draft Puerto Rican men into the army, with American entry into World War I as the likely motive. The same Act provided for a popularly elected Senate to complete a bicameral Legislative Assembly, as well as a bill of rights. It authorized the popular election of the Resident Commissioner to a four-year term. Natural disasters, including a major earthquake and tsunami in 1918 and several hurricanes, as well as the Great Depression, impoverished the island during the first few decades under U.S. rule. Some political leaders, such as Pedro Albizu Campos, who led the Puerto Rican Nationalist Party, demanded a change in relations with the United States. He organized a protest at the University of Puerto Rico in 1935, in which four people were killed by police. In 1936, U.S. senator Millard Tydings introduced a bill supporting independence for Puerto Rico; he had previously co-sponsored the Tydings–McDuffie Act, which provided independence for the Philippines following a 10-year transition period of limited autonomy. While virtually all Puerto Rican political parties supported the bill, it was opposed by Luis Muñoz Marín of the Liberal Party of Puerto Rico, leading to its defeat. In 1937, Albizu Campos' party organized a protest in Ponce. The Insular Police, a force similar to the National Guard, opened fire upon unarmed cadets and bystanders alike. The attack on unarmed protesters was reported by U.S.
Congressman Vito Marcantonio and confirmed by a report from the Hays Commission, led by Arthur Garfield Hays, counsel to the American Civil Liberties Union, which investigated the events. Nineteen people were killed and over 200 were badly wounded, many shot in the back while running away. The Hays Commission declared it a massacre and police mob action, and it has since become known as the Ponce massacre. In the aftermath, on April 2, 1943, Tydings introduced another bill in Congress calling for independence for Puerto Rico, though it was again defeated. During the latter years of the Roosevelt–Truman administrations, the internal governance of the island was changed in a compromise reached with Luis Muñoz Marín and other Puerto Rican leaders. In 1946, President Truman appointed the first Puerto Rican-born governor, Jesús T. Piñero. Since 2007, the Puerto Rico State Department has developed a protocol to issue certificates of Puerto Rican citizenship to Puerto Ricans. In order to be eligible, applicants must have been born in Puerto Rico, have been born outside of Puerto Rico to a Puerto Rican–born parent, or be an American citizen with at least one year of residence in Puerto Rico. In 1947, the U.S. Congress passed the Elective Governor Act, signed by President Truman, allowing Puerto Ricans to vote for their own governor. The first elections under this act were held the following year, on November 2, 1948. On May 21, 1948, a bill was introduced before the Puerto Rican Senate which would restrain the rights of the independence and Nationalist movements on the island. The Senate, controlled by the "Partido Popular Democrático" (PPD) and presided over by Luis Muñoz Marín, approved the bill that day. This bill, which resembled the anti-communist Smith Act passed in the United States in 1940, became known as the "Ley de la Mordaza" (Gag Law) when the U.S.-appointed governor of Puerto Rico, Jesús T. Piñero, signed it into law on June 10, 1948.
Under this new law, it would be a crime to print, publish, sell, or exhibit any material intended to paralyze or destroy the insular government, or to organize any society, group or assembly of people with a similar destructive intent. It made it illegal to sing a patriotic song, and reinforced the 1898 law that had made it illegal to display the Flag of Puerto Rico, with anyone found guilty of disobeying the law in any way being subject to a sentence of up to ten years' imprisonment, a fine of up to US$10,000, or both. According to Dr. Leopoldo Figueroa, the only non-PPD member of the Puerto Rico House of Representatives, the law was repressive and in violation of the First Amendment of the U.S. Constitution, which guarantees freedom of speech. He asserted that the law as such was a violation of the civil rights of the people of Puerto Rico. The law was repealed in 1957. In the November 1948 election, Muñoz Marín became the first popularly elected governor of Puerto Rico, replacing U.S.-appointed Piñero on January 2, 1949. In 1950, the U.S. Congress granted Puerto Ricans the right to organize a constitutional convention via a referendum that gave them the option of voting their preference, "yes" or "no", on a proposed U.S. law that would organize Puerto Rico as a "commonwealth" that would continue United States sovereignty over Puerto Rico and its people. Puerto Rico's electorate expressed its support for this measure in 1951, with a second referendum held to ratify the constitution. The Constitution of Puerto Rico was formally adopted on July 3, 1952. The Constitutional Convention specified the name by which the body politic would be known. On February 4, 1952, the convention approved Resolution 22, which chose in English the word "Commonwealth", meaning a "politically organized community" or "state", which is simultaneously connected by a compact or treaty to another political system.
Puerto Rico officially designates itself with the term "Commonwealth of Puerto Rico" in its constitution, as a translation into English of the term "Estado Libre Asociado" (ELA). In 1967, Puerto Rico's Legislative Assembly polled the political preferences of the Puerto Rican electorate by passing a plebiscite act that provided for a vote on the status of Puerto Rico. This constituted the first plebiscite by the Legislature for a choice among three status options (commonwealth, statehood, and independence). In subsequent plebiscites organized by Puerto Rico and held in 1993 and 1998 (without any formal commitment on the part of the U.S. government to honor the results), the current political status failed to receive majority support. In 1993, Commonwealth status won by a plurality of votes (48.6% versus 46.3% for statehood), while the "none of the above" option, which was the Popular Democratic Party-sponsored choice, won in 1998 with 50.3% of the votes (versus 46.5% for statehood). Disputes arose as to the definition of each of the ballot alternatives, and Commonwealth advocates, among others, reportedly urged a vote for "none of the above". In 1950, the U.S. Congress approved Public Law 600 (P.L. 81-600), which allowed for a democratic referendum in Puerto Rico to determine whether Puerto Ricans desired to draft their own local constitution. This Act was meant to be adopted in the "nature of a compact". It required congressional approval of the Puerto Rico Constitution before it could go into effect, and repealed certain sections of the Organic Act of 1917. The sections of this statute left in force were entitled the "Puerto Rican Federal Relations Act". U.S. Secretary of the Interior Oscar L.
Chapman, under whose department resided responsibility for Puerto Rican affairs, clarified the scope of the new commonwealth status. On October 30, 1950, Pedro Albizu Campos and other nationalists led a three-day revolt against the United States in various cities and towns of Puerto Rico, in what is known as the Puerto Rican Nationalist Party Revolts of the 1950s. The most notable occurred in Jayuya and Utuado. In the Jayuya revolt, known as the "Jayuya Uprising", the Puerto Rican governor declared martial law and attacked the insurgents in Jayuya with infantry, artillery and bombers under the control of the Puerto Rican commander. The "Utuado Uprising" culminated in what is known as the Utuado massacre. On November 1, 1950, Puerto Rican nationalists from New York City, Griselio Torresola and Oscar Collazo, attempted to assassinate President Harry S. Truman at his temporary residence, Blair House. Torresola was killed during the attack, but Collazo was wounded and captured. He was convicted of murder and sentenced to death, but President Truman commuted his sentence to life imprisonment. After Collazo served 29 years in a federal prison, President Jimmy Carter commuted his sentence to time served, and he was released in 1979. Pedro Albizu Campos served many years in a federal prison in Atlanta for seditious conspiracy to overthrow the U.S. government in Puerto Rico. The Constitution of Puerto Rico was approved by a Constitutional Convention on February 6, 1952, and by 82% of the voters in a March referendum. It was modified and ratified by the U.S. Congress, approved by President Truman on July 3 of that year, and proclaimed by Gov. Muñoz Marín on July 25, 1952. This was the anniversary of the July 25, 1898, landing of U.S. troops in the Puerto Rican Campaign of the Spanish–American War, until then celebrated as an annual Puerto Rico holiday.
Puerto Rico adopted the name of "Estado Libre Asociado de Puerto Rico" (literally "Associated Free State of Puerto Rico"), officially translated into English as Commonwealth, for its body politic. "The United States Congress legislates over many fundamental aspects of Puerto Rican life, including citizenship, the currency, the postal service, foreign policy, military defense, communications, labor relations, the environment, commerce, finance, health and welfare, and many others." During the 1950s and 1960s, Puerto Rico experienced rapid industrialization, due in large part to "Operación Manos a la Obra" ("Operation Bootstrap"), an offshoot of FDR's New Deal. It was intended to transform Puerto Rico's economy from agriculture-based to manufacturing-based to provide more jobs. Puerto Rico has become a major tourist destination, as well as a global center for pharmaceutical manufacturing. Four referenda have been held since the late 20th century to resolve the political status. The 2012 referendum showed a majority (54% of the voters) in favor of a change in status, with full statehood the preferred option of those who wanted a change. Because there were almost 500,000 blank ballots in the 2012 referendum, creating confusion as to the voters' true desire, Congress decided to ignore the vote. The first three plebiscites provided voters with three options: statehood, free association, and independence. The status referendum held in June 2017 was originally going to offer only two options: statehood and independence/free association. However, a letter from the Donald Trump administration recommended adding the current commonwealth status to the plebiscite. That option had been removed in response to the 2012 plebiscite, in which voters had been asked whether to remain in the current status and "No" had won. The Trump administration cited demographic changes over the previous five years in recommending that the option be added once again.
Amendments to the plebiscite bill were adopted, making ballot wording changes requested by the Department of Justice as well as adding a "current territorial status" option. While 97 percent voted in favor of statehood, the turnout was low; only some 23 percent voted. After the ballots were counted, the Justice Department was non-committal. The Justice Department had asked for the 2017 plebiscite to be postponed, but the Rosselló government chose not to do so. After the outcome was announced, the department told the Associated Press that it had "not reviewed or approved the ballot's language". Former governor Aníbal Acevedo Vilá (2005–2009) is convinced that statehood is not the solution for either the U.S. or for Puerto Rico "for economic, identity and cultural reasons". He pointed out that voter turnout for the 2017 referendum was extremely low, and suggests that a different type of mutually beneficial relationship should be found. If the federal government agrees to discuss an association agreement, the conditions would be negotiated between the two entities. The agreement might cover topics such as the role of the U.S. military in Puerto Rico, the use of the U.S. currency, free trade between the two entities, and whether Puerto Ricans would be U.S. citizens. The three current Free Associated States (the Marshall Islands, Micronesia and Palau) use the American dollar and receive some financial support, along with a promise of military defense, on the condition that they refuse military access to any other country. Their citizens are allowed to work in the U.S. and serve in its military. Governor Ricardo Rosselló is strongly in favor of statehood to help develop the economy and help to "solve our 500-year-old colonial dilemma ... Colonialism is not an option ... It's a civil rights issue ... 3.5 million citizens seeking an absolute democracy," he told the news media.
Benefits of statehood would include an additional $10 billion per year in federal funds, the right to vote in presidential elections, higher Social Security and Medicare benefits, and the right for its government agencies and municipalities to file for bankruptcy. The latter is currently prohibited. Statehood might be useful as a means of dealing with the financial crisis, since it would allow for bankruptcy and the relevant protection. According to the Government Development Bank, this might be the only solution to the debt crisis. Congress has the power to vote to allow Chapter 9 protection without the need for statehood, but in late 2015 there was very little support in the House for this concept. Other benefits of statehood would include increased disability benefits and Medicaid funding and the higher federal minimum wage. Subsequent to the 2017 referendum, Puerto Rico's legislators were also expected to vote on a bill that would allow the governor to draft a state constitution and hold elections to choose senators and representatives to the federal Congress. In spite of the outcome of the referendum, action by the United States Congress would be necessary to implement changes to the status of Puerto Rico under the Territorial Clause of the United States Constitution. Since 1953, the UN has been considering the political status of Puerto Rico and how to assist it in achieving "independence" or "decolonization". In 1978, the Special Committee determined that a "colonial relationship" existed between the U.S. and Puerto Rico. The UN's Special Committee on Decolonization has often referred to Puerto Rico as a "nation" in its reports because, internationally, the people of Puerto Rico are often considered to be a Caribbean nation with their own national identity. Most recently, in a June 2016 report, the Special Committee called for the United States to expedite the process to allow self-determination in Puerto Rico.
More specifically, the group called on the United States to expedite a process that would allow the people of Puerto Rico to "exercise fully their right to self-determination and independence ... allow the Puerto Rican people to take decisions in a sovereign manner, and to address their urgent economic and social needs, including unemployment, marginalization, insolvency and poverty". However, these efforts have largely been without effect. On November 27, 1953, shortly after the establishment of the Commonwealth, the General Assembly of the United Nations approved Resolution 748, removing Puerto Rico's classification as a non-self-governing territory. The General Assembly did not apply the full list of criteria, which was not enunciated until 1960, when it took favorable note of the cessation of transmission of information regarding the non-self-governing status of Puerto Rico. According to the White House Task Force on Puerto Rico's Political Status in its December 21, 2007, report, the U.S., in its written submission to the UN in 1953, never represented that Congress could not change its relationship with Puerto Rico without the territory's consent. It stated that the U.S. Justice Department in 1959 reiterated that Congress held power over Puerto Rico pursuant to the Territorial Clause of the U.S. Constitution. In 1993, the United States Court of Appeals for the Eleventh Circuit stated that Congress may unilaterally repeal the Puerto Rican Constitution or the Puerto Rican Federal Relations Act and replace them with any rules or regulations of its choice. In a 1996 report on a Puerto Rico status political bill, the U.S.
House Committee on Resources stated, "Puerto Rico's current status does not meet the criteria for any of the options for full self-government under Resolution 1541" (the three established forms of full self-government being stated in the report as (1) national independence, (2) free association based on separate sovereignty, or (3) full integration with another nation on the basis of equality). The report concluded that Puerto Rico "... remains an unincorporated colony and does not have the status of 'free association' with the United States as that status is defined under United States law or international practice", that the establishment of local self-government with the consent of the people can be unilaterally revoked by the U.S. Congress, and that U.S. Congress can also withdraw the U.S. citizenship of Puerto Rican residents of Puerto Rico at any time, for a legitimate Federal purpose. The application of the U.S. Constitution to Puerto Rico is limited by the Insular Cases. In 2006, 2007, 2009, 2010, and 2011 the United Nations Special Committee on Decolonization passed resolutions calling on the United States to expedite a process "that would allow Puerto Ricans to fully exercise their inalienable right to self-determination and independence", and to release all Puerto Rican political prisoners in U.S. prisons, to clean up, decontaminate and return the lands in the islands of Vieques and Culebra to the people of Puerto Rico, and to perform a probe into U.S. human rights violations on the island and into the killing by the FBI of pro-independence leader Filiberto Ojeda Rios. On July 15, 2009, the United Nations Special Committee on Decolonization approved a draft resolution calling on the government of the United States to expedite a process that would allow the Puerto Rican people to exercise fully their inalienable right to self-determination and independence. On April 29, 2010, the U.S. 
House voted 223–169 to approve a measure for a federally sanctioned process for Puerto Rico's self-determination, allowing Puerto Rico to set a new referendum on whether to continue its present form of commonwealth, or to have a different political status. If Puerto Ricans voted to continue as a commonwealth, the government of Puerto Rico was authorized to conduct additional plebiscites at intervals of every eight years from the date on which the results of the prior plebiscite were certified; if Puerto Ricans voted to have a different political status, a second referendum would determine whether Puerto Rico would become a U.S. state, an independent country, or a sovereign nation associated with the U.S. that would not be subject to the Territorial Clause of the United States Constitution. During the House debate, a fourth option, retaining the present form of commonwealth political status (sometimes referred to as "the status quo"), was added to the second plebiscite. Immediately following U.S. House passage, H.R. 2499 was sent to the U.S. Senate, where it was given two formal readings and referred to the Senate Committee on Energy and Natural Resources. On December 22, 2010, the 111th United States Congress adjourned without any Senate vote on H.R. 2499, killing the bill. The latest Task Force report was released on March 11, 2011. The report suggested a two-plebiscite process, including a "first plebiscite that requires the people of Puerto Rico to choose whether they wish to be part of the United States (either via Statehood or Commonwealth) or wish to be independent (via Independence or Free Association). If continuing to be part of the United States were chosen in the first plebiscite, a second vote would be taken between Statehood and Commonwealth." On June 14, 2011, President Barack Obama "promised to support 'a clear decision' by the people of Puerto Rico on statehood".
That same month, the United Nations Special Committee on Decolonization passed a resolution and adopted a consensus text, introduced by Cuba's delegate on June 20, 2011, calling on the United States to expedite a process "that would allow Puerto Ricans to fully exercise their inalienable right to self-determination and independence". On November 6, 2012, a two-question referendum took place, simultaneously with the general elections. The first question asked voters whether they wanted to maintain the current status under the territorial clause of the U.S. Constitution; 54% voted against the status quo. The second question posed three alternative status options: statehood, independence, or free association. 61.16% voted for statehood, 33.34% for a sovereign free associated state, and 5.49% for independence. There were also 515,348 blank and invalidated ballots, which are not reflected in the final tally, as they are not considered cast votes under Puerto Rico law. On December 11, 2012, Puerto Rico's Legislature passed a concurrent resolution requesting that the president and the U.S. Congress act on the November 6, 2012, plebiscite results. But on April 10, 2013, with the issue still being widely debated, the White House announced that it would seek $2.5 million to hold another referendum, which would be the first Puerto Rican status referendum to be financed by the U.S. federal government. In December 2015, the U.S. government submitted a brief as amicus curiae to the U.S. Supreme Court in the case "Puerto Rico v. Sanchez Valle". The U.S. government's official position is that the U.S. Constitution does not contemplate "sovereign territories". The Court has consistently recognized that "there is no sovereignty in a Territory of the United States but that of the United States itself", and that a U.S. territory has "no independent sovereignty comparable to that of a state".
That is because "the Government of a territory owes its existence wholly to the United States". Congress's plenary authority over federal territories includes the authority to permit self-government, whereby local officials administer a territory's internal affairs. On June 9, 2016, the court ruled by a 6–2 majority that Puerto Rico is a territory and thus lacks sovereignty. On June 30, 2016, the President signed a new law approved by the U.S. Congress, H.R. 5278 (PROMESA), establishing a control board over the Puerto Rico government. The board involves a significant degree of federal control in its establishment and operations. In particular, the authority to establish the control board derives from the federal government's constitutional power to "make all needful rules and regulations" regarding U.S. territories; the president appoints all seven voting members of the board; and the board has broad sovereign powers to effectively overrule decisions by Puerto Rico's legislature, governor, and other public authorities. In September 2017, the island was hit by two major hurricanes: Irma and Maria. Hurricane Irma hit the island on September 6 as a Category 5 hurricane, the most powerful hurricane to hit the island in recorded history. The heart of the storm stayed offshore, but the northeast of Puerto Rico, including San Juan, saw catastrophic damage. Nearly half of the island lost power, and the already weak power grid was significantly damaged. This led to a humanitarian crisis, which was further exacerbated by Hurricane Maria's landfall as a Category 4 storm. After the two hurricanes hit, the entire island was without power, and total casualties topped 3,000. As of late November, recovery was slow, but progress had been made. Electricity had been restored to two-thirds of the island, although there was some doubt as to the number of residents getting reliable power.
In January 2018, it was reported that close to 40 percent of the island's customers still did not have electricity. The vast majority had access to water but were still required to boil it. The number still living in shelters had dropped to 982, with thousands of others living with relatives. The official death toll at the time was 58, but some sources indicated that the actual number was much higher. A dam on the island was close to failure, and officials were concerned about additional flooding from this source. Thousands had left Puerto Rico, with close to 200,000 having arrived in Florida alone. Those who were then living on the mainland experienced difficulty in getting health care benefits. A "New York Times" report on November 27 said it was understandable that Puerto Ricans wanted to leave the island: "Basic essentials are hard to find and electricity and other utilities are unreliable or entirely inaccessible. Much of the population has been unable to return to jobs or to school and access to health care has been severely limited." The Center for Puerto Rican Studies at New York's Hunter College estimated that some half a million people, about 14% of the population, might permanently leave by 2019. The total damage on the island was estimated at up to $95 billion. By the end of November, FEMA had received over a million applications for aid and had approved about a quarter of those. The U.S. government had agreed in October to provide funding to rebuild and up to $4.9 billion in loans to help the island's government. FEMA had $464 million earmarked to help local governments rebuild public buildings and infrastructure. Bills for other funding were being considered in Washington, but little progress had been made on those.
A November 28, 2017 report by the Sierra Club included this comment: "It will take years to rebuild Puerto Rico, not just from the worst hurricane to make landfall since 1932, but to sustainably overcome environmental injustices which made Maria's devastation even more catastrophic". In May 2017, the Natural Resources Defense Council reported that Puerto Rico's water system was the worst in the United States as measured by the Clean Water Act: 70% of the population drank water that violated U.S. law. A tourism web site report in March 2018 indicated that all airports were operating, although Luis Muñoz Marín International Airport would not be back to handling its full number of flights until July 2018. Some 90% of the island was receiving electricity, although full rebuilding of the power infrastructure would require another $17.6 billion, according to the United States Department of Energy. Nearly all residents had access to telecommunications service and running water, and all hospitals were operating. Some 83% of hotel rooms were available for use, and the cruise ship port was receiving ships; 58 arrived in San Juan in February. The island was encouraging operators to increase the number of tourists. Reports in April 2018 stated that Puerto Rico would receive $18.5 billion from the United States Department of Housing and Urban Development to help rebuild homes and infrastructure, substantially less than the $46 billion requested by the governor. However, the island expected to receive approximately $50 billion for disaster relief over the subsequent six years, mostly via the Federal Emergency Management Agency. FEMA also awarded Puerto Rico a $79 million grant to update its building and construction codes and to increase the number of permitting compliance officers from 11 to 200. In 2018, nearly 2,800 families were living in FEMA-sponsored short-term housing across 34 states and Puerto Rico. 
Nearly half of the schools were operating at only 60% classroom capacity. Over 280 public schools would not reopen in the fall; 827 were expected to be operational. Almost 40,000 students had left the island's schools since May 2017, some of them for schools in the mainland U.S. (Before the hurricanes, Puerto Rico had planned to close 179 schools due to inadequate numbers of students.) Rebuilding efforts in Puerto Rico were set back again by an earthquake swarm beginning in December 2019 and continuing into 2020. The earthquakes caused structural damage across Puerto Rico, including the collapse of homes and historical landmarks. The official number of Hurricane Maria-related deaths as reported by the government of Puerto Rico was 64. The Commonwealth commissioned George Washington University to assess the death toll. An academic study based on household surveys, reported in the New England Journal of Medicine, estimated that the number of hurricane-related deaths during the period September 20, 2017 to December 31, 2017 was around 4,600 (range 793–8,498). On August 28, Governor Rosselló acknowledged the results of the George Washington University study and revised the island's official death toll to 2,975 people. Rosselló described the effects of the hurricane as "unprecedented devastation". Hurricane Dorian was the third hurricane in three years to threaten Puerto Rico; the infrastructure still recovering from the 2017 hurricanes, as well as the new governor, Wanda Vázquez Garced, were put to the test by a potential humanitarian crisis. Puerto Rico consists of the main island of Puerto Rico and various smaller islands, including Vieques, Culebra, Mona, Desecheo, and Caja de Muertos. Of these five, only Culebra and Vieques are inhabited year-round. Mona, which has played a key role in maritime history, is uninhabited most of the year except for employees of the Puerto Rico Department of Natural Resources. 
There are many other even smaller islets, like Monito, near Mona, and Isla de Cabras and La Isleta de San Juan, both located in San Juan Bay. The latter is the only inhabited islet, home to communities such as Old San Juan and Puerta de Tierra, and is connected to the main island by bridges. The Commonwealth of Puerto Rico comprises both land and water area; it is larger than Delaware and Rhode Island, and its main island is considerably longer from east to west than it is wide from north to south. Puerto Rico is the smallest of the Greater Antilles: 80% of the size of Jamaica, just over 18% of the size of Hispaniola and 8% of the size of Cuba, the largest of the Greater Antilles. The island is mostly mountainous with large coastal areas in the north and south. The main mountain range is called "La Cordillera Central" (The Central Range). The highest elevation in Puerto Rico, Cerro de Punta, is located in this range. Another important peak is El Yunque, one of the highest in the "Sierra de Luquillo" at the El Yunque National Forest. Puerto Rico has 17 lakes, all man-made, and more than 50 rivers, most originating in the Cordillera Central. Rivers in the northern region of the island are typically longer and have higher flow rates than those of the south, since the south receives less rain than the central and northern regions. Puerto Rico is composed of Cretaceous to Eocene volcanic and plutonic rocks, overlain by younger Oligocene and more recent carbonates and other sedimentary rocks. Most of the caverns and karst topography on the island occur in the northern region, in the carbonates. The oldest rocks, of Jurassic age, are located at Sierra Bermeja in the southwest part of the island. They may represent part of the oceanic crust and are believed to come from the Pacific Ocean realm. 
Puerto Rico lies at the boundary between the Caribbean and North American plates and is being deformed by the tectonic stresses caused by their interaction. These stresses may cause earthquakes and tsunamis. Such seismic events, along with landslides, represent some of the most dangerous geologic hazards on the island and in the northeastern Caribbean. The San Fermín earthquake of 1918 had an estimated magnitude of 7.5 on the Richter scale. It originated several kilometers off the northern coast, near Aguadilla, and was accompanied by a tsunami. It caused extensive damage, especially to infrastructure such as bridges, and resulted in an estimated 116 deaths and $4 million in property damage. The failure of the government to move rapidly to provide for the general welfare contributed to political activism by opponents and eventually to the rise of the Puerto Rican Nationalist Party. On January 7, 2020, the island experienced its second-largest recorded earthquake, estimated at magnitude 6.4 on the Richter scale, with an estimated economic loss of more than $100 million. The Puerto Rico Trench, the largest and deepest trench in the Atlantic, lies north of Puerto Rico at the boundary between the Caribbean and North American plates; its deepest point is named the Milwaukee Deep. The climate of Puerto Rico in the Köppen climate classification is tropical rainforest. Temperatures are warm to hot year round, averaging near 85 °F (29 °C) in lower elevations and 70 °F (21 °C) in the mountains. Easterly trade winds pass across the island year round. Puerto Rico has a rainy season which stretches from April into November. The mountains of the Cordillera Central are the main cause of the variations in temperature and rainfall that occur over very short distances. 
The mountains can also cause wide variation in local wind speed and direction due to their sheltering and channeling effects, adding to the climatic variation. The island's average temperature varies little throughout the year, and daily and seasonal temperature changes are quite small in the lowlands and coastal areas. Temperatures in the south are usually a few degrees higher than those in the north, and temperatures in the central interior mountains are always cooler than those on the rest of the island. Seasonal temperature differences between the dry and wet seasons are moderated mainly by the warm waters of the tropical Atlantic Ocean, which significantly modify cooler air moving in from the north and northwest. Coastal water temperatures are coolest in February and warmest in August. The highest temperature ever recorded was at Arecibo, and the lowest was in the mountains at Adjuntas, Aibonito, and Corozal. Puerto Rico experiences the Atlantic hurricane season, like the rest of the Caribbean Sea and North Atlantic. On average, a quarter of its annual rainfall comes from tropical cyclones, which are more prevalent during periods of La Niña than El Niño. A cyclone of tropical storm strength passes near Puerto Rico, on average, every five years; a hurricane passes in the vicinity of the island, on average, every seven years. The Lake Okeechobee Hurricane (also known in Puerto Rico as the San Felipe Segundo hurricane) of September 1928 is the only hurricane to have made landfall as a Category 5 hurricane since records began in 1851. In the busy 2017 Atlantic hurricane season, Puerto Rico avoided a direct hit by the Category 5 Hurricane Irma on September 6, 2017, as it passed just north of Puerto Rico, but high winds caused a loss of electrical power to some one million residents. 
Almost 50% of hospitals were operating on generator power. The Category 4 Hurricane Jose, as expected, veered away from Puerto Rico. A short time later, the devastating Hurricane Maria made landfall on Puerto Rico on Wednesday, September 20, near the Yabucoa municipality at 10:15 UTC (6:15 am local time) as a high-end Category 4 hurricane with sustained winds of 155 mph (250 km/h). Its powerful rains and widespread flooding caused tremendous destruction, including to the electrical grid, which would remain out in many portions of the island for four to six months. As of 1998, species recognized as endemic to the archipelago numbered 239 plants, 16 birds and 39 amphibians and reptiles. Most of these (234, 12 and 33, respectively) are found on the main island. The most recognizable endemic species and a symbol of Puerto Rican pride is the "coquí", a small frog easily identified by the sound of its call, from which it gets its name. Most "coquí" species (13 of 17) live in the El Yunque National Forest, a tropical rainforest in the northeast of the island previously known as the Caribbean National Forest. El Yunque is home to more than 240 plant species, 26 of which are endemic to the island. It is also home to 50 bird species, including the critically endangered Puerto Rican amazon. Across the island in the southwest, the dry forest of the Guánica Commonwealth Forest Reserve contains over 600 uncommon species of plants and animals, including 48 endangered species and 16 endemic to Puerto Rico. Puerto Rico has three bioluminescent bays: rare bodies of water inhabited by microscopic marine organisms that glow when disturbed. However, tourism, pollution, and hurricanes have threatened these organisms. The population of Puerto Rico has been shaped by initial Amerindian settlement, European colonization, slavery, economic migration, and Puerto Rico's status as an unincorporated territory of the United States. 
The estimated population of Puerto Rico as of July 1, 2019 was 3,193,694, a 14.28% decrease since the 2010 United States Census. From 2000 to 2010, the population declined for the first time in Puerto Rico's census history, from 3,808,610 to 3,725,789. Continuous European immigration and high natural increase helped the population of Puerto Rico grow from 155,426 in 1800 to almost a million by the close of the 19th century. A census conducted by royal decree on September 30, 1858, gave the following totals for the Puerto Rican population at that time: 341,015 free people of color, 300,430 whites, and 41,736 slaves. A census in 1887 found a population of around 800,000, of which 320,000 were black. During the 19th century, hundreds of families arrived in Puerto Rico, primarily from the Canary Islands and Andalusia, but also from other parts of Spain such as Catalonia, Asturias, Galicia and the Balearic Islands, along with numerous Spanish loyalists from Spain's former colonies in South America. Settlers from outside Spain also arrived, including from Corsica, France, Lebanon, China, Portugal, Ireland, Scotland, Germany and Italy. This immigration from non-Hispanic countries was the result of the "Real Cédula de Gracias de 1815" ("Royal Decree of Graces of 1815"), which allowed European Catholics to settle on the island with land allotments in the interior, provided they paid taxes and continued to support the Catholic Church. Between 1960 and 1990 the census questionnaire in Puerto Rico did not ask about race or ethnicity. The 2000 United States Census included a racial self-identification question in Puerto Rico. According to that census, most Puerto Ricans identified as White and Hispanic; few identified as Black or some other race. 
A group of researchers from Puerto Rican universities conducted a study of mitochondrial DNA which revealed that the modern population of Puerto Rico has a high genetic component of Taíno and Guanche (especially of the island of Tenerife). Other studies show Amerindian ancestry in addition to the Taíno. One genetic study on the racial makeup of Puerto Ricans (including all races) found them to be roughly 61% West Eurasian/North African (overwhelmingly of Spanish provenance), 27% Sub-Saharan African and 11% Native American. Another genetic study, from 2007, claimed that "the average genomewide individual (i.e. Puerto Rican) ancestry proportions have been estimated as 66%, 18%, and 16%, for European, West African, and Native American, respectively." A third study estimates 63.7% European, 21.2% (Sub-Saharan) African, and 15.2% Native American ancestry; European ancestry is more prevalent in western and central Puerto Rico, African in eastern Puerto Rico, and Native American in northern Puerto Rico. A Pew Research survey indicated an adult literacy rate of 90.4% in 2012, based on data from the United Nations. Puerto Rico has a life expectancy of approximately 81.0 years according to the CIA World Factbook, an improvement from 78.7 years in 2010; this gives Puerto Rico the second-highest life expectancy in the United States, if territories are taken into account. As of 2019, Puerto Rico was home to 100,000 legal permanent residents. The vast majority of recent immigrants, both legal and illegal, come from the Dominican Republic and Haiti. Other major sources of recent immigrants include Cuba, Mexico, Colombia, Panama, Jamaica, Venezuela, Spain, and Nigeria. Additionally, many non-Puerto Rican U.S. citizens have settled in Puerto Rico from the mainland United States and the U.S. Virgin Islands, as have returning Nuyoricans (stateside Puerto Ricans). Most recent immigrants settle in and around San Juan. 
Emigration is a major part of contemporary Puerto Rican history. Starting soon after World War II, poverty, cheap airfares, and promotion by the island government caused waves of Puerto Ricans to move to the United States, particularly to the northeastern states and nearby Florida. This trend continued even as Puerto Rico's economy improved and its birth rate declined. Puerto Ricans continue to follow a pattern of "circular migration", with some migrants returning to the island. In recent years, the population has declined markedly, falling nearly 1% in 2012 and an additional 1% (36,000 people) in 2013 due to a falling birthrate and emigration. The impact of hurricanes Maria and Irma, combined with the territory's worsening economy, led to its greatest population decline since the U.S. acquired the territory. According to the 2010 Census, the number of Puerto Ricans living in the United States outside of Puerto Rico far exceeds those living in Puerto Rico. Emigration exceeds immigration. As those who leave tend to be better educated than those who remain, this accentuates the drain on Puerto Rico's economy. Based on the July 1, 2019 estimate by the U.S. Census Bureau, the population of the Commonwealth had declined by 532,095 people since the 2010 Census data had been tabulated. The most populous city is the capital, San Juan, with 318,441 people based on a 2019 estimate by the Census Bureau. Other major cities include Bayamón, Carolina, Ponce, and Caguas. Of the ten most populous cities on the island, eight are located within what is considered San Juan's metropolitan area, while the other two are located in the south (Ponce) and west (Mayagüez) of the island. The official languages of the executive branch of government of Puerto Rico are Spanish and English, with Spanish being the primary language. Spanish is, and has been, the only official language of the entire Commonwealth judiciary system, despite a 1902 English-only language law. 
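The population figures above can be cross-checked against each other; the following is only an arithmetic sanity check of the numbers quoted in the text (the 2010 Census count, the 2019 estimate, the 532,095-person decline, and the 14.28% decrease):

```python
# Arithmetic check of the census figures cited in the text.
pop_2010 = 3_725_789  # 2010 United States Census
pop_2019 = 3_193_694  # July 1, 2019 Census Bureau estimate

decline = pop_2010 - pop_2019
pct_decline = decline / pop_2010 * 100

print(decline)                # 532095, the decline cited in the text
print(round(pct_decline, 2))  # 14.28 (percent), matching the stated decrease
```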
However, all official business of the U.S. District Court for the District of Puerto Rico is conducted in English. English is the primary language of less than 10% of the population; Spanish is the dominant language of business, education and daily life on the island, spoken by nearly 95% of the population. The U.S. Census Bureau's 2016 update provides the following figures: 94.3% of adults speak only Spanish at home, compared with 5.5% who speak English, 0.2% who speak French, and 0.1% who speak another language at home. In Puerto Rico, public school instruction is conducted almost entirely in Spanish. There have been pilot programs in about a dozen of the over 1,400 public schools aimed at conducting instruction in English only. Objections from teaching staff are common, perhaps because many of them are not fully fluent in English. English is taught as a second language and is a compulsory subject from the elementary level through high school. The languages of the deaf community are American Sign Language and its local variant, Puerto Rican Sign Language. The Spanish of Puerto Rico has developed many idiosyncrasies in vocabulary and syntax that differentiate it from the Spanish spoken elsewhere. As a product of Puerto Rican history, the island possesses a unique Spanish dialect. Puerto Rican Spanish utilizes many Taíno words, as well as English words; the largest influence on the Spanish spoken in Puerto Rico is that of the Canary Islands. Taíno loanwords are most often used in the context of vegetation, natural phenomena, and native musical instruments. Similarly, words attributed primarily to West African languages were adopted in the contexts of foods, music, and dances, particularly in coastal towns with concentrations of descendants of Sub-Saharan Africans. The Roman Catholic Church was brought by Spanish colonists and gradually became the dominant religion in Puerto Rico. 
The first dioceses in the Americas, including that of Puerto Rico, were authorized by Pope Julius II in 1511. In 1512, priests were established for the parochial churches, and by 1759 there was a priest for each church. Pope John Paul II visited Puerto Rico in October 1984. All municipalities in Puerto Rico have at least one Catholic church, most of which are located at the town center, or plaza. African slaves brought and maintained various ethnic African religious practices associated with different peoples, in particular the Yoruba beliefs of Santería and Ifá and the Kongo-derived Palo Mayombe; some aspects were absorbed into syncretic Christianity. Protestantism, which was suppressed under the Spanish Catholic regime, has reemerged under United States rule, making contemporary Puerto Rico more interconfessional than in previous centuries, although Catholicism continues to be the dominant religion. The first Protestant church, Iglesia de la Santísima Trinidad, was established in Ponce by the Anglican Diocese of Antigua in 1872. It was the first non-Roman Catholic church in the entire Spanish Empire in the Americas. Pollster Pablo Ramos stated in 1998 that the population was 38% Roman Catholic, 28% Pentecostal, and 18% members of independent churches, which would give a Protestant share of 46% if the last two groups are combined. Protestants collectively added up to almost two million people. Another researcher gave a more conservative assessment of the proportion of Protestants: Puerto Rico, by virtue of its long political association with the United States, is the most Protestant of Latin American countries, with a Protestant population of approximately 33 to 38 percent, the majority of whom are Pentecostal. David Stoll calculates that if we extrapolate the growth rates of evangelical churches from 1960 to 1985 for another twenty-five years Puerto Rico will become 75 percent evangelical. (Ana Adams: "Brincando el Charco..." 
in "Power, Politics and Pentecostals in Latin America", Edward Cleary, ed., 1997, p. 164). An Associated Press article in March 2014 stated that "more than 70 percent of whom identify themselves as Catholic" but provided no source for this information. The CIA World Factbook reports that 85% of the population of Puerto Rico identifies as Roman Catholic, while 15% identify as Protestant and other; neither a date nor a source for that information is provided, and it may not be recent. A 2013 Pew Research survey found that only about 45% of Puerto Rican adults identified themselves as Catholic, 29% as Protestant and 20% as unaffiliated with a religion; those surveyed by Pew were Puerto Ricans living in the 50 states and D.C., however, and may not be representative of those living in the Commonwealth. By 2014, a Pew Research report, with the subtitle "Widespread Change in a Historically Catholic Region", indicated that only 56% of Puerto Ricans were Catholic and that 33% were Protestant; this survey was completed between October 2013 and February 2014. An Eastern Orthodox community, the Dormition of the Most Holy Theotokos / St. Spyridon's Church, is located in Trujillo Alto and serves the small Orthodox community; this affiliation accounted for under 1% of the population in 2010 according to the Pew Research report. In 1940, Juanita García Peraza founded the Mita Congregation, the first religion of Puerto Rican origin. Taíno religious practices have been rediscovered or reinvented to a degree by a handful of advocates, and some aspects of African religious traditions have been kept by some adherents. In 1952, a handful of American Jews established the island's first synagogue; Judaism accounted for under 1% of the population in 2010 according to the Pew Research report. The synagogue, called "Sha'are Zedeck", hired its first rabbi in 1954. 
Puerto Rico has the largest Jewish community in the Caribbean, numbering 3,000 people (date not stated), and is the only Caribbean island on which the Conservative, Reform and Orthodox Jewish movements are all represented. In 2007, there were about 5,000 Muslims in Puerto Rico, representing about 0.13% of the population. Eight mosques are located throughout the island, with most Muslims living in Río Piedras and Caguas; most Muslims are of Palestinian and Jordanian descent. There is also a Bahá'í community. In 2015, the 25,832 Jehovah's Witnesses represented about 0.70% of the population, with 324 congregations. The Padmasambhava Buddhist Center, whose followers practice Tibetan Buddhism, as well as Nichiren Buddhism, have branches in Puerto Rico. There are several atheist activist and educational organizations, and an atheistic parody religion called the Pastafarian Church of Puerto Rico. An ISKCON temple in Gurabo is devoted to Krishna Consciousness, with two preaching centers in the metropolitan area. Puerto Rico has 8 senatorial districts, 40 representative districts and 78 municipalities. It has a republican form of government with separation of powers, subject to the jurisdiction and sovereignty of the United States. Its current powers are all delegated by the United States Congress and lack full protection under the United States Constitution. Puerto Rico's head of state is the president of the United States. The government of Puerto Rico, based on the formal republican system, is composed of three branches: the executive, legislative, and judicial. The executive branch is headed by the governor, currently Wanda Vázquez Garced. The legislative branch consists of a bicameral legislature called the Legislative Assembly, made up of a Senate as its upper chamber and a House of Representatives as its lower chamber. 
The Senate is headed by the president of the Senate, currently Thomas Rivera Schatz, while the House of Representatives is headed by the speaker of the House, currently Carlos Johnny Méndez. The governor and legislators are elected by popular vote every four years, with the last election held in November 2016. The judicial branch is headed by the chief justice of the Supreme Court of Puerto Rico, currently Maite Oronoz Rodríguez. Members of the judicial branch are appointed by the governor with the advice and consent of the Senate. Puerto Rico is represented in the United States Congress by a nonvoting delegate, the resident commissioner, currently Jenniffer González. Current congressional rules have removed the commissioner's power to vote in the Committee of the Whole, but the commissioner can vote in committee. Puerto Rican elections are governed by the Federal Election Commission and the State Elections Commission of Puerto Rico. While residing in Puerto Rico, Puerto Ricans cannot vote in U.S. presidential elections, but they can vote in primaries. Puerto Ricans who become residents of a U.S. state can vote in presidential elections. Puerto Rico hosts consulates from 41 countries, mainly from the Americas and Europe, with most located in San Juan. As an unincorporated territory of the United States, Puerto Rico does not have any first-order administrative divisions as defined by the U.S. government, but it has 78 municipalities at the second level. Mona Island is not a municipality but part of the municipality of Mayagüez. Municipalities are subdivided into wards, or barrios, and those into sectors. Each municipality has a mayor and a municipal legislature elected for a four-year term. The municipality of San Juan (previously called a "town") was founded first, in 1521, followed by San Germán in 1570, Coamo in 1579, Arecibo in 1614, Aguada in 1692 and Ponce in 1692. Increased settlement saw the founding of 30 municipalities in the 18th century and 34 in the 19th. 
Six were founded in the 20th century; the last was Florida in 1971. Since 1952, Puerto Rico has had three main political parties: the Popular Democratic Party (PPD in Spanish), the New Progressive Party (PNP in Spanish) and the Puerto Rican Independence Party (PIP). Each of the three stands for a different political status: the PPD seeks to maintain the island's status with the U.S. as a commonwealth, the PNP seeks to make Puerto Rico a state of the United States, and the PIP seeks complete separation from the United States by making Puerto Rico a sovereign nation. In terms of party strength, the PPD and PNP usually each hold about 47% of the vote, while the PIP holds only about 5%. After 2007, other parties emerged on the island. The first, the Puerto Ricans for Puerto Rico Party (PPR in Spanish), was registered that same year. The party claims to address the island's problems from a status-neutral platform, but it ceased to be a registered party when it failed to obtain the required number of votes in the 2008 general election. Four years later, the 2012 election saw the emergence of the Movimiento Unión Soberanista (MUS; English: "Sovereign Union Movement") and the Partido del Pueblo Trabajador (PPT; English: "Working People's Party"), but neither obtained more than 1% of the vote. Other non-registered parties include the Puerto Rican Nationalist Party, the Socialist Workers Movement, and the Hostosian National Independence Movement. The insular legal system is a blend of the civil law and common law systems. Puerto Rico is the only current U.S. possession whose legal system operates primarily in a language other than English: namely, Spanish. Because the U.S. federal government operates primarily in English, all Puerto Rican attorneys must be bilingual in order to litigate in English in U.S. federal courts and to litigate federal preemption issues in Puerto Rican courts. 
Title 48 of the United States Code outlines the application of the United States Code to United States territories and insular areas such as Puerto Rico. After the U.S. government assumed control of Puerto Rico in 1901, it initiated legal reforms resulting in the adoption of codes of criminal law, criminal procedure, and civil procedure modeled after those then in effect in California. Although Puerto Rico has since followed the federal example of transferring criminal and civil procedure from statutory law to rules promulgated by the judiciary, several portions of its criminal law still reflect the influence of the California Penal Code. The judicial branch is headed by the chief justice of the Puerto Rico Supreme Court, which is the only appellate court required by the Constitution. All other courts are created by the Legislative Assembly of Puerto Rico. There is also a Federal District Court for Puerto Rico. Unlike in a state, someone accused of a criminal act at the federal level may not be accused of the same act in a Commonwealth court, since Puerto Rico, as a territory, lacks the sovereignty separate from Congress that a state has; such a parallel accusation would constitute double jeopardy. The nature of Puerto Rico's political relationship with the U.S. is the subject of ongoing debate in Puerto Rico, the United States Congress, and the United Nations. Specifically, the basic question is whether Puerto Rico should remain a U.S. territory, become a U.S. state, or become an independent country. Constitutionally, Puerto Rico is subject to the plenary powers of the United States Congress under the territorial clause of Article IV of the U.S. Constitution. Laws enacted at the federal level apply to Puerto Rico as well, regardless of its political status, yet its residents do not have voting representation in the U.S. Congress. 
Like the states of the United States, Puerto Rico lacks "the full sovereignty of an independent nation", for example the power to manage its "external relations with other nations", which is held by the U.S. federal government. The Supreme Court of the United States has indicated that once the U.S. Constitution has been extended to an area (by Congress or the courts), its coverage is irrevocable: to hold that the political branches may switch the Constitution on or off at will would lead to a regime in which they, not the Court, say "what the law is". Puerto Ricans "were collectively made U.S. citizens" in 1917 as a result of the Jones-Shafroth Act. U.S. citizens residing in Puerto Rico cannot vote for the U.S. president, though both major parties, Republican and Democratic, run primary elections in Puerto Rico to send delegates to vote on a presidential candidate. Since Puerto Rico is an unincorporated territory (see above) and not a U.S. state, the United States Constitution does not fully enfranchise U.S. citizens residing in Puerto Rico: only fundamental rights under the federal constitution and its adjudications apply to Puerto Ricans, and various U.S. Supreme Court decisions have held which rights apply in Puerto Rico and which do not. Puerto Ricans have a long history of service in the U.S. Armed Forces and, since 1917, they have been included in the U.S. compulsory draft whenever it has been in effect. Though the Commonwealth government has its own tax laws, Puerto Ricans are also required to pay many kinds of U.S. federal taxes, excluding in most circumstances the federal personal income tax on Puerto Rico-sourced income. In 2009, Puerto Rico paid federal taxes into the U.S. Treasury. Residents of Puerto Rico pay into Social Security and are thus eligible for Social Security benefits upon retirement. 
They are excluded from the Supplemental Security Income (SSI) program, and the island receives a smaller fraction of the Medicaid funding it would receive if it were a U.S. state. Also, Medicare providers receive less-than-full state-like reimbursements for services rendered to beneficiaries in Puerto Rico, even though those beneficiaries paid fully into the system. While a state may try an individual for the same crime for which he or she was tried in federal court, this is not the case in Puerto Rico. As a U.S. territory, Puerto Rico derives its authority to enact a criminal code from Congress and not from local sovereignty, as the states do. Thus, such a parallel accusation would constitute double jeopardy and is constitutionally impermissible. In 1992, President George H. W. Bush issued a memorandum to heads of executive departments and agencies establishing the current administrative relationship between the federal government and the Commonwealth of Puerto Rico. This memorandum directs all federal departments, agencies, and officials to treat Puerto Rico administratively as if it were a state, insofar as doing so would not disrupt federal programs or operations. Many federal executive branch agencies have a significant presence in Puerto Rico, just as in any state, including the Federal Bureau of Investigation, the Federal Emergency Management Agency, the Transportation Security Administration, the Social Security Administration, and others. While Puerto Rico has its own Commonwealth judicial system similar to that of a U.S. state, there is also a U.S. federal district court in Puerto Rico, and Puerto Ricans have served as judges in that court and in other federal courts on the U.S. mainland regardless of their residency status at the time of their appointment. Sonia Sotomayor, a New Yorker of Puerto Rican descent, serves as an associate justice of the Supreme Court of the United States.
Puerto Ricans have also been frequently appointed to high-level federal positions, including serving as United States ambassadors to other nations. Puerto Rico is subject to the Commerce and Territorial Clauses of the Constitution of the United States and is therefore restricted in how it can engage with other nations, sharing the opportunities and limitations of state governments despite not being one. As is the case with state governments, it has established several trade agreements with other nations, particularly with Hispanic American countries such as Colombia and Panama. It has also established trade promotion offices in many foreign countries, all Spanish-speaking, and within the United States itself; these now include Spain, the Dominican Republic, Panama, Colombia, Washington, D.C., New York City, and Florida, and in the past have included offices in Chile, Costa Rica, and Mexico. Such agreements require permission from the U.S. Department of State; most are simply allowed by existing laws or trade treaties between the United States and other nations, which supersede trade agreements pursued by Puerto Rico and different U.S. states. At the local level, Puerto Rico has established by law that the international relations in which states and territories are allowed to engage must be handled by the Department of State of Puerto Rico, an executive department headed by the secretary of state of Puerto Rico, who also serves as the territory's lieutenant governor. The department is also charged with liaising with general consuls and honorary consuls based in Puerto Rico. The Puerto Rico Federal Affairs Administration, along with the Office of the Resident Commissioner, manages all of the territory's intergovernmental affairs before entities of or in the United States (including the federal government of the United States, local and state governments of the United States, and public or private entities in the United States).
Both entities frequently assist the Department of State of Puerto Rico in engaging with Washington, D.C.-based ambassadors and federal agencies that handle Puerto Rico's foreign affairs, such as the U.S. Department of State, the Agency for International Development, and others. The current secretary of state is Elmer Román of the New Progressive Party (NPP), while the current director of the Puerto Rico Federal Affairs Administration is Jennifer M. Stopiran, also of the NPP and a member of the Republican Party of the United States. The resident commissioner of Puerto Rico, the delegate elected by Puerto Ricans to represent them before the federal government, including the U.S. Congress, sits in the United States House of Representatives, serves and votes on congressional committees, and functions in every respect as a legislator except that he or she is denied a vote on the final disposition of legislation on the House floor. The current resident commissioner is Jenniffer González-Colón, a Republican, elected in 2016; she received more votes than any other official elected in Puerto Rico that year. Many Puerto Ricans have served as United States ambassadors to different nations and international organizations, such as the Organization of American States, mostly but not exclusively in Latin America. For example, Maricarmen Aponte, a Puerto Rican and now an acting assistant secretary of state, previously served as U.S. ambassador to El Salvador. Because Puerto Rico is a territory of the United States of America, its defense is provided by the United States under the Treaty of Paris, with the president of the United States as its commander-in-chief. Puerto Rico has its own Puerto Rico National Guard, and its own state defense force, the Puerto Rico State Guard, which by local law is under the authority of the Puerto Rico National Guard.
The commander-in-chief of both local forces is the governor of Puerto Rico, who delegates his authority to the Puerto Rico adjutant general, currently Major General José J. Reyes. The adjutant general, in turn, delegates authority over the State Guard to another officer but retains authority over the Puerto Rico National Guard as a whole. U.S. military installations in Puerto Rico were part of the U.S. Atlantic Command (LANTCOM; after 1993, USACOM), which had authority over all U.S. military operations throughout the Atlantic. Puerto Rico had been seen as crucial in supporting LANTCOM's mission until 1999, when U.S. Atlantic Command was renamed and given a new mission as United States Joint Forces Command. Puerto Rico is currently under the responsibility of United States Northern Command. Both the Naval Forces Caribbean (NFC) and the Fleet Air Caribbean (FAIR) were formerly based at the Roosevelt Roads Naval Station. The NFC had authority over all U.S. naval activity in the waters of the Caribbean, while FAIR had authority over all U.S. military flights and air operations over the Caribbean. With the closing of the Roosevelt Roads and Vieques Island training facilities, the U.S. Navy has largely withdrawn from Puerto Rico, apart from passing ships, and the only significant military presence on the island is the U.S. Army at Fort Buchanan, the Puerto Rican Army and Air National Guards, and the U.S. Coast Guard. Protests over the noise of bombing practice forced the closure of the naval base, resulting in a loss of 6,000 jobs and an annual decrease in local income of $300 million. A branch of the U.S. Army National Guard is stationed in Puerto Rico – known as the Puerto Rico Army National Guard – which performs missions equivalent to those of the Army National Guards of the different states of the United States, including ground defense, disaster relief, and control of civil unrest.
The local National Guard also incorporates a branch of the U.S. Air National Guard – known as the Puerto Rico Air National Guard – which performs missions equivalent to those of the Air National Guards of each of the U.S. states. At different times in the 20th century, the U.S. had about 25 military or naval installations in Puerto Rico, ranging from small posts to large installations. The largest of these were the former Roosevelt Roads Naval Station in Ceiba, the Atlantic Fleet Weapons Training Facility (AFWTF) on Vieques, the National Guard training facility at Camp Santiago in Salinas, Fort Allen in Juana Diaz, the Army's Fort Buchanan in San Juan, the former U.S. Air Force Ramey Air Force Base in Aguadilla, and the Puerto Rico Air National Guard at Muñiz Air Force Base in San Juan. The former U.S. Navy facilities at Roosevelt Roads, Vieques, and Sabana Seca have been deactivated and partially turned over to the local government. Other than U.S. Coast Guard and Puerto Rico National Guard facilities, there are only two remaining military installations in Puerto Rico: the U.S. Army's small Fort Buchanan (supporting local veterans and reserve units) and the Puerto Rico Air National Guard's Muñiz Air Base (home of the C-130 fleet). In recent years, the U.S. Congress has considered their deactivation, but this has been opposed by diverse public and private entities in Puerto Rico – such as retired military personnel who rely on Fort Buchanan for the services available there. Puerto Ricans have participated in many of the military conflicts in which the United States has been involved. For example, they participated in the American Revolution, when volunteers from Puerto Rico, Cuba, and Mexico fought the British in 1779 under the command of General Bernardo de Gálvez (1746–1786), and have continued to participate up to the present-day conflicts in Iraq and Afghanistan. A significant number of Puerto Ricans participate as members and work for the U.S.
Armed Services, largely as National Guard members and civilian employees. The size of the overall military-related community in Puerto Rico, including retired personnel, is estimated at 100,000 individuals. Fort Buchanan has about 4,000 military and civilian personnel. In addition, approximately 17,000 people are members of the Puerto Rico Army and Air National Guards or the U.S. Reserve forces. Puerto Rican soldiers have served in every U.S. military conflict from World War I to the current military engagement known by the United States and its allies as the War on Terror. The 65th Infantry Regiment, nicknamed "The Borinqueneers" after the original Taíno name of the island (Borinquen), is a Puerto Rican regiment of the United States Army. The regiment's motto is "Honor et Fidelitas", Latin for "Honor and Fidelity". The 65th Infantry Regiment participated in World War I, World War II, the Korean War, and the War on Terror, and in 2014 was awarded the Congressional Gold Medal, presented by President Barack Obama, for its heroism during the Korean War. Puerto Rico has no counties, as there are in 48 of the 50 U.S. states; instead, there are 78 municipalities. Municipalities are subdivided into "barrios", and those into sectors. Each municipality has a mayor and a municipal legislature elected to four-year terms. The economy of Puerto Rico is classified as a high-income economy by the World Bank and as the most competitive economy in Latin America by the World Economic Forum, but Puerto Rico currently has a public debt of $72.204 billion (equivalent to 103% of GNP) and a government deficit of $2.5 billion. According to the World Bank, the gross national income per capita of Puerto Rico in 2013 was $23,830 (PPP, international dollars), ranking 63rd among all sovereign states and dependent territories in the world.
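The debt-to-GNP ratio quoted above implies a rough GNP figure, which can be checked with a few lines of arithmetic (a sketch using only the numbers given in the text):

```python
# Figures from the text: public debt of $72.204 billion, equal to 103% of GNP.
debt_billions = 72.204
debt_to_gnp_ratio = 1.03

# Implied GNP, in billions of dollars: debt divided by the debt-to-GNP ratio.
implied_gnp = debt_billions / debt_to_gnp_ratio
print(round(implied_gnp, 1))  # roughly 70.1
```

In other words, a debt of $72.204 billion at 103% of GNP implies a GNP of roughly $70 billion.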
Its economy is mainly driven by manufacturing (primarily pharmaceuticals, textiles, petrochemicals, and electronics), followed by the service industry (primarily finance, insurance, real estate, and tourism). In recent years, the territory has also become a popular destination for MICE (meetings, incentives, conferencing, exhibitions) events, with a modern convention centre district overlooking the Port of San Juan. The geography of Puerto Rico and its political status are both determining factors in its economic prosperity, primarily due to its relatively small size as an island; its lack of natural resources used to produce raw materials and, consequently, its dependence on imports; and its territorial status with the United States, which controls its foreign policy while exerting trading restrictions, particularly on its shipping industry. Puerto Rico experienced a recession from 2006 to 2011, interrupted by four quarters of economic growth, and entered recession again in 2013, following a growing fiscal imbalance and the expiration of the Section 936 corporate incentives that the U.S. Internal Revenue Code had applied to Puerto Rico. This section was critical to the economy, as it established tax exemptions for U.S. corporations that settled in Puerto Rico and allowed their insular subsidiaries to send their earnings to the parent corporation at any time without paying federal tax on corporate income. Puerto Rico has nonetheless maintained relatively low inflation over the past decade while sustaining a purchasing power parity per capita higher than that of 80% of the rest of the world.
Academic analyses attribute most of Puerto Rico's economic woes to federal regulations that have expired, been repealed, or no longer apply to Puerto Rico; to its historical inability to become self-sufficient and self-sustainable; to its highly politicized public policy, which tends to change whenever a political party gains power; and to its highly inefficient local government, which has over time accrued a public debt equal to 68% of its gross domestic product. In comparison to the states of the United States, Puerto Rico is poorer than Mississippi (the poorest state), with 41% of its population below the poverty line; compared to Latin America, however, Puerto Rico has the highest GDP per capita in the region. Its main trading partners are the United States, Ireland, and Japan, with most imported products coming from East Asia, mainly from China, Hong Kong, and Taiwan. At a global scale, Puerto Rico's dependency on oil for transportation and electricity generation, as well as its dependency on food imports and raw materials, makes it volatile and highly reactive to changes in the world economy and climate. Puerto Rico's agricultural sector represents less than 1% of GNP. Tourism in Puerto Rico is also an important part of the economy. In 2017, Hurricane Maria caused severe damage to the island and its infrastructure, disrupting tourism for many months; the damage was estimated at $100 billion. An April 2019 report indicated that by that time only a few hotels remained closed and that life for tourists in and around the capital had, for the most part, returned to normal. By October 2019, nearly all of the popular amenities for tourists in the major destinations such as San Juan, Ponce, and Arecibo were in operation, and tourism was rebounding. This was important for the economy, since tourism provides up to 10% of Puerto Rico's GDP, according to Discover Puerto Rico. The latest Discover Puerto Rico campaign started in July 2018.
An April 2019 report stated that after the one-year anniversary of the storm in September 2018, the organization began to shift towards more optimistic messaging. The "Have We Met Yet?" campaign was intended to highlight the island's culture and history, setting it apart from other Caribbean destinations. In 2019, Discover Puerto Rico planned to continue that campaign, including "streaming options for branded content". In late November 2019, reports indicated that 90 calls to San Juan by Royal Caribbean ships would be cancelled during 2020 and 2021. This step would mean 360,000 fewer visitors, with a loss to the island's economy of $44 million; in addition, 30 ship departures from San Juan were being canceled. The rationale, as discussed in a news report, was the privatization of the cruise docks in San Juan, which require much-needed maintenance: an investment of around $250 million is needed to ensure that cruise ships can continue to dock there in the years to come. The report noted pressure on Governor Wanda Vazquez not to go ahead with the privatization, leaving the situation fluid. In early 2017, the Puerto Rican government-debt crisis posed serious problems for the government, which was saddled with outstanding bond debt that had climbed to $70 billion at a time of a 45-percent poverty rate and 12.4% unemployment, more than twice the mainland U.S. average. The debt had been increasing during a decade-long recession. The Commonwealth had been defaulting on many debts, including bonds, since 2015. With debt payments due, the governor was facing the risk of a government shutdown and failure to fund the managed health care system. "Without action before April, Puerto Rico's ability to execute contracts for Fiscal Year 2018 with its managed care organizations will be threatened, thereby putting at risk beginning July 1, 2017 the health care of up to 900,000 poor U.S.
citizens living in Puerto Rico", according to a letter sent to Congress by the Secretary of the Treasury and the Secretary of Health and Human Services. They also said that "Congress must enact measures recommended by both Republicans and Democrats that fix Puerto Rico's inequitable health care financing structure and promote sustained economic growth." Initially, the oversight board created under PROMESA called for Puerto Rico's governor Ricardo Rosselló to deliver a fiscal turnaround plan by January 28. Just before that deadline, the control board gave the Commonwealth government until February 28 to present a fiscal plan (including negotiations with creditors for restructuring debt) to solve the problems. A moratorium on lawsuits by debtors was extended to May 31. It is essential for Puerto Rico to reach restructuring deals to avoid a bankruptcy-like process under PROMESA. An internal survey conducted by the Puerto Rican Economists Association revealed that the majority of Puerto Rican economists reject the policy recommendations of the Board and the Rosselló government, with more than 80% of economists arguing in favor of auditing the debt. In early August 2017, the island's financial oversight board (created by PROMESA) planned to institute two days off without pay per month for government employees, down from the original plan of four days per month; the latter had been expected to achieve $218 million in savings. Governor Rosselló rejected this plan as unjustified and unnecessary. Pension reforms were also discussed, including a proposal for a 10% reduction in benefits to begin addressing the $50 billion in unfunded pension liabilities. Puerto Rico has an operating budget of about U.S.$9.8 billion, with expenses at about $10.4 billion, creating a structural deficit of $775 million (about 7.9% of the budget). Budgets with a structural deficit have been approved in consecutive years since 2000.
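The structural-deficit percentage quoted above can be verified from the budget figures in the text (a minimal sketch; the dollar figures are taken as reported, not audited):

```python
# Figures from the text, in U.S. dollars.
operating_budget = 9.8e9     # operating budget, about $9.8 billion
structural_deficit = 775e6   # structural deficit, $775 million

# Deficit expressed as a share of the operating budget.
share = structural_deficit / operating_budget
print(f"{share:.1%}")  # about 7.9%, matching the figure in the text
```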
Throughout those years, up to the present, all budgets have contemplated issuing bonds to cover these projected deficits rather than making structural adjustments. This practice increased Puerto Rico's cumulative debt, as the government had already been issuing bonds to balance its actual budget for four decades, beginning in 1973. Projected deficits added substantial burdens to an already indebted territory, which accrued a public debt of $71B, or about 70% of Puerto Rico's gross domestic product. This sparked an ongoing government-debt crisis after Puerto Rico's general obligation bonds were downgraded to speculative non-investment grade ("junk status") by three credit-rating agencies. In terms of financial control, almost 9.6% – or about $1.5 billion – of Puerto Rico's central government budget expenses for FY2014 was expected to be spent on debt service. Harsher budget cuts are expected as Puerto Rico must now repay larger chunks of debt in the coming years. For practical reasons the budget is divided into two aspects: a "general budget", which comprises the assignments funded exclusively by the Department of Treasury of Puerto Rico, and a "consolidated budget", which comprises the assignments funded by the general budget, by Puerto Rico's government-owned corporations, by revenue expected from loans, by the sale of government bonds, by subsidies extended by the federal government of the United States, and by other funds. The two budgets contrast drastically, with the consolidated budget usually about three times the size of the general budget (currently $29B and $9.0B, respectively). Almost one out of every four dollars in the consolidated budget comes from U.S. federal subsidies, while government-owned corporations compose more than 31% of the consolidated budget.
The critical aspect is the sale of bonds, which comprise 7% of the consolidated budget – a ratio that has increased annually because the government has been unable to prepare a balanced budget or generate enough income to cover all its expenses. In particular, the government-owned corporations add a heavy burden to the overall budget and public debt, as none is self-sufficient. For example, in FY2011 the government-owned corporations reported aggregated losses of more than $1.3B: the Puerto Rico Highways and Transportation Authority (PRHTA) reported losses of $409M, the Puerto Rico Electric Power Authority (PREPA; the government monopoly that controls all electricity on the island) reported losses of $272M, and the Puerto Rico Aqueducts and Sewers Authority (PRASA; the government monopoly that controls all water utilities on the island) reported losses of $112M. Losses by government-owned corporations have been defrayed through the issuance of bonds, which now compose more than 40% of Puerto Rico's entire public debt. Overall, from FY2000 to FY2010, Puerto Rico's debt grew at a compound annual growth rate (CAGR) of 9% while GDP remained stagnant. Bond issuance has not provided a long-term solution: in early July 2017, for example, the PREPA power authority was effectively bankrupt after defaulting on a plan to restructure $9 billion in bond debt, and the agency planned to seek court protection. In terms of protocol, the governor, together with the Puerto Rico Office of Management and Budget (OGP in Spanish), formulates the budget he believes is required to operate all government branches for the ensuing fiscal year. He then submits this formulation as a budget request to the Puerto Rican legislature before July 1, the date established by law as the beginning of Puerto Rico's fiscal year.
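The compound annual growth rate cited above for FY2000–FY2010 follows the standard formula CAGR = (end/start)^(1/years) − 1. A minimal sketch, using hypothetical starting and ending values chosen only to illustrate the 9% figure (the text does not give the actual FY2000 debt level):

```python
def cagr(start, end, years):
    """Compound annual growth rate between a starting and an ending value."""
    return (end / start) ** (1 / years) - 1

# Hypothetical illustration: a debt of $30 billion growing at 9% per year
# over the ten fiscal years FY2000-FY2010 multiplies by 1.09 ** 10.
start = 30.0                # illustrative starting debt, in $ billions (assumed)
end = start * 1.09 ** 10    # roughly 71 after ten years of 9% growth
print(round(cagr(start, end, 10), 2))  # 0.09
```

Note that ten years of 9% compound growth multiplies the starting value by about 2.37, which is how a moderate annual rate produces a large cumulative debt increase.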
While the constitution establishes that the request must be submitted "at the beginning of each regular session", the request is typically submitted during the first week of May, as the regular sessions of the legislature begin in January and it would be impractical to submit a request so far in advance. Once submitted, the budget is approved by the legislature, typically with amendments, through a joint resolution, and is referred back to the governor for his approval. The governor then either approves it or vetoes it. If vetoed, the legislature can either refer it back with amendments for the governor's approval, or approve it without the governor's consent by a two-thirds majority in each chamber. Once the budget is approved, the Department of Treasury disburses funds to the Office of Management and Budget, which in turn disburses the funds to the respective agencies, while the Puerto Rico Government Development Bank (the government's intergovernmental bank) manages all related banking affairs, including those of the government-owned corporations. The cost of living in Puerto Rico is high and has increased over the past decade. San Juan's in particular is higher than that of Atlanta, Dallas, or Seattle, but lower than that of Boston, Chicago, or New York City. One factor is housing prices, which are comparable to those in Miami and Los Angeles, although property taxes are considerably lower than in most places in the United States. Statistics used for cost of living sometimes do not take into account certain costs, such as the high cost of electricity, which has hovered in the 24¢ to 30¢ per kilowatt-hour range (two to three times the national average); increased travel costs for longer flights; additional shipping fees; and the loss of promotional participation opportunities for customers "outside the continental United States".
While some online stores do offer free shipping on orders to Puerto Rico, many merchants exclude Hawaii, Alaska, Puerto Rico, and other United States territories. The household median income is stated as $19,350 and the mean income as $30,463 in the U.S. Census Bureau's 2015 update, which also indicates that 45.5% of individuals are below the poverty level. The median home value in Puerto Rico ranges from U.S.$100,000 to U.S.$214,000, while the national median home value sits at $119,600. One of the most cited contributors to the high cost of living in Puerto Rico is the Merchant Marine Act of 1920, also known as the Jones Act, which prevents foreign-flagged ships from carrying cargo between two American ports, a practice known as cabotage. Because of the Jones Act, foreign ships inbound with goods from Central and South America, Western Europe, and Africa cannot stop in Puerto Rico, offload Puerto Rico-bound goods, load mainland-bound Puerto Rico-manufactured goods, and continue to U.S. ports. Instead, they must proceed directly to U.S. ports, where distributors break bulk and send Puerto Rico-bound manufactured goods to Puerto Rico across the ocean by U.S.-flagged ships. The local government of Puerto Rico has several times asked the U.S. Congress to exclude Puerto Rico from the Jones Act restrictions, without success. The most recent measure was taken by the 17th Legislative Assembly of Puerto Rico through R. Conc. del S. 21. These measures have always received support from all the major local political parties. In 2013 the Government Accountability Office published a report which concluded that "repealing or amending the Jones Act cabotage law might cut Puerto Rico shipping costs" and that "shippers believed that opening the trade to non-U.S.-flag competition could lower costs".
However, the same GAO report also found that "[shippers] doing business in Puerto Rico that GAO contacted reported that the freight rates are often—although not always—lower for foreign carriers going to and from Puerto Rico and foreign locations than the rates shippers pay to ship similar cargo to and from the United States, despite longer distances. Data were not available to allow us to validate the examples given or verify the extent to which this difference occurred." Ultimately, the report concluded that "[the] effects of modifying the application of the Jones Act for Puerto Rico are highly uncertain" for both Puerto Rico and the United States, particularly for the U.S. shipping industry and the military preparedness of the United States. A 2018 study by economists at Boston-based Reeve & Associates and Puerto Rico-based Estudios Tecnicos concluded that the 1920 Jones Act has no impact on either retail prices or the cost of living in Puerto Rico. The study found that Puerto Rico received very similar or lower shipping freight rates compared to neighboring islands, and that transportation costs have no impact on retail prices on the island. The study was based in part on a direct comparison of consumer goods at retail stores in San Juan, Puerto Rico, and Jacksonville, Florida, finding no significant difference in the prices of either grocery items or durable goods between the two locations. The first school in Puerto Rico was the "Escuela de Gramática" (Grammar School). It was established by Bishop Alonso Manso in 1513, in the area where the Cathedral of San Juan was to be constructed. The school was free of charge, and the courses taught were Latin language, literature, history, science, art, philosophy, and theology. Education in Puerto Rico is divided into three levels—Primary (elementary school, grades 1–6), Secondary (intermediate and high school, grades 7–12), and Higher Level (undergraduate and graduate studies).
As of 2002, the literacy rate of the Puerto Rican population was 94.1%; by gender, it was 93.9% for males and 94.4% for females. According to the 2000 Census, 60.0% of the population attained a high school degree or higher level of education, and 18.3% had a bachelor's degree or higher. School attendance is compulsory between the ages of 5 and 18. There are 1,539 public schools and 806 private schools. The largest and oldest university system is the public University of Puerto Rico (UPR), with 11 campuses. The largest private university systems on the island are the Sistema Universitario Ana G. Mendez, which operates the Universidad del Turabo, Metropolitan University, and Universidad del Este. Other private universities include the multi-campus Inter American University, the Pontifical Catholic University, Universidad Politécnica de Puerto Rico, and the Universidad del Sagrado Corazón. Puerto Rico has four schools of medicine and three ABA-approved law schools. Medical care in Puerto Rico has been heavily affected by the emigration of doctors to the mainland and by underfunding of the Medicare and Medicaid programs, which serve 60% of the island's population. Since Puerto Ricans pay no federal income tax, they are not eligible for health insurance subsidies under the Affordable Care Act. The city of San Juan has a system of triage, hospital, and preventive care health services. The municipal government sponsors regular health fairs in different areas of the city focusing on health care for the elderly and the disabled. In 2017, there were 69 hospitals in Puerto Rico. There are twenty hospitals in San Juan, half of which are operated by the government. The largest hospital is the "Centro Médico de Río Piedras" (the Río Piedras Medical Center). Founded in 1956, it is operated by the Medical Services Administration of the Department of Health of Puerto Rico and is a network of eight hospitals. The city of San Juan operates nine other hospitals.
Of these, eight are diagnostic and treatment centers located in communities throughout San Juan. There are also ten private hospitals in San Juan. The city of Ponce is served by several clinics and hospitals. There are four comprehensive care hospitals: Hospital Dr. Pila, Hospital San Cristobal, Hospital San Lucas, and Hospital de Damas. In addition, Hospital Oncológico Andrés Grillasca specializes in the treatment of cancer, and Hospital Siquiátrico specializes in mental disorders. There is also a U.S. Department of Veterans Affairs outpatient clinic that provides health services to U.S. veterans, and the U.S. Veterans Administration will build a new hospital in the city to satisfy regional needs. Hospital de Damas is listed in the U.S. News & World Report as one of the best hospitals under the U.S. flag. Ponce has the highest concentration of medical infrastructure per inhabitant of any municipality in Puerto Rico. On the island of Culebra, there is a small hospital called "Hospital de Culebra", which also offers pharmacy services to residents and visitors; for emergencies, patients are transported by plane to Fajardo on the main island. The town of Caguas has three hospitals: Hospital Hima San Pablo, Menonita Caguas Regional Hospital, and the San Juan Bautista Medical Center. The town of Cayey is served by the "Hospital Menonita de Cayey" and the "Hospital Municipal de Cayey". "Reforma de Salud de Puerto Rico" (Puerto Rico Health Reform) – locally referred to as "La Reforma" (The Reform) – is a government-run program which provides medical and health care services to the indigent and impoverished by contracting private health insurance companies rather than employing government-owned hospitals and emergency centers. The Reform is administered by the Puerto Rico Health Insurance Administration. The overall rate of crime in Puerto Rico is low; the territory does, however, have a high firearm homicide rate.
The homicide rate of 19.2 per 100,000 inhabitants in 2014 was significantly higher than that of any U.S. state. Most homicide victims are gang members and drug traffickers, with about 80% of homicides in Puerto Rico being drug related. Carjackings happen often in many areas of Puerto Rico. In 1992, carjacking was made a federal crime and rates decreased according to statistics, but as of 2019 the problem continued in municipalities like Guaynabo and others. From January 1, 2019, to March 14, 2019, thirty carjackings occurred on the island. Modern Puerto Rican culture is a unique mix of cultural antecedents, including European (predominantly Spanish, Italian, French, German, and Irish), African, and, more recently, North American and South American influences. Many Cubans and Dominicans have relocated to the island in the past few decades. From the Spanish, Puerto Rico received the Spanish language, the Catholic religion, and the vast majority of its cultural and moral values and traditions. The United States added English-language influence, the university system, and the adoption of some holidays and practices. On March 12, 1903, the University of Puerto Rico was officially founded, branching out from the "Escuela Normal Industrial", a smaller organization that had been founded in Fajardo three years earlier. Much of Puerto Rican culture centers on the influence of music and has been shaped by other cultures combining with local and traditional rhythms. Early in the history of Puerto Rican music, the influences of Spanish and African traditions were most noticeable. Cultural movements across the Caribbean and North America have played a vital role in the more recent musical influences that have reached Puerto Rico. The official symbols of Puerto Rico are the "reinita mora" or Puerto Rican spindalis (a type of bird), the "flor de maga" (a type of flower), and the "ceiba" or kapok (a type of tree). The unofficial animal and a symbol of Puerto Rican pride is the coquí, a small frog.
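The per-100,000 homicide rate cited earlier in this section is simple arithmetic over counts and population. The sketch below shows the conversion in both directions; the 3.5 million population figure is an assumed round number for illustration only, not a figure from this article.

```python
# Illustrative sketch: converting between an absolute event count and a
# rate per 100,000 inhabitants. The population used below is an assumed
# round figure for demonstration, not data from the article.

def rate_per_100k(events: int, population: int) -> float:
    """Express an absolute count as a rate per 100,000 inhabitants."""
    return events / population * 100_000

def events_from_rate(rate: float, population: int) -> int:
    """Recover the approximate absolute count implied by a per-100k rate."""
    return round(rate / 100_000 * population)

# A rate of 19.2 per 100,000 over an assumed population of 3.5 million
# corresponds to roughly 672 events in a year.
approx = events_from_rate(19.2, 3_500_000)
```

Normalizing by population is what makes rates comparable across jurisdictions of very different sizes, which is why the comparison with U.S. states is stated per 100,000 rather than as raw counts.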
Other popular symbols of Puerto Rico are the "jíbaro" (the "countryman") and the carite. The architecture of Puerto Rico demonstrates a broad variety of traditions, styles and national influences accumulated over four centuries of Spanish rule and a century of American rule. Spanish colonial architecture, Ibero-Islamic, art deco, post-modern, and many other architectural forms are visible throughout the island, and there are many regional distinctions from town to town. Old San Juan is one of the two "barrios", the other being Santurce, that made up the municipality of San Juan from 1864 to 1951, at which time the formerly independent municipality of Río Piedras was annexed. With its abundance of shops, historic places, museums, open-air cafés, restaurants, gracious homes, tree-shaded plazas, and its old-world beauty and architectural character, Old San Juan is a main destination for local and international tourism. The district is also characterized by numerous public plazas and churches, including San José Church and the Cathedral of San Juan Bautista, which contains the tomb of the Spanish explorer Juan Ponce de León. It also houses the oldest Catholic school for elementary education in Puerto Rico, the Colegio de Párvulos, built in 1865. The oldest parts of the district of Old San Juan remain partly enclosed by massive walls. Several defensive structures and notable forts, such as the emblematic Fort San Felipe del Morro, Fort San Cristóbal, and El Palacio de Santa Catalina, also known as La Fortaleza, acted as the primary defenses of the settlement, which was subjected to numerous attacks. La Fortaleza also continues to serve as the executive mansion for the governor of Puerto Rico. Many of the historic fortifications are part of San Juan National Historic Site. During the 1940s, sections of Old San Juan fell into disrepair, and many renovation plans were suggested. There was even a strong push to develop Old San Juan as a "small Manhattan".
Strict remodeling codes were implemented to prevent new constructions from affecting the common colonial Spanish architectural themes of the old city. When a project proposal suggested that the old Carmelite convent in San Juan be demolished to erect a new hotel, the Institute of Puerto Rican Culture had the building declared historic and then asked that it be converted into a hotel in a renewed facility; this became the "Hotel El Convento" in Old San Juan. This paradigm of reconstructing, renovating, and revitalizing the old city has been followed by other cities in the Americas, particularly Havana, Lima and Cartagena de Indias. Ponce Creole is an architectural style created in Ponce, Puerto Rico, in the late 19th and early 20th centuries. This style of Puerto Rican building is found predominantly in residential homes in Ponce that developed between 1895 and 1920. Ponce Creole architecture borrows heavily from the traditions of the French, the Spaniards, and the Caribbean to create houses that were especially built to withstand the hot and dry climate of the region, and to take advantage of the sun and sea breezes characteristic of southern Puerto Rico's Caribbean Sea coast. It is a blend of wood and masonry, incorporating architectural elements of other styles, from Classical Revival and Spanish Revival to Victorian. Puerto Rican art reflects many influences, owing much to the island's ethnically diverse background. A form of folk art called "santos" evolved from the Catholic Church's use of sculptures to convert indigenous Puerto Ricans to Christianity. "Santos" depict figures of saints and other religious icons and are made from native wood, clay, and stone. Once carved into simple shapes, they are often finished by being painted in vivid colors. "Santos" vary in size, with the smallest examples around eight inches tall and the largest about twenty inches tall. Traditionally, santos were seen as messengers between the earth and Heaven.
As such, they occupied a special place on household altars, where people prayed to them, asked for help, or tried to summon their protection. Also popular, "caretas" or "vejigantes" are masks worn during carnivals. Similar masks signifying evil spirits were used in both Spain and Africa, though for different purposes. The Spanish used their masks to frighten lapsed Christians into returning to the church, while tribal Africans used them as protection from the evil spirits they represented. True to their historic origins, Puerto Rican "caretas" always bear several horns and fangs. While usually constructed of papier-mâché, coconut shells and fine metal screening are sometimes used as well. Red and black were the typical colors for "caretas", but their palette has expanded to include a wide variety of bright hues and patterns. Puerto Rican literature evolved from the art of oral storytelling to its present-day status. Written works by the native islanders of Puerto Rico were prohibited and repressed by the Spanish colonial government; only those who were commissioned by the Spanish Crown to document the chronological history of the island were allowed to write. Diego de Torres Vargas was allowed to circumvent this strict prohibition for three reasons: he was a priest, he came from a prosperous Spanish family, and his father was a sergeant major in the Spanish Army who died while defending Puerto Rico from an invasion by the Dutch armada. In 1647, Torres Vargas wrote "Descripción de la Isla y Ciudad de Puerto Rico" ("Description of the Island and City of Puerto Rico"). This historical book was the first to make a detailed geographic description of the island. The book described all the fruits and commercial establishments of the time, mostly centered in the towns of San Juan and Ponce. The book also listed and described every mine, church, and hospital on the island at the time, and contained notices on the state and capital, plus an extensive and erudite bibliography.
It was the first successful attempt at writing a comprehensive history of Puerto Rico. Some of Puerto Rico's earliest writers were influenced by the teachings of Rafael Cordero. Among these was Dr. Manuel A. Alonso, the first Puerto Rican writer of notable importance. In 1849 he published "El Gíbaro", a collection of verses whose main theme was the poor Puerto Rican country farmer. Eugenio María de Hostos wrote "La Peregrinación de Bayoán" in 1863, which used Bartolomé de las Casas as a springboard to reflect on Caribbean identity. After this first novel, Hostos abandoned fiction in favor of the essay, which he saw as offering greater possibilities for inspiring social change. In the late 19th century, with the arrival of the first printing press and the founding of the Royal Academy of Belles Letters, Puerto Rican literature began to flourish. The first writers to express their political views in regard to Spanish colonial rule of the island were journalists. After the United States invaded Puerto Rico during the Spanish–American War and the island was ceded to the Americans as a condition of the Treaty of Paris of 1898, writers and poets began to express their opposition to the new colonial rule by writing about patriotic themes. Alejandro Tapia y Rivera, also known as the Father of Puerto Rican Literature, ushered in a new age of historiography with the publication of "The Historical Library of Puerto Rico". Cayetano Coll y Toste was another Puerto Rican historian and writer; his work "The Indo-Antillano Vocabulary" is valuable in understanding the way the Taínos lived. In 1894, Manuel Zeno Gandía wrote "La Charca", which told of the harsh life in the remote, mountainous coffee regions of Puerto Rico. Antonio S. Pedreira described in his work "Insularismo" the cultural survival of the Puerto Rican identity after the American invasion. With the Puerto Rican diaspora of the 1940s, Puerto Rican literature was greatly influenced by a phenomenon known as the Nuyorican Movement.
Puerto Rican literature continued to flourish, and many Puerto Ricans have since distinguished themselves as authors, journalists, poets, novelists, playwrights, screenwriters, and essayists, and have also stood out in other literary fields. The influence of Puerto Rican literature has transcended the boundaries of the island to the United States and the rest of the world. Over the past fifty years, significant writers include Ed Vega, Luis Rafael Sánchez, Piri Thomas, Giannina Braschi, and Miguel Piñero. Esmeralda Santiago has written an autobiographical trilogy about growing up in modern Puerto Rico as well as a historical novel about life on a sugar plantation during the mid-19th century. The mass media in Puerto Rico includes local radio stations, television stations and newspapers, the majority of which are in Spanish. There are also three stations of the U.S. Armed Forces Radio and Television Service. The island has several newspapers with daily distribution, some of them distributed free of charge, as well as newspapers distributed on a weekly or regional basis. Several television channels provide local content on the island, including WIPR-TV, WAPA-TV, and WKAQ-TV. The music of Puerto Rico has evolved as a heterogeneous and dynamic product of diverse cultural resources. The most conspicuous musical sources have been Spain and West Africa, although many aspects of Puerto Rican music reflect origins elsewhere in Europe and the Caribbean and, over the last century, in the U.S. Puerto Rican music culture today comprises a wide and rich variety of genres, ranging from local genres like bomba, plena, aguinaldo, danza and salsa to recent hybrids like reggaeton. Puerto Rico has some national instruments, like the cuatro (Spanish for "four"). The cuatro is a local instrument that was made by the "jíbaro", or people from the mountains.
Originally, the cuatro consisted of four steel strings, hence its name, but the modern cuatro consists of five doubled courses of steel strings. It is easily confused with a guitar, even by locals. When held upright, from right to left, the strings are G, D, A, E, B. In the realm of classical music, the island hosts two main orchestras, the Orquesta Sinfónica de Puerto Rico and the Orquesta Filarmónica de Puerto Rico. The Casals Festival takes place annually in San Juan, drawing in classical musicians from around the world. With respect to opera, the legendary Puerto Rican tenor Antonio Paoli was so celebrated that he performed private recitals for Pope Pius X and Tsar Nicholas II of Russia. In 1907, Paoli became the first operatic artist in world history to record an entire opera, when he participated in a performance of "Pagliacci" by Ruggiero Leoncavallo in Milan, Italy. Over the past fifty years, Puerto Rican artists such as Jorge Emmanuelli, Yomo Toro, Ramito, José Feliciano, Bobby Capó, Rafael Cortijo, Ismael Rivera, Chayanne, Tito Puente, Eddie Palmieri, Ray Barretto, Dave Valentin, Omar Rodríguez-López, Héctor Lavoe, Ricky Martin, Marc Anthony and Luis Fonsi have gained fame internationally. Puerto Rican cuisine has its roots in the cooking traditions and practices of Europe (Spain), Africa and the native Taínos. In the latter part of the 19th century, the cuisine of Puerto Rico was greatly influenced by the United States in the ingredients used in its preparation. Puerto Rican cuisine has transcended the boundaries of the island and can be found in several countries outside the archipelago. Basic ingredients include grains and legumes, herbs and spices, starchy tropical tubers, vegetables, meat and poultry, seafood and shellfish, and fruits. Main dishes include "mofongo", "arroz con gandules", "pasteles", "alcapurrias" and pig roast (or lechón). Beverages include "maví" and "piña colada".
Desserts include flan, "arroz con dulce" (sweet rice pudding), "piraguas", "brazo gitano", "tembleque", "polvorones", and "dulce de leche". Locals call their cuisine "cocina criolla". The traditional Puerto Rican cuisine was well established by the end of the 19th century. In 1848 the first restaurant, La Mallorquina, opened in Old San Juan, and "El Cocinero Puertorriqueño", the island's first cookbook, was published in 1849. From the diet of the Taíno people come many tropical roots and tubers like "yautía" (taro) and especially "yuca" (cassava), from which thin cracker-like "casabe" bread is made. Ajicito or cachucha pepper (a slightly hot pepper related to the habanero), "recao/culantro" (spiny leaf coriander), "achiote" (annatto), peppers, "ají caballero" (the hottest pepper native to Puerto Rico), peanuts, guavas, pineapples, "jicacos" (cocoplum), "quenepas" (mamoncillo), "lerenes" (Guinea arrowroot), "calabazas" (tropical pumpkins), and "guanábanas" (soursops) are all Taíno foods. The Taínos also grew varieties of beans and some maize, but maize was not as dominant in their cooking as it was for the peoples living on the mainland of Mesoamerica. This is due to the frequent hurricanes that Puerto Rico experiences, which destroy crops of maize but leave more safeguarded plantings like "conucos" (hills of "yuca" grown together). Spanish and European influence is also seen in Puerto Rican cuisine. Wheat, chickpeas (garbanzos), capers, olives, olive oil, black pepper, onions, garlic, "cilantrillo" (cilantro), oregano, basil, sugarcane, citrus fruit, eggplant, ham, lard, chicken, beef, pork, and cheese all came to Puerto Rico from Spain. The tradition of cooking complex stews and rice dishes in pots, such as rice and beans, is also thought to be originally European (much as for the Italians, Spaniards, and the British). Early Dutch, French, Italian, and Chinese immigrants influenced not only the culture but Puerto Rican cooking as well.
This great variety of traditions came together to form "la cocina criolla". Coconuts, coffee (brought by Arabs and Corsicans to Yauco from Kaffa, Ethiopia), okra, yams, sesame seeds, "gandules" (pigeon peas), sweet bananas, plantains, other root vegetables and Guinea hen all came to Puerto Rico from Africa. Puerto Rico has been commemorated on four U.S. postage stamps, and four of its personalities have been featured on others. The insular territories were commemorated in 1937; the third stamp in the series honored Puerto Rico, featuring 'La Fortaleza', the Spanish governor's palace. The first free election for governor of the U.S. colony of Puerto Rico was honored on April 27, 1949, at San Juan, Puerto Rico: 'Inauguration' on the 3-cent stamp refers to the election of Luis Muñoz Marín, the first democratically elected governor of Puerto Rico. San Juan, Puerto Rico was commemorated with an 8-cent stamp on its 450th anniversary, issued September 12, 1971, featuring a sentry box from Castillo San Felipe del Morro. In the "Flags of Our Nation" series (2008–2012), five of the fifty-five flags featured were territorial flags; a Forever stamp featuring the Puerto Rico flag, illustrated with a bird, was issued in 2011. Four Puerto Rican personalities have been featured on U.S. postage stamps: Roberto Clemente, in 1984 as an individual and again in the Legends of Baseball series issued in 2000; Luis Muñoz Marín in the Great Americans series, on February 18, 1990; Julia de Burgos in the Literary Arts series, issued in 2010; and José Ferrer in the Distinguished Americans series, issued in 2012. Baseball was one of the first sports to gain widespread popularity in Puerto Rico. The Puerto Rico Baseball League serves as the only active professional league, operating as a winter league. No Major League Baseball franchise or affiliate plays in Puerto Rico; however, San Juan hosted the Montreal Expos for several series in 2003 and 2004 before they moved to Washington, D.C., and became the Washington Nationals.
The Puerto Rico national baseball team has participated in the World Cup of Baseball, winning one gold (1951), four silver and four bronze medals; the Caribbean Series, winning fourteen times; and the World Baseball Classic. In 2006, San Juan's Hiram Bithorn Stadium hosted the opening round as well as the second round of the newly formed World Baseball Classic. Puerto Rican baseball players include Hall of Famers Roberto Clemente, Orlando Cepeda and Roberto Alomar, enshrined in 1973, 1999, and 2011 respectively. Boxing, basketball, and volleyball are considered popular sports as well. Wilfredo Gómez and McWilliams Arroyo have won their respective divisions at the World Amateur Boxing Championships. Other medalists include José Pedraza, who holds a silver medal, and three boxers who finished in third place: José Luis Vellón, Nelson Dieppa and McJoe Arroyo. In the professional circuit, Puerto Rico has the third-most boxing world champions and is the global leader in champions per capita. These include Miguel Cotto, Félix Trinidad, Wilfred Benítez and Gómez, among others. The Puerto Rico national basketball team joined the International Basketball Federation in 1957. Since then, it has won more than 30 medals in international competitions, including gold in three FIBA Americas Championships and the 1994 Goodwill Games. August 8, 2004, became a landmark date for the team when it became the first team to defeat the United States in an Olympic tournament since the integration of National Basketball Association players, winning the opening game 92–73 at the 2004 Summer Olympics organized in Athens, Greece. Baloncesto Superior Nacional acts as the top-level professional basketball league in Puerto Rico, and has experienced success since its beginning in 1930. Puerto Rico is also a member of FIFA and CONCACAF. In 2008, the archipelago's first unified league, the Puerto Rico Soccer League, was established.
Other sports include professional wrestling and road running. The World Wrestling Council and International Wrestling Association are the largest wrestling promotions on the main island. The World's Best 10K, held annually in San Juan, has been ranked among the 20 most competitive races globally. The "Puerto Rico All Stars" team has won twelve world championships in unicycle basketball. Organized streetball has gained some exposure, with teams like "Puerto Rico Street Ball" competing against established organizations, including the Capitanes de Arecibo and AND1's Mixtape Tour Team. Six years after the first visit, AND1 returned as part of their renamed Live Tour, losing to the Puerto Rico Streetballers. Consequently, practitioners of this style have earned participation in international teams, including Orlando "El Gato" Meléndez, who became the first Puerto Rican-born athlete to play for the Harlem Globetrotters. Orlando Antigua, whose mother is Puerto Rican, in 1995 became the first Hispanic and the first non-black player in 52 years to play for the Harlem Globetrotters. Puerto Rico has representation in all international competitions, including the Summer and Winter Olympics, the Pan American Games, the Caribbean World Series, and the Central American and Caribbean Games. Puerto Rico hosted the Pan Am Games in 1979 (officially in San Juan), and the Central American and Caribbean Games were hosted in 1993 in Ponce and in 2010 in Mayagüez. Puerto Rican athletes have won nine medals in Olympic competition (one gold, two silver, six bronze), the first in 1948 by boxer Juan Evangelista Venegas. Mónica Puig won Puerto Rico's first Olympic gold medal by winning the women's tennis singles title at Rio 2016. In her poem "The Messenger-Bird", Felicia Hemans refers to a Puerto Rican legend concerning "The Fountain of Youth", supposed to be found in the Lucayan Archipelago; she sourced this from Robertson's "History of America".
Cities and towns in Puerto Rico are interconnected by a system of roads, freeways, expressways, and highways maintained by the Highways and Transportation Authority under the jurisdiction of the U.S. Department of Transportation and patrolled by the Puerto Rico Police Department. The island's metropolitan area is served by a public bus transit system and a metro system called "Tren Urbano" (in English: Urban Train). Other forms of public transportation include seaborne ferries (which serve Puerto Rico's archipelago) as well as "carros públicos" (private minibuses). Puerto Rico has three international airports, the Luis Muñoz Marín International Airport in Carolina, Mercedita Airport in Ponce, and the Rafael Hernández Airport in Aguadilla, as well as 27 local airports. The Luis Muñoz Marín International Airport is the largest aerial transportation hub in the Caribbean. Puerto Rico has nine ports in different cities across the main island. The San Juan Port is the largest in Puerto Rico, and is the busiest port in the Caribbean and the 10th busiest in the United States in terms of commercial activity and cargo movement, respectively. The second largest port is the Port of the Americas in Ponce, currently under expansion to increase its annual cargo capacity, measured in twenty-foot equivalent units (TEUs). The Puerto Rico Electric Power Authority (PREPA), in Spanish "Autoridad de Energía Eléctrica" (AEE), is an electric power company and government-owned corporation of Puerto Rico responsible for electricity generation, power transmission, and power distribution in Puerto Rico. PREPA is the only entity authorized to conduct such business in Puerto Rico, effectively making it a government monopoly. The Authority is governed by a board appointed by the governor with the advice and consent of the Senate of Puerto Rico, and is run by an executive director. Telecommunications in Puerto Rico includes radio, television, fixed and mobile telephones, and the Internet.
Broadcasting in Puerto Rico is regulated by the U.S. Federal Communications Commission (FCC). At last count, there were 30 TV stations, 125 radio stations and roughly 1 million TV sets on the island. Cable TV subscription services are available, and the U.S. Armed Forces Radio and Television Service also broadcasts on the island.
Pseudoscience Pseudoscience consists of statements, beliefs, or practices that are claimed to be both scientific and factual but are incompatible with the scientific method. Pseudoscience is often characterized by contradictory, exaggerated or unfalsifiable claims; reliance on confirmation bias rather than rigorous attempts at refutation; lack of openness to evaluation by other experts; absence of systematic practices when developing hypotheses; and continued adherence long after the pseudoscientific hypotheses have been experimentally discredited. The term "pseudoscience" is considered pejorative, because it suggests something is being presented as science inaccurately or even deceptively. Those described as practicing or advocating pseudoscience often dispute the characterization. The demarcation between science and pseudoscience has philosophical and scientific implications. Differentiating science from pseudoscience has practical implications in the case of health care, expert testimony, environmental policies, and science education. Distinguishing scientific facts and theories from pseudoscientific beliefs, such as those found in astrology, alchemy, alternative medicine, occult beliefs, religious beliefs, and creation science, is part of science education and scientific literacy. Pseudoscience can be harmful. For example, pseudoscientific anti-vaccine activism and promotion of homeopathic remedies as alternative disease treatments can result in people forgoing important medical treatment with demonstrable health benefits. The word "pseudoscience" is derived from the Greek root "pseudo" meaning false and the English word "science", from the Latin word "scientia", meaning "knowledge". Although the term has been in use since at least the late 18th century (e.g., in 1796 by James Pettit Andrews in reference to alchemy), the concept of pseudoscience as distinct from real or proper science seems to have become more widespread during the mid-19th century. 
Among the earliest uses of "pseudo-science" was an 1844 article in the "Northern Journal of Medicine" (issue 387). An earlier use of the term was in 1843 by the French physiologist François Magendie, who referred to phrenology as "a pseudo-science of the present day". During the 20th century, the word was used pejoratively to describe explanations of phenomena which were claimed to be scientific, but which were not in fact supported by reliable experimental evidence. From time to time, though, the word was used in a more formal, technical manner in response to a perceived threat to individual and institutional security in a social and cultural setting. Pseudoscience is differentiated from science because, although it claims to be science, pseudoscience does not adhere to accepted scientific standards, such as the scientific method, falsifiability of claims, and Mertonian norms. A number of basic principles are accepted by scientists as standards for determining whether a body of knowledge, method, or practice is scientific. Experimental results should be reproducible and verified by other researchers. These principles are intended to ensure experiments can be reproduced measurably given the same conditions, allowing further investigation to determine whether a hypothesis or theory related to given phenomena is valid and reliable. Standards require the scientific method to be applied throughout, and bias to be controlled for or eliminated through randomization, fair sampling procedures, blinding of studies, and other methods. All gathered data, including the experimental or environmental conditions, are expected to be documented for scrutiny and made available for peer review, allowing further experiments or studies to be conducted to confirm or falsify results. Statistical quantification of significance, confidence, and error are also important tools for the scientific method.
During the mid-20th century, the philosopher Karl Popper emphasized the criterion of falsifiability to distinguish science from nonscience. Statements, hypotheses, or theories have falsifiability or refutability if there is the inherent possibility that they can be proven false; that is, if it is possible to conceive of an observation or an argument which negates them. Popper used astrology and psychoanalysis as examples of pseudoscience and Einstein's theory of relativity as an example of science. He subdivided nonscience into philosophical, mathematical, mythological, religious and metaphysical formulations on one hand, and pseudoscientific formulations on the other. Another example which shows the distinct need for a claim to be falsifiable appears in Carl Sagan's "The Demon-Haunted World", where he discusses an invisible dragon that he keeps in his garage. The point is made that there is no physical test to refute the claim of the presence of this dragon. Whatever test one thinks can be devised, there is a reason why it does not apply to the invisible dragon, so one can never prove that the initial claim is wrong. Sagan concludes: "Now, what's the difference between an invisible, incorporeal, floating dragon who spits heatless fire and no dragon at all?" He states that "your inability to invalidate my hypothesis is not at all the same thing as proving it true", once again explaining that even if such a claim were true, it would be outside the realm of scientific inquiry. During 1942, Robert K. Merton identified a set of five "norms" which he characterized as what makes a real science. If any of the norms were violated, Merton considered the enterprise to be nonscience. These norms are not broadly accepted by the scientific community.
During 1978, Paul Thagard proposed that pseudoscience is primarily distinguishable from science when it is less progressive than alternative theories over a long period of time, and its proponents fail to acknowledge or address problems with the theory. During 1983, Mario Bunge suggested the categories of "belief fields" and "research fields" to help distinguish between pseudoscience and science, where the former is primarily personal and subjective and the latter involves a certain systematic method. The 2018 book "The Skeptics' Guide to the Universe" by Steven Novella et al. lists hostility to criticism as one of the major features of pseudoscience. Philosophers of science such as Paul Feyerabend have argued that a distinction between science and nonscience is neither possible nor desirable. Among the issues which can make the distinction difficult are variable rates of evolution among the theories and methods of science in response to new data. Larry Laudan has suggested pseudoscience has no scientific meaning and is mostly used to describe our emotions: "If we would stand up and be counted on the side of reason, we ought to drop terms like 'pseudo-science' and 'unscientific' from our vocabulary; they are just hollow phrases which do only emotive work for us". Likewise, Richard McNally states, "The term 'pseudoscience' has become little more than an inflammatory buzzword for quickly dismissing one's opponents in media sound-bites" and "When therapeutic entrepreneurs make claims on behalf of their interventions, we should not waste our time trying to determine whether their interventions qualify as pseudoscientific. Rather, we should ask them: How do you know that your intervention works? What is your evidence?" For philosophers Silvio Funtowicz and Jerome R. Ravetz, "pseudo-science may be defined as one where the uncertainty of its inputs must be suppressed, lest they render its outputs totally indeterminate".
The definition, given in the book "Uncertainty and Quality in Science for Policy" (p. 54), alludes to the loss of craft skills in handling quantitative information, and to the bad practice of achieving precision in prediction (inference) only at the expense of ignoring uncertainty in the input which was used to formulate the prediction. This use of the term is common among practitioners of post-normal science. Understood in this way, pseudoscience can be fought using good practices to assess uncertainty in quantitative information, such as NUSAP and, in the case of mathematical modelling, sensitivity auditing. The history of pseudoscience is the study of pseudoscientific theories over time. A pseudoscience is a set of ideas that presents itself as science, while it does not meet the criteria to be properly called such. Distinguishing between proper science and pseudoscience is sometimes difficult. One proposal for demarcation between the two is the falsification criterion, attributed most notably to the philosopher Karl Popper. In the history of science and the history of pseudoscience it can be especially difficult to separate the two, because some sciences developed from pseudosciences. An example of this transformation is the science of chemistry, which traces its origins to the pseudoscientific or pre-scientific study of alchemy. The vast diversity in pseudosciences further complicates the history of science. Some modern pseudosciences, such as astrology and acupuncture, originated before the scientific era. Others developed as part of an ideology, such as Lysenkoism, or as a response to perceived threats to an ideology. Examples of this ideological process are creation science and intelligent design, which were developed in response to the scientific theory of evolution.
A topic, practice, or body of knowledge might reasonably be termed pseudoscientific when it is presented as consistent with the norms of scientific research, but it demonstrably fails to meet these norms. A large percentage of the United States population lacks scientific literacy, not adequately understanding scientific principles and method. In the "Journal of College Science Teaching", Art Hobson writes, "Pseudoscientific beliefs are surprisingly widespread in our culture even among public school science teachers and newspaper editors, and are closely related to scientific illiteracy." However, a 10,000-student study in the same journal concluded there was no strong correlation between science knowledge and belief in pseudoscience. In his book "The Demon-Haunted World" Carl Sagan discusses the government of China and the Chinese Communist Party's concern about Western pseudoscience developments and certain ancient Chinese practices in China. He sees pseudoscience occurring in the United States as part of a worldwide trend and suggests its causes, dangers, diagnosis and treatment may be universal. In 2006, the U.S. National Science Foundation (NSF) issued an executive summary of a paper on science and engineering which briefly discussed the prevalence of pseudoscience in modern times. It said, "belief in pseudoscience is widespread" and, referencing a Gallup Poll, stated that the 10 commonly believed examples of paranormal phenomena listed in the poll were "pseudoscientific beliefs". The items were "extrasensory perception (ESP), that houses can be haunted, ghosts, telepathy, clairvoyance, astrology, that people can communicate mentally with someone who has died, witches, reincarnation, and channelling". Such beliefs in pseudoscience represent a lack of knowledge of how science works. The scientific community may attempt to communicate information about science out of concern for the public's susceptibility to unproven claims.
The National Science Foundation stated that pseudoscientific beliefs in the U.S. became more widespread during the 1990s, peaked about 2001, and have decreased slightly since, with pseudoscientific beliefs remaining common. According to the NSF report, there is a lack of knowledge of pseudoscientific issues in society and pseudoscientific practices are commonly followed. Surveys indicate about a third of all adult Americans consider astrology to be scientific. In a report, Singer and Benassi (1981) wrote that pseudoscientific beliefs originate from at least four sources. Another American study (Eve and Dunn, 1990) supported the findings of Singer and Benassi and found that pseudoscientific beliefs were promoted by high school life science and biology teachers. The psychology of pseudoscience attempts to explore and analyze pseudoscientific thinking by thoroughly clarifying the distinction between what is considered scientific and what pseudoscientific. The human proclivity for seeking confirmation rather than refutation (confirmation bias), the tendency to hold comforting beliefs, and the tendency to overgeneralize have been proposed as reasons for pseudoscientific thinking. According to Beyerstein (1991), humans are prone to associations based on resemblances only, and often prone to misattribution in cause-effect thinking. Michael Shermer's theory of belief-dependent realism is driven by the belief that the brain is essentially a "belief engine" which scans data perceived by the senses and looks for patterns and meaning. There is also the tendency for the brain to create cognitive biases, as a result of inferences and assumptions made without logic and based on instinct – usually resulting in patterns in cognition. These tendencies of patternicity and agenticity are also driven "by a meta-bias called the bias blind spot, or the tendency to recognize the power of cognitive biases in other people but to be blind to their influence on our own beliefs".
Lindeman states that social motives (i.e., "to comprehend self and the world, to have a sense of control over outcomes, to belong, to find the world benevolent and to maintain one's self-esteem") are often "more easily" fulfilled by pseudoscience than by scientific information. Furthermore, pseudoscientific explanations are generally not analyzed rationally, but instead experientially. Operating within a different set of rules compared to rational thinking, experiential thinking regards an explanation as valid if the explanation is "personally functional, satisfying and sufficient", offering a description of the world that may be more personal than can be provided by science and reducing the amount of potential work involved in understanding complex events and outcomes. There is a tendency to believe pseudoscience over scientific evidence. Some people believe the prevalence of pseudoscientific beliefs is due to widespread "scientific illiteracy". Individuals lacking scientific literacy are more susceptible to wishful thinking, since they are likely to turn to immediate gratification powered by System 1, our default operating system which requires little to no effort. This system encourages one to accept the conclusions they believe, and reject the ones they do not. Further analysis of complex pseudoscientific phenomena requires System 2, which follows rules, compares objects along multiple dimensions and weighs options. These two systems have several other differences which are further discussed in the dual-process theory. The scientific and secular systems of morality and meaning are generally unsatisfying to most people. Humans are, by nature, a forward-minded species pursuing greater avenues of happiness and satisfaction, but we are all too frequently willing to grasp at unrealistic promises of a better life.
Psychology has much to say about pseudoscientific thinking, as it is the illusory perceptions of causality and effectiveness held by numerous individuals that need to be illuminated. Research suggests that illusory thinking happens in most people when they are exposed to certain circumstances, such as reading a book, an advertisement or the testimony of others; such circumstances are the basis of pseudoscientific beliefs. It is assumed that illusions are not unusual, and given the right conditions, illusions are able to occur systematically even in normal emotional situations. One of the things pseudoscience believers quibble most about is that academic science usually treats them as fools. Minimizing these illusions in the real world is not simple. To this aim, designing evidence-based educational programs can be effective to help people identify and reduce their own illusions. Philosophers classify types of knowledge. In English, the word "science" is used to indicate specifically the natural sciences and related fields, which are called the social sciences. Different philosophers of science may disagree on the exact limits – for example, is mathematics a formal science that is closer to the empirical ones, or is pure mathematics closer to the philosophical study of logic and therefore not a science? – but all agree that all of the ideas that are not scientific are non-scientific. The large category of non-science includes all matters outside the natural and social sciences, such as the study of history, metaphysics, religion, art, and the humanities. Dividing the category again, unscientific claims are a subset of the large category of non-scientific claims. This category specifically includes all matters that are directly opposed to good science. Un-science includes both "bad science" (such as an error made in a good-faith attempt at learning something about the natural world) and pseudoscience. Thus pseudoscience is a subset of un-science, and un-science, in turn, is a subset of non-science.
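The nesting of categories described above can be sketched as subset relations. The following minimal Python illustration uses placeholder example members (the listed labels are illustrative, not an exhaustive classification from the source):

```python
# Category nesting: pseudoscience ⊂ un-science ⊂ non-science.
# The example members below are illustrative placeholders only.
non_science = {"history", "metaphysics", "religion", "art",
               "bad science", "pseudoscience"}   # everything outside the sciences
un_science = {"bad science", "pseudoscience"}    # claims directly opposed to good science
pseudoscience = {"pseudoscience"}

# Each smaller category is contained in the larger one.
assert pseudoscience <= un_science <= non_science
# Non-science also contains matters that are not opposed to science at all.
assert "history" in non_science and "history" not in un_science
```

The subset operator `<=` on Python sets makes the containment claims checkable directly.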
Science is also distinguishable from revelation, theology, or spirituality in that it offers insight into the physical world obtained by empirical research and testing. The most notable disputes concern the evolution of living organisms, the idea of common descent, the geologic history of the Earth, the formation of the solar system, and the origin of the universe. Systems of belief that derive from divine or inspired knowledge are not considered pseudoscience if they do not claim either to be scientific or to overturn well-established science. Moreover, some specific religious claims, such as the power of intercessory prayer to heal the sick, although they may be based on untestable beliefs, can be tested by the scientific method. Some statements and common beliefs of popular science may not meet the criteria of science. "Pop" science may blur the divide between science and pseudoscience among the general public, and may also involve science fiction. Indeed, pop science is disseminated to, and can also easily emanate from, persons not accountable to scientific methodology and expert peer review. If claims of a given field can be tested experimentally and standards are upheld, it is not pseudoscience, regardless of how odd, astonishing, or counterintuitive those claims are. If claims made are inconsistent with existing experimental results or established theory, but the method is sound, caution should be used, since science consists of testing hypotheses which may turn out to be false. In such a case, the work may be better described as ideas that are "not yet generally accepted". "Protoscience" is a term sometimes used to describe a hypothesis that has not yet been tested adequately by the scientific method, but which is otherwise consistent with existing science or which, where inconsistent, offers a reasonable account of the inconsistency. It may also describe the transition from a body of practical knowledge into a scientific field.
Karl Popper stated it is insufficient to distinguish science from pseudoscience, or from metaphysics (such as the philosophical question of what existence means), by the criterion of rigorous adherence to the empirical method, which is essentially inductive, based on observation or experimentation. He proposed a method to distinguish between genuinely empirical, nonempirical or even pseudoempirical methods. The latter case was exemplified by astrology, which appeals to observation and experimentation. While it had astonishing empirical evidence based on observation, on horoscopes and biographies, it crucially failed to use acceptable scientific standards. Popper proposed falsifiability as an important criterion in distinguishing science from pseudoscience. To demonstrate this point, Popper gave two cases of human behavior and typical explanations from Sigmund Freud and Alfred Adler's theories: "that of a man who pushes a child into the water with the intention of drowning it; and that of a man who sacrifices his life in an attempt to save the child." From Freud's perspective, the first man would have suffered from psychological repression, probably originating from an Oedipus complex, whereas the second man had attained sublimation. From Adler's perspective, the first and second man suffered from feelings of inferiority and had to prove himself, which drove him to commit the crime or, in the second case, drove him to rescue the child. Popper was not able to find any counterexamples of human behavior in which the behavior could not be explained in the terms of Adler's or Freud's theory. Popper argued that the fact that the observation always fitted or confirmed the theory, rather than being its strength, was actually its weakness. In contrast, Popper gave the example of Einstein's gravitational theory, which predicted "light must be attracted by heavy bodies (such as the Sun), precisely as material bodies were attracted."
Following from this, stars closer to the Sun would appear to have moved a small distance away from the Sun, and away from each other. This prediction was particularly striking to Popper because it involved considerable risk. The brightness of the Sun prevented this effect from being observed under normal circumstances, so photographs had to be taken during an eclipse and compared to photographs taken at night. Popper states, "If observation shows that the predicted effect is definitely absent, then the theory is simply refuted." Popper summed up his criterion for the scientific status of a theory as depending on its falsifiability, refutability, or testability. Paul R. Thagard used astrology as a case study to distinguish science from pseudoscience and proposed principles and criteria to delineate them. First, astrology has not progressed in that it has not been updated nor added any explanatory power since Ptolemy. Second, it has ignored outstanding problems such as the precession of equinoxes in astronomy. Third, alternative theories of personality and behavior have grown progressively to encompass explanations of phenomena which astrology statically attributes to heavenly forces. Fourth, astrologers have remained uninterested in furthering the theory to deal with outstanding problems or in critically evaluating the theory in relation to other theories. Thagard intended this criterion to be extended to areas other than astrology. He believed it would delineate as pseudoscientific such practices as witchcraft and pyramidology, while leaving physics, chemistry and biology in the realm of science. Biorhythms, which like astrology relied uncritically on birth dates, did not meet the criterion of pseudoscience at the time because there were no alternative explanations for the same observations. The use of this criterion has the consequence that a theory can be scientific at one time and pseudoscientific at a later time. 
In the philosophy and history of science, Imre Lakatos stresses the social and political importance of the demarcation problem, the normative methodological problem of distinguishing between science and pseudoscience. His distinctive historical analysis of scientific methodology based on research programmes suggests: "scientists regard the successful theoretical prediction of stunning novel facts – such as the return of Halley's comet or the gravitational bending of light rays – as what demarcates good scientific theories from pseudo-scientific and degenerate theories, and in spite of all scientific theories being forever confronted by 'an ocean of counterexamples'". Lakatos offers a "novel fallibilist analysis of the development of Newton's celestial dynamics, [his] favourite historical example of his methodology" and argues in light of this historical turn, that his account answers for certain inadequacies in those of Karl Popper and Thomas Kuhn. "Nonetheless, Lakatos did recognize the force of Kuhn's historical criticism of Popper – all important theories have been surrounded by an 'ocean of anomalies', which on a falsificationist view would require the rejection of the theory outright... Lakatos sought to reconcile the rationalism of Popperian falsificationism with what seemed to be its own refutation by history". The boundary between science and pseudoscience is disputed and difficult to determine analytically, even after more than a century of study by philosophers of science and scientists, and despite some basic agreements on the fundamentals of the scientific method. The concept of pseudoscience rests on an understanding that the scientific method has been misrepresented or misapplied with respect to a given theory, but many philosophers of science maintain that different kinds of methods are held as appropriate across different fields and different eras of human history. 
According to Lakatos, the typical descriptive unit of great scientific achievements is not an isolated hypothesis but "a powerful problem-solving machinery, which, with the help of sophisticated mathematical techniques, digests anomalies and even turns them into positive evidence". The demarcation problem between science and pseudoscience brings up debate in the realms of science, philosophy and politics. Imre Lakatos, for instance, points out that the Communist Party of the Soviet Union at one point declared that Mendelian genetics was pseudoscientific and had its advocates, including well-established scientists such as Nikolai Vavilov, sent to a Gulag and that the "liberal Establishment of the West" denies freedom of speech to topics it regards as pseudoscience, particularly where they run up against social mores. Something becomes pseudoscientific when science cannot be separated from ideology, scientists misrepresent scientific findings to promote or draw attention for publicity, when politicians, journalists and a nation's intellectual elite distort the facts of science for short-term political gain, or when powerful individuals of the public conflate causation and cofactors by clever wordplay. These ideas reduce the authority, value, integrity and independence of science in society. Distinguishing science from pseudoscience has practical implications in the case of health care, expert testimony, environmental policies, and science education. Treatments with a patina of scientific authority which have not actually been subjected to actual scientific testing may be ineffective, expensive and dangerous to patients and confuse health providers, insurers, government decision makers and the public as to what treatments are appropriate. Claims advanced by pseudoscience may result in government officials and educators making bad decisions in selecting curricula. 
The extent to which students acquire a range of social and cognitive thinking skills related to the proper usage of science and technology determines whether they are scientifically literate. Education in the sciences encounters new dimensions with the changing landscape of science and technology, a fast-changing culture and a knowledge-driven era. A reinvention of the school science curriculum is one that shapes students to contend with its changing influence on human welfare. Scientific literacy, which allows a person to distinguish science from pseudosciences such as astrology, is among the attributes that enable students to adapt to the changing world. Its characteristics are embedded in a curriculum where students are engaged in resolving problems, conducting investigations, or developing projects. Friedman mentions why most scientists avoid educating about pseudoscience, including that paying undue attention to pseudoscience could dignify it. On the other hand, Park emphasizes how pseudoscience can be a threat to society and considers that scientists have a responsibility to teach how to distinguish science from pseudoscience. Pseudosciences such as homeopathy, even if generally benign, are used by charlatans. This poses a serious issue because it enables incompetent practitioners to administer health care. True-believing zealots may pose a more serious threat than typical con men because of their devotion to homeopathy's ideology. Irrational health care is not harmless and it is careless to create patient confidence in pseudomedicine. On 8 December 2016, Michael V. LeVine, writing in "Business Insider", pointed out the dangers posed by the "Natural News" website: "Snake-oil salesmen have pushed false cures since the dawn of medicine, and now websites like "Natural News" flood social media with dangerous anti-pharmaceutical, anti-vaccination and anti-GMO pseudoscience that puts millions at risk of contracting preventable illnesses."
The anti-vaccine movement has persuaded a large number of parents not to vaccinate their children, citing pseudoscientific research that links childhood vaccines with the onset of autism. This includes the study by Andrew Wakefield, which claimed that a combination of gastrointestinal disease and developmental regression, which are often seen in children with ASD, occurred within two weeks of receiving vaccines. The study was eventually retracted by its publisher while Wakefield was stripped of his license to practice medicine.
https://en.wikipedia.org/wiki?curid=23047
Prion Prions are misfolded proteins with the ability to transmit their misfolded shape onto normal variants of the same protein. They characterize several fatal and transmissible neurodegenerative diseases in humans and many other animals. It is not known what causes the normal protein to misfold, but the abnormal three-dimensional structure is suspected of conferring infectious properties, collapsing nearby protein molecules into the same shape. The word "prion" derives from "proteinaceous infectious particle". The hypothesized role of a protein as an infectious agent stands in contrast to all other known infectious agents such as viruses, bacteria, fungi and parasites, all of which contain nucleic acids (DNA, RNA or both). Prion variants of the prion protein (PrP), whose specific function is uncertain, are hypothesized as the cause of transmissible spongiform encephalopathies (TSEs), including scrapie in sheep, chronic wasting disease (CWD) in deer, bovine spongiform encephalopathy (BSE) in cattle (commonly known as "mad cow disease") and Creutzfeldt–Jakob disease (CJD) in humans. All known prion diseases in mammals affect the structure of the brain or other neural tissue; all are progressive, have no known effective treatment and are always fatal. Until 2015, all known mammalian prion diseases were considered to be caused by the prion protein (PrP); however in 2015 multiple system atrophy (MSA) was hypothesized to be caused by a prion form of alpha-synuclein. Prions form abnormal aggregates of proteins called amyloids, which accumulate in infected tissue and are associated with tissue damage and cell death. Amyloids are also responsible for several other neurodegenerative diseases such as Alzheimer's disease and Parkinson's disease. Prion aggregates are stable, and this structural stability means that prions are resistant to denaturation by chemical and physical agents: they cannot be destroyed by ordinary disinfection or cooking. 
This makes disposal and containment of these particles difficult. A prion disease is a type of proteopathy, or disease of structurally abnormal proteins. In humans, prions are believed to be the cause of Creutzfeldt–Jakob disease (CJD), its variant (vCJD), Gerstmann–Sträussler–Scheinker syndrome (GSS), fatal familial insomnia (FFI) and kuru. There is also evidence suggesting prions may play a part in the process of Alzheimer's disease, Parkinson's disease and amyotrophic lateral sclerosis (ALS), and these have been termed "prion-like diseases". Several yeast proteins have also been identified as having prionogenic properties. Prion replication is subject to epimutation and natural selection just as for other forms of replication, and their structure varies slightly between species. The word "prion", coined in 1982 by Stanley B. Prusiner, is a portmanteau derived from protein and infection, hence prion, and is short for "proteinaceous infectious particle", in reference to its ability to self-propagate and transmit its conformation to other proteins. Its main pronunciation is "pree-on", although "pry-on", as the homographic name of the bird is pronounced, is also heard. In his 1982 paper introducing the term, Prusiner specified that it be "pronounced "pree"-on." The protein that prions are made of (PrP) is found throughout the body, even in healthy people and animals. However, PrP found in infectious material has a different structure and is resistant to proteases, the enzymes in the body that can normally break down proteins. The normal form of the protein is called PrPC, while the infectious form is called PrPSc – the "C" refers to 'cellular' PrP, while the "Sc" refers to 'scrapie', the prototypic prion disease, occurring in sheep. While PrPC is structurally well-defined, PrPSc is polydisperse and relatively poorly defined.
PrP can be induced to fold into other more-or-less well-defined isoforms in vitro, and their relationship to the form(s) that are pathogenic in vivo is not yet clear. PrPC is a normal protein found on the membranes of cells. It has 209 amino acids (in humans), one disulfide bond, a molecular mass of 35–36 kDa and a mainly alpha-helical structure. Several topological forms exist; one cell surface form anchored via glycolipid and two transmembrane forms. The normal protein is not sedimentable, meaning that it cannot be separated by centrifuging techniques. Its function is a complex issue that continues to be investigated. PrPC binds copper (II) ions with high affinity. The significance of this finding is not clear, but it is presumed to relate to PrP structure or function. PrPC is readily digested by proteinase K and can be liberated from the cell surface in vitro by the enzyme phosphoinositide phospholipase C (PI-PLC), which cleaves the glycophosphatidylinositol (GPI) glycolipid anchor. PrP has been reported to play important roles in cell-cell adhesion and intracellular signaling "in vivo", and may therefore be involved in cell-cell communication in the brain. Protease-resistant PrPSc-like protein (PrPres) is the name given to any isoform of PrPc which is structurally altered and converted into a misfolded proteinase K-resistant form "in vitro". To model conversion of PrPC to PrPSc in vitro, Saborio "et al." rapidly converted PrPC into a PrPres by a procedure involving cyclic amplification of protein misfolding. The term "PrPres" has been used to distinguish this in vitro-generated form from PrPSc, which is isolated from infectious tissue and associated with the transmissible spongiform encephalopathy agent. For example, unlike PrPSc, PrPres may not necessarily be infectious.
The infectious isoform of PrP, known as PrPSc, or simply the prion, is able to convert normal PrPC proteins into the infectious isoform by changing their conformation, or shape; this, in turn, alters the way the proteins interconnect. PrPSc always causes prion disease. Although the exact 3D structure of PrPSc is not known, it has a higher proportion of β-sheet structure in place of the normal α-helix structure. Aggregations of these abnormal isoforms form highly structured amyloid fibers, which accumulate to form plaques. The end of each fiber acts as a template onto which free protein molecules may attach, allowing the fiber to grow. Under most circumstances, only PrP molecules with an identical amino acid sequence to the infectious PrPSc are incorporated into the growing fiber. However, rare cross-species transmission is also possible. The physiological function of the prion protein remains poorly understood. While data from in vitro experiments suggest many dissimilar roles, studies on PrP knockout mice have provided only limited information because these animals exhibit only minor abnormalities. In research done in mice, it was found that the cleavage of PrP proteins in peripheral nerves causes the activation of myelin repair in Schwann cells and that the lack of PrP proteins caused demyelination in those cells. MAVS, RIP1, and RIP3 are prion-like proteins found in other parts of the body. They also polymerise into filamentous amyloid fibers which initiate regulated cell death in the case of a viral infection to prevent the spread of virions to other, surrounding cells. A review of evidence in 2005 suggested that PrP may have a normal function in maintenance of long-term memory. As well, a 2004 study found that mice lacking genes for normal cellular PrP protein show altered hippocampal long-term potentiation. A more recent study, which may explain this finding, showed that the neuronal protein CPEB has a genetic sequence similar to that of yeast prion proteins.
The prion-like formation of CPEB is essential for maintaining long-term synaptic changes associated with long term memory formation. A 2006 article from the Whitehead Institute for Biomedical Research indicates that PrP expression on stem cells is necessary for an organism's self-renewal of bone marrow. The study showed that all long-term hematopoietic stem cells express PrP on their cell membrane and that hematopoietic tissues with PrP-null stem cells exhibit increased sensitivity to cell depletion. There is some evidence that PrP may play a role in innate immunity, as the expression of PRNP, the PrP gene, is upregulated in many viral infections and PrP has antiviral properties against many viruses, including HIV. The first hypothesis that tried to explain how prions replicate in a protein-only manner was the heterodimer model. This model assumed that a single PrPSc molecule binds to a single PrPC molecule and catalyzes its conversion into PrPSc. The two PrPSc molecules then come apart and can go on to convert more PrPC. However, a model of prion replication must explain both how prions propagate, and why their spontaneous appearance is so rare. Manfred Eigen showed that the heterodimer model requires PrPSc to be an extraordinarily effective catalyst, increasing the rate of the conversion reaction by a factor of around 10^15. This problem does not arise if PrPSc exists only in aggregated forms such as amyloid, where cooperativity may act as a barrier to spontaneous conversion. What is more, despite considerable effort, infectious monomeric PrPSc has never been isolated. An alternative model assumes that PrPSc exists only as fibrils, and that fibril ends bind PrPC and convert it into PrPSc. If this were all, then the quantity of prions would increase linearly, forming ever longer fibrils. But exponential growth of both PrPSc and of the quantity of infectious particles is observed during prion disease. This can be explained by taking into account fibril breakage.
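A toy numerical model illustrates why elongation plus breakage yields exponential growth. In this sketch (the rate constants beta and b, the unit initial conditions, and the constant monomer concentration are illustrative assumptions, not values from the source), fibril ends elongate in proportion to the monomer concentration, and each breakage event creates a new fibril, so the two variables amplify each other:

```python
import math

def growth_rate(x, beta=1.0, b=1.0, dt=1e-4, t_half=10.0):
    """Euler-integrate a two-variable fibril model at constant monomer
    concentration x and estimate the late-time exponential growth rate.
    y = number of fibrils, z = total aggregated PrPSc mass (toy units)."""
    y, z = 1.0, 1.0
    n = int(t_half / dt)
    for _ in range(n):            # first half: let the initial transient decay
        y, z = y + b * z * dt, z + beta * x * y * dt
    z_mid = z
    for _ in range(n):            # second half: measure the asymptotic growth
        y, z = y + b * z * dt, z + beta * x * y * dt
    return math.log(z / z_mid) / t_half

# Breakage (dy/dt = b*z) feeding elongation (dz/dt = beta*x*y) gives
# z ~ exp(sqrt(beta*b*x) * t): the rate scales with the square root of
# the monomer concentration, so doubling x multiplies it by sqrt(2).
r1, r2 = growth_rate(1.0), growth_rate(2.0)
print(round(r2 / r1, 3))
```

Under these assumptions the printed ratio is close to sqrt(2) ≈ 1.414, matching the square-root dependence of the incubation kinetics on PrPC concentration discussed in the text.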
A mathematical solution for the exponential growth rate resulting from the combination of fibril growth and fibril breakage has been found. The exponential growth rate depends largely on the square root of the PrPC concentration. The incubation period is determined by the exponential growth rate, and in vivo data on prion diseases in transgenic mice match this prediction. The same square root dependence is also seen in vitro in experiments with a variety of different amyloid proteins. The mechanism of prion replication has implications for designing drugs. Since the incubation period of prion diseases is so long, an effective drug does not need to eliminate all prions, but simply needs to slow down the rate of exponential growth. Models predict that the most effective way to achieve this, using a drug with the lowest possible dose, is to find a drug that binds to fibril ends and blocks them from growing any further. Researchers at Dartmouth College discovered that endogenous host cofactor molecules such as phospholipid molecules (e.g. phosphatidylethanolamine) and polyanions (e.g. single-stranded RNA molecules) are necessary to form PrPSc molecules with high levels of specific infectivity "in vitro", whereas protein-only PrPSc molecules appear to lack significant levels of biological infectivity. Prions cause neurodegenerative disease by aggregating extracellularly within the central nervous system to form plaques known as amyloid, which disrupt the normal tissue structure. This disruption is characterized by "holes" in the tissue with resultant spongy architecture due to the vacuole formation in the neurons. Other histological changes include astrogliosis and the absence of an inflammatory reaction. While the incubation period for prion diseases is relatively long (5 to 20 years), once symptoms appear the disease progresses rapidly, leading to brain damage and death.
Neurodegenerative symptoms can include convulsions, dementia, ataxia (balance and coordination dysfunction), and behavioural or personality changes. All known prion diseases are untreatable and fatal. However, a vaccine developed in mice may provide insight into providing a vaccine to resist prion infections in humans. Additionally, in 2006 scientists announced that they had genetically engineered cattle lacking a necessary gene for prion production – thus theoretically making them immune to BSE, building on research indicating that mice lacking normally occurring prion protein are resistant to infection by scrapie prion protein. In 2013, a study revealed that 1 in 2,000 people in the United Kingdom might harbour the infectious prion protein that causes vCJD. Many different mammalian species can be affected by prion diseases, as the prion protein (PrP) is very similar in all mammals. Due to small differences in PrP between different species it is unusual for a prion disease to transmit from one species to another. The human prion disease variant Creutzfeldt–Jakob disease, however, is thought to be caused by a prion that typically infects cattle, causing bovine spongiform encephalopathy and is transmitted through infected meat. Until 2015 all known mammalian prion diseases were considered to be caused by the prion protein, PrP; in 2015 multiple system atrophy was found to be transmissible and was hypothesized to be caused by a new prion, the misfolded form of a protein called alpha-synuclein. The endogenous, properly folded form of the prion protein is denoted PrPC (for "Common" or "Cellular"), whereas the disease-linked, misfolded form is denoted PrPSc (for "Scrapie"), after one of the diseases first linked to prions and neurodegeneration.
The precise structure of the prion is not known, though they can be formed spontaneously by combining PrPC, homopolymeric polyadenylic acid, and lipids in a protein misfolding cyclic amplification (PMCA) reaction even in the absence of pre-existing infectious prions. This result is further evidence that prion replication does not require genetic information. It has been recognized that prion diseases can arise in three different ways: acquired, familial, or sporadic. It is often assumed that the diseased form directly interacts with the normal form to make it rearrange its structure. One idea, the "Protein X" hypothesis, is that an as-yet unidentified cellular protein (Protein X) enables the conversion of PrPC to PrPSc by bringing a molecule of each of the two together into a complex. The primary method of infection in animals is through ingestion. It is thought that prions may be deposited in the environment through the remains of dead animals and via urine, saliva, and other body fluids. They may then linger in the soil by binding to clay and other minerals. A University of California research team has provided evidence for the theory that infection can occur from prions in manure. And, since manure is present in many areas surrounding water reservoirs, as well as used on many crop fields, it raises the possibility of widespread transmission. It was reported in January 2011 that researchers had discovered prions spreading through airborne transmission on aerosol particles, in an animal testing experiment focusing on scrapie infection in laboratory mice. Preliminary evidence supporting the notion that prions can be transmitted through use of urine-derived human menopausal gonadotropin, administered for the treatment of infertility, was published in 2011. In 2015, researchers at The University of Texas Health Science Center at Houston found that plants can be a vector for prions. 
When researchers fed hamsters grass that grew on ground where a deer that died with chronic wasting disease (CWD) was buried, the hamsters became ill with CWD, suggesting that prions can bind to plants, which then take them up into the leaf and stem structure, where they can be eaten by herbivores, thus completing the cycle. It is thus possible that there is a progressively accumulating number of prions in the environment. Infectious particles possessing nucleic acid are dependent upon it to direct their continued replication. Prions, however, are infectious by their effect on normal versions of the protein. Sterilizing prions, therefore, requires the denaturation of the protein to a state in which the molecule is no longer able to induce the abnormal folding of normal proteins. In general, prions are quite resistant to proteases, heat, ionizing radiation, and formaldehyde treatments, although their infectivity can be reduced by such treatments. Effective prion decontamination relies upon protein hydrolysis or reduction or destruction of protein tertiary structure. Examples include sodium hypochlorite, sodium hydroxide, and strongly acidic detergents such as LpH. Autoclaving in pressurized steam for 18 minutes has been found to be somewhat effective in deactivating the agent of disease. Ozone sterilization is currently being studied as a potential method for prion denaturation and deactivation. Renaturation of a completely denatured prion to infectious status has not yet been achieved; however, partially denatured prions can be renatured to an infective status under certain artificial conditions. The World Health Organization recommends a choice of three procedures for the sterilization of all heat-resistant surgical instruments to ensure that they are not contaminated with prions. Overwhelming evidence shows that prions resist degradation and persist in the environment for years, and proteases do not degrade them. 
Experimental evidence shows that "unbound" prions degrade over time, while soil-bound prions remain at stable or increasing levels, suggesting that prions likely accumulate in the environment. Proteins showing prion-type behavior are also found in some fungi, which has been useful in helping to understand mammalian prions. Fungal prions do not appear to cause disease in their hosts. In yeast, protein refolding to the prion configuration is assisted by chaperone proteins such as Hsp104. All known prions induce the formation of an amyloid fold, in which the protein polymerises into an aggregate consisting of tightly packed beta sheets. Amyloid aggregates are fibrils, growing at their ends, and replicate when breakage causes two growing ends to become four growing ends. The incubation period of prion diseases is determined by the exponential growth rate associated with prion replication, which is a balance between the linear growth and the breakage of aggregates. Fungal proteins exhibiting templated conformational change were discovered in the yeast "Saccharomyces cerevisiae" by Reed Wickner in the early 1990s. For their mechanistic similarity to mammalian prions, they were termed yeast prions. Subsequent to this, a prion has also been found in the fungus "Podospora anserina". These prions behave similarly to PrP, but, in general, are nontoxic to their hosts. Susan Lindquist's group at the Whitehead Institute has argued that some of the fungal prions are not associated with any disease state, but may have a useful role; however, researchers at the NIH have also provided arguments suggesting that fungal prions could be considered a diseased state. There is evidence that fungal proteins have evolved specific functions that benefit the microorganism and enhance its ability to adapt to its diverse environments. 
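The growth-and-breakage replication described above can be illustrated with a toy simulation, in which fibrils elongate at their ends and occasionally fragment, doubling the number of growing ends. All rate parameters here are invented for illustration, not measured values:

```python
import random

def simulate_fibrils(steps=120, elong_p=0.5, frag_p=0.005, seed=1):
    """Toy growth-fragmentation model of amyloid replication.

    Each fibril elongates by monomer addition at its ends; an occasional
    breakage turns one fibril into two, so two growing ends become four.
    Parameters are illustrative only, not fitted to any prion system.
    """
    random.seed(seed)
    fibrils = [10]  # lengths of fibrils; start from a single seed fibril
    for _ in range(steps):
        next_gen = []
        for length in fibrils:
            if random.random() < elong_p:  # monomer addition at an end
                length += 1
            # breakage probability grows with fibril length
            if length > 2 and random.random() < frag_p * length:
                cut = random.randint(1, length - 1)  # random internal break
                next_gen += [cut, length - cut]
            else:
                next_gen.append(length)
        fibrils = next_gen
    return fibrils

fibrils = simulate_fibrils()
print(len(fibrils), "fibrils,", 2 * len(fibrils), "growing ends")
```

Because each fragmentation multiplies the number of growing ends while elongation supplies new mass, the fibril count in such a model grows roughly exponentially, mirroring the balance between linear growth and breakage described in the text.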
Research into fungal prions has given strong support to the protein-only concept, since purified protein extracted from cells with a prion state has been demonstrated to convert the normal form of the protein into a misfolded form "in vitro", and in the process, preserve the information corresponding to different strains of the prion state. It has also shed some light on prion domains, which are regions in a protein that promote the conversion into a prion. Fungal prions have helped to suggest mechanisms of conversion that may apply to all prions, though fungal prions appear distinct from infectious mammalian prions in the lack of cofactor required for propagation. The characteristic prion domains may vary between species – e.g., characteristic fungal prion domains are not found in mammalian prions. There are no effective treatments for prion diseases. Clinical trials in humans have not met with success and have been hampered by the rarity of prion diseases. Although some potential treatments have shown promise in the laboratory, none have been effective once the disease has set in. Prion-like domains have been found in a variety of other mammalian proteins. Some of these proteins have been implicated in the ontogeny of age-related neurodegenerative disorders such as amyotrophic lateral sclerosis (ALS), frontotemporal lobar degeneration with ubiquitin-positive inclusions (FTLD-U), Alzheimer's disease, Parkinson's disease, and Huntington's disease. They are also implicated in some forms of systemic amyloidosis including AA amyloidosis that develops in humans and animals with inflammatory and infectious diseases such as tuberculosis, Crohn's disease, rheumatoid arthritis, and HIV/AIDS. AA amyloidosis, like prion disease, may be transmissible. This has given rise to the 'prion paradigm', where otherwise harmless proteins can be converted to a pathogenic form by a small number of misfolded, nucleating proteins. 
The definition of a prion-like domain arises from the study of fungal prions. In yeast, prionogenic proteins have a portable prion domain that is both necessary and sufficient for self-templating and protein aggregation. This has been shown by attaching the prion domain to a reporter protein, which then aggregates like a known prion. Similarly, removing the prion domain from a fungal prion protein inhibits prionogenesis. This modular view of prion behaviour has led to the hypothesis that similar prion domains are present in animal proteins, in addition to PrP. These fungal prion domains have several characteristic sequence features. They are typically enriched in asparagine, glutamine, tyrosine and glycine residues, with an asparagine bias being particularly conducive to the aggregative property of prions. Historically, prionogenesis has been seen as independent of sequence and only dependent on relative residue content. However, this has been shown to be false, with the spacing of prolines and charged residues having been shown to be critical in amyloid formation. Bioinformatic screens have predicted that over 250 human proteins contain prion-like domains (PrLD). These domains are hypothesized to have the same transmissible, amyloidogenic properties of PrP and known fungal proteins. As in yeast, proteins involved in gene expression and RNA binding seem to be particularly enriched in PrLDs, compared to other classes of protein. In particular, 29 of the known 210 proteins with an RNA recognition motif also have a putative prion domain. Meanwhile, several of these RNA-binding proteins have been independently identified as pathogenic in cases of ALS, FTLD-U, Alzheimer's disease, and Huntington's disease. The pathogenicity of prions and proteins with prion-like domains is hypothesized to arise from their self-templating ability and the resulting exponential growth of amyloid fibrils. 
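The composition bias described above (enrichment in asparagine and glutamine) is the starting point for such bioinformatic screens. A toy version of this idea, a crude composition score rather than any validated predictor (real tools such as PLAAC use much richer statistical models), could look like this; both example sequences are made up for illustration:

```python
def nq_fraction(seq):
    """Fraction of asparagine (N) and glutamine (Q) residues in a protein
    sequence; a toy stand-in for the composition bias that real
    prion-like-domain screens build on. Heuristic illustration only."""
    seq = seq.upper()
    return sum(seq.count(a) for a in "NQ") / len(seq)

# A Q/N-rich stretch (in the spirit of yeast prion domains) scores far
# higher than a typical globular-protein stretch; both are invented.
rich = "QQQNNYQQYSQNGNQQQGNNRY"
plain = "MKTAYIAKQRQISFVKSHFSRQ"
print(nq_fraction(rich), nq_fraction(plain))
```

As the text notes, residue composition alone is not sufficient: the spacing of prolines and charged residues also matters, which is why a score like this can only be a first-pass filter.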
The presence of amyloid fibrils in patients with degenerative diseases has been well documented. These amyloid fibrils are seen as the result of pathogenic proteins that self-propagate and form highly stable, non-functional aggregates. While this does not necessarily imply a causal relationship between amyloid and degenerative diseases, the toxicity of certain amyloid forms and the overproduction of amyloid in familial cases of degenerative disorders supports the idea that amyloid formation is generally toxic. Specifically, aggregation of TDP-43, an RNA-binding protein, has been found in ALS/MND patients, and mutations in the genes coding for these proteins have been identified in familial cases of ALS/MND. These mutations promote the misfolding of the proteins into a prion-like conformation. The misfolded form of TDP-43 forms cytoplasmic inclusions in afflicted neurons, and is found depleted in the nucleus. In addition to ALS/MND and FTLD-U, TDP-43 pathology is a feature of many cases of Alzheimer's disease, Parkinson's disease and Huntington's disease. The misfolding of TDP-43 is largely directed by its prion-like domain. This domain is inherently prone to misfolding, while pathological mutations in TDP-43 have been found to increase this propensity to misfold, explaining the presence of these mutations in familial cases of ALS/MND. As in yeast, the prion-like domain of TDP-43 has been shown to be both necessary and sufficient for protein misfolding and aggregation. Similarly, pathogenic mutations have been identified in the prion-like domains of heterogeneous nuclear riboproteins hnRNPA2B1 and hnRNPA1 in familial cases of muscle, brain, bone and motor neuron degeneration. The wild-type form of all of these proteins show a tendency to self-assemble into amyloid fibrils, while the pathogenic mutations exacerbate this behaviour and lead to excess accumulation. 
In the 1950s, Carleton Gajdusek began research which eventually showed that kuru could be transmitted to chimpanzees by what was possibly a new infectious agent, work for which he eventually won the 1976 Nobel prize. During the 1960s, two London-based researchers, radiation biologist Tikvah Alper and biophysicist John Stanley Griffith, developed the hypothesis that the transmissible spongiform encephalopathies are caused by an infectious agent consisting solely of proteins. Earlier investigations by E.J. Field into scrapie and kuru had found evidence for the transfer of pathologically inert polysaccharides that only become infectious post-transfer, in the new host. Alper and Griffith wanted to account for the discovery that the mysterious infectious agent causing the diseases scrapie and Creutzfeldt–Jakob disease resisted ionizing radiation. Griffith proposed three ways in which a protein could be a pathogen. In the first hypothesis, he suggested that if the protein is the product of a normally suppressed gene, and introducing the protein could induce the gene's expression, that is, wake the dormant gene up, then the result would be a process indistinguishable from replication, as the gene's expression would produce the protein, which would then go wake the gene up in other cells. His second hypothesis forms the basis of the modern prion theory, and proposed that an abnormal form of a cellular protein can convert normal proteins of the same type into its abnormal form, thus leading to replication. His third hypothesis proposed that the agent could be an antibody if the antibody was its own target antigen, as such an antibody would result in more and more antibody being produced against itself. However, Griffith acknowledged that this third hypothesis was unlikely to be true due to the lack of a detectable immune response. 
Francis Crick recognized the potential significance of the Griffith protein-only hypothesis for scrapie propagation in the second edition of his "Central dogma of molecular biology" (1970): While asserting that the flow of sequence information from protein to protein, or from protein to RNA and DNA was "precluded", he noted that Griffith's hypothesis was a potential contradiction (although it was not so promoted by Griffith). The revised hypothesis was later formulated, in part, to accommodate reverse transcription (which both Howard Temin and David Baltimore discovered in 1970). In 1982, Stanley B. Prusiner of the University of California, San Francisco, announced that his team had purified the hypothetical infectious protein, which did not appear to be present in healthy hosts, though they did not manage to isolate the protein until two years after Prusiner's announcement. The protein was named a prion, for "proteinaceous infectious particle", derived from the words protein and infection. When the prion was discovered, Griffith's first hypothesis, that the protein was the product of a normally silent gene, was favored by many. It was subsequently discovered, however, that the same protein exists in normal hosts but in different form. Following the discovery of the same protein in different form in uninfected individuals, the specific protein that the prion was composed of was named the Prion Protein (PrP), and Griffith's second hypothesis, that an abnormal form of a host protein can convert other proteins of the same type into its abnormal form, became the dominant theory. Prusiner won the Nobel Prize in Physiology or Medicine in 1997 for his research into prions.
https://en.wikipedia.org/wiki?curid=23048
Periodic table The periodic table, also known as the periodic table of elements, is a tabular display of the chemical elements, which are arranged by atomic number, electron configuration, and recurring chemical properties. The structure of the table shows periodic trends. The seven rows of the table, called periods, generally have metals on the left and nonmetals on the right. The columns, called groups, contain elements with similar chemical behaviours. Six groups have accepted names as well as assigned numbers: for example, group 17 elements are the halogens; and group 18 are the noble gases. Also displayed are four simple rectangular areas or blocks associated with the filling of different atomic orbitals. The elements from atomic numbers 1 (hydrogen) through 118 (oganesson) have all been discovered or synthesized, completing seven full rows of the periodic table. The first 94 elements, hydrogen through plutonium, all occur naturally, though some are found only in trace amounts and a few were discovered in nature only after having first been synthesized. Elements 95 to 118 have only been synthesized in laboratories, nuclear reactors, or nuclear explosions. The synthesis of elements having higher atomic numbers is currently being pursued: these elements would begin an eighth row, and theoretical work has been done to suggest possible candidates for this extension. Numerous synthetic radioisotopes of naturally occurring elements have also been produced in laboratories. The organization of the periodic table can be used to derive relationships between the various element properties, and also to predict chemical properties and behaviours of undiscovered or newly synthesized elements. Russian chemist Dmitri Mendeleev published the first recognizable periodic table in 1869, developed mainly to illustrate periodic trends of the then-known elements. He also predicted some properties of unidentified elements that were expected to fill gaps within the table. 
Most of his forecasts proved to be correct. Mendeleev's idea has been slowly expanded and refined with the discovery or synthesis of further new elements and the development of new theoretical models to explain chemical behaviour. The modern periodic table now provides a useful framework for analyzing chemical reactions, and continues to be widely used in chemistry, nuclear physics and other sciences. Some discussion remains ongoing regarding the placement and categorisation of specific elements, the future extension and limits of the table, and whether there is an optimal form of the table. Each chemical element has a unique atomic number ("Z") representing the number of protons in its nucleus. Most elements have differing numbers of neutrons among different atoms, with these variants being referred to as isotopes. For example, carbon has three naturally occurring isotopes: all of its atoms have six protons and most have six neutrons as well, but about one per cent have seven neutrons, and a very small fraction have eight neutrons. Isotopes are never separated in the periodic table; they are always grouped together under a single element. Elements with no stable isotopes have the atomic masses of their most stable isotopes, where such masses are shown, listed in parentheses. In the standard periodic table, the elements are listed in order of increasing atomic number "Z". A new row ("period") is started when a new electron shell has its first electron. Columns ("groups") are determined by the electron configuration of the atom; elements with the same number of electrons in a particular subshell fall into the same columns (e.g. oxygen and selenium are in the same column because they both have four electrons in the outermost p-subshell). 
Elements with similar chemical properties generally fall into the same group in the periodic table, although in the f-block, and to some extent in the d-block, the elements in the same period tend to have similar properties, as well. Thus, it is relatively easy to predict the chemical properties of an element if one knows the properties of the elements around it. Since 2016, the periodic table has 118 confirmed elements, from element 1 (hydrogen) to 118 (oganesson). Elements 113, 115, 117 and 118, the most recent discoveries, were officially confirmed by the International Union of Pure and Applied Chemistry (IUPAC) in December 2015. Their proposed names, nihonium (Nh), moscovium (Mc), tennessine (Ts) and oganesson (Og) respectively, were made official in November 2016 by IUPAC. The first 94 elements occur naturally; the remaining 24, americium to oganesson (95–118), occur only when synthesized in laboratories. Of the 94 naturally occurring elements, 83 are primordial and 11 occur only in decay chains of primordial elements. No element heavier than einsteinium (element 99) has ever been observed in macroscopic quantities in its pure form, nor has astatine (element 85); francium (element 87) has only been photographed in the form of light emitted from microscopic quantities (300,000 atoms). A "group" or "family" is a vertical column in the periodic table. Groups usually have more significant periodic trends than periods and blocks, explained below. Modern quantum mechanical theories of atomic structure explain group trends by proposing that elements within the same group generally have the same electron configurations in their valence shell. Consequently, elements in the same group tend to have a shared chemistry and exhibit a clear trend in properties with increasing atomic number. In some parts of the periodic table, such as the d-block and the f-block, horizontal similarities can be as important as, or more pronounced than, vertical similarities. 
Under an international naming convention, the groups are numbered from 1 to 18 from the leftmost column (the alkali metals) to the rightmost column (the noble gases). Previously, they were known by Roman numerals. In America, the Roman numerals were followed by either an "A" if the group was in the s- or p-block, or a "B" if the group was in the d-block. The Roman numerals used correspond to the last digit of today's naming convention (e.g. the group 4 elements were group IVB, and the group 14 elements were group IVA). In Europe, the lettering was similar, except that "A" was used if the group was before group 10, and "B" was used for groups including and after group 10. In addition, groups 8, 9 and 10 used to be treated as one triple-sized group, known collectively in both notations as group VIII. In 1988, the new IUPAC naming system was put into use, and the old group names were deprecated. Some of these groups have been given trivial (unsystematic) names, as seen in the table below, although some are rarely used. Groups 3–10 have no trivial names and are referred to simply by their group numbers or by the name of the first member of their group (such as "the scandium group" for group 3), since they display fewer similarities and/or vertical trends. Elements in the same group tend to show patterns in atomic radius, ionization energy, and electronegativity. From top to bottom in a group, the atomic radii of the elements increase. Since there are more filled energy levels, valence electrons are found farther from the nucleus. From the top, each successive element has a lower ionization energy because it is easier to remove an electron since the atoms are less tightly bound. Similarly, a group has a top-to-bottom decrease in electronegativity due to an increasing distance between valence electrons and the nucleus. There are exceptions to these trends: for example, in group 11, electronegativity increases farther down the group. 
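The old American labelling rules described above can be captured in a small function. This is a sketch of the convention as stated in the text (Roman numeral from the last digit of the IUPAC number, "A" for s/p-block, "B" for d-block, with groups 8–10 as the single group VIII), not a historical authority:

```python
ROMAN = {1: "I", 2: "II", 3: "III", 4: "IV",
         5: "V", 6: "VI", 7: "VII", 8: "VIII"}

def old_american_label(group):
    """Old American group label for a modern IUPAC group number (1-18)."""
    if group in (8, 9, 10):      # treated as one triple-sized group
        return "VIII"
    # s-block (groups 1-2) and p-block (groups 13-18) took "A";
    # the remaining d-block groups took "B"
    letter = "A" if group <= 2 or group >= 13 else "B"
    return ROMAN[group % 10] + letter  # Roman numeral = last digit

print(old_american_label(4))   # IVB (titanium group)
print(old_american_label(14))  # IVA (carbon group)
```

The European scheme differed only in where the A/B split fell (before versus from group 10), so a variant of the same function with a different `letter` rule would cover it.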
A "period" is a horizontal row in the periodic table. Although groups generally have more significant periodic trends, there are regions where horizontal trends are more significant than vertical group trends, such as the f-block, where the lanthanides and actinides form two substantial horizontal series of elements. Elements in the same period show trends in atomic radius, ionization energy, electron affinity, and electronegativity. Moving left to right across a period, atomic radius usually decreases. This occurs because each successive element has an added proton and electron, which causes the electron to be drawn closer to the nucleus. This decrease in atomic radius also causes the ionization energy to increase when moving from left to right across a period. The more tightly bound an element is, the more energy is required to remove an electron. Electronegativity increases in the same manner as ionization energy because of the pull exerted on the electrons by the nucleus. Electron affinity also shows a slight trend across a period. Metals (left side of a period) generally have a lower electron affinity than nonmetals (right side of a period), with the exception of the noble gases. Specific regions of the periodic table can be referred to as "blocks" in recognition of the sequence in which the electron shells of the elements are filled. Elements are assigned to blocks by what orbitals their valence electrons or vacancies lie in. The s-block comprises the first two groups (alkali metals and alkaline earth metals) as well as hydrogen and helium. The p-block comprises the last six groups, which are groups 13 to 18 in IUPAC group numbering (3A to 8A in American group numbering) and contains, among other elements, all of the metalloids. The d-block comprises groups 3 to 12 (or 3B to 2B in American group numbering) and contains all of the transition metals. 
The f-block, often offset below the rest of the periodic table, has no group numbers and comprises most of the lanthanides and actinides. A hypothetical g-block is expected to begin around element 121, a few elements away from what is currently known. According to their shared physical and chemical properties, the elements can be classified into the major categories of metals, metalloids and nonmetals. Metals are generally shiny, highly conducting solids that form alloys with one another and salt-like ionic compounds with nonmetals (other than noble gases). A majority of nonmetals are coloured or colourless insulating gases; nonmetals that form compounds with other nonmetals feature covalent bonding. In between metals and nonmetals are metalloids, which have intermediate or mixed properties. Metals and nonmetals can be further classified into subcategories that show a gradation from metallic to non-metallic properties, when going left to right in the rows. The metals may be subdivided into the highly reactive alkali metals, through the less reactive alkaline earth metals, lanthanides and actinides, via the archetypal transition metals, and ending in the physically and chemically weak post-transition metals. Nonmetals may be simply subdivided into the polyatomic nonmetals, which are nearer to the metalloids and show some incipient metallic character; the essentially nonmetallic diatomic nonmetals; and the almost completely inert, monatomic noble gases. Specialized groupings such as the refractory metals and the noble metals, which are subsets of the transition metals, are also known and occasionally denoted. Placing elements into categories and subcategories based just on shared properties is imperfect. There is a large disparity of properties within each category with notable overlaps at the boundaries, as is the case with most classification schemes. 
Beryllium, for example, is classified as an alkaline earth metal although its amphoteric chemistry and tendency to mostly form covalent compounds are both attributes of a chemically weak or post-transition metal. Radon is classified as a nonmetallic noble gas yet has some cationic chemistry that is characteristic of metals. Other classification schemes are possible such as the division of the elements into mineralogical occurrence categories, or crystalline structures. Categorizing the elements in this fashion dates back to at least 1869 when Hinrichs wrote that simple boundary lines could be placed on the periodic table to show elements having shared properties, such as metals, nonmetals, or gaseous elements. The electron configuration or organisation of electrons orbiting neutral atoms shows a recurring pattern or periodicity. The electrons occupy a series of electron shells (numbered 1, 2, and so on). Each shell consists of one or more subshells (named s, p, d, f and g). As atomic number increases, electrons progressively fill these shells and subshells more or less according to the Madelung rule or energy ordering rule, as shown in the diagram. The electron configuration for neon, for example, is 1s2 2s2 2p6. With an atomic number of ten, neon has two electrons in the first shell, and eight electrons in the second shell; there are two electrons in the s subshell and six in the p subshell. In periodic table terms, the first time an electron occupies a new shell corresponds to the start of each new period, these positions being occupied by hydrogen and the alkali metals. Since the properties of an element are mostly determined by its electron configuration, the properties of the elements likewise show recurring patterns or periodic behaviour, some examples of which are shown in the diagrams below for atomic radii, ionization energy and electron affinity. 
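The filling order described by the Madelung rule can be sketched in a few lines of code. This is an illustration of the idealized rule only; real elements such as chromium and copper deviate from it:

```python
def madelung_order(max_n=7):
    """Subshells (n, l) sorted by the Madelung rule: increasing n + l,
    with ties broken by lower n (so 2p fills before 3s, 4s before 3d)."""
    subshells = [(n, l) for n in range(1, max_n + 1) for l in range(n)]
    return sorted(subshells, key=lambda nl: (nl[0] + nl[1], nl[0]))

def electron_configuration(z):
    """Idealized ground-state configuration for atomic number z.
    Each subshell holds 2(2l + 1) electrons; Aufbau exceptions like
    Cr and Cu are deliberately not handled."""
    parts, labels = [], "spdfghi"
    for n, l in madelung_order():
        if z == 0:
            break
        e = min(z, 2 * (2 * l + 1))
        parts.append(f"{n}{labels[l]}{e}")
        z -= e
    return " ".join(parts)

print(electron_configuration(10))  # neon: 1s2 2s2 2p6
```

Reproducing the neon example from the text (1s2 2s2 2p6) is a quick check that the tie-breaking in `madelung_order` is right; the same function also shows why each new period begins with an s1 element such as potassium (4s1).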
It is this periodicity of properties, manifestations of which were noticed well before the underlying theory was developed, that led to the establishment of the periodic law (the properties of the elements recur at varying intervals) and the formulation of the first periodic tables. The periodic law may then be successively clarified as: depending on atomic weight; depending on atomic number; and depending on the total number of s, p, d, and f electrons in each atom. The cycles last 2, 6, 10, and 14 elements respectively. There is additionally an internal "double periodicity" that splits the shells in half; this arises because the first half of the electrons going into a particular type of subshell fill unoccupied orbitals, but the second half have to fill already occupied orbitals, following Hund's rule of maximum multiplicity. The second half thus suffer additional repulsion that causes the trend to split between first-half and second-half elements; this is for example evident when observing the ionisation energies of the 2p elements, in which the triads B-C-N and O-F-Ne show increases, but oxygen actually has a first ionisation energy slightly lower than that of nitrogen as it is easier to remove the extra, paired electron. Atomic radii vary in a predictable and explainable manner across the periodic table. For instance, the radii generally decrease along each period of the table, from the alkali metals to the noble gases; and increase down each group. The radius increases sharply between the noble gas at the end of each period and the alkali metal at the beginning of the next period. These trends of the atomic radii (and of various other chemical and physical properties of the elements) can be explained by the electron shell theory of the atom; they provided important evidence for the development and confirmation of quantum theory. 
The electrons in the 4f-subshell, which is progressively filled from lanthanum (element 57) to ytterbium (element 70), are not particularly effective at shielding the increasing nuclear charge from the sub-shells further out. The elements immediately following the lanthanides have atomic radii that are smaller than would be expected and that are almost identical to the atomic radii of the elements immediately above them. Hence lutetium has virtually the same atomic radius (and chemistry) as yttrium, hafnium has virtually the same atomic radius (and chemistry) as zirconium, and tantalum has an atomic radius similar to niobium, and so forth. This is an effect of the lanthanide contraction: a similar actinide contraction also exists. The effect of the lanthanide contraction is noticeable up to platinum (element 78), after which it is masked by a relativistic effect known as the inert pair effect. The d-block contraction, which is a similar effect between the d-block and p-block, is less pronounced than the lanthanide contraction but arises from a similar cause. Such contractions exist throughout the table, but are chemically most relevant for the lanthanides with their almost constant +3 oxidation state. The first ionization energy is the energy it takes to remove one electron from an atom, the second ionization energy is the energy it takes to remove a second electron from the atom, and so on. For a given atom, successive ionization energies increase with the degree of ionization. For magnesium as an example, the first ionization energy is 738 kJ/mol and the second is 1450 kJ/mol. Electrons in the closer orbitals experience greater forces of electrostatic attraction; thus, their removal requires increasingly more energy. Ionization energy becomes greater up and to the right of the periodic table. Large jumps in the successive molar ionization energies occur when removing an electron from a noble gas (complete electron shell) configuration. 
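This closed-shell jump is easy to see numerically using magnesium's first three molar ionization energies (kJ/mol, as given in this article):

```python
# Successive molar ionization energies of magnesium, kJ/mol (from the text):
# removing the two 3s electrons, then breaking into the neon-like core of Mg2+
mg_ie = [738, 1450, 7730]

# Ratio between consecutive ionization energies; the step into the
# closed shell stands out as a much larger jump
ratios = [later / earlier for earlier, later in zip(mg_ie, mg_ie[1:])]
print(ratios)  # second jump is several times the first
```

Scanning successive ionization energies for the largest such ratio is a standard classroom way of counting an element's valence electrons: the jump appears right after the last valence electron is removed.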
For magnesium again, the first two molar ionization energies of magnesium given above correspond to removing the two 3s electrons, and the third ionization energy is a much larger 7730 kJ/mol, for the removal of a 2p electron from the very stable neon-like configuration of Mg2+. Similar jumps occur in the ionization energies of other third-row atoms. Electronegativity is the tendency of an atom to attract a shared pair of electrons. An atom's electronegativity is affected by both its atomic number and the distance between the valence electrons and the nucleus. The higher its electronegativity, the more an element attracts electrons. It was first proposed by Linus Pauling in 1932. In general, electronegativity increases on passing from left to right along a period, and decreases on descending a group. Hence, fluorine is the most electronegative of the elements, while caesium is the least, at least of those elements for which substantial data is available. There are some exceptions to this general rule. Gallium and germanium have higher electronegativities than aluminium and silicon respectively because of the d-block contraction. Elements of the fourth period immediately after the first row of the transition metals have unusually small atomic radii because the 3d-electrons are not effective at shielding the increased nuclear charge, and smaller atomic size correlates with higher electronegativity. The anomalously high electronegativity of lead, particularly when compared to thallium and bismuth, is an artifact of electronegativity varying with oxidation state: its electronegativity conforms better to trends if it is quoted for the +2 state instead of the +4 state. The electron affinity of an atom is the amount of energy released when an electron is added to a neutral atom to form a negative ion. Although electron affinity varies greatly, some patterns emerge. Generally, nonmetals have more positive electron affinity values than metals. 
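The jump at a closed-shell configuration can be seen numerically. This sketch uses the magnesium values quoted above (738, 1450, and 7730 kJ/mol); it simply compares successive ionization energies:

```python
# Successive molar ionization energies of magnesium in kJ/mol, as given
# in the text: IE1 and IE2 remove the two 3s electrons, IE3 breaks into
# the neon-like 2p core of Mg2+.
mg_ie = [738, 1450, 7730]

# Ratios between successive values flag where a closed shell is reached:
# removing the third electron costs over five times the second.
ratios = [later / earlier for earlier, later in zip(mg_ie, mg_ie[1:])]
print(ratios)  # roughly [1.96, 5.33] -- the large jump marks the 2p core
```

A jump of this size between IE2 and IE3 is the signature of magnesium's two valence electrons.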
Chlorine most strongly attracts an extra electron. The electron affinities of the noble gases have not been measured conclusively, so they may or may not have slightly negative values. Electron affinity generally increases across a period. This is caused by the filling of the valence shell of the atom; a group 17 atom releases more energy than a group 1 atom on gaining an electron because it obtains a filled valence shell and is therefore more stable. A trend of decreasing electron affinity going down groups would be expected. The additional electron will be entering an orbital farther away from the nucleus. As such, this electron would be less attracted to the nucleus and would release less energy when added. In going down a group, around one-third of elements are anomalous, with heavier elements having higher electron affinities than their next lighter congeners. Largely, this is due to the poor shielding by d and f electrons. A uniform decrease in electron affinity only applies to group 1 atoms. The lower the values of ionization energy, electronegativity and electron affinity, the more metallic character the element has. Conversely, nonmetallic character increases with higher values of these properties. Given the periodic trends of these three properties, metallic character tends to decrease going across a period (or row) and, with some irregularities (mostly) due to poor screening of the nucleus by d and f electrons, and relativistic effects, tends to increase going down a group (or column or family). Thus, the most metallic elements (such as caesium) are found at the bottom left of traditional periodic tables and the most nonmetallic elements (such as neon) at the top right.
The combination of horizontal and vertical trends in metallic character explains the stair-shaped dividing line between metals and nonmetals found on some periodic tables, and the practice of sometimes categorizing several elements adjacent to that line, or elements adjacent to those elements, as metalloids. With some minor exceptions, oxidation numbers among the elements show four main trends according to their periodic table geographic location: left; middle; right; and south. On the left (groups 1 to 4, not including the f-block elements, and also niobium, tantalum, and probably dubnium in group 5), the highest most stable oxidation number is the group number, with lower oxidation states being less stable. In the middle (groups 3 to 11), higher oxidation states become more stable going down each group. Group 12 is an exception to this trend; they behave as if they were located on the left side of the table. On the right, higher oxidation states tend to become less stable going down a group. The shift between these trends is continuous: for example, group 3 also has lower oxidation states most stable in its lightest member (scandium, with CsScCl3 for example known in the +2 state), and group 12 is predicted to have copernicium more readily showing oxidation states above +2. The lanthanides positioned along the south of the table are distinguished by having the +3 oxidation state in common; this is their most stable state. The early actinides show a pattern of oxidation states somewhat similar to those of their period 6 and 7 transition metal congeners; the later actinides are more similar to the lanthanides, though the last ones (excluding lawrencium) have an increasingly important +2 oxidation state that becomes the most stable state for nobelium. From left to right across the four blocks of the long- or 32-column form of the periodic table are a series of linking or bridging groups of elements, located approximately between each block. 
In general, groups at the peripheries of blocks display similarities to the groups of the neighbouring blocks as well as to the other groups in their own blocks, as expected as most periodic trends are continuous. These groups, like the metalloids, show properties in between, or that are a mixture of, groups to either side. Chemically, the group 3 elements, lanthanides, and heavy group 4 and 5 elements show some behaviour similar to the alkaline earth metals or, more generally, "s" block metals but have some of the physical properties of "d" block transition metals. In fact, the metals all the way up to group 6 are united by being class-A cations ("hard" acids) that form more stable complexes with ligands whose donor atoms are the most electronegative nonmetals nitrogen, oxygen, and fluorine; metals later in the table form a transition to class-B cations ("soft" acids) that form more stable complexes with ligands whose donor atoms are the less electronegative heavier elements of groups 15 through 17. Meanwhile, lutetium behaves chemically as a lanthanide (with which it is often classified) but shows a mix of lanthanide and transition metal physical properties (as does yttrium). Lawrencium, as an analogue of lutetium, would presumably display like characteristics. The coinage metals in group 11 (copper, silver, and gold) are chemically capable of acting as either transition metals or main group metals. The volatile group 12 metals, zinc, cadmium and mercury are sometimes regarded as linking the "d" block to the "p" block. Notionally they are "d" block elements but they have few transition metal properties and are more like their "p" block neighbors in group 13. The relatively inert noble gases, in group 18, bridge the most reactive groups of elements in the periodic table—the halogens in group 17 and the alkali metals in group 1. 
The 1s, 2p, 3d, 4f, and 5g shells are each the first to have their value of ℓ, the azimuthal quantum number that determines a subshell's orbital angular momentum. This gives them some special properties that have been referred to as kainosymmetry (from Greek καινός "new"). Elements filling these orbitals are usually less metallic than their heavier homologues, prefer lower oxidation states, and have smaller atomic and ionic radii. The above contractions may also be considered to be a general incomplete shielding effect in terms of how they impact the properties of the succeeding elements. The 2p, 3d, or 4f shells have no radial nodes and are smaller than expected. They therefore screen the nuclear charge incompletely, so the valence electrons that fill immediately after the completion of such a core subshell are more tightly bound by the nucleus than would be expected. 1s is an exception, providing nearly complete shielding. This is in particular the reason why sodium has a first ionisation energy of 495.8 kJ/mol that is only slightly smaller than that of lithium, 520.2 kJ/mol, and why lithium acts as less electronegative than sodium in simple σ-bonded alkali metal compounds; sodium suffers an incomplete shielding effect from the preceding 2p elements, but lithium essentially does not. Kainosymmetry also explains the specific properties of the 2p, 3d, and 4f elements. The 2p subshell is small and of a similar radial extent as the 2s subshell, which facilitates orbital hybridisation. This does not work as well for the heavier p elements: for example, silicon in silane (SiH4) shows approximate sp2 hybridisation, whereas carbon in methane (CH4) shows an almost ideal sp3 hybridisation. The bonding in these nonorthogonal heavy p element hydrides is weakened; this situation worsens with more electronegative substituents as they magnify the difference in energy between the s and p subshells.
The heavier p elements are often more stable in their higher oxidation states in organometallic compounds than in compounds with electronegative ligands. This follows Bent's rule: s character is concentrated in the bonds to the more electropositive substituents, while p character is concentrated in the bonds to the more electronegative substituents. Furthermore, the 2p elements prefer to participate in multiple bonding (observed in O=O and N≡N) to eliminate Pauli repulsion from the otherwise close s and p lone pairs: their π bonds are stronger and their single bonds weaker. The small size of the 2p shell is also responsible for the extremely high electronegativities of the 2p elements. The 3d elements show the opposite effect; the 3d orbitals are smaller than would be expected, with a radial extent similar to the 3p core shell, which weakens bonding to ligands because they cannot overlap with the ligands' orbitals well enough. These bonds are therefore stretched and therefore weaker compared to the homologous ones of the 4d and 5d elements (the 5d elements show an additional d-expansion due to relativistic effects). This also leads to low-lying excited states, which is probably related to the well-known fact that 3d compounds are often coloured (the light absorbed is visible). This also explains why the 3d contraction has a stronger effect on the following elements than the 4d or 5d ones do. As for the 4f elements, the difficulty that 4f has in being used for chemistry is also related to this, as are the strong incomplete screening effects; the 5g elements may show a similar contraction, but it is likely that relativistic effects will partly counteract this, as they would tend to cause expansion of the 5g shell. Another consequence is the increased metallicity of the following elements in a block after the first kainosymmetric orbital, along with a preference for higher oxidation states. 
This is visible comparing H and He (1s) with Li and Be (2s); N–F (2p) with P–Cl (3p); Fe and Co (3d) with Ru and Rh (4d); and Nd–Dy (4f) with U–Cf (5f). As kainosymmetric orbitals appear in the even rows (except for 1s), this creates an even–odd difference between periods from period 2 onwards: elements in even periods are smaller and have more oxidising higher oxidation states (if they exist), whereas elements in odd periods differ in the opposite direction. In 1789, Antoine Lavoisier published a list of 33 chemical elements, grouping them into gases, metals, nonmetals, and earths. Chemists spent the following century searching for a more precise classification scheme. In 1829, Johann Wolfgang Döbereiner observed that many of the elements could be grouped into triads based on their chemical properties. Lithium, sodium, and potassium, for example, were grouped together in a triad as soft, reactive metals. Döbereiner also observed that, when arranged by atomic weight, the second member of each triad was roughly the average of the first and the third. This became known as the Law of Triads. German chemist Leopold Gmelin worked with this system, and by 1843 he had identified ten triads, three groups of four, and one group of five. Jean-Baptiste Dumas published work in 1857 describing relationships between various groups of metals. Although various chemists were able to identify relationships between small groups of elements, they had yet to build one scheme that encompassed them all. In 1857, German chemist August Kekulé observed that carbon often has four other atoms bonded to it. Methane, for example, has one carbon atom and four hydrogen atoms. This concept eventually became known as valency, where different elements bond with different numbers of atoms. In 1862, the French geologist Alexandre-Émile Béguyer de Chancourtois published an early form of the periodic table, which he called the telluric helix or screw. 
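Döbereiner's observation that the middle element of a triad has roughly the mean atomic weight of its neighbours can be checked directly. The sketch below uses modern standard atomic weights (not Döbereiner's 1829 values) for three of his classic triads:

```python
# Law of Triads: the middle element's atomic weight is approximately the
# mean of the first and third. Modern standard atomic weights are used here.
triads = {
    ("Li", "Na", "K"):  (6.94, 22.99, 39.10),
    ("Cl", "Br", "I"):  (35.45, 79.90, 126.90),
    ("Ca", "Sr", "Ba"): (40.08, 87.62, 137.33),
}

for names, (first, middle, third) in triads.items():
    mean = (first + third) / 2
    print(f"{names[1]}: observed {middle}, triad mean {mean:.2f}")
```

For lithium-sodium-potassium the mean (23.02) agrees with sodium's weight (22.99) to within a fraction of a percent, which is the regularity Döbereiner noticed.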
He was the first person to notice the periodicity of the elements. With the elements arranged in a spiral on a cylinder by order of increasing atomic weight, de Chancourtois showed that elements with similar properties seemed to occur at regular intervals. His chart included some ions and compounds in addition to elements. His paper also used geological rather than chemical terms and did not include a diagram. As a result, it received little attention until the work of Dmitri Mendeleev. In 1864, Julius Lothar Meyer, a German chemist, published a table with 28 elements. Realizing that an arrangement according to atomic weight did not exactly fit the observed periodicity in chemical properties he gave valency priority over minor differences in atomic weight. A missing element between Si and Sn was predicted with atomic weight 73 and valency 4. Concurrently, English chemist William Odling published an arrangement of 57 elements, ordered on the basis of their atomic weights. With some irregularities and gaps, he noticed what appeared to be a periodicity of atomic weights among the elements and that this accorded with "their usually received groupings". Odling alluded to the idea of a periodic law but did not pursue it. He subsequently proposed (in 1870) a valence-based classification of the elements. English chemist John Newlands produced a series of papers from 1863 to 1866 noting that when the elements were listed in order of increasing atomic weight, similar physical and chemical properties recurred at intervals of eight. He likened such periodicity to the octaves of music. This so-termed Law of Octaves was ridiculed by Newlands' contemporaries, and the Chemical Society refused to publish his work. Newlands was nonetheless able to draft a table of the elements and used it to predict the existence of missing elements, such as germanium. The Chemical Society only acknowledged the significance of his discoveries five years after they credited Mendeleev.
In 1867, Gustavus Hinrichs, a Danish born academic chemist based in America, published a spiral periodic system based on atomic spectra and weights, and chemical similarities. His work was regarded as idiosyncratic, ostentatious and labyrinthine and this may have militated against its recognition and acceptance. Russian chemistry professor Dmitri Mendeleev and German chemist Julius Lothar Meyer independently published their periodic tables in 1869 and 1870, respectively. Mendeleev's table was his first published version. That of Meyer was an expanded version of his (Meyer's) table of 1864. They both constructed their tables by listing the elements in rows or columns in order of atomic weight and starting a new row or column when the characteristics of the elements began to repeat. The recognition and acceptance afforded to Mendeleev's table came from two decisions he made. The first was to leave gaps in the table when it seemed that the corresponding element had not yet been discovered. Mendeleev was not the first chemist to do so, but he was the first to be recognized as using the trends in his periodic table to predict the properties of those missing elements, such as gallium and germanium. The second decision was to occasionally ignore the order suggested by the atomic weights and switch adjacent elements, such as tellurium and iodine, to better classify them into chemical families. Mendeleev published in 1869, using atomic weight to organize the elements, information determinable to fair precision in his time. Atomic weight worked well enough to allow Mendeleev to accurately predict the properties of missing elements. Mendeleev took the unusual step of naming missing elements using the Sanskrit numerals "eka" (1), "dvi" (2), and "tri" (3) to indicate that the element in question was one, two, or three rows removed from a lighter congener.
It has been suggested that Mendeleev, in doing so, was paying homage to ancient Sanskrit grammarians, in particular Pāṇini, who devised a periodic alphabet for the language. Following the discovery of the atomic nucleus by Ernest Rutherford in 1911, it was proposed that the integer count of the nuclear charge is identical to the sequential place of each element in the periodic table. In 1913, English physicist Henry Moseley using X-ray spectroscopy confirmed this proposal experimentally. Moseley determined the value of the nuclear charge of each element and showed that Mendeleev's ordering actually places the elements in sequential order by nuclear charge. Nuclear charge is identical to proton count and determines the value of the atomic number ("Z") of each element. Using atomic number gives a definitive, integer-based sequence for the elements. Moseley predicted, in 1913, that the only elements still missing between aluminium ("Z" = 13) and gold ("Z" = 79) were "Z" = 43, 61, 72, and 75, all of which were later discovered. The atomic number is the absolute definition of an element and gives a factual basis for the ordering of the periodic table. In 1871, Mendeleev published his periodic table in a new form, with groups of similar elements arranged in columns rather than in rows, and those columns numbered I to VIII corresponding with the element's oxidation state. He also gave detailed predictions for the properties of elements he had earlier noted were missing, but should exist. These gaps were subsequently filled as chemists discovered additional naturally occurring elements. It is often stated that the last naturally occurring element to be discovered was francium (referred to by Mendeleev as "eka-caesium") in 1939, but it was technically only the last element to be discovered in nature as opposed to by synthesis. Plutonium, produced synthetically in 1940, was identified in trace quantities as a naturally occurring element in 1971. 
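Mendeleev's tellurium-iodine swap, and why Moseley's atomic-number ordering resolved it, can be shown in a few lines. The sketch below sorts the same pair of elements both ways, using standard atomic numbers and modern atomic weights:

```python
# Sorting by atomic weight puts iodine before tellurium; sorting by atomic
# number (Moseley's criterion, equal to the nuclear charge) restores
# Mendeleev's chemically sensible order without any ad hoc swap.
elements = [
    ("Te", 52, 127.60),  # (symbol, atomic number Z, standard atomic weight)
    ("I",  53, 126.90),
]

by_weight = sorted(elements, key=lambda e: e[2])
by_number = sorted(elements, key=lambda e: e[1])

print([e[0] for e in by_weight])  # ['I', 'Te'] -- weight order, chemically wrong
print([e[0] for e in by_number])  # ['Te', 'I'] -- matches the chemical families
```

The same reversal occurs for argon-potassium and cobalt-nickel, which is why atomic number, not weight, gives the definitive ordering.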
The popular periodic table layout, also known as the common or standard form (as shown at various other points in this article), is attributable to Horace Groves Deming. In 1923, Deming, an American chemist, published short (Mendeleev style) and medium (18-column) form periodic tables. Merck and Company prepared a handout form of Deming's 18-column medium table, in 1928, which was widely circulated in American schools. By the 1930s Deming's table was appearing in handbooks and encyclopedias of chemistry. It was also distributed for many years by the Sargent-Welch Scientific Company. With the development of modern quantum mechanical theories of electron configurations within atoms, it became apparent that each period (row) in the table corresponded to the filling of a quantum shell of electrons. Larger atoms have more electron sub-shells, so later tables have required progressively longer periods. In 1945, Glenn Seaborg, an American scientist, made the suggestion that the actinide elements, like the lanthanides, were filling an f sub-level. Before this time the actinides were thought to be forming a fourth d-block row. Seaborg's colleagues advised him not to publish such a radical suggestion as it would most likely ruin his career. As Seaborg considered he did not then have a career to bring into disrepute, he published anyway. Seaborg's suggestion was found to be correct and he subsequently went on to win the 1951 Nobel Prize in chemistry for his work in synthesizing actinide elements. Although minute quantities of some transuranic elements occur naturally, they were all first discovered in laboratories. Their production has expanded the periodic table significantly, the first of these being neptunium, synthesized in 1939. Because many of the transuranic elements are highly unstable and decay quickly, they are challenging to detect and characterize when produced. 
There have been controversies concerning the acceptance of competing discovery claims for some elements, requiring independent review to determine which party has priority, and hence naming rights. In 2010, a joint Russia–US collaboration at Dubna, Moscow Oblast, Russia, claimed to have synthesized six atoms of tennessine (element 117), making it the most recently claimed discovery. It, along with nihonium (element 113), moscovium (element 115), and oganesson (element 118), are the four most recently named elements, whose names all became official on 28 November 2016. The modern periodic table is sometimes expanded into its long or 32-column form by reinstating the footnoted f-block elements into their natural position between the s- and d-blocks, as proposed by Alfred Werner. Unlike the 18-column form this arrangement results in "no interruptions in the sequence of increasing atomic numbers". The relationship of the f-block to the other blocks of the periodic table also becomes easier to see. William B. Jensen advocates a form of table with 32 columns on the grounds that the lanthanides and actinides are otherwise relegated in the minds of students as dull, unimportant elements that can be quarantined and ignored. Despite these advantages the 32-column form is generally avoided by editors on account of its undue rectangular ratio compared to a book page ratio, and the familiarity of chemists with the modern form, as introduced by Seaborg. Within 100 years of the appearance of Mendeleev's table in 1869, Edward G. Mazurs had collected an estimated 700 different published versions of the periodic table. As well as numerous rectangular variations, other periodic table formats have been shaped, for example, like a circle, cube, cylinder, building, spiral, lemniscate, octagonal prism, pyramid, sphere, or triangle. 
Such alternatives are often developed to highlight or emphasize chemical or physical properties of the elements that are not as apparent in traditional periodic tables. A popular alternative structure is that of Otto Theodor Benfey (1960). The elements are arranged in a continuous spiral, with hydrogen at the centre and the transition metals, lanthanides, and actinides occupying peninsulas. Most periodic tables are two-dimensional; three-dimensional tables are known from as far back as 1862 (pre-dating Mendeleev's two-dimensional table of 1869). More recent examples include Courtines' Periodic Classification (1925), Wringley's Lamina System (1949), Giguère's Periodic helix (1965) and Dufour's Periodic Tree (1996). Going one step further, Stowe's Physicist's Periodic Table (1989) has been described as being four-dimensional (having three spatial dimensions and one colour dimension). The various forms of periodic tables can be thought of as lying on a chemistry–physics continuum. Towards the chemistry end of the continuum can be found, as an example, Rayner-Canham's "unruly" Inorganic Chemist's Periodic Table (2002), which emphasizes trends and patterns, and unusual chemical relationships and properties. Near the physics end of the continuum is Janet's Left-Step Periodic Table (1928). This has a structure that shows a closer connection to the order of electron-shell filling and, by association, quantum mechanics. A somewhat similar approach has been taken by Alper, albeit criticized by Eric Scerri as disregarding the need to display chemical and physical periodicity. Somewhere in the middle of the continuum is the ubiquitous common or standard form of periodic table. This is regarded as better expressing empirical trends in physical state, electrical and thermal conductivity, and oxidation numbers, and other properties easily inferred from traditional techniques of the chemical laboratory.
Its popularity is thought to be a result of this layout having a good balance of features in terms of ease of construction and size, and its depiction of atomic order and periodic trends. Simply following electron configurations, hydrogen (electronic configuration 1s1) and helium (1s2) should be placed in groups 1 and 2, above lithium (1s22s1) and beryllium (1s22s2). While such a placement is common for hydrogen, it is rarely used for helium outside of the context of electron configurations: When the noble gases (then called "inert gases") were first discovered around 1900, they were known as "group 0", reflecting no chemical reactivity of these elements known at that point, and helium was placed on the top of that group, as it did share the extreme chemical inertness seen throughout the group. As the group changed its formal number, many authors continued to assign helium directly above neon, in group 18; one of the examples of such placing is the current IUPAC table. The position of hydrogen in group 1 is reasonably well settled. Its usual oxidation state is +1 as is the case for its heavier alkali metal congeners. Like lithium, it has a significant covalent chemistry. It can stand in for alkali metals in typical alkali metal structures. It is capable of forming alloy-like hydrides, featuring metallic bonding, with some transition metals. Nevertheless, it is sometimes placed elsewhere. A common alternative is at the top of group 17 given hydrogen's strictly univalent and largely non-metallic chemistry, and the strictly univalent and non-metallic chemistry of fluorine (the element otherwise at the top of group 17). Sometimes, to show hydrogen has properties corresponding to both those of the alkali metals and the halogens, it is shown at the top of the two columns simultaneously. 
Another suggestion is above carbon in group 14: placed that way, it fits well into the trends of increasing ionization potential values and electron affinity values, and is not too far from the electronegativity trend, even though hydrogen cannot show the tetravalence characteristic of the heavier group 14 elements. Finally, hydrogen is sometimes placed separately from any group; this is based on its general properties being regarded as sufficiently different from those of the elements in any other group. The other period 1 element, helium, is most often placed in group 18 with the other noble gases, as its extraordinary inertness is extremely close to that of the other light noble gases neon and argon. Nevertheless, it is occasionally placed separately from any group as well. The property that distinguishes helium from the rest of the noble gases is that in its closed electron shell, helium has only two electrons in the outermost electron orbital, while the rest of the noble gases have eight. Some authors, such as Henry Bent (the eponym of Bent's rule), Wojciech Grochala, and Felice Grandinetti, have argued that helium would be correctly placed in group 2, over beryllium; Charles Janet's left-step table also contains this assignment. The normalized ionization potentials and electron affinities show better trends with helium in group 2 than in group 18; helium is expected to be slightly more reactive than neon (which breaks the general trend of reactivity in the noble gases, where the heavier ones are more reactive); predicted helium compounds often lack neon analogues even theoretically, but sometimes have beryllium analogues; and helium over beryllium better follows the trend of first-row anomalies in the table (s » p > d > f). Although scandium and yttrium are always the first two elements in group 3, the identity of the next two elements is not completely settled. They are commonly lanthanum and actinium, and less often lutetium and lawrencium. 
The two variants originate from historical difficulties in placing the lanthanides in the periodic table, and arguments as to where the "f" block elements start and end. The detachment of the lanthanides from the main body of the periodic table has been attributed to the Czech chemist Bohuslav Brauner who, in 1902, allocated all of them ("Ce etc.") to one position in group 4, below zirconium. This arrangement was referred to as the "asteroid hypothesis", in analogy to asteroids occupying a single orbit in the solar system. Before this time the lanthanides were generally (and unsuccessfully) placed throughout groups I to VIII of the older 8-column form of periodic table. Although predecessors of Brauner's 1902 arrangement are recorded from as early as 1895, he is known to have referred to the "chemistry of asteroids" in an 1881 letter to Mendeleev. Other authors assigned all of the lanthanides to either group 3, groups 3 and 4, or groups 2, 3 and 4. In 1922 Niels Bohr continued the detachment process by locating the lanthanides between the s- and d-blocks. In 1949 Glenn T. Seaborg (re)introduced the form of periodic table that is popular today, in which the lanthanides and actinides appear as footnotes. Seaborg first published his table in a classified report dated 1944. It was published again by him in 1945 in "Chemical and Engineering News," and in the years up to 1949 several authors commented on, and generally agreed with, Seaborg's proposal. In that year he noted that the best method for presenting the actinides seemed to be by positioning them below, and as analogues of, the lanthanides. It has been claimed that such arguments are proof that, "it is a mistake to break the [periodic] system into sharply delimited blocks". A third common variant shows the two positions below yttrium as being occupied by the lanthanides and the actinides. A fourth variant shows group 3 bifurcating after Sc-Y, into an La-Ac branch, and an Lu-Lr branch. 
The placement of lanthanum and yttrium in different groups, and with a then unknown heavier homologue of yttrium coming two spaces before tantalum, existed even before the discovery of lutetium. Two such tables are Henry Bassett's table of 1892 and Alfred Werner's of 1905, both of which also place the then-known actinides as heavier homologues of the lanthanides, although neither completely matches the modern form (among other things, both consider beryllium and magnesium to belong to the same group as zinc). Since 1921, many chemical and physical arguments have been made in support of lutetium and lawrencium but the majority of authors seem either unconvinced by them or unaware of them. Most working chemists are not aware there is any controversy. In December 2015 an IUPAC project was established to make a recommendation on the matter, considering only the first two alternatives as possibilities. Lanthanum and actinium are commonly depicted as the remaining group 3 members. It has been suggested that this layout originated in the 1940s, with the appearance of periodic tables relying on the ground-state gas-phase electron configurations of the elements and the notion of the differentiating electron. The ground-state configurations of caesium, barium and lanthanum are [Xe]6s1, [Xe]6s2 and [Xe]5d16s2. Lanthanum thus emerges with a 5d differentiating electron and on these grounds it was considered to be "in group 3 as the first member of the d-block for period 6". However, many elements do not have a well-defined single differentiating electron from the previous element when considering ground-state gas-phase electron configurations; for example, the ground-state configuration of vanadium is [Ar]3d34s2, and that of chromium is [Ar]3d54s1, in which two d electrons are added and one s electron is removed. 
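The ground-state configurations quoted here mostly follow the idealized Madelung (n + ℓ) filling order; a short sketch that fills subshells in that order reproduces, for example, caesium's [Xe]6s1, while the exceptions discussed above (chromium's [Ar]3d54s1, lanthanum's [Xe]5d16s2) deviate from it and would have to be patched in by hand. This is an illustrative approximation, not a source of real configurations:

```python
# Build an idealized electron configuration by filling subshells in Madelung
# order: increasing n + l, ties broken by smaller n. Real atoms such as
# chromium, copper, and lanthanum deviate from this simple rule.
def madelung_config(z: int) -> str:
    subshells = sorted(
        ((n, l) for n in range(1, 8) for l in range(min(n, 4))),
        key=lambda nl: (nl[0] + nl[1], nl[0]),
    )
    parts, remaining = [], z
    for n, l in subshells:
        if remaining <= 0:
            break
        fill = min(remaining, 2 * (2 * l + 1))  # subshell capacity 2(2l+1)
        parts.append(f"{n}{'spdfg'[l]}{fill}")
        remaining -= fill
    return " ".join(parts)

print(madelung_config(55))  # caesium: ends in 6s1, matching [Xe]6s1
```

Running the same function for Z = 57 would predict a 4f electron for lanthanum, whereas the measured configuration is [Xe]5d16s2; exactly this kind of discrepancy underlies the group 3 debate described above.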
A superficially consistent set of electron configurations is then seen in group 3: scandium [Ar]3d14s2, yttrium [Kr]4d15s2, lanthanum [Xe]5d16s2, and actinium [Rn]6d17s2. Still in period 6, ytterbium was assigned an electron configuration of [Xe]4f135d16s2 and lutetium [Xe]4f145d16s2, "resulting in a 4f differentiating electron for lutetium and firmly establishing it as the last member of the f-block for period 6". Later spectroscopic work found that the electron configuration of ytterbium was in fact [Xe]4f146s2. This meant that ytterbium and lutetium—the latter with [Xe]4f145d16s2—both had 14 f-electrons, "resulting in a d- rather than an f- differentiating electron" for lutetium and making it an "equally valid candidate" with [Xe]5d16s2 lanthanum, for the group 3 periodic table position below yttrium. Lanthanum has the advantage of incumbency since the 5d1 electron appears for the first time in its structure whereas it appears for the third time in lutetium, having also made a brief second appearance in gadolinium. The same may nevertheless be said of thorium which has incumbency over rutherfordium for the 6d2 position, even though rutherfordium is universally placed there today. This form necessitates a split d-block if expanded to a 32-column periodic table. Some authors have defended it as possibly being the correct placement on the grounds of the ground-state gas-phase configurations of lanthanum and actinium, but Eric Scerri considers it to be an "ad hoc" move that for justification requires an independent argument, that "is especially not available to authors ... who maintain that the d-block perfectly reflects the filling of five d orbitals by ten outer electrons. Why should there be a break only between the first and second of these electron-filling processes?" 
The form with lanthanum under yttrium has been defended on the grounds that lanthanum and actinium in their ground-state configurations (respectively [Xe]5d16s2 and [Rn]6d17s2) have no electrons in f subshells and therefore should not be placed in the f-block. However, this creates an inconsistency in the treatment of thorium, which has no f-electrons in its ground state (being [Rn]6d27s2), just as actinium is [Rn]6d17s2; yet it places thorium in the f-block but not actinium. Considering only ground-state gas-phase configurations, thorium [Rn]6d27s2 by itself is just as good a homologue to zirconium [Kr]4d25s2 as lanthanum [Xe]5d16s2 is to scandium [Ar]3d14s2; yet thorium is invariably placed in the f-block, not in group 4 with zirconium. Thorium thus demonstrates that the possession of an f electron in the ground-state gas-phase configuration of an element is not necessary for it to belong to the f-block. Lanthanum and actinium in a Sc-Y-Lu table do form the only paired anomaly where both elements in a group have no outer electrons in their ground-state gas-phase configurations that match their block. However, the same is true for lutetium and lawrencium in a Sc-Y-La table, neither of which is known in states beyond +3 and for which the f orbitals are definitely core orbitals. Ground-state gas-phase configurations consider only isolated atoms as opposed to bonding atoms in compounds (the latter being more relevant for chemistry), which often show different configurations. Moreover, the lowest levels of two different configurations are often separated by only very small energies that are minuscule compared to the spreading of "J"-levels of each configuration (e.g. terbium, where the 285 cm−1 difference between [Xe]4f85d16s2 and the ground state [Xe]4f96s2 is much less than 1% of this spreading), making which configuration happens to be the ground state chemically quite irrelevant. 
It is the dominant electron configuration of atoms in chemical environments, and not free gaseous atoms in a vacuum, that can rationalise qualitative chemical behaviour. Gas-phase ground state electron configurations are only important for a few specialised topics, such as atom–molecular gas-phase reactions. In terms of chemical behaviour, scandium, yttrium, lanthanum and actinium are similar to their group 1–2 counterparts, although so are lutetium, lawrencium, and the period 5 through 7 elements in groups 4 and 5. Trends going down group 3 (if Sc-Y-La is chosen) for properties such as melting point, electronegativity and ionic radius match those in the s-block (groups 1 and 2), but are at variance with the other groups in the early d-block. In this variant, the number of "f" electrons in the most common (trivalent) ions of the f-block elements consistently matches their position in the f-block. For example, the f-electron counts for the trivalent ions of the first three f-block elements are Ce 1, Pr 2 and Nd 3. However, outside the lanthanides there does not exist a typical oxidation state across any period of a block, and the reason for this singular behaviour of the lanthanides in fact has very little to do with the electron configurations of the elements concerned, which on the face of it would seem to predict a preferred +2 oxidation state as they are mostly [Xe]4fn6s2 (lanthanides) or [Rn]5fn7s2 (actinides). Similarity of chemistry is, in addition, not the only factor that needs to be considered for periodic table placement. Tungsten and uranium chemically resemble each other (and were placed in the same group before Seaborg's clarification of the actinides), in a manner that is not worse than the resemblances between tin and lead, or between antimony and bismuth, both of which are universally considered to belong in the same group. 
Moreover, the resemblance between aluminium and scandium, which are placed in different groups, is actually stronger in some ways than that between aluminium and gallium, which are in the same group. The same is true of the relationship of beryllium and magnesium to zinc, which is in some ways stronger than their relationship to calcium. In other tables, lutetium and lawrencium are the remaining group 3 members. Early techniques for chemically separating scandium, yttrium and lutetium relied on the fact that these elements occurred together in the so-called "yttrium group" whereas La and Ac occurred together in the "cerium group". Accordingly, lutetium rather than lanthanum was assigned to group 3 by some chemists in the 1920s and 30s. The phenomenon of different separation groups is caused by increasing basicity with increasing radius, and does not constitute a fundamental reason to show Lu, rather than La, below Y. Thus, among the Group 2 alkaline earth metals, Mg (less basic) belongs in the "soluble group" and Ca, Sr and Ba (more basic) occur in the "ammonium carbonate group". Nevertheless, Mg, Ca, Sr and Ba are routinely collocated in Group 2 of the periodic table. Several physicists in the 1950s and '60s favoured lutetium, in light of a comparison of several of its physical properties with those of lanthanum; among the prominent adherents of this form were Lev Landau and Evgeny Lifshitz, who argued for it in their "Course of Theoretical Physics" (1958). Although lanthanum has no f-electrons in its ground-state gas-phase configuration, there is strong evidence that its f orbitals are in fact actively participating in its chemistry and influencing its physical and chemical properties. The binding energies of the 4f levels of excited states of lanthanum that contain a 4f electron clearly show that lanthanum's 4f orbitals are not hydrogenic. 
In other words, in hydrogen through barium, the 4f orbitals are far enough from the nucleus that when analysing them, one can approximate the core and remaining electrons as a point charge; starting from lanthanum, this ceases to be the case, with lanthanum showing 4f levels more similar to those of the following rare earths. These low-lying empty f orbitals, which lutetium lacks, contribute measurably to the bonding in some lanthanum compounds, for example in lanthanum(III) fluoride (LaF3). While this contribution is small, it is greater for lanthanum than for any other lanthanide, considering for each the analogous LnF3 compound; meanwhile, the Lu–F 4f–2p bond order in LuF3 is less than the analogous one of IrF3, with iridium well into the 5d block. And while the trivalent lanthanides Pr3+ through Yb3+ show characteristic narrow bands with positions almost completely independent of the ligands, the following 5d elements (along with the 3d and 4d elements) behave significantly differently; while both types of elements show electron-transfer bands, ligand field theory becomes important for these d elements. The degree of involvement of 4f in lanthanum is similar to that of 5f in thorium, which is universally placed in the f-block. Lanthanum has the dhcp crystal structure as the most stable one at standard conditions, and actinium is fcc; whereas scandium, yttrium, lutetium, and lawrencium (the last predicted) show the hcp crystal structure. The dhcp crystal structure is only known for some rare earths and actinides in the f-block and is unknown elsewhere on the periodic table. 
This constitutes an anomaly in the otherwise completely regular variation of the crystal structures of the nonmagnetic transition metals with their valencies (except for the late 6d metals, which should be anomalous due to strong relativistic effects for those superheavy elements), and is a sign of 4f band involvement for lanthanum, because lanthanum without 4f involvement would be expected to be hcp like scandium, yttrium, and lutetium. Instead, the pressure-temperature phase diagram of lanthanum is isomorphic to those of the uncontroversial 4f metals praseodymium and neodymium. Similarly, thorium (which as noted above has a similar level of f involvement as lanthanum) is fcc, rather than hcp like the group 4 metals, because of 5f band involvement. Karl Gschneidner, analysing the melting points of the lanthanides in a 1971 article, reached the conclusion that it was likely that 4f, 5d, 6s, and 6p electrons were all involved in the bonding of lanthanide metals except for lutetium, where 4f electrons were not found to be involved. The fact that lanthanum was demonstrated to be a 4f-band metal (with about 0.17 electrons per atom in fcc lanthanum, which is metastable at standard conditions) whereas the 4f shell appears to have no influence on the metallic properties of lutetium, has been used as an argument to place lutetium in group 3 instead of lanthanum. The 4f occupancy in solid lanthanum may explain some of its properties, such as its low melting point (La 920 °C, versus Sc 1541 °C, Y 1526 °C, Lu 1652 °C), low Debye temperature, and anomalously high superconducting transition temperature at all pressures. Indeed, if lanthanum is treated as a d-block element, it constitutes anomalies in the trends of superconducting transition temperatures at a variety of pressures, all of which are removed if lutetium is put in its place. 
Jörg Wittig, considering this problem in 1973, found it likely that this small 4f band involvement in lanthanum "represents the screening charge of a 4f scattering resonance safeguarded deep in the interior of the lanthanum ion core", similarly to cerium: this is in agreement with Gschneidner's model. The difficulty in observing this would then be due to the strong d resonance that this 4f virtual bound state also has. This is confirmed by the alloy LaAl2, whose 16% lower Debye temperature and higher electronic specific heat coefficients compared to LuAl2 reflect "directly the additional 4f density of states at the Fermi surface". Scandium, yttrium, and lutetium show a more consistent set of electron configurations matching the global trend on the periodic table: the 5d metals then all add a closed 4f14 shell. For example, the shift from yttrium [Kr]4d15s2 to lutetium [Xe]4f145d16s2 exactly parallels that from zirconium [Kr]4d25s2 to hafnium [Xe]4f145d26s2. The inclusion of lutetium rather than lanthanum also homogenises the 5d transition series: trends in atomic size, coordination number, and relative abundance of metal–oxygen bonds all reveal that lutetium is closer than lanthanum to the behaviour of the uncontroversial 5d metals hafnium through mercury. The same is true considering conduction band structures of the elements: lutetium has a transition-metal-like conduction band structure, but lanthanum does not. Yttrium and lutetium metals have similar d-band occupancies of about 1.5 d electrons per atom; lanthanum instead has about 2.5. As for lawrencium, its gas phase ground-state atomic electron configuration was confirmed in 2015 as [Rn]5f147s27p1. Such a configuration represents another periodic table anomaly, regardless of whether lawrencium is located in the f-block or the d-block, as the only potentially applicable p-block position has been reserved for nihonium with its predicted configuration of [Rn]5f146d107s27p1. 
However, it is expected that in the condensed phase and in chemical environments lawrencium has the expected 6d occupancy, and simple modelling studies suggest it will behave like a lanthanide, in particular being a homologue of lutetium. Lawrencium's return to +3 as the only stable oxidation state and being predicted to form a trivalent metal is distinct from the behaviour of the other late actinides fermium, mendelevium, and nobelium, which have a tendency towards forming lower oxidation states and form (or are predicted to form) divalent metals; it also makes an exception to the actinide contraction generally being larger than the analogous lanthanide contraction at the end of both series. The steadily increasing stability of the +2 state along the actinide series going to nobelium is similar to that along the 3d series going to zinc. Meanwhile, actinium has a band structure with itinerant 5f electrons, that is similar to those of lanthanum and praseodymium; the 5f bands are in the same region as and hybridise strongly with the 6d and 7s bands, with the width of the 5f band increasing with pressure. While scandium, yttrium and lutetium (and lawrencium, so far as its chemistry is known) do often behave like trivalent versions of the group 1–2 metals, being hard class-A cations mostly restricted to the group oxidation state, they are not the only elements in the d-block or f-block that do so. The early transition metals zirconium and hafnium (probably also rutherfordium) in group 4, as well as niobium and tantalum (probably also dubnium) in group 5, also display such behaviour. (The heavy group 4 elements and thorium are tetravalent; the heavy group 5 elements are pentavalent.) The actinide thorium also displays such behaviour, being almost always tetravalent. 
Zirconium, hafnium, and probably rutherfordium furthermore show some aqueous cationic chemistry in the group oxidation state as the group 1 through 3 metals do, although it is restricted to more acidic solutions as expected for these more highly charged cations (similar to the actinide +4 cations). Furthermore, the organometallic chemistry of group 3 is dominated by cyclopentadiene compounds and their methyl-substituted derivatives, which is similar to that of the lanthanides but also that of group 4, as expected from the limited backbonding available for these early d elements. Therefore, the group 3 elements act chemically as normal early d elements, since transition properties involving the ready formation of lower oxidation states and paramagnetic compounds come in slowly as more d electrons are added. The physical properties of the group 3 elements are also affected by the presence of a d electron, which forms more localised bonds within the metals than the p electrons in the similar group 13 metals; exactly the same situation is found comparing group 4 to group 14, showing that group 3 physically acts like a normal d-block group. Trends going down group 3 (if Sc-Y-Lu is chosen) for properties such as melting point, electronegativity and ionic radius are similar to those found among their group 4–8 counterparts in the same block, as noted by William B. Jensen in an often-cited 1982 article in which he argued for this placement. In this variant, the number of "f" electrons in the gaseous forms of the f-block atoms usually matches their position in the f-block. For example, the f-electron counts for the first five f-block elements are La 0, Ce 1, Pr 3, Nd 4 and Pm 5. A few authors position all thirty lanthanides and actinides in the two positions below yttrium (usually via footnote markers). 
This variant, which is stated in the 2005 "Red Book" to be the IUPAC-agreed version as of 2005 (a number of later versions exist, and the last update is from 1 December 2018), emphasizes similarities in the chemistry of the 15 lanthanide elements (La–Lu), possibly at the expense of ambiguity as to which elements occupy the two group 3 positions below yttrium, and of a 15-column-wide "f" block (there can only be 14 elements in any row of the "f" block). However, this similarity does not extend to the 15 actinide elements (Ac–Lr), which show a much wider variety in their chemistries. This form moreover reduces the f-block to a degenerate branch of group 3 of the d-block; it dates back to the 1920s, when the lanthanides were thought to have their f electrons as core electrons, which is now known to be false. It is also false for the actinides, many of which show stable oxidation states above +3. In this variant, group 3 bifurcates after Sc-Y into a La-Ac branch and a Lu-Lr branch. This arrangement is consistent with the hypothesis that arguments in favour of either Sc-Y-La-Ac or Sc-Y-Lu-Lr based on chemical and physical data are inconclusive. As noted, trends going down Sc-Y-La-Ac match trends in groups 1−2 whereas trends going down Sc-Y-Lu-Lr better match trends in groups 4−10. The bifurcation of group 3 is a throwback to the Mendeleev eight-column form, in which seven of the main groups each have two subgroups. Tables featuring a bifurcated group 3 have been periodically proposed since that time. The definition of a transition metal, as given by IUPAC in the "Gold Book", is an element whose atom has an incomplete d sub-shell, or which can give rise to cations with an incomplete d sub-shell. By this definition all of the elements in groups 3–11 are transition metals. The IUPAC definition therefore excludes group 12, comprising zinc, cadmium and mercury, from the transition metals category. 
However, the 2005 IUPAC nomenclature as codified in the "Red Book" gives both the group 3–11 and group 3–12 definitions of the transition metals as alternatives. Some chemists treat the categories "d-block elements" and "transition metals" interchangeably, thereby including groups 3–12 among the transition metals. In this instance the group 12 elements are treated as a special case of transition metal in which the d electrons are not ordinarily given up for chemical bonding (they can sometimes contribute to the valence bonding orbitals even so, as in zinc fluoride). The 2007 report of mercury(IV) fluoride (HgF4), a compound in which mercury would use its d electrons for bonding, has prompted some commentators to suggest that mercury can be regarded as a transition metal. Other commentators, such as Jensen, have argued that the formation of a compound like HgF4 can occur only under highly abnormal conditions; indeed, its existence is currently disputed. As such, mercury could not be regarded as a transition metal by any reasonable interpretation of the ordinary meaning of the term. Still other chemists further exclude the group 3 elements from the definition of a transition metal. They do so on the basis that the group 3 elements do not form any ions having a partially occupied d shell and do not therefore exhibit properties characteristic of transition metal chemistry. In this case, only groups 4–11 are regarded as transition metals. This categorisation is however not one of the alternatives considered by IUPAC. Though the group 3 elements show few of the characteristic chemical properties of the transition metals, the same is true of the heavy members of groups 4 and 5, which also are mostly restricted to the group oxidation state in their chemistry. Moreover, the group 3 elements show characteristic physical properties of transition metals (on account of the presence in each atom of a single d electron). 
Although all elements up to oganesson have been discovered, of the elements above hassium (element 108), only copernicium (element 112), nihonium (element 113), and flerovium (element 114) have known chemical properties, and conclusive categorisation at present has not been reached. Some of these may behave differently from what would be predicted by extrapolation, due to relativistic effects; for example, copernicium and flerovium have been predicted to possibly exhibit some noble-gas-like properties, even though neither is placed in group 18 with the other noble gases. The current experimental evidence still leaves open the question of whether copernicium and flerovium behave more like metals or noble gases. At the same time, oganesson (element 118) is expected to be a solid semiconductor at standard conditions, despite being in group 18. Currently, the periodic table has seven complete rows, with all spaces filled in with discovered elements. Future elements would have to begin an eighth row. Nevertheless, it is unclear whether new eighth-row elements will continue the pattern of the current periodic table, or require further adaptations or adjustments. Seaborg expected the eighth period to follow the previously established pattern exactly, so that it would include a two-element s-block for elements 119 and 120, a new g-block for the next 18 elements, and 30 additional elements continuing the current f-, d-, and p-blocks, culminating in element 168, the next noble gas. More recently, physicists such as Pekka Pyykkö have theorized that these additional elements do not exactly follow the Madelung rule, which predicts how electron shells are filled and thus affects the appearance of the present periodic table. There are currently several competing theoretical models for the placement of the elements of atomic number less than or equal to 172. 
In all of these it is element 172, rather than element 168, that emerges as the next noble gas after oganesson, although these must be regarded as speculative as no complete calculations have been done beyond element 123. The number of possible elements is not known. A very early suggestion made by Elliot Adams in 1911, and based on the arrangement of elements in each horizontal periodic table row, was that elements of atomic weight greater than circa 256 (which would equate to between elements 99 and 100 in modern-day terms) did not exist. A higher, more recent estimate is that the periodic table may end soon after the island of stability, whose centre is predicted to lie between element 110 and element 126, as the extension of the periodic and nuclide tables is restricted by proton and neutron drip lines as well as decreasing stability towards spontaneous fission. Other predictions of an end to the periodic table include at element 128 by John Emsley, at element 137 by Richard Feynman, at element 146 by Yogendra Gambhir, and at element 155 by Albert Khazan. The Bohr model exhibits difficulty for atoms with atomic number greater than 137, as any element with an atomic number greater than 137 would require 1s electrons to be travelling faster than "c", the speed of light. Hence the non-relativistic Bohr model is inaccurate when applied to such an element. The relativistic Dirac equation has problems for elements with more than 137 protons. For such elements, the wave function of the Dirac ground state is oscillatory rather than bound, and there is no gap between the positive and negative energy spectra, as in the Klein paradox. More accurate calculations taking into account the effects of the finite size of the nucleus indicate that the binding energy first exceeds the limit for elements with more than 173 protons. 
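The Bohr-model breakdown described above can be made concrete: in that model the 1s electron moves at v = Zαc, where α ≈ 1/137 is the fine-structure constant, so the predicted speed formally exceeds c once Z > 137. A minimal sketch (the function name is illustrative):

```python
# Sketch: in the non-relativistic Bohr model, the 1s electron's speed is
# v = Z * alpha * c, so v/c exceeds 1 once Z > 1/alpha (about 137).
ALPHA = 1 / 137.035999  # fine-structure constant (approximate CODATA value)

def bohr_1s_speed_fraction(z):
    """Speed of the 1s electron as a fraction of c in the Bohr model."""
    return z * ALPHA

for z in (1, 92, 137, 138):
    print(z, round(bohr_1s_speed_fraction(z), 4))
# At z = 138 the ratio passes 1: the non-relativistic model breaks down,
# which is why relativistic treatments are needed for such elements.
```

This is only the point at which the non-relativistic model fails; as the text notes, the relativistic Dirac treatment with a finite-size nucleus pushes the critical atomic number to about 173.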
For heavier elements, if the innermost orbital (1s) is not filled, the electric field of the nucleus will pull an electron out of the vacuum, resulting in the spontaneous emission of a positron. This does not happen if the innermost orbital is filled, so that element 173 is not necessarily the end of the periodic table. The many different forms of periodic table have prompted the question of whether there is an optimal or definitive form of periodic table. The answer to this question is thought to depend on whether the chemical periodicity seen to occur among the elements has an underlying truth, effectively hard-wired into the universe, or if any such periodicity is instead the product of subjective human interpretation, contingent upon the circumstances, beliefs and predilections of human observers. An objective basis for chemical periodicity would settle the questions about the location of hydrogen and helium, and the composition of group 3. Such an underlying truth, if it exists, is thought to have not yet been discovered. In its absence, the many different forms of periodic table can be regarded as variations on the theme of chemical periodicity, each of which explores and emphasizes different aspects, properties, perspectives and relationships of and among the elements. In celebration of the periodic table's 150th anniversary, the United Nations declared the year 2019 as the International Year of the Periodic Table, celebrating "one of the most significant achievements in science".
Potassium Potassium is a chemical element with the symbol K (from Neo-Latin "kalium") and atomic number 19. Potassium is a silvery-white metal that is soft enough to be cut with a knife with little force. Potassium metal reacts rapidly with atmospheric oxygen to form flaky white potassium peroxide in only seconds of exposure. It was first isolated from potash, the ashes of plants, from which its name derives. In the periodic table, potassium is one of the alkali metals, all of which have a single valence electron in the outer electron shell that is easily removed to create an ion with a positive charge – a cation, which combines with anions to form salts. Potassium in nature occurs only in ionic salts. Elemental potassium reacts vigorously with water, generating sufficient heat to ignite hydrogen emitted in the reaction, and burning with a lilac-colored flame. It is found dissolved in sea water (which is 0.04% potassium by weight), and occurs in many minerals such as orthoclase, a common constituent of granites and other igneous rocks. Potassium is chemically very similar to sodium, the previous element in group 1 of the periodic table. They have a similar first ionization energy, which allows each atom to give up its sole outer electron. It was suspected in 1702 that they were distinct elements that combine with the same anions to make similar salts, and this was proven in 1807 using electrolysis. Naturally occurring potassium is composed of three isotopes, of which potassium-40 is radioactive. Traces of potassium-40 are found in all potassium, and it is the most common radioisotope in the human body. Potassium ions are vital for the functioning of all living cells. The transfer of potassium ions across nerve cell membranes is necessary for normal nerve transmission; potassium deficiency and excess can each result in numerous signs and symptoms, including an abnormal heart rhythm and various electrocardiographic abnormalities. 
Fresh fruits and vegetables are good dietary sources of potassium. The body responds to the influx of dietary potassium, which raises serum potassium levels, with a shift of potassium from outside to inside cells and an increase in potassium excretion by the kidneys. Most industrial applications of potassium exploit the high solubility in water of potassium compounds, such as potassium soaps. Heavy crop production rapidly depletes the soil of potassium, and this can be remedied with agricultural fertilizers containing potassium, accounting for 95% of global potassium chemical production. The English name for the element "potassium" comes from the word "potash", which refers to an early method of extracting various potassium salts: placing in a "pot" the "ash" of burnt wood or tree leaves, adding water, heating, and evaporating the solution. When Humphry Davy first isolated the pure element using electrolysis in 1807, he named it "potassium", which he derived from the word potash. The symbol "K" stems from "kali", itself from the root word "alkali", which in turn comes from Arabic "al-qalyah" ("plant ashes"). In 1797, the German chemist Martin Klaproth discovered "potash" in the minerals leucite and lepidolite, and realized that "potash" was not a product of plant growth but actually contained a new element, which he proposed to call "kali". In 1807, Humphry Davy produced the element via electrolysis; in 1809, Ludwig Wilhelm Gilbert proposed the name "Kalium" for Davy's "potassium". In 1814, the Swedish chemist Berzelius advocated the name "kalium" for potassium, with the chemical symbol "K". The English- and French-speaking countries adopted Davy and Gay-Lussac/Thénard's name "potassium", while the Germanic countries adopted Gilbert/Klaproth's name "Kalium". The "Gold Book" of the International Union of Pure and Applied Chemistry has designated the official chemical symbol as K. Potassium is the second least dense metal after lithium. 
It is a soft solid with a low melting point, and can be easily cut with a knife. Freshly cut potassium is silvery in appearance, but it begins to tarnish toward gray immediately on exposure to air. In a flame test, potassium and its compounds emit a lilac color with a peak emission wavelength of 766.5 nanometers. Neutral potassium atoms have 19 electrons, one more than the configuration of the noble gas argon. Because of its low first ionization energy of 418.8 kJ/mol, the potassium atom is much more likely to lose the last electron and acquire a positive charge (although negatively charged alkalide ions are not impossible). In contrast, the second ionization energy is very high (3052 kJ/mol). Potassium reacts with oxygen, water, and carbon dioxide components in air. With oxygen it forms potassium peroxide. With water potassium forms potassium hydroxide. The reaction of potassium with water can be violently exothermic, especially since the coproduced hydrogen gas can ignite. Because of this, potassium and the liquid sodium-potassium (NaK) alloy are potent desiccants, although they are no longer used as such. Three oxides of potassium are well studied: potassium oxide (K2O), potassium peroxide (K2O2), and potassium superoxide (KO2). These binary potassium-oxygen compounds react with water to form potassium hydroxide. Potassium hydroxide (KOH) is a strong base. Illustrating its hydrophilic character, as much as 1.21 kg of KOH can dissolve in a single liter of water. Anhydrous KOH is rarely encountered. KOH reacts readily with carbon dioxide to produce potassium carbonate and in principle could be used to remove traces of the gas from air. Like the closely related sodium hydroxide, potassium hydroxide reacts with fats to produce soaps. In general, potassium compounds are ionic and, owing to the high hydration energy of the K+ ion, have excellent water solubility. The main species in water solution are the aquated complexes [K(H2O)n]+, where n = 6 and 7. 
Potassium heptafluorotantalate is an intermediate in the purification of tantalum from the otherwise persistent contaminant niobium. Organopotassium compounds illustrate nonionic compounds of potassium. They feature highly polar covalent K---C bonds. Examples include benzyl potassium. Potassium intercalates into graphite to give a variety of compounds, including KC8. There are 25 known isotopes of potassium, three of which occur naturally: potassium-39 (93.3%), potassium-40 (0.0117%), and potassium-41 (6.7%). Naturally occurring potassium-40 has a half-life of 1.250×109 years. It decays to stable argon-40 by electron capture or positron emission (11.2%) or to stable calcium-40 by beta decay (88.8%). The decay of potassium-40 to argon-40 is the basis of a common method for dating rocks. The conventional K-Ar dating method depends on the assumption that the rocks contained no argon at the time of formation and that all the subsequent radiogenic argon (argon-40) was quantitatively retained. Minerals are dated by measurement of the concentration of potassium and the amount of radiogenic argon-40 that has accumulated. The minerals best suited for dating include biotite, muscovite, metamorphic hornblende, and volcanic feldspar; whole rock samples from volcanic flows and shallow intrusives can also be dated if they are unaltered. Apart from dating, potassium isotopes have been used as tracers in studies of weathering and for nutrient cycling studies because potassium is a macronutrient required for life. Potassium is formed in supernovae by nucleosynthesis from lighter atoms. Potassium is principally created in Type II supernovae via an explosive oxygen-burning process. Potassium-40 is also formed in s-process nucleosynthesis and the neon burning process. Potassium is the 20th most abundant element in the solar system and the 17th most abundant element by weight in the Earth. It makes up about 2.6% of the weight of the earth's crust and is the seventh most abundant element in the crust. 
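The K-Ar method described above can be sketched as a short calculation. Assuming the half-life and argon branching fraction quoted in the text (1.250×109 years and 11.2%), the age follows from t = (1/λ) ln(1 + (Ar/K)/b), where λ is the total decay constant and b the branching fraction; the function name and sample ratio below are illustrative, not measured data.

```python
import math

# Sketch of the conventional K-Ar age equation, using the half-life and
# 40Ar branching fraction quoted in the text. The sample ratio used in the
# example is illustrative, not a real measurement.
HALF_LIFE_YR = 1.25e9
LAMBDA = math.log(2) / HALF_LIFE_YR   # total decay constant of 40K, per year
AR_BRANCH = 0.112                     # fraction of 40K decays yielding 40Ar

def k_ar_age(ar40_per_k40):
    """Age in years from the radiogenic 40Ar / remaining 40K ratio."""
    return math.log(1 + ar40_per_k40 / AR_BRANCH) / LAMBDA

# A rock in which radiogenic 40Ar equals 11.2% of the remaining 40K has been
# accumulating argon for exactly one half-life of 40K:
print(f"{k_ar_age(0.112):.3e} years")
```

The equation assumes, as the text states, that the rock contained no argon at formation and retained all radiogenic argon since; any argon loss would make the computed age too young.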
The potassium concentration in seawater is 0.39 g/L (0.039 wt/v%), about one twenty-seventh the concentration of sodium. Potash is primarily a mixture of potassium salts because plants have little or no sodium content, and the rest of a plant's major mineral content consists of calcium salts of relatively low solubility in water. While potash has been used since ancient times, its composition was not understood. Georg Ernst Stahl obtained experimental evidence that led him to suggest the fundamental difference of sodium and potassium salts in 1702, and Henri Louis Duhamel du Monceau was able to prove this difference in 1736. The exact chemical composition of potassium and sodium compounds, and the status of potassium and sodium as chemical elements, was not known then, and thus Antoine Lavoisier did not include the alkali in his list of chemical elements in 1789. For a long time the only significant applications for potash were the production of glass, bleach, soap and gunpowder as potassium nitrate. Potassium soaps from animal fats and vegetable oils were especially prized because they tend to be more water-soluble and of softer texture, and are therefore known as soft soaps. The discovery by Justus Liebig in 1840 that potassium is a necessary element for plants and that most types of soil lack potassium caused a steep rise in demand for potassium salts. Wood ash from fir trees was initially used as a potassium salt source for fertilizer, but, with the discovery in 1868 of mineral deposits containing potassium chloride near Staßfurt, Germany, the production of potassium-containing fertilizers began at an industrial scale. Other potash deposits were discovered, and by the 1960s Canada became the dominant producer. Potassium metal was first isolated in 1807 by Humphry Davy, who derived it by electrolysis of molten KOH with the newly discovered voltaic pile. Potassium was the first metal that was isolated by electrolysis. 
Later in the same year, Davy reported extraction of the metal sodium from a mineral derivative (caustic soda, NaOH, or lye) rather than a plant salt, by a similar technique, demonstrating that the elements, and thus the salts, are different. Although the production of potassium and sodium metal should have shown that both are elements, it took some time before this view was universally accepted. Because of the sensitivity of potassium to water and air, air-free techniques are normally employed for handling the element. It is unreactive toward nitrogen and saturated hydrocarbons such as mineral oil or kerosene. It readily dissolves in liquid ammonia, up to 480 g per 1000 g of ammonia at 0 °C. Depending on the concentration, the ammonia solutions are blue to yellow, and their electrical conductivity is similar to that of liquid metals. Potassium slowly reacts with ammonia to form potassium amide (KNH2), but this reaction is accelerated by minute amounts of transition metal salts. Because it can reduce the salts to the metal, potassium is often used as the reductant in the preparation of finely divided metals from their salts by the Rieke method. Illustrative is the preparation of magnesium: MgCl2 + 2 K → Mg + 2 KCl. Elemental potassium does not occur in nature because of its high reactivity. It reacts violently with water (see section Precautions below) and also reacts with oxygen. Orthoclase (potassium feldspar) is a common rock-forming mineral. Granite, for example, contains 5% potassium, which is well above the average in the Earth's crust. Sylvite (KCl), carnallite (KMgCl3·6H2O), kainite (MgSO4·KCl·3H2O) and langbeinite (K2Mg2(SO4)3) are the minerals found in large evaporite deposits worldwide. The deposits often show layers starting with the least soluble at the bottom and the most soluble on top. Deposits of niter (potassium nitrate) are formed by decomposition of organic material in contact with atmosphere, mostly in caves; because of the good water solubility of niter the formation of larger deposits requires special environmental conditions. 
Potassium is the eighth or ninth most common element by mass (0.2%) in the human body, so that a 60 kg adult contains a total of about 120 g of potassium. The body has about as much potassium as sulfur and chlorine, and only calcium and phosphorus are more abundant (with the exception of the ubiquitous CHON elements). Potassium ions are present in a wide variety of proteins and enzymes, and potassium levels influence multiple physiological processes. Potassium homeostasis denotes the maintenance of the total body potassium content, plasma potassium level, and the ratio of the intracellular to extracellular potassium concentrations within narrow limits, in the face of pulsatile intake (meals), obligatory renal excretion, and shifts between intracellular and extracellular compartments. Plasma potassium is normally kept at 3.5 to 5.0 millimoles (mmol) [or milliequivalents (mEq)] per liter by multiple mechanisms. Levels outside this range are associated with an increasing rate of death from multiple causes, and some cardiac, kidney, and lung diseases progress more rapidly if serum potassium levels are not maintained within the normal range. An average meal of 40–50 mmol presents the body with more potassium than is present in all plasma (20–25 mmol). However, this surge causes the plasma potassium to rise only 10% at most as a result of prompt and efficient clearance by both renal and extra-renal mechanisms. Hypokalemia, a deficiency of potassium in the plasma, can be fatal if severe. Common causes are increased gastrointestinal loss (vomiting, diarrhea) and increased renal loss (diuresis). Deficiency symptoms include muscle weakness, paralytic ileus, ECG abnormalities, and decreased reflex response; in severe cases, respiratory paralysis, alkalosis, and cardiac arrhythmia may occur. Potassium content in the plasma is tightly controlled by four basic mechanisms, which have various names and classifications. 
The four are 1) a reactive negative-feedback system, 2) a reactive feed-forward system, 3) a predictive or circadian system, and 4) an internal or cell membrane transport system. Collectively, the first three are sometimes termed the "external potassium homeostasis system"; and the first two, the "reactive potassium homeostasis system". Renal handling of potassium is closely connected to sodium handling. Potassium is the major cation (positive ion) inside animal cells [150 mmol/L (4.8 g)], while sodium is the major cation of extracellular fluid [150 mmol/L (3.345 g)]. In the kidneys, about 180 liters of plasma is filtered through the glomeruli and into the renal tubules per day. This filtering involves about 600 g of sodium and 33 g of potassium. Since only 1–10 g of sodium and 1–4 g of potassium are likely to be replaced by diet, renal filtering must efficiently reabsorb the remainder from the plasma. Sodium is reabsorbed to maintain extracellular volume, osmotic pressure, and serum sodium concentration within narrow limits. Potassium is reabsorbed to maintain serum potassium concentration within narrow limits. Sodium pumps in the renal tubules operate to reabsorb sodium. Potassium must be conserved, but because the amount of potassium in the blood plasma is very small and the pool of potassium in the cells is about 30 times as large, the situation is not so critical for potassium. Since potassium is moved passively in counter flow to sodium in response to an apparent (but not actual) Donnan equilibrium, the urine can never sink below the concentration of potassium in serum except sometimes by actively excreting water at the end of the processing. Potassium is excreted twice and reabsorbed three times before the urine reaches the collecting tubules. At that point, urine usually has about the same potassium concentration as plasma. At the end of the processing, potassium is secreted one more time if the serum levels are too high. 
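The meal-surge and filtered-load figures above can be cross-checked with simple mass balances. This sketch back-calculates the plasma volume from the text's own pool figure and assumes a mid-normal plasma potassium of 4.7 mmol/L for the filtration step, so treat the exact numbers as illustrative:

```python
# Plasma potassium mass balance, using the text's figures.
PLASMA_K = 4.5   # mmol/L, mid-normal plasma potassium (text: 3.5-5.0)
POOL = 22.5      # mmol, potassium in all plasma (text: 20-25)
MEAL = 45.0      # mmol, an average meal (text: 40-50)

plasma_volume_l = POOL / PLASMA_K               # implied plasma volume, ~5 L
peak = PLASMA_K * 1.10                          # the observed rise is at most 10%
retained = (peak - PLASMA_K) * plasma_volume_l  # mmol still in plasma at the peak
cleared_fraction = 1 - retained / MEAL          # ~95% must be cleared promptly

# Daily filtered loads implied by 180 L/day of glomerular filtration.
FILTERED_L = 180
NA_MMOL_L = 150      # extracellular sodium, mmol/L (from the text)
K_MMOL_L = 4.7       # assumed mid-normal plasma potassium, mmol/L
NA_G_MOL, K_G_MOL = 22.99, 39.10  # molar masses, g/mol

na_filtered_g = FILTERED_L * NA_MMOL_L * NA_G_MOL / 1000  # ~621 g (text: ~600 g)
k_filtered_g = FILTERED_L * K_MMOL_L * K_G_MOL / 1000     # ~33 g (text: 33 g)
```

The two filtered loads reproduce the text's 600 g of sodium and 33 g of potassium to within rounding, and the mass balance shows why "prompt and efficient clearance" is needed: about 95% of a meal's potassium must leave the plasma almost immediately.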
With no potassium intake, it is excreted at about 200 mg per day until, in about a week, potassium in the serum declines to a mildly deficient level of 3.0–3.5 mmol/L. If potassium is still withheld, the concentration continues to fall until a severe deficiency causes eventual death. The potassium moves passively through pores in the cell membrane. When ions move through ion transporters (pumps), there is a gate in the pumps on both sides of the cell membrane, and only one gate can be open at once. As a result, approximately 100 ions are forced through per second. Ion channels have only one gate, and only one kind of ion can stream through, at 10 million to 100 million ions per second. Calcium is required to open the pores, although calcium may work in reverse by blocking at least one of the pores. Carbonyl groups inside the pore on the amino acids mimic the water hydration that takes place in water solution by the nature of the electrostatic charges on four carbonyl groups inside the pore. The U.S. National Academy of Medicine (NAM), on behalf of both the U.S. and Canada, sets Estimated Average Requirements (EARs) and Recommended Dietary Allowances (RDAs), or Adequate Intakes (AIs) for when there is not sufficient information to set EARs and RDAs. Collectively the EARs, RDAs, AIs and ULs are referred to as Dietary Reference Intakes. For both males and females under 9 years of age, the AIs for potassium are: 400 mg for 0–6-month-old infants, 860 mg for 7–12-month-old infants, 2,000 mg for 1–3-year-old children, and 2,300 mg for 4–8-year-old children. For males 9 years of age and older, the AIs for potassium are: 2,500 mg for 9–13-year-old males, 3,000 mg for 14–18-year-old males, and 3,400 mg for males 19 years of age and older. 
For females 9 years of age and older, the AIs for potassium are: 2,300 mg for 9–18-year-old females, and 2,600 mg for females 19 years of age and older. For pregnant and lactating females, the AIs for potassium are: 2,600 mg for 14–18-year-old pregnant females, 2,900 mg for pregnant females 19 years of age and older, 2,500 mg for 14–18-year-old lactating females, and 2,800 mg for lactating females 19 years of age and older. As for safety, the NAM also sets tolerable upper intake levels (ULs) for vitamins and minerals, but for potassium the evidence was insufficient, so no UL was established. As of 2004, most American adults consumed less than 3,000 mg. Likewise, in the European Union, in particular in Germany and Italy, insufficient potassium intake is somewhat common. The British National Health Service recommends a similar intake, saying that adults need 3,500 mg per day and that excess amounts may cause health problems such as stomach pain and diarrhoea. Previously the Adequate Intake for adults was set at 4,700 mg per day. In 2019, the National Academies of Sciences, Engineering, and Medicine revised the AI for potassium to 2,600 mg/day for females 19 years and older and 3,400 mg/day for males 19 years and older. Potassium is present in all fruits, vegetables, meat and fish. Foods with high potassium concentrations include yam, parsley, dried apricots, milk, chocolate, all nuts (especially almonds and pistachios), potatoes, bamboo shoots, bananas, avocados, coconut water, soybeans, and bran. The USDA lists tomato paste, orange juice, beet greens, white beans, potatoes, plantains, bananas, apricots, and many other dietary sources of potassium, ranked in descending order according to potassium content. A day's worth of potassium is in 5 plantains or 11 bananas. Diets low in potassium can lead to hypertension and hypokalemia. 
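The age- and sex-specific AI values listed above can be collected into a small lookup. This is a sketch that ignores the pregnancy and lactation adjustments; the function name and structure are illustrative, not from any standard library:

```python
def adequate_intake_mg(age_years: float, sex: str) -> int:
    """Adequate Intake for potassium in mg/day, per the NAM values in the text.
    Sex matters only from age 9; pregnancy/lactation adjustments are omitted."""
    if age_years < 0.5:
        return 400     # 0-6-month-old infants
    if age_years < 1:
        return 860     # 7-12-month-old infants
    if age_years <= 3:
        return 2000    # 1-3-year-old children
    if age_years <= 8:
        return 2300    # 4-8-year-old children
    if sex == "male":
        if age_years <= 13:
            return 2500
        if age_years <= 18:
            return 3000
        return 3400    # males 19+
    if age_years <= 18:
        return 2300    # females 9-18
    return 2600        # females 19+
```

For example, adequate_intake_mg(25, "male") returns 3,400 mg/day, matching the 2019 revision mentioned above.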
Supplements of potassium are most widely used in conjunction with diuretics that block reabsorption of sodium and water upstream from the distal tubule (thiazides and loop diuretics), because this promotes increased distal tubular potassium secretion, with resultant increased potassium excretion. A variety of prescription and over-the-counter supplements are available. Potassium chloride may be dissolved in water, but the salty/bitter taste makes liquid supplements unpalatable. Typical doses range from 10 mmol (400 mg) to 20 mmol (800 mg). Potassium is also available in tablets or capsules, which are formulated to allow potassium to leach slowly out of a matrix, since the very high concentrations of potassium ion that occur adjacent to a solid tablet can injure the gastric or intestinal mucosa. For this reason, non-prescription potassium pills are limited by law in the US to a maximum of 99 mg of potassium. Since the kidneys are the site of potassium excretion, individuals with impaired kidney function are at risk for hyperkalemia if dietary potassium and supplements are not restricted. The more severe the impairment, the more severe is the restriction necessary to avoid hyperkalemia. A meta-analysis concluded that a 1,640 mg increase in the daily intake of potassium was associated with a 21% lower risk of stroke. Potassium chloride and potassium bicarbonate may be useful to control mild hypertension. In 2017, potassium was the 37th most commonly prescribed medication in the United States, with more than 19 million prescriptions. Potassium can be detected by taste because it triggers three of the five types of taste sensations, according to concentration. Dilute solutions of potassium ions taste sweet, allowing moderate concentrations in milk and juices, while higher concentrations become increasingly bitter/alkaline, and finally also salty to the taste. 
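The millimole and milligram figures above are linked by the atomic weight of potassium (about 39.1 g/mol); a minimal conversion sketch:

```python
K_MOLAR_MASS = 39.10  # g/mol, standard atomic weight of potassium

def k_mmol_to_mg(mmol: float) -> float:
    """Milligrams of potassium ion in a dose given in millimoles."""
    return mmol * K_MOLAR_MASS

# The "10 mmol (400 mg)" dose in the text: 10 mmol is ~391 mg of K+,
# rounded to 400 mg; the US non-prescription cap of 99 mg is ~2.5 mmol.
cap_mmol = 99 / K_MOLAR_MASS
```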
The combined bitterness and saltiness of high-potassium solutions makes high-dose potassium supplementation by liquid drinks a palatability challenge. Potassium salts such as carnallite, langbeinite, polyhalite, and sylvite form extensive evaporite deposits in ancient lake bottoms and seabeds, making extraction of potassium salts in these environments commercially viable. The principal source of potassium – potash – is mined in Canada, Russia, Belarus, Kazakhstan, Germany, Israel, the United States, Jordan, and other places around the world. The first mined deposits were located near Staßfurt, Germany, but the deposits span from Great Britain over Germany into Poland. They are located in the Zechstein and were deposited in the Middle to Late Permian. The largest deposits ever found lie below the surface of the Canadian province of Saskatchewan. The deposits are located in the Elk Point Group produced in the Middle Devonian. Saskatchewan, where several large mines have operated since the 1960s, pioneered the technique of freezing wet sands (the Blairmore formation) to drive mine shafts through them. The main potash mining company in Saskatchewan until its merger was the Potash Corporation of Saskatchewan, now Nutrien. The water of the Dead Sea is used by Israel and Jordan as a source of potash, while the concentration in normal oceans is too low for commercial production at current prices. Several methods are used to separate potassium salts from sodium and magnesium compounds. The most-used method is fractional precipitation using the solubility differences of the salts. Electrostatic separation of the ground salt mixture is also used in some mines. The resulting sodium and magnesium waste is either stored underground or piled up in slag heaps. Most of the mined potassium mineral ends up as potassium chloride after processing. The mineral industry refers to potassium chloride either as potash, muriate of potash, or simply MOP. 
Pure potassium metal can be isolated by electrolysis of its hydroxide in a process that has changed little since it was first used by Humphry Davy in 1807. Although the electrolysis process was developed and used on an industrial scale in the 1920s, the thermal method of reacting sodium with potassium chloride in a chemical equilibrium reaction became the dominant method in the 1950s. The production of sodium-potassium alloys is accomplished by changing the reaction time and the amount of sodium used in the reaction. The Griesheimer process employing the reaction of potassium fluoride with calcium carbide was also used to produce potassium. Reagent-grade potassium metal cost about $10.00/pound ($22/kg) in 2010 when purchased by the tonne. Lower-purity metal is considerably cheaper. The market is volatile because long-term storage of the metal is difficult. It must be stored in a dry inert gas atmosphere or anhydrous mineral oil to prevent the formation of a surface layer of potassium superoxide, a pressure-sensitive explosive that detonates when scratched. The resulting explosion often starts a fire difficult to extinguish. Potassium is now quantified by ionization techniques, but at one time it was quantified by gravimetric analysis. Reagents used to precipitate potassium salts include sodium tetraphenylborate, hexachloroplatinic acid, and sodium cobaltinitrite, giving respectively potassium tetraphenylborate, potassium hexachloroplatinate, and potassium cobaltinitrite. The reaction with sodium cobaltinitrite is illustrative: 3 KCl + Na3[Co(NO2)6] → K3[Co(NO2)6] + 3 NaCl. The potassium cobaltinitrite is obtained as a yellow solid. Potassium ions are an essential component of plant nutrition and are found in most soil types. They are used as a fertilizer in agriculture, horticulture, and hydroponic culture in the form of chloride (KCl), sulfate (K2SO4), or nitrate (KNO3), representing the 'K' in 'NPK'. 
Agricultural fertilizers consume 95% of global potassium chemical production, and about 90% of this potassium is supplied as KCl. The potassium content of most plants ranges from 0.5% to 2% of the harvested weight of crops, conventionally expressed as amount of K2O. Modern high-yield agriculture depends upon fertilizers to replace the potassium lost at harvest. Most agricultural fertilizers contain potassium chloride, while potassium sulfate is used for chloride-sensitive crops or crops needing higher sulfur content. The sulfate is produced mostly by decomposition of the complex minerals kainite (MgSO4·KCl·3H2O) and langbeinite (K2Mg2(SO4)3). Only a very few fertilizers contain potassium nitrate. In 2005, about 93% of world potassium production was consumed by the fertilizer industry. Furthermore, potassium can play a key role in nutrient cycling by controlling litter composition. Potassium, in the form of potassium chloride, is used as a medication to treat and prevent low blood potassium. Low blood potassium may occur due to vomiting, diarrhea, or certain medications. It is given by slow injection into a vein or by mouth. Potassium sodium tartrate (KNaC4H4O6, Rochelle salt) is the main constituent of baking powder; it is also used in the silvering of mirrors. Potassium bromate (KBrO3) is a strong oxidizer (E924), used to improve dough strength and rise height. Potassium bisulfite (KHSO3) is used as a food preservative, for example in wine and beer-making (but not in meats). It is also used to bleach textiles and straw, and in the tanning of leathers. Major potassium chemicals are potassium hydroxide, potassium carbonate, potassium sulfate, and potassium chloride. Megatons of these compounds are produced annually. Potassium hydroxide is a strong base, which is used in industry to neutralize strong and weak acids, to control pH and to manufacture potassium salts. It is also used to saponify fats and oils, in industrial cleaners, and in hydrolysis reactions, for example of esters. 
Potassium nitrate (KNO3) or saltpeter is obtained from natural sources such as guano and evaporites or manufactured via the Haber process; it is the oxidant in gunpowder (black powder) and an important agricultural fertilizer. Potassium cyanide (KCN) is used industrially to dissolve copper and precious metals, in particular silver and gold, by forming complexes. Its applications include gold mining, electroplating, and electroforming of these metals; it is also used in organic synthesis to make nitriles. Potassium carbonate (K2CO3, or potash) is used in the manufacture of glass, soap, color TV tubes, fluorescent lamps, textile dyes and pigments. Potassium permanganate (KMnO4) is an oxidizing, bleaching and purification substance and is used for production of saccharin. Potassium chlorate (KClO3) is added to matches and explosives. Potassium bromide (KBr) was formerly used as a sedative and in photography. Potassium chromate (K2CrO4) is used in inks, dyes, stains (bright yellowish-red color); in explosives and fireworks; in the tanning of leather, in fly paper and safety matches, but all these uses are due to the chemistry of the chromate ion, rather than the potassium ion. There are thousands of uses of various potassium compounds. One example is potassium superoxide, KO2, an orange solid that acts as a portable source of oxygen and a carbon dioxide absorber. It is widely used in respiration systems in mines, submarines and spacecraft as it takes less volume than the gaseous oxygen. Another example is potassium cobaltinitrite, K3[Co(NO2)6], which is used as an artist's pigment under the name of Aureolin or Cobalt Yellow. The stable isotopes of potassium can be laser cooled and used to probe fundamental and technological problems in quantum physics. The two bosonic isotopes possess convenient Feshbach resonances to enable studies requiring tunable interactions, while 40K is one of only two stable fermions amongst the alkali metals. 
An alloy of sodium and potassium, NaK, is a liquid used as a heat-transfer medium and a desiccant for producing dry and air-free solvents. It can also be used in reactive distillation. The ternary alloy of 12% Na, 47% K and 41% Cs has the lowest melting point (−78 °C) of any metallic compound. Metallic potassium is used in several types of magnetometers. Potassium metal can react violently with water, producing potassium hydroxide (KOH) and hydrogen gas. This reaction is exothermic and releases sufficient heat to ignite the resulting hydrogen in the presence of oxygen. Finely powdered potassium ignites in air at room temperature. The bulk metal ignites in air if heated. Because its density is 0.89 g/cm3, burning potassium floats in water, which exposes it to atmospheric oxygen. Many common fire-extinguishing agents, including water, either are ineffective or make a potassium fire worse. Nitrogen, argon, sodium chloride (table salt), sodium carbonate (soda ash), and silicon dioxide (sand) are effective if they are dry. Some Class D dry powder extinguishers designed for metal fires are also effective. These agents deprive the fire of oxygen and cool the potassium metal. During storage, potassium forms peroxides and superoxides. These peroxides may react violently with organic compounds such as oils. Both peroxides and superoxides may react explosively with metallic potassium. Because potassium reacts with water vapor in the air, it is usually stored under anhydrous mineral oil or kerosene. Unlike lithium and sodium, however, potassium should not be stored under oil for longer than six months, unless in an inert (oxygen-free) atmosphere or under vacuum. After prolonged storage in air, dangerous shock-sensitive peroxides can form on the metal and under the lid of the container, and can detonate upon opening. Ingestion of large amounts of potassium compounds can lead to hyperkalemia, strongly influencing the cardiovascular system. 
Potassium chloride is used in the United States for lethal injection executions.
https://en.wikipedia.org/wiki?curid=23055
Pope The Pope (from the Greek "pappas", 'father'), also known as the Supreme Pontiff ("Pontifex Maximus"), or the Roman pontiff ("Romanus Pontifex"), is the bishop of Rome, chief pastor of the worldwide Catholic Church, and head of state or sovereign of the Vatican City State. Since 1929, the pope has had his official residence in the Apostolic Palace in the Vatican City, a city-state enclaved within Rome, Italy. The current pope is Francis, who was elected on 13 March 2013, succeeding Benedict XVI. While his office is called the Papacy, the jurisdiction of the episcopal see is called the Holy See. It is the Holy See that is the sovereign entity under international law, headquartered in the distinctively independent Vatican City State, established by the Lateran Treaty in 1929 between Italy and the Holy See to ensure its temporal, diplomatic, and spiritual independence. The Holy See is recognized by its adherence at various levels to international organizations and by means of its diplomatic relations and political accords with many independent states. The primacy of the bishop of Rome is largely derived from his role as the apostolic successor to Saint Peter, to whom primacy was conferred by Jesus, who gave him the Keys of Heaven and the powers of "binding and loosing", naming him as the "rock" upon which the church would be built. According to Catholic tradition, the apostolic see of Rome was founded by Saint Peter and Saint Paul in the 1st century. The papacy is one of the most enduring institutions in the world and has had a prominent part in world history. In ancient times the popes helped spread Christianity and intervened to find resolutions in various doctrinal disputes. In the Middle Ages, they played a role of secular importance in Western Europe, often acting as arbitrators between Christian monarchs. 
Currently, in addition to the expansion of the Christian faith and doctrine, the popes are involved in ecumenism and interfaith dialogue, charitable work, and the defense of human rights. In some periods of history, the papacy, which originally had no temporal powers, accrued wide secular powers rivaling those of temporal rulers. However, in recent centuries the temporal authority of the papacy has declined and the office is now almost exclusively focused on religious matters. By contrast, papal claims of spiritual authority have been increasingly firmly expressed over time, culminating in 1870 with the proclamation of the dogma of papal infallibility for rare occasions when the pope speaks "ex cathedra"—literally "from the chair (of Saint Peter)"—to issue a formal definition of faith or morals. Still, the pope is considered one of the world's most powerful people because of his extensive diplomatic, cultural, and spiritual influence on 1.3 billion Catholics and beyond, and because he heads the world's largest non-government provider of education and health care, with a vast network of charities. The word "pope" derives from the Greek "pappas", meaning 'father'. In the early centuries of Christianity, this title was applied, especially in the east, to all bishops and other senior clergy, and later became reserved in the west to the Bishop of Rome, a reservation made official only in the 11th century. The earliest record of the use of this title was in regard to the by then deceased Patriarch of Alexandria, Pope Heraclas of Alexandria (232–248). The earliest recorded use of the title "pope" in English dates to the mid-10th century, when it was used in reference to the 7th-century Roman Pope Vitalian in an Old English translation of Bede's "Historia ecclesiastica gentis Anglorum". 
The Catholic Church teaches that the pastoral office, the office of shepherding the Church, that was held by the apostles, as a group or "college" with Saint Peter as their head, is now held by their successors, the bishops, with the bishop of Rome (the pope) as their head. Thus is derived another title by which the pope is known, that of "Supreme Pontiff". The Catholic Church teaches that Jesus personally appointed Peter as head of the Church, and the Catholic Church's dogmatic constitution "Lumen gentium" makes a clear distinction between apostles and bishops, presenting the latter as the successors of the former, with the pope as successor of Peter, in that he is head of the bishops as Peter was head of the apostles. Some historians argue against the notion that Peter was the first bishop of Rome, noting that the episcopal see in Rome can be traced back no earlier than the 3rd century. The writings of the Church Father Irenaeus, who wrote around AD 180, reflect a belief that Peter "founded and organized" the Church at Rome. Moreover, Irenaeus was not the first to write of Peter's presence in the early Roman Church. Clement of Rome wrote in a letter to the Corinthians, "c." 96, about the persecution of Christians in Rome as the "struggles in our time" and presented to the Corinthians its heroes, "first, the greatest and most just columns", the "good apostles" Peter and Paul. St. Ignatius of Antioch wrote shortly after Clement, and in his letter from the city of Smyrna to the Romans he said he would not command them as Peter and Paul did. Given this and other evidence, such as Emperor Constantine's erection of the "Old St. Peter's Basilica" on the location of St. Peter's tomb, as held and given to him by Rome's Christian community, many scholars agree that Peter was martyred in Rome under Nero, although some scholars argue that he may have been martyred in Palestine. 
First-century Christian communities would have had a group of presbyter-bishops functioning as leaders of their local churches. Gradually, episcopacies were established in metropolitan areas. Antioch may have developed such a structure before Rome. In Rome, there were many who claimed to be the rightful bishop, though again Irenaeus stressed the validity of one line of bishops from the time of St. Peter up to his contemporary Pope Victor I, and listed them. Some writers claim that the emergence of a single bishop in Rome probably did not occur until the middle of the 2nd century. In their view, Linus, Cletus and Clement were possibly prominent presbyter-bishops, but not necessarily monarchical bishops. Documents of the 1st century and early 2nd century indicate that the bishop of Rome had some kind of pre-eminence and prominence in the Church as a whole, as even a letter from the bishop, or patriarch, of Antioch acknowledged the Bishop of Rome as "a first among equals", though the detail of what this meant is unclear. It seems that at first the terms "episcopos" and "presbyter" were used interchangeably. The consensus among scholars has been that, at the turn of the 1st and 2nd centuries, local congregations were led by bishops and presbyters whose offices were overlapping or indistinguishable. Some say that there was probably "no single 'monarchical' bishop in Rome before the middle of the 2nd century...and likely later." Other scholars and historians disagree, citing the historical records of St. Ignatius of Antioch (d. 107) and St. Irenaeus, who recorded the linear succession of Bishops of Rome (the popes) up until their own times. However, 'historical' records written by those wanting to show an unbroken line of popes would naturally do so, and there are no objective substantiating documents. They also cite the importance accorded to the Bishops of Rome in the ecumenical councils, including the early ones. 
In the early Christian era, Rome and a few other cities had claims on the leadership of the worldwide Church. James the Just, known as "the brother of the Lord", served as head of the Jerusalem church, which is still honored as the "Mother Church" in Orthodox tradition. Alexandria had been a center of Jewish learning and became a center of Christian learning. Rome had a large congregation early in the apostolic period whom Paul the Apostle addressed in his Epistle to the Romans, and according to tradition Paul was martyred there. During the 1st century of the Church (c. 30–130), the Roman capital became recognized as a Christian center of exceptional importance. Clement I, at the end of the 1st century, wrote an epistle to the Church in Corinth intervening in a major dispute, and apologizing for not having taken action earlier. However, there are only a few other references of that time to recognition of the authoritative primacy of the Roman See outside of Rome. In the Ravenna Document of 13 October 2007, theologians chosen by the Catholic and the Eastern Orthodox Churches stated: "41. Both sides agree ... that Rome, as the Church that 'presides in love' according to the phrase of St Ignatius of Antioch, occupied the first place in the "taxis", and that the bishop of Rome was therefore the "protos" among the patriarchs." Translated into English, "protos" means "first among equals". What form that should take is still a matter of disagreement, just as it was when the Catholic and Orthodox Churches split in the Great East-West Schism. They also disagree on the interpretation of the historical evidence from this era regarding the prerogatives of the Bishop of Rome as "protos", a matter that was already understood in different ways in the first millennium. In the late 2nd century AD, there were more manifestations of Roman authority over other churches. 
In 189, an assertion of the primacy of the Church of Rome may be found in Irenaeus's "Against Heresies" (3:3:2): "With [the Church of Rome], because of its superior origin, all the churches must agree ... and it is in her that the faithful everywhere have maintained the apostolic tradition." In AD 195, Pope Victor I, in what is seen as an exercise of Roman authority over other churches, excommunicated the Quartodecimans for observing Easter on the 14th of Nisan, the date of the Jewish Passover, a tradition handed down by John the Evangelist (see Easter controversy). Celebration of Easter on a Sunday, as insisted on by the pope, is the system that has prevailed (see computus). The Edict of Milan in 313 granted freedom to all religions in the Roman Empire, beginning the Peace of the Church. In 325, the First Council of Nicaea condemned Arianism, declared Trinitarianism dogma, and in its sixth canon recognized the special role of the Sees of Rome, Alexandria, and Antioch. Great defenders of the Trinitarian faith included the popes, especially Liberius, who was exiled to Berea by Constantius II for that faith, and Damasus I, along with several other bishops. In 380, the Edict of Thessalonica declared Nicene Christianity to be the state religion of the empire, with the name "Catholic Christians" reserved for those who accepted that faith. While the civil power in the Eastern Roman Empire controlled the church, and the Ecumenical Patriarch of Constantinople, the capital, wielded much power, in the Western Roman Empire the Bishops of Rome were able to consolidate the influence and power they already possessed. After the Fall of the Western Roman Empire, barbarian tribes were converted to Arian Christianity or Catholicism; Clovis I, king of the Franks, was the first important barbarian ruler to convert to Catholicism rather than Arianism, allying himself with the papacy. Other tribes, such as the Visigoths, later abandoned Arianism in favour of Catholicism. 
After the fall of the Western Roman Empire, the pope served as a source of authority and continuity. Pope Gregory I (c. 540–604) administered the church with strict reform. From an ancient senatorial family, Gregory worked with the stern judgement and discipline typical of ancient Roman rule. Theologically, he represents the shift from the classical to the medieval outlook; his popular writings are full of dramatic miracles, potent relics, demons, angels, ghosts, and the approaching end of the world. Gregory's successors were largely dominated by the Exarch of Ravenna, the Byzantine emperor's representative in the Italian Peninsula. These humiliations, the weakening of the Byzantine Empire in the face of the Muslim conquests, and the inability of the emperor to protect the papal estates against the Lombards, made Pope Stephen II turn away from Emperor Constantine V and appeal to the Franks to protect his lands. Pepin the Short subdued the Lombards and donated Italian land to the papacy. When Pope Leo III crowned Charlemagne Roman Emperor in 800, he established the precedent that, in Western Europe, no man would be emperor without being crowned by a pope. The low point of the papacy was 867–1049. This period includes the Saeculum obscurum, the Crescentii era, and the Tusculan Papacy. The papacy came under the control of vying political factions. Popes were variously imprisoned, starved, killed, and deposed by force. The family of a certain papal official made and unmade popes for fifty years. The official's great-grandson, Pope John XII, held orgies of debauchery in the Lateran Palace. Otto I, Holy Roman Emperor, had John accused in an ecclesiastical court, which deposed him and elected a layman as Pope Leo VIII. John mutilated the Imperial representatives in Rome and had himself reinstated as pope. Conflict between the Emperor and the papacy continued, and eventually dukes in league with the emperor were buying bishops and popes almost openly. 
In 1049, Leo IX became pope, at last a pope with the character to face the papacy's problems. He traveled to the major cities of Europe to deal with the church's moral problems firsthand, notably simony and clerical marriage and concubinage. Through these long journeys, he restored the prestige of the papacy in Northern Europe. From the 7th century it had become common for European monarchies and nobility to found churches and perform investiture or deposition of clergy in their states and fiefdoms, their personal interests causing corruption among the clergy. This practice had become common because often the prelates and secular rulers were also participants in public life. To combat this and other practices that had corrupted the Church between the years 900 and 1050, centres emerged promoting ecclesiastical reform, the most important being the Abbey of Cluny, which spread its ideals throughout Europe. This reform movement gained strength with the election of Pope Gregory VII in 1073, who adopted a series of measures in the movement known as the Gregorian Reform, fighting vigorously against simony and the abuse of civil power and seeking to restore ecclesiastical discipline, including clerical celibacy. The conflict between popes and secular autocratic rulers such as the Holy Roman Emperor Henry IV and Henry I of England, known as the Investiture Controversy, was only resolved in 1122 by the Concordat of Worms, in which Pope Callixtus II decreed that bishops were to be invested with their spiritual authority by church leaders and with their temporal possessions by lay rulers. Soon after, Pope Alexander III began reforms that would lead to the establishment of canon law. Since the beginning of the 7th century, the Caliphate had conquered much of the southern Mediterranean, and represented a threat to Christianity. In 1095, the Byzantine emperor, Alexios I Komnenos, asked for military aid from Pope Urban II in the ongoing Byzantine–Seljuq wars. 
Urban, at the Council of Clermont, called the First Crusade to assist the Byzantine Empire to regain the old Christian territories, especially Jerusalem. With the East–West Schism, the Eastern Orthodox Church and the Catholic Church split definitively in 1054. This fracture was caused more by political events than by slight divergences of creed. Popes had galled the Byzantine emperors by siding with the king of the Franks, crowning a rival Roman emperor, appropriating the Exarchate of Ravenna, and driving into Greek Italy. In the Middle Ages, popes struggled with monarchs over power. From 1309 to 1377, the pope resided not in Rome but in Avignon. The Avignon Papacy was notorious for greed and corruption. During this period, the pope was effectively an ally of the Kingdom of France, alienating France's enemies, such as the Kingdom of England. The pope was understood to have the power to draw on the Treasury of Merit built up by the saints and by Christ, so that he could grant indulgences, reducing one's time in purgatory. The concept that a monetary fine or donation accompanied contrition, confession, and prayer eventually gave way to the common assumption that indulgences depended on a simple monetary contribution. The popes condemned misunderstandings and abuses, but were too pressed for income to exercise effective control over indulgences. Popes also contended with the cardinals, who sometimes attempted to assert the authority of Catholic Ecumenical Councils over that of the pope. Conciliarism holds that the supreme authority of the church lies with a General Council, not with the pope. Its foundations were laid early in the 13th century, and it culminated in the 15th century. The failure of Conciliarism to gain broad acceptance after the 15th century is taken as a factor in the Protestant Reformation. Various antipopes challenged papal authority, especially during the Western Schism (1378–1417). 
In this schism, the papacy had returned to Rome from Avignon, but an antipope was installed in Avignon, as if to extend the papacy there. The Eastern Church continued to decline with the Eastern Roman (Byzantine) Empire, undercutting Constantinople's claim to equality with Rome. Twice an Eastern Emperor tried to force the Eastern Church to reunify with the West: first at the Second Council of Lyon (1272–1274) and then at the Council of Florence (1431–1449). Papal claims of superiority were a sticking point in reunification, which failed in any event. In 1453, the Ottoman Empire captured Constantinople. Protestant Reformers criticized the papacy as corrupt and characterized the pope as the antichrist. Popes instituted a Catholic Reformation (1560–1648), which addressed the challenges of the Protestant Reformation and instituted internal reforms. Pope Paul III initiated the Council of Trent (1545–1563), whose definitions of doctrine and whose reforms sealed the triumph of the papacy over elements in the church that sought conciliation with Protestants and opposed papal claims. Gradually forced to give up secular power, the popes focused on spiritual issues. In 1870, the First Vatican Council proclaimed the dogma of papal infallibility for those rare occasions when the pope speaks "ex cathedra" in issuing a solemn definition of faith or morals. Later the same year, Victor Emmanuel II of Italy seized Rome from the pope's control and substantially completed the Italian unification. In 1929, the Lateran Treaty between the Kingdom of Italy and the Holy See established Vatican City as an independent city-state, guaranteeing papal independence from secular rule. In 1950, Pope Pius XII defined the Assumption of Mary as dogma, the only time that a pope has spoken "ex cathedra" since papal infallibility was explicitly declared. 
The Petrine Doctrine is still controversial as an issue of doctrine that continues to divide the eastern and western churches and separate Protestants from Rome. The Catholic Church teaches that, within the Christian community, the bishops as a body have succeeded to the body of the apostles ("apostolic succession") and the Bishop of Rome has succeeded to Saint Peter. Scriptural texts proposed in support of Peter's special position in relation to the church include Matthew 16:18–19, Luke 22:32 and John 21:15–17. The symbolic keys in the papal coats of arms are a reference to the phrase "the keys of the kingdom of heaven" in the first of these texts. Some Protestant writers have maintained that the "rock" that Jesus speaks of in this text is Jesus himself or the faith expressed by Peter. This idea is undermined by the Biblical usage of "Cephas", the masculine form of "rock" in Aramaic, to describe Peter. The "Encyclopædia Britannica" comments that "the consensus of the great majority of scholars today is that the most obvious and traditional understanding should be construed, namely, that rock refers to the person of Peter". The pope was originally chosen by those senior clergymen resident in and near Rome. In 1059 the electorate was restricted to the Cardinals of the Holy Roman Church, and the individual votes of all Cardinal Electors were made equal in 1179. The electors are now limited to those who have not reached the age of 80 on the day before the death or resignation of a pope. The pope does not need to be a Cardinal Elector or indeed a Cardinal; however, since the pope is the Bishop of Rome, only those who can be ordained a bishop can be elected, which means that any male baptized Catholic is eligible. The last pope elected when not yet a bishop was Gregory XVI in 1831; the last elected when not even a priest was Leo X in 1513; and the last elected when not a cardinal was Urban VI in 1378. 
If someone who is not a bishop is elected, he must be given episcopal ordination before the election is announced to the people. The Second Council of Lyon was convened on 7 May 1274 to regulate the election of the pope. This Council decreed that the cardinal electors must meet within ten days of the pope's death, and that they must remain in seclusion until a pope has been elected; this was prompted by the three-year "sede vacante" following the death of Pope Clement IV in 1268. By the mid-16th century, the electoral process had evolved into its present form, allowing for variation in the time between the death of the pope and the meeting of the cardinal electors. Traditionally, the vote was conducted by acclamation, by selection (by committee), or by plenary vote. Acclamation was the simplest procedure, consisting entirely of a voice vote. The election of the pope almost always takes place in the Sistine Chapel, in a sequestered meeting called a "conclave" (so called because the cardinal electors are theoretically locked in, "cum clave", i.e., with key, until they elect a new pope). Three cardinals are chosen by lot to collect the votes of absent cardinal electors (absent by reason of illness), three are chosen by lot to count the votes, and three are chosen by lot to review the count of the votes. The ballots are distributed and each cardinal elector writes the name of his choice on one and pledges aloud that he is voting for "one whom under God I think ought to be elected" before folding and depositing his vote on a plate atop a large chalice placed on the altar. For the 2005 papal conclave, a special urn was used for this purpose instead of a chalice and plate. The plate is then used to drop the ballot into the chalice, making it difficult for electors to insert multiple ballots. Before being read, the ballots are counted while still folded; if the number of ballots does not match the number of electors, the ballots are burned unopened and a new vote is held. 
Otherwise, each ballot is read aloud by the presiding Cardinal, who pierces the ballot with a needle and thread, stringing all the ballots together and tying the ends of the thread to ensure accuracy and honesty. Balloting continues until someone is elected by a two-thirds majority. (With the promulgation of "Universi Dominici Gregis" in 1996, a simple majority after a deadlock of twelve days was allowed, but this was revoked by Pope Benedict XVI by "motu proprio" in 2007.) One of the most prominent aspects of the papal election process is the means by which the results of a ballot are announced to the world. Once the ballots are counted and bound together, they are burned in a special stove erected in the Sistine Chapel, with the smoke escaping through a small chimney visible from Saint Peter's Square. The ballots from an unsuccessful vote are burned along with a chemical compound to create black smoke, or "fumata nera". (Traditionally, wet straw was used to produce the black smoke, but this was not completely reliable; the chemical compound is more dependable.) When a vote is successful, the ballots are burned alone, sending white smoke ("fumata bianca") through the chimney and announcing to the world the election of a new pope. Starting with the 2005 papal conclave, church bells are also rung as a signal that a new pope has been chosen. The Dean of the College of Cardinals then asks two solemn questions of the man who has been elected. First he asks, "Do you freely accept your election as Supreme Pontiff?" If he replies with the word "Accepto", his reign begins at that instant, not at the inauguration ceremony several days afterward. The Dean asks next, "By what name shall you be called?" The new pope announces the regnal name he has chosen. If the Dean himself is elected pope, the Vice Dean performs this task. 
The new pope is led through the "Door of Tears" to a dressing room where three sets of white papal vestments ("immantatio") await: small, medium, and large. Donning the appropriate vestments and reemerging into the Sistine Chapel, the new pope is given the "Fisherman's Ring" by the Camerlengo of the Holy Roman Church, whom he first either reconfirms or reappoints. The pope assumes a place of honor as the rest of the cardinals wait in turn to offer their first "obedience" ("adoratio") and to receive his blessing. The Senior Cardinal Deacon announces from a balcony over St. Peter's Square the following proclamation: "Annuntio vobis gaudium magnum! Habemus Papam!" ("I announce to you a great joy! We have a pope!"). He announces the new pope's Christian name along with his newly chosen regnal name. Until 1978 the pope's election was followed in a few days by the Papal coronation, which started with a procession with great pomp and circumstance from the Sistine Chapel to St. Peter's Basilica, with the newly elected pope borne in the "sedia gestatoria". After a solemn Papal Mass, the new pope was crowned with the "triregnum" (papal tiara) and he gave for the first time as pope the famous blessing "Urbi et Orbi" ("to the City [Rome] and to the World"). Another renowned part of the coronation was the lighting of a bundle of flax at the top of a gilded pole, which would flare brightly for a moment and then promptly extinguish, while the words "Sic transit gloria mundi" ("Thus passes worldly glory") were pronounced. A similar warning against papal hubris made on this occasion was the traditional exclamation "Annos Petri non videbis", reminding the newly crowned pope that he would not live to see his rule last as long as that of St. Peter, who according to tradition headed the church for 35 years and has thus far been the longest-reigning pope in the history of the Catholic Church. 
A traditionalist Catholic belief that lacks reliable authority claims that a Papal Oath was sworn, at their coronation, by all popes from Pope Agatho to Pope Paul VI, and that it was omitted with the abolition of the coronation ceremony. The Latin term "sede vacante" ("while the see is vacant") refers to a papal interregnum, the period between the death or resignation of a pope and the election of his successor. From this term is derived the term sedevacantism, which designates a category of dissident Catholics who maintain that there is no canonically and legitimately elected pope, and that there is therefore a "sede vacante". One of the most common reasons for holding this belief is the idea that the reforms of the Second Vatican Council, and especially the replacement of the Tridentine Mass with the Mass of Paul VI, are heretical, and that those responsible for initiating and maintaining these changes are heretics and not true popes. For centuries, from 1378 on, those elected to the papacy were predominantly Italians. Prior to the election of the Polish cardinal Karol Wojtyła as Pope John Paul II in 1978, the last non-Italian was Pope Adrian VI of the Netherlands, elected in 1522. John Paul II was followed by the election of the German-born Pope Benedict XVI, who was in turn followed by the Argentine-born Pope Francis, the first non-European pope in 1,272 years and the first Latin American, though of Italian ancestry. The current regulations regarding a papal interregnum—that is, a "sede vacante" ("vacant seat")—were promulgated by Pope John Paul II in his 1996 document "Universi Dominici Gregis". During the "sede vacante" period, the College of Cardinals is collectively responsible for the government of the Church and of the Vatican itself, under the direction of the Camerlengo of the Holy Roman Church; however, canon law specifically forbids the cardinals from introducing any innovation in the government of the Church during the vacancy of the Holy See. 
Any decision that requires the assent of the pope has to wait until the new pope has been elected and accepts office. In recent centuries, when a pope was judged to have died, it was reportedly traditional for the Cardinal Camerlengo to confirm the death ceremonially by gently tapping the pope's head thrice with a silver hammer, calling his birth name each time. This was not done on the deaths of popes John Paul I and John Paul II. The Cardinal Camerlengo retrieves the Ring of the Fisherman and cuts it in two in the presence of the Cardinals. The pope's seals are defaced, to keep them from ever being used again, and his personal apartment is sealed. The body lies in state for several days before being interred in the crypt of a leading church or cathedral; all popes who have died in the 20th and 21st centuries have been interred in St. Peter's Basilica. A nine-day period of mourning ("novendialis") follows the interment. It is highly unusual for a pope to resign. The 1983 Code of Canon Law states, "If it happens that the Roman Pontiff resigns his office, it is required for validity that the resignation is made freely and properly manifested but not that it is accepted by anyone." Benedict XVI, who vacated the Holy See on 28 February 2013, was the first pope to do so since Gregory XII's resignation in 1415. Popes adopt a new name on their accession, known as the papal name or regnal name. Currently, after a new pope is elected and accepts the election, he is asked, "By what name shall you be called?" The new pope chooses the name by which he will be known from that point on. The senior Cardinal Deacon, or Cardinal Protodeacon, then appears on the balcony of Saint Peter's to proclaim the new pope by his birth name and announce his papal name in Latin. It is customary when referring to popes to translate the regnal name into all local languages. 
Thus, for example, Papa Franciscus is Papa Francesco in Italian, but he is also known as Papa Francisco in his native Spanish, Pope Francis in English, etc. The official list of titles of the pope, in the order in which they are given in the "Annuario Pontificio", is: The best-known title, that of "Pope", does not appear in the official list, but is commonly used in the titles of documents, and appears, in abbreviated form, in their signatures. Thus Pope Paul VI signed as "Paulus PP. VI", the "PP." standing for "papa pontifex" ("pope and pontiff"). The title "Pope" was from the early 3rd century an honorific designation used for "any" bishop in the West. In the East, it was used only for the Bishop of Alexandria. Pope Marcellinus (d. 304) is the first Bishop of Rome shown in sources to have had the title "Pope" used of him. From the 6th century, the imperial chancery of Constantinople normally reserved this designation for the Bishop of Rome. From the early 6th century, it began to be confined in the West to the Bishop of Rome, a practice that was firmly in place by the 11th century, when Pope Gregory VII declared it reserved for the Bishop of Rome. In Eastern Christianity, where the title "Pope" is used also of the Bishop of Alexandria, the Bishop of Rome is often referred to as the "Pope of Rome", regardless of whether the speaker or writer is in communion with Rome or not. "Vicar of Jesus Christ" ("Vicarius Iesu Christi") is one of the official titles of the pope given in the "Annuario Pontificio". It is commonly used in the slightly abbreviated form "Vicar of Christ" ("Vicarius Christi"). While it is only one of the terms with which the pope is referred to as "Vicar", it is "more expressive of his supreme headship of the Church on Earth, which he bears in virtue of the commission of Christ and with vicarial power derived from him", a vicarial power believed to have been conferred on Saint Peter when Christ said to him: "Feed my lambs...Feed my sheep" (John 21:15–17). 
The first record of the application of this title to a Bishop of Rome appears in a synod of 495 with reference to Pope Gelasius I. But at that time, and down to the 9th century, other bishops too referred to themselves as vicars of Christ, and for another four centuries this description was sometimes used of kings and even judges, as it had been used in the 5th and 6th centuries to refer to the Byzantine emperor. Earlier still, in the 3rd century, Tertullian used "vicar of Christ" to refer to the Holy Spirit sent by Jesus. Its use specifically for the pope appears in the 13th century in connection with the reforms of Pope Innocent III, as can be observed already in his 1199 letter to Leo I, King of Armenia. Other historians suggest that this title was already used in this way in association with the pontificate of Pope Eugene III (1145–1153). The title "Vicar of Christ" is thus not used of the pope alone and has been used of all bishops since the early centuries. The Second Vatican Council referred to all bishops as "vicars and ambassadors of Christ", and this description of the bishops was repeated by Pope John Paul II in his encyclical "Ut unum sint", 95. The difference is that the other bishops are vicars of Christ for their own local churches, whereas the pope is vicar of Christ for the whole Church. On at least one occasion the title "Vicar of God" (a reference to Christ as God) was used of the pope. The title "Vicar of Peter" ("Vicarius Petri") is used only of the pope, not of other bishops. Variations of it include "Vicar of the Prince of the Apostles" ("Vicarius Principis Apostolorum") and "Vicar of the Apostolic See" ("Vicarius Sedis Apostolicae"). Saint Boniface described Pope Gregory II as vicar of Peter in the oath of fealty that he took in 722. In today's Roman Missal, the description "vicar of Peter" is found also in the collect of the Mass for a saint who was a pope. 
The term "pontiff" is derived from the Latin "pontifex", which literally means "bridge builder" ("pons" + "facere") and which designated a member of the principal college of priests in ancient Rome. The Latin word was rendered into ancient Greek in various ways, including as "hierophantēs" (hierophant) and "archiereus" (high priest). The head of the college was known as the Pontifex Maximus (the greatest pontiff). In Christian use, "pontifex" appears in the Vulgate translation of the New Testament to indicate the High Priest of Israel (in the original Koine Greek, "archiereus"). The term came to be applied to any Christian bishop, but since the 11th century commonly refers specifically to the Bishop of Rome, who is more strictly called the "Roman Pontiff". The use of the term to refer to bishops in general is reflected in the terms "Roman Pontifical" (a book containing rites reserved for bishops, such as confirmation and ordination) and "pontificals" (the insignia of bishops). The "Annuario Pontificio" lists as one of the official titles of the pope that of "Supreme Pontiff of the Universal Church". He is also commonly called the Supreme Pontiff or the Sovereign Pontiff ("Summus Pontifex"). "Pontifex Maximus", similar in meaning to "Summus Pontifex", is a title commonly found in inscriptions on papal buildings, paintings, statues and coins, usually abbreviated as "Pont. Max" or "P.M." The office of Pontifex Maximus, or head of the College of Pontiffs, was held by Julius Caesar and thereafter by the Roman emperors, until Gratian (375–383) relinquished it. Tertullian, when he had become a Montanist, used the title derisively of either the pope or the Bishop of Carthage. The popes began to use this title regularly only in the 15th century. 
Although the description "servant of the servants of God" ("servus servorum Dei") was also used by other Church leaders, including Augustine of Hippo and Benedict of Nursia, it was first used extensively as a papal title by Pope Gregory I, reportedly as a lesson in humility for the Patriarch of Constantinople, John the Faster, who had assumed the title "Ecumenical Patriarch". It became reserved for the pope in the 12th century and is used in papal bulls and similar important papal documents. From 1863 until 2005, the "Annuario Pontificio" also included the title "Patriarch of the West". This title was first used by Pope Theodore I in 642, and was only used occasionally. Indeed, it did not begin to appear in the pontifical yearbook until 1863. On 22 March 2006, the Vatican released a statement explaining the omission on the grounds of expressing a "historical and theological reality" and of "being useful to ecumenical dialogue". The title Patriarch of the West symbolized the pope's special relationship with, and jurisdiction over, the Latin Church—and the omission of the title neither symbolizes in any way a change in this relationship, nor distorts the relationship between the Holy See and the Eastern Churches, as solemnly proclaimed by the Second Vatican Council. Other titles commonly used are "His Holiness" (either used alone or as an honorific prefix, as in "His Holiness Pope Francis", with "Your Holiness" as a form of address) and "Holy Father". In Spanish and Italian, "Beatísimo/Beatissimo Padre" (Most Blessed Father) is often used in preference to "Santísimo/Santissimo Padre" (Most Holy Father). In the medieval period, "Dominus Apostolicus" ("the Apostolic Lord") was also used. Pope Francis signs some documents with his name alone, either in Latin ("Franciscus", as in an encyclical dated 29 June 2013) or in another language. 
Other documents he signs in accordance with the tradition of using Latin only, including the abbreviated form "PP." for the Latin "Papa Pontifex" ("Pope and Pontiff"). Popes who have an ordinal numeral in their name traditionally place the abbreviation "PP." before the ordinal numeral, as in "Benedictus PP. XVI" (Pope Benedict XVI), except in bulls of canonization and decrees of ecumenical councils, which a pope signs with the formula "Ego N. Episcopus Ecclesiae catholicae", without the numeral, as in "Ego Benedictus Episcopus Ecclesiae catholicae" (I, Benedict, Bishop of the Catholic Church). The pope's signature is followed, in bulls of canonization, by those of all the cardinals resident in Rome, and in decrees of ecumenical councils, by the signatures of the other bishops participating in the council, each signing as Bishop of a particular see. Papal bulls are headed "N. Episcopus Servus Servorum Dei" ("Name, Bishop, Servant of the Servants of God"). In general, they are not signed by the pope, but Pope John Paul II introduced in the mid-1980s the custom by which the pope signs not only bulls of canonization but also, using his normal signature such as "Benedictus PP. XVI", bulls of nomination of bishops. In heraldry, each pope has his own personal coat of arms. Though unique for each pope, the arms have for several centuries been traditionally accompanied by two keys in saltire (i.e., crossed over one another so as to form an "X") behind the escutcheon (shield): one silver key and one gold key, tied with a red cord, and above them a silver "triregnum" with three gold crowns and red "infulae" (lappets—two strips of fabric hanging from the back of the triregnum which fall over the neck and shoulders when worn). This is blazoned: "two keys in saltire or and argent, interlacing in the rings or, beneath a tiara argent, crowned or". The 21st century has seen departures from this tradition. 
In 2005, Pope Benedict XVI, while maintaining the crossed keys behind the shield, omitted the papal tiara from his personal coat of arms, replacing it with a mitre bearing three horizontal lines. Beneath the shield he added the pallium, a papal symbol of authority more ancient than the tiara, the use of which is also granted to metropolitan archbishops as a sign of communion with the See of Rome. Although the tiara was omitted in the pope's personal coat of arms, the coat of arms of the Holy See, which includes the tiara, remained unaltered. In 2013, Pope Francis maintained the mitre that replaced the tiara, but omitted the pallium. He also departed from papal tradition by adding beneath the shield his personal pastoral motto: "Miserando atque eligendo". The flag most frequently associated with the pope is the yellow and white flag of Vatican City, with the arms of the Holy See (blazoned: "Gules, two keys in saltire or and argent, interlacing in the rings or, beneath a tiara argent, crowned or") on the right-hand side (the "fly") in the white half of the flag (the left-hand side—the "hoist"—is yellow). The pope's escutcheon does not appear on the flag. This flag was first adopted in 1808, whereas the previous flag had been red and gold. Although Pope Benedict XVI replaced the triregnum with a mitre on his personal coat of arms, it has been retained on the flag. Pope Pius V (reigned 1566–1572) is often credited with having originated the custom whereby the pope wears white, by continuing after his election to wear the white habit of the Dominican order. In reality, the basic papal attire was white long before. The earliest document that describes it as such is the "Ordo XIII", a book of ceremonies compiled in about 1274. Later books of ceremonies describe the pope as wearing a red mantle, mozzetta, camauro and shoes, and a white cassock and stockings. 
Many contemporary portraits of 15th and 16th-century predecessors of Pius V show them wearing a white cassock similar to his. The status and authority of the Pope in the Catholic Church was dogmatically defined by the First Vatican Council on 18 July 1870. In its Dogmatic Constitution of the Church of Christ, the Council established the following canons: If anyone says that the blessed Apostle Peter was not established by the Lord Christ as the chief of all the apostles, and the visible head of the whole militant Church, or, that the same received great honour but did not receive from the same our Lord Jesus Christ directly and immediately the primacy in true and proper jurisdiction: let him be anathema. If anyone says that it is not from the institution of Christ the Lord Himself, or by divine right that the blessed Peter has perpetual successors in the primacy over the universal Church, or that the Roman Pontiff is not the successor of blessed Peter in the same primacy, let him be anathema. If anyone thus speaks, that the Roman Pontiff has only the office of inspection or direction, but not the full and supreme power of jurisdiction over the universal Church, not only in things which pertain to faith and morals, but also in those which pertain to the discipline and government of the Church spread over the whole world; or, that he possesses only the more important parts, but not the whole plenitude of this supreme power; or that this power of his is not ordinary and immediate, or over the churches altogether and individually, and over the pastors and the faithful altogether and individually: let him be anathema. 
We, adhering faithfully to the tradition received from the beginning of the Christian faith, to the glory of God, our Saviour, the elevation of the Catholic religion and the salvation of Christian peoples, with the approbation of the sacred Council, teach and explain that the dogma has been divinely revealed: that the Roman Pontiff, when he speaks ex cathedra, that is, when carrying out the duty of the pastor and teacher of all Christians by his supreme apostolic authority he defines a doctrine of faith or morals to be held by the universal Church, through the divine assistance promised him in blessed Peter, operates with that infallibility with which the divine Redeemer wished that His church be instructed in defining doctrine on faith and morals; and so such definitions of the Roman Pontiff are unalterable of themselves, and not from the consensus of the Church. But if anyone presumes to contradict this definition of Ours, which may God forbid: let him be anathema. In its Dogmatic Constitution on the Church (1964), the Second Vatican Council declared: On 11 October 2012, on the occasion of the 50th anniversary of the opening of the Second Vatican Council, 60 prominent theologians (including Hans Küng) put out a declaration stating that the intention of Vatican II to balance authority in the Church has not been realised. "Many of the key insights of Vatican II have not at all, or only partially, been implemented... A principal source of present-day stagnation lies in misunderstanding and abuse affecting the exercise of authority in our Church." The pope's official seat is in the Archbasilica of Saint John Lateran, considered the cathedral of the Diocese of Rome, and his official residence is the Apostolic Palace. He also possesses a summer residence at Castel Gandolfo, situated on the site of the ancient city of Alba Longa. Until the time of the Avignon Papacy, the residence of the pope was the Lateran Palace, donated by Roman emperor Constantine the Great. 
The pope's ecclesiastical jurisdiction (the Holy See) is distinct from his secular jurisdiction (Vatican City). It is the Holy See that conducts international relations; for hundreds of years, the papal court (the Roman Curia) has functioned as the government of the Catholic Church. The names "Holy See" and "Apostolic See" are ecclesiastical terminology for the ordinary jurisdiction of the Bishop of Rome (including the Roman Curia); the pope's various honors, powers, and privileges within the Catholic Church and the international community derive from his Episcopate of Rome in lineal succession from Saint Peter, one of the twelve apostles (see Apostolic succession). Consequently, Rome has traditionally occupied a central position in the Catholic Church, although this is not necessarily so. The pope derives his pontificate from being Bishop of Rome but is not required to live there; according to the Latin formula "ubi Papa, ibi Curia", wherever the pope resides is the central government of the Church, provided that the pope is Bishop of Rome. Thus, between 1309 and 1378, the popes lived in Avignon, France (see Avignon Papacy), a period often called the "Babylonian captivity" in allusion to the Biblical narrative of Jews of the ancient Kingdom of Judah living as captives in Babylonia. Though the pope is the diocesan bishop of Rome, he delegates most of the day-to-day work of leading the diocese to the Cardinal Vicar, who assures direct episcopal oversight of the diocese's pastoral needs, not in his own name but in that of the pope. The current Cardinal Vicar is Angelo De Donatis, who was appointed to the office in June 2017. 
Though the progressive Christianisation of the Roman Empire in the 4th century did not confer upon bishops civil authority within the state, the gradual withdrawal of imperial authority during the 5th century left the pope the senior imperial civilian official in Rome, as bishops were increasingly directing civil affairs in other cities of the Western Empire. This status as a secular and civil ruler was vividly displayed by Pope Leo I's confrontation with Attila in 452. The first expansion of papal rule outside of Rome came in 728 with the Donation of Sutri, which in turn was substantially increased in 754, when the Frankish ruler Pippin the Younger gave to the pope the land from his conquest of the Lombards. The pope may have utilized the forged Donation of Constantine to gain this land, which formed the core of the Papal States. This document, accepted as genuine until the 15th century, states that Constantine the Great placed the entire Western Empire of Rome under papal rule. In 800, Pope Leo III crowned the Frankish ruler Charlemagne as Roman Emperor, a major step toward establishing what later became known as the Holy Roman Empire; from that date onward the popes claimed the prerogative to crown the Emperor, though the right fell into disuse after the coronation of Charles V in 1530. Pope Pius VII was present at the coronation of Napoleon I in 1804 but did not actually perform the crowning. As mentioned above, the pope's sovereignty over the Papal States ended in 1870 with their annexation by Italy. Popes like Alexander VI, an ambitious if spectacularly corrupt politician, and Pope Julius II, a formidable general and statesman, were not afraid to use power to achieve their own ends, which included increasing the power of the papacy. 
This political and temporal authority was demonstrated through the papal role in the Holy Roman Empire (especially prominent during periods of contention with the Emperors, such as during the Pontificates of Pope Gregory VII and Pope Alexander III). Papal bulls, interdict, and excommunication (or the threat thereof) have been used many times to increase papal power. The Bull "Laudabiliter" in 1155 authorized Henry II of England to invade Ireland. In 1207, Innocent III placed England under interdict until King John made his kingdom a fiefdom to the Pope, complete with yearly tribute, saying, "we offer and freely yield...to our lord Pope Innocent III and his catholic successors, the whole kingdom of England and the whole kingdom of Ireland with all their rights and appurtenances for the remission of our sins". The Bull "Inter caetera" in 1493 led to the Treaty of Tordesillas in 1494, which divided the world into areas of Spanish and Portuguese rule. The Bull "Regnans in Excelsis" in 1570 excommunicated Elizabeth I of England and declared that all her subjects were released from all allegiance to her. The Bull "Inter gravissimas" in 1582 established the Gregorian calendar. Under international law, a serving head of state has sovereign immunity from the jurisdiction of the courts of other countries, though not from that of international tribunals. This immunity is sometimes loosely referred to as "diplomatic immunity", which is, strictly speaking, the immunity enjoyed by the "diplomatic representatives" of a head of state. International law treats the Holy See, essentially the central government of the Catholic Church, as the juridical equal of a state. It is distinct from the state of Vatican City, existing for many centuries before the foundation of the latter. (It is common for publications and news media to use "the Vatican", "Vatican City", and even "Rome" as metonyms for the Holy See.) 
Most countries of the world maintain the same form of diplomatic relations with the Holy See that they entertain with other states. Even countries without those diplomatic relations participate in international organizations of which the Holy See is a full member. The U.S. Justice Department has ruled that the pope enjoys head-of-state immunity as head of the Holy See, the state-equivalent worldwide religious jurisdiction, rather than as head of the territory of Vatican City. This head-of-state immunity, recognized by the United States, must be distinguished from that envisaged under the United States' Foreign Sovereign Immunities Act of 1976, which, while recognizing the basic immunity of foreign governments from being sued in American courts, lays down nine exceptions, including commercial activity and actions in the United States by agents or employees of the foreign governments. It was in relation to the latter that, in November 2008, the United States Court of Appeals in Cincinnati decided that a case over sexual abuse by Catholic priests could proceed, provided the plaintiffs could prove that the bishops accused of negligent supervision were acting as employees or agents of the Holy See and were following official Holy See policy. In April 2010, there was press coverage in Britain concerning a proposed plan by atheist campaigners and a prominent barrister to have Pope Benedict XVI arrested and prosecuted in the UK for alleged offences, dating from several decades before, in failing to take appropriate action regarding Catholic sex abuse cases, and disputing his immunity from prosecution in that country. This was generally dismissed as "unrealistic and spurious". Another barrister said that it was a "matter of embarrassment that a senior British lawyer would want to allow himself to be associated with such a silly idea". The pope's claim to authority is either disputed or not recognised at all by other churches. 
The reasons for these objections differ from denomination to denomination. Other traditional Christian churches (Assyrian Church of the East, the Oriental Orthodox Church, the Eastern Orthodox Church, the Old Catholic Church, the Anglican Communion, the Independent Catholic churches, etc.) accept the doctrine of Apostolic succession and, to varying extents, papal claims to a primacy of honour, while generally denying that the pope is the successor of Peter in any sense not shared by other bishops. Primacy is regarded as a consequence of the pope's position as bishop of the original capital city of the Roman Empire, a definition explicitly spelled out in the 28th canon of the Council of Chalcedon. These churches see no foundation to papal claims of "universal immediate jurisdiction", or to claims of papal infallibility. Several of these churches refer to such claims as "ultramontanism". In 1973, the United States Conference of Catholic Bishops' Committee on Ecumenical and Interreligious Affairs and the USA National Committee of the Lutheran World Federation in the official Catholic–Lutheran dialogue included this passage in a larger statement on papal primacy: Protestant denominations of Christianity reject the claims of Petrine primacy of honor, Petrine primacy of jurisdiction, and papal infallibility. These denominations vary from simply not accepting the pope's claim to authority as legitimate and valid, to believing that the pope is the Antichrist from 1 John 2:18, the Man of Sin from 2 Thessalonians 2:3–12, and the Beast out of the Earth from Revelation 13:11–18. This sweeping rejection is held by, among others, some denominations of Lutherans: Confessional Lutherans hold that the pope is the Antichrist, stating that this article of faith is part of a "quia" ("because") rather than "quatenus" ("insofar as") subscription to the Book of Concord. 
In 1932, one of these Confessional churches, the Lutheran Church–Missouri Synod (LCMS), adopted "A Brief Statement of the Doctrinal Position of the Missouri Synod", which a small number of Lutheran church bodies now hold. The Lutheran Churches of the Reformation, the Concordia Lutheran Conference, the Church of the Lutheran Confession, and the Illinois Lutheran Conference all hold to the "Brief Statement", which the LCMS places on its website. The Wisconsin Evangelical Lutheran Synod (WELS), another Confessional Lutheran church that declares the Papacy to be the Antichrist, released its own statement, the "Statement on the Antichrist", in 1959. The WELS still holds to this statement. Historically, Protestants objected to the papacy's claim of temporal power over all secular governments, including territorial claims in Italy, the papacy's complex relationship with secular states such as the Roman and Byzantine Empires, and the autocratic character of the papal office. In Western Christianity these objections both contributed to and are products of the Protestant Reformation. Groups sometimes form around antipopes, who claim the Pontificate without being canonically and properly elected to it. Traditionally, this term was reserved for claimants with a significant following of cardinals or other clergy. The existence of an antipope is usually due either to doctrinal controversy within the Church (heresy) or to confusion as to who is the legitimate pope at the time (schism). Briefly in the 15th century, three separate lines of popes claimed authenticity (see Papal Schism). Even Catholics do not all agree whether certain historical figures were popes or antipopes. Though antipope movements were significant at one time, they are now overwhelmingly minor fringe causes. In the earlier centuries of Christianity, the title "Pope", meaning "father", had been used by all bishops. Some popes used the term and others did not. 
Eventually, the title became associated especially with the Bishop of Rome. In a few cases, the term is used for other Christian clerical authorities. In English, Catholic priests are still addressed as "father", but the term "pope" is reserved for the head of the church hierarchy. "Black Pope" is a name that was popularly, but unofficially, given to the Superior General of the Society of Jesus due to the Jesuits' importance within the Church. This name, based on the black colour of his cassock, was used to suggest a parallel between him and the "White Pope" (since the time of Pope Pius V the popes have dressed in white). The Cardinal Prefect of the Congregation for the Evangelization of Peoples (formerly called the Sacred Congregation for the Propagation of the Faith) was likewise known as the "Red Pope", from his red cardinal's cassock and in view of his authority over all territories that were not considered in some way Catholic. Today this cardinal has power over mission territories for Catholicism, essentially the Churches of Africa and Asia, but in the past his competence extended also to all lands where Protestantism or Eastern Christianity was dominant. Some remnants of this situation remain, with the result that, for instance, New Zealand is still in the care of this Congregation. Since the papacy of Heraclas in the 3rd century, the Bishop of Alexandria in both the Coptic Orthodox Church of Alexandria and the Greek Orthodox Church of Alexandria continues to be called "Pope", the former being called "Coptic Pope" or, more properly, "Pope and Patriarch of All Africa on the Holy Orthodox and Apostolic Throne of Saint Mark the Evangelist and Holy Apostle" and the latter called "Pope and Patriarch of Alexandria and All Africa". In the Bulgarian Orthodox Church, Russian Orthodox Church and Serbian Orthodox Church, it is not unusual for a village priest to be called a "pope" ("поп" "pop"). 
However, this should be differentiated from the words used for the head of the Catholic Church (Bulgarian "папа" "papa", Russian "папа римский" "papa rimskiy"). Some new religious movements within Christianity, especially those that have disassociated themselves from the Catholic Church yet retain a Catholic hierarchical framework, have used the designation "pope" for a founder or current leader. Examples include the African Legio Maria Church and the European Palmarian Catholic Church in Spain. The Cao Dai, a Vietnamese faith that duplicates the Catholic hierarchy, is similarly headed by a pope. Although the average reign of the pope from the Middle Ages was a decade, a number of those whose reign lengths can be determined from contemporary historical data are the following: During the Western Schism, Avignon Pope Benedict XIII (1394–1423) ruled for 28 years, seven months and 12 days, which would place him third in the above list. However, since he is regarded as an anti-pope, he is not mentioned in the list above. There have been a number of popes whose reign lasted about a month or less. In the following list the number of calendar days includes partial days. Thus, for example, if a pope's reign commenced on 1 August and he died on 2 August, this would count as having reigned for two calendar days. Stephen (23–26 March 752) died of a stroke three days after his election, and before his consecration as a bishop. He is not recognized as a valid pope, but was added to the lists of popes in the 15th century as "Stephen II", causing difficulties in enumerating later popes named Stephen. 
The Holy See's "Annuario Pontificio", in its list of popes and antipopes, attaches a footnote to its mention of Stephen II (III): Published every year by the Roman Curia, the "Annuario Pontificio" attaches no consecutive numbers to the popes, stating that it is impossible to decide which side represented at various times the legitimate succession, in particular regarding Pope Leo VIII, Pope Benedict V and some mid-11th-century popes.
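The inclusive calendar-day convention used for the reign lengths above (a reign commencing on 1 August and ending on 2 August counts as two calendar days) can be sketched in a few lines of Python. This is only an illustration; the helper name is hypothetical and not from the source:

```python
from datetime import date

def reign_calendar_days(start: date, end: date) -> int:
    """Count calendar days inclusively: partial days at both ends each count."""
    return (end - start).days + 1

# The convention's own example: commenced 1 August, died 2 August -> 2 days.
print(reign_calendar_days(date(752, 8, 1), date(752, 8, 2)))   # 2

# Stephen, elected 23 March 752 and dead on 26 March 752 -> 4 calendar days
# (consistent with dying "three days after his election").
print(reign_calendar_days(date(752, 3, 23), date(752, 3, 26)))  # 4
```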
Passover Passover or Pesach is a major Jewish holiday that occurs in the spring on the 15th day of the Hebrew month of Nisan. One of the biblically ordained Three Pilgrimage Festivals, Passover is traditionally celebrated in the Land of Israel for seven days and for eight days among many Jews in the Diaspora, based on the concept of "yom tov sheni shel galuyot." In the Bible, Passover marks the Exodus of the Children of Israel from Egyptian slavery, when God "passed over" the houses of the Israelites during the last of the ten plagues. When the Temple in Jerusalem stood, the paschal lamb was offered and eaten on Passover eve, while the wave offering of barley was offered on the second day of the festival. Nowadays, in addition to the biblical prohibition of owning leavened foods for the duration of the holiday, the Passover seder is one of the most widely observed rituals in Judaism. In Modern Hebrew the name is rendered "Pesah, Pesakh". The etymology is disputed, and hypotheses are divided over whether to connect it to "psh" (to protect, save) or to a word meaning "limp, dance with limping motions". Cognate languages yield similar terms with distinct meanings, such as "make soft, soothe, placate" (Akkadian "passahu"), "harvest, commemoration, blow" (Egyptian), or "separate" (Arabic "fsh"). The verb "pasàch" () is first mentioned in the Torah's account of the Exodus from Egypt (), and there is some debate about its exact meaning. The commonly held assumption that it means "He passed over" (פסח), in reference to God "passing over" (or "skipping") the houses of the Hebrews during the final of the Ten Plagues of Egypt, stems from the translation provided in the Septuagint (παρελευσεται [Greek: "pareleusetai"] in , and εσκεπασεν [Greek: "eskepasen"] in ). Targum Onkelos translates "pesach" as "ve-yeiḥos" (Hebrew: וְיֵחוֹס "we-yēḥôs") "he had pity", coming from the Hebrew root חסה meaning to have pity. 
The term "Pesach" (Hebrew: "Pesaḥ") may also refer to the lamb or goat which was designated as the Passover sacrifice (called the "Korban Pesach" in Hebrew). Four days before the Exodus, the Hebrews were commanded to set aside a lamb (), and inspect it daily for blemishes. During the day on the 14th of Nisan, they were to slaughter the animal and use its blood to mark their lintels and door posts. Before midnight on the 15th of Nisan they were to consume the lamb. The English term "Passover" is first known to be recorded in the English language in William Tyndale's translation of the Bible, later appearing in the King James Version as well. It is a literal translation of the Hebrew term. In the King James Version, Exodus 12:23 reads: For the LORD will pass through to smite the Egyptians; and when he seeth the blood upon the lintel, and on the two side posts, the LORD will pass over the door, and will not suffer the destroyer to come in unto your houses to smite "you". The Passover ritual is widely thought to have its origins in an apotropaic rite, unrelated to the Exodus, to ensure the protection of a family home, a rite conducted wholly within a clan. Hyssop was employed to daub the blood of a slaughtered sheep on the lintels and door posts to ensure that demonic forces could not enter the home. A further hypothesis maintains that, once the Priestly Code was promulgated, the Exodus narrative took on a central function, as the apotropaic rite was, arguably, amalgamated with the Canaanite agricultural festival of spring, which was a ceremony of unleavened bread connected with the barley harvest. As the Exodus motif grew, the original function and symbolism of these double origins was lost. Several motifs replicate the features associated with the Mesopotamian Akitu festival. Other scholars, such as John Van Seters, J. B. Segal and Tamara Prosic, disagree with the merged two-festivals hypothesis. In the Book of Exodus, the Israelites are enslaved in ancient Egypt. 
Yahweh, the god of the Israelites, appears to Moses in a burning bush and commands Moses to confront Pharaoh. To show his power, Yahweh inflicts a series of 10 plagues on the Egyptians, culminating in the 10th plague, the death of the first-born. Before this final plague Yahweh commands Moses to tell the Israelites to mark a lamb's blood above their doors in order that Yahweh will pass over them (i.e., that they will not be touched by the death of the firstborn). The biblical regulations for the observance of the festival require that all leavening be disposed of before the beginning of the 15th of Nisan. An unblemished lamb or goat, known as the "Korban Pesach" or "Paschal Lamb", is to be set apart on 10th Nisan, and slaughtered at dusk as 14th Nisan ends (the literal meaning of the Hebrew phrase is "between the two evenings"), in preparation for the 15th of Nisan, when it will be eaten after being roasted. It is then to be eaten "that night", 15th Nisan, roasted without the removal of its internal organs, with unleavened bread, known as matzo, and bitter herbs known as "maror". Nothing of the sacrifice may remain uneaten when the sun rises on the morning of the 15th of Nisan; whatever is left must be burned. The biblical regulations pertaining to the original Passover, at the time of the Exodus only, also include how the meal was to be eaten: "with your loins girded, your shoes on your feet, and your staff in your hand; and ye shall eat it in haste: it is the LORD's passover". The biblical requirements of slaying the Paschal lamb in the individual homes of the Hebrews and smearing the blood of the lamb on their doorways were celebrated in Egypt. However, once Israel was in the wilderness and the tabernacle was in operation, a change was made in those two original requirements (). Passover lambs were to be sacrificed at the door of the tabernacle and no longer in the homes of the Jews. No longer, therefore, could blood be smeared on doorways. 
Called the "festival [of] the matzot" (Hebrew: חג המצות "ḥag ha-matzôth") in the Hebrew Bible, the commandment to keep Passover is recorded in the Book of Leviticus: The sacrifices may be performed only in a specific place prescribed by God. For Judaism, this is Jerusalem. The biblical commandments concerning the Passover (and the Feast of Unleavened Bread) stress the importance of remembering: In and , King Josiah of Judah restores the celebration of the Passover, to a standard not seen since the days of the judges or the days of the prophet Samuel. Some of these details can be corroborated, and to some extent amplified, in extrabiblical sources. The removal (or "sealing up") of the leaven is referred to in the Elephantine papyri, an Aramaic papyrus from 5th century BCE Elephantine in Egypt. The slaughter of the lambs on the 14th is mentioned in "The Book of Jubilees", a Jewish work of the Ptolemaic period, and by the Herodian-era writers Josephus and Philo. These sources also indicate that "between the two evenings" was taken to mean the afternoon. "Jubilees" states the sacrifice was eaten that night, and together with Josephus states that nothing of the sacrifice was allowed to remain until morning. Philo states that the banquet included hymns and prayers. The Passover begins on the 15th day of the month of Nisan, which typically falls in March or April of the Gregorian calendar. The 15th day begins in the evening, after the 14th day, and the seder meal is eaten that evening. Passover is a spring festival, so the 15th day of Nisan typically begins on the night of a full moon after the northern vernal equinox. However, due to leap months falling after the vernal equinox, Passover sometimes starts on the second full moon after vernal equinox, as in 2016. To ensure that Passover did not start before spring, the tradition in ancient Israel held that the first day of Nisan would not start until the barley was ripe, being the test for the onset of spring. 
If the barley was not ripe, or various other phenomena indicated that spring was not yet imminent, an intercalary month (Adar II) would be added. However, since at least the 4th century, the date has been fixed mathematically. In Israel, Passover is the seven-day holiday of the Feast of Unleavened Bread, with the first and last days celebrated as legal holidays and as holy days involving holiday meals, special prayer services, and abstention from work; the intervening days are known as Chol HaMoed ("Weekdays [of] the Festival"). Jews outside the Land of Israel celebrate the festival for eight days. Reform and Reconstructionist Jews usually celebrate the holiday over seven days. Karaites use a different version of the Jewish calendar, differing from the modern Jewish calendar by one or two days. The Samaritans use a calendrical system that determines the timing of feast days by a different method from that of current Jewish practice. In 2009, for example, Nisan 15 on the Jewish calendar used by Rabbinic Judaism corresponds to April 9. On the calendars used by Karaites and Samaritans, "Abib" or "Aviv" 15 (as opposed to "Nisan") corresponds to April 11 in 2009. The Karaite and Samaritan Passovers are each one day long, followed by the six-day Festival of Unleavened Bread – for a total of seven days. The central element of Passover in Judaism was the sacrificial lamb. During the existence of the Tabernacle and later the Temple in Jerusalem, the focus of the Passover festival was the Passover sacrifice (Hebrew: "korban Pesach"), also known as the Paschal lamb, eaten during the Passover Seder on the 15th of Nisan. Every family large enough to completely consume a young lamb or wild goat was required to offer one for sacrifice at the Jewish Temple on the afternoon of the 14th day of Nisan (), and eat it that night, which was the 15th of Nisan (). 
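The mathematically fixed date described above can be illustrated with Gauss's Passover formula, a well-known closed-form computation of the Gregorian date of 15 Nisan. This is only a sketch, not part of the article: the constants are those commonly published for the formula, the +13-day Julian-to-Gregorian correction restricts it to the years 1900–2099, and the function name is hypothetical:

```python
from math import floor

def passover_gregorian(year: int) -> tuple:
    """Gregorian (month, day) of 15 Nisan via Gauss's Passover formula.

    The fixed +13-day Julian-to-Gregorian correction below limits this
    sketch to Gregorian years 1900-2099.
    """
    A = year + 3760                      # Jewish year in the spring of `year`
    a = (12 * A + 17) % 19               # position in the 19-year Metonic cycle
    b = A % 4                            # Julian leap-year cycle term
    q = 32.044093161144 + 1.5542417966212 * a + b / 4.0 - 0.003177794022 * A
    M, m = floor(q), q - floor(q)
    c = (M + 3 * A + 5 * b + 5) % 7      # weekday term for the postponement rules
    if c in (2, 4, 6):                   # dechiyot (postponements)
        M += 1
    elif c == 1 and a > 6 and m >= 0.632870370:
        M += 2
    elif c == 0 and a > 11 and m >= 0.897723765:
        M += 1
    M += 13                              # Julian -> Gregorian shift, 1900-2099
    return (3, M) if M <= 31 else (4, M - 31)

print(passover_gregorian(2009))  # (4, 9)  -- April 9, as in the article
print(passover_gregorian(2016))  # (4, 23) -- the "second full moon" year noted above
```

The two printed dates match the article's examples: April 9 in 2009, and the later 2016 date produced by the leap month falling after the vernal equinox.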
If the family was too small to finish eating the entire offering in one sitting, an offering was made for a group of families. The sacrifice could not be offered with anything leavened (), and had to be roasted, without its head, feet, or inner organs being removed () and eaten together with unleavened bread ("matzo") and bitter herbs ("maror"). One had to be careful not to break any bones from the offering (), and none of the meat could be left over by morning ( ). Because of the Passover sacrifice's status as a sacred offering, the only people allowed to eat it were those who had the obligation to bring the offering. Among those who could not offer or eat the Passover lamb were an apostate (), a servant (), an uncircumcised man (), a person in a state of ritual impurity, except when a majority of Jews are in such a state ("Pesahim" 66b), and a non-Jew. The offering had to be made before a quorum of 30 ("Pesahim" 64b). In the Temple, the Levites sang Hallel while the priests performed the sacrificial service. Men and women were equally obligated regarding the offering ("Pesahim" 91b). Today, in the absence of the Temple, when no sacrifices are offered or eaten, the mitzvah of the "Korban Pesach" is memorialized in the "Seder Korban Pesach", a set of scriptural and Rabbinic passages dealing with the Passover sacrifice, customarily recited after the "Mincha" (afternoon prayer) service on the 14th of Nisan, and in the form of the "zeroa", a symbolic food placed on the Passover Seder Plate (but not eaten), which is usually a roasted shankbone (or a chicken wing or neck). The eating of the afikoman substitutes for the eating of the "Korban Pesach" at the end of the Seder meal (Mishnah Pesachim 119a). Many Sephardi Jews have the custom of eating lamb or goat meat during the Seder in memory of the "Korban Pesach". 
Leaven, in Hebrew "chametz" (Hebrew: חמץ "ḥamets", "leavening") is made from one of five types of grains combined with water and left to stand for more than eighteen minutes. The consumption, keeping, and owning of "chametz" is forbidden during Passover. Yeast and fermentation are not themselves forbidden, as seen, for example, in wine, which is required rather than merely permitted. According to Halakha, the ownership of such "chametz" is also proscribed. "Chametz" does not include baking soda, baking powder or like products. Although these are defined in English as leavening agents, they leaven by chemical reaction, not by biological fermentation. Thus, bagels, waffles and pancakes made with baking soda and matzo meal are considered permissible, while bagels made with sourdough and pancakes and waffles made with yeast are prohibited. The Torah commandments regarding "chametz" are: Observant Jews spend the weeks before Passover in a flurry of thorough housecleaning, to remove every morsel of "chametz" from every part of the home. Jewish law requires the elimination of olive-sized or larger quantities of leavening from one's possession, but most housekeeping goes beyond this. Even the cracks of kitchen counters are thoroughly scrubbed, for example, to remove any traces of flour and yeast, however small. Any item or implement that has handled "chametz" is generally put away and not used during Passover. Some hotels, resorts, and even cruise ships across America, Europe and Israel also undergo a thorough housecleaning to make their premises "kosher for Pesach" to cater to observant Jews. Some scholars suggest that the command to abstain from leavened food or yeast suggests that sacrifices offered to God involve the offering of objects in "their least altered state", nearest to the way in which they were initially made by God. According to other scholars, the absence of leaven or yeast means that leaven or yeast symbolizes corruption and spoiling. 
There are also variations with restrictions on eating matzo before Passover so that there will be an increased appetite for it during Passover itself. Primarily among Chabad Chassidim, there is a custom of not eating matzo (flat unleavened bread) in the 30 days before Passover begins. Others have a custom to refrain from eating matzo from Rosh Chodesh Nisan, while the halacha merely restricts one from eating matzo on the day before Passover. Leaven or "chametz" may be sold rather than discarded, especially in the case of relatively valuable forms such as liquor distilled from wheat, with the products being repurchased afterward. In some cases, they may never leave the house, instead being formally sold while remaining in the original owner's possession in a locked cabinet until they can be repurchased after the holiday. Modern observance may also include sealing cabinets and drawers which contain "chametz" shut by using adhesive tape, which serves a similar purpose to a lock but also shows evidence of tampering. Although the practice of selling "chametz" dates back many years, some Reform rabbinical authorities have come to regard it with disdain – since the supposed "new owner" never takes actual possession of the goods. The sale of "chametz" may also be conducted communally via a rabbi, who becomes the "agent" for all the community's Jews through a halakhic procedure called a "kinyan" (acquisition). Each householder must put aside all the "chametz" he is selling into a box or cupboard, and the rabbi enters into a contract to sell all the "chametz" to a non-Jew (who is not obligated to observe the commandments) in exchange for a small down payment ("e.g." $1.00), with the remainder due after Passover. This sale is considered completely binding according to Halakha, and at any time during the holiday, the buyer may come to take or partake of his property. At the end of the holiday, the rabbi re-purchases the goods for less than the price at which they were sold. 
On the night of the fourteenth of Nisan, the night before the Passover Seder (after nightfall on the evening before Passover eve), Jews do a formal search in their homes known as "bedikat chametz" for any possible remaining leaven ("chametz"). The Talmudic sages instructed that a search for "chametz" be made in every home, place of work, or any place where "chametz" may have been brought during the year. When the first Seder is on a Saturday night, the search is conducted on the preceding Thursday night (thirteenth of Nisan), as "chametz" cannot be burned during Shabbat. The Talmud in Pesahim (p. 2a) derives from the Torah that the search for "chametz" be conducted by the light of a candle, and therefore it is done at night; although the final destruction of the "chametz" (usually by burning it in a small bonfire) is done on the next morning, the blessing is made at night because the search is both in preparation for and part of the commandments to remove and destroy all "chametz" from one's possession. Before the search is begun, a special blessing is recited. If several people or family members assist in the search, then only one person, usually the head of the family, recites the blessing, having in mind to include everyone present: In Hebrew: ברוך אתה י-הוה א-להינו מלך העולם אשר קדשנו במצותיו וצונו על בעור חמץ The search is then usually conducted by the head of the household, joined by his family, including children under the supervision of their parents. It is customary to turn off the lights and conduct the search by candlelight, using a feather and a wooden spoon: candlelight effectively illuminates corners without casting shadows; the feather can dust crumbs out of their hiding places; and the wooden spoon which collects the crumbs can be burned the next day with the "chametz". However, most contemporary Orthodox Jewish authorities permit using a flashlight, while some strongly encourage it due to the danger involved in using a candle. 
Because the house is assumed to have been thoroughly cleaned by the night before Passover, there is some concern that making a blessing over the search for "chametz" will be in vain ("bracha l'vatala") if nothing is found. Thus, 10 morsels of bread or cereal smaller than the size of an olive are traditionally hidden throughout the house in order to ensure that some "chametz" will be found. Upon conclusion of the search, with all the small pieces safely wrapped up and put in one bag or place, to be burned the next morning, the following is said: Original declaration as recited in Aramaic: כל חמירא וחמיעא דאכא ברשותי דלא חמתה ודלא בערתה ודלא ידענא לה לבטל ולהוי הפקר כעפרא דארעא Note that if the 14th of Nisan falls on Shabbat, many of the observances below are moved to the 13th instead, due to the restrictions in place during Shabbat. On the day preceding the first Passover seder (or on Thursday morning preceding the seder, when the first seder falls on Motza'ei Shabbat), firstborn sons are commanded to observe the Fast of the Firstborn, which commemorates the salvation of the Hebrew firstborns. According to the Torah, God struck down all Egyptian firstborns while the Israelites were not affected. However, it is customary for synagogues to conduct a "siyum" (ceremony marking the completion of a section of Torah learning) right after morning prayers, and the celebratory meal that follows cancels the firstborn's obligation to fast. On the morning of the 14th of Nisan, any leavened products that remain in the householder's possession, along with the 10 morsels of bread from the previous night's search, are burned ("s'rayfat chametz"). 
The head of the household repeats the declaration of "biyur chametz", declaring any "chametz" that may not have been found to be null and void "as the dust of the earth": Original declaration as recited in Aramaic: כל חמירא וחמיעא דאכא ברשותי דלא חמתה ודלא בערתה ודלא ידענא לה לבטל ולהוי הפקר כעפרא דארעא Should more "chametz" actually be found in the house during the Passover holiday, it must be burnt as soon as possible. Unlike "chametz", which can be eaten any day of the year except during Passover, kosher for Passover foods can be eaten year-round. They need not be burnt or otherwise discarded after the holiday ends. The historic "Paschal lamb" Passover sacrifice ("Korban Pesach") has not been brought since the Romans' destruction of the Second Jewish temple approximately two thousand years ago, and it is therefore not part of the modern observance of the holiday. In the times when the Jewish Temples stood, the lamb was slaughtered and cooked on the evening of Passover and was completely consumed before the morning, as described in the Book of Exodus. Due to the Torah injunction not to eat "chametz" (leaven) during Passover, observant families typically own complete sets of serving dishes, glassware and silverware (and in some cases, even separate dishwashers and sinks) which have never come into contact with "chametz", for use only during Passover. Under certain circumstances, some "chametz" utensils can be immersed in boiling water ("hagalat keilim") to purge them of any traces of "chametz" that may have accumulated during the year. Many Sephardic families thoroughly wash their year-round glassware and then use it for Passover, as the Sephardic position is that glass does not absorb enough traces of food to present a problem. Similarly, ovens may be used for Passover either by setting the self-cleaning function to the highest degree for a certain period of time, or by applying a blow torch to the interior until the oven glows red hot (a process called "libun gamur"). 
A symbol of the Passover holiday is matzo, an unleavened flatbread made solely from flour and water which is continually worked from mixing through baking, so that it is not allowed to rise. Matzo may be made by machine or by hand. The Torah contains an instruction to eat matzo, specifically, on the first night of Passover and to eat only unleavened bread (in practice, matzo) during the entire week of Passover. Consequently, the eating of matzo figures prominently in the Passover Seder. There are several explanations for this. The Torah says that it is because the Hebrews left Egypt with such haste that there was no time to allow baked bread to rise; thus flat, unleavened bread, matzo, is a reminder of the rapid departure of the Exodus. Other scholars teach that in the time of the Exodus, matzo was commonly baked for the purpose of traveling because it preserved well and was light to carry (making it similar to hardtack), suggesting that matzo was baked intentionally for the long journey ahead. Matzo has also been called "Lechem Oni" (Hebrew: "bread of poverty"). There is an attendant explanation that matzo serves as a symbol to remind Jews what it is like to be a poor slave and to promote humility, appreciate freedom, and avoid the inflated ego symbolized by more luxurious leavened bread. "Shmura matzo" ("watched" or "guarded" matzo) is the bread of preference for the Passover Seder in Orthodox Jewish communities. Shmura matzo is made from wheat that is guarded from contamination by leaven ("chametz") from the time of summer harvest to its baking into matzos five to ten months later. In the weeks before Passover, matzos are prepared for holiday consumption. In many Orthodox Jewish communities, men traditionally gather in groups ("chaburas") to bake handmade matzo for use at the Seder, the dough being rolled by hand, resulting in a large and round matzo. 
"Chaburas" also work together in machine-made matzo factories, which produce the typically square-shaped matzo sold in stores. The baking of matzo is labor-intensive, as no more than 18 minutes is permitted from the mixing of flour and water to the conclusion of baking and removal from the oven. Consequently, only a small number of matzos can be baked at one time, and the "chabura" members are enjoined to work the dough constantly so that it is not allowed to ferment and rise. A special cutting tool is run over the dough just before baking to prick any bubbles which might make the matzo puff up; this creates the familiar dotted holes in the matzo. After the matzos come out of the oven, the entire work area is scrubbed down and swept to make sure that no pieces of old, potentially leavened dough remain, as any stray pieces are now "chametz", and can contaminate the next batch of matzo. Some machine-made matzos are completed within 5 minutes of being kneaded. It is traditional for Jewish families to gather on the first night of Passover (first two nights in Orthodox and Conservative communities outside Israel) for a special dinner called a seder (Hebrew: סדר "seder" – derived from the Hebrew word for "order" or "arrangement", referring to the very specific order of the ritual). The table is set with the finest china and silverware to reflect the importance of the meal. During this meal, the story of the Exodus from Egypt is retold using a special text called the Haggadah. Four cups of wine are consumed at various stages in the narrative. The Haggadah divides the night's procedure into 15 parts. These 15 parts parallel the 15 steps in the Temple in Jerusalem on which the Levites stood during Temple services, and which were memorialized in the 15 Psalms (#120–134) known as "Shir HaMa'alot" (Hebrew: "shiyr ha-ma‘alôth", "Songs of Ascent"). The seder is replete with questions, answers, and unusual practices (e.g. 
the recital of Kiddush, which is not immediately followed by the blessing over bread, as is the traditional procedure for all other holiday meals) to arouse the interest and curiosity of the children at the table. The children are also rewarded with nuts and candies when they ask questions and participate in the discussion of the Exodus and its aftermath. Likewise, they are encouraged to search for the "afikoman", the piece of matzo which is the last thing eaten at the seder. Audience participation and interaction are the rule, and many families' seders last long into the night with animated discussions and much singing. The seder concludes with additional songs of praise and faith printed in the Haggadah, including "Chad Gadya" ("One Little Kid" or "One Little Goat"). Maror (bitter herbs) symbolizes the bitterness of slavery in Egypt. The following verse from the Torah underscores that symbolism: "And they embittered (Hebrew: וימררו "ve-yimareru") their lives with hard labor, with mortar and with bricks and with all manner of labor in the field; any labor that they made them do was with hard labor" (Exodus 1:14). There is a Rabbinic requirement that four cups of wine are to be drunk during the seder meal. This applies to both men and women. The Mishnah says (Pes. 10:1) that even the poorest man in Israel has an obligation to drink. Each cup is connected to a different part of the seder: the first cup is for Kiddush, the second cup is connected with the recounting of the Exodus, the drinking of the third cup concludes Birkat Hamazon, and the fourth cup is associated with Hallel. Children have a very important role in the Passover seder. Traditionally the youngest child is prompted to ask questions about the Passover seder, beginning with the words, "Mah Nishtana HaLeila HaZeh" (Why is this night different from all other nights?). The questions encourage the gathering to discuss the significance of the symbols in the meal. 
The questions asked by the child are: Often the leader of the seder and the other adults at the meal will use prompted responses from the Haggadah, which states, "The more one talks about the Exodus from Egypt, the more praiseworthy he is." Many readings, prayers, and stories are used to recount the story of the Exodus. Many households add their own commentary and interpretation and often the story of the Jews is related to the theme of liberation and its implications worldwide. The "afikoman" – an integral part of the Seder itself – is used to engage the interest and excitement of the children at the table. During the fourth part of the Seder, called "Yachatz", the leader breaks the middle piece of matzo into two. He sets aside the larger portion as the "afikoman". Many families use the "afikoman" as a device for keeping the children awake and alert throughout the Seder proceedings by hiding the "afikoman" and offering a prize for its return. Alternatively, the children are allowed to "steal" the "afikoman" and demand a reward for its return. In either case, the "afikoman" must be consumed during the twelfth part of the Seder, "Tzafun". After the Hallel, the fourth glass of wine is drunk, and participants recite a prayer that ends in "Next year in Jerusalem!". This is followed by several lyric prayers that expound upon God's mercy and kindness, and give thanks for the survival of the Jewish people through a history of exile and hardship. "Echad Mi Yodea" ("Who Knows One?") is a playful song, testing the general knowledge of the children (and the adults). Some of these songs, such as "Chad Gadya" are allegorical. Beginning on the second night of Passover, the 16th day of Nisan, Jews begin the practice of the Counting of the Omer, a nightly reminder of the approach of the holiday of Shavuot 50 days hence. Each night after the evening prayer service, men and women recite a special blessing and then enumerate the day of the Omer. 
On the first night, for example, they say, "Today is the first day in (or, to) the Omer"; on the second night, "Today is the second day in the Omer." The counting also involves weeks; thus, the seventh day is commemorated, "Today is the seventh day, which is one week in the Omer." The eighth day is marked, "Today is the eighth day, which is one week and one day in the Omer," etc. When the Temple stood in Jerusalem, a sheaf of new-cut barley was presented before the altar on the second day of Unleavened Bread. Josephus writes: On the second day of unleavened bread, that is to say the sixteenth, our people partake of the crops which they have reaped and which have not been touched till then, and esteeming it right first to do homage to God, to whom they owe the abundance of these gifts, they offer to him the first-fruits of the barley in the following way. After parching and crushing the little sheaf of ears and purifying the barley for grinding, they bring to the altar an "assaron" for God, and, having flung a handful thereof on the altar, they leave the rest for the use of the priests. Thereafter all are permitted, publicly or individually, to begin harvest. Since the destruction of the Temple, this offering is brought in word rather than deed. One explanation for the Counting of the Omer is that it shows the connection between Passover and Shavuot. The physical freedom that the Hebrews achieved at the Exodus from Egypt was only the beginning of a process that climaxed with the spiritual freedom they gained at the giving of the Torah at Mount Sinai. Another explanation is that the newborn nation which emerged after the Exodus needed time to learn their new responsibilities vis-a-vis Torah and mitzvot before accepting God's law. The distinction between the Omer offering – a measure of barley, typically animal fodder – and the Shavuot offering – two loaves of wheat bread, human food – symbolizes the transition process. 
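The week-and-day arithmetic of the count described above can be sketched in a short helper function (a hypothetical illustration, not part of any liturgical source; the traditional formula uses Hebrew ordinal words, while digits are used here for simplicity):

```python
def omer_count(day: int) -> str:
    """Return a weeks-and-days phrasing for a given day of the
    Omer (1-49), splitting the count with divmod as the custom does."""
    if not 1 <= day <= 49:
        raise ValueError("the Omer is counted for 49 days")
    weeks, days = divmod(day, 7)
    text = f"Today is day {day} of the Omer"
    if weeks:  # from day 7 onward, the count also mentions whole weeks
        text += f", which is {weeks} week{'s' if weeks > 1 else ''}"
        if days:
            text += f" and {days} day{'s' if days > 1 else ''}"
    return text

print(omer_count(7))   # the seventh day completes one week
print(omer_count(8))   # one week and one day
```

So, for example, day 8 yields "1 week and 1 day", matching the phrasing quoted above.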
In Israel, Passover lasts for seven days with the first and last days being major Jewish holidays. In Orthodox and Conservative communities, no work is performed on those days, with most of the rules relating to the observances of Shabbat being applied. Outside Israel, in Orthodox and Conservative communities, the holiday lasts for eight days with the first two days and last two days being major holidays. In the intermediate days necessary work can be performed. Reform Judaism observes Passover over seven days, with the first and last days being major holidays. Like the holiday of Sukkot, the intermediary days of Passover are known as Chol HaMoed (festival weekdays) and are imbued with a semi-festive status. It is a time for family outings and picnic lunches of matzo, hardboiled eggs, fruits and vegetables, and Passover treats such as macaroons and homemade candies. Passover cake recipes call for potato starch or Passover cake flour made from finely granulated matzo instead of regular flour, and a large amount of eggs to achieve fluffiness. Cookie recipes use matzo farfel (broken bits of matzo) or ground nuts as the base. For families with Eastern European backgrounds, borsht, a soup made with beets, is a Passover tradition. While kosher for Passover packaged goods are available in stores, some families opt to cook everything from scratch during Passover week. In Israel, families that do not kasher their ovens can bake cakes, casseroles, and even meat on the stovetop in a Wonder Pot, an Israeli invention consisting of three parts: an aluminium pot shaped like a Bundt pan, a hooded cover perforated with venting holes, and a thick, round, metal disc with a center hole which is placed between the Wonder Pot and the flame to disperse heat. "Shvi'i shel Pesach" (שביעי של פסח) ("seventh [day] of Passover") is another full Jewish holiday, with special prayer services and festive meals. 
Outside the Land of Israel, in the Jewish diaspora, "Shvi'i shel Pesach" is celebrated on both the seventh and eighth days of Passover. This holiday commemorates the day the Children of Israel reached the Red Sea and witnessed both the miraculous "Splitting of the Sea" (Passage of the Red Sea) and the drowning of all the Egyptian chariots, horses and soldiers that pursued them. According to the Midrash, only the Pharaoh was spared to give testimony to the miracle that occurred. Hasidic Rebbes traditionally hold a "tish" on the night of "Shvi'i shel Pesach" and place a cup or bowl of water on the table before them. They use this opportunity to speak about the Splitting of the Sea to their disciples, and sing songs of praise to God. The "Second Passover" (Pesach Sheni) on the 14th of Iyar in the Hebrew Calendar is mentioned in the Hebrew Bible's Book of Numbers as a make-up day for people who were unable to offer the pesach sacrifice at the appropriate time due to ritual impurity or distance from Jerusalem. Just as on the first Pesach night, breaking bones from the second Paschal offering or leaving meat over until morning is prohibited. Today, Pesach Sheni on the 14th of Iyar has the status of a very minor holiday (so much so that many of the Jewish people have never even heard of it, and it essentially does not exist outside of Orthodox and traditional Conservative Judaism). There are no special prayers or observances that are considered Jewish law. The only change in the liturgy is that in some communities "Tachanun", a penitential prayer omitted on holidays, is not said. There is a custom, though not Jewish law, to eat just one piece of matzo on that night. Because the house is free of leaven ("chametz") for eight days, the Jewish household typically eats different foods during the week of Passover. 
These include distinctive Ashkenazi and Sephardi foods. The story of Passover, with its message that slaves can go free, and that the future can be better than the present, has inspired a number of religious sermons, prayers, and songs – including spirituals (what used to be called "Negro Spirituals"), within the African-American community. Rabbi Philip R. Alstat, an early leader of Conservative Judaism, known for his fiery rhetoric and powerful oratory skills, wrote and spoke in 1939 about the power of the Passover story during the rise of Nazi persecution and terror: Perhaps in our generation the counsel of our Talmudic sages may seem superfluous, for today the story of our enslavement in Egypt is kept alive not only by ritualistic symbolism, but even more so by tragic realism. We are the contemporaries and witnesses of its daily re-enactment. Are not our hapless brethren in the German Reich eating "the bread of affliction"? Are not their lives embittered by complete disenfranchisement and forced labor? Are they not lashed mercilessly by brutal taskmasters behind the walls of concentration camps? Are not many of their men-folk being murdered in cold blood? Is not the ruthlessness of the Egyptian Pharaoh surpassed by the sadism of the Nazi dictators? And yet, even in this hour of disaster and degradation, it is still helpful to "visualize oneself among those who had gone forth out of Egypt." It gives stability and equilibrium to the spirit. Only our estranged kinsmen, the assimilated, and the de-Judaized, go to pieces under the impact of the blow... But those who visualize themselves among the groups who have gone forth from the successive Egypts in our history never lose their sense of perspective, nor are they overwhelmed by confusion and despair... 
It is this faith, born of racial experience and wisdom, which gives the oppressed the strength to outlive the oppressors and to endure until the day of ultimate triumph when we shall "be brought forth from bondage unto freedom, from sorrow unto joy, from mourning unto festivity, from darkness unto great light, and from servitude unto redemption." The Samaritan religion celebrates its own, similar Passover holiday, based on the Samaritan Pentateuch. Samaritanism holds that the Jews and Samaritans share a common history, but split into distinct communities after the time of Moses. Passover is also celebrated in Karaite Judaism, which rejects the Oral Torah that characterizes mainstream Rabbinic Judaism, as well as by other groups claiming affiliation with Israelites. In Christianity, the celebration of Good Friday finds its roots in the Jewish feast of Passover, the evening on which Jesus was crucified as the Passover Lamb. Some Christians, including Messianic Jews, also celebrate Passover itself as a Christian holiday. In the Sunni branch of Islam, it is recommended to fast on the day of Ashurah (10th of Muharram) based on narrations attributed to Muhammad. The fast commemorates the day when Moses and his followers were saved from Pharaoh by God, who created a path through the Red Sea (the Exodus). According to Muslim tradition, the Jews of Madinah used to fast on the tenth of Muharram in observance of Passover. In narrations recorded in the al-Hadith (sayings of the Islamic prophet Muhammad) of Sahih al-Bukhari, it is recommended that Muslims fast on this day. It is also stipulated that its observance should differ from the feast of Passover as celebrated by the Jews, and Muhammad stated that Muslims should fast for two days instead of one, either on the 9th and 10th day or on the 10th and 11th day of Muharram.
Post Office Protocol In computing, the Post Office Protocol (POP) is an application-layer Internet standard protocol used by e-mail clients to retrieve e-mail from a mail server. POP version 3 (POP3) is the version in common use. The Post Office Protocol provides access via an Internet Protocol (IP) network for a user client application to a mailbox ("maildrop") maintained on a mail server. The protocol supports download and delete operations for messages. POP3 clients connect, retrieve all messages, store them on the client computer, and finally delete them from the server. This design of POP and its procedures was driven by the needs of users with only temporary Internet connections, such as dial-up access, allowing these users to retrieve e-mail when connected, and subsequently to view and manipulate the retrieved messages when offline. POP3 clients also have an option to leave mail on the server after download. By contrast, the Internet Message Access Protocol (IMAP) was designed to normally leave all messages on the server to permit management with multiple client applications, and to support both connected ("online") and disconnected ("offline") modes of operation. A POP3 server listens on well-known port number 110 for service requests. Encrypted communication for POP3 is either requested after protocol initiation, using the STLS command, if supported, or by POP3S, which connects to the server using Transport Layer Security (TLS) or Secure Sockets Layer (SSL) on well-known TCP port number 995. Messages available to the client are determined when a POP3 session opens the maildrop, and are identified by a message-number local to that session or, optionally, by a unique identifier assigned to the message by the POP server. This unique identifier is permanent and unique to the maildrop and allows a client to access the same message in different POP sessions. Mail is retrieved and marked for deletion by the message-number. 
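These maildrop semantics can be illustrated with a toy in-memory model (purely illustrative; the class and method names are invented and this is not a real server implementation): DELE only marks a message, and deletions take effect when the session ends with QUIT.

```python
class ToyMaildrop:
    """Minimal model of POP3 maildrop semantics: messages are
    numbered per session, marked for deletion with DELE, and only
    actually removed when the session ends with QUIT."""

    def __init__(self, messages):
        self.messages = dict(enumerate(messages, start=1))  # message-number -> text
        self.deleted = set()

    def list_(self):
        # LIST: message-numbers and sizes of messages not marked deleted
        return {n: len(m) for n, m in self.messages.items() if n not in self.deleted}

    def retr(self, n):
        # RETR: retrieve a message by its session-local message-number
        if n in self.deleted or n not in self.messages:
            raise KeyError("-ERR no such message")
        return self.messages[n]

    def dele(self, n):
        # DELE: mark for deletion; actual removal is deferred until QUIT
        self.retr(n)  # fails if the message is already deleted or absent
        self.deleted.add(n)

    def quit(self):
        # QUIT (UPDATE state): deletions actually take effect here
        for n in self.deleted:
            del self.messages[n]
        self.deleted.clear()

drop = ToyMaildrop(["Hello", "Second message"])
drop.dele(1)
assert 1 in drop.messages          # still present until QUIT
drop.quit()
assert list(drop.messages) == [2]  # removed only after the session ends
```

A real client would speak these commands over TCP, for example via Python's standard `poplib` module.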
When the client exits the session, mail marked for deletion is removed from the maildrop. The first version of the Post Office Protocol, POP1, was specified in RFC 918 (1984). POP2 was specified in RFC 937 (1985). POP3 is the version in most common use. It originated with RFC 1081 (1988) but the most recent specification is RFC 1939, updated with an extension mechanism (RFC 2449) and an authentication mechanism in RFC 1734. This led to a number of POP implementations such as Pine, POPmail, and other early mail clients. While the original POP3 specification supported only an unencrypted USER/PASS login mechanism or Berkeley .rhosts access control, today POP3 supports several authentication methods to provide varying levels of protection against illegitimate access to a user's e-mail. Most are provided by the POP3 extension mechanisms. POP3 clients support SASL authentication methods via the AUTH extension. MIT Project Athena also produced a Kerberized version. RFC 1460 introduced APOP into the core protocol. APOP is a challenge/response protocol which uses the MD5 hash function in an attempt to avoid replay attacks and disclosure of the shared secret. Clients implementing APOP include Mozilla Thunderbird, Opera Mail, Eudora, KMail, Novell Evolution, RimArts' Becky!, Windows Live Mail, PowerMail, Apple Mail, and Mutt. RFC 1460 was obsoleted by RFC 1725, which was in turn obsoleted by RFC 1939. POP4 exists only as an informal proposal adding basic folder management, multipart message support, as well as message flag management to compete with IMAP; however, its development has not progressed since 2003. An extension mechanism was proposed in RFC 2449 to accommodate general extensions as well as announce in an organized manner support for optional commands, such as TOP and UIDL. The RFC did not intend to encourage extensions, and reaffirmed that the role of POP3 is to provide simple support for mainly download-and-delete requirements of mailbox handling. 
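The APOP digest itself is simple to compute: it is the MD5 hash of the server's greeting timestamp banner concatenated with the shared secret, sent as lowercase hex. A sketch using the example values given in RFC 1939:

```python
import hashlib

def apop_digest(banner: str, secret: str) -> str:
    """APOP challenge/response: MD5 over the server's timestamp
    banner followed by the shared secret, as a lowercase hex string."""
    return hashlib.md5((banner + secret).encode("ascii")).hexdigest()

# Timestamp banner and secret from the APOP example in RFC 1939:
banner = "<1896.697170952@dbc.mtview.ca.us>"
digest = apop_digest(banner, "tanstaaf")
print(f"APOP mrose {digest}")
```

Because the digest depends on the per-connection timestamp, a captured response cannot be replayed against a later session, which is the attack APOP is designed to resist.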
The extensions are termed capabilities and are listed by the CAPA command. With the exception of APOP, the optional commands were included in the initial set of capabilities. Following the lead of ESMTP (RFC 5321), capabilities beginning with an X signify local capabilities. The STARTTLS extension allows the use of Transport Layer Security (TLS) or Secure Sockets Layer (SSL) to be negotiated using the "STLS" command, on the standard POP3 port, rather than an alternate. Some clients and servers instead use the alternate-port method, which uses TCP port 995 (POP3S). Demon Internet introduced extensions to POP3 that allow multiple accounts per domain, which have become known as "Standard Dial-up POP3 Service" (SDPS). To access each account, the username includes the hostname, as "john@hostname" or "john+hostname". Google Apps uses the same method. In computing, local e-mail clients can use the Kerberized Post Office Protocol (KPOP), an application-layer Internet standard protocol, to retrieve e-mail from a remote server over a TCP/IP connection. The KPOP protocol is based on the POP3 protocol – differing in that it adds Kerberos security and that it runs by default over TCP port number 1109 instead of 110. One mail server software implementation is found in the Cyrus IMAP server. RFC 1939 includes an example POP3 session dialog. POP3 servers without the optional APOP command expect the client to log in with the USER and PASS commands. The Internet Message Access Protocol (IMAP) is an alternative and more recent mailbox access protocol; it differs from POP3 in several notable respects.
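The client side of the plain USER/PASS login can be sketched with two small helpers (the function names are invented for illustration; a real client would send these lines over a TCP socket, e.g. with Python's `poplib`):

```python
def login_commands(user: str, password: str) -> list[str]:
    """Command lines a client sends for the plain USER/PASS login,
    each terminated by CRLF as POP3 requires."""
    return [f"USER {user}\r\n", f"PASS {password}\r\n"]

def is_ok(reply: str) -> bool:
    """POP3 replies begin with a status indicator: '+OK' on success,
    '-ERR' on failure."""
    return reply.startswith("+OK")

cmds = login_commands("mrose", "secret")
assert cmds[0] == "USER mrose\r\n"
assert is_ok("+OK maildrop has 2 messages (320 octets)")
assert not is_ok("-ERR never heard of mrose")
```

Note that this plain mechanism sends the password unencrypted, which is why the APOP, SASL, and STLS mechanisms described above exist.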
Punch (magazine) Punch; or, The London Charivari was a British weekly magazine of humour and satire established in 1841 by Henry Mayhew and wood-engraver Ebenezer Landells. Historically, it was most influential in the 1840s and 1850s, when it helped to coin the term "cartoon" in its modern sense as a humorous illustration. After the 1940s, when its circulation peaked, it went into a long decline, closing in 1992. It was revived in 1996, but closed again in 2002. "Punch" was founded on 17 July 1841 by Henry Mayhew and wood-engraver Ebenezer Landells, on an initial investment of £25. It was jointly edited by Mayhew and Mark Lemon. It was subtitled "The London Charivari" in homage to Charles Philipon's French satirical humour magazine "Le Charivari". Reflecting their satiric and humorous intent, the two editors took for their name and masthead the anarchic glove puppet, Mr. Punch, of Punch and Judy; the name also referred to a joke made early on about one of the magazine's first editors, Lemon, that "punch is nothing without lemon". Mayhew ceased to be joint editor in 1842 and became "suggestor in chief" until he severed his connection in 1845. The magazine initially struggled for readers, except for an 1842 "Almanack" issue which shocked its creators by selling 90,000 copies. In December 1842, due to financial difficulties, the magazine was sold to Bradbury and Evans, both printers and publishers. Bradbury and Evans capitalised on newly evolving mass printing technologies and were also the publishers for Charles Dickens and William Makepeace Thackeray. The term "cartoon" to refer to comic drawings was first used in "Punch" in 1843, when the Houses of Parliament were to be decorated with murals, and "cartoons" for the murals were displayed for the public; the term "cartoon" then meant a finished preliminary sketch on a large piece of cardboard, or "cartone" in Italian. 
"Punch" humorously appropriated the term to refer to its political cartoons, and the popularity of the "Punch" cartoons led to the term's widespread use. The illustrator Archibald Henning designed the cover of the magazine's first issues. The cover design varied in the early years, though Richard Doyle designed what became the magazine's masthead in 1849. Artists who published in "Punch" during the 1840s and 50s included John Leech, Richard Doyle, John Tenniel and Charles Keene. This group became known as "The "Punch" Brotherhood", which also included Charles Dickens who joined Bradbury and Evans after leaving Chapman and Hall in 1843. "Punch" authors and artists also contributed to another Bradbury and Evans literary magazine called "Once A Week" (est. 1859), created in response to Dickens' departure from "Household Words". In the 1860s and 1870s, conservative "Punch" faced competition from the upstart liberal journal "Fun", but after about 1874, "Fun"'s fortunes faded. At Evans's café in London, the two journals had "Round tables" in competition with each other. After months of financial difficulty and lack of market success, "Punch" became a staple for British drawing rooms because of its sophisticated humour and absence of offensive material, especially when viewed against the satirical press of the time. "The Times" and the Sunday paper "News of the World" used small pieces from "Punch" as column fillers, giving the magazine free publicity and indirectly granting a degree of respectability, a privilege not enjoyed by any other comic publication. "Punch" shared a friendly relationship with not only "The Times" but journals aimed at intellectual audiences such as the "Westminster Review", which published a fifty-three-page illustrated article on "Punch's" first two volumes. 
Historian Richard Altick writes that "To judge from the number of references to it in the private letters and memoirs of the 1840s..."Punch" had become a household word within a year or two of its founding, beginning in the middle class and soon reaching the pinnacle of society, royalty itself". Increasing in readership and popularity throughout the remainder of the 1840s and 1850s, "Punch" was the success story of a threepenny weekly paper that had become one of the most talked-about and enjoyed periodicals. "Punch" enjoyed an audience including Elizabeth Barrett, Robert Browning, Thomas Carlyle, Edward FitzGerald, Charlotte Brontë, Queen Victoria, Prince Albert, Ralph Waldo Emerson, Emily Dickinson, Herman Melville, Henry Wadsworth Longfellow, and James Russell Lowell. "Punch" gave several phrases to the English language, including The Crystal Palace, and the "Curate's egg" (first seen in an 1895 cartoon by George du Maurier). Several British humour classics were first serialised in "Punch", such as the "Diary of a Nobody" and "1066 and All That". Towards the end of the nineteenth century, the artistic roster included Harry Furniss, Linley Sambourne, Francis Carruthers Gould, and Phil May. Among the outstanding cartoonists of the following century were Bernard Partridge, H. M. Bateman, Bernard Hollowood, who also edited the magazine from 1957 to 1968, Kenneth Mahood and Norman Thelwell. Circulation broke the 100,000 mark around 1910, and peaked in 1947–1948 at 175,000 to 184,000. Sales declined steadily thereafter; ultimately, the magazine was forced to close in 2002 after 161 years of publication. "Punch" was widely emulated worldwide and was popular in the colonies. The colonial experience, especially in India, influenced "Punch" and its iconography. Tenniel's "Punch" cartoons of the 1857 Sepoy Mutiny led to a surge in the magazine's popularity. 
Colonial India was frequently caricatured in "Punch" and was an important source of knowledge of India for British readers. "Punch" material was collected in book formats from the late nineteenth century, which included "Pick of the Punch" annuals with cartoons and text features, "Punch and the War" (a 1941 collection of WWII-related cartoons), and "A Big Bowl of Punch" – which was republished a number of times. Many "Punch" cartoonists of the late 20th century published collections of their own, partly based on "Punch" contributions. "Punch" magazine ceased publishing in 1992. In early 1996, the businessman Mohamed Al-Fayed bought the rights to the name, and "Punch" was relaunched later that year. It was reported that the new version of the magazine was intended to be a spoiler aimed at "Private Eye", which had published many items critical of Fayed. "Punch" never became profitable in its new incarnation, and at the end of May 2002 it was announced that it would once more cease publication. Press reports quoted a loss of £16 million over the six years of publication, with only 6,000 subscribers at the end. Whereas the earlier version of "Punch" prominently featured the clownish character Punchinello (Punch of Punch and Judy) performing antics on front covers, the resurrected "Punch" did not use the character, but featured on its weekly covers a photograph of a boxing glove, thus informing its readers that the new magazine intended its name to mean "punch" in the sense of a punch in the eye. In 2004, much of the archive was acquired by the British Library, including the "Punch" table. The long oval Victorian table was brought into the offices some time around 1855, and was used for staff meetings and on other occasions. The wooden surface is scarred with the carved initials of the magazine's longtime writers, artists and editors, as well as six invited "strangers", including James Thurber and Prince Charles. 
Mark Twain declined the invitation, saying that the already-carved initials of William Makepeace Thackeray included his own. "Punch" was influential in British colonies around the world, and in countries including Turkey, India, Japan, and China, with "Punch" imitators appearing in Cairo, Yokohama, Tokyo, Hong Kong, and Shanghai.
https://en.wikipedia.org/wiki?curid=23069
Pacific Ocean The Pacific Ocean is the largest and deepest of Earth's oceanic divisions. It extends from the Arctic Ocean in the north to the Southern Ocean (or, depending on definition, to Antarctica) in the south and is bounded by the continents of Asia and Australia in the west and the Americas in the east. At in area (as defined with an Antarctic southern border), this largest division of the World Ocean—and, in turn, the hydrosphere—covers about 46% of Earth's water surface and about 32% of its total surface area, making it larger than all of Earth's land area combined. The centers of both the Water Hemisphere and the Western Hemisphere are in the Pacific Ocean. The equator subdivides it into the North(ern) Pacific Ocean and South(ern) Pacific Ocean, with two exceptions: the Galápagos and Gilbert Islands, while straddling the equator, are deemed wholly within the South Pacific. Its mean depth is . Challenger Deep in the Mariana Trench, located in the western north Pacific, is the deepest point in the world, reaching a depth of . The Pacific also contains the deepest point in the Southern Hemisphere, the Horizon Deep in the Tonga Trench, at . The third deepest point on Earth, the Sirena Deep, is also located in the Mariana Trench. The western Pacific has many major marginal seas, including the South China Sea, the East China Sea, the Sea of Japan, the Sea of Okhotsk, the Philippine Sea, the Coral Sea, and the Tasman Sea. Though the peoples of Asia and Oceania have traveled the Pacific Ocean since prehistoric times, the eastern Pacific was first sighted by Europeans in the early 16th century when Spanish explorer Vasco Núñez de Balboa crossed the Isthmus of Panama in 1513 and discovered the great "southern sea" which he named "Mar del Sur" (in Spanish). The ocean's current name was coined by Portuguese explorer Ferdinand Magellan during the Spanish circumnavigation of the world in 1521, as he encountered favorable winds on reaching the ocean. 
He called it "Mar Pacífico", which in both Portuguese and Spanish means "peaceful sea". Important human migrations occurred in the Pacific in prehistoric times. About 3000 BC, the Austronesian peoples on the island of Taiwan mastered the art of long-distance canoe travel and spread themselves and their languages south to the Philippines, Indonesia, and maritime Southeast Asia; west towards Madagascar; southeast towards New Guinea and Melanesia (intermarrying with native Papuans); and east to the islands of Micronesia, Oceania and Polynesia. Long-distance trade developed all along the coast from Mozambique to Japan. Trade, and therefore knowledge, extended to the Indonesian islands but apparently not Australia. By at least 878, when there was a significant Islamic settlement in Canton, much of this trade was controlled by Arabs or Muslims. In 219 BC, Xu Fu sailed out into the Pacific searching for the elixir of immortality. From 1404 to 1433, Zheng He led expeditions into the Indian Ocean. The first contact of European navigators with the western edge of the Pacific Ocean was made by the Portuguese expeditions of António de Abreu and Francisco Serrão, via the Lesser Sunda Islands, to the Maluku Islands, in 1512, and with Jorge Álvares's expedition to southern China in 1513, both ordered by Afonso de Albuquerque from Malacca. The east side of the ocean was discovered by Spanish explorer Vasco Núñez de Balboa in 1513 after his expedition crossed the Isthmus of Panama and reached a new ocean. He named it "Mar del Sur" (literally, "Sea of the South" or "South Sea") because the ocean was to the south of the coast of the isthmus where he first observed the Pacific. In 1519, Portuguese explorer Ferdinand Magellan sailed the Pacific east to west on a Spanish expedition to the Spice Islands that would eventually result in the first world circumnavigation. 
Magellan called the ocean "Pacífico" (or "Pacific", meaning "peaceful") because, after sailing through the stormy seas off Cape Horn, the expedition found calm waters. The ocean was often called the "Sea of Magellan" in his honor until the eighteenth century. Magellan called at one uninhabited Pacific island before stopping at Guam in March 1521. Although Magellan himself died in the Philippines in 1521, Spanish Basque navigator Juan Sebastián Elcano led the remains of the expedition back to Spain across the Indian Ocean and round the Cape of Good Hope, completing the first world circumnavigation in a single expedition in 1522. Sailing around and east of the Moluccas, between 1525 and 1527, Portuguese expeditions discovered the Caroline Islands, the Aru Islands, and Papua New Guinea. In 1542–43, the Portuguese also reached Japan. In 1564, five Spanish ships carrying 379 explorers crossed the ocean from Mexico led by Miguel López de Legazpi, and sailed to the Philippines and Mariana Islands. For the remainder of the 16th century, Spanish influence was paramount, with ships sailing from Mexico and Peru across the Pacific Ocean to the Philippines via Guam, and establishing the Spanish East Indies. The Manila galleons operated for two and a half centuries, linking Manila and Acapulco, in one of the longest trade routes in history. Spanish expeditions also discovered Tuvalu, the Marquesas, the Cook Islands, the Solomon Islands, and the Admiralty Islands in the South Pacific. Later, in the quest for Terra Australis ("the [great] Southern Land"), Spanish explorations in the 17th century, such as the expedition led by the Portuguese navigator Pedro Fernandes de Queirós, discovered the Pitcairn and Vanuatu archipelagos, and sailed the Torres Strait between Australia and New Guinea, named after navigator Luís Vaz de Torres. 
Dutch explorers, sailing around southern Africa, also engaged in discovery and trade; Willem Janszoon made the first completely documented European landing in Australia (1606), in Cape York Peninsula, and Abel Janszoon Tasman circumnavigated and landed on parts of the Australian continental coast and discovered Tasmania and New Zealand in 1642. In the 16th and 17th centuries, Spain considered the Pacific Ocean a "mare clausum"—a sea closed to other naval powers. As the only known entrance from the Atlantic, the Strait of Magellan was at times patrolled by fleets sent to prevent entrance of non-Spanish ships. On the western side of the Pacific Ocean the Dutch threatened the Spanish Philippines. The 18th century marked the beginning of major exploration by the Russians in Alaska and the Aleutian Islands, such as the First Kamchatka expedition and the Great Northern Expedition, led by the Danish-born Russian navy officer Vitus Bering. Spain also sent expeditions to the Pacific Northwest, reaching Vancouver Island in southern Canada, and Alaska. The French explored and settled Polynesia, and the British made three voyages with James Cook to the South Pacific and Australia, Hawaii, and the North American Pacific Northwest. In 1768, Pierre-Antoine Véron, a young astronomer accompanying Louis Antoine de Bougainville on his voyage of exploration, established the width of the Pacific with precision for the first time in history. One of the earliest voyages of scientific exploration was organized by Spain in the Malaspina Expedition of 1789–1794. It sailed vast areas of the Pacific, from Cape Horn to Alaska, Guam and the Philippines, New Zealand, Australia, and the South Pacific. Growing imperialism during the 19th century resulted in the occupation of much of Oceania by European powers, and later Japan and the United States. 
Significant contributions to oceanographic knowledge were made by the voyages of HMS "Beagle" in the 1830s, with Charles Darwin aboard; HMS "Challenger" during the 1870s; the USS "Tuscarora" (1873–76); and the German "Gazelle" (1874–76). In Oceania, France obtained a leading position as imperial power after making Tahiti and New Caledonia protectorates in 1842 and 1853, respectively. After navy visits to Easter Island in 1875 and 1887, Chilean navy officer Policarpo Toro negotiated the incorporation of the island into Chile with native Rapanui in 1888. By occupying Easter Island, Chile joined the imperial nations. By 1900, nearly all Pacific islands were under the control of Britain, France, the United States, Germany, Japan, and Chile. Although the United States gained control of Guam and the Philippines from Spain in 1898, Japan controlled most of the western Pacific by 1914 and occupied many other islands during the Pacific War; however, by the end of that war, Japan was defeated and the U.S. Pacific Fleet was the virtual master of the ocean. The Japanese-ruled Northern Mariana Islands came under the control of the United States. Since the end of World War II, many former colonies in the Pacific have become independent states. The Pacific separates Asia and Australia from the Americas. It may be further subdivided by the equator into northern (North Pacific) and southern (South Pacific) portions. It extends from the Antarctic region in the south to the Arctic in the north. The Pacific Ocean encompasses approximately one-third of the Earth's surface, having an area of — larger than Earth's entire landmass combined, . 
Extending approximately from the Bering Sea in the Arctic to the northern extent of the circumpolar Southern Ocean at 60°S (older definitions extend it to Antarctica's Ross Sea), the Pacific reaches its greatest east–west width at about 5°N latitude, where it stretches approximately from Indonesia to the coast of Colombia—halfway around the world, and more than five times the diameter of the Moon. The lowest known point on Earth—the Mariana Trench—lies below sea level. Its average depth is , putting the total water volume at roughly . Due to the effects of plate tectonics, the Pacific Ocean is currently shrinking by roughly per year on three sides, roughly averaging a year. By contrast, the Atlantic Ocean is increasing in size. Along the Pacific Ocean's irregular western margins lie many seas, the largest of which are the Celebes Sea, Coral Sea, East China Sea (East Sea), Philippine Sea, Sea of Japan, South China Sea (South Sea), Sulu Sea, Tasman Sea, and Yellow Sea (West Sea of Korea). The Indonesian Seaway (including the Strait of Malacca and Torres Strait) joins the Pacific and the Indian Ocean to the west, and Drake Passage and the Strait of Magellan link the Pacific with the Atlantic Ocean on the east. To the north, the Bering Strait connects the Pacific with the Arctic Ocean. As the Pacific straddles the 180th meridian, the "West Pacific" (or "western Pacific", near Asia) is in the Eastern Hemisphere, while the "East Pacific" (or "eastern Pacific", near the Americas) is in the Western Hemisphere. The South Pacific Ocean harbors the Southeast Indian Ridge, which crosses from south of Australia, turns into the Pacific-Antarctic Ridge (north of the South Pole), and merges with another ridge (south of South America) to form the East Pacific Rise, which in turn connects with another ridge (south of North America) that overlooks the Juan de Fuca Ridge. 
For most of Magellan's voyage from the Strait of Magellan to the Philippines, the explorer indeed found the ocean peaceful; however, the Pacific is not always peaceful. Many tropical storms batter the islands of the Pacific. The lands around the Pacific Rim are full of volcanoes and often affected by earthquakes. Tsunamis, caused by underwater earthquakes, have devastated many islands and in some cases destroyed entire towns. The Martin Waldseemüller map of 1507 was the first to show the Americas separating two distinct oceans. Later, the Diogo Ribeiro map of 1529 was the first to show the Pacific at about its proper size. The ocean has most of the islands in the world. There are about 25,000 islands in the Pacific Ocean. The islands entirely within the Pacific Ocean can be divided into three main groups known as Micronesia, Melanesia and Polynesia. Micronesia, which lies north of the equator and west of the International Date Line, includes the Mariana Islands in the northwest, the Caroline Islands in the center, the Marshall Islands to the east and the islands of Kiribati in the southeast. Melanesia, to the southwest, includes New Guinea, the world's second largest island after Greenland and by far the largest of the Pacific islands. The other main Melanesian groups from north to south are the Bismarck Archipelago, the Solomon Islands, Santa Cruz, Vanuatu, Fiji and New Caledonia. The largest area, Polynesia, stretching from Hawaii in the north to New Zealand in the south, also encompasses Tuvalu, Tokelau, Samoa, Tonga and the Kermadec Islands to the west, the Cook Islands, Society Islands and Austral Islands in the center, and the Marquesas Islands, Tuamotu, Mangareva Islands, and Easter Island to the east. Islands in the Pacific Ocean are of four basic types: continental islands, high islands, coral reefs and uplifted coral platforms. 
Continental islands lie outside the andesite line and include New Guinea, the islands of New Zealand, and the Philippines. Some of these islands are structurally associated with nearby continents. High islands are of volcanic origin, and many contain active volcanoes. Among these are Bougainville, Hawaii, and the Solomon Islands. The coral reefs of the South Pacific are low-lying structures that have built up on basaltic lava flows under the ocean's surface. One of the most dramatic is the Great Barrier Reef off northeastern Australia with chains of reef patches. A second island type formed of coral is the uplifted coral platform, which is usually slightly larger than the low coral islands. Examples include Banaba (formerly Ocean Island) and Makatea in the Tuamotu group of French Polynesia. The volume of the Pacific Ocean, representing about 50.1 percent of the world's oceanic water, has been estimated at some . Surface water temperatures in the Pacific can vary from , the freezing point of sea water, in the poleward areas to about near the equator. Salinity also varies latitudinally, reaching a maximum of 37 parts per thousand in the southeastern area. The water near the equator, which can have a salinity as low as 34 parts per thousand, is less salty than that found in the mid-latitudes because of abundant equatorial precipitation throughout the year. The lowest counts of less than 32 parts per thousand are found in the far north as less evaporation of seawater takes place in these frigid areas. The motion of Pacific waters is generally clockwise in the Northern Hemisphere (the North Pacific gyre) and counter-clockwise in the Southern Hemisphere. The North Equatorial Current, driven westward along latitude 15°N by the trade winds, turns north near the Philippines to become the warm Japan or Kuroshio Current. 
Turning eastward at about 45°N, the Kuroshio forks and some water moves northward as the Aleutian Current, while the rest turns southward to rejoin the North Equatorial Current. The Aleutian Current branches as it approaches North America and forms the base of a counter-clockwise circulation in the Bering Sea. Its southern arm becomes the chilled, slow, south-flowing California Current. The South Equatorial Current, flowing west along the equator, swings southward east of New Guinea, turns east at about 50°S, and joins the main westerly circulation of the South Pacific, which includes the Earth-circling Antarctic Circumpolar Current. As it approaches the Chilean coast, the South Equatorial Current divides; one branch flows around Cape Horn and the other turns north to form the Peru or Humboldt Current. The climate patterns of the Northern and Southern Hemispheres generally mirror each other. The trade winds in the southern and eastern Pacific are remarkably steady while conditions in the North Pacific are far more varied with, for example, cold winter temperatures on the east coast of Russia contrasting with the milder weather off British Columbia during the winter months due to the preferred flow of ocean currents. In the tropical and subtropical Pacific, the El Niño Southern Oscillation (ENSO) affects weather conditions. To determine the phase of ENSO, the most recent three-month sea surface temperature average for the area approximately to the southeast of Hawaii is computed, and if the region is more than above or below normal for that period, then an El Niño or La Niña is considered in progress. In the tropical western Pacific, the monsoon and the related wet season during the summer months contrast with dry winds in the winter which blow over the ocean from the Asian landmass. 
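The ENSO test described above is mechanically simple: average the most recent three months of sea-surface-temperature anomalies for the index region and compare the result against a threshold. A sketch of that classification, with an assumed ±0.5 °C threshold for illustration:

```python
def enso_phase(monthly_anomalies, threshold=0.5):
    """Classify the ENSO phase from monthly sea-surface-temperature
    anomalies (deg C) for the index region southeast of Hawaii.
    The 0.5 deg C threshold is an assumed value for illustration."""
    recent = monthly_anomalies[-3:]        # most recent three months
    avg = sum(recent) / len(recent)
    if avg >= threshold:
        return "El Niño"                   # persistently warmer than normal
    if avg <= -threshold:
        return "La Niña"                   # persistently cooler than normal
    return "neutral"

enso_phase([0.7, 0.9, 1.1])    # sustained warm anomalies classify as El Niño
```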
Worldwide, tropical cyclone activity peaks in late summer, when the difference between temperatures aloft and sea surface temperatures is the greatest; however, each particular basin has its own seasonal patterns. On a worldwide scale, May is the least active month, while September is the most active month. November is the only month in which all the tropical cyclone basins are active. The Pacific hosts the two most active tropical cyclone basins, which are the northwestern Pacific and the eastern Pacific. Pacific hurricanes form south of Mexico, sometimes striking the western Mexican coast and occasionally the southwestern United States between June and October, while typhoons form in the northwestern Pacific, moving into Southeast and East Asia from May to December. Tropical cyclones also form in the South Pacific basin, where they occasionally impact island nations. In the Arctic, icing from October to May can present a hazard for shipping, while persistent fog occurs from June to December. A climatological low in the Gulf of Alaska keeps the southern coast wet and mild during the winter months. The Westerlies and associated jet stream within the mid-latitudes can be particularly strong, especially in the Southern Hemisphere, due to the temperature difference between the tropics and Antarctica, which records the coldest temperature readings on the planet. In the Southern Hemisphere, because of the stormy and cloudy conditions associated with extratropical cyclones riding the jet stream, it is usual to refer to the Westerlies as the Roaring Forties, Furious Fifties and Shrieking Sixties according to the varying degrees of latitude. The ocean was first mapped by Abraham Ortelius; he called it Maris Pacifici following Ferdinand Magellan's description of it as "a pacific sea" during his circumnavigation from 1519 to 1522. To Magellan, it seemed much more calm (pacific) than the Atlantic. The andesite line is the most significant regional distinction in the Pacific. 
A petrologic boundary, it separates the deeper, mafic igneous rock of the Central Pacific Basin from the partially submerged continental areas of felsic igneous rock on its margins. The andesite line follows the western edge of the islands off California and passes south of the Aleutian arc, along the eastern edge of the Kamchatka Peninsula, the Kuril Islands, Japan, the Mariana Islands, the Solomon Islands, and New Zealand's North Island. The dissimilarity continues northeastward along the western edge of the Andes Cordillera along South America to Mexico, returning then to the islands off California. Indonesia, the Philippines, Japan, New Guinea, and New Zealand lie outside the andesite line. Within the closed loop of the andesite line are most of the deep troughs, submerged volcanic mountains, and oceanic volcanic islands that characterize the Pacific basin. Here basaltic lavas gently flow out of rifts to build huge dome-shaped volcanic mountains whose eroded summits form island arcs, chains, and clusters. Outside the andesite line, volcanism is of the explosive type, and the Pacific Ring of Fire is the world's foremost belt of explosive volcanism. The Ring of Fire is named after the several hundred active volcanoes that sit above the various subduction zones. The Pacific Ocean is the only ocean which is almost totally bounded by subduction zones. Only the Antarctic and Australian coasts have no nearby subduction zones. The Pacific Ocean was born 750 million years ago at the breakup of Rodinia, although it is generally called the Panthalassic Ocean until the breakup of Pangea, about 200 million years ago. The oldest Pacific Ocean floor is only around 180 Ma old, with older crust subducted by now. The Pacific Ocean contains several long seamount chains, formed by hotspot volcanism. These include the Hawaiian–Emperor seamount chain and the Louisville Ridge. The exploitation of the Pacific's mineral wealth is hampered by the ocean's great depths. 
In shallow waters of the continental shelves off the coasts of Australia and New Zealand, petroleum and natural gas are extracted, and pearls are harvested along the coasts of Australia, Japan, Papua New Guinea, Nicaragua, Panama, and the Philippines, although in sharply declining volume in some cases. Fish are an important economic asset in the Pacific. The shallower shoreline waters of the continents and the more temperate islands yield herring, salmon, sardines, snapper, swordfish, and tuna, as well as shellfish. Overfishing has become a serious problem in some areas. For example, catches in the rich fishing grounds of the Okhotsk Sea off the Russian coast have been reduced by at least half since the 1990s as a result of overfishing. The quantity of small plastic fragments floating in the north-east Pacific Ocean increased a hundredfold between 1972 and 2012. The ever-growing Great Pacific garbage patch between California and Japan is three times the size of France. An estimated 80,000 metric tons of plastic, totaling 1.8 trillion pieces, float in the patch. Marine pollution is a generic term for the harmful entry into the ocean of chemicals or particles. The main culprits are those who use rivers to dispose of their waste; the rivers then empty into the ocean, often also carrying chemicals used as fertilizers in agriculture. The excess of oxygen-depleting chemicals in the water leads to hypoxia and the creation of a dead zone. Marine debris, also known as marine litter, is human-created waste that has ended up floating in a lake, sea, ocean, or waterway. Oceanic debris tends to accumulate at the center of gyres and on coastlines, frequently washing aground, where it is known as beach litter. From 1946 to 1958, the Marshall Islands served as the Pacific Proving Grounds for the United States and were the site of 67 nuclear tests on various atolls. 
Several nuclear weapons were lost in the Pacific Ocean, including a one-megaton bomb lost during the 1965 Philippine Sea A-4 incident. In addition, the Pacific Ocean has served as the crash site of satellites, including Mars 96, Fobos-Grunt, and Upper Atmosphere Research Satellite.
https://en.wikipedia.org/wiki?curid=23070
Prince Edward Island Prince Edward Island (PEI) is a province of Canada and one of the three Maritime provinces. It is the smallest province of Canada in both land area and population, but the most densely populated. Part of the traditional lands of the Mi'kmaq, it became a British colony in the 1700s and was federated into Canada as a province in 1873. Its capital is Charlottetown. According to Statistics Canada, the province of PEI has 158,158 residents. The backbone of the island economy is farming; it produces 25% of Canada's potatoes. Other important industries include the fisheries, tourism, aerospace, bio-science, IT, and renewable energy. The island has several informal names: "Garden of the Gulf", referring to the pastoral scenery and lush agricultural lands throughout the province; and "Birthplace of Confederation" or "Cradle of Confederation", referring to the Charlottetown Conference in 1864, although PEI did not join Confederation until 1873, when it became the seventh Canadian province. Historically, PEI is one of Canada's older settlements and demographically still reflects older immigration to the country, with Scottish, Irish, English and French surnames being dominant. PEI is located about north of Halifax, Nova Scotia, and east of Quebec City and has a land area of . The main island is in size. It is the 104th-largest island in the world and Canada's 23rd-largest island. In 1798, the British named the island colony for Prince Edward, Duke of Kent and Strathearn (1767–1820), the fourth son of King George III and the father of Queen Victoria. Prince Edward has been called "Father of the Canadian Crown". The following island landmarks are also named after the Duke of Kent: In French, the island is today called "Île-du-Prince-Édouard", but its former French name, as part of Acadia, was "Île Saint-Jean" (St. John's Island). The island is known in Scottish Gaelic as "Eilean a' Phrionnsa" (lit. 
"the Island of the Prince", the local form of the longer 'Eilean a' Phrionnsa Iomhair/Eideard') or "Eilean Eòin" for some Gaelic speakers in Nova Scotia though not on PEI (lit. "John's Island" in reference to the island's former name). The island is known in the Mi'kmaq language as "Abegweit" or "Epekwitk" roughly translated as "land cradled in the waves". Prince Edward Island is located in the Gulf of St. Lawrence, west of Cape Breton Island, north of the Nova Scotia peninsula, and east of New Brunswick. Its southern shore bounds the Northumberland Strait. The island has two urban areas. The larger surrounds Charlottetown Harbour, situated centrally on the island's southern shore, and consists of the capital city Charlottetown, and suburban towns Cornwall and Stratford and a developing urban fringe. A much smaller urban area surrounds Summerside Harbour, situated on the southern shore west of Charlottetown Harbour, and consists primarily of the city of Summerside. As with all natural harbours on the island, Charlottetown and Summerside harbours are created by rias. The island's landscape is pastoral. Rolling hills, woods, reddish white sand beaches, ocean coves and the famous red soil have given Prince Edward Island a reputation as a province of outstanding natural beauty. The provincial government has enacted laws to preserve the landscape through regulation, although there is a lack of consistent enforcement, and an absence of province-wide zoning and land-use planning. Under the Planning Act of the province, municipalities have the option to assume responsibility for land-use planning through the development and adoption of official plans and land use bylaws. Thirty-one municipalities have taken responsibility for planning. In areas where municipalities have not assumed responsibility for planning, the Province remains responsible for development control. The island's lush landscape has a strong bearing on its economy and culture. 
The author Lucy Maud Montgomery drew inspiration from the land during the late Victorian Era for the setting of her classic novel "Anne of Green Gables" (1908). Today, many of the same qualities that Montgomery and others found in the island are enjoyed by tourists who visit year-round. They enjoy a variety of leisure activities, including beaches, various golf courses, eco-tourism adventures, touring the countryside, and enjoying cultural events in local communities around the island. The smaller, rural communities as well as the towns and villages throughout the province retain a slower pace. Prince Edward Island has become popular as a tourist destination for relaxation. The economy of most rural communities on the island is based on small-scale agriculture. Industrial farming has increased as businesses buy and consolidate older farm properties. The coastline has a combination of long beaches, dunes, red sandstone cliffs, salt water marshes, and numerous bays and harbours. The beaches, dunes and sandstone cliffs consist of sedimentary rock and other material with a high iron concentration, which oxidises upon exposure to the air. The geological properties of a white silica sand found at Basin Head are unique in the province; the sand grains cause a scrubbing noise as they rub against each other when walked on, and have been called the "singing sands". Large dune fields on the north shore can be found on barrier islands at the entrances to various bays and harbours. The magnificent sand dunes at Greenwich are of particular significance. The shifting, parabolic dune system is home to a variety of birds and rare plants; it is also a site of significant archeological interest. Despite Prince Edward Island's small size and reputation as a largely rural province, it is the most densely populated province in Canada. The climate of the island is considered to be moderate and strongly influenced by the surrounding seas.
As such, it is milder than inland locations owing to the warm waters from the Gulf of St. Lawrence. The climate is characterized by changeable weather throughout the year; it has some of the most variable day-to-day weather in Canada, in which specific weather conditions seldom last for long. During July and August, the average daytime high in PEI is ; however, the temperature can sometimes exceed during these months. In the winter months of January and February, the average daytime high is . The Island receives an average yearly rainfall of and an average yearly snowfall of . Winters are moderately cold and long but are milder than inland locations, with clashes of cold Arctic air and milder Atlantic air causing frequent temperature swings. The climate is considered humid continental rather than oceanic, since the Gulf of St. Lawrence freezes over, eliminating any moderating effect. The mean temperature is in January. During the winter months, the island usually has many storms (which may produce rain as well as snow) and blizzards, since storms originating from the North Atlantic or the Gulf of Mexico frequently pass through at this time. Springtime temperatures typically remain cool until the sea ice has melted, usually in late April or early May. Summers are moderately warm, but rarely uncomfortable, with the daily maximum temperature only occasionally reaching as high as . Autumn is a pleasant season, as the moderating Gulf waters delay the onset of frost, although storm activity increases compared to the summer. There is ample precipitation throughout the year, although it is heaviest in the late autumn, early winter and mid-spring. The following climate chart depicts the average conditions of Charlottetown, as an example of the small province's climate. Between 250 and 300 million years ago, freshwater streams flowing from ancient mountains brought silt, sand and gravel into what is now the Gulf of St. Lawrence.
These sediments accumulated to form a sedimentary basin, and make up the island's bedrock. When the Pleistocene glaciers receded about 15,000 years ago, glacial debris such as till was left behind to cover most of the area that would become the island. This area was connected to the mainland by a strip of land, but when ocean levels rose as the glaciers melted, this land strip was flooded, forming the island. As the land rebounded from the weight of the ice, the island rose, elevating it farther above the surrounding water. Most of the bedrock in Prince Edward Island is composed of red sandstone, part of the Permian-age Pictou Group. Although commercial deposits of minerals have not been found, exploration in the 1940s for natural gas beneath the northeastern end of the province resulted in the discovery of an undisclosed quantity of gas. The Island was reported by government to have only 0.08 tcf of "technically recoverable" natural gas. Twenty exploration wells for hydrocarbon resources have been drilled on Prince Edward Island and offshore. The first reported well was Hillsborough No. 1, drilled in Charlottetown Harbour in 1944 (the world's first offshore well), and the most recent was New Harmony No. 1 in 2007. Since the resurgence of exploration in the mid-1990s, all wells that have shown promising gas deposits have been stimulated through hydraulic fracture or "fracking". All oil and natural gas exploration and exploitation activities on the Island are governed by the "Oil and Natural Gas Act" R.S.P.E.I. 1988, Cap. 0-5 and its associated regulations and orders. The Province of Prince Edward Island is completely dependent on groundwater for its source of drinking water, with approximately 305 high-capacity wells in use as of December 2018. As groundwater flows through an aquifer, it is naturally filtered. The water for the city of Charlottetown is extracted from thirteen wells in three wellfields and distributed to customers.
The water removed is replenished by precipitation. Infrastructure in Charlottetown that was installed in 1888 is still in existence. With the age of the system in the older part of Charlottetown, concern has been raised regarding lead pipes. The Utility has been working with its residents on a lead-replacement program. A plebiscite was held in Charlottetown in 1967 over fluoridation, and residents voted in favour. Under provincial legislation, the Utility is required to report to its residents on an annual basis. It is also required to do regular sampling of the water, and an overview is included in each annual report. The Winter River watershed provides about 92 per cent of the 18-million-litre water supply for the city of Charlottetown, which had supply difficulties in each of 2011, 2012 and 2013, until water meters were installed. The government tabled a discussion paper on the proposed "Water Act" for the province on July 8, 2015. The use of groundwater came under scrutiny as the potato industry, which accounts for $1 billion every year and 50% of farm receipts, has pressed the government to lift a moratorium on high-capacity water wells for irrigation. The release of the discussion paper was to set off a consultation process in the autumn of 2015. Detailed information about the quality of drinking water in PEI communities and watersheds can be found on the provincial government's official website. It provides a summary of the ongoing testing of drinking water done by the Prince Edward Island Analytical Laboratories. Average drinking-water quality results are available, and information on the following parameters is provided: alkalinity; cadmium; calcium; chloride; chromium; iron; magnesium; manganese; nickel; nitrate; pH; phosphorus; potassium; sodium; and sulfate, as well as the presence of pesticides.
Water-testing services are provided for a variety of clients through the PEI Analytical Laboratories, which assesses samples according to the recommendations of the Guidelines for Canadian Drinking Water Quality published by Health Canada. Prince Edward Island used to have native moose, bear, caribou, wolf, and other larger species. Due to hunting and habitat disruption, these species are no longer found on the island. Some species common to P.E.I. are red foxes, coyotes, blue jays, and robins. Skunks and raccoons are common non-native species. Species at risk in P.E.I. include piping plovers, American eels, bobolinks, little brown bats, and beach pinweed. Some species are unique to the province. In 2008, a new ascomycete species, "Jahnula apiospora" (Jahnulales, Dothideomycetes), was collected from submerged wood in a freshwater creek on Prince Edward Island. North Atlantic right whales, one of the rarest whale species, were thought to be only rare visitors to the St. Lawrence region until 1994. Sightings have since increased dramatically: annual concentrations were discovered off Percé in 1995, numbers grew gradually across the region after 1998, and from 2014 notable numbers have been recorded from Cape Breton to Prince Edward Island, with 35 to 40 whales seen in these areas in 2015. Since before the influx of Europeans, the Mi'kmaq First Nations have inhabited Prince Edward Island as part of the region of Mi'kma'ki. They named the Island "Epekwitk", meaning 'cradled on the waves'; Europeans represented the pronunciation as "Abegweit". Another name is "Minegoo". Mi'kmaq legend holds that the island was formed by the Great Spirit placing dark red crescent-shaped clay on the Blue Waters. There are two Mi'kmaq First Nation communities on Epekwitk today. In 1534, Jacques Cartier was the first European to see the island.
In 1604, the Kingdom of France laid claim to the lands of the Maritimes, including Prince Edward Island, establishing the French colony of Acadia. The island was named "Île Saint-Jean" by the French. The Mi'kmaq never recognized the claim but welcomed the French as trading partners and allies. During the 18th century, the French were engaged in a series of conflicts with the Kingdom of Great Britain and its colonies. Several battles between the two belligerents occurred on Prince Edward Island during this period. Following the British capture of Louisbourg during the War of the Austrian Succession, New Englanders launched an attack on Île Saint-Jean (Prince Edward Island), with a British detachment landing at Port-la-Joye. The island's capital had a garrison of 20 French soldiers under the command of Joseph du Pont Duvivier. The troops fled the settlement, and the New Englanders burned the settlement to the ground. Duvivier and the twenty men retreated up the Northeast River (Hillsborough River), pursued by the New Englanders until the French troops were reinforced by the arrival of the Acadian militia and the Mi'kmaq. The French troops and their allies were able to drive the New Englanders to their boats. Nine New Englanders were killed, wounded or made prisoner. The New Englanders took six Acadian hostages, who would be executed if the Acadians or Mi'kmaq rebelled against New England control. The New England troops left for Louisbourg. Duvivier and his 20 troops left for Quebec. After the fall of Louisbourg, the resident French population of Île Royale was deported to France, while the remaining Acadians of Île Saint-Jean lived under the threat of deportation for the remainder of the war. The New Englanders had a force of 200 soldiers stationed at Port-La-Joye, as well as two warships loading supplies for the journey to Louisbourg. To regain Acadia, Ramezay was sent from Quebec to the region to join forces with the Duc d'Anville expedition.
Upon arriving at Chignecto, he sent Boishebert to Île Saint-Jean to ascertain the size of the New England force. After Boishebert returned, Ramezay sent Joseph-Michel Legardeur de Croisille et de Montesson along with over 500 men, 200 of whom were Mi'kmaq, to Port-La-Joye. The battle took place near the York River in July 1746. Montesson and his troops killed forty New Englanders and captured the rest. Montesson was commended for having distinguished himself in his first independent command. Hostilities between the British and French ended with the Treaty of Aix-la-Chapelle in 1748. Roughly one thousand Acadians lived on the island prior to the Acadian Exodus from Nova Scotia. The population grew to nearly 5,000 in the late 1740s and early 1750s, as Acadians from Nova Scotia fled to the island during the Acadian Exodus and the subsequent British-ordered expulsions beginning in 1755. Hostilities between British and French colonial forces resumed in 1754, although formal declarations of war were not issued until 1756. After French forces were defeated at the siege of Louisbourg, the British mounted a military campaign on Île Saint-Jean to secure the island. The campaign was led by Colonel Andrew Rollo under orders from General Jeffery Amherst. The campaign that followed saw the deportation of most Acadians from the island. Many Acadians died in the expulsion en route to France; on December 13, 1758, the transport ship "Duke William" sank and 364 died. A day earlier the "Violet" sank and 280 died; several days later another transport sank with 213 on board. The French formally ceded the island, and most of New France, to the British in the Treaty of Paris of 1763. Initially named St. John's Island by the British, the island was administered as part of the colony of Nova Scotia until it was split into a separate colony in 1769. In the mid-1760s, a survey team divided the Island into 67 lots.
On July 1, 1767, these properties were allocated to supporters of King George III by means of a lottery. Ownership of the land remained in the hands of landlords in England, angering Island settlers who were unable to gain title to land on which they worked and lived. Significant rent charges (to absentee landlords) created further anger. The land had been given to the absentee landlords with a number of conditions attached regarding upkeep and settlement terms; many of these conditions were not satisfied. Islanders spent decades trying to convince the Crown to confiscate the lots; however, the descendants of the original owners were generally well connected to the British government and refused to give up the land. After the island became a separate colony from Nova Scotia, Walter Patterson was appointed its first British governor in 1769. Assuming the office in 1770, he had a controversial career during which land title disputes and factional conflict slowed the initial attempts to populate and develop the island under a feudal system. In an attempt to attract settlers from Ireland, in one of his first acts (1770) Patterson led the island's colonial assembly to rename the island "New Ireland", but the British Government promptly vetoed this as it exceeded the authority vested in the colonial government; only the Privy Council in London could change the name of a colony. During the American Revolutionary War, Charlottetown was raided in 1775 by a pair of American-employed privateers. Two armed schooners, "Franklin" and "Hancock", from Beverly, Massachusetts, took the attorney-general at Charlottetown prisoner, on advice given them by some Pictou residents, after they had taken eight fishing vessels in the Gut of Canso. During and after the American Revolutionary War, from 1776 to 1783, the colony's efforts to attract exiled Loyalist refugees from the rebellious American colonies met with some success.
Walter Patterson's brother, John Patterson, one of the original grantees of land on the island, was a temporarily exiled Loyalist and led efforts to persuade others to come. Governor Patterson's dismissal in 1787 and his recall to London in 1789 dampened his brother's efforts, leading John to focus on his interests in the United States. Edmund Fanning, also a Loyalist exiled by the Revolution, took over as the second governor, serving until 1804. His tenure was more successful than Patterson's. A high influx of Scottish Highlanders in the late 1700s also resulted in St. John's Island having the highest proportion of Scottish immigrants in Canada. This led to a higher proportion of Scottish Gaelic speakers, and a more thriving Gaelic culture, surviving on the island than in Scotland itself, as the settlers could more easily avoid English influence overseas. On 29 November 1798, during Fanning's administration, the British government granted approval to change the colony's name from St. John's Island to Prince Edward Island to distinguish it from areas with similar names in what is now Atlantic Canada, such as the cities of Saint John in New Brunswick and St. John's in Newfoundland. The colony's new name honoured the fourth son of King George III, Prince Edward Augustus, the Duke of Kent (1767–1820), who subsequently led the British military forces on the continent as Commander-in-Chief, North America (1799–1800), with his headquarters in Halifax. In 1853, the Island government passed the Land Purchase Act, which empowered it to purchase lands from those owners who were willing to sell, and then resell the land to settlers for low prices. This scheme collapsed when the Island ran short of money to continue with the purchases. Much of this land was also fertile, one of the key factors in sustaining Prince Edward Island's economy.
In September 1864, Prince Edward Island hosted the Charlottetown Conference, which was the first meeting in the process leading to the Quebec Resolutions and the creation of Canada in 1867. Prince Edward Island did not find the terms of union favourable and balked at joining in 1867, choosing to remain a colony of the United Kingdom. In the late 1860s, the colony examined various options, including the possibility of becoming a discrete dominion unto itself, as well as entertaining delegations from the United States, who were interested in Prince Edward Island joining the United States. In 1871, the colony began construction of a railway and, frustrated by Great Britain's Colonial Office, began negotiations with the United States. In 1873, Canadian Prime Minister Sir John A. Macdonald, anxious to thwart American expansionism and facing the distraction of the Pacific Scandal, negotiated for Prince Edward Island to join Canada. The Dominion Government of Canada assumed the colony's extensive railway debts and agreed to finance a buy-out of the last of the colony's absentee landlords to free the island of leasehold tenure and from any new immigrants entering the island (accomplished through the passage of the "Land Purchase Act, 1875"). Prince Edward Island entered Confederation on July 1, 1873. As a result of having hosted the inaugural meeting of Confederation, the Charlottetown Conference, Prince Edward Island presents itself as the "Birthplace of Confederation" and this is commemorated through several buildings, a ferry vessel, and the Confederation Bridge (constructed 1993 to 1997). The most prominent building in the province honouring this event is the Confederation Centre of the Arts, presented as a gift to Prince Edward Islanders by the 10 provincial governments and the Federal Government upon the centenary of the Charlottetown Conference, where it stands in Charlottetown as a national monument to the "Fathers of Confederation". 
The Centre is one of the 22 National Historic Sites of Canada located in Prince Edward Island. According to the 2011 National Household Survey, the largest ethnic group consists of people of Scottish descent (39.2%), followed by English (31.1%), Irish (30.4%), French (21.1%), German (5.2%), and Dutch (3.1%) descent. Prince Edward Island's population is largely white; there are few visible minorities. Chinese Canadians are the largest visible minority group of Prince Edward Island, comprising 1.3% of the province's population. Almost half of respondents identified their ethnicity as "Canadian". The Canada 2016 Census showed a population of 142,910. Of the 140,020 single responses to the census question concerning mother tongue, the most commonly reported languages were as follows: In addition, there were 460 responses of both English and a "non-official language"; 30 of both French and a "non-official language"; 485 of both English and French; and 20 of English, French, and a "non-official language". (Figures shown are for the number of single-language responses and the percentage of total single-language responses.) Traditionally, the population has been evenly divided between Catholic and Protestant affiliations. The 2001 census indicated the number of adherents of the Roman Catholic Church at 63,240 (47%) and of various Protestant churches at 57,805 (43%). This included the United Church of Canada with 26,570 (20%); the Presbyterian Church with 7,885 (6%) and the Anglican Church of Canada with 6,525 (5%); those with no religion, at 8,705 (6.5%), were among the lowest proportions of the provinces. Considering that the founders of the United Church of Canada in Prince Edward Island were largely Presbyterians, the Island has one of the highest percentages of Presbyterians in the country. The Island also has one of the largest numbers of Free Church of Scotland buildings in Canada.
The provincial economy is dominated by the seasonal industries of agriculture, tourism, and the fishery. The province is limited in terms of heavy industry and manufacturing, though Cavendish Farms runs extensive food manufacturing operations on PEI. Agriculture remains the dominant industry in the provincial economy, as it has since colonial times. In 2015, agriculture and agri-food manufacturing was responsible for 7.6% of the province's GDP. The Island has a total land area of with approximately cleared for agricultural use. In 2016, the Census of Agriculture counted 1,353 farms on the Island, a 9.5% decrease from the previous census (2011). During the 20th century, potatoes replaced mixed farming as the leading cash crop, accounting for one-third of provincial farm income. The number of acres under potato production in 2010 was 88,000, while soy accounted for 55,000. There are approximately 330 potato growers on PEI, with the great majority of these being family farms, often with multiple generations working together. The province currently accounts for a third of Canada's total potato production, producing approximately annually. Comparatively, the state of Idaho produces approximately annually, with a population approximately 9.5 times greater. The province is a major producer of seed potatoes, exporting to more than twenty countries around the world. An estimated total of 70% of the land is cultivated and 25% of all potatoes grown in Canada originate from P.E.I. The processing of frozen fried potatoes, green vegetables, and berries is a leading business activity. As a legacy of the island's colonial history, the provincial government enforces extremely strict rules for non-resident land ownership, especially since the PEI "Lands Protection Act" of 1982. Residents and corporations are limited to maximum holdings of 400 and 1,200 hectares respectively. There are also restrictions on non-resident ownership of shorelines.
Many of the province's coastal communities rely upon shellfish harvesting, particularly lobster fishing as well as oyster fishing and mussel farming. The island's economy has grown significantly over the last decade in key areas of innovation. Aerospace, bioscience, information and communications technology, and renewable energy have been a focus for growth and diversification. Aerospace alone now accounts for over 25% of the province's international exports and is the island's fourth largest industry at $355 million in annual sales. The bioscience industry employs over 1,300 people and generates over $150 million in sales. The sale of carbonated beverages such as beer and soft drinks in non-refillable containers, such as aluminum cans or plastic bottles, was banned in 1976 as an environmental measure in response to public concerns over litter. Beer and soft drink companies opted to use refillable glass bottles for their products, which were redeemable at stores and bottle depots. Though environmental and economic agendas are often at odds, the 'ban the can' legislation, along with being environmentally driven, was also economically motivated, as it protected jobs. Seaman's Beverages, a bottling company and carbonated beverage manufacturer, was established in 1939 and was a major employer in Charlottetown, Prince Edward Island. Making it illegal to retail cans led to a bigger share of the carbonated beverage market for Seaman's. Seaman's Beverages was eventually acquired by Pepsi Bottling Group Inc in 2002, prior to the lifting of the legislation. The introduction of recycling programs for cans and plastic bottles in neighbouring provinces in recent years (also using a redemption system) saw the provincial government introduce legislation to reverse this ban, with the restriction lifted on May 3, 2008. Prior to harmonization in 2013, Prince Edward Island had one of Canada's highest provincial retail sales tax rates, at 10%.
On April 1, 2013, the provincial tax was harmonized with the federal Goods and Services Tax, and became known as the "Harmonized Sales Tax". The 15% tax is applied to almost all goods and services except some clothing, food and home heating fuel. This rate is the same as in the neighbouring Atlantic provinces. The provincial government provides consumer protection in the form of regulation for certain items, ranging from apartment rent increases to petroleum products including gas, diesel, propane and heating oil. These are regulated through the Prince Edward Island Regulatory and Appeals Commission (IRAC). IRAC is authorized to limit the number of companies who are permitted to sell petroleum products. The median family income on Prince Edward Island is $76,607/year, and the minimum wage is $12.25/hour. At present, approximately twenty-five per cent of electricity consumed on the island is generated from renewable energy (largely wind turbines); the provincial government had set renewable energy targets as high as 30–50% of electricity consumed by 2015, a goal that has not been met. The total capacity of wind power on the island is 204 MW. There are eight wind farms on the island, the largest being West Cape Wind Park with a capacity of 99 MW from 55 turbines. There are a total of 89 turbines in the province. All of the turbines were manufactured by Vestas (models V80, V90, and V47). A thermal oil-fired generating station, the Charlottetown Thermal Generating Station, is sometimes used in emergencies; it is being decommissioned. A second thermal generating station exists in Borden, aptly named the Borden Generating Station. The majority of electricity consumed on Prince Edward Island comes from New Brunswick through undersea cables. A recent $140M upgrade brought the capacity of the cable system from 200 MW to 560 MW.
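The capacity figures quoted above lend themselves to a quick cross-check. The sketch below is illustrative only: it uses just the aggregate numbers given in this article (no per-farm breakdown beyond West Cape is stated), and simply derives shares and averages from them.

```python
# Wind-power and cable figures for Prince Edward Island, as quoted in the text.
total_wind_mw = 204        # total installed wind capacity
west_cape_mw = 99          # West Cape Wind Park, the largest farm
west_cape_turbines = 55
total_turbines = 89
cable_old_mw, cable_new_mw = 200, 560  # undersea cable capacity before/after the upgrade

# West Cape alone supplies just under half of the island's wind capacity.
west_cape_share = west_cape_mw / total_wind_mw          # ~0.485

# Average nameplate rating per turbine, island-wide and at West Cape.
avg_turbine_mw = total_wind_mw / total_turbines         # ~2.29 MW
west_cape_avg_mw = west_cape_mw / west_cape_turbines    # 1.8 MW

# The $140M upgrade nearly tripled import capacity (a 180% increase).
cable_increase = (cable_new_mw - cable_old_mw) / cable_old_mw  # 1.8

print(f"West Cape share of wind capacity: {west_cape_share:.1%}")
print(f"Average turbine rating: {avg_turbine_mw:.2f} MW")
print(f"Cable capacity increase: {cable_increase:.0%}")
```

The per-turbine averages are consistent with the Vestas models named above, whose nameplate ratings fall in the same 0.6–3 MW class.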
The Point Lepreau nuclear plant in New Brunswick was closed for refurbishments from 2008 to 2012, resulting in a steep price hike of about 25 per cent, but the province later subsidized rates. Residents were to pay 11.2 per cent more for electricity when the harmonized sales tax was adopted in April 2013, according to the P.E.I. Energy Accord that was tabled in the legislature on December 7, 2012, and passed as the "Electric Power (Energy Accord Continuation) Amendment Act", which establishes electric pricing from April 1, 2013, to March 1, 2016. IRAC derives its regulatory powers from the "Electric Power Act". Since 1918, Maritime Electric has delivered electricity to customers on the Island. The utility is currently owned and operated by Fortis Inc. Prince Edward Island's public school system has an English school district named the Public Schools Branch (previously the English Language School Board), as well as a Francophone district, the Commission scolaire de langue française. The English-language district has a total of 10 secondary schools and 54 intermediate and elementary schools, while the Francophone district has 6 schools covering all grades. 22 per cent of the student population is enrolled in French immersion, one of the highest levels in the country. Three public post-secondary institutions operate in the province: one university and two colleges. The University of Prince Edward Island is the province's only public university, and is located in the city of Charlottetown. The university was created by the Island legislature to replace Prince of Wales College and St. Dunstan's University. UPEI is also home to the Atlantic Veterinary College, which offers the region's only veterinary medicine program. Collège de l'Île and Holland College are two public colleges that operate in the province; the former is a French first-language institution, while the latter is an English first-language institution.
Holland College includes specialised facilities such as the Atlantic Police Academy, Marine Training Centre, and the Culinary Institute of Canada. In addition to public post-secondary institutions, Prince Edward Island is also home to a private post-secondary institution, Maritime Christian College. Today, 23.5 per cent of residents aged 15 to 19 have bilingual skills, an increase of 100 per cent in a decade. Prince Edward Island, along with most rural regions in North America, is experiencing an accelerated rate of youth emigration. The provincial government has projected that public school enrollment will decline by 40% during the 2010s. The provincial government is responsible for such areas as health and social services, education, economic development, labour legislation and civil law. These matters of government are overseen in the provincial capital, Charlottetown. Prince Edward Island is governed by a parliamentary government within the construct of constitutional monarchy; the monarchy in Prince Edward Island is the foundation of the executive, legislative, and judicial branches. The sovereign is Queen Elizabeth II, who also serves as head of state of 15 other Commonwealth countries, each of Canada's nine other provinces, and the Canadian federal realm, and resides predominantly in the United Kingdom. As such, the Queen's representative, the Lieutenant Governor of Prince Edward Island (presently Antoinette Perry), carries out most of the royal duties in Prince Edward Island. The direct participation of the royal and viceroyal figures in any of these areas of governance is limited; in practice, their use of the executive powers is directed by the Executive Council, a committee of ministers of the Crown responsible to the unicameral, elected Legislative Assembly and chosen and headed by the Premier of Prince Edward Island (presently Dennis King), the head of government.
To ensure the stability of government, the lieutenant governor will usually appoint as premier the person who is the current leader of the political party that can obtain the confidence of a plurality in the Legislative Assembly. The leader of the party with the second-most seats usually becomes the Leader of Her Majesty's Loyal Opposition (presently Peter Bevan-Baker) and is part of an adversarial parliamentary system intended to keep the government in check. Each of the 27 Members of the Legislative Assembly (MLA) is elected by simple plurality in an electoral district. General elections are called by the lieutenant governor for the first Monday in October four years after the previous election, or may be called earlier on the advice of the premier. Politics in the province have been dominated by the Liberal and Progressive Conservative parties since the province joined Confederation. In the 2015 election, the Green Party of Prince Edward Island gained a small representation in the Legislative Assembly, and in the 2019 election it gained an additional six seats to form the Official Opposition. The Mi'kmaq Confederacy of PEI is the tribal council and provincial-territorial organization in the province that represents both the Lennox Island and Abegweit First Nations. The province has a single health administrative region (or district health authority) called Health PEI. Health PEI receives funding for its operations from, and is regulated by, the Department of Health and Wellness. Many PEI homes and businesses are served by central sewage collection and/or treatment systems. These are operated either by a municipality or a private utility. Many industrial operations have their own wastewater treatment facilities. Staff members with the Department of Environment, Water and Climate Change provide advice to operators, as needed, on proper system maintenance.
The IRAC regulates municipal water and sewer in the province, now under the "Environmental Protection Act". Since around 1900, the residents of the City of Charlottetown have benefited from a central sanitary sewer service. Early disposal practices, while advanced for their time, eventually were found to compromise the ecological integrity of the nearby Hillsborough River and the Charlottetown Harbour. By 1974, the Commission had spearheaded the development of a primary wastewater treatment plant, known as the Charlottetown Pollution Control Plant, together with the construction of several pumping stations along the City's waterfront, and outfall piping deep into the Hillsborough River. There are eight hospitals in the province. Prince Edward Island offers programs and services in areas such as acute care, primary care, home care, palliative care, public health, chronic disease prevention, and mental health and addictions, to name a few. The provincial government has opened several family health centres in recent years in various rural and urban communities. A provincial cancer treatment centre at the Queen Elizabeth Hospital provides support to those dealing with various types of cancer-related illnesses. A family medicine residency program was established in 2009 with the Dalhousie University Faculty of Medicine as a means to encourage new physicians to work in Prince Edward Island. Long-term-care services are also available, with several programs in place to support seniors wishing to remain independent in their communities. Many medications for seniors are subsidized through a provincial pharmaceutical plan; however, Prince Edward Island remains one of the only provinces lacking a catastrophic drug coverage program for its residents. The provincial government has several programs for early illness detection, including mammography and pap screening clinics. 
There are also asthma education and diabetes education programs, as well as prenatal programs, immunization programs and dental health risk prevention programs for children. The government is also attempting to implement a comprehensive integrated Electronic Health Record system. The provincial government has recently committed to enhancing primary care and home care services and has invested in health care facilities in recent capital budgets, mostly replacements and upgrades to provincial government operated nursing homes and hospitals. Some specialist services require patients to be referred to clinics and specialists in neighbouring provinces. Specialist operations and treatments are also provided at larger tertiary referral hospitals in neighbouring provinces, such as the IWK Health Centre and Queen Elizabeth II Health Sciences Centre in Nova Scotia or the Saint John Regional Hospital, Moncton Hospital, and Dr. Georges-L.-Dumont University Hospital Centre in New Brunswick. Ground ambulance service in Prince Edward Island is provided under contract by Island EMS. Air ambulance service is provided under contract by LifeFlight. In recent decades, Prince Edward Island's population has shown statistically significant and abnormally high rates of diagnosed rare cancers, particularly in rural areas. Health officials, ecologists and environmental activists point to the use of pesticides in industrial potato farming as a primary source of contamination. Prince Edward Island is the only province in Canada that does not provide abortion services through its hospitals. The last abortion was performed in the province in 1982, prior to the opening of the Queen Elizabeth Hospital, which saw the closure of the Roman Catholic-affiliated Charlottetown Hospital and the non-denominational Prince Edward Island Hospital; a condition of the "merger" was that abortions not be performed in the province. In 1988, following the court decision "R. v. 
Morgentaler", the then-opposition Progressive Conservative Party of Prince Edward Island tabled a motion demanding that the ban on abortions be upheld at the province's hospitals; the then-governing Prince Edward Island Liberal Party under Premier Joe Ghiz acquiesced and the ban was upheld. The Government of Prince Edward Island will fund abortions for women who travel to another province. Women from Prince Edward Island may also travel to the nearest private user-pay clinic, where they must pay for the procedure using their own funds. Formerly this was the Morgentaler Clinic in Fredericton, New Brunswick, until this clinic closed due to lack of funds in July 2014. The clinic was reopened under new ownership in 2016 as Clinic 554 with expanded services. During that gap, women had to travel to Halifax or further. In 2016, the Liberal government led by Premier Wade MacLauchlan announced it would open a women's reproductive health clinic to provide abortions within the province. Prince Edward Island's transportation network has traditionally revolved around its seaports of Charlottetown, Summerside, Borden, Georgetown, and Souris, linked to its railway system and the two main airports in Charlottetown and Summerside, for communication with mainland North America. The railway system was abandoned by CN in 1989 in favour of an agreement with the federal government to improve major highways. Until May 1997, the province was linked by two passenger-vehicle ferry services to the mainland: one, provided by Marine Atlantic, operated year-round between Borden and Cape Tormentine, New Brunswick; the other, provided by Northumberland Ferries Limited, still operates seasonally between Wood Islands and Caribou, Nova Scotia. A third ferry service, provided by CTMA, operates year-round with seasonal schedules between Souris and Cap-aux-Meules, Quebec, in the Magdalen Islands. In May 1997, the Confederation Bridge opened, connecting Borden-Carleton to Cape Jourimain, New Brunswick. 
The world's longest bridge over ice-covered waters, it replaced the Marine Atlantic ferry service. Since then, the Confederation Bridge's assured transportation link to the mainland has altered the province's tourism and agricultural and fisheries export economies. The Island has the highest concentration of roadways in Canada. The provincially managed portion of the network consists of paved roadways along with non-paved or clay roads. The province has very strict laws regarding the use of roadside signs. Billboards and the use of portable signs are banned. There are standard direction information signs on roads in the province for various businesses and attractions in the immediate area. The by-laws of some municipalities also restrict the types of permanent signs that may be installed on private property. Several airlines service the Charlottetown Airport (CYYG); the Summerside Airport (CYSU) is an additional option for general aviation. There is an extensive bicycling and hiking trail that spans the island: the Confederation Trail, a recreational trail system built on land once owned and used by Canadian National Railway (CN) as a rail line on the island. The island's cultural traditions of art, music and creative writing are supported through the public education system. There is an annual arts festival, the Charlottetown Festival, hosted at the Confederation Centre of the Arts. Lucy Maud Montgomery, who was born in Clifton (now New London) in 1874, wrote some 20 novels and numerous short stories that have been collected into anthologies. Her first "Anne" book, "Anne of Green Gables", was published in 1908. The musical play "Anne of Green Gables" has run every year at the Charlottetown Festival for more than four decades. The sequel, "Anne & Gilbert", premiered in the Playhouse in Victoria in 2005. The actual location of Green Gables, the house featured in Montgomery's "Anne" books, is in Cavendish, on the north shore of PEI. 
Elmer Blaney Harris founded an artists colony at Fortune Bridge and set his famous play "Johnny Belinda" on the island. Robert Harris was a well-known artist. Prince Edward Island's documented music history begins in the 19th century with religious music, some written by the local pump and block maker and organ-importer, Watson Duchemin. Several big bands, including the Sons of Temperance Band and the Charlottetown Brass Band, were active. Today, Acadian, Celtic, folk, and rock music prevail, with exponents including Gene MacLellan, his daughter Catherine MacLellan, Al Tuck, Lennie Gallant, Two Hours Traffic and Paper Lions. The celebrated singer-songwriter Stompin' Tom Connors spent his formative years in Skinners Pond. Celtic music is certainly the most common traditional music on the island, with fiddling and step dancing being very common. This tradition, largely Scottish, Irish and Acadian in origin, is very similar to the music of Cape Breton and, to a lesser extent, Newfoundland, and is unique to the region. Reflecting the Island's heritage as a colony settled largely by Scottish Highland clans, a 4/4 march for bagpipes was composed in honour of Prince Edward Island. In addition to the Charlottetown Festival at the Confederation Centre of the Arts, the Island Fringe Festival takes place around Charlottetown. An annual jazz festival, the P.E.I. Jazz and Blues Festival, is a week-long series of concerts taking place at several venues in Charlottetown, including Murphy's Community Centre, outdoor stages, and churches. The festival's move to mid-August cost it a serious loss of funding in 2011 from Ottawa's regional development agency, ACOA. The 2011 line-up included Oliver Jones, Sophie Milman, Matt Dusk, Jack de Keyzer, Jack Semple, Meaghan Smith and Jimmy Bowskill. Other music events include Canada Rocks and the Cavendish Beach Music Festival. With agriculture and fishery playing a large role in the economy, P.E.I. 
has been marketed as a food tourism destination. Several food festivals have become popular, such as the Fall Flavours festival and the Shellfish Festival. The most common sports played on the Island are hockey, curling, golf, horse racing, baseball, soccer, rugby, football and basketball. Water sports are also popular on Prince Edward Island during the summer. The province is home to two professional sports teams: a major junior ice hockey team, the Charlottetown Islanders of the Quebec Major Junior Hockey League, and the Island Storm, a basketball team of the National Basketball League of Canada. Prince Edward Island is also home to the Summerside Western Capitals, a junior ice hockey team of the Maritime Junior A Hockey League. In 2008 and 2009, Prince Edward Island hosted the Tour de PEI, a province-wide cycling race contested by women cyclists from around the world. Prince Edward Island has also hosted two Canada Games: the 1991 Canada Winter Games and the 2009 Canada Summer Games. Hainan Province, China, has been the sister province of Prince Edward Island since 2001. This came about after Vice-Governor Lin Fanglue stayed for two days to hold discussions about partnership opportunities and trade.
Pretty Good Privacy Pretty Good Privacy (PGP) is an encryption program that provides cryptographic privacy and authentication for data communication. PGP is used for signing, encrypting, and decrypting texts, e-mails, files, directories, and whole disk partitions and to increase the security of e-mail communications. Phil Zimmermann developed PGP in 1991. PGP and similar software follow the OpenPGP standard (RFC 4880), an open standard for encrypting and decrypting data. PGP encryption uses a serial combination of hashing, data compression, symmetric-key cryptography, and finally public-key cryptography; each step uses one of several supported algorithms. Each public key is bound to a username or an e-mail address. The first version of this system was generally known as a web of trust, in contrast to the X.509 system, which uses a hierarchical approach based on certificate authorities and which was added to PGP implementations later. Current versions of PGP encryption include both options through an automated key management server. A public key fingerprint is a shorter version of a public key. From a fingerprint, someone can validate the correct corresponding public key. A fingerprint like C3A6 5E46 7B54 77DF 3C4C 9790 4D22 B3CA 5B32 FF66 can be printed on a business card. As PGP evolves, versions that support newer features and algorithms are able to create encrypted messages that older PGP systems cannot decrypt, even with a valid private key. Therefore, it is essential that partners in PGP communication understand each other's capabilities or at least agree on PGP settings. PGP can be used to send messages confidentially. For this, PGP uses a hybrid cryptosystem, combining symmetric-key encryption and public-key encryption. The message is encrypted using a symmetric encryption algorithm, which requires a symmetric key generated by the sender. The symmetric key is used only once and is also called a session key. 
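The session-key approach just described can be sketched in Python. The sketch below is illustrative only: the keystream "cipher" is a SHA-256-based stand-in (not a real symmetric algorithm such as CAST5 or AES), and the key wrapping uses textbook RSA with toy primes and no padding, which a real implementation would never do.

```python
import hashlib
import secrets

# Stand-in symmetric cipher: XOR against a SHA-256-derived keystream.
# NOT real crypto -- it only makes the hybrid flow runnable with the stdlib.
def keystream_xor(key: bytes, data: bytes) -> bytes:
    stream = bytearray()
    counter = 0
    while len(stream) < len(data):
        stream.extend(hashlib.sha256(key + counter.to_bytes(8, "big")).digest())
        counter += 1
    return bytes(a ^ b for a, b in zip(data, stream))

# Textbook RSA with tiny primes to "wrap" the session key (toy values;
# real keys use 2048-bit or larger moduli and proper padding).
p, q = 61, 53
n, e = p * q, 17                     # recipient's public key (n, e)
d = pow(e, -1, (p - 1) * (q - 1))    # recipient's private exponent

# Sender: generate a one-time session key, encrypt the message with it,
# then encrypt the session key under the recipient's public key.
session_key = secrets.token_bytes(16)
ciphertext = keystream_xor(session_key, b"Attack at dawn")
wrapped_key = [pow(b, e, n) for b in session_key]

# Receiver: recover the session key with the private key, then decrypt.
recovered_key = bytes(pow(c, d, n) for c in wrapped_key)
plaintext = keystream_xor(recovered_key, ciphertext)
print(plaintext)   # b'Attack at dawn'
```

Note that only the short session key is public-key encrypted; the bulk of the message uses the much faster symmetric cipher, which is the point of the hybrid design.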
The message and its session key are sent to the receiver. The session key must be sent to the receiver so they know how to decrypt the message, but to protect it during transmission it is encrypted with the receiver's public key. Only the private key belonging to the receiver can decrypt the session key, and use it to symmetrically decrypt the message. PGP supports message authentication and integrity checking. The latter is used to detect whether a message has been altered since it was completed (the "message integrity" property) and the former, to determine whether it was actually sent by the person or entity claimed to be the sender (a "digital signature"). Because the content is encrypted, any changes in the message will result in failure of the decryption with the appropriate key. The sender uses PGP to create a digital signature for the message with either the RSA or DSA algorithms. To do so, PGP computes a hash (also called a message digest) from the plaintext and then creates the digital signature from that hash using the sender's private key. Both when encrypting messages and when verifying signatures, it is critical that the public key used to send messages to someone or some entity actually does 'belong' to the intended recipient. Simply downloading a public key from somewhere is not a reliable assurance of that association; deliberate (or accidental) impersonation is possible. From its first version, PGP has always included provisions for distributing users' public keys in an 'identity certification', which is also constructed cryptographically so that any tampering (or accidental garble) is readily detectable. However, merely making a certificate which is impossible to modify without being detected is insufficient; this can prevent corruption only after the certificate has been created, not before. Users must also ensure by some means that the public key in a certificate actually does belong to the person or entity claiming it. 
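The hash-then-sign flow described above can be sketched with textbook RSA. This is a toy illustration, not PGP's actual format: real PGP signatures use padded RSA or DSA over a full digest, whereas here the digest is reduced modulo a small toy modulus.

```python
import hashlib

# Textbook RSA keypair with modest primes (no PKCS#1 padding) -- toy only.
p, q = 104729, 1299709
n, e = p * q, 65537                  # sender's public key
d = pow(e, -1, (p - 1) * (q - 1))    # sender's private key

def digest_of(message: bytes) -> int:
    # Hash the plaintext (the "message digest"), reduced into the toy modulus.
    return int.from_bytes(hashlib.sha256(message).digest(), "big") % n

def sign(message: bytes) -> int:
    # Sign the digest with the sender's private key.
    return pow(digest_of(message), d, n)

def verify(message: bytes, signature: int) -> bool:
    # Anyone holding the public key can recompute the digest and compare.
    return pow(signature, e, n) == digest_of(message)

msg = b"meet at noon"
sig = sign(msg)
print(verify(msg, sig))             # True
print(verify(b"meet at one", sig))  # False: any change breaks the signature
```

The second check shows why a signature also provides integrity: altering even one character of the message changes the digest, so verification against the original signature fails.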
A given public key (or more specifically, information binding a user name to a key) may be digitally signed by a third party user to attest to the association between someone (actually a user name) and the key. There are several levels of confidence which can be included in such signatures. Although many programs read and write this information, few (if any) include this level of certification when calculating whether to trust a key. The web of trust protocol was first described by Phil Zimmermann in 1992, in the manual for PGP version 2.0: The web of trust mechanism has advantages over a centrally managed public key infrastructure scheme such as that used by S/MIME but has not been universally used. Users have to be willing to accept certificates and check their validity manually or have to simply accept them. No satisfactory solution has been found for the underlying problem. In the (more recent) OpenPGP specification, "trust signatures" can be used to support creation of certificate authorities. A trust signature indicates both that the key belongs to its claimed owner and that the owner of the key is trustworthy to sign other keys at one level below their own. A level 0 signature is comparable to a web of trust signature since only the validity of the key is certified. A level 1 signature is similar to the trust one has in a certificate authority because a key signed to level 1 is able to issue an unlimited number of level 0 signatures. A level 2 signature is highly analogous to the trust assumption users must rely on whenever they use the default certificate authority list (like those included in web browsers); it allows the owner of the key to make other keys certificate authorities. PGP versions have always included a way to cancel ('revoke') identity certificates. A lost or compromised private key will require this if communication security is to be retained by that user. 
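The trust-signature levels described above can be modelled with a small validity check. The sketch below is a hypothetical, simplified model (real OpenPGP trust signatures also carry a trust amount and an optional domain scope): a key certified at level n may itself issue signatures only at levels strictly below n, and a level 0 signature certifies key validity alone.

```python
def chain_is_valid(levels: list[int]) -> bool:
    """levels[i] is the trust level of the signature issued by link i.

    The root may start at any level; every subsequent signature must be
    issued at a strictly lower level than the one that certified its
    issuer, and levels cannot be negative.
    """
    if any(lvl < 0 for lvl in levels):
        return False
    for certifier, issued in zip(levels, levels[1:]):
        if issued >= certifier:
            return False   # a key cannot issue at (or above) its own level
    return True

# A level-2 "meta-CA" signs a level-1 CA, which issues level-0
# end-entity certifications -- analogous to a browser CA hierarchy.
print(chain_is_valid([2, 1, 0]))   # True
print(chain_is_valid([1, 1, 0]))   # False: a level-1 key cannot issue level 1
```

This mirrors the analogy in the text: a level 2 signature lets the key owner create further certificate authorities, a level 1 key can vouch for end-entity keys, and level 0 certifies validity only.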
This is, more or less, equivalent to the certificate revocation lists of centralised PKI schemes. Recent PGP versions have also supported certificate expiration dates. The problem of correctly identifying a public key as belonging to a particular user is not unique to PGP. All public key/private key cryptosystems have the same problem, even if in slightly different guises, and no fully satisfactory solution is known. PGP's original scheme at least leaves the decision as to whether or not to use its endorsement/vetting system to the user, while most other PKI schemes do not, requiring instead that every certificate attested to by a central certificate authority be accepted as correct. To the best of publicly available information, there is no known method which will allow a person or group to break PGP encryption by cryptographic, or computational means. Indeed, in 1995, cryptographer Bruce Schneier characterized an early version as being "the closest you're likely to get to military-grade encryption." Early versions of PGP have been found to have theoretical vulnerabilities and so current versions are recommended. In addition to protecting data in transit over a network, PGP encryption can also be used to protect data in long-term data storage such as disk files. These long-term storage options are also known as data at rest, i.e. data stored, not in transit. The cryptographic security of PGP encryption depends on the assumption that the algorithms used are unbreakable by direct cryptanalysis with current equipment and techniques. In the original version, the RSA algorithm was used to encrypt session keys. RSA's security depends upon the one-way function nature of mathematical integer factoring. Similarly, the symmetric key algorithm used in PGP version 2 was IDEA, which might at some point in the future be found to have previously undetected cryptanalytic flaws. Specific instances of current PGP or IDEA insecurities (if they exist) are not publicly known. 
As current versions of PGP have added additional encryption algorithms, their cryptographic vulnerability varies with the algorithm used. However, none of the algorithms in current use are publicly known to have cryptanalytic weaknesses. New versions of PGP are released periodically, and vulnerabilities are fixed by developers as they come to light. Any agency wanting to read PGP messages would probably use easier means than standard cryptanalysis, e.g. rubber-hose cryptanalysis or black-bag cryptanalysis (e.g. installing some form of trojan horse or keystroke logging software/hardware on the target computer to capture encrypted keyrings and their passwords). The FBI has already used this attack against PGP in its investigations. However, any such vulnerabilities apply not just to PGP but to any conventional encryption software. In 2003, an incident involving seized Psion PDAs belonging to members of the Red Brigades indicated that neither the Italian police nor the FBI were able to decrypt PGP-encrypted files stored on them. A second incident in December 2006 (see "In re Boucher"), involving US customs agents who seized a laptop PC that allegedly contained child pornography, indicates that US government agencies find it "nearly impossible" to access PGP-encrypted files. Additionally, a magistrate judge ruling on the case in November 2007 stated that forcing the suspect to reveal his PGP passphrase would violate his Fifth Amendment rights, i.e. a suspect's constitutional right not to incriminate himself. The Fifth Amendment issue was opened again as the government appealed the case, after which a federal district judge ordered the defendant to provide the key. Evidence suggests that British police investigators are unable to break PGP, so instead they have resorted to using RIPA legislation to demand the passwords/keys. 
In November 2009 a British citizen was convicted under RIPA legislation and jailed for nine months for refusing to provide police investigators with encryption keys to PGP-encrypted files. PGP as a cryptosystem has been criticized for the complexity of the standard and its implementations, and for the very low usability of its user interfaces, including by recognized figures in cryptography research. It uses an ineffective serialization format for storage of both keys and encrypted data, which resulted in signature-spamming attacks on public keys of prominent developers of GNU Privacy Guard. Backwards compatibility of the OpenPGP standard results in usage of relatively weak default choices of cryptographic primitives (CAST5 cipher, CFB mode, S2K password hashing). The standard has also been criticized for leaking metadata, usage of long-term keys and lack of forward secrecy. Popular end-user implementations have suffered from various signature-stripping, cipher downgrade and metadata leakage vulnerabilities, which have been attributed to the complexity of the standard. Phil Zimmermann created the first version of PGP encryption in 1991. The name, "Pretty Good Privacy", was inspired by the name of a grocery store, "Ralph's Pretty Good Grocery", featured in radio host Garrison Keillor's fictional town, Lake Wobegon. This first version included a symmetric-key algorithm that Zimmermann had designed himself, named BassOmatic after a "Saturday Night Live" sketch. Zimmermann had been a long-time anti-nuclear activist, and created PGP encryption so that similarly inclined people might securely use BBSs and securely store messages and files. No license fee was required for its non-commercial use, and the complete source code was included with all copies. In a posting of June 5, 2001, entitled "PGP Marks 10th Anniversary", Zimmermann describes the circumstances surrounding his release of PGP: PGP found its way onto the Internet and rapidly acquired a considerable following around the world. 
Users and supporters included dissidents in totalitarian countries (some affecting letters to Zimmermann have been published, some of which have been included in testimony before the US Congress), civil libertarians in other parts of the world (see Zimmermann's published testimony in various hearings), and the 'free communications' activists who called themselves cypherpunks (who provided both publicity and distribution); decades later, CryptoParty activists did much the same via Twitter. Shortly after its release, PGP encryption found its way outside the United States, and in February 1993 Zimmermann became the formal target of a criminal investigation by the US Government for "munitions export without a license". At the time, cryptosystems using keys larger than 40 bits were considered munitions within the definition of the US export regulations; PGP has never used keys smaller than 128 bits, so it qualified at that time. Penalties for violation, if found guilty, were substantial. After several years, the investigation of Zimmermann was closed without filing criminal charges against him or anyone else. Zimmermann challenged these regulations in an imaginative way. He published the entire source code of PGP in a hardback book, via MIT Press, which was distributed and sold widely. Anybody wishing to build their own copy of PGP could cut off the covers, separate the pages, and scan them using an OCR program (or conceivably enter it as a type-in program if OCR software was not available), creating a set of source code text files. One could then build the application using the freely available GNU Compiler Collection. PGP would thus be available anywhere in the world. The claimed principle was simple: export of "munitions"—guns, bombs, planes, and software—was (and remains) restricted; but the export of "books" is protected by the First Amendment. The question was never tested in court with respect to PGP. 
In cases addressing other encryption software, however, two federal appeals courts have established the rule that cryptographic software source code is speech protected by the First Amendment (the Ninth Circuit Court of Appeals in the Bernstein case and the Sixth Circuit Court of Appeals in the Junger case). US export regulations regarding cryptography remain in force, but were liberalized substantially throughout the late 1990s. Since 2000, compliance with the regulations is also much easier. PGP encryption no longer meets the definition of a non-exportable weapon, and can be exported internationally except to seven specific countries and a list of named groups and individuals (with whom substantially all US trade is prohibited under various US export controls). During this turmoil, Zimmermann's team worked on a new version of PGP encryption called PGP 3. This new version was to have considerable security improvements, including a new certificate structure which fixed small security flaws in the PGP 2.x certificates as well as permitting a certificate to include separate keys for signing and encryption. Furthermore, the experience with patent and export problems led them to eschew patents entirely. PGP 3 introduced use of the CAST-128 (a.k.a. CAST5) symmetric key algorithm, and the DSA and ElGamal asymmetric key algorithms, all of which were unencumbered by patents. After the Federal criminal investigation ended in 1996, Zimmermann and his team started a company to produce new versions of PGP encryption. They merged with Viacrypt (to whom Zimmermann had sold commercial rights and who had licensed RSA directly from RSADSI), which then changed its name to PGP Incorporated. The newly combined Viacrypt/PGP team started work on new versions of PGP encryption based on the PGP 3 system. 
Unlike PGP 2, which was an exclusively command line program, PGP 3 was designed from the start as a software library allowing users to work from a command line or inside a GUI environment. The original agreement between Viacrypt and the Zimmermann team had been that Viacrypt would have even-numbered versions and Zimmermann odd-numbered versions. Viacrypt, thus, created a new version (based on PGP 2) that they called PGP 4. To remove confusion about how it could be that PGP 3 was the successor to PGP 4, PGP 3 was renamed and released as PGP 5 in May 1997. In December 1997, PGP Inc. was acquired by Network Associates, Inc. ("NAI"). Zimmermann and the PGP team became NAI employees. NAI was the first company to have a legal export strategy by publishing source code. Under NAI, the PGP team added disk encryption, desktop firewalls, intrusion detection, and IPsec VPNs to the PGP family. After the export regulation liberalizations of 2000 which no longer required publishing of source, NAI stopped releasing source code. In early 2001, Zimmermann left NAI. He served as Chief Cryptographer for Hush Communications, who provide an OpenPGP-based e-mail service, Hushmail. He has also worked with Veridis and other companies. In October 2001, NAI announced that its PGP assets were for sale and that it was suspending further development of PGP encryption. The only remaining asset kept was the PGP E-Business Server (the original PGP Commandline version). In February 2002, NAI canceled all support for PGP products, with the exception of the renamed commandline product. NAI (formerly McAfee, then Intel Security, and now McAfee again) continued to sell and support the product under the name McAfee E-Business Server until 2013. In August 2002, several ex-PGP team members formed a new company, PGP Corporation, and bought the PGP assets (except for the command line version) from NAI. 
The new company was funded by Rob Theis of Doll Capital Management (DCM) and Terry Garnett of Venrock Associates. PGP Corporation supported existing PGP users and honored NAI's support contracts. Zimmermann served as a special advisor and consultant to PGP Corporation while continuing to run his own consulting company. In 2003, PGP Corporation created a new server-based product called PGP Universal. In mid-2004, PGP Corporation shipped its own command line version called PGP Command Line, which integrated with the other PGP Encryption Platform applications. In 2005, PGP Corporation made its first acquisition: the German software company Glück & Kanja Technology AG, which became PGP Deutschland AG. In 2010, PGP Corporation acquired Hamburg-based certificate authority TC TrustCenter and its parent company, ChosenSecurity, to form its PGP TrustCenter division. After the 2002 purchase of NAI's PGP assets, PGP Corporation offered worldwide PGP technical support from its offices in Draper, Utah; Offenbach, Germany; and Tokyo, Japan. On April 29, 2010, Symantec Corp. announced that it would acquire PGP for $300 million with the intent of integrating it into its Enterprise Security Group. This acquisition was finalized and announced to the public on June 7, 2010. The source code of PGP Desktop 10 is available for peer review. Also in 2010, Intel Corporation acquired McAfee. In 2013, the McAfee E-Business Server was transferred to Software Diversified Services, which now sells, supports, and develops it under the name SDS E-Business Server. For the enterprise, Townsend Security currently offers a commercial version of PGP for the IBM i and IBM z mainframe platforms. Townsend Security partnered with Network Associates in 2000 to create a compatible version of PGP for the IBM i platform. Townsend Security again ported PGP in 2008, this time to the IBM z mainframe. This version of PGP relies on free z/OS encryption facility, which utilizes hardware acceleration. 
Software Diversified Services also offers a commercial version of PGP (SDS E-Business Server) for the IBM z mainframe. In May 2018, a bug named EFAIL was discovered in certain implementations of PGP which from 2003 could reveal the plaintext contents of emails encrypted with it. While originally used primarily for encrypting the contents of e-mail messages and attachments from a desktop client, PGP products have been diversified since 2002 into a set of encryption applications which can be managed by an optional central policy server. PGP encryption applications include e-mails and attachments, digital signatures, laptop full disk encryption, file and folder security, protection for IM sessions, batch file transfer encryption, and protection for files and folders stored on network servers and, more recently, encrypted or signed HTTP request/responses by means of a client-side (Enigform) and a server-side (mod_openpgp) module. There is also a WordPress plugin available, called wp-enigform-authentication, that takes advantage of the session management features of Enigform with mod_openpgp. The PGP Desktop 9.x family includes PGP Desktop Email, PGP Whole Disk Encryption, and PGP NetShare. Additionally, a number of Desktop bundles are also available. Depending on application, the products feature desktop e-mail, digital signatures, IM security, whole disk encryption, file and folder security, encrypted self-extracting archives, and secure shredding of deleted files. Capabilities are licensed in different ways depending on features required. The PGP Universal Server 2.x management console handles centralized deployment, security policy, policy enforcement, key management, and reporting. It is used for automated e-mail encryption in the gateway and manages PGP Desktop 9.x clients. In addition to its local keyserver, PGP Universal Server works with the PGP public keyserver—called the PGP Global Directory—to find recipient keys. 
It has the capability of delivering e-mail securely when no recipient key is found, via a secure HTTPS browser session. With PGP Desktop 9.x managed by PGP Universal Server 2.x, first released in 2005, all PGP encryption applications are based on a new proxy-based architecture. These newer versions of PGP software eliminate the use of e-mail plug-ins and insulate the user from changes to other desktop applications. All desktop and server operations are now based on security policies and operate in an automated fashion. The PGP Universal server automates the creation, management, and expiration of keys, sharing these keys among all PGP encryption applications. The Symantec PGP platform has since been renamed: PGP Desktop is now known as Symantec Encryption Desktop (SED), and the PGP Universal Server is now known as Symantec Encryption Management Server (SEMS). The current shipping versions are Symantec Encryption Desktop 10.3.0 (Windows and macOS platforms) and Symantec Encryption Server 3.3.2. Also available are PGP Command Line, which enables command line-based encryption and signing of information for storage, transfer, and backup, as well as the PGP Support Package for BlackBerry, which enables RIM BlackBerry devices to use sender-to-recipient messaging encryption. New versions of PGP applications support both the OpenPGP and S/MIME standards, allowing communication with any user of a NIST-specified standard. Inside PGP Inc., there was still concern about patent issues. RSADSI was challenging the continuation of the Viacrypt RSA license to the newly merged firm. The company adopted an informal internal standard they called "Unencumbered PGP", which would "use no algorithm with licensing difficulties". Because of PGP encryption's importance worldwide, many wanted to write their own software that would interoperate with PGP 5. Zimmermann became convinced that an open standard for PGP encryption was critical for them and for the cryptographic community as a whole.
In July 1997, PGP Inc. proposed to the IETF that there be a standard called OpenPGP. They gave the IETF permission to use the name OpenPGP to describe this new standard as well as any program that supported the standard. The IETF accepted the proposal and started the OpenPGP Working Group. OpenPGP is on the Internet Standards Track and is under active development. Many e-mail clients provide OpenPGP-compliant email security as described in RFC 3156. The current specification is RFC 4880 (November 2007), the successor to RFC 2440. RFC 4880 specifies a suite of required algorithms consisting of ElGamal encryption, DSA, Triple DES and SHA-1. In addition to these algorithms, the standard recommends RSA as described in PKCS #1 v1.5 for encryption and signing, as well as AES-128, CAST-128 and IDEA. Beyond these, many other algorithms are supported. The standard was extended to support Camellia cipher by RFC 5581 in 2009, and signing and key exchange based on Elliptic Curve Cryptography (ECC) (i.e. ECDSA and ECDH) by RFC 6637 in 2012. Support for ECC encryption was added by the proposed RFC 4880bis in 2014. The Free Software Foundation has developed its own OpenPGP-compliant program called GNU Privacy Guard (abbreviated GnuPG or GPG). GnuPG is freely available together with all source code under the GNU General Public License (GPL) and is maintained separately from several Graphical User Interfaces (GUIs) that interact with the GnuPG library for encryption, decryption and signing functions (see KGPG, Seahorse, MacGPG). Several other vendors have also developed OpenPGP-compliant software. The development of an open source OpenPGP-compliant library, OpenPGP.js, written in JavaScript, has allowed web-based applications to use PGP encryption in the web browser. OpenPGP's encryption can ensure secure delivery of files and messages, as well as provide verification of who created or sent the message using a process called digital signing. 
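Whatever the algorithm suite, OpenPGP's digital signatures follow a hash-then-sign pattern: the message is hashed (for example with SHA-1, a required algorithm in RFC 4880) and the digest is transformed with the signer's private key. A minimal sketch of the idea, using textbook RSA with deliberately tiny primes — not secure, and not the actual OpenPGP packet format:

```python
import hashlib

# Toy RSA parameters (far too small for real use; illustration only).
p, q = 61, 53
n = p * q                # modulus, 3233
e = 17                   # public exponent
d = 2753                 # private exponent: (e * d) % ((p-1)*(q-1)) == 1

def sign(message: bytes) -> int:
    # Hash-then-sign: reduce the SHA-1 digest mod n (a toy shortcut;
    # real OpenPGP pads and encodes the digest per the signature algorithm).
    digest = int.from_bytes(hashlib.sha1(message).digest(), "big") % n
    return pow(digest, d, n)

def verify(message: bytes, signature: int) -> bool:
    digest = int.from_bytes(hashlib.sha1(message).digest(), "big") % n
    return pow(signature, e, n) == digest

sig = sign(b"hello")
assert verify(b"hello", sig)
```

Real implementations such as GnuPG perform the same two steps with full-size keys and the packet framing defined in RFC 4880.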
The open source office suite LibreOffice implemented document signing with OpenPGP as of version 5.4.0 on Linux. Using OpenPGP for communication requires participation by both the sender and recipient. OpenPGP can also be used to secure sensitive files when they are stored in vulnerable places like mobile devices or in the cloud. With the advancement of cryptography, parts of PGP have been criticized for being dated. In October 2017, the ROCA vulnerability was announced, which affects RSA keys generated by buggy Infineon firmware used on Yubikey 4 tokens, often used with PGP. Many published PGP keys were found to be susceptible. Yubico offers free replacement of affected tokens.
https://en.wikipedia.org/wiki?curid=23080
Playing card A playing card is a piece of specially prepared card stock, heavy paper, thin cardboard, plastic-coated paper, cotton-paper blend, or thin plastic that is marked with distinguishing motifs. Often the front (face) and back of each card have a finish to make handling easier. They are most commonly used for playing card games, and are also used in magic tricks, cardistry, card throwing, and card houses; cards may also be collected. Some types of cards such as tarot cards are also used for divination. Playing cards are typically palm-sized for convenient handling, and usually are sold together in a set as a deck of cards or pack of cards. Playing cards are available in a wide variety of styles, as decks may be custom-produced for casinos and magicians (sometimes in the form of trick decks), made as promotional items, or intended as souvenirs, artistic works, educational tools, or branded accessories. Decks of cards or even single cards are also collected as a hobby or for monetary value. Different types of card decks can be found in different areas of the world—while the standard 52-card deck is known and used internationally, other types of cards such as Japanese hanafuda and Italian playing cards are well-known in their locales. Cards may also be produced for trading card sets or collectible card games, which can comprise hundreds if not thousands of unique cards. Playing cards were first invented in China, probably during the Tang dynasty around the 9th century AD, as a result of the usage of woodblock printing technology. The first possible reference to card games comes from a 9th-century text known as the "Collection of Miscellanea at Duyang", written by Tang dynasty writer Su E. It describes Princess Tongchang, daughter of Emperor Yizong of Tang, playing the "leaf game" in 868 with members of the Wei clan, the family of the princess's husband.
The first known book on the "leaf" game was called the "Yezi Gexi" and allegedly written by a Tang woman. It received commentary by writers of subsequent dynasties. The Song dynasty (960–1279) scholar Ouyang Xiu (1007–1072) asserts that the "leaf" game existed at least since the mid-Tang dynasty and associated its invention with the development of printed sheets as a writing medium. However, Ouyang also claims that the "leaves" were pages of a book used in a board game played with dice, and that the rules of the game were lost by 1067. Other games revolving around alcoholic drinking involved using playing cards of a sort from the Tang dynasty onward. However, these cards did not contain suits or numbers. Instead, they were printed with instructions or forfeits for whomever drew them. The earliest dated instance of a game involving cards occurred on 17 July 1294 when "Yan Sengzhu and Zheng Pig-Dog were caught playing cards [zhi pai] and that wood blocks for printing them had been impounded, together with nine of the actual cards." William Henry Wilkinson suggests that the first cards may have been actual paper currency which doubled as both the tools of gaming and the stakes being played for, similar to trading card games. Using paper money was inconvenient and risky, so it was substituted with play money known as "money cards". One of the earliest games whose rules are known is "madiao", a trick-taking game dating to the Ming dynasty (1368–1644). The 15th-century scholar Lu Rong described it as being played with 38 "money cards" divided into four suits: 9 in coins, 9 in strings of coins (which may have been misinterpreted as sticks from crude drawings), 9 in myriads (of coins or of strings), and 11 in tens of myriads (a myriad is 10,000). The two latter suits had "Water Margin" characters instead of pips on them, with Chinese characters to mark their rank and suit.
The suit of coins is in reverse order with 9 of coins being the lowest going up to 1 of coins as the high card. Despite the wide variety of patterns, the suits show a uniformity of structure. Every suit contains twelve cards with the top two usually being the court cards of king and vizier and the bottom ten being pip cards. Half the suits use reverse ranking for their pip cards. There are many motifs for the suit pips but some include coins, clubs, jugs, and swords which resemble later Mamluk and Latin suits. Michael Dummett speculated that Mamluk cards may have descended from an earlier deck which consisted of 48 cards divided into four suits each with ten pip cards and two court cards. By the 11th century, playing cards were spreading throughout the Asian continent and later came into Egypt. The oldest surviving cards in the world are four fragments found in the Keir Collection and one in the Benaki Museum. They are dated to the 12th and 13th centuries (late Fatimid, Ayyubid, and early Mamluk periods). A near complete pack of Mamluk playing cards dating to the 15th century and of similar appearance to the fragments above was discovered by Leo Aryeh Mayer in the Topkapı Palace, Istanbul, in 1939. It is not a complete set and is actually composed of three different packs, probably to replace missing cards. The Topkapı pack originally contained 52 cards comprising four suits: polo-sticks, coins, swords, and cups. Each suit contained ten pip cards and three court cards, called "malik" (king), "nā'ib malik" (viceroy or deputy king), and "thānī nā'ib" (second or under-deputy). The "thānī nā'ib" is a non-existent title so it may not have been in the earliest versions; without this rank, the Mamluk suits would structurally be the same as a Ganjifa suit. In fact, the word "Kanjifah" appears in Arabic on the king of swords and is still used in parts of the Middle East to describe modern playing cards. 
Influence from further east can explain why the Mamluks, most of whom were Central Asian Turkic Kipchaks, called their cups "tuman" which means myriad in Turkic, Mongolian and Jurchen languages. Wilkinson postulated that the cups may have been derived from inverting the Chinese and Jurchen ideogram for myriad (). The Mamluk court cards showed abstract designs or calligraphy not depicting persons possibly due to religious proscription in Sunni Islam, though they did bear the ranks on the cards. "Nā'ib" would be borrowed into French ("nahipi"), Italian ("naibi"), and Spanish ("naipes"), the latter word still in common usage. Panels on the pip cards in two suits show they had a reverse ranking, a feature found in madiao, "ganjifa", and old European card games like ombre, tarot, and maw. A fragment of two uncut sheets of Moorish-styled cards of a similar but plainer style was found in Spain and dated to the early 15th century. Export of these cards (from Cairo, Alexandria, and Damascus), ceased after the fall of the Mamluks in the 16th century. The rules to play these games are lost but they are believed to be plain trick games without trumps. The earliest record of playing cards in Europe is believed by some researchers to be a ban on card games in the city of Berne in 1367, although this source is questionable. Generally accepted as the first is a Florentine ban dating to 1377. Also appearing in 1377 was the treatise by John of Rheinfelden, in which he describes playing cards and their moral meaning. From this year onwards more and more records (usually bans) of playing cards occur. Among the early patterns of playing card were those probably derived from the Mamluk suits of cups, coins, swords, and polo-sticks, which are still used in traditional Latin decks. As polo was an obscure sport to Europeans then, the polo-sticks became batons or cudgels. 
Their presence is attested in Catalonia in 1371, in Switzerland in 1377, and in 1380 in many locations including Florence and Paris. Wide use of playing cards in Europe can, with some certainty, be traced from 1377 onward. In the account books of Johanna, Duchess of Brabant and Wenceslaus I, Duke of Luxembourg, an entry dated May 14, 1379, by receiver general of Brabant Renier Hollander reads: "Given to Monsieur and Madame four peters and two florins, worth eight and a half sheep, for the purchase of packs of cards". In his book of accounts for 1392 or 1393, Charles or Charbot Poupart, treasurer of the household of Charles VI of France, records payment for the painting of three sets of cards. From about 1418 to 1450 professional card makers in Ulm, Nuremberg, and Augsburg created printed decks. Playing cards even competed with devotional images as the most common uses for woodcuts in this period. Most early woodcuts of all types were coloured after printing, either by hand or, from about 1450 onwards, by stencils. These 15th-century playing cards were probably painted. The Flemish Hunting Deck, held by the Metropolitan Museum of Art, is the oldest complete set of ordinary playing cards made in Europe, dating from the 15th century. As cards spread from Italy to Germanic countries, the Latin suits were replaced with the suits of leaves (or shields), hearts (or roses), bells, and acorns, and a combination of Latin and Germanic suit pictures and names resulted in the French suits of "trèfles" (clovers), "carreaux" (tiles), "cœurs" (hearts), and "piques" (pikes) around 1480. The "trèfle" (clover) was probably derived from the acorn and the "pique" (pike) from the leaf of the German suits. The names "pique" and "spade", however, may have derived from the sword ("spada") of the Italian suits. In England, the French suits were eventually used, although the earliest packs circulating may have had Latin suits. This may account for why the English called the clovers "clubs" and the pikes "spades".
In the late 14th century, Europeans changed the Mamluk court cards to represent European royalty and attendants. In a description from 1377, the earliest courts were originally a seated "king", an upper marshal that held his suit symbol up, and a lower marshal that held it down. The latter two correspond with the "ober" and "unter" cards found in German and Swiss playing cards. The Italians and Iberians replaced the ober/unter system with the "Knight" and the "Fante" or "Sota" before 1390, perhaps to make the cards more visually distinguishable. In England, the lowest court card was called the "knave", which originally meant "male child" (compare German "Knabe"), so in this context the character could represent the "prince", son to the king and queen; the meaning "servant" developed later. Queens appeared sporadically in packs as early as 1377, especially in Germany. Although the Germans abandoned the queen before the 1500s, the French permanently picked it up and placed it under the king. Packs of 56 cards containing in each suit a king, queen, knight, and knave (as in tarot) were once common in the 15th century. In 1628, the Worshipful Company of Makers of Playing Cards was incorporated under a royal charter by Charles I; the Company received livery status from the Court of Aldermen of the City of London in 1792. The Company still exists today, having expanded its member ranks to include "card makers... card collectors, dealers, bridge players, [and] magicians". During the mid-16th century, Portuguese traders introduced playing cards to Japan. The first indigenous Japanese deck was the Tenshō karuta, named after the Tenshō period. Packs with corner and edge indices (i.e. the value of the card printed at the corner(s) of the card) enabled players to hold their cards close together in a fan with one hand (instead of the two hands previously used). The first such pack known with Latin suits was printed by Infirerra and dated 1693, but this feature was commonly used only from the end of the 18th century.
The first American-manufactured (French) deck with this innovation was the Saladee's Patent, printed by Samuel Hart in 1864. In 1870, he and his cousins at Lawrence & Cohen followed up with the Squeezers, the first cards with indices to achieve wide circulation. This was followed by the innovation of reversible court cards, an invention attributed to a French card maker of Agen in 1745; however, the French government, which controlled the design of playing cards, prohibited the printing of cards with this innovation. In central Europe (Trappola cards) and Italy (Tarocco Bolognese) the innovation was adopted during the second half of the 18th century. In Great Britain, the pack with reversible court cards was patented in 1799 by Edmund Ludlow and Ann Wilcox. The French pack with this design was printed around 1802 by Thomas Wheeler. Sharp corners wear out more quickly, and could possibly reveal the card's value, so they were replaced with rounded corners. Before the mid-19th century, British, American, and French players preferred blank backs. The need to hide wear and tear and to discourage writing on the back led cards to have designs, pictures, photos, or advertising on the reverse. The United States introduced the joker into the deck. It was devised for the game of euchre, which spread from Europe to America beginning shortly after the American Revolutionary War. In euchre, the highest trump card is the Jack of the trump suit, called the "right bower" (from the German "Bauer"); the second-highest trump, the "left bower", is the jack of the suit of the same color as trumps. The joker was invented c. 1860 as a third trump, the "imperial" or "best bower", which ranked higher than the other two "bowers". The name of the card is believed to derive from "juker", a variant name for euchre. The earliest reference to a joker functioning as a wild card dates to 1875 with a variation of poker.
Columbia University's Rare Book and Manuscript Library holds the Albert Field Collection of Playing Cards, an archive of over 6,000 individual decks from over 50 countries and dating back to the 1550s. In 2018 the university digitized over 100 of its decks. Since 2017, Vanderbilt University has been home to the 1,000-volume George Clulow and United States Playing Card Co. Gaming Collection, which has been called one of the "most complete and scholarly collections [of books on cards and gaming] that has ever been gathered together". Contemporary playing cards are grouped into three broad categories based on the suits they use: French, Latin, and Germanic. Latin suits are used in the closely related Spanish and Italian formats. The Swiss-German suits are distinct enough to merit their subcategory. Excluding jokers and tarot trumps, the French 52-card deck preserves the number of cards in the original Mamluk deck, while Latin and Germanic decks average fewer. Latin decks usually drop the higher-valued pip cards, while Germanic decks drop the lower-valued ones. Within suits, there are regional or national variations called "standard patterns." Because these patterns are in the public domain, this allows multiple card manufacturers to recreate them. Pattern differences are most easily found in the face cards but the number of cards per deck, the use of numeric indices, or even minor shape and arrangement differences of the pips can be used to distinguish them. Some patterns have been around for hundreds of years. Jokers are not part of any pattern as they are a relatively recent invention and lack any standardized appearance so each publisher usually puts its own trademarked illustration into their decks. The wide variation of jokers has turned them into collectible items. 
Any card that bore the stamp duty like the ace of spades in England, the ace of clubs in France or the ace of coins in Italy are also collectible as that is where the manufacturer's logo is usually placed. Usually the cards have their indices printed in the upper left and lower right corners, assuming they will be held in the left hand of a right-handed person. This design is often uncomfortable for left-handed people who may prefer to hold their cards in the right hand. To mitigate this issue non-biased designs exist, that have indices in all four corners of the card. French decks come in a variety of patterns and deck sizes. The 52-card deck is the most popular deck and includes 13 ranks of each suit with reversible "court" or face cards. Each suit includes an ace, depicting a single symbol of its suit, a king, queen, and jack, each depicted with a symbol of their suit; and ranks two through ten, with each card depicting that number of pips of its suit. As well as these 52 cards, commercial packs often include between one and six jokers, most often two. Decks with fewer than 52 cards are known as stripped decks. The piquet pack has all values from 2 through 6 in each suit removed for a total of 32 cards. It is popular in France, the Low Countries, Central Europe and Russia and is used to play piquet, belote, bezique and skat. It is also used in the Sri Lankan, whist-based game known as "omi". Forty-card French suited packs are common in northwest Italy; these remove the 8s through 10s like Latin suited decks. 24 card decks, removing 2s through 8s are also sold in Austria and Bavaria to play schnapsen. A pinochle deck consists of two copies of a 24 card schnapsen deck, thus 48 cards. The 78 card tarot nouveau adds the knight card between queens and jacks along with 21 numbered trumps and the unnumbered Fool. Today the process of making playing cards is highly automated. 
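The pack structures described above are easy to express in code. A short Python sketch (the suit and rank labels are arbitrary choices, not part of any standard):

```python
SUITS = ["clubs", "diamonds", "hearts", "spades"]
RANKS = ["A"] + [str(n) for n in range(2, 11)] + ["J", "Q", "K"]

# Standard French 52-card pack: 13 ranks in each of 4 suits.
standard = [(rank, suit) for suit in SUITS for rank in RANKS]
assert len(standard) == 52

# Piquet pack: strip ranks 2 through 6 from every suit, leaving 32 cards.
piquet = [(r, s) for (r, s) in standard
          if r not in {"2", "3", "4", "5", "6"}]
assert len(piquet) == 32

# Schnapsen pack: strip 2 through 8, leaving 24 cards;
# a pinochle pack is two copies of it, 48 cards in all.
schnapsen = [(r, s) for (r, s) in standard
             if r not in {"2", "3", "4", "5", "6", "7", "8"}]
assert len(schnapsen) == 24
pinochle = schnapsen * 2
assert len(pinochle) == 48
```

Stripped decks such as piquet and schnapsen are simply filters over the full pack, which is why manufacturers can produce them from the same printed sheets.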
Large sheets of paper are glued together to create a sheet of pasteboard; the glue may be black or dyed another dark color to increase the card stock's opacity. In the industry, this black compound is sometimes known as "gick". Some card manufacturers may purchase pasteboard from various suppliers; large companies such as USPCC create their own proprietary pasteboard. After the desired imagery is etched into printing plates, the art is printed onto each side of the pasteboard sheet, which is coated with a textured or smooth finish, sometimes called a varnish or print coating. These coatings can be water- or solvent-based, and different textures and visual effects can be achieved by adding certain dyes or foils, or using multiple varnish processes. The pasteboard is then split into individual uncut sheets, which are cut into single cards and sorted into decks. The corners are then rounded, after which the decks are packaged, commonly in tuck boxes wrapped in cellophane. The tuck box may have a seal applied. Card manufacturers must pay special attention to the registration of the cards, as non-symmetrical cards can be used to cheat. Gambling corporations commonly have playing cards made specifically for their casinos. As casinos go through large numbers of decks each day, they may sometimes resell used cards that were "on the floor" — however, the cards sold to the public are altered, either by cutting the deck's corners or by punching a hole in the deck. Because of the long history and wide variety in designs, playing cards are also collector's items. According to "Guinness World Records", the largest playing card collection comprises 11,087 decks and is owned by Liu Fuchang of China. Individual playing cards are also collected, such as the world record collection of 8,520 different Jokers belonging to Tony De Santis of Italy. Custom decks may be produced for myriad purposes. 
Across the world, both individuals and large companies such as United States Playing Card Company (USPCC) design and release many different styles of decks, including commemorative decks and souvenir decks. Bold and colorful designs tend to be used for cardistry decks, while more generally, playing cards (as well as tarot cards) may focus on artistic value. Custom deck production is commonly funded on platforms such as Kickstarter, with companies as large as USPCC and Cartamundi offering card printing services to the public. In 1976, the JPL Gallery in London commissioned a card deck from a variety of contemporary British artists including Maggi Hambling, Patrick Heron, David Hockney, Howard Hodgkin, John Hoyland, and Allen Jones called "The Deck of Cards". Forty years later in 2016, the British Council commissioned a similar deck called "Taash ke Patte" featuring Indian artists such as Bhuri Bai, Shilpa Gupta, Krishen Khanna, Ram Rahman, Gulam Mohammed Sheikh, Arpita Singh, and Thukral & Tagra. Police departments, local governments, state prison systems, and even private organizations across the United States have created decks of cards that feature photos, names, and details of cold case victims or missing persons on each card. These decks are sold in prison commissaries, or even to the public, in the hopes that an inmate (or anyone else) might provide a new lead. Cold case card programs have been introduced in over a dozen states, including by Oklahoma's State Bureau of Investigation, Connecticut's Division of Criminal Justice, Delaware's Department of Correction, the Florida Department of Law Enforcement, and Rhode Island's Department of Corrections, among others. Among inmates, they may be called "snitch cards". The Unicode standard for text encoding on computers defines 8 characters for card suits in the Miscellaneous Symbols block, at U+2660–U+2667.
Unicode 7.0 added a unified pack for French-suited tarot nouveau's trump cards and the 52 cards of the modern French pack, with 4 knights, together with a character for "Playing Card Back" and black, red, and white jokers in the Playing Cards block (U+1F0A0–U+1F0FF). The Unicode names for each group of four glyphs are 'black' and 'white' but might have been more accurately described as 'solid' and 'outline', since the colour actually used at display or printing time is an application choice.
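These characters can be addressed directly by code point. A small Python check of the suit characters in the Miscellaneous Symbols block and the first two characters of the Playing Cards block:

```python
import unicodedata

# The eight suit characters in the Miscellaneous Symbols block
# (U+2660 through U+2667).
suits = [chr(cp) for cp in range(0x2660, 0x2668)]
assert unicodedata.name(suits[0]) == "BLACK SPADE SUIT"
assert unicodedata.name(suits[5]) == "BLACK HEART SUIT"

# The Playing Cards block begins at U+1F0A0 with the card back,
# followed by the ace of spades at U+1F0A1.
CARD_BACK = "\U0001F0A0"
ACE_OF_SPADES = "\U0001F0A1"
assert unicodedata.name(CARD_BACK) == "PLAYING CARD BACK"
assert unicodedata.name(ACE_OF_SPADES) == "PLAYING CARD ACE OF SPADES"
```

As the prose notes, whether a "black" suit glyph renders solid or coloured is left to the font and application, not to the standard.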
https://en.wikipedia.org/wiki?curid=23083
Paleontology Paleontology, also spelled palaeontology or palæontology, is the scientific study of life that existed prior to, and sometimes including, the start of the Holocene Epoch (roughly 11,700 years before present). It includes the study of fossils to classify organisms and study interactions with each other and their environments (their paleoecology). Paleontological observations have been documented as far back as the 5th century BCE. The science became established in the 18th century as a result of Georges Cuvier's work on comparative anatomy, and developed rapidly in the 19th century. The term itself originates from Greek παλαιός, "palaios", "old, ancient", ὄν, "on" (gen. "ontos"), "being, creature" and λόγος, "logos", "speech, thought, study". Paleontology lies on the border between biology and geology, but differs from archaeology in that it excludes the study of anatomically modern humans. It now uses techniques drawn from a wide range of sciences, including biochemistry, mathematics, and engineering. Use of all these techniques has enabled paleontologists to discover much of the evolutionary history of life, almost all the way back to when Earth became capable of supporting life, about 3.8 billion years ago. As knowledge has increased, paleontology has developed specialised sub-divisions, some of which focus on different types of fossil organisms while others study ecology and environmental history, such as ancient climates. Body fossils and trace fossils are the principal types of evidence about ancient life, and geochemical evidence has helped to decipher the evolution of life before there were organisms large enough to leave body fossils.
Estimating the dates of these remains is essential but difficult: sometimes adjacent rock layers allow radiometric dating, which provides absolute dates that are accurate to within 0.5%, but more often paleontologists have to rely on relative dating by solving the "jigsaw puzzles" of biostratigraphy (arrangement of rock layers from youngest to oldest). Classifying ancient organisms is also difficult, as many do not fit well into the Linnaean taxonomy classifying living organisms, and paleontologists more often use cladistics to draw up evolutionary "family trees". The final quarter of the 20th century saw the development of molecular phylogenetics, which investigates how closely organisms are related by measuring the similarity of the DNA in their genomes. Molecular phylogenetics has also been used to estimate the dates when species diverged, but there is controversy about the reliability of the molecular clock on which such estimates depend. The simplest definition of "paleontology" is "the study of ancient life". The field seeks information about several aspects of past organisms: "their identity and origin, their environment and evolution, and what they can tell us about the Earth's organic and inorganic past". William Whewell (1794-1866) classified paleontology as one of the historical sciences, along with archaeology, geology, astronomy, cosmology, philology and history itself: paleontology aims to describe phenomena of the past and to reconstruct their causes. Hence it has three main elements: description of past phenomena; developing a general theory about the causes of various types of change; and applying those theories to specific facts. When trying to explain the past, paleontologists and other historical scientists often construct a set of one or more hypotheses about the causes and then look for a "smoking gun", a piece of evidence that strongly accords with one hypothesis over any others. 
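The radiometric dating mentioned above rests on exponential decay: assuming a closed system that began with no daughter isotope, the age follows from the measured daughter-to-parent ratio and the half-life, t = (half-life / ln 2) · ln(1 + D/P). A sketch in Python (a simplified model that ignores complications such as initial daughter content and branched decay):

```python
import math

def radiometric_age(daughter: float, parent: float, half_life: float) -> float:
    """Age implied by a daughter/parent isotope ratio, assuming a closed
    system that started with the parent isotope only."""
    decay_const = math.log(2) / half_life  # lambda = ln(2) / half-life
    return math.log(1 + daughter / parent) / decay_const

# When half the parent has decayed (D/P = 1), the age is one half-life.
assert math.isclose(radiometric_age(1.0, 1.0, 1.25e9), 1.25e9)
```

The 0.5% accuracy quoted above comes from how precisely the ratio and decay constant can be measured, not from the formula itself.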
Sometimes researchers discover a "smoking gun" by a fortunate accident during other research. For example, the 1980 discovery by Luis and Walter Alvarez of iridium, a mainly extraterrestrial metal, in the Cretaceous–Tertiary boundary layer made asteroid impact the most favored explanation for the Cretaceous–Paleogene extinction event, although debate continues about the contribution of volcanism. A complementary approach to developing scientific knowledge, experimental science, is often said to work by conducting experiments to "disprove" hypotheses about the workings and causes of natural phenomena. This approach cannot prove a hypothesis, since some later experiment may disprove it, but the accumulation of failures to disprove is often compelling evidence in favor. However, when confronted with totally unexpected phenomena, such as the first evidence for invisible radiation, experimental scientists often use the same approach as historical scientists: construct a set of hypotheses about the causes and then look for a "smoking gun". Paleontology lies between biology and geology since it focuses on the record of past life, but its main source of evidence is fossils in rocks. For historical reasons, paleontology is part of the geology department at many universities: in the 19th and early 20th centuries, geology departments found fossil evidence important for dating rocks, while biology departments showed little interest. Paleontology also has some overlap with archaeology, which primarily works with objects made by humans and with human remains, while paleontologists are interested in the characteristics and evolution of humans as a species. When dealing with evidence about humans, archaeologists and paleontologists may work together – for example paleontologists might identify animal or plant fossils around an archaeological site, to discover what the people who lived there ate; or they might analyze the climate at the time of habitation.
In addition, paleontology often borrows techniques from other sciences, including biology, osteology, ecology, chemistry, physics and mathematics. For example, geochemical signatures from rocks may help to discover when life first arose on Earth, and analyses of carbon isotope ratios may help to identify climate changes and even to explain major transitions such as the Permian–Triassic extinction event. A relatively recent discipline, molecular phylogenetics, compares the DNA and RNA of modern organisms to re-construct the "family trees" of their evolutionary ancestors. It has also been used to estimate the dates of important evolutionary developments, although this approach is controversial because of doubts about the reliability of the "molecular clock". Techniques from engineering have been used to analyse how the bodies of ancient organisms might have worked, for example the running speed and bite strength of "Tyrannosaurus," or the flight mechanics of "Microraptor". It is relatively commonplace to study the internal details of fossils using X-ray microtomography. Paleontology, biology, archaeology, and paleoneurobiology combine to study endocranial casts (endocasts) of species related to humans to clarify the evolution of the human brain. Paleontology even contributes to astrobiology, the investigation of possible life on other planets, by developing models of how life may have arisen and by providing techniques for detecting evidence of life. As knowledge has increased, paleontology has developed specialised subdivisions. Vertebrate paleontology concentrates on fossils from the earliest fish to the immediate ancestors of modern mammals. Invertebrate paleontology deals with fossils such as molluscs, arthropods, annelid worms and echinoderms. Paleobotany studies fossil plants, algae, and fungi. Palynology, the study of pollen and spores produced by land plants and protists, straddles paleontology and botany, as it deals with both living and fossil organisms. 
Micropaleontology deals with microscopic fossil organisms of all kinds. Instead of focusing on individual organisms, paleoecology examines the interactions between different ancient organisms, such as their food chains, and the two-way interactions with their environments.  For example, the development of oxygenic photosynthesis by bacteria caused the oxygenation of the atmosphere and hugely increased the productivity and diversity of ecosystems. Together, these led to the evolution of complex eukaryotic cells, from which all multicellular organisms are built. Paleoclimatology, although sometimes treated as part of paleoecology, focuses more on the history of Earth's climate and the mechanisms that have changed it – which have sometimes included evolutionary developments, for example the rapid expansion of land plants in the Devonian period removed more carbon dioxide from the atmosphere, reducing the greenhouse effect and thus helping to cause an ice age in the Carboniferous period. Biostratigraphy, the use of fossils to work out the chronological order in which rocks were formed, is useful to both paleontologists and geologists. Biogeography studies the spatial distribution of organisms, and is also linked to geology, which explains how Earth's geography has changed over time. Fossils of organisms' bodies are usually the most informative type of evidence. The most common types are wood, bones, and shells. Fossilisation is a rare event, and most fossils are destroyed by erosion or metamorphism before they can be observed. Hence the fossil record is very incomplete, increasingly so further back in time. Despite this, it is often adequate to illustrate the broader patterns of life's history. There are also biases in the fossil record: different environments are more favorable to the preservation of different types of organism or parts of organisms. 
Further, only the parts of organisms that were already mineralised are usually preserved, such as the shells of molluscs. Since most animal species are soft-bodied, they decay before they can become fossilised. As a result, although there are 30-plus phyla of living animals, two-thirds have never been found as fossils. Occasionally, unusual environments may preserve soft tissues. These lagerstätten allow paleontologists to examine the internal anatomy of animals that in other sediments are represented only by shells, spines, claws, etc. – if they are preserved at all. However, even lagerstätten present an incomplete picture of life at the time. The majority of organisms living at the time are probably not represented because lagerstätten are restricted to a narrow range of environments, e.g. where soft-bodied organisms can be preserved very quickly by events such as mudslides; and the exceptional events that cause quick burial make it difficult to study the normal environments of the animals. The sparseness of the fossil record means that organisms are expected to exist long before and after they are found in the fossil record – this is known as the Signor–Lipps effect. Trace fossils consist mainly of tracks and burrows, but also include coprolites (fossil feces) and marks left by feeding. Trace fossils are particularly significant because they represent a data source that is not limited to animals with easily fossilised hard parts, and they reflect organisms' behaviours. Also many traces date from significantly earlier than the body fossils of animals that are thought to have been capable of making them. Whilst exact assignment of trace fossils to their makers is generally impossible, traces may for example provide the earliest physical evidence of the appearance of moderately complex animals (comparable to earthworms). Geochemical observations may help to deduce the global level of biological activity at a certain period, or the affinity of certain fossils. 
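The Signor–Lipps effect mentioned above can be illustrated numerically: the first and last occurrences seen in the fossil record always lie inside a taxon's true range, so the fossil-derived range underestimates the true one. A minimal Python sketch, using invented dates purely for illustration:

```python
# Hypothetical example: a taxon truly lived from 100 to 40 million
# years ago (Ma), but only a handful of fossils were preserved.
true_oldest, true_youngest = 100.0, 40.0
fossil_finds_ma = [88.0, 81.0, 72.0, 66.0, 55.0, 47.0]  # invented find dates

def observed_range(finds):
    """First and last occurrences as seen in the fossil record."""
    return max(finds), min(finds)

obs_oldest, obs_youngest = observed_range(fossil_finds_ma)
# The observed range (88-47 Ma) sits strictly inside the true range
# (100-40 Ma): the taxon existed before its first known fossil and
# survived after its last one.
print(obs_oldest, obs_youngest)  # prints: 88.0 47.0
```

The sparser the preservation, the larger the gap between the observed and true range, which is why rare, soft-bodied groups are most affected.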
For example, geochemical features of rocks may reveal when life first arose on Earth, and may provide evidence of the presence of eukaryotic cells, the type from which all multicellular organisms are built. Analyses of carbon isotope ratios may help to explain major transitions such as the Permian–Triassic extinction event. Naming groups of organisms in a way that is clear and widely agreed is important, as some disputes in paleontology have been based just on misunderstandings over names. Linnaean taxonomy is commonly used for classifying living organisms, but runs into difficulties when dealing with newly discovered organisms that are significantly different from known ones. For example: it is hard to decide at what level to place a new higher-level grouping, e.g. genus or family or order; this is important since the Linnaean rules for naming groups are tied to their levels, and hence if a group is moved to a different level it must be renamed. (An accompanying figure shows a simple example cladogram, in which warm-bloodedness evolved somewhere in the synapsid–mammal transition and must also have evolved independently at another point in the tree – an example of convergent evolution.) Paleontologists generally use approaches based on cladistics, a technique for working out the evolutionary "family tree" of a set of organisms. It works by the logic that, if groups B and C have more similarities to each other than either has to group A, then B and C are more closely related to each other than either is to A. Characters that are compared may be anatomical, such as the presence of a notochord, or molecular, by comparing sequences of DNA or proteins. The result of a successful analysis is a hierarchy of clades – groups that share a common ancestor. Ideally the "family tree" has only two branches leading from each node ("junction"), but sometimes there is too little information to achieve this and paleontologists have to make do with junctions that have several branches. 
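The grouping logic can be sketched in a few lines of Python. The character matrix below is invented for illustration, and counting raw matches is a deliberate simplification: real cladistic analyses weigh shared derived characters and search over whole candidate trees.

```python
def shared_characters(x, y):
    """Count characters on which two taxa have the same state."""
    return sum(1 for c in x if x[c] == y[c])

# Hypothetical presence/absence (1/0) character states for three taxa.
A = {"notochord": 1, "jaws": 0, "limbs": 0, "hair": 0}
B = {"notochord": 1, "jaws": 1, "limbs": 1, "hair": 0}
C = {"notochord": 1, "jaws": 1, "limbs": 1, "hair": 1}

# B and C match on more characters than either does with A, so an
# analysis would group B and C as a clade, with A outside it.
print(shared_characters(B, C), shared_characters(A, B), shared_characters(A, C))
# prints: 3 2 1
```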
The cladistic technique is sometimes fallible, as some features, such as wings or camera eyes, evolved more than once, convergently – this must be taken into account in analyses. Evolutionary developmental biology, commonly abbreviated to "Evo Devo", also helps paleontologists to produce "family trees", and understand fossils. For example, the embryological development of some modern brachiopods suggests that brachiopods may be descendants of the halkieriids, which became extinct in the Cambrian period. Paleontology seeks to map out how living things have changed through time. A substantial hurdle to this aim is the difficulty of working out how old fossils are. Beds that preserve fossils typically lack the radioactive elements needed for radiometric dating. This technique is our only means of giving rocks greater than about 50 million years old an absolute age, and can be accurate to within 0.5% or better. Although radiometric dating requires very careful laboratory work, its basic principle is simple: the rates at which various radioactive elements decay are known, and so the ratio of the radioactive element to the element into which it decays shows how long ago the radioactive element was incorporated into the rock. Radioactive elements are common only in rocks with a volcanic origin, and so the only fossil-bearing rocks that can be dated radiometrically are a few volcanic ash layers. Consequently, paleontologists must usually rely on stratigraphy to date fossils. Stratigraphy is the science of deciphering the "layer-cake" that is the sedimentary record, and has been compared to a jigsaw puzzle. Rocks normally form relatively horizontal layers, with each layer younger than the one underneath it. If a fossil is found between two layers whose ages are known, the fossil's age must lie between the two known ages. 
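The radiometric-dating principle described above translates directly into a formula: if a parent isotope with half-life T decays into a daughter product, and the rock initially contained no daughter, the measured daughter/parent ratio D/P gives the age t = ln(1 + D/P) / λ, with λ = ln 2 / T. A Python sketch, using the roughly 1.25-billion-year potassium–argon half-life as an illustrative value:

```python
import math

def radiometric_age(daughter_to_parent, half_life):
    """Age of a rock from its daughter/parent isotope ratio, assuming
    a closed system that started with no daughter isotope present."""
    decay_constant = math.log(2) / half_life
    return math.log(1.0 + daughter_to_parent) / decay_constant

# Equal amounts of parent and daughter (ratio 1) mean exactly one
# half-life has elapsed; a ratio of 3 means two half-lives.
print(radiometric_age(1.0, half_life=1.25e9))  # 1.25e9 years
print(radiometric_age(3.0, half_life=1.25e9))  # 2.5e9 years
```

The "very careful laboratory work" the text mentions goes into measuring the ratio and verifying the closed-system assumption; the arithmetic itself is this simple.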
Because rock sequences are not continuous, but may be broken up by faults or periods of erosion, it is very difficult to match up rock beds that are not directly next to one another. However, fossils of species that survived for a relatively short time can be used to link up isolated rocks: this technique is called "biostratigraphy". For instance, the conodont "Eoplacognathus pseudoplanus" has a short range in the Middle Ordovician period. If rocks of unknown age are found to have traces of "E. pseudoplanus", they must have a mid-Ordovician age. Such index fossils must be distinctive, be globally distributed and have a short time range to be useful. However, misleading results are produced if the index fossils turn out to have longer fossil ranges than first thought. Stratigraphy and biostratigraphy can in general provide only relative dating ("A" was before "B"), which is often sufficient for studying evolution. However, this is difficult for some time periods, because of the problems involved in matching up rocks of the same age across different continents. Family-tree relationships may also help to narrow down the date when lineages first appeared. For instance, if fossils of B or C date to X million years ago and the calculated "family tree" says A was an ancestor of B and C, then A must have evolved more than X million years ago. It is also possible to estimate how long ago two living clades diverged – i.e. approximately how long ago their last common ancestor must have lived – by assuming that DNA mutations accumulate at a constant rate. These "molecular clocks", however, are fallible, and provide only a very approximate timing: for example, they are not sufficiently precise and reliable for estimating when the groups that feature in the Cambrian explosion first evolved, and estimates produced by different techniques may vary by a factor of two. 
Earth formed about and, after a collision that formed the Moon about 40 million years later, may have cooled quickly enough to have oceans and an atmosphere about . There is evidence on the Moon of a Late Heavy Bombardment by asteroids from . If, as seems likely, such a bombardment struck Earth at the same time, the first atmosphere and oceans may have been stripped away. Paleontology traces the evolutionary history of life back to over , possibly as far as . The oldest clear evidence of life on Earth dates to , although there have been reports, often disputed, of fossil bacteria from and of geochemical evidence for the presence of life . Some scientists have proposed that life on Earth was "seeded" from elsewhere, but most research concentrates on various explanations of how life could have arisen independently on Earth. For about 2,000 million years microbial mats, multi-layered colonies of different bacteria, were the dominant life on Earth. The evolution of oxygenic photosynthesis enabled them to play the major role in the oxygenation of the atmosphere from about . This change in the atmosphere increased their effectiveness as nurseries of evolution. While eukaryotes, cells with complex internal structures, may have been present earlier, their evolution speeded up when they acquired the ability to transform oxygen from a poison to a powerful source of metabolic energy. This innovation may have come from primitive eukaryotes capturing oxygen-powered bacteria as endosymbionts and transforming them into organelles called mitochondria. The earliest evidence of complex eukaryotes with organelles (such as mitochondria) dates from . Multicellular life is composed only of eukaryotic cells, and the earliest evidence for it is the Francevillian Group Fossils from , although specialisation of cells for different functions first appears between (a possible fungus) and (a probable red alga). 
Sexual reproduction may be a prerequisite for specialisation of cells, as an asexual multicellular organism might be at risk of being taken over by rogue cells that retain the ability to reproduce. The earliest known animals are cnidarians from about , but these are so modern-looking that they must be descendants of earlier animals. Early fossils of animals are rare because they had not developed mineralised, easily fossilised hard parts until about . The earliest modern-looking bilaterian animals appear in the Early Cambrian, along with several "weird wonders" that bear little obvious resemblance to any modern animals. There is a long-running debate about whether this Cambrian explosion was truly a very rapid period of evolutionary experimentation; alternative views are that modern-looking animals began evolving earlier but fossils of their precursors have not yet been found, or that the "weird wonders" are evolutionary "aunts" and "cousins" of modern groups. Vertebrates remained a minor group until the first jawed fish appeared in the Late Ordovician. The spread of animals and plants from water to land required organisms to solve several problems, including protection against drying out and supporting themselves against gravity. The earliest evidence of land plants and land invertebrates dates back to about and respectively. Those invertebrates, as indicated by their trace and body fossils, were shown to be arthropods known as euthycarcinoids. The lineage that produced land vertebrates evolved later but very rapidly between and ; recent discoveries have overturned earlier ideas about the history and driving forces behind their evolution. Land plants were so successful that their detritus caused an ecological crisis in the Late Devonian, until the evolution of fungi that could digest dead wood. 
During the Permian period, synapsids, including the ancestors of mammals, may have dominated land environments, but this ended with the Permian–Triassic extinction event , which came very close to wiping out all complex life. The extinctions were apparently fairly sudden, at least among vertebrates. During the slow recovery from this catastrophe a previously obscure group, archosaurs, became the most abundant and diverse terrestrial vertebrates. One archosaur group, the dinosaurs, were the dominant land vertebrates for the rest of the Mesozoic, and birds evolved from one group of dinosaurs. During this time mammals' ancestors survived only as small, mainly nocturnal insectivores, which may have accelerated the development of mammalian traits such as endothermy and hair. After the Cretaceous–Paleogene extinction event killed off all the dinosaurs except the birds, mammals increased rapidly in size and diversity, and some took to the air and the sea. Fossil evidence indicates that flowering plants appeared and rapidly diversified in the Early Cretaceous between and . Their rapid rise to dominance of terrestrial ecosystems is thought to have been propelled by coevolution with pollinating insects. Social insects appeared around the same time and, although they account for only small parts of the insect "family tree", now form over 50% of the total mass of all insects. Humans evolved from a lineage of upright-walking apes whose earliest fossils date from over . Although early members of this lineage had chimp-sized brains, about 25% as big as modern humans', there are signs of a steady increase in brain size after about . There is a long-running debate about whether "modern" humans are descendants of a single small population in Africa, which then migrated all over the world less than 200,000 years ago and replaced previous hominine species, or arose worldwide at the same time as a result of interbreeding. 
Life on Earth has suffered occasional mass extinctions at least since . Despite their disastrous effects, mass extinctions have sometimes accelerated the evolution of life on Earth. When dominance of an ecological niche passes from one group of organisms to another, this is rarely because the new dominant group outcompetes the old, but usually because an extinction event allows the new group to outlive the old and move into its niche. The fossil record appears to show that the rate of extinction is slowing down, with both the gaps between mass extinctions becoming longer and the average and background rates of extinction decreasing. However, it is not certain whether the actual rate of extinction has altered, since both of these observations could be explained in several ways. Biodiversity in the fossil record shows a different trend: a fairly swift rise from , a slight decline from , in which the devastating Permian–Triassic extinction event is an important factor, and a swift rise from to the present. Although paleontology became established around 1800, earlier thinkers had noticed aspects of the fossil record. The ancient Greek philosopher Xenophanes (570–480 BC) concluded from fossil sea shells that some areas of land were once under water. During the Middle Ages the Persian naturalist Ibn Sina, known as "Avicenna" in Europe, discussed fossils and proposed a theory of petrifying fluids on which Albert of Saxony elaborated in the 14th century. The Chinese naturalist Shen Kuo (1031–1095) proposed a theory of climate change based on the presence of petrified bamboo in regions that in his time were too dry for bamboo. In early modern Europe, the systematic study of fossils emerged as an integral part of the changes in natural philosophy that occurred during the Age of Reason. In the Italian Renaissance, Leonardo da Vinci made various significant contributions to the field and also depicted numerous fossils. 
Leonardo's contributions are central to the history of paleontology because he established a line of continuity between the two main branches of paleontology – ichnology and body fossil paleontology. At the end of the 18th century Georges Cuvier's work established comparative anatomy as a scientific discipline and, by proving that some fossil animals resembled no living ones, demonstrated that animals could become extinct, leading to the emergence of paleontology. The expanding knowledge of the fossil record also played an increasing role in the development of geology, particularly stratigraphy. The first half of the 19th century saw geological and paleontological activity become increasingly well organised with the growth of geologic societies and museums and an increasing number of professional geologists and fossil specialists. Interest increased for reasons that were not purely scientific, as geology and paleontology helped industrialists to find and exploit natural resources such as coal. This contributed to a rapid increase in knowledge about the history of life on Earth and to progress in the definition of the geologic time scale, largely based on fossil evidence. In 1822 Henri Marie Ducrotay de Blainville, editor of "Journal de Physique", coined the word "palaeontology" to refer to the study of ancient living organisms through fossils. As knowledge of life's history continued to improve, it became increasingly obvious that there had been some kind of successive order to the development of life. This encouraged early evolutionary theories on the transmutation of species. After Charles Darwin published "Origin of Species" in 1859, much of the focus of paleontology shifted to understanding evolutionary paths, including human evolution, and evolutionary theory. The last half of the 19th century saw a tremendous expansion in paleontological activity, especially in North America. 
The trend continued in the 20th century with additional regions of the Earth being opened to systematic fossil collection. Fossils found in China near the end of the 20th century have been particularly important as they have provided new information about the earliest evolution of animals, early fish, dinosaurs and the evolution of birds. The last few decades of the 20th century saw a renewed interest in mass extinctions and their role in the evolution of life on Earth. There was also a renewed interest in the Cambrian explosion that apparently saw the development of the body plans of most animal phyla. The discovery of fossils of the Ediacaran biota and developments in paleobiology extended knowledge about the history of life back far before the Cambrian. Increasing awareness of Gregor Mendel's pioneering work in genetics led first to the development of population genetics and then in the mid-20th century to the modern evolutionary synthesis, which explains evolution as the outcome of events such as mutations and horizontal gene transfer, which provide genetic variation, with genetic drift and natural selection driving changes in this variation over time. Within the next few years the role and operation of DNA in genetic inheritance were discovered, leading to what is now known as the "Central Dogma" of molecular biology. In the 1960s molecular phylogenetics, the investigation of evolutionary "family trees" by techniques derived from biochemistry, began to make an impact, particularly when it was proposed that the human lineage had diverged from apes much more recently than was generally thought at the time. Although this early study compared proteins from apes and humans, most molecular phylogenetics research is now based on comparisons of RNA and DNA.
https://en.wikipedia.org/wiki?curid=23084
Plotter A plotter produces vector graphics drawings. Plotters draw lines on paper using a pen. In the past, plotters were used in applications such as computer-aided design, as they were able to produce line drawings much faster and of a higher quality than contemporary conventional printers, and small desktop plotters were often used for business graphics. Although they retained a niche for producing very large drawings for many years, plotters have now largely been replaced by wide-format conventional printers. Digitally controlled plotters evolved from earlier fully analog XY-writers used as output devices for measurement instruments and analog computers. Pen plotters print by moving a pen or other instrument across the surface of a piece of paper. This means that plotters are vector graphics devices, rather than raster graphics as with other printers. Pen plotters can draw complex line art, including text, but do so slowly because of the mechanical movement of the pens. They are often incapable of efficiently creating a solid region of color, but can hatch an area by drawing a number of close, regular lines. Plotters offered the fastest way to efficiently produce very large drawings or color high-resolution vector-based artwork when computer memory was very expensive and processor power was very limited, and other types of printers had limited graphic output capabilities. Pen plotters have essentially become obsolete, and have been replaced by large-format inkjet printers and LED toner-based printers. Such devices may still understand vector languages originally designed for plotter use, because in many uses, they offer a more efficient alternative to raster data. Electrostatic plotters used a dry toner transfer process similar to that in many photocopiers. They were faster than pen plotters and were available in large formats, suitable for reproducing engineering drawings. The quality of image was often not as good as contemporary pen plotters. 
Electrostatic plotters were made in both flat-bed and drum types. The electrostatic plotter uses the pixel as a drawing means, like a raster graphics display device. The plotter head consists of a large number of tiny styluses (as many as 21760) embedded in it. This head traverses over the width of the paper as it rolls past the head to make a drawing. Available resolutions range from 100 to 508 dots per inch. Electrostatic plotters are very fast, with plotting speeds of 6 to 32 mm/s, depending on the plotter resolution. Cutting plotters use knives to cut into a piece of material (such as paper, mylar film, or vinyl film) lying on the flat surface area of the plotter. The cutting plotter is driven by a computer running specialized cutting-design or drawing software, which sends the cutting dimensions or designs that steer the knife. In recent years the use of cutting plotters (generally called die-cut machines) has become popular with home enthusiasts of paper crafts such as cardmaking and scrapbooking. Such tools allow desired card and decal shapes to be cut out very precisely, and repeated perfectly identically. A number of printer control languages were created to operate pen plotters, and transmit commands like "lift pen from paper", "place pen on paper", or "draw a line from here to here". Three common ASCII-based plotter control languages are Hewlett-Packard's HP-GL, its successor HP-GL/2, and Houston Instruments DMPL. 
Here is a simple HP-GL script drawing a line:

SP1; PA500,500; PD; PR0,1000; PU; SP;

This program instructs the plotter, in order, to take the first pen (SP1 = Select Pen 1), to go to coordinates X=500, Y=500 on the paper sheet (PA = Plot Absolute), to lower the pen against the paper (PD = Pen Down), to move 1000 units in the Y direction (thus drawing a vertical line – PR = Plot Relative), to lift the pen (PU = Pen Up) and finally to put it back in its stall. Programmers using FORTRAN or BASIC generally did not program these directly, but used software packages, such as the Calcomp library, or device independent graphics packages, such as Hewlett-Packard's AGL libraries or BASIC extensions or high end packages such as DISSPLA. These would establish scaling factors from world coordinates to device coordinates, and translate to the low level device commands. For example, to plot X*X in HP 9830 BASIC, the program would be

10 SCALE -1,1,1,1
20 FOR X = -1 TO 1 STEP 0.1
30 PLOT X, X*X
40 NEXT X
50 PEN
60 END

Early pen plotters, e.g., the Calcomp 565 of 1959, worked by placing the paper over a roller that moved the paper back and forth for X motion, while the pen moved back and forth on a track for Y motion. The paper was supplied in roll form and had perforations along both edges that were engaged by sprockets on the rollers. Another approach, e.g. Computervision's Interact I, involved attaching ball-point pens to drafting pantographs and driving the machines with stepper motors controlled by the computer. This had the disadvantage of being somewhat slow to move, as well as requiring floor space equal to the size of the paper, but could double as a digitizer. A later change was the addition of an electrically controlled clamp to hold the pens, which allowed them to be changed, and thus create multi-colored output. Hewlett Packard and Tektronix produced small, desktop-sized flatbed plotters in the late 1960s and 1970s. 
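The role of such graphics libraries – establishing a world-to-device scaling and emitting the low-level commands – can be sketched in modern Python. The scale factor and origin below are arbitrary illustrative values, and only the handful of HP-GL commands described above (SP, PA, PD, PU) are used:

```python
def hpgl_polyline(points, scale=1000.0, origin=(500, 500)):
    """Emit HP-GL commands drawing a polyline through world-coordinate
    points: select pen 1, move to the start, pen down, draw, pen up."""
    def to_device(p):
        x, y = p
        return round(origin[0] + x * scale), round(origin[1] + y * scale)

    x0, y0 = to_device(points[0])
    commands = ["SP1;", f"PA{x0},{y0};", "PD;"]
    commands += [f"PA{x},{y};" for x, y in map(to_device, points[1:])]
    commands += ["PU;", "SP;"]  # lift the pen and return it to its stall
    return "".join(commands)

# Plot y = x*x for x from -1 to 1, like the HP 9830 BASIC example above.
curve = [(-1 + i / 10, (-1 + i / 10) ** 2) for i in range(21)]
print(hpgl_polyline(curve))
```

A real library would also clip to the page, handle pen selection per curve, and expose axis and labelling routines, but the scaling-then-emitting structure is the essence of what those packages did.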
The pens were mounted on a traveling bar, whereby the y-axis was represented by motion up and down the length of the bar and the x-axis was represented by motion of the bar back and forth across the plotting table. Due to the mass of the bar, these plotters operated relatively slowly. In the 1980s, the small and lightweight HP 7470 introduced the "grit wheel" mechanism, eliminating the need for perforations along the edges, unlike the Calcomp plotters two decades earlier. The grit wheels at opposite edges of the sheet press against resilient polyurethane-coated rollers and form tiny indentations in the sheet. As the sheet is moved back and forth, the grit wheels keep the sheet in proper registration due to the grit particles falling into the earlier indentations, much like the teeth of two gears meshing. The pen is mounted on a carriage that moves back and forth in a line between the grit wheels, representing the orthogonal axis. These smaller "home-use" plotters became popular for desktop business graphics and in engineering laboratories, but their low speed meant they were not useful for general printing purposes, and a different conventional printer would be required for those jobs. One category, introduced by Hewlett Packard's MultiPlot for the HP 2647, was the "word chart", which used the plotter to draw large letters on a transparency. This was the forerunner of the modern PowerPoint chart. With the widespread availability of high-resolution inkjet and laser printers, inexpensive memory and computers fast enough to rasterize color images, pen plotters have all but disappeared. However, the grit wheel mechanism is still found in inkjet-based, large format engineering plotters. Plotters were also used in the Create-A-Card kiosks, based on the HP 7475 six-pen plotter, that were available for a while in the greeting card area of supermarkets. 
Plotters are used primarily in technical drawing and CAD applications, where they have the advantage of working on very large paper sizes while maintaining high resolution. Another use has been found by replacing the pen with a cutter, and in this form plotters can be found in many garment and sign shops. Changing the color or width of a line required the plotter to change pens. On small plotters this was done manually; more typically, the plotter would have a magazine of four or more pens which could be mounted automatically. A niche application of plotters is in creating tactile images for visually handicapped people on special thermal cell paper. Unlike other printer types, pen plotter speed is measured by pen speed and acceleration rate, instead of by page printing speed. A pen plotter's speed is primarily limited by the type of pen used, so the choice of pen is a key factor in pen plotter output speed. Indeed, most modern pen plotters have commands to control slewing speed, depending on the type of pen currently in use. There are many types of plotter pen, some of which are no longer mass-produced. Technical pen tips are often used, many of which can be renewed using parts and supplies for manual drafting pens. Early HP flatbed and grit wheel plotters used small, proprietary fiber-tipped or plastic nib disposable pens. One type of plotter pen uses a cellulose fiber rod inserted through a circular foam tube saturated with ink, with the end of the rod sharpened into a conical tip. As the pen moves across the paper surface, capillary wicking draws the ink from the foam, down the rod, and onto the paper. As the ink supply in the foam is depleted, the migration of ink to the tip begins to slow down, resulting in faint lines. Slowing the plotting speed will allow the lines drawn by a worn-out pen to remain dark, but the fading will continue until the foam is completely depleted. 
Also, as the fiber tip pen is used, the tip slowly wears away on the plotting medium, producing a progressively wider, smudged line. Ball-point plotter pens with refillable clear plastic ink reservoirs are available. They do not have the fading or wear effects of fiber pens, but are generally more expensive and uncommon. Also, conventional ball-point pens can be modified to work in most pen plotters. A vinyl cutter (sometimes known as a cutting plotter) is used to create posters, billboards, signs, T-shirt logos, and other weather-resistant graphical designs. The vinyl can also be applied to car bodies and windows for large, bright company advertising and to sailboat transoms. A similar process is used to cut tinted vinyl for automotive windows. Colors are limited by the collection of vinyl on hand. To prevent creasing of the material, it is stored in rolls. Typical vinyl rolls come in 15-inch, 24-inch, 36-inch and 48-inch widths, and the vinyl has a backing material for maintaining the relative placement of all design elements. Vinyl cutter hardware is similar to a traditional plotter except that the ink pen is replaced by a very sharp knife to outline each shape, and may have a pressure control to adjust how hard the knife presses down into the vinyl film, preventing the cuts from also penetrating the backing material. If the cuts penetrated the backing, the separate design elements would lose their relative placement, and loose pieces cut out of the backing material could fall out and jam the plotter roll feed or the cutter head. After cutting, the vinyl material outside of the design is peeled away, leaving the design on the backing material which can be applied using self-adhesion, glue, lamination, or a heat press. The vinyl knife is usually shaped like a plotter pen and is also mounted on a swivel head so that the knife edge self-rotates to face the correct direction as the plotter head moves. Vinyl cutters are primarily used to produce single-color line art and lettering. 
Multiple color designs require cutting separate sheets of vinyl, then overlaying them during application; but this process quickly becomes cumbersome for more than a couple of hues. Sign cutting plotters are in decline in applications such as general billboard design, where wide-format inkjet printers that use solvent-based inks are employed to print directly onto a variety of materials. Cutting plotters are still relied upon for precision contour-cutting of graphics produced by wide-format inkjet printers – for example to produce window or car graphics, or shaped stickers. Large-format inkjet printers are increasingly used to print onto heat-shrink plastic sheeting, which is then applied to cover a vehicle surface and shrunk to fit using a heat gun, known as a vehicle wrap. A static cutting table is a type of cutting plotter that uses a large flat vacuum table. It is used for cutting non-rigid and porous material such as textiles, foam, or leather, that may be too difficult or impossible to cut with roll-fed plotters. Static cutters can also cut much thicker and heavier materials than a typical roll-fed or sheet-fed plotter is capable of handling. The surface of the table has a series of small pinholes drilled in it. Material is placed on the table, and a coversheet of plastic or paper is overlaid onto the material to be cut. A vacuum pump is turned on, and air pressure pushes down on the coversheet to hold the material in place. The table then operates like a normal vector plotter, using various cutting tools to cut holes or slits into the fabric. The coversheet is also cut, which may lead to a slight loss of vacuum around the edges of the coversheet, but this loss is not significant. In the mid-to-late 2000s artists and hackers began to rediscover pen plotters as quirky, customizable output devices. The quality of the lines produced by pens on paper is quite different from other digital output techniques.
Even 30-year-old pen plotters typically still function reliably, and many were available for less than $100 on auction and resale websites. While support for driving pen plotters directly or saving files as HP-GL has disappeared from most commercial graphics applications, several contemporary software packages make working with HPGL on modern operating systems possible. As use of plotters has waned, the large-format printers that have largely replaced them have come to be called plotters as well.
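Because HP-GL is a plain-text command language, files for these old plotters can still be generated by hand. The sketch below (the helper function name and coordinates are illustrative, not from any particular package) emits the core HP-GL vocabulary — IN (initialize), SP (select pen), PU (pen up/move), PD (pen down/draw) — to draw a square:

```python
def hpgl_square(x, y, size, pen=1):
    """Return an HP-GL command string drawing a square with one
    corner at (x, y). Coordinates are in plotter units (1/40 mm
    on classic HP pen plotters)."""
    corners = [(x, y), (x + size, y), (x + size, y + size),
               (x, y + size), (x, y)]
    # Pen-down moves trace the four sides and close the square.
    strokes = ";".join(f"PD{cx},{cy}" for cx, cy in corners[1:])
    return f"IN;SP{pen};PU{x},{y};{strokes};PU0,0;SP0;"

print(hpgl_square(1000, 1000, 400))
```

The resulting string can be sent to a plotter's serial port or saved as a `.hpgl` file for the contemporary software packages mentioned above.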
https://en.wikipedia.org/wiki?curid=23085
Blind (poker) The blinds are forced bets posted by players to the left of the dealer button in flop-style poker games. The number of blinds is usually two, but it can range from none to three. The small blind is placed by the player to the left of the dealer button and the big blind is then posted by the next player to the left. The one exception is when there are only two players (a "heads-up" game), when the player on the button is the small blind, and the other player is the big blind. (Both the player and the bet may be referred to as big or small blind.) After the cards are dealt, the player to the left of the big blind is the first to act during the first betting round. If any players call the big blind, the big blind is then given an extra opportunity to raise. This is known as a "live blind". If the live blind checks, the betting round then ends. Generally, the "big blind" is equal to the minimum bet. The "small blind" is normally half the big blind. In cases where posting exactly half the big blind is impractical due to the big blind being some odd-valued denomination, the small blind is rounded (usually down) to the nearest practical value. For example, if the big blind in a live table game is $3, then the small blind will usually be $1 or $2 since most casinos do not distribute large quantities of $0.50 poker chips. The blinds exist because Omaha and Texas hold 'em are frequently played without antes, allowing a player to fold his hand without placing a bet. The blind bets introduce a regular cost to take part in the game, thus inducing a player to enter pots in an attempt to compensate for that expense. It is possible to play without blinds. The minimum bet is then the lowest denomination chip in play, and tossing only one chip is considered as a call. Anything higher than that is considered a raise. Poker without blinds is usually played with everyone posting an ante to receive cards. 
In cash games, otherwise known as ring games, blinds primarily serve to ensure all players are subject to some minimum, ongoing cost for participating in the game. This encourages players to play hands they otherwise might not, thereby increasing the average size of the pots and, by extension, increasing the amount of rake earned by the cardroom hosting the game. In cash games, the amounts of the blinds are normally fixed for each particular table and will not change for the duration of the game. However, many cardrooms will allow blind levels to change in cases where all players unanimously agree to a change. Larger cardrooms will often include tables with different blind levels to give players the option of playing at whatever stakes they are most comfortable with. In online poker, blinds range from as little as one U.S. cent to US$1,000 or more. The minimum and maximum buy-in at a table is usually set in relation to the big blind. At live games, the minimum buy-in is usually between 20 and 50 big blinds, while the maximum buy-in is usually between 100 and 250 big blinds. Some online cardrooms offer "short stack" tables where the maximum buy-in is 50 big blinds or less and/or "deep stack" tables where the minimum buy-in is 100 big blinds or more. In cash games that do not deal cards to players who are absent from the table at the start of the hand (or, in online games, are designated as "sitting out"), special rules are necessary to deal with players who miss their blinds. In such a situation, if a player misses his or her big blind, he or she will not be dealt in again until the button has passed. At that point, if the player wishes to rejoin the game, he or she must "super-post" - he or she must post both the big and small blinds in order to be dealt cards.
Of these, only the big blind is considered "live" while the small blind is "dead" - it is placed in the center of the pot apart from the big blind and will not count towards calling any additional bets or raises by other players. If the player has only missed the small blind, then the same procedure applies except that the player only has to post the "dead" small blind to rejoin the game. Most cardrooms allow players to relieve themselves of these obligations if they wait until they are again due to post the big blind before rejoining the game. Some cardrooms hosting live cash games do not allow players to miss and/or avoid paying blinds in this manner. In these games, all players with chips on the table are dealt in whether or not they are present at the table. Any blinds due will be posted from the player's stack - depending on the cardroom's rules this will be done either by the dealer, another cardroom employee or a nearby player under staff supervision. Whenever a player has not returned to the table by the time it is his turn to act, his or her hand is automatically folded. Under such rules, if a player wishes to be absent from the table then the only way he or she can avoid paying blinds is to cash out and leave the game altogether. In poker tournament play, blinds serve a dual purpose. In addition to the purpose explained above, blinds are also used to control how long the tournament will last. Before the tournament begins, the players will agree to a blinds structure, usually set by the tournament organizer. This structure defines how long each round is and how much the blinds increase per round. Typically, they are increased at a smooth rate of between 25% and 50% per round over the previous round. As the blinds increase, players need to increase their chip counts (or "stacks") to stay in the game. The blinds will eventually consume all of a player's stack if he or she does not play to win more. 
Unlike many cash games, it is not possible for a player to "miss" blinds in a tournament. If a player is absent from the table, his or her cards will continue to be dealt and mucked, and blinds and, if applicable, antes will be taken from his or her stack as they are due, either until he or she returns or until his or her stack is completely consumed by blinds and antes. A player who loses his or her chips in this manner is said to have been "blinded off." A blinds structure must balance two main goals: starting low enough that players can play meaningful poker in the early rounds, while rising quickly enough that the tournament ends within the intended time. If desired, antes can be added to further increase the pressure to win more chips. If each player in a tournament starts with 5,000 in chips and after four hours, the big blind is 10,000 (with a small blind of 5,000), it will be very difficult for a player with only 15,000 in chips to stay in the game.
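The escalation described above — blinds rising at a smooth rate per round, with the small blind kept at half the big blind — can be sketched as a simple schedule generator (the function name and the rounding-down of odd small blinds are illustrative choices, not a standard):

```python
def blind_schedule(start_big_blind, growth, rounds):
    """Return (small blind, big blind) per round, with the big blind
    growing by a fixed rate each round and the small blind at half,
    rounded down to a whole chip."""
    levels = []
    bb = start_big_blind
    for _ in range(rounds):
        levels.append((bb // 2, bb))
        bb = int(bb * (1 + growth))  # e.g. 25%-50% growth per round
    return levels

# 50% growth per round, starting at a 100 big blind
for sb, bb in blind_schedule(100, 0.50, 6):
    print(sb, bb)
```

Running this shows how quickly a fixed starting stack is consumed: after six rounds at 50% growth the big blind has grown more than sevenfold.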
https://en.wikipedia.org/wiki?curid=23124
Check-raise A check-raise in poker is a common deceptive play in which a player checks early in a betting round, hoping someone else will open. The player who checked then raises in the same round. This might be done, for example, when the first player believes that an opponent has an inferior hand and will not call a direct bet, but that they may attempt to bluff, allowing the first player to win more money than they would by betting straightforwardly. The key point is that if no one else is keen to bet, then the most a player can raise by (in a limit game) is one single bet. If someone else bets first, they can raise, thus increasing the value of the pot by two bets. In a no-limit game, there is no restriction on the size of one's bet, and a raise is likely to be much larger than the second player's bet. Of course, if no other player chooses to open, the betting will be "checked around" and the play will have failed to elicit additional money for the pot. Like a simple check, a failed check-raise provides other players an opportunity to view the next card or cards dealt without requiring the other players to commit more money to the pot. A check-raise thus contains an element of risk because the check-raising player's advantage may deteriorate when new cards are revealed. While it can be an important part of one's poker strategy, this play is not allowed by a house rule in some home games and certain small-stakes casino games. It is also frequently not allowed in the game of California lowball. In older poker material and among stud and draw poker players, it is sometimes referred to as "sandbagging". Check-raises can also be used as an intimidation technique over the course of a game; a player who has frequently been check-raised may be less likely to attempt to steal the pot. In online poker games special tracking software can be used to determine the exact percentage of times a player check-raised when they had the opportunity. 
This information helps to determine if a player who check-raised has a monster hand or is bluffing as part of their routine poker play. Not all players agree that a check-raise is an especially effective play, however. In "Super/System", poker legend Doyle Brunson claims to check-raise very rarely in no-limit hold 'em; he contends that it is more profitable to simply bet a quality hand, regardless of whether his opponent will try to bluff. His reasoning for this is twofold: First, a failed check-raise gives other players the chance to see free cards that may improve their hand; second, it makes it obvious to other players that you potentially have a very strong hand. The latter, however, may be used as a strong bluff technique, although the opponent could put in a re-raise to scare off a bluff.
https://en.wikipedia.org/wiki?curid=23134
Stripped deck A stripped deck (US) or shortened pack (UK) is a set of playing cards from which some cards have been removed. The removed cards are usually the pip cards. Many card games use stripped decks, and stripped decks for popular games are commercially available. When playing cards first arrived in Europe during the 1370s, they had the same format as the modern standard 52-card deck, consisting of four suits each with ten pip cards and three face cards. During the late 14th and 15th centuries, the Spanish and Portuguese decks dropped the 10s while the German and Swiss packs removed the Aces to create 48-card decks. It is far easier to print 48 cards using two woodblocks than 52 cards. While the removal of the above cards was motivated by manufacturing considerations, later expulsions are the result of trying to speed up card games to make them more exciting. Trappola is the first known card game to be played with a deck that was stripped for game play. It removed the 3s through 6s to create a 36-card deck. The most popular card game in 16th-century Europe was Piquet, played with a 36-card deck that dropped the 2s through 5s. Around 1700, it dropped the 6s as well to create the 32-card deck which is now the most popular format in France. 32- and 36-card decks are the most widespread in countries that were once part of the Holy Roman (the Low Countries, Germany, and Switzerland), Austro-Hungarian, and Russian empires. 24-card decks to play Schnapsen are widely available in central Europe, although they may be shortened to 20 in the future as that is how the modern variant is now commonly played. The Spanish, Portuguese, Italians, and Latin Americans use mostly 40-card decks. Unlike the countries above, they drop the higher-ranking numerals so that the 7 is located immediately under the face cards. This was due to the popularity of Ombre, the game that introduced the concept of bidding.
The British and the Scandinavians are the most resistant against stripped decks, having maintained the 52-card format since receiving it in the 15th century. The British have also propagated that deck size through whist, the most popular card game of the 19th century. In the 20th century, it was followed by contract bridge, gin rummy, canasta, and poker, which all require that deck size. The British prefer games involving four players, as opposed to the continental three-player games which use smaller decks. Asian countries also created stripped decks using their traditional playing cards. In contrast to the Western practice of removing "ranks", Asians remove "suits". During the Qing dynasty, the Chinese money-suited cards dropped one suit as rummy-type games became more popular. In India, the gambling game of Naqsha overtook the Ganjifa trick-taking game and many decks were made with only half of the traditional suits. The opposite of a stripped deck is an expanded deck. Many commercial attempts to increase the standard deck above 52 cards have failed. The most successful addition to the standard deck is the Joker, which first appeared during the American Civil War as a Euchre trump card. The Joker has since been adopted as a wild card in a few other standard playing card games with different values and quantities depending on which game is being played. 500 is a Euchre offshoot invented by the United States Playing Card Company (USPCC) during the early 20th century. To play the six-handed version, USPCC created a deck with ranks 11, 12, and 13. 500 decks are now produced by other manufacturers and are sold primarily in English-speaking countries where the game is played. A much older expanded deck is tarot, invented in 15th-century Italy, with an extra suit of trumps. Tarot card games were the most popular card games of the 18th century but have since declined.
They are still played in various continental European countries with France having the largest community. Tarot decks are not immune to stripping either. The Tarocco Bolognese, Tarocco Siciliano, Industrie und Glück, and Cego decks have excised some pip cards. A French-suited deck of 32 cards, consisting of 7, 8, 9, 10, Jack, Queen, King and Ace in four suits each, is used in the two-player game Piquet, which dates back to the 16th century. Games played with a piquet deck (or the equivalent German- or Swiss-suited decks) are still among the most popular in some parts of Europe. This includes belote and klaverjas (the national games of France and the Netherlands, respectively) and skat (the German national game, which is also played with the equivalent German-suited decks in some regions). Bezique is played with two piquet decks. Stripped decks are used in certain poker variants. The earliest form of poker was played with only 20 cards. The Australian game of Manila uses a piquet deck, and Mexican stud is played with the 8s, 9s, and 10s removed from the deck (and a joker added). This may require adjusting hand values: in both of these games, a flush ranks above a full house, because having fewer cards of each suit available makes flushes rarer. A hand such as 6-7-J-Q-K plays as a straight in Mexican stud, skipping over the removed ranks. Some places may allow a hand such as 10-9-8-7-A to play as a straight (by analogy to a wheel) in the 32-card game, the A playing low and skipping over the removed ranks (although this is not the case in Manila). Finally, the relative frequency of straights versus three of a kind is also sensitive to the deck composition (and to the number of cards dealt), so some places may consider three of a kind to be superior to a straight, but the difference is small enough that this complication is not necessary for most games. 
Similarly, a full house tends to occur more often than a flush in a piquet deck, due to the increased frequency of each playing card rank, creating a change in poker combination ranking. Five-card stud is also often played with a piquet deck. In lively home games it might work better to only strip three ranks (2s through 4s) with seven or eight players; with only two or three players 7s and 8s could be stripped as well, leaving the same 24-card deck used in euchre. In any of these cases, a flush should rank above a full house (in a 24-card deck a flush is actually rarer than four of a kind, but games are rarely played with flushes ranking above four of a kind). Stripped deck five-card stud is a game particularly well-suited to cheating by collusion, because it is easy for partners to signal a single hole card and the relative value of knowing the location of a single card is higher than with a full deck. The game of euchre is also played with a 24-card stripped deck, consisting of only 9-10-J-Q-K-A of each suit, the 2-8 being stripped from the deck. The game of pinochle is played with 48 cards, consisting of a doubled euchre deck (that is, two copies of 9-A of each suit). In some games, a small number of cards are stripped from the deck to make the deal exact. For example, it is customary to remove the 2 when three people play Hearts.
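The claim that a full house outnumbers a flush in a piquet deck can be verified by exhaustively enumerating all five-card hands from the 32 cards (a quick brute-force check, not part of any cited source):

```python
from itertools import combinations
from collections import Counter

RANKS = "789TJQKA"            # piquet deck: ranks 7 through ace
SUITS = "shdc"
deck = [(r, s) for r in RANKS for s in SUITS]   # 32 cards

flushes = full_houses = 0
for hand in combinations(deck, 5):              # C(32,5) = 201,376 hands
    suits = {s for _, s in hand}
    rank_counts = sorted(Counter(r for r, _ in hand).values())
    if len(suits) == 1:                         # all one suit (incl. straight flushes)
        flushes += 1
    elif rank_counts == [2, 3]:                 # a pair plus three of a kind
        full_houses += 1

print(flushes, full_houses)   # 224 vs 1344: full houses are six times as common
```

With only eight ranks per suit there are just 4 × C(8,5) = 224 flushes, while the denser ranks yield 8 × 4 × 7 × 6 = 1,344 full houses — the reversal of the usual ranking described above.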
https://en.wikipedia.org/wiki?curid=23145
Declaration (poker) There are several actions in poker called declaration, in which a player formally expresses his intent to take some action (which he may perform at a later point). For example, one may verbally declare an action (fold, call, raise) while in turn, which obligates the player to complete that action. One may declare a number of cards to draw in a draw poker game (which is typically not binding), or one may declare some other choice specific to the variant being played. But most commonly, the term refers to the declaration in the final phase of a high-low split game, in which players indicate whether their hands are to be evaluated as high hands, low hands, or both at showdown. This is only one option for high-low split games; the other is known as "cards speak", in which players simply reveal their hands at showdown and award the pot to the highest and lowest hands shown (possibly subject to qualifications). Cards speak is used commonly in casinos because it is the much simpler method. High-low with declaration is common in home games. First, declarations can be made either in turn or simultaneously. Games with verbal in-turn declarations (called "last raise declares") are uncommon, because the positional value of declaring last is so great. Some think that makes the game unfair. Others see it merely as strategy, making the game more interesting, because players may alter their betting in the last rounds to get the position of declaring last or after a certain player. Also, if all the other remaining players declare one way, the last player to declare can then call the other way and take half the pot regardless of the actual rank of his hand. Simultaneous declarations are commonly done by the "chips in hand" method. Each player remaining in the game takes two chips or coins below the table, then brings up a closed hand containing zero, one, or two of the chips. 
After all players have brought their closed hands above the table, they all then open their hands to reveal their choices: for example, no chips in the hand means the player is declaring "low", one chip "high", and two chips "swing" (both ways). Some games then have another round of betting after the declaration, called "bet/declare/bet", which clearly gives an advantage if there is just one person going a certain way. After declaration and showdown, half of the pot is awarded to the highest hand among those players who declared high, and half to the lowest hand among those who declared low. If no one declared in one direction, the whole pot is awarded to the other (for example, if all players declared low, the lowest hand is awarded the whole pot). If any player declared "swing", then that player must have both the high and low hands to take any part of the pot, though there are several rule variations covering the specifics. First, if the rules specify that ties are acceptable, then a player declaring swing must win or tie both directions to win anything, but if he does, he is entitled to his appropriate share. For example, if the swing player has the clearly highest hand but shares the lowest hand with another player, he wins three-fourths of the pot and the other low hand wins one-fourth. If the rules specify that ties are not acceptable, then a swing player must clearly win both directions: even a tie in one direction means he wins nothing. If a swing player fails for half the pot, the half that he would have otherwise won can be awarded either to the second-best hand in that direction, or to the player who defeated him in the other. The latter rule affords more strategic possibilities in declaration. For example, if a player declaring swing has the best high hand but loses for low (or ties for low with a no-ties rule), the whole pot is awarded to the low hand that defeated him. 
A rule must be adopted for the case where no player is eligible to win the pot (for example, if all players declare swing, and no player wins both ways). Some possible rules include playing the hand as a no-declare hand, or having the pot ride over to the next hand.
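The awarding logic after a simultaneous declaration can be sketched as follows. This is a simplified model (the function name and the numeric hand-strength inputs are my own device, and ties are assumed away); it implements the "no ties" swing rule with a failed swing's forfeited half going to the second-best hand in that direction — one of the several house rules mentioned above:

```python
def split_declared_pot(pot, players):
    """players: {name: (declaration, high_strength, low_strength)},
    declaration being 'high', 'low', or 'swing'; a larger strength
    number beats a smaller one in that direction."""
    highs = {n: h for n, (d, h, _) in players.items() if d in ("high", "swing")}
    lows = {n: l for n, (d, _, l) in players.items() if d in ("low", "swing")}

    def best(side):
        return max(side, key=side.get) if side else None

    # A swing player must win BOTH ways outright or wins nothing;
    # removing the failed swinger lets the runner-up take each half.
    for name, (decl, _, _) in players.items():
        if decl == "swing" and not (best(highs) == name and best(lows) == name):
            highs.pop(name, None)
            lows.pop(name, None)

    payouts = {n: 0 for n in players}
    if not highs and not lows:
        raise ValueError("no eligible winner; a house rule must decide")
    if not highs:                 # nobody (left) declared high: low takes it all
        payouts[best(lows)] = pot
    elif not lows:
        payouts[best(highs)] = pot
    else:
        payouts[best(highs)] += pot // 2
        payouts[best(lows)] += pot - pot // 2
    return payouts
```

For example, a swing declarer who wins high but loses low forfeits both halves here, while a swing declarer who is best both ways scoops the entire pot.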
https://en.wikipedia.org/wiki?curid=23151
Protection (poker) Protection in poker is a bet made with a strong but vulnerable hand, such as top pair when straight or flush draws are possible. The bet forces opponents with draws to either call with insufficient pot odds, or to fold, both of which are profitable for the betting player. By contrast, if he failed to protect his hand, another player could draw out on him at no cost, meaning he gets no value from his made hand. A protection play differs from a bluff in that a bluff can win "only" when the opponent folds, while a protection bet is made with a hand that is likely to win a showdown, but isn't strong enough for slow playing. The importance of protection increases when there are multiple opponents. For example, if a hand is currently the best, but each of four opponents has a 1-in-6 chance of drawing an out, the four opponents "combined" become the favorite to win, even though each one is individually an underdog. With a protection bet, some or all of them may fold, leaving fewer opponents and a better chance of winning. The term "protection" is also often heard in the context of an "all-in" player (see poker table stakes rules). A bet by an opponent serves to protect the all-in player by reducing the number of opponents the all-in player must beat. To deliberately make such a bet solely to protect another player's hand constitutes collusion. A player may also be said to "protect" his or her cards by placing an object like a specialty chip or miniature figure upon them. This prevents the player from having his cards accidentally collected by the dealer or being fouled by other players' discards.
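The four-opponent example above can be checked with a two-line calculation. Treating each opponent's 1-in-6 draw as independent is a simplification (in reality their outs come from one shared deck), but it shows why the field collectively becomes the favorite:

```python
p_single = 1 / 6        # each opponent individually hits an out 1 time in 6
opponents = 4

# Probability that AT LEAST ONE of the four opponents draws out,
# assuming independence: 1 minus the chance that all four miss.
p_any = 1 - (1 - p_single) ** opponents
print(round(p_any, 3))  # 0.518: slightly better than a coin flip against the hand
```

So although each opponent is a 5-to-1 underdog, the current best hand is itself a slight underdog against all four together, which is exactly the situation a protection bet aims to thin out.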
https://en.wikipedia.org/wiki?curid=23159
Draw (poker) A poker player is drawing if they have a hand that is incomplete and needs further cards to become valuable. The hand itself is called a draw or drawing hand. For example, in seven-card stud, if four of a player's first five cards are all spades, but the hand is otherwise weak, they are "drawing to" a flush. In contrast, a made hand already has value and does not necessarily need to draw to win. A made starting hand with no help can lose to an inferior starting hand with a favorable draw. If an opponent has a made hand that will beat the player's draw, then the player is "drawing dead"; even if they make their desired hand, they will lose. Not only draws benefit from additional cards; many made hands can be improved by catching an out — and may have to in order to win. An unseen card that would improve a drawing hand to a likely winner is an out. "Playing a drawing hand has a positive expectation if the probability of catching an out is greater than the pot odds offered by the pot." The probability of catching an out with one card to come is: outs ÷ unseen cards. The probability of catching at least one out with two cards to come is: 1 − ((unseen − outs) ÷ unseen) × ((unseen − outs − 1) ÷ (unseen − 1)). A dead out is a card that would normally be considered an out for a particular drawing hand, but should be excluded when calculating the probability of catching an out. Outs can be dead for two reasons: the card may already be held by another player or discarded and thus unavailable, or the card may complete the draw while simultaneously giving an opponent an even better hand. A flush draw, or four flush, is a hand with four cards of the same suit that may improve to a flush. For example, K♣ 9♣ 8♣ 5♣ x. A flush draw has nine outs (thirteen cards of the suit less the four already in the hand). If a player has a flush draw in Hold'em, the probability to flush the hand in the end is 34.97 percent if there are two more cards to come, and 19.57 percent (9 live cards divided by 46 unseen cards) if there is only one more card to come.
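The two formulas above, applied to the nine-out flush draw, reproduce the quoted percentages directly (a small verification sketch; the function names are mine):

```python
def p_one_card(outs, unseen):
    """Probability of catching an out with one card to come."""
    return outs / unseen

def p_two_cards(outs, unseen):
    """Probability of catching at least one out with two cards to come:
    one minus the chance of missing on both cards."""
    return 1 - ((unseen - outs) / unseen) * ((unseen - outs - 1) / (unseen - 1))

# Flush draw after the flop in hold'em: 9 outs, 47 unseen cards,
# or 46 unseen if only the river remains.
print(round(100 * p_two_cards(9, 47), 2))  # 34.97
print(round(100 * p_one_card(9, 46), 2))   # 19.57
```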
An outside straight draw, also called up and down, double-ended straight draw or open-ended straight draw, is a hand with four of the five needed cards in sequence (and could be completed on either end) that may improve to a straight. For example, x-9-8-7-6-x. An outside straight draw has eight outs (four cards to complete the top of the straight and four cards to complete the bottom of the straight). Straight draws including an ace are not outside straight draws, because the straight can only be completed on one end (has four outs). An inside straight draw, or gutshot draw or belly buster draw, is a hand with four of the five cards needed for a straight, but missing one in the middle. For example, 9-x-7-6-5. An inside straight draw has four outs (four cards to fill the missing internal rank). Because straight draws including an ace only have four outs, they are also considered inside straight draws. For example, A-K-Q-J-x or A-2-3-4-x. The probability of catching an out for an inside straight draw is half that of catching an out for an outside straight draw. A double inside straight draw, or double gutshot draw or double belly buster draw can occur when either of two ranks will make a straight, but both are "inside" draws. For example in 11-card games, 9-x-7-6-5-x-3, or 9-8-x-6-5-x-3-2, or in Texas Hold'em when holding 9-J hole cards on a 7-10-K flop. The probability of catching an out for a double inside straight draw is the same as for an outside straight draw. Sometimes a made hand needs to draw to a better hand. For example, if a player has two pair or three of a kind, but an opponent has a straight or flush, to win the player must draw an out to improve to a full house (or four of a kind). There are a multitude of potential situations where one hand needs to improve to beat another, but the expected value of most drawing plays can be calculated by counting outs, computing the probability of winning, and comparing the probability of winning to the pot odds. 
A backdoor draw, or runner-runner draw, is a drawing hand that needs to catch two outs to win. For example, a hand with three cards of the same suit has a "backdoor flush draw" because it needs two more cards of the suit. The probability of catching two outs with two cards to come is: (outs ÷ unseen) × ((outs − 1) ÷ (unseen − 1)). For example, if after the flop in Texas hold 'em, a player has a backdoor flush draw (e.g., three spades), the probability of catching two outs on the turn and river is (10 ÷ 47) × (9 ÷ 46) = 4.16 percent. Backdoor draws are generally unlikely; with 43 unseen cards, it is equally likely to catch two out of seven outs as to catch one out of one. A backdoor outside straight draw (such as J-10-9) is equally likely as a backdoor flush, but any other 3-card straight combination is not worth even one out. A player is said to be "drawing dead" when the hand he hopes to complete will nonetheless lose to a player who already has a better one. For example, drawing to a straight or flush when the opponent already has a full house. In games with community cards, the term can also refer to a situation where no possible additional community card draws results in a win for a player. (This may be because another player has folded the cards that would complete his hand, his opponent's hand is already stronger than any hand he can possibly draw to or that the card that completes his hand also augments his opponent's.)
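The runner-runner arithmetic above can likewise be checked in a couple of lines (function name illustrative):

```python
def p_runner_runner(outs, unseen):
    """Probability of catching two outs on the next two cards:
    one out on the first card, then one of the remaining outs
    on the second card from the now-smaller stub."""
    return (outs / unseen) * ((outs - 1) / (unseen - 1))

# Backdoor flush draw after the flop: 10 suited cards left among 47 unseen.
print(round(100 * p_runner_runner(10, 47), 2))  # 4.16
```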
https://en.wikipedia.org/wiki?curid=23160
Out (poker) In a poker game with more than one betting round, an out is any unseen card that, if drawn, will improve a player's hand to one that is likely to win. Knowing the number of outs a player has is an important part of poker strategy. For example, in draw poker, a hand with four diamonds has nine outs to make a flush: there are 13 diamonds in the deck, and four of them have been seen. If a player has two small pairs, and he believes that it will be necessary for him to make a full house to win, then he has four outs: the two remaining cards of each rank that he holds. One's number of outs is often used to describe a drawing hand: "I had a two-outer" meaning you had a hand that only two cards in the deck could improve to a winner, for example. In draw poker, one also hears the terms "12-way" or "16-way" straight draw for hands such as 6♥ 7♥ 8♠ (Joker), in which any of sixteen cards (4 fours, 4 fives, 4 nines, 4 tens) can fill a straight. The number of outs can be converted to the probability of making the hand on the next card by dividing the number of outs by the number of unseen cards. For example, say a Texas Holdem player holds two spades, and two more appear in the flop. He has seen five cards (regardless of the number of players, as there are no upcards in Holdem except the board), of which four are spades. He thus has 9 outs for a flush out of 47 cards yet to be drawn, giving him a 9/47 chance to fill his flush on the turn. If he fails on the turn, he then has a 9/46 chance to fill on the river. Calculating the combined odds of filling on "either" the turn or river is more complicated: it is (1 - ((38/47) * (37/46))), or about 35%. A common approximation used is to double the number of outs and add one for the percentage to hit on the next card, or to multiply outs by four for the either-of-two case. This approximation works out to within a 1% error margin for up to 14 outs. 
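The "double plus one" and "times four" approximations described above can be compared against the exact values for a few out counts (a small check I added; the exact figures assume the post-flop hold'em situation of 47 unseen cards with two to come, or 46 with one to come):

```python
def exact_one_card(outs, unseen=46):
    """Exact percentage of hitting on the next card."""
    return 100 * outs / unseen

def exact_two_cards(outs, unseen=47):
    """Exact percentage of hitting on either of the next two cards."""
    return 100 * (1 - ((unseen - outs) / unseen)
                      * ((unseen - outs - 1) / (unseen - 1)))

for outs in (4, 9, 14):
    print(outs,
          round(exact_one_card(outs), 1), 2 * outs + 1,  # exact vs "double plus one"
          round(exact_two_cards(outs), 1), 4 * outs)     # exact vs "times four"
```

For the nine-out flush draw this prints 19.6 against an estimate of 19, and 35.0 against 36 — close enough for table use, with the gap widening at the high end of the out counts.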
Note that the hidden cards of a player's opponents may affect the calculation of outs. For example, assume that a Texas hold 'em board looks like this after the third round: 5♠ K♦ 7♦ J♠, and that a player is holding A♦ 10♦. The player's current hand is just a high ace, which is not likely to win unimproved, so the player has a drawing hand. He has a minimum of nine outs for certain, called "nut outs", because they will make his hand the best possible: those are the 2♦, 3♦, 4♦, 6♦, 8♦, 9♦, and Q♦ (which will give him an ace-high flush with no possible better hand on the board) and the Q♣ and Q♥, which will give him an ace-high straight with no higher hand possible. The 5♦ and J♦ will also make him an ace-high flush, so those are "possible outs" since they give him a hand that is likely to win, but they also make it possible for an opponent to have a full house (if the opponent has something like K♠ K♣, for example). Likewise, the Q♠ will fill his ace-high straight, but will also make it possible for an opponent to have a spade flush. It is possible that an opponent could have as little as something like 7♣ 9♣ (making a pair of sevens); in this case even catching any of the three remaining aces or tens will give the player a pair to beat the opponent's, so those are even more "potential outs". In sum, the player has 9 guaranteed outs, and possibly as many as 18, depending on what cards he expects his opponents to have.
https://en.wikipedia.org/wiki?curid=23162
Pot odds In poker, pot odds are the ratio of the current size of the pot to the cost of a contemplated call. Pot odds are often compared to the probability of winning a hand with a future card in order to estimate the call's expected value. Odds are most commonly expressed as ratios, but converting them to percentages often makes them easier to work with. The ratio has two numbers: the size of the pot and the cost of the call. To convert this ratio to the equivalent percentage, the two numbers are added together and the cost of the call is divided by the sum. For example, suppose the pot is $30 and the cost of the call is $10. The pot odds in this situation are 30:10, or 3:1 when simplified. To get the percentage, 30 and 10 are added to get a sum of 40, and then 10 is divided by 40, giving 0.25, or 25%. To convert any percentage or fraction to the equivalent odds, the numerator is subtracted from the denominator and then this difference is divided by the numerator. For example, to convert 25%, or 1/4, 1 is subtracted from 4 to get 3 (or 25 from 100 to get 75) and then 3 is divided by 1 (or 75 by 25), giving 3, or 3:1. When a player holds a drawing hand (a hand that is behind now but is likely to win if a certain card is drawn), pot odds are used to determine the expected value of that hand when the player is faced with a bet. The expected value of a call is determined by comparing the pot odds to the odds of drawing a card that wins the pot. When the odds of drawing a card that wins the pot are numerically higher than the pot odds, the call has a positive expectation; on average, a portion of the pot that is greater than the cost of the call is won. Conversely, if the odds of drawing a winning card are numerically lower than the pot odds, the call has a negative expectation, and the expectation is to win less money on average than it costs to call the bet.
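Both conversions above can be sketched in a few lines (the function names are illustrative):

```python
from math import gcd

def pot_odds_ratio(pot, call):
    """Simplify pot:call, e.g. 30:10 -> 3:1."""
    g = gcd(pot, call)
    return pot // g, call // g

def ratio_to_percent(pot, call):
    """Percentage form: the call divided by (pot + call)."""
    return call / (pot + call)

def percent_to_odds(p):
    """Odds-against form of a probability: (1 - p) : p, returned as a float."""
    return (1 - p) / p

print(pot_odds_ratio(30, 10))    # (3, 1)
print(ratio_to_percent(30, 10))  # 0.25
print(percent_to_odds(0.25))     # 3.0
```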
Implied pot odds, or simply implied odds, are calculated the same way as pot odds, but take into consideration estimated future betting. Implied odds are calculated in situations where the player expects to fold in the following round if the draw is missed, thereby losing no additional bets, but expects to gain additional bets when the draw is made. Since the player expects to always gain additional bets in later rounds when the draw is made, and never lose any additional bets when the draw is missed, the extra bets that the player expects to gain, excluding his own, can fairly be added to the current size of the pot. This adjusted pot value is known as the implied pot. On the turn, Alice's hand is certainly behind, and she faces a $1 call to win a $10 pot against a single opponent. There are four cards remaining in the deck that make her hand a certain winner. Her probability of drawing one of those cards is therefore 4/47 (8.5%), which when converted to odds is 10.75:1. Since the pot lays 10:1 (9.1%), Alice will on average lose money by calling if there is no future betting. However, Alice expects her opponent to call her additional $1 bet on the final betting round if she makes her draw. Alice will fold if she misses her draw and thus lose no additional bets. Alice's implied pot is therefore $11 ($10 plus the expected $1 call to her additional $1 bet), so her implied pot odds are 11:1 (8.3%). Her call now has a positive expectation. Reverse implied pot odds, or simply reverse implied odds, apply to situations where a player will win the minimum if holding the best hand but lose the maximum if not having the best hand. Aggressive actions (bets and raises) are subject to reverse implied odds, because they win the minimum if they win immediately (the current pot), but may lose the maximum if called (the current pot plus the called bet or raise). 
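Alice's decision above reduces to a one-line expected-value calculation; a minimal sketch (the function name is illustrative):

```python
def call_ev(pot, call, outs, unseen):
    """Expected value of calling with a drawing hand and no future betting:
    win the pot when the draw hits, lose the call when it misses."""
    p = outs / unseen
    return p * pot - (1 - p) * call

# Alice's turn decision from the text: $10 pot, $1 call, 4 outs of 47.
print(round(call_ev(10, 1, 4, 47), 3))   # -0.064: calling loses money
# Crediting the extra $1 she expects to win when she hits (implied pot $11):
print(round(call_ev(11, 1, 4, 47), 3))   # 0.021: the call is now profitable
```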
These situations may also occur when a player has a made hand with little chance of improving what is believed to be currently the best hand, but an opponent continues to bet. An opponent with a weak hand is likely to give up after the player calls and not call any bets the player makes. An opponent with a superior hand will, on the other hand, continue, extracting additional bets or calls from the player. With one card to come, Alice holds a made hand with little chance of improving and faces a $10 call to win a $30 pot. If her opponent has a weak hand or is bluffing, Alice expects no further bets or calls from her opponent. If her opponent has a superior hand, Alice expects the opponent to bet another $10 on the end. Therefore, if Alice wins, she only expects to win the $30 currently in the pot, but if she loses, she expects to lose $20 ($10 call on the turn plus $10 call on the river). Because she is risking $20 to win $30, Alice's reverse implied pot odds are 1.5-to-1 ($30/$20) or 40 percent (1/(1.5+1)). For calling to have a positive expectation, Alice must believe the probability of her opponent having a weak hand is over 40 percent. Often a player will bet to manipulate the pot odds offered to other players. A common example of manipulating pot odds is to make a bet to protect a made hand that discourages opponents from chasing a drawing hand. With one card to come, Bob has a made hand, but the board shows a potential flush draw. Bob wants to bet enough to make it wrong for an opponent with a flush draw to call, but Bob does not want to bet more than he has to in the event the opponent already has him beat. Assuming a $20 pot and one opponent, if Bob bets $10 (half the pot), when his opponent acts, the pot will be $30 and it will cost $10 to call. The opponent's pot odds will be 3-to-1, or 25 percent.
If the opponent is on a flush draw (9/46, approximately 19.565 percent or 4.11-to-1 odds against with one card to come), the pot is not offering adequate pot odds for the opponent to call unless the opponent thinks they can induce additional final round betting from Bob if the opponent completes their flush draw (see implied pot odds). A bet of $6.43, resulting in pot odds of 4.11-to-1, would make his opponent mathematically indifferent to calling if implied odds are disregarded. According to David Sklansky, game theory shows that a player should bluff a percentage of the time equal to his opponent's pot odds to call the bluff. For example, in the final betting round, if the pot is $30 and a player is contemplating a $30 bet (which will give his opponent 2-to-1 pot odds for the call), the player should bluff half as often as he would bet for value (one out of three times). However, this conclusion does not take into account some of the context of specific situations. A player's bluffing frequency often accounts for many different factors, particularly the tightness or looseness of their opponents. Bluffing against a tight player is more likely to induce a fold than bluffing against a loose player, who is more likely to call the bluff. Sklansky's strategy is an equilibrium strategy in the sense that it is optimal against someone playing an optimal strategy against it.
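Bob's indifference bet can be derived by solving the pot-odds equation directly; a minimal sketch (the function name is illustrative):

```python
def indifference_bet(pot, outs, unseen):
    """Bet size b that makes a drawing opponent's pot odds exactly equal
    the odds of completing the draw. The caller risks b to win pot + b,
    so indifference means outs/unseen = b / (pot + 2*b); solving gives
    b = p*pot / (1 - 2*p) with p = outs/unseen."""
    p = outs / unseen
    return p * pot / (1 - 2 * p)

# Bob's $20 pot against a flush draw with 9 outs of 46 unseen cards:
print(round(indifference_bet(20, 9, 46), 2))  # 6.43
```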
https://en.wikipedia.org/wiki?curid=23163
Position (poker) Position in poker refers to the order in which players are seated around the table and the related poker strategy implications. Players who act first are in "early position"; players who act later are in "late position"; players who act in between are in "middle position". A player "has position" on opponents acting before him and is "out of position" to opponents acting after him. Because players act in clockwise order, a player "has position" on opponents seated to his right, except when the opponent has the button and in certain cases in the first betting round of games with blinds. The primary advantage held by a player in late position is that he will have more information with which to make better decisions than players in early position, who have to act first, without the benefit of this extra information. This positional advantage leads many players in heads-up play to raise on the button with an extremely wide range of hands. Also, as earlier opponents fold, the probability of a hand being the best goes up as the number of opponents goes down. The blinds are the least desirable positions because a player is forced to contribute to the pot and must act first on all betting rounds after the flop. Although the big blind has an advantage on the first round of betting, it is on average the biggest money-losing position. Suppose there are 10 players playing $4/$8 fixed limit. Alice pays the $2 small blind. Bob pays the $4 big blind. Carol is under the gun (first to act). If Carol has a hand like K♥ J♠, she may choose to fold. With 9 opponents remaining to act, there is approximately a 40% chance that at least one of them will have a better hand than Carol's, like A-A, K-K, Q-Q, J-J, A-K, A-Q, A-J or K-Q. And even if no one does, seven of them (all but the two players in the blinds) will have position on Carol in the next three betting rounds.
Now instead, suppose David in the cut-off position (to the right of the button) has the same K♥ J♠ and all players fold to him. In this situation, there are only three opponents left to act, so the odds that one of them has a better hand are considerably less (only around 16%). Secondly, two of those three (Alice and Bob) will be out of position to David on later betting rounds. A common play would be for David to raise and hope that the button (the only player who has position on David) folds. David's raise might simply steal the blinds if they don't have playable hands, but if they do play, David will be in good shape to take advantage of his position in later betting rounds.
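The probabilities quoted in this example can be reproduced by enumeration; a sketch, assuming a two-character rank-plus-suit card encoding and treating opponents' hands as independent (a close approximation):

```python
from itertools import combinations

RANKS = "23456789TJQKA"
SUITS = "shdc"
deck = [r + s for r in RANKS for s in SUITS]

# Remove Carol's king of hearts and jack of spades.
for card in ("Kh", "Js"):
    deck.remove(card)

PREMIUM = {("A", "A"), ("K", "K"), ("Q", "Q"), ("J", "J"),
           ("A", "K"), ("A", "Q"), ("A", "J"), ("K", "Q")}

def is_premium(c1, c2):
    # Order the two ranks high-to-low before looking them up.
    ranks = tuple(sorted((c1[0], c2[0]), key=RANKS.index, reverse=True))
    return ranks in PREMIUM

combos = list(combinations(deck, 2))
p = sum(is_premium(a, b) for a, b in combos) / len(combos)  # ~0.057 per opponent

# Chance that at least one of n opponents was dealt a premium hand.
for opponents in (9, 3):
    print(opponents, round(1 - (1 - p) ** opponents, 3))  # 9 -> 0.411, 3 -> 0.162
```

The two results match the roughly 40% figure for Carol under the gun and the roughly 16% figure for David in the cut-off.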
https://en.wikipedia.org/wiki?curid=23165
Dead money (poker) In poker, dead money is the amount of money in the pot other than the equal amounts bet by active remaining players in that pot. Examples of dead money include money contributed to the pot by players who have folded, a dead blind posted by a player returning to a game after missing blinds, or an odd chip left in the pot from a previous deal. For example, eight players each ante $1, one player opens for $2, and gets two callers, making the pot total $14. Three players are now in the pot having contributed $3 each, for $9 "live" money; the remaining $5 (representing the antes of the players who folded) is dead money. The amount of dead money in a pot affects the pot odds of plays or rules of thumb that are based on the number of players. The term "dead money" is also used in a derogatory sense to refer to money put in the pot by players who are still legally eligible to win it, but who are unlikely to do so because they are unskilled, increasing the expected return of other players. This can also be applied to the player himself: "Let's invite John every week; he's dead money". The term "dead money" also applies in tournaments, when many casual players enter events with virtually no chance of winning.
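The accounting in the example above can be sketched as (the function name is illustrative):

```python
def dead_money(pot, active_players, per_player):
    """Dead money: everything in the pot beyond the equal amounts
    contributed by the players still contesting it."""
    return pot - active_players * per_player

# Eight $1 antes, a $2 open and two callers: a $14 pot with three
# players in for $3 each ($1 ante + $2 bet), so $9 is live money.
pot = 8 * 1 + 3 * 2
print(pot, dead_money(pot, active_players=3, per_player=3))  # 14 5
```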
https://en.wikipedia.org/wiki?curid=23169
Freeroll In poker, a freeroll tournament is a tournament with no entry fee, and a freeroll hand is where a player is guaranteed to at least split the pot with his opponent, with a chance they can win the whole pot if certain final cards are dealt. In playing a particular hand of poker, a freeroll is a situation that arises (usually when only two players remain) before the last card has been dealt, in which one player is guaranteed to at least split the pot with his opponent no matter what the final cards are, but where there is some chance he can win the whole pot if certain final cards are dealt. This most commonly occurs in a high-low split game where one player knows that he has a guaranteed low hand made, his opponent cannot make a better low no matter what the last card is, but the player who is low might possibly catch a lucky card that gives him a straight or flush, winning high as well. Here's an example from Texas hold'em: Angie holds , and Burt holds . After the fourth card is dealt, the board is . Both players have an ace-high straight, the current nut hand, and so they will most likely split the pot. But if the final card happens to be a club, Burt's straight will lose to Angie's flush. There is no other possible final card that will give Burt more than a straight; only Angie can improve, so she is "freerolling" Burt. If a player knows he has a freeroll, he can raise the pot with impunity, and often a less-skilled opponent with a good hand who does not realize that he is on the wrong end of the freeroll will continue to put in raises with no possible hope of gain. A freeroll tournament is a tournament with no entry fee, although some freerolls require a payment at some point to gain entry to the tournament. In a typical pay-to-play tournament, the prize pool consists of an accumulation of the entry fees minus a "fee" which is retained by the house. 
In a freeroll (at least from the players' perspective) the prize pool is essentially a "donation" provided by the house. Of course, in most freerolls the house is able to defray a significant portion of the prize pool (or even turn a profit) by charging for food and beverages, sponsorship fees, admission for spectators, broadcast rights fees, or any combination of these. Sometimes a particular cardroom or casino (either traditional or online) will offer a freeroll tournament to frequent players. Invitation-only tournaments are frequently freerolls. Freerolls at Internet poker sites should not be confused with their close counterpart, play-money tournaments. Freerolls differ from play-money tournaments in two respects: play-money tournaments usually require the 'payment' of play money, and the tournament winnings are play money. Freeroll tournaments can be genuinely free, may require a payment of points (from a point system developed by the site), or on some occasions require a deposit of funds into the player's account. The winnings are either real money, points, merchandise or entry tickets (invitations) to other tournaments. Most if not all Internet poker sites have freeroll tournaments, although many require a payment of points to play. These points typically can only be earned by playing real-money hands, which in essence makes a payment a requirement to play such 'freerolls' and is therefore a loose use of the term. There are Internet sites that allow playing in freerolls without payment of any kind and with the chance to win real money. It is not unusual to pay to play in a feeder tournament that gives the winner(s) a free entry to another tournament, but it is debatable whether these second-level tournaments can be called 'freerolls', since they require a buy-in, albeit smaller than the major tournament's. More often, such tournaments are called 'satellites'.
This format is typical of freeroll tournaments both on the Internet and in the 'brick and mortar' sites. The Professional Poker Tour is one such 'freeroll', with entrants being required to qualify through their results in previous tournaments. Sponsorship and broadcast-rights fees fund the prize pools. Freeroll tournaments are not exclusive to poker. Casinos frequently offer them to frequent and/or high-value players in games such as craps, blackjack, video poker and slot machines. Many believe the term comes from early 1950s Las Vegas, when guests would often be given a "free roll" of nickels to play at the slot machines upon check-in. Guests would often ask for their "free rolls" and the words became fused together and expanded to mean any complimentary gaming bonus.
https://en.wikipedia.org/wiki?curid=23172
Omaha hold 'em Omaha hold 'em (also known as Omaha holdem or simply Omaha) is a community card poker game similar to Texas hold 'em, where each player is dealt four cards and must make his or her best hand using exactly two of them, plus exactly three of the five community cards. The exact origin of the game is unknown, but casino executive Robert Turner first brought Omaha into a casino setting when he introduced the game to Bill Boyd, who offered it as a game at the Las Vegas Golden Nugget Casino (calling it "Nugget Hold'em"). Omaha uses a 52-card French deck. Limit Omaha hold 'em 8-or-better is the "O" game featured in H.O.R.S.E. Both limit Omaha/8 and pot limit Omaha high are featured in the 8-Game. Omaha hold 'em derives its name from two types of games. In the original Omaha poker game, players were only dealt two hole cards and had to use both to make a hand combined with community cards. This version of Omaha is defined in the glossary of "Super/System" (under Omaha) as being interchangeable with "Tight hold 'em". Across all the variations of the game, the requirement of using exactly two hole cards is the only consistent rule. The "Omaha" part of the name represents this aspect of the game. "Hold'em" refers to a game using community cards that are shared by all players. This is opposed to draw games, where each player's hand is composed only of hole cards, and stud games, where each player hand contains a mix of non-community cards that are visible to the other players and concealed hole cards. In North American casinos, the term "Omaha" can refer to several poker games. The original game is also commonly known as "Omaha high". A high-low split version called "Omaha Hi-Lo", or sometimes "Omaha eight-or-better" or "Omaha/8", is also played. In Europe, "Omaha" still typically refers to the high version of the game, usually played pot-limit. Pot-limit Omaha is often abbreviated as "PLO." 
Pot-limit and no-limit Omaha eight-or-better can be found in some casinos and online, though no-limit is rarer. It is often said that Omaha is a game of "the nuts", i.e. the best possible high or low hand, because it frequently takes "the nuts" to win a showdown. It is also a game where, between the cards in his hand and the community cards, a player may have drawing possibilities to multiple different types of holdings. For example, a player may have both a draw to a flush and a full house using different combinations of cards. At times, even seasoned players may need additional time to figure out what draws are possible for their hand. The basic differences between Omaha and Texas hold 'em are these: first, each player is dealt four hole cards instead of two. The betting rounds and layout of community cards are identical. At showdown, each player's hand is the best five-card hand made from "exactly three" of the five cards on the board, plus "exactly two" of the player's own cards. Unlike Texas hold 'em, a player cannot play four or five of the cards on the board with fewer than two of his own, nor can a player use three or four hole cards to disguise a strong hand. In Omaha hi-low split-8 or better (simply Omaha/8), each player makes a separate five-card high hand and five-card ace-to-five low hand (eight-high or lower to qualify), and the pot is split between the high and low (which may be won by the same player). To qualify for low, a player must be able to play an 8-7-6-5-4 or lower (this is why it is called "eight or better"). A few casinos play with a 9-low qualifier instead, but this is rare. Each player can play any two of his four hole cards to make his high hand, and any two of his four hole cards to make his low hand. If there is no qualifying low hand, the high hand wins ("scoops") the whole pot. This game is usually played in the fixed-limit version, although pot-limit Omaha/8 is becoming more popular.
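The "exactly two plus exactly three" rule can be made concrete by enumerating a player's candidate hands; a minimal sketch with illustrative cards (hand evaluation itself is omitted):

```python
from itertools import combinations

def omaha_five_card_hands(hole, board):
    """Every five-card hand a player may legally make in Omaha:
    exactly two of the four hole cards plus exactly three of the
    five board cards."""
    return [two + three
            for two in combinations(hole, 2)
            for three in combinations(board, 3)]

# Illustrative cards in a rank-plus-suit encoding.
hands = omaha_five_card_hands(("As", "Kd", "7c", "2h"),
                              ("Qs", "Js", "Ts", "3d", "3c"))
print(len(hands))  # 60 candidate hands: C(4,2) * C(5,3) = 6 * 10
```

An evaluator would score each of the 60 candidates and keep the best; this is why Omaha hands can take longer to read than hold 'em hands.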
A few low-stakes online tournaments feature no-limit Omaha/8. The brief explanation above belies the complexity of the game, so a number of examples will be useful here to clarify it. The table below shows a five-card board of community cards at the end of play, and then lists for each player the initial private four-card hand dealt to him or her, and the best five-card high hand and low hand each player can play on showdown: In the deal above, Chris wins the high-hand half of the pot with his J-high straight, and Bryan and Eve split the low half (getting a quarter of the pot each) with 7-5-3-2-A. Pot-limit Omaha (frequently shortened to PLO) is popular in Europe, online, and in high-stakes "mixed games" played in some American casinos. This variant is more often played high only, but can also be played high-low. To a still greater degree than in Limit Omaha Hi-Lo, PLO is a game of drawing, when drawing, to the nut hand. Second-best flushes and straights can be, and frequently become, losing hands, especially when a player is willing to commit their entire stack to the pot. Furthermore, because of the exponential growth of the pot size in pot-limit play, seeing one of these hands to the end can be very expensive and carry immense reverse implied odds. In poker, an out is any unseen card in the deck that will give a player the best hand. A wrap is a straight draw with nine or more outs, so called because the player's hole cards are said to wrap around the board cards. In hold 'em, where players have two hole cards, the greatest number of straight outs possible is eight; however, in Omaha, there are four hole cards, which can result in straight draws with up to 20 outs. An example of a twenty-out wrap is on a flop of . To hit a straight, any of the following cards is needed: . A desirable hand to have in PLO is the current best hand with a redraw.
For example, if the board is , and the player has , then not only do they have the current best hand possible (their ace-king makes the ace-high straight), but they also have a redraw with the two queens in their hand, because if the board pairs, they will make a full house, or four queens. would be an even better hand because it has flush and royal flush redraws as well. In fact, with the board, is approximately an 80-20 money favorite over a random hand containing ace-king (see freerolling). Even a pair of queens with any two spades is better than 55-45 against a random ace-king hand. The most common variations of pot-limit Omaha high are Five-card Omaha, commonly referred to as "Big O", which is very popular in the Southeastern United States as a home game, and Six-card Omaha or 6-O, which can be found in many casinos across the UK. Some online poker rooms support Five-card Omaha, Six-card Omaha and Courchevel. "Big O" (occasionally called Five-card Omaha or 5-O) began appearing in Southern California in 2008, and had spread to most of the card rooms in the area by the end of the year. Sometimes the high-low split game is played with a 9-high or a 7-high qualifier instead of 8-high. It can also be played with five cards dealt to each player instead of four. In that case, the same rules for making a hand apply: exactly two from the player's hand, and exactly three from the board. Courchevel is named after the high-end ski resort in the French Alps, near the Italian border. According to urban legend, bored tourists wanted to play a version of poker no one had ever played before, so they came up with this game. The place where Courchevel was most commonly played was the Aviation Club de France in Paris. That casino is now closed. In the game of Courchevel, players are dealt five hole cards rather than four. Simultaneously, the first community card is dealt.
Following an opening round of betting, two additional community cards are dealt, creating a 3-card flop, where the structure of the game is then identical to standard Omaha. Still, exactly two of the five hole cards must be used. Courchevel is popular in France, and its popularity has expanded into other parts of Europe, particularly the United Kingdom. Courchevel is also available in a hi-low 8-or-better variety, and while Courchevel is rarely offered on any of the major online poker sites, as of 2019 hi-low sit-and-go games at the micro-stakes level can be found taking place several times a day on PokerStars, which has offered the game since 2013.
https://en.wikipedia.org/wiki?curid=23174
Shuffling Shuffling is a procedure used to randomize a deck of playing cards to provide an element of chance in card games. Shuffling is often followed by a cut, to help ensure that the shuffler has not manipulated the outcome. One of the easiest shuffles to accomplish after a little practice is the overhand shuffle. Johan Jonasson wrote, "The overhand shuffle... is the shuffling technique where you gradually transfer the deck from, say, your right hand to your left hand by sliding off small packets from the top of the deck with your thumb." In detail as normally performed, with the pack initially held in the left hand (say), most of the cards are grasped as a group from the bottom of the pack between the thumb and fingers of the right hand and lifted clear of the small group that remains in the left hand. Small packets are then released from the right hand a packet at a time so that they drop on the top of the pack accumulating in the left hand. The process is repeated several times. The randomness of the whole shuffle is increased by the number of small packets in each shuffle and the number of repeat shuffles performed. The overhand shuffle offers sufficient opportunity for sleight of hand techniques to be used to affect the ordering of cards, creating a stacked deck. The most common way that players cheat with the overhand shuffle is by having a card at the top or bottom of the pack that they require, and then slipping it to the bottom at the start of a shuffle (if it was on top to start), or leaving it as the last card in a shuffle and just dropping it on top (if it was originally on the bottom of the deck). A common shuffling technique is called the "riffle," or "dovetail" shuffle or "leafing the cards", in which half of the deck is held in each hand with the thumbs inward, then cards are released by the thumbs so that they fall to the table interleaved. 
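A simple model of the overhand shuffle described above can be sketched in code; the 1-to-5-card packet size is an arbitrary modelling assumption, and the function name is illustrative:

```python
import random

def overhand(deck, rng=random):
    """One overhand shuffle: slide small packets off the top of the
    deck onto a new pile, so packet order is reversed while the order
    within each packet is preserved."""
    pile = []
    i = 0
    while i < len(deck):
        k = rng.randint(1, 5)        # a small packet of 1-5 cards
        pile = deck[i:i + k] + pile  # the new packet lands on top
        i += k
    return pile

random.seed(1)
shuffled = overhand(list(range(52)))
print(sorted(shuffled) == list(range(52)))  # True: still a permutation
```

Because whole packets keep their internal order, a single overhand pass leaves long runs of the original sequence intact, which is why many passes are needed.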
Many also lift the cards up after a riffle, forming what is called a bridge which puts the cards back into place; it can also be done by placing the halves flat on the table with their rear corners touching, then lifting the back edges with the thumbs while pushing the halves together. While this method is more difficult, it is often used in casinos because it minimizes the risk of exposing cards during the shuffle. There are two types of perfect riffle shuffles: if the top card moves to be second from the top then it is an in shuffle, otherwise it is known as an out shuffle (which preserves both the top and bottom cards). The Gilbert–Shannon–Reeds model provides a mathematical model of the random outcomes of riffling which has been shown experimentally to be a good fit to human shuffling, and which forms the basis for a recommendation that card decks be riffled seven times in order to randomize them thoroughly. Later, mathematicians Lloyd M. Trefethen and Lloyd N. Trefethen authored a paper using a tweaked version of the Gilbert–Shannon–Reeds model showing that the minimum number of riffles for total randomization could also be six, if the method of defining randomness is changed. The Indian shuffle, also known as the "Kattar", "Kenchi" (Hindi for scissors) or "Kutti" shuffle, is performed as follows. The deck is held face down, with the middle finger on one long edge and the thumb on the other on the bottom half of the deck. The other hand draws off a packet from the top of the deck. This packet is allowed to drop into the palm. The maneuver is repeated over and over, with newly drawn packets dropping onto previous ones, until the deck is all in the second hand. The Indian shuffle differs from stripping in that all the action is in the hand "taking" the cards, whereas in stripping, the action is performed by the hand with the original deck, "giving" the cards to the resulting pile.
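The Gilbert–Shannon–Reeds riffle mentioned above is straightforward to simulate; a minimal sketch, with an illustrative function name:

```python
import random

def gsr_riffle(deck, rng=random):
    """One riffle under the Gilbert-Shannon-Reeds model: cut the deck at
    a Binomial(n, 1/2) point, then drop cards from the two packets with
    probability proportional to each packet's remaining size."""
    cut = sum(rng.random() < 0.5 for _ in range(len(deck)))
    left, right = list(deck[:cut]), list(deck[cut:])
    shuffled = []
    while left or right:
        # Choose a packet in proportion to how many cards it still holds.
        if rng.random() * (len(left) + len(right)) < len(left):
            shuffled.append(left.pop(0))
        else:
            shuffled.append(right.pop(0))
    return shuffled

random.seed(0)
deck = list(range(52))
for _ in range(7):  # the commonly recommended seven riffles
    deck = gsr_riffle(deck)
print(sorted(deck) == list(range(52)))  # True: still a permutation of the deck
```

Note that each riffle preserves the relative order within the two cut packets, which is exactly the structure the rising-sequence analyses of this model exploit.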
The Indian shuffle is the most common shuffling technique in Asia and other parts of the world, while the overhand shuffle is primarily used in Western countries. In the pile shuffle, cards are simply dealt out into a number of piles, then the piles are stacked on top of each other. Though this is deterministic and does not randomize the cards at all, it ensures that cards that were next to each other are now separated. Some variations on the pile shuffle attempt to make it slightly random by dealing to the piles in a random order each circuit. The wash, also known as the Chemmy, Irish, scramble, beginner shuffle, smooshing, schwirsheling, or washing the cards, involves simply spreading the cards out face down, and sliding them around and over each other with one's hands. Then the cards are moved into one pile so that they begin to intertwine and are then arranged back into a stack. This method is useful for beginners and small children or if one is inept at shuffling cards. However, the beginner shuffle requires a large surface for spreading out the cards and takes longer than the other methods. The Mongean shuffle, or Monge's shuffle, is performed as follows (by a right-handed person): start with the unshuffled deck in the left hand and transfer the top card to the right. Then repeatedly take the top card from the left hand and transfer it to the right, putting the second card at the top of the new deck, the third at the bottom, the fourth at the top, the fifth at the bottom, etc. The result, if one started with cards numbered consecutively 1, 2, 3, …, 2n, would be a deck with the cards in the following order: 2n, 2n − 2, …, 4, 2, 1, 3, …, 2n − 3, 2n − 1. For a deck of a given size, the number of Mongean shuffles that it takes to return the deck to its starting position is known. Twelve perfect Mongean shuffles restore a 52-card deck. "Weaving" is the procedure of pushing the ends of two halves of a deck against each other in such a way that they naturally intertwine.
Sometimes the deck is split into equal halves of 26 cards which are then pushed together in a certain way so as to make them perfectly interweave. This is known as a "faro shuffle". The faro shuffle is performed by cutting the deck into two, preferably equal, packs in both hands as follows (right-handed): the cards are held from above in the right hand and from below in the left hand. Separation of the deck is done simply by lifting up half the cards with the right-hand thumb slightly and pushing the left hand's packet forward away from the right hand. The two packets are often crossed and slammed into each other so as to align them. They are then pushed together by the short sides and bent (either up or down). The cards then alternately fall into each other, much like a zipper. A flourish can be added by springing the packets together by applying pressure and bending them from above, a move called the bridge finish. The faro is a controlled shuffle which does not randomize a deck when performed properly. A perfect faro shuffle, where the cards are perfectly alternated, is considered one of the most difficult sleights by card magicians, simply because it requires the shuffler to be able to cut the deck into two equal packets and apply just the right amount of pressure when pushing the cards into each other. Performing eight perfect faro shuffles in a row restores the order of the deck to the original order only if there are 52 cards in the deck and if the original top and bottom cards remain in their positions (1st and 52nd) during the eight shuffles. If the top and bottom cards are weaved in during each shuffle, it takes 52 shuffles to return the deck back into original order (or 26 shuffles to reverse the order). The Mexican spiral shuffle is performed by cyclic actions of moving the top card onto the table, then the new top card under the deck, the next onto the table, next under the deck, and so on until the last card is dealt onto the table.
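The Mongean shuffle and the perfect faro shuffles described above are deterministic, so their stated periods can be verified directly; a minimal sketch (function names illustrative):

```python
def mongean(deck):
    """One Mongean shuffle: the 2nd, 4th, ... cards go on top of the
    new pile; the 1st, 3rd, ... cards go underneath."""
    new = []
    for i, card in enumerate(deck):
        if i % 2 == 1:
            new.insert(0, card)
        else:
            new.append(card)
    return new

def faro(deck, out=True):
    """Perfect faro shuffle of an even-sized deck. An out-shuffle keeps
    the top and bottom cards in place; an in-shuffle moves the top card
    to second from the top."""
    half = len(deck) // 2
    top, bottom = deck[:half], deck[half:]
    pairs = zip(top, bottom) if out else zip(bottom, top)
    return [card for pair in pairs for card in pair]

print(mongean([1, 2, 3, 4, 5, 6]))  # [6, 4, 2, 1, 3, 5]

deck = list(range(52))

d = deck
for _ in range(12):
    d = mongean(d)
print(d == deck)  # True: twelve Mongean shuffles restore a 52-card deck

d = deck
for _ in range(8):
    d = faro(d, out=True)
print(d == deck)  # True: eight out-shuffles restore the deck

d = deck
for _ in range(52):
    d = faro(d, out=False)
print(d == deck)  # True: fifty-two in-shuffles restore the deck
```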
The Mexican spiral shuffle takes quite a long time, compared with riffle or overhand shuffles, but allows other players to fully control the cards which are on the table. It was popular at the end of the 19th century in some areas of Mexico as a protection from gamblers and con men arriving from the United States. Magicians, sleight-of-hand artists, and card cheats employ various methods of shuffling whereby the deck appears to have been shuffled fairly, when in reality one or more cards (up to and including the entire deck) stays in the same position. It is also possible, though generally considered very difficult, to "stack the deck" (place cards into a desirable order) by means of one or more riffle shuffles; this is called "riffle stacking". Both performance magicians and card sharps regard the Zarrow shuffle and the push-through false shuffle as particularly effective examples of the false shuffle. In these shuffles, the entire deck remains in its original order, although spectators think they see an honest riffle shuffle. Casinos often equip their tables with shuffling machines instead of having croupiers shuffle the cards, as it gives the casino a few advantages, including an increased complexity to the shuffle and therefore an increased difficulty for players to make predictions, even if they are collaborating with croupiers. The shuffling machines are carefully designed to avoid biasing the shuffle and are typically computer-controlled. Shuffling machines also save time that would otherwise be wasted on manual shuffling, thereby increasing the profitability of the table. These machines are also used to lessen repetitive-motion-stress injuries to a dealer. Superstitious players often regard any electronic equipment with suspicion, so casinos sometimes still have the croupiers perform the shuffling at tables that typically attract those crowds (baccarat tables, for example). There are exactly 52 factorial (expressed in shorthand as 52!)
possible orderings of the cards in a 52-card deck. In other words, there are 52 × 51 × 50 × 49 × ··· × 4 × 3 × 2 × 1 possible combinations of card sequence. This is approximately 80,658 vigintillion possible orderings, or exactly 80,658,175,170,943,878,571,660,636,856,403,766,975,289,505,440,883,277,824,000,000,000,000. The magnitude of this number means that it is exceedingly improbable that two randomly selected, truly randomized decks will be the same. However, while the exact sequence of all cards in a randomized deck is unpredictable, it may be possible to make some probabilistic predictions about a deck that is not sufficiently randomized. The number of shuffles that are sufficient for a "good" level of randomness depends on the type of shuffle and the measure of "good enough randomness", which in turn depends on the game in question. For most games, four to seven riffle shuffles are sufficient: for unsuited games such as blackjack, four riffle shuffles are sufficient, while for suited games, seven riffle shuffles are necessary. There are some games, however, for which even seven riffle shuffles are insufficient. In practice the number of shuffles required depends both on the quality of the shuffle and on how significant non-randomness is, particularly how good the people playing are at noticing and using non-randomness. Two to four shuffles is good enough for casual play. But in club play, good bridge players take advantage of non-randomness after four shuffles, and top blackjack players supposedly track aces through the deck; this is known as "ace tracking", or more generally, as "shuffle tracking". Following early research at Bell Labs, which was abandoned in 1955, the question of how many shuffles was required remained open until 1990, when it was convincingly solved as "seven shuffles", as elaborated below. Some results preceded this, and refinements have continued since. 
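The factorial above can be checked directly; a quick sketch in Python:

```python
import math

orderings = math.factorial(52)  # 52 × 51 × ... × 2 × 1
print(orderings)
# 80658175170943878571660636856403766975289505440883277824000000000000
print(f"{orderings:.4e}")  # 8.0658e+67, i.e. about 80,658 vigintillion
```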
A leading figure in the mathematics of shuffling is mathematician and magician Persi Diaconis, who began studying the question around 1970, and has authored many papers in the 1980s, 1990s, and 2000s on the subject with numerous co-authors. Most famous is "Trailing the Dovetail Shuffle to its Lair", co-authored with mathematician Dave Bayer, which analyzed the Gilbert–Shannon–Reeds model of random riffle shuffling and concluded that the deck did not start to become random until five good riffle shuffles, and was truly random after seven, in the precise sense of variation distance described in Markov chain mixing time; of course, more shuffles are needed if the shuffling technique is poor. Recently, the work of Trefethen et al. has questioned some of Diaconis' results, concluding that six shuffles are enough. The difference hinges on how each measured the randomness of the deck. Diaconis used a very sensitive test of randomness, and therefore needed to shuffle more. Even more sensitive measures exist, and the question of what measure is best for specific card games is still open. Diaconis released a response indicating that only four shuffles are needed for unsuited games such as blackjack. On the other hand, variation distance may be too forgiving a measure and seven riffle shuffles may be far too few. For example, seven shuffles of a new deck leave an 81% probability of winning New Age Solitaire, where the probability is 50% with a uniformly random deck. One sensitive test for randomness uses a standard deck without the jokers divided into suits, with two suits in ascending order from ace to king and the other two suits in reverse. (Many decks already come ordered this way when new.) After shuffling, the measure of randomness is the number of rising sequences that are left in each suit. 
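The rising-sequence measure can be sketched in code. This is a simplified version on a deck labeled 1..n rather than split by suit (the form used in the Bayer–Diaconis analysis): a sorted deck has one rising sequence, a fully reversed deck has n, and each riffle shuffle can at most double the count.

```python
def rising_sequences(deck):
    """Count maximal rising sequences in a deck whose cards are the
    integers 1..n: card v starts a new sequence when v-1 appears
    later in the deck (or not at all)."""
    pos = {card: i for i, card in enumerate(deck)}
    return sum(1 for v in deck if v - 1 not in pos or pos[v - 1] > pos[v])

print(rising_sequences([1, 2, 3, 4, 5, 6]))  # 1: already sorted
print(rising_sequences([6, 5, 4, 3, 2, 1]))  # 6: fully reversed
print(rising_sequences([1, 4, 2, 5, 3, 6]))  # 2: one perfect riffle
```

A freshly riffled deck therefore betrays its history: after k riffle shuffles there can be at most 2ᵏ rising sequences, while a uniformly random 52-card deck has about 26 on average.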
If a computer has access to purely random numbers, it is capable of generating a "perfect shuffle", a random permutation of the cards; beware that this terminology (an algorithm that perfectly randomizes the deck) differs from "a perfectly executed single shuffle", notably a perfectly interleaving faro shuffle. The Fisher–Yates shuffle, popularized by Donald Knuth, is a simple (a few lines of code) and efficient (O(n) on an n-card deck, assuming constant time for fundamental steps) algorithm for doing this. Shuffling can be seen as the opposite of sorting. There are other, less-desirable algorithms in common use. For example, one can assign a random number to each card, and then sort the cards in order of their random numbers. This will generate a random permutation, unless any of the random numbers generated are the same as any others (i.e. pairs, triplets, etc.). This can be eliminated either by adjusting one of the pair's values randomly up or down by a small amount, or reduced to an arbitrarily low probability by choosing a sufficiently wide range of random number choices. If efficient sorting such as mergesort or heapsort is used, this is an O(n log n) average and worst-case algorithm. These issues are of considerable commercial importance in online gambling, where the randomness of the shuffling of packs of simulated cards for online card games is crucial. For this reason, many online gambling sites provide descriptions of their shuffling algorithms and the sources of randomness used to drive these algorithms, with some gambling sites also providing auditors' reports of the performance of their systems.
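The Fisher–Yates algorithm mentioned above is short enough to show in full; a sketch in Python (the standard library's `random.shuffle` implements the same idea):

```python
import random

def fisher_yates_shuffle(deck):
    """Shuffle in place in O(n): walk the deck once, swapping each
    position with a uniformly chosen position at or after it."""
    for i in range(len(deck) - 1):
        j = random.randrange(i, len(deck))  # i <= j < n
        deck[i], deck[j] = deck[j], deck[i]
    return deck

deck = fisher_yates_shuffle(list(range(52)))
print(sorted(deck) == list(range(52)))  # True: always a permutation
```

Each of the n! orderings is produced with equal probability, which is exactly what the sort-by-random-key approach only approximates when ties can occur.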
https://en.wikipedia.org/wiki?curid=23189
Cut (cards) In many card games, to cut the cards (or "cut the deck") is to split the deck into two piles by lifting one pile from the top, before placing the lower pile on top of it. This is typically done after the cards have already been shuffled, and the procedure is used just prior to the cards being dealt to the players. A common procedure is that after the cards have been shuffled, the dealer sets the cards face-down on the table near the player designated to make the cut, typically the player to the dealer's right. That player initiates a cut of the deck by taking a contiguous range of cards off the top of the deck and placing it face-down on the table farther from the dealer; the dealer completes the cut by taking the original bottom portion of the deck and placing it on top of the just-moved cards. Another common procedure is that the person making the cut places the top part of the cut closer to the dealer, as the deck was originally placed nearer to the cutter. Once the cut is complete, the dealer picks up the deck, straightens or "squares" it, and deals the cards. Rules of procedure or etiquette may vary concerning who makes the cut, the minimum or maximum number of cards which may be cut off the top, whether the dealer or the cutter restacks the cards, whether a cut card is employed, and whether a cut is mandatory. The practice of cutting is primarily a method of reducing the likelihood of someone cheating by manipulating the order of cards to gain advantage. Even if the dealer (or the shuffler, if he is not the dealer) does not plan on cheating, cutting will prevent suspicion, and thus many rules require it. Some players also consider the cut to be lucky. Parlett says the purpose of cutting is to prevent the bottom card from being known. The contiguous section may also be taken from the middle of the deck. This is called "Scarne's cut", though in some settings this is considered poor etiquette or against the rules. 
A cut involving a very small number of cards, such as taking only the top card, taking some cards from the bottom or taking every card bar the bottom one as a cut, is often acceptable according to rules. Other rules may specify that at least three cards must be taken or left in making a cut. Sometimes up to three cuts are allowed. A sensible minimum is about one-fifth of the deck. During informal card games, the dealer is typically not required to offer the cut, and even if offered, the designated player can decline the request. On the other hand, any player may specifically request to cut the cards before they are dealt. If a cut is requested by a player, it must be granted by the dealer. In formal player-dealt settings, such as in a casino or during a tournament, an offer to cut the deck is mandatory and the designated player must perform the cut, generally by inserting a cut card (a plastic card about the size of a playing card, usually solid-colored) into the deck; the dealer then makes the actual cut at that point in the deck. When the dealer is not a player (i.e. a casino employee), the cut is mandatory and is usually performed by the dealer. In this instance, the deck is cut onto the aforementioned cut card, and the cut completed; this prevents players from seeing the bottom card of the deck. A cut should always be completed with one hand to limit the possibility of a false cut. Scarne's cut was developed by John Scarne during World War II to help protect servicemen against cheating by unscrupulous dealers. First one pulls out a portion of the middle of the stack and places it back on top of the deck; one then performs the regular cut described earlier. It can be demonstrated that multiple top-to-bottom (non-Scarne's) cuts are equivalent to some single cut. In fact, knowing the size of the deck and the sizes of the cuts, the size of the equivalent single cut is given by the sum of the sizes of the cuts modulo the size of the deck. 
For example, in a 10-card deck, if a 7-card cut and then a 4-card cut are made, that is, 7 cards are moved from the top of the deck to the bottom and then the resulting top 4 cards are also moved to the bottom, then those two consecutive cuts are equivalent to a single cut of size (7 + 4) mod 10 = 1. The deck will be in the order (2, 3, ..., 10, 1). A false cut is a move used either in magic, or for cheating when playing card games. It appears to be a real cut, but leaves the deck in the same order as when it began. More sophisticated versions may make specific desired changes to the deck's order, while still appearing to be an innocuous normal cut. There are many ways to accomplish a false cut, involving misdirection or using complex moves to conceal the real result. Cutting cards is usually a prelude to a game, but it can be a game unto itself. Each player, in turn, removes a selection of cards from the top and reveals the bottom card to all the players, and then replaces the cards in the original position. Whoever has revealed the highest (or sometimes lowest) card is the winner. This is often used in an informal setting, much like flipping coins; it is also sometimes used to determine who will play first in a card game. The command to "cut the cards", followed by someone literally chopping the deck in half with an axe, is a none-too-subtle gag that has been used many times in popular media, going back to at least the vaudeville days. Examples include Harpo Marx in "Horse Feathers", Curly Howard in "Ants in the Pantry", and Bugs Bunny in "Bugs Bunny Rides Again".
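The composite-cut arithmetic in the 10-card example above can be verified with a few lines of code (a sketch; `cut` is an illustrative helper):

```python
def cut(deck, k):
    """Move the top k cards to the bottom of the deck."""
    return deck[k:] + deck[:k]

deck = list(range(1, 11))          # a 10-card deck: 1..10
two_cuts = cut(cut(deck, 7), 4)    # a 7-card cut, then a 4-card cut
one_cut = cut(deck, (7 + 4) % 10)  # the equivalent single 1-card cut
print(two_cuts == one_cut)         # True
print(one_cut)                     # [2, 3, 4, 5, 6, 7, 8, 9, 10, 1]
```

Because cuts only rotate the deck, any sequence of them composes into a single rotation, which is why cutting alone can never randomize a deck.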
https://en.wikipedia.org/wiki?curid=23190
Philology Philology is the study of language in oral and written historical sources; it is the intersection of textual criticism, literary criticism, history, and linguistics. Philology is more commonly defined as the study of literary texts as well as oral and written records, the establishment of their authenticity and their original form, and the determination of their meaning. A person who pursues this kind of study is known as a philologist. In older usage, especially British, philology is more general, covering comparative and historical linguistics. Classical philology studies classical languages. Classical philology principally originated at the Library of Pergamum and the Library of Alexandria around the fourth century BCE, and was continued by Greeks and Romans throughout the Roman and Byzantine Empires. It was preserved and promoted during the Islamic Golden Age, and eventually resumed by European scholars of the Renaissance, where it was soon joined by philologies of other European (Germanic, Celtic), Eurasian (Slavic, etc.), Asian (Arabic, Persian, Sanskrit, Chinese, etc.), and African (Egyptian, Nubian, etc.) languages. Indo-European studies involves the comparative philology of all Indo-European languages. Philology, with its focus on historical development (diachronic analysis), is contrasted with linguistics due to Ferdinand de Saussure's insistence on the importance of synchronic analysis. The contrast continued with the emergence of structuralism and Chomskyan linguistics with its emphasis on syntax, although research in the field of historical linguistics is often characterized by reliance on philological materials and findings. 
The term "philology" is derived from the Greek "philología", from the terms "phílos" "love, affection, loved, beloved, dear, friend" and "lógos" "word, articulation, reason", describing a love of learning, of literature, as well as of argument and reasoning, reflecting the range of activities included under the notion of "lógos". The term changed little with the Latin "philologia", and later entered the English language in the 16th century, from the Middle French "philologie", in the sense of 'love of literature'. The adjective "philólogos" meant 'fond of discussion or argument, talkative', in Hellenistic Greek, also implying an excessive ("sophistic") preference of argument over the love of true wisdom, "philósophos". As an allegory of literary erudition, "philologia" appears in fifth-century postclassical literature (Martianus Capella, "De nuptiis Philologiae et Mercurii"), an idea revived in Late Medieval literature (Chaucer, Lydgate). The meaning of "love of learning and literature" was narrowed to "the study of the historical development of languages" (historical linguistics) in 19th-century usage of the term. Due to the rapid progress made in understanding sound laws and language change, the "golden age of philology" lasted throughout the 19th century, or "from Giacomo Leopardi and Friedrich Schlegel to Nietzsche". In the Anglo-Saxon world, the term philology to describe work on languages and literatures, which had become synonymous with the practices of German scholars, was abandoned as a consequence of anti-German feeling following World War I. Most continental European countries still maintain the term to designate departments, colleges, position titles, and journals. J. R. R. Tolkien opposed the nationalist reaction against philological practices, claiming that "the philological instinct" was "universal as is the use of language". 
In British English usage, and in British academia, "philology" remains largely synonymous with "historical linguistics", while in US English, and US academia, the wider meaning of "study of a language's grammar, history and literary tradition" remains more widespread. Based on the harsh critique of Friedrich Nietzsche, some US scholars since the 1980s have viewed philology as responsible for a narrowly scientistic study of language and literature. The comparative linguistics branch of philology studies the relationship between languages. Similarities between Sanskrit and European languages were first noted in the early 16th century and led to speculation of a common ancestor language from which all these descended. It is now named Proto-Indo-European. Philology's interest in ancient languages led to the study of what were, in the 18th century, "exotic" languages, for the light they could cast on problems in understanding and deciphering the origins of older texts. Philology also includes the study of texts and their history. It includes elements of textual criticism, trying to reconstruct an author's original text based on variant copies of manuscripts. This branch of research arose among ancient scholars of the Greek-speaking world in the 4th century BC, who desired to establish a standard text of popular authors for the purposes of both sound interpretation and secure transmission. Since that time, the original principles of textual criticism have been improved and applied to other widely distributed texts such as the Bible. Scholars have tried to reconstruct the original readings of the Bible from the manuscript variants. This method was applied to Classical Studies and to medieval texts as a way to reconstruct the author's original work. 
The method produced so-called "critical editions", which provided a reconstructed text accompanied by a "critical apparatus", i.e., footnotes that listed the various manuscript variants available, enabling scholars to gain insight into the entire manuscript tradition and argue about the variants. A related study method known as higher criticism studies the authorship, date, and provenance of a text to place it in historical context. As these philological issues are often inseparable from issues of interpretation, there is no clear-cut boundary between philology and hermeneutics. When a text has a significant political or religious influence (such as the reconstruction of Biblical texts), scholars have difficulty reaching objective conclusions. Some scholars avoid all critical methods of textual philology, especially in historical linguistics, where it is important to study the actual recorded materials. The movement known as New Philology has rejected textual criticism because it injects editorial interpretations into the text and destroys the integrity of the individual manuscript, hence damaging the reliability of the data. Supporters of New Philology insist on a strict "diplomatic" approach: a faithful rendering of the text exactly as found in the manuscript, without emendations. Another branch of philology, cognitive philology, studies written and oral texts, considering them the results of human mental processes. This science compares the results of textual science with the results of experimental research in psychology and artificial-intelligence production systems. In the case of Bronze Age literature, philology includes the prior decipherment of the language under study. This has notably been the case with the Egyptian, Sumerian, Assyrian, Hittite, Ugaritic and Luwian languages. 
Beginning with the famous decipherment and translation of the Rosetta Stone by Jean-François Champollion in 1822, a number of individuals attempted to decipher the writing systems of the Ancient Near East and Aegean. In the case of Old Persian and Mycenaean Greek, decipherment yielded older records of languages already known from slightly more recent traditions (Middle Persian and Alphabetic Greek). Work on the ancient languages of the Near East progressed rapidly. In the mid-19th century, Henry Rawlinson and others deciphered the Behistun Inscription, which records the same text in Old Persian, Elamite, and Akkadian, using a variation of cuneiform for each language. The elucidation of cuneiform led to the decipherment of Sumerian. Hittite was deciphered in 1915 by Bedřich Hrozný. Linear B, a script used in the ancient Aegean, was deciphered in 1952 by Michael Ventris and John Chadwick, who demonstrated that it recorded an early form of Greek, now known as Mycenaean Greek. Linear A, the writing system that records the still-unknown language of the Minoans, resists decipherment, despite many attempts. Work continues on scripts such as the Maya, with great progress since the initial breakthroughs of the phonetic approach championed by Yuri Knorozov and others in the 1950s. Since the late 20th century, the Maya script has been almost completely deciphered, and the Mayan languages are among the most documented and studied in Mesoamerica. The script is described as logosyllabic, and it could be used to fully express any spoken thought. In the "Space Trilogy" by C. S. Lewis, the main character, Elwin Ransom, is a philologist, as was Lewis' close friend J. R. R. Tolkien. Dr. Edward Morbius, one of the main characters in the science-fiction film "Forbidden Planet", is a philologist. Philip, the main character of Christopher Hampton's "bourgeois comedy" "The Philanthropist", is a professor of philology in an English university town. 
Moritz-Maria von Igelfeld, the main character in Alexander McCall Smith's 1997 comic novel "Portuguese Irregular Verbs" is a philologist, educated at Cambridge. The main character in the Academy Award Nominee for Best Foreign Language film in 2012, "Footnote", is a Hebrew philologist, and a significant part of the film deals with his work.
https://en.wikipedia.org/wiki?curid=23193
Phonetics Phonetics is a branch of linguistics that studies how humans make and perceive sounds, or in the case of sign languages, the equivalent aspects of sign. Phoneticians—linguists who specialize in phonetics—study the physical properties of speech. The field of phonetics is traditionally divided into three subdisciplines based on the research questions involved, such as how humans plan and execute movements to produce speech (articulatory phonetics), how different movements affect the properties of the resulting sound (acoustic phonetics), or how humans convert sound waves to linguistic information (auditory phonetics). Traditionally, the minimal linguistic unit of phonetics is the phone—a speech sound in a language—which differs from the phonological unit of the phoneme; the phoneme is an abstract categorization of phones. Phonetics broadly deals with two aspects of human speech: production—the ways humans make sounds—and perception—the way speech is understood. The modality of a language describes the method by which a language produces and perceives language. Languages with oral-aural modalities such as English produce speech orally (using the mouth) and perceive speech aurally (using the ears). Many sign languages such as Auslan have a manual-visual modality and produce speech manually (using the hands) and perceive speech visually (using the eyes), while some languages like American Sign Language have a manual-manual dialect for use in tactile signing by deafblind speakers, where signs are produced with the hands and perceived with the hands as well. Language production consists of several interdependent processes which transform a nonlinguistic message into a spoken or signed linguistic signal. After identifying a message to be linguistically encoded, a speaker must select the individual words—known as lexical items—to represent that message in a process called lexical selection. 
During phonological encoding, the mental representations of the words are assigned their phonological content as a sequence of phonemes to be produced. The phonemes are specified for articulatory features which denote particular goals such as closed lips or the tongue in a particular location. These phonemes are then coordinated into a sequence of muscle commands that can be sent to the muscles, and when these commands are executed properly the intended sounds are produced. These movements disrupt and modify an airstream which results in a sound wave. The modification is done by the articulators, with different places and manners of articulation producing different acoustic results. For example, the words "tack" and "sack" both begin with alveolar sounds in English, but differ in how far the tongue is from the alveolar ridge. This difference has large effects on the airstream and thus the sound that is produced. Similarly, the direction and source of the airstream can affect the sound. The most common airstream mechanism is pulmonic—using the lungs—but the glottis and tongue can also be used to produce airstreams. Language perception is the process by which a linguistic signal is decoded and understood by a listener. In order to perceive speech the continuous acoustic signal must be converted into discrete linguistic units such as phonemes, morphemes, and words. In order to correctly identify and categorize sounds, listeners prioritize certain aspects of the signal that can reliably distinguish between linguistic categories. While certain cues are prioritized over others, many aspects of the signal can contribute to perception. For example, though oral languages prioritize acoustic information, the McGurk effect shows that visual information is used to distinguish ambiguous information when the acoustic cues are unreliable. 
Modern phonetics has three main branches: articulatory, acoustic, and auditory phonetics. The first known phonetic studies occurred in the Indian subcontinent during the 6th century BCE, among which was Hindu scholar Pāṇini's articulatory description of voicing, though this pioneering work was primarily concerned with the relationship between written Vedic texts and spoken vernacular languages. With the advent of modern phonetics in the 19th century CE, the focus of scholarship shifted to the physical properties of speech itself. Before the widespread availability of recording devices, phoneticians relied upon phonetic transcription systems to collect and share data. Some systems, such as the International Phonetic Alphabet, are still in wide use among phoneticians. Language production consists of several interdependent processes which transform a nonlinguistic message into a spoken or signed linguistic signal. Linguists debate whether the process of language production occurs in a series of stages (serial processing) or whether production processes occur in parallel. After identifying a message to be linguistically encoded, a speaker must select the individual words—known as lexical items—to represent that message in a process called lexical selection. The words are selected based on their meaning, which in linguistics is called semantic information. Lexical selection activates the word's lemma, which contains both semantic and grammatical information about the word. After an utterance has been planned, it then goes through phonological encoding. In this stage of language production, the mental representations of the words are assigned their phonological content as a sequence of phonemes to be produced. The phonemes are specified for articulatory features which denote particular goals such as closed lips or the tongue in a particular location. 
These phonemes are then coordinated into a sequence of muscle commands that can be sent to the muscles, and when these commands are executed properly the intended sounds are produced. Thus the process of production from message to sound can be summarized as the following sequence: lexical selection, phonological encoding, and articulation. Sounds which are made by a full or partial constriction of the vocal tract are called consonants. Consonants are pronounced in the vocal tract, usually in the mouth, and the location of this constriction affects the resulting sound. Because of the close connection between the position of the tongue and the resulting sound, the place of articulation is an important concept in many subdisciplines of phonetics. Sounds are partly categorized by the location of a constriction as well as the part of the body doing the constricting. For example, in English the words "fought" and "thought" are a minimal pair differing only in the organ making the constriction rather than the location of the constriction. The "f" in "fought" is a labiodental articulation made with the bottom lip against the teeth. The "th" in "thought" is a linguodental articulation made with the tongue against the teeth. Constrictions made by the lips are called labials while those made with the tongue are called lingual. Constrictions made with the tongue can be made in several parts of the vocal tract, broadly classified into coronal, dorsal and radical places of articulation. Coronal articulations are made with the front of the tongue, dorsal articulations are made with the back of the tongue, and radical articulations are made in the pharynx. These divisions are not sufficient for distinguishing and describing all speech sounds. For example, in English the sounds and are both coronal, but they are produced in different places of the mouth. To account for this, more detailed places of articulation are needed based upon the area of the mouth in which the constriction occurs. 
Articulations involving the lips can be made in three different ways: with both lips (bilabial), with one lip and the teeth (labiodental), and with the tongue and the upper lip (linguolabial). Depending on the definition used, some or all of these kinds of articulations may be categorized into the class of labial articulations. Bilabial consonants are made with both lips. In producing these sounds the lower lip moves farthest to meet the upper lip, which also moves down slightly, though in some cases the force from air moving through the aperture (opening between the lips) may cause the lips to separate faster than they can come together. Unlike most other articulations, both articulators are made from soft tissue, and so bilabial stops are more likely to be produced with incomplete closures than articulations involving hard surfaces like the teeth or palate. Bilabial stops are also unusual in that an articulator in the upper section of the vocal tract actively moves downwards, as the upper lip shows some active downward movement. Linguolabial consonants are made with the blade of the tongue approaching or contacting the upper lip. Like in bilabial articulations, the upper lip moves slightly towards the more active articulator. Articulations in this group do not have their own symbols in the International Phonetic Alphabet, rather, they are formed by combining an apical symbol with a diacritic implicitly placing them in the coronal category. They exist in a number of languages indigenous to Vanuatu such as Tangoa. Labiodental consonants are made by the lower lip rising to the upper teeth. Labiodental consonants are most often fricatives while labiodental nasals are also typologically common. There is debate as to whether true labiodental plosives occur in any natural language, though a number of languages are reported to have labiodental plosives including Zulu, Tonga, and Shubi. 
Coronal consonants are made with the tip or blade of the tongue and, because of the agility of the front of the tongue, represent a variety not only in place but in the posture of the tongue. The coronal places of articulation represent the areas of the mouth where the tongue contacts or makes a constriction, and include dental, alveolar, and post-alveolar locations. Tongue postures using the tip of the tongue can be apical if using the top of the tongue tip, laminal if made with the blade of the tongue, or sub-apical if the tongue tip is curled back and the bottom of the tongue is used. Coronals are unique as a group in that every manner of articulation is attested. Australian languages are well known for the large number of coronal contrasts exhibited within and across languages in the region. Dental consonants are made with the tip or blade of the tongue and the upper teeth. They are divided into two groups based upon the part of the tongue used to produce them: apical dental consonants are produced with the tongue tip touching the teeth; interdental consonants are produced with the blade of the tongue as the tip of the tongue sticks out in front of the teeth. No language is known to use both contrastively though they may exist allophonically. Alveolar consonants are made with the tip or blade of the tongue at the alveolar ridge just behind the teeth and can similarly be apical or laminal. Crosslinguistically, dental consonants and alveolar consonants are frequently contrasted leading to a number of generalizations of crosslinguistic patterns. The different places of articulation tend to also be contrasted in the part of the tongue used to produce them: most languages with dental stops have laminal dentals, while languages with apical stops usually have apical stops. Languages rarely have two consonants in the same place with a contrast in laminality, though Taa (ǃXóõ) is a counterexample to this pattern. 
If a language has only one of a dental stop or an alveolar stop, it will usually be laminal if it is a dental stop, and the stop will usually be apical if it is an alveolar stop, though for example Temne and Bulgarian do not follow this pattern. If a language has both an apical and a laminal stop, then the laminal stop is more likely to be affricated, like in Isoko, though Dahalo shows the opposite pattern, with alveolar stops being more affricated. Retroflex consonants have several different definitions depending on whether the position of the tongue or the position on the roof of the mouth is given prominence. In general, they represent a group of articulations in which the tip of the tongue is curled upwards to some degree. In this way, retroflex articulations can occur in several different locations on the roof of the mouth including alveolar, post-alveolar, and palatal regions. If the underside of the tongue tip makes contact with the roof of the mouth, it is sub-apical, though apical post-alveolar sounds are also described as retroflex. Typical examples of sub-apical retroflex stops are commonly found in Dravidian languages, and in some languages indigenous to the southwest United States the contrastive difference between dental and alveolar stops is a slight retroflexion of the alveolar stop. Acoustically, retroflexion tends to affect the higher formants. Articulations taking place just behind the alveolar ridge, known as post-alveolar consonants, have been referred to using a number of different terms. Apical post-alveolar consonants are often called retroflex, while laminal articulations are sometimes called palato-alveolar; in the Australianist literature, these laminal stops are often described as 'palatal' though they are produced further forward than the palate region typically described as palatal. 
Because of individual anatomical variation, the precise articulation of palato-alveolar stops (and coronals in general) can vary widely within a speech community. Dorsal consonants are those consonants made using the tongue body rather than the tip or blade and are typically produced at the palate, velum or uvula. Palatal consonants are made using the tongue body against the hard palate on the roof of the mouth. They are frequently contrasted with velar or uvular consonants, though it is rare for a language to contrast all three simultaneously, with Jaqaru as a possible example of a three-way contrast. Velar consonants are made using the tongue body against the velum. They are extremely common cross-linguistically; almost all languages have a velar stop. Because both velars and vowels are made using the tongue body, they are highly affected by coarticulation with vowels and can be produced as far forward as the hard palate or as far back as the uvula. These variations are typically divided into front, central, and back velars in parallel with the vowel space. They can be hard to distinguish phonetically from palatal consonants, though they are produced slightly behind the area of prototypical palatal consonants. Uvular consonants are made by the tongue body contacting or approaching the uvula. They are rare, occurring in an estimated 19 percent of languages, and large regions of the Americas and Africa have no languages with uvular consonants. In languages with uvular consonants, stops are most frequent, followed by continuants (including nasals). Consonants made by constrictions of the throat are pharyngeals, and those made by a constriction in the larynx are laryngeals. Laryngeals are made using the vocal folds, as the larynx is too far down the throat to reach with the tongue. Pharyngeals, however, are close enough to the mouth that parts of the tongue can reach them. 
Radical consonants either use the root of the tongue or the epiglottis during production and are produced very far back in the vocal tract. Pharyngeal consonants are made by retracting the root of the tongue far enough to almost touch the wall of the pharynx. Due to production difficulties, only fricatives and approximants can be produced this way. Epiglottal consonants are made with the epiglottis and the back wall of the pharynx. Epiglottal stops have been recorded in Dahalo. Voiced epiglottal consonants are not deemed possible due to the cavity between the glottis and epiglottis being too small to permit voicing. Glottal consonants are those produced using the vocal folds in the larynx. Because the vocal folds are the source of phonation and lie below the oro-nasal vocal tract, a number of glottal consonants are impossible, such as a voiced glottal stop. Three glottal consonants are possible, a voiceless glottal stop and two glottal fricatives, and all are attested in natural languages. Glottal stops, produced by closing the vocal folds, are notably common in the world's languages. While many languages use them to demarcate phrase boundaries, some languages like Huautla Mazatec have them as contrastive phonemes. Additionally, glottal stops can be realized as laryngealization of the following vowel in this language. Glottal stops, especially between vowels, do not usually form a complete closure. True glottal stops normally occur only when they are geminated. The larynx, commonly known as the "voice box", is a cartilaginous structure in the trachea responsible for phonation. The vocal folds (cords) are held together so that they vibrate, or held apart so that they do not. The positions of the vocal folds are achieved by movement of the arytenoid cartilages. The intrinsic laryngeal muscles are responsible for moving the arytenoid cartilages as well as modulating the tension of the vocal folds. 
If the vocal folds are not close or tense enough, they will either vibrate sporadically or not at all. If they vibrate sporadically, the result will be either creaky or breathy voice, depending on the degree; if they do not vibrate at all, the result will be voicelessness. In addition to correctly positioning the vocal folds, there must also be air flowing across them or they will not vibrate. The difference in pressure across the glottis required for voicing is estimated at 1–2 cm H2O (98.0665–196.133 pascals). The pressure differential can fall below levels required for phonation either because of an increase in pressure above the glottis (supraglottal pressure) or a decrease in pressure below the glottis (subglottal pressure). The subglottal pressure is maintained by the respiratory muscles. Supraglottal pressure, with no constrictions or articulations, is equal to about atmospheric pressure. However, because articulations—especially consonants—represent constrictions of the airflow, the pressure in the cavity behind those constrictions can increase, resulting in a higher supraglottal pressure. According to the lexical access model, two different stages of cognition are employed; thus, this concept is known as the two-stage theory of lexical access. The first stage, lexical selection, provides information about lexical items required to construct the functional level representation. These items are retrieved according to their specific semantic and syntactic properties, but phonological forms are not yet made available at this stage. The second stage, retrieval of wordforms, provides information required for building the positional level representation. When producing speech, the articulators move through and contact particular locations in space, resulting in changes to the acoustic signal. Some models of speech production take this as the basis for modeling articulation in a coordinate system that may be internal to the body (intrinsic) or external (extrinsic). 
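The unit conversion behind the voicing threshold can be checked directly. A minimal sketch, assuming only the standard conventional definition that 1 cm H2O equals 98.0665 Pa:

```python
# Unit check for the voicing threshold: 1 cm H2O is conventionally defined
# as 98.0665 Pa (water at standard gravity).
CM_H2O_TO_PA = 98.0665

def cm_h2o_to_pa(pressure_cm_h2o):
    """Convert a pressure in centimetres of water to pascals."""
    return pressure_cm_h2o * CM_H2O_TO_PA

# The 1-2 cm H2O range required for voicing, expressed in pascals:
print(cm_h2o_to_pa(1), cm_h2o_to_pa(2))  # 98.0665 196.133
```

This reproduces the 98.0665–196.133 Pa range quoted above.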
Intrinsic coordinate systems model the movement of articulators as positions and angles of joints in the body. Intrinsic coordinate models of the jaw often use two to three degrees of freedom representing translation and rotation. These face issues with modeling the tongue which, unlike joints of the jaw and arms, is a muscular hydrostat—like an elephant trunk—which lacks joints. Because of the different physiological structures, movement paths of the jaw are relatively straight lines during speech and mastication, while movements of the tongue follow curves. Straight-line movements have been used to argue that articulations are planned in extrinsic rather than intrinsic space, though extrinsic coordinate systems also include acoustic coordinate spaces, not just physical coordinate spaces. Models that assume movements are planned in extrinsic space run into an inverse problem of explaining the muscle and joint locations which produce the observed path or acoustic signal. The arm, for example, has seven degrees of freedom and 22 muscles, so multiple different joint and muscle configurations can lead to the same final position. For models of planning in extrinsic acoustic space, the same one-to-many mapping problem applies as well, with no unique mapping from physical or acoustic targets to the muscle movements required to achieve them. Concerns about the inverse problem may be exaggerated, however, as speech is a highly learned skill using neurological structures which evolved for the purpose. The equilibrium-point model proposes a resolution to the inverse problem by arguing that movement targets are represented as the positions of the muscle pairs acting on a joint. Importantly, muscles are modeled as springs, and the target is the equilibrium point for the modeled spring-mass system. By using springs, the equilibrium point model can easily account for compensation and response when movements are disrupted. 
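The equilibrium-point idea can be illustrated numerically. In this toy sketch (the spring, damping, and mass values are arbitrary assumptions, not parameters from the literature), an articulator is a damped mass pulled by a spring-like muscle pair toward an equilibrium target, so it settles at the target from any starting position, which is how the model accounts for compensation when movements are disrupted:

```python
# Toy equilibrium-point simulation: a mass (the articulator) driven by a
# spring-like muscle pair toward a target position, with damping.
# Parameter values are arbitrary assumptions for illustration.
def settle(target, x0=0.0, v0=0.0, k=40.0, c=12.0, m=1.0, dt=0.001, steps=5000):
    x, v = x0, v0
    for _ in range(steps):
        accel = (-k * (x - target) - c * v) / m  # restoring force plus damping
        v += accel * dt
        x += v * dt
    return x

# The mass settles at the equilibrium point regardless of where it starts.
print(round(settle(1.0, x0=0.0), 3), round(settle(1.0, x0=2.5), 3))  # 1.0 1.0
```

The same target is reached from different initial positions without replanning, which is the property the equilibrium-point model exploits.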
They are considered a coordinate model because they assume that these muscle positions are represented as points in space, equilibrium points, where the spring-like action of the muscles converges. Gestural approaches to speech production propose that articulations are represented as movement patterns rather than particular coordinates to hit. The minimal unit is a gesture that represents a group of "functionally equivalent articulatory movement patterns that are actively controlled with reference to a given speech-relevant goal (e.g., a bilabial closure)." These groups represent coordinative structures or "synergies" which view movements not as individual muscle movements but as task-dependent groupings of muscles which work together as a single unit. This reduces the degrees of freedom in articulation planning, a problem especially acute in intrinsic coordinate models, and allows for any movement that achieves the speech goal rather than encoding the particular movements in the abstract representation. Coarticulation is well described by gestural models, as the articulations at faster speech rates can be explained as composites of the independent gestures at slower speech rates. Speech sounds are created by the modification of an airstream which results in a sound wave. The modification is done by the articulators, with different places and manners of articulation producing different acoustic results. Because the posture of the vocal tract, not just the position of the tongue, can affect the resulting sound, the manner of articulation is important for describing the speech sound. The words "tack" and "sack" both begin with alveolar sounds in English, but differ in how far the tongue is from the alveolar ridge. This difference has large effects on the airstream and thus the sound that is produced. Similarly, the direction and source of the airstream can affect the sound. 
The most common airstream mechanism is pulmonic—using the lungs—but the glottis and tongue can also be used to produce airstreams. A major distinction between speech sounds is whether they are voiced. Sounds are voiced when the vocal folds begin to vibrate in the process of phonation. Many sounds can be produced with or without phonation, though physical constraints may make phonation difficult or impossible for some articulations. When articulations are voiced, the main source of noise is the periodic vibration of the vocal folds. Articulations like voiceless plosives have no acoustic source and are noticeable by their silence, but other voiceless sounds like fricatives create their own acoustic source regardless of phonation. Phonation is controlled by the muscles of the larynx, and languages make use of more acoustic detail than binary voicing. During phonation, the vocal folds vibrate at a certain rate. This vibration results in a periodic acoustic waveform comprising a fundamental frequency and its harmonics. The fundamental frequency of the acoustic wave can be controlled by adjusting the muscles of the larynx, and listeners perceive this fundamental frequency as pitch. Languages use pitch manipulation to convey lexical information in tonal languages, and many languages use pitch to mark prosodic or pragmatic information. For the vocal folds to vibrate, they must be in the proper position and there must be air flowing through the glottis. Phonation types are modeled on a continuum of glottal states from completely open (voiceless) to completely closed (glottal stop). The optimal position for vibration, and the phonation type most used in speech, modal voice, exists in the middle of these two extremes. If the glottis is slightly wider, breathy voice occurs, while bringing the vocal folds closer together results in creaky voice. 
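The relationship between the fundamental frequency and its harmonics can be sketched numerically. This toy "voiced source" waveform assumes a 120 Hz fundamental and a 1/n amplitude rolloff (both arbitrary illustrative choices); the sum of harmonics still repeats at the fundamental period, which is what listeners perceive as pitch:

```python
import math

# Toy voiced-source waveform: a 120 Hz fundamental plus its harmonics, with an
# assumed 1/n amplitude rolloff. f0 and the rolloff are illustrative choices.
def voiced_sample(t, f0=120.0, n_harmonics=5):
    return sum(math.sin(2 * math.pi * n * f0 * t) / n
               for n in range(1, n_harmonics + 1))

# Samples exactly one fundamental period (1/f0 seconds) apart are identical:
# the composite wave repeats at the fundamental frequency.
period = 1 / 120.0
print(abs(voiced_sample(0.013) - voiced_sample(0.013 + period)) < 1e-9)  # True
```

Raising f0 shifts every harmonic proportionally, which mirrors how laryngeal adjustments change perceived pitch without altering the harmonic structure.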
The normal phonation pattern used in typical speech is modal voice, where the vocal folds are held close together with moderate tension. The vocal folds vibrate as a single unit periodically and efficiently with a full glottal closure and no aspiration. If they are pulled farther apart, they do not vibrate and so produce voiceless phones. If they are held firmly together they produce a glottal stop. If the vocal folds are held slightly further apart than in modal voicing, they produce phonation types like breathy voice (or murmur) and whispery voice. The tension across the vocal ligaments (vocal cords) is less than in modal voicing, allowing for air to flow more freely. Both breathy voice and whispery voice exist on a continuum loosely characterized as going from the more periodic waveform of breathy voice to the more noisy waveform of whispery voice. Acoustically, both tend to dampen the first formant, with whispery voice showing more extreme deviations. Holding the vocal folds more tightly together results in creaky voice. The tension across the vocal folds is less than in modal voice, but they are held tightly together, resulting in only the ligaments of the vocal folds vibrating. The pulses are highly irregular, with low pitch and amplitude. Some languages do not maintain a voicing distinction for some consonants, but all languages use voicing to some degree. For example, no language is known to have a phonemic voicing contrast for vowels, with all known vowels canonically voiced. Other positions of the glottis, such as breathy and creaky voice, are used in a number of languages, like Jalapa Mazatec, to contrast phonemes, while in other languages, like English, they exist allophonically. There are several ways to determine if a segment is voiced or not, the simplest being to feel the larynx during speech and note when vibrations are felt. More precise measurements can be obtained through acoustic analysis of a spectrogram or spectral slice. 
In a spectrographic analysis, voiced segments show a voicing bar, a region of high acoustic energy in the low frequencies. In examining a spectral slice, the acoustic spectrum at a given point in time, a model of the vowel pronounced reverses the filtering of the mouth, producing the spectrum of the glottis. A computational model of the unfiltered glottal signal is then fitted to the inverse-filtered acoustic signal to determine the characteristics of the glottis. Visual analysis is also available using specialized medical equipment such as ultrasound and endoscopy. Vowels are broadly categorized by the area of the mouth in which they are produced, but because they are produced without a constriction in the vocal tract, their precise description relies on measuring acoustic correlates of tongue position. The location of the tongue during vowel production changes the frequencies at which the cavity resonates, and it is these resonances—known as formants—which are measured and used to characterize vowels. Vowel height traditionally refers to the highest point of the tongue during articulation. The height parameter is divided into four primary levels: high (close), close-mid, open-mid, and low (open). Vowels whose height is in the middle are referred to as mid. Slightly opened close vowels and slightly closed open vowels are referred to as near-close and near-open respectively. The lowest vowels are articulated not just with a lowered tongue, but also by lowering the jaw. While the IPA implies that there are seven levels of vowel height, it is unlikely that a given language can minimally contrast all seven levels. Chomsky and Halle suggest that there are only three levels, although four levels of vowel height seem to be needed to describe Danish, and it is possible that some languages might even need five. Vowel backness is divided into three levels: front, central and back. 
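Because vowel height correlates inversely with the first formant and backness with the second, formant measurements can roughly place a vowel in the height–backness space. A toy sketch; the reference F1/F2 values are assumed rough averages for adult male speech, for illustration only, not data from this text:

```python
# Toy vowel classification from the first two formants (Hz). The reference
# values are assumed approximate averages, for illustration only.
REFERENCE_FORMANTS = {
    "i (high front)": (270, 2290),
    "a (low)": (730, 1090),
    "u (high back)": (300, 870),
}

def nearest_vowel(f1, f2):
    """Return the reference vowel closest to the measured (F1, F2) pair."""
    return min(REFERENCE_FORMANTS,
               key=lambda v: (REFERENCE_FORMANTS[v][0] - f1) ** 2
                           + (REFERENCE_FORMANTS[v][1] - f2) ** 2)

print(nearest_vowel(310, 2200))  # low F1, high F2 -> i (high front)
```

A low F1 indicates a high tongue position and a high F2 a front one, so the measured pair lands nearest the high front reference vowel.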
Languages usually do not minimally contrast more than two levels of vowel backness. Some languages claimed to have a three-way backness distinction include Nimboran and Norwegian. In most languages, the lips during vowel production can be classified as either rounded or unrounded (spread), although other types of lip positions, such as compression and protrusion, have been described. Lip position is correlated with height and backness: front and low vowels tend to be unrounded whereas back and high vowels are usually rounded. Paired vowels on the IPA chart have the spread vowel on the left and the rounded vowel on the right. Together with the universal vowel features described above, some languages have additional features such as nasality, length and different types of phonation such as voiceless or creaky. Sometimes more specialized tongue gestures such as rhoticity, advanced tongue root, pharyngealization, stridency and frication are required to describe a certain vowel. Knowing the place of articulation is not enough to fully describe a consonant; the way in which the stricture happens is equally important. Manners of articulation describe how exactly the active articulator modifies, narrows or closes off the vocal tract. Stops (also referred to as plosives) are consonants where the airstream is completely obstructed. Pressure builds up in the mouth during the stricture, which is then released as a small burst of sound when the articulators move apart. The velum is raised so that air cannot flow through the nasal cavity. If the velum is lowered and allows for air to flow through the nose, the result is a nasal stop. However, phoneticians almost always refer to nasal stops as just "nasals". Affricates are a sequence of a stop followed by a fricative in the same place. Fricatives are consonants where the airstream is made turbulent by partially, but not completely, obstructing part of the vocal tract. 
Sibilants are a special type of fricative where the turbulent airstream is directed towards the teeth, creating a high-pitched hissing sound. Nasals (sometimes referred to as nasal stops) are consonants in which there is a closure in the oral cavity and the velum is lowered, allowing air to flow through the nose. In an approximant, the articulators come close together, but not to such an extent as to allow a turbulent airstream. Laterals are consonants in which the airstream is obstructed along the center of the vocal tract, allowing the airstream to flow freely on one or both sides. Laterals have also been defined as consonants in which the tongue is contracted in such a way that the airstream is greater around the sides than over the center of the tongue. The first definition does not allow for air to flow over the tongue. Trills are consonants in which the tongue or lips are set in motion by the airstream. The stricture is formed in such a way that the airstream causes a repeating pattern of opening and closing of the soft articulator(s). Apical trills typically consist of two or three periods of vibration. Taps and flaps are single, rapid, usually apical gestures where the tongue is thrown against the roof of the mouth, comparable to a very rapid stop. These terms are sometimes used interchangeably, but some phoneticians make a distinction. In a tap, the tongue contacts the roof in a single motion whereas in a flap the tongue moves tangentially to the roof of the mouth, striking it in passing. During a glottalic airstream mechanism, the glottis is closed, trapping a body of air. This allows for the remaining air in the vocal tract to be moved separately. An upward movement of the closed glottis will move this air out, resulting in an ejective consonant. Alternatively, the glottis can lower, sucking more air into the mouth, which results in an implosive consonant. 
Clicks are stops in which tongue movement causes air to be sucked into the mouth; this is referred to as a velaric airstream. During the click, the air becomes rarefied between two articulatory closures, producing a loud 'click' sound when the anterior closure is released. The release of the anterior closure is referred to as the click influx. The release of the posterior closure, which can be velar or uvular, is the click efflux. Clicks are used in several African language families, such as the Khoisan and Bantu languages. The lungs drive nearly all speech production, and their importance in phonetics is due to their creation of pressure for pulmonic sounds. The most common kinds of sound across languages are pulmonic egressive, where air is exhaled from the lungs. The opposite is possible, though no language is known to have pulmonic ingressive sounds as phonemes. Many languages, such as Swedish, do use them for paralinguistic articulations such as affirmations, a usage found in a number of genetically and geographically diverse languages. Both egressive and ingressive sounds rely on holding the vocal folds in a particular posture and using the lungs to draw air across the vocal folds so that they either vibrate (voiced) or do not vibrate (voiceless). Pulmonic articulations are restricted by the volume of air able to be exhaled in a given respiratory cycle, known as the vital capacity. The lungs are used to maintain two kinds of pressure simultaneously in order to produce and modify phonation. To produce phonation at all, the lungs must maintain a pressure of 3–5 cm H2O higher than the pressure above the glottis. However, small and fast adjustments are made to the subglottal pressure to modify speech for suprasegmental features like stress. A number of thoracic muscles are used to make these adjustments. 
Because the lungs and thorax stretch during inhalation, the elastic forces of the lungs alone can produce pressure differentials sufficient for phonation at lung volumes above 50 percent of vital capacity. Above 50 percent of vital capacity, the respiratory muscles are used to "check" the elastic forces of the thorax to maintain a stable pressure differential. Below that volume, they are used to increase the subglottal pressure by actively exhaling air. During speech, the respiratory cycle is modified to accommodate both linguistic and biological needs. Exhalation, usually about 60 percent of the respiratory cycle at rest, is increased to about 90 percent of the respiratory cycle. Because metabolic needs are relatively stable, the total volume of air moved in most cases of speech remains about the same as quiet tidal breathing. An increase in speech intensity of 18 dB (a loud conversation) has relatively little impact on the volume of air moved. Because their respiratory systems are not as developed as those of adults, children tend to use a larger proportion of their vital capacity compared to adults, with deeper inhalations. The source–filter model of speech is a theory of speech production which explains the link between vocal tract posture and the acoustic consequences. Under this model, the vocal tract can be modeled as a noise source coupled onto an acoustic filter. The noise source in many cases is the larynx during the process of voicing, though other noise sources can be modeled in the same way. The shape of the supraglottal vocal tract acts as the filter, and different configurations of the articulators result in different acoustic patterns. These changes are predictable. The vocal tract can be modeled as a sequence of tubes, closed at one end, with varying diameters, and by using equations for acoustic resonance the acoustic effect of an articulatory posture can be derived. 
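The tube approximation can be made concrete. A minimal sketch, assuming a single uniform tube closed at the glottis and open at the lips, a conventional 17.5 cm vocal-tract length, and a 350 m/s speed of sound (textbook convenience values, not figures from this text): such a tube resonates at odd multiples of the quarter-wavelength frequency.

```python
# Resonances of a uniform tube closed at one end (glottis) and open at the
# other (lips): F_k = (2k - 1) * c / (4L). The length and speed of sound are
# conventional textbook assumptions.
def tube_resonances(length_m=0.175, speed_m_s=350.0, count=3):
    return [(2 * k - 1) * speed_m_s / (4 * length_m) for k in range(1, count + 1)]

# Roughly 500, 1500, 2500 Hz: close to the formants of a neutral,
# schwa-like vocal tract posture.
print([round(f) for f in tube_resonances()])  # [500, 1500, 2500]
```

Constricting or lengthening sections of the tube shifts these resonances, which is how different articulatory postures yield different formant patterns under the model.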
The process of inverse filtering uses this principle to analyze the source spectrum produced by the vocal folds during voicing. By taking the inverse of a predicted filter, the acoustic effect of the supraglottal vocal tract can be undone, giving the acoustic spectrum produced by the vocal folds. This allows quantitative study of the various phonation types. Language perception is the process by which a linguistic signal is decoded and understood by a listener. In order to perceive speech, the continuous acoustic signal must be converted into discrete linguistic units such as phonemes, morphemes, and words. In order to correctly identify and categorize sounds, listeners prioritize certain aspects of the signal that can reliably distinguish between linguistic categories. While certain cues are prioritized over others, many aspects of the signal can contribute to perception. For example, though oral languages prioritize acoustic information, the McGurk effect shows that visual information is used to distinguish ambiguous information when the acoustic cues are unreliable. While listeners can use a variety of information to segment the speech signal, the relationship between acoustic signal and category perception is not a perfect mapping. Because of coarticulation, noisy environments, and individual differences, there is a high degree of acoustic variability within categories. Listeners are nonetheless able to reliably perceive categories despite this variability in acoustic instantiation, a puzzle known as the problem of perceptual invariance. In order to do this, listeners rapidly accommodate to new speakers and will shift their boundaries between categories to match the acoustic distinctions their conversational partner is making. Audition, the process of hearing sounds, is the first stage of perceiving speech. Articulators cause systematic changes in air pressure which travel as sound waves to the listener's ear. The sound waves then hit the listener's ear drum, causing it to vibrate. 
The vibration of the ear drum is transmitted by the ossicles—three small bones of the middle ear—to the cochlea. The cochlea is a spiral-shaped, fluid-filled tube divided lengthwise by the organ of Corti, which contains the basilar membrane. The basilar membrane increases in thickness as it travels through the cochlea, causing different frequencies to resonate at different locations. This tonotopic design allows for the ear to analyze sound in a manner similar to a Fourier transform. The differential vibration of the basilar membrane causes the hair cells within the organ of Corti to move. This causes depolarization of the hair cells and ultimately a conversion of the acoustic signal into a neuronal signal. While the hair cells do not produce action potentials themselves, they release neurotransmitter at synapses with the fibers of the auditory nerve, which does produce action potentials. In this way, the patterns of oscillations on the basilar membrane are converted to spatiotemporal patterns of firings which transmit information about the sound to the brainstem. Besides consonants and vowels, phonetics also describes the properties of speech that are not localized to segments but to greater units of speech, such as syllables and phrases. Prosody includes auditory characteristics such as pitch, speech rate, duration, and loudness. Languages use these properties to different degrees to implement stress, pitch accents, and intonation; for example, stress in English and Spanish is correlated with changes in pitch and duration, whereas stress in Welsh is more consistently correlated with pitch than duration and stress in Thai is only correlated with duration. Early theories of speech perception such as motor theory attempted to solve the problem of perceptual invariance by arguing that speech perception and production were closely linked. 
In its strongest form, motor theory argues that speech perception "requires" the listener to access the articulatory representation of sounds; in order to properly categorize a sound, a listener reverse-engineers the articulation which would produce that sound and, by identifying these gestures, is able to retrieve the intended linguistic category. While findings such as the McGurk effect and case studies of patients with neurological injuries have provided support for motor theory, further experiments have not supported its strong form, though there is some support for weaker forms of motor theory which claim a non-deterministic relationship between production and perception. Successor theories of speech perception place the focus on acoustic cues to sound categories and can be grouped into two broad categories: abstractionist theories and episodic theories. In abstractionist theories, speech perception involves the identification of an idealized lexical object based on a signal reduced to its necessary components and a normalization of the signal to counteract speaker variability. Episodic theories such as the exemplar model argue that speech perception involves accessing detailed memories (i.e., episodic memories) of previously heard tokens. The problem of perceptual invariance is explained by episodic theories as an issue of familiarity: normalization is a byproduct of exposure to more variable distributions rather than a discrete process, as abstractionist theories claim. The first known phonetic studies were carried out as early as the 6th century BCE by Sanskrit grammarians. The Hindu scholar Pāṇini is among the best known of these early investigators; his four-part grammar, written around 350 BCE, is influential in modern linguistics and still represents "the most complete generative grammar of any language yet written". His grammar formed the basis of modern linguistics and described several important phonetic principles, including voicing. 
This early account described resonance as being produced either by tone, when vocal folds are closed, or noise, when vocal folds are open. The phonetic principles in the grammar are considered "primitives" in that they are the basis for his theoretical analysis rather than the objects of theoretical analysis themselves, and the principles can be inferred from his system of phonology. Advancements in phonetics after Pāṇini and his contemporaries were limited until the modern era, save some limited investigations by Greek and Roman grammarians. In the millennia between the Indic grammarians and modern phonetics, the focus shifted away from the difference between spoken and written language, which was the driving force behind Pāṇini's account, toward the physical properties of speech alone. Sustained interest in phonetics began again around 1800 CE, with the term "phonetics" first being used in the present sense in 1841. With new developments in medicine and the development of audio and visual recording devices, phoneticians were able to use and review new and more detailed data. This early period of modern phonetics included the development of an influential phonetic alphabet based on articulatory positions by Alexander Melville Bell. Known as visible speech, it gained prominence as a tool in the oral education of deaf children. Before the widespread availability of audio recording equipment, phoneticians relied heavily on a tradition of practical phonetics to ensure that transcriptions and findings were able to be consistent across phoneticians. This training involved both ear training—the recognition of speech sounds—as well as production training—the ability to produce sounds. 
Phoneticians were expected to learn to recognize by ear the various sounds of the International Phonetic Alphabet, and the IPA still tests and certifies speakers on their ability to accurately produce the phonetic patterns of English (though they have discontinued this practice for other languages). As a revision of his visible speech method, Melville Bell developed a description of vowels by height and backness resulting in 9 cardinal vowels. As part of their training in practical phonetics, phoneticians were expected to learn to produce these cardinal vowels in order to anchor their perception and transcription of these phones during fieldwork. This approach was critiqued by Peter Ladefoged in the 1960s based on experimental evidence where he found that cardinal vowels were auditory rather than articulatory targets, challenging the claim that they represented articulatory anchors by which phoneticians could judge other articulations. Acoustic phonetics deals with the acoustic properties of speech sounds. The sensation of sound is caused by pressure fluctuations which cause the eardrum to move. The ear transforms this movement into neural signals that the brain registers as sound. Acoustic waveforms are records that measure these pressure fluctuations. Articulatory phonetics deals with the ways in which speech sounds are made. Auditory phonetics studies how humans perceive speech sounds. Because the anatomical features of the auditory system distort the speech signal, humans do not experience speech sounds as perfect acoustic records. For example, the auditory impression of volume, measured in decibels (dB), does not linearly match the difference in sound pressure. The mismatch between acoustic analyses and what the listener hears is especially noticeable in speech sounds that have a lot of high-frequency energy, such as certain fricatives. To reconcile this mismatch, functional models of the auditory system have been developed. 
Human languages use many different sounds and in order to compare them linguists must be able to describe sounds in a way that is language independent. Speech sounds can be described in a number of ways. Most commonly speech sounds are referred to by the mouth movements needed to produce them. Consonants and vowels are two gross categories that phoneticians define by the movements in a speech sound. More fine-grained descriptors are parameters such as place of articulation. Place of articulation, manner of articulation, and voicing are used to describe consonants and are the main divisions of the International Phonetic Alphabet consonant chart. Vowels are described by their height, backness, and rounding. Sign languages are described using a similar but distinct set of parameters to describe signs: location, movement, hand shape, palm orientation, and non-manual features. In addition to articulatory descriptions, sounds used in oral languages can be described using their acoustics. Because the acoustics are a consequence of the articulation, both methods of description are sufficient to distinguish sounds, with the choice between systems depending on the phonetic feature being investigated. Consonants are speech sounds that are articulated with a complete or partial closure of the vocal tract. They are generally produced by the modification of an airstream exhaled from the lungs. The respiratory organs used to create and modify airflow are divided into three regions: the vocal tract (supralaryngeal), the larynx, and the subglottal system. The airstream can be either egressive (out of the vocal tract) or ingressive (into the vocal tract). In pulmonic sounds, the airstream is produced by the lungs in the subglottal system and passes through the larynx and vocal tract. Glottalic sounds use an airstream created by movements of the larynx without airflow from the lungs. 
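The three main consonant descriptors named above (place of articulation, manner of articulation, and voicing) can be sketched as a small data structure. This is an illustrative sketch only; the class name, field names, and example entries are my own, not an official IPA representation.

```python
from dataclasses import dataclass

@dataclass(frozen=True)
class Consonant:
    """A consonant described by the IPA chart's three main divisions."""
    symbol: str
    place: str    # e.g. "bilabial", "alveolar", "velar"
    manner: str   # e.g. "plosive", "fricative", "nasal"
    voiced: bool

# [p] and [b] differ only in voicing; [p] and [t] differ only in place.
p = Consonant("p", "bilabial", "plosive", False)
b = Consonant("b", "bilabial", "plosive", True)
t = Consonant("t", "alveolar", "plosive", False)

def differing_features(a, c):
    """List which of the three descriptors distinguish two consonants."""
    return [f for f in ("place", "manner", "voiced")
            if getattr(a, f) != getattr(c, f)]
```

Pairs like [p]/[b] then come out as minimally different in a single feature, which is exactly how the IPA consonant chart organizes its rows and columns.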
Click consonants are articulated through the rarefaction of air using the tongue, followed by releasing the forward closure of the tongue. Vowels are syllabic speech sounds that are pronounced without any obstruction in the vocal tract. Unlike consonants, which usually have definite places of articulation, vowels are defined in relation to a set of reference vowels called cardinal vowels. Three properties are needed to define vowels: tongue height, tongue backness and lip roundedness. Vowels that are articulated with a stable quality are called monophthongs; a combination of two separate vowels in the same syllable is a diphthong. In the IPA, the vowels are represented on a trapezoid shape representing the human mouth: the vertical axis represents the mouth from floor to roof and the horizontal axis represents the front-back dimension. Phonetic transcription is a system for transcribing phones that occur in a language, whether oral or sign. The most widely known system of phonetic transcription, the International Phonetic Alphabet (IPA), provides a standardized set of symbols for oral phones. The standardized nature of the IPA enables its users to transcribe accurately and consistently the phones of different languages, dialects, and idiolects. The IPA is a useful tool not only for the study of phonetics, but also for language teaching, professional acting, and speech pathology. While no sign language has a standardized writing system, linguists have developed their own notation systems that describe the handshape, location and movement. The Hamburg Notation System (HamNoSys) is similar to the IPA in that it allows for varying levels of detail. Some notation systems such as KOMVA and the Stokoe system were designed for use in dictionaries; they also make use of alphabetic letters in the local language for handshapes, whereas HamNoSys represents the handshape directly. 
SignWriting aims to be an easy-to-learn writing system for sign languages, although it has not been officially adopted by any deaf community yet. Unlike spoken languages, words in sign languages are perceived with the eyes instead of the ears. Signs are articulated with the hands, upper body and head. The main articulators are the hands and arms. Relative parts of the arm are described with the terms proximal and distal. Proximal refers to a part closer to the torso whereas a distal part is further away from it. For example, a wrist movement is distal compared to an elbow movement. Due to requiring less energy, distal movements are generally easier to produce. Various factors – such as muscle flexibility or being considered taboo – restrict what can be considered a sign. Native signers do not look at their conversation partner's hands. Instead, their gaze is fixated on the face. Because peripheral vision is not as focused as the center of the visual field, signs articulated near the face allow for more subtle differences in finger movement and location to be perceived. Unlike spoken languages, sign languages have two identical articulators: the hands. Signers may use whichever hand they prefer with no disruption in communication. Due to universal neurological limitations, two-handed signs generally have the same kind of articulation in both hands; this is referred to as the Symmetry Condition. The second universal constraint is the Dominance Condition, which holds that when two handshapes are involved, one hand will remain stationary and have a more limited set of handshapes compared to the dominant, moving hand. Additionally, it is common for one hand in a two-handed sign to be dropped during informal conversations, a process referred to as weak drop. Just like words in spoken languages, coarticulation may cause signs to influence each other's form. 
Examples include the handshapes of neighboring signs becoming more similar to each other (assimilation) or weak drop (an instance of deletion).
https://en.wikipedia.org/wiki?curid=23194
Petroleum Petroleum is a naturally occurring, yellowish-black liquid found in geological formations beneath the Earth's surface. It is commonly refined into various types of fuels. Components of petroleum are separated using a technique called fractional distillation, i.e. separation of a liquid mixture into fractions differing in boiling point by means of distillation, typically using a fractionating column. It consists of naturally occurring hydrocarbons of various molecular weights and may contain miscellaneous organic compounds. The name "petroleum" covers both naturally occurring unprocessed crude oil and petroleum products that consist of refined crude oil. A fossil fuel, petroleum is formed when large quantities of dead organisms, mostly zooplankton and algae, are buried underneath sedimentary rock and subjected to both intense heat and pressure. Petroleum has mostly been recovered by oil drilling (natural petroleum springs are rare). Drilling is carried out after studies of structural geology (at the reservoir scale), sedimentary basin analysis, and reservoir characterisation (mainly in terms of the porosity and permeability of geologic reservoir structures) have been completed. It is refined and separated, most easily by distillation, into numerous consumer products, from gasoline (petrol) and kerosene to asphalt and chemical reagents used to make plastics, pesticides and pharmaceuticals. Petroleum is used in manufacturing a wide variety of materials, and it is estimated that the world consumes about 95 million barrels each day. The use of petroleum as fuel causes global warming and ocean acidification. According to the UN's Intergovernmental Panel on Climate Change, without fossil fuel phase-out, including petroleum, there will be "severe, pervasive, and irreversible impacts for people and ecosystems". 
The word "petroleum" comes from Medieval Latin "petroleum" (literally "rock oil"), which combines Latin "petra", "rock", and "oleum", "oil". The term was used in the treatise "De Natura Fossilium", published in 1546 by the German mineralogist Georg Bauer, also known as Georgius Agricola. In the 19th century, the term "petroleum" was often used to refer to mineral oils produced by distillation from mined organic solids such as cannel coal (and later oil shale) and refined oils produced from them; in the United Kingdom, storage (and later transport) of these oils was regulated by a series of Petroleum Acts, from the "Petroleum Act 1863" onwards. Petroleum, in one form or another, has been used since ancient times, and is now important across society, including in the economy, politics and technology. The rise in importance was due to the invention of the internal combustion engine, the rise in commercial aviation, and the importance of petroleum to industrial organic chemistry, particularly the synthesis of plastics, fertilisers, solvents, adhesives and pesticides. More than 4000 years ago, according to Herodotus and Diodorus Siculus, asphalt was used in the construction of the walls and towers of Babylon; there were oil pits near Ardericca (near Babylon), and a pitch spring on Zacynthus. Great quantities of it were found on the banks of the river Issus, one of the tributaries of the Euphrates. Ancient Persian tablets indicate the medicinal and lighting uses of petroleum in the upper levels of their society. The use of petroleum in ancient China dates back to more than 2000 years ago. The I Ching, one of the earliest Chinese writings, records that oil in its raw state, without refining, was first discovered, extracted, and used in China in the first century BCE. In addition, the Chinese were the first to record the use of petroleum as fuel as early as the fourth century BCE. 
By 347 CE, oil was produced from bamboo-drilled wells in China. Crude oil was often distilled by Persian chemists, with clear descriptions given in Arabic handbooks such as those of Muhammad ibn Zakarīya Rāzi (Rhazes). The streets of Baghdad were paved with tar, derived from petroleum that became accessible from natural fields in the region. In the 9th century, oil fields were exploited in the area around modern Baku, Azerbaijan. These fields were described by the Arab geographer Abu al-Hasan 'Alī al-Mas'ūdī in the 10th century, and by Marco Polo in the 13th century, who described the output of those wells as hundreds of shiploads. Arab and Persian chemists also distilled crude oil in order to produce flammable products for military purposes. Through Islamic Spain, distillation became available in Western Europe by the 12th century. It has also been present in Romania since the 13th century, being recorded as păcură. Early British explorers to Myanmar documented a flourishing oil extraction industry based in Yenangyaung that, in 1795, had hundreds of hand-dug wells under production. Pechelbronn (Pitch fountain) is said to be the first European site where petroleum was explored and used. The still active Erdpechquelle, a spring where petroleum appears mixed with water, has been used since 1498, notably for medical purposes. Oil sands have been mined since the 18th century. In Wietze in Lower Saxony, natural asphalt/bitumen has been explored since the 18th century. In both Pechelbronn and Wietze, the coal industry dominated the petroleum technologies. Chemist James Young noticed a natural petroleum seepage in the Riddings colliery at Alfreton, Derbyshire from which he distilled a light thin oil suitable for use as lamp oil, at the same time obtaining a more viscous oil suitable for lubricating machinery. In 1848, Young set up a small business refining the crude oil. 
Young eventually succeeded, by distilling cannel coal at a low heat, in creating a fluid resembling petroleum, which when treated in the same way as the seep oil gave similar products. Young found that by slow distillation he could obtain a number of useful liquids from it, one of which he named "paraffine oil" because at low temperatures it congealed into a substance resembling paraffin wax. The production of these oils and solid paraffin wax from coal formed the subject of his patent dated 17 October 1850. In 1850 Young & Meldrum and Edward William Binney entered into partnership under the title of E.W. Binney & Co. at Bathgate in West Lothian and E. Meldrum & Co. at Glasgow; their works at Bathgate were completed in 1851 and became the first truly commercial oil-works in the world with the first modern oil refinery. The world's first oil refinery was built in 1856 by Ignacy Łukasiewicz. His achievements also included the discovery of how to distill kerosene from seep oil, the invention of the modern kerosene lamp (1853), the introduction of the first modern street lamp in Europe (1853), and the construction of the world's first modern oil well (1854). The demand for petroleum as a fuel for lighting in North America and around the world quickly grew. Edwin Drake's 1859 well near Titusville, Pennsylvania, is popularly considered the first modern well. As early as 1858, however, Georg Christian Konrad Hunäus had found a significant amount of petroleum while drilling for lignite in Wietze, Germany. Wietze later provided about 80% of the German consumption in the Wilhelminian Era. The production stopped in 1963, but Wietze has hosted a Petroleum Museum since 1970. Drake's well is probably singled out because it was drilled, not dug; because it used a steam engine; because there was a company associated with it; and because it touched off a major boom. However, there was considerable activity before Drake in various parts of the world in the mid-19th century. 
A group directed by Major Alexeyev of the Bakinskii Corps of Mining Engineers hand-drilled a well in the Baku region in 1848. There were engine-drilled wells in West Virginia in the same year as Drake's well. An early commercial well was hand dug in Poland in 1853, and another in nearby Romania in 1857. At around the same time, the world's first small oil refinery was opened at Jasło in Poland, with a larger one opened at Ploiești in Romania shortly after. Romania is the first country in the world to have had its annual crude oil output officially recorded in international statistics: 275 tonnes for 1857. The first commercial oil well in Canada became operational in 1858 at Oil Springs, Ontario (then Canada West). Businessman James Miller Williams dug several wells between 1855 and 1858 before discovering a rich reserve of oil four metres below ground. Williams extracted 1.5 million litres of crude oil by 1860, refining much of it into kerosene lamp oil. Williams's well became commercially viable a year before Drake's Pennsylvania operation and could be argued to be the first commercial oil well in North America. The discovery at Oil Springs touched off an oil boom which brought hundreds of speculators and workers to the area. Advances in drilling continued into 1862 when local driller Shaw reached a depth of 62 metres using the spring-pole drilling method. On January 16, 1862, after an explosion of natural gas, Canada's first oil gusher came into production, shooting into the air at a recorded rate of 3,000 barrels per day. By the end of the 19th century the Russian Empire, particularly the Branobel company in Azerbaijan, had taken the lead in production. Access to oil was and still is a major factor in several military conflicts of the twentieth century, including World War II, during which oil facilities were a major strategic asset and were extensively bombed. 
The German invasion of the Soviet Union included the goal to capture the Baku oilfields, as it would provide much needed oil-supplies for the German military which was suffering from blockades. Oil exploration in North America during the early 20th century later led to the US becoming the leading producer by mid-century. As petroleum production in the US peaked during the 1960s, however, the United States was surpassed by Saudi Arabia and the Soviet Union. In 1973, Saudi Arabia and other Arab nations imposed an oil embargo against the United States, United Kingdom, Japan and other Western nations which supported Israel in the Yom Kippur War of October 1973. The embargo caused an oil crisis with many short- and long-term effects on global politics and the global economy. Today, about 90 percent of vehicular fuel needs are met by oil. Petroleum also makes up 40 percent of total energy consumption in the United States, but is responsible for only 1 percent of electricity generation. Petroleum's worth as a portable, dense energy source powering the vast majority of vehicles and as the base of many industrial chemicals makes it one of the world's most important commodities. Viability of the oil commodity is controlled by several key parameters: number of vehicles in the world competing for fuel; quantity of oil exported to the world market (Export Land Model); net energy gain (economically useful energy provided minus energy consumed); political stability of oil exporting nations; and ability to defend oil supply lines. The top three oil producing countries are Russia, Saudi Arabia and the United States. In 2018, due in part to developments in hydraulic fracturing and horizontal drilling, the United States became the world's largest producer. About 80 percent of the world's readily accessible reserves are located in the Middle East, with 62.5 percent coming from the Arab 5: Saudi Arabia, United Arab Emirates, Iraq, Qatar and Kuwait. 
A large portion of the world's total oil exists as unconventional sources, such as bitumen in Athabasca oil sands and extra heavy oil in the Orinoco Belt. While significant volumes of oil are extracted from oil sands, particularly in Canada, logistical and technical hurdles remain, as oil extraction requires large amounts of heat and water, making its net energy content quite low relative to conventional crude oil. Thus, Canada's oil sands are not expected to provide more than a few million barrels per day in the foreseeable future. Petroleum includes not only crude oil, but all liquid, gaseous and solid hydrocarbons. Under surface pressure and temperature conditions, lighter hydrocarbons methane, ethane, propane and butane exist as gases, while pentane and heavier hydrocarbons are in the form of liquids or solids. However, in an underground oil reservoir the proportions of gas, liquid, and solid depend on subsurface conditions and on the phase diagram of the petroleum mixture. An oil well produces predominantly crude oil, with some natural gas dissolved in it. Because the pressure is lower at the surface than underground, some of the gas will come out of solution and be recovered (or burned) as "associated gas" or "solution gas". A gas well produces predominantly natural gas. However, because the underground temperature and pressure are higher than at the surface, the gas may contain heavier hydrocarbons such as pentane, hexane, and heptane in the gaseous state. At surface conditions these will condense out of the gas to form "natural gas condensate", often shortened to "condensate." Condensate resembles gasoline in appearance and is similar in composition to some volatile light crude oils. The proportion of light hydrocarbons in the petroleum mixture varies greatly among different oil fields, ranging from as much as 97 percent by weight in the lighter oils to as little as 50 percent in the heavier oils and bitumens. 
The hydrocarbons in crude oil are mostly alkanes, cycloalkanes and various aromatic hydrocarbons, while the other organic compounds contain nitrogen, oxygen and sulfur, and trace amounts of metals such as iron, nickel, copper and vanadium. Many oil reservoirs contain live bacteria. The exact molecular composition of crude oil varies widely from formation to formation but the proportion of chemical elements varies over fairly narrow limits. Four different types of hydrocarbon molecules appear in crude oil. The relative percentage of each varies from oil to oil, determining the properties of each oil. Crude oil varies greatly in appearance depending on its composition. It is usually black or dark brown (although it may be yellowish, reddish, or even greenish). In the reservoir it is usually found in association with natural gas, which, being lighter, forms a "gas cap" over the petroleum, and saline water which, being heavier than most forms of crude oil, generally sinks beneath it. Crude oil may also be found in a semi-solid form mixed with sand and water, as in the Athabasca oil sands in Canada, where it is usually referred to as crude bitumen. In Canada, bitumen is considered a sticky, black, tar-like form of crude oil which is so thick and heavy that it must be heated or diluted before it will flow. Venezuela also has large amounts of oil in the Orinoco oil sands, although the hydrocarbons trapped in them are more fluid than in Canada and are usually called extra heavy oil. These oil sands resources are called unconventional oil to distinguish them from oil which can be extracted using traditional oil well methods. Between them, Canada and Venezuela contain bitumen and extra-heavy oil estimated at about twice the volume of the world's reserves of conventional oil. Petroleum is used mostly, by volume, for refining into fuel oil and gasoline, both important "primary energy" sources. 
84 percent by volume of the hydrocarbons present in petroleum is converted into energy-rich fuels (petroleum-based fuels), including gasoline, diesel, jet, heating, and other fuel oils, and liquefied petroleum gas. The lighter grades of crude oil produce the best yields of these products, but as the world's reserves of light and medium oil are depleted, oil refineries are increasingly having to process heavy oil and bitumen, and use more complex and expensive methods to produce the products required. Because heavier crude oils have too much carbon and not enough hydrogen, these processes generally involve removing carbon from or adding hydrogen to the molecules, and using fluid catalytic cracking to convert the longer, more complex molecules in the oil to the shorter, simpler ones in the fuels. Due to its high energy density, easy transportability and relative abundance, oil has become the world's most important source of energy since the mid-1950s. Petroleum is also the raw material for many chemical products, including pharmaceuticals, solvents, fertilizers, pesticides, and plastics; the 16 percent not used for energy production is converted into these other materials. Petroleum is found in porous rock formations in the upper strata of some areas of the Earth's crust. There is also petroleum in oil sands (tar sands). Known oil reserves are typically estimated at around 190 km³ (1.2 trillion (short scale) barrels) without oil sands, or 595 km³ (3.74 trillion barrels) with oil sands. Consumption currently runs at about 4.9 km³ per year, yielding a remaining oil supply of only about 120 years if current demand remains static. More recent studies, however, put the number at around 50 years. Petroleum is a mixture of a very large number of different hydrocarbons; the most commonly found molecules are alkanes (paraffins), cycloalkanes (naphthenes), aromatic hydrocarbons, or more complicated chemicals like asphaltenes. 
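The reserve-lifetime arithmetic quoted above is a simple static reserves-to-consumption ratio. A minimal sketch, using only the volumes given in the text (190 km³ of reserves without oil sands, 595 km³ with them, and roughly 4.9 km³ per year of consumption) and assuming demand stays flat:

```python
# Static reserves-to-consumption ratio (R/P ratio), using the figures
# quoted in the text. Assumes constant demand; real projections shift
# with demand growth, new discoveries, and recovery technology.

ANNUAL_CONSUMPTION_KM3 = 4.9  # km^3 per year, figure from the text

def years_of_supply(reserves_km3, consumption_km3=ANNUAL_CONSUMPTION_KM3):
    """Years until depletion if consumption never changes."""
    return reserves_km3 / consumption_km3

conventional = years_of_supply(190)    # without oil sands: ~39 years
with_oil_sands = years_of_supply(595)  # including oil sands: ~121 years
```

The "about 120 years" figure in the text corresponds to the 595 km³ total that includes oil sands; the conventional-only figure comes out far lower, which is consistent with the more pessimistic estimates also cited.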
Each petroleum variety has a unique mix of molecules, which define its physical and chemical properties, like color and viscosity. The "alkanes", also known as "paraffins", are saturated hydrocarbons with straight or branched chains which contain only carbon and hydrogen and have the general formula CₙH₂ₙ₊₂. They generally have from 5 to 40 carbon atoms per molecule, although trace amounts of shorter or longer molecules may be present in the mixture. The alkanes from pentane (C₅H₁₂) to octane (C₈H₁₈) are refined into gasoline, the ones from nonane (C₉H₂₀) to hexadecane (C₁₆H₃₄) into diesel fuel, kerosene and jet fuel. Alkanes with more than 16 carbon atoms can be refined into fuel oil and lubricating oil. At the heavier end of the range, paraffin wax is an alkane with approximately 25 carbon atoms, while asphalt has 35 and up, although these are usually cracked by modern refineries into more valuable products. The shortest molecules, those with four or fewer carbon atoms, are in a gaseous state at room temperature. They are the petroleum gases. Depending on demand and the cost of recovery, these gases are either flared off, sold as liquefied petroleum gas under pressure, or used to power the refinery's own burners. During the winter, butane (C₄H₁₀) is blended into the gasoline pool at high rates, because its high vapour pressure assists with cold starts. Liquefied under pressure slightly above atmospheric, it is best known for powering cigarette lighters, but it is also a main fuel source for many developing countries. Propane can be liquefied under modest pressure, and is consumed for just about every application relying on petroleum for energy, from cooking to heating to transportation. The "cycloalkanes", also known as "naphthenes", are saturated hydrocarbons which have one or more carbon rings to which hydrogen atoms are attached according to the formula CₙH₂ₙ. Cycloalkanes have similar properties to alkanes but have higher boiling points. 
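The carbon-count ranges given above for alkanes (gases at C4 and below, gasoline from C5 to C8, middle distillates from C9 to C16, heavier products beyond that) can be sketched as a simple lookup. This is an illustration of the text's ranges, not a refinery specification; real cut points are defined by boiling range rather than carbon count alone.

```python
# Rough mapping from an alkane's carbon count to the refinery fraction
# it typically ends up in, following the ranges given in the text.

def alkane_fraction(n_carbons):
    if n_carbons <= 4:
        return "petroleum gas"           # flared, sold as LPG, or burned on site
    if n_carbons <= 8:
        return "gasoline"                # pentane (C5H12) to octane (C8H18)
    if n_carbons <= 16:
        return "diesel/kerosene/jet"     # nonane (C9H20) to hexadecane (C16H34)
    return "fuel oil / lubricating oil"  # heavier alkanes, often cracked

def alkane_formula(n):
    """General alkane formula CnH2n+2, written in plain ASCII."""
    return f"C{n}H{2 * n + 2}"
```

For example, paraffin wax at roughly 25 carbons falls in the heaviest bucket, matching the text's note that such molecules are usually cracked into more valuable products.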
The "aromatic hydrocarbons" are unsaturated hydrocarbons which have one or more planar six-carbon rings called benzene rings, to which hydrogen atoms are attached with the formula CₙH₂ₙ₋₆. They tend to burn with a sooty flame, and many have a sweet aroma. Some are carcinogenic. These different molecules are separated by fractional distillation at an oil refinery to produce gasoline, jet fuel, kerosene, and other hydrocarbons. For example, 2,2,4-trimethylpentane (isooctane), widely used in gasoline, has a chemical formula of C₈H₁₈ and it reacts with oxygen exothermically: 2 C₈H₁₈ + 25 O₂ → 16 CO₂ + 18 H₂O. The number of various molecules in an oil sample can be determined by laboratory analysis. The molecules are typically extracted in a solvent, then separated in a gas chromatograph, and finally determined with a suitable detector, such as a flame ionization detector or a mass spectrometer. Due to the large number of co-eluted hydrocarbons within oil, many cannot be resolved by traditional gas chromatography and typically appear as a hump in the chromatogram. This Unresolved Complex Mixture (UCM) of hydrocarbons is particularly apparent when analysing weathered oils and extracts from tissues of organisms exposed to oil. Some of the components of oil will mix with water: the water associated fraction of the oil. Incomplete combustion of petroleum or gasoline results in production of toxic byproducts. Too little oxygen during combustion results in the formation of carbon monoxide. Due to the high temperatures and high pressures involved, exhaust gases from gasoline combustion in car engines usually include nitrogen oxides which are responsible for creation of photochemical smog. At a constant volume, the heat of combustion Q of a petroleum product, measured in calories per gram, can be approximated as a function of d, the specific gravity at 15 °C. 
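The exothermic combustion of isooctane mentioned above can be checked with a short mass-balance sketch. The balanced equation for complete combustion is 2 C8H18 + 25 O2 → 16 CO2 + 18 H2O; the code below verifies that reactant and product masses agree and derives how much CO2 is released per kilogram of fuel.

```python
# Mass balance for complete combustion of isooctane (C8H18):
#   2 C8H18 + 25 O2 -> 16 CO2 + 18 H2O
# Standard atomic masses are used; values are for illustration.

MASS = {"C": 12.011, "H": 1.008, "O": 15.999}

def molar_mass(atoms):
    """atoms: dict of element -> count, e.g. {"C": 8, "H": 18}."""
    return sum(MASS[el] * n for el, n in atoms.items())

isooctane = {"C": 8, "H": 18}
o2 = {"O": 2}
co2 = {"C": 1, "O": 2}
h2o = {"H": 2, "O": 1}

reactants = 2 * molar_mass(isooctane) + 25 * molar_mass(o2)
products = 16 * molar_mass(co2) + 18 * molar_mass(h2o)

# kg of CO2 released per kg of isooctane burned (~3.1)
co2_per_kg_fuel = 16 * molar_mass(co2) / (2 * molar_mass(isooctane))
```

Conservation of mass requires the two sides to match exactly, and the roughly 3:1 CO2-to-fuel mass ratio is why burning petroleum products contributes so heavily to the climate effects noted earlier in the article.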
The thermal conductivity K of petroleum-based liquids, measured in BTU·°F⁻¹·hr⁻¹·ft⁻¹, can be modeled as a function of the temperature T in °F and the oil's API gravity in degrees. The specific heat c of petroleum oils, measured in BTU/(lb·°F), can be modeled as a function of the temperature T in Fahrenheit and the specific gravity d at 15 °C; an equivalent form gives c in kcal/(kg·°C) with the temperature in Celsius. The latent heat of vaporization L under atmospheric conditions, measured in BTU/lb, can likewise be modeled as a function of the temperature T in °F and the specific gravity d at 15 °C; an equivalent form gives L in kcal/kg with the temperature in Celsius. Petroleum is a fossil fuel derived from ancient fossilized organic materials, such as zooplankton and algae. Vast amounts of these remains settled to sea or lake bottoms where they were covered in stagnant water (water with no dissolved oxygen) or sediments such as mud and silt faster than they could decompose aerobically. Approximately 1 m below this sediment or water, the oxygen concentration was low, below 0.1 mg/l, and anoxic conditions existed. Temperatures also remained constant. As further layers settled to the sea or lake bed, intense heat and pressure built up in the lower regions. This process caused the organic matter to change, first into a waxy material known as kerogen, found in various oil shales around the world, and then with more heat into liquid and gaseous hydrocarbons via a process known as catagenesis. Formation of petroleum occurs from hydrocarbon pyrolysis in a variety of mainly endothermic reactions at high temperature or pressure, or both. These phases are described in detail below. 
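The property correlations above use two interchangeable density measures: specific gravity and degrees API. The conversion between them is the standard API gravity definition (the 141.5/131.5 constants), sketched here for reference:

```python
# Conversion between specific gravity (relative to water at about 60 °F /
# 15.6 °C) and degrees API, the two density measures used in petroleum
# property correlations. Constants are the standard API definition.

def api_from_sg(sg):
    return 141.5 / sg - 131.5

def sg_from_api(api):
    return 141.5 / (api + 131.5)

# Water (sg = 1.0) is 10 °API by definition; lighter crudes score higher,
# e.g. a light crude near sg 0.85 is about 35 °API.
```

Because API gravity is an inverse scale, a "heavy" crude or bitumen has a low API number, which is why the extra-heavy oils and bitumens discussed above require heating or dilution before they will flow.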
In the absence of plentiful oxygen, "aerobic" bacteria were prevented from decaying the organic matter after it was buried under a layer of sediment or water. However, "anaerobic" bacteria were able to reduce sulfates and nitrates among the matter to H₂S and N₂ respectively by using the matter as a source for other reactants. Due to such anaerobic bacteria, at first this matter began to break apart mostly via hydrolysis: polysaccharides and proteins were hydrolyzed to simple sugars and amino acids respectively. These were further anaerobically oxidized at an accelerated rate by the enzymes of the bacteria: e.g., amino acids went through oxidative deamination to imino acids, which in turn reacted further to ammonia and α-keto acids. Monosaccharides in turn ultimately decayed to CO₂ and methane. The anaerobic decay products of amino acids, monosaccharides, phenols and aldehydes combined to form fulvic acids. Fats and waxes were not extensively hydrolyzed under these mild conditions. Some phenolic compounds produced from previous reactions worked as bactericides and the Actinomycetales order of bacteria also produced antibiotic compounds (e.g., streptomycin). Thus the action of anaerobic bacteria ceased at about 10 m below the water or sediment. The mixture at this depth contained fulvic acids, unreacted and partially reacted fats and waxes, slightly modified lignin, resins and other hydrocarbons. As more layers of organic matter settled to the sea or lake bed, intense heat and pressure built up in the lower regions. As a consequence, compounds of this mixture began to combine in poorly understood ways to kerogen. Combination happened in a similar fashion as phenol and formaldehyde molecules react to urea-formaldehyde resins, but kerogen formation occurred in a more complex manner due to a bigger variety of reactants. 
The total process of kerogen formation from the beginning of anaerobic decay is called diagenesis, a word that means a transformation of materials by dissolution and recombination of their constituents. Kerogen formation continued to the depth of about 1 km from the Earth's surface where temperatures may reach around 50 °C. Kerogen formation represents a halfway point between organic matter and fossil fuels: kerogen can be exposed to oxygen, oxidize and thus be lost, or it could be buried deeper inside the Earth's crust and be subjected to conditions which allow it to slowly transform into fossil fuels like petroleum. The latter happened through catagenesis in which the reactions were mostly radical rearrangements of kerogen. These reactions took thousands to millions of years and no external reactants were involved. Due to the radical nature of these reactions, kerogen reacted towards two classes of products: those with low H/C ratio (anthracene or products similar to it) and those with high H/C ratio (methane or products similar to it); i.e., carbon-rich or hydrogen-rich products. Because catagenesis was closed off from external reactants, the resulting composition of the fuel mixture was dependent on the composition of the kerogen via reaction stoichiometry. Three main types of kerogen exist: type I (algal), II (liptinic) and III (humic), which were formed mainly from algae, plankton and woody plants (this term includes trees, shrubs and lianas) respectively. Catagenesis was pyrolytic despite the fact that it happened at relatively low temperatures (when compared to commercial pyrolysis plants) of 60 to several hundred °C. Pyrolysis was possible because of the long reaction times involved. Heat for catagenesis came from the decomposition of radioactive materials of the crust, especially ⁴⁰K, ²³²Th, ²³⁵U and ²³⁸U. The heat varied with geothermal gradient and was typically 10–30 °C per km of depth from the Earth's surface. 
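The geothermal gradient quoted above (typically 10 to 30 °C per km) fixes the depth at which catagenesis temperatures are reached. A minimal sketch, assuming an illustrative 15 °C surface temperature (not a figure from the text):

```python
# Depth at which a target subsurface temperature is reached, given a
# geothermal gradient. The text quotes gradients of 10-30 °C per km;
# the 15 °C surface temperature is an illustrative assumption.

def depth_for_temperature(target_c, gradient_c_per_km, surface_c=15.0):
    """Depth in km where the temperature first reaches target_c."""
    return (target_c - surface_c) / gradient_c_per_km

# Catagenesis starting near 60 °C therefore begins at roughly
# 1.5 km (steep 30 °C/km gradient) to 4.5 km (gentle 10 °C/km gradient):
shallow = depth_for_temperature(60, 30)  # 1.5 km
deep = depth_for_temperature(60, 10)     # 4.5 km
```

This spread in onset depth is one reason the "oil window" discussed next sits at different depths in different basins.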
Unusual magma intrusions, however, could have created greater localized heating. Geologists often refer to the temperature range in which oil forms as an "oil window". Below the minimum temperature, oil remains trapped in the form of kerogen. Above the maximum temperature, the oil is converted to natural gas through the process of thermal cracking. Sometimes, oil formed at extreme depths may migrate and become trapped at a much shallower level. The Athabasca Oil Sands are one example of this. An alternative mechanism, the hypothesis of abiogenic petroleum origin (petroleum formed by inorganic means), was proposed by Russian scientists in the mid-1850s, but it is contradicted by geological and geochemical evidence. Abiogenic sources of oil have been found, but never in commercially profitable amounts. "The controversy isn't over whether abiogenic oil reserves exist," said Larry Nation of the American Association of Petroleum Geologists. "The controversy is over how much they contribute to Earth's overall reserves and how much time and effort geologists should devote to seeking them out." Three conditions must be present for oil reservoirs to form: a source rock rich in hydrocarbon material buried deep enough for subterranean heat to cook it into oil; a porous and permeable reservoir rock where it can accumulate; and a caprock (seal) or other mechanism that prevents the oil from escaping to the surface. The reactions that produce oil and natural gas are often modeled as first-order breakdown reactions, where hydrocarbons are broken down to oil and natural gas by a set of parallel reactions, and oil eventually breaks down to natural gas by another set of reactions. The latter set is regularly used in petrochemical plants and oil refineries. Wells are drilled into oil reservoirs to extract the crude oil. "Natural lift" production methods that rely on the natural reservoir pressure to force the oil to the surface are usually sufficient for a while after reservoirs are first tapped. In some reservoirs, such as in the Middle East, the natural pressure is sufficient over a long time. The natural pressure in most reservoirs, however, eventually dissipates. 
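The first-order breakdown modeling mentioned above can be sketched numerically. The example below is a minimal illustration, not one of the models actually used in the industry: it collapses the chemistry into a single chain, kerogen → oil → gas, treated as two sequential first-order reactions with made-up rate constants, and uses the standard closed-form solution for consecutive first-order kinetics.

```python
import math

def kerogen_oil_gas(t, k1=1e-7, k2=5e-8, k0=1.0):
    """Sequential first-order kinetics: kerogen -(k1)-> oil -(k2)-> gas.

    t is time in arbitrary units (e.g. years); k1 and k2 are illustrative
    rate constants (not measured values); k0 is the initial amount of
    kerogen. Returns (kerogen, oil, gas); total mass is conserved.
    """
    kerogen = k0 * math.exp(-k1 * t)
    # Closed-form solution for the intermediate species (valid for k1 != k2):
    oil = k0 * k1 / (k2 - k1) * (math.exp(-k1 * t) - math.exp(-k2 * t))
    gas = k0 - kerogen - oil  # whatever is neither kerogen nor oil
    return kerogen, oil, gas

k, o, g = kerogen_oil_gas(1e7)  # state after 10 million time units
```

Because the second step is slower than the first in this parameterization, oil accumulates before eventually cracking to gas, mirroring the oil-window narrative: kerogen persists below the minimum temperature, and thermal cracking to gas dominates above the maximum.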
Then the oil must be extracted using "artificial lift" means. Over time, these "primary" methods become less effective and "secondary" production methods may be used. A common secondary method is "waterflood" or injection of water into the reservoir to increase pressure and force the oil to the drilled shaft or "wellbore." Eventually "tertiary" or "enhanced" oil recovery methods may be used to increase the oil's flow characteristics by injecting steam, carbon dioxide and other gases or chemicals into the reservoir. In the United States, primary production methods account for less than 40 percent of the oil produced on a daily basis, secondary methods account for about half, and tertiary recovery the remaining 10 percent. Extracting oil (or "bitumen") from oil/tar sand and oil shale deposits requires mining the sand or shale and heating it in a vessel or retort, or using "in-situ" methods of injecting heated liquids into the deposit and then pumping the liquid back out saturated with oil. Oil-eating bacteria biodegrade oil that has escaped to the surface. Oil sands are reservoirs of partially biodegraded oil still in the process of escaping and being biodegraded, but they contain so much migrating oil that, although most of it has escaped, vast amounts are still present, more than can be found in conventional oil reservoirs. The lighter fractions of the crude oil are destroyed first, resulting in reservoirs containing an extremely heavy form of crude oil, called crude bitumen in Canada, or extra-heavy crude oil in Venezuela. These two countries have the world's largest deposits of oil sands. On the other hand, oil shales are source rocks that have not been exposed to heat or pressure long enough to convert their trapped hydrocarbons into crude oil. Technically speaking, oil shales are not always shales and do not contain oil, but are fine-grained sedimentary rocks containing an insoluble organic solid called kerogen. 
The kerogen in the rock can be converted into crude oil using heat and pressure to simulate natural processes. The method has been known for centuries and was patented in 1694 under British Crown Patent No. 330 covering, "A way to extract and make great quantities of pitch, tar, and oil out of a sort of stone." Although oil shales are found in many countries, the United States has the world's largest deposits. The petroleum industry generally classifies crude oil by the geographic location it is produced in (e.g., West Texas Intermediate, Brent, or Oman), its API gravity (an oil industry measure of density), and its sulfur content. Crude oil may be considered "light" if it has low density or "heavy" if it has high density; and it may be referred to as "sweet" if it contains relatively little sulfur or "sour" if it contains substantial amounts of sulfur. The geographic location is important because it affects transportation costs to the refinery. "Light" crude oil is more desirable than "heavy" oil since it produces a higher yield of gasoline, while "sweet" oil commands a higher price than "sour" oil because it has fewer environmental problems and requires less refining to meet sulfur standards imposed on fuels in consuming countries. Each crude oil has unique molecular characteristics which are revealed by crude oil assay analysis in petroleum laboratories. Barrels from an area in which the crude oil's molecular characteristics have been determined and the oil has been classified are used as pricing references throughout the world. Some of the common reference crudes are: There are declining amounts of these benchmark oils being produced each year, so other oils are more commonly what is actually delivered. 
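The light/heavy and sweet/sour classification above can be made concrete with a small sketch. The API gravity formula (141.5/SG − 131.5, with specific gravity measured at 60 °F) is the standard industry definition; the cut-off values used below (31.1° and 22.3° API, 0.5% sulfur by weight) are commonly quoted conventions rather than figures from the text, and real benchmarks vary.

```python
def api_gravity(specific_gravity):
    """API gravity from specific gravity relative to water at 60 degF."""
    return 141.5 / specific_gravity - 131.5

def classify_crude(specific_gravity, sulfur_wt_pct):
    """Rough light/heavy and sweet/sour labels.

    The thresholds are common conventions, not universal standards.
    """
    api = api_gravity(specific_gravity)
    if api > 31.1:
        density = "light"
    elif api < 22.3:
        density = "heavy"
    else:
        density = "medium"
    sulfur = "sweet" if sulfur_wt_pct < 0.5 else "sour"
    return density, sulfur

# A crude with SG ~0.83 and 0.24% sulfur (roughly WTI-like figures):
print(classify_crude(0.83, 0.24))  # ('light', 'sweet')
```

Note the inverse relationship: lower density means higher API gravity, which is why "light" crudes have the high API numbers. Water itself works out to exactly 10° API.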
While the reference price may be for West Texas Intermediate delivered at Cushing, the actual oil being traded may be a discounted Canadian heavy oil—Western Canadian Select—delivered at Hardisty, Alberta, and for a Brent Blend delivered at Shetland, it may be a discounted Russian Export Blend delivered at the port of Primorsk. The petroleum industry is involved in the global processes of exploration, extraction, refining, transporting (often with oil tankers and pipelines), and marketing petroleum products. The largest volume products of the industry are fuel oil and gasoline. Petroleum is also the raw material for many chemical products, including pharmaceuticals, solvents, fertilizers, pesticides, and plastics. The industry is usually divided into three major components: upstream, midstream and downstream. Midstream operations are usually included in the downstream category. Petroleum is vital to many industries, and is of importance to the maintenance of industrialized civilization itself, and thus is a critical concern to many nations. Oil accounts for a large percentage of the world's energy consumption, ranging from a low of 32 percent for Europe and Asia, up to a high of 53 percent for the Middle East, South and Central America (44%), Africa (41%), and North America (40%). The world at large consumes 30 billion barrels (4.8 km3) of oil per year, and the top oil consumers largely consist of developed nations. In fact, 24 percent of the oil consumed in 2004 went to the United States alone, though by 2007 this had dropped to 21 percent of world oil consumed. In the US, in the states of Arizona, California, Hawaii, Nevada, Oregon and Washington, the Western States Petroleum Association (WSPA) represents companies responsible for producing, distributing, refining, transporting and marketing petroleum. This non-profit trade association was founded in 1907, and is the oldest petroleum trade association in the United States. 
In the 1950s, shipping costs made up 33 percent of the price of oil transported from the Persian Gulf to the United States, but due to the development of supertankers in the 1970s, the cost of shipping dropped to only 5 percent of the price of Persian oil in the US. Due to the increase in the value of crude oil during the last 30 years, the share of the shipping cost in the final cost of the delivered commodity was less than 3% in 2010. For example, in 2010 the shipping cost from the Persian Gulf to the US was in the range of $20 per tonne, while the cost of the delivered crude oil was around $800 per tonne. After the collapse of the OPEC-administered pricing system in 1985, and a short-lived experiment with netback pricing, oil-exporting countries adopted a market-linked pricing mechanism. First adopted by PEMEX in 1986, market-linked pricing was widely accepted, and by 1988 it had become, and remains, the main method for pricing crude oil in international trade. The current references, or pricing markers, are Brent, WTI, and Dubai/Oman. Another important market is Shanghai, reflecting China's high and rising consumption in the 21st century; it is also unusual in that it shifted trading from US dollars to renminbi. The chemical structure of petroleum is heterogeneous, composed of hydrocarbon chains of different lengths. Because of this, petroleum may be taken to oil refineries, where the hydrocarbon chemicals are separated by distillation and treated by other chemical processes, to be used for a variety of purposes. The total cost of such a plant is about 9 billion dollars. The most common distillation fractions of petroleum are fuels. Fuels include (by increasing boiling temperature range): Petroleum classification according to chemical composition. 
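The sub-3% shipping share quoted above follows directly from the two 2010 figures in the text; a one-line check:

```python
shipping_cost_per_t = 20.0    # $/t, Persian Gulf to the US, 2010 (from the text)
delivered_cost_per_t = 800.0  # $/t, delivered crude oil, 2010 (from the text)

share = shipping_cost_per_t / delivered_cost_per_t
print(f"{share:.1%}")  # 2.5%, i.e. less than 3% of the delivered cost
```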
Certain types of resultant hydrocarbons may be mixed with other non-hydrocarbons to create other end products: Since the 1940s, agricultural productivity has increased dramatically, due largely to the increased use of energy-intensive mechanization, fertilizers and pesticides. According to the US Energy Information Administration (EIA) estimate for 2011, the world consumes 87.421 million barrels of oil each day. This table orders the amount of petroleum consumed in 2011 in thousand barrels (1000 bbl) per day and in thousand cubic metres (1000 m3) per day (source: US Energy Information Administration, with population data; notes: 1 = peak production of oil already passed in this state, 2 = this country is not a major oil producer). In petroleum industry parlance, "production" refers to the quantity of crude extracted from reserves, not the literal creation of the product. In order of net exports in 2011, 2009 and 2006, in thousand bbl/d and thousand m3/d (source: US Energy Information Administration; notes: 1 = peak production already passed in this state, 2 = Canadian statistics are complicated by the fact that Canada is both an importer and exporter of crude oil, and refines large amounts of oil for the U.S. market; it is the leading source of U.S. imports of oil and products, averaging in August 2007). Total world production/consumption (as of 2005) is approximately . In order of net imports in 2011, 2009 and 2006, in thousand bbl/d and thousand m3/d (source: US Energy Information Administration; notes: 1 = peak production of oil expected in 2020, 2 = major oil producer whose production is still increasing). Countries whose oil production is 10% or less of their consumption (source: CIA World Factbook). Because petroleum is a naturally occurring substance, its presence in the environment need not be the result of human causes such as accidents and routine activities (seismic exploration, drilling, extraction, refining and combustion). 
Phenomena such as seeps and tar pits are examples of areas that petroleum affects without man's involvement. Regardless of source, petroleum's effects when released into the environment are similar. When burned, petroleum releases carbon dioxide, a greenhouse gas. Along with the burning of coal, petroleum combustion is the largest contributor to the increase in atmospheric CO2. Atmospheric CO2 has risen over the last 150 years to current levels of over 415 ppmv, from the 180–300 ppmv of the prior 800 thousand years. The resulting rise in temperature has reduced the minimum Arctic ice pack to , a loss of almost half since satellite measurements started in 1979. Because of this melt, more oil reserves have been revealed. About 13 percent of the world's undiscovered oil resides in the Arctic. Ocean acidification is the increase in the acidity of the Earth's oceans caused by the uptake of carbon dioxide from the atmosphere. This increase in acidity inhibits all marine life, having a greater impact on smaller organisms as well as shelled organisms (see scallops). Oil extraction is simply the removal of oil from the reservoir (oil pool). Oil is often recovered as a water-in-oil emulsion, and specialty chemicals called demulsifiers are used to separate the oil from water. Oil extraction is costly and sometimes environmentally damaging. Offshore exploration and extraction of oil disturb the surrounding marine environment. Crude oil and refined fuel spills from tanker ship accidents have damaged natural ecosystems in Alaska, the Gulf of Mexico, the Galápagos Islands, France and many other places. The quantity of oil spilled during accidents has ranged from a few hundred tons to several hundred thousand tons (e.g., Deepwater Horizon oil spill, SS Atlantic Empress, Amoco Cadiz). Smaller spills have already proven to have a great impact on ecosystems, such as the "Exxon Valdez" oil spill. 
Oil spills at sea are generally much more damaging than those on land, since they can spread for hundreds of nautical miles in a thin oil slick which can cover beaches with a thin coating of oil. This can kill sea birds, mammals, shellfish and other organisms it coats. Oil spills on land are more readily containable if a makeshift earth dam can be rapidly bulldozed around the spill site before most of the oil escapes, and land animals can avoid the oil more easily. Control of oil spills is difficult, requires ad hoc methods, and often demands a large amount of manpower. Dropping bombs and incendiary devices from aircraft onto a wreck has produced poor results; modern techniques would include pumping the oil from the wreck, as in the "Prestige" and "Erika" oil spills. Though crude oil is predominantly composed of various hydrocarbons, certain nitrogen heterocyclic compounds, such as pyridine, picoline, and quinoline, are reported as contaminants associated with crude oil, as well as with facilities processing oil shale or coal, and have also been found at legacy wood treatment sites. These compounds have a very high water solubility and thus tend to dissolve and move with water. Certain naturally occurring bacteria, such as "Micrococcus", "Arthrobacter", and "Rhodococcus", have been shown to degrade these contaminants. A tarball is a blob of crude oil (not to be confused with tar, which is a man-made product derived from pine trees or refined from petroleum) which has been weathered after floating in the ocean. Tarballs are an aquatic pollutant in most environments, although they can occur naturally, for example in the Santa Barbara Channel of California or in the Gulf of Mexico off Texas. Their concentration and features have been used to assess the extent of oil spills. Their composition can be used to identify their sources of origin, and tarballs themselves may be dispersed over long distances by deep sea currents. 
They are slowly decomposed by bacteria, including "Chromobacterium violaceum", "Cladosporium resinae", "Bacillus submarinus", "Micrococcus varians", "Pseudomonas aeruginosa", "Candida marina" and "Saccharomyces estuari". James S. Robbins has argued that the advent of petroleum-refined kerosene saved some species of great whales from extinction by providing an inexpensive substitute for whale oil, thus eliminating the economic imperative for open-boat whaling. In the United States in 2007, about 70 percent of petroleum was used for transportation (e.g. gasoline, diesel, jet fuel), 24 percent by industry (e.g. production of plastics), 5 percent for residential and commercial uses, and 2 percent for electricity production. Outside of the US, a higher proportion of petroleum tends to be used for electricity. Petroleum-based vehicle fuels can be replaced by either alternative fuels or other methods of propulsion such as electric or nuclear. Alternative fuel vehicles refers to both: Biological feedstocks do exist for industrial uses such as bioplastic production. In oil-producing countries with little refinery capacity, oil is sometimes burned to produce electricity. Renewable energy technologies such as solar power, wind power, micro hydro, biomass and biofuels are used, but the primary alternatives remain large-scale hydroelectricity, nuclear and coal-fired generation. Consumption in the twentieth and twenty-first centuries has been driven largely by automobile sector growth. The 1985–2003 oil glut even fueled the sales of low-fuel-economy vehicles in OECD countries. The 2008 economic crisis seems to have had some impact on the sales of such vehicles; still, in 2008 oil consumption showed a small increase. In 2016 Goldman Sachs predicted lower demand for oil due to concerns about emerging economies, especially China. The BRICS countries (Brazil, Russia, India, China, South Africa) might also contribute to demand growth, as China briefly became the largest automobile market in December 2009. 
The immediate outlook still hints upwards. In the long term, uncertainties linger; OPEC believes that the OECD countries will push low-consumption policies at some point in the future; when that happens, it will definitely curb oil sales, and both OPEC and the Energy Information Administration (EIA) kept lowering their 2020 consumption estimates during the past five years. A detailed review of International Energy Agency oil projections has revealed that revisions of world oil production, price and investments have been motivated by a combination of demand and supply factors. Altogether, non-OPEC conventional projections have been fairly stable over the last 15 years, while downward revisions were mainly allocated to OPEC. Recent upward revisions are primarily a result of US tight oil. Production will also face an increasingly complex situation; while OPEC countries still have large reserves at low production prices, newly found reservoirs often lead to higher prices; offshore giants such as Tupi, Guara and Tiber demand high investments and ever-increasing technological abilities. Subsalt reservoirs such as Tupi were unknown in the twentieth century, mainly because the industry was unable to probe them. Enhanced Oil Recovery (EOR) techniques (for example at Daqing, China) will continue to play a major role in increasing the world's recoverable oil. The expected availability of petroleum resources has always been around 35 years or even less since the start of modern exploration. The oil constant, an insider pun in the German industry, refers to that effect. A growing number of divestment campaigns from major funds, pushed by newer generations who question the sustainability of petroleum, may hinder the financing of future oil prospecting and production. 
Peak oil is a term applied to the projection that future petroleum production (whether for individual oil wells, entire oil fields, whole countries, or worldwide production) will eventually peak and then decline at a similar rate to the rate of increase before the peak, as these reserves are exhausted. The peak of oil discoveries was in 1965, and oil production per year has surpassed oil discoveries every year since 1980. However, this does not mean that potential oil production has surpassed oil demand. Hubbert applied his theory to accurately predict the peak of U.S. conventional oil production at a date between 1966 and 1970. This prediction was based on data available at the time of his publication in 1956. In the same paper, Hubbert predicted world peak oil in "half a century" after his publication, which would be 2006. It is difficult to predict the oil peak in any given region, due to the lack of knowledge and/or transparency in accounting of global oil reserves. Based on available production data, proponents have previously predicted the peak for the world to be in 1989, 1995, or 1995–2000. Some of these predictions date from before the recession of the early 1980s and the consequent reduction in global consumption, the effect of which was to delay the date of any peak by several years. Just as the 1971 U.S. peak in oil production was only clearly recognized after the fact, a peak in world production will be difficult to discern until production clearly drops off. The peak is also a moving target, as it is now measured as "liquids", which includes synthetic fuels, instead of just conventional oil. The International Energy Agency (IEA) said in 2010 that production of conventional crude oil had peaked in 2006 at 70 MBBL/d, then flattened at 68 or 69 thereafter. Since virtually all economic sectors rely heavily on petroleum, peak oil, if it were to occur, could lead to a "partial or complete failure of markets". 
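Hubbert's projection treats cumulative production as a logistic curve, so the production rate is the bell-shaped derivative of the logistic, symmetric about the peak, which matches the description above of production declining "at a similar rate to the rate of increase before the peak". The sketch below uses made-up parameters (ultimately recoverable resource, steepness, peak year), chosen purely for illustration and not fitted to any real data:

```python
import math

def hubbert_rate(t, urr=200.0, k=0.05, t_peak=1970.0):
    """Production rate from the derivative of the logistic curve.

    urr: ultimately recoverable resource; k: steepness; t_peak: peak year.
    All three default values are illustrative assumptions. The peak rate,
    reached at t_peak, is urr * k / 4, and the curve is symmetric about it.
    """
    e = math.exp(-k * (t - t_peak))
    return urr * k * e / (1.0 + e) ** 2

print(hubbert_rate(1970.0))  # peak rate: 200 * 0.05 / 4 = 2.5
```

The symmetry is the model's key qualitative claim: `hubbert_rate(t_peak - d)` equals `hubbert_rate(t_peak + d)` for any offset `d`, so the decline after the peak mirrors the rise before it.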
In the mid-2000s, widespread fears of an imminent peak led to the "peak oil movement," in which over one hundred thousand Americans prepared, individually and collectively, for the "post-carbon" future. While there has been much focus historically on peak oil supply, focus is increasingly shifting to peak demand as more countries seek to transition to renewable energy. The GeGaLo index of geopolitical gains and losses assesses how the geopolitical position of 156 countries may change if the world fully transitions to renewable energy resources. Former oil exporters are expected to lose power, while the positions of former oil importers and countries rich in renewable energy resources are expected to strengthen. Unconventional oil is petroleum produced or extracted using techniques other than the conventional methods. The calculus for peak oil has changed with the introduction of unconventional production methods. In particular, the combination of horizontal drilling and hydraulic fracturing has resulted in a significant increase in production from previously uneconomic plays. Analysts expected that $150 billion would be spent on further developing North American tight oil fields in 2015. The large increase in tight oil production is one of the reasons behind the price drop in late 2014. Certain rock strata contain hydrocarbons but have low permeability and are not vertically thick. Conventional vertical wells would be unable to economically retrieve these hydrocarbons. Horizontal drilling, extending horizontally through the strata, permits the well to access a much greater volume of the strata. Hydraulic fracturing creates greater permeability and increases hydrocarbon flow to the wellbore.
https://en.wikipedia.org/wiki?curid=23195
Poultry Poultry are domesticated birds kept by humans for their eggs, their meat or their feathers. These birds are most typically members of the superorder Galloanserae (fowl), especially the order Galliformes (which includes chickens, quails, and turkeys). Poultry also includes other birds that are killed for their meat, such as the young of pigeons (known as squabs), but does not include similar wild birds hunted for sport or food and known as game. The word "poultry" comes from the French/Norman word "poule", itself derived from the Latin word "pullus", which means small animal. The domestication of poultry took place several thousand years ago. This may have originally been as a result of people hatching and rearing young birds from eggs collected from the wild, but later involved keeping the birds permanently in captivity. Domesticated chickens may have been used for cockfighting at first, and quail kept for their songs, but it was soon realised how useful it was to have a captive-bred source of food. Selective breeding for fast growth, egg-laying ability, conformation, plumage and docility took place over the centuries, and modern breeds often look very different from their wild ancestors. Although some birds are still kept in small flocks in extensive systems, most birds available in the market today are reared in intensive commercial enterprises. Together with pig meat, poultry is one of the two most widely eaten types of meat globally, the two together accounting for over 70% of the meat supply in 2012; poultry provides nutritionally beneficial food containing high-quality protein accompanied by a low proportion of fat. All poultry meat should be properly handled and sufficiently cooked in order to reduce the risk of food poisoning. The word "poultry" comes from the Middle English "pultrie", from Old French , from "pouletier", poultry dealer, from "poulet", pullet. 
The word "pullet" itself comes from Middle English "pulet", from Old French "polet", both from Latin "pullus", a young fowl, young animal or chicken. The word "fowl" is of Germanic origin (cf. Old English "Fugol", German "Vogel", Danish "Fugl"). "Poultry" is a term used for any kind of domesticated bird, captive-raised for its utility, and traditionally the word has been used to refer to wildfowl (Galliformes) and waterfowl (Anseriformes) but not to cagebirds such as songbirds and parrots. "Poultry" can be defined as domestic fowls, including chickens, turkeys, geese and ducks, raised for the production of meat or eggs and the word is also used for the flesh of these birds used as food. The Encyclopædia Britannica lists the same bird groups but also includes guinea fowl and squabs (young pigeons). In R. D. Crawford's "Poultry breeding and genetics", squabs are omitted but Japanese quail and common pheasant are added to the list, the latter frequently being bred in captivity and released into the wild. In his 1848 classic book on poultry, "Ornamental and Domestic Poultry: Their History, and Management", Edmund Dixon included chapters on the peafowl, guinea fowl, mute swan, turkey, various types of geese, the muscovy duck, other ducks and all types of chickens including bantams. In colloquial speech, the term "fowl" is often used near-synonymously with "domesticated chicken" ("Gallus gallus"), or with "poultry" or even just "bird", and many languages do not distinguish between "poultry" and "fowl". Both words are also used for the flesh of these birds. Poultry can be distinguished from "game", defined as wild birds or mammals hunted for food or sport, a word also used to describe the flesh of these when eaten. Chickens are medium-sized, chunky birds with an upright stance and characterised by fleshy red combs and wattles on their heads. Males, known as cocks, are usually larger, more boldly coloured, and have more exaggerated plumage than females (hens). 
Chickens are gregarious, omnivorous, ground-dwelling birds that in their natural surroundings search among the leaf litter for seeds, invertebrates, and other small animals. They seldom fly except as a result of perceived danger, preferring to run into the undergrowth if approached. Today's domestic chicken ("Gallus gallus domesticus") is mainly descended from the wild red junglefowl of Asia, with some additional input from grey junglefowl. Domestication is believed to have taken place between 7,000 and 10,000 years ago, and what are thought to be fossilized chicken bones have been found in northeastern China dated to around 5,400 BC. Archaeologists believe domestication was originally for the purpose of cockfighting, the male bird being a doughty fighter. By 4,000 years ago, chickens seem to have reached the Indus Valley and 250 years later, they arrived in Egypt. They were still used for fighting and were regarded as symbols of fertility. The Romans used them in divination, and the Egyptians made a breakthrough when they learned the difficult technique of artificial incubation. Since then, the keeping of chickens has spread around the world for the production of food with the domestic fowl being a valuable source of both eggs and meat. Since their domestication, a large number of breeds of chickens have been established, but with the exception of the white Leghorn, most commercial birds are of hybrid origin. In about 1800, chickens began to be kept on a larger scale, and modern high-output poultry farms were present in the United Kingdom from around 1920 and became established in the United States soon after the Second World War. By the mid-20th century, the poultry meat-producing industry was of greater importance than the egg-laying industry. 
Poultry breeding has produced breeds and strains to fulfil different needs: light-framed, egg-laying birds that can produce 300 eggs a year; fast-growing, fleshy birds destined for consumption at a young age; and utility birds which produce both an acceptable number of eggs and a well-fleshed carcase. Male birds are unwanted in the egg-laying industry and can often be identified as soon as they hatch for subsequent culling. In meat breeds, these birds are sometimes castrated (often chemically) to prevent aggression. The resulting bird, called a capon, also has more tender and flavorful meat. A bantam is a small variety of domestic chicken, either a miniature version of a member of a standard breed, or a "true bantam" with no larger counterpart. The name derives from the town of Bantam in Java, where European sailors bought the local small chickens for their shipboard supplies. Bantams may be a quarter to a third of the size of standard birds and lay similarly small eggs. They are kept by small-holders and hobbyists for egg production, use as broody hens, ornamental purposes, and showing. Cockfighting is said to be the world's oldest spectator sport and may have originated in Persia 6,000 years ago. Two mature males (cocks or roosters) are set to fight each other, and will do so with great vigour until one is critically injured or killed. Breeds such as the Aseel were developed in the Indian subcontinent for their aggressive behaviour. The sport formed part of the culture of the ancient Indians, Chinese, Greeks, and Romans, and large sums were won or lost depending on the outcome of an encounter. Cockfighting has been banned in many countries during the last century on the grounds of cruelty to animals. Ducks are medium-sized aquatic birds with broad bills, eyes on the side of the head, fairly long necks, short legs set far back on the body, and webbed feet. 
Males, known as drakes, are often larger than females (known as hens) and are differently coloured in some breeds. Domestic ducks are omnivores, eating a variety of animal and plant materials such as aquatic insects, molluscs, worms, small amphibians, waterweeds, and grasses. They feed in shallow water by dabbling, with their heads underwater and their tails upended. Most domestic ducks are too heavy to fly, and they are social birds, preferring to live and move around together in groups. They keep their plumage waterproof by preening, a process that spreads the secretions of the preen gland over their feathers. Clay models of ducks found in China dating back to 4000 BC may indicate the domestication of ducks took place there during the Yangshao culture. Even if this is not the case, domestication of the duck took place in the Far East at least 1500 years earlier than in the West. Lucius Columella, writing in the first century AD, advised those who sought to rear ducks to collect wildfowl eggs and put them under a broody hen, because when raised in this way, the ducks "lay aside their wild nature and without hesitation breed when shut up in the bird pen". Despite this, ducks did not appear in agricultural texts in Western Europe until about 810 AD, when they began to be mentioned alongside geese, chickens, and peafowl as being used for rental payments made by tenants to landowners. It is widely agreed that the mallard ("Anas platyrhynchos") is the ancestor of all breeds of domestic duck (with the exception of the Muscovy duck ("Cairina moschata"), which is not closely related to other ducks). Ducks are farmed mainly for their meat, eggs, and down. As is the case with chickens, various breeds have been developed, selected for egg-laying ability, fast growth, and a well-covered carcase. The most common commercial breed in the United Kingdom and the United States is the Pekin duck, which can lay 200 eggs a year and can reach a weight of in 44 days. 
In the Western world, ducks are not as popular as chickens, because the latter produce larger quantities of white, lean meat and are easier to keep intensively, making the price of chicken meat lower than that of duck meat. While popular in "haute cuisine", duck appears less frequently in the mass-market food industry. However, things are different in the East. Ducks are more popular there than chickens and are mostly still herded in the traditional way and selected for their ability to find sufficient food in harvested rice fields and other wet environments. The greylag goose ("Anser anser") was domesticated by the Egyptians at least 3000 years ago, and a different wild species, the swan goose ("Anser cygnoides"), domesticated in Siberia about a thousand years later, is known as a Chinese goose. The two hybridise with each other and the large knob at the base of the beak, a noticeable feature of the Chinese goose, is present to a varying extent in these hybrids. The hybrids are fertile and have resulted in several of the modern breeds. Despite their early domestication, geese have never gained the commercial importance of chickens and ducks. Domestic geese are much larger than their wild counterparts and tend to have thick necks, an upright posture, and large bodies with broad rear ends. The greylag-derived birds are large and fleshy and used for meat, while the Chinese geese have smaller frames and are mainly used for egg production. The fine down of both is valued for use in pillows and padded garments. They forage on grass and weeds, supplementing this with small invertebrates, and one of the attractions of rearing geese is their ability to grow and thrive on a grass-based system. They are very gregarious and have good memories and can be allowed to roam widely in the knowledge that they will return home by dusk. The Chinese goose is more aggressive and noisy than other geese and can be used as a guard animal to warn of intruders. 
The flesh of meat geese is dark-coloured and high in protein, but they deposit fat subcutaneously, although this fat contains mostly monounsaturated fatty acids. The birds are killed either at around 10 weeks or at about 24 weeks of age. Between these ages, problems with dressing the carcase occur because of the presence of developing pin feathers. In some countries, geese and ducks are force-fed to produce livers with an exceptionally high fat content for the production of "foie gras". Over 75% of world production of this product occurs in France, with lesser industries in Hungary and Bulgaria and a growing production in China. "Foie gras" is considered a luxury in many parts of the world, but the process of feeding the birds in this way is banned in many countries on animal welfare grounds. Turkeys are large birds, their nearest relatives being the pheasant and the guineafowl. Males are larger than females and have spreading, fan-shaped tails and a distinctive fleshy protuberance, called a snood, that hangs from the top of the beak and is used in courtship display. Wild turkeys can fly, but seldom do so, preferring to run with a long, straddling gait. They roost in trees and forage on the ground, feeding on seeds, nuts, berries, grass, foliage, invertebrates, lizards, and small snakes. The modern domesticated turkey is descended from one of six subspecies of wild turkey ("Meleagris gallopavo") found in the present Mexican states of Jalisco, Guerrero and Veracruz. Pre-Aztec tribes in south-central Mexico first domesticated the bird around 800 BC, and Pueblo Indians inhabiting the Colorado Plateau in the United States did likewise around 200 BC. They used the feathers for robes, blankets, and ceremonial purposes. More than 1,000 years later, they became an important food source. The first Europeans to encounter the bird misidentified it as a guineafowl, a bird known as a "turkey fowl" at that time because it had been introduced into Europe via Turkey. 
Commercial turkeys are usually reared indoors under controlled conditions. These are often large buildings, purpose-built to provide ventilation and low light intensities (this reduces the birds' activity and thereby increases the rate of weight gain). The lights may be kept on 24 hours a day, or a range of step-wise light regimens may be used, to encourage the birds to feed often and therefore grow rapidly. Females achieve slaughter weight at about 15 weeks of age and males at about 19. Mature commercial birds may be twice as heavy as their wild counterparts. Many different breeds have been developed, but the majority of commercial birds are white, as this improves the appearance of the dressed carcass, the pin feathers being less visible. Turkeys were at one time mainly consumed on special occasions such as Christmas (10 million birds in the United Kingdom) or Thanksgiving (60 million birds in the United States). However, they are increasingly becoming part of the everyday diet in many parts of the world. Guinea fowl originated in southern Africa, and the species most often kept as poultry is the helmeted guineafowl ("Numida meleagris"). It is a medium-sized grey or speckled bird with a small, naked head bearing colorful wattles and a knob on top, and was domesticated by the time of the ancient Greeks and Romans. Guinea fowl are hardy, sociable birds that subsist mainly on insects, but also consume grasses and seeds. They will keep a vegetable garden clear of pests and will eat the ticks that carry Lyme disease. They happily roost in trees and give a loud vocal warning of the approach of predators. Their flesh and eggs can be eaten in the same way as chickens, young birds being ready for the table at the age of about four months. A squab is the name given to the young of domestic pigeons that are destined for the table. Like other domesticated pigeons, birds used for this purpose are descended from the rock pigeon ("Columba livia"). 
Special utility breeds with desirable characteristics are used. Two eggs are laid and incubated for about 17 days. When they hatch, the squabs are fed by both parents on "pigeon's milk", a thick secretion high in protein produced by the crop. Squabs grow rapidly, but are slow to fledge and are ready to leave the nest at 26 to 30 days. By this time, the adult pigeons will have laid and be incubating another pair of eggs, and a prolific pair should produce two squabs every four weeks during a breeding season lasting several months. Worldwide, more chickens are kept than any other type of poultry, with over 50 billion birds being raised each year as a source of meat and eggs. Traditionally, such birds would have been kept extensively in small flocks, foraging during the day and housed at night. This is still the case in developing countries, where the women often make important contributions to family livelihoods through keeping poultry. However, rising world populations and urbanization have led to the bulk of production being in larger, more intensive specialist units. These are often situated close to where the feed is grown or near to where the meat is needed, and result in cheap, safe food being made available for urban communities. Profitability of production depends very much on the price of feed, which has been rising. High feed costs could limit further development of poultry production. In free-range husbandry, the birds can roam freely outdoors for at least part of the day. Often, this is in large enclosures, but the birds have access to natural conditions and can exhibit their normal behaviours. A more intensive system is yarding, in which the birds have access to a fenced yard and poultry house at a higher stocking rate. Poultry can also be kept in a barn system, with no access to the open air, but with the ability to move around freely inside the building. 
The most intensive system for egg-laying chickens is battery cages, often set in multiple tiers. In these, several birds share a small cage which restricts their ability to move around and behave in a normal manner. The eggs are laid on the floor of the cage and roll into troughs outside for ease of collection. Battery cages for hens have been illegal in the EU since January 1, 2012. Chickens raised intensively for their meat are known as "broilers". Breeds have been developed that can grow to an acceptable carcass size in six weeks or less. Broilers grow so fast that their legs cannot always support their weight and their hearts and respiratory systems may not be able to supply enough oxygen to their developing muscles. Mortality rates, at 1%, are much higher than for less-intensively reared laying birds, which take 18 weeks to reach similar weights. Processing the birds is done automatically with conveyor-belt efficiency. They are hung by their feet, stunned, killed, bled, scalded, plucked, have their heads and feet removed, eviscerated, washed, chilled, drained, weighed, and packed, all within the course of a little over two hours. Both intensive and free-range farming have animal welfare concerns. In intensive systems, cannibalism, feather pecking and vent pecking can be common, with some farmers using beak trimming as a preventative measure. Diseases can also be common and spread rapidly through the flock. In extensive systems, the birds are exposed to adverse weather conditions and are vulnerable to predators and disease-carrying wild birds. Barn systems have been found to have the worst bird welfare. In Southeast Asia, a lack of disease control in free-range farming has been associated with outbreaks of avian influenza. In many countries, national and regional poultry shows are held where enthusiasts exhibit their birds, which are judged on certain phenotypical breed traits as specified by their respective breed standards. 
The idea of poultry exhibition may have originated after cockfighting was made illegal, as a way of maintaining a competitive element in poultry husbandry. Breed standards were drawn up for egg-laying, meat-type, and purely ornamental birds, aiming for uniformity. Sometimes, poultry shows are part of general livestock shows, and sometimes they are separate events such as the annual "National Championship Show" in the United Kingdom organised by the Poultry Club of Great Britain. Poultry is the second most widely eaten type of meat in the world, accounting for about 30% of total meat production worldwide compared to pork at 38%. Sixteen billion birds are raised annually for consumption, more than half of these in industrialised, factory-like production units. Global broiler meat production rose to 84.6 million tonnes in 2013. The largest producers were the United States (20%), China (16.6%), Brazil (15.1%) and the European Union (11.3%). There are two distinct models of production; the European Union supply chain model seeks to supply products which can be traced back to the farm of origin. This model faces the increasing costs of implementing additional food safety requirements, welfare issues and environmental regulations. In contrast, the United States model turns the product into a commodity. World production of duck meat was about 4.2 million tonnes in 2011 with China producing two thirds of the total, some 1.7 billion birds. Other notable duck-producing countries in the Far East include Vietnam, Thailand, Malaysia, Myanmar, Indonesia and South Korea (12% in total). France (3.5%) is the largest producer in the West, followed by other EU nations (3%) and North America (1.7%). China was also by far the largest producer of goose and guinea fowl meat, with a 94% share of the 2.6 million tonne global market. Global egg production was expected to reach 65.5 million tonnes in 2013, surpassing all previous years. 
Between 2000 and 2010, egg production was growing globally at around 2% per year, but since then growth has slowed down to nearer 1%. Poultry is available fresh or frozen, as whole birds or as joints (cuts), bone-in or deboned, seasoned in various ways, raw or ready cooked. The meatiest parts of a bird are the flight muscles on its chest, called "breast" meat, and the walking muscles on the legs, called the "thigh" and "drumstick". The wings are also eaten (Buffalo wings are a popular example in the United States) and may be split into three segments, the meatier "drumette", the "wingette" (also called the "flat"), and the wing tip (also called the "flapper"). In Japan, the wing is frequently separated, and these parts are referred to as 手羽元 ("teba-moto" "wing base") and 手羽先 ("teba-saki" "wing tip"). Dark meat, which avian myologists refer to as "red muscle", is used for sustained activity—chiefly walking, in the case of a chicken. The dark colour comes from the protein myoglobin, which plays a key role in oxygen uptake and storage within cells. White muscle, in contrast, is suitable only for short bursts of activity such as, for chickens, flying. Thus, the chicken's leg and thigh meat are dark, while its breast meat (which makes up the primary flight muscles) is white. Other birds with breast muscle more suitable for sustained flight, such as ducks and geese, have red muscle (and therefore dark meat) throughout. Some cuts of meat including poultry expose the microscopic regular structure of intracellular muscle fibrils which can diffract light and produce iridescent colours, an optical phenomenon sometimes called structural colouration. Poultry meat and eggs provide nutritionally beneficial food containing protein of high quality. This is accompanied by low levels of fat which have a favourable mix of fatty acids. Chicken meat contains about two to three times as much polyunsaturated fat as most types of red meat when measured by weight. 
However, for boneless, skinless chicken breast, the amount is much lower. A 100-g serving of baked chicken breast contains 4 g of fat and 31 g of protein, compared to 10 g of fat and 27 g of protein for the same portion of broiled, lean skirt steak. A 2011 study by the Translational Genomics Research Institute showed that 47% of the meat and poultry sold in United States grocery stores was contaminated with "Staphylococcus aureus", and 52% of the bacteria concerned showed resistance to at least three groups of antibiotics. Thorough cooking of the product would kill these bacteria, but a risk of cross-contamination from improper handling of the raw product is still present. Consumers of poultry meat and eggs also face some risk of bacterial infections such as "Salmonella" and "Campylobacter". Poultry products may become contaminated by these bacteria during handling, processing, marketing, or storage, resulting in food-borne illness if the product is improperly cooked or handled. In general, avian influenza is a disease of birds caused by bird-specific influenza A viruses that are not normally transferred to people; however, people in contact with live poultry are at the greatest risk of becoming infected with the virus, and this is of particular concern in areas such as Southeast Asia, where the disease is endemic in the wild bird population and domestic poultry can become infected. The virus could possibly mutate to become highly virulent and infectious in humans and cause an influenza pandemic. Bacteria can be grown in the laboratory on nutrient culture media, but viruses need living cells in which to replicate. Many vaccines to infectious diseases can be grown in fertilised chicken eggs. Millions of eggs are used each year to generate the annual flu vaccine requirements, a complex process that takes about six months after the decision is made as to what strains of virus to include in the new vaccine. 
A problem with using eggs for this purpose is that people with egg allergies are unable to be immunised, but this disadvantage may be overcome as new techniques for cell-based rather than egg-based culture become available. Cell-based culture will also be useful in a pandemic when it may be difficult to acquire a sufficiently large quantity of suitable sterile, fertile eggs.
https://en.wikipedia.org/wiki?curid=23197
Propaganda Propaganda is communication that is used primarily to influence an audience and further an agenda; it may not be objective, and may present facts selectively to encourage a particular synthesis or perception, or use loaded language to produce an emotional rather than a rational response to the information that is presented. Propaganda is often associated with material prepared by governments, but activist groups, companies, religious organizations, the media, and individuals can also produce propaganda. In the 20th century, the term "propaganda" was often associated with a manipulative approach, but historically propaganda was a neutral descriptive term. A wide range of materials and media are used for conveying propaganda messages, which have changed as new technologies were invented, including paintings, cartoons, posters, pamphlets, films, radio shows, TV shows, and websites. More recently, the digital age has given rise to new ways of disseminating propaganda, for example, through the use of bots and algorithms to create computational propaganda and spread fake or biased news using social media. "Propaganda" is a modern Latin word, the ablative singular feminine of the gerundive form of "propagare", meaning "to spread" or "to propagate"; thus "propaganda" means "for that which is to be propagated". Originally this word derived from a new administrative body of the Catholic Church (congregation) created in 1622 as part of the Counter-Reformation, called the "Congregatio de Propaganda Fide" ("Congregation for Propagating the Faith"), or informally simply "Propaganda". Its activity was aimed at "propagating" the Catholic faith in non-Catholic countries. From the 1790s, the term began being used also to refer to "propaganda" in secular activities. The term began taking on a pejorative or negative connotation in the mid-19th century, when it was used in the political sphere. 
Harold Lasswell provided a broad definition of propaganda, describing it as "the expression of opinions or actions carried out deliberately by individuals or groups with a view to influencing the opinions or actions of other individuals or groups for predetermined ends and through psychological manipulations." Garth Jowett and Victoria O'Donnell theorize that propaganda is converted into persuasion, and that propagandists also use persuasive methods in the construction of their propagandist discourse. This theory emphasizes the overlap between propaganda and persuasion: propagandists draw on persuasive, soft-power techniques in the development and cultivation of propagandist materials. In a 1929 literary debate with Edward Bernays, Everett Dean Martin argued that "Propaganda is making puppets of us. We are moved by hidden strings which the propagandist manipulates." Bernays acknowledged in his book "Propaganda" that "The conscious and intelligent manipulation of the organized habits and opinions of the masses is an important element in democratic society. Those who manipulate this unseen mechanism of society constitute an invisible government which is the true ruling power of our country. We are governed, our minds are molded, our tastes formed, our ideas suggested, largely by men we have never heard of." Primitive forms of propaganda have been a human activity as far back as reliable recorded evidence exists. The Behistun Inscription (c. 515 BC) detailing the rise of Darius I to the Persian throne is viewed by most historians as an early example of propaganda. Another striking example of propaganda during ancient history is the last Roman civil wars (44–30 BC), during which Octavian and Mark Antony blamed each other for obscure and degrading origins, cruelty, cowardice, oratorical and literary incompetence, debaucheries, luxury, drunkenness and other slanders. 
This defamation took the form of "vituperatio" (a Roman rhetorical genre of invective), which was decisive for shaping Roman public opinion at the time. Another early practitioner of propaganda was Genghis Khan. The emperor would send some of his men ahead of his army to spread rumors to the enemy; in most cases, his army was actually smaller than those of some of his opponents. Propaganda during the Reformation, helped by the spread of the printing press throughout Europe, and in particular within Germany, caused new ideas, thoughts, and doctrine to be made available to the public in ways that had never been seen before the 16th century. During the era of the American Revolution, the American colonies had a flourishing network of newspapers and printers who specialized in the topic on behalf of the Patriots (and to a lesser extent on behalf of the Loyalists). Barbara Diggs-Brown conceives that the negative connotations of the term "propaganda" are associated with the social and political transformations that occurred between the French Revolutionary period of 1789 to 1799 and the middle of the 19th century, when the word started to be used in a nonclerical and political context. The first large-scale and organised propagation of government propaganda was occasioned by the outbreak of war in 1914. After the defeat of Germany in the First World War, military officials such as Erich Ludendorff suggested that British propaganda had been instrumental in their defeat. Adolf Hitler came to echo this view, believing that it had been a primary cause of the collapse of morale and the revolts in the German home front and Navy in 1918 (see also: Dolchstoßlegende). In "Mein Kampf" (1925) Hitler expounded his theory of propaganda, which provided a powerful base for his rise to power in 1933. 
Historian Robert Ensor explains that "Hitler...puts no limit on what can be done by propaganda; people will believe anything, provided they are told it often enough and emphatically enough, and that contradicters are either silenced or smothered in calumny." This proved true in Germany, where the regime also made it difficult for competing propaganda to flow in. Most propaganda in Nazi Germany was produced by the Ministry of Public Enlightenment and Propaganda under Joseph Goebbels, who saw propaganda as a way to reach and sway the masses, employing symbols such as justice, liberty, and devotion to one's country. World War II saw continued use of propaganda as a weapon of war, building on the experience of WWI, by Goebbels and the British Political Warfare Executive, as well as the United States Office of War Information. In the early 20th century, the invention of motion pictures gave propaganda-creators a powerful tool for advancing political and military interests when it came to reaching a broad segment of the population and creating consent or encouraging rejection of the real or imagined enemy. In the years following the October Revolution of 1917, the Soviet government sponsored the Russian film industry with the purpose of making propaganda films (e.g. the 1925 film "The Battleship Potemkin" glorifies Communist ideals). In WWII, Nazi filmmakers produced highly emotional films to create popular support for occupying the Sudetenland and attacking Poland. The 1930s and 1940s, which saw the rise of totalitarian states and the Second World War, are arguably the "Golden Age of Propaganda". Leni Riefenstahl, a filmmaker working in Nazi Germany, created one of the best-known propaganda movies, "Triumph of the Will". In the US, animation became popular, especially for winning over youthful audiences and aiding the U.S. war effort, e.g., "Der Fuehrer's Face" (1942), which ridicules Hitler and advocates the value of freedom. 
Some American war films in the early 1940s were designed to create a patriotic mindset and convince viewers that sacrifices needed to be made to defeat the Axis Powers. Others were intended to help Americans understand their Allies in general, as in films like "Know Your Ally: Britain" and "Our Greek Allies". Apart from its war films, Hollywood did its part to boost American morale in a film intended to show how stars of stage and screen who remained on the home front were doing their part not just in their labors, but also in their understanding that a variety of peoples worked together against the Axis menace: "Stage Door Canteen" (1943) features one segment meant to dispel Americans' mistrust of the Soviets, and another to dispel their bigotry against the Chinese. Polish filmmakers in Great Britain created the anti-Nazi color film "Calling Mr. Smith" (1943), about contemporary Nazi crimes in occupied Europe and the lies of Nazi propaganda. The West and the Soviet Union both used propaganda extensively during the Cold War. Both sides used film, television, and radio programming to influence their own citizens, each other, and Third World nations. George Orwell's contemporaneous novels "Animal Farm" and "Nineteen Eighty-Four" portray the use of propaganda in fictional dystopian societies. During the Cuban Revolution, Fidel Castro stressed the importance of propaganda. Propaganda was used extensively by Communist forces in the Vietnam War as a means of controlling people's opinions. During the Yugoslav wars, propaganda was used as a military strategy by the governments of the Federal Republic of Yugoslavia and Croatia. Propaganda was used to create fear and hatred, and particularly to incite the Serb population against the other ethnicities (Bosniaks, Croats, Albanians and other non-Serbs). Serb media made a great effort in justifying, revising or denying mass war crimes committed by Serb forces during these wars. 
In the early 20th century the term propaganda was used by the founders of the nascent public relations industry to refer to their people. Literally translated from the Latin gerundive as "things that must be disseminated", in some cultures the term is neutral or even positive, while in others the term has acquired a strong negative connotation. The connotations of the term "propaganda" can also vary over time. For example, in Portuguese and some Spanish-speaking countries, particularly in the Southern Cone, the word "propaganda" usually refers to the most common manipulative media – "advertising". In English, "propaganda" was originally a neutral term for the dissemination of information in favor of any given cause. During the 20th century, however, the term acquired a thoroughly negative meaning in western countries, representing the intentional dissemination of often false, but certainly "compelling" claims to support or justify political actions or ideologies. According to Harold Lasswell, the term began to fall out of favor due to growing public suspicion of propaganda in the wake of its use during World War I by the Creel Committee in the United States and the Ministry of Information in Britain. Writing in 1928, Lasswell observed, "In democratic countries the official propaganda bureau was looked upon with genuine alarm, for fear that it might be suborned to party and personal ends. The outcry in the United States against Mr. Creel's famous Bureau of Public Information (or 'Inflammation') helped to din into the public mind the fact that propaganda existed. ... The public's discovery of propaganda has led to a great deal of lamentation over it. Propaganda has become an epithet of contempt and hate, and the propagandists have sought protective coloration in such names as 'public relations council,' 'specialist in public education,' 'public relations adviser.' 
" In 1949, political science professor Dayton David McKean wrote, "After World War I the word came to be applied to 'what you don’t like of the other fellow’s publicity,' as Edward L. Bernays said..." The term is essentially contested and some have argued for a neutral definition, arguing that ethics depend on intent and context, while others define it as necessarily unethical and negative. Dr. Emma Briant defines it as "the deliberate manipulation of representations (including text, pictures, video, speech etc.) with the intention of producing any effect in the audience (e.g. action or inaction; reinforcement or transformation of feelings, ideas, attitudes or behaviours) that is desired by the propagandist." The same author explains the importance of consistent terminology across history, particularly as contemporary euphemistic synonyms are used in governments' continual efforts to rebrand their operations such as 'information support' and 'strategic communication'. Identifying propaganda has always been a problem. The main difficulties have involved differentiating propaganda from other types of persuasion, and avoiding a biased approach. Richard Alan Nelson provides a definition of the term: "Propaganda is neutrally defined as a systematic form of purposeful persuasion that attempts to influence the emotions, attitudes, opinions, and actions of specified target audiences for ideological, political or commercial purposes through the controlled transmission of one-sided messages (which may or may not be factual) via mass and direct media channels." The definition focuses on the communicative process involved – or more precisely, on the purpose of the process, and allow "propaganda" to be considered objectively and then interpreted as positive or negative behavior depending on the perspective of the viewer or listener. According to historian Zbyněk Zeman, propaganda is defined as either white, grey or black. White propaganda openly discloses its source and intent. 
Grey propaganda has an ambiguous or non-disclosed source or intent. Black propaganda purports to be published by the enemy or some organization besides its actual origins (compare with black operation, a type of clandestine operation in which the identity of the sponsoring government is hidden). In scale, these different types of propaganda can also be defined by the potential of true and correct information to compete with the propaganda. For example, opposition to white propaganda is often readily found and may slightly discredit the propaganda source. Opposition to grey propaganda, when revealed (often by an inside source), may create some level of public outcry. Opposition to black propaganda is often unavailable and may be dangerous to reveal, because public cognizance of black propaganda tactics and sources would undermine the very campaign the black propagandist supported, or cause it to backfire. The propagandist seeks to change the way people understand an issue or situation for the purpose of changing their actions and expectations in ways that are desirable to the interest group. Propaganda, in this sense, serves as a corollary to censorship, in which the same purpose is achieved, not by filling people's minds with approved information, but by preventing people from being confronted with opposing points of view. What sets propaganda apart from other forms of advocacy is the willingness of the propagandist to change people's understanding through deception and confusion rather than persuasion and understanding. The leaders of an organization know the information to be one-sided or untrue, but this may not be true for the rank-and-file members who help to disseminate the propaganda. Propaganda was often used to influence opinions and beliefs on religious issues, particularly during the split between the Roman Catholic Church and the Protestant churches. 
More in line with the religious roots of the term, propaganda is also used widely in the debates about new religious movements (NRMs), both by people who defend them and by people who oppose them. The latter pejoratively call these NRMs cults. Anti-cult activists and Christian countercult activists accuse the leaders of what they consider cults of using propaganda extensively to recruit followers and keep them. Some social scientists, such as the late Jeffrey Hadden, and CESNUR-affiliated scholars accuse ex-members of "cults" and the anti-cult movement of making these unusual religious movements look bad without sufficient reasons. Post–World War II usage of the word "propaganda" more typically refers to political or nationalist uses of these techniques or to the promotion of a set of ideas. Propaganda is a powerful weapon in war; it is used to dehumanize and create hatred toward a supposed enemy, either internal or external, by creating a false image in the mind of soldiers and citizens. This can be done by using derogatory or racist terms (e.g., the racist terms "Jap" and "gook" used during World War II and the Vietnam War, respectively), avoiding some words or language, or by making allegations of enemy atrocities. The goal is to demoralize the opponent into thinking that what is being projected is actually true. Most propaganda efforts in wartime require the home population to feel the enemy has inflicted an injustice, which may be fictitious or may be based on facts (e.g., the sinking of the passenger ship RMS Lusitania by the German Navy in World War I). The home population must also believe that the cause of their nation in the war is just. In such efforts, it is difficult to determine accurately how much propaganda truly impacted the war. 
In NATO doctrine, propaganda is defined as "Any information, ideas, doctrines, or special appeals disseminated to influence the opinion, emotions, attitudes, or behaviour of any specified group in order to benefit the sponsor either directly or indirectly." Within this perspective, information provided does not need to be false, but must instead be relevant to specific goals of the "actor" or "system" that performs it. Propaganda is also one of the methods used in psychological warfare, which may also involve false flag operations in which the identity of the operatives is depicted as those of an enemy nation (e.g., The Bay of Pigs invasion used CIA planes painted in Cuban Air Force markings). The term propaganda may also refer to false information meant to reinforce the mindsets of people who already believe as the propagandist wishes (e.g., during the First World War, the main purpose of British propaganda was to encourage men to join the army and women to work in the country's industry; propaganda posters were used because radios and TVs were not yet common). The assumption is that, if people believe something false, they will constantly be assailed by doubts. Since these doubts are unpleasant (see cognitive dissonance), people will be eager to have them extinguished, and are therefore receptive to the reassurances of those in power. For this reason propaganda is often addressed to people who are already sympathetic to the agenda or views being presented. This process of reinforcement uses an individual's predisposition to self-select "agreeable" information sources as a mechanism for maintaining control over populations. Propaganda may be administered in insidious ways. For instance, disparaging disinformation about the history of certain groups or foreign countries may be encouraged or tolerated in the educational system. 
Since few people actually double-check what they learn at school, such disinformation will be repeated by journalists as well as parents, thus reinforcing the idea that the disinformation item is really a "well-known fact", even though no one repeating the myth is able to point to an authoritative source. The disinformation is then recycled in the media and in the educational system, without the need for direct governmental intervention in the media. Such permeating propaganda may be used for political goals: by giving citizens a false impression of the quality or policies of their country, they may be incited to reject certain proposals or certain remarks or ignore the experience of others. In the Soviet Union during the Second World War, the propaganda designed to encourage civilians was controlled by Stalin, who insisted on a heavy-handed style that educated audiences easily saw was inauthentic. On the other hand, the unofficial rumours about German atrocities were well founded and convincing. Stalin was a Georgian who spoke Russian with a heavy accent. That would not do for a national hero so starting in the 1930s all new visual portraits of Stalin were retouched to erase his Georgian facial characteristics and make him a more generalized Soviet hero. Only his eyes and famous mustache remained unaltered. Zhores Medvedev and Roy Medvedev say his "majestic new image was devised appropriately to depict the leader of all times and of all peoples." Article 20 of the International Covenant on Civil and Political Rights prohibits any propaganda for war as well as any advocacy of national or religious hatred that constitutes incitement to discrimination, hostility or violence by law. The covenant does not, however, define the content of propaganda; in the simplest terms, an act of propaganda used in reply to a wartime act is not prohibited. 
Propaganda shares techniques with advertising and public relations, each of which can be thought of as propaganda that promotes a commercial product or shapes the perception of an organization, person, or brand. Journalistic theory generally holds that news items should be objective, giving the reader an accurate background and analysis of the subject at hand. On the other hand, advertisements have evolved from traditional commercial advertisements to include a new type in the form of paid articles or broadcasts disguised as news. These generally present an issue in a very subjective and often misleading light, primarily meant to persuade rather than inform. Normally they use only subtle propaganda techniques and not the more obvious ones used in traditional commercial advertisements. If the reader believes that a paid advertisement is in fact a news item, the message the advertiser is trying to communicate will be more easily "believed" or "internalized". Such advertisements are considered obvious examples of "covert" propaganda because they take on the appearance of objective information rather than the appearance of propaganda, which is misleading. In the United States, federal law specifically mandates that any advertisement appearing in the format of a news item must state that the item is in fact a paid advertisement. Edmund McGarry argues that advertising is not merely selling to an audience but a type of propaganda that tries to persuade the public rather than present a balanced judgement. Propaganda has become more common in political contexts, in particular to refer to certain efforts sponsored by governments, political groups, and often covert interests. In the early 20th century, propaganda was exemplified in the form of party slogans. Propaganda also has much in common with public information campaigns by governments, which are intended to encourage or discourage certain forms of behavior (such as wearing seat belts, not smoking, not littering and so forth). 
Again, the emphasis is more political in propaganda. Propaganda can take the form of leaflets, posters, TV and radio broadcasts and can also extend to any other medium. In the case of the United States, there is also an important legal (imposed by law) distinction between advertising (a type of overt propaganda) and what the Government Accountability Office (GAO), an arm of the United States Congress, refers to as "covert propaganda". Roderick Hindery argues that propaganda exists on the political left, and right, and in mainstream centrist parties. Hindery further argues that debates about most social issues can be productively revisited in the context of asking "what is or is not propaganda?" Not to be overlooked is the link between propaganda, indoctrination, and terrorism/counterterrorism. He argues that threats to destroy are often as socially disruptive as physical devastation itself. Since 9/11 and the appearance of greater media fluidity, propaganda institutions, practices and legal frameworks have been evolving in the US and Britain. Briant shows how this included expansion and integration of the apparatus cross-government and details attempts to coordinate the forms of propaganda for foreign and domestic audiences, with new efforts in strategic communication. These were subject to contestation within the US Government, resisted by Pentagon Public Affairs and critiqued by some scholars. The National Defense Authorization Act for Fiscal Year 2013 (section 1078 (a)) amended the US Information and Educational Exchange Act of 1948 (popularly referred to as the Smith-Mundt Act) and the Foreign Relations Authorization Act of 1987, allowing for materials produced by the State Department and the Broadcasting Board of Governors (BBG) to be released within U.S. borders for the Archivist of the United States. 
The Smith-Mundt Act, as amended, provided that "the Secretary and the Broadcasting Board of Governors shall make available to the Archivist of the United States, for domestic distribution, motion pictures, films, videotapes, and other material 12 years after the initial dissemination of the material abroad (...) Nothing in this section shall be construed to prohibit the Department of State or the Broadcasting Board of Governors from engaging in any medium or form of communication, either directly or indirectly, because a United States domestic audience is or may be thereby exposed to program material, or based on a presumption of such exposure." Public concerns were raised upon passage due to the relaxation of prohibitions of domestic propaganda in the United States. In the wake of this, the internet has become a prolific method of distributing political propaganda, benefiting from developments in software agents known as bots. Software agents or bots can be used for many things, including populating social media with automated messages and posts with a range of sophistication. During the 2016 U.S. election a cyber-strategy was implemented using bots to direct US voters to Russian political news and information sources, and to spread politically motivated rumors and false news stories. The use of bots to achieve political goals is now considered commonplace political strategy around the world. Common media for transmitting propaganda messages include news reports, government reports, historical revision, junk science, books, leaflets, movies, radio, television, and posters. Some propaganda campaigns follow a strategic transmission pattern to indoctrinate the target group. This may begin with a simple transmission, such as a leaflet dropped from a plane or an advertisement. Generally these messages will contain directions on how to obtain more information, via a web site, hot line, radio program, etc. 
(as it is seen also for selling purposes among other goals). The strategy intends to initiate the individual from information recipient to information seeker through reinforcement, and then from information seeker to opinion leader through indoctrination. A number of techniques based in social psychological research are used to generate propaganda. Many of these same techniques can be found under logical fallacies, since propagandists use arguments that, while sometimes convincing, are not necessarily valid. Some time has been spent analyzing the means by which the propaganda messages are transmitted. That work is important but it is clear that information dissemination strategies become propaganda strategies only when coupled with "propagandistic messages". Identifying these messages is a necessary prerequisite to study the methods by which those messages are spread. Propaganda can also be turned on its makers. For example, postage stamps have frequently been tools for government advertising, such as North Korea's extensive issues. The presence of Stalin on numerous Soviet stamps is another example. During the Third Reich, Hitler frequently appeared on postage stamps in Germany and some of the occupied nations. A British program to parody these, and other Nazi-inspired stamps, involved air dropping them into Germany on letters containing anti-Nazi literature. In 2018 a scandal broke in which journalist Carole Cadwalladr, several whistleblowers and the academic Dr Emma Briant revealed advances in digital propaganda techniques, showing that online HUMINT techniques used in psychological warfare had been coupled with psychological profiling using illegally obtained social media data for political campaigns in the United States in 2016 to aid Donald Trump by the firm Cambridge Analytica. The company initially denied breaking any laws but later admitted breaking UK law; the scandal provoked a worldwide debate on the acceptable use of data for propaganda and influence. 
The field of social psychology includes the study of persuasion. Social psychologists can be sociologists or psychologists. The field includes many theories and approaches to understanding persuasion. For example, communication theory points out that people can be persuaded by the communicator's credibility, expertise, trustworthiness, and attractiveness. The elaboration likelihood model as well as heuristic models of persuasion suggest that a number of factors (e.g., the degree of interest of the recipient of the communication) influence the degree to which people allow superficial factors to persuade them. Psychologist Herbert A. Simon won the Nobel Prize for his theory that people are cognitive misers. That is, in a society of mass information, people are forced to make decisions quickly and often superficially, as opposed to logically. According to William W. Biddle's 1931 article "A psychological definition of propaganda", "[t]he four principles followed in propaganda are: (1) rely on emotions, never argue; (2) cast propaganda into the pattern of "we" versus an "enemy"; (3) reach groups as well as individuals; (4) hide the propagandist as much as possible." More recently, studies from behavioral science have become significant in understanding and planning propaganda campaigns; these include, for example, nudge theory, which was used by the Obama campaign in 2008 and then adopted by the UK Government's Behavioural Insights Team. Behavioural methodologies then became subject to great controversy in 2016 after the company Cambridge Analytica was revealed to have applied them with millions of people's breached Facebook data to elect Donald Trump.
https://en.wikipedia.org/wiki?curid=23203
Physical quantity A physical quantity is a property of a material or system that can be quantified by measurement. A physical quantity can be expressed as the combination of a numerical value and a unit. For example, the physical quantity mass can be quantified as "n" kg, where "n" is the numerical value and kg is the unit. International recommendations for the use of symbols for quantities are set out in ISO/IEC 80000, the IUPAP red book and the IUPAC green book. For example, the recommended symbol for the physical quantity "mass" is "m", and the recommended symbol for the quantity "electric charge" is "Q". Subscripts are used for two reasons: to attach a name to the quantity or associate it with another quantity, or to represent a specific vector, matrix, or tensor component. The type of subscript is expressed by its typeface: 'k' and 'p' are abbreviations of the words "kinetic" and "potential", whereas "p" (italic) is the symbol for the physical quantity "pressure" rather than an abbreviation of the word. A scalar is a physical quantity that has magnitude but no direction. Symbols for physical quantities are usually chosen to be a single letter of the Latin or Greek alphabet, and are printed in italic type. Vectors are physical quantities that possess both magnitude and direction. Symbols for physical quantities that are vectors are in bold type, underlined or with an arrow above. For example, if "u" is the speed of a particle, then the straightforward notations for its velocity are u, u, or formula_1. Numerical quantities, even those denoted by letters, are usually printed in roman (upright) type, though sometimes in italic. Symbols for elementary functions (circular trigonometric, hyperbolic, logarithmic etc.), changes in a quantity like Δ in Δ"y" or operators like d in d"x", are also recommended to be printed in roman type. 
Examples: There is often a choice of unit, though SI units (including submultiples and multiples of the basic unit) are usually used in scientific contexts due to their ease of use, international familiarity and prescription. For example, a quantity of mass might be represented by the symbol "m", and could be expressed in the units kilograms (kg), pounds (lb), or daltons (Da). The notion of "dimension" of a physical quantity was introduced by Joseph Fourier in 1822. By convention, physical quantities are organized in a dimensional system built upon base quantities, each of which is regarded as having its own dimension. Base quantities are those quantities which are distinct in nature and in some cases have historically not been defined in terms of other quantities. Base quantities are those quantities on the basis of which other quantities can be expressed. The seven base quantities of the International System of Quantities (ISQ) and their corresponding SI units and dimensions are listed in the following table. Other conventions may have a different number of base units (e.g. the CGS and MKS systems of units). The last two angular units, plane angle and solid angle, are subsidiary units used in the SI, but are treated as dimensionless. The subsidiary units are used for convenience to differentiate between a "truly dimensionless" quantity (pure number) and an "angle", which are different measurements. Derived quantities are those whose definitions are based on other physical quantities (base quantities). Important applied base units for space and time are below. Area and volume are thus of course derived from length, but included for completeness as they occur frequently in many derived quantities, in particular densities. Important and convenient derived quantities such as densities, fluxes, flows, currents are associated with many quantities. 
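The bookkeeping by which derived quantities inherit their dimensions from base quantities can be sketched in a few lines of code. Everything below (the class, the exponent ordering over the seven ISQ base quantities) is an illustrative construction, not a standard library:

```python
# A dimension as a tuple of integer exponents over the seven ISQ base
# quantities: length, mass, time, current, temperature, amount of
# substance, luminous intensity. Illustrative sketch only.
from dataclasses import dataclass

BASES = ("L", "M", "T", "I", "Theta", "N", "J")

@dataclass(frozen=True)
class Dimension:
    exps: tuple

    def __mul__(self, other):
        # Multiplying quantities adds their dimensional exponents.
        return Dimension(tuple(a + b for a, b in zip(self.exps, other.exps)))

    def __truediv__(self, other):
        # Dividing quantities subtracts dimensional exponents.
        return Dimension(tuple(a - b for a, b in zip(self.exps, other.exps)))

    def __str__(self):
        return " ".join(f"{b}^{e}" for b, e in zip(BASES, self.exps) if e)

L = Dimension((1, 0, 0, 0, 0, 0, 0))  # length
M = Dimension((0, 1, 0, 0, 0, 0, 0))  # mass
T = Dimension((0, 0, 1, 0, 0, 0, 0))  # time

velocity = L / T          # dimension L T^-1
force = M * velocity / T  # dimension M L T^-2, as in Newton's second law
print(velocity)  # -> L^1 T^-1
print(force)     # -> L^1 M^1 T^-2
```

The same exponent arithmetic is what unit-checking libraries perform internally when they verify that both sides of an equation have matching dimensions.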
Sometimes different terms such as "current density" and "flux density", "rate", "frequency" and "current", are used interchangeably in the same context; sometimes they are used uniquely. To clarify these effective template derived quantities, we let "q" be "any" quantity within some scope of context (not necessarily base quantities) and present in the table below some of the most commonly used symbols where applicable, their definitions, usage, SI units and SI dimensions – where ["q"] denotes the dimension of "q". For time derivatives, specific, molar, and flux densities of quantities there is no one symbol, nomenclature depends on subject, though time derivatives can be generally written using overdot notation. For generality we use "qm", "qn", and F respectively. No symbol is necessarily required for the gradient of a scalar field, since only the nabla/del operator ∇ or grad needs to be written. For spatial density, current, current density and flux, the notations are common from one context to another, differing only by a change in subscripts. For current density, formula_2 is a unit vector in the direction of flow, i.e. tangent to a flowline. Notice the dot product with the unit normal for a surface, since the amount of current passing through the surface is reduced when the current is not normal to the area. Only the current passing perpendicular to the surface contributes to the current passing "through" the surface, no current passes "in" the (tangential) plane of the surface. The calculus notations below can be used synonymously. If "X" is an "n"-variable function formula_3, then: The meaning of the term physical "quantity" is generally well understood (everyone understands what is meant by "the frequency of a periodic phenomenon", or "the resistance of an electric wire"). The term "physical quantity" does not imply a physically "invariant quantity". 
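The statement that only the component of the current density normal to a surface contributes to the current through it can be written compactly (standard notation, not quoted from the source):

```latex
I = \iint_S \mathbf{j} \cdot \hat{\mathbf{n}} \, \mathrm{d}A
```

When the current density \(\mathbf{j}\) lies entirely in the tangent plane of the surface, the dot product with the unit normal \(\hat{\mathbf{n}}\) vanishes and no current passes through the surface.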
"Length" for example is a "physical quantity", yet it is variant under coordinate change in special and general relativity. The notion of physical quantities is so basic and intuitive in the realm of science, that it does not need to be explicitly "spelled out" or even "mentioned". It is universally understood that scientists will (more often than not) deal with quantitative data, as opposed to qualitative data. Explicit mention and discussion of "physical quantities" is not part of any standard science program, and is more suited for a "philosophy of science" or "philosophy" program. The notion of "physical quantities" is seldom used in physics, nor is it part of the standard physics vernacular. The idea is often misleading, as its name implies "a quantity that can be physically measured", yet is often incorrectly used to mean a physical invariant. Due to the rich complexity of physics, many different fields possess different physical invariants. There is no known physical invariant sacred in all possible fields of physics. Energy, space, momentum, torque, position, and length (just to name a few) are all found to be experimentally variant in some particular scale and system. Additionally, the notion that it is possible to measure "physical quantities" comes into question, particularly in quantum field theory and normalization techniques. As infinities are produced by the theory, the actual “measurements” made are not really those of the physical universe (as we cannot measure infinities), they are those of the renormalization scheme which is expressly dependent on our measurement scheme, coordinate system and metric system.
https://en.wikipedia.org/wiki?curid=23204
Physical constant A physical constant, sometimes fundamental physical constant or universal constant, is a physical quantity that is generally believed to be both universal in nature and have constant value in time. It is contrasted with a mathematical constant, which has a fixed numerical value, but does not directly involve any physical measurement. There are many physical constants in science, some of the most widely recognized being the speed of light in vacuum "c", the gravitational constant "G", the Planck constant "h", the electric constant "ε"0, and the elementary charge "e". Physical constants can take many dimensional forms: the speed of light signifies a maximum speed for any object and its dimension is length divided by time; while the fine-structure constant "α", which characterizes the strength of the electromagnetic interaction, is dimensionless. The term "fundamental physical constant" is sometimes used to refer to universal but dimensioned physical constants such as those mentioned above. Increasingly, however, physicists reserve the use of the term "fundamental physical constant" for dimensionless physical constants, such as the fine-structure constant "α". Physical constant in the sense under discussion in this article should not be confused with other quantities called "constants" that are assumed to be constant in a given context without the implication that they are fundamental, such as the "time constant" characteristic of a given system, or material constants, such as the Madelung constant, electrical resistivity, and heat capacity. Since May 2019, all of the SI base units have been defined in terms of physical constants. As a result, the constants: the speed of light in vacuum, "c"; the Planck constant, "h"; the elementary charge, "e"; the Avogadro constant, "N"A; and the Boltzmann constant, "k"B, have known exact numerical values when expressed in SI units. 
The first three of these constants are fundamental constants, whereas "N"A and "k"B are of a technical nature only: they do not describe any property of the universe, but instead only give a proportionality factor for defining the units used with large numbers of atomic-scale entities. Whereas the physical quantity indicated by a physical constant does not depend on the unit system used to express the quantity, the numerical values of dimensional physical constants do depend on choice of unit system. The term "physical constant" refers to the physical quantity, and not to the numerical value within any given system of units. For example, the speed of light is defined as having the numerical value of 299792458 when expressed in the SI unit metres per second, and as having the numerical value of 1 when expressed in the natural units Planck length per Planck time. While its numerical value can be defined at will by the choice of units, the speed of light itself is a single physical constant. Any ratio between physical constants of the same dimensions results in a dimensionless physical constant, for example, the proton-to-electron mass ratio. Any relation between physical quantities can be expressed as a relation between dimensionless ratios via a process known as nondimensionalisation. The term "fundamental physical constant" is reserved for physical quantities which, according to the current state of knowledge, are regarded as immutable and as non-derivable from more fundamental principles. Notable examples are the speed of light "c", and the gravitational constant "G". The fine-structure constant "α" is the best known dimensionless fundamental physical constant. It is the value of the elementary charge squared expressed in Planck units. This value has become a standard example when discussing the derivability or non-derivability of physical constants. Introduced by Arnold Sommerfeld, its value as determined at the time was consistent with 1/137. 
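The claim that the speed of light has numerical value 1 in Planck units can be checked directly by deriving the Planck length and Planck time from "c", "G" and "ħ" and re-expressing "c". The snippet below is a sketch using rounded CODATA-style values, not a metrology tool:

```python
import math

c = 299_792_458.0          # speed of light, m/s (exact by definition)
G = 6.674_30e-11           # gravitational constant, m^3 kg^-1 s^-2 (rounded)
hbar = 1.054_571_817e-34   # reduced Planck constant, J s (rounded)

planck_length = math.sqrt(hbar * G / c**3)  # ~1.6e-35 m
planck_time = math.sqrt(hbar * G / c**5)    # ~5.4e-44 s

# The speed of light expressed in Planck lengths per Planck time:
c_planck = c * planck_time / planck_length
print(c_planck)  # -> 1.0, up to floating-point rounding
```

The result is 1 by construction, which is the point: the numerical value of a dimensional constant is an artefact of the unit system, while the constant itself is unchanged.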
This motivated Arthur Eddington (1929) to construct an argument why its value might be 1/137 precisely, which related to the Eddington number, his estimate of the number of protons in the Universe. By the 1940s, it became clear that the value of the fine-structure constant deviates significantly from the precise value of 1/137, refuting Eddington's argument. With the development of quantum chemistry in the 20th century, however, a vast number of previously inexplicable dimensionless physical constants "were" successfully computed from theory. In light of that, some theoretical physicists still hope for continued progress in explaining the values of other dimensionless physical constants. It is known that the Universe would be very different if these constants took values significantly different from those we observe. For example, a few percent change in the value of the fine structure constant would be enough to eliminate stars like our Sun. This has prompted attempts at anthropic explanations of the values of some of the dimensionless fundamental physical constants. It is possible to combine dimensional universal physical constants to define fixed quantities of any desired dimension, and this property has been used to construct various systems of natural units of measurement. Depending on the choice and arrangement of constants used, the resulting natural units may be convenient to an area of study. For example, Planck units, constructed from "c", "G", "ħ", and "k"B give conveniently sized measurement units for use in studies of quantum gravity, and Hartree atomic units, constructed from "ħ", "m"e, "e" and 4π"ε"0 give convenient units in atomic physics. The choice of constants used leads to widely varying quantities. The number of fundamental physical constants depends on the physical theory accepted as "fundamental". 
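The figure of roughly 1/137 can be reproduced from the defining relation α = "e"²/(4π"ε"0"ħ""c") using present-day SI values (rounded; a sketch for illustration):

```python
import math

e = 1.602_176_634e-19      # elementary charge, C (exact since 2019)
eps0 = 8.854_187_8128e-12  # electric constant, F/m (rounded CODATA value)
hbar = 1.054_571_817e-34   # reduced Planck constant, J s (rounded)
c = 299_792_458.0          # speed of light, m/s (exact)

alpha = e**2 / (4 * math.pi * eps0 * hbar * c)
print(1 / alpha)  # -> approximately 137.036, measurably not 137 exactly
```

That measurable deviation of 1/α from the integer 137 is precisely what refuted Eddington's numerological argument.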
Currently, this is the theory of general relativity for gravitation and the Standard Model for electromagnetic, weak and strong nuclear interactions and the matter fields. Between them, these theories account for a total of 19 independent fundamental constants. There is, however, no single "correct" way of enumerating them, as it is a matter of arbitrary choice which quantities are considered "fundamental" and which "derived". Uzan (2011) lists 22 "unknown constants" in the fundamental theories, which give rise to 19 "unknown dimensionless parameters", as follows: The number of 19 independent fundamental physical constants is subject to change under possible extensions of the Standard Model, notably by the introduction of neutrino mass (equivalent to seven additional constants, i.e. 3 Yukawa couplings and 4 lepton mixing parameters). The discovery of variability in any of these constants would be equivalent to the discovery of "new physics". The question as to which constants are "fundamental" is neither straightforward nor meaningless, but a question of interpretation of the physical theory regarded as fundamental; as has been pointed out, not all physical constants are of the same importance, with some having a deeper role than others. The same physical constant may move from one category to another as the understanding of its role deepens; this has notably happened to the speed of light, which was a class A constant (characteristic of light) when it was first measured, but became a class B constant (characteristic of electromagnetic phenomena) with the development of classical electromagnetism, and finally a class C constant with the discovery of special relativity. By definition, fundamental physical constants are subject to measurement, so that their being constant (independent of both the time and position of the performance of the measurement) is necessarily an experimental result and subject to verification. 
Paul Dirac in 1937 speculated that physical constants such as the gravitational constant or the fine-structure constant might be subject to change over time in proportion to the age of the universe. Experiments can in principle only put an upper bound on the relative change per year. For the fine-structure constant, this upper bound is comparatively low, at roughly 10^−17 per year (as of 2008). The gravitational constant is much more difficult to measure with precision, and conflicting measurements in the 2000s inspired a controversial 2015 paper suggesting a periodic variation of its value. However, while its value is not known to great precision, the possibility of observing type Ia supernovae which happened in the universe's remote past, paired with the assumption that the physics involved in these events is universal, allows for an upper bound of less than 10^−10 per year for the gravitational constant over the last nine billion years. Similarly, an upper bound of the change in the proton-to-electron mass ratio has been placed at 10^−7 over a period of 7 billion years (or 10^−16 per year) in a 2012 study based on the observation of methanol in a distant galaxy. It is problematic to discuss the proposed rate of change (or lack thereof) of a single "dimensional" physical constant in isolation. The reason for this is that the choice of units is arbitrary, making the question of whether a constant is undergoing change an artefact of the choice (and definition) of the units. For example, in SI units, the speed of light was given a defined value in 1983. Thus, it was meaningful to experimentally measure the speed of light in SI units prior to 1983, but it is not so now. Similarly, with effect from May 2019, the Planck constant has a defined value, such that all SI base units are now defined in terms of fundamental physical constants. 
With this change, the international prototype of the kilogram is being retired as the last physical object used in the definition of any SI unit. Tests on the immutability of physical constants look at "dimensionless" quantities, i.e. ratios between quantities of like dimensions, in order to escape this problem. Changes in physical constants are not meaningful if they result in an "observationally indistinguishable" universe. For example, a "change" in the speed of light "c" would be meaningless if accompanied by a corresponding change in the elementary charge "e" so that the ratio (the fine-structure constant) remained unchanged. Some physicists have explored the notion that if the dimensionless physical constants had sufficiently different values, our Universe would be so radically different that intelligent life would probably not have emerged, and that our Universe therefore seems to be fine-tuned for intelligent life. However, the phase space of the possible constants and their values is unknowable, so any conclusions drawn from such arguments are unsupported. The anthropic principle states a logical truism: the fact of our existence as intelligent beings who can measure physical constants requires those constants to be such that beings like us can exist. There are a variety of interpretations of the constants' values, including that of a divine creator (the apparent fine-tuning is actual and intentional), or that ours is one universe of many in a multiverse (e.g. the many-worlds interpretation of quantum mechanics), or even that, if information is an innate property of the universe and logically inseparable from consciousness, a universe without the capacity for conscious beings cannot exist. The fundamental constants and quantities of nature have been discovered to be fine-tuned to such an extraordinarily narrow range that, if they were not, the origin and evolution of conscious life in the universe would not be permitted. 
The table below lists some frequently used constants and their CODATA recommended values. For a more extended list, refer to "List of physical constants".
https://en.wikipedia.org/wiki?curid=23205
Peppermint Peppermint ("Mentha" × "piperita", also known as "Mentha balsamea" Willd.) is a hybrid mint, a cross between watermint and spearmint. Indigenous to Europe and the Middle East, the plant is now widely spread and cultivated in many regions of the world. It is occasionally found in the wild with its parent species. Although the genus "Mentha" comprises more than 25 species, the one in most common use is peppermint. While Western peppermint is derived from "Mentha piperita", Chinese peppermint, or "Bohe", is derived from the fresh leaves of "Mentha haplocalyx". "Mentha piperita" and "Mentha haplocalyx" are both recognized as plant sources of menthol and menthone and are among the oldest herbs used for both culinary and medicinal products. Peppermint was first described in 1753 by Carl Linnaeus from specimens that had been collected in England; he treated it as a species, but it is now universally agreed to be a hybrid. It is a herbaceous rhizomatous perennial plant that grows to be tall, with smooth stems, square in cross section. The rhizomes are wide-spreading, fleshy, and bear fibrous roots. The leaves can be long and broad. They are dark green with reddish veins, and they have an acute apex and coarsely toothed margins. The leaves and stems are usually slightly fuzzy. The flowers are purple, long, with a four-lobed corolla about diameter; they are produced in whorls (verticillasters) around the stem, forming thick, blunt spikes. Flowering season lasts from mid- to late summer. The chromosome number is variable, with 2n counts of 66, 72, 84, and 120 recorded. Peppermint is a fast-growing plant; once it sprouts, it spreads very quickly. Peppermint typically occurs in moist habitats, including stream sides and drainage ditches. Being a hybrid, it is usually sterile, producing no seeds and reproducing only vegetatively, spreading by its runners. If placed, it can grow almost anywhere.
Outside of its native range, areas where peppermint was formerly grown for oil often have an abundance of feral plants, and it is considered invasive in Australia, the Galápagos Islands, New Zealand, and the United States in the Great Lakes region, noted since 1843. Peppermint generally grows best in moist, shaded locations, and expands by underground rhizomes. Young shoots are taken from old stocks and dibbled into the ground about 1.5 feet apart. They grow quickly and cover the ground with runners if it is permanently moist. For the home gardener, it is often grown in containers to restrict rapid spreading. It grows best with a good supply of water, without being water-logged, and planted in areas with part-sun to shade. The leaves and flowering tops are used; they are collected as soon as the flowers begin to open and can be dried. The wild form of the plant is less suitable for this purpose, with cultivated plants having been selected for more and better oil content. They may be allowed to lie and wilt a little before distillation, or they may be taken directly to the still. A number of cultivars have been selected for garden use, and several cultivars are grown commercially. In 2014, world production of peppermint was 92,296 tonnes, led by Morocco with 92% of the world total reported by FAOSTAT of the United Nations; Argentina accounted for 8% of the world total. In the United States, Oregon and Washington produce most of the country's peppermint, the leaves of which are processed for the essential oil to produce flavorings mainly for chewing gum and toothpaste. Peppermint has a high menthol content. The oil also contains menthone and carboxyl esters, particularly menthyl acetate. Dried peppermint typically has 0.3–0.4% of volatile oil containing menthol (7–48%), menthone (20–46%), menthyl acetate (3–10%), menthofuran (1–17%) and 1,8-cineol (3–6%).
Peppermint oil also contains small amounts of many additional compounds including limonene, pulegone, caryophyllene and pinene. Peppermint contains terpenoids and flavonoids such as eriocitrin, hesperidin, and kaempferol 7-O-rutinoside. Peppermint oil has a high concentration of natural pesticides, mainly pulegone (found mainly in "Mentha arvensis" var. "piperascens" cornmint, field mint, Japanese mint, and to a lesser extent (6,530 ppm) in "Mentha" × "piperita" subsp. "notho") and menthone. It is known to repel some pest insects, including mosquitos, and has uses in organic gardening. It is also widely used to repel rodents. The chemical composition of the essential oil from peppermint ("Mentha" × "piperita" L.) was analyzed by GC/FID and GC-MS. The main constituents were menthol (40.7%) and menthone (23.4%). Further components were (±)-menthyl acetate, 1,8-cineole, limonene, beta-pinene and beta-caryophyllene. Peppermint oil is under preliminary research for its potential as a short-term treatment for irritable bowel syndrome, and has supposed uses in traditional medicine for minor ailments. Peppermint oil and leaves have a cooling effect when used topically for muscle pain, nerve pain, relief from itching, or as a fragrance. High oral doses of peppermint oil (500 mg) can cause mucosal irritation and mimic heartburn. Fresh or dried peppermint leaves are often used alone in peppermint tea or with other herbs in herbal teas (tisanes, infusions). Peppermint is used for flavouring ice cream, candy, fruit preserves, alcoholic beverages, chewing gum, toothpaste, and some shampoos, soaps and skin care products. Menthol activates cold-sensitive TRPM8 receptors in the skin and mucosal tissues, and is the primary source of the cooling sensation that follows the topical application of peppermint oil. Peppermint oil is also used in construction and plumbing to test for the tightness of pipes and disclose leaks by its odor. 
Medicinal uses of peppermint have not been approved as effective or safe by the US Food and Drug Administration. With caution that the concentration of the peppermint constituent pulegone should not exceed 1% (140 mg), peppermint preparations are considered safe by the European Medicines Agency when used in topical formulations for adult subjects. Diluted peppermint essential oil is safe for oral intake when only a few drops are used. Although peppermint is commonly available as a herbal supplement, there are no established, consistent manufacturing standards for it, and some peppermint products may be contaminated with toxic metals or other substituted compounds. Skin rashes, irritation, or an allergic reaction may result from applying peppermint oil to the skin, and its use on the face or chest of young children may cause side effects if the oil menthol is inhaled. A common side effect from oral intake of peppermint oil or capsules is heartburn. Oral use of peppermint products may have adverse effects when used with iron supplements, cyclosporine, medicines for heart conditions or high blood pressure, or medicines to decrease stomach acid.
https://en.wikipedia.org/wiki?curid=23209
Pseudorandomness A pseudorandom process produces predictable outcomes given information which is typically difficult to acquire; absent such information, pseudorandom sequences of numbers exhibit statistical randomness. In general, a random process generates unpredictable outcomes: for any single event any particular outcome cannot be predicted in advance given available information. For example, consider an unbiased coin which on any given flip is either heads or tails: on a single flip no outcome is certain. Recording 1,000 flips in a logbook provides a sequence of pseudorandom outcomes: in possession of the logbook each outcome is known for certain; however, a person without the logbook sees only a random string of heads and tails. To generate random numbers that can never be predicted by any observer requires a causally non-deterministic process where events are not fully determined by prior states (e.g., whether a photon is emitted by an atom in any given nanosecond). Due to the physical impossibility of acquiring sufficient information to predict the outcome of such an event, its outcomes are guaranteed to be random to all. Randomness is therefore a condition which holds of a sequence relative to the information available to the predictor, with pseudorandomness indicating that information sufficient to predict the next outcome may be acquired by the predictor under some circumstances. The most prominent example is the pseudorandom number generators used by digital computers in which knowing a starting "seed" number produces an entirely predictable string of numbers which are unpredictable without it. The generation of random numbers has many uses (mostly in statistics, for random sampling, and simulation). Before modern computing, researchers requiring random numbers would either generate them through various means (dice, cards, roulette wheels, etc.) or use existing random number tables. 
The first attempt to provide researchers with a ready supply of random digits was in 1927, when the Cambridge University Press published a table of 41,600 digits developed by L.H.C. Tippett. In 1947, the RAND Corporation generated numbers by the electronic simulation of a roulette wheel; the results were eventually published in 1955 as "A Million Random Digits with 100,000 Normal Deviates". A pseudorandom variable is a variable which is created by a deterministic algorithm, often a computer program or subroutine, which in most cases takes random bits as input. The pseudorandom string will typically be longer than the original random string, but less random (less entropic in the information theory sense). This can be useful for randomized algorithms. Pseudorandom number generators are widely used in such applications as computer modeling (e.g., Markov chains), statistics, experimental design, etc. Linux uses timings from various system events (such as user keystrokes, I/O, or least-significant digits of voltage measurements) to feed a pool of random numbers, which it attempts to replenish constantly and from which random numbers are issued on request. In theoretical computer science, a distribution is pseudorandom against a class of adversaries if no adversary from the class can distinguish it from the uniform distribution with significant advantage. This notion of pseudorandomness is studied in computational complexity theory and has applications to cryptography. Formally, let "S" and "T" be finite sets and let F = {"f": "S" → "T"} be a class of functions. A distribution D over "S" is ε-pseudorandom against F if for every "f" in F, the statistical distance between the distributions "f"("X"), where "X" is sampled from D, and "f"("Y"), where "Y" is sampled from the uniform distribution on "S", is at most ε.
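The definition above can be made concrete for tiny sets by computing the statistical (total variation) distance between f(X) and f(Y) exactly. The following sketch uses a toy S, a biased distribution D, and a single test function (parity), all chosen purely for illustration:

```python
from fractions import Fraction

S = range(8)                       # finite set S = {0, ..., 7}

def parity(s):                     # a test function f: S -> T = {0, 1}
    return s % 2

# Uniform distribution on S, and a visibly biased distribution D
uniform = {s: Fraction(1, 8) for s in S}
D = {s: Fraction(3, 16) if s < 4 else Fraction(1, 16) for s in S}

def pushforward(dist, f):
    """Distribution of f(X) when X is sampled from dist."""
    out = {}
    for s, p in dist.items():
        out[f(s)] = out.get(f(s), Fraction(0)) + p
    return out

def statistical_distance(p, q):
    """Total variation distance between two distributions."""
    support = set(p) | set(q)
    return sum(abs(p.get(t, Fraction(0)) - q.get(t, Fraction(0)))
               for t in support) / 2

# D is far from uniform on S itself...
print(statistical_distance(D, uniform))  # 1/4
# ...but parity cannot tell them apart: D is 0-pseudorandom against {parity}
print(statistical_distance(pushforward(D, parity),
                           pushforward(uniform, parity)))  # 0
```

The point of the example is that pseudorandomness is relative to the adversary class: D is distinguishable from uniform by an unrestricted observer, yet perfectly indistinguishable by the weak class F = {parity}.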
In typical applications, the class F describes a model of computation with bounded resources and one is interested in designing distributions D with certain properties that are pseudorandom against F. The distribution D is often specified as the output of a pseudorandom generator. Though random numbers are needed in cryptography, the use of pseudorandom number generators (whether hardware or software or some combination) is insecure. When random values are required in cryptography, the goal is to make a message as hard to crack as possible, by eliminating or obscuring the parameters used to encrypt the message (the key) from the message itself or from the context in which it is carried. Pseudorandom sequences are deterministic and reproducible; all that is required in order to discover and reproduce a pseudorandom sequence is the algorithm used to generate it and the initial seed. So the entire sequence of numbers is only as powerful as the randomly chosen parts—sometimes the algorithm and the seed, but usually only the seed. There are many examples in cryptographic history of ciphers, otherwise excellent, in which random choices were not random enough and security was lost as a direct consequence. The World War II Japanese PURPLE cipher machine used for diplomatic communications is a good example. It was consistently broken throughout World War II, mostly because the "key values" used were insufficiently random. They had patterns, and those patterns made any intercepted traffic readily decryptable. Had the keys (i.e., the initial settings of the stepping switches in the machine) been made unpredictably (i.e., randomly), that traffic would have been much harder to break, and perhaps even secure in practice. Since pseudorandom numbers are in fact deterministic, a given seed will always determine the same pseudorandom number. 
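The seed-determinism just described is easy to demonstrate with Python's standard `random` module, a Mersenne Twister PRNG (the seed value here is arbitrary):

```python
import random

# Two Mersenne Twister generators seeded identically
a = random.Random(12345)
b = random.Random(12345)

seq_a = [a.randint(0, 9) for _ in range(10)]
seq_b = [b.randint(0, 9) for _ in range(10)]

# Identical seeds yield identical "random-looking" sequences:
# anyone who knows the seed can reproduce every output.
assert seq_a == seq_b
print(seq_a)
```

To an observer without the seed the digits pass statistical tests, but to one holding it the entire stream is known in advance, which is exactly why an attacker who recovers the seed recovers the whole keystream.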
This attribute is used in security, in the form of rolling code to avoid replay attacks, in which a command would be intercepted to be used by a thief at a later time. A Monte Carlo method simulation is defined as any method that utilizes sequences of random numbers to perform the simulation. Monte Carlo simulations are applied to many topics including quantum chromodynamics, cancer radiation therapy, traffic flow, stellar evolution and VLSI design. All these simulations require the use of random numbers and therefore pseudorandom number generators, which makes creating random-like numbers very important. A simple example of how a computer would perform a Monte Carlo simulation is the calculation of π. If a square enclosed a circle and a point were randomly chosen inside the square the point would either lie inside the circle or outside it. If the process were repeated many times, the ratio of the random points that lie inside the circle to the total number of random points in the square would approximate the ratio of the area of the circle to the area of the square. From this we can estimate π, as shown in the Python code below, which generates pseudorandom numbers with the MT19937 (Mersenne Twister) algorithm via NumPy (the older "scipy.random" interface was an alias for NumPy's random module and has since been removed from SciPy). Note that this method is a computationally inefficient way to numerically approximate π.
import numpy as np
rng = np.random.Generator(np.random.MT19937(seed=0))  # MT19937-backed generator
N = 100000
x_array = rng.random(N)
y_array = rng.random(N)
N_qtr_circle = np.sum(x_array**2 + y_array**2 < 1)  # points inside the quarter circle
pi_approx = 4 * float(N_qtr_circle) / N  # Typical values: 3.13756, 3.15156
https://en.wikipedia.org/wiki?curid=23210
Poales The Poales are a large order of flowering plants in the monocotyledons, and includes families of plants such as the grasses, bromeliads, and sedges. Sixteen plant families are currently recognized by botanists to be part of Poales. The flowers are typically small, enclosed by bracts, and arranged in inflorescences (except in three species of the genus "Mayaca", which possess very reduced, one-flowered inflorescences). The flowers of many species are wind pollinated; the seeds usually contain starch. The APG III system (2009) accepts the order within a monocot clade called commelinids, and accepts the following 16 families: The earlier APG system (1998) adopted the same placement of the order, although it used the spelling "commelinoids". It did not include the Bromeliaceae and Mayaceae, but had the additional families Prioniaceae (now included in Thurniaceae), Sparganiaceae (now in Typhaceae), and Hydatellaceae (now transferred out of the monocots; recently discovered to be an 'early-diverging' lineage of flowering plants). The morphology-based Cronquist system did not include an order named Poales, assigning these families to the orders Bromeliales, Cyperales, Hydatellales, Juncales, Restionales and Typhales. In early systems, an order including the grass family did not go by the name Poales but by a descriptive botanical name such as Graminales in the Engler system (update of 1964) and in the Hutchinson system (first edition, first volume, 1926), Glumiflorae in the Wettstein system (last revised 1935) or Glumaceae in the Bentham & Hooker system (third volume, 1883). The earliest fossils attributed to the Poales date to the late Cretaceous period about million years ago, though some studies (e.g., Bremer, 2002) suggest the origin of the group may extend to nearly 115 million years ago, likely in South America. The earliest known fossils include pollen and fruits. 
The phylogenetic position of Poales within the commelinids was difficult to resolve, but an analysis using complete chloroplast DNA found support for Poales as sister group of Commelinales plus Zingiberales. Major lineages within the Poales have been referred to as bromeliad, cyperid, xyrid, graminid, and restiid clades. A phylogenetic analysis resolved most relationships within the order but found weak support for the monophyly of the cyperid clade. The relationship between Centrolepidaceae and Restionaceae within the restiid clade remains unclear; the former may actually be embedded in the latter. The order includes four especially species-rich families. Cyperales was a name for an order of flowering plants. As used in the Engler system (update of 1964) and in the Wettstein system it consisted of only a single family. In the Cronquist system it was used for an order (placed in subclass "Commelinidae"), as circumscribed in 1981. The APG system now assigns the plants involved to the order "Poales". Eriocaulales is a botanical name for an order of flowering plants. The name was published by Takenoshin Nakai. In the Cronquist system the name was used for an order placed in the subclass "Commelinidae". The order consisted of one family only (1981). The APG IV system now assigns these plants to the order "Poales". The Poales are the most economically important order of monocots and possibly the most important order of plants in general. Within the order, by far the most important family economically is the family of grasses (Poaceae, syn. Gramineae), which includes the starch staples barley, maize, millet, rice, and wheat as well as bamboos (mostly used structurally, like wood, but somewhat as vegetables), and a few "seasonings" like sugarcane and lemongrass.
Graminoids, especially the grasses, are typically dominant in open (low moisture but not yet arid, or also fire climax) habitats like prairie/steppe and savannah and thus form a large proportion of the forage of grazing livestock. Possibly due to pastoral nostalgia or simply a desire for open areas for play, they dominate most Western yards as lawns, which consume vast sums of money in upkeep (artificial grazing—mowing—for aesthetics and to keep the allergenic flowers suppressed, irrigation, and fertilizer). Many Bromeliaceae are used as ornamental plants (and one, the pineapple, is internationally grown in the tropics for fruit). Many wetland species of sedges, rushes, grasses, and cattails are important habitat plants for waterfowl, are used in weaving chair seats, and (especially cattails) were important pre-agricultural food sources for man. Two sedges, chufa ("Cyperus esculentus", also a significant weed) and water chestnut ("Eleocharis dulcis") are still at least locally important wetland starchy root crops.
https://en.wikipedia.org/wiki?curid=23212
Ploidy Ploidy () is the number of complete sets of chromosomes in a cell, and hence the number of possible alleles for autosomal and pseudoautosomal genes. Somatic cells, tissues, and individual organisms can be described according to the number of sets of chromosomes present (the "ploidy level"): monoploid (1 set), diploid (2 sets), triploid (3 sets), tetraploid (4 sets), pentaploid (5 sets), hexaploid (6 sets), heptaploid or septaploid (7 sets), etc. The generic term polyploid is often used to describe cells with three or more chromosome sets. Virtually all sexually reproducing organisms are made up of somatic cells that are diploid or greater, but ploidy level may vary widely between different organisms, between different tissues within the same organism, and at different stages in an organism's life cycle. Half of all known plant genera contain polyploid species, and about two-thirds of all grasses are polyploid. Many animals are uniformly diploid, though polyploidy is common in invertebrates, reptiles, and amphibians. In some species, ploidy varies between individuals of the same species (as in the social insects), and in others entire tissues and organ systems may be polyploid despite the rest of the body being diploid (as in the mammalian liver). For many organisms, especially plants and fungi, changes in ploidy level between generations are major drivers of speciation. In mammals and birds, ploidy changes are typically fatal. There is, however, evidence of polyploidy in organisms now considered to be diploid, suggesting that polyploidy has contributed to evolutionary diversification in plants and animals through successive rounds of polyploidization and rediploidization. Humans are diploid organisms, carrying two complete sets of chromosomes in their somatic cells: one set of 23 chromosomes from their father and one set of 23 chromosomes from their mother. The two sets combined provide a full complement of 46 chromosomes. 
This total number of individual chromosomes (counting all complete sets) is called the chromosome number. The number of chromosomes found in a single complete set of chromosomes is called the monoploid number ("x"). The haploid number ("n") refers to the total number of chromosomes found in a gamete (a sperm or egg cell produced by meiosis in preparation for sexual reproduction). Under normal conditions, the haploid number is exactly half the total number of chromosomes present in the organism's somatic cells. For diploid organisms, the monoploid number and haploid number are equal; in humans, both are equal to 23. When a human germ cell undergoes meiosis, the diploid 46-chromosome complement is split in half to form haploid gametes. After fusion of a male and a female gamete (each containing 1 set of 23 chromosomes) during fertilization, the resulting zygote again has the full complement of 46 chromosomes: 2 sets of 23 chromosomes. The term "ploidy" is a back-formation from "haploidy" and "diploidy". "Ploid" is a combination of Ancient Greek -πλόος (-plóos, “-fold”) and -ειδής (-"eidḗs"), from εἶδος ("eîdos", "form, likeness"). The principal meaning of the Greek word ᾰ̔πλόος (haplóos) is "single", from ἁ- (ha-, “one, same”). διπλόος ("diplóos") means "duplex" or "two-fold". Diploid therefore means "duplex-shaped" (compare "humanoid", "human-shaped"). Polish botanist Eduard Strasburger coined the terms "haploid" and "diploid" in 1905. Some authors suggest that Strasburger based the terms on August Weismann's conception of the id (or germ plasm), hence haplo-"id" and diplo-"id". The two terms were brought into the English language from German through William Henry Lang's 1908 translation of a 1906 textbook by Strasburger and colleagues. The term haploid is used with two distinct but related definitions. In the most generic sense, haploid refers to having the number of sets of chromosomes normally found in a gamete. 
Because two gametes necessarily combine during sexual reproduction to form a single zygote from which somatic cells are generated, healthy gametes always possess exactly half the number of sets of chromosomes found in the somatic cells, and therefore "haploid" in this sense refers to having exactly half the number of sets of chromosomes found in a somatic cell. By this definition, an organism whose gametic cells contain a single copy of each chromosome (one set of chromosomes) may be considered haploid while the somatic cells, containing two copies of each chromosome (two sets of chromosomes), are diploid. This scheme of diploid somatic cells and haploid gametes is widely used in the animal kingdom and is the simplest to illustrate in diagrams of genetics concepts. But this definition also allows for haploid gametes with "more than one" set of chromosomes. As given above, gametes are by definition haploid, regardless of the actual number of sets of chromosomes they contain. An organism whose somatic cells are tetraploid (four sets of chromosomes), for example, will produce gametes by meiosis that contain two sets of chromosomes. These gametes might still be called haploid even though they are numerically diploid. An alternative usage defines "haploid" as having a single copy of each chromosome – that is, one and only one set of chromosomes. In this case, the nucleus of a eukaryotic cell is only said to be haploid if it has a single set of chromosomes, each one not being part of a pair. By extension a cell may be called haploid if its nucleus has one set of chromosomes, and an organism may be called haploid if its body cells (somatic cells) have one set of chromosomes per cell. By this definition haploid therefore would not be used to refer to the gametes produced by the tetraploid organism in the example above, since these gametes are numerically diploid. 
The term monoploid is often used as a less ambiguous way to describe a single set of chromosomes; by this second definition, haploid and monoploid are identical and can be used interchangeably. Gametes (sperm and ova) are haploid cells. The haploid gametes produced by most organisms combine to form a zygote with "n" pairs of chromosomes, i.e. 2"n" chromosomes in total. The chromosomes in each pair, one of which comes from the sperm and one from the egg, are said to be homologous. Cells and organisms with pairs of homologous chromosomes are called diploid. For example, most animals are diploid and produce haploid gametes. During meiosis, sex cell precursors have their number of chromosomes halved by randomly "choosing" one member of each pair of chromosomes, resulting in haploid gametes. Because homologous chromosomes usually differ genetically, gametes usually differ genetically from one another. All plants and many fungi and algae switch between a haploid and a diploid state, with one of the stages emphasized over the other. This is called alternation of generations. Most fungi and algae are haploid during the principal stage of their life cycle, as are some primitive plants like mosses. More recently evolved plants, like the gymnosperms and angiosperms, spend the majority of their life cycle in the diploid stage. Most animals are diploid, but male bees, wasps, and ants are haploid organisms because they develop from unfertilized, haploid eggs, while females (workers and queens) are diploid, making their system haplodiploid. In some cases there is evidence that the "n" chromosomes in a haploid set have resulted from duplications of an originally smaller set of chromosomes. This "base" number – the number of apparently originally unique chromosomes in a haploid set – is called the monoploid number, also known as basic or cardinal number, or fundamental number. 
As an example, the chromosomes of common wheat are believed to be derived from three different ancestral species, each of which had 7 chromosomes in its haploid gametes. The monoploid number is thus 7 and the haploid number is 3 × 7 = 21. In general "n" is a multiple of "x". The somatic cells in a wheat plant have six sets of 7 chromosomes: three sets from the egg and three sets from the sperm which fused to form the plant, giving a total of 42 chromosomes. As a formula, for wheat 2"n" = 6"x" = 42, so that the haploid number "n" is 21 and the monoploid number "x" is 7. The gametes of common wheat are considered to be haploid, since they contain half the genetic information of somatic cells, but they are not monoploid, as they still contain three complete sets of chromosomes ("n" = 3"x"). In the case of wheat, the origin of its haploid number of 21 chromosomes from three sets of 7 chromosomes can be demonstrated. In many other organisms, although the number of chromosomes may have originated in this way, this is no longer clear, and the monoploid number is regarded as the same as the haploid number. Thus in humans, "x" = "n" = 23. Diploid cells have two homologous copies of each chromosome, usually one from the mother and one from the father. All or nearly all mammals are diploid organisms. The suspected tetraploid (possessing four chromosome sets) plains viscacha rat ("Tympanoctomys barrerae") and golden viscacha rat ("Pipanacoctomys aureus") have been regarded as the only known exceptions (as of 2004). However, some genetic studies have rejected any polyploidism in mammals as unlikely, and suggest that amplification and dispersion of repetitive sequences best explain the large genome size of these two rodents. All normal diploid individuals have some small fraction of cells that display polyploidy. Human diploid cells have 46 chromosomes (the somatic number, "2n") and human haploid gametes (egg and sperm) have 23 chromosomes ("n"). 
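The wheat and human chromosome bookkeeping above reduces to a few lines of arithmetic, sketched here in Python (variable names are illustrative):

```python
# Ploidy bookkeeping for common wheat (Triticum aestivum), following the text
x = 7                        # monoploid number: chromosomes per ancestral set
sets_per_somatic_cell = 6    # hexaploid: 2n = 6x
somatic_number = sets_per_somatic_cell * x   # 42 chromosomes per somatic cell
n = somatic_number // 2                      # haploid (gametic) number
sets_per_gamete = n // x                     # gametes carry n = 3x, i.e. 3 sets
print(somatic_number, n, sets_per_gamete)    # 42 21 3

# For humans, x = n = 23, so somatic cells carry 2n = 46 chromosomes
human_x = 23
print(2 * human_x)                           # 46
```

The wheat case shows why "x" and "n" are distinct quantities: its gametes are haploid (half the somatic count) yet not monoploid, since each still carries three complete sets.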
Retroviruses that contain two copies of their RNA genome in each viral particle are also said to be diploid. Examples include human foamy virus, human T-lymphotropic virus, and HIV. Polyploidy is the state where all cells have multiple sets of chromosomes beyond the basic set, usually 3 or more. Specific terms are triploid (3 sets), tetraploid (4 sets), pentaploid (5 sets), hexaploid (6 sets), heptaploid or septaploid (7 sets), octoploid (8 sets), nonaploid (9 sets), decaploid (10 sets), undecaploid (11 sets), dodecaploid (12 sets), tridecaploid (13 sets), tetradecaploid (14 sets), etc. Some higher ploidies include hexadecaploid (16 sets), dotriacontaploid (32 sets), and tetrahexacontaploid (64 sets), though Greek terminology may be set aside for readability in cases of higher ploidy (such as "16-ploid"). Polytene chromosomes of plants and fruit flies can be 1024-ploid. Ploidy of systems such as the salivary gland, elaiosome, endosperm, and trophoblast can exceed this, up to 1048576-ploid in the silk glands of the commercial silkworm "Bombyx mori". The chromosome sets may be from the same species or from closely related species. In the latter case, these are known as allopolyploids (or amphidiploids, which are allopolyploids that behave as if they were normal diploids). Allopolyploids are formed from the hybridization of two separate species. In plants, this probably most often occurs from the pairing of meiotically unreduced gametes, and not by diploid–diploid hybridization followed by chromosome doubling. The so-called "Brassica" triangle is an example of allopolyploidy, where three different parent species have hybridized in all possible pair combinations to produce three new species. Polyploidy occurs commonly in plants, but rarely in animals. Even in diploid organisms, many somatic cells are polyploid due to a process called endoreduplication, where duplication of the genome occurs without mitosis (cell division). 
The extreme in polyploidy occurs in the fern genus "Ophioglossum", the adder's-tongues, in which polyploidy results in chromosome counts in the hundreds, or, in at least one case, well over one thousand. It is possible for polyploid organisms to revert to lower ploidy by haploidisation. Polyploidy is a characteristic of the bacterium "Deinococcus radiodurans" and of the archaeon "Halobacterium salinarum". These two species are highly resistant to ionizing radiation and desiccation, conditions that induce DNA double-strand breaks. This resistance appears to be due to efficient homologous recombinational repair. Depending on growth conditions, prokaryotes such as bacteria may have a chromosome copy number of 1 to 4, and that number is commonly fractional, counting portions of the chromosome partly replicated at a given time. This is because under exponential growth conditions the cells are able to replicate their DNA faster than they can divide. In ciliates, the macronucleus is called ampliploid, because only part of the genome is amplified. Mixoploidy is the case where two cell lines, one diploid and one polyploid, coexist within the same organism. Though polyploidy in humans is not viable, mixoploidy has been found in live adults and children. There are two types: diploid-triploid mixoploidy, in which some cells have 46 chromosomes and some have 69, and diploid-tetraploid mixoploidy, in which some cells have 46 and some have 92 chromosomes. It is a major topic of cytology. Dihaploid and polyhaploid cells are formed by haploidisation of polyploids, i.e., by halving the chromosome constitution. Dihaploids (which are diploid) are important for selective breeding of tetraploid crop plants (notably potatoes), because selection is faster with diploids than with tetraploids. Tetraploids can be reconstituted from the diploids, for example by somatic fusion. 
The term "dihaploid" was coined by Bender to combine in one word the number of genome copies (diploid) and their origin (haploid). The term is well established in this original sense, but it has also been used for doubled monoploids or doubled haploids, which are homozygous and used for genetic research. Euploidy (Greek "eu", "true" or "even") is the state of a cell or organism having one or more complete copies of the same basic set of chromosomes, possibly excluding the sex-determining chromosomes. For example, most human cells have 2 of each of the 23 homologous monoploid chromosomes, for a total of 46 chromosomes. A human cell with one extra set of the 23 normal chromosomes (functionally triploid) would be considered euploid. Euploid karyotypes are consequently multiples of the haploid number, which in humans is 23. Aneuploidy is the state where one or more individual chromosomes of a normal set are absent or present in more than their usual number of copies (excluding the absence or presence of complete sets, which is considered euploidy). Unlike euploidy, aneuploid karyotypes will not be a multiple of the haploid number. In humans, examples of aneuploidy include having a single extra chromosome (as in Down syndrome, where affected individuals have three copies of chromosome 21) or missing a chromosome (as in Turner syndrome, where affected individuals are missing an X chromosome). Aneuploid karyotypes are given names with the suffix "-somy" (rather than "-ploidy", used for euploid karyotypes), such as trisomy and monosomy. Homoploid means "at the same ploidy level", i.e. having the same number of homologous chromosomes. For example, homoploid hybridization is hybridization where the offspring have the same ploidy level as the two parental species. This contrasts with a common situation in plants where chromosome doubling accompanies or occurs soon after hybridization. Similarly, homoploid speciation contrasts with polyploid speciation.
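The multiple-of-the-haploid-number rule that separates euploidy from aneuploidy can be sketched in Python. This is a simplified illustration (the function name is hypothetical, and the sex-chromosome caveats mentioned in the text are ignored):

```python
def classify_karyotype(count: int, n: int = 23) -> str:
    """Classify a total chromosome count against the haploid number n.

    Euploid karyotypes are whole multiples of n (46, 69, 92, ... in
    humans); anything else is aneuploid, as in trisomy 21 (47 chromosomes)
    or Turner syndrome (45 chromosomes).
    """
    if count % n == 0:
        return f"euploid ({count // n} sets)"
    return "aneuploid"

print(classify_karyotype(46))  # euploid (2 sets)
print(classify_karyotype(47))  # aneuploid
print(classify_karyotype(69))  # euploid (3 sets)
```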
Zygoidy is the state in which the chromosomes are paired and can undergo meiosis. The zygoid state of a species may be diploid or polyploid. In the azygoid state the chromosomes are unpaired. It may be the natural state of some asexual species or may occur after meiosis. In diploid organisms the azygoid state is monoploid. (See below for dihaploidy.) In the strictest sense, ploidy refers to the number of sets of chromosomes in a single nucleus rather than in the cell as a whole. Because in most situations there is only one nucleus per cell, it is commonplace to speak of the ploidy of a cell, but in cases in which there is more than one nucleus per cell, more specific definitions are required when ploidy is discussed. Authors may at times report the total combined ploidy of all nuclei present within the cell membrane of a syncytium, though usually the ploidy of each nucleus is described individually. For example, a fungal dikaryon with two separate haploid nuclei is distinguished from a diploid cell in which the chromosomes share a nucleus and can be shuffled together. It is possible on rare occasions for ploidy to increase in the germline, which can result in polyploid offspring and ultimately polyploid species. This is an important evolutionary mechanism in both plants and animals and is known as a primary driver of speciation. As a result, it may become desirable to distinguish between the ploidy of a species or variety as it presently breeds and that of an ancestor. The number of chromosomes in the ancestral (non-homologous) set is called the monoploid number ("x"), and is distinct from the haploid number ("n") in the organism as it now reproduces. Common wheat ("Triticum aestivum") is an organism in which "x" and "n" differ. Each plant has a total of six sets of chromosomes (with two sets likely having been obtained from each of three different diploid species that are its distant ancestors). 
The somatic cells are hexaploid, 2"n" = 6"x" = 42 (where the monoploid number "x" = 7 and the haploid number "n" = 21). The gametes are haploid for their own species, but triploid, with three sets of chromosomes, by comparison to a probable evolutionary ancestor, einkorn wheat. Tetraploidy (four sets of chromosomes, 2"n" = 4"x") is common in many plant species, and also occurs in amphibians, reptiles, and insects. For example, species of "Xenopus" (African clawed frogs) form a ploidy series, featuring diploid ("X. tropicalis", 2n=20), tetraploid ("X. laevis", 4n=36), octaploid ("X. wittei", 8n=72), and dodecaploid ("X. ruwenzoriensis", 12n=108) species. Over evolutionary time scales in which chromosomal polymorphisms accumulate, these changes become less apparent by karyotype – for example, humans are generally regarded as diploid, but the 2R hypothesis proposes two rounds of whole genome duplication in early vertebrate ancestors. Ploidy can also vary between individuals of the same species or at different stages of the life cycle. In some insects it differs by caste. In humans, only the gametes are haploid, but in many of the social insects, including ants, bees, and termites, certain individuals develop from unfertilized eggs, making them haploid for their entire lives, even as adults. In the Australian bulldog ant, "Myrmecia pilosula", a haplodiploid species, haploid individuals have a single chromosome and diploid individuals have two chromosomes. In "Entamoeba", the ploidy level varies from 4"n" to 40"n" in a single population. Alternation of generations occurs in most plants, with individuals "alternating" ploidy level between different stages of their sexual life cycle. In large multicellular organisms, variations in ploidy level between different tissues, organs, or cell lineages are common.
Because the chromosome number is generally reduced only by the specialized process of meiosis, the somatic cells of the body inherit and maintain the chromosome number of the zygote by mitosis. However, in many situations somatic cells double their copy number by means of endoreduplication as an aspect of cellular differentiation. For example, the hearts of two-year-old human children contain 85% diploid and 15% tetraploid nuclei, but by 12 years of age the proportions become approximately equal, and adults examined contained 27% diploid, 71% tetraploid and 2% octaploid nuclei. There is continued study and debate regarding the fitness advantages or disadvantages conferred by different ploidy levels. A study comparing the karyotypes of endangered or invasive plants with those of their relatives found that being polyploid as opposed to diploid is associated with a 14% lower risk of being endangered, and a 20% greater chance of being invasive. Polyploidy may be associated with increased vigor and adaptability. Some studies suggest that selection is more likely to favor diploidy in host species and haploidy in parasite species. When a germ cell with an uneven number of chromosomes undergoes meiosis, the chromosomes cannot be evenly divided between the daughter cells, resulting in aneuploid gametes. Triploid organisms, for instance, are usually sterile. Because of this, triploidy is commonly exploited in agriculture to produce seedless fruit such as bananas and watermelons. If the fertilization of human gametes results in three sets of chromosomes, the condition is called triploid syndrome. The common potato ("Solanum tuberosum") is an example of a tetraploid organism, carrying four sets of chromosomes. During sexual reproduction, each potato plant inherits two sets of 12 chromosomes from the pollen parent, and two sets of 12 chromosomes from the ovule parent. The four sets combined provide a full complement of 48 chromosomes. The haploid number (half of 48) is 24. 
The monoploid number equals the total chromosome number divided by the ploidy level of the somatic cells: 48 chromosomes in total divided by a ploidy level of 4 equals a monoploid number of 12. Hence, the monoploid number (12) and haploid number (24) are distinct in this example. However, commercial potato crops (as well as many other crop plants) are commonly propagated vegetatively (by asexual reproduction through mitosis), in which case new individuals are produced from a single parent, without the involvement of gametes and fertilization, and all the offspring are genetically identical to each other and to the parent, including in chromosome number. The parents of these vegetative clones may still be capable of producing haploid gametes in preparation for sexual reproduction, but these gametes are not used to create the vegetative offspring by this route. Several eukaryotic genome-scale and genome-size databases and other sources list the ploidy levels of many organisms.
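The relationship between these chromosome counts reduces to simple division. The Python sketch below (hypothetical helper names, not from any cited source) checks the wheat and potato figures given in the text.

```python
def monoploid_number(total_chromosomes: int, ploidy: int) -> int:
    """x: chromosomes in one ancestral set = total count / ploidy level."""
    return total_chromosomes // ploidy

def haploid_number(total_chromosomes: int) -> int:
    """n: chromosomes in a gamete = half the somatic count."""
    return total_chromosomes // 2

# Common wheat: somatic cells have 42 chromosomes at ploidy 6,
# so x = 7 and n = 21 (2n = 6x = 42).
assert monoploid_number(42, 6) == 7 and haploid_number(42) == 21
# Potato: 48 chromosomes at ploidy 4, so x = 12 and n = 24.
assert monoploid_number(48, 4) == 12 and haploid_number(48) == 24
```

The two numbers coincide (x = n) only for diploids; for the hexaploid wheat and tetraploid potato above they differ, which is why the article keeps the symbols separate.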
https://en.wikipedia.org/wiki?curid=23219
IBM System/360 The IBM System/360 (S/360) is a family of mainframe computer systems that was announced by IBM on April 7, 1964, and delivered between 1965 and 1978. It was the first family of computers designed to cover the complete range of applications, from small to large, both commercial and scientific. The design made a clear distinction between architecture and implementation, allowing IBM to release a suite of compatible designs at different prices. All but the only partially compatible Model 44 and the most expensive systems use microcode to implement the instruction set, which features 8-bit byte addressing and binary integer, packed decimal, and hexadecimal floating-point arithmetic. The launch of the System/360 family introduced IBM's Solid Logic Technology (SLT), a new technology that was the start of more powerful but smaller computers. The slowest System/360 model announced in 1964, the Model 30, could perform up to 34,500 instructions per second, with memory from 8 to 64 KB. High performance models came later. The 1967 IBM System/360 Model 91 could execute up to 16.6 million instructions per second. The larger 360 models could have up to 8 MB of main memory, though that much main memory was unusual—a large installation might have as little as 256 KB of main storage, but 512 KB, 768 KB or 1024 KB was more common. Up to 8 megabytes of slower (8 microsecond) Large Capacity Storage (LCS) was also available for some models. The IBM 360 was extremely successful in the market, allowing customers to purchase a smaller system with the knowledge they would always be able to migrate upward if their needs grew, without reprogramming of application software or replacing peripheral devices. Many consider the design one of the most successful computers in history, influencing computer design for years to come. The chief architect of System/360 was Gene Amdahl, and the project was managed by Fred Brooks, responsible to Chairman Thomas J. Watson Jr.
The commercial release was piloted by another of Watson's lieutenants, John R. Opel, who managed the 1964 launch of the System/360 mainframe family. Application-level compatibility (with some restrictions) for System/360 software is maintained to the present day with the System z mainframe servers. In contrast with normal industry practice at the time, IBM created an entire new series of computers, from small to large, low- to high-performance, all using the same instruction set (with two exceptions for specific markets). This feat allowed customers to use a cheaper model and then upgrade to larger systems as their needs increased without the time and expense of rewriting software. Before the introduction of System/360, business and scientific applications used different computers with different instruction sets and operating systems. Different-sized computers also had their own instruction sets. IBM was the first manufacturer to exploit microcode technology to implement a compatible range of computers of widely differing performance, although the largest, fastest models had hard-wired logic instead. This flexibility greatly lowered barriers to entry. With most other vendors customers had to choose between machines they could outgrow and machines that were potentially too powerful and thus too costly. This meant that many companies simply did not buy computers. IBM initially announced a series of six computers and forty common peripherals. IBM eventually delivered fourteen models, including rare one-off models for NASA. The least expensive model was the Model 20 with as little as 4096 bytes of core memory, eight 16-bit registers instead of the sixteen 32-bit registers of other System/360 models, and an instruction set that was a subset of that used by the rest of the range. The initial announcement in 1964 included Models 30, 40, 50, 60, 62, and 70. The first three were low- to middle-range systems aimed at the IBM 1400 series market.
All three first shipped in mid-1965. The last three, intended to replace the 7000 series machines, never shipped and were replaced with the 65 and 75, which were first delivered in November 1965 and January 1966, respectively. Later additions to the low-end included models 20 (1966, mentioned above), 22 (1971), and 25 (1968). The Model 20 had several sub-models; sub-model 5 sat at the high end of the Model 20 range. The Model 22 was a recycled Model 30 with minor limitations: a smaller maximum memory configuration, and slower I/O channels, which limited it to slower and lower-capacity disk and tape devices than on the 30. The Model 44 (1966) was a specialized model, designed for scientific computing and for real-time computing and process control, featuring some additional instructions, and with all storage-to-storage instructions and five other complex instructions eliminated. A succession of high-end machines included the Model 67 (1966, mentioned below, briefly anticipated as the 64 and 66), 85 (1969), 91 (1967, anticipated as the 92), 95 (1968), and 195 (1971). The 85 design was intermediate between the System/360 line and the follow-on System/370 and was the basis for the 370/165. There was a System/370 version of the 195, but it did not include Dynamic Address Translation. The implementations differed substantially, using different native data path widths, presence or absence of microcode, yet were extremely compatible. Except where specifically documented, the models were architecturally compatible. The 91, for example, was designed for scientific computing and provided out-of-order instruction execution (and could yield "imprecise interrupts" if a program trap occurred while several instructions were being read), but lacked the decimal instruction set used in commercial applications.
New features could be added without violating architectural definitions: the 65 had a dual-processor version (M65MP) with extensions for inter-CPU signalling; the 85 introduced cache memory. Models 44, 75, 91, 95, and 195 were implemented with hardwired logic, rather than microcoded as all other models. The Model 67, announced in August 1965, was the first production IBM system to offer dynamic address translation (virtual memory) hardware to support time-sharing. "DAT" is now more commonly referred to as an MMU. An experimental one-off unit was built based on a model 40. Before the 67, IBM had announced models 64 and 66, DAT versions of the 60 and 62, but they were almost immediately replaced with the 67 at the same time that the 60 and 62 were replaced with the 65. DAT hardware would reappear in the S/370 series in 1972, though it was initially absent from the series. Like its close relative, the 65, the 67 also offered dual CPUs. IBM stopped marketing all System/360 models by the end of 1977. IBM's existing customers had a large investment in software that executed on second-generation machines. Several models offered the option of emulation of the customer's previous computer using a combination of special hardware, special microcode and an emulation program that used the emulation instructions to simulate the target system, so that old programs could run on the new machine. Customers initially had to halt the computer and load the emulation program. IBM later added features and modified emulator programs to allow emulation of the 1401, 1440, 1460, 1410 and 7010 under the control of an operating system. The Model 85 and later System/370 maintained the precedent, retaining emulation options and allowing emulator programs to execute under operating system control alongside native programs. System/360 (excepting the Model 20) was replaced with the compatible System/370 range in 1970 and Model 20 users were targeted to move to the IBM System/3. 
(The Future Systems project, intended as a major technological breakthrough beyond System/370, was dropped in the mid-1970s for cost-effectiveness and continuity reasons.) Later compatible IBM systems include the 4300 family, the 308x family, the 3090, the ES/9000 and 9672 families (System/390 family), and the IBM Z series. Computers that were mostly identical or compatible in terms of the machine code or architecture of the System/360 included Amdahl's 470 family (and its successors), Hitachi mainframes, the UNIVAC 9000 series, Fujitsu as the Facom, the RCA Spectra 70 series, and the English Electric System 4. The System 4 machines were built under license from RCA. RCA sold the Spectra series to what was then UNIVAC, where they became the UNIVAC Series 70. UNIVAC also developed the UNIVAC Series 90 as successors to the 9000 series and Series 70. The Soviet Union produced a System/360 clone named the ES EVM. The IBM 5100 portable computer, introduced in 1975, offered an option to execute the System/360's APL.SV programming language through a hardware emulator. IBM used this approach to avoid the costs and delay of creating a 5100-specific version of APL. Special radiation-hardened and otherwise somewhat modified System/360s, in the form of the System/4 Pi avionics computer, are used in several fighter and bomber jet aircraft. In the complete 32-bit AP-101 version, 4 Pi machines were used as the replicated computing nodes of the fault-tolerant Space Shuttle computer system (in five nodes). The U.S. Federal Aviation Administration operated the IBM 9020, a special cluster of modified System/360s for air traffic control, from 1970 until the 1990s. (Some 9020 software is apparently still used via emulation on newer hardware.) The System/360 introduced a number of industry standards to the marketplace. The System/360 series has a computer system architecture specification.
This specification makes no assumptions on the implementation itself, but rather describes the interfaces and expected behavior of an implementation. The architecture describes mandatory interfaces that must be available on all implementations, and optional interfaces. All models of System/360, except for the Model 20 and Model 44, implemented that specification. Binary arithmetic and logical operations are performed as register-to-register and as memory-to-register/register-to-memory as a standard feature. If the Commercial Instruction Set option was installed, packed decimal arithmetic could be performed as memory-to-memory with some memory-to-register operations. The Scientific Instruction Set feature, if installed, provided access to four floating point registers that could be programmed for either 32-bit or 64-bit floating point operations. The Models 85 and 195 could also operate on 128-bit extended-precision floating point numbers stored in pairs of floating point registers, and software provided emulation in other models. The System/360 used an 8-bit byte, 32-bit word, 64-bit double-word, and 4-bit nibble. Machine instructions had operators with operands, which could contain register numbers or memory addresses. This complex combination of instruction options resulted in a variety of instruction lengths and formats. Memory addressing was accomplished using a base-plus-displacement scheme, with registers 1 through F (15). A displacement was encoded in 12 bits, allowing displacements of 0 to 4095 bytes as the offset from the address held in a base register.
Register 0 could not be used as a base register, an index register, or a branch address register; the value "0" was reserved to indicate an address in the first 4 KB of memory. If register 0 was specified as a base or index register, the value 0x00000000 was implicitly used in the effective address calculation in place of whatever value might be contained within register 0. If it was specified as a branch address register, no branch was taken and the content of register 0 was ignored, but any side effect of the instruction was still performed. This behavior permitted initial execution of interrupt routines, since base registers would not necessarily be set to 0 during the first few instruction cycles of an interrupt routine. It is not needed for IPL ("Initial Program Load" or boot), as one can always clear a register without the need to save it. With the exception of the Model 67, all addresses were real memory addresses. Virtual memory was not available in most IBM mainframes until the System/370 series. The Model 67 introduced a virtual memory architecture, which MTS, CP-67, and TSS/360 used—but not IBM's mainline System/360 operating systems. The System/360 machine-code instructions are 2 bytes long (no memory operands), 4 bytes long (one operand), or 6 bytes long (two operands). Instructions are always situated on 2-byte boundaries. Operations like MVC (Move-Characters) (Hex: D2) can only move at most 256 bytes of information. Moving more than 256 bytes of data required multiple MVC operations. (The System/370 series introduced a family of more powerful instructions such as the MVCL "Move-Characters-Long" instruction, which supports moving up to 16 MB as a single block.) An operand is two bytes long, typically representing an address as a 4-bit nibble denoting a base register and a 12-bit displacement relative to the contents of that register, in the range 000–FFF (shown here as hexadecimal numbers).
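The base-plus-displacement calculation, including the special treatment of register 0, and the 6-byte two-operand layout used by instructions such as MVC can be sketched as follows. This is a hypothetical Python illustration, not IBM documentation; `regs` stands in for the general-purpose registers.

```python
def effective_address(regs: dict, base: int, displacement: int) -> int:
    """Base-plus-displacement addressing.

    `regs` maps register numbers 1-15 to their contents. Specifying
    base register 0 contributes the value 0 to the calculation,
    regardless of what register 0 actually holds.
    """
    return (regs[base] if base != 0 else 0) + displacement

def decode_ss(instruction: bytes):
    """Decode a 6-byte two-operand (SS-format) instruction such as MVC.

    Layout: opcode byte, length code byte, then two operands, each a
    4-bit base-register nibble followed by a 12-bit displacement.
    """
    opcode, length = instruction[0], instruction[1]
    def operand(hi: int, lo: int):
        return hi >> 4, ((hi & 0x0F) << 8) | lo
    b1, d1 = operand(instruction[2], instruction[3])
    b2, d2 = operand(instruction[4], instruction[5])
    return opcode, length + 1, (b1, d1), (b2, d2)

# The MVC encoding D2FF 8001 7000: opcode D2, length code FF (256 bytes),
# destination at base register 8 + 0x001, source at base register 7 + 0x000.
op, nbytes, dst, src = decode_ss(bytes.fromhex("D2FF80017000"))
assert op == 0xD2 and nbytes == 256
assert dst == (8, 0x001) and src == (7, 0x000)

# Base register 7 holding 0x5000 with displacement 0x123 addresses 0x5123;
# base "register" 0 always yields just the displacement.
assert effective_address({7: 0x5000}, 7, 0x123) == 0x5123
assert effective_address({}, 0, 0x123) == 0x123
```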
The address corresponding to that operand is the contents of the specified general-purpose register plus the displacement. For example, an MVC instruction that moves 256 bytes (with length code 255 in hexadecimal as FF) from base register 7, plus displacement 000, to base register 8, plus displacement 001, would be coded as the 6-byte instruction "D2FF 8001 7000" (operator/length/address1/address2). The System/360 was designed to separate the "system state" from the "problem state". This provided a basic level of security and recoverability from programming errors. Problem (user) programs could not modify data or program storage associated with the system state. Addressing, data, or operation exception errors made the machine enter the system state through a controlled routine so the operating system could try to correct or terminate the program in error. Similarly, it could recover certain processor hardware errors through the "machine check" routines. Peripherals interfaced to the system via "channels". A channel is a specialized processor with the instruction set optimized for transferring data between a peripheral and main memory. In modern terms, this could be compared to direct memory access (DMA). The S/360 connects channels to control units with bus and tag cables; IBM eventually replaced these with Enterprise Systems Connection (ESCON) and Fibre Connection (FICON) channels. There were initially two types of channels: byte-multiplexer channels (known at the time simply as "multiplexor channels"), for connecting "slow speed" devices such as card readers and punches, line printers, and communications controllers, and selector channels for connecting high speed devices, such as disk drives, tape drives, data cells and drums. Every System/360 (except for the Model 20, which was not a standard 360) has a byte-multiplexer channel and one or more selector channels, though the model 25 has just one channel, which can be either a byte-multiplexor or selector channel.
The smaller models (up to the model 50) have integrated channels, while for the larger models (model 65 and above) the channels are large separate units in separate cabinets: the IBM 2870 is the byte-multiplexor channel with up to four selector sub-channels, and the IBM 2860 is up to three selector channels. The byte-multiplexer channel is able to handle I/O to/from several devices simultaneously at the device's highest rated speeds, hence the name, as it multiplexed I/O from those devices onto a single data path to main memory. Devices connected to a byte-multiplexer channel are configured to operate in 1-byte, 2-byte, 4-byte, or "burst" mode. The larger "blocks" of data are used to handle progressively faster devices. For example, a 2501 card reader operating at 600 cards per minute would be in 1-byte mode, while a 1403-N1 printer would be in burst mode. Also, the byte-multiplexer channels on larger models have an optional selector subchannel section that would accommodate tape drives. The byte-multiplexor's channel address was typically "0" and the selector subchannel addresses were from "C0" to "FF." Thus, tape drives on System/360 were commonly addressed at 0C0-0C7. Other common byte-multiplexer addresses are: 00A: 2501 Card Reader, 00C/00D: 2540 Reader/Punch, 00E/00F: 1403-N1 Printers, 010-013: 3211 Printers, 020-0BF: 2701/2703 Telecommunications Units. These addresses are still commonly used in z/VM virtual machines. System/360 models 40 and 50 have an integrated 1052-7 console that is usually addressed as 01F; however, it was not connected to the byte-multiplexer channel but instead had a direct internal connection to the mainframe. The model 30 attached a different model of 1052 through a 1051 control unit. The models 60 through 75 also use the 1052-7. Selector channels enabled I/O to high speed devices. These storage devices were attached to a control unit and then to the channel. The control unit let clusters of devices be attached to the channels.
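Device addresses such as 0C0-0C7 or 00E combine a channel digit with a two-hex-digit unit address. The sketch below (a hypothetical helper, assuming the conventional three-hex-digit form implied by the examples above) splits one apart.

```python
def split_device_address(addr: str) -> tuple:
    """Split a three-hex-digit device address into (channel, unit).

    An address like "0C3" names unit 0xC3 on channel 0, so the tape
    drives at 0C0-0C7 in the text are units C0-C7 on channel 0's
    selector subchannel section.
    """
    value = int(addr, 16)
    return value >> 8, value & 0xFF

channel, unit = split_device_address("0C3")
assert (channel, unit) == (0, 0xC3)

# The 1403-N1 printer at 00E sits on channel 0, unit 0x0E.
assert split_device_address("00E") == (0, 0x0E)
```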
On higher speed models, multiple selector channels, which could operate simultaneously or in parallel, improved overall performance. Control units are connected to the channels with "bus and tag" cable pairs. The bus cables carried the address and data information and the tag cables identified what data was on the bus. The general configuration of a channel is to connect the devices in a chain, like this: Mainframe—Control Unit X—Control Unit Y—Control Unit Z. Each control unit is assigned a "capture range" of addresses that it services. For example, control unit X might capture addresses 40-4F, control unit Y: C0-DF, and control unit Z: 80-9F. Capture ranges had to be a multiple of 8, 16, 32, 64, or 128 devices and be aligned on appropriate boundaries. Each control unit in turn has one or more devices attached to it. For example, control unit Y might have six disks attached, which would be addressed as C0-C5. There are three general types of bus-and-tag cables produced by IBM. The first is the standard gray bus-and-tag cable, followed by the blue bus-and-tag cable, and finally the tan bus-and-tag cable. Generally, newer cable revisions are capable of higher speeds or longer distances, and some peripherals specified minimum cable revisions both upstream and downstream. The cable ordering of the control units on the channel is also significant. Each control unit is "strapped" as High or Low priority. When a device selection was sent out on a mainframe's channel, the selection was sent from X->Y->Z->Y->X. If the control unit was "high", the selection was checked in the outbound direction; if "low", in the inbound direction. Thus, control unit X was either 1st or 5th, Y was either 2nd or 4th, and Z was 3rd in line. It is also possible to have multiple channels attached to a control unit from the same or multiple mainframes, thus providing a rich high-performance, multiple-access, and backup capability.
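The high/low strapping rule can be modeled by assigning each control unit either its outbound or its inbound position on the X->Y->Z->Y->X pass. This is a hypothetical sketch of the priority scheme described above, not IBM code.

```python
def selection_order(units: list) -> list:
    """Return control units in the order they check a device selection.

    `units` is a list of (name, strapping) pairs in cable order from the
    mainframe; strapping is "high" (checks on the outbound pass) or
    "low" (checks on the inbound pass). For a chain X-Y-Z the signal
    path is X->Y->Z->Y->X, so X checks 1st or 5th, Y 2nd or 4th, and
    Z (at the end of the chain) always 3rd.
    """
    k = len(units)
    def position(i: int, strap: str) -> int:
        # Outbound positions are 1..k; inbound positions are 2k-1 down to k.
        return i + 1 if strap == "high" else 2 * k - i - 1
    return [name for _, name in
            sorted((position(i, s), n) for i, (n, s) in enumerate(units))]

# All units strapped high: checked in plain cable order.
assert selection_order([("X", "high"), ("Y", "high"), ("Z", "high")]) == ["X", "Y", "Z"]
# X strapped low: it waits for the inbound pass and checks last (5th).
assert selection_order([("X", "low"), ("Y", "high"), ("Z", "high")]) == ["Y", "Z", "X"]
```

Strapping a control unit low therefore demotes it behind every high-strapped unit, which is how installations tuned which devices won contention for the channel.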
Typically the total cable length of a channel is limited to 200 feet, less being preferred. Each control unit accounts for about 10 "feet" of the 200-foot limit. IBM first introduced a new type of I/O channel on the Model 85 and Model 195, the 2880 block multiplexer channel, and then made them standard on the System/370. This channel allowed a device to suspend a channel program, pending the completion of an I/O operation and thus to free the channel for use by another device. A block multiplexer channel can support either standard 1.5 MB/second connections or, with the 2-byte interface feature, 3 MB/second; the latter use one tag cable and two bus cables. On the S/370 there is an option for a 3.0 MB/s data streaming channel with one bus cable and one tag cable. The initial use for this was the 2305 fixed-head disk, which has 8 "exposures" (alias addresses) and rotational position sensing (RPS). Block multiplexer channels can operate as a selector channel to allow compatible attachment of legacy subsystems. Being uncertain of the reliability and availability of the then new monolithic integrated circuits, IBM chose instead to design and manufacture its own custom hybrid integrated circuits. These were built on 11 mm square ceramic substrates. Resistors were silk screened on and discrete glass encapsulated transistors and diodes were added. The substrate was then covered with a metal lid or encapsulated in plastic to create a "Solid Logic Technology" (SLT) module. A number of these SLT modules were then flip chip mounted onto a small multi-layer printed circuit "SLT card". Each card had one or two sockets on one edge that plugged onto pins on one of the computer's "SLT boards". This was the reverse of how most other company's cards were mounted, where the cards had pins or printed contact areas and plugged into sockets on the computer's boards. Up to twenty SLT boards could be assembled side-by-side (vertically and horizontally) to form a "logic gate". 
Several gates mounted together constituted a box-shaped "logic frame". The outer gates were generally hinged along one vertical edge so they could be swung open to provide access to the fixed inner gates. The larger machines could have more than one frame bolted together to produce the final unit, such as a multi-frame Central Processing Unit (CPU). The smaller System/360 models used the Basic Operating System/360 (BOS/360), Tape Operating System (TOS/360), or Disk Operating System/360 (DOS/360, which evolved into DOS/VS, DOS/VSE, VSE/AF, VSE/SP, VSE/ESA, and then z/VSE). The larger models used Operating System/360 (OS/360). IBM developed several versions of OS/360, with increasingly powerful features: Primary Control Program (PCP), Multiprogramming with a Fixed number of Tasks (MFT), and Multiprogramming with a Variable number of Tasks (MVT). MVT took a long time to develop into a usable system, and the less ambitious MFT was widely used. PCP was used on intermediate machines; the final releases of OS/360 included only MFT and MVT. For the System/370 and later machines, MFT evolved into OS/VS1, while MVT evolved into OS/VS2 (SVS) (Single Virtual Storage), then various versions of MVS (Multiple Virtual Storage) culminating in the current z/OS. When it announced the Model 67 in August 1965, IBM also announced TSS/360 (Time-Sharing System) for delivery at the same time as the 67. TSS/360, a response to Multics, was an ambitious project that included many advanced features. It had performance problems, was delayed, canceled, reinstated, and finally canceled again in 1971. Customers migrated to CP-67, MTS (Michigan Terminal System), TSO (Time Sharing Option for OS/360), or one of several other time-sharing systems. CP-67, the original virtual machine system, was also known as CP/CMS. CP/67 was developed outside the IBM mainstream at IBM's Cambridge Scientific Center, in cooperation with MIT researchers. 
CP/CMS eventually won wide acceptance, and led to the development of VM/370 (Virtual Machine) which had a primary interactive "sub" operating system known as VM/CMS (Conversational Monitor System). This evolved into today's z/VM. The Model 20 offered a simplified and rarely used tape-based system called TPS (Tape Processing System), and DPS (Disk Processing System) that provided support for the 2311 disk drive. TPS could run on a machine with 8 KB of memory; DPS required 12 KB, which was pretty hefty for a Model 20. Many customers ran quite happily with 4 KB and CPS (Card Processing System). With TPS and DPS, the card reader was used to read the Job Control Language cards that defined the stack of jobs to run and to read in transaction data such as customer payments. The operating system was held on tape or disk, and results could also be stored on the tapes or hard drives. Stacked job processing became an exciting possibility for the small but adventurous computer user. A little-known and little-used suite of 80-column punched-card utility programs known as Basic Programming Support (BPS) (jocularly: Barely Programming Support), a precursor of TOS, was available for smaller systems. IBM created a new naming system for the new components created for System/360, although well-known old names, like IBM 1403 and IBM 1052, were retained. In this new naming system, components were given four-digit numbers starting with 2; the second digit described the type of component. IBM developed a new family of peripheral equipment for System/360, carrying over a few from its older 1400 series. Interfaces were standardized, allowing greater flexibility to mix and match processors, controllers and peripherals than in the earlier product lines. In addition, System/360 computers could use certain peripherals that were originally developed for earlier computers. These earlier peripherals used a different numbering system, such as the IBM 1403 chain printer.
The 1403, an extremely reliable device that had already earned a reputation as a workhorse, was sold as the 1403-N1 when adapted for the System/360. Also available were the optical character recognition (OCR) readers IBM 1287 and IBM 1288, which could read alphanumeric (A/N) and numeric hand-printed (NHP/NHW) characters from documents ranging from cash-register rolls to full legal-size pages. At the time this was done with very large optical/logic readers, as software was too slow and expensive. Most small systems were sold with an IBM 1052-7 as the console typewriter. This was tightly integrated into the CPU — the keyboard would physically lock under program control. Certain high-end machines could optionally be purchased with a 2250 graphical display, costing upwards of US$100,000. The 360/85 used a 5450 display console that was not compatible with anything else in the line; the later 3066 console for the 370/165 and 370/168 used the same basic display design as the 360/85. The first disk drives for System/360 were the IBM 2302 and IBM 2311. The first drum for System/360 was the IBM 7320. The 156 KB/second 2302 was based on the earlier 1302 and was available as a model 3 with two 112.79 MB modules or as a model 4 with four such modules. The 2311, with a removable 1316 disk pack, was based on the IBM 1311 and had a theoretical capacity of 7.2 MB, although actual capacity varied with record design. (When used with a 360/20, the 1316 pack was formatted into fixed-length 270-byte sectors, giving a maximum capacity of 5.4 MB.) In 1966, the first 2314s shipped. This device had up to eight usable disk drives with an integral control unit; there were nine drives, but one was reserved as a spare. Each drive used a removable 2316 disk pack with a capacity of nearly 28 MB. The disk packs for the 2311 and 2314 were "physically" large by today's standards — e.g., the 1316 disk pack was about 14 inches (36 cm) in diameter and had six platters stacked on a central spindle. 
The top and bottom outside platters did not store data. Data was recorded on the inner sides of the top and bottom platters and on both sides of the inner platters, providing 10 recording surfaces. The 10 read/write heads moved together across the surfaces of the platters, which were formatted with 203 concentric tracks. To reduce the amount of head movement (seeking), data was written in a virtual cylinder, from the inner surface of the top platter down to the inner surface of the bottom platter. These disks were not usually formatted with fixed-size sectors as are today's hard drives (though this "was" done with CP/CMS). Rather, most System/360 I/O software could customize the length of the data record (variable-length records), as was the case with magnetic tapes. Some of the most powerful early System/360s used high-speed head-per-track drum storage devices. The 3,500 RPM 2301, which replaced the 7320, was part of the original System/360 announcement, with a capacity of 4 MB. The 303.8 KB/second IBM 2303 was announced on January 31, 1966, with a capacity of 3.913 MB. These were the only drums announced for System/360 and System/370, and their niche was later filled by fixed-head disks. The 6,000 RPM 2305 appeared in 1970, with capacities of 5 MB (2305-1) or 11 MB (2305-2) per module. Although these devices did not have large capacity, their speed and transfer rates made them attractive for high-performance needs. A typical use was overlay linkage (e.g. for OS and application subroutines), allowing program sections to alternate in the same memory regions. Fixed-head disks and drums were particularly effective as paging devices on the early virtual memory systems. The 2305, although often called a "drum", was actually a head-per-track disk device, with 12 recording surfaces and a data transfer rate of up to 3 MB per second. 
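The cylinder arrangement and drum speeds described above lend themselves to a quick sketch. The following is purely illustrative (the helper names are mine, not IBM terminology); the figures are the ones quoted in this passage: 10 recording surfaces and 203 tracks per 1316 pack, 3,500 RPM for the 2301 drum and 6,000 RPM for the 2305.

```python
SURFACES = 10     # recording surfaces per 1316 disk pack
CYLINDERS = 203   # concentric track positions per surface

def locate(logical_track):
    """Map a sequentially written track to (cylinder, head).

    Filling all 10 surfaces at one arm position (a "virtual
    cylinder") before seeking to the next position is what
    minimizes head movement.
    """
    cylinder, head = divmod(logical_track, SURFACES)
    if cylinder >= CYLINDERS:
        raise ValueError("beyond pack capacity")
    return cylinder, head

def avg_rotational_latency_ms(rpm):
    """Average latency is half a revolution; head-per-track devices
    have no seek time, so this dominates their access time."""
    return (60_000 / rpm) / 2

print(locate(25))                       # (2, 5): third cylinder, sixth surface
print(avg_rotational_latency_ms(3500))  # 2301 drum: ~8.6 ms
print(avg_rotational_latency_ms(6000))  # 2305: 5.0 ms
```

The latency figures show why these devices suited paging: with no seek component, the half-revolution wait is the whole mechanical cost of an access.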
Rarely seen was the IBM 2321 Data Cell, a mechanically complex device that contained multiple magnetic strips to hold data; strips could be randomly accessed, placed upon a cylinder-shaped drum for read/write operations, then returned to an internal storage cartridge. The Data Cell (jocularly, the "noodle picker") was among several IBM-trademarked "speedy" mass online direct-access storage peripherals (a niche reincarnated in recent years as "virtual tape" and automated tape-library peripherals). The 2321 file had a capacity of 400 MB at a time when the 2311 disk drive held only 7.2 MB. The Data Cell was proposed to fill the cost/capacity/speed gap between magnetic tapes—which had high capacity with relatively low cost per stored byte—and disks, which had higher expense per byte. Some installations also found its electromechanical operation less dependable and opted for less mechanical forms of direct-access storage. The Model 44 was unique in offering an integrated single-disk drive as a standard feature. This drive used the 2315 "ramkit" cartridge and provided 1,171,200 bytes of storage. The 2400 tape drives consisted of a combined drive and control unit, plus individual 1/2" tape drives attached. With System/360, IBM switched from its 7-track to a 9-track tape format. 2400 drives could be purchased that read and wrote 7-track tapes for compatibility with the older IBM 729 tape drives. In 1967, a slower and cheaper pair of tape drives with an integrated control unit was introduced: the 2415. In 1968, the IBM 2420 tape system was released, offering much higher data rates, self-threading tape operation and 1,600 bpi packing density. It remained in the product line until 1979. Despite having been sold or leased in very large numbers for a mainframe system of its era, only a few System/360 computers remain, mainly as non-operating exhibits of museums or collectors. 
A running list of remaining System/360s can be found at the World Inventory of remaining System/360 CPUs. Operator consoles across the various models featured register value lamps, toggle switches, and an "emergency pull" switch. In the US television series "Mad Men" (2007–2015), an "IBM 360" was featured as a plot device: a company leases the system to the advertising agency, and the machine serves as a prominent backdrop in the seventh season. A crowdfunding campaign to rescue and restore an IBM 360 system from Nuremberg was successfully funded.
https://en.wikipedia.org/wiki?curid=29294
Spouse A spouse is a significant other in a marriage, civil union, or common-law marriage. The term is gender-neutral: a male spouse is a husband and a female spouse is a wife. Although a spouse is a form of significant other, the latter term also includes non-marital partners who play a social role similar to that of a spouse, but do not have the rights and duties reserved by law to a spouse. The legal status of a spouse, and the specific rights and obligations associated with that status, vary significantly among the jurisdictions of the world. These regulations are usually described in family law statutes. However, in many parts of the world where civil marriage is not prevalent, there is instead customary marriage, which is usually regulated informally by the community. In many parts of the world, spousal rights and obligations are related to the payment of bride price, dowry or dower. Historically, many societies have given sets of rights and obligations to male marital partners that have been very different from those given to female marital partners. In particular, the control of marital property, inheritance rights, and the right to dictate the activities of children of the marriage have typically been given to male marital partners. However, this practice was greatly curtailed in many countries in the twentieth century, and more modern statutes tend to define the rights and duties of a spouse without reference to gender. Among the last European countries to establish full gender equality in marriage were Switzerland, Greece, Spain, and France in the 1980s. In various marriage laws around the world, however, the husband continues to have authority; for instance, the Civil Code of Iran states at Article 1105: "In relations between husband and wife; the position of the head of the family is the exclusive right of the husband". 
Depending on the jurisdiction, the refusal or inability of a spouse to perform the marital obligations may constitute a ground for divorce, legal separation or annulment. The latter two options are more prevalent in countries where the dominant religion is Roman Catholicism, some of which introduced divorce only recently (i.e. Italy in 1970, Portugal in 1975, Brazil in 1977, Spain in 1981, Argentina in 1987, Paraguay in 1991, Colombia in 1991, Ireland in 1996, Chile in 2004 and Malta in 2011). In recent years, many Western countries have adopted no-fault divorce. In some parts of the world, the formal dissolution of a marriage is complicated by the payments and goods which have been exchanged between families (this is common where marriages are arranged). This often makes it difficult to leave a marriage, especially for the woman: in some parts of Africa, once the bride price has been paid, the wife is seen as belonging to the husband and his family; and if she wants to leave, the husband may demand back the bride price that he had paid to the girl's family. The girl's family often cannot or does not want to pay it back. Regardless of legislation, personal relations between spouses may also be influenced by local culture and religion. There is often a minimum legal marriageable age. The United Nations Population Fund stated the following: Although in Western countries spouses sometimes choose not to have children, such a choice is not accepted in some parts of the world. In some cultures and religions, being a spouse imposes an obligation to have children. In northern Ghana, for example, the payment of bride price signifies a woman's requirement to bear children, and women using birth control are at risk of threats and coercion. There are many ways in which a spouse is chosen, which vary across the world, and include love marriage, arranged marriage, and forced marriage. The latter is in some jurisdictions a void marriage or a voidable marriage. 
Forcing someone to marry is also a criminal offense in some countries.
https://en.wikipedia.org/wiki?curid=29298
Semiotics Semiotics (also called semiotic studies) is the study of sign process (semiosis), which is any form of activity, conduct, or any process that involves signs, including the production of meaning. A sign is anything that communicates a meaning, that is not the sign itself, to the interpreter of the sign. The meaning can be intentional, such as a word uttered with a specific meaning, or unintentional, such as a symptom being a sign of a particular medical condition. Signs can communicate through any of the senses: visual, auditory, tactile, olfactory, or gustatory. The semiotic tradition explores the study of signs and symbols as a significant part of communications. Unlike linguistics, semiotics also studies non-linguistic sign systems. Semiotics includes the study of signs and sign processes, indication, designation, likeness, analogy, allegory, metonymy, metaphor, symbolism, signification, and communication. Semiotics is frequently seen as having important anthropological and sociological dimensions; for example, the Italian semiotician and novelist Umberto Eco proposed that every cultural phenomenon may be studied as communication. Some semioticians focus on the logical dimensions of the science, however. They examine areas belonging also to the life sciences—such as how organisms make predictions about, and adapt to, their semiotic niche in the world (see semiosis). In general, semiotic theories take "signs" or sign systems as their object of study: the communication of information in living organisms is covered in biosemiotics (including zoosemiotics and phytosemiotics). Semiotics is not to be confused with the Saussurean tradition called semiology, which is a subset of semiotics. The importance of signs and signification has been recognized throughout much of the history of philosophy, and in psychology as well. The term derives from the Greek σημειωτικός ("sēmeiōtikos"), "observant of signs" (from σημεῖον "sēmeion", "a sign, a mark"). 
For the Greeks, "signs" occurred in the world of nature, and "symbols" in the world of culture. As such, Plato and Aristotle explored the relationship between signs and the world. It would not be until Augustine of Hippo that the nature of the sign would be considered within a conventional system. Augustine introduced a thematic proposal for uniting the two under the notion of "sign" ("signum") as transcending the nature-culture divide, identifying symbols as no more than a species (or sub-species) of "signum". A monograph study of this question was done by Manetti (1987). These theories have had a lasting effect in Western philosophy, especially through scholastic philosophy. The general study of signs that began in Latin with Augustine culminated with the 1632 "Tractatus de Signis" of John Poinsot, and then began anew in late modernity with the attempt in 1867 by Charles Sanders Peirce to draw up a "new list of categories." More recently, Umberto Eco, in his "Semiotics and the Philosophy of Language", has argued that semiotic theories are implicit in the work of most, perhaps all, major thinkers. John Locke (1690), himself a man of medicine, was familiar with this "semeiotics" as naming a specialized branch within medical science. In his personal library were two editions of Scapula's 1579 abridgement of Henricus Stephanus' "Thesaurus Graecae Linguae", which listed "σημειωτική" as the name for "diagnostics," the branch of medicine concerned with interpreting symptoms of disease ("symptomatology"). 
Indeed, the physician and scholar Henry Stubbe (1670) had transliterated this term of specialized science into English precisely as "semeiotics", marking the first use of the term in English: "…nor is there any thing to be relied upon in Physick, but an exact knowledge of medicinal phisiology (founded on observation, not principles), semeiotics, method of curing, and tried (not excogitated, not commanding) medicines.…" Locke would use the term "sem(e)iotike" in "An Essay Concerning Human Understanding" (book IV, chap. 21), in which he explains how science may be divided into three parts. Locke elaborates on the nature of the third category, naming it "Σημειωτική" ("Semeiotike") and explaining it as "the doctrine of signs". Yuri Lotman would introduce Eastern Europe to semiotics and adopt Locke's coinage ("Σημειωτική") as the name to subtitle his founding at the University of Tartu in Estonia in 1964 of the first semiotics journal, "Sign Systems Studies". Ferdinand de Saussure, meanwhile, founded his semiotics, which he called semiology, in the social sciences. Thomas Sebeok would assimilate "semiology" to "semiotics" as a part to a whole, and was involved in choosing the name "Semiotica" for the first international journal devoted to the study of signs. Saussurean semiotics have exercised a great deal of influence on the schools of Structuralism and Post-Structuralism. Jacques Derrida, for example, takes as his object the Saussurean relationship of signifier and signified, asserting that signifier and signified are not fixed, coining the expression "différance", relating to the endless deferral of meaning, and to the absence of a 'transcendent signified'. For Derrida, "il n'y a pas de hors-texte" ("there is no outside-text"). 
In the nineteenth century, Charles Sanders Peirce defined what he termed "semiotic" (which he would sometimes spell as "semeiotic") as the "quasi-necessary, or formal doctrine of signs," which abstracts "what must be the characters of all signs used by…an intelligence capable of learning by experience," and which is philosophical logic pursued in terms of signs and sign processes. For Peirce, such signs are not always linguistic or artificial, and his semiotic covers modes of inference and the inquiry process in general. The Peircean semiotic thus addresses not only the external communication mechanism, as per Saussure, but the internal representation machine, investigating sign processes and modes of inference as well as the whole process of inquiry. Peircean semiotic is triadic, including sign, object, and interpretant, as opposed to the dyadic Saussurean tradition (signifier, signified). It further subdivides each of the three triadic elements into three sub-types, positing the existence of signs that are symbols; semblances ("icons"); and "indices," i.e., signs that are such through a factual connection to their objects. Peircean scholar and editor Max H. Fisch (1978) would claim that "semeiotic" was Peirce's own preferred rendering of Locke's σημιωτική. Charles W. Morris followed Peirce in using the term "semiotic" and in extending the discipline beyond human communication to animal learning and use of signals. 
Peirce would aim to base his new list directly upon experience precisely as constituted by action of signs, in contrast with the list of Aristotle's categories which aimed to articulate within experience the dimension of being that is independent of experience and knowable as such, through human understanding. The estimative powers of animals interpret the environment as sensed to form a "meaningful world" of objects, but the objects of this world (or "Umwelt", in Jakob von Uexküll's term) consist exclusively of objects related to the animal as desirable (+), undesirable (–), or "safe to ignore" (0). In contrast to this, human understanding adds to the animal "Umwelt" a relation of self-identity within objects which transforms objects experienced into "things" as well as +, –, 0 objects. Thus, the generically animal objective world as "Umwelt", becomes a species-specifically human objective world or "Lebenswelt" (life-world), wherein linguistic communication, rooted in the biologically underdetermined "Innenwelt" (inner-world) of humans, makes possible the further dimension of cultural organization within the otherwise merely social organization of non-human animals whose powers of observation may deal only with directly sensible instances of objectivity. This further point, that human culture depends upon language understood first of all not as communication, but as the biologically underdetermined aspect or feature of the human animal's "Innenwelt", was originally clearly identified by Thomas A. Sebeok. Sebeok also played the central role in bringing Peirce's work to the center of the semiotic stage in the twentieth century, first with his expansion of the human use of signs (""anthroposemiosis"") to include also the generically animal sign-usage (""zoösemiosis""), then with his further expansion of semiosis to include the vegetative world (""phytosemiosis""). 
Such an extension initially drew on the work of Martin Krampen, but takes advantage of Peirce's point that an interpretant, as the third item within a sign relation, "need not be mental." Peirce distinguished between the interpretant and the interpreter. The interpretant is the internal, mental representation that mediates between the object and its sign; the interpreter is the human who creates the interpretant. Peirce's "interpretant" notion opened the way to understanding an action of signs beyond the realm of animal life (study of "phytosemiosis" + "zoösemiosis" + "anthroposemiosis" = "biosemiotics"), which was his first advance beyond Latin Age semiotics. Other early theorists in the field of semiotics include Charles W. Morris. Max Black argued that the work of Bertrand Russell was seminal in the field. Semioticians classify signs or sign systems in relation to the way they are transmitted (see modality). This process of carrying meaning depends on the use of codes that may be the individual sounds or letters that humans use to form words, the body movements they make to show attitude or emotion, or even something as general as the clothes they wear. To coin a word to refer to a "thing" (see lexical words), the community must agree on a simple meaning (a denotative meaning) within their language, but that word can transmit that meaning only within the language's grammatical structures and codes (see syntax and semantics). Codes also represent the values of the culture, and are able to add new shades of connotation to every aspect of life. To explain the relationship between semiotics and communication studies, communication is defined as the process of transferring data and/or meaning from a source to a receiver. Hence, communication theorists construct models based on codes, media, and contexts to explain the biology, psychology, and mechanics involved. 
Both disciplines recognize that the technical process cannot be separated from the fact that the receiver must decode the data, i.e., be able to distinguish the data as salient, and make meaning out of it. This implies that there is a necessary overlap between semiotics and communication. Indeed, many of the concepts are shared, although in each field the emphasis is different. In "Messages and Meanings: An Introduction to Semiotics", Marcel Danesi (1994) suggested that semioticians' priorities were to study signification first, and communication second. A more extreme view is offered by Jean-Jacques Nattiez (1987; trans. 1990: 16), who, as a musicologist, considered the theoretical study of communication irrelevant to his application of semiotics. Semiotics differs from linguistics in that it generalizes the definition of a sign to encompass signs in any medium or sensory modality. Thus it broadens the range of sign systems and sign relations, and extends the definition of language in what amounts to its widest analogical or metaphorical sense. The branch of semiotics that deals with such formal relations between signs or expressions in abstraction from their signification and their interpreters, or—more generally—with formal properties of symbol systems (specifically, with reference to linguistic signs, syntax) is referred to as syntactics. Peirce's definition of the term "semiotic" as the study of necessary features of signs also has the effect of distinguishing the discipline from linguistics as the study of contingent features that the world's languages happen to have acquired in the course of their evolutions. From a subjective standpoint, perhaps more difficult is the distinction between semiotics and the philosophy of language. In a sense, the difference lies between separate traditions rather than subjects. Different authors have called themselves "philosopher of language" or "semiotician". 
This difference does "not" match the separation between analytic and continental philosophy. On a closer look, there may be found some differences regarding subjects. Philosophy of language pays more attention to natural languages or to languages in general, while semiotics is deeply concerned with non-linguistic signification. Philosophy of language also bears connections to linguistics, while semiotics might appear closer to some of the humanities (including literary theory) and to cultural anthropology. Semiosis or "semeiosis" is the process that forms meaning from any organism's apprehension of the world through signs. Scholars who have talked about semiosis in their subtheories of semiotics include C. S. Peirce, John Deely, and Umberto Eco. Cognitive semiotics combines methods and theories developed in the cognitive sciences with those developed in semiotics and the humanities, with the aim of providing new insights into human signification and its manifestation in cultural practices. The research on cognitive semiotics brings together semiotics from linguistics, cognitive science, and related disciplines on a common meta-theoretical platform of concepts, methods, and shared data. Cognitive semiotics may also be seen as the study of meaning-making by employing and integrating methods and theories developed in the cognitive sciences. This involves conceptual and textual analysis as well as experimental investigations. Cognitive semiotics initially was developed at the Center for Semiotics at Aarhus University (Denmark), with an important connection with the Center of Functionally Integrated Neuroscience (CFIN) at Aarhus Hospital. Amongst the prominent cognitive semioticians are Per Aage Brandt, Svend Østergaard, Peer Bundgård, Frederik Stjernfelt, Mikkel Wallentin, Kristian Tylén, Riccardo Fusaroli, and Jordan Zlatev. Zlatev later, in co-operation with Göran Sonesson, established the Center for Cognitive Semiotics (CCS) at Lund University, Sweden. 
Finite semiotics, developed by Cameron Shackell (2018, 2019), aims to unify existing theories of semiotics for application to the post-Baudrillardian world of ubiquitous technology. Its central move is to place the finiteness of thought at the root of semiotics, with the sign as a secondary but fundamental analytical construct. The theory contends that the levels of reproduction that technology is bringing to human environments demand this reprioritisation if semiotics is to remain relevant in the face of effectively infinite signs. The shift in emphasis allows practical definitions of many core constructs in semiotics, which Shackell has applied to areas such as human-computer interaction, creativity theory, and a computational semiotics method for generating semiotic squares from digital texts. Pictorial semiotics is intimately connected to art history and theory. It goes beyond them both in at least one fundamental way, however. While art history has limited its visual analysis to a small number of pictures that qualify as "works of art", pictorial semiotics focuses on the properties of pictures in a general sense, and on how the artistic conventions of images can be interpreted through pictorial codes. Pictorial codes are the way in which viewers of pictorial representations seem automatically to decipher the artistic conventions of images by being unconsciously familiar with them. According to Göran Sonesson, a Swedish semiotician, pictures can be analyzed by three models: (a) the narrative model, which concentrates on the relationship between pictures and time in a chronological manner, as in a comic strip; (b) the rhetoric model, which compares pictures with different devices, as in a metaphor; and (c) the Laokoon model, which considers the limits and constraints of pictorial expressions by comparing textual mediums that utilize time with visual mediums that utilize space. 
The break from traditional art history and theory—as well as from other major streams of semiotic analysis—leaves open a wide variety of possibilities for pictorial semiotics. Some influences have been drawn from phenomenological analysis, cognitive psychology, structuralist and cognitivist linguistics, and visual anthropology and sociology. Studies have shown that semiotics may be used to make or break a brand. Culture codes strongly influence whether a population likes or dislikes a brand's marketing, especially internationally. If a company is unaware of a culture's codes, it runs the risk of failing in its marketing. Globalization has caused the development of a global consumer culture where products have similar associations, whether positive or negative, across numerous markets. Mistranslations may lead to instances of "Engrish" or "Chinglish", terms for unintentionally humorous cross-cultural slogans intended to be understood in English. This may be caused by a sign that, in Peirce's terms, mistakenly indexes or symbolizes something in one culture that it does not in another. In other words, it creates a connotation that is culturally bound, and that violates some culture code. Theorists who have studied humor (such as Schopenhauer) suggest that contradiction or incongruity creates absurdity and therefore humor. Violating a culture code creates this construct of ridiculousness for the culture that owns the code. Intentional humor also may fail cross-culturally because jokes are not on code for the receiving culture. A good example of branding according to cultural code is Disney's international theme park business. Disney fits well with Japan's cultural code because the Japanese value "cuteness", politeness, and gift-giving as part of their culture code; Tokyo Disneyland sells the most souvenirs of any Disney theme park. 
In contrast, Disneyland Paris failed when it launched as Euro Disney because the company did not research the codes underlying European culture. Its storybook retelling of European folktales was taken as elitist and insulting, and the strict appearance standards that it had for employees resulted in discrimination lawsuits in France. Disney souvenirs were perceived as cheap trinkets. The park was a financial failure because its code violated the expectations of European culture in ways that were offensive. On the other hand, some researchers have suggested that it is possible to successfully pass a sign perceived as a cultural icon, such as the Coca-Cola or McDonald's logos, from one culture to another. This may be accomplished if the sign is migrated from a more economically developed to a less developed culture. The intentional association of a product with another culture has been called Foreign Consumer Culture Positioning (FCCP). Products also may be marketed using global trends or culture codes, for example, saving time in a busy world; but even these may be fine-tuned for specific cultures. Research also found that, as airline industry brandings grow and become more international, their logos become more symbolic and less iconic. The iconicity and symbolism of a sign depend on cultural convention and are, on that ground, related to each other: the greater the influence of cultural convention on a sign, the more symbolic value it acquires. The flexibility of human semiotics is well demonstrated in dreams. Sigmund Freud spelled out how meaning in dreams rests on a blend of images, affects, sounds, words, and kinesthetic sensations. In his chapter on "The Means of Representation," he showed how the most abstract sorts of meaning and logical relations can be represented by spatial relations. Two images in sequence may indicate "if this, then that" or "despite this, that". 
Freud thought the dream started with "dream thoughts", which were like logical, verbal sentences. He believed that the dream thought was in the nature of a taboo wish that would awaken the dreamer. In order to safeguard sleep, the mind/brain converts and disguises the verbal dream thought into an imagistic form, through processes he called the "dream-work". Subfields that have sprouted from semiotics include, but are not limited to, those discussed above, such as biosemiotics (with zoosemiotics and phytosemiotics), cognitive semiotics, finite semiotics, and pictorial semiotics. Charles Sanders Peirce (1839–1914), a noted logician who founded philosophical pragmatism, defined "semiosis" as an irreducibly triadic process wherein something, as an object, logically determines or influences something as a sign to determine or influence something as an interpretation or "interpretant", itself a sign, thus leading to further interpretants. Semiosis is logically structured to perpetuate itself. The object may be a quality, fact, rule, or even something fictional (Hamlet), and may be "immediate" to the sign, the object as represented in the sign, or "dynamic", the object as it really is, on which the immediate object is founded. The interpretant may be "immediate" to the sign, all that the sign immediately expresses, such as a word's usual meaning; or "dynamic", such as a state of agitation; or "final" or "normal", the ultimate ramifications of the sign about its object, to which inquiry taken far enough would be destined and with which any interpretant, at most, may coincide. His "semiotic" covered not only artificial, linguistic, and symbolic signs, but also semblances such as kindred sensible qualities, and indices such as reactions. He came c. 1903 to classify any sign by three interdependent trichotomies, intersecting to form ten (rather than 27) classes of sign. Signs also enter into various kinds of meaningful combinations; Peirce covered both semantic and syntactical issues in his speculative grammar. 
He regarded formal semiotic as logic "per se" and part of philosophy; as also encompassing the study of arguments (hypothetical, deductive, and inductive) and inquiry's methods including pragmatism; and as allied to, but distinct from, logic's pure mathematics. In addition to pragmatism, Peirce provided a definition of "sign" as a "representamen", in order to bring out the fact that a sign is something that "represents" something else in order to suggest it (that is, "re-present" it) in some way: "A sign, or representamen, is something which stands to somebody for something in some respect or capacity. It addresses somebody, that is, creates in the mind of that person an equivalent sign. That sign which it creates I call the interpretant of the first sign. The sign stands for something, its object, not in all respects, but in reference to a sort of idea." Ferdinand de Saussure (1857–1913), the "father" of modern linguistics, proposed a dualistic notion of signs, relating the "signifier" as the form of the word or phrase uttered, to the "signified" as the mental concept. According to Saussure, the sign is completely arbitrary—i.e., there is no necessary connection between the sign and its meaning. This sets him apart from previous philosophers, such as Plato or the scholastics, who thought that there must be some connection between a signifier and the object it signifies. In his "Course in General Linguistics", Saussure credits the American linguist William Dwight Whitney (1827–1894) with insisting on the arbitrary nature of the sign. Saussure's insistence on the arbitrariness of the sign has also influenced later philosophers and theorists such as Jacques Derrida, Roland Barthes, and Jean Baudrillard. Saussure coined the term "sémiologie" while teaching his landmark "Course in General Linguistics" at the University of Geneva from 1906 to 1911. He posited that no word is inherently meaningful. Rather, a word is only a "signifier",
i.e., the representation of something, and it must be combined in the brain with the "signified", or the thing itself, in order to form a meaning-imbued "sign". Saussure believed that dismantling signs was a real science, for in doing so we come to an empirical understanding of how humans synthesize physical stimuli into words and other abstract concepts. Jakob von Uexküll (1864–1944) studied the sign processes in animals. He used the German word "umwelt" ("environment") to describe the individual's subjective world, and he invented the concept of the functional circle ("funktionskreis") as a general model of sign processes. In his "Theory of Meaning" ("Bedeutungslehre", 1940), he described the semiotic approach to biology, thus establishing the field that is now called biosemiotics. Valentin Voloshinov (1895–1936) was a Soviet-Russian linguist whose work has been influential in the field of literary theory and the Marxist theory of ideology. Written in the late 1920s in the USSR, Voloshinov's "Marxism and the Philosophy of Language" developed a counter-Saussurean linguistics, which situated language use in social process rather than in an entirely decontextualized Saussurean "langue". Louis Hjelmslev (1899–1965) developed a formalist approach to Saussure's structuralist theories. His best known work is "Prolegomena to a Theory of Language", which was expanded in "Résumé of the Theory of Language", a formal development of "glossematics", his scientific calculus of language. Charles W. Morris (1901–1979): unlike his mentor George Herbert Mead, Morris was a behaviorist and sympathetic to the Vienna Circle positivism of his colleague, Rudolf Carnap. Morris was accused by John Dewey of misreading Peirce. In his 1938 "Foundations of the Theory of Signs", he defined semiotics as grouped into three branches: syntactics, semantics, and pragmatics. Thure von Uexküll (1908–2004), the "father" of modern psychosomatic medicine, developed a diagnostic method based on semiotic and biosemiotic analyses.
Roland Barthes (1915–1980) was a French literary theorist and semiotician. He would often critique pieces of cultural material to expose how bourgeois society used them to impose its values upon others. For instance, the portrayal of wine drinking in French society as a robust and healthy habit would be a bourgeois ideal perception contradicted by certain realities (i.e., that wine can be unhealthy and inebriating). He found semiotics useful in conducting these critiques. Barthes explained that these bourgeois cultural myths were second-order signs, or connotations. A picture of a full, dark bottle is a sign, a signifier relating to a signified: a fermented, alcoholic beverage—wine. However, the bourgeois take this signified and apply their own emphasis to it, making "wine" a new signifier, this time relating to a new signified: the idea of healthy, robust, relaxing wine. Motivations for such manipulations vary, from a desire to sell products to a simple desire to maintain the status quo. These insights brought Barthes very much in line with Marxist theory. Algirdas Julien Greimas (1917–1992) developed a structural version of semiotics named "generative semiotics", trying to shift the focus of the discipline from signs to systems of signification. His theories develop the ideas of Saussure, Hjelmslev, Claude Lévi-Strauss, and Maurice Merleau-Ponty. Thomas A. Sebeok (1920–2001), a student of Charles W. Morris, was a prolific and wide-ranging American semiotician. Although he insisted that animals are not capable of language, he expanded the purview of semiotics to include non-human signaling and communication systems, thus raising some of the issues addressed by philosophy of mind and coining the term zoosemiotics. Sebeok insisted that all communication was made possible by the relationship between an organism and the environment in which it lives.
He also posed the equation between "semiosis" (the activity of interpreting signs) and "life"—a view that the Copenhagen-Tartu biosemiotic school has further developed. Yuri Lotman (1922–1993) was the founding member of the Tartu (or Tartu-Moscow) Semiotic School. He developed a semiotic approach to the study of culture—semiotics of culture—and established a communication model for the study of text semiotics. He also introduced the concept of the semiosphere. Among his Moscow colleagues were Vladimir Toporov, Vyacheslav Ivanov and Boris Uspensky. Christian Metz (1931–1993) pioneered the application of Saussurean semiotics to film theory, applying syntagmatic analysis to scenes of films and grounding film semiotics in greater context. Eliseo Verón (1935–2014) developed his "Social Discourse Theory", inspired by the Peircean conception of "semiosis". Groupe µ (founded 1967) developed a structural version of rhetoric as well as visual semiotics. Umberto Eco (1932–2016) was an Italian novelist, semiotician and academic. He made a wider audience aware of semiotics through various publications, most notably "A Theory of Semiotics" and his novel "The Name of the Rose", which includes (secondary to its plot) applied semiotic operations. His most important contributions to the field bear on interpretation, the encyclopedia, and the model reader. In several works ("A Theory of Semiotics", "La struttura assente", "Le signe", "La production de signes") he also criticized "iconism" and "iconic signs" (taken from Peirce's most famous trichotomy of indexes, icons, and symbols), in place of which he proposed four modes of sign production: recognition, ostension, replica, and invention. Paul Bouissac (born 1934) is a world-renowned expert in circus studies, known for developing a range of semiotic interpretations of circus performances, including the multimodal dimensions of clowns and clowning, jugglers, and trapeze acts.
He is the author of several books relating to the semiotics of the circus. Bouissac is the series editor of the Advances in Semiotics series for Bloomsbury Academic. He runs the SemiotiX Bulletin, which has a global readership, is a founding editor of the "Public Journal of Semiotics", and was a central founding figure in the Toronto Semiotic Circle. He is Professor Emeritus of Victoria College, University of Toronto. The personal, professional, and intellectual life of Bouissac is recounted in the book "The Pleasures of Time: Two Men, A Life" by his life-long partner, the sociologist Stephen Harold Riggins. Julia Kristeva (born 1941), a student of Lucien Goldmann and Roland Barthes, is a Bulgarian-French semiotician, literary critic, psychoanalyst, feminist, and novelist. She uses psychoanalytical concepts together with semiotics, distinguishing two components of signification: the symbolic and the semiotic. Kristeva also studies the representation of women and women's bodies in popular culture, such as horror films, and has had a remarkable influence on feminism and feminist literary studies. Applications of semiotics are wide-ranging. In some countries, however, the role of semiotics is limited to literary criticism and an appreciation of audio and visual media. This narrow focus may inhibit a more general study of the social and political forces shaping how different media are used and their dynamic status within modern culture. Issues of technological determinism in the choice of media and the design of communication strategies assume new importance in this age of mass media. A world organisation of semioticians, the International Association for Semiotic Studies, and its journal "Semiotica" were established in 1969. The larger research centers with teaching programs include the semiotics departments at the University of Tartu, the University of Limoges, Aarhus University, and Bologna University.
Publication of research occurs both in dedicated journals such as "Sign Systems Studies", established by Yuri Lotman and published by Tartu University Press; "Semiotica", founded by Thomas A. Sebeok and published by Mouton de Gruyter; "Zeitschrift für Semiotik"; "European Journal of Semiotics"; "Versus" (founded and directed by Umberto Eco); and "The American Journal of Semiotics"; and in articles accepted by periodicals of other disciplines, especially journals oriented toward philosophy and cultural criticism. The major semiotic book series "Semiotics, Communication, Cognition", published by De Gruyter Mouton (series editors Paul Cobley and Kalevi Kull), replaces the former "Approaches to Semiotics" (more than 120 volumes) and "Approaches to Applied Semiotics" (series editor Thomas A. Sebeok). Since 1980 the Semiotic Society of America has produced an annual conference series.
https://en.wikipedia.org/wiki?curid=29301
Sojourner Truth Sojourner Truth (born Isabella "Belle" Baumfree; c. 1797 – November 26, 1883) was an American abolitionist and women's rights activist. Truth was born into slavery in Swartekill, New York, but escaped with her infant daughter to freedom in 1826. After going to court to recover her son in 1828, she became the first black woman to win such a case against a white man. She gave herself the name Sojourner Truth in 1843 after she became convinced that God had called her to leave the city and go into the countryside "testifying the hope that was in her". Her best-known speech was delivered extemporaneously, in 1851, at the Ohio Women's Rights Convention in Akron, Ohio. The speech became widely known during the Civil War by the title "Ain't I a Woman?", a variation of the original speech re-written by someone else using a stereotypical Southern dialect, whereas Sojourner Truth was from New York and grew up speaking Dutch as her first language. During the Civil War, Truth helped recruit black troops for the Union Army; after the war, she tried unsuccessfully to secure land grants from the federal government for formerly enslaved people (summarised as the promise of "forty acres and a mule"). In 2014, Truth was included in "Smithsonian" magazine's list of the "100 Most Significant Americans of All Time". A memorial bust of Truth was unveiled in 2009 in Emancipation Hall in the U.S. Capitol Visitor Center. She is the first African American to have a statue in the Capitol building. Truth was one of the 10 or 12 children born to James and Elizabeth Baumfree (or Bomefree). Colonel Hardenbergh bought James and Elizabeth Baumfree from slave traders and kept their family at his estate in a big hilly area called by the Dutch name Swartekill (just north of present-day Rifton), in the town of Esopus, New York, north of New York City. Charles Hardenbergh inherited his father's estate and continued to enslave people as a part of that estate's property.
When Charles Hardenbergh died in 1806, nine-year-old Truth (known as Belle), was sold at an auction with a flock of sheep for $100 to John Neely, near Kingston, New York. Until that time, Truth spoke only Dutch. She later described Neely as cruel and harsh, relating how he beat her daily and once even with a bundle of rods. In 1808 Neely sold her for $105 to tavern keeper Martinus Schryver of Port Ewen, New York, who owned her for 18 months. Schryver then sold Truth in 1810 to John Dumont of West Park, New York. Although this fourth owner was more kindly disposed toward her, considerable tension existed between Truth and Dumont's wife, Elizabeth Waring Dumont, who harassed her and made her life more difficult. Around 1815, Truth met and fell in love with an enslaved man named Robert from a neighboring farm. Robert's owner (Charles Catton, Jr., a landscape painter) forbade their relationship; he did not want the people he enslaved to have children with people he was not enslaving, because he would not own the children. One day Robert sneaked over to see Truth. When Catton and his son found him, they savagely beat Robert until Dumont finally intervened. Truth never saw Robert again after that day and he died a few years later. The experience haunted Truth throughout her life. Truth eventually married an older enslaved man named Thomas. She bore five children: James, her firstborn, who died in childhood, Diana (1815), the result of a rape by John Dumont, and Peter (1821), Elizabeth (1825), and Sophia (ca. 1826), all born after she and Thomas united. In 1799, the State of New York began to legislate the abolition of slavery, although the process of emancipating those people enslaved in New York was not complete until July 4, 1827. Dumont had promised to grant Truth her freedom a year before the state emancipation, "if she would do well and be faithful". However, he changed his mind, claiming a hand injury had made her less productive. 
She was infuriated but continued working, spinning wool to satisfy her sense of obligation to him. Late in 1826, Truth escaped to freedom with her infant daughter, Sophia. She had to leave her other children behind because they were not legally freed in the emancipation order until they had served as bound servants into their twenties. She later said, "I did not run off, for I thought that wicked, but I walked off, believing that to be all right." She found her way to the home of Isaac and Maria Van Wagenen in New Paltz, who took her and her baby in. Isaac offered to buy her services for the remainder of the year (until the state's emancipation took effect), which Dumont accepted for $20. She lived there until the New York State Emancipation Act was approved a year later. Truth learned that her son Peter, then five years old, had been sold illegally by Dumont to an owner in Alabama. With the help of the Van Wagenens, she took the issue to court and in 1828, after months of legal proceedings, she got back her son, who had been abused by those who were enslaving him. Truth became one of the first black women to go to court against a white man and win the case. Truth had a life-changing religious experience during her stay with the Van Wagenens and became a devout Christian. In 1829 she moved with her son Peter to New York City, where she worked as a housekeeper for Elijah Pierson, a Christian Evangelist. While in New York, she befriended Mary Simpson, a grocer on John Street who claimed she had once been enslaved by George Washington. They shared an interest in charity for the poor and became intimate friends. In 1832, she met Robert Matthews, also known as Prophet Matthias, and went to work for him as a housekeeper at the Matthias Kingdom communal colony. Elijah Pierson died, and Robert Matthews and Truth were accused of stealing from and poisoning him.
Both were acquitted of the murder, though Matthews was convicted of lesser crimes, served time, and moved west. In 1839, Truth's son Peter took a job on a whaling ship called the "Zone of Nantucket". From 1840 to 1841, she received three letters from him, though in his third letter he told her he had sent five. Peter said he also never received any of her letters. When the ship returned to port in 1842, Peter was not on board and Truth never heard from him again. The year 1843 was a turning point for Baumfree. She became a Methodist, and on June 1, Pentecost Sunday, she changed her name to Sojourner Truth. She chose the name because she heard the Spirit of God calling on her to preach the truth. She told her friends: "The Spirit calls me, and I must go", and left to make her way traveling and preaching about the abolition of slavery. Taking along only a few possessions in a pillowcase, she traveled north, working her way up through the Connecticut River Valley towards Massachusetts. At that time, Truth began attending Millerite Adventist camp meetings. Millerites followed the teachings of William Miller of New York, who preached that Jesus would appear in 1843–1844, bringing about the end of the world. Many in the Millerite community greatly appreciated Truth's preaching and singing, and she drew large crowds when she spoke. Like many others who were disappointed, Truth distanced herself from her Millerite friends for a while after the anticipated second coming did not arrive. In 1844, she joined the Northampton Association of Education and Industry in Florence, Massachusetts. Founded by abolitionists, the organization supported women's rights and religious tolerance as well as pacifism. There were, in its four-and-a-half-year history, a total of 240 members, though no more than 120 at any one time. They lived on the land, raising livestock and running a sawmill, a gristmill, and a silk factory.
Truth lived and worked in the community and oversaw the laundry, supervising both men and women. While there, Truth met William Lloyd Garrison, Frederick Douglass, and David Ruggles. Encouraged by the community, Truth delivered her first anti-slavery speech that year. In 1846, the group disbanded, unable to support itself. In 1845, she joined the household of George Benson, the brother-in-law of William Lloyd Garrison. In 1849, she visited John Dumont before he moved west. Truth started dictating her memoirs to her friend Olive Gilbert, and in 1850 William Lloyd Garrison privately published her book, "The Narrative of Sojourner Truth: a Northern Slave". That same year, she purchased a home in Florence for $300 and spoke at the first National Women's Rights Convention in Worcester, Massachusetts. In 1854, with proceeds from sales of the narrative and "cartes-de-visite" captioned "I sell the shadow to support the substance", she paid off the mortgage held by her friend from the community, Samuel L. Hill. In 1851, Truth joined George Thompson, an abolitionist and speaker, on a lecture tour through central and western New York State. In May, she attended the Ohio Women's Rights Convention in Akron, Ohio, where she delivered her famous extemporaneous speech on women's rights, later known as "Ain't I a Woman?". Her speech demanded equal human rights for all women as well as for all blacks. Advocating for women or for African Americans was dangerous and challenging enough; doing so as a black woman was far more difficult. The pressure did not get to Truth, however. She took to the stage with a commanding and composed presence. Audience members were baffled by the way she carried herself and were hesitant to believe that she was even a woman, prompting the name of her speech, "Ain't I a Woman?" The convention was organized by Hannah Tracy and Frances Dana Barker Gage, who were both present when Truth spoke.
Different versions of Truth's words have been recorded, the first published a month later in the "Anti-Slavery Bugle" by Rev. Marius Robinson, the newspaper's owner and editor, who was in the audience. Robinson's recounting of the speech included no instance of the question "Ain't I a Woman?", and neither did any of the other newspaper reports of her speech at the time. Twelve years later, in May 1863, Gage published another, very different, version. In it, Truth's speech pattern had characteristics of Southern slaves, and the speech was vastly different from the one Robinson had reported. Gage's version of the speech became the historic standard version, and is known as "Ain't I a Woman?" because that question was repeated four times. It is highly unlikely that Truth's own speech pattern was Southern in nature, as she was born and raised in New York, and she spoke only upper New York State low-Dutch until she was nine years old. In contrast to Robinson's report, Gage's 1863 version included Truth saying her 13 children were sold away from her into slavery. Truth is widely believed to have had five children, with one sold away, and was never known to claim more children. Gage's 1863 recollection of the convention conflicts with her own report directly after the convention: Gage wrote in 1851 that Akron in general and the press in particular were largely friendly to the woman's rights convention, but in 1863 she wrote that the convention leaders were fearful of the "mobbish" opponents. Other eyewitness reports of Truth's speech told a calm story, one where all faces were "beaming with joyous gladness" at the session where Truth spoke and not "one discordant note" interrupted the harmony of the proceedings. In contemporary reports, Truth was warmly received by the convention-goers, the majority of whom were long-standing abolitionists, friendly to progressive ideas of race and civil rights.
In Gage's 1863 version, Truth was met with hisses, with voices calling to prevent her from speaking. According to Gage's 1863 account, Truth argued, "That man over there says that women need to be helped into carriages, and lifted over ditches, and to have the best place everywhere. Nobody helps "me" any best place. "And ain't I a woman?"" Truth's "Ain't I a Woman" laid bare the lack of recognition that black women received during this time, a lack of recognition that would continue to be seen long after her time. "Black women, of course, were virtually invisible within the protracted campaign for woman suffrage", wrote Angela Davis, supporting Truth's argument that nobody gives her "any best place"; and not just her, but black women in general. Over the next 10 years, Truth spoke before dozens, perhaps hundreds, of audiences. From 1851 to 1853, Truth worked with Marius Robinson, the editor of the Ohio "Anti-Slavery Bugle", and traveled around that state speaking. In 1853, she spoke at a suffragist "mob convention" at the Broadway Tabernacle in New York City; that year she also met Harriet Beecher Stowe. In 1856, she traveled to Battle Creek, Michigan, to speak to a group called the "Friends of Human Progress". In 1858, someone interrupted a speech and accused her of being a man; Truth opened her blouse and revealed her breasts. Northampton Camp Meeting – 1844, Northampton, Massachusetts: At a camp meeting where she was participating as an itinerant preacher, a band of "wild young men" disrupted the camp meeting, refused to leave, and threatened to burn down the tents. Truth caught the sense of fear pervading the worshipers and hid behind a trunk in her tent, thinking that since she was the only black person present, the mob would attack her first.
However, she reasoned with herself and resolved to do something: as the noise of the mob increased and a female preacher was "trembling on the preachers' stand", Truth went to a small hill and began to sing "in her most fervid manner, with all the strength of her most powerful voice, the hymn on the resurrection of Christ". Her song, "It was Early in the Morning", gathered the rioters to her and quieted them. They urged her to sing, preach, and pray for their entertainment. After singing songs and preaching for about an hour, Truth bargained with them to leave after one final song. The mob agreed and left the camp meeting. Abolitionist Convention – 1840s, Boston, Massachusetts: William Lloyd Garrison invited Sojourner Truth to give a speech at an annual antislavery convention. Wendell Phillips was supposed to speak after her, which made her nervous since he was known as such a good orator. So Truth sang a song, "I am Pleading for My People", her own original composition sung to the tune of "Auld Lang Syne". Mob Convention – September 7, 1853: At the convention, young men greeted her with "a perfect storm", hissing and groaning. In response, Truth said, "You may hiss as much as you please, but women will get their rights anyway. You can't stop us, neither". Sojourner, like other public speakers, often adapted her speeches to how the audience was responding to her. In her speech, Sojourner speaks out for women's rights. She incorporates religious references in her speech, particularly the story of Esther. She then goes on to say that, just as women in scripture, women today are fighting for their rights. Moreover, Sojourner scolds the crowd for all their hissing and rude behavior, reminding them that God says to "Honor thy father and thy mother". American Equal Rights Association – May 9–10, 1867: Her speech was addressed to the American Equal Rights Association, and divided into three sessions.
Sojourner was received with loud cheers instead of hisses, now that she had a better-established reputation. "The Call" had advertised her name as one of the main convention speakers. For the first part of her speech, she spoke mainly about the rights of black women. Sojourner argued that because the push for equal rights had led to black men winning new rights, now was the best time to give black women the rights they deserved too. Throughout her speech she kept stressing that "we should keep things going while things are stirring", and she feared that once the fight for colored rights settled down, it would take a long time to warm people back up to the idea of colored women having equal rights. In the second session of her speech, Sojourner utilized a story from the Bible to help strengthen her argument for equal rights for women. She ended her argument by accusing men of being self-centered, saying: "Man is so selfish that he has got women's rights and his own too, and yet he won't give women their rights. He keeps them all to himself." For the final session of her speech, the center of Sojourner's attention was mainly on women's right to vote. Sojourner told her audience that she owned her own house, as did other women, and must, therefore, pay taxes. Nevertheless, they were still unable to vote because they were women. Black women who were enslaved were made to do hard manual work, such as building roads. Sojourner argued that if these women were able to perform such tasks, then they should be allowed to vote, because surely voting is easier than building roads. Eighth Anniversary of Negro Freedom – New Year's Day, 1871: On this occasion the Boston papers related that "...seldom is there an occasion of more attraction or greater general interest. Every available space of sitting and standing room was crowded". She starts off her speech by giving a little background about her own life.
Sojourner recounts how her mother told her to pray to God that she might have good masters and mistresses. She goes on to retell how her masters were not good to her, how she was whipped for not understanding English, and how she would question God why he had not made her masters be good to her. Sojourner admits to the audience that she had once hated white people, but she says once she met her final master, Jesus, she was filled with love for everyone. Once enslaved people were emancipated, she tells the crowd, she knew her prayers had been answered. The last part of Sojourner's speech brings in her main focus. Some freed enslaved people were living on government aid at that time, paid for by taxpayers. Sojourner announces that this arrangement is not any better for those colored people than it is for the members of her audience. She then proposes that black people be given their own land. Because a portion of the South's population contained rebels who were unhappy with the abolition of slavery, that region of the United States was not well suited for colored people. She goes on to suggest that colored people be given land out west to build homes and prosper on. Second Annual Convention of the American Woman Suffrage Association – Boston, 1871: In a brief speech, Truth argued that women's rights were essential, not only to their own well-being, but "for the benefit of the whole creation, not only the women, but all the men on the face of the earth, for they were the mother of them". In 1856, Truth bought a neighboring lot in Northampton, but she did not keep the new property for long. On September 3, 1857, she sold all her possessions, new and old, to Daniel Ives and moved to Battle Creek, Michigan, where she rejoined former members of the Millerite movement who had formed the Seventh-day Adventist Church. Antislavery movements had begun early in Michigan and Ohio.
Here, she also joined the nucleus of the Michigan abolitionists, the Progressive Friends, some of whom she had already met at national conventions. From 1857 to 1867 Truth lived in the village of Harmonia, Michigan, a Spiritualist utopia. She then moved into nearby Battle Creek, Michigan, living at her home at 38 College St. until her death in 1883. According to the 1860 census, her household in Harmonia included her daughter, Elizabeth Banks (age 35), and her grandsons James Caldwell (misspelled as "Colvin"; age 16) and Sammy Banks (age 8). During the Civil War, Truth helped recruit black troops for the Union Army. Her grandson, James Caldwell, enlisted in the 54th Massachusetts Regiment. In 1864, Truth was employed by the National Freedman's Relief Association in Washington, D.C., where she worked diligently to improve conditions for African-Americans. In October of that year, she met President Abraham Lincoln. In 1865, while working at the Freedman's Hospital in Washington, Truth rode in the streetcars to help force their desegregation. Truth is credited with writing a song for the 1st Michigan Colored Regiment; it was said to be composed during the war and sung by her in Detroit and Washington, D.C. It is sung to the tune of "John Brown's Body" or "The Battle Hymn of the Republic". Although Truth claimed to have written the words, this has been disputed (see "Marching Song of the First Arkansas"). In 1867, Truth moved from Harmonia to Battle Creek. In 1868, she traveled to western New York and visited with Amy Post, and continued traveling all over the East Coast. At a speaking engagement in Florence, Massachusetts, after she had just returned from a very tiring trip, when Truth was called upon to speak, she stood up and said, "Children, I have come here like the rest of you, to hear what I have to say." In 1870, Truth tried to secure land grants from the federal government for formerly enslaved people, a project she pursued for seven years without success.
While in Washington, D.C., she had a meeting with President Ulysses S. Grant in the White House. In 1872, she returned to Battle Creek, became active in Grant's presidential re-election campaign, and even tried to vote on Election Day, but was turned away at the polling place. Truth spoke about abolition, women's rights, and prison reform, and preached to the Michigan Legislature against capital punishment. Not everyone welcomed her preaching and lectures, but she had many friends and staunch support among many influential people of the time, including Amy Post, Parker Pillsbury, Frances Gage, Wendell Phillips, William Lloyd Garrison, Laura Smith Haviland, Lucretia Mott, Ellen G. White, and Susan B. Anthony. Several days before Sojourner Truth died, a reporter came from the "Grand Rapids Eagle" to interview her. "Her face was drawn and emaciated and she was apparently suffering great pain. Her eyes were very bright and mind alert although it was difficult for her to talk." Truth died at her Battle Creek home on November 26, 1883. On November 28 her funeral was held at the Congregational-Presbyterian Church, officiated by its pastor, the Reverend Reed Stuart. Some of the prominent citizens of Battle Creek acted as pall-bearers. Truth was buried in the city's Oak Hill Cemetery. The calendar of saints of the Episcopal Church remembers Sojourner Truth annually, together with Elizabeth Cady Stanton, Amelia Bloomer and Harriet Ross Tubman, on July 20, or individually on November 26. The calendar of saints of the Lutheran Church remembers Sojourner Truth together with Harriet Tubman on March 10. A larger-than-life sculpture of Sojourner Truth by Tina Allen was dedicated in 1999, the estimated bicentennial of Sojourner's birth, in Battle Creek's Monument Park. The 12-foot-tall Sojourner monument is cast in bronze.
There is also a statue of Sojourner Truth in Florence, Massachusetts, by sculptor Thomas Jay Warren, as well as one, also in bronze, on the campus of the University of California, San Diego, by sculptor Manuelita Brown.

The U.S. Treasury Department announced in 2016 that an image of Sojourner Truth would appear on the back of a newly designed $10 bill along with Lucretia Mott, Susan B. Anthony, Elizabeth Cady Stanton, Alice Paul and the 1913 Woman Suffrage Procession. Designs for new $5, $10 and $20 bills were to be unveiled in 2020 in conjunction with the 100th anniversary of American women winning the right to vote via the Nineteenth Amendment to the United States Constitution.

On September 19, 2018, U.S. Secretary of the Navy Ray Mabus announced the name of the last ship of a six-unit construction contract as USNS "Sojourner Truth" (T-AO 210). The ship will be part of the latest "John Lewis"-class of fleet replenishment oilers, named in honor of U.S. civil and human rights heroes, under construction at General Dynamics NASSCO in San Diego, California.

Truth has received numerous other honors and commemorations over the years. As of February 2020, elementary schools and K-12 schools in several states are named after Truth. Sojourner–Douglass College in Baltimore, which closed in 2019, was also named after her.
https://en.wikipedia.org/wiki?curid=29305
STOVL A short take-off and vertical landing aircraft (STOVL aircraft) is a fixed-wing aircraft that is able to take off from a short runway (or take off vertically if it does not have a heavy payload) and land vertically (i.e. with no runway). The formal NATO definition (since 1991) is: a fixed-wing aircraft capable of clearing a 15 m (50 ft) obstacle within 450 m (1,500 ft) of commencing take-off run, and capable of landing vertically. On aircraft carriers, non-catapult-assisted, fixed-wing short takeoffs are accomplished with the use of thrust vectoring, which may also be used in conjunction with a runway "ski-jump". Use of STOVL tends to allow an aircraft to carry a larger payload than vertical take-off and landing (VTOL) would, while still requiring only a short runway.

The most famous examples are the Hawker Siddeley Harrier and the Sea Harrier. Although technically VTOL aircraft, they are operationally STOVL aircraft because of the extra weight carried at take-off for fuel and armaments. The same is true of the F-35B Lightning II, which demonstrated VTOL capability in test flights but is operationally a STOVL aircraft.

In 1951, the Lockheed XFV and the Convair XFY Pogo tailsitters were both designed around the Allison YT40 turboprop engine driving contra-rotating propellers. The British Hawker P.1127 took off vertically in 1960 and demonstrated conventional take-off in 1961. It was developed into the Hawker Siddeley Harrier, which flew in 1967.

In 1962, Lockheed built the XV-4 Hummingbird for the U.S. Army. It sought to "augment" available thrust by injecting the engine exhaust into an ejector pump in the fuselage. First flying vertically in 1963, it suffered a fatal crash in 1964. It was converted into the XV-4B Hummingbird for the U.S. Air Force as a testbed for separate, vertically mounted lift engines, similar to those used in the Yak-38 Forger. That plane flew and later crashed in 1969. The Ryan XV-5 Vertifan, which was built for the U.S. Army at the same time as the Hummingbird, experimented with gas-driven lift fans.
That plane used fans in the nose and each wing, covered by doors which resembled half garbage-can lids when raised. However, it crashed twice, generated a disappointing amount of lift, and proved difficult to transition to horizontal flight.

Of dozens of VTOL and V/STOL designs tried from the 1950s to the 1980s, only the subsonic Hawker Siddeley Harrier and Yak-38 Forger reached operational status, with the Forger being withdrawn after the fall of the Soviet Union. Rockwell International built, and then abandoned, the Rockwell XFV-12 supersonic fighter, which had an unusual wing that opened up like window blinds to create an ejector pump for vertical flight. It never generated enough lift to get off the ground despite developing 20,000 lbf of thrust. The French had a nominally Mach 2 Dassault Mirage IIIV fitted with no fewer than eight lift engines that flew (and crashed), but it did not have enough space for the fuel or payload needed for combat missions. The German EWR VJ 101 used swiveling engines mounted on the wingtips along with fuselage-mounted lift engines, and the VJ 101C X1 reached supersonic flight (Mach 1.08) on 29 July 1964. The supersonic Hawker Siddeley P.1154, which competed with the Mirage IIIV for use in NATO, was cancelled even as the aircraft were being built.

NASA uses the abbreviation SSTOVL for Supersonic Short Take-Off / Vertical Landing, and as of 2012, the X-35B/F-35B are the only aircraft to have achieved this combination within one flight. The experimental Mach 1.7 Yakovlev Yak-141 did not find an operational customer, but similar rotating rear-nozzle technology is used on the F-35B. The F-35B Lightning II entered service on July 31, 2015.

Larger STOVL designs were also considered: the Armstrong Whitworth AW.681 cargo aircraft was under development when it was cancelled in 1965, and the Dornier Do 31 got as far as three experimental aircraft before cancellation in 1970. Although primarily a VTOL design, the V-22 Osprey has increased payload when taking off from a short runway.
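The payload advantage of a short take-off over a pure vertical take-off, noted above, comes down to simple force arithmetic: a vertical lift-off is limited by engine thrust alone, while a rolling take-off lets the wing contribute aerodynamic lift. The sketch below illustrates this with rough, Harrier-like placeholder numbers; the thrust, wing-lift, and margin values are assumptions for illustration, not published figures for any aircraft.

```python
# Illustrative sketch of why STOVL permits more payload than pure VTOL.
# All numbers are rough placeholders, not authoritative aircraft data.

def max_vertical_takeoff_weight(thrust_lbf: float, margin: float = 1.05) -> float:
    """For a purely vertical take-off, engine thrust alone must exceed
    weight by some control margin, so max weight = thrust / margin."""
    return thrust_lbf / margin

def max_short_takeoff_weight(thrust_lbf: float, wing_lift_lbf: float,
                             margin: float = 1.05) -> float:
    """A short take-off run lets the wing generate aerodynamic lift,
    which adds to vectored engine thrust at the moment of lift-off."""
    return (thrust_lbf + wing_lift_lbf) / margin

THRUST = 23_500      # lbf, roughly a Pegasus-class engine (assumed)
WING_LIFT = 9_000    # lbf of wing lift from a short deck run (assumed)

vto = max_vertical_takeoff_weight(THRUST)
sto = max_short_takeoff_weight(THRUST, WING_LIFT)
print(f"Max vertical take-off weight: {vto:,.0f} lbf")
print(f"Max short take-off weight:    {sto:,.0f} lbf")
print(f"Extra fuel/armament capacity: {sto - vto:,.0f} lbf")
```

Under these assumed numbers, the short take-off run buys several thousand pounds of extra fuel and armament, which is why Harriers and F-35Bs routinely operate as STOVL aircraft even though they can take off vertically when lightly loaded.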
https://en.wikipedia.org/wiki?curid=29306