[SOURCE: https://en.wikipedia.org/wiki/Beersheba]
Beersheba

Coordinates: 31°15′08″N 34°47′12″E

Beersheba (/bɪərˈʃiːbə/ beer-SHEE-bə), officially Be'er-Sheva (/bɛərˈʃɛvə/ bair-SHEV-ə), is the largest city in the Negev desert of southern Israel. Often referred to as the "Capital of the Negev", it is the centre of the fourth-most populous metropolitan area in Israel, the eighth-most populous Israeli city with a population of 218,995, and the second-largest Israeli city by area (after Jerusalem), with a total area of 117,500 dunams (117.5 square kilometres).

Human habitation near present-day Beersheba dates back to the fourth millennium BC. In the Bible, Beersheba marks the southern boundary of ancient Israel, as in the phrase "from Dan to Beersheba". Initially assigned to the Tribe of Judah, Beersheba was later reassigned to Simeon. During the monarchic era it functioned as a royal city, but it was eventually destroyed by the Assyrians. The Biblical site of Beersheba is Tel Be'er Sheva, lying some 4 kilometres (2.5 miles) from the modern city, which was established at the start of the 20th century by the Ottomans. The city was captured by the British-led Australian Light Horse troops in the Battle of Beersheba during World War I.

The population of the town changed completely in 1948–49 during the First Arab–Israeli War. Beersheba had been almost entirely Muslim, and the 1947 UN Partition Plan designated it to be part of the Arab state. It was occupied by the Egyptian army from May 1948 until October 1948, when it was captured by the Israel Defense Forces and part of the Arab population was expelled. Today, the metropolitan area is composed of approximately equal Jewish and Arab populations, with a large portion of the Jewish population made up of the descendants of Sephardi and Mizrahi Jews who fled, relocated or were expelled from Arab countries after Israel's founding in 1948, as well as smaller communities of Bene Israel and Cochin Jews from India. Second and third waves of immigration have taken place since 1990, bringing Russian-speaking immigrants from the former Soviet Union as well as Beta Israel immigrants from Ethiopia. The Soviet immigrants have made chess a major sport in Beersheba, and it is now Israel's national chess center, with more chess grandmasters per capita than any other city in the world. Beersheba is home to Ben-Gurion University of the Negev, and the city also serves as a center for Israel's developing high-tech industry.

Etymology

The Book of Genesis gives two etymologies for the name Be'er Sheba. Genesis 21:28–31 relates: Then Abraham set seven ewes apart. And Abimelech said to Abraham, "What mean these seven ewes, which you have set apart?" And [Abraham] said, "That you are to take these seven (sheba) ewes from me, to be for me a witness that I have dug this well (bǝ'er)." Therefore the name of that place was Be'er Sheba, for there the two of them had sworn (nishbǝ'u).

Genesis 26 relates: And Isaac redug the wells which had been dug in the days of Abraham his father, and which the Philistines had sealed after the death of Abraham, and he used the same names as had his father . . . And they arose in the morning, and they swore (wa-yishabǝ'u) each to his fellow, and Isaac sent them off, and they departed him in peace. On that same day, Isaac's men came to him to tell him of the well which they had dug, and they said to him, "We found water."
And he called it Shib'a ("seven" normally, possibly "oath" or a proper noun); therefore the name of the city is Be'er Sheba to this day.

The original Hebrew name could therefore relate to the oath of Abraham and Abimelech ('well of the oath') or to the seven ewes in that oath ('well of the seven'), as related in Genesis 21:31, and/or to the oath of Isaac and Abimelech in Genesis 26:33. Alternatively, Obadiah Sforno suggested that the well is called Seven because it was the seventh dug: the narrative of Genesis 26 includes three wells dug by Abraham which are reopened by Isaac (Esek, Sitnah, Rehoboth), for a total of six, after which Isaac goes to Beersheba, the seventh well. The double name of Shib'a and Beersheba is referenced again by the Masoretic Text in Joshua 19:2, usually translated "Beersheba or Sheba"; however, the Septuagint reads "Beersheba and Samaa (Σαμαὰ)", which fits with MT 1 Chron. 4:28. Abraham ibn Ezra and Samuel b. Meir suggest the two etymologies refer to two different cities. During the Ottoman administration, the city was referred to as بلدية بئرالسبع, "Belediyet Bi'r alsab'" (Palestinian Arabic for "Well of the Seven"), "Baladiyyat Bi'russab" in Modern Standard Arabic.

Hebrew Bible

Beersheba is dealt with in the Hebrew Bible mainly in connection with the Patriarchs Abraham and Isaac, each of whom dug a well and concluded a peace treaty with King Abimelech of Gerar at the site. Hence it receives its name twice: first after Abraham's dealings with Abimelech (Genesis 21:22–34), and again from Isaac, who concluded his own covenant with Abimelech of Gerar and whose servants also dug a well there (Genesis 26:23–33). The place is thus connected to two of the three wife–sister narratives in the Book of Genesis. According to the Hebrew Bible, Beersheba was founded when Abraham and Abimelech settled their differences over a well of water and made a covenant (see Genesis 21:22–34). Abimelech's men had taken the well from Abraham after he had dug it, so Abraham brought sheep and cattle to Abimelech to get the well back. He set aside seven lambs to swear that it was he who had dug the well and no one else. Abimelech conceded that the well belonged to Abraham and, in the Bible, Beersheba means "Well of Seven" or "Well of the Oath".

Beersheba is further mentioned in the following Bible passages:

- Isaac built an altar in Beersheba (Genesis 26:23–33).
- Jacob had his dream about a stairway to heaven after leaving Beersheba (Genesis 28:10–15 and 46:1–7).
- Beersheba was in the territory of the tribes of Simeon and Judah (Joshua 15:28 and 19:2).
- The sons of the prophet Samuel were judges in Beersheba (I Samuel 8:2).
- Saul, Israel's first king, built a fort there for his campaign against the Amalekites (I Samuel 14:48 and 15:2–9).
- The prophet Elijah took refuge in Beersheba when Jezebel ordered him killed (I Kings 19:3).
- The prophet Amos mentions the city in regard to idolatry (Amos 5:5 and 8:14).

Following the Babylonian conquest and the subsequent enslavement of many Israelites, the town was abandoned. After the Israelite exiles returned from Babylon, they resettled the town. According to the Hebrew Bible, Beersheba was the southernmost city of the territories settled by Israelites, hence the expression "from Dan to Beersheba" to describe the whole kingdom. Zibiah, the consort of King Ahaziah of Judah and the mother of King Jehoash of Judah, was from Beersheba.

History

The city has been destroyed and rebuilt many times.
Considered unimportant for centuries, Be'er Sheva regained importance under Byzantine rule (4th–7th centuries), when it was a key point on the Limes Palaestinae, a defense line built against the desert tribes; however, it fell to the Arabs in the 7th century and to the Turks in the 16th century. It long remained a watering place and small trade centre for the nomadic Bedouin tribes of the Negev, despite Turkish efforts at town planning and development around 1900. Its capture in 1917 by the British Army opened the way for their conquest of Palestine and Syria. After being taken by Israeli troops in October 1948, Beersheba was rapidly settled by new immigrants and has since developed as the administrative, cultural, and industrial centre of the Negev. It is one of the largest cities in Israel outside of metropolitan Tel Aviv, Jerusalem, and Haifa.

Human settlement in the area dates from the Copper Age. The inhabitants lived in caves, crafting metal tools and raising cattle. Findings unearthed at Tel Be'er Sheva, an archaeological site east of modern-day Beersheba, suggest the region has been inhabited since the 4th millennium BC (between 5,000 and 6,000 years ago). Tel Be'er Sheva, which contains the ruins of an ancient town believed to have been the Biblical Beersheba, lies a few kilometers east of the modern city. The town dates to the early Israelite period, around the 10th century BC. The site was possibly chosen for its abundance of water, as evidenced by the numerous wells in the area. According to the Hebrew Bible, the wells were dug by Abraham and Isaac when they arrived there. The streets were laid out in a grid, with separate areas for administrative, commercial, military, and residential use. It is believed to have been the first planned settlement in the region, and is also noteworthy for its elaborate water system, in particular a huge cistern carved out of the rock beneath the town.

During Persian rule (539–c. 332 BC), Beersheba lay at the south of Yehud Medinata, an autonomous province of the Persian Achaemenid Empire. During that era the city was rebuilt and a citadel was constructed. Archaeological finds dating to between 359 and 338 BC include pottery and an ostracon. Under Hasmonean rule the city was not attributed great importance, as it is mentioned neither in the account of its conquest from Edom nor in descriptions of the Hasmonean wars. Around 64–63 BC, the Roman general Gnaeus Pompeius Magnus made Beersheba, known as Birosaba, part of the southern Judea province. During the Herodian period there was a small settlement in Beersheba. Remains of a Jewish village dating back to the 1st century AD were discovered in the Rakafot neighborhood in the north of the city. In the following years, the town served as a front-line defence against Nabatean attacks and lay on the limes belt, which in this region is attributed to the time of Vespasian (1st century AD). The city became the centre of an eparchy around 268. During the Roman and Byzantine periods the city developed significantly, and the burial grounds on the outskirts of the city became residential areas. The inhabitants, who included Nabataeans, Jews and other ethnicities, spoke primarily Greek and lived from olive oil production, viticulture, agriculture and other trades.
After the reforms of Diocletian, the town became part of the province of Palaestina Tertia and grew to an approximate size of 60 hectares at its peak in the 6th century. Beersheba was described in the Madaba Map and by Eusebius of Caesarea as a large village with a Roman garrison. The camp was later identified in aerial photographs taken during the First World War, and other structures associated with the camp, such as a bath house and dwellings, were found in later excavations. During the Byzantine period, at least six churches were built there, one of which is the largest church to have been excavated in the Negev. Some of the churches were still in use into the Umayyad period, but it remains uncertain whether they continued beyond the early eighth century. Monasticism is also attested in historical documents, and one structure has been identified as a monastery. Barsanuphius of Gaza corresponded with a certain monk of Beersheba, John, who might be identified with John the Prophet, who between 525 and 527 moved to the monastery of Seridus and together with Barsanuphius wrote over 850 letters on spiritual direction. During the early Muslim period some of the Byzantine buildings continued to be used, but the city slowly declined, a process manifested in the demolition of public buildings and their transformation into a source of raw material for secondary construction. In the second half of the 8th century, the city was apparently abandoned. In 1483, during the late Mamluk era, the pilgrim Felix Fabri noted Beersheba as a city. Fabri also noted that Beersheba marked the southernmost border of "the Holy Land".

The present-day city was built at the outset of the 20th century by the Ottoman administration to serve as an administrative center for the benefit of the Bedouin, and was given the name Bir al-Sabi (well of the seven). Until World War I, it was an overwhelmingly Muslim township with some 1,000 residents. Ben-David and Kressel have argued that the traditional Bedouin market was the cornerstone for the founding of Beersheba as capital of the Negev during this period. The anthropologist and educationalist Aref Abu-Rabia, who worked for the Israeli Ministry of Education and Culture, described it as "the first Bedouin city". In June 1899, the Ottoman government ordered the creation of the Beersheba sub-district (kaza) of the district (mutasarrıflık) of Jerusalem, with Beersheba to be developed as its capital. Implementation was entrusted to a special bureau of the Ministry of the Interior. The British incorporation of Sinai into Egypt led to a need for the Ottomans to consolidate their hold on southern Palestine. There was also a desire to encourage the Bedouin to become sedentary, with a predicted increase in tranquility and tax revenue. The first governor (kaymakam), Isma'il Kamal Bey, lived in a tent lent by the local sheikh until the government house (Saraya) was built. Kamal was replaced by Muhammed Carullah Efendi in 1901, who in turn was replaced by Hamdi Bey in 1903. In 1908 the governor was promoted to 'adjoint' (mutassarrıf muavin) to the governor of the Jerusalem district, which placed him above the other sub-district governors. A visitor to Beersheba in May 1900 found only a ruin, a two-storey stone khan, and several tents. By the start of 1901 there was a barracks with a small garrison as well as other buildings.
The Austro-Hungarian Czech orientalist Alois Musil recorded his observations of the growing town in August 1902. By 1907 there was a large village, a military post, a residence for the kaymakam and a large mosque. The population increased from 300 to 800 between 1902 and 1911, and by 1914 there were 1,000 people living in 200 houses. A grid plan for the town was developed by a Swiss architect, a German architect and two others. The grid pattern can be seen today in Beersheba's Old City. Most of the residents at the time were Arabs from Hebron and the Gaza area, although Jews also began settling in the city. Many Bedouin abandoned their nomadic lives and built homes in Beersheba. During World War I, the Ottomans built a military railroad from the Hejaz line to Beersheba, inaugurating the station on October 30, 1915. The celebration was attended by the Ottoman army commander Jamal Pasha and other senior government officials. The train line was captured by Allied forces in 1917, towards the end of the war, and today forms part of the Israeli railway network.

Beersheba played an important role in the Sinai and Palestine Campaign of World War I. The Battle of Beersheba was part of a wider British offensive aimed at breaking the Turkish defensive line from Gaza to Beersheba. The Ottoman army engaged in three battles with the British forces near Gaza between March 26 and November 7, 1917. Having failed in the First and Second Battles of Gaza, the British succeeded in the Third Battle of Gaza. On October 31, 1917, three months after taking Rafah, General Allenby's troops breached the line of Turkish defense between Gaza and Beersheba. Approximately five hundred soldiers of the Australian 4th Light Horse Regiment and the 12th Light Horse Regiment of the 4th Light Horse Brigade, led by Brigadier General William Grant, armed only with horses and bayonets, charged the Turkish trenches, overran them and captured the wells in what has become known as the Battle of Beersheba, called the "last successful cavalry charge in British military history." On the edge of Beersheba's Old City is a Commonwealth War Graves Commission cemetery containing the graves of Australian, New Zealand and British soldiers. The town also contains a memorial park dedicated to them.

During the Palestine Mandate, Beersheba was a major administrative center. The British constructed a railway between Rafah and Beersheba in October 1917, which opened to the public in May 1918, serving the Negev and settlements south of Mount Hebron. In 1929, amid rising tension between Jews and Arabs over control of Palestine and wide-scale rioting which left 133 Jews dead and 339 wounded, many Jews abandoned Beersheba, although some returned occasionally. After an Arab attack on a Jewish bus in 1936, which escalated into the 1936–39 Arab revolt in Palestine, the remaining Jews left. At the time of the 1922 census of Palestine, Beersheba had a population of 2,356 (2,012 Muslims, 235 Christians, 98 Jews and 11 Druze). At the time of the 1931 census, Beersheba had 545 occupied houses and a population of 2,959 (2,791 Muslims, 152 Christians, 11 Jews and five Baháʼí). The 1938 village survey did not cover Beersheba, owing to the area's largely nomadic population and the fact that the Rural Property Tax Ordinance was not applied there. The 1945 village survey conducted by the Palestine Mandate government found a population of 5,570 (5,360 Muslims, 200 Christians and 10 others).
In 1947, the United Nations Special Committee on Palestine (UNSCOP) proposed that Beersheba be included within the Jewish state in its partition plan for Palestine. However, when the UN's Ad Hoc Committee revised the plan, it moved Beersheba to the Arab state on account of the town being primarily Arab. Egyptian forces had been stationed at Beersheba since May 1948. After the Arab states invaded Palestine and declared war on the newly founded Jewish state of Israel, Yigal Allon proposed the conquest of Beersheba, which was approved by Prime Minister David Ben-Gurion. According to the Israeli historian Benny Morris, Allon ordered the "conquest of Beersheba, occupation of outposts around it, [and] demolition of most of the town." The objective was to break the Egyptian blockade of Israeli convoys to the Negev. The Egyptian army did not expect an offensive and fled en masse. Israel bombed the town on October 16. At 4:00 am on October 21, the 8th Brigade's 89th Battalion and the Negev Brigade's 7th and 9th Battalions moved in. Some troops advanced from the Mishmar HaNegev junction, 20 kilometres (12 mi) north of Beersheba, and others from the Turkish train station and Hatzerim. By 9:45, Beersheba was in Israeli hands. Around 120 Egyptian soldiers were taken prisoner. All of the Arab inhabitants who had resisted were expelled. The remaining Arab civilians, 200 men and 150 women and children, were taken to the police fort, and on October 25 the women, children, disabled and elderly were driven by truck to the Gaza border. The Egyptian soldiers were interned in POW camps. Some men lived in the local mosque and were put to work cleaning; however, when it was discovered that they were supplying information to the Egyptian army, they too were deported. The town was subject to large-scale looting by the Haganah, and by December, by one calculation, the total number of Arabs driven out of Beersheba and the surrounding areas had reached 30,000, with many ending up in Jordan as refugees. Following Operation Yoav, a 10-kilometer-radius exclusion zone was enforced around Beersheba, into which no Bedouin were allowed. In response, the United Nations Security Council passed two resolutions, on November 4 and 16, demanding that Israel withdraw from the area. Following the conclusion of the war, the 1949 Armistice Agreements formally granted Beersheba to Israel. The town was then transformed into an Israeli city with only a tiny Arab minority.

Beersheba was deemed strategically important due to its reliable water supply and its location at a major crossroads: northeast to Hebron and Jerusalem, east to the Dead Sea and al-Karak, south to Aqaba, west to Gaza, and southwest to Al-Auja and the border with Egypt. After a few months, the town's war-damaged houses were repaired. As a post-independence wave of Jewish immigration to Israel began, Beersheba experienced a population boom as thousands of immigrants moved in. The city rapidly expanded beyond its core, which became known as the "Old City", as new neighborhoods were built around it, complete with various housing projects such as apartment buildings and houses with auxiliary farms, as well as shopping centers and schools. The Old City was turned into a city center, with shops, restaurants, and government and utility offices. An industrial area and one of the largest cinemas in Israel were also built in the city. By 1956, Beersheba was a booming city of 22,000.
In 1959, the Wadi Salib riots, which began in Haifa, spread quickly to other parts of the country, including Beersheba. Soroka Hospital opened its doors in 1960. By 1968, the population had grown to 80,000. The University of the Negev, which would later become Ben-Gurion University of the Negev, was established in 1969. The then-Egyptian president Anwar Sadat visited Beersheba in 1979. In 1983, the population was more than 110,000. During the 1990s post-Soviet aliyah, the city's population increased greatly as many immigrants from the former Soviet Union settled there.

As part of its Blueprint Negev project, the Jewish National Fund funded major redevelopment projects in Beersheba. One such project is the Beersheba River Walk, a 900-acre (3.6-square-kilometre) riverfront park stretching along 8 kilometers of the riverside and containing a 15-acre (6.1-hectare) man-made boating lake, a 12,000-seat amphitheater, green spaces, playgrounds, and a bridge along the route of the city's Mekorot water pipes. The Beersheba River had previously been used as a dumping site and was filled with untreated wastewater. After the renovation, the river was transformed and now flows with high-quality purified wastewater. At the official entrance to the river park is Beit Eshel Park, built around a courtyard with historic remains from the settlement of Beit Eshel. Four new shopping malls were also built. Among them is Kanyon Beersheba, a 115,000-square-metre (1,240,000-square-foot) ecologically planned mall with pools for collecting rainwater and lighting generated by solar panels on the roof. The mall will be situated next to an 8,000-square-metre park with bicycle paths. In addition, the first farmers' market in Israel was established as an enclosed, circular complex with 400 spaces for vendors, surrounded by parks and greenery. A new central bus station was built in the city, a glass-enclosed complex also containing shops and cafés. Some $10.5 million was also invested in renovating Beersheba's Old City, preserving historical buildings and upgrading infrastructure. The Turkish Quarter was also redeveloped, with newly cobbled streets, widened sidewalks, and the restoration of Turkish homes into areas for dining and shopping. In 2011, city hall announced plans to turn Beersheba into the "water city" of Israel. One of the projects, "Beersheva Beach", is a 7-dunam fountain opposite city hall. Other projects included fountains near the Soroka Medical Center and in front of the Shamoon College of Engineering.

In the 1990s, as skyscrapers began to appear in Israel, the construction of high-rise buildings began in Beersheba. Today, downtown Beersheba has been described as a "clean, compact, and somewhat sterile-looking collection of high-rise office and residential towers." The city's tallest building is Rambam Square 2, a 32-story apartment building. Many additional high-rise buildings are planned or under construction, including skyscrapers, and there are further plans to build luxury residential towers in the city. In December 2012, a plan to build 16,000 new housing units in the Ramot Gimel neighborhood was scrapped in favor of creating a new urban forest, which spans 1,360 acres (550 ha) and serves as the area's "green lung", part of the plans to develop a "green band" around the city. The forest includes designated picnic areas, biking trails, and walking trails.
According to Mayor Ruvik Danilovich, Beersheba still has an abundance of open, underdeveloped spaces that can be used for urban development. In 2017, a new urban building plan was approved for the city, designed to raise the city's population to 340,000 by 2030. Under the plan, 13,000 more housing units will be built, along with industrial and business developments occupying a total of four million square meters. A second public hospital is also planned, and planning for the Beersheba Light Rail began. In 2019, the construction of a new public hospital, to be named after Shimon Peres, was approved. The hospital will be a 345-acre (140 ha) complex featuring 1,900 beds; commerce, hotel, alternative medicine and paramedical services; and research centers, with the possibility of apartment units for medical faculty employees, students, and senior housing. It will be linked to the rest of the city by a light rail system. In 2021, an outline plan was approved for the construction of 34,000 housing units in the city to increase the population to 400,000, as well as the construction of 4 million square meters of office and commercial space, 3 million square meters of industrial space, 2.7 million square meters of space in public buildings, and 370,000 square meters of space for the tourism industry. One of the primary goals of the plan is to boost connections between neighborhoods through a continuous network of streets that will be shaded and will give preference to public transport and pedestrians. Under the plan, construction in the city center will be boosted, and Rager Boulevard, which the plan identifies as the city's main avenue, will be turned from a multi-lane road into an urban avenue with expanded residential construction alongside it.

On October 19, 1998, sixty-four people were wounded in a grenade attack. On August 31, 2004, sixteen people were killed in two suicide bombings on commuter buses in Beersheba, for which Hamas claimed responsibility. On August 28, 2005, another suicide bomber attacked the central bus station, seriously injuring two security guards and 45 bystanders. During Operation Cast Lead, which began on December 27, 2008, and lasted until the ceasefire on January 18, 2009, Hamas fired 2,378 rockets (such as Grad rockets) and mortars from Gaza into southern Israel, including Beersheba. The rocket attacks have continued, but have been only partially effective since the introduction of the Iron Dome rocket defense system. In 2010, an Arab attacked and injured two people with an axe. In 2012, a Palestinian from Jenin was stopped in a "safe house" before carrying out a planned stabbing attack. On October 18, 2015, a lone gunman shot and killed a soldier guarding the Beersheba bus station before being gunned down by police. In September 2016, the Shin Bet thwarted a Palestinian Islamic Jihad terror attack at a wedding hall in Beersheba. On March 22, 2022, a convicted Islamic State supporter carried out a stabbing and vehicle-ramming attack, killing four people and injuring two others. During the Gaza war, the city became the target of several rocket attacks. During the Iran–Israel war in 2025, the city was targeted by Iran: on June 19, the Soroka Medical Center was struck by an Iranian ballistic missile, destroying the hospital's surgical ward, causing widespread destruction to nearby buildings and injuring at least 80 people.
On June 24, after the ceasefire agreement came into effect, Iran launched missiles towards a residential building in the city, killing 5 civilians and injuring 20.

Emblem of Beersheba

Since 1950, Beersheba has changed its municipal emblem several times. The 1950 emblem, designed by Abraham Khalili, featured a tamarix tree, a factory and water flowing from a pipeline. In 1972, the emblem was modernized with a symbolic representation of the Twelve Tribes and a tower, inscribed with words from the Bible: Abraham "planted a tamarisk tree in Beersheba" (Genesis 21:33). Since 2012, the emblem has incorporated the number seven as part of the city's rebranding.

Geography

Beersheba is located on the northern edge of the Negev desert, 115 kilometres (71 mi) south-east of Tel Aviv and 120 kilometres (75 mi) south-west of Jerusalem. The city lies on the main route from the center and north of the country to Eilat in the far south. The Beersheba Valley has been populated for thousands of years, as it has available water, which flows from the Hebron hills in the winter and is stored underground in vast quantities. The main river in Beersheba is Nahal Be'er Sheva, a stream that flows year round and occasionally floods in the winter. The Kovshim and Katef streams are other important wadis that pass through the city. Beersheba is surrounded by several satellite towns, including Omer, Lehavim, and Meitar, and the Bedouin localities of Rahat, Tel as-Sabi, and Lakiya. Just northwest of the city, near the Ramot neighborhood, is a region called the Goral hills (Hebrew: גבעות גורל, lit. "hills of fate"), with hills ranging from about 300 metres (980 feet) to 500 metres (1,600 feet) above sea level. Due to heavy construction, the flora unique to the area is endangered. Northeast of the city, north of the Neve Menahem neighborhood, there are loess plains and dry river beds.

Beersheba has a hot arid climate (Köppen climate classification BWh) bordering on a hot semi-arid climate (BSh), though with Mediterranean influences; the city has characteristics of both Mediterranean and desert climates. Summers are hot and dry, and winters are mild, with rainfall highly concentrated in the winter season. In summer, temperatures are high in daytime and at night, with an average high of 34.7 °C (94 °F) and an average low of 21.4 °C (71 °F). Winters have an average high of 17.7 °C (64 °F) and an average low of 7.1 °C (45 °F). Snow is very rare; a snowfall on February 20, 2015, was the first such occurrence in the city since 2000. Precipitation in summer is rare; most rain falls in the winter months, between September and May, and the annual amount is low, averaging 195.1 millimeters (7.7 in) per year. There are sandstorms in summer. Haze and fog are common in winter, as a result of high humidity.

Demographics

Beersheba is one of the fastest-growing cities in Israel. Though it has a population of about 200,000, the city is larger in area than Tel Aviv, and its urban plan calls for an eventual population of 450,000–500,000; it is planned to have a population of 340,000 by 2030. The population of Beersheba is predominantly Jewish. Jews and others represent 97.3% of the population, of whom Jews are 86.5%; Arabs constitute around 2.69% of the city's population. The Israel Central Bureau of Statistics divides the Beersheba metropolitan area into two areas.

Economy

The largest employers in Beersheba are Soroka Medical Center, the municipality, the Israel Defense Forces and Ben-Gurion University.
A major Israel Aerospace Industries complex is located in the main industrial zone, north of Highway 60. Numerous electronics and chemical plants, including Teva Pharmaceutical Industries, are located in and around the city. Beersheba is emerging as a high-tech center, with an emphasis on cyber security. A large high-tech park was built near the Be'er Sheva North Railway Station in 2012, and construction of a fifth commercial building there has begun. Deutsche Telekom, Elbit Systems, EMC, Lockheed Martin, Ness Technologies, WeWork and RAD Data Communications have opened facilities there, as has a cyber incubator run by Jerusalem Venture Partners. A science park funded by the RASHI-SACTA Foundation, the Beersheba Municipality and private donors was completed in 2008. Another high-tech park is located north of the city near Omer. An additional three industrial zones are located on the southeastern side of the city – Makhteshim, Emek Sara and Kiryat Yehudit – and a light industry zone lies between Kiryat Yehudit and the Old City.

Local government

The mayor of Beersheba is Ruvik Danilovich, who was deputy mayor under Yaakov Turner.

Educational institutions

According to the Israel Central Bureau of Statistics, in 2022 Beersheba had approximately 8,975 preschoolers in some 300 preschools and kindergartens, and 99 schools teaching a student population of about 45,291: 60 elementary schools with an enrollment of 19,617 (about 3,200 of whom were entering the 1st grade), and 39 high schools with an enrollment of 16,699. Of Beersheba's 12th graders, 90% earned a Bagrut matriculation certificate in 2022. The city also has several private schools and yeshivot in the religious sector, with 3,000 or more students. Beersheba is home to one of Israel's major universities, Ben-Gurion University of the Negev, located on an urban campus in the Dalet neighborhood. Other schools in Beersheba are the Open University of Israel, Shamoon College of Engineering (SCE), Kaye Academic College of Education, the Practical Engineering College of Beersheba (Hamikhlala HaTechnologit shel Be'er Sheva), and a campus of the Israeli Air and Space College (Techni Be'er Sheva).

Neighborhoods

After Israeli independence, Beersheba became a "laboratory" for Israeli architecture. Mishol Girit, a neighborhood built in the late 1950s, was the first attempt to create an alternative to the standard public housing projects in Israel. Hashatiah (literally, "the carpet"), also known as Hashekhuna Ledugma (the model neighborhood), was hailed by architects around the world. Today, Beersheba is divided into seventeen residential neighborhoods in addition to the Old City and Ramot, an umbrella neighborhood of four sub-districts. Many of the neighbourhoods are named after letters of the Hebrew alphabet, which also have numerical value, but descriptive place names have been given to some of the newer neighborhoods.

Culture

In 1953, Cinema Keren, the Negev's first movie theater, opened in Beersheba. It was built by the Histadrut and had seating for 1,200 people. Beersheba is the home base of the Israel Sinfonietta, founded in 1973. Over the years, the Sinfonietta has developed a broad repertoire of symphonic works, concerti for solo instruments and large choral productions, among them Handel's Israel in Egypt, masses by Schubert and Mozart, Rossini's Stabat Mater and Vivaldi's Gloria. World-famous artists have appeared as soloists with the Sinfonietta, including Pinchas Zukerman, Jean-Pierre Rampal, Shlomo Mintz, Gary Karr, and Paul Tortelier.
In the 1970s, a memorial commemorating fallen Israeli soldiers, designed by the sculptor Dani Karavan, was erected on a hill north-east of the city. The Beersheba Theater opened in 1973. The Light Opera Group of the Negev, established in 1980, performs musicals in English every year. Landmarks in the city include "Abraham's well", a well dating to at least the 12th century CE (now inside a visitors center), and the old Turkish railway station, now the focus of development plans. The Artists House of the Negev, in a Mandate-era building, showcases artwork connected in some way to the Negev. The Negev Museum of Art reopened in 2004 in the Ottoman Governor's House, and an art and media center for young people was established in the Old City. In 2009, a new tourist and information center, Gateway to the Negev, was built. In 2024, Midbarium, a desert zoo and amusement park, was opened, replacing the NegevZoo.

In 1906, during the Ottoman era, the Great Mosque of Beersheba was built with donations collected from the Bedouin residents of the Negev. It was used actively as a mosque until the city fell to Israeli forces in 1948, and then as the city's courthouse until 1953. From then until the 1990s, when it was closed for renovations, the building housed an archeological museum, which the city intended to turn into the archeological branch of the Negev Museum. In 2011, however, the Supreme Court of Israel, sitting as the High Court of Justice, ordered the property to be turned into a museum of Islam without reverting to a place of worship.

Transportation

Beersheba is the central transport hub of southern Israel, served by roads, railways and air. Beersheba is connected to Tel Aviv via Highway 40, the second-longest highway in Israel, which passes to the east of the city and is called the Beersheba bypass because it allows travellers from the north to reach southern locations while avoiding the more congested city center. From west to east, the city is divided by Highway 25, which connects to Ashkelon and the Gaza Strip to the northwest and to Dimona to the east. Finally, Highway 60 connects Beersheba with Jerusalem and the Shoket Junction, and goes through the West Bank. On the local level, a partial ring road surrounds the city from the north and east, and Road 406 (Rager Blvd.) goes through the city center from north to south.

Metrodan Beersheba, established in 2003, had a fleet of 90 buses and operated 19 lines in the city between 2003 and 2016, most of which departed from the Beersheba Central Bus Station. These lines were formerly operated by the municipality as the 'Be'er Sheva Urban Bus Services'. Inter-city buses to and from Beersheba are operated by Egged, Dan BaDarom and Metropoline. The city bus service was transferred to Dan Be'er Sheva on November 25, 2016, and Metrodan Beersheba was shut down. With the change to Dan Be'er Sheva, the company introduced electronic payment, ending the cash payment to the driver that had been common in Beersheba.

Israel Railways operates two stations in the city that form part of the railway to Beersheba: the old Be'er Sheva North University station, adjacent to Ben-Gurion University and Soroka Medical Center, and the new Be'er Sheva Central station, adjacent to the central bus station. Between the two stations the railway splits in two, continuing to Dimona and to the Dead Sea factories. An extension is planned to Eilat and Arad. The Be'er Sheva North University station is the terminus of the line to Dimona.
All stations of Israel Railways can be reached from Beersheba via transfer stations in Tel Aviv and Lod. Until 2012, the railway line to Beersheba used a slow single-track configuration with sharp curves and many level crossings, which limited train speed. Between 2004 and 2012 the line was double-tracked and rebuilt on an improved alignment, and all its level crossings were grade-separated. The rebuilding effort cost NIS 2.8 billion, significantly reduced travel times, and greatly increased train frequency between Beersheba and both Tel Aviv and Kiryat Motzkin. In addition, Beersheba is to be linked to Tel Aviv and Eilat by a new passenger and freight high-speed railway system.

The Beersheba Light Rail is currently planned as a light rail system for the city of Beersheba and outlying communities. There have been plans for a light rail system in Beersheba for many years, and one appears in the master plan for the city. An agreement for the construction of a light rail system was signed in 1998 but was not implemented. In 2008, the Israeli Finance Ministry contemplated freezing the Tel Aviv Light Rail project and building a light rail system in Beersheba instead, but that did not happen. In 2014, mayor Ruvik Danilovich announced that a light rail system would be built in the city. In 2017, the Ministry of Transport gave the Beersheba municipality approval to proceed with preliminary planning, and in August 2023 the light rail was officially approved. It is expected to be completed by 2033.

In Be'er Sheva there are over 250 roundabouts, giving the city its nickname of "Roundabout Capital of Israel". Many roundabouts, part of Be'er Sheva's urban oasis project, include fountains, landscaping and sculptures by well-known artists (such as Menashe Kadishman's The Horse Circle and Jeremy Langford's The Drip Circle). Some commemorate famous people and international and local organizations, or mark important events; some are named after the twin cities of Beersheba. Well-known roundabouts include Ilan Ramon Circle, Phantom Circle near the Air Force Technical School, Champions Square near Turner Stadium and the Conch Arena, Chess Circle, Harp Circle near the Municipal Conservatory and the Be'er Sheva Performing Arts Center, College Circle, Ben Gurion Circle, Light Circle, Freemasons Circle, Shofarot Circle, and Twin Towers Circle. Beersheba is linked to Hilvan by the Abraham Path.

Sports

Hapoel Be'er Sheva plays in the Israeli Premier League, the top tier of Israeli football, having been promoted in the 2008–2009 Liga Leumit season. The club has won the Israeli championship five times, in 1975, 1976, 2016, 2017 and 2018, as well as the State Cup in 1997, 2020 and 2022. Beersheba has two other local clubs, Maccabi Be'er Sheva (based in Neve Noy) and F.C. Be'er Sheva (based in the north of Dalet), a continuation of the defunct Beitar Avraham Be'er Sheva. Hapoel play at Turner Stadium. Beersheba also has a basketball club, Hapoel Be'er Sheva, which plays at the Conch Arena, seating 3,000.

Beersheba has become Israel's national chess center; thanks to Soviet immigration, it is home to the largest number of chess grandmasters per capita of any city in the world. The city hosted the World Team Chess Championship in 2005, and chess is taught in the city's kindergartens. The Israeli chess team won the silver medal at the 2008 Chess Olympiad and the bronze at the 2010 Olympiad.
The chess club was founded in 1973 by Eliyahu Levant, who served as its director for the next 40 years. The city has the second-largest wrestling center in Israel, the AMI wrestling school. The center is run by Leonid Shulman and has approximately 2,000 students, most of them from Russian immigrant families, as the club has its origins in the Nahal Beka immigrant absorption center. Maccabi Be'er Sheva has a freestyle wrestling team, whilst Hapoel Be'er Sheva has a Greco-Roman wrestling team. At the 2010 World Wrestling Championships, AMI students won five medals. Cricket is played under the auspices of the Israel Cricket Association. Beersheba is also home to a rugby team, whose senior and youth squads have won several national titles, including the Senior National League championship in 2004–2005. Beersheba's tennis center, which opened in 1991, features eight lighted courts, and the Beersheba (Teyman) airfield is used for gliding.

Environmental awards

In 2012, the Beersheba "ring trail", a 42-kilometer hiking trail around the city, won third place in the annual environmental competition of the European Travelers Association.
======================================== |
[SOURCE: https://en.wikipedia.org/wiki/Simon_Conway_Morris]
Simon Conway Morris

Simon Conway Morris FRS (born 1951) is an English palaeontologist, evolutionary biologist, and astrobiologist known for his study of the fossils of the Burgess Shale and the Cambrian explosion. The results of these discoveries were celebrated in Stephen Jay Gould's 1989 book Wonderful Life. Conway Morris's own book on the subject, The Crucible of Creation (1998), however, is critical of Gould's presentation and interpretation. Conway Morris, a Christian, holds theistic views of biological evolution. He has held the Chair of Evolutionary Palaeobiology in the Department of Earth Sciences, University of Cambridge, since 1995.

Biography

Conway Morris was born on 6 November 1951. A native of Carshalton, Surrey, he was brought up in London, England, and went on to study geology at Bristol University, achieving a First Class Honours degree. He then moved to Cambridge University and completed a PhD at St John's College under Harry Blackmore Whittington. He is professor of evolutionary palaeobiology in the Department of Earth Sciences at Cambridge, and is renowned for his insights into early evolution and his studies of palaeobiology. He gave the Royal Institution Christmas Lectures in 1996 on the subject of The History in our Bones. He was elected a Fellow of the Royal Society at age 39, was awarded the Walcott Medal of the National Academy of Sciences in 1987, and received the Lyell Medal of the Geological Society of London in 1998.

Conway Morris is based in the Department of Earth Sciences at the University of Cambridge and is best known for his work on the Cambrian explosion, the Burgess Shale fossil fauna and similar deposits in China and Greenland. In addition to working in these countries, he has undertaken research in Australia, Canada, Mongolia and the United States. His studies of the Burgess Shale-type faunas, as well as of the early evolution of skeletons, have encompassed a wide variety of groups, ranging from ctenophores to the earliest vertebrates. His thinking on the significance of the Burgess Shale has evolved, and his current interest in evolutionary convergence and its wider significance – the topic of his 2007 Gifford Lectures – was in part spurred by Stephen Jay Gould's arguments for the importance of contingency in the history of life. In January 2017, his team announced the discovery of Saccorhytus and initially described it as an early member of the deuterostomes, the group containing a diverse range of animals including the vertebrates, but subsequent analysis reclassified this taxon as a member of the protostomes, probably within the ecdysozoans.

Conway Morris's views on the Burgess Shale are reported in numerous technical papers and more generally in The Crucible of Creation (Oxford University Press, 1998). In recent years he has been investigating the phenomenon of evolutionary convergence, the main thesis of which is put forward in Life's Solution: Inevitable Humans in a Lonely Universe (Cambridge University Press, 2003). He is now involved in a major project both to investigate the scientific ramifications of convergence and to establish a website (www.mapoflife.org) that aims to provide an easily accessible introduction to the thousands of known examples of convergence. This work is funded by the John Templeton Foundation. Conway Morris is active in the public understanding of science and has broadcast extensively on radio and television, including the Royal Institution Christmas Lectures delivered in 1996.
A Christian, he has participated in science and religion debates, including arguments against intelligent design on the one hand and materialism on the other. In 2005 he gave the second Boyle Lecture. He has lectured at the Faraday Institute for Science and Religion on "Evolution and fine-tuning in Biology". He gave the University of Edinburgh Gifford Lectures for 2007 in a series titled "Darwin's Compass: How Evolution Discovers the Song of Creation". In these lectures Conway Morris explained why evolution is compatible with belief in the existence of a God. He is a critic of materialism and of reductionism:

That satisfactory definitions of life elude us may be one hint that when materialists step forward and declare with a brisk slap of the hands that this is it, we should be deeply skeptical. Whether the "it" be that of Richard Dawkins' reductionist gene-centred worldpicture, the "universal acid" of Daniel Dennett's meaningless Darwinism, or David Sloan Wilson's faith in group selection (not least to explain the role of human religions), we certainly need to acknowledge each provides insights but as total explanations of what we see around us they are, to put it politely, somewhat incomplete.

He is also critical of scientists who are militantly against religion:

the scientist who boomingly – and they always boom – declares that those who believe in the Deity are unavoidably crazy, "cracked" as my dear father would have said, although I should add that I have every reason to believe he was – and now hope is – on the side of the angels.

In March 2009 he was the opening speaker at the Biological Evolution: Facts and Theories conference held at the Pontifical Gregorian University in Rome, as well as chairing one of the sessions. The conference was sponsored by the Catholic Church. Conway Morris has contributed articles on evolution and Christian belief to several collections, including The Cambridge Companion to Science and Religion (2010) and The Blackwell Companion to Science and Christianity (2012).

See also

Extraterrestrial (TV program), in which Conway Morris participates.
======================================== |
[SOURCE: https://en.wikipedia.org/wiki/WATFIV]
WATFIV

WATFIV (Waterloo FORTRAN IV), developed in Canada at the University of Waterloo, is an implementation of the Fortran computer programming language. It is the successor of WATFOR. WATFIV was used from the late 1960s into the mid-1980s and was in turn succeeded by later versions of WATFOR. Because it could complete the three usual steps ("compile-link-go") in just one pass, the system became popular for teaching students computer programming.

History

In the early 1960s, newly formed computer science departments started university programs to teach computer programming languages. The Fortran language had been developed at IBM, but suffered from a slow and error-prone three-stage batch-processing workflow. In the first stage, the compiler started with source code and produced object code. In the second stage, a linker constructed a complete program using growing libraries of common functions. Finally, the program was repeatedly executed with data for the typical scientific and business problems of customers. Each step often required a new set of punched cards or tape. Students, on the other hand, had very different requirements. Their programs were generally short, but usually contained logic and syntax errors, resulting in time-consuming repetition of the steps and confusing "core dumps" (it often took a full day to submit a job and receive the successful or failed output from the computer operator). Once their programs worked correctly, they were turned in and never run again.

In 1961, the University of Wisconsin developed a technology called FORGO for the IBM 1620 which combined some of the steps. Similar experiments were carried out at Purdue University on the IBM 7090 in a system called PUFFT. In summer 1965, four undergraduate students of the University of Waterloo – Gus German, James G. Mitchell, Richard Shirley and Robert Zarnke – led by Peter Shantz, developed a Fortran compiler for the IBM 7040 computer called WATFOR. Its objectives were fast compilation and effective error diagnostics at both compile and execution time. It eliminated the need for a separate linking step; as a result, FORTRAN programs containing no syntax errors were placed into immediate execution. Professor J. Wesley Graham provided leadership throughout the project. This simple, one-step process allowed inexperienced programmers to learn programming at a lower cost in time and computing resources.

To aid in debugging, the compiler used an innovative approach to checking for undefined variables (an extremely common mistake by novice programmers). It used a diagnostic feature of the 7040 that can deliberately set areas of memory to bad parity. When a program tried to reference a variable that had not been set, the machine took an interrupt (handled by the WATFOR runtime routines) and the error was reported to the user as an undefined variable. This had the pleasant side effect of checking for undefined variables with essentially no CPU overhead.
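As a rough illustration of the class of bug this mechanism caught, consider the following FORTRAN IV-style fragment. This is a sketch only: the variable names are invented and the diagnostic wording is not taken from a WATFOR manual.

C     ILLUSTRATIVE SKETCH ONLY. TOTAL IS READ BEFORE IT IS EVER
C     ASSIGNED, SO THE FIRST PASS THROUGH THE LOOP TOUCHES
C     UNINITIALIZED MEMORY. WATFOR REPORTED THIS AT RUN TIME AS AN
C     UNDEFINED VARIABLE, WHERE A CONVENTIONAL COMPILER WOULD
C     SILENTLY USE A GARBAGE VALUE. ADDING "TOTAL = 0.0" BEFORE
C     THE LOOP MAKES THE PROGRAM CORRECT.
      DO 10 I = 1, 5
         TOTAL = TOTAL + FLOAT(I)
   10 CONTINUE
      WRITE (6,20) TOTAL
   20 FORMAT (1X, F8.2)
      STOP
      END

Because the check rode on the 7040's parity hardware rather than on generated test code, correct programs paid no speed penalty for it.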
"Ned" Kesselhut, Bill Kindree and Dereck Meek, were later joined by Mike Doyle, Rod Milne, Ron Hurdal and Lynn Williams, completed 360 WATFOR in the early part of 1967. Many other institutions (universities, colleges, businesses and governmental agencies) started using the WATFOR compiler to meet needs similar to those experienced at the University of Waterloo. The distribution of the software and customer support was carried on by Sandra Ward. As a result of proposals from the SHARE user group Fortran committee and others, a new version called WATFIV was produced in 1968. WATFIV introduced new features such as CHARACTER variables and direct-access input-output. The Association for Computing Machinery presented Paul Cress and Paul Dirksen the Grace Murray Hopper Award for contributions to the WATFOR and WATFIV projects in 1972. The WATFIV compiler was included in the DATAPRO Honour Roll for 1975 and 1976. People involved with maintenance and enhancement included Bernie Murphy, Martin Wiseman and Yvonne Johnson. WATFIV was pronounced as "WHAT FIVE" (and sometimes "WATT FIVE"), but, as was realized at the time, could also (almost) still be pronounced as "WHAT FOR", as in WAT-F-IV (Waterloo Fortran IV). Universities and corporations used these compilers and a number of other software products have been developed in the WATFOR tradition. For example, a version for the COBOL programming language is called WATBOL. Daniel D. McCracken said "it is no exaggeration to suggest that WATFOR revolutionized the use of computers in education." At one point, more than 3,000 mini and mainframe computer licenses and over 100,000 microcomputer licenses were held worldwide for this family of software products. In 1974, a compiler with characteristics similar to the IBM implementation was created for the Digital Equipment Corporation PDP-11 computer and called WATFOR-11. The team members, Jack Schueler, Jim Welch and Terry Wilkinson, were later joined by Ian McPhee who had added new control statements to the WATFIV compiler for structured programming (SP). These new statements included the block IF (later included in the ANSI X3.9-1978 language standard), WHILE, UNTIL, and others. WATFIV-S was announced in 1974 and a few months later, WATFOR-11S (the "S" indicating the new SP features) was also announced. The original SP features were later enhanced with additional statements by Bruce Hay in WATFIV-S in 1980 and by Jack Schueler in WATFOR-11S in 1981. During the 1970s, the ANSI X3J3 subcommittee (the FORTRAN language standard group) developed a new language standard which was officially approved in April, 1978. This standard, designated FORTRAN 77, introduced many new statements into the language. In fact, the previous language standard FORTRAN 66 is a very small document and describes, what is in effect, a subset of most implementations of FORTRAN. For example, the WATFIV and WATFOR-11 implementations are based upon the IBM definition of FORTRAN-IV. As programmers used the FORTRAN 77 features, a new compiler was required to combine the advantages of the WATFIV compiler with the new language standard. In January 1983, a project to develop a FORTRAN 77 compiler was started at Watcom Systems Inc. Under the leadership of Jack Schueler, Watcom employees and undergraduate students from the University of Waterloo's Co-operative Computer Science program became involved in the creation of the WATFOR-77 compiler. 
In January 1983, a project to develop a FORTRAN 77 compiler was started at Watcom Systems Inc. Under the leadership of Jack Schueler, Watcom employees and undergraduate students from the University of Waterloo's Co-operative Computer Science program became involved in the creation of the WATFOR-77 compiler. The major work was done by Geno Coschi, Fred Crigger, John Dahms, Jim Graham, Jack Schueler, Anthony Scian and Paul Van Oorschot, assisted by Rod Cremasco, John McCormick, David McKee and Brian Stecher. Many team members from earlier compiler projects, including Bruce Hay, Ian McPhee, Sandra Ward, Jim Welch and Terry Wilkinson, provided input. Unlike the earlier WATFOR compilers, which were written entirely in machine-dependent assembly language, a significant portion of WATFOR-77 was written in a portable systems language to ease the implementation of the compiler on other computer systems. Two components of the compiler were not portable: the code generator, which translated FORTRAN statements into native computer instructions and stored them in memory (the first version of WATFOR-77 generated instructions for the IBM 370 computer architecture), and most of the execution-time support (undefined-variable checking, subscript evaluation, intrinsic functions), which was written in assembly language for good performance. In September 1984, the first version was installed at the University of Waterloo for the Department of Computing Services; it was an implementation for IBM 370 computers running the VM/SP CMS operating system. A few months earlier, in May 1984, a project had started to implement the WATFOR-77 compiler on the IBM Personal Computer. This project included Geno Coschi, Fred Crigger, Tim Galvin, Athos Kasapi, Jack Schueler, Terry Skomorowski and Brian Stecher. In April 1985, this second version of WATFOR-77 was installed at the University of Waterloo for use by students of the Faculty of Engineering. The compiler could run on a 256K IBM Personal Computer using IBM PC DOS 2.0 and did not require special floating-point hardware. In the fall of 1985, a Japanese version of WATFOR-77 was delivered to IBM Japan for the IBM JX Personal Computer. This version produced Japanese-language error messages and supported the Kanji, Hiragana and Katakana character sets for variable names and character strings. To support the JX, the Language Reference manual and User's Guide were translated into Japanese. Another version of WATFOR-77 with the same features was developed for the Japanese IBM PS/55 family of personal computers in spring 1988. During the summer of 1986, the IBM PC version of WATFOR-77 was adapted to run on the Unisys ICON, which ran the QNX operating system. Since QNX is quite different from IBM PC DOS, parts of the run-time system were rewritten; this implementation was made available in September 1986. During the summer of 1985, a project had been started to adapt WATFOR-77 to the Digital Equipment Corporation VAX computer series running the VMS operating system. The members of this project included Geno Coschi, Marc Ouellette, Jack Schueler and Terry Skomorowski, and the implementation was made available in March 1987. In the spring of 1988, a new project was begun to develop an optimizing FORTRAN 77 compiler. This compiler used the code generator from the Watcom C compiler, which produced machine code superior to that of other contemporary C compilers. The FORTRAN 77 optimizing compiler was first shipped in mid-1990. In October 1990, the 25th anniversary of WATFOR was celebrated, and many of those involved in the development of the WATFOR compilers were invited to the University of Waterloo for a reunion. In spring 1992, a version of WATFOR-77 was adapted to the NEC PC-9801 family of personal computers.
This version was similar to the IBM PS/55 version but modified to accommodate architectural differences. In January 1992, development of a 32-bit version of WATFOR-77 for Intel 80386 and Intel 80486 personal computers began; the first version shipped in the fall of 1992. As late as 1995, classes in WATFIV programming were still being held at the University of Mississippi, led by Professor Charles H. (Chuckie) Franke.
======================================== |
[SOURCE: https://en.wikipedia.org/wiki/Anchimayen] | [TOKENS: 243] |
Contents Anchimayen The Anchimayen (Mapudungun; also spelled "Anchimallén" or "Anchimalguén" in Spanish) is a mythical creature in Mapuche mythology. Anchimayens are described as little creatures that take the form of small children and can transform into flying fireballs that emit bright light. They are the servants of a kalku (a type of Mapuche sorcerer). According to some sources, the figure was originally conceived of as a moon goddess, married to the sun, but later developed into a fuego fatuo (will-o'-the-wisp) type of being that frightens and unhorses travelers. Anchimayens are sometimes confused with Kueyen (the Mapuche lunar goddess), because she also produces a bright light.
======================================== |
[SOURCE: https://en.wikipedia.org/wiki/American_entry_into_World_War_I] | [TOKENS: 12273] |
Contents American entry into World War I The United States entered World War I on April 6, 1917, more than two and a half years after the war began in Europe. Apart from an Anglophile element urging early support for the British and an anti-tsarist element sympathizing with Germany's war against Russia, American public opinion had generally reflected a desire to stay out of the war. Over time, especially after reports of German atrocities in Belgium in 1914 and after the sinking of the RMS Lusitania by an Imperial German Navy submarine off the southern coast of Ireland in May 1915, Americans increasingly came to see Imperial Germany as the aggressor in Europe. When the country was at peace, American banks made huge loans to the Entente powers (Allies), which were used mainly to buy munitions, raw materials, and food from the United States and Canada. Although President Woodrow Wilson made minimal preparations for a land war before 1917, he did authorize a shipbuilding program for the United States Navy. Wilson was narrowly re-elected in 1916 on an anti-war platform. By 1917, with Belgium and northern France occupied by German troops, the Russian Empire in the turmoil of the February Revolution that overthrew the tsar, and the remaining Entente nations low on credit, the German Empire appeared to have the upper hand in Europe. However, a British economic embargo and naval blockade were causing severe shortages of fuel and food in Germany, and Berlin decided to resume unrestricted submarine warfare. The aim was to break the trans-Atlantic supply chain to Britain, although the German high command realized that sinking American-flagged ships would almost certainly bring the United States into the war. Imperial Germany also made a secret offer to help Mexico regain the territories of the Mexican Cession of 1848, lost seven decades before in the Mexican–American War of 1846–1848 (now incorporated into the Southwestern United States), in a coded diplomatic message known as the Zimmermann Telegram, which was intercepted by British intelligence. Publication of that communiqué in the media outraged Americans just as German submarines started sinking American merchant ships in the North Atlantic in the renewed U-boat campaign. President Wilson then asked Congress for "a war to end all wars" that would "make the world safe for democracy", and Congress voted to declare war on Germany on April 6, 1917. US troops began to arrive in Europe later that year and served in major combat operations on the Western Front under the command of General John J. Pershing, particularly during the final Hundred Days Offensive. Main issues Britain used its large navy to prevent cargo vessels from entering German ports, mainly by intercepting them in the North Sea between the coasts of Scotland and Norway. By the end of 1915, the Royal Navy had successfully stopped the shipment by sea of most war supplies and food to Germany. Neutral American merchant ships that tried to trade with Germany were seized or turned back by the Royal Navy, which viewed such trade as directly conflicting with the Allied war effort.
The impact of the blockade became apparent very slowly, because Germany and its allies controlled extensive farmlands and raw materials on the continent of Europe and could trade with land-bordering neutral countries like Sweden and the Netherlands, which were not themselves blockaded by the British or French. However, because Germany and its Central Powers ally Austria-Hungary had crippled their agricultural production by drafting so many farmers into their armies and diverting supplies of nitrate fertiliser to military use, and because the Allies were able to pressure neutral countries into reducing exports, the situation worsened, with the "turnip winter" of 1916–1917 an example of the emerging severe shortages in Central Europe. By the start of 1917 there was clear pressure on the German leadership to avoid a "war of exhaustion", while the decline of neutral trade reduced the importance of keeping the neutral countries on side. Germany had considered a counter-blockade from 1914. "England wants to starve us", said Grand Admiral Alfred von Tirpitz (1849–1930), who had built up the Imperial German Navy in the decades after German unification in 1871 and remained a key advisor to Kaiser Wilhelm II. "We can play the same game. We can bottle her up and destroy every ship that endeavors to break the blockade". Admiral Tirpitz wanted to sink or scare off merchant and passenger ships en route to Britain. He and others in the Admiralty reasoned that since the island of Britain depended on imports of food, raw materials, and manufactured goods, preventing a substantial number of ships from supplying Britain would undercut its long-term ability to maintain an army on the Western Front and potentially force a speedy surrender. Such a strategy would also give an essential, war-winning role to the Imperial German Navy, which had been mostly passive in the war thus far, unable to challenge the powerful surface fleet of the Royal Navy. While Germany had ample shipyard capacity to build hundreds of U-boats, it had only nine long-range U-boats at the start of the war in August 1914. Nevertheless, without consulting colleagues such as Tirpitz, the outgoing head of the German Admiralty, Hugo von Pohl (1855–1916), declared the first round of unrestricted submarine warfare in February 1915, six months after the war began. However, instead of ceasing shipping to Britain and blaming the British, as the Germans anticipated (they had positioned the move as a reprisal), the United States demanded that Germany respect the earlier peacetime international agreements on "freedom of the seas", which protected neutral American and other ships on the high seas from seizure or sinking by either belligerent. Furthermore, Americans insisted on strict accountability for the deaths of innocent American civilians, demanding an apology and compensation and suggesting that such deaths were grounds for a declaration of war. While the British Royal Navy frequently violated America's neutral rights by defining contraband very broadly in its naval blockade of Germany, German submarine warfare threatened American lives. Wilson's top advisor, Colonel Edward M. House (1858–1938), commented that "The British have gone as far as they possibly could in violating neutral rights, though they have done it in the most courteous way".
Further, while the British justified their blockade with an appeal to precedent, the Germans claimed that they should be allowed to use their new weapon to its best potential, and that existing rules and norms therefore need not apply. This was exemplified when German submarines torpedoed ships without warning, causing sailors and passengers to drown. Though in practice this was initially rare, since U-boats preferred to attack on the surface, the strategy was justified by claims that submarines were so vulnerable that they dared not surface near merchant ships that might be carrying guns, and that they were too small to rescue the crews of the ships they sank. The Americans countered that if the new weapon could not be used while protecting civilian lives, it should not be used at all. From February 1915, despite the United States warning Germany about the misuse of submarines, several incidents occurred in which neutral ships were attacked or Americans killed. After the Thrasher incident, the German Imperial Embassy warned US citizens against boarding vessels bound for Britain, which risked German attack. Then on May 7, a German U-boat torpedoed and sank the British ocean liner RMS Lusitania, with the loss of 1,199 civilian lives, including 128 US citizens. The sinking of a large, unarmed passenger ship, combined with the earlier stories of atrocities in Belgium, shocked Americans and turned public opinion hostile to Germany, although not yet to the point of war. Wilson issued a warning to Germany that it would face "strict accountability" if it killed more American citizens. Berlin acquiesced, ordering its submarines to avoid passenger ships. By January 1917, however, Field Marshal Paul von Hindenburg and General Erich Ludendorff had decided that an unrestricted submarine blockade was the only way to achieve a decisive victory, and they demanded that Kaiser Wilhelm order unrestricted submarine warfare resumed. Germany knew this decision meant war with the United States, but it gambled that it could win before America's potential strength could be mobilized. The Germans overestimated how many ships they could sink, however, and thus the extent to which Britain would be weakened; they did not foresee that convoys could and would be used to defeat their efforts; and they believed that the United States was so weak militarily that it could not be a factor on the Western Front for more than a year, and that submarines would stop the transport of troops in any case. The civilian government in Berlin objected, but the Kaiser sided with his military. The second round of unrestricted submarine warfare was communicated to the Americans on January 31, 1917. The State Department had some indications that the campaign was coming, but Wilson told his cabinet that the announcement had come as a complete surprise. The announcement was especially galling given Wilson's "peace without victory" speech nine days earlier, as well as the ongoing discussions of US opposition to the British use of armed merchant ships. The Germans started targeting American vessels the very next day. The beginning of the war in Europe coincided with the end of the recession of 1913–1914 in the US. Exports to belligerent nations rose rapidly, from $824.8 million in 1913 to $2.25 billion in 1917. Loans from American financial institutions to the Allied nations in Europe also increased dramatically over the same period.
Economic activity toward the end of this period boomed, as government resources aided private-sector production. Between 1914 and 1917, industrial production increased 32% and GNP increased by almost 20%. The improvements to industrial production in the United States outlasted the war: the capital build-up that had allowed US companies to supply the belligerents and the US Army resulted in a greater long-run rate of production even after the war ended in 1918. The J.P. Morgan bank offered assistance in the wartime financing of Britain and France from the earliest stages of the conflict through the US entry into the war in 1917. J.P. Morgan's New York office was designated the primary financial agent of the British government starting in 1914, after successful lobbying by the British ambassador, Sir Cecil Spring Rice. The bank later took a similar role for France, becoming the primary issuer of loans to the French government and channelling the capital of US investors through its French affiliate, Morgan, Harjes. Relations between Morgan and the French government became tense, however, as the war dragged on with no end in sight. France's ability to borrow from other sources diminished, leading to higher lending rates and a depressed franc. After the war ended, J.P. Morgan & Co. continued to aid the French government financially through monetary stabilization and debt relief. Because the United States was still a declared neutral state, the financial dealings of United States banks in Europe caused a great deal of contention between Wall Street and the US government. Secretary of State William Jennings Bryan strictly opposed financial support of warring nations and wanted to ban loans to the belligerents in August 1914. He told President Wilson that "refusal to loan to any belligerent would naturally tend to hasten a conclusion of the war." Wilson at first agreed, but then reversed himself when France argued that if it was legal to buy US goods, it was legal to take out credits on the purchase. J.P. Morgan issued loans to France, including one in March 1915, and, following negotiations with the Anglo-French Financial Commission, a joint loan to Britain and France in October 1915, the latter amounting to US$500,000,000. Although the stance of the US government was that stopping such financial assistance could hasten the end of the war and therefore save lives, little was done to ensure adherence to the ban on loans, in part because of pressure from Allied governments and US business interests. The US steel industry had faced difficulties and declining profits during the recession of 1913–1914. As war began in Europe, however, the increased demand for the tools of war began a period of heightened productivity that lifted many US industrial companies out of the low-growth environment of the recession. Bethlehem Steel took particular advantage of the increased demand for armaments abroad. Prior to the US entry into the war, these companies benefited from unrestricted commerce with sovereign customers abroad. After President Wilson issued his declaration of war, the companies were subjected to price controls created by the US Trade Commission in order to ensure that the US armed forces would have access to the necessary armaments.
By the end of the war in 1918, Bethlehem Steel had produced 65,000 pounds of forged military products, 70 million pounds of armor plate, 1.1 billion pounds of steel for shells, and 20.1 million rounds of artillery ammunition for Britain and France. Bethlehem Steel also took advantage of the domestic armaments market, producing 60% of the US weaponry and 40% of the artillery shells used in the war. Even with price controls and a lower profit margin on manufactured goods, the profits from wartime sales made the company the third-largest manufacturing company in the country. Bethlehem Steel became the primary arms supplier for the United States and other Allied powers again in 1939. Historians divide the views of US political, social and business leaders at the time into four distinct groups. The first were the Non-Interventionists, a loosely affiliated and politically diverse anti-war movement which sought to keep the United States out of the conflict altogether. Members of this group tended to view the war as a clash between the imperialist and militaristic great powers of Europe, whom they deemed corrupt and unworthy of support; others were pacifists, who objected on moral grounds. Prominent leaders included Democrats like former Secretary of State William Jennings Bryan, industrialist Henry Ford and publisher William Randolph Hearst; Republicans such as Senator Robert M. La Follette of Wisconsin and Senator George W. Norris of Nebraska; and Progressive activist Jane Addams. At the far-left end of the political spectrum, the Socialists, led by their perennial presidential candidate Eugene V. Debs and movement veterans like Victor L. Berger and Morris Hillquit, were staunch anti-militarists. They opposed any US intervention, branding the conflict a "capitalist war" which US workers should resist. After the US joined the war in April 1917, however, a schism developed between the anti-war party leadership and a pro-war faction of socialist writers and intellectuals led by John Spargo, William English Walling and E. Haldeman-Julius. This group founded the rival Social Democratic League of America to promote the war effort among their fellow Socialists. Next were the more moderate Liberal-Internationalists. This nominally progressive group reluctantly supported US entry into the war against Germany, with the postwar goal of establishing strong international institutions designed to resolve future conflicts between nations peacefully and to promote liberal democratic values more broadly. This camp's views were advocated by interest groups such as the League to Enforce Peace. Adherents included President Woodrow Wilson, his influential advisor Edward M. House, former President William Howard Taft, chairman of the Commission for Relief in Belgium Herbert Hoover, Wall Street financier Bernard Baruch, and Harvard University President Abbott Lawrence Lowell. Finally, there were the Atlanticists. Politically conservative and unambiguously pro-Allied, this faction had championed US intervention in the war since the sinking of the Lusitania and also strongly supported the Preparedness Movement. Proponents advocated an enduring postwar alliance with Great Britain, which they saw as vital to the future security of the US. Prominent among the Anglophile Eastern establishment, supporters included former U.S.
President Theodore Roosevelt, Major General Leonard Wood, lawyer and diplomat Joseph Hodges Choate, former Secretary of War Henry Stimson, and Senators Henry Cabot Lodge of Massachusetts and Elihu Root of New York. Public opinion A surprising factor in the development of US public opinion was how little the political parties became involved. Wilson and the Democrats in 1916 campaigned on the slogan "He kept us out of war!", saying a Republican victory would mean war with both Mexico and Germany. His position was probably critical in winning the western states. Charles Evans Hughes, the GOP candidate, insisted on downplaying the war issue. The Socialist Party of America talked peace. Socialist rhetoric declared the European conflict to be "an imperialist war", blamed the war on capitalism, and pledged total opposition. "A bayonet", its propaganda said, "was a weapon with a worker at each end". When war was declared, however, many Socialists, including much of the party's intellectual leadership, supported the decision and sided with the pro-Allied efforts. The majority, led by Eugene V. Debs (the party's presidential candidate from 1900 to 1912), remained die-hard ideological opponents. Many socialists came under investigation under the Espionage Act of 1917, and many suspected of treason were arrested, including Debs. This increased the Socialists' and anti-war groups' resentment toward the US government. The working class was relatively quiet and tended to divide along ethnic lines. At the beginning of the war, neither working men nor farmers took a large interest in the debates on war preparation. Samuel Gompers, head of the AFL labor movement, denounced the war in 1914 as "unnatural, unjustified, and unholy", but by 1916 he was supporting Wilson's limited preparedness program, against the objections of Socialist union activists. In 1916 the labor unions supported Wilson on domestic issues and ignored the war question. The war at first disrupted the cotton market; the Royal Navy blockaded shipments to Germany, and prices fell from 11 cents a pound to only 4 cents. By 1916, however, the British decided to bolster the price to 10 cents to avoid losing Southern support. The cotton growers seem to have moved from neutrality to intervention at about the same pace as the rest of the nation. Midwestern farmers generally opposed the war, particularly those of German and Scandinavian descent. The Midwest became the stronghold of isolationism; other remote rural areas also saw no need for war. The African-American community did not take a strong position one way or the other. A month after Congress declared war, W. E. B. Du Bois called on African-Americans to "fight shoulder to shoulder with the world to gain a world where war shall be no more". Once the war began and black men were drafted, they worked to achieve equality. Many had hoped that the community's help in the war effort abroad would earn civil rights at home; when such rights were still not granted, many African-Americans grew tired of waiting for recognition of their rights as American citizens. There was a strong antiwar element among poor rural whites in the South and border states. In rural Missouri, for example, distrust of powerful eastern influences focused on the risk that Wall Street would lead the US into war. Across the South, poor white farmers warned each other that "a rich man's war meant a poor man's fight," and they wanted nothing of it.
Antiwar sentiment was strongest among Christians affiliated with the Churches of Christ, the Holiness movement and Pentecostal churches. Congressman James Hay, Democrat of Virginia, was the powerful chairman of the House Committee on Military Affairs. He repeatedly blocked prewar efforts to modernize and enlarge the army; preparedness was not needed, he insisted in January 1915, because Americans were already safe. Educated, urban, middle-class Southerners generally supported entering the war, and many worked on mobilization committees. In contrast, many rural Southern whites opposed entering the war: those with more formal education were more in favor of entry, and those with less formal education were more likely to oppose it. Letters to newspapers with spelling or grammatical errors overwhelmingly opposed entry into the war; letters without errors overwhelmingly supported it. When the war began, Texas and Georgia led the Southern states in volunteers: 1,404 from Texas, 1,397 from Georgia, 538 from Louisiana, 532 from Tennessee, 470 from Alabama, 353 from North Carolina, 316 from Florida, and 225 from South Carolina. Every Southern senator voted in favor of entering the war except the Mississippi firebrand James K. Vardaman. Some regions of the South were more heavily in favor of intervention than others. Georgia had the largest share of pro-British newspapers before the US entry into the war and provided the most volunteers per capita of any state in the country before the introduction of conscription. All five major newspapers of southeast Georgia were outspokenly Anglophile throughout the war and highlighted German atrocities such as the rape of Belgium and the execution of the British nurse Edith Cavell. Pro-British magazines with nationwide distribution, such as The Outlook and The Literary Digest, also had a disproportionately high circulation throughout every region of Georgia, as well as in the area of northern Alabama around Huntsville and Decatur (of Alabama's 470 volunteers when the war began, over 400 came from the Huntsville–Decatur region). Support for US entry into the war was also pronounced in central Tennessee, where letters to newspapers expressing pro-British, anti-German or pro-interventionist sentiment were common; between October 1914 and April 1917, letters about the war to Tennessee newspapers included at least one of these three sentiments. In the Tennessee counties of Cheatham, Robertson, Sumner, Wilson, Rutherford, Williamson, Maury, Marshall, Bedford, Coffee and Cannon, over half of the letters contained all three of these elements. In South Carolina, too, there was support for US entry into the war. Led by Governor Richard I. Manning, the cities of Greenville, Spartanburg, and Columbia had started lobbying for army training centers in their communities, for both economic and patriotic reasons, in preparation for US entry into the war. Similarly, Charleston had interned a German freighter in 1914, and when the freighter's skeleton crew tried to block Charleston harbor they were all arrested and imprisoned. From that point on, Charleston was buzzing with "war fever"; through 1915, 1916 and early 1917, Charleston and the Lowcountry coastal counties to its south were gripped by sentiment that was strongly "pro-British and anti-German."
German Americans by this time usually had only weak ties to Germany; however, they were fearful of the negative treatment they might receive if the United States entered the war (such mistreatment was already happening to citizens of German descent in Canada and Australia). Almost none called for intervening on Germany's side, instead calling for neutrality and speaking of the superiority of German culture. As more nations were drawn into the conflict, the English-language press increasingly supported Britain, while the German-American media called for neutrality while also defending Germany's position. Chicago's Germans worked to secure a complete embargo on all arms shipments to Europe. In 1916, large crowds in Chicago's Germania celebrated the Kaiser's birthday, something they had not done before the war. German Americans in early 1917 still called for neutrality, but proclaimed that if war came they would be loyal to the United States. By this point, they had been excluded almost entirely from the national discourse on the subject. German-American Socialists in Milwaukee, Wisconsin actively campaigned against entry into the war. Leaders of most religious groups (except the Episcopalians) tended toward pacifism, as did leaders of the women's movement. The Methodists and Quakers were among the most vocal opponents of the war. President Wilson, a devout Presbyterian, often framed the war in terms of good and evil in an appeal for religious support of the war. A concerted effort was made by pacifists, including Jane Addams, Oswald Garrison Villard, David Starr Jordan, Henry Ford, Lillian Wald, and Carrie Chapman Catt; their goal was to encourage Wilson's efforts to mediate an end to the war by bringing the belligerents to the conference table. Finally, in 1917, Wilson convinced some of them that to be truly anti-war they needed to support what he promised would be "a war to end all wars". Once war was declared, the more liberal denominations, which had endorsed the Social Gospel, called for a war for righteousness that would uplift all mankind. The theme, an aspect of American exceptionalism, was that God had chosen America as his tool to bring redemption to the world. US Catholic bishops maintained a general silence on the issue of intervention. Millions of Catholics lived in both warring camps, and Catholic Americans tended to split along ethnic lines in their opinions toward US involvement in the war. At the time, heavily Catholic towns and cities in the East and Midwest often contained multiple parishes, each serving a single ethnic group, such as Irish, German, Italian, Polish, or English; US Catholics of Irish and German descent opposed intervention most strongly. Pope Benedict XV made several attempts to negotiate a peace, but all of his efforts were rebuffed by both the Allies and the Germans, and throughout the war the Vatican maintained a policy of strict neutrality. In 1914–1916, few Jewish Americans favored US entry into the war. New York City, with its Jewish community numbering 1.5 million, was a center of antiwar activism, much of it organized by labor unions, which were primarily on the political left and therefore opposed to a war they viewed as a battle between several great powers. Some Jewish communities worked together during the war years to provide relief to Jewish communities in Eastern Europe devastated by fighting, famine and the scorched-earth policies of the Russian and Austro-German armies.
Of greatest concern to Jewish Americans was the tsarist government in Russia, owing to its toleration of pogroms and its allegedly antisemitic policies. As the historian Joseph Rappaport showed in his study of the Yiddish press during the war, "The pro-Germanism of America's immigrant Jews was an inevitable consequence of their Russophobia." After the February Revolution of 1917 transformed Russia into a republic, however, a major obstacle was removed for those Jews who had refused to support US entry into the war on the side of Russia. The draft went smoothly in New York City, and left-wing opposition to the war largely collapsed when Zionists saw the possibility of using the war to press for a Jewish state. The most effective domestic opponents of the war were Irish-American Catholics. Many had little interest in the continent; despite traditional hostility toward the United Kingdom and the British Empire, some Irish Americans took a more neutral stance on aiding the Entente on account of the recently passed Government of Ireland Act 1914, which promised Irish Home Rule but had been suspended until the war ended. John Redmond and the Irish Parliamentary Party (IPP) declared that Irish Volunteers should support the Allied war effort first; his political opponents argued that it was not the time to support Britain in its attempt to "strengthen and expand her empire". The attacks on the IPP and the pro-Allied press reflected a firm belief that a German victory would hasten the achievement of an independent Irish state. Yet rather than proposing intervention on behalf of the Germans, Irish-American leaders and organizations focused on demanding US neutrality, although the increased contact between militant Irish nationalists and German agents in the United States fueled concerns about where the primary loyalties of Irish Americans lay. Nevertheless, nearly 1,000 Irish-born Americans died fighting with the US armed forces in the war. The Easter Rising in Dublin in April 1916 was defeated within a week and its leaders executed by firing squad. Both the mainstream Irish and the US press treated the uprising as foolish and misguided, and later joined the British press in suspecting it was largely created and planned by the Germans. Overall, public opinion remained faithfully pro-Entente. In many major US cities, Irish Americans dominated the Democratic Party, forcing Wilson to take their political viewpoints into account. Irish-American political efforts pushed the United States to define its own war objectives separately from those of its allies, objectives which included, among others, self-determination for the various nations and ethnic groups of Europe. Wilson gave assurances that he would promote Irish independence after the war, which helped to secure support for his war policies; once the war was over, however, Wilson reneged, disappointing many Irish Americans. Though an ideological proponent of self-determination in general, Wilson saw the Irish situation purely as an internal affair of the United Kingdom, not as a dispute of the same kind as those facing the various other nationalities of Europe in the fallout from World War I. Some British immigrants worked actively for intervention. London-born Samuel Insull, Chicago's leading industrialist, for example, enthusiastically provided money, propaganda, and means for volunteers to enter the British or Canadian armies.
After the United States' entry into the war, Insull directed the Illinois State Council of Defense, with responsibility for organizing the state's mobilization. Immigrants from eastern Europe usually cared more about politics in their homelands than politics in the United States. Spokesmen for Slavic immigrants hoped that an Allied victory would bring independence for their homelands. Large numbers of Hungarian immigrants, liberal and nationalist in sentiment and seeking an independent Hungary separate from the Austro-Hungarian Empire, lobbied in favor of the war and allied themselves with the Atlanticist or Anglophile portion of the population; this community was largely pro-British and anti-German in sentiment. Albanian Americans in communities such as Boston also campaigned for entry into the war and were overwhelmingly pro-British and anti-German, as well as hopeful that the war would lead to an independent Albania free from the Ottoman Empire. Wisconsin had the distinction of being the most isolationist state, owing to its many German Americans, socialists and pacifists. There were exceptions within the state, however, such as the city of Green Bay, which had a large number of pro-Allied immigrants, including the largest Belgian immigrant community in the entire country; for this reason, anti-German and pro-war sentiment ran significantly higher in Green Bay than in the country overall. There was also a large Serbian-American community in Alaska, then a territory, which was enthusiastically in favor of US entry into World War I: the community had been outspoken in support of entry beforehand, and thousands of Serbian immigrants and Serbian Americans from Alaska volunteered for the U.S. Army shortly after the declaration of war. Henry Ford supported the pacifist cause by sponsoring a large-scale private peace mission, with numerous activists and intellectuals aboard the "Peace Ship" (the ocean liner Oscar II). Ford chartered the ship in 1915 and invited prominent peace activists to join him to meet with leaders on both sides in Europe, hoping to create enough publicity to prompt the belligerent nations to convene a peace conference and mediate an end to the war. The mission was widely mocked by the press as a "Ship of Fools". Infighting between the activists, mockery by the press contingent aboard, and an outbreak of influenza marred the voyage. Four days after the ship arrived in neutral Norway, a beleaguered and physically ill Ford abandoned the mission and returned to the United States; he had demonstrated that small independent efforts accomplished nothing. On July 24, 1915, the German embassy's commercial attaché, Heinrich Albert, left his briefcase on a train in New York City, where an alert Secret Service agent, Frank Burke, snatched it. Wilson let the newspapers publish the contents, which indicated a systematic effort by Berlin to subsidize friendly newspapers and block British purchases of war materials. Berlin's top espionage agent, the debonair Franz Rintelen von Kleist, was spending millions to finance sabotage in Canada, stir up trouble between the United States and Mexico, and incite labor strikes. Germany took the blame as Americans grew ever more worried about the vulnerability of a free society to subversion.
Indeed, one of the main fears Americans of all stations had in 1916–1919 was that spies and saboteurs were everywhere. This sentiment played a major role in arousing fear of Germany and suspicion of everyone of German descent who could not "prove" 100% loyalty. Preparedness movement By 1915, Americans were paying much more attention to the war. The sinking of the Lusitania had a strong effect on public opinion because of the deaths of US civilians. That year, a strong "Preparedness" movement emerged. Proponents argued that the United States needed to immediately build up strong naval and land forces for defensive purposes; an unspoken assumption was that the US would fight sooner or later. General Leonard Wood (still on active duty after serving a term as Chief of Staff of the Army), former President Theodore Roosevelt, and former secretaries of war Elihu Root and Henry Stimson were the driving forces behind Preparedness, along with many of the nation's most prominent bankers, industrialists, lawyers and scions of prominent families. There emerged an "Atlanticist" foreign policy establishment, a group of influential Americans drawn primarily from upper-class lawyers, bankers, academics, and politicians of the Northeast, committed to a strand of Anglophile internationalism. Representative was Paul D. Cravath, one of New York's foremost corporation lawyers. For Cravath, in his mid-fifties when the war began, the conflict served as an epiphany, sparking an interest in international affairs that dominated his remaining career. Fiercely Anglophile, he strongly supported US intervention in the war and hoped that close Anglo-American cooperation would be the guiding principle of postwar international organization. The Preparedness movement had a "realistic" philosophy of world affairs: its members believed that economic strength and military muscle were more decisive than idealistic crusades focused on causes like democracy and national self-determination. Emphasizing over and over the weak state of national defenses, they showed that the US's 100,000-man Army, even augmented by the 112,000 National Guardsmen, was outnumbered twenty to one by Germany's army, which was drawn from a smaller population. Similarly, in 1915 the armed forces of Britain and her Empire, France, Russia, Austria-Hungary, the Ottoman Empire, Italy, Bulgaria, Romania, Serbia, Belgium, Japan and Greece were all larger and more experienced than the United States military, in many cases significantly so. Reform, to them, meant UMT, or "universal military training". They proposed a national service program under which the 600,000 men who turned 18 every year would be required to spend six months in military training and afterwards be assigned to reserve units; the small regular army would primarily be a training agency. Antimilitarists complained that the plan would make the US resemble Germany (which required two years' active duty). Advocates retorted that military "service" was an essential duty of citizenship and that without the commonality provided by such service the nation would splinter into antagonistic ethnic groups. One spokesman promised that UMT would become "a real melting pot, under which the fire is hot enough to fuse the elements into one common mass of Americanism". Furthermore, they promised, the discipline and training would make for a better-paid workforce. Hostility to military service was strong at the time, however, and the program failed to win approval.
During World War II, when Stimson, as Secretary of War, proposed a similar program of universal peacetime service, he was defeated. Underscoring its commitment, the Preparedness movement set up and funded its own summer training camps at Plattsburgh, New York, and other sites, where 40,000 college alumni became physically fit, learned to march and shoot, and ultimately provided the cadre of a wartime officer corps. Suggestions by labor unions that talented working-class youth be invited to Plattsburgh were ignored. The Preparedness movement was distant not only from the working classes but also from the middle-class leadership of most of small-town America. It had little use for the National Guard, which it saw as politicized, localistic, poorly armed, ill trained, too inclined to idealistic crusading (as against Spain in 1898), and too lacking in understanding of world affairs. The National Guard, on the other hand, was securely rooted in state and local politics, with representation from a very broad cross-section of US society, and it was one of the nation's few institutions that (in some northern states) accepted blacks on an equal footing. The Democratic Party saw the Preparedness movement as a threat: Roosevelt, Root and Wood were prospective Republican presidential candidates. More subtly, the Democrats were rooted in a localism that appreciated the National Guard, and their voters were hostile to the rich and powerful in the first place. Working with the Democrats who controlled Congress, Wilson was able to sidetrack the Preparedness forces. Army and Navy leaders were forced to testify before Congress that the nation's military was in excellent shape. In fact, neither the Army nor the Navy was in shape for war. The Navy had fine ships, but Wilson had been using them to threaten Mexico, and the fleet's readiness had suffered. The crews of the Texas and the New York, the two newest and largest battleships, had never fired a gun, and the morale of the sailors was low. In addition, the fleet was outnumbered and outgunned compared with the British and German navies. The Army and Navy air forces were tiny. Despite the flood of new weapons systems created by the British, Germans, French, Austro-Hungarians, Italians, and others in the war in Europe, the Army was paying scant attention; for example, it was making no studies of trench warfare, poison gas, heavy artillery, or tanks, and was utterly unfamiliar with the rapid evolution of aerial warfare. The Democrats in Congress tried to cut the military budget in 1915. The Preparedness movement effectively exploited the surge of outrage over the Lusitania in May 1915, forcing the Democrats to promise some improvements to the military and naval forces. Wilson, less fearful of the navy, embraced a long-term building program designed to make the fleet the equal of the Royal Navy by the mid-1920s, although this would not be achieved until World War II. "Realism" was at work here: the admirals were Mahanians, and they therefore wanted a surface fleet of heavy battleships second to none, that is, equal to Britain's. The facts of submarine warfare (which necessitated destroyers, not battleships) and the possibilities of imminent war with Germany (or with Britain, for that matter) were simply ignored. Wilson's program for the Army touched off a firestorm.
Secretary of War Lindley Garrison adopted many of the proposals of the Preparedness leaders, especially their emphasis on a large federal reserve and the abandonment of the National Guard. Garrison's proposals not only outraged the localistic politicians of both parties, they also offended a strongly held belief of the liberal wing of the Progressive movement: that warfare always had a hidden economic motivation. Specifically, they warned that the chief warmongers were New York bankers (like J. P. Morgan) with millions at risk, profiteering munitions makers (like Bethlehem Steel, which made armor, and DuPont, which made powder), and unspecified industrialists searching for global markets to control. Antiwar critics blasted them all. These special interests were too powerful, especially, Senator La Follette noted, in the conservative wing of the Republican Party. The only road to peace, Bryan reiterated, was disarmament. Garrison's plan unleashed the fiercest battle in peacetime history over the relationship of military planning to national goals. In peacetime, War Department arsenals and navy yards manufactured nearly all munitions that lacked civilian uses, including warships, artillery, naval guns, and shells; items available on the civilian market, such as food, horses, saddles, wagons, and uniforms, were always purchased from civilian contractors. Armor plate (and after 1918, airplanes) was an exception that has caused unremitting controversy for a century. After World War II, the arsenals and navy yards became much less important than the giant civilian aircraft and electronics firms, which formed the second half of the "military-industrial complex". Peace leaders like Jane Addams of Hull House and David Starr Jordan of Stanford redoubled their efforts, now turning their voices against the president because he was "sowing the seeds of militarism, raising up a military and naval caste". Many ministers, professors, farm spokesmen, and labor-union leaders joined in, with powerful support from Claude Kitchin and his band of four dozen Southern Democrats in Congress who took control of the House Military Affairs Committee. Wilson, in deep trouble, took his cause to the people in a major speaking tour in early 1916, a warmup for his reelection campaign that fall. He seems to have won over the middle classes, but had little impact on the largely ethnic working classes and the deeply isolationist farmers. Congress still refused to budge, so Wilson replaced Garrison as Secretary of War with Newton Baker, the Democratic mayor of Cleveland and an outspoken opponent of preparedness (Garrison kept quiet, but felt Wilson was "a man of high ideals but no principles"). The upshot was a compromise passed in May 1916, as the war raged on and Berlin debated whether the US was so weak it could be ignored. The Army was to double in size to 11,300 officers and 208,000 men, with no reserve, and the National Guard was to be enlarged over five years to 440,000 men. Summer camps on the Plattsburgh model were authorized for new officers, and the government was given $20 million to build a nitrate plant of its own. Preparedness supporters were downcast; the antiwar people were jubilant, for the US would now be too weak to go to war. The House gutted Wilson's naval plans as well, defeating a "big navy" plan by 189 to 183 and scuttling the battleships. Then news arrived of the great sea battle between Britain and Germany, the Battle of Jutland.
The navalists used the battle to argue for the primacy of seapower; they then took control in the Senate, broke the House coalition, and authorized a rapid three-year buildup of all classes of warships. A new weapons system, naval aviation, received $3.5 million, and the government was authorized to build its own armor-plate factory. The very weakness of US military power encouraged Berlin to start its unrestricted submarine attacks in 1917: Berlin knew this meant war with the US, but it could discount the immediate risk because the US Army was negligible and the new warships would not be at sea until 1919, by which time it believed the war would be over, with Germany victorious. The argument that armaments led to war was turned on its head: most Americans came to fear that the failure to arm in 1916 had made aggression against the US more likely. The United States had remained aloof from the arms race in which the European powers had engaged during the decades leading up to the war. The US Army numbered slightly more than 100,000 active-duty soldiers in 1916; by that time the French, British, Russian and German armies had all fought battles in which more than 10,000 men were killed in a single day, and campaigns in which total casualties exceeded 200,000. In other words, the entire United States Army, as it stood on the eve of intervention, could have been wiped out in a single week of the fighting that had characterized the war to date. Americans felt an increasing need for a military that could command respect. As one editor put it, "The best thing about a large army and a strong navy is that they make it so much easier to say just what we want to say in our diplomatic correspondence." Berlin had thus far backed down and apologized when Washington was angry, boosting US self-confidence. The US's rights and honor increasingly came into focus, and the slogan "Peace" gave way to "Peace with Honor". The Army remained unpopular, however. A recruiter in Indianapolis noted that "The people here do not take the right attitude towards army life as a career, and if a man joins from here he often tries to go out on the quiet". The Preparedness movement used its easy access to the mass media to demonstrate that the War Department had no plans, no equipment, little training, no reserve, a laughable National Guard, and a wholly inadequate organization for war. At a time when European generals were directing field armies of several corps, on combat fronts that stretched for dozens or hundreds of miles, no active-duty US general officer had commanded more than a division. Motion pictures like The Battle Cry of Peace (1915) depicted invasions of the US homeland that demanded action. The readiness and capability of the US Navy was a matter of controversy. The press at the time reported that the only thing the military was ready for was an enemy fleet attempting to seize New York harbor, at a time when the German battle fleet was penned up by the Royal Navy. The Navy Secretary, Josephus Daniels, was a journalist with pacifist leanings. He had built up the educational resources of the Navy and made its Naval War College in Newport, Rhode Island an essential experience for would-be admirals, but he alienated the officer corps with his moralistic reforms, including no wine in the officers' mess, no hazing at the Naval Academy, and more chaplains and YMCAs. Daniels, as a newspaperman, knew the value of publicity.
In 1915 he set up the Naval Consulting Board, headed by Thomas Edison, to obtain the advice and expertise of leading scientists, engineers, and industrialists. It popularized technology, naval expansion, and military preparedness, and was well covered in the media. But according to the historian Paolo Coletta, Daniels ignored the nation's strategic needs and disdained the advice of his experts: he suspended meetings of the Joint Army and Navy Board for two years because it was giving unwelcome advice, cut the General Board's recommendations for new ships in half, reduced the authority of officers in the Navy yards where ships were built and repaired, and ignored the administrative chaos in his department. Bradley Fiske, one of the most innovative admirals in US naval history, was Daniels' top aide in 1914; he recommended a reorganization that would prepare for war, but Daniels refused. Instead he replaced Fiske in 1915 and brought in, for the new post of Chief of Naval Operations, an unknown captain, William Benson. Chosen for his compliance, Benson proved to be a wily bureaucrat who was more interested in preparing the US Navy for a possible eventual showdown with Britain than for an immediate one with Germany. Benson told Admiral William Sims that he "would as soon fight the British as the Germans". Proposals to send observers to Europe were blocked, leaving the Navy in the dark about the success of the German submarine campaign. Sims charged after the war that in April 1917 only ten percent of the Navy's warships were fully manned; the rest lacked 43% of their seamen. Light antisubmarine ships were few in number, as if Daniels had been unaware of the German submarine menace that had been the focus of foreign policy for two years. The Navy's only warfighting plan, the "Black Plan", assumed that the Royal Navy did not exist and that German battleships were moving freely about the Atlantic and the Caribbean, threatening the Panama Canal. Daniels' tenure would have been even less successful but for the energetic efforts of Assistant Secretary Franklin D. Roosevelt, who effectively ran the Department. His most recent biographer concludes that "it is true that Daniels had not prepared the navy for the war it would have to fight." Decision for war By 1916 a new factor was emerging: a sense of national self-interest and US nationalism. The unbelievable casualty figures in Europe were sobering; two vast battles had caused over one million casualties each. Clearly this war would be a decisive episode in the history of the world. Every effort to find a peaceful solution had been unavailing. Kendrick Clements argues that bureaucratic decision-making was one of the main forces pushing the United States toward declaring war on Germany and aligning itself with the Allies. He writes, referring to the demand that submarines follow cruiser rules rather than American ships simply avoiding the war zone: "The problem with the American policy toward submarine warfare that was set in February 1915 was not that it was necessarily wrong, but that it was determined almost casually, without careful analysis either of its implications or of any alternatives." Secretary of State William Jennings Bryan spent most of the fall of 1914 out of contact with the State Department, leaving the more conservative Robert Lansing to shape US foreign policy at the time. Many seemingly small decisions made by Lansing during this period would eventually stack up, shifting US support toward the Allies.
Then, with the announcement of the U-boat campaign in February 1915, Lansing produced a draft containing the phrase "strict accountability". Initially unexplained, this gradually became a doctrine justifying the use of force. In early 1917, Kaiser Wilhelm II forced the issue. After discussions in a 9 January 1917 German Crown Council meeting, the decision to target neutral shipping in a designated war zone was made public on January 31, 1917. This became the immediate cause of the entry of the United States into the war. Germany sank ten US merchant ships from February 3, 1917, through April 4, 1917, though news about the schooner Marguerite did not arrive until after Wilson signed the declaration of war. Outraged public opinion now overwhelmingly supported Wilson when he asked Congress for a declaration of war on April 2, 1917. Congress (not merely the Senate) approved it on April 6, 1917, and Wilson signed it the following afternoon. The Germans had anticipated that unrestricted submarine warfare would lead to war and thus tried to line up new allies ahead of the announcement, especially Mexico. Arthur Zimmermann, the German foreign minister, sent the Zimmermann Telegram to Mexico on January 16, 1917. Zimmermann invited Mexico (knowing their resentment towards America since the 1848 Mexican Cession) to join in a war against the United States if the United States declared war on Germany. Germany promised to pay for Mexico's costs and to help it recover the territory forcibly annexed by the United States in 1848. These territories included the present-day states of California, Nevada, Utah, most of Arizona, about half of New Mexico and a quarter of Colorado. British intelligence intercepted and decoded the telegram and passed it to the Wilson administration. The White House released it to the press on March 1. This exacerbated American anger even as Germany continued to sink US ships, undermining the efforts of isolationists in the Senate who filibustered to block legislation for arming American merchant ships to defend themselves. Public opinion, moralism, and national interest Historians such as Ernest R. May have approached the process of the US entry into the war as a study in how public opinion changed radically in three years' time. In 1914 most Americans called for neutrality, seeing the war as a dreadful mistake, and were determined to stay out. By 1917 the same public felt just as strongly that going to war was both necessary and wise. Military leaders had little to say during this debate, and military considerations were seldom raised. The decisive questions dealt with morality and visions of the future. The prevailing attitude was that the US possessed a superior moral position as the only great nation devoted to the principles of freedom and democracy. By staying aloof from the squabbles of reactionary empires, it could preserve those ideals—sooner or later the rest of the world would come to appreciate and adopt them. In 1917 this very long-run program faced the severe danger that in the short run powerful forces adverse to democracy and freedom would triumph. Strong support for moralism came from religious leaders, women (led by Jane Addams), and from public figures like long-time Democratic leader William Jennings Bryan, the Secretary of State from 1913 to 1916. The most important moralist of all was President Woodrow Wilson—the man who dominated decision making so totally that the war has been labeled, from a US perspective, "Wilson's War". 
In 1917 Wilson won the support of most of the moralists by proclaiming "a war to make the world safe for democracy." If they truly believed in their ideals, he explained, now was the time to fight. The question then became whether Americans would fight for what they deeply believed in, and the answer turned out to be a resounding "Yes". Some of this attitude was mobilized by the Spirit of 1917, which evoked the Spirit of '76. Antiwar activists at the time and in the 1930s alleged that beneath the veneer of moralism and idealism there must have been ulterior motives. Some suggested a conspiracy on the part of New York City bankers holding $3 billion of war loans to the Allies, or steel and chemical firms selling munitions to the Allies. The interpretation was popular among left-wing Progressives (led by Senator Robert La Follette of Wisconsin) and among the "agrarian" wing of the Democratic party—including the chairman of the tax-writing Ways and Means Committee of the House. He strenuously opposed war, and when it came he rewrote the tax laws to make sure the rich paid the most. (In the 1930s neutrality laws were passed to prevent financial entanglements from dragging the nation into a war.) In 1915, Bryan thought that Wilson's pro-British sentiments had unduly influenced his policies, so he became the first Secretary of State ever to resign in protest. However, historian Harold C. Syrett demonstrated that business in general supported neutrality. Other historians state that the pro-war element was animated not by profit but by disgust with what Germany actually did, especially in Belgium, and the threat it represented to US ideals. Belgium kept the public's sympathy as the Germans executed civilians and the English nurse Edith Cavell. American engineer Herbert Hoover led a private relief effort that won wide support. Compounding the Belgian atrocities were new weapons that Americans found repugnant, like poison gas and the aerial bombardment of innocent civilians as Zeppelins dropped bombs on London. Even anti-war spokesmen did not claim that Germany was innocent, and pro-German scripts were poorly received. Randolph Bourne criticized the moralist philosophy, claiming it was a justification by US intellectual and power elites, like President Wilson, for going to war unnecessarily. He argued that the push for war started with the Preparedness movement, fueled by big business. While big business would not push much further than Preparedness, benefitting the most from neutrality, the movement would eventually evolve into a war cry, led by war-hawk intellectuals under the guise of moralism. Bourne believed elites knew full well what going to war would entail and the price in US lives it would cost. If US elites could portray the United States' role in the war as noble, they could convince the generally isolationist US public that war would be acceptable. Above all, US attitudes towards Germany focused on the U-boats (submarines), which sank RMS Lusitania in 1915 and other passenger ships without warning. That appeared to Americans as an unacceptable challenge to the US's rights as a neutral country, and as an unforgivable affront to humanity. After repeated diplomatic protests, Germany agreed to stop. But in 1917 the German military leadership decided that "military necessity" dictated the unrestricted use of their submarines. The Kaiser's advisors felt the US was enormously powerful economically but too weak militarily to make a difference. 
Twenty years after World War I ended, 70% of Americans polled believed that US participation in the war had been a mistake. Declaration of war On April 2, 1917, Wilson asked a special joint session of Congress to declare war on the German Empire, stating, "We have no selfish ends to serve". To rally support, he painted the conflict idealistically, stating that the war would "make the world safe for democracy" and later that it would be a "war to end war". The United States had a moral responsibility to enter the war, Wilson proclaimed. The future of the world was being determined on the battlefield, and US national interest demanded a voice. Wilson's definition of the situation won wide acclaim, and, indeed, has shaped the US's role in world and military affairs ever since. Wilson believed that if the Central Powers won, the consequences would be bad for the United States. Germany would have dominated the continent and perhaps would gain control of the seas as well. Latin America could well have fallen under Berlin's control. The dream of spreading democracy, liberalism, and independence would have been shattered. On the other hand, if the Allies had won without help, there was a danger they would carve up the world without regard to US commercial interests. They were already planning to use government subsidies, tariff walls, and controlled markets to counter the competition posed by US businessmen. The solution was a third route, a "peace without victory", according to Wilson. On April 6, 1917, Congress declared war. In the Senate, the resolution passed 82 to 6, with Senators Harry Lane, William J. Stone, James Vardaman, Asle Gronna, Robert M. La Follette, Sr., and George W. Norris voting against it. In the House, the declaration passed 373 to 50, with Claude Kitchin, a senior Democrat from North Carolina, notably opposing it. Another opponent was Jeannette Rankin, the first woman in Congress. Nearly all of the opposition came from the West and the Midwest. The United States Senate, in a 74 to 0 vote, declared war on Austria-Hungary on December 7, 1917, citing Austria-Hungary's severing of diplomatic relations with the United States, its use of unrestricted submarine warfare and its alliance with Germany. The declaration passed in the United States House of Representatives by a vote of 365 to 1. The sole dissenting ballot was cast by Meyer London, a Socialist Party of America member and New York congressman. President Wilson also came under pressure from Senator Henry Cabot Lodge, and from former President Theodore Roosevelt, who demanded a declaration of war on the Ottoman Empire and Bulgaria, as Germany's allies. President Wilson drafted a statement to Congress in December 1917 which said "I... recommend that Congress immediately declare the United States in a state of war with Austria-Hungary, with Turkey and with Bulgaria". However, after further consultations, the decision to go to war against Germany's other allies was postponed. |
======================================== |
[SOURCE: https://en.wikipedia.org/wiki/E._Mary_Smallwood] | [TOKENS: 312] |
Contents E. Mary Smallwood Edith Mary Smallwood (8 December 1919 – 4 September 2023) was a British historian and professor of Jewish history under the Romans at Queen's University, Belfast. Early life Smallwood was born in Wandsworth, Surrey (now London) in December 1919. She received her education at the Mary Datchelor Girls’ School, Camberwell, and at Girton College, University of Cambridge, to which she won a scholarship. She graduated with First Class Honours in Classics (1942), and was later a Research Fellow at Girton, gaining her PhD in 1951 under the supervision of Prof. Jocelyn Toynbee. Career Mary Smallwood was appointed lecturer in classics (in the Latin dept) at the Queen's University, Belfast, in 1951. She became senior lecturer in 1963, reader in 1967, and was awarded a personal chair as professor of Romano-Jewish History in 1978. For the 1971–72 academic year she was a member of the School of Historical Studies at the Institute for Advanced Study, Princeton. She was elected a Fellow of the Society of Antiquaries in 1972. She retired to Edinburgh in 1983. Death Smallwood died at Cluny Lodge, Edinburgh on 4 September 2023, at the age of 103. |
======================================== |
[SOURCE: https://en.wikipedia.org/wiki/Animal#cite_note-146] | [TOKENS: 6011] |
Contents Animal Animals are multicellular, eukaryotic organisms belonging to the biological kingdom Animalia (/ˌænɪˈmeɪliə/). With few exceptions, animals consume organic material, breathe oxygen, have myocytes and are able to move, can reproduce sexually, and grow from a hollow sphere of cells, the blastula, during embryonic development. Animals form a clade, meaning that they arose from a single common ancestor. Over 1.5 million living animal species have been described, of which around 1.05 million are insects, over 85,000 are molluscs, and around 65,000 are vertebrates. It has been estimated there are as many as 7.77 million animal species on Earth. Animal body lengths range from 8.5 μm (0.00033 in) to 33.6 m (110 ft). They have complex ecologies and interactions with each other and their environments, forming intricate food webs. The scientific study of animals is known as zoology, and the study of animal behaviour is known as ethology. The animal kingdom is divided into five major clades, namely Porifera, Ctenophora, Placozoa, Cnidaria and Bilateria. Most living animal species belong to the clade Bilateria, a highly proliferative clade whose members have a bilaterally symmetric and significantly cephalised body plan, and the vast majority of bilaterians belong to two large clades: the protostomes, which include organisms such as arthropods, molluscs, flatworms, annelids and nematodes; and the deuterostomes, which include echinoderms, hemichordates and chordates, the latter of which contains the vertebrates. The much smaller basal phylum Xenacoelomorpha has an uncertain position within Bilateria. Animals first appeared in the fossil record in the late Cryogenian period and diversified in the subsequent Ediacaran period in what is known as the Avalon explosion. Nearly all modern animal phyla first appeared in the fossil record as marine species during the Cambrian explosion, which began around 539 million years ago (Mya), and most classes during the Ordovician radiation 485.4 Mya. Common to all living animals, 6,331 groups of genes have been identified that may have arisen from a single common ancestor that lived about 650 Mya during the Cryogenian period. Historically, Aristotle divided animals into those with blood and those without. Carl Linnaeus created the first hierarchical biological classification for animals in 1758 with his Systema Naturae, which Jean-Baptiste Lamarck expanded into 14 phyla by 1809. In 1874, Ernst Haeckel divided the animal kingdom into the multicellular Metazoa (now synonymous with Animalia) and the Protozoa, single-celled organisms no longer considered animals. In modern times, the biological classification of animals relies on advanced techniques, such as molecular phylogenetics, which are effective at demonstrating the evolutionary relationships between taxa. Humans make use of many other animal species for food (including meat, eggs, and dairy products), for materials (such as leather, fur, and wool), as pets, and as working animals for transportation and services. Dogs, the first domesticated animal, have been used in hunting, in security and in warfare, as have horses, pigeons and birds of prey; while other terrestrial and aquatic animals are hunted for sports, trophies or profits. Non-human animals are also an important cultural element of human evolution, having appeared in cave arts and totems since the earliest times, and are frequently featured in mythology, religion, arts, literature, heraldry, politics, and sports. 
Etymology The word animal comes from the Latin noun animal of the same meaning, which is itself derived from Latin animalis 'having breath or soul'. The biological definition includes all members of the kingdom Animalia. In colloquial usage, the term animal is often used to refer only to nonhuman animals. The term metazoa is derived from Ancient Greek μετα meta 'after' (in biology, the prefix meta- stands for 'later') and ζῷᾰ zōia 'animals', plural of ζῷον zōion 'animal'. A metazoan is any member of the group Metazoa. Characteristics Animals have several characteristics that they share with other living things. Animals are eukaryotic, multicellular, and aerobic, as are plants and fungi. Unlike plants and algae, which produce their own food, animals cannot produce their own food, a feature they share with fungi. Animals ingest organic material and digest it internally. Animals have structural characteristics that set them apart from all other living things: Typically, there is an internal digestive chamber with either one opening (in Ctenophora, Cnidaria, and flatworms) or two openings (in most bilaterians). Animal development is controlled by Hox genes, which signal the times and places to develop structures such as body segments and limbs. During development, the animal extracellular matrix forms a relatively flexible framework upon which cells can move about and be reorganised into specialised tissues and organs, making the formation of complex structures possible, and allowing cells to be differentiated. The extracellular matrix may be calcified, forming structures such as shells, bones, and spicules. In contrast, the cells of other multicellular organisms (primarily algae, plants, and fungi) are held in place by cell walls, and so develop by progressive growth. Nearly all animals make use of some form of sexual reproduction. They produce haploid gametes by meiosis; the smaller, motile gametes are spermatozoa and the larger, non-motile gametes are ova. These fuse to form zygotes, which develop via mitosis into a hollow sphere, called a blastula. In sponges, blastula larvae swim to a new location, attach to the seabed, and develop into a new sponge. In most other groups, the blastula undergoes more complicated rearrangement. It first invaginates to form a gastrula with a digestive chamber and two separate germ layers, an external ectoderm and an internal endoderm. In most cases, a third germ layer, the mesoderm, also develops between them. These germ layers then differentiate to form tissues and organs. Repeated instances of mating with a close relative during sexual reproduction generally lead to inbreeding depression within a population due to the increased prevalence of harmful recessive traits. Animals have evolved numerous mechanisms for avoiding close inbreeding. Some animals are capable of asexual reproduction, which often results in a genetic clone of the parent. This may take place through fragmentation; budding, such as in Hydra and other cnidarians; or parthenogenesis, where fertile eggs are produced without mating, such as in aphids. Ecology Animals are categorised into ecological groups depending on their trophic levels and how they consume organic material. Such groupings include carnivores (further divided into subcategories such as piscivores, insectivores, ovivores, etc.), herbivores (subcategorised into folivores, graminivores, frugivores, granivores, nectarivores, algivores, etc.), omnivores, fungivores, scavengers/detritivores, and parasites. 
Interactions between animals of each biome form complex food webs within that ecosystem. In carnivorous or omnivorous species, predation is a consumer–resource interaction where the predator feeds on another organism, its prey, which often evolves anti-predator adaptations to avoid being fed upon. Selective pressures imposed on one another lead to an evolutionary arms race between predator and prey, resulting in various antagonistic/competitive coevolutions. Almost all multicellular predators are animals. Some consumers use multiple methods; for example, in parasitoid wasps, the larvae feed on the hosts' living tissues, killing them in the process, but the adults primarily consume nectar from flowers. Other animals may have very specific feeding behaviours, such as hawksbill sea turtles, which mainly eat sponges. Most animals rely on the biomass and bioenergy produced by plants and phytoplankton (collectively called producers) through photosynthesis. Herbivores, as primary consumers, eat the plant material directly to digest and absorb the nutrients, while carnivores and other animals on higher trophic levels indirectly acquire the nutrients by eating the herbivores or other animals that have eaten the herbivores. Animals oxidise carbohydrates, lipids, proteins and other biomolecules in cellular respiration, which allows the animal to grow and to sustain basal metabolism and fuel other biological processes such as locomotion. Some benthic animals living close to hydrothermal vents and cold seeps on the dark sea floor consume organic matter produced through chemosynthesis (via oxidising inorganic compounds such as hydrogen sulfide) by archaea and bacteria. Animals originated in the ocean; all extant animal phyla, except for Micrognathozoa and Onychophora, feature at least some marine species. However, several lineages of arthropods began to colonise land around the same time as land plants, probably between 510 and 471 million years ago, during the Late Cambrian or Early Ordovician. Vertebrates such as the lobe-finned fish Tiktaalik started to move on to land in the late Devonian, about 375 million years ago. Other notable animal groups that colonised land environments are Mollusca, Platyhelminthes, Annelida, Tardigrada, Onychophora, Rotifera, and Nematoda. Animals occupy virtually all of Earth's habitats and microhabitats, with faunas adapted to salt water, hydrothermal vents, fresh water, hot springs, swamps, forests, pastures, deserts, air, and the interiors of other organisms. Animals are, however, not particularly heat-tolerant; very few of them can survive at constant temperatures above 50 °C (122 °F) or in the most extreme cold deserts of continental Antarctica. The collective global geomorphic influence of animals on the processes shaping the Earth's surface remains largely understudied, with most studies limited to individual species and well-known exemplars. Diversity The blue whale (Balaenoptera musculus) is the largest animal that has ever lived, weighing up to 190 tonnes and measuring up to 33.6 metres (110 ft) long. The largest extant terrestrial animal is the African bush elephant (Loxodonta africana), weighing up to 12.25 tonnes and measuring up to 10.67 metres (35.0 ft) long. The largest terrestrial animals that ever lived were titanosaur sauropod dinosaurs such as Argentinosaurus, which may have weighed as much as 73 tonnes, and Supersaurus, which may have reached 39 metres. 
Several animals are microscopic; some Myxozoa (obligate parasites within the Cnidaria) never grow larger than 20 μm, and one of the smallest species (Myxobolus shekel) is no more than 8.5 μm when fully grown. The following table lists estimated numbers of described extant species for the major animal phyla, along with their principal habitats (terrestrial, fresh water, and marine), and free-living or parasitic ways of life. Species estimates shown here are based on numbers described scientifically; much larger estimates have been calculated based on various means of prediction, and these can vary wildly. For instance, around 25,000–27,000 species of nematodes have been described, while published estimates of the total number of nematode species include 10,000–20,000; 500,000; 10 million; and 100 million. Using patterns within the taxonomic hierarchy, the total number of animal species—including those not yet described—was calculated to be about 7.77 million in 2011.[a] Evolutionary origin Evidence of animals is found as long ago as the Cryogenian period. 24-Isopropylcholestane (24-ipc) has been found in rocks from roughly 650 million years ago; it is only produced by sponges and pelagophyte algae. Its likely origin is from sponges based on molecular clock estimates for the origin of 24-ipc production in both groups. Analyses of pelagophyte algae consistently recover a Phanerozoic origin, while analyses of sponges recover a Neoproterozoic origin, consistent with the appearance of 24-ipc in the fossil record. The first body fossils of animals appear in the Ediacaran, represented by forms such as Charnia and Spriggina. It had long been doubted whether these fossils truly represented animals, but the discovery of the animal lipid cholesterol in fossils of Dickinsonia establishes their nature. Animals are thought to have originated under low-oxygen conditions, suggesting that they were capable of living entirely by anaerobic respiration, but as they became specialised for aerobic metabolism they became fully dependent on oxygen in their environments. Many animal phyla first appear in the fossil record during the Cambrian explosion, starting about 539 million years ago, in beds such as the Burgess Shale. Extant phyla in these rocks include molluscs, brachiopods, onychophorans, tardigrades, arthropods, echinoderms and hemichordates, along with numerous now-extinct forms such as the predatory Anomalocaris. The apparent suddenness of the event may however be an artefact of the fossil record, rather than showing that all these animals appeared simultaneously. That view is supported by the discovery of Auroralumina attenboroughii, the earliest known Ediacaran crown-group cnidarian (557–562 mya, some 20 million years before the Cambrian explosion) from Charnwood Forest, England. It is thought to be one of the earliest predators, catching small prey with its nematocysts as modern cnidarians do. Some palaeontologists have suggested that animals appeared much earlier than the Cambrian explosion, possibly as early as 1 billion years ago. Early fossils that might represent animals appear for example in the 665-million-year-old rocks of the Trezona Formation of South Australia. These fossils are interpreted as most probably being early sponges. Trace fossils such as tracks and burrows found in the Tonian period (from 1 gya) may indicate the presence of triploblastic worm-like animals, roughly as large (about 5 mm wide) and complex as earthworms. 
However, similar tracks are produced by the giant single-celled protist Gromia sphaerica, so the Tonian trace fossils may not indicate early animal evolution. Around the same time, the layered mats of microorganisms called stromatolites decreased in diversity, perhaps due to grazing by newly evolved animals. Objects such as sediment-filled tubes that resemble trace fossils of the burrows of wormlike animals have been found in 1.2 gya rocks in North America, in 1.5 gya rocks in Australia and North America, and in 1.7 gya rocks in Australia. Their interpretation as having an animal origin is disputed, as they might be water-escape or other structures. Phylogeny Animals are monophyletic, meaning they are derived from a common ancestor. Animals are the sister group to the choanoflagellates, with which they form the Choanozoa. Ros-Rocher and colleagues (2021) trace the origins of animals to unicellular ancestors, providing an external phylogeny in which Holomycota (including the fungi), Ichthyosporea, Pluriformea, and Filasterea branch off as successively closer relatives of the Choanozoa; uncertain relationships are indicated with dashed lines in their cladogram. The animal clade had certainly originated by 650 mya, and may have come into being as much as 800 mya, based on molecular clock evidence for different phyla. The relationships at the base of the animal tree have been debated. Other than Ctenophora, the Bilateria and Cnidaria are the only groups with symmetry, and other evidence shows they are closely related. Like the sponges, Placozoa has no symmetry and was often considered a "missing link" between protists and multicellular animals. The presence of Hox genes in Placozoa shows that they were once more complex. The Porifera (sponges) have long been assumed to be sister to the rest of the animals, but there is evidence that the Ctenophora may be in that position. Molecular phylogenetics has supported both the sponge-sister and ctenophore-sister hypotheses. In 2017, Roberto Feuda and colleagues, using amino acid differences, presented both, supporting a sponge-sister tree in which Porifera branches first, followed by Ctenophora, then Placozoa, and finally Cnidaria plus Bilateria (their ctenophore-sister tree simply interchanges the places of ctenophores and sponges). Conversely, a 2023 study by Darrin Schultz and colleagues uses ancient gene linkages to support a ctenophore-sister phylogeny, in which Ctenophora branches first, followed by Porifera, then Placozoa, and finally Cnidaria plus Bilateria; both rootings are sketched in the code example below. Sponges are physically very distinct from other animals, and were long thought to have diverged first, representing the oldest animal phylum and forming a sister clade to all other animals. Despite their morphological dissimilarity with all other animals, genetic evidence suggests sponges may be more closely related to other animals than the comb jellies are. Sponges lack the complex organisation found in most other animal phyla; their cells are differentiated, but in most cases not organised into distinct tissues, unlike all other animals. They typically feed by drawing in water through pores, filtering out small particles of food. The Ctenophora and Cnidaria are radially symmetric and have digestive chambers with a single opening, which serves as both mouth and anus. Animals in both phyla have distinct tissues, but these are not organised into discrete organs. They are diploblastic, having only two main germ layers, ectoderm and endoderm. The tiny placozoans have no permanent digestive chamber and no symmetry; they superficially resemble amoebae. Their phylogeny is poorly defined, and under active research. 
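The two competing rootings just described are, at bottom, small tree data structures and can be written compactly in Newick notation. The following is a minimal illustrative sketch in Python, assuming the Biopython library (Bio.Phylo) is available; the Newick strings encode only the branching orders given above, with no branch lengths:

from io import StringIO
from Bio import Phylo  # Biopython, assumed to be installed

# Feuda et al. 2017: sponges are the sister group to all other animals.
sponge_sister = "(Porifera,(Ctenophora,(Placozoa,(Cnidaria,Bilateria))));"
# Schultz et al. 2023: comb jellies are the sister group instead.
ctenophore_sister = "(Ctenophora,(Porifera,(Placozoa,(Cnidaria,Bilateria))));"

for label, newick in [("sponge-sister", sponge_sister),
                      ("ctenophore-sister", ctenophore_sister)]:
    tree = Phylo.read(StringIO(newick), "newick")
    print(label)
    Phylo.draw_ascii(tree)  # text rendering of the cladogram

Swapping the two outermost taxa is the entire difference between the hypotheses, which is why the debate turns on subtle genomic signal rather than on gross anatomy.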
The remaining animals, the great majority—comprising some 29 phyla and over a million species—form the Bilateria clade, whose members have a bilaterally symmetric body plan. The Bilateria are triploblastic, with three well-developed germ layers, and their tissues form distinct organs. The digestive chamber has two openings, a mouth and an anus, and in the Nephrozoa there is an internal body cavity, a coelom or pseudocoelom. These animals have a head end (anterior) and a tail end (posterior), a back (dorsal) surface and a belly (ventral) surface, and a left and a right side. In the modern consensus phylogeny, the Xenacoelomorpha are the earliest branch, and the remaining bilaterians (the Nephrozoa) split into the deuterostomes, comprising Ambulacraria and Chordata, and the protostomes, comprising Ecdysozoa and Spiralia; this topology is sketched in code below. Having a front end means that this part of the body encounters stimuli, such as food, favouring cephalisation, the development of a head with sense organs and a mouth. Many bilaterians have a combination of circular muscles that constrict the body, making it longer, and an opposing set of longitudinal muscles that shorten the body; these enable soft-bodied animals with a hydrostatic skeleton to move by peristalsis. They also have a gut that extends through the basically cylindrical body from mouth to anus. Many bilaterian phyla have primary larvae which swim with cilia and have an apical organ containing sensory cells. However, over evolutionary time, descendant species have evolved which have lost one or more of each of these characteristics. For example, adult echinoderms are radially symmetric (unlike their larvae), while some parasitic worms have extremely simplified body structures. Genetic studies have considerably changed zoologists' understanding of the relationships within the Bilateria. Most appear to belong to two major lineages, the protostomes and the deuterostomes. It is often suggested that the basalmost bilaterians are the Xenacoelomorpha, with all other bilaterians belonging to the subclade Nephrozoa. However, this suggestion has been contested, with other studies finding that xenacoelomorphs are more closely related to Ambulacraria than to other bilaterians. Protostomes and deuterostomes differ in several ways. Early in development, deuterostome embryos undergo radial cleavage during cell division, while many protostomes (the Spiralia) undergo spiral cleavage. Animals from both groups possess a complete digestive tract, but in protostomes the first opening of the embryonic gut develops into the mouth, and the anus forms secondarily. In deuterostomes, the anus forms first while the mouth develops secondarily. Most protostomes have schizocoelous development, where cells simply fill in the interior of the gastrula to form the mesoderm. In deuterostomes, the mesoderm forms by enterocoelic pouching, through invagination of the endoderm. The main deuterostome taxa are the Ambulacraria and the Chordata. Ambulacraria are exclusively marine and include acorn worms, starfish, sea urchins, and sea cucumbers. The chordates are dominated by the vertebrates (animals with backbones), which consist of fishes, amphibians, reptiles, birds, and mammals. The protostomes include the Ecdysozoa, named after their shared trait of ecdysis, growth by moulting. Among the largest ecdysozoan phyla are the arthropods and the nematodes. The rest of the protostomes are in the Spiralia, named for their pattern of developing by spiral cleavage in the early embryo. Major spiralian phyla include the annelids and molluscs. 
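As with the root of the animal tree, the consensus bilaterian topology described above is easy to make concrete as a nested data structure. A minimal sketch in plain Python follows; the clade names come from the text, while the example member lists and the helper function are illustrative assumptions, not an exhaustive classification:

# Consensus bilaterian phylogeny as described above (illustrative only).
bilateria = {
    "Xenacoelomorpha": [],  # basalmost bilaterians; placement contested
    "Nephrozoa": {
        "Deuterostomia": {
            "Ambulacraria": ["echinoderms", "hemichordates"],
            "Chordata": ["vertebrates"],
        },
        "Protostomia": {
            "Ecdysozoa": ["arthropods", "nematodes"],
            "Spiralia": ["annelids", "molluscs"],
        },
    },
}

def members(clade):
    """Recursively collect the example members under a clade."""
    if isinstance(clade, list):
        return clade
    out = []
    for child in clade.values():
        out.extend(members(child))
    return out

print(members(bilateria["Nephrozoa"]["Protostomia"]))    # ['arthropods', 'nematodes', 'annelids', 'molluscs']
print(members(bilateria["Nephrozoa"]["Deuterostomia"]))  # ['echinoderms', 'hemichordates', 'vertebrates']

The nesting makes the two deep splits explicit: Xenacoelomorpha versus Nephrozoa first, then protostomes versus deuterostomes within Nephrozoa.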
History of classification In the classical era, Aristotle divided animals,[d] based on his own observations, into those with blood (roughly, the vertebrates) and those without. The animals were then arranged on a scale from man (with blood, two legs, rational soul) down through the live-bearing tetrapods (with blood, four legs, sensitive soul) and other groups such as crustaceans (no blood, many legs, sensitive soul) down to spontaneously generating creatures like sponges (no blood, no legs, vegetable soul). Aristotle was uncertain whether sponges were animals, which in his system ought to have sensation, appetite, and locomotion, or plants, which did not: he knew that sponges could sense touch and would contract if about to be pulled off their rocks, but that they were rooted like plants and never moved about. In 1758, Carl Linnaeus created the first hierarchical classification in his Systema Naturae. In his original scheme, the animals were one of three kingdoms, divided into the classes of Vermes, Insecta, Pisces, Amphibia, Aves, and Mammalia. Since then, the last four have all been subsumed into a single phylum, the Chordata, while his Insecta (which included the crustaceans and arachnids) and Vermes have been renamed or broken up. The process was begun in 1793 by Jean-Baptiste de Lamarck, who called the Vermes une espèce de chaos ('a chaotic mess')[e] and split the group into three new phyla: worms, echinoderms, and polyps (which contained corals and jellyfish). By 1809, in his Philosophie Zoologique, Lamarck had created nine phyla apart from vertebrates (where he still had four phyla: mammals, birds, reptiles, and fish) and molluscs, namely cirripedes, annelids, crustaceans, arachnids, insects, worms, radiates, polyps, and infusorians. In his 1817 Le Règne Animal, Georges Cuvier used comparative anatomy to group the animals into four embranchements ('branches' with different body plans, roughly corresponding to phyla), namely vertebrates, molluscs, articulated animals (arthropods and annelids), and zoophytes or radiata (echinoderms, cnidaria and other forms). This division into four was followed by the embryologist Karl Ernst von Baer in 1828, the zoologist Louis Agassiz in 1857, and the comparative anatomist Richard Owen in 1860. In 1874, Ernst Haeckel divided the animal kingdom into two subkingdoms: Metazoa (multicellular animals, with five phyla: coelenterates, echinoderms, articulates, molluscs, and vertebrates) and Protozoa (single-celled animals), including a sixth animal phylum, sponges. The protozoa were later moved to the former kingdom Protista, leaving only the Metazoa as a synonym of Animalia. In human culture The human population exploits a large number of other animal species for food, both of domesticated livestock species in animal husbandry and, mainly at sea, by hunting wild species. Marine fish of many species are caught commercially for food. A smaller number of species are farmed commercially. Humans and their livestock make up more than 90% of the biomass of all terrestrial vertebrates, and almost as much as all insects combined. Invertebrates including cephalopods, crustaceans, insects—principally bees and silkworms—and bivalve or gastropod molluscs are hunted or farmed for food and fibres. Chickens, cattle, sheep, pigs, and other animals are raised as livestock for meat across the world. 
Animal fibres such as wool and silk are used to make textiles, while animal sinews have been used as lashings and bindings, and leather is widely used to make shoes and other items. Animals have been hunted and farmed for their fur to make items such as coats and hats. Dyestuffs including carmine (cochineal), shellac, and kermes have been made from the bodies of insects. Working animals including cattle and horses have been used for work and transport from the first days of agriculture. Animals such as the fruit fly Drosophila melanogaster serve a major role in science as experimental models. Animals have been used to create vaccines since vaccines were first developed in the 18th century. Some medicines such as the cancer drug trabectedin are based on toxins or other molecules of animal origin. People have used hunting dogs to help chase down and retrieve animals, and birds of prey to catch birds and mammals, while tethered cormorants have been used to catch fish. Poison dart frogs have been used to poison the tips of blowpipe darts. A wide variety of animals are kept as pets, with invertebrates such as tarantulas, octopuses, and praying mantises, reptiles such as snakes and chameleons, and birds including canaries, parakeets, and parrots all finding a place. However, the most kept pet species are mammals, namely dogs, cats, and rabbits. There is a tension between the role of animals as companions to humans, and their existence as individuals with rights of their own. A wide variety of terrestrial and aquatic animals are hunted for sport. The signs of the Western and Chinese zodiacs are based on animals. In China and Japan, the butterfly has been seen as the personification of a person's soul; in classical representation it is likewise a symbol of the soul. Animals have been the subjects of art from the earliest times, both historical, as in ancient Egypt, and prehistoric, as in the cave paintings at Lascaux. Major animal paintings include Albrecht Dürer's 1515 The Rhinoceros and George Stubbs's c. 1762 horse portrait Whistlejacket. Insects, birds and mammals play roles in literature and film, such as in giant bug movies. Animals including insects and mammals feature in mythology and religion. The scarab beetle was sacred in ancient Egypt, and the cow is sacred in Hinduism. Among other mammals, deer, horses, lions, bats, bears, and wolves are the subjects of myths and worship. |
======================================== |
[SOURCE: https://en.wikipedia.org/wiki/Jean-Pierre_Luminet] | [TOKENS: 1390] |
Contents Jean-Pierre Luminet Jean-Pierre Luminet (born 3 June 1951) is a French astrophysicist, specializing in black holes and cosmology. He is an emeritus research director at the CNRS (Centre national de la recherche scientifique). Luminet is a member of the Laboratoire d'Astrophysique de Marseille (LAM) and Laboratoire Univers et Théories (LUTH) of the Paris-Meudon Observatory, and is a visiting scientist at the Centre de Physique Théorique (CPT) in Marseilles. He is also a writer and poet. Luminet has been awarded several prizes on account of his work in pure science and science communication, including the Georges Lemaître Prize (1999) in recognition of his work in cosmology. In November 2021, he received the UNESCO Kalinga Prize for the Popularization of Science. He serves on the editorial board of Inference: The International Review of Science. The asteroid 5523 Luminet, discovered in 1991 at Palomar Observatory, was named after him. Luminet has published fifteen science books, seven historical novels, TV documentaries, and six poetry collections. He is an artist, an engraver, a sculptor, and a musician. During his music career, he has collaborated with composers such as Gérard Grisey and Hèctor Parra. Some of Luminet's literary works have been translated into other languages, including Chinese, Korean, Bengali, German, Lithuanian, Greek, Italian, and Spanish. Scientific career After studying mathematics at the Saint-Charles University of Marseilles in 1976, Luminet moved to Paris-Meudon Observatory to undertake a Ph.D. with Brandon Carter as his advisor. He met Stephen Hawking at the Department of Applied Mathematics and Theoretical Physics in Cambridge, England.[citation needed] He defended his Ph.D. thesis in 1977 at Paris University on the subject of Singularities in Cosmology. In 1979, Luminet got a permanent research position at the CNRS and developed his scientific activities at Paris Observatory until 2014, before joining the Laboratoire d'Astrophysique de Marseille. During this period, he was a visiting scientist at the University of São Paulo, Brazil (1984 and 1988), and at the University of California, Berkeley (1989–1990), and a visiting astronomer at the European Southern Observatory, Chile (2005).[citation needed] In 1978, Luminet created the first "image" of a black hole with an accretion disk, using nothing but an early computer, math, and India ink. He predicted that it could apply to the supermassive black hole in the core of the elliptical galaxy M87. In April 2019, the Event Horizon Telescope Consortium confirmed Luminet's predictions by providing the first telescopic image of the shadow of the M87* black hole and its accretion disk.[citation needed] In 1982, along with physicist Brandon Carter, Luminet invented the concept of a tidal disruption event (TDE), the destruction of a star passing in the vicinity of a supermassive black hole. They showed that this phenomenon could result in the violent destruction of the star, causing a "stellar pancake" and nuclear reactions in the core of the star in the stage of its maximum compression. With other collaborators, Luminet predicted specific observational signatures and introduced the concept of "tidal supernovae". The theory of TDE was confirmed by observing eruptions resulting from the accretion of stellar debris. 
It explains the superluminous supernova SN 2015L, the tidal explosion of a white dwarf before being absorbed by a massive black hole.[citation needed] In 1995, with his colleague Marc Lachièze-Rey, Luminet coined the term "Cosmic Topology" for describing the shape of space, proposing a variety of universe models compatible with the standard Friedmann-Lemaître models of relativistic cosmology.[citation needed] In 2003, large scale anomalies in the anisotropies of the cosmic microwave background observed by the Wilkinson Microwave Anisotropy Probe led to Luminet suggesting that the shape of the universe is a finite dodecahedron, attached to itself by paired opposite faces, forming a Poincaré homology sphere. During the following years, astronomers searched for more evidence to support this hypothesis but found none. Jean-Pierre Luminet is a specialist in the history of cosmology and in particular the emergence of the concept of the Big Bang. He emphasizes in several books and articles the leading role played by the Belgian priest and cosmologist Georges Lemaître. In 2018, the International Astronomical Union (IAU) recommended that Hubble's law be known as the Hubble-Lemaître law.[citation needed] Luminet published a critical analysis of the holographic principle and the AdS/CFT correspondence while working on quantum gravity.[citation needed] Artistic activities Luminet is devoted to drawing, engraving (learned with Jean Delpech at Ecole Polytechnique), and sculpture. A thorough analysis of his artwork has been done by Martin Kemp, Professor of Art History at Oxford University. In the field of music, Luminet collaborated in 1991 with Gérard Grisey (a former pupil of Olivier Messiaen and Henri Dutilleux) to produce a piece of cosmic music called Le Noir de l'étoile (The Black of the Star). This work for six percussionists, based on magnetic tape and astronomical signals coming from pulsars,[citation needed] is regularly performed around the world. In 2011, he began a collaboration with Hèctor Parra, who composed the orchestral piece Caressant l'horizon (Caressing the Horizon) inspired by Luminet's books. In 2017, Luminet wrote the scenario for Parra's Inscape. Scored for an ensemble of 16 soloists, large orchestra, and electronics, the piece describes a Utopian voyage through a giant black hole. It premiered in 2018 in Barcelona, Paris, and Köln. In 1998, Luminet was a curator of the exhibition Figures du Ciel (Figures of Heaven) (October 1998 – January 1999), coupled to the opening of the new Bibliothèque nationale de France. Honors and recognition Luminet has received more than twenty prizes and honors. |
======================================== |
[SOURCE: https://en.wikipedia.org/wiki/Moscow] | [TOKENS: 20512] |
Contents Moscow Moscow[a] is the capital and largest city of Russia, standing on the Moskva River in Central Russia. It has a population estimated at over 13 million residents within the city limits, over 19.1 million residents in the urban area, and over 21.5 million residents in its metropolitan area. The city covers an area of 2,511 square kilometers (970 sq mi), while the urban area covers 5,891 square kilometers (2,275 sq mi), and the metropolitan area covers over 26,000 square kilometers (10,000 sq mi). Moscow is among the world's largest cities, being the most populous city entirely in Europe,[b] the largest urban and metropolitan area in Europe, and the largest city by land area on the European continent. First documented in 1147, Moscow became the capital of the Grand Principality of Moscow, which led the unification of the Russian lands in the 15th century and became the center of a unified state. Following the proclamation of the Tsardom of Russia in 1547, Moscow remained the political and economic center for most of its history. During the reign of Peter the Great, the Russian capital was moved to the newly founded city of Saint Petersburg in 1712, leading to a decline in Moscow's importance throughout the imperial period. Following the Russian Revolution and the establishment of the Russian SFSR, the capital was moved back to Moscow in 1918. The city later became the political center of the Soviet Union and experienced significant population growth throughout the Soviet period. In the aftermath of the dissolution of the Soviet Union, Moscow remained the capital city of the newly reconstituted Russian Federation and has experienced continued growth. The northernmost and coldest megacity in the world, Moscow is governed as a federal city, where it serves as the political, economic, cultural, and scientific center of Russia and Eastern Europe. Moscow has one of the world's largest urban economies and the second-highest number of billionaires of any city (tied with Hong Kong). The Moscow International Business Center is one of the largest financial centers in the world and features the majority of Europe's tallest skyscrapers. Moscow was the host city of the 1980 Summer Olympics and one of the host cities of the 2018 FIFA World Cup. The city contains several UNESCO World Heritage Sites and is known for its display of Russian architecture, particularly in areas such as Red Square and buildings such as Saint Basil's Cathedral and the Moscow Kremlin, the latter of which is the seat of power of the Government of Russia. Moscow is home to Russian companies in different industries and is served by a comprehensive transit network, which includes four international airports, ten railway terminals, a tram system, a monorail system, and the Moscow Metro, which is the busiest metro system in Europe and one of the largest rapid transit systems in the world. The city has over 40 percent of its territory covered by greenery, making it one of the greenest cities in the world. Etymology The city's name is thought to be derived from the Moskva River. Several theories of the origin of the river's name have been proposed. The most linguistically well-grounded and widely accepted is from the Proto-Balto-Slavic root *mŭzg-/muzg- from the Proto-Indo-European *meu- "wet", so the name Moskva might signify a river at a wetland or marsh. 
Its cognates include Russian: музга, muzga "pool, puddle", Lithuanian: mazgoti and Latvian: mazgāt "to wash", Sanskrit: májjati "to drown", Latin: mergō "to dip, immerse", Prekmurian müzga "marsh, swamp." In many Slavic countries, Moskov is a surname, most common in Russia, Bulgaria, Ukraine and North Macedonia. Additionally, there are similarly named places in Poland like Mozgawa. According to a Finno-Ugric hypothesis, the Merya and Muroma people, who were among the pre-Slavic tribes which inhabited the area, called the river Mustajoki "Black river", and the name of the river derives from this term. Other theories, having little or no scientific basis, are rejected by linguists. The Old Russian form of the name is reconstructed as *Москы, *Mosky, hence it was one of a few Slavic ū-stem nouns. As with other nouns of that declension, it had been undergoing a morphological transformation at the early stage of the development of the language; as a result, the first written references in the 12th century were Московь, Moskovĭ (accusative case), Москви, Moskvi (locative case), Москвe/Москвѣ, Moskve/Moskvě (genitive case). From the latter forms came the modern Russian form Москва, Moskva, which is a result of morphological generalization with the numerous Slavic ā-stem nouns. The form Moskovĭ has left traces in other languages, including English: Moscow; German: Moskau; French: Moscou; Portuguese: Moscou, Moscovo; and Spanish: Moscú. Moscow has acquired epithets such as the "third Rome". Moscow is one of twelve Hero Cities. The demonym for a Moscow resident is rendered as Muscovite in English. History The site of modern-day Moscow has been inhabited since prehistoric times. Among the earliest finds are relics of the Lyalovo culture, which experts assign to the Neolithic period. They confirm that the first inhabitants of the area were hunters and gatherers. Around 950 AD, two Slavic tribes, the Vyatichi and the Krivichi, settled here. The Vyatichi may have formed the majority of Moscow's indigenous population. Moscow is first mentioned in chronicles under the year 1147, as part of the principality of Rostov-Suzdal, which emerged from the disintegration of Kievan Rus'. It was referred to as a meeting place of Yuri Dolgorukiy and Sviatoslav Olgovich. At the time, it was a minor town on the western border of the principality. The importance of Moscow greatly increased during the second half of the 12th century, and it was converted into a fortified gorod (stronghold) in the 1150s, with the first walls of the Kremlin being built. During the Mongol invasions of 1237–1238, Moscow was sacked following the destruction of Ryazan. The first prince of Moscow was Daniel, the youngest son of Alexander Nevsky, and in 1263, he was given Moscow as an otchina (hereditary land), where he established a local branch of Rurikid princes. Two chronicles refer to Mikhail Khorobrit as "Mikhail of Moscow" in the mid-13th century, but Daniel is generally considered to be the first prince of Moscow. On Mikhail's death in 1248, if it is assumed that an appanage principality was created, Moscow reverted as an escheat to the grand prince of Vladimir. Until 1271, the principality was ruled by the governors of Daniel's uncle Yaroslav, who was given Tver as an appanage. Daniel himself is first mentioned in chronicles under the year 1282 as taking part in a feudal war between his two older brothers. By the turn of the century, Moscow was one of the leading principalities within Vladimir-Suzdal, alongside Tver. 
On the right bank of the Moskva River, at a distance of eight kilometres (5 mi) from the Kremlin, Daniel founded the first monastery with the wooden church of St. Daniel-Stylite, which is now the Danilov Monastery. By the time of Daniel's death in 1303, the territory of the principality had almost tripled in size, encompassing the entire Moskva River along with its tributaries, which allowed Moscow to become self-sufficient. The principality was also provided with an excellent river network that facilitated trade. Daniel's descendants struggled with the princes of Tver for succession to the grand principality. Yury won recognition from the Mongol khan as the grand prince in 1318, but he lost the title four years later. Ivan I recovered the grand princely throne from Tver after proving himself to be a loyal servant of the khan. Ivan collected the tribute to the khan of the Golden Horde from dependent Russian princes and he used the funds he acquired to develop Moscow. The metropolitan of the Russian Church also found an ally in Ivan and moved his seat from the nominal capital of Vladimir to Moscow. The foundation of Moscow's first stone church, the Dormition Cathedral, was laid in 1326, and the metropolitan chose to be buried there – an act that would cement Moscow's status as the spiritual center of Russian Orthodoxy. Masonry building continued in the following years with the construction of additional stone churches. The limestone walls and towers of the Kremlin were built in 1366–1368. A distinct architectural school emerged in the late 14th century. The khan of the Golden Horde initially backed Moscow in an effort to halt the eastward expansion of the Grand Duchy of Lithuania, but he continued to meddle in Moscow's relations with other Russian princes to prevent it from becoming too strong. In 1353, the Black Death spread from northwestern Russia to Moscow, causing the deaths of Simeon of Moscow, his sons, and the metropolitan. The ruling family of Moscow remained small as a result, and a new vertical pattern of princely succession from father to son was defined. During the reign of Dmitry Donskoy, the Moscow principality greatly expanded in size. In 1380, Dmitry led a united Russian army to an important victory over the Mongols in the Battle of Kulikovo, which greatly increased Moscow's prestige and solidified the status of its rulers as the military leaders of the nation. Following his death in 1389, the thrones of Vladimir and Moscow were permanently united. During the reign of Vasily II, a civil war broke out after Yury of Zvenigorod challenged the succession of his nephew in 1425. Moscow switched hands numerous times, and Yury's son, Dmitry Shemyaka, continued to offer resistance until his appanage center of Galich was captured in 1450. In ecclesiastical matters, Vasily disapproved of the Council of Florence, leading him to arrest the metropolitan upon his return in 1441 for having signed the council's act of union. Seven years later, a council of Russian bishops elected their own metropolitan, which amounted to a declaration of autocephaly by the Russian Church. The fall of Constantinople in 1453 was viewed by the Russians as divine punishment for apostasy, and in 1492, Moscow was called an imperial city for the first time by the Russian metropolitan. During the reign of Ivan III, nearly all of the Russian states were united with Moscow and the foundations for a centralized state were laid. His defeat of the Tatars in 1480 also traditionally marks the end of Tatar suzerainty. 
Ivan did his utmost to make his capital a worthy successor to Constantinople, and he had the Kremlin reconstructed after inviting architects from Renaissance Italy, including Petrus Antonius Solarius, who designed the new Kremlin wall and its towers, and Marco Ruffo, who designed the new palace for the prince. The Kremlin walls as they now appear are those designed by Solarius, completed in 1495. The Ivan the Great Bell Tower was built in 1505–1508 and augmented to its present height in 1600. A trading settlement, or posad, grew up to the east of the Kremlin, in the area known as Zaryadye. In the time of Ivan III, the Red Square, originally named the Hollow Field, appeared. Ivan's son Vasily III continued the expansion of the Muscovite state and annexed the remaining Russian territories. His reign also saw the continued development of the doctrine of Moscow as the "third Rome". In 1508–1516, the Italian architect Aleviz Fryazin (Novy) arranged for the construction of a moat in front of the eastern wall, which would connect the Moskva and Neglinnaya and be filled with water from the Neglinnaya. Known as the Alevizov moat, it had a length of 541 metres (1,775 feet), a width of 36 metres (118 feet), and a depth of 9.5 to 13 metres (31–43 feet); it was lined with limestone and, in 1533, fenced on both sides with low, four-metre-thick (13-foot) cogged-brick walls. In 1547, Ivan the Terrible was crowned in Moscow as not only the grand prince, but also the first tsar of all Russia. In the 16th and 17th centuries, three circular defenses were built: Kitay-gorod, the White City and the Earthen City. However, in 1547, fires destroyed much of the town, and in 1571 the Crimean Tatars captured Moscow, burning everything except the Kremlin. The annals record that only 30,000 of 200,000 inhabitants survived. The Crimean Tatars attacked again in 1591, but were held back by new walls, built between 1584 and 1591 by a craftsman named Fyodor Kon. In 1592, an outer earth rampart with 50 towers was erected around the city, including an area on the right bank of the Moscow River. As an outermost line of defense, a chain of strongly fortified monasteries was established beyond the ramparts to the south and east, principally the Novodevichy Convent and the Donskoy, Danilov, Simonov, Novospasskiy, and Andronikov monasteries, most of which now house museums. From its white-stone ramparts, the city became poetically known as Bielokamennaya, the "White-Walled". The city's limits, as marked by the ramparts, are now traced by the Garden Ring. Three square gates existed on the east side of the Kremlin wall, which in the 17th century were known as Konstantino-Eleninsky, Spassky, and Nikolsky (after the icons of Constantine and Helen, the Saviour, and St. Nicholas that hung over them). The last two were directly opposite the Red Square, while the Konstantino-Elenensky gate was located behind Saint Basil's Cathedral.[citation needed] The Russian famine of 1601–1603 killed perhaps 100,000 in Moscow. Between 1610 and 1612, troops of the Polish–Lithuanian Commonwealth occupied Moscow, as its ruler Sigismund III tried to take the Russian throne. In 1612, Nizhny Novgorod and other Russian cities led by prince Dmitry Pozharsky and Kuzma Minin rose against the Polish occupants, besieged the Kremlin, and expelled them. In 1613, the Zemsky Sobor elected Michael Romanov as tsar, establishing the Romanov dynasty. 
The 17th century saw several risings, such as the liberation of Moscow from the Polish–Lithuanian invaders (1612), the Salt Riot (1648), the Copper Riot (1662), and the Moscow uprising of 1682. During the first half of the 17th century, the population doubled from 100,000 to 200,000, and the city expanded beyond its ramparts in the latter part of the century. In the middle of the 17th century, 20% of the inhabitants of Moscow's suburbs were from the Grand Duchy of Lithuania, having been driven from their homeland by the Muscovite invaders. By 1682, there were 692 households established north of the ramparts by Ukrainians and Belarusians abducted from their hometowns in the course of the Russo-Polish War of 1654–1667. These new outskirts became known as the Meshchanskaya sloboda, after the Ruthenian meshchane, "townspeople". The term meshchane acquired pejorative connotations in 18th-century Russia and today means "petty bourgeois" or "narrow-minded philistine". The entire city of the late 17th century is contained within what is today Moscow's Central Administrative Okrug. Numerous disasters befell the city. Plague epidemics ravaged Moscow in 1570–1571, 1592, and 1654–1656; the plague of 1654–55 killed upwards of 80% of the population. Fires burned out much of the wooden city in 1626 and 1648. In 1712, Peter the Great moved his government to the newly built Saint Petersburg on the Baltic coast. After losing its status as capital, Moscow at first shrank, from 200,000 inhabitants in the 17th century to 130,000 in 1750. But after 1750, the population grew tenfold over the remaining duration of the Russian Empire, reaching 1.8 million by 1915. The 1770–1772 Russian plague killed up to 100,000 people in Moscow. By 1700, the building of cobbled roads had begun. In 1730, permanent street lights were introduced, and by 1867 many streets had gaslights. In 1883, near the Prechistinskiye Gates, arc lamps were installed. In 1741, Moscow was surrounded by a barricade 40 kilometres (25 mi) long, the Kamer-Kollezhskiy barrier, with 16 gates at which customs tolls were collected. Its line is traced today by several streets called val ("ramparts"). In the early 19th century, the arch of the Konstantino-Eleninsky gate was bricked up, while the Spassky Gate remained the main front gate of the Kremlin, used for royal entrances. From this gate, wooden and stone bridges stretched across the moat. Books were sold on this bridge, and stone platforms known as raskats were built nearby for guns. The Tsar Cannon was located on the platform of the Lobnoye mesto. The road connecting Moscow with St. Petersburg, now the M10 highway, was completed in 1746, its Moscow end following the old Tver road, which had existed since the 16th century. It became known as Peterburgskoye Shosse after it was paved in the 1780s. Petrovsky Palace was built in 1776–1780 by Matvey Kazakov. Between 1781 and 1804, the Mytischinskiy water pipe (the first in Russia) was built. When Napoleon invaded Russia in 1812, the Muscovites were evacuated. The Moscow fire was principally the effect of Russian sabotage. Napoleon's Grande Armée was forced to retreat and was nearly annihilated by the devastating Russian winter. In 1813, following the destruction during the French occupation, a Commission for the Construction of the City of Moscow was established. It launched a great program of rebuilding, including a partial replanning of the centre.
Among the many buildings constructed or reconstructed at this time were the Grand Kremlin Palace and the Kremlin Armoury, Moscow University, the Moscow Manege (Riding School), and the Bolshoi Theatre. Arbat Street had been in existence since at least the 15th century, but it was developed into a prestigious area during the 18th century. It was destroyed in the fire of 1812 and was completely rebuilt in the early 19th century. Moscow State University was established in 1755. Its main building was reconstructed after the 1812 fire by Domenico Giliardi. The Moskovskiye Vedomosti newspaper appeared from 1756, originally at weekly intervals and from 1859 as a daily newspaper. In the 1830s, General Alexander Bashilov planned the first regular grid of city streets north of Petrovsky Palace. Khodynka Field south of the highway was used for military training. Smolensky Rail Station (forerunner of the Belorussky Rail Terminal) was inaugurated in 1870. Sokolniki Park, in the 18th century the home of the tsar's falconers well outside Moscow, became contiguous with the expanding city in the later 19th century and was developed into a public municipal park in 1878. The suburban Savyolovsky Rail Terminal was built in 1902. In January 1905, the institution of the City Governor, or Mayor, was officially introduced, and Alexander Adrianov became Moscow's first official mayor. When Catherine II came to power in 1762, the city's filth and the smell of sewage were depicted by observers as a symptom of the disorderly lifestyles of lower-class Russians recently arrived from the farms. Elites called for improved sanitation, which became part of Catherine's plans for increasing control over social life. National political and military successes from 1812 through 1855 calmed the critics and validated efforts to produce a more enlightened and stable society. There was less discussion of the poor state of public health. However, in the wake of Russia's failures in the Crimean War in 1855–56, confidence in the ability of the state to maintain order in the slums eroded, and demands for improved public health put it back on the agenda. In 1903, the Moskvoretskaya water supply was completed. In November 1917, upon learning of the uprising in Petrograd, Moscow's Bolsheviks began their own uprising. On 2 (15) November 1917, after heavy fighting, Soviet power was established in Moscow. Vladimir Lenin, fearing invasion, moved the capital back to Moscow on 12 March 1918. The Kremlin once again became the seat of power and the political centre of the new state. With the change in values imposed by communist ideology, the tradition of preserving cultural heritage was broken. Independent preservation societies, even those that defended only secular landmarks, were disbanded by the end of the 1920s. A new anti-religious campaign, launched in 1929, coincided with the collectivization of the peasants; the destruction of churches in the cities peaked around 1932. In 1937, letters were written to the Central Committee of the Communist Party of the Soviet Union proposing to rename Moscow "Stalindar" or "Stalinodar". Stalin rejected the suggestion. During World War II, the Soviet State Committee of Defence and the General Staff of the Red Army were located in Moscow. In 1941, 16 divisions of national volunteers (more than 160,000 people), 25 battalions, and 4 engineering regiments were formed among the Muscovites.
Between October 1941 and January 1942, the German Army Group Centre was stopped at the outskirts of the city and then driven off in the Battle of Moscow. Many factories were evacuated, together with much of the government, and from 20 October the city was declared to be under siege. Its remaining inhabitants built and manned antitank defenses while the city was bombarded from the air. On 1 May 1944, the medal "For the Defence of Moscow" was instituted, followed in 1947 by the medal "In Memory of the 800th Anniversary of Moscow". German and Soviet casualties during the battle have been debated, as sources provide different estimates. Total casualties between 30 September 1941 and 7 January 1942 are estimated to be between 248,000 and 400,000 for the Wehrmacht and between 650,000 and 1,280,000 for the Red Army. During the postwar years, there was a housing crisis, addressed by the mass production of standardised, prefabricated high-rise apartment blocks. There are over 11,000 of these blocks, housing most of Moscow's population and making it by far the city with the most high-rise buildings. The apartments were built and partly furnished in the factory before being raised and stacked into tall columns. The popular Soviet-era comic film Irony of Fate parodies this construction method. The city of Zelenograd was built in 1958, 37 kilometres (23 miles) north-west of the city centre along the Leningradskoye Shosse, and was incorporated as one of Moscow's administrative okrugs. Moscow State University moved to its campus on Sparrow Hills in 1953. In 1959, Nikita Khrushchev launched his anti-religious campaign; of Moscow's fifty churches operating in 1959, thirty were closed and six demolished. On 8 May 1965, on the 20th anniversary of the victory in World War II, Moscow was awarded the title of Hero City. The Moscow Ring Road (MKAD) was opened in 1961. It had four lanes running 109 kilometres (68 miles) along the city borders. The MKAD marked the administrative boundaries of the city until the 1980s, when outlying suburbs beyond the ring road were incorporated. In 1980, Moscow hosted the Summer Olympic Games, which were boycotted by the US and other Western countries due to the Soviet Union's invasion of Afghanistan in 1979. In 1991, Moscow was the scene of a coup attempt by conservative communists opposed to the liberal reforms of Mikhail Gorbachev. When the USSR was dissolved in 1991, Moscow remained the capital of the Russian Federation. Since then, a market economy has emerged, producing an explosion of Western-style retailing, services, architecture, and lifestyles. The city continued to grow during the 1990s and 2000s, its population rising from below nine million to above ten million. Mason and Nigmatullina argue that Soviet-era urban-growth controls produced controlled and sustainable metropolitan development, typified by the greenbelt established in 1935. Since then, however, there has been dramatic growth of low-density suburban sprawl, created by heavy demand for single-family dwellings as opposed to crowded apartments. In 1995–97, the MKAD ring road was widened from the initial four to ten lanes. In December 2002, Bulvar Dmitriya Donskogo became the first Moscow Metro station to open beyond the limits of the MKAD. The Third Ring Road, intermediate between the early 19th-century Garden Ring and the Soviet-era outer ring road, was completed in 2004. The greenbelt is becoming more and more fragmented, and satellite cities are appearing at the fringe.
Summer dachas are being converted into year-round residences, and with the proliferation of automobiles, there is heavy traffic congestion. Multiple old churches and other examples of architectural heritage that had been demolished during the Stalin era have been restored, such as the Cathedral of Christ the Saviour. In the 2010s, Moscow's administration launched long-duration projects such as the Moya Ulitsa ("My Street") urban redevelopment program and the residential renovation program. Through its territorial expansion on 1 July 2012 southwest into Moscow Oblast, the area of the capital more than doubled, going from 1,091 to 2,511 square kilometers (421 to 970 sq mi), making Moscow the largest city on the European continent by area; the city also gained an additional population of 233,000 people. The annexed territory was officially named Новая Москва (New Moscow).

Geography

Moscow is situated on the banks of the Moskva River, which flows for just over 500 km (311 mi) through the East European Plain in central Russia, not far from the natural border of the forest and forest-steppe zones. Forty-nine bridges span the river and its canals within the city's limits. The elevation of Moscow at the All-Russia Exhibition Center (VVC), where the leading Moscow weather station is situated, is 156 metres (512 feet). Teplostan Upland is the city's highest point at 255 metres (837 feet). The width of Moscow city (not limited by the MKAD) from west to east is 39.7 km (24.7 mi), and the length from north to south is 51.8 km (32.2 mi). Moscow serves as the reference point for the time zone used in most of European Russia, Belarus, and the Republic of Crimea. These areas operate on what is referred to in international standards as Moscow Standard Time (MSK, МСК), which is 3 hours ahead of UTC, or UTC+3. Daylight saving time is no longer observed. Because of the city's geographical longitude, the average solar noon in Moscow occurs at about 12:30. Moscow has a humid continental climate (Köppen: Dfb) with long, cold (although average by Russian standards) winters usually lasting from mid-November to the end of March, and warm summers. More extreme continental climates at the same latitude, such as parts of Eastern Canada or Siberia, have much colder winters than Moscow, suggesting that there is still significant moderation from the Atlantic Ocean despite the fact that Moscow is far from the sea. Weather can fluctuate widely, with temperatures ranging from −25 °C (−13 °F) in the city and −30 °C (−22 °F) in the suburbs to above 5 °C (41 °F) in the winter, and from 10 to 35 °C (50 to 95 °F) in the summer. Typical high temperatures in the warm months of June, July, and August are around a comfortable 20 to 26 °C (68 to 79 °F), but during heat waves (which can occur between May and September), daytime high temperatures often exceed 30 °C (86 °F), sometimes for a week or two at a time. In the winter, average temperatures normally drop to approximately −10 °C (14 °F), though almost every winter there are periods of warmth with day temperatures rising above 0 °C (32 °F), and periods of cooling with night temperatures falling below −20 °C (−4 °F). These periods usually last about a week or two. The growing season in Moscow normally lasts about 156 days, usually from around 1 May to 5 October.
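The 12:30 figure for average solar noon mentioned above follows directly from Moscow's longitude and its UTC+3 clock offset: every 15° of longitude corresponds to one hour of mean solar time, and Moscow lies west of the UTC+3 reference meridian (45°E). A minimal Python sketch of the arithmetic, assuming a longitude of roughly 37.62°E (a value not stated in this section):

```python
# Sketch: estimating mean solar noon on the local clock from longitude alone.
# Assumption (not given in the text): Moscow lies at about 37.62 degrees east.

LONGITUDE_E = 37.62   # degrees east (assumed)
UTC_OFFSET_H = 3.0    # Moscow Standard Time, UTC+3 (from the text)

# Mean solar time runs ahead of UTC by longitude/15 hours (15 degrees per hour),
# so on the local clock, mean solar noon = 12:00 + (UTC offset - longitude/15).
offset_h = UTC_OFFSET_H - LONGITUDE_E / 15.0   # about 0.49 h
noon_h = 12.0 + offset_h

hours, minutes = int(noon_h), round((noon_h - int(noon_h)) * 60)
print(f"Mean solar noon is at about {hours:02d}:{minutes:02d} local time")  # ~12:30
```

(This ignores the equation of time, which shifts true solar noon by up to roughly ±16 minutes over the course of the year.)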
The highest temperature ever recorded was 38.2 °C (100.8 °F) at the VVC weather station and 39.0 °C (102.2 °F) in the center of Moscow and at Domodedovo airport on 29 July 2010, during the unusual 2010 Northern Hemisphere summer heat waves. Record high and record average temperatures were recorded for January, March, April, May, June, July, August, November, and December in 2007–2022. The average July temperature from 1991 to 2020 is 19.7 °C (67.5 °F). The lowest temperature ever recorded was −42.1 °C (−43.8 °F) in January 1940. Snow, which is present for about five months a year, often begins to fall in mid-October, while stable snow cover forms in late November and melts at the end of March. On average, Moscow has 1,731 hours of sunshine per year, varying from a low of 8% of possible sunshine in December to 52% from May to August. This large annual variation is due to convective cloud formation. In the winter, moist air from the Atlantic condenses in the cold continental interior, resulting in very overcast conditions. However, this same continental influence results in considerably sunnier summers than in oceanic cities of similar latitude such as Edinburgh. Between 2004 and 2010, the average was between 1,800 and 2,000 hours, with a tendency toward more sunshine in the summer months, up to a record 411 hours in July 2014, 79% of possible sunshine. December 2017 was the darkest month in Moscow since records began, with only six minutes of sunlight. Temperatures in the centre of Moscow are often significantly higher than in the outskirts and nearby suburbs, especially in winter. For example, while the average January temperature in the north-east of Moscow is −6.2 °C (20.8 °F), in the suburbs it is about −8.3 °C (17.1 °F). The temperature difference between the centre of Moscow and nearby areas of Moscow Oblast can sometimes be more than 10 °C (18 °F) on frosty winter nights. Recent changes in Moscow's regional climate, since it is in the mid-latitudes of the northern hemisphere, are often cited by climate scientists as evidence of global warming, though by definition climate change is global, not regional. During the summer, extreme heat is often observed in the city (2001, 2002, 2003, 2010, 2011, 2021). Along with the southern part of Central Russia, after recent years of hot summer seasons, the city's climate has been trending toward a hot-summer classification. Winters have also become significantly milder: for example, the average January temperature in the early 1900s was −12.0 °C (10.4 °F), while now it is about −7.0 °C (19.4 °F). Late January and February are often colder, with frosts reaching −30.0 °C (−22.0 °F) a few nights per year (2006, 2010, 2011, 2012, and 2013). The last decade was the warmest in the history of meteorological observations of Moscow.

Paleontology

Moscow is one of the few cities with paleontological monuments of world significance on its territory. One of them is the Gorodnya River with its tributaries, on the banks of which outcrops of Quaternary and older Cretaceous deposits are exposed. Fossils of the bivalve mollusk Inoceramus kleinii and tubular passages of burrowing animals, described in 2017 as the new ichnospecies Skolithos gorodnensis, were discovered in the Coniacian deposits near the stream bed of the Bolshaya Glinka River. The ichnogenera Diplocraterion, Planolites, Skolithos, and possibly Ophiomorpha were found in the Albian deposits. Paleolithic flint tools were discovered in the Quaternary deposits of the Bolshaya Glinka stream bed.
In 1878, the paleontologist Hermann Trautschold discovered the left flipper of an ichthyosaur near the village of Mnevniki, which later became part of Moscow. In 2014, the animal was named Undorosaurus trautscholdi after its discoverer. Trautschold determined the age of the sediments from which the specimen was taken to be Kimmeridgian, but, according to more recent studies, they were formed in the Tithonian age of the Jurassic period. Albian foraminifera and ammonites are also known from the Moscow deposits. Fossils of various organisms are on display in Moscow museums, including the Orlov Museum of Paleontology and the Vernadsky State Geological Museum.

Demographics

According to the 2021 Russian census, the population was 13,010,112, up from 11,503,501 in the 2010 Russian census. The official population of Moscow is based on those holding "permanent residency". According to Russia's Federal Migration Service, Moscow holds 1.8 million official "guests" who have temporary residency on the basis of visas or other documentation, giving a legal population of 14.8 million. The number of illegal immigrants, the vast majority originating from Central Asia, is estimated to be an additional 1 million people, giving a total population of about 15.8 million. Vital statistics: the total fertility rate in 2024 was 1.46 children per woman, and life expectancy in 2021 was 74.55 years overall (71.00 for males, 77.94 for females). Christians form the majority of the city's population, most of whom adhere to the Russian Orthodox Church. The Patriarch of Moscow serves as the head of the church and resides in the Danilov Monastery. Prior to 1917, Moscow was called the "city of 40 times 40 churches". Moscow is Russia's capital of Eastern Orthodox Christianity, which has been the country's traditional religion. Other religions practiced in Moscow include Buddhism, Hinduism, Islam, Judaism, Yazidism, and Rodnovery. The Moscow Mufti Council claimed that Muslims numbered around 1.5 million of the city's 10.5 million people in 2010. There are four mosques in the city.

Cityscape

Moscow's architecture is world-renowned. Moscow is the site of Saint Basil's Cathedral, with its elegant onion domes, as well as the Cathedral of Christ the Saviour and the Seven Sisters. The first Kremlin was built in the middle of the 12th century. Medieval Moscow's design was one of concentric walls and intersecting radial thoroughfares. This layout, as well as Moscow's rivers, helped shape the city's design in subsequent centuries. The Kremlin was rebuilt in the 15th century. Its towers and some of its churches were built by Italian architects, lending the city some of the aura of the Renaissance. From the end of the 15th century, the city was embellished by masonry structures such as monasteries, palaces, walls, towers, and churches. The city's appearance had not changed much by the 18th century. Houses were made of pine and spruce logs, with shingled roofs plastered with sod or covered by birch bark. The rebuilding of Moscow in the second half of the 18th century was necessitated by constant fires and the needs of the nobility. Much of the wooden city was replaced by buildings in the classical style. For much of its architectural history, Moscow was dominated by Orthodox churches. However, the overall appearance of the city changed drastically during Soviet times, especially as a result of Joseph Stalin's large-scale effort to "modernize" Moscow.
Stalin's plans for the city included a network of broad avenues and roadways, some of them over ten lanes wide, which, while greatly simplifying movement through the city, were constructed at the expense of a great number of historical buildings and districts. Among the many casualties of Stalin's demolitions was the Sukharev Tower, a longtime city landmark, as well as mansions and commercial buildings. The city's newfound status as the capital of a deeply secular nation made religiously significant buildings especially vulnerable to demolition. Many of the city's churches, which in most cases were among Moscow's oldest and most prominent buildings, were destroyed; notable examples include the Kazan Cathedral and the Cathedral of Christ the Saviour. During the 1990s, both were rebuilt. Many smaller churches, however, were lost. While the later Stalinist period was characterized by the curtailing of creativity and architectural innovation, the earlier post-revolutionary years saw a plethora of radical new buildings created in the city. Especially notable were the constructivist architects associated with VKHUTEMAS, responsible for such landmarks as Lenin's Mausoleum. Another prominent architect was Vladimir Shukhov, famous for the Shukhov Tower, just one of many hyperboloid towers he designed. It was built between 1919 and 1922 as a transmission tower for a Russian broadcasting company. Shukhov also left a lasting legacy to the Constructivist architecture of early Soviet Russia: he designed spacious elongated shop galleries, most notably the GUM department store on Red Square, bridged with innovative metal-and-glass vaults. Perhaps the most recognizable contributions of the Stalinist period are the so-called Seven Sisters, seven massive skyscrapers scattered throughout the city at roughly equal distances from the Kremlin. A defining feature of Moscow's skyline, their imposing form was allegedly inspired by the Manhattan Municipal Building in New York City, and their style, with intricate exteriors and a large central spire, has been described as Stalinist Gothic architecture. All seven towers can be seen from most high points in the city; they are among the tallest constructions in central Moscow apart from the Ostankino Tower, which, when it was completed in 1967, was the highest free-standing land structure in the world and today remains the world's seventy-second tallest, ranking among structures such as the Burj Khalifa in Dubai, Taipei 101 in Taiwan, and the CN Tower in Toronto. The Soviet goal of providing housing for every family, and the rapid growth of Moscow's population, led to the construction of large, monotonous housing blocks. Most of these date from the post-Stalin era, and the styles are often named after the leader then in power (Brezhnev, Khrushchev, etc.). They are usually badly maintained. Although the city still has some five-story apartment buildings constructed before the mid-1960s, more recent apartment buildings are usually at least nine floors tall and have elevators. It is estimated that Moscow has over twice as many elevators as New York City and four times as many as Chicago. Moslift, one of the city's major elevator operating companies, has about 1,500 elevator mechanics on call to release residents trapped in elevators. Stalinist-era buildings, mostly found in the central part of the city, are massive and usually ornamented with Socialist realism motifs that imitate classical themes.
However, small churches, almost always Eastern Orthodox, found across the city provide glimpses of its past. The Old Arbat Street, a tourist street that was once the heart of a bohemian area, preserves most of its buildings from before the 20th century. Many buildings found off the main streets of the inner city (behind the Stalinist façades of Tverskaya Street, for example) are also examples of bourgeois architecture typical of Tsarist times. Ostankino Palace, Kuskovo, Uzkoye, and other large estates just outside Moscow originally belonged to nobles of the Tsarist era, and some convents and monasteries, both inside and outside the city, are open to Muscovites and tourists. Attempts are being made to restore many of the city's best-kept examples of pre-Soviet architecture. These restored structures are easily spotted by their bright new colors and spotless façades. There are a few examples of notable early Soviet avant-garde work too, such as the house of the architect Konstantin Melnikov in the Arbat area. Many of these restorations have been criticized for alleged disrespect of historical authenticity. Facadism is also widely practiced. Later examples of interesting Soviet architecture are usually marked by their impressive size and the semi-Modernist styles employed, as with the Novy Arbat project, familiarly known as the "false teeth of Moscow" and notorious for the wide-scale disruption of a historic area in central Moscow that the project involved. Plaques on house exteriors inform passers-by that a well-known personality once lived there. Frequently, the plaques are dedicated to Soviet celebrities not well known outside Russia (or often, as with decorated generals and revolutionaries, no longer well known inside it either). There are also many "museum houses" of famous Russian writers, composers, and artists in the city. Moscow's skyline is quickly modernizing, with several new towers under construction. In recent years, the city administration has been widely criticized for heavy destruction that has affected many historical buildings. As much as a third of historic Moscow has been destroyed in the past few years to make space for luxury apartments and hotels. Other historical buildings, including such landmarks as the 1930 Moskva hotel and the 1913 department store Voyentorg, have been razed and reconstructed anew, with the inevitable loss of historical value. Critics blame the government for not enforcing conservation laws: in the last 12 years, more than 50 buildings with monument status were torn down, several of them dating back to the 17th century. Some critics also wonder whether the money used for the reconstruction of razed buildings could not be used for the renovation of decaying structures, which include many works by the architect Konstantin Melnikov and the Mayakovskaya metro station. Some organizations, such as the Moscow Architecture Preservation Society and Save Europe's Heritage, are trying to draw international public attention to these problems. There are 96 parks and 18 gardens in Moscow, including four botanical gardens. There are 450 square kilometres (170 sq mi) of green zones besides 100 square kilometres (39 sq mi) of forests. Moscow is a very green city compared to other cities of comparable size in Western Europe and North America; this is partly due to a history of having green "yards" with trees and grass between residential buildings. There are on average 27 square meters (290 sq ft) of parks per person in Moscow, compared with 6 m² in Paris, 7.5 m² in London, and 8.6 m² in New York.
Gorky Park (officially the Central Park of Culture and Rest named after Maxim Gorky) was founded in 1928. The main part (689,000 square metres or 170 acres), along the Moskva River, contains estrades, children's attractions (including the Observation Wheel), water ponds with boats and water bicycles, dancing, tennis courts, and other sports facilities. It borders the Neskuchny Garden (408,000 square metres or 101 acres), the oldest park in Moscow and a former imperial residence, created as a result of the integration of three estates in the 18th century. The Garden features the Green Theater, one of the largest open amphitheaters in Europe, able to hold up to 15,000 people. Several parks include a section known as a "Park of Culture and Rest", sometimes alongside a much wilder area (this includes parks such as Izmaylovsky, Fili, and Sokolniki). Some parks are designated as forest parks (lesopark). Izmaylovsky Park, created in 1931, is one of the largest urban parks in the world, along with Richmond Park in London. Its area of 15.34 square kilometres (5.92 sq mi) is six times greater than that of Central Park in New York. Bauman Garden, officially founded in 1920 and renamed in 1922 after the Bolshevik Nikolay Bauman, is one of the oldest parks in Moscow. It stands on the site of the former Golitsyn estate and an eighteenth-century public garden. Sokolniki Park, named after the falcon hunting that occurred there in the past, is one of the oldest parks in Moscow and has an area of 6 square kilometres (2.3 sq mi). A central circle with a large fountain is surrounded by birch, maple, and elm tree alleys. A labyrinth composed of green paths lies beyond the park's ponds. Losiny Ostrov National Park ("Elk Island" National Park), with a total area of more than 116 square kilometres (45 sq mi), borders Sokolniki Park and was Russia's first national park. It is quite wild, and is also known as the "city taiga" – elk can be seen there. The Tsytsin Main Botanical Garden of the Academy of Sciences, founded in 1945, is the largest in Europe. It covers a territory of 3.61 square kilometres (1.39 sq mi) bordering the All-Russia Exhibition Center and contains a live exhibition of more than 20,000 species of plants from around the world, as well as a lab for scientific research. It contains a rosarium with 20,000 rose bushes, a dendrarium, and an oak forest with an average tree age exceeding 100 years, and there is a greenhouse taking up more than 5,000 square metres (53,820 square feet) of land. The All-Russian Exhibition Center (Всероссийский выставочный центр), formerly known as the All-Union Agricultural Exhibition (VSKhV) and later the Exhibition of Achievements of the National Economy (VDNKh), though officially named a "permanent trade show", is one of the most prominent examples of Stalinist-era monumental architecture. Among the large spans of recreational park areas are scores of elaborate pavilions, each representing either a branch of Soviet industry and science or a USSR republic. Even though during the 1990s it was, and in part still is, misused as a gigantic shopping center (most of the pavilions are rented out to small businesses), it still retains the bulk of its architectural landmarks, including two monumental fountains (Stone Flower and Friendship of Nations) and a 360-degree panoramic cinema. In 2014, the park returned to the name Exhibition of Achievements of the National Economy, and in the same year huge renovation works were started.
Lilac Park, founded in 1958, has a permanent sculpture display and a large rosarium. Moscow has always been a popular destination for tourists. Some of the more famous attractions include the city's UNESCO World Heritage Site, the Moscow Kremlin and Red Square, which were built between the 14th and 17th centuries. The Church of the Ascension at Kolomenskoye, which dates from 1532, is also a UNESCO World Heritage Site and another popular attraction. Near the New Tretyakov Gallery there is a sculpture garden, Museon, often called "the graveyard of fallen monuments", which displays statues of the former Soviet Union that were removed from their places after its dissolution. Other attractions include the Moscow Zoo, a zoological garden in two sections (the valleys of two streams) linked by a bridge, with nearly a thousand species and more than 6,500 specimens. Each year, the zoo attracts more than 1.2 million visitors. Many of Moscow's parks and landscaped gardens are protected natural environments.

Moscow rings

Moscow's road system is centered roughly on the Kremlin at the heart of the city. From there, roads generally span outwards to intersect with a sequence of circular roads ("rings"). Aside from this hierarchy, line 5 of the Moscow Metro is a circle-shaped looped subway line (hence the name Koltsevaya Liniya, literally "ring line"), located between the Sadovoye Koltso and the Third Transport Ring; two modern overlapping circle lines of the Moscow Metro form the system's "two hearts". The outermost ring within Moscow is the Moscow Ring Road (often called MKAD, an acronym for the Russian Московская Кольцевая Автомобильная Дорога), which forms the cultural boundary of the city and was established in the 1950s. Notably, the method used to build the road (an earthen embankment instead of concrete columns for most of its length) formed a wall-like barrier that hinders the building of roads under the MKAD highway itself. Outside Moscow, some of the roads encompassing the city continue to follow the circular pattern seen inside the city limits, notable examples being the Betonka roads (highways A107 and A108), originally made of concrete pads. In order to reduce transit traffic on the MKAD, a new ring road (called the CKAD, Centralnaya Koltsevaya Avtomobilnaya Doroga, or Central Ring Road) is under construction beyond the MKAD.

Culture

One of the most notable art museums in Moscow is the Tretyakov Gallery, which was founded by Pavel Tretyakov, a wealthy patron of the arts who donated a large private collection to the city. The Tretyakov Gallery is split into two buildings. The Old Tretyakov Gallery, the original gallery in the Tretyakovskaya area on the south bank of the Moskva River, houses works in the classic Russian tradition. The works of famous pre-Revolutionary painters, such as Ilya Repin, as well as the works of early Russian icon painters, can be found here. Visitors can even see rare originals by the early 15th-century iconographer Andrei Rublev. The New Tretyakov Gallery, created in Soviet times, mainly contains the works of Soviet artists, as well as a few contemporary paintings, but there is some overlap with the Old Tretyakov Gallery for early 20th-century art. The new gallery includes a small reconstruction of Vladimir Tatlin's famous Monument to the Third International and a mixture of other avant-garde works by artists such as Kazimir Malevich and Wassily Kandinsky. Socialist realism features can also be found within the halls of the New Tretyakov Gallery.
Another art museum in the city of Moscow is the Pushkin Museum of Fine Arts, which was founded by, among others, the father of Marina Tsvetaeva. The Pushkin Museum is similar to the British Museum in London in that its halls are a cross-section of exhibits on world civilisations, with many copies of ancient sculptures. However, it also hosts paintings from every major Western era; works by Claude Monet, Paul Cézanne, and Pablo Picasso are present in the museum's collection. The State Historical Museum of Russia (Государственный Исторический музей) is a museum of Russian history located between Red Square and Manege Square in Moscow. Its exhibitions range from relics of the prehistoric tribes inhabiting present-day Russia to priceless artworks acquired by members of the Romanov dynasty. The museum's collection numbers several million objects. The Polytechnical Museum, founded in 1872, is the largest technical museum in Russia, offering a wide array of historical inventions and technological achievements, including humanoid automata from the 18th century and the first Soviet computers. Its collection contains more than 160,000 items. The Borodino Panorama museum, located on Kutuzov Avenue, provides visitors with an opportunity to experience being on a battlefield through a 360° diorama. It is part of the large historical memorial commemorating the victory in the Patriotic War of 1812 over Napoleon's army, which also includes the triumphal arch erected in 1827. There is also a military history museum that includes statues and military hardware. The Memorial Museum of Cosmonautics, under the Monument to the Conquerors of Space at the end of Cosmonauts Alley, is the central memorial place for Russian cosmonautics. The Shchusev State Museum of Architecture, the national museum of Russian architecture named after the architect Alexey Shchusev, is located near the Kremlin. Moscow will get its own branch of the Hermitage Museum in 2024, with authorities having agreed upon the final project, to be executed by Hani Rashid, co-founder of the New York-based Asymptote Architecture, the same bureau that is behind the city's stock market building, the Busan-based World Business Center Solomon Tower, and the Strata Tower in Abu Dhabi. Moscow is the heart of the Russian performing arts, including ballet and film, with 68 museums, 103 theaters, 132 cinemas, and 24 concert halls. Among Moscow's theaters and ballet studios are the Bolshoi Theatre and the Maly Theatre, as well as the Vakhtangov Theatre and the Moscow Art Theatre. The Moscow International Performance Arts Center, opened in 2003, also known as the Moscow International House of Music, is known for its performances in classical music. It houses the largest organ in Russia, installed in Svetlanov Hall. There are also two large circuses in Moscow: the Moscow State Circus and the Moscow Circus on Tsvetnoy Boulevard, named after Yuri Nikulin. The Mosfilm studio was at the heart of many classic films, as it is responsible for both artistic and mainstream productions. However, despite the continued presence and reputation of internationally renowned Russian filmmakers, the once prolific native studios are much quieter. Rare and historical films may be seen in the Salut cinema, where films from the Museum of Cinema collection are shown regularly. International film festivals such as the Moscow International Film Festival, Stalker, Artdocfest, and the Moscow Jewish Film Festival are staged in Moscow.
Sports

Over 500 Olympic sports champions lived in the city by 2005. Moscow is home to 63 stadiums (besides eight football and eleven indoor track-and-field arenas), of which Luzhniki Stadium is the largest and the fourth-biggest in Europe (it hosted the 1998–99 UEFA Cup and 2007–08 UEFA Champions League finals, the 1980 Summer Olympics, and the 2018 FIFA World Cup, with seven games in total, including the final). Forty other sports complexes are located within the city, including 24 with artificial ice. The Olympic Stadium was the world's first indoor arena for bandy and hosted the Bandy World Championship twice. Moscow was again the host of the competition in 2010, this time in Krylatskoye. That arena has also hosted the World Speed Skating Championships. There are also seven horse racing tracks in Moscow, of which the Central Moscow Hippodrome, founded in 1834, is the largest. Moscow was the host city of the 1980 Summer Olympics, with the yachting events held at Tallinn, in present-day Estonia. Large sports facilities and the main international airport, Sheremetyevo Terminal 2, were built in preparation for the 1980 Summer Olympics. Moscow made a bid for the 2012 Summer Olympics, but when the final voting commenced on 6 July 2005, Moscow was the first city to be eliminated from further rounds; the Games were awarded to London. HC CSKA Moscow, the most titled ice hockey team in the Soviet Union and in the world, comes from Moscow. Other big ice hockey clubs from Moscow are HC Dynamo Moscow, which was the second most titled team in the Soviet Union, and HC Spartak Moscow. The most titled Soviet and Russian basketball club, and one of the most titled in the Euroleague, is PBC CSKA Moscow. Moscow hosted the EuroBasket in 1953 and 1965. Moscow has had more winners of the USSR and Russian Chess Championships than any other city. The most titled volleyball team in the Soviet Union and in Europe (CEV Champions League) is VC CSKA Moscow. In football, FC Spartak Moscow has won more championship titles in the Russian Premier League than any other team; in Soviet times, they were second only to FC Dynamo Kyiv. PFC CSKA Moscow became the first Russian football team to win a UEFA title, the UEFA Cup (present-day UEFA Europa League). FC Lokomotiv Moscow, FC Dynamo Moscow, and FC Torpedo Moscow are other professional football teams also based in Moscow. Moscow houses other prominent football, ice hockey, and basketball teams. Because sports organisations in the Soviet Union were once highly centralized, two of the best Union-level teams represented defence and law-enforcement agencies: the Armed Forces (CSKA) and the Ministry of Internal Affairs (Dinamo). There were army and police teams in most major cities, and as a result, Spartak, CSKA, and Dinamo were among the best-funded teams in the USSR. The Irina Viner-Usmanova Gymnastics Palace is located in the Luzhniki Olympic Complex. Building work started in 2017, and the opening ceremony took place on 18 June 2019. The investor in the Palace is the billionaire Alisher Usmanov, husband of the former gymnast and gymnastics coach Irina Viner-Usmanova. The total floor area of the building is 23,500 m², which includes three fitness rooms, locker rooms, rooms reserved for referees and coaches, saunas, a canteen, a cafeteria, two ball halls, a medical center, a hall reserved for journalists, and a hotel for athletes. Because of Moscow's cold local climate, winter sports have a following, and many of Moscow's large parks offer marked trails for skiing and frozen ponds for skating.
Moscow hosts the annual Kremlin Cup, a popular tennis tournament on both the WTA and ATP tours. It is one of the ten Tier-I events on the women's tour, and a host of Russian players feature every year. SC Olimpiyskiy hosted the Eurovision Song Contest 2009, the first and so far only Eurovision Song Contest arranged in Russia. Slava Moscow is a professional rugby club, competing in the national Professional Rugby League. Former rugby league heavyweights RC Lokomotiv have entered the same league as of 2011. The Luzhniki Stadium also hosted the 2013 Rugby World Cup Sevens. In bandy, one of the most successful clubs in the world is the 20-time Russian League champion Dynamo Moscow, which has also won the World Cup three times and the European Cup six times. MFK Dinamo Moskva is one of the major futsal clubs in Europe, having won the Futsal Champions League title once. When Russia was selected to host the 2018 FIFA World Cup, the Luzhniki Stadium's capacity was increased by almost 10,000 new seats, and two further stadiums were built: the Dynamo Stadium and the Spartak Stadium, although the former was later dropped as a World Cup venue.

Entertainment

The city is full of clubs, restaurants, and bars. Tverskaya Street is also one of the busiest shopping streets in Moscow. The adjoining Tretyakovsky Proyezd, also south of Tverskaya Street, in Kitai-gorod, is host to upmarket boutique stores such as Bulgari, Tiffany & Co., Armani, Prada, and Bentley. Nightlife in Moscow has moved on since Soviet times, and today the city has many of the world's largest nightclubs. The most popular nightlife area is around the old chocolate factory, where bars, nightclubs, galleries, cafés, and restaurants are located. Dream Island is an amusement park in Moscow that opened on 29 February 2020. It is the largest indoor theme park in Europe. The park covers 300,000 square meters. The complex includes a landscaped park along with a concert hall, a cinema, a hotel, a children's sailing school, restaurants, and shops.

Authorities

According to the Constitution of the Russian Federation, Moscow is an independent federal subject of the Russian Federation, a city of federal importance. The Mayor of Moscow is the leading official of the executive branch, heading the Government of Moscow, which is the highest organ of executive power. The Moscow City Duma is the city duma (city council or local parliament), and local laws must be approved by it. It includes 45 members, who are elected for a five-year term on a single-mandate constituency basis. From 2006 to 2012, direct elections of the mayor were not held due to changes in the Charter of the city of Moscow, with the mayor appointed by presidential decree. The first direct elections since the 2003 vote were to be held after the expiration of the incumbent mayor's term in 2015; however, because he resigned of his own free will, they took place in September 2013. Local administration is carried out through eleven prefectures, uniting the districts of Moscow into administrative okrugs on a territorial basis, and 125 district administrations. Under the law "On the organization of local self-government in the city of Moscow", since the beginning of 2003 the executive bodies of local self-government have been the municipalities, and the representative bodies are the municipal assemblies, whose members are elected in accordance with the Charter of the intracity municipality.
In Moscow, as the capital city designated by the Constitution of the Russian Federation, the country's legislative, executive, and judicial federal authorities are located, with the exception of the Constitutional Court of the Russian Federation, which has been located in Saint Petersburg since 2008. The supreme executive authority, the Government of the Russian Federation, is located in the House of the Government of the Russian Federation (the White House) on Krasnopresnenskaya Embankment in the center of Moscow. The State Duma sits on Okhotny Ryad. The Federation Council is located in a building on Bolshaya Dmitrovka. The Supreme Court of the Russian Federation is also located in Moscow. The Moscow Kremlin is the official residence of the President of the Russian Federation, whose working residence in the Kremlin is in the Senate Palace. In a ranking of the safest cities by The Economist in 2021, Moscow occupied the 38th position, with a score of 62.5 points. The general level of crime is quite low. More than 170,000 surveillance cameras in Moscow are connected to the facial recognition system. After a successful two-month experiment with automatic real-time recognition of faces, gender, and age, the authorities deployed the system across the whole city. The video surveillance network unites entrance cameras (covering 95% of residential apartment buildings in the capital) with cameras on the grounds and in the buildings of schools and kindergartens, at MCC stations, stadiums, public transport stops, and bus stations, and in parks and underground passages. The emergency numbers are the same as in all the other regions of Russia: 112 is the single emergency number, 101 the number of the Fire Service and Ministry of Emergency Situations, 102 the police, 103 the ambulance service, and 104 the gas emergency service. Moscow's emergency medical service is the second most efficient among the world's megacities, as reported by PwC during the presentation of the international study Analysis of EMS Efficiency in Megacities of the World.

Administrative divisions

The entire city of Moscow is headed by one mayor (Sergey Sobyanin). The city of Moscow is divided into twelve administrative okrugs and 125 districts. The Russian capital's town-planning development began to show as early as the 12th century, when the city was founded. The central part of Moscow grew by consolidating with suburbs in line with medieval principles of urban development, when strong fortress walls would gradually spread along the circle streets of adjacent new settlements. The first circular defence walls set the trajectory of Moscow's rings, laying the groundwork for the future planning of the Russian capital. The following fortifications served as the city's circular defense boundaries at some point in history: the Kremlin walls, Zemlyanoy Gorod (Earthwork Town), the Kamer-Kollezhsky Rampart, the Garden Ring, and the small railway ring. The Moscow Ring Road (MKAD) has been Moscow's boundary since 1960. Also in the form of a circle are the main Moscow subway line, the Ring Line, and the so-called Third Automobile Ring, which was completed in 2005. Hence, the characteristic radial-circle planning continues to define Moscow's further development. However, contemporary Moscow has also engulfed a number of territories outside the MKAD, such as Solntsevo, Butovo, and the town of Zelenograd.
A part of Moscow Oblast's territory was merged into Moscow on 1 July 2012; as a result, Moscow is no longer fully surrounded by Moscow Oblast and now also has a border with Kaluga Oblast. In all, Moscow gained about 1,500 square kilometers (580 sq mi) and 230,000 inhabitants. Moscow's Mayor Sergey Sobyanin lauded the expansion, which will help Moscow and the neighboring region, a "mega-city" of twenty million people, develop "harmonically". All administrative okrugs and districts have their own coats of arms and flags, as well as their own heads of administration. In addition to the districts, there are Territorial Units with Special Status. These usually include areas with small or no permanent populations, such as the All-Russia Exhibition Centre, the Botanical Garden, large parks, and industrial zones. In recent years, some territories have been merged with different districts. There are no ethnic-specific regions in Moscow, as in the Chinatowns that exist in some North American and East Asian cities. And although districts are not designated by income, as in most cities, areas that are closer to the city center, metro stations, or green zones are considered more prestigious. Moscow also hosts some of the government bodies of Moscow Oblast, although the city itself is not a part of the oblast.

Economy

Moscow has one of the largest municipal economies in Europe, and it accounts for more than one-fifth of Russia's gross domestic product (GDP). As of 2021, the gross regional product (GRP) of Moscow reached almost ₽24.5 trillion (US$332 billion), while the gross metropolitan product of the Moscow Region was ₽31.3 trillion, or around US$425 billion. The average gross monthly wage in the city is ₽123,688 (US$2,000), around twice the national average of ₽66,572 (US$1,000) and one of the highest among the federal subjects of Russia. Moscow is home to the third-highest number of billionaires of any city in the world, and has the highest number of billionaires of any city in Europe. It is the financial center of Russia and home to the country's largest banks and many of its largest companies, such as the oil giant Rosneft. Moscow accounts for 17% of retail sales in Russia and for 13% of all construction activity in the country. Since the 1998 Russian financial crisis, business sectors in Moscow have shown exponential rates of growth. Many new business centers and office buildings have been built in recent years, but Moscow still experiences shortages in office space. As a result, many former industrial and research facilities are being reconstructed to become suitable for office use. Overall, economic stability has improved in recent years; nonetheless, crime and corruption still hinder business development. Primary industries in Moscow include the chemical, metallurgy, food, textile, furniture, energy production, software development, and machinery industries. The Mil Moscow Helicopter Plant is a manufacturer of military and civil helicopters. The Khrunichev State Research and Production Space Center produces various space equipment, including modules for the space stations Mir and Salyut and for the ISS, as well as Proton launch vehicles and military ICBMs. The Sukhoi, Ilyushin, Mikoyan, Tupolev, and Yakovlev aircraft design bureaus are also situated in Moscow.
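As a quick check of the wage figures quoted above, the "around twice the national average" claim and the implied rouble-dollar rate can be verified with a couple of divisions; a small Python sketch using only numbers from the text (the rounding conventions are mine):

```python
# Quick arithmetic check of the wage figures quoted in the text.
moscow_wage_rub = 123_688   # average gross monthly wage in Moscow
russia_wage_rub = 66_572    # national average
moscow_wage_usd = 2_000     # dollar equivalent quoted for the Moscow wage

# Ratio of the Moscow wage to the national average ("around twice").
ratio = moscow_wage_rub / russia_wage_rub
print(f"Moscow wage / national average: {ratio:.2f}x")    # ~1.86x

# Exchange rate implied by the quoted dollar equivalent.
implied_rate = moscow_wage_rub / moscow_wage_usd
print(f"Implied rate: ~{implied_rate:.0f} RUB per USD")   # ~62 RUB/USD
```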
NPO Energomash, which produces rocket engines for Russian and American space programs, and the Lavochkin design bureau, which built fighter planes during World War II but has focused on space probes since the Space Race, are in nearby Khimki, an independent city in Moscow Oblast that has largely been enclosed by Moscow. The automobile plants ZiL and AZLK, as well as the Voitovich Rail Vehicle plant, are situated in Moscow, and the Metrovagonmash metro wagon plant is located just outside the city limits. The Poljot Moscow watch factory produces military, professional, and sport watches well known in Russia and abroad. The Electrozavod factory was the first transformer factory in Russia. The Kristall distillery, the oldest distillery in Russia, produces vodka types including "Stolichnaya", while wines are produced at Moscow wine plants, including the Moscow Interrepublican Vinery. The Moscow Jewelry Factory and Jewellerprom are producers of jewelry in Russia. There are other industries located just outside the city of Moscow, as well as microelectronic industries in Zelenograd, including Ruselectronics companies. Gazprom, the largest extractor of natural gas in the world and the largest Russian company, also has head offices in Moscow, as do other oil, gas, and electricity companies. Moscow hosts the headquarters of many telecommunications and technology companies, including 1C, ABBYY, Beeline, Kaspersky Lab, Mail.Ru Group, MegaFon, MTS, Rambler&Co, Rostelecom, Yandex, and Yota. Some industry is being transferred out of the city to improve its ecological state. During Soviet times, apartments were lent to people by the government according to a square-meters-per-person norm (some groups, including people's artists, heroes, and prominent scientists, had bonuses according to their honors). Private ownership of apartments was limited until the 1990s, when people were permitted to secure property rights to the places they inhabited. Since the Soviet era, apartment owners have had to pay a service charge for their residences, a fixed amount based on persons per living area. The price of real estate in Moscow continues to rise. Today, one could expect to pay US$4,000 on average per square meter (roughly US$370 per square foot) on the outskirts of the city, or US$6,500–8,000 per square meter in a prestigious district, and the price may sometimes exceed US$40,000 per square meter for a flat. It costs about US$1,200 per month to rent a one-bedroom apartment and about US$1,000 per month for a studio in the center of Moscow. A typical one-bedroom apartment is about thirty square metres (320 square feet), a typical two-bedroom apartment is forty-five square metres (480 square feet), and a typical three-bedroom apartment is seventy square metres (750 square feet). Many cannot move out of their apartments, especially if a family lives in a two-room apartment originally granted by the state during the Soviet era. Some city residents have attempted to cope with the cost of living by renting out their apartments while staying in dachas (country houses) outside the city. In 2006, Mercer Human Resources Consulting named Moscow the world's most expensive city for expatriate employees, ahead of perennial winner Tokyo, due to the stable Russian ruble as well as increasing housing prices within the city. Moscow also ranked first in the 2007 and 2008 editions of the survey. Tokyo has since overtaken Moscow as the most expensive city in the world, placing Moscow third behind Osaka in second place.
In 2008, Moscow ranked top of the list of most expensive cities for the third year in a row. In 2014, according to Forbes, Moscow was ranked the 9th most expensive city in the world; Forbes had ranked Moscow the 2nd most expensive city the year prior. In 2019, the Economist Intelligence Unit's Worldwide Cost of Living survey put Moscow in 102nd place in the biannual ranking of 133 most expensive cities. ECA International's Cost of Living 2019 Survey ranked Moscow at number 120 among 482 locations worldwide. The heating of buildings in Moscow, as in other Russian cities, is done using a central (district) heating system. Before 2004, state unitary enterprises were responsible for producing and supplying heat to clients, operating the heating stations and heating distribution systems of Mosgorteplo, Mosteploenergo, and Teploremontnaladka, which served the heating substations in the north-eastern part of the city. Clients were divided between the various enterprises based on their geographical location. A major reform launched in 2004 consolidated the various companies under the umbrella of MIPC, which became the municipal heat supplier; its subsidiaries were the newly transformed joint-stock companies. The city's main source of heating is the Mosenergo power company, which was reformed in 2005, when around ten subsidiaries were separated from it. One of the newly independent companies was the District Heating Network Company (MTK) (Russian: Московская теплосетевая компания). In 2007, the Government of Moscow bought controlling stakes in the company. "Our City" is a geo-information portal created in 2011 under the mayor of Moscow, Sergey Sobyanin, with the aim of building a constructive dialogue between Moscow residents and the city's executive authorities. The portal is being developed by the State Public Institution "New Management Technologies" together with the Moscow Department of Information Technologies. In its 10 years of operation, more than 1.7 million users have joined the portal, and during this time it has become an effective tool for monitoring the state of urban infrastructure.

Education

There are 1,696 high schools in Moscow, as well as 91 colleges. Besides these, there are 222 institutions of higher education, including 60 state universities and the Lomonosov Moscow State University, which was founded in 1755. The main university building, located on Vorobyovy Gory (Sparrow Hills), is 240 metres (790 ft) tall and, when completed, was the tallest building on the continent. The university has over 30,000 undergraduate and 7,000 postgraduate students, who have a choice of twenty-nine faculties and 450 departments for study. The Moscow State University library contains over nine million books, making it one of the largest libraries in all of Russia. The I.M. Sechenov First Moscow State Medical University (1st MSMU), named after Ivan Sechenov and formerly known as the Moscow Medical Academy, was founded in 1785 as a faculty of Moscow State University. It operates under the Russian Federal Agency for Health and Social Development and is one of the largest medical universities in Russia and Europe. More than 9,200 students are enrolled in 115 academic departments, and it also offers courses for post-graduate studies. The Pirogov Russian National Research Medical University (formerly known as the Russian State Medical University) is a medical higher education institution in Moscow, founded in 1906.
It is fully accredited and recognized by Russia's Ministry of Education and Science and is currently under the authority of the Ministry of Health and Social Development. Named after the Russian surgeon and pedagogue N.I. Pirogov (1810–1881), it is one of the largest medical institutions in Russia and was the first university in the country to allow women to acquire degrees. Moscow is one of the financial centers of the Russian Federation and the CIS countries and is known for its business schools, among them the Financial University under the Government of the Russian Federation, the Plekhanov Russian University of Economics, the State University of Management, and the National Research University Higher School of Economics. They offer undergraduate degrees in management, finance, accounting, marketing, real estate, and economic theory, as well as Masters programs and MBAs. Most of them have branches in other regions of Russia and in countries around the world. Bauman Moscow State Technical University, founded in 1830, is located in the center of Moscow and provides 18,000 undergraduate and 1,000 postgraduate students with an education in science and engineering, offering technical degrees. The Moscow Conservatory, founded in 1866, is a prominent music school in Russia. The Gerasimov All-Russian State Institute of Cinematography, abbreviated as VGIK, is the world's oldest educational institution in cinematography, founded by Vladimir Gardin in 1919. The Moscow State Institute of International Relations, founded in 1944, remains Russia's best-known school of international relations and diplomacy, with six schools focused on international relations. Approximately 4,500 students make up the university's student body, and over 700,000 Russian and foreign-language books, of which 20,000 are considered rare, can be found in its library. Other institutions are the Moscow Institute of Physics and Technology, also known as Phystech, the Fyodorov Eye Microsurgery Complex, founded in 1988 by the Russian eye surgeon Svyatoslav Fyodorov, the Moscow Aviation Institute, the Moscow Motorway Institute (State Technical University), and the Moscow Engineering Physics Institute. The Moscow Institute of Physics and Technology has taught numerous Nobel Prize winners, including Pyotr Kapitsa, Nikolay Semyonov, Lev Landau and Alexander Prokhorov, while the Moscow Engineering Physics Institute is known for its research in nuclear physics. The highest Russian military school is the Combined Arms Academy of the Armed Forces of the Russian Federation. Although Moscow has a number of famous Soviet-era higher educational institutions, most of them oriented towards engineering or the fundamental sciences, in recent years the city has seen growth in the number of commercial and private institutions offering classes in business and management. Many state institutions have expanded their educational scope and introduced new courses or departments. Institutions in Moscow, as in the rest of post-Soviet Russia, have begun to offer new international certificates and postgraduate degrees, including the Master of Business Administration. Student exchange programs with numerous countries, especially with the rest of Europe, have also become widespread in Moscow's universities, while schools within the Russian capital also offer seminars, lectures, and courses for corporate employees and businesspeople. Moscow is one of the largest science centers in Russia.
The headquarters of the Russian Academy of Sciences are located in Moscow, as are numerous research and applied-science institutions. The Kurchatov Institute, Russia's leading research and development institution in the field of nuclear energy, where the first nuclear reactor in Europe was built, the Landau Institute for Theoretical Physics, the Institute for Theoretical and Experimental Physics, the Kapitza Institute for Physical Problems and the Steklov Institute of Mathematics are all situated in Moscow. There are 452 libraries in the city, including 168 for children. The Russian State Library, founded in 1862, is the national library of Russia. The library is home to over 275 km (171 mi) of shelves and 42 million items, including over 17 million books and serial volumes, 13 million journals, 350,000 music scores and sound records, and 150,000 maps, making it the largest library in Russia and one of the largest in the world. Items in 247 languages account for 29% of the collection. The State Public Historical Library, founded in 1863, is the largest library specialising in Russian history. Its collection contains four million items in 112 languages, mostly on Russian and world history, heraldry, numismatics, and the history of science. In regard to primary and secondary education, in 2011 Clifford J. Levy of The New York Times wrote, "Moscow has some strong public schools, but the system as a whole is dispiriting, in part because it is being corroded by the corruption that is a post-Soviet scourge. Parents often pay bribes to get their children admitted to better public schools. There are additional payoffs for good grades." Transportation The Moscow Metro system is famous for its art, murals, mosaics, and ornate chandeliers. It started operation in 1935 and immediately became the centrepiece of the transportation system. More than that, it was a Stalinist device to awe and reward the populace and to give them an appreciation of Soviet realist art. It became the prototype for future Soviet large-scale technologies. Lazar Kaganovich was in charge; he designed the subway so that citizens would absorb the values and ethos of Stalinist civilisation as they rode. The artwork of the 13 original stations became nationally and internationally famous. For example, the Sverdlov Square subway station featured porcelain bas-reliefs depicting the daily life of the Soviet peoples, and the bas-reliefs at the Dynamo Stadium sports complex glorified sports and the physical prowess of the powerful new "Homo Sovieticus" (Soviet man). The metro was touted as the symbol of the new social order, a sort of Communist cathedral of engineering modernity. Soviet workers did the labour and the artwork, but the main engineering designs, routes, and construction plans were handled by specialists recruited from the London Underground. The Britons called for tunneling instead of the "cut-and-cover" technique and for escalators instead of lifts, and they designed the routes and the rolling stock. The paranoia of Stalin and the NKVD was evident when the secret police arrested numerous British engineers for espionage, that is, for gaining an in-depth knowledge of the city's physical layout. Engineers for the Metropolitan Vickers Electrical Company were given a show trial and deported in 1933, ending the role of British business in the USSR. Today, the Moscow Metro comprises twelve lines, mostly underground, with a total of 203 stations.
The Metro is one of the deepest subway systems in the world; for instance, the Park Pobedy station, completed in 2003, lies 84 metres (276 ft) underground and has the longest escalators in Europe. The Moscow Metro is the busiest metro system in Europe and one of the world's busiest, serving about ten million passengers daily (300,000,000 people every month). Facing serious transportation problems, Moscow has plans for expanding its Metro. In 2016, the authorities launched a new circle metro railway that helped relieve transportation problems, namely the daily congestion on the Koltsevaya Line. Because Metro stations were treated as a canvas for art that working Muscovites would see every day, many Stalin-era stations were built to individual "custom" designs, each initially conceived as a large-scale installation on a particular theme; Elektrozavodskaya station, for example, was themed after the nearby lightbulb factory, with ceramic ribbed lightbulb sockets. The tradition of these "grand designs", decorating metro stations as single-themed installations, was revived in late 1979. The metro handled 2.6 billion passengers in 2019. In the Russian capital there are over 21,500 Wi-Fi access points: in student dormitories, in parks, in cultural and sports institutions, and within the Garden Ring and the Third Transport Ring. From September 2020 to August 2021, 1,700 new urban Wi-Fi access points were launched in Moscow. The structure of the Wi-Fi network allows citizens to use the Internet without re-authorisation. The Moscow Metro operates a short monorail line (line 13). The line connects Timiryazevskaya metro station and Ulitsa Sergeya Eisensteina, passing close to VDNH (and the Line 6 Metro station "V.D.N.Kh."); it opened in 2004. The monorail accepts overground interchanges: no additional fare is needed if a Moscow Metro ride was taken within the previous 90 minutes. As Metro stations outside the city center are far apart in comparison to other cities, up to 4 kilometres (2.5 mi), a bus network radiates from each station to the surrounding residential zones. Moscow has a bus terminal for long-range and intercity passenger buses (the Central Bus Terminal), with a daily turnover of about 25,000 passengers; it serves about 40% of long-range bus routes in Moscow. Every major street in the city is served by at least one bus route. Many of these routes are doubled by a trolleybus route and have trolley wires over them. With a total single-wire line length of almost 600 kilometres (370 miles), 8 depots, 104 routes, and 1,740 vehicles, the Moscow trolleybus system was the largest in the world. However, the municipal authority, headed by Sergey Sobyanin, began dismantling the system in 2014 in line with the planned replacement of trolleybuses by electric buses; by 2018 the Moscow trolleybus system had only 4 depots and dozens of kilometres of unused wires, and almost all trolleybus wires inside the Garden Ring (Sadovoe Koltso) were cut in 2016–2017 during the reconstruction of central streets ("Moya Ulitsa"). Opened on 15 November 1933, it was the world's 6th-oldest operating trolleybus system. In 2018 the vehicle manufacturers Kamaz and GAZ won the Mosgortrans tender to deliver 200 electric buses and 62 ultra-fast charging stations to the city's transport system.
The manufacturers will be responsible for the quality and reliable operation of the buses and charging stations for the next 15 years. The city planned to procure only electric buses from 2021, gradually replacing the diesel bus fleet. According to expectations, Moscow would become the leader among European cities in the share of electric and gas-fuelled public transport by 2019. On 26 November 2018, the mayor of Moscow, Sergey Sobyanin, took part in the ceremony to open the cable car above the Moskva River. The cable car connects the Luzhniki sports complex with Sparrow Hills and Kosygin Street; the journey from the well-known viewpoint on Vorobyovy Gory to Luzhniki Stadium takes five minutes, instead of the 20 minutes the same journey would take by car. Moscow has an extensive tram system, which first opened in 1899; the newest line was built in 1984. Its daily usage by Muscovites is low, accounting for approximately 5% of trips, because many vital connections in the network have been withdrawn. Trams still remain important in some districts as feeders to Metro stations, and they provide important cross-links between metro lines, for example between the Universitet station on the Sokolnicheskaya Line (#1, red) and the Profsoyuznaya station on the Kaluzhsko-Rizhskaya Line (#6, orange), or between Voykovskaya and Strogino. There are three tram networks in the city. Tram advocates have suggested that the new rapid transit services (the metro line to City, the Butovo light metro, and the Monorail) would be more effective as at-grade tram lines, and that the problems with trams are due only to poor management and operation, not to the technical properties of trams. New tram models have been developed for the Moscow network despite the lack of expansion. Commercial taxi services and route taxis are in widespread use. In the mid-2010s, service platforms such as Yandex.Taxi, Uber and Gett displaced many private drivers and small service providers, and by 2015 were servicing more than 50% of all taxi orders in Moscow. The Russian tech firm Yandex is testing self-driving taxis in Moscow. Several train stations serve the city. Moscow's ten rail terminals (or vokzals) are located close to the city center, on or near the metro ring line 5, and each connects to a metro line into the centre of town. Each terminal handles trains from different parts of Europe and Asia, and there are many smaller railway stations in Moscow. As tickets are cheap, trains are the preferred mode of travel for Russians, especially when departing for Saint Petersburg, Russia's second-largest city. Moscow is the western terminus of the Trans-Siberian Railway, which traverses nearly 9,300 kilometres (5,800 mi) of Russian territory to Vladivostok on the Pacific coast. Suburbs and satellite cities are connected by a commuter elektrichka (electric rail) network; elektrichkas depart from each of these terminals to large railway stations up to 140 km (87 mi) away. During the 2010s, the Little Ring of the Moscow Railway was converted for frequent passenger service; it is fully integrated with the Moscow Metro, and passenger service started on 10 September 2016. A connecting railway line on the north side of the city links the Belorussky terminal with other railway lines and is used by some suburban trains.
The Moskovskaya Okruzhnaya Zheleznaya Doroga had formed a ring around what is now central Moscow since 1903, but it served only as a non-electrified railway for fuel-powered locomotives until its reconstruction into the MCC in the 2010s. The Moscow Central Circle (MCC) is a 54-kilometre-long (34 mi) urban-metro railway orbital line that encircles historical Moscow. It was built alongside the Little Ring of the Moscow Railway, incorporating some of its tracks. The MCC was opened for passenger use on 10 September 2016. The line is operated by MKZD, a company owned by the Moscow government, through the Moscow Metro, with the federally owned Russian Railways selected as the operating subcontractor. Another system, the Moscow Central Diameters, forms a genuine suburb–city–suburb S-Bahn-style railway: a pass-through system created by building bypasses around the terminal stations ("vokzals") of the existing Moscow Railway, whose central stations had previously served both intercity and urban-suburban traffic, to form train lines running across Moscow's centre. Of the five projected lines, the first two were completed and launched on 21 November 2019. There are over 2.6 million cars in the city daily. Recent years have seen growth in the number of cars, which has made traffic jams and a lack of parking space major problems. The Moscow Ring Road (MKAD), along with the Third Transport Ring and the cancelled Fourth Transport Ring, is one of only three freeways that run within Moscow city limits. Several other roadway systems form concentric circles around the city. There are five primary commercial airports serving Moscow: Sheremetyevo (SVO), Domodedovo (DME), Vnukovo (VKO), Zhukovsky (ZIA), and Ostafyevo (OSF). Sheremetyevo International Airport is the most globally connected of Moscow's airports, handling 60% of all international flights. It is also home to all SkyTeam members and is the main hub for Aeroflot (itself a SkyTeam member). Domodedovo International Airport is the leading airport in Russia in terms of passenger throughput and the primary gateway to long-haul domestic and CIS destinations, and its international traffic rivals Sheremetyevo's. It is a hub for S7 Airlines, and most OneWorld and Star Alliance members use Domodedovo as their international hub. Vnukovo International Airport handles flights of Turkish Airlines, Wizz Air Abu Dhabi and others. Ostafyevo International Airport caters primarily to business aviation. Moscow's airports vary in distance from the MKAD beltway: Domodedovo is the farthest at 22 km (14 mi); Vnukovo is 11 km (7 mi); Sheremetyevo is 10 km (6 mi); and Ostafyevo, the nearest, is about 8 kilometres (5.0 mi) from the MKAD. There are a number of smaller airports close to Moscow (19 in Moscow Oblast), such as Myachkovo Airport, that are intended for private aircraft, helicopters and charters. Moscow has two river passenger terminals (the South River Terminal and the North River Terminal) with regular ship routes and cruises along the Moskva and Oka rivers, used mostly for entertainment. The North River Terminal, built in 1937, is the main hub for long-range river routes. There are three freight ports serving Moscow. Moscow has various vehicle-sharing options sponsored by the local government. Several carsharing companies provide cars to the public; to drive one, the user books it through the owning company's app.
In 2018 the mayor, Sergey Sobyanin, said Moscow's carsharing system had become the biggest in Europe in terms of vehicle fleet, with about 25,000 people using the service every day. At the end of the same year, Moscow's carsharing fleet became the second-largest in the world, with 16,500 available vehicles. Another sharing system is the Velobike bicycle-share scheme, with a fleet of 3,000 traditional and electric bicycles. Delisamokat, a newer sharing service, provides electric scooters. In 1992, the Moscow government began planning a new part of central Moscow, the Moscow International Business Center (MIBC), with the goal of creating the first zone in Russia, and in all of Eastern Europe, to combine business activity, living space and entertainment. Situated in Presnensky District at the Third Ring, the Moscow City area is under intense development. The construction of the MIBC is taking place on the Krasnopresnenskaya embankment, and the whole project takes up about one square kilometre (250 acres). The area is the only spot in downtown Moscow that can accommodate a project of this magnitude; today, most of the buildings there are old factories and industrial complexes. The Federation Tower, completed in 2016, is the second-tallest building in Europe. The development is planned to include a water park and other recreational facilities; business, office, entertainment, and residential buildings; a transport network; and a new site for the Moscow government. The construction of four new metro stations in the territory has been completed, two of which have opened and two of which are reserved for future metro lines crossing the MIBC; some additional stations were planned. Major thoroughfares through the MIBC are the Third Ring and Kutuzovsky Prospekt. Three metro stations were initially planned for the Filyovskaya Line. The Delovoi Tsentr station opened in 2005 and was renamed Vystavochnaya in 2009. The branch was extended to the Mezhdunarodnaya station in 2006, and all work on the third station, Dorogomilovskaya (between Kiyevskaya and Delovoi Tsentr), has been postponed. There are plans to extend the branch as far as the Savyolovskaya station, on the Serpukhovsko-Timiryazevskaya Line. The cellphone service provider MTS announced on 5 March 2021 that it would begin the country's first pilot 5G network in Moscow. Media Moscow is home to nearly all of Russia's nationwide television networks, radio stations, newspapers, and magazines. English-language media include The Moscow Times; The Moscow News was the oldest English-language weekly newspaper in Russia. Kommersant, Vedomosti and Novaya Gazeta are Russian-language media headquartered in Moscow, Kommersant and Vedomosti being among the country's oldest Russian-language business newspapers. Other media in Moscow include the Echo of Moscow, the first Soviet and Russian private news radio and information agency, and NTV, one of the first privately owned Russian television stations. The total number of FM radio stations in Moscow is nearly 50.
======================================== |
[SOURCE: https://en.wikipedia.org/wiki/PlayStation_(console)#cite_note-FOOTNOTEMcDonnell199720-204] | [TOKENS: 10728] |
Contents PlayStation (console) The PlayStation[a] (codenamed PSX, abbreviated as PS, and retroactively PS1 or PS one) is a home video game console developed and marketed by Sony Computer Entertainment. It was released in Japan on 3 December 1994, followed by North America on 9 September 1995, Europe on 29 September 1995, and other regions thereafter. As a fifth-generation console, the PlayStation primarily competed with the Nintendo 64 and the Sega Saturn. Sony began developing the PlayStation after a failed venture with Nintendo to create a CD-ROM peripheral for the Super Nintendo Entertainment System in the early 1990s. The console was primarily designed by Ken Kutaragi and Sony Computer Entertainment in Japan, while additional development was outsourced in the United Kingdom. An emphasis on 3D polygon graphics was placed at the forefront of the console's design. PlayStation game production was designed to be streamlined and inclusive, enticing the support of many third-party developers. The console proved popular for its extensive game library, popular franchises, low retail price, and aggressive youth marketing which advertised it as the preferable console for adolescents and adults. Critically acclaimed games that defined the console include Gran Turismo, Crash Bandicoot, Spyro the Dragon, Tomb Raider, Resident Evil, Metal Gear Solid, Tekken 3, and Final Fantasy VII. Sony ceased production of the PlayStation on 23 March 2006, over eleven years after it had been released and in the same year that the PlayStation 3 debuted. More than 4,000 PlayStation games were released, with cumulative software sales of 962 million units. The PlayStation signaled Sony's rise to power in the video game industry. It received acclaim and sold strongly; in less than a decade, it became the first computer entertainment platform to ship over 100 million units. Its use of compact discs heralded the game industry's transition from cartridges. The PlayStation's success led to a line of successors, beginning with the PlayStation 2 in 2000. In the same year, Sony released a smaller and cheaper model, the PS one. History The PlayStation was conceived by Ken Kutaragi, a Sony executive who managed a hardware engineering division and was later dubbed "the Father of the PlayStation". Kutaragi's interest in working with video games stemmed from seeing his daughter play games on Nintendo's Famicom. Kutaragi convinced Nintendo to use his SPC-700 sound processor in the Super Nintendo Entertainment System (SNES) through a demonstration of the processor's capabilities. His willingness to work with Nintendo derived from both his admiration of the Famicom and his conviction that video game consoles would become the main home entertainment systems. Although Kutaragi was nearly fired because he had worked with Nintendo without Sony's knowledge, president Norio Ohga recognised the potential in Kutaragi's chip and decided to keep him as a protégé. The inception of the PlayStation dates back to a 1988 joint venture between Nintendo and Sony. Nintendo had produced floppy disk technology to complement cartridges in the form of the Family Computer Disk System, and wanted to continue this complementary storage strategy for the SNES. Since Sony was already contracted to produce the SPC-700 sound processor for the SNES, Nintendo contracted Sony to develop a CD-ROM add-on, tentatively titled the "Play Station" or "SNES-CD".
The PlayStation name had already been trademarked by Yamaha, but Nobuyuki Idei liked it so much that he agreed to acquire it for an undisclosed sum rather than search for an alternative. Sony was keen to obtain a foothold in the rapidly expanding video game market. Having been the primary manufacturer of the MSX home computer format, Sony had wanted to use their experience in consumer electronics to produce their own video game hardware. Although the initial agreement between Nintendo and Sony concerned a CD-ROM drive add-on, Sony had also planned to develop a SNES-compatible, Sony-branded console. This iteration was intended to be more of a home entertainment system, playing both SNES cartridges and a new CD format named the "Super Disc", which Sony would design. Under the agreement, Sony would retain sole international rights to every Super Disc game, giving them a large degree of control despite Nintendo's leading position in the video game market. Furthermore, Sony would also be the sole beneficiary of licensing related to music and film software, which it had been aggressively pursuing as a secondary application. The Play Station was to be announced at the 1991 Consumer Electronics Show (CES) in Las Vegas. However, Nintendo president Hiroshi Yamauchi was wary of Sony's increasing leverage at this point and deemed the original 1988 contract unacceptable upon realising it essentially handed Sony control over all games written on the SNES CD-ROM format. Although Nintendo was dominant in the video game market, Sony possessed a superior research and development department. Wanting to protect Nintendo's existing licensing structure, Yamauchi cancelled all plans for the joint Nintendo–Sony SNES CD attachment without telling Sony. He sent Nintendo of America president Minoru Arakawa (his son-in-law) and chairman Howard Lincoln to Amsterdam to form a more favourable contract with the Dutch conglomerate Philips, Sony's rival. This contract would give Nintendo total control over their licences on all Philips-produced machines. Kutaragi and Nobuyuki Idei, Sony's director of public relations at the time, learned of Nintendo's actions two days before the CES was due to begin. Kutaragi telephoned numerous contacts, including Philips, to no avail. On the first day of the CES, Sony announced their partnership with Nintendo and their new console, the Play Station. At 9 am the next day, in what has been called "the greatest ever betrayal" in the industry, Howard Lincoln stepped onto the stage and revealed that Nintendo was now allied with Philips and would abandon their work with Sony. Incensed by Nintendo's renouncement, Ohga and Kutaragi decided that Sony would develop their own console. Nintendo's contract-breaking was met with consternation in the Japanese business community, as the company had broken an "unwritten law" against native companies turning against each other in favour of foreign ones. Sony's American branch considered allying with Sega to produce a CD-ROM-based machine called the Sega Multimedia Entertainment System, but the Sega board of directors in Tokyo vetoed the idea when Sega of America CEO Tom Kalinske presented them the proposal. Kalinske recalled them saying: "That's a stupid idea, Sony doesn't know how to make hardware. They don't know how to make software either. Why would we want to do this?" Sony halted their research, but ultimately decided to turn what it had developed with Nintendo and Sega into a console based on the SNES.
Despite the tumultuous events at the 1991 CES, negotiations between Nintendo and Sony were still ongoing. A deal was proposed: the Play Station would still have a port for SNES games, on the condition that it would still use Kutaragi's audio chip and that Nintendo would own the rights and receive the bulk of the profits. Roughly two hundred prototype machines were created, and some software entered development. Many within Sony were still opposed to their involvement in the video game industry, with some resenting Kutaragi for jeopardising the company. Kutaragi remained adamant that Sony not retreat from the growing industry and that a deal with Nintendo would never work. Knowing that they had to take decisive action, Sony severed all ties with Nintendo on 4 May 1992. To determine the fate of the PlayStation project, Ohga chaired a meeting in June 1992, attended by Kutaragi and several senior Sony board members. Kutaragi unveiled a proprietary CD-ROM-based system he had been secretly working on, which played games with immersive 3D graphics. Kutaragi was confident that his LSI chip could accommodate one million logic gates, which exceeded the capabilities of Sony's semiconductor division at the time. Although the proposal gained Ohga's enthusiasm, there remained opposition from a majority present at the meeting. Older Sony executives, who saw Nintendo and Sega as "toy" manufacturers, also opposed it. The opponents felt the game industry was too culturally offbeat and asserted that Sony should remain a central player in the audiovisual industry, where companies were familiar with one another and could conduct "civili[s]ed" business negotiations. After Kutaragi reminded Ohga of the humiliation Sony had suffered from Nintendo, Ohga retained the project and became one of Kutaragi's staunchest supporters. Ohga shifted Kutaragi and nine of his team from Sony's main headquarters to Sony Music Entertainment Japan (SMEJ), a subsidiary of the main Sony group, so as to retain the project and maintain relationships with Philips for the MMCD development project. The involvement of SMEJ proved crucial to the PlayStation's early development, as the process of manufacturing games on the CD-ROM format was similar to that used for audio CDs, with which Sony's music division had considerable experience. While at SMEJ, Kutaragi worked with Epic/Sony Records founder Shigeo Maruyama and Akira Sato; both later became vice-presidents of the division that ran the PlayStation business. Sony Computer Entertainment (SCE) was jointly established by Sony and SMEJ to handle the company's ventures into the video game industry. On 27 October 1993, Sony publicly announced that it was entering the game console market with the PlayStation. According to Maruyama, there was uncertainty over whether the console should primarily focus on 2D, sprite-based graphics or 3D polygon graphics. After Sony witnessed the success of Sega's Virtua Fighter (1993) in Japanese arcades, the direction of the PlayStation became "instantly clear" and 3D polygon graphics became the console's primary focus. SCE president Teruhisa Tokunaka expressed gratitude for Sega's timely release of Virtua Fighter, as it proved "just at the right time" that making games with 3D imagery was possible. Maruyama claimed that Sony further wanted to emphasise the new console's ability to utilise Red Book audio from the CD-ROM format in its games alongside high-quality visuals and gameplay.
Wishing to distance the project from the failed enterprise with Nintendo, Sony initially branded the PlayStation the "PlayStation X" (PSX). Sony formed its European and North American divisions, known as Sony Computer Entertainment Europe (SCEE) and Sony Computer Entertainment America (SCEA), in January and May 1995. The divisions planned to market the new console under the alternative branding "PSX" following negative feedback regarding "PlayStation" in focus group studies. Early advertising prior to the console's launch in North America referenced PSX, but the term was scrapped before launch. In contrast to Nintendo's consoles, the console was not marketed under Sony's name. According to Phil Harrison, much of Sony's upper management feared that the Sony brand would be tarnished if associated with the console, which they considered a "toy". Since Sony had no experience in game development, it had to rely on the support of third-party game developers. This was in contrast to Sega and Nintendo, which had versatile and well-equipped in-house software divisions for their arcade games and could easily port successful games to their home consoles. Recent consoles like the Atari Jaguar and 3DO had suffered low sales due to a lack of developer support, prompting Sony to redouble its efforts to gain the endorsement of arcade-savvy developers. A team from Epic Sony visited more than a hundred companies throughout Japan in May 1993 in hopes of attracting game creators with the PlayStation's technological appeal. Sony found that many disliked Nintendo's practices, such as favouring its own games over others. Through a series of negotiations, Sony acquired initial support from Namco, Konami, and Williams Entertainment, as well as 250 other development teams in Japan alone. Namco in particular was interested in developing for the PlayStation, since Namco rivalled Sega in the arcade market. Attaining these companies secured influential games such as Ridge Racer (1993) and Mortal Kombat 3 (1995). Ridge Racer was one of the most popular arcade games at the time, and by December 1993 it had already been confirmed behind closed doors as the PlayStation's first game, despite Namco being a longstanding Nintendo developer. Namco's research managing director Shigeichi Nakamura met with Kutaragi in 1993 to discuss the preliminary PlayStation specifications, with Namco subsequently basing the Namco System 11 arcade board on PlayStation hardware and developing Tekken to compete with Virtua Fighter. The System 11 launched in arcades several months before the PlayStation's release, with the arcade release of Tekken in September 1994. Despite securing the support of various Japanese studios, Sony still had no developers of its own while the PlayStation was in development. This changed in 1993, when Sony acquired the Liverpudlian company Psygnosis (later renamed SCE Liverpool) for US$48 million, securing its first in-house development team. The acquisition meant that Sony could have more launch games ready for the PlayStation's release in Europe and North America. Ian Hetherington, Psygnosis' co-founder, was disappointed after receiving early builds of the PlayStation and recalled that the console "was not fit for purpose" until his team got involved with it. Hetherington frequently clashed with Sony executives over broader ideas; at one point it was suggested that a television with a built-in PlayStation be produced.
In the months leading up to the PlayStation's launch, Psygnosis had around 500 full-time staff working on games and assisting with software development. The purchase of Psygnosis marked another turning point for the PlayStation, as the company played a vital role in creating the console's development kits. While Sony had provided MIPS R4000-based Sony NEWS workstations for PlayStation development, Psygnosis employees disliked the thought of developing on these expensive workstations and asked Bristol-based SN Systems to create an alternative PC-based development system. Andy Beveridge and Martin Day, the owners of SN Systems, had previously supplied development hardware for other consoles such as the Mega Drive, the Atari ST, and the SNES. When Psygnosis arranged an audience for SN Systems with Sony's Japanese executives at the January 1994 CES in Las Vegas, Beveridge and Day presented their prototype of the condensed development kit, which could run on an ordinary personal computer with two extension boards. Impressed, Sony decided to abandon its plans for a workstation-based development system in favour of SN Systems', thus securing a cheaper and more efficient method for designing software. An order of over 600 systems followed, and SN Systems supplied Sony with additional software such as an assembler, a linker, and a debugger. SN Systems went on to produce development kits for future PlayStation systems, including the PlayStation 2, and was bought by Sony in 2005. Sony strove to make game production as streamlined and inclusive as possible, in contrast to the relatively isolated approach of Sega and Nintendo. Phil Harrison, representative director of SCEE, believed that Sony's emphasis on developer assistance removed many of the most time-consuming aspects of development. As well as providing programming libraries, SCE headquarters in London, California, and Tokyo housed technical support teams that could work closely with third-party developers if needed. Unlike Nintendo, Sony did not favour its own products over non-Sony ones; Peter Molyneux of Bullfrog Productions admired Sony's open-handed approach to software developers and lauded its decision to use PCs as a development platform, remarking that "[it was] like being released from jail in terms of the freedom you have". Another strategy that helped attract software developers was the PlayStation's use of the CD-ROM format instead of traditional cartridges. Nintendo cartridges were expensive to manufacture, and the company controlled all production, prioritising its own games, while inexpensive compact disc manufacturing occurred at dozens of locations around the world. The PlayStation's architecture and interconnectability with PCs were beneficial to many software developers. The use of the programming language C proved useful, as it safeguarded the future compatibility of the machine should further hardware revisions be made. Despite this inherent flexibility, some developers found themselves restricted by the console's lack of RAM. While working on beta builds of the PlayStation, Molyneux observed that its MIPS processor was not "quite as bullish" as that of a fast PC, and said that it took his team two weeks to port their PC code to the PlayStation development kits and another fortnight to achieve a four-fold speed increase. An engineer from Ocean Software, one of Europe's largest game developers at the time, found allocating RAM a challenging aspect of development given the 3.5-megabyte restriction.
Kutaragi said that while it would have been easy to double the amount of RAM in the PlayStation, the development team refrained from doing so to keep the retail cost down. Kutaragi saw the biggest challenge in developing the system as balancing the conflicting goals of high performance, low cost, and ease of programming, and felt he and his team were successful in this regard. The console's technical specifications were finalised in 1993 and its design during 1994. The PlayStation name and final design were confirmed during a press conference on 10 May 1994, although the price and release dates had not yet been disclosed. Sony released the PlayStation in Japan on 3 December 1994, a week after the release of the Sega Saturn, at a price of ¥39,800. Sales in Japan began with "stunning" success, with long queues in shops. Ohga later recalled that he realised how important the PlayStation had become for Sony when friends and relatives begged for consoles for their children. The PlayStation sold 100,000 units on the first day and two million units within six months, although the Saturn outsold it in the first few weeks due to the success of Virtua Fighter. By the end of 1994, 300,000 PlayStation units had been sold in Japan compared to 500,000 Saturn units. A grey market emerged for PlayStations shipped from Japan to North America and Europe, with buyers of such consoles paying up to £700. Before the release in North America, Sega and Sony presented their consoles at the first Electronic Entertainment Expo (E3) in Los Angeles on 11 May 1995. At their keynote presentation, Sega of America CEO Tom Kalinske revealed that the Saturn would be released immediately to select retailers at a price of $399. Next came Sony's turn: Olaf Olafsson, the head of SCEA, summoned Steve Race, the head of development, to the conference stage, who simply said "$299" and left the stage to a round of applause. The attention on the Sony conference was further bolstered by the surprise appearance of Michael Jackson and the showcase of highly anticipated games, including Wipeout (1995), Ridge Racer and Tekken (1994). In addition, Sony announced that no games would be bundled with the console. Although the Saturn had been released early in the United States to gain an advantage over the PlayStation, the surprise launch upset many retailers who were not informed in time, harming sales; some retailers, such as KB Toys, responded by dropping the Saturn entirely. The PlayStation went on sale in North America on 9 September 1995. It sold more units within two days than the Saturn had in five months, with almost all of the initial shipment of 100,000 units sold in advance and shops across the country running out of consoles and accessories. The well-received Ridge Racer contributed to the PlayStation's early success, with some critics considering it superior to Sega's rival arcade racer Daytona USA (1994), as did Battle Arena Toshinden (1995). There were over 100,000 pre-orders placed and 17 games available on the market by the time of the PlayStation's American launch, in comparison to the Saturn's six launch games.
The PlayStation was released in Europe on 29 September 1995 and in Australia on 15 November 1995. By November it had already outsold the Saturn by three to one in the United Kingdom, where Sony had allocated a £20 million marketing budget during the Christmas season compared to Sega's £4 million. Sony found early success in the United Kingdom by securing listings with independent shop owners as well as prominent High Street chains such as Comet and Argos. Within its first year, the PlayStation secured over 20% of the entire American video game market. From September to the end of 1995, sales in the United States amounted to 800,000 units, giving the PlayStation a commanding lead over the other fifth-generation consoles,[b] though the SNES and Mega Drive from the fourth generation still outsold it. Sony reported an attach rate of four games sold per console. To meet increasing demand, Sony chartered jumbo jets and ramped up production in Europe and North America. By early 1996, the PlayStation had grossed $2 billion (equivalent to $4.106 billion in 2025) from worldwide hardware and software sales. By late 1996, sales in Europe totalled 2.2 million units, including 700,000 in the UK. Approximately 400 PlayStation games were in development, compared to around 200 for the Saturn and 60 for the Nintendo 64. In India, the PlayStation was launched as a test market during 1999–2000 through Sony showrooms, selling 100 units. Sony finally launched the console (in its PS One model) countrywide on 24 January 2002 at a price of Rs 7,990, with 26 games available from the start. The PlayStation also did well in markets where it was never officially released. In Brazil, the console could not be launched because the trademark had been registered by another company, so the market was initially taken over by the officially distributed Sega Saturn; as the Sega console withdrew, however, PlayStation imports and large-scale piracy increased. In China, the most popular 32-bit console was likewise the Sega Saturn, but after it left the market the PlayStation grew to a base of 300,000 users by January 2000, even though Sony China had no plans to release it. The PlayStation was backed by a successful marketing campaign, allowing Sony to gain an early foothold in Europe and North America. Initially, PlayStation demographics were skewed towards adults, but the audience broadened after the first price drop. While the Saturn was positioned towards 18- to 34-year-olds, the PlayStation was initially marketed exclusively towards teenagers. Executives from both Sony and Sega reasoned that because younger players typically looked up to older, more experienced players, advertising targeted at teens and adults would draw them in too. Additionally, Sony found that adults reacted best to advertising aimed at teenagers; Lee Clow surmised that people growing into adulthood regressed and became "17 again" when they played video games. The console was marketed with advertising slogans in which certain letters were replaced by the controller's geometric button symbols: "Live in Your World. Play in Ours." and "U R NOT E" ("You are not ready", with a red E). The four geometric shapes were derived from the symbols for the four buttons on the controller. Clow thought that by invoking such provocative statements, gamers would respond to the contrary and say "'Bullshit.
Let me show you how ready I am.'" As the console's appeal grew, Sony's marketing broadened from its earlier focus on mature players to specifically target younger children as well. Shortly after the PlayStation's release in Europe, Sony tasked marketing manager Geoff Glendenning with assessing the desires of a new target audience. Sceptical of Nintendo and Sega's reliance on television campaigns, Glendenning theorised that young adults transitioning from fourth-generation consoles would feel neglected by marketing directed at children and teenagers. Recognising the influence that early 1990s underground clubbing and rave culture had on young people, especially in the United Kingdom, Glendenning felt that the culture had become mainstream enough to help cultivate the PlayStation's emerging identity. Sony partnered with prominent nightclub owners, such as Ministry of Sound, and with festival promoters to organise dedicated PlayStation areas where select games could be demonstrated. The Sheffield-based graphic design studio The Designers Republic was contracted by Sony to produce promotional materials aimed at a fashionable, club-going audience. Psygnosis' Wipeout in particular became associated with nightclub culture, as it was widely featured in venues; by 1997, there were 52 nightclubs in the United Kingdom with dedicated PlayStation rooms. Glendenning recalled that he had discreetly used at least £100,000 a year in slush fund money to invest in impromptu marketing. In 1996, Sony expanded its CD production facilities in the United States due to the high demand for PlayStation games, increasing monthly output from 4 million discs to 6.5 million discs. This was necessary because PlayStation sales were running at twice the rate of Saturn sales, and its lead dramatically increased when both consoles dropped in price to $199 that year. The PlayStation also outsold the Saturn at a similar ratio in Europe during 1996. Sales figures for PlayStation hardware and software only increased following the launch of the Nintendo 64. Tokunaka speculated that the Nintendo 64 launch had actually helped PlayStation sales by raising public awareness of the gaming market through Nintendo's added marketing efforts. Despite this, the PlayStation took longer to achieve dominance in Japan. Tokunaka said that, even after the PlayStation and Saturn had been on the market for nearly two years, the competition between them was still "very close", and neither console had led in sales for any meaningful length of time. By 1998, Sega, spurred by its declining market share and significant financial losses, launched the Dreamcast as a last-ditch attempt to stay in the industry. Although its launch was successful, the technically superior 128-bit console was unable to subdue Sony's dominance in the industry: Sony still held 60% of the overall video game market share in North America at the end of 1999. Sega's initial confidence in its new console was undermined when Japanese sales were lower than expected, with disgruntled Japanese consumers reportedly returning their Dreamcasts in exchange for PlayStation software. On 2 March 1999, Sony officially revealed details of the PlayStation 2, which Kutaragi announced would feature a graphics processor designed to push more raw polygons than any console in history, effectively rivalling most supercomputers.
The PlayStation continued to sell strongly at the turn of the new millennium: in mid-2000, Sony released the PS one, a smaller, redesigned variant which went on to outsell all other consoles that year, including the PlayStation 2. In 2004, the PlayStation became the first console to ship 100 million units, with the PlayStation 2 later achieving this milestone faster than its predecessor. The combined successes of both PlayStation consoles led Sega to retire the Dreamcast in 2001 and abandon the console business entirely. The PlayStation was eventually discontinued on 23 March 2006, over eleven years after its release and less than a year before the debut of the PlayStation 3. Hardware The main microprocessor is an R3000 CPU made by LSI Logic, operating at a clock rate of 33.8688 MHz and delivering about 30 MIPS. This 32-bit CPU relies heavily on the "cop2" 3D and matrix-math coprocessor on the same die to provide the speed necessary to render complex 3D graphics. The role of the separate GPU chip is to draw 2D polygons and apply shading and textures to them: the rasterisation stage of the graphics pipeline. Sony's custom 16-bit sound chip supports ADPCM sources with up to 24 sound channels, offers a sampling rate of up to 44.1 kHz, and supports music sequencing. The console features 2 MB of main RAM, with an additional 1 MB of video RAM. The PlayStation has a maximum colour depth of 16.7 million true colours, with 32 levels of transparency and unlimited colour look-up tables. The PlayStation can output composite, S-Video or RGB video signals through its AV Multi connector (with older models also having RCA connectors for composite), displaying resolutions from 256×224 to 640×480 pixels; different games can use different resolutions. Earlier models also had proprietary parallel and serial ports that could be used to connect accessories or multiple consoles together; these were later removed due to lack of use. The PlayStation uses a proprietary video compression unit, the MDEC, which is integrated into the CPU and allows for the presentation of full-motion video at a higher quality than other consoles of its generation. Unusually for the time, the PlayStation lacks a dedicated 2D graphics processor; 2D elements are instead calculated as polygons by the Geometry Transfer Engine (GTE) so that they can be processed and displayed on screen by the GPU. The GPU can generate 4,000 sprites and render 180,000 textured polygons per second, or 360,000 polygons per second flat-shaded. The PlayStation went through a number of variants during its production run. Externally, the most notable change was the gradual reduction in the number of external connectors on the rear of the unit. This started with the original Japanese launch units; the SCPH-1000, released on 3 December 1994, was the only model that had an S-Video port, which was removed from the next model. Subsequent models saw a reduction in the number of parallel ports, with the final version retaining only one serial port. Sony also marketed a development kit for amateur developers known as the Net Yaroze (meaning "Let's do it together" in Japanese). It was launched in June 1996 in Japan and, following public interest, was released the next year in other countries. The Net Yaroze allowed hobbyists to create their own games and upload them via an online forum run by Sony. The console was only available to buy through an ordering service, and it came with the documentation and software needed to program PlayStation games and applications using C compilers.
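The Net Yaroze's reliance on C reflects how much PlayStation code was written in practice. Because the R3000 lacks a floating-point unit, 3D calculations on the console were carried out with fixed-point integer arithmetic, the style of math the GTE accelerates in hardware; its commonly documented formats carry 12 fractional bits, so the value 1.0 is stored as 4096. The following sketch, in portable C rather than Sony's actual libraries, is only an illustration of that number format under those assumptions; the helper names are invented for this example.

#include <stdio.h>
#include <stdint.h>

/* Illustrative sketch only: PlayStation-era fixed-point arithmetic in
   portable C. The GTE coprocessor performs math of this kind in hardware;
   the 12-fractional-bit format below mirrors its commonly documented
   convention, but none of this code is Sony's. */

#define FX_SHIFT 12                /* 12 fractional bits */
#define FX_ONE   (1 << FX_SHIFT)   /* 1.0 in fixed point = 4096 */

typedef int32_t fx;                /* a fixed-point value */

static fx fx_from_int(int n) { return (fx)n << FX_SHIFT; }

/* Multiply two fixed-point numbers, widening to 64 bits to avoid overflow. */
static fx fx_mul(fx a, fx b) { return (fx)(((int64_t)a * b) >> FX_SHIFT); }

/* Rotate a 2D point by a precomputed sine/cosine pair, as a game might when
   positioning a sprite drawn as a polygon. */
static void rotate2d(fx *x, fx *y, fx s, fx c)
{
    fx nx = fx_mul(*x, c) - fx_mul(*y, s);
    fx ny = fx_mul(*x, s) + fx_mul(*y, c);
    *x = nx;
    *y = ny;
}

int main(void)
{
    fx x = fx_from_int(100), y = fx_from_int(0);
    fx s = 2896, c = 2896;   /* roughly sin(45°) and cos(45°): 0.7071 * 4096 */

    rotate2d(&x, &y, s, c);
    printf("rotated: (%.2f, %.2f)\n", x / (double)FX_ONE, y / (double)FX_ONE);
    return 0;
}

On real hardware, multiplications like these would be issued to the GTE through coprocessor instructions rather than compiled as plain integer code; the C version above only demonstrates the integer representation that such code relied on.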
On 7 July 2000, Sony released the PS One (stylised as "PS one" or "PSone"), a smaller, redesigned version of the original PlayStation. In 2002, Sony released a 5-inch (130 mm) LCD screen add-on for the PS One, referred to as the "Combo pack"; it also included a car cigarette-lighter adaptor, adding an extra layer of portability. Production of the LCD "Combo pack" ceased in 2004, when the popularity of the PlayStation began to wane in markets outside Japan. A total of 28.15 million PS One units had been sold by the time it was discontinued in March 2006. Three iterations of the PlayStation's controller were released over the console's lifespan. The first, the PlayStation controller, was released alongside the PlayStation in December 1994. It features four individual directional buttons (as opposed to a conventional D-pad), a pair of shoulder buttons on each side, Start and Select buttons in the centre, and four face buttons bearing simple geometric shapes: a green triangle, a red circle, a blue cross, and a pink square (△, ○, ✕, □). Rather than depicting the letters or numbers traditionally used on buttons, the PlayStation controller established a trademark look that would be incorporated heavily into the PlayStation brand. Teiyu Goto, the designer of the original PlayStation controller, said that the circle and cross represent "yes" and "no", respectively (though this layout is reversed in Western versions); the triangle symbolises a point of view, and the square is equated to a sheet of paper, used to access menus. The European and North American models of the original PlayStation controller are roughly 10% larger than the Japanese variant, to account for the fact that the average person in those regions has larger hands than the average Japanese person. Sony's first analogue gamepad, the PlayStation Analog Joystick (often erroneously referred to as the "Sony Flightstick"), was first released in Japan in April 1996. Featuring two parallel joysticks, it uses potentiometer technology previously used on consoles such as the Vectrex; instead of relying on binary eight-way switches, the controller detects minute angular changes through the entire range of motion. The stick also features a thumb-operated digital hat switch on the right joystick, corresponding to the traditional D-pad and used when simple digital movement was necessary. The Analog Joystick sold poorly in Japan due to its high cost and cumbersome size. The increasing popularity of 3D games prompted Sony to add analogue sticks to its controller design to give users more freedom over their movements in virtual 3D environments. The first official analogue controller, the Dual Analog Controller, was revealed to the public in a small glass booth at the 1996 PlayStation Expo in Japan and released in April 1997 to coincide with the Japanese releases of the analogue-capable games Tobal 2 and Bushido Blade. In addition to the two analogue sticks (which introduced two new buttons activated by clicking the sticks in), the Dual Analog Controller features an "Analog" button and LED beneath the "Start" and "Select" buttons, which toggles analogue functionality on or off. The controller also features rumble support, though Sony decided that haptic feedback would be removed from all overseas iterations before the United States release.
A Sony spokesman stated that the feature was removed for "manufacturing reasons", although rumours circulated that Nintendo had attempted to legally block the release of the controller outside Japan due to similarities with the Nintendo 64 controller's Rumble Pak; a Nintendo spokesman denied that Nintendo took legal action. Next Generation's Chris Charla theorised that Sony dropped vibration feedback to keep the price of the controller down. In November 1997, Sony introduced the DualShock controller, whose name derives from its use of two (dual) vibration motors (shock). Unlike its predecessor, it features analogue sticks with textured rubber grips, longer handles, and slightly different shoulder buttons, and it includes rumble feedback as standard on all versions. The DualShock later replaced its predecessors as the default controller. Sony released a series of peripherals to add extra layers of functionality to the PlayStation. Such peripherals include memory cards, the PlayStation Mouse, the PlayStation Link Cable, the Multiplayer Adapter (a four-player multitap), the Memory Drive (a disk drive for 3.5-inch floppy disks), the GunCon (a light gun), and the Glasstron (a monoscopic head-mounted display). Released exclusively in Japan, the PocketStation is a memory card peripheral which acts as a miniature personal digital assistant. The device features a monochrome liquid crystal display (LCD), infrared communication capability, a real-time clock, built-in flash memory, and sound capability. Sharing similarities with the Dreamcast's VMU peripheral, the PocketStation was typically distributed with certain PlayStation games, enhancing them with added features. It proved popular in Japan, selling over five million units. Sony planned to release the peripheral outside Japan, but the release was cancelled despite receiving promotion in Europe and North America. In addition to playing games, most PlayStation models are equipped to play CD audio, and the Asian model SCPH-5903 can also play Video CDs. Like most CD players, the PlayStation can play songs in a programmed order, shuffle the playback order of the disc, and repeat one song or the entire disc. Later PlayStation models include a music visualisation function called SoundScope. This function, as well as a memory card manager, is accessed by starting the console without inserting a game or with the CD tray open, which brings up a graphical user interface (GUI) for the PlayStation BIOS. The GUIs of the PlayStation and PS One differ depending on the firmware version: the original PlayStation GUI had a dark blue background with rainbow graffiti used as buttons, while the early PAL PlayStation and PS One GUI had a grey blocked background with two icons in the middle. PlayStation emulation is versatile and runs on numerous modern devices. Bleem! was a commercial emulator released for IBM-compatible PCs and the Dreamcast in 1999. It was notable for being aggressively marketed during the PlayStation's lifetime and was the centre of multiple controversial lawsuits filed by Sony. Bleem! was programmed in assembly language, which allowed it to emulate PlayStation games with improved visual fidelity, enhanced resolutions, and filtered textures that were not possible on the original hardware. Sony sued Bleem! two days after its release, citing copyright infringement and accusing the company of engaging in unfair competition and patent infringement by allowing use of the PlayStation BIOS on a Sega console. Bleem!
was subsequently forced to shut down in November 2001. Sony was aware that using CDs for game distribution could leave games vulnerable to piracy, due to the growing popularity of CD-R media and optical disc drives with burning capability. To preclude illegal copying, a proprietary process for PlayStation disc manufacturing was developed that, in conjunction with an augmented optical drive assembly, prevented burned copies of games from booting on an unmodified console. Specifically, all genuine PlayStation discs were printed with a small section of deliberately irregular data, which the PlayStation's optical pick-up was capable of detecting and decoding. Consoles would not boot game discs without a specific wobble frequency contained in the data of the disc's pregap sector (the same system was also used to encode discs' regional lockouts). This signal was within Red Book CD tolerances, so the actual content of PlayStation discs could still be read by a conventional disc drive; however, a conventional drive could not detect the wobble frequency, and duplicated discs therefore omitted it, because the laser pick-up system of any optical disc drive interprets the wobble as an oscillation of the disc surface and compensates for it during reading. Early PlayStations, particularly early 1000-series models, exhibit skipping during full-motion video playback or emit audible "ticking" noises from the unit. The problems stem from poorly placed vents leading to overheating in some environments, causing the plastic mouldings inside the console to warp slightly and create knock-on effects with the laser assembly. The solution is to sit the console on a surface which dissipates heat efficiently, in a well-ventilated area, or to raise the unit slightly from its resting surface. Sony representatives also recommended unplugging the PlayStation when it is not in use, as the system draws a small amount of power (and therefore generates heat) even when turned off. The first batch of PlayStations use a KSM-440AAM laser unit, whose case and movable parts are all built out of plastic. Over time, the plastic lens sled rail wears out, usually unevenly, due to friction. The placement of the laser unit close to the power supply accelerates wear, because the additional heat makes the plastic more vulnerable to friction. Eventually, one side of the lens sled becomes so worn that the laser can tilt, no longer pointing directly at the CD; after this, games will no longer load due to data read errors. Sony fixed the problem by making the sled out of die-cast metal and placing the laser unit further away from the power supply on later PlayStation models. Due to an engineering oversight, the PlayStation does not produce a proper signal on several older models of televisions, causing the display to flicker or bounce around the screen. Sony decided not to change the console design, since only a small percentage of PlayStation owners used such televisions, and instead gave consumers the option of sending their PlayStation unit to a Sony service centre to have an official modchip installed, allowing play on older televisions. Game library The PlayStation featured a diverse game library which grew to appeal to all types of players. Critically acclaimed PlayStation games included Final Fantasy VII (1997), Crash Bandicoot (1996), Spyro the Dragon (1998), and Metal Gear Solid (1998), all of which became established franchises.
Final Fantasy VII is credited with allowing role-playing games to gain mass-market appeal outside Japan, and is considered one of the most influential and greatest video games ever made. The PlayStation's bestselling game is Gran Turismo (1997), which sold 10.85 million units. By the time the PlayStation was discontinued in 2006, cumulative software shipments had reached 962 million units. Following its 1994 launch in Japan, early games included Ridge Racer, Crime Crackers, King's Field, Motor Toon Grand Prix, Toh Shin Den (i.e. Battle Arena Toshinden), and Kileak: The Blood. The first two games available at its later North American launch were Jumping Flash! (1995) and Ridge Racer, with Jumping Flash! heralded as a forerunner of 3D graphics in console gaming. Wipeout, Air Combat, Twisted Metal, Warhawk and Destruction Derby were among the popular first-year games, and the first to be reissued as part of Sony's Greatest Hits or Platinum range. At the time of the PlayStation's first Christmas season, Psygnosis had produced around 70% of its launch catalogue; their breakthrough racing game Wipeout was acclaimed for its techno soundtrack and helped raise awareness of Britain's underground music community. Eidos Interactive's action-adventure game Tomb Raider contributed substantially to the success of the console in 1996, with its main protagonist Lara Croft becoming an early gaming icon and garnering unprecedented media promotion. Licensed tie-in video games of popular films were also prevalent; Argonaut Games' 2001 adaptation of Harry Potter and the Philosopher's Stone went on to sell over eight million copies late in the console's lifespan. Third-party developers remained largely committed to the console's wide-ranging game catalogue even after the launch of the PlayStation 2; some of the notable exclusives in this era include Harry Potter and the Philosopher's Stone, Fear Effect 2: Retro Helix, Syphon Filter 3, C-12: Final Resistance, Dance Dance Revolution Konamix and Digimon World 3.[c] Sony assisted with game reprints as late as 2008 with Metal Gear Solid: The Essential Collection, the last PlayStation game officially released and licensed by Sony. Initially, in the United States, PlayStation games were packaged in long cardboard boxes, similar to non-Japanese 3DO and Saturn games. Sony later switched to the jewel case format typically used for audio CDs and Japanese video games, as this format took up less retailer shelf space (which was at a premium due to the large number of PlayStation games being released), and focus testing showed that most consumers preferred this format. Reception The PlayStation was mostly well received upon release. Critics in the West generally welcomed the new console; the staff of Next Generation reviewed the PlayStation a few weeks after its North American launch and commented that, while the CPU is "fairly average", the supplementary custom hardware, such as the GPU and sound processor, is stunningly powerful. They praised the PlayStation's focus on 3D, and complimented the comfort of its controller and the convenience of its memory cards. Giving the system 4½ out of 5 stars, they concluded, "To succeed in this extremely cut-throat market, you need a combination of great hardware, great games, and great marketing. Whether by skill, luck, or just deep pockets, Sony has scored three out of three in the first salvo of this war." Albert Kim from Entertainment Weekly praised the PlayStation as a technological marvel rivalling the offerings of Sega and Nintendo.
Famicom Tsūshin scored the console a 19 out of 40 in May 1995, lower than the Saturn's 24 out of 40. In a 1997 year-end review, a team of five Electronic Gaming Monthly editors gave the PlayStation scores of 9.5, 8.5, 9.0, 9.0, and 9.5; for each of the five editors, this was the highest score they gave to any of the five consoles reviewed in the issue. They lauded the breadth and quality of the games library, saying it had vastly improved over previous years due to developers mastering the system's capabilities in addition to Sony revising its stance on 2D and role-playing games. They also complimented the low price point of the games compared to the Nintendo 64's, and noted that it was the only console on the market that could be relied upon to deliver a solid stream of games for the coming year, primarily due to third-party developers almost unanimously favouring it over its competitors. Legacy SCE was an upstart in the video game industry in late 1994, as the video game market of the early 1990s was dominated by Nintendo and Sega. Nintendo had been the clear leader in the industry since the introduction of the Nintendo Entertainment System in 1985, and the Nintendo 64 was initially expected to maintain this position. The PlayStation's target audience included the generation which was the first to grow up with mainstream video games, along with 18- to 29-year-olds who were not the primary focus of Nintendo. By the late 1990s, Sony had become a highly regarded console brand due to the PlayStation, with a significant lead over second-place Nintendo, while Sega was relegated to a distant third. The PlayStation became the first "computer entertainment platform" to ship over 100 million units worldwide, with many critics attributing the console's success to third-party developers. It remains the sixth best-selling console of all time as of 2025[update], with a total of 102.49 million units sold. Around 7,900 individual games were published for the console during its 11-year life span, the second-most games ever produced for a console. Its success was a significant financial boon for Sony, whose video game division came to contribute 23% of the company's operating profits. Sony's next-generation PlayStation 2, which is backward compatible with the PlayStation's DualShock controller and games, was announced in 1999 and launched in 2000. The PlayStation's lead in installed base and developer support paved the way for the success of its successor, which overcame the earlier launch of Sega's Dreamcast and then fended off competition from Microsoft's newcomer Xbox and Nintendo's GameCube. The PlayStation 2's immense success and the failure of the Dreamcast were among the main factors which led to Sega abandoning the console market. To date, five PlayStation home consoles have been released, which have continued the same numbering scheme, as well as two portable systems. The PlayStation 3 also maintained backward compatibility with original PlayStation discs. Hundreds of PlayStation games have been digitally re-released on the PlayStation Portable, PlayStation 3, PlayStation Vita, PlayStation 4, and PlayStation 5. The PlayStation has often ranked among the best video game consoles. In 2018, Retro Gamer named it the third best console, crediting its sophisticated 3D capabilities as one of the key factors in its mass success, and lauding it as a "game-changer in every sense possible".
In 2009, IGN ranked the PlayStation the seventh best console in its list, noting its appeal to older audiences as a crucial factor in propelling the video game industry, as well as its role in transitioning the game industry to the CD-ROM format. Keith Stuart from The Guardian likewise named it the seventh best console in 2020, declaring that its success was so profound it "ruled the 1990s". In January 2025, Lorentio Brodesco announced the nsOne project, an attempt to reverse engineer the PlayStation's motherboard. Brodesco stated that "detailed documentation on the original motherboard was either incomplete or entirely unavailable". The project was successfully crowdfunded via Kickstarter. In June, Brodesco manufactured the first working motherboard, promising to deliver a fully routed version with multilayer routing, as well as documentation and design files, in the near future. The success of the PlayStation contributed to the demise of cartridge-based home consoles. While not the first system to use an optical disc format, it was the first highly successful one, and ended up going head-to-head with the cartridge-based Nintendo 64,[d] which the industry had expected to use CDs like the PlayStation. After the demise of the Sega Saturn, Nintendo was left as Sony's main competitor in Western markets. Nintendo chose not to use CDs for the Nintendo 64; it was likely concerned with the proprietary cartridge format's ability to help enforce copy protection, given its substantial reliance on licensing and exclusive games for its revenue. Besides their larger capacity, CD-ROMs could be produced in bulk quantities at a much faster rate than ROM cartridges: a week, compared to two to three months. Further, the cost of production per unit was far lower, allowing Sony to offer games at about 40% lower cost to the user than ROM cartridges while still making the same amount of net revenue. In Japan, Sony published fewer copies of a wide variety of games for the PlayStation as a risk-limiting step, a model that had been used by Sony Music for CD audio discs. The production flexibility of CD-ROMs meant that Sony could produce larger volumes of popular games to get them onto the market quickly, something that could not be done with cartridges due to their manufacturing lead time. The lower production costs of CD-ROMs also allowed publishers an additional source of profit: budget-priced reissues of games which had already recouped their development costs. Tokunaka remarked in 1996: Choosing CD-ROM is one of the most important decisions that we made. As I'm sure you understand, PlayStation could just as easily have worked with masked ROM [cartridges]. The 3D engine and everything—the whole PlayStation format—is independent of the media. But for various reasons (including the economies for the consumer, the ease of the manufacturing, inventory control for the trade, and also the software publishers) we deduced that CD-ROM would be the best media for PlayStation. The increasing complexity of developing games pushed cartridges to their storage limits and gradually discouraged some third-party developers. Part of the CD format's appeal to publishers was that discs could be produced at a significantly lower cost and offered more production flexibility to meet demand.
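The per-unit economics can be made concrete with a small worked example. The C sketch below uses purely hypothetical round figures (the article gives no costs beyond the roughly 40% price gap and the claim of equal net revenue); it only illustrates how a cheaper medium can fund a 40% lower retail price while leaving the publisher's net per copy unchanged.

```c
#include <stdio.h>

/* Illustrative, hypothetical unit economics for one game sold at retail.
 * None of these figures are sourced; they are round numbers chosen so
 * that the retail price drops 40% while the net per copy stays equal,
 * mirroring the relationship described in the text. */
int main(void) {
    double cart_retail = 70.0, cart_media_cost = 30.0; /* hypothetical */
    double cd_retail   = 42.0, cd_media_cost   =  2.0; /* hypothetical */
    double other_costs = 15.0; /* retail margin, royalties, etc. (hypothetical) */

    printf("cartridge: retail $%.0f, net to publisher $%.0f\n",
           cart_retail, cart_retail - cart_media_cost - other_costs);
    printf("CD-ROM:    retail $%.0f, net to publisher $%.0f\n",
           cd_retail, cd_retail - cd_media_cost - other_costs);
    printf("retail price reduction: %.0f%%\n",
           100.0 * (cart_retail - cd_retail) / cart_retail);
    return 0;
}
```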
As a result, some third-party developers switched to the PlayStation, including Square and Enix, whose Final Fantasy VII and Dragon Quest VII respectively had been planned for the Nintendo 64 (the two companies later merged to form Square Enix). Other developers released fewer games for the Nintendo 64: Konami, for example, released only thirteen N64 games but over fifty for the PlayStation. Nintendo 64 game releases were less frequent than the PlayStation's, with many being developed either by Nintendo itself or by second parties such as Rare. The PlayStation Classic is a dedicated video game console made by Sony Interactive Entertainment that emulates PlayStation games. It was announced in September 2018 at the Tokyo Game Show, and released on 3 December 2018, the 24th anniversary of the release of the original console. As a dedicated console, the PlayStation Classic features 20 pre-installed games, which run on the open-source emulator PCSX. The console is bundled with two replica wired PlayStation controllers (the variant without analogue sticks), an HDMI cable, and a USB Type-A cable. Internally, the console uses a MediaTek MT8167a system on a chip with four Cortex-A35 central processing cores clocked at 1.5 GHz and a PowerVR GE8300 graphics processing unit. It includes 16 GB of eMMC flash storage and 1 GB of DDR3 SDRAM. The PlayStation Classic is 45% smaller than the original console. The PlayStation Classic received negative reviews from critics and was compared unfavourably to Nintendo's rival Nintendo Entertainment System Classic Edition and Super Nintendo Entertainment System Classic Edition. Criticism was directed at its meagre game library, user interface, emulation quality, use of PAL versions for certain games, use of the original controller, and high retail price, though the console's design received praise. The console sold poorly.
======================================== |
[SOURCE: https://en.wikipedia.org/wiki/64-bit_computing] | [TOKENS: 4551] |
Contents 64-bit computing In computer architecture, 64-bit integers, memory addresses, or other data units[a] are those that are 64 bits wide. Also, 64-bit central processing units (CPU) and arithmetic logic units (ALU) are those that are based on processor registers, address buses, or data buses of that size. A computer that uses such a processor is a 64-bit computer. From the software perspective, 64-bit computing means the use of machine code with 64-bit virtual memory addresses. However, not all 64-bit instruction sets support full 64-bit virtual memory addresses; x86-64 and AArch64, for example, support only 48 bits of virtual address, with the remaining 16 bits of the virtual address required to be all zeros (000...) or all ones (111...), and several 64-bit instruction sets support fewer than 64 bits of physical memory address. The term 64-bit also describes a generation of computers in which 64-bit processors are the norm. 64 bits is a word size that defines certain classes of computer architecture, buses, memory, and CPUs and, by extension, the software that runs on them. 64-bit CPUs have been used in supercomputers since the 1970s (Cray-1, 1975) and in reduced instruction set computer (RISC) based workstations and servers since the early 1990s. In 2003, 64-bit CPUs were introduced to the mainstream PC market in the form of x86-64 processors and the PowerPC G5. A 64-bit register can hold any of 2⁶⁴ (over 18 quintillion, or 1.8×10¹⁹) different values. The range of integer values that can be stored in 64 bits depends on the integer representation used. With the two most common representations, the range is 0 through 18,446,744,073,709,551,615 (equal to 2⁶⁴ − 1) for representation as an (unsigned) binary number, and −9,223,372,036,854,775,808 (−2⁶³) through 9,223,372,036,854,775,807 (2⁶³ − 1) for representation as two's complement. Hence, a processor with 64-bit memory addresses can directly access 2⁶⁴ bytes (16 exabytes, or EB) of byte-addressable memory. With no further qualification, a 64-bit computer architecture generally has integer and addressing registers that are 64 bits wide, allowing direct support for 64-bit data types and addresses. However, a CPU might have external data buses or address buses with different sizes from the registers, even larger (the 32-bit Pentium had a 64-bit data bus, for instance). Architectural implications Processor registers are typically divided into several groups: integer, floating-point, single instruction, multiple data (SIMD), control, and often special registers for address arithmetic which may have various uses and names such as address, index, or base registers. However, in modern designs, these functions are often performed by more general-purpose integer registers. In most processors, only integer or address registers can be used to address data in memory; the other types of registers cannot. The size of these registers therefore normally limits the amount of directly addressable memory, even if there are registers, such as floating-point registers, that are wider. Most high-performance 32-bit and 64-bit processors (some notable exceptions are older or embedded ARM architecture (ARM) and 32-bit MIPS architecture (MIPS) CPUs) have integrated floating-point hardware, which is often, but not always, based on 64-bit units of data.
For example, although the x86/x87 architecture has instructions able to load and store 64-bit (and 32-bit) floating-point values in memory, the internal floating-point data and register format is 80 bits wide, while the general-purpose registers are 32 bits wide. In contrast, the 64-bit Alpha family uses a 64-bit floating-point data and register format, and 64-bit integer registers. History Many computer instruction sets are designed so that a single integer register can store the memory address to any location in the computer's physical or virtual memory. Therefore, the total number of addresses to memory is often determined by the width of these registers. The IBM System/360 of the 1960s was an early 32-bit computer; it had 32-bit integer registers, although it only used the low-order 24 bits of a word for addresses, resulting in a 16 MiB (16 × 1024² bytes) address space. 32-bit superminicomputers, such as the DEC VAX, became common in the 1970s, and 32-bit microprocessors, such as the Motorola 68000 family and the 32-bit members of the x86 family starting with the Intel 80386, appeared in the mid-1980s, making 32 bits something of a de facto consensus as a convenient register size. A 32-bit address register meant that 2³² addresses, or 4 GB of random-access memory (RAM), could be referenced. When these architectures were devised, 4 GB of memory was so far beyond the typical amounts (4 MiB) in installations that this was considered to be enough headroom for addressing. 4.29 billion addresses were considered an appropriate size to work with for another important reason: 4.29 billion integers are enough to assign unique references to most entities in applications like databases. Some supercomputer architectures of the 1970s and 1980s, such as the Cray-1, used registers up to 64 bits wide and supported 64-bit integer arithmetic, although they did not support 64-bit addressing. In the mid-1980s, Intel i860 development began, culminating in a 1989 release; the i860 had 32-bit integer registers and 32-bit addressing, so it was not a fully 64-bit processor, although its graphics unit supported 64-bit integer arithmetic. However, 32 bits remained the norm until the early 1990s, when continual reductions in the cost of memory led to installations with amounts of RAM approaching 4 GB, and the use of virtual memory spaces exceeding the 4 GB ceiling became desirable for handling certain types of problems. In response, MIPS and DEC developed 64-bit microprocessor architectures, initially for high-end workstation and server machines. By the mid-1990s, HAL Computer Systems, Sun Microsystems, IBM, Silicon Graphics, and Hewlett-Packard had developed 64-bit architectures for their workstation and server systems. A notable exception to this trend was IBM's mainframes, which then used 32-bit data and 31-bit address sizes; the IBM mainframes did not include 64-bit processors until 2000. During the 1990s, several low-cost 64-bit microprocessors were used in consumer electronics and embedded applications. Notably, the Nintendo 64 and the PlayStation 2 had 64-bit microprocessors before their introduction in personal computers. High-end printers, network equipment, and industrial computers also used 64-bit microprocessors, such as the Quantum Effect Devices R5000.
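Since this section repeatedly converts an address width into an address-space size, a minimal C sketch of that arithmetic may help; the three widths shown (24, 32, and 64 bits) are exactly the cases discussed above.

```c
#include <stdio.h>

/* Addressable bytes for a given address width: 2^bits.
 * 24 bits -> 16 MiB (System/360), 32 bits -> 4 GiB, 64 bits -> 16 EiB. */
int main(void) {
    int widths[] = {24, 32, 64};
    for (int i = 0; i < 3; i++) {
        int bits = widths[i];
        /* 1ULL << 64 is undefined behaviour in C, so 64 bits is handled
         * separately; 2^64 is exactly representable as a double. */
        if (bits < 64)
            printf("%2d-bit addresses: %llu bytes\n", bits, 1ULL << bits);
        else
            printf("%2d-bit addresses: %.0f bytes (2^64)\n",
                   bits, 18446744073709551616.0);
    }
    return 0;
}
```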
64-bit computing started to trickle down to the personal computer desktop from 2003 onward, when some models in Apple's Macintosh lines switched to PowerPC 970 processors (termed G5 by Apple), and Advanced Micro Devices (AMD) released its first 64-bit x86-64 processor. Physical memory eventually caught up with 32-bit limits. In 2023, laptop computers were commonly equipped with 16 GB of memory, and servers with 64 GB or more, greatly exceeding the 4 GB address capacity of 32 bits. Limits of processors In principle, a 64-bit microprocessor can address 16 EB (16 × 1024⁶ = 2⁶⁴ = 18,446,744,073,709,551,616 bytes) of memory. However, not all instruction sets, and not all processors implementing those instruction sets, support a full 64-bit virtual or physical address space. The x86-64 architecture (as of March 2024[update]) allows 48 bits for virtual memory and, for any given processor, up to 52 bits for physical memory. These limits allow memory sizes of 256 TB (256 × 1024⁴ bytes) and 4 PB (4 × 1024⁵ bytes), respectively. A PC cannot currently contain 4 petabytes of memory (due to the physical size of the memory chips), but AMD envisioned large servers, shared memory clusters, and other uses of physical address space that might approach this in the foreseeable future. Thus the 52-bit physical address provides ample room for expansion while not incurring the cost of implementing full 64-bit physical addresses. Similarly, the 48-bit virtual address space was designed to provide 65,536 (2¹⁶) times the 32-bit limit of 4 GB (4 × 1024³ bytes), allowing room for later expansion without incurring the overhead of translating full 64-bit addresses. The Power ISA v3.0 allows 64 bits for an effective address, mapped to a segmented address of between 65 and 78 bits for virtual memory, and, for any given processor, up to 60 bits for physical memory. The Oracle SPARC Architecture 2015 allows 64 bits for virtual memory and, for any given processor, between 40 and 56 bits for physical memory. The ARM AArch64 Virtual Memory System Architecture allows from 48 to 56 bits for virtual memory and, for any given processor, from 32 to 56 bits for physical memory. The DEC Alpha specification requires a minimum of 43 bits of virtual memory address space (8 TB) to be supported, and hardware must check that the remaining unsupported bits are zero, trapping if they are not (to support compatibility on future processors). The Alpha 21064 supported 43 bits of virtual memory address space (8 TB) and 34 bits of physical memory address space (16 GB). The Alpha 21164 supported 43 bits of virtual memory address space (8 TB) and 40 bits of physical memory address space (1 TB). The Alpha 21264 supported a user-configurable 43 or 48 bits of virtual memory address space (8 TB or 256 TB) and 44 bits of physical memory address space (16 TB). 64-bit applications A change from a 32-bit to a 64-bit architecture is a fundamental alteration, as most operating systems must be extensively modified to take advantage of the new architecture, because that software has to manage the actual memory addressing hardware.
Other software must also be ported to use the new abilities; older 32-bit software may be supported either by virtue of the 64-bit instruction set being a superset of the 32-bit instruction set, so that processors that support the 64-bit instruction set can also run code for the 32-bit instruction set, or through software emulation, or by the actual implementation of a 32-bit processor core within the 64-bit processor, as with some Itanium processors from Intel, which included an IA-32 processor core to run 32-bit x86 applications. The operating systems for those 64-bit architectures generally support both 32-bit and 64-bit applications. One significant exception to this is the IBM AS/400, software for which is compiled into a virtual instruction set architecture (ISA) called Technology Independent Machine Interface (TIMI); TIMI code is then translated to native machine code by low-level software before being executed. The translation software is all that must be rewritten to move the full OS and all software to a new platform, as when IBM transitioned the native instruction set for AS/400 from the older 32/48-bit IMPI to the newer 64-bit PowerPC-AS, codenamed Amazon. The IMPI instruction set was quite different from even 32-bit PowerPC, so this transition was even bigger than moving a given instruction set from 32 to 64 bits. On 64-bit hardware with x86-64 architecture (AMD64), most 32-bit operating systems and applications can run with no compatibility issues. While the larger address space of 64-bit architectures makes working with large data sets in applications such as digital video, scientific computing, and large databases easier, there has been considerable debate on whether they or their 32-bit compatibility modes will be faster than comparably priced 32-bit systems for other tasks. A compiled Java program can run on a 32- or 64-bit Java virtual machine with no modification. The lengths and precision of all the built-in types, such as char, short, int, long, float, and double, and the types that can be used as array indices, are specified by the standard and are not dependent on the underlying architecture. Java programs that run on a 64-bit Java virtual machine have access to a larger address space. Speed is not the only factor to consider in comparing 32-bit and 64-bit processors. Applications such as multi-tasking, stress testing, and clustering – for high-performance computing (HPC) – may be more suited to a 64-bit architecture when deployed appropriately. For this reason, 64-bit clusters have been widely deployed in large organizations, such as IBM, HP, and Microsoft. Summary: A common misconception is that 64-bit architectures are no better than 32-bit architectures unless the computer has more than 4 GB of random-access memory. This is not entirely true: The main disadvantage of 64-bit architectures is that, relative to 32-bit architectures, the same data occupies more space in memory (due to longer pointers and possibly other types, and alignment padding). This increases the memory requirements of a given process and can have implications for efficient processor cache use. Maintaining a partial 32-bit model is one way to handle this, and is in general reasonably effective. For example, the z/OS operating system takes this approach, requiring program code to reside in 31-bit address spaces (the high order bit is not used in address calculation on the underlying hardware platform) while data objects can optionally reside in 64-bit regions. 
Not all such applications require a large address space or manipulate 64-bit data items, so these applications do not benefit from those features. x86-based 64-bit systems sometimes lack equivalents of software that is written for 32-bit architectures. The most severe problem in Microsoft Windows is incompatible device drivers for obsolete hardware. Most 32-bit application software can run on a 64-bit operating system in a compatibility mode, also termed an emulation mode, e.g., Microsoft WoW64 Technology for IA-64 and AMD64. The 64-bit Windows Native Mode driver environment runs atop 64-bit NTDLL.DLL, which cannot call 32-bit Win32 subsystem code (often needed for devices whose actual hardware function is emulated in user-mode software, like Winprinters). Because 64-bit drivers for most devices were unavailable until early 2007 (Vista x64), using a 64-bit version of Windows was considered a challenge. However, the trend has since moved toward 64-bit computing, more so as memory prices dropped and the use of more than 4 GB of RAM increased. Most manufacturers started to provide both 32-bit and 64-bit drivers for new devices, so unavailability of 64-bit drivers ceased to be a problem. 64-bit drivers were not provided for many older devices, which consequently could not be used in 64-bit systems. Driver compatibility was less of a problem with open-source drivers, as 32-bit ones could be modified for 64-bit use. Support for hardware made before early 2007 was problematic for open-source platforms,[citation needed] due to the relatively small number of users. 64-bit versions of Windows cannot run 16-bit software; however, most 32-bit applications will work well. 64-bit users are forced to install a virtual machine running a 16- or 32-bit operating system to run 16-bit applications, or to use one of the alternatives to NTVDM. Mac OS X 10.4 "Tiger" and Mac OS X 10.5 "Leopard" had only a 32-bit kernel, but they can run 64-bit user-mode code on 64-bit processors. Mac OS X 10.6 "Snow Leopard" had both 32- and 64-bit kernels, and, on most Macs, used the 32-bit kernel even on 64-bit processors. This allowed those Macs to support 64-bit processes while still supporting 32-bit device drivers, although not 64-bit drivers or the performance advantages that can come with them. Mac OS X 10.7 "Lion" ran with a 64-bit kernel on more Macs, and OS X 10.8 "Mountain Lion" and later macOS releases have only a 64-bit kernel. On systems with 64-bit processors, both the 32- and 64-bit macOS kernels can run 32-bit user-mode code, and all versions of macOS up to macOS Mojave (10.14) include 32-bit versions of libraries that 32-bit applications would use, so 32-bit user-mode software for macOS will run on those systems. The 32-bit versions of libraries were removed by Apple in macOS Catalina (10.15). Linux and most other Unix-like operating systems, and the C and C++ toolchains for them, have supported 64-bit processors for many years. Many applications and libraries for those platforms are open-source software, written in C and C++, so that if they are 64-bit-safe, they can be compiled into 64-bit versions. This source-based distribution model, with an emphasis on frequent releases, makes availability of application software for those operating systems less of an issue. 64-bit data models In 32-bit programs, pointers and data types such as integers generally have the same length. This is not necessarily true on 64-bit machines.
Mixing data types in programming languages such as C and its descendants such as C++ and Objective-C may thus work on 32-bit implementations but not on 64-bit implementations. In many programming environments for C and C-derived languages on 64-bit machines, int variables are still 32 bits wide, but long integers and pointers are 64 bits wide. These are described as having an LP64 data model, which is an abbreviation of "Long, Pointer, 64". Other models are the ILP64 data model in which all three data types are 64 bits wide, and even the SILP64 model where short integers are also 64 bits wide. However, in most cases the modifications required are relatively minor and straightforward, and many well-written programs can simply be recompiled for the new environment with no changes. Another alternative is the LLP64 model, which maintains compatibility with 32-bit code by leaving both int and long as 32-bit. LL refers to the long long integer type, which is at least 64 bits on all platforms, including 32-bit environments. There are also systems with 64-bit processors using an ILP32 data model, with the addition of 64-bit long long integers; this is also used on many platforms with 32-bit processors. This model reduces code size and the size of data structures containing pointers, at the cost of a much smaller address space, a good choice for some embedded systems. For instruction sets such as x86 and ARM in which the 64-bit version of the instruction set has more registers than does the 32-bit version, it provides access to the additional registers without the space penalty. It is common in 64-bit RISC machines,[citation needed] explored in x86 as x32 ABI, and has recently been used in the Apple Watch Series 4 and 5. Many 64-bit platforms today use an LP64 model (including Solaris, AIX, HP-UX, Linux, macOS, BSD, and IBM z/OS). Microsoft Windows uses an LLP64 model. The disadvantage of the LP64 model is that storing a long into an int truncates. On the other hand, converting a pointer to a long will "work" in LP64. In the LLP64 model, the reverse is true. These are not problems which affect fully standard-compliant code, but code is often written with implicit assumptions about the widths of data types. C code should prefer (u)intptr_t instead of long when casting pointers into integer objects. A programming model is a choice made to suit a given compiler, and several can coexist on the same OS. However, the programming model chosen as the primary model for the OS application programming interface (API) typically dominates. Another consideration is the data model used for device drivers. Drivers make up the majority of the operating system code in most modern operating systems[citation needed] (although many may not be loaded when the operating system is running). Many drivers use pointers heavily to manipulate data, and in some cases have to load pointers of a certain size into the hardware they support for direct memory access (DMA). As an example, a driver for a 32-bit PCI device asking the device to DMA data into upper areas of a 64-bit machine's memory could not satisfy requests from the operating system to load data from the device to memory above the 4 gigabyte barrier, because the pointers for those addresses would not fit into the DMA registers of the device. This problem is solved by having the OS take the memory restrictions of the device into account when generating requests to drivers for DMA, or by using an input–output memory management unit (IOMMU). 
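The data-model distinctions above are easy to observe directly. The following minimal C sketch prints the type widths that differ between LP64, LLP64, and ILP32, and demonstrates the (u)intptr_t round-trip recommended in the text; the output naturally varies by platform.

```c
#include <stdio.h>
#include <stdint.h>

/* Widths that distinguish the data models described above:
 * LP64  (Linux, macOS, BSD, ...): int=4, long=8, long long=8, pointer=8
 * LLP64 (64-bit Windows):         int=4, long=4, long long=8, pointer=8
 * ILP32 (32-bit systems, x32):    int=4, long=4, long long=8, pointer=4 */
int main(void) {
    printf("int:       %zu bytes\n", sizeof(int));
    printf("long:      %zu bytes\n", sizeof(long));
    printf("long long: %zu bytes\n", sizeof(long long));
    printf("pointer:   %zu bytes\n", sizeof(void *));

    /* Unlike long, uintptr_t (where provided) is guaranteed to hold a
     * pointer converted to an integer and back without loss, under
     * both LP64 and LLP64. */
    int x = 42;
    uintptr_t p = (uintptr_t)&x;
    int *q = (int *)p;
    printf("round-trip through uintptr_t: %d\n", *q);
    return 0;
}
```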
Current 64-bit architectures As of August 2023[update], 64-bit architectures for which processors were manufactured included: Most 64-bit architectures that are derived from a 32-bit version of the same architecture can execute code written for the 32-bit version natively, with no performance penalty. For example, x86-64 processors can run IA-32 applications at full speed. This kind of support is commonly called bi-arch support or, more generally, multi-arch support.
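As a supplement to the virtual-address limits described under "Limits of processors": on x86-64, a 48-bit virtual address is "canonical" when bits 63 through 48 replicate bit 47. A hedged C sketch of that check follows; it assumes the near-universal two's-complement, arithmetic-right-shift behavior, and the 48-bit width is simply the commonly implemented case (processors with 5-level paging widen it to 57 bits).

```c
#include <stdbool.h>
#include <stdint.h>
#include <stdio.h>

/* True if va is canonical for an n-bit virtual address space: bits
 * [63:n] must all equal bit n-1, i.e. the address equals the sign
 * extension of its low n bits. Relies on arithmetic right shift of
 * signed values, which is implementation-defined in C but universal
 * on mainstream compilers. */
static bool is_canonical(uint64_t va, int n) {
    int64_t sext = (int64_t)(va << (64 - n)) >> (64 - n);
    return (uint64_t)sext == va;
}

int main(void) {
    printf("%d\n", is_canonical(0x00007fffffffffffULL, 48)); /* 1: top of lower half  */
    printf("%d\n", is_canonical(0xffff800000000000ULL, 48)); /* 1: base of upper half */
    printf("%d\n", is_canonical(0x0000800000000000ULL, 48)); /* 0: non-canonical hole */
    return 0;
}
```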
======================================== |
[SOURCE: https://en.wikipedia.org/wiki/File_server] | [TOKENS: 946] |
Contents File server In computing, a file server (or fileserver) is a computer attached to a network that provides a location for shared disk access, i.e. storage of computer files (such as text, image, sound, and video files) that can be accessed by workstations within a computer network. The term server highlights the role of the machine in the traditional client–server scheme, where the clients are the workstations using the storage. A file server does not normally perform computational tasks or run programs on behalf of its client workstations (in other words, it differs from, e.g., an application server, which is another type of server). File servers are commonly found in schools and offices, where users use a local area network to connect their client computers. Types of file servers A file server may be dedicated or non-dedicated. A dedicated server is designed specifically for use as a file server, with workstations attached for reading and writing files and databases. File servers may also be categorized by the method of access: Internet file servers are frequently accessed by File Transfer Protocol (FTP) or by Hypertext Transfer Protocol (HTTP), but are different from web servers, which often provide dynamic web content in addition to static files. Servers on a LAN are usually accessed by the SMB/CIFS protocol (Windows and Unix-like systems) or the NFS protocol (Unix-like systems). Database servers, which provide access to a shared database via a database device driver, are not regarded as file servers even when the database is stored in files, as they are not designed to provide those files to users and tend to have differing technical requirements. Design of file servers In modern businesses, the design of file servers is complicated by competing demands for storage space, access speed, recoverability, ease of administration, security, and budget. This is further complicated by a constantly changing environment, where new hardware and technology rapidly render old equipment obsolete, yet the new equipment must seamlessly come online in a fashion compatible with the older machinery. To manage throughput, peak loads, and response time, vendors may utilize queuing theory to model how the combination of hardware and software will respond over various levels of demand. Servers may also employ a dynamic load-balancing scheme to distribute requests across various pieces of hardware. The primary piece of storage hardware for servers over the last couple of decades has proven to be the hard disk drive. Although other forms of storage are viable (such as magnetic tape and solid-state drives), disk drives have continued to offer the best fit for cost, performance, and capacity. Since the crucial function of a file server is storage, technology has been developed to operate multiple disk drives together as a team, forming a disk array. A disk array typically has cache (temporary memory storage that is faster than the magnetic disks), as well as advanced functions like RAID and storage virtualization. Typically, disk arrays increase the level of availability by using redundant components other than RAID, such as power supplies. Disk arrays may be consolidated or virtualized in a SAN. Network-attached storage (NAS) is file-level computer data storage connected to a computer network providing data access to a heterogeneous group of clients.
NAS devices are distinguished from file servers generally in that a NAS is a computer appliance, a specialized computer built from the ground up for serving files, rather than a general-purpose computer being used for serving files (possibly among other functions). In discussions of NASs, the term "file server" generally serves as a contrasting term, referring to general-purpose computers only. As of 2010[update], NAS devices are gaining popularity, offering a convenient method for sharing files between multiple computers. Potential benefits of network-attached storage, compared to non-dedicated file servers, include faster data access, easier administration, and simple configuration. NAS systems are networked appliances containing one or more hard drives, often arranged into logical, redundant storage containers or RAID arrays. Network-attached storage removes the responsibility of file serving from other servers on the network. NAS devices typically provide access to files using network file-sharing protocols such as NFS, SMB/CIFS (Server Message Block/Common Internet File System), or AFP. File servers generally offer some form of system security to limit access to files to specific users or groups. In large organizations, this task is usually delegated to directory services such as OpenLDAP, Novell's eDirectory, or Microsoft's Active Directory. These servers work within a hierarchical computing environment which treats users, computers, applications, and files as distinct but related entities on the network, and grants access based on user or group credentials. In many cases, the directory service spans many file servers, potentially hundreds for large organizations. In the past, and in smaller organizations, authentication could take place directly at the server itself.
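To illustrate the client–server scheme described above in the simplest possible terms, here is a deliberately minimal TCP file-serving sketch in C. It is not an implementation of FTP, SMB/CIFS, NFS, or any real protocol; it has no authentication, concurrency, error handling, or path sanitization, and the port number is an arbitrary choice for the example.

```c
/* Minimal illustrative file server: accepts one TCP connection at a
 * time, reads a file name (one line) from the client, and streams that
 * file back. Sketch only; real file servers speak protocols such as
 * SMB/CIFS or NFS and enforce access control. */
#include <stdio.h>
#include <string.h>
#include <unistd.h>
#include <sys/socket.h>
#include <netinet/in.h>
#include <arpa/inet.h>

int main(void) {
    int srv = socket(AF_INET, SOCK_STREAM, 0);
    struct sockaddr_in addr = {0};
    addr.sin_family = AF_INET;
    addr.sin_addr.s_addr = htonl(INADDR_ANY);
    addr.sin_port = htons(9000);               /* arbitrary example port */
    bind(srv, (struct sockaddr *)&addr, sizeof addr);
    listen(srv, 1);

    for (;;) {
        int cli = accept(srv, NULL, NULL);
        if (cli < 0) continue;

        char name[256] = {0};
        ssize_t n = read(cli, name, sizeof name - 1);
        if (n > 0) {
            name[strcspn(name, "\r\n")] = '\0'; /* trim the request line */
            FILE *f = fopen(name, "rb");        /* no path sanitization! */
            if (f) {
                char buf[4096];
                size_t k;
                while ((k = fread(buf, 1, sizeof buf, f)) > 0)
                    write(cli, buf, k);
                fclose(f);
            }
        }
        close(cli);
    }
}
```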
======================================== |
[SOURCE: https://en.wikipedia.org/wiki/Gypsum] | [TOKENS: 2120] |
Contents Gypsum Gypsum is a soft sulfate mineral composed of calcium sulfate dihydrate, with the chemical formula CaSO₄·2H₂O. It is widely mined and is used as a fertilizer and as the main constituent in many forms of plaster, drywall and blackboard or sidewalk chalk. Gypsum also crystallizes as translucent crystals of selenite. It forms as an evaporite mineral and as a hydration product of anhydrite.[citation needed] The Mohs scale of mineral hardness defines gypsum as hardness value 2 based on scratch hardness comparison. Fine-grained white or lightly tinted forms of gypsum known as alabaster have been used for sculpture by many cultures including Ancient Egypt, Mesopotamia, Ancient Rome, the Byzantine Empire, and the Nottingham alabasters of Medieval England. Etymology and history The word gypsum is derived from the Greek word gypsos (γύψος), "plaster". Because the quarries of the Montmartre district of Paris have long furnished burnt gypsum (calcined gypsum) used for various purposes, this dehydrated gypsum became known as plaster of Paris. When water is added, plaster of Paris becomes regular gypsum (the dihydrate) again within a few dozen minutes, causing the material to harden or "set" in ways that are useful for casting and construction. Gypsum was known in Old English as spærstān, "spear stone", referring to its crystalline projections. Thus, the word spar in mineralogy, by comparison to gypsum, refers to any non-ore mineral or crystal that forms in spearlike projections. In the mid-18th century, the German clergyman and agriculturalist Johann Friderich Mayer investigated and publicized gypsum's use as a fertilizer. Gypsum may act as a source of sulfur for plant growth, and in the early 19th century, it was regarded as an almost miraculous fertilizer. American farmers were so anxious to acquire it that a lively smuggling trade with Nova Scotia evolved, resulting in the so-called "Plaster War" of 1820. Physical properties Gypsum is moderately water-soluble (~2.0–2.5 g/L at 25 °C) and, in contrast to most other salts, it exhibits retrograde solubility, becoming less soluble at higher temperatures. When gypsum is heated in air it loses water and converts first to calcium sulfate hemihydrate (bassanite, often simply called "plaster") and, if heated further, to anhydrous calcium sulfate (anhydrite). As with anhydrite, the solubility of gypsum in saline solutions and in brines is strongly dependent on sodium chloride (common table salt) concentration. The structure of gypsum consists of layers of calcium (Ca²⁺) and sulfate (SO₄²⁻) ions tightly bound together. These layers are bonded by sheets of water of crystallization via weaker hydrogen bonding, which gives the crystal perfect cleavage along the sheets (in the {010} plane). Crystal varieties Gypsum occurs in nature as flattened and often twinned crystals, and as transparent, cleavable masses called selenite. In the form of selenite, gypsum forms some of the largest crystals found in nature, up to 12 m (39 ft) long. Selenite contains no significant selenium; rather, both substances were named for the ancient Greek word for the Moon. Selenite may also occur in a silky, fibrous form, in which case it is commonly called "satin spar". It may also be granular or quite compact. In hand-sized samples, it can be anywhere from transparent to opaque. A very fine-grained white or lightly tinted variety of gypsum, called alabaster, is prized for ornamental work of various sorts.
In arid areas, gypsum can occur in a flower-like form, typically opaque, with embedded sand grains, called desert rose. Occurrence Gypsum is a common mineral, with thick and extensive evaporite beds in association with sedimentary rocks. Deposits are known to occur in strata from as far back as the Archaean eon. Gypsum is deposited from lake and sea water, as well as in hot springs, from volcanic vapors, and from sulfate solutions in veins. Hydrothermal anhydrite in veins is commonly hydrated to gypsum by groundwater in near-surface exposures. It is often associated with the minerals halite and sulfur. Gypsum is the most common sulfate mineral. Pure gypsum is white, but other substances found as impurities may give a wide range of colors to local deposits. Because gypsum dissolves over time in water, gypsum is rarely found in the form of sand. However, the unique conditions of the White Sands National Park in the US state of New Mexico have created a 710 km² (270 sq mi) expanse of white gypsum sand, enough to supply the US construction industry with drywall for 1,000 years. Commercial exploitation of the area, strongly opposed by area residents, was permanently prevented in 1933 when President Herbert Hoover declared the gypsum dunes a protected national monument. Gypsum is also formed as a by-product of sulfide oxidation, amongst others by pyrite oxidation, when the sulfuric acid generated reacts with calcium carbonate. Its presence indicates oxidizing conditions. Under reducing conditions, the sulfates it contains can be reduced back to sulfide by sulfate-reducing bacteria. This can lead to the accumulation of elemental sulfur in oil-bearing formations, such as salt domes, where it can be mined using the Frasch process. Electric power stations burning coal with flue gas desulfurization produce large quantities of gypsum as a byproduct from the scrubbers. Orbital pictures from the Mars Reconnaissance Orbiter (MRO) have indicated the existence of gypsum dunes in the northern polar region of Mars, which were later confirmed at ground level by the Mars Exploration Rover (MER) Opportunity. Mining Commercial quantities of gypsum are found in the cities of Araripina and Grajaú in Brazil; in Pakistan, Jamaica, Iran (the world's second-largest producer), Thailand, Spain (the main producer in Europe), Germany, Italy, England, Ireland, Canada and the United States. Large open-pit quarries are located in many places, including Fort Dodge, Iowa, which sits on one of the largest deposits of gypsum in the world; Plaster City, California, United States; and East Kutai, Kalimantan, Indonesia. Several small mines also exist in places such as Kalannie in Western Australia, where gypsum is sold to private buyers for additions of calcium and sulfur, as well as for the reduction of aluminium toxicity in soil, for agricultural purposes. Crystals of gypsum up to 11 m (36 ft) long have been found in the caves of the Naica Mine of Chihuahua, Mexico. The crystals thrived in the cave's extremely rare and stable natural environment. Temperatures stayed at 58 °C (136 °F), and the cave was filled with mineral-rich water that drove the crystals' growth. The largest of those crystals weighs 55 tonnes (61 short tons) and is around 500,000 years old. Synthesis Synthetic gypsum is produced as a waste product or by-product in a range of industrial processes. Flue gas desulfurization gypsum (FGDG) is recovered at some coal-fired power plants. The main contaminants are Mg, K, Cl, F, B, Al, Fe, Si, and Se.
They come both from the limestone used in desulfurization and from the coal burned. This product is pure enough to replace natural gypsum in a wide variety of fields, including drywall, water treatment, and cement set retarders. Improvements in flue gas desulfurization have greatly reduced the amount of toxic elements present. Gypsum precipitates onto brackish water membranes, a phenomenon known as mineral salt scaling, such as during brackish water desalination of water with high concentrations of calcium and sulfate. Scaling decreases membrane life and productivity. This is one of the main obstacles in brackish water membrane desalination processes, such as reverse osmosis or nanofiltration. Other forms of scaling, such as calcite scaling, depending on the water source, can also be important considerations in distillation, as well as in heat exchangers, where either the salt solubility or concentration can change rapidly. A new study has suggested that the formation of gypsum starts as tiny crystals of a mineral called bassanite (2CaSO₄·H₂O), via a three-stage pathway. The production of phosphate fertilizers requires breaking down calcium-containing phosphate rock with acid, producing calcium sulfate waste known as phosphogypsum (PG). This form of gypsum is contaminated by impurities found in the rock, namely fluoride, silica, radioactive elements such as radium, and heavy metal elements such as cadmium. Similarly, the production of titanium dioxide produces titanium gypsum (TG) due to the neutralization of excess acid with lime. The product is contaminated with silica, fluorides, organic matter, and alkalis. Impurities in refinery gypsum waste have, in many cases, prevented it from being used as normal gypsum in fields such as construction. As a result, waste gypsum is stored in stacks indefinitely, with a significant risk of its contaminants leaching into water and soil. To reduce the accumulation and ultimately clear out these stacks, research is underway to find more applications for such waste products. Occupational safety People can be exposed to gypsum in the workplace by breathing it in, skin contact, and eye contact. Calcium sulfate per se is nontoxic and is even approved as a food additive, but as powdered gypsum, it can irritate skin and mucous membranes. The Occupational Safety and Health Administration (OSHA) has set the legal limit (permissible exposure limit) for gypsum exposure in the workplace as TWA 15 mg/m³ for total exposure and TWA 5 mg/m³ for respiratory exposure over an eight-hour workday. The National Institute for Occupational Safety and Health (NIOSH) has set a recommended exposure limit (REL) of TWA 10 mg/m³ for total exposure and TWA 5 mg/m³ for respiratory exposure over an eight-hour workday. Uses Gypsum is used in a wide variety of applications.
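The dehydration sequence described under "Physical properties" can be written out stoichiometrically; the temperature range shown for the first step is an approximate, commonly cited figure rather than a value taken from this article.

```latex
% Requires amsmath. Gypsum -> hemihydrate (bassanite / plaster of Paris)
% -> anhydrite, with the setting reaction reversing the first step.
\begin{align*}
\mathrm{CaSO_4\cdot 2H_2O}
  &\xrightarrow{\ \approx 100\text{--}150\,^\circ\mathrm{C}\ }
  \mathrm{CaSO_4\cdot \tfrac{1}{2}H_2O} + \tfrac{3}{2}\,\mathrm{H_2O} \\
\mathrm{CaSO_4\cdot \tfrac{1}{2}H_2O}
  &\xrightarrow{\ \text{further heating}\ }
  \mathrm{CaSO_4} + \tfrac{1}{2}\,\mathrm{H_2O} \\
\mathrm{CaSO_4\cdot \tfrac{1}{2}H_2O} + \tfrac{3}{2}\,\mathrm{H_2O}
  &\longrightarrow \mathrm{CaSO_4\cdot 2H_2O}
  \quad \text{(setting of plaster)}
\end{align*}
```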
======================================== |
[SOURCE: https://en.wikipedia.org/wiki/Ministry_of_Education_(Israel)#cite_note-4] | [TOKENS: 543] |
Contents Ministry of Education (Israel) The Ministry of Education (Hebrew: מִשְׂרָד הַחִנּוּךְ, translit. Misrad HaHinukh; Arabic: وزارة التربية والتعليم) is the branch of the Israeli government charged with overseeing public education institutions in Israel. The department is headed by the Minister of Education, who is a member of the cabinet. The ministry has previously included culture and sport, although these are now covered by the Ministry of Culture and Sport. History In the first decade of statehood, the education system was faced with the task of establishing a network of kindergartens and schools for a rapidly growing student population. In 1949, there were 80,000 elementary school students. By 1950, there were 120,000, an increase of 50 percent within the span of one year. Israel also took over responsibility for the education of Arab schoolchildren. The first minister of education was Zalman Shazar, later president of the State of Israel. Since 2002, the Ministry of Education has awarded a National Education Award to the five top localities, in recognition of excellence and of substantial investment of resources in the educational system. In 2012, first place was awarded to the Shomron Regional Council, followed by Or Yehuda, Tiberias, Eilat and Beersheba. The prize has been awarded to a variety of educational institutions, including kindergartens and elementary schools. In 2013–2014, the Ministry of Education promoted the regulation of the activities of external parties within the state schools, in a dialogue between the Ministry, local government, parents' representatives, the business sector and philanthropic parties, as part of what was called "the intersectoral round table in the Ministry of Education". As part of the regulation, the Ministry compiled a database of external programs that have some kind of partnership with a representative from the Ministry of Education's headquarters. In 2019, a petition was filed by pluralist Jewish organizations against the Ministry of Education over a procedure that reduces by tens of thousands of shekels the support for the activities of these organizations in schools. In April 2021, the High Court invalidated the procedure in question, and emphasized the importance of implementing the principles of the Shenhar Committee report on the teaching of Judaism in state education. In November 2021, it was announced that the Ministry of Education was not implementing the High Court ruling and that the damage to those organizations continued.
======================================== |
[SOURCE: https://en.wikipedia.org/wiki/Maor_Farid#cite_ref-:0_3-0] | [TOKENS: 1458] |
Contents Maor Farid Dr. Maor Farid (Hebrew: מאור פריד; born April 20, 1992) is an Israeli scientist, engineer and artificial intelligence researcher at the Massachusetts Institute of Technology, a social activist, and an author. He is the founder and CEO of Learn to Succeed (Hebrew: ללמוד להצליח), an organization for the empowerment of youths from the Israeli socio-economic periphery and youths at risk, a regional manager of the Israeli center of ScienceAbroad at MIT, and an activist in the American Technion Society. He is an alumnus of Unit 8200, and a fellow of the Fulbright Program and the Israel Scholarship Educational Foundation [he]. Dr. Farid was named to the Forbes 30 Under 30 list of 2019, and won the Moskowitz Prize for Zionism. Early life Maor was born in Ness Ziona, a city in central Israel, the eldest son of parents from Mizrahi Jewish immigrant families from Iraq and Libya. Maor suffered from attention deficit hyperactivity disorder (ADHD) from a young age, and was classified as a problematic and violent student. His ADHD was diagnosed only after he began his university studies. However, inspired by his parents' background, he aspired to excel at school for the sake of a better future for his family. During elementary school, Maor attended local quizzes about Jewish history and Zionism, which significantly shaped his identity and national perspective. Farid graduated from high school with the highest GPA in his school. Later he was recruited to the Israel Defense Forces and drafted into the Brakim Program [he], an excellence program of the Israeli Intelligence Corps for training leading R&D officers for the Israeli military and defense industry. Maor graduated from the program with honors and was selected by the Israeli Prime Minister's Office and Unit 8200, where he served as an artificial intelligence researcher, officer, and commander. During his military service, he received various honors and awards, such as the Excellent Scientist Award, given to the top three academics serving in the Israel Defense Forces. In 2019, Farid completed his military service at the rank of captain. Education and academic career As part of the four-year Brakim Program, Maor completed his bachelor's and master's degrees at the Technion in mechanical engineering, with honors. He then began his Ph.D. research as a collaboration with the Israel Atomic Energy Commission (IAEC), in parallel with his military service. The main goals of his Ph.D. research were predicting the irreversible effects of major earthquakes on Israel's nuclear facilities and improving their seismic resistance using energy-absorption technologies. The mathematical models developed by Farid were able to forecast earthquake effects on facilities with major hazard potential, and predicted the failure of liquid storage tanks in earthquakes that took place in Italy (2012) and Mexico (2017). The energy-absorption technologies used increased the seismic resistance of those sensitive facilities by up to 90%. The research results were published in multiple papers in peer-reviewed academic journals and presented at international academic conferences. Later, this research expanded into an official collaboration between the Technion and the Shimon Peres Negev Nuclear Research Center, which aims to implement the findings on existing sensitive systems, and won funding of 1.5 million NIS from the Pazy Foundation of the Israel Atomic Energy Commission and the Council for Higher Education. In 2017, Farid completed his Ph.D.
and as the youngest graduate at the Technion for that year, at the age of 24. In the graduation ceremonies, he honored his parents to receive the diplomas on his behalf. At the same year, he served as a lecturer at Ben-Gurion University in an original course he developed as a solution for knowledge gaps he identified in the Israeli defense industry. In 2018, Dr. Farid served as an artificial intelligence researcher at a Data Science team of Unit 8200, where he developed machine learning-based solutions for military and operational needs. In 2019, Farid won the Fulbright and the Israel Scholarship Educational Foundation scholarships, and was accepted to post-doctoral position at Massachusetts Institute of Technology where he develops real-time methods for predicting earthquake effects using machine learning techniques. In 2020, Farid was accepted to the Emerging Leaders Program at Harvard Kennedy School in Cambridge, Massachusetts. At the same year, he received the excellence research grant of the Israel Academy of Sciences and Humanities for leading his research in collaboration between MIT and the Technion. Social activism Farid social activism focuses on empowering youths from disadvantaged backgrounds from an early age. In 2010–2015, he served as a mentor of a robotics team from Dimona in FIRST Robotics Competition, a mathematics tutor in "Aharai!" [he] program for high-school students at risk in Dimona and Be'er Sheva, and a mentor and private tutor of adolescence and reserve duty soldiers from disadvantaged backgrounds. In 2010, he initiated "Learn to Succeed" (Hebrew: ללמוד להצליח) project, for mitigating the social gaps in the Israeli society by empowering youths from the social, economical, and geographical periphery for excellence, self-fulfillment and gaining formal education. In 2018, Learn to Succeed became an official non-profit organization. At the same year, Farid led a crowdfunding project of 150,000 NIS in order to expand the organization to a national scale. In 2019, he published the book "Learn to Succeed", in which he describes his struggle with ADHD, the violent environment in which he grew up, and the changing process he went through from being a violent teenager to becoming the youngest Ph.D. graduate at the Technion. The book was given to more than two thousand youths at risk and became a top seller in Israel shortly after its publication. Maor dedicated the book to his parents and to the memorial of his friend Captain Tal Nachman who was killed in operational activity during his military service in 2014. The organization consists of hundreds of volunteers, gives full scholarships to STEM students from the periphery who serve as mentors of youths, both Jews and Arabs, from disadvantaged backgrounds, runs a hotline which gives online practical and mental support to hundreds of youths, parents and educators, initiates inspirational activities with military orientation to increase the motivation of its teen-age members for significant military service, and gives inspirational lectures to more than 5,000 youths each year. In 2019, Maor initiated a collaboration with Unit 8200 in which tens of the program's members are being interviewed to the unit. This opportunity is usually given to students with the highest grades in the matriculate exams in each class. In 2020, Dr. Farid established the ScienceAbroad center at MIT, aiming to strengthen the connections between Israeli researchers in the institute and the state of Israel. 
He also serves as a volunteer in the American Technion Society. Honors and awards Personal life Farid is married to Michal.
======================================== |
[SOURCE: https://en.wikipedia.org/wiki/Four_species] | [TOKENS: 3246] |
Four species The four species (Hebrew: ארבעת המינים arba'at ha-minim, also called arba'a minim) are four plants—the etrog, lulav, hadass, and aravah—mentioned in the Torah (Leviticus 23:40) as being relevant to the Jewish holiday of Sukkot. Observant Jews tie together three types of branches and one type of fruit and wave them in a special ceremony each day of the Sukkot holiday, excluding Shabbat. The waving of the four plants is a mitzvah prescribed by God in the Torah, and it contains symbolic allusions to a Jew's service of God. The four plants The mitzvah of waving the four species derives from the Torah. Leviticus 23:40 states: And you shall take on the first day the fruit of splendid trees, branches of palm trees and boughs of leafy trees and willows of the brook, and you shall rejoice before the LORD your God for seven days. English Standard Version The Hebrew terms in this verse are peri etz hadar ("fruit of splendid trees"), kappot temarim ("branches of palm trees"), anaf etz avot ("boughs of leafy trees"), and arvei nachal ("willows of the brook"). In Talmudic tradition, the four plants are identified as the etrog (citron), the lulav (frond of the date palm), the hadass (bough of myrtle), and the aravah (branch of willow). Practice To prepare the species for the mitzvah, the lulav is first bound together with the hadass and aravah (this bundle is also referred to as "the lulav") in the following manner: one lulav is placed in the center, two aravah branches are placed to the left, and three hadass boughs are placed to the right. (This order is the same for both right-handed and left-handed people.) The bundle may be bound with strips from another palm frond, or be placed in a special holder which is also woven from palm fronds. Sephardic Jews place one aravah to the right of the lulav and the second aravah to its left, and cover them with the three hadass boughs—one on the right, the second on the left, and the third atop the lulav's spine, leaning slightly to the right. The bundle is held together with rings made from strips of palm fronds. Many Hasidic Ashkenazi Jews follow this practice as well. In all cases, all of the species must be placed in the direction in which they grew. (For the etrog, this means that the stem end should be on the bottom and the blossom end on top; this is the direction in which the etrog begins to grow, though as it matures on the tree it usually hangs in the opposite direction.) To recite the blessing over the lulav and etrog, the lulav is held in one hand and the etrog in the other. Right-handed users hold the lulav in the right hand and the etrog in the left. The customs for those who are left-handed differ for Ashkenazim and Sephardim: according to the Ashkenazi custom, the lulav is held in the left hand, and according to the Sephardi custom, in the right hand. According to Sephardi custom, the blessing is said while holding only the lulav, and the etrog is picked up once the blessing is completed. According to Ashkenazi custom, before the blessing is said, the etrog is turned upside-down, opposite the direction in which it grows. The reason for these two customs is that the blessing must precede the performance of the mitzvah: should all the species be held in the direction in which they grew, the mitzvah would be fulfilled before the blessing is recited. After reciting the blessing, "Blessed are You, Lord our God, King of the universe, Who has sanctified us with His commandments, and commanded us to take the lulav" (the "Shehecheyanu" blessing is also recited the first time each year that one waves the lulav and etrog), the etrog is turned right side up (or picked up), and the user brings his or her two hands together so that the etrog touches the lulav bundle.
The four species are then pointed and gently shaken three times toward each of the four directions, plus up and down, to attest to God's mastery over all of creation. This shaking or waving is called na'anu'im (נענועים). The waving ceremony can be performed in the synagogue, or in the privacy of one's home or sukkah, as long as it is daytime. Women and girls may choose to perform the mitzvah of waving the lulav and etrog, but are not required by Halakha to do so. Because women are not required to perform this mitzvah, some are of the opinion that Sephardi women do not recite the blessing. The waving is performed again (though without the attendant blessings) during morning prayer services in the synagogue, at several points during the recital of Hallel. Additionally, in the synagogue, Hallel is followed by a further ceremony, in which the worshippers join in a processional around the sanctuary with their four species, while reciting special supplications (called hoshaanot, from the refrain hosha na, "save us"). From the first through the sixth day of Sukkot, one complete circuit is made; on Hoshanah Rabbah, the seventh and last day of Sukkot, seven complete circuits are made. As the four species are not used on Shabbat, there are variant customs as to whether hoshaanot are said and a circuit made on that day. The Lulav and Etrog Campaign, which became one of the inaugural efforts in Chabad's broad Mitzvah Campaigns, began in September 1953 under the direction of Rabbi Menachem Schneerson. The initiative was part of Rabbi Schneerson's overarching vision to reconnect every Jew with his or her Jewish soul. The campaign was formally launched when the Rebbe's chief secretary, Rabbi Chaim Mordechai Aizik Hodakov, summoned two yeshiva students with a directive to go out onto the streets during the holiday of Sukkot. The idea was simple: to take the Four Kinds and offer Jews they encountered the opportunity to fulfill this central Sukkot mitzvah. Although Chabad had a long history of concern for fellow Jews and had established systems like the Jewish day school network and the Released Time program, going out into the streets and approaching strangers in this manner was a new type of outreach for both the Chabad students and the American Jews they encountered. The initial outreach, which Rabbi Yehuda Krinsky recalled as a "very quiet affair, person to person," saw 70 young men participating and visiting locations like Crown Heights, Brownsville, East New York, and the Bronx. The immediate results of this inaugural effort were chronicled in a report by Rabbi Berel Shemtov, documenting that approximately "1,500 fellow Jews fulfilled the mitzvah" during the first two days of Sukkot. The responses from non-observant Jews were highly emotional at times, with some performing the mitzvah with tears in their eyes, acknowledging that it had been years since they had fulfilled it. However, the campaign also received negative feedback, including comments such as "Mind your own business". The underlying philosophy of the Mitzvah Campaigns is the belief that engaging Jews with the physical action of a mitzvah is the core way to connect them to Judaism.
Later observations reinforced the striking effects of this tangible approach, particularly when Chassidim visited nursing homes; doctors and nurses were astonished to see previously silent and depressed individuals suddenly display lively interest, eagerly grasping the lulav and etrog and sometimes reciting the blessings from memory, demonstrating that the objects unlocked powerful, joyful recollections of their heritage.[better source needed] History During the time of the Temple in Jerusalem, the species were taken in the Temple on all seven days of Sukkot, and elsewhere only on the first day. This followed the command of Leviticus 23:40, which calls for the species to be "taken" on the first day, followed by seven days of celebration "before God" (at the Temple). Following the destruction of the Temple, Rabbi Yohanan ben Zakkai ordered that the four species be waved everywhere on every day of Sukkot (except on Shabbat), as a memorial to the Temple. The practice of waving the four species has been understood in different ways. Initially, the waving was apparently part of the vigorous dancing (or shaking of musical instruments) that took place as part of the Temple celebrations on Sukkot. After the destruction of the Temple, the waving acquired a symbolic meaning, with the six directions of waving symbolizing God's control over every direction of the universe, similar to certain Temple offerings which were also waved in six directions. In old Eastern European Jewish communities, Jews lived in cities far from the fields, so substantial travel was required to purchase the four species, and often a whole town would have to share one set. The etrog especially was rare and thus very expensive. North African communities, such as those in Morocco and Tunis, were located closer to fields where etrogim could be grown, but the etrog was still fairly expensive; there, instead of one per city, there was one per family, and the community would still share its etrogim to some extent. Today, with improved transportation and farming techniques, more people have their own. An etrog can cost anywhere from $3 to $500 depending on its quality. Selecting the four species While all mitzvot should be performed in the best manner possible, hiddur mitzvah (beautifying the mitzvah) especially applies to the four species. The halacha is explicit on what constitutes the "best" in each species. To that end, people will spend large amounts of money to acquire the most perfect etrog, the straightest lulav, and the freshest hadass and aravah. Usually a father will buy several sets of the four species to outfit his sons as well. Hiddur mitzvah applies to all mitzvot, but its absence does not impede a mitzvah from being performed. For the four species specifically, there is a further "technical" requirement of hadar (beauty), which does impede the mitzvah of the four species from being performed. Despite their similar names and details, these two requirements are distinct from one another. Interpretations Several explanations are offered as to why these particular species were chosen for the mitzvah. The four species are distinctive in that none of them is widespread in the Land of Israel; rather, each occurs in a distinct part of the land: the etrog in the moist coastal plain, the lulav near springs in the valleys, the hadass in the mountains, and the aravah near major wadis and rivers.
Thus, in order to perform the commandment, Jews from different parts of the land had to gather together at the Temple to share the species that grew in their locations. After the exile made such a gathering impossible, the rabbis adapted the idea, teaching that the binding of the four species symbolizes the unification of four categories of Jews in the service of God: the etrog, which has both taste and fragrance, stands for Jews who possess both Torah learning and good deeds; the date palm, whose fruit has taste but no fragrance, for those with learning but without good deeds; the hadass, fragrant but tasteless, for those with good deeds but without learning; and the aravah, with neither taste nor fragrance, for those with neither. A second explanation finds the four species alluding to parts of the human body: the lulav resembles the spine, the hadass leaf the eye, the aravah leaf the lips, and the etrog the heart. By binding them together for a mitzvah, Jews show their desire to consecrate their entire being to the service of God. An additional reason for waving the four species in all directions alludes to the fact that all these species require much water to grow: the lulav grows in watered valleys, the hadass and aravah grow near water sources, and the etrog requires more water than other fruit trees. By taking these particular species and waving them in all directions, the Jew symbolically voices a prayer for abundant rainfall for all the vegetation of the earth in the coming year. Karaite interpretation According to Karaite Judaism, the purpose of the command to collect the four species in Lev. 23:40 is ambiguous, as the text does not explicitly state what to do with them. Karaite Jews believe the intent is not to wave the four species but rather to use them to build the sukkah described in the neighboring verses (23:42–43). This interpretation is based in part on Nehemiah 8:14–18: And they found written in the Law, how that the LORD had commanded by Moses, that the children of Israel should dwell in booths in the feast of the seventh month; and that they should publish and proclaim in all their cities, and in Jerusalem, saying: 'Go forth unto the mount, and fetch olive branches, and branches of wild olive, and myrtle branches, and palm branches, and branches of thick trees, to make booths, as it is written.' So the people went forth, and brought them, and made themselves booths... The passage states that it is "written in the Law" for people to go to the mountains to get palm branches, olive leaves, pine needles, myrtle leaves, and other forms of vegetation with which to build the sukkot. The only verse in the Torah that mentions some of these species is Lev. 23:40. According to most Karaites, this indicates that Ezra's scribes interpreted that verse as referring to building materials for the sukkah, not to waving the four species. Lawrence Schiffman interprets the passage the same way: One of the earliest examples of midrashic exegesis was the manner in which Lev. 23:40–42 was interpreted by the book of Ezra. The interpretation proposed here was rejected by Jewish tradition, which saw Lev. 23:40 as referring to the taking of the lulav and etrog, not to the building of the sukkah. Schiffman believes the passage in Nehemiah is a midrashic interpretation of Lev. 23:40, as do the Karaites. However, his view is that this interpretation was eventually rejected by "Jewish tradition", i.e., majority practice, in favor of the Talmudic interpretation of Lev. 23:40 as referring to waving the four species. In contrast to Schiffman, some commentators argue that the verse in Nehemiah cannot be referring to Lev. 23:40, since the language in Nehemiah differs somewhat from that verse: the peri ʿeṣ hadar (fruit of a beautiful tree) and the willow branches are omitted, and two species of olive branches are added.
It remains unclear, according to this interpretation, where exactly the scribes in Nehemiah's day "found written in the law" that the sukkah should be built from the described species, as no such commandment appears in the books of Moses or anywhere else in the Hebrew Bible. A minority view among Karaite sages holds that the four species symbolize a wide variety of greenery and fruits that are meant to be decoratively bundled together, carried around, and eaten throughout the holiday, thus fulfilling the injunction of Lev. 23:40 "to rejoice before the Lord". Art The Four Species have been depicted in Jewish art since ancient times. They surfaced on coins during the First Jewish Revolt (AD 66–70) and the Bar Kochba rebellion (AD 132–136), and resurfaced in the visual arts of late antiquity, appearing in artistic objects found both in the Land of Israel and in diaspora communities.
======================================== |
[SOURCE: https://en.wikipedia.org/wiki/Luxottica] | [TOKENS: 2278] |
Luxottica Luxottica Group S.p.A. is an Italian multinational eyewear corporation headquartered in Milan. As a vertically integrated company, Luxottica designs, manufactures, distributes, and retails its eyewear brands through its own subsidiaries. It is the largest eyewear company in the world. Since October 2018, it has been a subsidiary of EssilorLuxottica, which arose from the merger of Luxottica with the French ophthalmic optics corporation Essilor. Luxottica was founded in Agordo by Leonardo Del Vecchio in 1961 as a sunglasses manufacturer selling and branding under its own name. Del Vecchio quickly acquired numerous businesses in pursuit of vertical integration, rapidly buying distribution companies and signing the company's first designer licensing agreement, with Giorgio Armani. In 1990, the company listed American depositary receipts on the New York Stock Exchange, where it traded until 2017. Luxottica retails its products through stores that it owns, predominantly LensCrafters, Sunglass Hut, Pearle Vision, Target Optical, and Glasses.com. It also owns EyeMed, one of the largest vision health insurance providers. In addition to licensing prescription and non-prescription sunglasses frames for many luxury and designer brands, including Chanel, Prada, Giorgio Armani, Burberry, Versace, Dolce and Gabbana, Michael Kors, Coach, Miu Miu and Tory Burch, the Italian corporation outright owns and manufactures Ray-Ban, Persol, Oliver Peoples, and Oakley. Luxottica's market power has allowed it to charge price markups of up to 1,000%. In January 2017, Luxottica announced its merger with Essilor, in which Essilor would buy Luxottica while Del Vecchio would become executive chairman of the combined company and co-lead it with then-Essilor CEO Hubert Sagnières. The combined entity would command more than a quarter of global value sales of eyewear. In March 2018, the European Commission unconditionally approved the merger of Essilor and Luxottica. On 1 October 2018, the new holding company EssilorLuxottica was born, with a combined market capitalization of approximately $70 billion. History Leonardo Del Vecchio started the company in 1961 in Agordo, north of Belluno, Veneto; today the company is headquartered in Milan, Italy. Del Vecchio began his career as an apprentice to a tool and die maker in Milan, but decided to turn his metalworking skills to making spectacle parts. In 1961, he moved to Agordo in the province of Belluno, home to most of the Italian eyewear industry. The new company was Luxottica s.a.s., a limited partnership with Del Vecchio as one of the founding partners. In 1967, he started selling complete eyeglass frames under the Luxottica brand, which proved successful enough that by 1971 he had ended the contract manufacturing business. Convinced of the need for vertical integration, he acquired Scarrone, a distribution company, in 1974. In 1981, the company set up its first international subsidiary, in Germany, the first step in a rapid period of international expansion. The first of many licensing deals with a designer was struck with Giorgio Armani in 1988. The company listed in New York in 1990, and in Milan in December 2000, joining the MIB-30 (now FTSE MIB) index in September 2003. The listing raised money for the company and allowed it to use its shares to acquire other brands, starting with the Italian brand Vogue Eyewear in 1990, followed by Persol and LensCrafters in 1995, Ray-Ban from Bausch & Lomb in 1999, and Sunglass Hut in 2001.
Luxottica later increased its presence in the retail sector by acquiring the Sydney-based OPSM in 2003, and Pearle Vision and Cole National in 2004. Luxottica acquired Oakley in November 2007 for US$2.1 billion. Oakley had tried to dispute Luxottica's pricing given its large market share; Luxottica responded by dropping Oakley from its stores, causing Oakley's stock price to drop, and then mounted a hostile takeover of the company. In August 2011, Luxottica acquired Erroca for €20 million. In March 2014, it was announced that Luxottica would partner with Google on the development of Google Glass and its integration into Luxottica's eyewear. On 1 September 2014, a new organizational structure was announced, composed of two co-CEOs, one focusing on market development and the other overseeing corporate functions. After the exit of former CEO Andrea Guerra, Enrico Cavatorta was appointed CEO of corporate functions and interim CEO of markets (until a permanent appointment to that role). Cavatorta left the company 40 days after being appointed CEO. In 2016, it was reported that Luxottica had lost its third chief executive in a year and a half, as Cavatorta's replacement, Adil Mehboob-Khan, stepped down one year after gaining the position. Upon the departure of Mehboob-Khan, Del Vecchio reclaimed executive powers and became much more active in the company. In January 2017, the company agreed to a merger with Essilor. The deal also offered a succession plan for Leonardo Del Vecchio, the company's founder. Shortly before the merger was completed, reporter Sam Knight wrote in The Guardian, "in seven centuries of spectacles, there has never been anything like it. The new entity will be worth around $50bn (£37bn), sell close to a billion pairs of lenses and frames every year, and have a workforce of more than 140,000 people." On 1 October 2018, the new holding company EssilorLuxottica was founded, with a combined market capitalization of approximately €46.3 billion as of the date of the merger announcement. Eyewear brands Luxottica's two main product offerings are sunglasses and prescription frames. The company operates in two sectors: manufacturing & wholesale distribution, and retail distribution. The house brands include Ray-Ban, Persol, Oliver Peoples, Oakley, and Vogue Eyewear. The company also makes eyewear under license for designer labels including Chanel, Prada, Giorgio Armani, Burberry, Versace, Dolce and Gabbana, Michael Kors, Coach, Miu Miu, and Tory Burch. These brands are sold in the company's own shops, as well as to independent distributors such as department stores, duty-free shops, and opticians. Retail Luxottica Retail has about 9,100 retail locations in the United States, Latin America, Canada, India, China, Australia, New Zealand, South Africa, the United Kingdom, and the United Arab Emirates. The headquarters of the retail division is in Mason, Ohio, United States. Its retail banners include LensCrafters, Sunglass Hut, Pearle Vision, Target Optical, and Glasses.com. Luxottica is the largest optical retailer in the United States, with 7.3% of US retail sales in 2015. Through its 2018 merger with Essilor, the company also owns Coastal/Clearly, an online contact lens and glasses retailer bought in 2014 that ships to over 200 countries besides its original North American market.[citation needed] Medical managed care Luxottica also owns EyeMed Vision Care, a managed vision care organization in the United States. As of 2014, it is the second-largest vision benefits company in the United States. Philanthropy Luxottica is affiliated with the charitable organization OneSight, formed in 1988. In August 2018, Luxottica restored the Accademia Bridge in Venice.
In March 2022, EssilorLuxottica announced the launch of the OneSight EssilorLuxottica Foundation to unify the group's philanthropic efforts, primarily providing vision services to underserved communities. Criticism The company has been criticized for the high price of its brand-name glasses, such as Ray-Ban, Oakley, and several others. A 2012 60 Minutes segment examined whether the company's extensive holdings in the industry were used to keep prices high. Luxottica owns not only a large portfolio of brands (over a dozen), such as Ray-Ban and Oakley, but also retailers such as Sunglass Hut, LensCrafters and Oliver Peoples, the optical departments at Target and (formerly) Sears, as well as key eye insurance groups, including EyeMed, the second-largest glasses insurance firm in the US. It has been accused of operating a complete monopoly on the optical industry and overcharging for its products; for example, it temporarily dropped then-competitor Oakley from its frame design list, purchased the company when its stock crashed, and then increased the prices of its Ray-Ban sunglasses. In addition, it has been argued that, by owning the vision insurance company EyeMed, it also controls part of the buyers' market. The company has said that the market is highly competitive, and that its frames account for roughly 10% of sales worldwide and roughly 20% in the United States. In 2017, its share of the prescription lens market was 41%. Euromonitor International estimated that Luxottica's market share was 14% worldwide, with the second-largest company in the industry, Essilor, holding a 13% market share; the third-largest player was Johnson & Johnson, with a 3.9% market share. In October 2018, Luxottica and Essilor merged into a single company, EssilorLuxottica, which now holds nearly 30% of the global market and represents almost a billion pairs of lenses and frames sold annually. Despite not owning most of the market, the company has considerable price-setting power. It uses "spiff money", financial incentives that reward other industry players who co-operate with it, and has repeatedly driven companies that competed with it on price out of business, crashing their market share and stock price and then buying them out. It has used a variety of techniques, including compelling retailers to drop suppliers and making imitations of competitors' products. It also funds university chairs of ophthalmology and is influential in professional associations. The HBO series Last Week Tonight with John Oliver has criticized the company as a prominent instance of corporate consolidation, as has the truTV series Adam Ruins Everything. In 2019, LensCrafters founder E. Dean Butler spoke to the Los Angeles Times, admitting that Luxottica's dominance of the eyewear industry had resulted in price markups of nearly 1,000%. In the interview, Butler noted, "You can get amazingly good frames, with a Warby Parker level of quality, for $4 to $8. For $15, you can get designer-quality frames, like what you'd get from Prada." When told that some eyeglasses cost as much as $800 in the United States, Butler remarked, "I know. It's ridiculous. It's a complete rip-off." Major shareholders The list of Luxottica shareholders with more than 2% of holdings, December 2014.
======================================== |
[SOURCE: https://en.wikipedia.org/wiki/Vertebral_column] | [TOKENS: 5886] |
Spinal column The spinal column, also known as the vertebral column, spine or backbone, is the part of the axial skeleton in vertebrates. The vertebral column is the defining and eponymous characteristic of the vertebrate. The spinal column is a segmented column of vertebrae that surrounds and protects the spinal cord. The vertebrae are separated by intervertebral discs in a series of cartilaginous joints. The dorsal portion of the spinal column houses the spinal canal, an elongated cavity formed by the alignment of the vertebral neural arches that encloses and protects the spinal cord, with spinal nerves exiting via the intervertebral foramina to innervate each body segment. There are around 50,000 species of animals that have a vertebral column. The human spine is one of the most-studied examples, as the general structure of human vertebrae is fairly typical of that found in other mammals, reptiles, and birds. The shape of the vertebral body does, however, vary somewhat between different groups of living species. Individual vertebrae are named according to their corresponding region: the neck, thorax, abdomen, pelvis or tail. In clinical medicine, features on vertebrae such as the spinous process can be used as surface landmarks to guide medical procedures such as lumbar punctures and spinal anesthesia. There are also many different spinal diseases in humans that can affect both the bony vertebrae and the intervertebral discs, with kyphosis, scoliosis, ankylosing spondylitis, and degenerative discs being recognizable examples. Spina bifida is the most common birth defect that affects the spinal column. Structure In a human spinal column, there are normally 33 vertebrae. The upper 24 pre-sacral vertebrae are articulating and separated from each other by intervertebral discs, and the lower nine are fused in adults: five in the sacrum and four in the coccyx, or tailbone. The articulating vertebrae are named according to their region of the spine: from top to bottom, there are 7 cervical vertebrae, 12 thoracic vertebrae and 5 lumbar vertebrae. The number of vertebrae in a region can vary, but overall the number remains the same. The number in the cervical region, however, is only rarely changed, while that in the coccygeal region varies most. Excluding rare deviations, the total number of vertebrae ranges from 32 to 35. In about 10% of people, both the total number of pre-sacral vertebrae and the number of vertebrae in individual parts of the spine can vary; the most frequent deviations are 11 (rarely 13) thoracic vertebrae, 4 or 6 lumbar vertebrae, and 3 or 5 coccygeal vertebrae (rarely up to 7). There are numerous ligaments extending the length of the column, which include the anterior and posterior longitudinal ligaments at the front and back of the vertebral bodies, the ligamentum flavum deep to the laminae, the interspinous and supraspinous ligaments between the spinous processes, and the intertransverse ligaments between the transverse processes. The human vertebral column is divided into different regions, which correspond to the curvatures of the column: the cervical spine, thoracic spine, lumbar spine, sacrum, and coccyx. Vertebrae in these regions are essentially alike, with minor variation.
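To make the counting above explicit, here is a minimal worked example in Python (a sketch added for illustration; the region names and counts simply restate the figures given in the text):

regions = {"cervical": 7, "thoracic": 12, "lumbar": 5, "sacral": 5, "coccygeal": 4}
# The upper three regions are the 24 articulating, pre-sacral vertebrae.
presacral = regions["cervical"] + regions["thoracic"] + regions["lumbar"]
# The sacral and coccygeal vertebrae are fused in adults (5 + 4 = 9).
fused = regions["sacral"] + regions["coccygeal"]
total = presacral + fused
assert (presacral, fused, total) == (24, 9, 33)
print(total)  # 33 vertebrae in the normal human spinal column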
The vertebrae of the cervical, thoracic, and lumbar spines are independent bones and generally quite similar. The vertebrae of the sacrum and coccyx are usually fused and unable to move independently. Two special vertebrae are the atlas and axis, on which the head rests. A typical vertebra consists of two parts: the vertebral body (or centrum), which is ventral (or anterior, in the standard anatomical position) and withstands axial structural load; and the vertebral arch (also known as the neural arch), which is dorsal (or posterior) and provides articulations and anchorages for ribs and core skeletal muscles. Together, these enclose the vertebral foramen, the series of which align to form the spinal canal, a body cavity that contains the spinal cord. Because the vertebral column outgrows the spinal cord during child development, by adulthood the spinal cord often ends at the upper lumbar spine (at around the L1/L2 level); the lower (caudal) end of the spinal canal is occupied by a ponytail-like bundle of spinal nerves descriptively called the cauda equina (from Latin for "horse's tail"), and the sacrum and coccyx are fused without a central foramen. The vertebral arch is formed by a ventral pair of pedicles and a dorsal pair of laminae, and supports seven processes: four articular, two transverse, and one spinous, the latter also being known as the neural spine. The transverse and spinous processes and their associated ligaments serve as important attachment sites for back and paraspinal muscles and the thoracolumbar fasciae. The spinous processes of the cervical and lumbar regions can be felt through the skin, and are important surface landmarks in clinical medicine. The four articular processes form two pairs of plane facet joints, above and below each vertebra, which articulate with those of the adjacent vertebrae and are joined by a thin portion of the neural arch called the pars interarticularis. The orientation of the facet joints restricts the range of motion between the vertebrae. Underneath each pedicle is a small hole (enclosed by the pedicle of the vertebra below) called the intervertebral foramen, which transmits the corresponding spinal nerve and dorsal root ganglion as they exit the spinal canal. From top to bottom, the vertebrae are the cervical (C1–C7), thoracic (T1–T12), lumbar (L1–L5), sacral (S1–S5), and coccygeal vertebrae. For some medical purposes, adjacent vertebral regions may be considered together, for example as the thoracolumbar or lumbosacral region. The vertebral column is curved in several places, a result of human bipedal evolution. These curves increase the vertebral column's strength, flexibility, and ability to absorb shock, stabilising the body in the upright position. When the load on the spine is increased, the curvatures increase in depth (become more curved) to accommodate the extra weight; they then spring back when the weight is removed. The upper cervical spine has a curve, convex forward, that begins at the axis (second cervical vertebra) at the apex of the odontoid process or dens and ends at the middle of the second thoracic vertebra; it is the least marked of all the curves. This inward curve is known as a lordotic curve. The thoracic curve, concave forward, begins at the middle of the second and ends at the middle of the twelfth thoracic vertebra. Its most prominent point behind corresponds to the spinous process of the seventh thoracic vertebra. This curve is known as a kyphotic curve.
The lumbar curve is more marked in the female than in the male; it begins at the middle of the last thoracic vertebra, and ends at the sacrovertebral angle. It is convex anteriorly, the convexity of the lower three vertebrae being much greater than that of the upper two. This curve is described as a lordotic curve. The sacral curve begins at the sacrovertebral articulation, and ends at the point of the coccyx; its concavity is directed downward and forward as a kyphotic curve. The thoracic and sacral kyphotic curves are termed primary curves, because they are present in the fetus. The cervical and lumbar curves are compensatory, or secondary, and are developed after birth. The cervical curve forms when the infant is able to hold up its head (at three or four months) and sit upright (at nine months). The lumbar curve forms later from twelve to eighteen months, when the child begins to walk. When viewed from in front, the width of the bodies of the vertebrae is seen to increase from the second cervical to the first thoracic; there is then a slight diminution in the next three vertebrae. Below this, there is again a gradual and progressive increase in width as low as the sacrovertebral angle. From this point there is a rapid diminution, to the apex of the coccyx. From behind, the vertebral column presents in the median line the spinous processes. In the cervical region (with the exception of the second and seventh vertebrae), these are short, horizontal, and bifid. In the upper part of the thoracic region they are directed obliquely downward; in the middle they are almost vertical, and in the lower part they are nearly horizontal. In the lumbar region they are nearly horizontal. The spinous processes are separated by considerable intervals in the lumbar region, by narrower intervals in the neck, and are closely approximated in the middle of the thoracic region. Occasionally one of these processes deviates a little from the median line — which can sometimes be indicative of a fracture or a displacement of the spine. On either side of the spinous processes is the vertebral groove formed by the laminae in the cervical and lumbar regions, where it is shallow, and by the laminae and transverse processes in the thoracic region, where it is deep and broad; these grooves lodge the deep muscles of the back. Lateral to the spinous processes are the articular processes, and still more laterally the transverse processes. In the thoracic region, the transverse processes stand backward, on a plane considerably behind that of the same processes in the cervical and lumbar regions. In the cervical region, the transverse processes are placed in front of the articular processes, lateral to the pedicles and between the intervertebral foramina. In the thoracic region they are posterior to the pedicles, intervertebral foramina, and articular processes. In the lumbar region they are in front of the articular processes, but behind the intervertebral foramina. The sides of the vertebral column are separated from the posterior surface by the articular processes in the cervical and thoracic regions and by the transverse processes in the lumbar region. In the thoracic region, the sides of the bodies of the vertebrae are marked in the back by the facets for articulation with the heads of the ribs. 
More posteriorly are the intervertebral foramina, formed by the juxtaposition of the vertebral notches: oval in shape, they are smallest in the cervical and upper thoracic regions and gradually increase in size down to the last lumbar. They transmit the spinal nerves and are situated between the transverse processes in the cervical region, and in front of them in the thoracic and lumbar regions. Ligaments There are different ligaments involved in holding the vertebrae of the column together and in the column's movement. The anterior and posterior longitudinal ligaments extend the length of the vertebral column along the front and back of the vertebral bodies. The interspinous ligaments connect the adjoining spinous processes of the vertebrae.[better source needed] The supraspinous ligament extends the length of the spine, running along the back of the spinous processes from the sacrum to the seventh cervical vertebra; from there it is continuous with the nuchal ligament. Development The striking segmented pattern of the spine is established during embryogenesis, when somites are rhythmically added to the posterior of the embryo. Somite formation begins around the third week, when the embryo begins gastrulation, and continues until all somites are formed. Their number varies between species: there are 42 to 44 somites in the human embryo and around 52 in the chick embryo. The somites are spheres formed from the paraxial mesoderm that lies at the sides of the neural tube, and they contain the precursors of the vertebrae, ribs and some of the skull, as well as of muscle, ligaments and skin. Somitogenesis and the subsequent distribution of somites is controlled by a clock and wavefront model acting in cells of the paraxial mesoderm. Soon after their formation, the sclerotomes, which give rise to some of the bone of the skull, the vertebrae and the ribs, migrate, leaving behind the remainder of the somite, now termed a dermamyotome. This then splits to give the myotomes, which will form the muscles, and the dermatomes, which will form the skin of the back. Sclerotomes become subdivided into an anterior and a posterior compartment. This subdivision plays a key role in the definitive patterning of the vertebrae, which form when the posterior part of one somite fuses to the anterior part of the consecutive somite during a process termed resegmentation. Disruption of the somitogenesis process in humans results in diseases such as congenital scoliosis. So far, the human homologues of three genes associated with the mouse segmentation clock (MESP2, DLL3 and LFNG) have been shown to be mutated in cases of congenital scoliosis, suggesting that the mechanisms involved in vertebral segmentation are conserved across vertebrates. In humans, the first four somites are incorporated into the base of the occipital bone of the skull, and the next 33 somites form the vertebrae, ribs, muscles, ligaments and skin; the remaining posterior somites degenerate. During the fourth week of embryogenesis, the sclerotomes shift their position to surround the spinal cord and the notochord. This column of tissue has a segmented appearance, with alternating dense and less dense areas. As the sclerotome develops, it condenses further, eventually developing into the vertebral body. Development of the appropriate shapes of the vertebral bodies is regulated by HOX genes. The less dense tissue that separates the sclerotome segments develops into the intervertebral discs.
The notochord disappears in the sclerotome (vertebral body) segments but persists in the region of the intervertebral discs as the nucleus pulposus. The nucleus pulposus and the fibers of the anulus fibrosus make up the intervertebral disc. The primary curves (thoracic and sacral curvatures) form during fetal development. The secondary curves develop after birth: the cervical curvature forms as a result of lifting the head, and the lumbar curvature as a result of walking. Function The vertebral column surrounds the spinal cord, which travels within the spinal canal, formed from a central hole within each vertebra. The spinal cord is part of the central nervous system that supplies nerves to and receives information from the peripheral nervous system within the body. The spinal cord consists of grey and white matter and a central cavity, the central canal. Adjacent to each vertebra emerge spinal nerves. The spinal nerves provide sympathetic nervous supply to the body, with the emerging nerves forming the sympathetic trunk and the splanchnic nerves. The spinal canal follows the different curves of the column; it is large and triangular in those parts of the column that enjoy the greatest freedom of movement, such as the cervical and lumbar regions, and is small and rounded in the thoracic region, where motion is more limited. The spinal cord terminates in the conus medullaris and cauda equina. Clinical significance Spina bifida is a congenital disorder in which there is a defective closure of the vertebral arch. Sometimes the spinal meninges and also the spinal cord can protrude through the defect; this is called spina bifida cystica. Where the condition does not involve this protrusion, it is known as spina bifida occulta. Sometimes all of the vertebral arches may remain incomplete. Another, though rare, congenital disease is Klippel–Feil syndrome, the fusion of any two of the cervical vertebrae. Spondylolisthesis is the forward displacement of a vertebra, and retrolisthesis is a posterior displacement of one vertebral body with respect to the adjacent vertebra to a degree less than a dislocation. Spondylolysis, also known as a pars defect, is a defect or fracture at the pars interarticularis of the vertebral arch. Spinal disc herniation, more commonly called a "slipped disc", is the result of a tear in the outer ring (anulus fibrosus) of the intervertebral disc, which lets some of the soft gel-like material, the nucleus pulposus, bulge out in a hernia. Spinal stenosis is a narrowing of the spinal canal, which can occur in any region of the spine, though less commonly in the thoracic region. The stenosis can constrict the spinal canal, giving rise to a neurological deficit. Pain at the coccyx (tailbone) is known as coccydynia. Spinal cord injury is damage to the spinal cord that causes changes in its function, either temporary or permanent. Spinal cord injuries can be divided into categories: complete transection, hemisection, central spinal cord lesions, posterior spinal cord lesions, and anterior spinal cord lesions. Vertebral scalloping is an increase in the concavity of the posterior vertebral body. It can be seen on lateral X-rays and on sagittal views of CT and MRI scans. The concavity is due to increased pressure exerted on the vertebrae by a mass: internal spinal masses such as spinal astrocytoma, ependymoma, schwannoma, neurofibroma, and achondroplasia can cause vertebral scalloping.
Excessive or abnormal spinal curvature is classed as a spinal disease or dorsopathy, and includes the abnormal curvatures kyphosis, lordosis, and scoliosis. Individual vertebrae of the human vertebral column can be felt and used as surface anatomy, with reference points taken from the middle of the vertebral body. This provides anatomical landmarks that can be used to guide procedures such as a lumbar puncture, and also vertical reference points to describe the locations of other parts of human anatomy, such as the positions of organs. Other animals The general structure of vertebrae in other animals is largely the same as in humans. Individual vertebrae are composed of a centrum (body), arches protruding from the top and bottom of the centrum, and various processes projecting from the centrum and/or arches. An arch extending from the top of the centrum is called a neural arch, while the haemal arch is found underneath the centrum in the caudal (tail) vertebrae of fish, most reptiles, some birds, some dinosaurs and some mammals with long tails. The vertebral processes can either give the structure rigidity, help it articulate with ribs, or serve as muscle attachment points. Common types are the transverse processes, diapophyses, parapophyses, and zygapophyses (both the cranial and the caudal zygapophyses). The centrum of the vertebra can be classified based on the fusion of its elements. In temnospondyls, bones such as the spinous process, the pleurocentrum and the intercentrum are separate ossifications. Fused elements, however, classify a vertebra as having holospondyly. A vertebra can also be described in terms of the shape of the ends of the centrum. Centra with flat ends are acoelous, like those in mammals. These flat ends are especially good at supporting and distributing compressive forces. Amphicoelous vertebrae have centra with both ends concave. This shape is common in fish, where most motion is limited. Amphicoelous centra are often integrated with a full notochord. Procoelous vertebrae are anteriorly concave and posteriorly convex. They are found in frogs and modern reptiles. Opisthocoelous vertebrae are the opposite, possessing anterior convexity and posterior concavity. They are found in salamanders and in some non-avian dinosaurs. Heterocoelous vertebrae have saddle-shaped articular surfaces. This configuration is seen in turtles that retract their necks, and in birds, because it permits extensive lateral and vertical flexion without stretching the nerve cord too extensively or wringing it about its long axis. In horses, the Arabian breed can have one fewer vertebra and one fewer pair of ribs. This anomaly disappears in foals that are the product of an Arabian and another breed of horse. Vertebrae are defined by their location in the vertebral column. Cervical vertebrae are those in the neck area. With the exception of the two sloth genera (Choloepus and Bradypus) and the manatee genus (Trichechus), all mammals have seven cervical vertebrae. In other vertebrates, the number of cervical vertebrae can range from a single vertebra in amphibians to as many as 25 in swans, or 76 in the extinct plesiosaur Elasmosaurus. The dorsal vertebrae range from the bottom of the neck to the top of the pelvis. Dorsal vertebrae attached to the ribs are called thoracic vertebrae, while those without ribs are called lumbar vertebrae.
The sacral vertebrae are those in the pelvic region, and range from one in amphibians, to two in most birds and modern reptiles, or up to three to five in mammals. When multiple sacral vertebrae are fused into a single structure, it is called the sacrum. The synsacrum is a similar fused structure found in birds that is composed of the sacral, lumbar, and some of the thoracic and caudal vertebra, as well as the pelvic girdle. Caudal vertebrae compose the tail, and the final few can be fused into the pygostyle in birds, or into the coccygeal or tail bone in chimpanzees (and humans). The vertebrae of lobe-finned fishes consist of three discrete bony elements. The vertebral arch surrounds the spinal cord, and is of broadly similar form to that found in most other vertebrates. Just beneath the arch lies a small plate-like pleurocentrum, which protects the upper surface of the notochord, and below that, a larger arch-shaped intercentrum to protect the lower border. Both of these structures are embedded within a single cylindrical mass of cartilage. A similar arrangement was found in the primitive Labyrinthodonts, but in the evolutionary line that led to reptiles (and hence, also to mammals and birds), the intercentrum became partially or wholly replaced by an enlarged pleurocentrum, which in turn became the bony vertebral body. In most ray-finned fishes, including all teleosts, these two structures are fused with, and embedded within, a solid piece of bone superficially resembling the vertebral body of mammals. In living amphibians, there is simply a cylindrical piece of bone below the vertebral arch, with no trace of the separate elements present in the early tetrapods. In cartilaginous fish, such as sharks, the vertebrae consist of two cartilaginous tubes. The upper tube is formed from the vertebral arches, but also includes additional cartilaginous structures filling in the gaps between the vertebrae, and so enclosing the spinal cord in an essentially continuous sheath. The lower tube surrounds the notochord, and has a complex structure, often including multiple layers of calcification. Lampreys have vertebral arches, but nothing resembling the vertebral bodies found in all higher vertebrates. Even the arches are discontinuous, consisting of separate pieces of arch-shaped cartilage around the spinal cord in most parts of the body, changing to long strips of cartilage above and below in the tail region. Hagfishes lack a true vertebral column, and are therefore not properly considered vertebrates, but a few tiny neural arches are present in the tail. The general structure of human vertebrae is fairly typical of that found in other mammals, reptiles, and birds (amniotes). The shape of the vertebral body does, however, vary somewhat between different groups. In humans and other mammals, it typically has flat upper and lower surfaces, while in reptiles the anterior surface commonly has a concave socket into which the expanded convex face of the next vertebral body fits. Even these patterns are only generalisations, however, and there may be variation in form of the vertebrae along the length of the spine even within a single species. Some unusual variations include the saddle-shaped sockets between the cervical vertebrae of birds and the presence of a narrow hollow canal running down the centre of the vertebral bodies of geckos and tuataras, containing a remnant of the notochord. 
Reptiles often retain the primitive intercentra, which are present as small crescent-shaped bony elements lying between the bodies of adjacent vertebrae; similar structures are often found in the caudal vertebrae of mammals. In the tail, these are attached to chevron-shaped bones called haemal arches, which attach below the base of the spine and help to support the musculature. These latter bones are probably homologous with the ventral ribs of fish. The number of vertebrae in the spines of reptiles is highly variable, and may be several hundred in some species of snake. In birds, there is a variable number of cervical vertebrae, which often form the only truly flexible part of the spine. The thoracic vertebrae are partially fused, providing a solid brace for the wings during flight. The sacral vertebrae are fused with the lumbar vertebrae, and with some thoracic and caudal vertebrae, to form a single structure, the synsacrum, which is thus of greater relative length than the sacrum of mammals. In living birds, the remaining caudal vertebrae are fused into a further bone, the pygostyle, for attachment of the tail feathers. Aside from the tail, the number of vertebrae in mammals is generally fairly constant. There are almost always seven cervical vertebrae (sloths and manatees are among the few exceptions), followed by around twenty or so further vertebrae, divided between the thoracic and lumbar forms depending on the number of ribs. There are generally three to five vertebrae in the sacrum, and anything up to fifty caudal vertebrae. The vertebral column in dinosaurs consists of the cervical (neck), dorsal (back), sacral (hips), and caudal (tail) vertebrae. Saurischian dinosaur vertebrae sometimes possess features known as pleurocoels: hollow depressions on the lateral portions of the vertebrae, perforated to create an entrance into the air chambers within the vertebrae, which served to decrease the weight of these bones without sacrificing strength. These pleurocoels were filled with air sacs, which would have further decreased weight. In sauropod dinosaurs, the largest known land vertebrates, pleurocoels and air sacs may have reduced the animal's weight by over a ton in some instances, a handy evolutionary adaptation in animals that grew to over 30 metres in length. In many hadrosaur and theropod dinosaurs, the caudal vertebrae were reinforced by ossified tendons. The presence of three or more sacral vertebrae, in association with the hip bones, is one of the defining characteristics of dinosaurs. The occipital condyle is a structure on the posterior part of a dinosaur's skull that articulates with the first cervical vertebra.
======================================== |
[SOURCE: https://en.wikipedia.org/wiki/David_Tal_(historian)] | [TOKENS: 276] |
David Tal (historian) David Tal (Hebrew: דוד טל; born c. 1964) is an Israeli historian and professor. From 2009 to 2014, he was the Kahanoff Chair in Israeli Studies at the University of Calgary; since 2014, he has held the Yossi Harel Chair in Modern Israel Studies at the University of Sussex. He is an expert on Israel's security and diplomatic history, as well as U.S. disarmament policy. Biography Tal completed all of his undergraduate and graduate degrees in history at Tel Aviv University, receiving his BA in 1986, MA in 1990, and Ph.D. in 1995. He was an instructor in the Department of History at Tel Aviv University from 1994 to 1996 and a lecturer in the Program of Security Studies from 1996 to 2005. He was a NATO Research Fellow from 2000 to 2002. Tal was a visiting professor at Emory University (2005–2006, 2008–2009) and Syracuse University (2006–2008). In 2009, he joined the Department of History at the University of Calgary as the Kahanoff Chair in Israeli Studies (2009–2014). In 2014, he moved to the University of Sussex, UK, where he holds the Yossi Harel Chair in Modern Israel Studies.
======================================== |
[SOURCE: https://en.wikipedia.org/wiki/Peer-to-peer] | [TOKENS: 5188] |
Contents Peer-to-peer Peer-to-peer (P2P) computing or networking is a distributed computing or networking architecture in which participants share part of their own resources, such as processing power, storage, or network capacity. These resources are made directly available to other peers without reliance on intermediary entities, and participants act as both resource providers and resource requesters. In contrast, the traditional client–server model assigns resource provider and requester roles to different participants, with centralized servers acting as providers and clients acting as requesters. While P2P systems had previously been used in many application domains, the architecture was popularized by the Internet file sharing system Napster, originally released in 1999. P2P is used in many protocols such as BitTorrent file sharing over the Internet, and in personal networks such as Miracast display streaming and Bluetooth radio. The concept has inspired new structures and philosophies in many areas of human interaction. In such social contexts, peer-to-peer as a meme refers to the egalitarian social networking that has emerged throughout society, enabled by Internet technologies in general. Development While P2P systems had previously been used in many application domains, the concept was popularized by file sharing systems such as the music-sharing application Napster. The peer-to-peer movement allowed millions of Internet users to connect "directly, forming groups and collaborating to become user-created search engines, virtual supercomputers, and filesystems". The basic concept of peer-to-peer computing was envisioned in earlier software systems and networking discussions, reaching back to principles stated in the first Request for Comments, RFC 1. Tim Berners-Lee's vision for the World Wide Web was close to a P2P network in that it assumed each user of the web would be an active editor and contributor, creating and linking content to form an interlinked "web" of links. The early Internet was also more open than the present day: two machines connected to the Internet could send packets to each other without firewalls and other security measures. This contrasts with the broadcasting-like structure of the web as it has developed over the years. As a precursor to the Internet, ARPANET was a successful peer-to-peer network where "every participating node could request and serve content". However, ARPANET was not self-organized, and it could not "provide any means for context or content-based routing beyond 'simple' address-based routing." Usenet, a distributed messaging system that is often described as an early peer-to-peer architecture, was therefore established. It was developed in 1979 as a system that enforces a decentralized model of control. The basic model is a client–server model from the user or client perspective that offers a self-organizing approach to newsgroup servers. However, news servers communicate with one another as peers to propagate Usenet news articles over the entire group of network servers. The same consideration applies to SMTP email, in the sense that the core email-relaying network of mail transfer agents has a peer-to-peer character, while the periphery of email clients and their direct connections is strictly a client–server relationship. In May 1999, with millions more people on the Internet, Shawn Fanning introduced the music and file-sharing application called Napster.
Napster was the beginning of peer-to-peer networks as we know them today, where "participating users establish a virtual network, entirely independent from the physical network, without having to obey any administrative authorities or restrictions". Architecture Peer-to-peer networks generally implement some form of virtual overlay network on top of the physical network topology, where the nodes in the overlay form a subset of the nodes in the physical network. Data is still exchanged directly over the underlying TCP/IP network, but at the application layer peers can communicate with each other directly, via the logical overlay links (each of which corresponds to a path through the underlying physical network). Overlays are used for indexing and peer discovery, and make the P2P system independent of the physical network topology. Based on how the nodes are linked to each other within the overlay network, and how resources are indexed and located, we can classify networks as unstructured or structured (or as a hybrid between the two). Unstructured peer-to-peer networks do not impose a particular structure on the overlay network by design, but rather are formed by nodes that randomly form connections to each other (Gnutella, Gossip, and Kazaa are examples of unstructured P2P protocols). Because there is no structure globally imposed upon them, unstructured networks are easy to build and allow for localized optimizations to different regions of the overlay. Also, because the role of all peers in the network is the same, unstructured networks are highly robust in the face of high rates of "churn", that is, when large numbers of peers frequently join and leave the network. However, the primary limitations of unstructured networks also arise from this lack of structure. In particular, when a peer wants to find a desired piece of data in the network, the search query must be flooded through the network to find as many peers as possible that share the data. Flooding causes a very high amount of signaling traffic in the network, uses more CPU and memory (by requiring every peer to process all search queries), and does not ensure that search queries will always be resolved. Furthermore, since there is no correlation between a peer and the content managed by it, there is no guarantee that flooding will find a peer that has the desired data. Popular content is likely to be available at several peers, and any peer searching for it is likely to find it. But if a peer is looking for rare data shared by only a few other peers, then it is highly unlikely that the search will be successful. In structured peer-to-peer networks the overlay is organized into a specific topology, and the protocol ensures that any node can efficiently search the network for a file or resource, even if the resource is extremely rare. The most common type of structured P2P network implements a distributed hash table (DHT), in which a variant of consistent hashing is used to assign ownership of each file to a particular peer. This enables peers to search for resources on the network using a hash table: that is, (key, value) pairs are stored in the DHT, and any participating node can efficiently retrieve the value associated with a given key. However, in order to route traffic efficiently through the network, nodes in a structured overlay must maintain lists of neighbors that satisfy specific criteria. This makes them less robust in networks with a high rate of churn (i.e. with large numbers of nodes frequently joining and leaving the network).
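The consistent-hashing scheme behind a DHT can be illustrated in a few lines of code. The sketch below is a toy, single-process illustration under assumed simplifications (the peer names, the use of SHA-1, and the absence of replication and routing tables are choices made for the example, not details of any particular DHT): keys and node identifiers are hashed onto the same circular space, and a key is owned by the first node whose point on the circle follows the key's point.

import hashlib
from bisect import bisect_right

def point(s: str) -> int:
    # Map any string (key or node name) to a point on the identifier circle.
    return int(hashlib.sha1(s.encode()).hexdigest(), 16)

class HashRing:
    # Toy consistent-hashing ring: each node owns the keys whose points
    # fall between its predecessor's point and its own.
    def __init__(self, nodes):
        self.points = sorted((point(n), n) for n in nodes)

    def owner(self, key: str) -> str:
        h = point(key)
        # First node at or after the key's point, wrapping around the circle.
        i = bisect_right(self.points, (h, "")) % len(self.points)
        return self.points[i][1]

ring = HashRing(["peer-a", "peer-b", "peer-c", "peer-d"])
print(ring.owner("ubuntu-24.04.iso"))  # every peer computes the same owner

Because every peer computes the same owner for the same key without consulting a central index, lookups need only be routed toward that owner, and when a node joins or leaves, only the keys on its arc of the circle change hands. Real DHTs such as Chord or Kademlia add the neighbor lists mentioned above (for example, finger tables) so that the owner can be located in O(log n) hops.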
More recent evaluation of P2P resource discovery solutions under real workloads has pointed out several issues in DHT-based solutions, such as the high cost of advertising and discovering resources and static and dynamic load imbalance. Notable distributed networks that use DHTs include Tixati (an alternative to BitTorrent's distributed tracker), the Kad network, the Storm botnet, and YaCy. Some prominent research projects include the Chord project, Kademlia, the PAST storage utility, P-Grid (a self-organized and emerging overlay network), and the CoopNet content distribution system. DHT-based networks have also been widely utilized for accomplishing efficient resource discovery in grid computing systems, as they aid in resource management and the scheduling of applications. Hybrid models are a combination of peer-to-peer and client–server models. A common hybrid model is to have a central server that helps peers find each other. Spotify was an example of a hybrid model until 2014. There are a variety of hybrid models, all of which make trade-offs between the centralized functionality provided by a structured server/client network and the node equality afforded by the pure peer-to-peer unstructured networks. Currently, hybrid models have better performance than either pure unstructured networks or pure structured networks, because certain functions, such as searching, do require centralized functionality but benefit from the decentralized aggregation of nodes provided by unstructured networks. CoopNet (Cooperative Networking) was a proposed system for off-loading serving to peers who have recently downloaded content, proposed by computer scientists Venkata N. Padmanabhan and Kunwadee Sripanidkulchai, working at Microsoft Research and Carnegie Mellon University. When a server experiences an increase in load, it redirects incoming peers to other peers who have agreed to mirror the content, thus off-loading load from the server. All of the information is retained at the server. This system makes use of the fact that the bottleneck is more likely in the outgoing bandwidth than in the CPU, hence its server-centric design. It assigns peers to other peers who are "close in IP" (in the same prefix range) in an attempt to use locality. If multiple peers are found with the same file, it directs the node to choose the fastest of its neighbors. Streaming media is transmitted by having clients cache the previous stream, and then transmit it piece-wise to new nodes.
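The "central server that helps peers find each other" in such hybrid models can be sketched as a minimal rendezvous service, in the spirit of a BitTorrent tracker or pre-2014 Spotify. Everything below (the class name, the announce method, the addresses) is a hypothetical illustration rather than any real protocol; the point is that the server only brokers introductions, while the content itself flows peer to peer.

import random
from collections import defaultdict

class Tracker:
    # Toy rendezvous server for a hybrid design: it only maps content IDs
    # to peer addresses; the content itself never passes through it.
    def __init__(self):
        self.swarms = defaultdict(set)  # content_id -> set of peer addresses

    def announce(self, content_id: str, peer: str, want: int = 3):
        # Register the caller in the swarm and hand back peers to contact.
        others = list(self.swarms[content_id] - {peer})
        self.swarms[content_id].add(peer)
        return random.sample(others, min(want, len(others)))

tracker = Tracker()
tracker.announce("movie-night.mkv", "198.51.100.7:6881")   # first seeder
print(tracker.announce("movie-night.mkv", "203.0.113.9:6881"))
# ['198.51.100.7:6881'] -- the new peer now downloads directly from it

The trade-off described above is visible even in this toy: lookup is a single round trip to one well-known host instead of a flood or a DHT walk, but that host is also a single point of failure and control.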
Peer-to-peer systems pose unique challenges from a computer security perspective. Like any other form of software, P2P applications can contain vulnerabilities. What makes this particularly dangerous for P2P software, however, is that peer-to-peer applications act as servers as well as clients, meaning that they can be more vulnerable to remote exploits. Since each node plays a role in routing traffic through the network, malicious users can perform a variety of "routing attacks", or denial-of-service attacks. Examples of common routing attacks include "incorrect lookup routing", whereby malicious nodes deliberately forward requests incorrectly or return false results; "incorrect routing updates", where malicious nodes corrupt the routing tables of neighboring nodes by sending them false information; and "incorrect routing network partition", where new nodes bootstrap via a malicious node which places them in a partition of the network that is populated by other malicious nodes. The prevalence of malware varies between different peer-to-peer protocols. Studies analyzing the spread of malware on P2P networks found, for example, that 63% of the answered download requests on the gnutella network contained some form of malware, whereas only 3% of the content on OpenFT contained malware. In both cases, the top three most common types of malware accounted for the large majority of cases (99% in gnutella, and 65% in OpenFT). Another study analyzing traffic on the Kazaa network found that 15% of a 500,000-file sample were infected by one or more of the 365 different computer viruses that were tested for. Corrupted data can also be distributed on P2P networks by modifying files that are already being shared on the network. For example, on the FastTrack network, the RIAA managed to introduce faked chunks into downloads and downloaded files (mostly MP3 files). Files infected with the RIAA virus were unusable afterwards and contained malicious code. The RIAA is also known to have uploaded fake music and movies to P2P networks in order to deter illegal file sharing. Consequently, the P2P networks of today have seen an enormous increase in their security and file verification mechanisms. Modern hashing, chunk verification and different encryption methods have made most networks resistant to almost any type of attack, even when major parts of the respective network have been replaced by faked or nonfunctional hosts.
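Chunk verification of the kind that defeats such poisoning can be sketched briefly. The following is a simplified, BitTorrent-style illustration, not the actual BitTorrent algorithm (which historically hashes pieces with SHA-1, and with SHA-256 in protocol v2); the chunk size and function names are assumptions for the example. The publisher distributes a list of per-chunk digests through a trusted channel, and a downloader discards any chunk from an untrusted peer that fails to match.

import hashlib

CHUNK_SIZE = 256 * 1024  # 256 KiB pieces (an assumed, BitTorrent-like size)

def chunk_digests(data: bytes) -> list:
    # Publisher side: hash every chunk; the digest list travels in trusted
    # metadata (in BitTorrent, inside the .torrent file).
    return [hashlib.sha256(data[i:i + CHUNK_SIZE]).digest()
            for i in range(0, len(data), CHUNK_SIZE)]

def verify_chunk(index: int, chunk: bytes, digests: list) -> bool:
    # Downloader side: accept a chunk from an untrusted peer only if it
    # hashes to the published digest; otherwise discard and re-request.
    return hashlib.sha256(chunk).digest() == digests[index]

data = b"x" * (3 * CHUNK_SIZE)
digests = chunk_digests(data)
print(verify_chunk(0, data[:CHUNK_SIZE], digests))  # True: genuine chunk
print(verify_chunk(0, b"y" * CHUNK_SIZE, digests))  # False: poisoned chunk

Under this scheme a peer that serves faked chunks, as in the FastTrack episode above, can at worst waste a downloader's bandwidth; it can no longer corrupt the assembled file.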
The decentralized nature of P2P networks increases robustness because it removes the single point of failure that can be inherent in a client–server based system. As nodes arrive and demand on the system increases, the total capacity of the system also increases, and the likelihood of failure decreases. If one peer on the network fails to function properly, the whole network is not compromised or damaged. In contrast, in a typical client–server architecture, clients share only their demands with the system, but not their resources. In this case, as more clients join the system, fewer resources are available to serve each client, and if the central server fails, the entire network is taken down. There are both advantages and disadvantages in P2P networks related to the topic of data backup, recovery, and availability. In a centralized network, the system administrators are the only forces controlling the availability of files being shared. If the administrators decide to no longer distribute a file, they simply have to remove it from their servers, and it will no longer be available to users. Along with leaving the users powerless in deciding what is distributed throughout the community, this makes the entire system vulnerable to threats and requests from the government and other large forces. For example, YouTube has been pressured by the RIAA, MPAA, and entertainment industry to filter out copyrighted content. Because server–client networks are able to monitor and manage content availability, they can offer more stability in the availability of the content they choose to host. A client should not have trouble accessing obscure content that is being shared on a stable centralized network. P2P networks, however, are more unreliable in sharing unpopular files, because sharing files in a P2P network requires that at least one node in the network has the requested data, and that node must be able to connect to the node requesting the data. This requirement is occasionally hard to meet because users may delete or stop sharing data at any point. In a P2P network, the community of users is entirely responsible for deciding which content is available. Unpopular files eventually disappear and become unavailable as fewer people share them. Popular files, however, are highly and easily distributed. Popular files on a P2P network are more stable and available than files on central networks. In a centralized network, a simple loss of connection between the server and clients can cause a failure, but in P2P networks, the connections between every node must be lost to cause a data-sharing failure. In a centralized system, the administrators are responsible for all data recovery and backups, while in P2P systems, each node requires its own backup system. Because of the lack of central authority in P2P networks, forces such as the recording industry, RIAA, MPAA, and the government are unable to delete or stop the sharing of content on P2P systems. Applications In P2P networks, clients both provide and use resources. This means that, unlike client–server systems, the content-serving capacity of peer-to-peer networks can actually increase as more users begin to access the content (especially with protocols such as BitTorrent that require users to share, as performance measurement studies have shown). This property is one of the major advantages of using P2P networks, because it makes the setup and running costs very small for the original content distributor. Peer-to-peer file sharing networks such as Gnutella, G2, and the eDonkey network have been useful in popularizing peer-to-peer technologies. These advancements have paved the way for peer-to-peer content delivery networks and services, including distributed caching systems like Correli Caches to enhance performance. Furthermore, peer-to-peer networks have made software publication and distribution possible, enabling efficient sharing of Linux distributions and various games through file sharing networks. Peer-to-peer networking involves data transfer from one user to another without using an intermediate server. Companies developing P2P applications have been involved in numerous legal cases, primarily in the United States, over conflicts with copyright law. Two major cases are Grokster vs RIAA and MGM Studios, Inc. v. Grokster, Ltd. In the latter case, the Court unanimously held that defendant peer-to-peer file sharing companies Grokster and Streamcast could be sued for inducing copyright infringement. The P2PTV and PDTP protocols are used in various peer-to-peer applications. Some proprietary multimedia applications leverage a peer-to-peer network in conjunction with streaming servers to stream audio and video to their clients. Peercasting is employed for multicasting streams. Additionally, a project called LionShare, undertaken by Pennsylvania State University, MIT, and Simon Fraser University, aims to facilitate file sharing among educational institutions globally.
Another notable program, Osiris, enables users to create anonymous and autonomous web portals that are distributed via a peer-to-peer network. Cryptocurrency systems are built around a distributed network of nodes that propagate transactions and blocks and maintain a shared ledger. According to the National Institute of Standards and Technology, blockchain networks operate in a peer-to-peer fashion, with geographically distributed nodes contributing to the resilience and operation of the system. Many users interact with these networks using light clients (also referred to as lightweight wallets), which do not operate as full nodes. Instead, light clients obtain blockchain data and submit transactions by querying full nodes or dedicated servers using a client–server model. Dat is a distributed version-controlled publishing platform. I2P is an overlay network used to browse the Internet anonymously. Unlike the related I2P, the Tor network is not itself peer-to-peer; however, it can enable peer-to-peer applications to be built on top of it via onion services. The InterPlanetary File System (IPFS) is a protocol and network designed to create a content-addressable, peer-to-peer method of storing and sharing hypermedia, with nodes in the IPFS network forming a distributed file system. Jami is a peer-to-peer chat and SIP app. JXTA is a peer-to-peer protocol designed for the Java platform. Netsukuku is a wireless community network designed to be independent from the Internet. Open Garden is a connection-sharing application that shares Internet access with other devices using Wi-Fi or Bluetooth. Resilio Sync is a directory-syncing app. Research includes projects such as the Chord project, the PAST storage utility, P-Grid, and the CoopNet content distribution system. Secure Scuttlebutt is a peer-to-peer gossip protocol capable of supporting many different types of applications, primarily social networking. Syncthing is also a directory-syncing app. Tradepal and M-commerce applications are designed to power real-time marketplaces. The U.S. Department of Defense is conducting research on P2P networks as part of its modern network warfare strategy. In May 2003, Anthony Tether, then director of DARPA, testified that the United States military uses P2P networks. WebTorrent is a P2P streaming torrent client in JavaScript for use in web browsers, as well as in the WebTorrent Desktop standalone version that bridges WebTorrent and BitTorrent serverless networks. Microsoft, in Windows 10, uses a proprietary peer-to-peer technology called "Delivery Optimization" to deploy operating system updates using end-users' PCs, either on the local network or other PCs. According to Microsoft's Channel 9, this led to a 30–50% reduction in Internet bandwidth usage. Artisoft's LANtastic was built as a peer-to-peer operating system where machines can function as both servers and workstations simultaneously. Hotline Communications' Hotline Client was built with decentralized servers and tracker software dedicated to any type of files and continues to operate today. Social implications Cooperation among a community of participants is key to the continued success of P2P systems aimed at casual human users; these reach their full potential only when large numbers of nodes contribute resources.
But in current practice, P2P networks often contain large numbers of users who utilize resources shared by other nodes, but who do not share anything themselves (often referred to as the "freeloader problem"). Freeloading can have a profound impact on the network and in some cases can cause the community to collapse. In these types of networks "users have natural disincentives to cooperate because cooperation consumes their own resources and may degrade their own performance". Studying the social attributes of P2P networks is challenging due to high population turnover, asymmetry of interest and zero-cost identity. A variety of incentive mechanisms have been implemented to encourage or even force nodes to contribute resources; a minimal example of one such mechanism is sketched at the end of this section. Some researchers have explored the benefits of enabling virtual communities to self-organize and introduce incentives for resource sharing and cooperation, arguing that the social aspect missing from today's P2P systems should be seen both as a goal and a means for self-organized virtual communities to be built and fostered. Ongoing research efforts for designing effective incentive mechanisms in P2P systems, based on principles from game theory, are beginning to take on a more psychological and information-processing direction. Some peer-to-peer networks (e.g. Hyphanet) place a heavy emphasis on privacy and anonymity, that is, ensuring that the contents of communications are hidden from eavesdroppers, and that the identities and locations of the participants are concealed. Public key cryptography can be used to provide encryption, data validation, authorization, and authentication for data and messages. Onion routing and other mix network protocols (e.g. Tarzan) can be used to provide anonymity. Perpetrators of live streaming sexual abuse and other cybercrimes have used peer-to-peer platforms to carry out activities with anonymity. Political implications Although peer-to-peer networks can be used for legitimate purposes, rights holders have targeted peer-to-peer over its involvement with the sharing of copyrighted material. As noted above, companies developing P2P applications have been involved in numerous legal cases, primarily in the United States, over issues surrounding copyright law, most notably Grokster vs RIAA and MGM Studios, Inc. v. Grokster, Ltd. In both of these cases the file sharing technology was ruled to be legal as long as the developers had no ability to prevent the sharing of the copyrighted material. To establish criminal liability for copyright infringement on peer-to-peer systems, the government must prove that the defendant infringed a copyright willfully for the purpose of personal financial gain or commercial advantage. Fair use exceptions allow limited amounts of copyrighted material to be downloaded without acquiring permission from the rights holders; such material is usually news reporting, research, or scholarly work. Controversies have also developed over concerns about the illegitimate use of peer-to-peer networks regarding public safety and national security. When a file is downloaded through a peer-to-peer network, it is impossible to know who created the file or what users are connected to the network at a given time. Trustworthiness of sources is a potential security threat that can be seen with peer-to-peer systems.
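A concrete incentive mechanism of the kind referred to above, and one that is widely deployed, is BitTorrent-style "tit-for-tat" choking, in which a peer preferentially uploads to the peers that have recently uploaded to it. The sketch below is a toy illustration of only the ranking step; the function name, rates, and slot count are assumptions for the example, not BitTorrent's actual parameters.

def choose_unchoked(rate_from_peer: dict, slots: int = 4) -> set:
    # Toy tit-for-tat: serve ("unchoke") the peers that have recently
    # uploaded the most to us, so pure freeloaders receive little.
    ranked = sorted(rate_from_peer, key=rate_from_peer.get, reverse=True)
    return set(ranked[:slots])

rates = {"peer-a": 120_000, "peer-b": 0, "peer-c": 45_000,
         "peer-d": 80_000, "peer-e": 5_000}   # bytes/s received from each
print(choose_unchoked(rates, slots=3))
# {'peer-a', 'peer-d', 'peer-c'} -- peer-b, sharing nothing, stays choked

Real BitTorrent clients also periodically "optimistically unchoke" one randomly chosen peer, so that newcomers with nothing to offer yet can bootstrap into the exchange.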
A study ordered by the European Union found that illegal downloading may lead to an increase in overall video game sales because newer games charge for extra features or levels. The paper concluded that piracy had a negative financial impact on movies, music, and literature. The study relied on self-reported data about game purchases and use of illegal download sites, and care was taken to remove the effects of false and misremembered responses. Peer-to-peer applications present one of the core issues in the network neutrality controversy. Internet service providers (ISPs) have been known to throttle P2P file-sharing traffic due to its high-bandwidth usage. Compared to Web browsing, e-mail or many other uses of the Internet, where data is only transferred in short intervals and relatively small quantities, P2P file-sharing often consists of relatively heavy bandwidth usage due to ongoing file transfers and swarm/network coordination packets. In October 2007, Comcast, one of the largest broadband Internet providers in the United States, started blocking P2P applications such as BitTorrent. Their rationale was that P2P is mostly used to share illegal content, and their infrastructure is not designed for continuous, high-bandwidth traffic. Critics point out that P2P networking has legitimate legal uses, and that this is another way that large providers are trying to control use and content on the Internet and direct people towards a client–server-based application architecture. The client–server model imposes financial barriers to entry on small publishers and individuals, and can be less efficient for sharing large files. As a reaction to this bandwidth throttling, several P2P applications started implementing protocol obfuscation, such as the BitTorrent protocol encryption. Techniques for achieving "protocol obfuscation" involve removing otherwise easily identifiable properties of protocols, such as deterministic byte sequences and packet sizes, by making the data look as if it were random (a toy illustration follows at the end of this section). One ISP solution to the high bandwidth usage is P2P caching, where the ISP stores the parts of files most accessed by P2P clients in order to reduce traffic to the wider Internet. Current research Researchers have used computer simulations to aid in understanding and evaluating the complex behaviors of individuals within the network. "Networking research often relies on simulation in order to test and evaluate new ideas. An important requirement of this process is that results must be reproducible so that other researchers can replicate, validate, and extend existing work." If the research cannot be reproduced, then the opportunity for further research is hindered. "Even though new simulators continue to be released, the research community tends towards only a handful of open-source simulators. The demand for features in simulators, as shown by our criteria and survey, is high. Therefore, the community should work together to get these features in open-source software. This would reduce the need for custom simulators, and hence increase repeatability and reputability of experiments." Popular simulators that were widely used in the past are NS2, OMNeT++, SimPy, NetLogo, PlanetLab, ProtoPeer, QTM, PeerSim, ONE, P2PStrmSim, PlanetSim, GNUSim, and Bharambe. There has also been work with the open-source ns-2 network simulator; one research issue explored with it is free-rider detection and punishment. |
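The obfuscation idea referenced above can be illustrated with a deliberately simplified sketch. Real BitTorrent "message stream encryption" negotiates RC4 keys through a Diffie–Hellman handshake; the toy below skips all of that and simply XORs the payload with a hash-derived keystream. The keystream construction and names here are illustrative assumptions, not a vetted cipher, but they are enough to show how fixed protocol markers disappear from the wire.

import hashlib, os

def keystream(key: bytes, n: int) -> bytes:
    # Toy keystream from iterated SHA-256 (illustration only, not a
    # vetted stream cipher).
    out, block = b"", key
    while len(out) < n:
        block = hashlib.sha256(block).digest()
        out += block
    return out[:n]

def obfuscate(payload: bytes, key: bytes) -> bytes:
    # XOR with the keystream so fixed markers (magic bytes, predictable
    # headers) no longer appear on the wire; XOR again to undo.
    return bytes(a ^ b for a, b in zip(payload, keystream(key, len(payload))))

key = os.urandom(16)  # in practice derived from a peer handshake
wire = obfuscate(b"\x13BitTorrent protocol", key)
print(wire[:4])                    # no recognizable magic bytes
print(obfuscate(wire, key))        # b'\x13BitTorrent protocol' again

Deep packet inspection can still classify such traffic by packet sizes and timing, which is why the passage above also mentions disguising packet sizes.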
======================================== |
[SOURCE: https://en.wikipedia.org/wiki/World_Economic_Forum] | [TOKENS: 10298] |
Contents World Economic Forum The World Economic Forum (WEF) is an international advocacy non-governmental organization and think tank, based in Cologny, canton of Geneva, Switzerland. It was founded on 24 January 1971 by German engineer Klaus Schwab. The foundation's stated mission is "improving the state of the world by engaging business, political, academic, and other leaders of society to shape global, regional, and industry agendas". The WEF is mostly known for its annual meeting at the end of January in Davos, a mountain resort in the Swiss canton of Graubünden, in the eastern Alps region. The meeting brings together some 3,000 paying members and selected participants – among whom are investors, business leaders, political leaders, economists, celebrities and journalists – for up to five days to discuss global issues across 500 sessions. The foundation is mostly funded by its 1,000 member multinational companies. Aside from Davos, the organization convenes regional conferences, produces a series of reports, engages its members in sector-specific initiatives, and provides a platform for leaders from selected stakeholder groups to collaborate on projects and initiatives. The World Economic Forum and its annual meeting in Davos have received criticism over the years, including allegations of the organization's corporate capture of global and democratic institutions, institutional whitewashing initiatives, the public cost of security, the organization's tax-exempt status, unclear decision processes and membership criteria, a lack of financial transparency, and the environmental footprint of its annual meetings. History The WEF was founded in 1971 by Klaus Schwab, a business professor at the University of Geneva. First named the European Management Forum, it changed its name to the World Economic Forum in 1987 and sought to broaden its vision to include providing a platform for resolving international conflicts. In February 1971, Schwab invited 450 executives from Western European firms to the first European Management Symposium, held in the Davos Congress Centre under the patronage of the European Commission and European industrial associations, where Schwab sought to introduce European firms to American management practices. He then founded the WEF as a nonprofit organization based in Geneva and drew European business leaders to Davos for the annual meetings each January. The second European Management Forum, in 1972, was the first meeting at which a head of government featured as a speaker, Prime Minister Pierre Werner of Luxembourg. Events in 1973, including the collapse of the Bretton Woods fixed-exchange-rate mechanism and the Yom Kippur War, saw the annual meeting expand its focus from management to economic and social issues, and, for the first time, political leaders were invited to the annual meeting in January 1974. Through the forum's first decade, it maintained a playful atmosphere, with many members skiing and participating in evening events. Appraising the 1981 event, one attendee noted that "the forum offers a delightful vacation on the expense account." Political leaders soon began to use the annual meeting as a venue for promoting their interests. The Davos Declaration was signed in 1988 by Greece and Turkey, helping them turn back from the brink of war. In 1992, South African president F. W. de Klerk met with Nelson Mandela and Chief Mangosuthu Buthelezi at the annual meeting, their first joint appearance outside South Africa.
At the 1994 annual meeting, Israeli foreign minister Shimon Peres and PLO chairman Yasser Arafat reached a draft agreement on Gaza and Jericho. The "Davos Pact" of 1996 saw the forum's elite help Boris Yeltsin retain power as president of the Russian Federation over the then-presumptive favorite Gennady Zyuganov, leader of the Communist Party of the Russian Federation. After 9/11, the WEF was held in the US, in New York City, for the first time. In January 2003, US secretary of state Colin Powell went to the forum to drum up sympathy for the global war on terrorism and the US's impending invasion of Iraq. In October 2004, the World Economic Forum gained attention through the resignation of its CEO and executive director José María Figueres over the undeclared receipt of more than US$900,000 in consultancy fees from the French telecommunications firm Alcatel. Transparency International highlighted this incident in their Global Corruption Report two years later, in 2006. In January 2006, the WEF published an article in its Global Agenda magazine titled "Boycott Israel", which was distributed to all 2,340 participants of the annual meeting. Following the publication, Klaus Schwab described it as "an unacceptable failure in the editorial process". In late 2015, an invitation was extended to a North Korean delegation for the 2016 WEF, "in view of positive signs coming out of the country", the WEF organizers noted; North Korea had not attended the WEF since 1998. The invitation was accepted. However, the WEF revoked the invitation on 13 January 2016, after the 6 January 2016 North Korean nuclear test, and the country's attendance was made subject to "existing and possible forthcoming sanctions". Despite protests by North Korea calling the decision by the WEF managing board a "sudden and irresponsible" move, the WEF committee maintained the exclusion because "under these circumstances there would be no opportunity for international dialogue". In 2017, the WEF in Davos attracted considerable attention when, for the first time, a head of state from the People's Republic of China was present at the alpine resort. Against the backdrop of Brexit, an incoming protectionist US administration and significant pressures on free-trade zones and trade agreements, Paramount leader Xi Jinping defended the global economic scheme and portrayed China as a responsible nation and a leader for environmental causes. He sharply rebuked the populist movements of the day that would introduce tariffs and hinder global commerce, warning that such protectionism could foster isolation and reduced economic opportunity. In 2018, Indian prime minister Narendra Modi gave the keynote speech, becoming the first head of government from India to deliver the inaugural keynote for the annual plenary at Davos. Modi highlighted global warming (climate change), terrorism and protectionism as the three major global challenges, and expressed confidence that they could be tackled with collective effort. In 2019, Brazilian president Jair Bolsonaro gave the keynote address at the plenary session of the conference. On his first international trip, to Davos, he emphasized liberal economic policies despite his populist agenda, and attempted to reassure the world that Brazil is a protector of the rainforest while utilizing its resources for food production and export.
He stated that "his government will seek to better integrate Brazil into the world by mainstreaming international best practices, such as those adopted and promoted by the OECD". Environmental concerns like extreme weather events and the failure of climate change mitigation and adaptation were among the top-ranking global risks expressed by WEF attendees. On 13 June 2019, the WEF and the United Nations signed a "Strategic Partnership Framework" in order to "jointly accelerate the implementation of the 2030 Agenda for Sustainable Development." The 2021 World Economic Forum was due to be held from 17 to 20 August in Singapore. However, on 17 May, the forum was cancelled, with a new meeting to take place in the first half of 2022 instead, its final location and date to be determined later in 2021. In late December 2021, the World Economic Forum said in a release that pandemic conditions had made it extremely difficult to stage a global in-person meeting the following month; the transmissibility of the SARS-CoV-2 Omicron variant and its impact on travel and mobility had made deferral necessary, and the meeting in Davos was eventually rescheduled for 22 to 26 May 2022. Topics at the 2022 annual meeting included the Russian invasion of Ukraine, climate change, energy insecurity and inflation. Ukraine's president Volodymyr Zelenskyy gave a special address at the meeting, thanking the global community for its efforts but also calling for more support. The 2022 forum was marked by the absence of a Russian delegation for the first time since 1991, which The Wall Street Journal described as signalling the "unraveling of globalization." The former Russia House was used to present an exhibition on Russian war crimes. The 2023 annual meeting of the World Economic Forum took place in Davos, Switzerland, from 16 to 20 January under the theme "Cooperation in a fragmented world". On 21 April 2025, Klaus Schwab, chairman of the board of the World Economic Forum, informed the board: "Following my recent announcement and as I enter my 88th year, I have decided to step down from my position as Chair and as a member of the Board of Trustees, with immediate effect." In August 2025, the Forum faced renewed scrutiny after whistleblowers alleged financial irregularities and a toxic work environment. Interim chair Peter Brabeck-Letmathe, former CEO of Nestlé, resigned, citing his personal observations of such conditions. Although an internal investigation by the Zurich-based law firm Homburger and the US firm Covington & Burling found no evidence of "material wrongdoing" by Schwab or his wife Hilde, the board pledged to strengthen governance structures. At the same time, the Forum appointed Larry Fink, CEO of BlackRock, and André Hoffmann, vice-chair of Roche Holding, as interim co-chairs of its board. Organization Headquartered in Cologny, the WEF also has offices in New York, Beijing, Tokyo and Seoul. In January 2015, it was designated an NGO with "other international body" status by the Swiss Federal Government under the Swiss Host-State Act. On 10 October 2016, the WEF announced the opening of its new Center for the Fourth Industrial Revolution in San Francisco. According to the WEF, the center will "serve as a platform for interaction, insight and impact on the scientific and technological changes that are changing the way we live, work and relate to one another". The WEF has 19 such centers spread across Africa, Asia, Europe, North America and South America.
The World Economic Forum declares that it is impartial and that it is not tied to any political, partisan, or national interests. Until 2012, it had observer status with the United Nations Economic and Social Council; it is under the supervision of the Swiss Federal Council. The foundation's highest governance body is the foundation board. The managing board is chaired by the WEF's president and CEO, Børge Brende, and acts as the executive body of the World Economic Forum. Managing board members are Børge Brende, Julien Gattoni, Jeremy Jurgens, Adrian Monck, Sarita Nayyar, Olivier M. Schwab, Saadia Zahidi, and Alois Zwinggi. The WEF was chaired by founder Klaus Schwab until his departure in 2025 and is guided by a board of trustees made up of leaders from business, politics, academia and civil society. As of 2024, the board of trustees was composed of: Queen Rania of Jordan, Mukesh Ambani, Ajay Banga, Marc Benioff, Peter Brabeck-Letmathe, Thomas Buberl, Laurence D. Fink, Chrystia Freeland, Orit Gadiesh, Kristalina Georgieva, Fabiola Gianotti, Al Gore, Andre Hoffmann, Paula Ingabire, Joe Kaeser, Christine Lagarde, Yo-Yo Ma, Patrice Motsepe, Ngozi Okonjo-Iweala, Lubna Olayan, David Rubenstein, Ulf Mark Schneider, Klaus Schwab, Tharman Shanmugaratnam, Jim Hagemann Snabe, Julie Sweet, Feike Sijbesma, Heizō Takenaka, and Zhu Min. Members of the board of trustees (past or present) include: Al Gore, Herman Gref, André Hoffmann, Carlos Ghosn, Christine Lagarde, Chrystia Freeland, David Rubenstein, Ernesto Zedillo, Fabiola Gianotti, Feike Sijbesma, Heizō Takenaka, Indra Nooyi, Jack Ma, Jim Hagemann Snabe, José Ángel Gurría, Josef Ackermann, Klaus Schwab, Kofi Annan, Laurence Fink, Leo Rafael Reif, Luis Alberto Moreno, Marc Benioff, Mark Carney, Maurice Lévy, Michael Dell, Mukesh Ambani, Muriel Pénicaud, Niall FitzGerald, Orit Gadiesh, Peter Brabeck-Letmathe, Peter Maurer, Queen Rania of Jordan, Rajat Gupta, Susan Hockfield, Tharman Shanmugaratnam, Tony Blair, Ulf Mark Schneider, Ursula von der Leyen, Yo-Yo Ma, Zhu Min, Ivan Pictet, Joseph P. Schoendorf, Peter D. Sutherland, and Victor L. L. Chu. The foundation is funded by its 1,000 member companies, typically global enterprises with more than five billion dollars in turnover (varying by industry and region). These enterprises rank among the top companies within their industry and/or country and play a leading role in shaping the future of their industry and/or region. Membership is stratified by the level of engagement with forum activities, with the level of membership fees increasing as participation in meetings, projects, and initiatives rises. In 2011, an annual membership cost $52,000 for an individual member, $263,000 for an "Industry Partner" and $527,000 for a "Strategic Partner". An admission fee cost $19,000 per person. In 2014, the WEF raised annual fees by 20 percent, bringing the cost for "Strategic Partner" from CHF 500,000 ($523,000) to CHF 600,000 ($628,000). Activities The flagship event of the World Economic Forum is the invitation-only annual meeting held at the end of January in Davos, Switzerland, bringing together chief executive officers from its 1,000 member companies, as well as selected politicians, representatives from academia, NGOs, religious leaders, and the media in an alpine environment.
The winter discussions ostensibly focus on key issues of global concern (such as globalization, capital markets, wealth management, international conflicts, environmental problems and their possible solutions). The participants also take part in role-playing events, such as the Investment Heat Map. Informal winter meetings may have led to as many ideas and solutions as the official sessions. In addition to the official programme inside the Congress Hall, numerous independent, invitation-only events are hosted by governments, corporations, and civil-society organisations across Davos. These include the FT/CNBC Nightcap, the Open Forum, the Swedish Lunch, Goals House, SAP House, and the Avicii Tribute Concert for Mental Health Awareness. At the annual meeting, usually 3,000 participants from nearly 110 countries take part in over 400 sessions. Participation has included more than 340 public figures, among them more than 70 heads of state and government and 45 heads of international organizations; 230 media representatives and almost 40 cultural leaders were represented. As many as 500 journalists from online, print, radio, and television take part, with access to all sessions in the official program, some of which are also webcast. Not all the journalists are given access to all areas, however; this is reserved for white badge holders. "Davos runs an almost caste-like system of badges", according to BBC journalist Anthony Reuben. "A white badge means you're one of the delegates – you might be the chief executive of a company or the leader of a country (although that would also get you a little holographic sticker to add to your badge), or a senior journalist. An orange badge means you're just a run-of-the-mill working journalist." Since 2024, the WEF has operated a new badge system in which Accredited Badges are issued; these grant access to the Ice Village, a smaller version of the official Congress Hall. All plenary debates from the annual meeting are also available on YouTube, while photographs are available on Flickr. The town of Davos is designated as a high-security zone during the annual meeting, with extensive protective measures led by the Swiss authorities. Security operations include airspace monitoring, military deployments, and the protection of key sites and official delegations. The Swiss Armed Forces support the Canton of Graubünden with troops, infrastructure, and specialised units to safeguard participants and residents. Access to parts of the town is restricted, with secure hotels and controlled zones requiring specific accreditation to enter. A number of hotels in Davos are designated as "secure hotels", providing controlled access and heightened protection for heads of state, ministers and senior officials. The specific hotels receiving this status vary from year to year, but have included the Belvedere Hotel, the Alpengold Hotel (formerly the InterContinental), the Seehof Hotel, and the Hilton Garden Inn. These properties fall within the Secure Zone (Promenade 95–101) and require special accreditation for entry (a Secure Hotel Badge), functioning as protected accommodation and meeting zones for official delegations during the World Economic Forum. The World Economic Forum 2025 took place in Davos, Switzerland, from 20 to 24 January, under the theme Collaboration for the Intelligent Age.
The event brought together approximately 3,000 global leaders from over 125 countries, including 350 heads of state and government, business executives, policymakers, and representatives from international organizations. Discussions focused on geopolitical stability, economic resilience, climate change, artificial intelligence governance, and inclusive economic growth. Sessions covered topics such as the future of global trade, the energy transition, and the impact of artificial intelligence and automation on the labor market. Several initiatives were introduced, including policy frameworks for AI regulation, climate financing mechanisms, and economic strategies for sustainable development. Among the initiatives discussed was the Global India Dialogues, launched by the Motwani Jadeja Foundation, which focused on India's role in global geopolitics, technology, and innovation. Discussions on gender equity and economic inclusion were also highlighted through initiatives such as the Global Good Alliance for Gender Equity and Equality, which explored the economic impact of investments in women's health. The event featured key figures such as Ursula von der Leyen, Antony Blinken, Christian Lindner, and Sam Altman, alongside representatives from the United Nations, International Monetary Fund, and World Bank. The forum underscored the importance of international cooperation in addressing global economic and technological challenges. Some 3,000 individual participants joined the 2020 annual meeting in Davos. Countries with the most attendees included the United States (674 participants), the United Kingdom (270), Switzerland (159), Germany (137) and India (133). Among the attendees were heads of state or government, cabinet ministers, ambassadors, and heads or senior officials of international organizations, including: Sanna Marin (prime minister of Finland), Ursula von der Leyen (president of the European Commission), Christine Lagarde (ECB president), Greta Thunberg (climate activist), Ren Zhengfei (Huawei Technologies founder), Kristalina Georgieva (managing director of the IMF), Deepika Padukone (Bollywood actress), George Soros (investor), Hideki Makihara (member of the House of Representatives of Japan), and Donald Trump (president of the United States). A 2014 analysis by The Economist found that the vast majority of participants were male and more than 50 years old. Careers in business accounted for most of the participants' backgrounds (1,595 conference attendees), with the remaining seats shared between government (364), NGOs (246) and press (234). Academia, which had been the basis of the first annual conference in 1971, had been marginalised to the smallest participant group (183 attendees). Next to individual participants, the World Economic Forum maintains a dense network of corporate partners that can apply for different partnership ranks within the forum. For 2019, Bloomberg identified a total of 436 listed corporations that participated in the annual meeting, measuring a stock underperformance by the Davos participants of around 10% versus the S&P 500 during the same year. Drivers include, among others, an overrepresentation of financial companies and an underrepresentation of fast-growing health care and information technology businesses at the conference. The Economist had found similar results in an earlier study, showing an underperformance of Davos participants against both the MSCI World Index and the S&P 500 between 2009 and 2014.
In 2007, the foundation established the Annual Meeting of the New Champions (also called Summer Davos), held annually in China, alternating between Dalian and Tianjin, bringing together 1,500 participants from what the foundation calls Global Growth Companies, primarily from rapidly growing emerging countries such as China, Russia, Mexico, and Brazil, but also including quickly growing companies from developed countries. The meeting also engages with the next generation of global leaders from fast-growing regions and competitive cities, as well as technology pioneers from around the globe. The premier of China has delivered a plenary address at each annual meeting. Every year regional meetings take place, enabling close contact among corporate business leaders, local government leaders, and NGOs. Meetings are held in Africa, East Asia, Latin America, and the Middle East. The mix of hosting countries varies from year to year, but China and India have hosted consistently throughout the decade since 2000. The group of Young Global Leaders consists of 800 people chosen by the WEF organizers as being representative of contemporary leadership. After five years of participation they are considered alumni. The program attracted controversy when Schwab, the founder, admitted to "penetrat[ing]" governments with Young Global Leaders. He added that as of 2017 "more than half" of Justin Trudeau's Cabinet had been members of the program. Since 2000, the WEF has been promoting models developed in close collaboration with the Schwab Foundation for Social Entrepreneurship, highlighting social entrepreneurship as a key element to advance societies and address social problems. Selected social entrepreneurs are invited to participate in the foundation's regional meetings and the annual meetings, where they may meet chief executives and senior government officials. At the 2003 annual meeting, for example, Jeroo Billimoria met with Roberto Blois, deputy secretary-general of the International Telecommunication Union, an encounter that produced a key partnership for her organization Child Helpline International. The foundation also acts as a think tank, publishing a wide range of reports. In particular, "Strategic Insight Teams" focus on producing reports of relevance in the fields of competitiveness, global risks, and scenario thinking. The "Competitiveness Team" produces a range of annual economic reports (year of first publication in parentheses): the Global Competitiveness Report (1979) measured the competitiveness of countries and economies; the Global Information Technology Report (2001) assessed their competitiveness based on their IT readiness; the Global Gender Gap Report examined critical areas of inequality between men and women; the Global Risks Report (2006) assessed key global risks; the Global Travel and Tourism Report (2007) measured travel and tourism competitiveness; the Financial Development Report (2008) aimed to provide a comprehensive means for countries to establish benchmarks for various aspects of their financial systems and establish priorities for improvement; and the Global Enabling Trade Report (2008) presented a cross-country analysis of the large number of measures facilitating trade among nations.
The "Risk Response Network" produces a yearly report assessing risks which are deemed to be within the scope of these teams, have cross-industry relevance, are uncertain, have the potential to cause upwards of US$10 billion in economic damage, have the potential to cause major human suffering, and which require a multi-stakeholder approach for mitigation. In 2020, the forum published a report entitled Nature Risk Rising: Why the Crisis Engulfing Nature Matters for Business and the Economy. In this report the forum estimated that approximately half of global GDP is highly or moderately dependent on nature (the same as IPBES's 2019 assessment report). The report also found that 1 dollar spent on nature restoration yields 9 dollars in economic benefits. On 19 January 2017 the Coalition for Epidemic Preparedness Innovations (CEPI), a global initiative to fight epidemics, was launched at WEF in Davos. The internationally funded initiative aims at securing vaccine supplies for global emergencies and pandemics, and to research new vaccines for tropical diseases, that are now more menacing. The project is funded by private and governmental donors, with an initial investment of US$460m from the governments of Germany, Japan and Norway, plus the Bill & Melinda Gates Foundation and the Wellcome Trust. Between 21 and 24 January 2020, at the early stages of the COVID-19 outbreak, CEPI met with leaders from Moderna to establish plans for a COVID-19 vaccine at the Davos gathering, with a total global case number of 274 and total loss of life the virus at 16. The WHO declared a global health emergency 6 days later. The Global Water Initiative brings together diverse stakeholders such as Alcan Inc., the Swiss Agency for Development and Cooperation, USAID India, UNDP India, Confederation of Indian Industry (CII), Government of Rajasthan, and the NEPAD Business Foundation to develop public-private partnerships on water management in South Africa and India.[citation needed] In an effort to combat corruption, the Partnering Against Corruption Initiative (PACI) was launched by CEOs from the engineering and construction, energy and metals, and mining industries at the annual meeting in Davos during January 2004. PACI is a platform for peer exchange on practical experience and dilemma situations. Approximately 140 companies have joined the initiative. In the beginning of the 21st century, the forum began to increasingly deal with environmental issues. In the Davos Manifesto 2020 it is said that a company among other: The Environmental Initiative covers climate change and water issues. Under the Gleneagles Dialogue on Climate Change, the U.K. government asked the World Economic Forum at the G8 Summit in Gleneagles in 2005 to facilitate a dialogue with the business community to develop recommendations for reducing greenhouse gas emissions. This set of recommendations, endorsed by a global group of CEOs, was presented to leaders ahead of the G8 Summit in Toyako, Hokkaido, Japan held in July 2008. In 2016 WEF published an article in which it is said, that in some cases reducing consumption can increase well-being. In the article is mentioned that in Costa Rica the GDP is 4 times smaller than in many countries in Western Europe and North America, but people live longer and better. An American study shows that those whose income is higher than $75,000, do not necessarily have an increase in well-being. To better measure well-being, the New Economics Foundation's launched the Happy Planet Index. 
In January 2017, the WEF launched the Platform for Accelerating the Circular Economy (PACE), a global public–private partnership seeking to scale circular economy innovations. PACE is co-chaired by Frans van Houten (CEO of Philips), Naoko Ishii (CEO of the Global Environment Facility), and the head of the United Nations Environment Programme (UNEP). The Ellen MacArthur Foundation, the International Resource Panel, Circle Economy, Chatham House, the Dutch National Institute for Public Health and the Environment, the United Nations Environment Programme and Accenture serve as knowledge partners, and the program is supported by the UK Department for Environment, Food and Rural Affairs, DSM, FrieslandCampina, Global Affairs Canada, the Dutch Ministry of Infrastructure and Water Management, Rabobank, Shell, SITRA, and Unilever. The Forum emphasized its "Environment and Natural Resource Security Initiative" for the 2017 meeting to achieve inclusive economic growth and sustainable practices for global industries. With increasing limitations on world trade through national interests and trade barriers, the WEF has moved towards a more sensitive and socially minded approach for global businesses, with a focus on the reduction of carbon emissions in China and other large industrial nations. Also in 2017, the WEF launched the Fourth Industrial Revolution (4IR) for the Earth Initiative, a collaboration among the WEF, Stanford University and PwC, funded through the Mava Foundation. In 2018, the WEF announced that one project within this initiative was to be the Earth BioGenome Project, the aim of which is to sequence the genomes of every organism on Earth. The World Economic Forum is working to eliminate plastic pollution, stating that by 2050 plastic will consume 15% of the global carbon budget and will outweigh fish in the world's oceans. One of the methods proposed is to achieve a circular economy. The theme of the 2020 World Economic Forum annual meeting was "Stakeholders for a Cohesive and Sustainable World". Climate change and sustainability were central themes of discussion. Many argued that GDP fails to represent well-being correctly and that fossil fuel subsidies should be stopped. Many of the participants said that a better capitalism is needed. Al Gore summarized the ideas of the conference as: "The version of capitalism we have today in our world must be reformed". At the 2021 annual meeting, the UNFCCC launched the "UN Race-to-Zero Emissions Breakthroughs". The aim of the campaign is to transform 20 sectors of the economy in order to achieve zero greenhouse gas emissions. At least 20% of each sector should take specific measures, and 10 sectors should be transformed before COP 26 in Glasgow. According to the organizers, 20% is a tipping point, after which the whole sector begins to change irreversibly. In April 2020, the forum published an article postulating that the COVID-19 pandemic is linked to the destruction of nature: the number of emerging diseases is rising, and this rise is linked to deforestation and species loss. The article gives multiple examples of the degradation of ecological systems caused by humans, and also says that half of global GDP is moderately or largely dependent on nature. It concludes that the recovery from the pandemic should be linked to nature recovery, and the forum proposed a plan for a green recovery. The plan includes advancing the circular economy.
Among the methods mentioned are green building, sustainable transport, organic farming, urban open space, renewable energy and electric vehicles. The Global Shapers Community (GSC), an initiative of the World Economic Forum, selects young leaders under 30 to be change agents in the world. Global Shapers develop and lead city-based hubs that implement social justice projects advancing the mission of the World Economic Forum; the GSC has over 10,000 members in more than 500 hubs in 154 countries. Some critics see the WEF's increasing focus on activist areas such as environmental protection and social entrepreneurship as a strategy to disguise the true plutocratic goals of the organisation. In May 2020, the WEF and the then-Prince of Wales's Sustainable Markets Initiative launched "The Great Reset" project, a five-point plan to enhance sustainable economic growth following the global recession caused by the COVID-19 pandemic lockdowns. "The Great Reset" was to be the theme of the WEF's annual meeting in August 2021. According to forum founder Schwab, the intention of the project is to reconsider the meaning of capitalism and capital. While not abandoning capitalism, he proposes to change, and possibly move on from, some aspects of it, including neoliberalism and free-market fundamentalism; the role of corporations, taxation and more should be reconsidered, while international cooperation and trade, as well as the Fourth Industrial Revolution, should be defended. The forum defines the system it wants to create as "Stakeholder Capitalism" and supports trade unions. The 'Great Reset' has also been the target of several debunked conspiracy theories, which heavily overlap with related conspiracy theories concerning the 'New World Order', QAnon, and COVID-19. Criticism During the late 1990s, the WEF, as well as the G7, World Bank, World Trade Organization, and International Monetary Fund, came under heavy criticism from anti-globalization activists who asserted that capitalism and globalization were increasing poverty and destroying the environment. In 2000, about 10,000 demonstrators disrupted a regional WEF meeting in Melbourne by obstructing the path of 200 delegates. Small demonstrations are held in Davos in most, but not all, years, organised by the local Green Party (see Anti-WEF protests in Switzerland, January 2003), to protest against what have been called the meetings of "fat cats in the snow", a tongue-in-cheek term used by rock singer Bono. After 2014, the physical protest movement against the World Economic Forum largely died down, and Swiss police noted a significant decline in attending protesters, with at most 20 during the 2016 meeting. While protesters are still more numerous in large Swiss cities, the protest movement itself has undergone significant change. Around 150 Tibetans and Uighurs protested in Geneva, and 400 Tibetans in Bern, against the visit of China's paramount leader Xi Jinping for the 2017 meeting, with subsequent confrontations and arrests. A number of NGOs have used the World Economic Forum to highlight growing inequalities and wealth gaps, which they consider to have been neglected, or even exacerbated, by institutions like the WEF. Winnie Byanyima, the former executive director of the anti-poverty confederation Oxfam International, co-chaired the 2015 meeting, where she presented a critical report on global wealth distribution based on statistical research by the Credit Suisse Research Institute.
According to this study, the richest 1% of people in the world own 48% of the world's wealth. At the 2019 meeting, she presented another report, "Public Good or Private Wealth", in which she stated that the gap between rich and poor had widened: 2,200 billionaires worldwide saw their wealth grow by 12% while the poorest half of humanity saw its wealth fall by 11%. Oxfam called for a global tax overhaul to increase and harmonise global tax rates for corporations and wealthy individuals. "You'll own nothing and be happy" is a phrase adapted from an essay written by Ida Auken in 2016 for the WEF, pondering a future in which urban residents would rely on shared services for many expensive items such as appliances and vehicles. Shortly after its publication, a commentator for European Digital Rights criticized Auken's vision of centralized property ownership as a "benevolent dictatorship". During the COVID-19 pandemic, the phrase went viral, eliciting strongly negative reactions from mostly conservative but also some left-wing and unaffiliated commentators. Responding to viral social media posts based on the phrase, the WEF denied that it had any goal of limiting the ownership of private property. Rutger Bregman, a Dutch historian invited to a 2018 WEF panel on inequality, went viral when he suggested that the best way for the attendees to attack inequality was to stop avoiding taxes. Bregman described his motivation, saying "it feels like I’m at a firefighters’ conference and no one’s allowed to speak about water". "Davos Man" is a neologism referring to members of a detached global elite who view themselves as completely "international". According to political scientist Samuel P. Huntington, who is credited with coining the term, it denotes people who "have little need for national loyalty, view national boundaries as obstacles, and see national governments as residues from the past whose only useful function is to facilitate the elite's global operations". In his 2004 article "Dead Souls: The Denationalization of the American Elite", Huntington argues that this international perspective is a minority elitist position not shared by the nationalist majority of the people. The Transnational Institute describes the World Economic Forum's main purpose as being "to function as a socializing institution for the emerging global elite, globalization's 'Mafiocracy' of bankers, industrialists, oligarchs, technocrats and politicians. They promote common ideas, and serve common interests: their own." In 2019, the Manager Magazin journalist Henrik Müller argued that the "Davos Man" had already decayed into different groups and camps, and identified three central drivers for this development. In 2022, the term was used again in a book by The New York Times journalist Peter S. Goodman, Davos Man: How the Billionaires Devoured the World, described as a passionate book against global financial inequality. Critics argue that the WEF, despite having reserves of several hundred million Swiss francs and paying its executives salaries of around 1 million Swiss francs per year, pays no federal tax and moreover shifts part of its costs onto the public. Following massive criticism from politicians and Swiss civil society, the Swiss federal government decided in February 2021 to reduce its annual contributions to the WEF. As of 2018, the police and military expenditures carried by the federal government stood at 39 million Swiss francs.
The Aargauer Zeitung argued in January 2020 that the additional cost borne by the canton of Graubünden stands at CHF 9 million per year. In the Swiss National Council, the Swiss Green Party summarised its criticism that holding the World Economic Forum has cost Swiss taxpayers hundreds of millions of Swiss francs over the past decades; in its view, it is questionable to what extent the Swiss population or the global community benefit from these expenditures. Women have been broadly underrepresented at the WEF, according to some critics. The female participation rate at the WEF increased from 9% to 15% between 2001 and 2005. In 2016, 18% of WEF attendees were female; this number increased to 21% in 2017 and 24% in 2020. Several women have since shared their personal impressions of the Davos meetings in media articles, highlighting that the issues were more profound than "a quota at Davos for female leaders or a session on diversity and inclusion". The World Economic Forum has in this context filed legal complaints against at least three investigative articles by reporters Katie Gibbons and Billy Kenber published by the British newspaper The Times in March 2020, with the articles still online as of January 2024. According to The Wall Street Journal, the WEF has faced numerous accusations of workplace discrimination against women and Black people. According to the European Parliament's think tank, critics see the WEF as an instrument for political and business leaders to "take decisions without having to account to their electorate or shareholders". Since 2009, the WEF has been working on a project called the Global Redesign Initiative (GRI), which proposes a transition away from intergovernmental decision-making towards a system of multi-stakeholder governance. According to the Transnational Institute (TNI), the Forum is hence planning to replace a recognised democratic model with one in which a self-selected group of "stakeholders" make decisions on behalf of the people. Some critics have seen the WEF's attention to goals like environmental protection and social entrepreneurship as mere window dressing to disguise its true plutocratic nature and goals. In a Guardian opinion piece, Cas Mudde argued that such plutocrats should not be the ones to control political agendas, deciding which issues to focus on and how to address them. A writer in the German magazine Cicero saw the situation as academic, cultural, media and economic elites grasping for social power while disregarding political decision processes: a materially well-endowed milieu trying to "cement its dominance of opinion and sedate ordinary people with maternalistic-paternalistic social benefits, so that they are not disturbed by the common people when they steer". The French newspaper Les Echos furthermore concluded that Davos "represents the exact values people rejected at the ballot box". In 2017, the former Frankfurter Allgemeine Zeitung journalist Jürgen Dunsch criticized the WEF's financial reports as not very transparent, since neither income nor expenditures were broken down. In addition, he noted that the foundation capital was not quantified, while the apparently not insignificant profits were reinvested. Recent annual reports published by the WEF include a more detailed breakdown of its financials and indicate revenues of CHF 349 million for the year 2019, with reserves of CHF 310 million and a foundation capital of CHF 34 million.
No details are provided, however, as to the asset classes or individual holdings in which the WEF invests its financial assets of CHF 261 million. From July 2019 to June 2020, the World Economic Forum spent €250,000 on lobbying the European Union. The German newspaper Süddeutsche Zeitung criticised in this context that the WEF had turned into a "money printing machine" that is run like a family business and provides a comfortable living for its key personnel; the foundation's founder Klaus Schwab draws a salary of around one million Swiss francs per year. In a request to the Swiss National Council, the Swiss Green Party criticised the fact that invitations to the annual meeting and programmes of the World Economic Forum are issued according to unclear criteria. It highlighted that "despots" such as Saif al-Islam Gaddafi, son of the Libyan dictator Muammar Gaddafi, had been invited to the WEF and even awarded membership in the club of "Young Global Leaders". Even after the beginning of the Arab Spring in December 2010 and the related violent uprisings against despotic regimes, the WEF continued to invite Gaddafi to its annual meeting. Critics emphasise that the annual meeting of the World Economic Forum is counterproductive in combating pressing problems of humanity such as the climate crisis: even in 2020, participants travelled to the WEF annual meeting in Davos on around 1,300 private jets, while in their view the total emissions burden from transport and accommodation was enormous. The World Economic Forum's "Global Redesign" report suggests creating a "public-private" United Nations (UN) in which selected agencies would operate and steer global agendas under shared governance systems. In September 2019, more than 400 civil society organizations and 40 international networks heavily criticised a partnership agreement between the WEF and the United Nations and called on the UN secretary-general to end it. They see such an agreement as a "disturbing corporate capture of the UN, which moved the world dangerously towards a privatised global governance". The Dutch Transnational Institute think tank summarises that we are increasingly entering a world where gatherings such as Davos are "a silent global coup d'état" to capture governance. In 2019, the Swiss newspaper WOZ had its accreditation request for the annual meeting refused and subsequently accused the World Economic Forum of favouring specific media outlets, highlighting that the WEF stated in its refusal message that it [the forum] prefers media outlets it works with throughout the year. WOZ deputy head Yves Wegelin called this a strange idea of journalism, because in "journalism you don't necessarily have to work with large corporations, but rather critique them". In a December 2020 article for The Intercept, Naomi Klein described WEF initiatives like the "Great Reset" as simply a "coronavirus-themed rebranding" of things the WEF was already doing, and as an attempt by the rich to make themselves look good. In her opinion, "the Great Reset is merely the latest edition of this gilded tradition, barely distinguishable from earlier Davos Big Ideas".
Similarly, in his review of COVID-19: The Great Reset, the ethicist Steven Umbrello makes parallel critiques of the agenda, saying that the WEF "whitewash[es] a seemingly optimistic future post-Great Reset with buzz words like equity and sustainability" while functionally jeopardizing those goals. A study published in the Journal of Consumer Research investigated the sociological impact of the WEF and concluded that the WEF does not solve issues such as poverty, global warming, chronic illness, or debt, but has simply shifted the burden for the solution of these problems from governments and business to "responsible consumer subjects: the green consumer, the health-conscious consumer, and the financially literate consumer". In December 2021, the Catholic cardinal and former prefect of the Congregation for the Doctrine of the Faith (CDF) Gerhard Ludwig Müller said in a controversial interview that people like WEF founder Schwab sit "on the throne of their wealth", untouched by the everyday difficulties and suffering that people face, for example due to the COVID-19 pandemic; on the contrary, such elites would see crises as an opportunity to push through their agendas. He particularly criticised the control such people exercise over others and their embrace of areas such as transhumanism. The Central Council of Jews in Germany condemned this criticism, which also invoked Jewish financial investors, as antisemitic. Separately, the WEF has been criticized as "hypocritical" towards Palestinian human rights after it rejected a petition from its own constituents to condemn Israel's aggression against Palestinians, citing the need to remain "impartial" on the issue; Khaled Al Sabawi, writing in Mondoweiss, called this hypocritical given that the WEF voluntarily condemned Russia's aggression against Ukraine months later. In July 2025, the founder of the World Economic Forum, Klaus Schwab, was accused of abuse of power after an internal WEF investigation found that, when data for the WEF's 2017/18 annual competitiveness report showed the UK had moved up the ranking from seventh to fourth place, he intervened by writing to staff that the UK "must not see any improvement", as this would otherwise be "exploited by the Brexit camp". The final published report showed the UK had instead dropped one place, to eighth. According to the underlying data, India should have dropped 20 places in the same report; however, Schwab told his staff that "we must protect our relationship with India before Davos 2019", and consequently the published report showed India dropping by only one place, to fortieth. Controversies In May 2025, Klaus Schwab launched a defamation and coercion complaint against anonymous whistleblowers whose allegations had prompted his resignation as chair of the WEF board of trustees earlier that year. The accusations, including claims of financial impropriety, research manipulation, and the mishandling of sexual harassment cases, were described by Schwab as "stupid and constructed". Despite stepping down from the WEF, Schwab maintains his innocence and has stated that his lawyers filed a criminal complaint with the Geneva public prosecutor, marking a turbulent period for the World Economic Forum as an organisation. The controversy also highlighted management issues at the WEF identified in earlier investigations and reignited scrutiny over its workplace culture.
In June 2021, WEF founder Klaus Schwab sharply criticised what he characterised as the "profiteering", "complacency" and "lack of commitment" of the municipality of Davos in relation to the annual meeting. He mentioned that the preparation of the COVID-related meeting in Singapore in 2021/2022 had created an alternative to the Swiss host, and put the chance that the annual meeting would stay in Davos at between 40 and 70 percent. As there are many other international conferences nicknamed "Davos", such as the "Davos of the Desert" event organised by Saudi Arabia's Future Investment Initiative Institute, the World Economic Forum has objected to the use of "Davos" for any event not organised by it. This particular statement was issued on 22 October 2018, a day before the opening of the 2018 Future Investment Initiative (nicknamed "Davos in the desert") organised by the Public Investment Fund of Saudi Arabia. Alternatives Since the annual meeting in January 2003 in Davos, an Open Forum Davos, co-organized by the Federation of Swiss Protestant Churches, has been held concurrently with the Davos forum, opening up the debate about globalization to the general public. The Open Forum has been held in the local high school every year, featuring top politicians and business leaders, and is open to all members of the public free of charge. The Public Eye Awards, held every year from 2000 to 2015, were a counter-event to the annual meeting of the World Economic Forum in Davos: a "public competition of the worst corporations in the world". In 2011, more than 50,000 people voted for companies that had acted irresponsibly, and at a ceremony at a Davos hotel the 2011 "winners" were named as Neste Oil of Finland, a maker of diesel from Indonesian palm oil, and the South African mining company AngloGold Ashanti. According to a Schweiz aktuell broadcast on 16 January 2015, a public presence during WEF 2015 could not be guaranteed because of the massively increased security in Davos, and the Public Eye Award would be presented in Davos for the last time ("Public Eye says Goodbye to Davos"), as confirmed by Rolf Marugg (now a Landrat politician), by politicians not directly involved, and by the police responsible.
======================================== |
[SOURCE: https://en.wikipedia.org/wiki/Lulav] | [TOKENS: 1316] |
Contents Lulav Lulav (Hebrew: לוּלָב [luˈlav]) is a closed frond of the date palm tree. It is one of the Four Species used during the Jewish holiday of Sukkot. The other species are the hadass (myrtle), aravah (willow), and etrog (citron). When bound together, the lulav, hadass, and aravah are commonly referred to as "the lulav". Codification in the Torah The Torah mentions the commandment to obtain a lulav for the Sukkot holiday once, in Leviticus 23:40. In the Oral Torah, the Mishnah comments that the biblical commandment to take the lulav, along with the other three species, applies for all seven days of Sukkot only in and around the Temple Mount when the Holy Temple in Jerusalem is extant, as indicated by the verse's phrase "in the presence of Hashem, your God, for seven days." In the rest of the Land of Israel, as well as in the Diaspora, the four species are biblically mandated only on the first day of Sukkot. After the destruction of the Temple in 70 CE, Rabbi Yochanan ben Zakai legislated a rabbinical enactment to take the four species for the entire seven days of the holiday in all locations as a commemoration of what was done in the Temple. As with all Biblical verses, Jewish law derives numerous details and specifications relating to the commandments by interpreting the manner in which words are utilized, spelled and juxtaposed in the verses of the Torah. Rashi explains the pertinent verse in the Bible based on the Talmud's erudition, which focuses on the spelling of the words in the verse that refer to the lulav: kapot t'marim (כפת תמרים, "palms [of] dates"). The first word refers to date stalks (the strands on which the dates sprout) and is written in plural form (kapót - כּפוֹת) rather than singular form (kaf - כף), indicating that the commandment is not to take merely a single leaf of the entire palm. However, the word is written in a deficient manner, without the letter vav that the plural word would normally contain (כפת instead of כפות). Rashi further elucidates, based on the Talmud's erudition, that the missing letter vav indicates that only a single palm is to be taken. The Talmud also uses this spelling irregularity to suggest, according to the opinion of Rabbi Yehudah in the name of Rabbi Tarfon, that the lulav must be bound if its leaves spread away from the spine of the palm. This teaching is derived from the similarity between the spelling of the Hebrew words for "palm" and "binding", and would not be viable had the word for palm been written in its strictly singular form of kaf. The Keli Yakar comments that the verse in Psalms 96:12, az yeranenu kol atzei ya'ar (אז ירננו כל עצי יער, "then all the trees of the forest will sing with joy"), is not only a reference to the shaking of the four species but also a hint at this Biblical specification: the Hebrew word az (אז, "then") is composed of two letters, an aleph (א), with a numerical value of 1, and a zayin (ז), with a numerical value of 7, hinting that the four species are to be taken one day outside of the Temple area and seven days in the Temple. Regulations of the lulav A lulav, as with all mitzvah articles (those used to fulfill biblical and rabbinical requirements within Judaism), must meet certain specifications in order to be kosher and permissible for use in fulfilling the commandment of the four species. Ideally, a lulav consists of a tightly closed frond of the date palm tree.
To qualify, the lulav must be straight, with whole leaves that lie closely together, and must not be bent or broken at the top. The twin middle-most leaves, which naturally grow together and are known as the tiyomet (תיומת, "twin"), should ideally not be split at all; however, the lulav remains kosher as long as the twin middle leaves are not split more than a handbreadth, approximately 3-4 inches. This rule applies on the first day of Sukkot in the Land of Israel, and on the first two days elsewhere; on Chol HaMoed, the disqualifications arising from a split middle leaf do not apply. The term lulav also refers to the lulav in combination with two of the other species—the aravah and the hadass—which are bound together to perform the mitzvah of waving the lulav. These three species are held in one hand while the etrog is held in the other. The user brings his or her hands together and waves the species in all four directions, plus up and down, to attest to God's mastery over all of creation. This ritual also symbolically voices a prayer for adequate rainfall over all the Earth's vegetation in the coming year. (See Four Species for the complete description and symbolism of the waving ceremony.) Although Jews are commanded to take the four species together, the rabbinically ordained blessing mentions only the lulav because it is the largest and most evident of the four. The biblical reference to the four species can be found in Leviticus Chapter 23, verse 40, where the etrog is referred to as "citrus fruit" (etz hadar) and the lulav as "palm branches" (kapot t'marim). Each species is said to kabbalistically represent an aspect of the user's body: the lulav represents the spine, the myrtle the eyes, the willow the lips, and the etrog the heart.
======================================== |
[SOURCE: https://en.wikipedia.org/wiki/Wolfram_Language] | [TOKENS: 913] |
Contents Wolfram Language The Wolfram Language (/ˈwʊlfrəm/ WUUL-frəm) is a proprietary, high-level, multi-paradigm programming language developed by Wolfram Research. It emphasizes symbolic computation, functional programming, and rule-based programming, and can employ arbitrary structures and data. It is the programming language of the mathematical symbolic computation program Mathematica. History The Wolfram Language was part of the initial version of Mathematica in 1988. The symbolic aspects of the engine make it a computer algebra system: the language can perform integration, differentiation and matrix manipulation, and can solve differential equations using a set of rules. The initial version also introduced the notebook model and the ability to embed sound and images, according to Theodore Gray's patent. Wolfram later added features for more complex tasks, such as 3D modeling. A name was finally adopted for the language in 2013, when Wolfram Research decided to make a version of the language engine free for Raspberry Pi users and needed a name for it. It was included in the recommended software bundle that the Raspberry Pi Foundation provides for beginners, which caused some controversy due to the Wolfram Language's proprietary nature. Plans to port the Wolfram Language to the Intel Edison were announced after the board's introduction at CES 2014 but were never released. In 2019, a link was added to make Wolfram libraries compatible with the Unity game engine, giving game developers access to the language's high-level functions. Syntax The Wolfram Language syntax is overall similar to the M-expressions of 1960s LISP, with support for infix operators and "function-notation" function calls. The Wolfram Language writes basic arithmetic expressions using infix operators; function calls are denoted with square brackets, and lists are enclosed in curly brackets (see the sketch at the end of this article). The language may deviate from the M-expression paradigm when an alternative, more human-friendly way of showing an expression is available, and a FullForm formatter desugars such input into plain expressions. Currying is supported. Functions in the Wolfram Language are effectively a case of simple patterns for replacement. The := is the "SetDelayed" operator, so that the right-hand side is not evaluated immediately; x_ is syntax sugar for Pattern[x, Blank[]], i.e. a "blank" that matches any value and binds it to x for the rest of the evaluation. An iteration of bubble sort can be expressed as such a rule: the /; operator is "condition", so that the rule only applies when y > z, and three underscores are the syntax for BlankNullSequence[], a sequence that can be null. The ReplaceRepeated operator //. can be used to apply such a rule repeatedly, until no more change happens. The pattern matching system also easily gives rise to rule-based integration and derivation, as in the Rubi package of rules. Implementations The official and reference implementation of the Wolfram Language lies in Mathematica and associated online services. These are closed source. Wolfram Research has, however, released a parser for the language under the open-source MIT License. The parser was originally developed in C++ but was rewritten in Rust in 2023. The reference book is open access. In the over three-decade-long existence of the Wolfram Language, a number of open-source third-party implementations have also been developed. Richard Fateman's MockMMA from 1991 is of historical note, both for being the earliest reimplementation and for having received a cease-and-desist from Wolfram.
Modern implementations still being maintained as of April 2020 include Symja in Java, expreduce in Go, and the SymPy-based Mathics. These implementations focus on the core language and the computer algebra system it implies, not on the online "knowledgebase" features of Wolfram. In 2019, Wolfram Research released the freeware Wolfram Engine, to be used as a programming library in non-commercial software. This developer-only engine provides a command-line shell for the Mathematica evaluator (with a limited number of kernels) and requires signup and license activation over the web. The freely available Jupyter Notebook/Lab project provides a protocol (ZMQ) for connecting its notebooks to various languages; via the Wolfram Kernel for Jupyter, this is available as an alternative to the text-only CLI interface. Naming The language was officially named in June 2013, having been used as the backend of Mathematica and other Wolfram technologies for over 30 years.
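The inline code examples referred to in the Syntax section above did not survive text extraction. The following is a minimal reconstruction sketch in the Wolfram Language illustrating those features: square-bracket function calls, curly-brace lists, FullForm desugaring, a SetDelayed pattern definition, and a conditional bubble-sort rule applied with ReplaceRepeated. The names square and sortRule are illustrative choices, not taken from the original article.

  Sin[Pi/2]          (* function call with square brackets; evaluates to 1 *)
  {1, 2, 3}          (* a list, enclosed in curly brackets *)
  FullForm[a + b]    (* desugars the infix input into Plus[a, b] *)

  square[x_] := x^2  (* SetDelayed: x_ is a blank pattern bound to x *)
  square[5]          (* the call matches the pattern and yields 25 *)

  (* One bubble-sort step: swap the first adjacent out-of-order pair.
     ___ (BlankNullSequence) matches a possibly empty sequence, and the
     /; condition restricts the rule to the case y > z. *)
  sortRule := {a___, y_, z_, b___} /; y > z :> {a, z, y, b}

  {3, 1, 2} //. sortRule  (* ReplaceRepeated applies the rule until
                             nothing changes, giving {1, 2, 3} *)

Note that the function definition and the sort rule use the same pattern-replacement mechanism; this is what the article means when it says functions are effectively a case of simple patterns for replacement.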
======================================== |