[SOURCE: https://en.wikipedia.org/wiki/Middle_East#cite_ref-:5_61-0] | [TOKENS: 6152] |
Middle East

The Middle East is a geopolitical region encompassing the Arabian Peninsula, Egypt, Iran, Iraq, the Levant, and Turkey. The term came into widespread usage by Western European nations in the early 20th century as a replacement for the term Near East (both were in contrast to the Far East). The term "Middle East" has led to some confusion over its changing definitions, and since the late 20th century it has been criticized as being too Eurocentric. The region includes the vast majority of the territories included in the closely associated definition of West Asia, but without the South Caucasus. It also includes all of Egypt (not just the Sinai region) and all of Turkey (including East Thrace). Most Middle Eastern countries (13 out of 18) are part of the Arab world. The three most populous countries in the region are Egypt, Iran, and Turkey, while Saudi Arabia is the largest Middle Eastern country by area.

The history of the Middle East dates back to ancient times, and the region was long considered the "cradle of civilization". Its geopolitical importance has been recognized and competed for over millennia. The Abrahamic religions (Judaism, Christianity, and Islam) have their origins in the Middle East. Arabs constitute the main ethnic group in the region, followed by Turks, Persians, Kurds, Jews, and Assyrians.

The Middle East generally has a hot, arid climate, especially in the Arabian and Egyptian regions. Several major rivers provide irrigation to support agriculture in limited areas, such as the Nile Delta in Egypt, the Tigris and Euphrates watersheds of Mesopotamia, and the basin of the Jordan River, which spans most of the Levant. These regions are collectively known as the Fertile Crescent and comprise the core of what historians had long referred to as the cradle of civilization; multiple regions of the world have since been classified as also having developed independent, original civilizations. Conversely, the Levantine coast and most of Turkey have relatively temperate climates typical of the Mediterranean, with dry summers and cool, wet winters.

Most of the countries that border the Persian Gulf have vast reserves of petroleum, and monarchs of the Arabian Peninsula in particular have benefitted economically from petroleum exports. Because of the arid climate and dependence on the fossil fuel industry, the Middle East is both a major contributor to climate change and a region expected to be severely adversely affected by it.

Other concepts of the region exist, including the broader Middle East and North Africa (MENA), which includes states of the Maghreb and the Sudan. The term "Greater Middle East" also includes Afghanistan, Mauritania, and Pakistan, as well as parts of East Africa and sometimes Central Asia and the South Caucasus.

Terminology

The term "Middle East" may have originated in the 1850s in the British India Office. However, it became more widely known when United States naval strategist Alfred Thayer Mahan used the term in 1902 to "designate the area between Arabia and India". During this time the British and Russian empires were vying for influence in Central Asia, a rivalry that would become known as the Great Game. Mahan realized the strategic importance not only of the region but also of its center, the Persian Gulf. He labeled the area surrounding the Persian Gulf as the Middle East.
He said that, beyond Egypt's Suez Canal, the Gulf was the most important passage for Britain to control in order to keep the Russians from advancing towards British India. Mahan first used the term in his article "The Persian Gulf and International Relations", published in September 1902 in the National Review, a British journal:

"The Middle East, if I may adopt a term which I have not seen, will some day need its Malta, as well as its Gibraltar; it does not follow that either will be in the Persian Gulf. Naval force has the quality of mobility which carries with it the privilege of temporary absences; but it needs to find on every scene of operation established bases of refit, of supply, and in case of disaster, of security. The British Navy should have the facility to concentrate in force if occasion arise, about Aden, India, and the Persian Gulf."

Mahan's article was reprinted in The Times and followed in October by a 20-article series entitled "The Middle Eastern Question", written by Sir Ignatius Valentine Chirol. During this series, Sir Ignatius expanded the definition of the Middle East to include "those regions of Asia which extend to the borders of India or command the approaches to India." After the series ended in 1903, The Times removed quotation marks from subsequent uses of the term.

Until World War II, it was customary to refer to areas centered on Turkey and the eastern shore of the Mediterranean as the "Near East", while the "Far East" centered on China, India and Japan. The Middle East was then defined as the area from Mesopotamia to Burma, namely the area between the Near East and the Far East; this area broadly corresponds to South Asia. In the late 1930s, the British established the Middle East Command, based in Cairo, for its military forces in the region. After that time, the term "Middle East" gained broader usage in Europe and the United States. Following World War II, for example, the Middle East Institute was founded in Washington, D.C. in 1946. The corresponding adjective is Middle Eastern and the derived noun is Middle Easterner.

While non-Eurocentric terms such as "Southwest Asia" or "Swasia" have been sparsely used, the classification of Egypt, an African country, among the countries of the Middle East challenges the usefulness of such terms. The description Middle has also led to some confusion over changing definitions. Before the First World War, "Near East" was used in English to refer to the Balkans and the Ottoman Empire, while "Middle East" referred to the Caucasus, Persia, and Arabian lands, and sometimes Afghanistan, India and others; in contrast, "Far East" referred to the countries of East Asia (e.g. China, Japan, and Korea). With the collapse of the Ottoman Empire in 1918, "Near East" largely fell out of common use in English, while "Middle East" came to be applied to the emerging independent countries of the Islamic world. However, the usage "Near East" was retained by a variety of academic disciplines, including archaeology and ancient history, in which it describes an area identical to that denoted by "Middle East", a term these disciplines do not use (see ancient Near East).

The first official use of the term "Middle East" by the United States government was in the 1957 Eisenhower Doctrine, which pertained to the Suez Crisis.
Secretary of State John Foster Dulles defined the Middle East as "the area lying between and including Libya on the west and Pakistan on the east, Syria and Iraq on the North and the Arabian peninsula to the south, plus the Sudan and Ethiopia." In 1958, the State Department explained that the terms "Near East" and "Middle East" were interchangeable, and defined the region as including only Egypt, Syria, Israel, Lebanon, Jordan, Iraq, Saudi Arabia, Kuwait, Bahrain, and Qatar.

Since the late 20th century, scholars and journalists from the region, such as the journalist Louay Khraish and the historian Hassan Hanafi, have criticized the use of "Middle East" as a Eurocentric and colonialist term. The Associated Press Stylebook of 2004 says that Near East formerly referred to the farther west countries while Middle East referred to the eastern ones, but that the two are now synonymous. It instructs: "Use Middle East unless Near East is used by a source in a story. Mideast is also acceptable, but Middle East is preferred."

European languages have adopted terms similar to Near East and Middle East. Since these are based on a relative description, their meanings depend on the country and are generally different from the English terms. In German the term Naher Osten (Near East) is still in common use (nowadays the term Mittlerer Osten is more and more common in press texts translated from English sources, albeit having a distinct meaning). In four Slavic languages, terms meaning Near East are the only appropriate ones for the region: Russian Ближний Восток (Blizhniy Vostok), Bulgarian Близкия Изток, Polish Bliski Wschód, and Croatian Bliski istok. However, some European languages do have "Middle East" equivalents, such as French Moyen-Orient, Swedish Mellanöstern, Spanish Oriente Medio or Medio Oriente, Greek Μέση Ανατολή (Mesi Anatoli), and Italian Medio Oriente.

Perhaps because of the political influence of the United States and Europe, and the prominence of the Western press, the Arabic equivalent of Middle East (Arabic: الشرق الأوسط ash-Sharq al-Awsaṭ) has become standard usage in the mainstream Arabic press, carrying the same meaning as the term "Middle East" in North American and Western European usage. The designation Mashriq, also from the Arabic root for East, denotes a variously defined region around the Levant, the eastern part of the Arabic-speaking world (as opposed to the Maghreb, the western part). Even though the term originated in the West, countries of the Middle East that use languages other than Arabic also use it in translation. For instance, the Persian equivalent for Middle East is خاورمیانه (Khāvar-e miyāneh), the Hebrew is המזרח התיכון (hamizrach hatikhon), and the Turkish is Orta Doğu.

Countries and territory

Traditionally included within the Middle East are Arabia, Asia Minor, East Thrace, Egypt, Iran, the Levant, Mesopotamia, and the Socotra Archipelago. The region includes 17 UN-recognized countries and one British Overseas Territory. Various concepts are often paralleled to the Middle East, most notably the Near East, the Fertile Crescent, and the Levant. These are geographical concepts that refer to large sections of the modern-day Middle East, with the Near East being the closest to the Middle East in its geographical meaning. Because it is primarily Arabic-speaking, the Maghreb region of North Africa is sometimes included.
"Greater Middle East" is a political term coined by the second Bush administration in the first decade of the 21st century to denote various countries, pertaining to the Muslim world, specifically Afghanistan, Iran, Pakistan, and Turkey. Various Central Asian countries are sometimes also included. History The Middle East lies at the juncture of Africa and Eurasia and of the Indian Ocean and the Mediterranean Sea (see also: Indo-Mediterranean). It is the birthplace and spiritual center of religions such as Christianity, Islam, Judaism, Manichaeism, Yezidi, Druze, Yarsan, and Mandeanism, and in Iran, Mithraism, Zoroastrianism, Manicheanism, and the Baháʼí Faith. Throughout its history the Middle East has been a major center of world affairs; a strategically, economically, politically, culturally, and religiously sensitive area. The region is one of the regions where agriculture was independently discovered, and from the Middle East it was spread, during the Neolithic, to different regions of the world such as Europe, the Indus Valley and Eastern Africa. Prior to the formation of civilizations, advanced cultures formed all over the Middle East during the Stone Age. The search for agricultural lands by agriculturalists, and pastoral lands by herdsmen meant different migrations took place within the region and shaped its ethnic and demographic makeup. The Middle East is widely and most famously known as the cradle of civilization. The world's earliest civilizations, Mesopotamia (Sumer, Akkad, Assyria and Babylonia), ancient Egypt and Kish in the Levant, all originated in the Fertile Crescent and Nile Valley regions of the ancient Near East. These were followed by the Hittite, Greek, Hurrian and Urartian civilisations of Asia Minor; Elam, Persia and Median civilizations in Iran, as well as the civilizations of the Levant (such as Ebla, Mari, Nagar, Ugarit, Canaan, Aramea, Mitanni, Phoenicia and Israel) and the Arabian Peninsula (Magan, Sheba, Ubar). The Near East was first largely unified under the Neo Assyrian Empire, then the Achaemenid Empire followed later by the Macedonian Empire and after this to some degree by the Iranian empires (namely the Parthian and Sassanid Empires), the Roman Empire and Byzantine Empire. The region served as the intellectual and economic center of the Roman Empire and played an exceptionally important role due to its periphery on the Sassanid Empire. Thus, the Romans stationed up to five or six of their legions in the region for the sole purpose of defending it from Sassanid and Bedouin raids and invasions. From the 4th century CE onwards, the Middle East became the center of the two main powers at the time, the Byzantine Empire and the Sassanid Empire. However, it would be the later Islamic Caliphates of the Middle Ages, or Islamic Golden Age which began with the Islamic conquest of the region in the 7th century AD, that would first unify the entire Middle East as a distinct region and create the dominant Islamic Arab ethnic identity that largely (but not exclusively) persists today. The 4 caliphates that dominated the Middle East for more than 600 years were the Rashidun Caliphate, the Umayyad caliphate, the Abbasid caliphate and the Fatimid caliphate. Additionally, the Mongols would come to dominate the region, the Kingdom of Armenia would incorporate parts of the region to their domain, the Seljuks would rule the region and spread Turko-Persian culture, and the Franks would found the Crusader states that would stand for roughly two centuries. 
Josiah Russell estimates the population of what he calls "Islamic territory" at roughly 12.5 million in 1000 – Anatolia 8 million, Syria 2 million, and Egypt 1.5 million. From the 16th century onward, the Middle East came to be dominated, once again, by two main powers: the Ottoman Empire and the Safavid dynasty.

The modern Middle East began after World War I, when the Ottoman Empire, which was allied with the Central Powers, was defeated by the Allies and partitioned into a number of separate nations, initially under British and French mandates. Other defining events in this transformation included the establishment of Israel in 1948 and the eventual departure of the European powers, notably Britain and France, by the end of the 1960s. They were supplanted in some part by the rising influence of the United States from the 1970s onwards.

In the 20th century, the region's significant stocks of crude oil gave it new strategic and economic importance. Mass production of oil began around 1945, with Saudi Arabia, Iran, Kuwait, Iraq, and the United Arab Emirates having large quantities of oil. Estimated oil reserves, especially in Saudi Arabia and Iran, are some of the highest in the world, and the international oil cartel OPEC is dominated by Middle Eastern countries.

During the Cold War, the Middle East was a theater of ideological struggle between the two superpowers and their allies: NATO and the United States on one side, and the Soviet Union and the Warsaw Pact on the other, as they competed to influence regional allies. Besides the political reasons there was also the "ideological conflict" between the two systems. Moreover, as Louise Fawcett argues, "among many important areas of contention, or perhaps more accurately of anxiety, were, first, the desires of the superpowers to gain strategic advantage in the region, second, the fact that the region contained some two-thirds of the world's oil reserves in a context where oil was becoming increasingly vital to the economy of the Western world [...]" Within this contextual framework, the United States sought to divert the Arab world from Soviet influence. Throughout the 20th and 21st centuries, the region has experienced both periods of relative peace and tolerance and periods of conflict, particularly between Sunnis and Shiites.

Geography

In 2018, the MENA region emitted 3.2 billion tonnes of carbon dioxide and produced 8.7% of global greenhouse gas (GHG) emissions despite making up only 6% of the global population. These emissions are mostly from the energy sector, an integral component of many Middle Eastern and North African economies due to the extensive oil and natural gas reserves found within the region.

The Middle East is one of the regions most vulnerable to climate change. The impacts include increases in drought conditions, aridity, heatwaves and sea level rise. Sharp global temperature and sea level changes, shifting precipitation patterns and increased frequency of extreme weather events are some of the main impacts of climate change as identified by the Intergovernmental Panel on Climate Change (IPCC). The MENA region is especially vulnerable to such impacts due to its arid and semi-arid environment, facing climatic challenges such as low rainfall, high temperatures and dry soil. The climatic conditions that foster such challenges for MENA are projected by the IPCC to worsen throughout the 21st century.
If greenhouse gas emissions are not significantly reduced, part of the MENA region risks becoming uninhabitable before the year 2100. Climate change is expected to put significant strain on already scarce water and agricultural resources within the MENA region, threatening the national security and political stability of all included countries. Over 60 percent of the region's population lives in high or very high water-stressed areas, compared with a global average of 35 percent. This has prompted some MENA countries to engage with the issue of climate change on an international level through environmental accords such as the Paris Agreement. Law and policy are also being established at the national level among MENA countries, with a focus on the development of renewable energies.

Economy

Middle Eastern economies range from very poor (such as Gaza and Yemen) to extremely wealthy (such as Qatar and the UAE). According to the International Monetary Fund, the three largest Middle Eastern economies by nominal GDP in 2023 were Saudi Arabia ($1.06 trillion), Turkey ($1.03 trillion), and Israel ($0.54 trillion). In nominal GDP per person, the highest-ranking countries are Qatar ($83,891), Israel ($55,535), the United Arab Emirates ($49,451) and Cyprus ($33,807). Turkey ($3.6 trillion), Saudi Arabia ($2.3 trillion), and Iran ($1.7 trillion) had the largest economies in terms of GDP at purchasing power parity (PPP). In GDP (PPP) per person, the highest-ranking countries are Qatar ($124,834), the United Arab Emirates ($88,221), Saudi Arabia ($64,836), Bahrain ($60,596) and Israel ($54,997). The lowest-ranking country in the Middle East, in terms of nominal GDP per capita, is Yemen ($573).

The economic structures of Middle Eastern nations differ: while some are heavily dependent on the export of oil and oil-related products (such as Saudi Arabia, the UAE and Kuwait), others have a highly diverse economic base (such as Cyprus, Israel, Turkey and Egypt). Industries of the region include oil and oil-related products, agriculture, cotton, cattle, dairy, textiles, leather products, surgical instruments, and defence equipment (guns, ammunition, tanks, submarines, fighter jets, UAVs, and missiles). Banking is an important sector, especially for the UAE and Bahrain.

With the exception of Cyprus, Turkey, Egypt, Lebanon and Israel, tourism has been a relatively undeveloped area of the economy, in part because of the socially conservative nature of the region as well as political turmoil in certain areas. Since the end of the COVID-19 pandemic, however, countries such as the UAE, Bahrain, and Jordan have begun attracting greater numbers of tourists thanks to improved tourist facilities and the relaxation of restrictive tourism-related policies.

Unemployment is high in the Middle East and North Africa region, particularly among people aged 15–29, a demographic representing 30% of the region's population. The total regional unemployment rate in 2025 is 10.8%, while youth unemployment is as high as 28%.

Demographics

Arabs constitute the largest ethnic group in the Middle East, followed by various Iranian peoples and then by Turkic peoples (Turkish, Azeris, Syrian Turkmen, and Iraqi Turkmen). Native ethnic groups of the region include, in addition to Arabs, Arameans, Assyrians, Baloch, Berbers, Copts, Druze, Greek Cypriots, Jews, Kurds, Lurs, Mandaeans, Persians, Samaritans, Shabaks, Tats, and Zazas.
European ethnic groups that form a diaspora in the region include Albanians, Bosniaks, Circassians (including Kabardians), Crimean Tatars, Greeks, Franco-Levantines, Italo-Levantines, and Iraqi Turkmens. Among other migrant populations are Chinese, Filipinos, Indians, Indonesians, Pakistanis, Pashtuns, Romani, and Afro-Arabs.

"Migration has always provided an important vent for labor market pressures in the Middle East. For the period between the 1970s and 1990s, the Arab states of the Persian Gulf in particular provided a rich source of employment for workers from Egypt, Yemen and the countries of the Levant, while Europe had attracted young workers from North African countries due both to proximity and the legacy of colonial ties between France and the majority of North African states."

According to the International Organization for Migration, there are 13 million first-generation migrants from Arab nations in the world, of whom 5.8 million reside in other Arab countries. Expatriates from Arab countries contribute to the circulation of financial and human capital in the region and thus significantly promote regional development. In 2009, Arab countries received a total of US$35.1 billion in remittance inflows, and remittances sent to Jordan, Egypt and Lebanon from other Arab countries were 40 to 190 per cent higher than trade revenues between these and other Arab countries. In Somalia, the Somali Civil War has greatly increased the size of the Somali diaspora, as many of the best-educated Somalis left for Middle Eastern countries as well as Europe and North America.

Non-Arab Middle Eastern countries such as Turkey, Israel and Iran are also subject to important migration dynamics. A fair proportion of those migrating from Arab nations are members of ethnic and religious minorities facing persecution and are not necessarily ethnic Arabs, Iranians or Turks. Large numbers of Kurds, Jews, Assyrians, Greeks and Armenians, as well as many Mandeans, have left nations such as Iraq, Iran, Syria and Turkey for these reasons during the last century. In Iran, many religious minorities such as Christians, Baháʼís, Jews and Zoroastrians have left since the Islamic Revolution of 1979.

The Middle East is very diverse when it comes to religions, many of which originated there. Islam is the largest religion in the Middle East, but other faiths that originated there, such as Judaism and Christianity, are also well represented. Christian communities have played a vital role in the Middle East; they represent 78% of the population of Cyprus and 40.5% of the population of Lebanon, where the Lebanese president, half of the cabinet, and half of the parliament follow one of the various Lebanese Christian rites. There are also important minority religions such as the Baháʼí Faith, Yarsanism, Yazidism, Zoroastrianism, Mandaeism, Druze, and Shabakism, and in ancient times the region was home to Mesopotamian religions, Canaanite religions, Manichaeism, Mithraism and various monotheist gnostic sects.

The top six languages, in terms of numbers of speakers, are Arabic, Persian, Turkish, Kurdish, Modern Hebrew and Greek. About 20 minority languages are also spoken in the Middle East. Arabic, with all its dialects, is the most widely spoken language in the Middle East, with Literary Arabic being official in all North African and most West Asian countries. Arabic dialects are also spoken in some adjacent areas of neighbouring non-Arab Middle Eastern countries. Arabic is a member of the Semitic branch of the Afro-Asiatic languages.
Several Modern South Arabian languages, such as Mehri and Soqotri, are also spoken in Yemen and Oman. Another Semitic language is Aramaic, whose dialects are spoken mainly by Assyrians and Mandaeans, with Western Aramaic still spoken in two villages near Damascus, Syria. There is also an oasis Berber-speaking community in Egypt, where the language is known as Siwa; it is a non-Semitic Afro-Asiatic sister language.

Persian is the second-most spoken language. It is primarily spoken in Iran and in some border areas of neighbouring countries, and Iran is one of the region's largest and most populous countries. Persian belongs to the Indo-Iranian branch of the family of Indo-European languages. Other Western Iranic languages spoken in the region include Achomi, Daylami, Kurdish dialects, Semnani and Lurish, amongst many others.

Turkish, a close third, is largely confined to Turkey, which is also one of the region's largest and most populous countries, but the language is present in areas of neighboring countries. It is a member of the Turkic languages, which have their origins in East Asia. Another Turkic language, Azerbaijani, is spoken by Azerbaijanis in Iran.

The fourth-most widely spoken language, Kurdish, is spoken in Iran, Iraq, Syria and Turkey; Sorani Kurdish is the second official language in Iraq (instated after the 2005 constitution), after Arabic.

Hebrew is the official language of Israel, with Arabic given a special status after the 2018 Basic Law lowered it from its prior status as an official language. Hebrew is spoken and used by over 80% of Israel's population, the other 20% using Arabic. Modern Hebrew only began being spoken in the 20th century, after being revived in the late 19th century by Eliezer Ben-Yehuda (Eliezer Perlman) and European Jewish settlers, with the first native Hebrew speaker being born in 1882.

Greek is one of the two official languages of Cyprus, and the country's main language. Small communities of Greek speakers exist all around the Middle East; until the 20th century Greek was also widely spoken in Asia Minor (where it was the second-most spoken language, after Turkish) and Egypt. In antiquity, Ancient Greek was the lingua franca for many areas of the western Middle East, and it was widely spoken there until the Muslim expansion. Until the late 11th century, it was also the main spoken language in Asia Minor; after that it was gradually replaced by the Turkish language as the Anatolian Turks expanded and the local Greeks were assimilated, especially in the interior.

English is one of the official languages of Akrotiri and Dhekelia. It is also commonly taught and used as a second language in countries such as Egypt, Jordan, Iran, Iraq, Qatar, Bahrain, the United Arab Emirates and Kuwait, and is a main language in some emirates of the United Arab Emirates. It is also spoken as a native language by Jewish immigrants from Anglophone countries (the UK, the US, Australia) in Israel and widely understood as a second language there.

French is taught and used in many government facilities and media in Lebanon, and is taught in some primary and secondary schools of Egypt and Syria; owing to widespread immigration of French Jews to Israel, it is also the native language of approximately 200,000 Jews in Israel. Maltese, a Semitic language mainly spoken in Europe, is used by the Franco-Maltese diaspora in Egypt.

Armenian speakers are also to be found in the region, and Georgian is spoken by the Georgian diaspora.
Russian is spoken by a large portion of the Israeli population, owing to immigration in the late 1990s, and remains a popular unofficial language in Israel; news, radio and signboards in Russian can be found around the country, after Hebrew and Arabic. Circassian is also spoken by the diaspora in the region and by almost all Circassians in Israel, who speak Hebrew and English as well. The largest Romanian-speaking community in the Middle East is found in Israel, where, as of 1995, Romanian was spoken by 5% of the population.

Bengali, Hindi and Urdu are widely spoken by migrant communities in many Middle Eastern countries, such as Saudi Arabia (where 20–25% of the population is South Asian), the United Arab Emirates (where 50–55% of the population is South Asian), and Qatar, all of which have large numbers of Pakistani, Bangladeshi and Indian immigrants.

Culture

The Middle East has recently become more prominent in hosting global sporting events owing to its wealth and its desire to diversify its economy. The South Asian diaspora is a major backer of cricket in the region.
======================================== |
[SOURCE: https://en.wikipedia.org/wiki/Wadi_Hunayn] | [TOKENS: 459] |
Wadi Hunayn

Wadi Hunayn (Arabic: وادي حنين) was a Palestinian Arab village in the Ramle Subdistrict, located 9 km west of Ramla. According to a local tradition, it was named after the Yemeni home of the Qada'a tribe, who settled there in the early Islamic period.

History

In 1881, it was noted as being named Wady Hanein, meaning "The valley of Hanein" (or Honein); the word denotes the cry of a she-camel to her colt. At the time of the 1922 census of Palestine, Wadi Hunayn had a population of 195 inhabitants, all Muslims, which increased to 278 Muslims and 2 Christians, living in 55 houses, by the 1931 census. In the 1945 statistics, an estimated 1,620 Muslims and 1,760 Jews lived in Wadi Hunayn and Ness Ziona together.

The village's main export was citrus, grown in orchards that were irrigated by the numerous water wells dug around the village. The residents worked in the orchards and sold their produce in the cities; they grew bananas and grains as well. During the 1940s, the village became a main source of basic supplies and meat for the nearby Jewish and Palestinian inhabitants due to its strategic location on the main road.

The village was depopulated during the 1947–48 Civil War in Mandatory Palestine. The majority of the inhabitants fled the village during January 1948, with the remaining population being transported into Jordan by the Haganah, who entered the village on 19 April 1948. Wadi Hunayn was mostly destroyed by the Haganah forces, who blew up all the buildings near the main road as well as the local mosque's minaret, since the village had been used as a launching point for Arab attacks on Jewish convoys to Jerusalem. Only a few of the original houses of the village remained, while the mosque (built in 1934) was converted into a synagogue by the neighboring Jewish population of Ness Ziona and renamed "Geulat Yisra'el" ("Israel's salvation").
======================================== |
[SOURCE: https://en.wikipedia.org/wiki/Bir_Salim] | [TOKENS: 226] |
Bir Salim

Bir Salim (Arabic: بئر سالم) was a Palestinian Arab village in the Ramle Subdistrict of Mandatory Palestine, located 4 km west of Ramla. It was depopulated during the 1947–48 Civil War in Mandatory Palestine on May 9, 1948, by the Givati Brigade.

History

In the 1945 statistics, the village had a population of 410 Muslims, while the total land area was 3,401 dunams, according to an official land and population survey. Of this, 742 dunams of village land were used for citrus and bananas, 510 dunams were irrigated or used for plantations, and 1,468 dunams were for cereals, while 681 dunams were classified as non-cultivable. According to a summary by the IDF Intelligence branch, Bir Salim was depopulated on 9 May 1948 after an attack on the orphanage. Netzer Sereni was established on village land in 1948.
======================================== |
[SOURCE: https://en.wikipedia.org/wiki/Central_London] | [TOKENS: 1122] |
Central London

Central London is the innermost part of London, in England, spanning the City of London and several boroughs. Over time, a number of definitions have been used to define the scope of Central London for statistics, urban planning and local government. Its characteristics are understood to include a high-density built environment, high land values, an elevated daytime population and a concentration of regionally, nationally and internationally significant organisations and facilities.

Road distances to London are traditionally measured from a central point at Charing Cross (in the City of Westminster), which is marked by the statue of King Charles I at the junction of the Strand, Whitehall and Cockspur Street, just south of Trafalgar Square.

Characteristics

The central area is distinguished, according to the Royal Commission, by the inclusion within its boundaries of Parliament and the Royal Palaces, the headquarters of Government, the Law Courts, the head offices of a very large number of commercial and industrial firms, as well as institutions of great influence in the intellectual life of the nation such as the British Museum, the National Gallery, the Tate Gallery, the University of London, the headquarters of the national ballet and opera, together with the headquarters of many national associations, the great professions, the trade unions, the trade associations and social service societies, as well as the Liberal Democrat Headquarters, Labour Party Headquarters and Conservative Campaign Headquarters, and shopping centres and centres of entertainment which attract people from the whole of Greater London and farther afield.

In many other respects the central area differs from areas farther out in London. The rateable value of the central area is exceptionally high. Its day population is very much larger than its night population. Its traffic problems reach an intensity not encountered anywhere else in the Metropolis or in any provincial city, and the enormous office developments which have taken place recently constitute a totally new phenomenon.

Definitions

Starting in 2004, the London Plan defined a 'Central Activities Zone' policy area, which as of 2008 comprised the City of London, most of Westminster and the inner parts of Camden, Islington, Hackney, Tower Hamlets, Southwark, Lambeth, Kensington & Chelsea and Wandsworth. It is described as "a unique cluster of vitally important activities including central government offices, headquarters and embassies, the largest concentration of London's financial and business services sector and the offices of trade, professional bodies, institutions, associations, communications, publishing, advertising and the media".

For strategic planning, since 2011 there has been a Central London sub-region comprising the boroughs of Camden, Islington, Kensington and Chelsea, Lambeth, Southwark, Westminster and the City of London. From 2004 to 2008, the London Plan included a sub-region called Central London comprising Camden, Islington, Kensington and Chelsea, Lambeth, Southwark, Wandsworth and Westminster; it had a 2001 population of 1,525,000. The sub-region was replaced in 2008 with a new structure which amalgamated inner and outer boroughs together. This was altered in 2011 when a new Central London sub-region was created, now including the City of London and excluding Wandsworth.
The 1901 Census defined Central London as the City of London and the metropolitan boroughs (subdivisions that existed from 1900 to 1965) of Bermondsey, Bethnal Green, Finsbury, Holborn, Shoreditch, Southwark, Stepney, St Marylebone and Westminster.

During the Herbert Commission and the subsequent passage of the London Government Bill, three unsuccessful attempts were made to define an area that would form a central London borough. The first two were detailed in the 1959 Memorandum of Evidence of the Greater London Group of the London School of Economics. "Scheme A" envisaged a central London borough, one of 25, consisting of the City of London, Westminster, Holborn, Finsbury and the inner parts of St Marylebone, St Pancras, Chelsea, Southwark and Lambeth. The boundary deviated from existing lines to include all central London railway stations, the Tower of London and the museums, such that it included small parts of Kensington, Shoreditch, Stepney and Bermondsey. It had an estimated population of 350,000 and occupied 7,000 acres (28 km2). "Scheme B" delineated central London, as one of 7 boroughs, including most of the City of London, the whole of Finsbury and Holborn, most of Westminster and Southwark, parts of St Pancras, St Marylebone and Paddington, and a small part of Kensington. The area had an estimated population of 400,000 and occupied 8,000 acres (32 km2).

During the passage of the London Government Bill an amendment was put forward to create a central borough corresponding to the definition used at the 1961 census. It consisted of the City of London; all of Westminster, Holborn and Finsbury; and the inner parts of Shoreditch, Stepney, Bermondsey, Southwark, Lambeth, Chelsea, Kensington, Paddington, St Marylebone and St Pancras. The population was estimated to be 270,000.
======================================== |
[SOURCE: https://en.wikipedia.org/wiki/Austin,_Texas] | [TOKENS: 19074] |
Austin, Texas

Austin (/ˈɔːstɪn/ AW-stin) is the capital city of the U.S. state of Texas. With a population of 961,855 at the 2020 census, it is the 13th-most populous city in the U.S., the fifth-most populous city in Texas, and the second-most populous U.S. state capital (after Phoenix, Arizona), while the Austin metro area, with an estimated 2.55 million residents, is the 25th-largest metropolitan area in the nation. Austin is the county seat and most populous city of Travis County, with portions extending into Hays and Williamson counties. Incorporated on December 27, 1839, it has been one of the fastest-growing large cities in the United States since 2010. Located in Central Texas within the greater Texas Hill Country, it is home to numerous lakes, rivers, and waterways, including Lady Bird Lake and Lake Travis on the Colorado River, Barton Springs, McKinney Falls, and Lake Walter E. Long. Evidence of human activity in the region dates back at least 11,200 years, with early habitation by Clovis peoples and later by American Indian groups such as the Tonkawa.

Austin and San Antonio are approximately 80 miles (129 km) apart, and both fall along the I-35 corridor. This combined metropolitan region of San Antonio–Austin has approximately 5 million people. Austin is the southernmost state capital in the contiguous United States and is considered a Gamma+ level global city as categorized by the Globalization and World Cities Research Network.

Residents of Austin are known as Austinites. They include a diverse mix of government employees, college students, musicians, high-tech workers, and blue-collar workers. The city's official slogan promotes Austin as "The Live Music Capital of the World", a reference to the city's many musicians and live music venues, as well as the long-running PBS TV concert series Austin City Limits. Austin is the site of South by Southwest (SXSW), an annual conglomeration of parallel film, interactive media, and music festivals. The city also adopted "Silicon Hills" as a nickname in the 1990s due to a rapid influx of technology and development companies. In recent years, some Austinites have adopted the unofficial slogan "Keep Austin Weird", which refers to the desire to protect small, unique, and local businesses from being overrun by large corporations. Ongoing rapid development and gentrification challenge its bohemian roots and fuel nostalgia for "Old Austin". Since the late 19th century, Austin has also been known as the "City of the Violet Crown", because of the colorful glow of light across the hills just after sunset.

Emerging from a strong economic focus on government and education, Austin has become a center for technology and business since the 1990s. The technology roots in Austin can be traced back to the 1960s, when the defense electronics contractor Tracor (now BAE Systems) began operations in the city in 1962. IBM followed in 1967, opening a facility to produce its Selectric typewriters. Texas Instruments set up in Austin two years later, and Motorola (now NXP Semiconductors) started semiconductor chip manufacturing in 1974.
A number of Fortune 500 companies have headquarters or regional offices in Austin, including 3M, Advanced Micro Devices (AMD), Agilent Technologies, Amazon, Apple, CrowdStrike, Dell, Expedia, Facebook (Meta), General Motors, Google, IBM, Intel, NXP Semiconductors, Oracle, Tesla, and Texas Instruments. With regard to education, Austin is home to the University of Texas at Austin, one of the largest universities in the U.S., with over 50,000 students. In 2021, Austin became home to Austin FC, the first (and currently only) major professional sports team in the city.

History

Austin, Travis County and Williamson County have been the site of human habitation since at least 9200 BC. The area's earliest known inhabitants lived during the late Pleistocene (Ice Age) and are linked to the Clovis culture around 9200 BC (over 11,200 years ago), based on evidence found throughout the area and documented at the much-studied Gault Site, midway between Georgetown and Fort Cavazos.

When settlers arrived from Europe, the Tonkawa tribe inhabited the area, and the Comanches and Lipan Apaches were also known to travel through it. Spanish colonists, including the Espinosa-Olivares-Aguirre expedition, traveled through the area, though few permanent settlements were created for some time. In 1730, three Catholic missions from East Texas were combined and reestablished as one mission on the south side of the Colorado River, in what is now Zilker Park, in Austin. The mission was in this area for only about seven months, then was moved to San Antonio de Béxar and split into three missions. During the 1830s, pioneers began to settle the area in central Austin along the Colorado River. Spanish forts were established in what are now Bastrop and San Marcos. Following Mexico's independence, new settlements were established in Central Texas. In 1835–1836, Texans fought and won independence from Mexico. Texas thus became an independent country with its own president, congress, and monetary system.

In 1839, the Texas Congress formed a commission to seek a site for the new capital of the Republic of Texas to replace Houston. While he was Vice President of Texas, Mirabeau B. Lamar had visited the area during a buffalo-hunting expedition between 1837 and 1838. He advised the commissioners to consider the area on the north bank of the Colorado River (near the present-day Congress Avenue Bridge), noting the area's hills, waterways, and pleasant surroundings. It was seen as a convenient crossroads for trade routes between Santa Fe and Galveston Bay, as well as routes between Northern Mexico and the Red River. In 1839, the site was chosen and briefly incorporated under the name "Waterloo". Shortly afterward, the name was changed to Austin in honor of Stephen F. Austin, the "Father of Texas" and the republic's first secretary of state.

The city grew throughout the 19th century and became a center for government and education with the construction of the Texas State Capitol and the University of Texas at Austin. Edwin Waller was picked by Lamar to survey the village and draft a plan laying out the new capital. The original site was narrowed to 640 acres (260 ha) that fronted the Colorado River between two creeks, Shoal Creek and Waller Creek, the latter of which was later named in his honor.
Waller and a team of surveyors developed Austin's first city plan, commonly known as the Waller Plan, dividing the site into a 14-block grid bisected by a broad north–south thoroughfare, Congress Avenue, running up from the river to Capitol Square, where the new Texas State Capitol was to be constructed. A temporary one-story capitol was erected on the corner of Colorado and 8th Streets. On August 1, 1839, the first auction, of 217 of the 306 total lots, was held. The area Waller designed and surveyed now forms the basis of downtown Austin.

In 1840, a series of conflicts between the Texas Rangers and the Comanches, known as the Council House Fight and the Battle of Plum Creek, pushed the Comanches westward, mostly ending conflicts in Central Texas. Settlement in the area began to expand quickly. Travis County was established in 1840, and the surrounding counties were mostly established within the next two decades.

Initially, the new capital thrived, but Lamar's political enemy, Sam Houston, used two Mexican army incursions into San Antonio as an excuse to move the government. Houston had fought bitterly against Lamar's decision to establish the capital in such a remote wilderness, and the men and women who traveled mainly from Houston to conduct government business were intensely disappointed as well. By 1840, the population had risen to 856, nearly half of whom fled Austin when Congress recessed; the resident African American population listed in January of that year was 176. Fear stemming from Austin's proximity to the Indians and to Mexico, which still considered Texas part of its territory, created an immense motive for Sam Houston, the first and third President of the Republic of Texas, to relocate the capital once again in 1841. Upon threats of Mexican troops in Texas, Houston raided the Land Office to transfer all official documents to Houston for safekeeping, in what was later known as the Archive War, but the people of Austin would not allow this unilateral decision to be carried out. The documents stayed, but the capital would temporarily move from Austin to Houston to Washington-on-the-Brazos. Without the governmental body, Austin's population declined to a low of only a few hundred people through the early 1840s. A vote by the fourth President of the Republic, Anson Jones, and the Congress that reconvened in Austin in 1845 settled the issue, keeping Austin the seat of government and annexing the Republic of Texas to the United States.

In 1860, 38% of Travis County residents were slaves. In 1861, with the outbreak of the American Civil War, voters in Austin and other Central Texas communities voted against secession. However, as the war progressed and fears of attack by Union forces increased, Austin contributed hundreds of men to the Confederate forces. The African American population of Austin swelled dramatically after the enforcement of the Emancipation Proclamation in Texas by Union General Gordon Granger at Galveston, an event commemorated as Juneteenth. Black communities such as Wheatville, Pleasant Hill, and Clarksville were established, with Clarksville being the oldest surviving freedomtown ‒ the original post-Civil War settlements founded by former African-American slaves ‒ west of the Mississippi River. In 1870, blacks made up 36.5% of Austin's population. The postwar period saw dramatic population and economic growth.
The opening of the Houston and Texas Central Railway (H&TC) in 1871 turned Austin into the major trading center for the region, with the ability to transport both cotton and cattle. The Missouri, Kansas & Texas (MKT) line followed close behind. Austin was also the terminus of the southernmost leg of the Chisholm Trail, and "drovers" pushed cattle north to the railroad. Cotton was one of the few crops produced locally for export, and a cotton gin engine was located downtown near the trains for "ginning" cotton of its seeds and turning the product into bales for shipment. However, as other new railroads were built through the region in the 1870s, Austin began to lose its primacy in trade to the surrounding communities. In addition, the areas east of Austin took over cattle and cotton production from Austin, especially in towns like Hutto and Taylor that sit over the blackland prairie, with its deep, rich soils for producing cotton and hay.

In September 1881, Austin public schools held their first classes. The same year, Tillotson Collegiate and Normal Institute (now part of Huston–Tillotson University) opened its doors. The University of Texas held its first classes in 1883, although classes had been held in the original wooden state capitol for four years before. During the 1880s, Austin gained new prominence as the state capitol building was completed in 1888 and claimed as the seventh largest building in the world. In the late 19th century, Austin expanded its city limits to more than three times its former area, and the first granite dam was built on the Colorado River to power a new street car line and the new "moon towers". The first dam washed away in a flood on April 7, 1900.

In the late 1920s and 1930s, Austin implemented the 1928 Austin city plan through a series of civic development and beautification projects that created much of the city's infrastructure and many of its parks. In addition, the state legislature established the Lower Colorado River Authority (LCRA) that, along with the city of Austin, created the system of dams along the Colorado River to form the Highland Lakes. These projects were enabled in large part because the Public Works Administration provided Austin with greater funding for municipal construction projects than other Texas cities.

During the early 20th century, a three-way system of social segregation emerged in Austin, with Anglos, African Americans and Mexicans being separated by custom or law in most aspects of life, including housing, health care, and education. Deed restrictions also played an important role in residential segregation: after 1935 most housing deeds prohibited African Americans (and sometimes other nonwhite groups) from using land. Combined with the system of segregated public services, racial segregation increased in Austin during the first half of the twentieth century, with African Americans and Mexicans experiencing high levels of discrimination and social marginalization.

In 1940, the destroyed granite dam on the Colorado River was finally replaced by a hollow concrete dam that formed Lake McDonald (now called Lake Austin) and which has withstood all floods since. In addition, the much larger Mansfield Dam was built by the LCRA upstream of Austin to form Lake Travis, a flood-control reservoir. In the early 20th century, the Texas Oil Boom took hold, creating tremendous economic opportunities in Southeast Texas and North Texas.
The growth generated by this boom largely passed Austin by at first, with the city slipping from fourth largest to tenth largest in Texas between 1880 and 1920. After a severe lull in economic growth during the Great Depression, Austin resumed its steady development, and following the mid-20th century it became established as one of Texas's major metropolitan centers. In 1970, the U.S. Census Bureau reported Austin's population as 14.5% Hispanic, 11.9% black, and 73.4% non-Hispanic white. In the late 20th century, Austin emerged as an important high-tech center for semiconductors and software, and the University of Texas at Austin emerged as a major university.

The 1970s saw Austin's emergence on the national music scene, with local artists such as Willie Nelson, Asleep at the Wheel, and Stevie Ray Vaughan and iconic music venues such as the Armadillo World Headquarters. Over time, the long-running television program Austin City Limits, its namesake Austin City Limits Festival, and the South by Southwest music festival solidified the city's place in the music industry.

Geography

Austin, the southernmost state capital of the contiguous 48 states, is located in Central Texas on the Colorado River. Austin is 146 miles (230 km) northwest of Houston, 182 miles (290 km) south of Dallas, and 74 miles (120 km) northeast of San Antonio. Austin occupies a total area of 305.1 square miles (790.1 km2), of which approximately 7.2 square miles (18.6 km2) is water.

Austin is situated at the foot of the Balcones Escarpment, on the Colorado River, with three artificial lakes within the city limits: Lady Bird Lake (formerly known as Town Lake), Lake Austin (both created by dams along the Colorado River), and Lake Walter E. Long, which is used in part for cooling water for the Decker Power Plant. Mansfield Dam and the foot of Lake Travis are located within the city's limits. Lady Bird Lake, Lake Austin, and Lake Travis are each on the Colorado River. The elevation of Austin varies from 425 feet (130 m) to approximately 1,000 feet (305 m) above sea level. Because the city straddles the Balcones Fault, much of its eastern part is flat, with heavy clay and loam soils, whereas the western part and western suburbs consist of rolling hills on the edge of the Texas Hill Country. Because the hills to the west are primarily limestone rock with a thin covering of topsoil, portions of the city are frequently subjected to flash floods from the runoff caused by thunderstorms. To help control this runoff and to generate hydroelectric power, the Lower Colorado River Authority operates a series of dams that form the Texas Highland Lakes. The lakes also provide venues for boating, swimming, and other forms of recreation within several parks on the lake shores.

Austin is located at the intersection of four major ecological regions and is consequently a temperate-to-hot green oasis with a highly variable climate having some characteristics of the desert, the tropics, and a wetter climate. The area is very diverse ecologically and biologically, and is home to a variety of animals and plants. Notably, the area is home to many types of wildflowers that blossom throughout the year but especially in the spring, including the popular bluebonnets, some planted by "Lady Bird" Johnson, wife of former President Lyndon B. Johnson. The soils of Austin range from shallow, gravelly clay loams over limestone in the western outskirts to deep, fine sandy loams, silty clay loams, silty clays or clays in the city's eastern part.
Some of the clays have pronounced shrink-swell properties and are difficult to work under most moisture conditions. Many of Austin's soils, especially the clay-rich types, are slightly to moderately alkaline and have free calcium carbonate.

Austin's skyline historically was modest, dominated by the Texas State Capitol and the University of Texas Main Building. However, since the 2000s many new high-rise towers have been constructed, and Austin is currently undergoing a skyscraper boom that includes recent construction of new office, hotel and residential buildings. Downtown's buildings are somewhat spread out, partly due to a set of zoning restrictions, known as the Capitol View Corridors, that preserve the view of the Texas State Capitol from various locations around Austin.

At night, parts of Austin are lit by "artificial moonlight" from moonlight towers built to illuminate the central part of the city. The 165-foot (50 m) moonlight towers were built in the late 19th century and are now recognized as historic landmarks. Only 15 of the 31 original innovative towers remain standing in Austin, but none remain in any of the other cities where they were installed. The towers are featured in the 1993 film Dazed and Confused.

In December 2023, amid rising home prices, the Austin City Council loosened the city's zoning rules to permit by-right development of triplexes on each lot and loosened restrictions on tiny homes.

The central business district of Austin is home to the tallest condo towers in the state, with The Independent (58 stories and 690 ft (210 m) tall) and The Austonian (topping out at 56 floors and 685 ft (209 m) tall). The Independent became the tallest all-residential building in the U.S. west of Chicago when it topped out in 2018. In 2005, then-Mayor Will Wynn set a goal of having 25,000 people living downtown by 2015. Although downtown's growth did not meet this goal, its residential population did surge from an estimated 5,000 in 2005 to 12,000 in 2015. The skyline has changed drastically in recent years, and the residential real estate market has remained relatively strong. As of December 2016, there were 31 high-rise projects either under construction, approved or planned to be completed in Austin's downtown core between 2017 and 2020. Sixteen of those were set to rise above 400 feet (120 m), including four above 600 feet and eight above 500 feet; an additional 15 towers were slated to stand between 300 and 399 feet tall.

Austin is located within the middle of a unique, narrow transitional zone between the dry deserts of the American Southwest and the lush, green, more humid regions of the American Southeast. Its climate, topography, and vegetation share characteristics of both. Officially, Austin has a humid subtropical climate (Cfa under the Köppen climate classification, Cfhl under the Trewartha climate classification). This climate is typified by long, very hot summers; short, mild winters; and warm to hot spring and fall seasons in between. Austin averages 34.32 inches (872 mm) of annual rainfall, distributed mostly evenly throughout the year, though spring and fall are the wettest seasons. Sunshine is common during all seasons, with 2,650 hours, or 60.3% of the possible total, of bright sunshine per year. Summers in Austin are very hot, with average July and August highs frequently reaching the high 90s °F (34–36 °C) or above.
Highs reach 90 °F (32 °C) on 123 days per year, of which 29 days reach 100 °F (38 °C); every year in the 1991–2020 period recorded at least one day of the latter. The average daytime high is 70 °F (21 °C) or warmer between March 1 and November 21, rising to 80 °F (27 °C) or warmer between April 14 and October 24, and reaching 90 °F (32 °C) or warmer between May 30 and September 18. The highest temperature ever recorded was 112 °F (44 °C), on September 5, 2000, and again on August 28, 2011. An uncommon characteristic of Austin's climate is its highly variable humidity, which fluctuates frequently depending on the shifting patterns of air flow and wind direction. It is common for a lengthy series of warm, dry, low-humidity days to be interrupted by very warm and humid days, and vice versa. Humidity rises with winds from the east or southeast, when the air drifts inland from the Gulf of Mexico, but decreases significantly with winds from the west or southwest, which bring air from the Chihuahuan Desert areas of West Texas or northern Mexico. Winters in Austin are mild, although occasional short-lived bursts of cold weather known as "Blue Northers" can occur. January is the coolest month, with an average daytime high of 62.5 °F (17 °C). The overnight low drops to or below freezing 12 times per year, and sinks below 45 °F (7 °C) during 76 evenings per year, mostly between mid-December and mid-February. The average first and last dates for a freeze are December 1 and February 15, giving Austin an average growing season of 288 days, and the coldest temperature of the year is normally about 24.2 °F (−4 °C) under the 1991–2020 climate normals, putting Austin in USDA zone 9a. Conversely, winter months also produce warm days on a regular basis. On average, 10 days in January reach or exceed 70 °F (21 °C) and one day reaches 80 °F (27 °C); during the 1991–2020 period, all Januarys had at least one day with a high of 70 °F (21 °C) or more, and most (60%) had at least one day with a high of 80 °F (27 °C) or more. The lowest temperature ever recorded in the city was −2 °F (−19 °C) on January 31, 1949. Roughly every two years, Austin experiences an ice storm that freezes roads over and cripples travel in the city for 24 to 48 hours. When Austin received 0.04 inches (1 mm) of ice on January 24, 2014, there were 278 vehicular collisions. Similarly, snowfall is rare in Austin. A snow event of 0.9 inches (2 cm) on February 4, 2011, caused more than 300 car crashes. The most recent major snow event occurred February 14–15, 2021, when 6.4 inches (16 cm) of snow fell at Austin's Camp Mabry, the largest two-day snowfall since records began being kept in 1948. Typical of Central Texas, severe weather is a threat that can strike Austin during any season, though it is most common during the spring. According to most classifications, Austin lies within the extreme southern periphery of Tornado Alley, although many sources place Austin outside of Tornado Alley altogether. Consequently, tornadoes strike Austin less frequently than areas farther to the north. However, severe thunderstorms, including supercells, can occur multiple times per year, bringing damaging winds, lightning, heavy rain, and occasional flash flooding to the city. The deadliest storm ever to strike within the city limits was the twin tornadoes storm of May 4, 1922, while the deadliest tornado outbreak to ever strike the metro area was the Central Texas tornado outbreak of May 27, 1997.
From October 2010 through September 2011, both major reporting stations in Austin, Camp Mabry and Bergstrom International, recorded their driest water year on record, receiving less than a third of normal precipitation. This was a result of La Niña conditions in the eastern Pacific Ocean, where water was significantly cooler than normal. David Brown, a regional official with the National Oceanic and Atmospheric Administration, explained that "these kinds of droughts will have effects that are even more extreme in the future, given a warming and drying regional climate." The drought, coupled with exceedingly high temperatures throughout the summer of 2011, caused many wildfires throughout Texas, including notably the Bastrop County Complex Fire in neighboring Bastrop, Texas. In the fall of 2018, Austin and surrounding areas received heavy rainfall and flash flooding following Hurricane Sergio. The Lower Colorado River Authority opened four floodgates of the Mansfield Dam after Lake Travis was recorded at 146% full at 704.3 feet (214.7 m). From October 22 to October 29, 2018, the City of Austin issued a mandatory citywide boil-water advisory after the Highland Lakes, home to the city's main water supply, became overwhelmed by unprecedented amounts of silt, dirt, and debris that had washed in from the Llano River. Austin Water, the city's water utility, has the capacity to process up to 300 million gallons of water per day; however, the elevated level of turbidity reduced output to only 105 million gallons per day. Since Austin residents consumed an average of 120 million gallons of water per day, the infrastructure was not able to keep up with demand. In February 2021, Winter Storm Uri dropped prolific amounts of snow across Texas and Oklahoma, including Austin. The Austin area received a total of 6.4 inches (16 cm) of snowfall between February 14 and 15, with snow cover persisting until February 20. This marked the longest period the area had had more than 1 inch (25 mm) of snow on the ground; the previous record was three days in January 1985. Lack of winterization in natural gas power plants, which supply a large amount of power to the Texas grid, and increased energy demand caused ERCOT and Austin Energy to enact rolling blackouts between February 15 and February 18 in order to avoid total grid collapse. Initial rolling blackouts were intended to last a maximum of 40 minutes, but insufficient energy production caused many to last much longer; at the peak of the blackouts, an estimated 40% of Austin Energy homes were without power. Starting on February 15, Austin Water received reports of pipe breaks, and water demand increased from 150 million gallons per day on February 15 to a peak of 260 million gallons per day on February 16. On the morning of February 17, demand increased to 330 million gallons per day; the resulting drop in water pressure caused the Austin area to enter a boil-water advisory, which lasted until water pressure was restored on February 23. Beginning January 30, 2023, the City of Austin experienced a winter freeze that left 170,000 Austin Energy customers without electricity or heat for several days. The slow pace of repairs and lack of public information from City officials frustrated many residents. City Manager Spencer Cronk apologized only a week after the freeze, as Austin City Council members were proposing to evaluate his employment.
On February 16, 2023, Cronk was fired by the Austin City Council over the city's response to the winter storm. Former City Manager Jesus Garcia was named Interim City Manager. The Austin Parks and Recreation Department received the Excellence in Aquatics award in 1999 and the Gold Medal Awards in 2004 from the National Recreation and Park Association. To strengthen the region's parks system, which spans more than 29,000 acres (11,736 ha), the Austin Parks Foundation was established in 1992 to develop and improve parks in and around Austin. APF works to fill the city's park funding gap by leveraging volunteers, philanthropists, park advocates, and strategic collaborations to develop, maintain, and enhance Austin's parks, trails, and green spaces. Lady Bird Lake (formerly Town Lake) is a river-like reservoir on the Colorado River. The lake is a popular recreational area for paddleboards, kayaks, canoes, dragon boats, and rowing shells. Austin's warm climate and the river's calm waters, nearly 6-mile (9.7 km) length, and straight course make it especially popular with crew teams and clubs. Other recreational attractions along the shores of the lake include swimming in Deep Eddy Pool, the oldest swimming pool in Texas, and Red Bud Isle, a small island formed by the 1900 collapse of the McDonald Dam that serves as a recreation area with a dog park and access to the lake for canoeing and fishing. The 10.1-mile (16.3 km) Ann and Roy Butler Hike and Bike Trail forms a complete circuit around the lake. A local nonprofit, The Trail Foundation, is the Trail's private steward; it has built amenities and infrastructure including trailheads, lakefront gathering areas, restrooms, and exercise equipment, and carries out ongoing trail-wide ecological restoration work. The Butler Trail loop was completed in 2014 with the public-private partnership 1-mile Boardwalk project. Along the shores of Lady Bird Lake is the 350-acre (140 ha) Zilker Park, which contains large open lawns, sports fields, cross country courses, historical markers, concession stands, and picnic areas. Zilker Park is also home to numerous attractions, including the Zilker Botanical Garden, the Umlauf Sculpture Garden, Zilker Hillside Theater, the Austin Nature & Science Center, and the Zilker Zephyr, a 12 in (305 mm) gauge miniature railway that carries passengers on a tour around the park. Auditorium Shores, an urban park along the lake, is home to the Palmer Auditorium, the Long Center for the Performing Arts, and an off-leash dog park on the water. Both Zilker Park and Auditorium Shores have a direct view of the Downtown skyline. The Barton Creek Greenbelt is a 7.25-mile (11.67 km) public green belt managed by the City of Austin's Park and Recreation Department. The Greenbelt, which begins at Zilker Park and stretches south-southwest to the Woods of Westlake subdivision, is characterized by large limestone cliffs, dense foliage, and shallow bodies of water. Popular activities include rock climbing, mountain biking, and hiking. Well-known natural swimming holes along the greenbelt include Twin Falls, Sculpture Falls, Gus Fruh Pool, and Campbell's Hole. During years of heavy rainfall, the water level of the creek rises high enough to allow swimming, cliff diving, kayaking, paddle boarding, and tubing. Austin is home to more than 50 public pools and swimming holes.
These include Deep Eddy Pool, Texas' oldest human-made swimming pool, and Barton Springs Pool, the nation's largest natural swimming pool in an urban area. Barton Springs Pool is spring-fed, while Deep Eddy is fed by a well. Both range in temperature from about 68.0 °F (20.0 °C) during the winter to about 71.6 °F (22.0 °C) during the summer. Hippie Hollow Park, a county park situated along Lake Travis, is the only officially sanctioned clothing-optional public park in Texas. Hamilton Pool Preserve is a natural pool that was created when the dome of an underground river collapsed due to massive erosion thousands of years ago. The pool, located about 23 miles (37 km) west of Austin, is a popular summer swimming spot for visitors and residents. Hamilton Pool Preserve consists of 232 acres (0.94 km2) of protected natural habitat featuring a jade green pool into which a 50-foot (15 m) waterfall flows. In May 2021, voters in the City of Austin reinstated a public camping ban, which covers downtown green spaces as well as trails and greenbelts, such as those along Barton Creek.[d] McKinney Falls State Park is a state park administered by the Texas Parks and Wildlife Department, located at the confluence of Onion Creek and Williamson Creek. The park includes several designated hiking trails and campsites with water and electric hookups. The namesake features of the park are the scenic upper and lower falls along Onion Creek. The Emma Long Metropolitan Park is a municipal park along the shores of Lake Austin, originally constructed by the Civilian Conservation Corps. The 284-acre (115 ha) Lady Bird Johnson Wildflower Center is a botanical garden and arboretum that features more than 800 species of native Texas plants in both garden and natural settings; the Wildflower Center is located 10 miles (16 km) southwest of Downtown in Circle C Ranch. Roy G. Guerrero Park is located along the Colorado River in East Riverside and contains miles of wooded trails, a sandy beach along the river, and a disc golf course. Covert Park, located atop Mount Bonnell, is a popular tourist destination overlooking Lake Austin and the Colorado River. The mount provides a vista for viewing the city of Austin, Lake Austin, and the surrounding hills. It was designated a Recorded Texas Historic Landmark in 1969, bearing Marker number 6473, and was listed on the National Register of Historic Places in 2015. The Louis René Barrera Indiangrass Wildlife Sanctuary, located on the north shore of Lake Walter E. Long, is a park managed by the Austin Parks and Recreation Department with the goal of restoring the Blackland Prairie. While not open to the public, it is accessible through guided tours. Demographics As of the 2020 census, Austin had a population of 961,855. The median age was 33.0 years; 19.4% of residents were under the age of 18 and 9.5% were 65 years of age or older. For every 100 females there were 102.0 males, and for every 100 females age 18 and over there were 101.4 males age 18 and over. 99.5% of residents lived in urban areas, while 0.5% lived in rural areas. There were 410,868 households in Austin, of which 24.9% had children under the age of 18 living in them. Of all households, 35.0% were married-couple households, 26.5% had a male householder with no spouse or partner present, and 29.0% had a female householder with no spouse or partner present. About 35.3% of all households consisted of a single person, and 6.1% had someone living alone who was 65 years of age or older.
There were 444,426 housing units, of which 7.6% were vacant. Among occupied housing units, 41.2% were owner-occupied and 58.8% were renter-occupied. The homeowner vacancy rate was 1.2% and the rental vacancy rate was 7.8%. The 2000 census found that the median income for a household in the city was US$42,689, and the median income for a family was $54,091. Males had a median income of $35,545 compared to $30,046 for females. The per capita income for the city was $24,163. About 9.1% of families and 14.4% of the population were below the poverty line, including 16.5% of those under age 18 and 8.7% of those age 65 or over. The median house price was $185,906 in 2009, having increased every year since 2004. The median value of an owner-occupied house was $318,400 in 2019, higher than the average American home value of $240,500. A 2014 University of Texas study stated that Austin was the only fast-growing U.S. city between 2000 and 2010 to experience a net loss of African Americans. As of 2014, the African American and non-Hispanic white shares of Austin's population were declining even though the total populations of both groups were increasing, because their rate of growth was slower than that of the city's Asian and Hispanic or Latino communities. Austin's non-Hispanic white population dropped below 50% in 2005. According to the 2010 United States census, the city of Austin had a population of 790,390. The racial and ethnic composition of Austin was 68.3% White (48.7% non-Hispanic White), 35.1% Hispanic or Latino (29.1% Mexican, 0.5% Puerto Rican, 0.4% Cuban, 5.1% Other), 8.1% African American, 6.3% Asian (1.9% Indian, 1.5% Chinese, 1.0% Vietnamese, 0.7% Korean, 0.3% Filipino, 0.2% Japanese, 0.8% Other), 0.9% American Indian, 0.1% Native Hawaiian and Other Pacific Islander, and 3.4% of two or more races. By 2020, its racial and ethnic composition was 47.1% non-Hispanic white, 32.5% Hispanic or Latino, 8.9% Asian, 6.9% Black or African American, and 3.9% multiracial. A 2014 Gallup survey estimated that 5.3% of residents in the Austin metropolitan area identified as lesbian, gay, bisexual, or transgender, the third-highest rate among U.S. metropolitan areas. According to Sperling's BestPlaces, 52.4% of Austin's population identified as religious in 2018. The majority of Austinites identified themselves as Christians; about 25.2% of the population claimed affiliation with the Catholic Church. The city's Catholic population is served by the Roman Catholic Diocese of Austin, headquartered at the Cathedral of Saint Mary. Nationwide, 23% of Americans identified as Catholic in 2016. Other significant Christian groups in Austin include Baptists (8.7%), followed by Methodists (4.3%), Latter-day Saints (1.5%), Episcopalians or Anglicans (1.0%), Lutherans (0.8%), Presbyterians (0.6%), Pentecostals (0.3%), and other Christians such as the Disciples of Christ and Eastern Orthodox Church (7.1%). The second-largest religion Austinites identify with is Islam (1.7%); roughly 0.8% of Americans nationwide claimed affiliation with the Islamic faith. The dominant branch of Islam is Sunni Islam. Established in 1977, the Islamic Center of Greater Austin is the largest mosque in the city, and the community is affiliated with the Islamic Society of North America. Judaism forms less than 0.1% of the religious demographic in Austin, with Orthodox, Reform, and Conservative congregations present in the community.
The same study found that Eastern faiths, including Buddhism and Hinduism, made up 0.9% of the city's religious population. Several Hindu temples exist in the Austin metropolitan area, the most notable being Radha Madhav Dham. In addition to those religious groups, Austin is also home to an active secular humanist community, which hosts nationwide television shows and charity work. As of 2019, there were 2,255 individuals experiencing homelessness in Travis County. Of those, 1,169 were sheltered and 1,086 were unsheltered. In September 2019, the Austin City Council approved $62.7 million for programs aimed at homelessness, including housing displacement prevention, crisis mitigation, and affordable housing; the city council also earmarked $500,000 for crisis services and encampment cleanups. In June 2019, following Martin v. Boise, a federal court ruling on people experiencing homelessness sleeping in public, the Austin City Council lifted a 25-year-old ban on camping, sitting, or lying down in public unless doing so causes an obstruction. The resolution also included the approval of a new housing-focused shelter in South Austin. In early October 2019, Texas Governor Greg Abbott sent a letter to Mayor Steve Adler threatening to deploy state resources to combat the camping ban repeal. On October 17, 2019, the City Council revised the camping ordinance, imposing increased restrictions on sidewalk camping. In November 2019, the State of Texas opened a temporary homeless encampment on a former vehicle storage yard owned by the Texas Department of Transportation. In May 2021, the camping ban was reinstated after a ballot proposition was approved by 57% of voters. The reinstated ban introduced penalties for camping, sitting, or lying down on a public sidewalk or sleeping outdoors in or near Downtown Austin or the area around the University of Texas campus; the ordinance also prohibits solicitation at certain locations. Economy The Greater Austin metropolitan statistical area had a gross domestic product (GDP) of $248 billion in 2023. Austin is considered a major center for high tech. Thousands of graduates each year from the engineering and computer science programs at the University of Texas at Austin provide a steady source of employees who help fuel Austin's technology and defense industry sectors. As a result of the high concentration of high-tech companies in the region, Austin was strongly affected by the dot-com boom in the late 1990s and the subsequent bust. Austin's largest employers include the Austin Independent School District, the City of Austin, Dell Technologies, the U.S. federal government, NXP Semiconductors, IBM, St. David's Healthcare Partnership, Seton Family of Hospitals, the State of Texas, Texas State University, and the University of Texas at Austin. Other high-tech companies with operations in Austin include 3M, Apple (its largest campus outside Cupertino), Amazon, AMD, Apartment Ratings, Applied Materials, Arm, Bigcommerce, BioWare, Blizzard Entertainment, Buffalo Technology, Cirrus Logic, Cisco Systems, Cloudflare, CrowdStrike, Dropbox, eBay, Electronic Arts, Flextronics, Facebook, Google, Hewlett-Packard, Hoover's, HomeAway, HostGator, Indeed, Intel Corporation, Meta, National Instruments, Nintendo, Nvidia, Oracle, PayPal, Polycom, Qualcomm, Rackspace, RetailMeNot, Rooster Teeth, Samsung Group, Silicon Labs, Spansion, TikTok, United Devices, VMware, X (formerly Twitter), Xerox, and Zoho Corporation.
The proliferation of technology companies has led to the region's nickname, "Silicon Hills", and spurred development that greatly expanded the city. Tesla, Inc., an electric vehicle and electric power company, has its corporate headquarters in Austin at Gigafactory Texas, a large vehicle assembly plant that employs over 20,000 people. The company expects to eventually have a staff of 60,000 in the Austin area as production ramps up. Austin is also emerging as a hub for pharmaceutical and biotechnology companies; the city is home to about 85 of them. In 2004, the city was ranked by the Milken Institute as the No. 12 biotech and life science center in the United States, and in 2018, CBRE Group ranked Austin as the No. 3 emerging life sciences cluster. Companies such as Hospira, Pharmaceutical Product Development, and ArthroCare Corporation are located there. Whole Foods Market, an international grocery store chain specializing in fresh and packaged food products, was founded and is headquartered in Austin. Other companies based in Austin include NXP Semiconductors, GoodPop, Temple-Inland, Sweet Leaf Tea Company, Keller Williams Realty, National Western Life, GSD&M, Dimensional Fund Advisors, Golfsmith, Forestar Group, EZCorp, Outdoor Voices, Tito's Vodka, Speak Social, and YETI. In 2018, Austin metro-area companies saw a total of $1.33 billion invested, and Austin's venture capital investments accounted for more than 60 percent of Texas' total. As of 2024, the unemployment rate in Austin was 3.4% and the median household income was $83,830. Infrastructure In 2009, 72.7% of Austin (city) commuters drove alone, with other mode shares being 10.4% carpooling, 6% working remotely, 5% using transit, 2.3% walking, and 1% bicycling. In 2016, the American Community Survey estimated modal shares for Austin (city) commuters of 73.5% for driving alone, 9.6% for carpooling, 3.6% for riding transit, 2% for walking, and 1.5% for cycling. The city of Austin has a lower than average percentage of households without a car. In 2015, 6.9 percent of Austin households lacked a car, a figure that decreased slightly to 6 percent in 2016; the national average was 8.7 percent in 2016. Austin averaged 1.65 cars per household in 2016, compared to a national average of 1.8. Central Austin lies between two major north–south freeways: I-35 to the east and the Mopac Expressway (Loop 1) to the west. US 183 runs from northwest to southeast, and SH 71 crosses the southern part of the city from east to west, completing a rough "box" around central and north-central Austin. Austin is the largest city in the United States to be served by only one Interstate Highway. US 290 enters Austin from the east and merges into I-35. Its highway designation continues south on I-35 and then becomes part of SH 71, continuing to the west. US 290 splits from Highway 71 in southwest Austin at an interchange known as "The Y." SH 71 continues to Brady, and Highway 290 continues west to an interchange with I-10 near Junction. I-35 continues south through San Antonio to Laredo on the Mexican border, and is the highway link to the Dallas–Fort Worth metroplex in North Texas. There are two links to Houston (US 290 and SH 71/I-10). US 183 leads northwest of Austin toward Lampasas. In the mid-1980s, construction was completed on Loop 360, a scenic highway that curves through the hill country from near the 71/Mopac interchange in the south to near the US 183/Mopac interchange in the north.
The iconic Pennybacker Bridge, also known as the "360 Bridge," crosses Lake Austin to connect the northern and southern portions of Loop 360. SH 130 is a bypass route designed to relieve traffic congestion, starting from I-35 just north of Georgetown and running along a parallel route to the east, where it bypasses Round Rock, Austin, San Marcos, and New Braunfels before ending at I-10 east of Seguin, from which drivers can travel 30 miles (48 km) west to return to I-35 in San Antonio. The first segment, located east of Austin–Bergstrom International Airport at Austin's southeast corner on SH 71, opened in November 2006. Highway 130 runs concurrently with SH 45 from Pflugerville on the north until it reaches US 183 well south of Austin, at which point SH 45 continues west. The entire route of SH 130 is now complete; the final leg opened on November 1, 2012. The highway is noted for its high speed limits: the 41-mile (66 km) section of the toll road between Mustang Ridge and Seguin has a posted speed limit of 85 mph (137 km/h), the highest posted speed limit in the United States. SH 45 runs east–west from just south of US 183 in Cedar Park to SH 130 inside Pflugerville (just east of Round Rock). A tolled extension of State Highway Loop 1 was also created. A new southeast leg of SH 45 has recently been completed, running from US 183 and the south end of Segment 5 of TX-130 south of Austin due west to I-35 at the FM 1327/Creedmoor Road exit between the south end of Austin and Buda. The 183A Toll Road opened in March 2007, providing a tolled alternative to US 183 through the cities of Leander and Cedar Park. An upgrade of East US 290 from US 183 to the town of Manor is currently under construction; officially, the tollway will be dubbed Tollway 290, with "Manor Expressway" as its nickname. Despite the overwhelming initial opposition to the toll road concept when it was first announced, all three toll roads have exceeded revenue projections. Austin's primary airport is Austin–Bergstrom International Airport (ABIA) (IATA code AUS), located 5 miles (8 km) southeast of the city. The airport is on the site of the former Bergstrom Air Force Base, which was closed in 1993 as part of the Base Realignment and Closure process. Robert Mueller Municipal Airport was Austin's main airport until 1999, when ABIA took over that role and the old airport was shut down. Austin Executive Airport, along with several smaller airports outside the city center, serves general aviation traffic. Amtrak's Austin Station is located in west downtown and is served by the Texas Eagle, which runs daily between Chicago and San Antonio, continuing on to Los Angeles several times a week. Railway segments between Austin and San Antonio have been evaluated for a proposed regional passenger rail project called "Lone Star Rail"; however, failure to come to an agreement with the track's current owner, Union Pacific Railroad, ended the project in 2016. Greyhound Lines operates the current Austin Bus Station at the Eastside Bus Plaza. Grupo Senda's Turimex Internacional service runs buses from its station in East Austin to Nuevo Laredo and on to many destinations in Mexico. Megabus offers daily service to San Antonio, Dallas/Fort Worth, and Houston.
The Capital Metropolitan Transportation Authority (CapMetro) provides public transportation to the city, primarily with its CapMetro Bus local bus service and the CapMetro Express express bus system, as well as a bus rapid transit service, CapMetro Rapid. CapMetro opened a 32-mile (51 km) hybrid rail system, CapMetro Rail, in 2010. The system consists of a single line serving downtown Austin, the neighborhoods of East Austin, North Central Austin, and Northwest Austin, plus the suburb of Leander. Since it began operations in 1985, CapMetro has proposed adding light rail services to its network. Despite support from the City Council, voters rejected light rail proposals in 2000 and 2014. However, in 2020, voters approved CapMetro's US$10 billion transit expansion plan, Project Connect, by a comfortable margin. The plan proposes two new light rail lines, an additional bus rapid transit line (which could be converted to light rail in the future), a second commuter rail line, several new MetroRapid lines, more MetroExpress routes, and a number of other infrastructure, technology, and service expansion projects. The Capital Area Rural Transportation System connects Austin with outlying suburbs and surrounding rural areas. Austin is served by several ride-sharing companies, including Uber and Lyft. On May 9, 2016, Uber and Lyft voluntarily ceased operations in Austin in response to a city ordinance that required ride-sharing drivers to undergo fingerprint checks, have their vehicles labeled, and not pick up or drop off in certain city lanes. Uber and Lyft resumed service in the summer of 2017. The city was previously served by Fasten, until it ceased all operations in the city in March 2018. Austin is also served by Electric Cab of North America's six-passenger electric cabs, which operate on a flexible route from the Kramer MetroRail Station to Domain Northside, and from the downtown CapMetro Rail station and MetroRapid stops to locations between the Austin Convention Center and the Whole Foods near Sixth and Bowie streets. Carsharing service Zipcar operates in Austin, and, until 2019, the city was also served by Car2Go, which kept its North American headquarters in the city even after pulling out. The city's bike advocacy organization is Bike Austin. BikeTexas, a state-level advocacy organization, also has its main office in Austin. Bicycles are a popular transportation choice among students, faculty, and staff at the University of Texas; according to a survey done at the university, 57% of commuters bike to campus. The City of Austin and CapMetro jointly own a bike-sharing service, CapMetro Bike, which is available in and around downtown. The service is a franchise of BCycle, a national bike sharing network owned by Trek Bicycle, and is operated by the local nonprofit organization Bike Share of Austin; until 2020, the service was known as Austin BCycle. In 2018, Lime began offering dockless bikes, which do not need to be docked at a designated station, and scooter-sharing companies Lime and Bird debuted rentable electric scooters in Austin. The city briefly banned the scooters, which had begun operating before the city could implement a permitting system, until it completed development of its "dockless mobility" permitting process on May 1, 2018. Dockless electric scooters and bikes are banned from Austin city parks and the Ann and Roy Butler Trail and Boardwalk.
For the 2018 Austin City Limits Music Festival, the city of Austin offered a designated parking area for dockless bikes and scooters. In November 2023, Austin became the largest city in the US to abolish parking mandates. It did so to encourage walking, biking, and public transit use, as well as to lower the cost of housing and increase the number of housing units that can be built in the city; Portland and Minneapolis have taken the same action. Culture "Keep Austin Weird" has been a local motto for years, featured on bumper stickers and T-shirts. This motto has not only been used in promoting Austin's eccentricity and diversity, but is also meant to bolster support of local independent businesses. According to the 2010 book Weird City, the phrase was coined by a local Austin Community College librarian, Red Wassenich, and his wife, Karen Pavelka, who were concerned about Austin's "rapid descent into commercialism and overdevelopment." The slogan has been interpreted many ways since its inception, but remains an important symbol for many Austinites who wish to voice concerns over rapid growth and development. Austin has a long history of vocal citizen resistance to development projects perceived to degrade the environment, or to threaten the natural and cultural landscapes. According to the Nielsen Company, adults in Austin read and contribute to blogs more than those in any other U.S. metropolitan area and have the highest Internet usage in all of Texas. In 2013, Austin was the most active city on Reddit, having the largest number of views per capita. South Congress is a shopping district stretching down South Congress Avenue from Downtown. This area is home to coffee shops, eccentric stores, restaurants, food trucks, trailers, and festivals, and prides itself on "Keeping Austin Weird," especially amid development in the surrounding areas. Many Austinites attribute its enduring popularity to the unobstructed view of the Texas State Capitol. The Rainey Street Historic District is a neighborhood in Downtown Austin formerly consisting of bungalow-style homes built in the early 20th century. Since the early 2010s, the former working-class residential street has turned into a popular nightlife district. Many of the historic homes have been renovated into hotels, condominiums, bars, and restaurants, many of which feature large porches and outdoor yards for patrons. The Rainey Street district is also home to the Emma S. Barrientos Mexican American Cultural Center. Austin has been part of the UNESCO Creative Cities Network under the Media Arts category. "Old Austin" is an adage often used by nostalgic natives. The term refers to a time when the city was smaller and more bohemian, with a considerably lower cost of living, and better known for its lack of traffic, hipsters, and urban sprawl. It is often employed by longtime residents expressing displeasure at the rapidly changing culture or nostalgia for the city's earlier character. The growth and popularity of Austin can be seen in the expansive development taking place in its downtown landscape. This growth can have a negative impact on longtime small businesses that cannot keep up with the expenses associated with gentrification and the rising cost of real estate. A former Austin musician, Dale Watson, described his move away from Austin: "I just really feel the city has sold itself.
Just because you're going to get $45 million for a company to come to town – if it's not in the best interest of the town, I don't think they should do it. This city was never about money. It was about quality of life." Though much is changing rapidly in Austin, businesses such as Thundercloud Subs are thought by many to maintain the classic Austin business culture unique to the city's history; as Diana Burgess stated, "I definitely appreciate that they haven't raised their prices a ton or made things super fancy. I think it speaks to that original Old Austin vibe. A lot of us that grew up here really appreciate that." Aaron Franklin, owner of Franklin Barbecue, credited the Old Austin cultural mindset and community support with the success of his barbecue restaurant and the long lines that have supported his business since he started it out of a food trailer in 2009. The O. Henry House Museum hosts the annual O. Henry Pun-Off, a pun contest where the successful contestants exhibit wit akin to that of the author William Sydney Porter. Other annual events include Eeyore's Birthday Party, Spamarama, the Austin Pride Festival & Parade in August, the Austin Reggae Festival in April, the Kite Festival, the Texas Craft Brewers Festival in September, Art City Austin in April, the East Austin Studio Tour in November, and Carnaval Brasileiro in February. Sixth Street features annual festivals such as the Pecan Street Festival and a Halloween night celebration. The three-day Austin City Limits Music Festival has been held in Zilker Park every year since 2002. Every year around the end of March and the beginning of April, Austin is home to "Texas Relay Weekend." Austin's Zilker Park Tree is a Christmas display made of lights strung from the top of a moonlight tower in Zilker Park. The Zilker Tree is lit in December along with the "Trail of Lights," an Austin Christmas tradition. The Trail of Lights has been canceled four times: in 2001 and 2002 following the September 11 attacks, and in 2010 and 2011 due to budget shortfalls, before returning for the 2012 holiday season. From 1962 to 1998, the Austin Aqua Festival, or "Aqua Fest," took place on the shores of Town Lake (now known as Lady Bird Lake). Originally conceived as a summer tourism draw, the multi-day event evolved from water-themed activities into a broader civic festival as it grew and community interest broadened. Eventually attendance and finances dwindled as larger music and summer festivals grew in prominence. Notable Austin cuisine includes Texas barbecue and Tex-Mex; Franklin Barbecue has sold out of brisket every day since its establishment. Breakfast tacos and queso are popular food items in the city; Austin is sometimes called the "home of the breakfast taco." Kolaches are a common pastry in Austin bakeries due to the large Czech and German immigrant population in Texas. The Oasis Restaurant, the largest outdoor restaurant in Texas, promotes itself as the "Sunset Capital of Texas" with its terraced views looking west over Lake Travis. Birdie's, a counter-service restaurant and wine bar that opened in 2021, was Food & Wine's 2023 Restaurant of the Year. P. Terry's, an Austin-based fast food burger chain, has a loyal following among Austinites. Other Austin-based chain restaurants include Amy's Ice Creams, Chuy's, DoubleDave's Pizzaworks, and Schlotzsky's. The Chili's at 45th and Lamar has been the subject of internet memes since 2011.
Austin is also home to a large number of food trucks, with 1,256 operating in 2016; the city has the second-largest number of food trucks per capita in the United States. Austin's first food hall, "Fareground," features a number of Austin-based food vendors and a bar on the ground level and courtyard of One Congress Plaza. Austin has a large craft beer scene, with over 50 microbreweries in the metro area. Notable Austin-area breweries include Jester King Brewery, Live Oak Brewing Company, and Real Ale Brewing Company. Reflecting its official slogan, "The Live Music Capital of the World," Austin has a vibrant live music scene with more music venues per capita than any other U.S. city. Austin's music revolves around the many nightclubs on 6th Street and an annual music/film/interactive festival known as South by Southwest (SXSW). The concentration of restaurants, bars, and music venues in the city's downtown core is a major contributor to Austin's live music scene, as the ZIP code encompassing the downtown entertainment district hosts the most bars and alcohol-serving establishments in the U.S. The longest-running concert music program on American television, Austin City Limits, is recorded at ACL Live at The Moody Theater, located on the bottom floor of the 478-foot (146 m) W Austin hotel. Austin City Limits and C3 Presents produce the Austin City Limits Music Festival, an annual music and art festival held at Zilker Park in Austin. Other music events include the Urban Music Festival, Fun Fun Fun Fest, Wobeon Music Festival, Chaos In Tejas, the Seismic Music Festival at the Concourse Project, and the Old Settler's Music Festival. Austin Lyric Opera performs multiple operas each year (including the 2007 opening of Philip Glass's Waiting for the Barbarians, based on the novel by University of Texas at Austin alumnus J. M. Coetzee). The Austin Symphony Orchestra presents a range of classical, pop, and family performances and is led by music director and conductor Peter Bay. The Austin Baroque Orchestra and La Follia Austin Baroque ensembles both give historically informed performances of Baroque music, and the Texas Early Music Project regularly performs music from the Medieval and Renaissance eras as well as the Baroque. Austin hosts several film festivals, including the SXSW (South by Southwest) Film Festival and the Austin Film Festival, which screens international films. The movie theater chain Alamo Drafthouse Cinema was founded in Austin in 1997; its South Lamar location is home to the annual week-long Fantastic Fest film festival. In 2004, the city ranked first on MovieMaker magazine's annual list of the top ten cities in which to live and make movies. Austin has been the location for a number of motion pictures, partly due to the influence of The University of Texas at Austin Department of Radio-Television-Film. Films produced in Austin include The Texas Chain Saw Massacre (1974), Songwriter (1984), Man of the House, Secondhand Lions, Texas Chainsaw Massacre 2, Nadine, Waking Life, Spy Kids, The Faculty, Dazed and Confused, The Guards Themselves, Wild Texas Wind, Office Space, The Life of David Gale, Miss Congeniality, Doubting Thomas, Slacker, Idiocracy, Death Proof, The New Guy, Hope Floats, The Alamo, Blank Check, The Wendall Baker Story, School of Rock, A Slipping-Down Life, A Scanner Darkly, Saturday Morning Massacre, as well as the Coen brothers' True Grit, Grindhouse, Machete, How to Eat Fried Worms, Bandslam, and Lazer Team.
To draw future film projects to the area, the Austin Film Society has converted several airplane hangars at the former Mueller Airport into the filmmaking center Austin Studios. Projects that have used facilities at Austin Studios include music videos by The Flaming Lips and feature films such as 25th Hour and Sin City. Austin also hosted the MTV series The Real World: Austin in 2005, and season 4 of the AMC show Fear the Walking Dead was filmed in various locations around Austin in 2018. The film review websites Spill.com and Ain't It Cool News are based in Austin. Rooster Teeth Productions, creator of popular web series such as Red vs. Blue, RWBY, and Camp Camp, was also located in Austin. Austin has a strong theater culture, with dozens of itinerant and resident companies producing a variety of work. Church of the Friendly Ghost (COTFG), a volunteer-run arts organization supporting creative expression and counter-culture community, helped many experimental programs get their start in Austin.[citation needed] The city also has live performance theater venues such as the Zachary Scott Theatre Center, Vortex Repertory Company, Salvage Vanguard Theater, Rude Mechanicals' the Off Center, Austin Playhouse, Scottish Rite Children's Theater, Hyde Park Theatre, the Blue Theater, The Hideout Theatre, and Esther's Follies. The Victory Grill was a renowned venue on the Chitlin' Circuit. Public art and performances in the parks and on bridges are popular. Austin hosts the Fuse Box Festival, featuring theater artists, each April. The Paramount Theatre, which opened in downtown Austin in 1915, contributes to Austin's theater and film culture, showing classic films throughout the summer and hosting regional premieres for films such as Miss Congeniality. The Zilker Park Summer Musical is a long-running outdoor musical. The Long Center for the Performing Arts is a 2,300-seat theater built partly with materials reused from the old Lester E. Palmer Auditorium. Ballet Austin is among the fifteen largest ballet academies in the country. Each year, Ballet Austin's 20-member professional company performs ballets from a wide variety of choreographers, including its artistic director, Stephen Mills. The city is also home to the Ballet East Dance Company, a modern dance ensemble, and the Tapestry Dance Company, which performs a variety of dance genres. The Austin improvisational theatre scene has several theaters: ColdTowne Theater, The Hideout Theatre, and The Fallout Theater. Austin also hosts the Out of Bounds Comedy Festival, which draws comedic artists in all disciplines to the city. The Austin Public Library is operated by the City of Austin and consists of the Central Library on César Chávez Street, the Austin History Center, 20 branches, and the Recycled Reads bookstore and upcycling facility. The library system also has mobile libraries – bookmobile buses and a human-powered trike and trailer called "unbound: sin fronteras." The Central Library, an anchor of the redevelopment of the former Seaholm Power Plant site and the Shoal Creek Walk, opened on October 28, 2017. The six-story Central Library contains a living rooftop garden, reading porches, an indoor reading room, a bicycle parking station, large indoor and outdoor event spaces, a gift shop, an art gallery, a café, and a "technology petting zoo" where visitors can play with next-generation gadgets such as 3D printers. In 2018, Time magazine included the Austin Central Library on its list of the "World's Greatest Places."
Museums in Austin include the Texas Science and Natural History Museum, the George Washington Carver Museum and Cultural Center, Thinkery, the Blanton Museum of Art (reopened in 2006), the Bob Bullock Texas State History Museum across the street (which opened in 2000), The Contemporary Austin, the Elisabet Ney Museum, the Women and Their Work gallery, and the galleries at the Harry Ransom Center. The Texas State Capitol itself is also a major tourist attraction. The Driskill Hotel, built in 1886, once owned by George W. Littlefield, and located at 6th and Brazos streets, was finished just before the completion of the Capitol building. Sixth Street is a musical hub for the city. The Enchanted Forest, a multi-acre outdoor music, art, and performance art space in South Austin, hosts events such as fire-dancing and circus-like acts. Austin is also home to the Lyndon Baines Johnson Library and Museum, which houses documents and artifacts related to the Johnson administration, including LBJ's limousine and a re-creation of the Oval Office. Locally produced art is featured at the South Austin Museum of Popular Culture. The Mexic-Arte Museum is a Mexican and Mexican-American art museum founded in 1983. Austin is also home to the O. Henry House Museum, which served as the residence of O. Henry from 1893 to 1895. Farmers' markets are popular attractions, providing a variety of locally grown and often organic foods. Austin also has many odd statues and landmarks, such as the Stevie Ray Vaughan Memorial, the Willie Nelson statue, the Mangia dinosaur, the Loca Maria lady at Taco Xpress, the Hyde Park Gym's giant flexed arm, and Daniel Johnston's Hi, How Are You? mural of Jeremiah the Innocent frog. The Ann W. Richards Congress Avenue Bridge houses the world's largest urban population of Mexican free-tailed bats. Starting in March, up to 1.5 million bats take up residence inside the bridge's expansion and contraction zones as well as in long horizontal grooves running the length of the bridge's underside, an environment ideally suited for raising their young. Every evening around sunset, the bats emerge in search of insects, a mass departure sometimes visible on weather radar. Watching the bats emerge is popular with locals and tourists, drawing more than 100,000 viewers per year. The bats migrate to Mexico each winter. The Austin Zoo, located in unincorporated western Travis County, is a rescue zoo that provides sanctuary to displaced animals from a variety of situations, including those involving neglect. The HOPE Outdoor Gallery was a public, three-story outdoor street art project located on Baylor Street in the Clarksville neighborhood. The gallery, which consisted of the foundations of a failed multifamily development, was a constantly evolving canvas of graffiti and murals. Also known as "Castle Hill" or simply "Graffiti Park," the site on Baylor Street was closed to the public in early January 2019 but remained intact, behind a fence and with an armed guard, as of mid-March 2019. The gallery plans to build a new art park at Carson Creek Ranch in Southeast Austin. Many Austinites support the athletic programs of the University of Texas at Austin, known as the Texas Longhorns. During the 2005–2006 academic year, the Longhorns football team was named the NCAA Division I FBS National Football Champion and the Longhorns baseball team won the College World Series. The football team plays its home games in the state's second-largest sports stadium, Darrell K Royal–Texas Memorial Stadium, seating over 101,000 fans.
Baseball games are played at UFCU Disch–Falk Field. Austin was long the most populous city in the United States without a major-league professional sports team, a distinction that ended in 2021 with Austin FC's entry into MLS. Minor-league professional sports came to Austin in 1996, when the Austin Ice Bats began playing at the Travis County Expo Center; they were later replaced by the AHL's Texas Stars. Austin has hosted a number of other professional teams, including the Austin Spurs of the NBA G League, the Austin Aztex of the United Soccer League, the Austin Outlaws in WFA football, and the Austin Aces in WTT tennis. Natural features like the bicycle-friendly Texas Hill Country and a generally mild climate make Austin the home of several endurance and multi-sport races and communities. The Capitol 10,000 is the largest 10K race in Texas and approximately the fifth-largest in the United States. The Austin Marathon has been run in the city every year since 1992. Additionally, the city is home to the largest 5-mile race in Texas, known as the Turkey Trot because it is run annually on Thanksgiving. Started in 1991 by Thundercloud Subs, a local sandwich chain that still sponsors the event, it has grown to host over 20,000 runners, with all proceeds donated to Caritas of Austin, a local charity. The Austin-founded American Swimming Association hosts several swim races around town. Austin is also the hometown of several cycling groups and the disgraced cyclist Lance Armstrong. Combining these three disciplines is a growing crop of triathlons, including the Capital of Texas Triathlon held every Memorial Day on and around Lady Bird Lake, Auditorium Shores, and downtown Austin. Austin is home to the Circuit of the Americas (COTA), a Grade 1 Fédération Internationale de l'Automobile specification 3.427-mile (5.515 km) motor racing facility that hosts the Formula One United States Grand Prix. The State of Texas has pledged $25 million in public funds annually for 10 years to pay the sanctioning fees for the race. Built at an estimated cost of $250 to $300 million, the circuit opened in 2012 and is located just east of Austin–Bergstrom International Airport. The circuit also hosts the EchoPark Automotive Grand Prix NASCAR race in late March each year. The summer of 2014 marked the inaugural season for the World TeamTennis team Austin Aces, formerly the Orange County Breakers of Southern California. The Austin Aces played their matches at the Cedar Park Center northwest of Austin, and featured former professionals Andy Roddick and Marion Bartoli, as well as then-current WTA tour player Vera Zvonareva; the team left after the 2015 season. In 2017, Precourt Sports Ventures announced a plan to move the Columbus Crew SC soccer franchise from Columbus, Ohio, to Austin. Precourt negotiated an agreement with the City of Austin to build a $200 million privately funded stadium on public land at 10414 McKalla Place, following initial interest in Butler Shores Metropolitan Park and Roy G. Guerrero Colorado River Park. As part of an arrangement with the league, operational rights of Columbus Crew SC were sold in late 2018, and Austin FC was announced as Major League Soccer's 27th franchise on January 15, 2019, with the expansion team starting play in 2021. The Austin Country Club is a private golf club located along the shores of the Colorado River, right next to the Pennybacker Bridge.
Founded in 1899, the club moved to its third and present site in 1984, which features a challenging layout designed by noted course architect Pete Dye. Austin is set to host the BLAST.TV Austin Major, the 22nd Counter-Strike Major esports tournament, from June 9 to 22, 2025. Government Austin is administered by an 11-member city council (10 council members elected by geographic district plus a mayor elected at large). The council is accompanied by a hired city manager under the council-manager system of municipal governance. Council and mayoral elections are non-partisan, with a runoff in case there is no majority winner. A referendum approved by voters on November 6, 2012, changed the council composition from six council members plus a mayor, all elected at large, to the current "10+1" district system. Supporters maintained that the district system would increase participation from all areas of the city, especially those that had lacked representation on the City Council. November 2014 marked the first election under the new system. The federal government had forced San Antonio and Dallas to abandon at-large systems before 1987; however, in a 1984 lawsuit a court found no racially discriminatory pattern in Austin and upheld the city's at-large system. In five elections between 1973 and 1994, Austin voters rejected single-member districts. Austin formerly operated its city hall at 128 West 8th Street. Antoine Predock and Cotera Kolar Negrete & Reed Architects designed a new city hall building, which was intended to reflect what The Dallas Morning News referred to as a "crazy-quilt vitality, that embraces everything from country music to environmental protests and high-tech swagger." The new city hall, built from recycled materials, has solar panels in its garage. The city hall, at 301 West Second Street, opened in November 2004. Kirk Watson is the current mayor of Austin, having assumed the office for a second non-consecutive term on January 6, 2023. In the 2012 elections, City Council elections were moved from May to November, and council members were given staggered term limits. In 2022, Proposition D moved the mayoral term to coincide with presidential election years, so Watson would serve only two years, unlike his predecessor Steve Adler. Law enforcement in Austin is provided by the Austin Police Department, except for state government buildings, which are patrolled by the Texas Department of Public Safety. The University of Texas Police patrol the University of Texas campus. Fire protection within the city limits is provided by the Austin Fire Department, while the surrounding county is divided into twelve geographical areas known as emergency services districts, which are covered by separate regional fire departments. Emergency medical services are provided for the whole county by Austin-Travis County Emergency Medical Services. In 2003, the city adopted a resolution against the USA PATRIOT Act that reaffirmed constitutionally guaranteed rights. As of 2025, though council positions are officially nonpartisan, all elected members of the city council are Democrats. As of 2019, Austin is one of the safest large cities in the United States; that year, the FBI named Austin the 11th safest city on a list of 22 American cities with a population above 400,000. FBI statistics show that overall violent and property crimes dropped in Austin in 2015, but increased in suburban areas of the city.
One such southeastern suburb, Del Valle, reported eight homicides within two months in 2016. According to 2016 APD crime statistics, the 78723 ZIP code area had the most violent crime, with 6 murders, 25 rapes, and 81 robberies. The city had 39 homicides in 2016, the most since 1997. In 1884 and 1885, one of the earliest recorded serial killings in the United States occurred in Austin, in which eight people were murdered by a suspect known as the "Servant Girl Annihilator". One of the first American mass school shooting incidents took place in Austin on August 1, 1966, when Charles Whitman shot 43 people from the top of the University of Texas tower, killing 13. The University of Texas tower shooting led to the formation of the SWAT team of the Austin Police Department. In 1991, four teenage girls were murdered in a yogurt shop by serial killer Robert Eugene Brashers. A police officer responded to reports of a fire at the I Can't Believe It's Yogurt! store on Anderson Lane and discovered the girls' bodies in a back room. The murders were unsolved until 2025. In 2010, Andrew Joseph Stack III deliberately crashed his Piper PA-28 Cherokee into Echelon 1, a building in which the Internal Revenue Service leased space housing 190 employees. The resulting explosion killed one IRS employee and injured 13 others, partially damaged the building, and cost the IRS a total of $38.6 million (see 2010 Austin suicide attack). A series of bombings occurred in Austin in March 2018. Over the course of 20 days, five package bombs exploded, killing two people and injuring another five. The suspect, 23-year-old Mark Anthony Conditt of Pflugerville, Texas, blew himself up inside his vehicle after he was pulled over by police on March 21, also injuring a police officer. In 2020, Austin was the target of a cyberattack by the Russian group Berserk Bear, possibly related to the U.S. federal government data breach earlier that year. On April 18, 2021, a shooting occurred at the Arboretum Oaks Apartments near The Arboretum shopping center, in which a former Travis County Sheriff's Office detective killed his ex-wife, his adopted daughter, and his daughter's boyfriend. The suspect, who had previously been charged with child sexual assault, was arrested in Manor after a 20-hour manhunt. A mass shooting took place in the early morning of June 12, 2021, on Sixth Street, resulting in 14 people injured and one dead. The man killed was believed to be an innocent bystander who was struck as he was standing outside a bar. A 19-year-old suspect, De'ondre "Dre" White, was formally charged and arrested in Killeen nearly two weeks after the shooting. In 2024, Zacharia Doar, a 23-year-old Palestinian-American man, was attacked and stabbed in the chest on West 26th Street in West Campus after returning from a rally in support of Palestinian human rights. The assailant, 36-year-old Bert James Baker, was arrested at the scene and charged with aggravated assault with a deadly weapon. On August 11, 2025, a mass shooting occurred outside a Target store in North Austin, killing three people. The suspect fled the store by hijacking a car belonging to one of the victims, later crashed the stolen vehicle, and then stole another vehicle from a car dealership. He was taken into custody after being subdued with a Taser and was later identified as a 32-year-old man.
Austin is the county seat of Travis County and hosts the Heman Marion Sweatt Travis County Courthouse downtown, as well as other county government offices. The Texas Department of Transportation operates the Austin District Office in Austin. The Texas Department of Criminal Justice (TDCJ) operates the Austin I and Austin II district parole offices in Austin. The United States Postal Service operates several post offices in Austin. Former Governor Rick Perry once referred to Austin as a "blueberry in the tomato soup": a Democratic city in a Republican state. However, several other large Texas cities now also vote Democratic and elect Democratic mayors. After the most recent redistricting, Austin is currently divided between the 10th, 35th and 37th Congressional districts. A controversial turning point in the political history of the Austin area was the 2003 Texas redistricting. Before then, Austin had been entirely or almost entirely within the borders of a single congressional district (what was then the 10th District) for over a century. Opponents characterized the resulting district layout as excessively partisan gerrymandering, and the plan was challenged in court by Democratic and minority activists; the Supreme Court of the United States, however, has never struck down a redistricting plan for being excessively partisan. The plan was upheld by a three-judge federal panel in late 2003, and on June 28, 2006, the matter was largely settled when the Supreme Court, in a 7–2 decision, upheld the entire congressional redistricting plan with the exception of a Hispanic-majority district in southwest Texas. This affected Austin's districting, as U.S. Rep. Lloyd Doggett's district (U.S. Congressional District 25) was found to be insufficiently compact to compensate for the reduced minority influence in the southwest district; it was redrawn so that it took in most of southeastern Travis County and several counties to its south and east. The distinguishing movement of Austin politics has been the environmental movement, which spawned the parallel neighborhood movement, then the more recent conservationist movement (as typified by the Hill Country Conservancy); much of it has since matured into the current ongoing debate about saving and creating an Austin "sense of place" and preserving the Austin quality of life. In 2012, Austin became one of only a few cities in Texas to ban the sale and use of plastic bags. However, the ban ended in 2018 due to a court ruling that held all bag bans in the state to contravene the Texas Solid Waste Disposal Act. In 2016, Austin became the first Gold designee of the SolSmart program, a national program from the U.S. Department of Energy that recognizes local governments for enacting solar-friendly measures at the local level. Education According to the 2015–2019 Census estimates, 51.7% of Austin residents aged 25 and over have earned at least a bachelor's degree, compared with the national figure of 32.1%; 19.4% hold a graduate or professional degree, compared with the national figure of 12.4%. Austin is home to the University of Texas at Austin, the flagship institution of the University of Texas System with over 40,000 undergraduate students and 11,000 graduate students. Other institutions of higher learning in Austin include St.
Edward's University, Huston–Tillotson University, Austin Community College, Concordia University, the Seminary of the Southwest, Texas Health and Science University, University of St. Augustine for Health Sciences, Austin Graduate School of Theology, Austin Presbyterian Theological Seminary, Virginia College's Austin Campus, The Art Institute of Austin, Southern Careers Institute of Austin, Austin Conservatory, and branch campuses of Case Western Reserve University and Park University. The University of Texas System and Texas State University System are headquartered in downtown Austin. Approximately half of the city by area is served by the Austin Independent School District. This district includes notable schools such as the magnet Liberal Arts and Science Academy High School of Austin, Texas (LASA), which by test scores has consistently ranked within the top thirty high schools in the nation, as well as The Ann Richards School for Young Women Leaders. The remaining portion of Austin is served by adjoining school districts, including Round Rock ISD, Pflugerville ISD, Leander ISD, Manor ISD, Del Valle ISD, Lake Travis ISD, Hays ISD, and Eanes ISD. The Austin metropolitan area includes 27 charter school systems. University of Texas Elementary School is in the city. There are two state-operated schools for children with disabilities: the Texas School for the Blind and Visually Impaired and the Texas School for the Deaf. During the segregation era, the Texas Blind, Deaf, and Orphan School served black deaf and blind students. The Austin metropolitan area includes over 100 private schools. Austin is also home to several child developmental institutions. Media Austin's main daily newspaper is the Austin American-Statesman. The Austin Chronicle is Austin's alternative weekly, while The Daily Texan is the student newspaper of the University of Texas at Austin. Austin's business newspaper is the weekly Austin Business Journal. The Austin Monitor is an online outlet that specializes in insider reporting on City Hall, the Travis County Commissioners Court, AISD, and other related local civics beats. The Monitor is backed by the nonprofit Capital of Texas Media Foundation. Austin also has numerous smaller special interest or sub-regional newspapers such as the Oak Hill Gazette, Westlake Picayune, Hill Country News, Round Rock Leader, NOKOA, and The Villager, among others. Texas Monthly, a major regional magazine, is also headquartered in Austin. The Texas Observer, a muckraking biweekly political magazine, has been based in Austin for over five decades. The weekly Community Impact Newspaper, published by John Garrett, a former publisher of the Austin Business Journal, has five regional editions; each is delivered to every house and business within certain ZIP codes, and all of its news is specific to those ZIP codes. Another statewide publication based in Austin is The Texas Tribune, an online publication focused on Texas politics. The Tribune is "user-supported" through donations, a business model similar to public radio. Its editor is Evan Smith, a former editor of Texas Monthly, who co-founded the Tribune, a nonprofit, non-partisan public media organization, with Austin venture capitalist John Thornton and veteran journalist Ross Ramsey. Commercial radio stations include KASE-FM (country), KVET (sports), KVET-FM (country), KKMJ-FM (adult contemporary), KLBJ (talk), KLBJ-FM (classic rock), KJFK (variety hits), KFMK (contemporary Christian), KOKE-FM (progressive country) and KPEZ (rhythmic contemporary).
KUT-FM is the leading public radio station in Texas and produces the majority of its content locally. KOOP (FM) is a volunteer-run radio station with more than 60 locally produced programs. KVRX is the student-run college radio station of the University of Texas at Austin, with a focus on local and non-mainstream music and community programming. Other listener-supported stations include KAZI (urban contemporary) and KMFA (classical). Network television stations (affiliations in parentheses) include KTBC (Fox O&O), KVUE (ABC), KXAN (NBC), KEYE-TV (CBS), KLRU (PBS), KNVA (The CW), KBVO (MyNetworkTV), and KAKW (Univision O&O). KLRU produces several award-winning local programs, such as Austin City Limits. Despite Austin's explosive growth, it is only a medium-sized television market (currently 38th), because the suburban and rural areas are not much larger than the city proper; additionally, the proximity of San Antonio truncates the potential market area. Alex Jones, journalist, radio show host and filmmaker, produces his talk show The Alex Jones Show in Austin, which broadcasts nationally on more than 60 AM and FM radio stations in the United States, WWCR Radio shortwave, and XM Radio channel 166. International relations Austin has two types of relationships with other cities: sister cities and friendship cities, the latter formed by covenants between two city leaders. The cities of Belo Horizonte, Brazil and Elche, Spain were formerly sister cities, but after an Austin City Council vote in 1991, their statuses were deactivated. Orlu, South East, Nigeria became a sister city in 2000, but was later downgraded to emeritus status.
======================================== |
[SOURCE: https://en.wikipedia.org/wiki/Internet#cite_note-Environmental_Impact_Assessment_Review-192] | [TOKENS: 9291] |
Contents Internet The Internet (or internet)[a] is the global system of interconnected computer networks that uses the Internet protocol suite (TCP/IP)[b] to communicate between networks and devices. It is a network of networks that comprises private, public, academic, business, and government networks of local to global scope, linked by electronic, wireless, and optical networking technologies. The Internet carries a vast range of information services and resources, such as the interlinked hypertext documents and applications of the World Wide Web (WWW), electronic mail, discussion groups, internet telephony, streaming media and file sharing. Most traditional communication media, including telephone, radio, television, paper mail, newspapers, and print publishing, have been transformed by the Internet, giving rise to new media such as email, online music, digital newspapers, news aggregators, and audio and video streaming websites. The Internet has enabled and accelerated new forms of personal interaction through instant messaging, Internet forums, and social networking services. Online shopping has also grown to occupy a significant market across industries, enabling firms to extend brick and mortar presences to serve larger markets. Business-to-business and financial services on the Internet affect supply chains across entire industries. The origins of the Internet date back to research that enabled the time-sharing of computer resources, the development of packet switching, and the design of computer networks for data communication. The set of communication protocols to enable internetworking on the Internet arose from research and development commissioned in the 1970s by the Defense Advanced Research Projects Agency (DARPA) of the United States Department of Defense in collaboration with universities and researchers across the United States and in the United Kingdom and France. The Internet has no single centralized governance in either technological implementation or policies for access and usage. Each constituent network sets its own policies. The overarching definitions of the two principal name spaces on the Internet, the Internet Protocol address (IP address) space and the Domain Name System (DNS), are directed by a maintainer organization, the Internet Corporation for Assigned Names and Numbers (ICANN). The technical underpinning and standardization of the core protocols is an activity of the non-profit Internet Engineering Task Force (IETF). Terminology The word internetted was used as early as 1849, meaning interconnected or interwoven. The word Internet was used in 1945 by the United States War Department in a radio operator's manual, and in 1974 as the shorthand form of Internetwork. Today, the term Internet most commonly refers to the global system of interconnected computer networks, though it may also refer to any group of smaller networks. The word Internet may be capitalized as a proper noun, although this is becoming less common. This reflects the tendency in English to capitalize new terms and move them to lowercase as they become familiar. The word is sometimes still capitalized to distinguish the global internet from smaller networks, though many publications, including the AP Stylebook since 2016, recommend the lowercase form in every case. In 2016, the Oxford English Dictionary found that, based on a study of around 2.5 billion printed and online sources, "Internet" was capitalized in 54% of cases. 
The terms Internet and World Wide Web are often used interchangeably; it is common to speak of "going on the Internet" when using a web browser to view web pages. However, the World Wide Web, or the Web, is only one of a large number of Internet services. It is the global collection of web pages, documents and other web resources linked by hyperlinks and URLs. History In the 1960s, computer scientists began developing systems for time-sharing of computer resources. J. C. R. Licklider proposed the idea of a universal network while working at Bolt Beranek & Newman and, later, leading the Information Processing Techniques Office at the Advanced Research Projects Agency (ARPA) of the United States Department of Defense. Research into packet switching,[c] one of the fundamental Internet technologies, started in the work of Paul Baran at RAND in the early 1960s and, independently, Donald Davies at the United Kingdom's National Physical Laboratory in 1965. After the Symposium on Operating Systems Principles in 1967, packet switching from the proposed NPL network was incorporated into the design of the ARPANET, an experimental resource sharing network proposed by ARPA. ARPANET development began with two network nodes which were interconnected between the University of California, Los Angeles and the Stanford Research Institute on 29 October 1969. The third site was at the University of California, Santa Barbara, followed by the University of Utah. By the end of 1971, 15 sites were connected to the young ARPANET. Thereafter, the ARPANET gradually developed into a decentralized communications network, connecting remote centers and military bases in the United States. Other user networks and research networks, such as the Merit Network and CYCLADES, were developed in the late 1960s and early 1970s. Early international collaborations for the ARPANET were rare. Connections were made in 1973 to Norway (NORSAR and, later, NDRE) and to Peter Kirstein's research group at University College London, which provided a gateway to British academic networks, the first internetwork for resource sharing. ARPA projects, the International Network Working Group and commercial initiatives led to the development of various protocols and standards by which multiple separate networks could become a single network, or a network of networks. In 1974, Vint Cerf at Stanford University and Bob Kahn at DARPA published a proposal for "A Protocol for Packet Network Intercommunication". Cerf and his graduate students used the term internet as a shorthand for internetwork in RFC 675. The Internet Experiment Notes and later RFCs repeated this use. The work of Louis Pouzin and Robert Metcalfe had important influences on the resulting TCP/IP design. National PTTs and commercial providers developed the X.25 standard and deployed it on public data networks. The ARPANET initially served as a backbone for the interconnection of regional academic and military networks in the United States to enable resource sharing. Access to the ARPANET was expanded in 1981 when the National Science Foundation (NSF) funded the Computer Science Network (CSNET). In 1982, the Internet Protocol Suite (TCP/IP) was standardized, which facilitated worldwide proliferation of interconnected networks. TCP/IP network access expanded again in 1986 when the National Science Foundation Network (NSFNet) provided access to supercomputer sites in the United States for researchers, first at speeds of 56 kbit/s and later at 1.5 Mbit/s and 45 Mbit/s. 
The NSFNet expanded into academic and research organizations in Europe, Australia, New Zealand and Japan in 1988–89. Although other network protocols such as UUCP and PTT public data networks had global reach well before this time, this marked the beginning of the Internet as an intercontinental network. Commercial Internet service providers emerged in 1989 in the United States and Australia. The ARPANET was decommissioned in 1990. The linking of commercial networks and enterprises by the early 1990s, as well as the advent of the World Wide Web, marked the beginning of the transition to the modern Internet. Steady advances in semiconductor technology and optical networking created new economic opportunities for commercial involvement in the expansion of the network in its core and for delivering services to the public. In mid-1989, MCI Mail and Compuserve established connections to the Internet, delivering email and public access products to the half million users of the Internet. Just months later, on 1 January 1990, PSInet launched an alternate Internet backbone for commercial use, one of the networks that would be added to the core of the commercial Internet of later years. In March 1990, the first high-speed T1 (1.5 Mbit/s) link between the NSFNET and Europe was installed between Cornell University and CERN, allowing much more robust communications than was possible with satellites. Later in 1990, Tim Berners-Lee began writing WorldWideWeb, the first web browser, after two years of lobbying CERN management. By Christmas 1990, Berners-Lee had built all the tools necessary for a working Web: the HyperText Transfer Protocol (HTTP) 0.9, the HyperText Markup Language (HTML), the first Web browser (which was also an HTML editor and could access Usenet newsgroups and FTP files), the first HTTP server software (later known as CERN httpd), the first web server, and the first Web pages that described the project itself. In 1991, the Commercial Internet eXchange was founded, allowing PSInet to communicate with the other commercial networks CERFnet and Alternet. Stanford Federal Credit Union was the first financial institution to offer online Internet banking services to all of its members, in October 1994. In 1996, OP Financial Group, also a cooperative bank, became the second online bank in the world and the first in Europe. By 1995, the Internet was fully commercialized in the U.S. when the NSFNet was decommissioned, removing the last restrictions on use of the Internet to carry commercial traffic. As technology advanced and commercial opportunities fueled reciprocal growth, the volume of Internet traffic began to exhibit growth characteristics similar to those of MOS transistor scaling, exemplified by Moore's law: doubling every 18 months. This growth, formalized as Edholm's law, was catalyzed by advances in MOS technology, laser light wave systems, and noise performance. Since 1995, the Internet has tremendously impacted culture and commerce, including the rise of near-instant communication by email, instant messaging, telephony (Voice over Internet Protocol or VoIP), two-way interactive video calls, and the World Wide Web. Increasing amounts of data are transmitted at higher and higher speeds over fiber optic networks operating at 1 Gbit/s, 10 Gbit/s, or more. The Internet continues to grow, driven by ever-greater amounts of online information and knowledge, commerce, entertainment and social networking services.
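As a quick arithmetic check on the growth claim above, a quantity that doubles every 18 months grows by a factor of 2^(12/18) ≈ 1.59 per year, i.e. roughly 59% annually. A minimal sketch (the doubling period is the document's own figure; the script itself is purely illustrative):

```python
# Convert "doubling every 18 months" (Edholm's law, as cited above)
# into an equivalent annual growth rate.
doubling_period_months = 18

annual_factor = 2 ** (12 / doubling_period_months)
print(f"annual growth factor: {annual_factor:.3f}")         # ~1.587, i.e. ~59%/year
print(f"growth over a decade: {annual_factor ** 10:.0f}x")  # ~102x
```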
During the late 1990s, it was estimated that traffic on the public Internet grew by 100 percent per year, while the mean annual growth in the number of Internet users was thought to be between 20% and 50%. This growth is often attributed to the lack of central administration, which allows organic growth of the network, as well as the non-proprietary nature of the Internet protocols, which encourages vendor interoperability and prevents any one company from exerting too much control over the network. In November 2006, the Internet was included on USA Today's list of the New Seven Wonders. As of 31 March 2011, the estimated total number of Internet users was 2.095 billion (30% of world population). It is estimated that in 1993 the Internet carried only 1% of the information flowing through two-way telecommunication. By 2000 this figure had grown to 51%, and by 2007 more than 97% of all telecommunicated information was carried over the Internet. Modern smartphones can access the Internet through cellular carrier networks, and internet usage by mobile and tablet devices exceeded desktop usage worldwide for the first time in October 2016. As of 2018, 80% of the world's population were covered by a 4G network. The International Telecommunication Union (ITU) estimated that, by the end of 2017, 48% of individual users regularly connected to the Internet, up from 34% in 2012. Mobile Internet connectivity has played an important role in expanding access in recent years, especially in Asia and the Pacific and in Africa. The number of unique mobile cellular subscriptions increased from 3.9 billion in 2012 to 4.8 billion in 2016, two-thirds of the world's population, with more than half of subscriptions located in Asia and the Pacific. The limits that users face on accessing information via mobile applications coincide with a broader process of fragmentation of the Internet. Fragmentation restricts access to media content and tends to affect the poorest users the most. One solution, zero-rating, is the practice of Internet service providers allowing users to access specific content or applications without cost. Social impact The Internet has enabled new forms of social interaction, activities, and social associations, giving rise to the scholarly study of the sociology of the Internet. Between 2000 and 2009, the number of Internet users globally rose from 390 million to 1.9 billion. By 2010, 22% of the world's population had access to computers, with 1 billion Google searches made every day, 300 million Internet users reading blogs, and 2 billion videos viewed daily on YouTube. In 2014, the number of Internet users worldwide surpassed 3 billion, or 44 percent of the world population, but two-thirds came from the richest countries, with 78 percent of Europeans using the Internet, followed by 57 percent of the Americas. However, by 2018, Asia alone accounted for 51% of all Internet users, with 2.2 billion out of the 4.3 billion Internet users in the world. China's Internet users surpassed a major milestone in 2018, when the country's Internet regulatory authority, China Internet Network Information Centre, announced that China had 802 million users. China was followed by India, with some 700 million users, and the United States third with 275 million users. However, in terms of penetration, in 2022 China had a 70% penetration rate, compared to India's 60% and the United States' 90%.
In 2022, 54% of the world's Internet users were based in Asia, 14% in Europe, 7% in North America, 10% in Latin America and the Caribbean, 11% in Africa, 4% in the Middle East and 1% in Oceania. In 2019, Kuwait, Qatar, the Falkland Islands, Bermuda and Iceland had the highest Internet penetration by the number of users, with 93% or more of the population having access. As of 2022, it was estimated that 5.4 billion people use the Internet, more than two-thirds of the world's population. Early computer systems were limited to the characters in the American Standard Code for Information Interchange (ASCII), a subset of the Latin alphabet. After English (27%), the most requested languages on the World Wide Web are Chinese (25%), Spanish (8%), Japanese (5%), Portuguese and German (4% each), Arabic, French and Russian (3% each), and Korean (2%). Modern character encoding standards, such as Unicode, allow for development and communication in the world's widely used languages. However, some glitches such as mojibake (incorrect display of some languages' characters) still remain. Several neologisms refer to Internet users: Netizen (as in "citizen of the net") refers to those actively involved in improving online communities, the Internet in general, or surrounding political affairs and rights such as free speech; Internaut refers to operators or technically highly capable users of the Internet; and digital citizen refers to a person using the Internet in order to engage in society, politics, and government participation. The Internet allows greater flexibility in working hours and location, especially with the spread of unmetered high-speed connections. The Internet can be accessed almost anywhere by numerous means, including through mobile Internet devices. Mobile phones, datacards, handheld game consoles and cellular routers allow users to connect to the Internet wirelessly.[citation needed] Educational material at all levels from pre-school (e.g. CBeebies) to post-doctoral (e.g. scholarly literature through Google Scholar) is available on websites. The internet has facilitated the development of virtual universities and distance education, enabling both formal and informal education. The Internet allows researchers to conduct research remotely via virtual laboratories, with profound changes in the reach and generalizability of findings as well as in communication between scientists and in the publication of results. By the late 2010s the Internet had been described as "the main source of scientific information" for the majority of the global North population. Wikis have also been used in the academic community for sharing and dissemination of information across institutional and international boundaries. In those settings, they have been found useful for collaboration on grant writing, strategic planning, departmental documentation, and committee work. The United States Patent and Trademark Office uses a wiki to allow the public to collaborate on finding prior art relevant to examination of pending patent applications. Queens, New York has used a wiki to allow citizens to collaborate on the design and planning of a local park. The English Wikipedia has the largest user base among wikis on the World Wide Web and ranks in the top 10 among all sites in terms of traffic.
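The mojibake glitch mentioned above can be reproduced in a few lines: it occurs when bytes written under one character encoding are decoded under another. A minimal sketch in Python, assuming a UTF-8 sender and a receiver that wrongly assumes the Windows-1252 encoding:

```python
# Mojibake: UTF-8 bytes misinterpreted as Windows-1252 on the receiving end.
original = "日本語"                 # "Japanese language", in Japanese
encoded = original.encode("utf-8")  # the bytes actually transmitted
garbled = encoded.decode("cp1252")  # wrong codec applied by the receiver
print(garbled)                      # corrupted characters instead of the original

# The underlying bytes are intact, so reversing the mistake recovers the text.
recovered = garbled.encode("cp1252").decode("utf-8")
print(recovered == original)        # True
```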
The Internet has been a major outlet for leisure activity since its inception, with entertaining social experiments such as MUDs and MOOs being conducted on university servers, and humor-related Usenet groups receiving much traffic. Many Internet forums have sections devoted to games and funny videos. Another area of leisure activity on the Internet is multiplayer gaming. This form of recreation creates communities, where people of all ages and origins enjoy the fast-paced world of multiplayer games. These range from MMORPGs to first-person shooters, from role-playing video games to online gambling. While online gaming has been around since the 1970s, modern modes of online gaming began with subscription services such as GameSpy and MPlayer. Streaming media is the real-time delivery of digital media for immediate consumption or enjoyment by end users. Streaming companies (such as Netflix, Disney+, Amazon's Prime Video, Mubi, Hulu, and Apple TV+) now dominate the entertainment industry, eclipsing traditional broadcasters. Audio streamers such as Spotify and Apple Music also have significant market share in the audio entertainment market. Video sharing websites are also a major factor in the entertainment ecosystem. YouTube was founded on 15 February 2005 and is now the leading website for free streaming video, with more than two billion users. It uses a web player to stream and show video files. YouTube users watch hundreds of millions, and upload hundreds of thousands, of videos daily. Other video sharing websites include Vimeo, Instagram and TikTok.[citation needed] Although many governments have attempted to restrict both Internet pornography and online gambling, this has generally failed to stop their widespread popularity. A number of advertising-funded ostensible video sharing websites known as "tube sites" have been created to host shared pornographic video content. Due to laws requiring the documentation of the origin of pornography, these websites now largely operate in conjunction with pornographic movie studios and their own independent creator networks, acting as de facto video streaming services. Major players in this field include the market leader Aylo, the operator of PornHub and numerous other branded sites, as well as other independent operators such as xHamster and Xvideos. As of 2023, Internet traffic to pornographic video sites rivalled that of mainstream video streaming and sharing services. Remote work is facilitated by tools such as groupware, virtual private networks, conference calling, videotelephony, and VoIP, so that work may be performed from any location, such as the worker's home.[citation needed] The spread of low-cost Internet access in developing countries has opened up new possibilities for peer-to-peer charities, which allow individuals to contribute small amounts to charitable projects for other individuals. Websites such as DonorsChoose and GlobalGiving allow small-scale donors to direct funds to individual projects of their choice. A popular twist on Internet-based philanthropy is the use of peer-to-peer lending for charitable purposes. Kiva pioneered this concept in 2005, offering the first web-based service to publish individual loan profiles for funding. The low cost and nearly instantaneous sharing of ideas, knowledge, and skills have made collaborative work dramatically easier, with the help of collaborative software, which allows groups to form easily, communicate cheaply, and share ideas.
One product of this collaborative culture is the free software movement, which has produced, among other things, Linux, Mozilla Firefox, and OpenOffice.org (later forked into LibreOffice).[citation needed] Content management systems allow collaborating teams to work on shared sets of documents simultaneously without accidentally destroying each other's work.[citation needed] The internet also allows for cloud computing, virtual private networks, remote desktops, and remote work.[citation needed] The online disinhibition effect describes the tendency of many individuals to behave more stridently or offensively online than they would in person. A significant number of feminist women have been the target of various forms of harassment, ranging from insults and hate speech to, in extreme cases, rape and death threats, in response to posts they have made on social media. Social media companies have been criticized in the past for not doing enough to aid victims of online abuse. Children also face dangers online, such as cyberbullying and approaches by sexual predators, who sometimes pose as children themselves. Due to naivety, they may also post personal information about themselves online, which could put them or their families at risk unless warned not to do so. Many parents choose to enable Internet filtering or supervise their children's online activities in an attempt to protect their children from pornography or violent content on the Internet. The most popular social networking services commonly forbid users under the age of 13. However, these policies can be circumvented by registering an account with a false birth date, and a significant number of children aged under 13 join such sites.[citation needed] Social networking services for younger children, which claim to provide better levels of protection for children, also exist. Internet usage has been correlated with users' loneliness. Lonely people tend to use the Internet as an outlet for their feelings and to share their stories with others, such as in the "I am lonely will anyone speak to me" thread.[citation needed] Cyberslacking can become a drain on corporate resources, as employees spend a significant amount of time surfing the Web while at work. Internet addiction disorder is excessive computer use that interferes with daily life. Nicholas G. Carr believes that Internet use has other effects on individuals, for instance improving scan-reading skills while interfering with the deep thinking that leads to true creativity. Electronic business encompasses business processes spanning the entire value chain: purchasing, supply chain management, marketing, sales, customer service, and business relationships. E-commerce seeks to add revenue streams by using the Internet to build and enhance relationships with clients and partners. According to International Data Corporation, the size of worldwide e-commerce, when global business-to-business and -consumer transactions are combined, equated to $16 trillion in 2013. A report by Oxford Economics added those two together to estimate the total size of the digital economy at $20.4 trillion, equivalent to roughly 13.8% of global sales. While much has been written of the economic advantages of Internet-enabled commerce, there is also evidence that some aspects of the Internet, such as maps and location-aware services, may serve to reinforce economic inequality and the digital divide.
Electronic commerce may be responsible for consolidation and the decline of mom-and-pop, brick-and-mortar businesses, resulting in increases in income inequality. A 2013 Institute for Local Self-Reliance report states that brick-and-mortar retailers employ 47 people for every $10 million in sales, while Amazon employs only 14. Similarly, the 700-employee room rental start-up Airbnb was valued at $10 billion in 2014, about half as much as Hilton Worldwide, which employs 152,000 people. At that time, Uber employed 1,000 full-time employees and was valued at $18.2 billion, about the same valuation as Avis Rent a Car and The Hertz Corporation combined, which together employed almost 60,000 people. Advertising on popular web pages can be lucrative, and e-commerce, the sale of products and services directly via the Web, continues to grow. Online advertising is a form of marketing and advertising which uses the Internet to deliver promotional marketing messages to consumers. It includes email marketing, search engine marketing (SEM), social media marketing, many types of display advertising (including web banner advertising), and mobile advertising. In 2011, Internet advertising revenues in the United States surpassed those of cable television and nearly exceeded those of broadcast television. Many common online advertising practices are controversial and increasingly subject to regulation. The Internet has achieved new relevance as a political tool. The presidential campaign of Howard Dean in 2004 in the United States was notable for its success in soliciting donations via the Internet. Many political groups use the Internet as a new method of organizing to carry out their missions, giving rise to Internet activism. Social media websites, such as Facebook and Twitter, helped people organize the Arab Spring by helping activists organize protests, communicate grievances, and disseminate information. Many have understood the Internet as an extension of the Habermasian notion of the public sphere, observing how network communication technologies provide something like a global civic forum. However, incidents of politically motivated Internet censorship have now been recorded in many countries, including western democracies. E-government is the use of technological communications devices, such as the Internet, to provide public services to citizens and other persons in a country or region. E-government offers opportunities for more direct and convenient citizen access to government and for government provision of services directly to citizens. Cybersectarianism is a new organizational form that involves highly dispersed small groups of practitioners who may remain largely anonymous within the larger social context and operate in relative secrecy, while still linked remotely to a larger network of believers who share a set of practices and texts, and often a common devotion to a particular leader. Overseas supporters provide funding and support; domestic practitioners distribute tracts, participate in acts of resistance, and share information on the internal situation with outsiders. Collectively, members and practitioners of such sects construct viable virtual communities of faith, exchanging personal testimonies and engaging in collective study via email, online chat rooms, and web-based message boards.
In particular, the British government has raised concerns about the prospect of young British Muslims being indoctrinated into Islamic extremism by material on the Internet, being persuaded to join terrorist groups such as the so-called "Islamic State", and then potentially committing acts of terrorism on returning to Britain after fighting in Syria or Iraq.[citation needed] Applications and services The Internet carries many applications and services, most prominently the World Wide Web, including social media, electronic mail, mobile applications, multiplayer online games, Internet telephony, file sharing, and streaming media services. The World Wide Web is a global collection of documents, images, multimedia, applications, and other resources, logically interrelated by hyperlinks and referenced with Uniform Resource Identifiers (URIs), which provide a global system of named references. URIs symbolically identify services, web servers, databases, and the documents and resources that they can provide. HyperText Transfer Protocol (HTTP) is the main access protocol of the World Wide Web; it is one of many protocols that can be used for communication on the Internet, and web services also use it for communication between software systems, for information transfer, and for sharing and exchanging business data and logistics. World Wide Web browser software, such as Microsoft Edge, Mozilla Firefox, Opera, Apple's Safari, and Google Chrome, enables users to navigate from one web page to another via the hyperlinks embedded in the documents. These documents may also contain computer data, including graphics, sounds, text, video, multimedia and interactive content. Client-side scripts can include animations, games, office applications and scientific demonstrations. Email is an important communications service available via the Internet. The concept of sending electronic text messages between parties, analogous to mailing letters or memos, predates the creation of the Internet. Internet telephony is a common communications service realized with the Internet. The name of the principal internetworking protocol, the Internet Protocol, lends its name to voice over Internet Protocol (VoIP).[citation needed] VoIP systems now dominate many markets, being as easy and convenient as a traditional telephone while offering substantial cost savings, especially over long distances. File sharing is the practice of transferring large amounts of data in the form of computer files across the Internet, for example via file servers. The load of bulk downloads to many users can be eased by the use of "mirror" servers or peer-to-peer networks. Access to the file may be controlled by user authentication, the transit of the file over the Internet may be obscured by encryption, and money may change hands for access to the file. The price can be paid by the remote charging of funds from, for example, a credit card whose details are also passed, usually fully encrypted, across the Internet. The origin and authenticity of the file received may be checked by a digital signature. Governance The Internet is a global network that comprises many voluntarily interconnected autonomous networks. It operates without a central governing body. The technical underpinning and standardization of the core protocols (IPv4 and IPv6) is an activity of the Internet Engineering Task Force (IETF), a non-profit organization of loosely affiliated international participants that anyone may associate with by contributing technical expertise.
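As a concrete illustration of the HTTP access pattern described above (a client requesting a resource identified by a URI from a web server), here is a minimal sketch using Python's standard library; example.org is a placeholder host, and the exact status and headers returned will vary by server:

```python
# Fetch a web page with an HTTP GET request, the Web's main access pattern.
from urllib.request import urlopen

with urlopen("https://example.org/") as response:    # HTTP GET over TCP/IP
    status = response.status                         # e.g. 200 (OK)
    content_type = response.headers["Content-Type"]  # e.g. "text/html; charset=UTF-8"
    body = response.read()                           # the document itself

print(status, content_type, len(body))
```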
While the hardware components in the Internet infrastructure can often be used to support other software systems, it is the design and the standardization process of the software that characterizes the Internet and provides the foundation for its scalability and success. The responsibility for the architectural design of the Internet software systems has been assumed by the IETF. The IETF conducts standard-setting work groups, open to any individual, about the various aspects of Internet architecture. The resulting contributions and standards are published as Request for Comments (RFC) documents on the IETF web site. The principal methods of networking that enable the Internet are contained in specially designated RFCs that constitute the Internet Standards. Other less rigorous documents are simply informative, experimental, or historical, or document the best current practices when implementing Internet technologies. To maintain interoperability, the principal name spaces of the Internet are administered by the Internet Corporation for Assigned Names and Numbers (ICANN). ICANN is governed by an international board of directors drawn from across the Internet technical, business, academic, and other non-commercial communities. The organization coordinates the assignment of unique identifiers for use on the Internet, including domain names, IP addresses, application port numbers in the transport protocols, and many other parameters. Globally unified name spaces are essential for maintaining the global reach of the Internet. This role of ICANN distinguishes it as perhaps the only central coordinating body for the global Internet. The National Telecommunications and Information Administration, an agency of the United States Department of Commerce, had final approval over changes to the DNS root zone until the IANA stewardship transition on 1 October 2016. Regional Internet registries (RIRs) were established for five regions of the world to assign IP address blocks and other Internet parameters to local registries, such as Internet service providers, from a designated pool of addresses set aside for each region:[citation needed] The Internet Society (ISOC) was founded in 1992 with a mission to "assure the open development, evolution and use of the Internet for the benefit of all people throughout the world". Its members include individuals as well as corporations, organizations, governments, and universities. Among other activities ISOC provides an administrative home for a number of less formally organized groups that are involved in developing and managing the Internet, including: the Internet Engineering Task Force (IETF), Internet Architecture Board (IAB), Internet Engineering Steering Group (IESG), Internet Research Task Force (IRTF), and Internet Research Steering Group (IRSG). On 16 November 2005, the United Nations-sponsored World Summit on the Information Society in Tunis established the Internet Governance Forum (IGF) to discuss Internet-related issues.[citation needed] Infrastructure The communications infrastructure of the Internet consists of its hardware components and a system of software layers that control various aspects of the architecture. As with any computer network, the Internet physically consists of routers, media (such as cabling and radio links), repeaters, and modems. However, as an example of internetworking, many of the network nodes are not necessarily Internet equipment per se. 
Internet packets are carried by other full-fledged networking protocols, with the Internet acting as a homogeneous networking standard, running across heterogeneous hardware, with the packets guided to their destinations by IP routers.[citation needed] Internet service providers (ISPs) establish worldwide connectivity between individual networks at various levels of scope. At the top of the routing hierarchy are the tier 1 networks, large telecommunication companies that exchange traffic directly with each other via very high speed fiber-optic cables, governed by peering agreements. Tier 2 and lower-level networks buy Internet transit from other providers to reach at least some parties on the global Internet, though they may also engage in peering. End-users who only access the Internet when needed to perform a function or obtain information represent the bottom of the routing hierarchy.[citation needed] An ISP may use a single upstream provider for connectivity, or implement multihoming to achieve redundancy and load balancing. Internet exchange points are major traffic exchanges with physical connections to multiple ISPs. Large organizations, such as academic institutions, large enterprises, and governments, may perform the same function as ISPs, engaging in peering and purchasing transit on behalf of their internal networks. Research networks tend to interconnect with large subnetworks such as GEANT, GLORIAD, Internet2, and the UK's national research and education network, JANET.[citation needed] Common methods of Internet access by users include broadband over coaxial cable, fiber optics or copper wires, Wi-Fi, satellite, and cellular telephone technology.[citation needed] Grassroots efforts have led to wireless community networks. Commercial Wi-Fi services that cover large areas are available in many cities, such as New York, London, Vienna, Toronto, San Francisco, Philadelphia, Chicago and Pittsburgh. Most servers that provide internet services are today hosted in data centers, and content is often accessed through high-performance content delivery networks. Colocation centers often host private peering connections between their customers, internet transit providers, cloud providers, meet-me rooms for connecting customers together, Internet exchange points, and landing points and terminal equipment for fiber optic submarine communication cables, connecting the Internet. Internet Protocol Suite The Internet standards describe a framework known as the Internet protocol suite (also called TCP/IP, based on its first two components). This is a suite of protocols that are ordered into a set of four conceptual layers by the scope of their operation, originally documented in RFC 1122 and RFC 1123:[citation needed] The most prominent component of the Internet model is the Internet Protocol. IP enables internetworking, essentially establishing the Internet itself. Two versions of the Internet Protocol exist, IPv4 and IPv6.[citation needed] Aside from the complex array of physical connections that make up its infrastructure, the Internet is facilitated by bi- or multi-lateral commercial contracts (e.g., peering agreements), and by technical specifications or protocols that describe the exchange of data over the network.[citation needed] For locating individual computers on the network, the Internet provides IP addresses. IP addresses are used by the Internet infrastructure to direct internet packets to their destinations.
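The stack just described can be made concrete in a few lines: the Domain Name System maps a host name to an IP address, TCP provides the transport-layer byte stream, and the application layer speaks HTTP on top. A minimal sketch in Python using raw sockets (example.org is a placeholder host; real applications would normally use an HTTP library instead):

```python
# Walking down the protocol suite: DNS lookup, TCP connection, HTTP request.
import socket

# Name resolution: domain name -> IP address (via the DNS).
family, socktype, proto, _, sockaddr = socket.getaddrinfo(
    "example.org", 80, proto=socket.IPPROTO_TCP)[0]
print("resolved to:", sockaddr[0])

# Transport layer: open a TCP byte stream to that address and port.
with socket.socket(family, socktype, proto) as sock:
    sock.connect(sockaddr)
    # Application layer: a minimal HTTP/1.1 request over the stream.
    sock.sendall(b"GET / HTTP/1.1\r\nHost: example.org\r\nConnection: close\r\n\r\n")
    first_bytes = sock.recv(4096)

print(first_bytes.split(b"\r\n")[0])  # e.g. b'HTTP/1.1 200 OK'
```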
They consist of fixed-length numbers, which are found within the packet. IP addresses are generally assigned to equipment either automatically via the Dynamic Host Configuration Protocol, or configured manually.[citation needed] The Domain Name System (DNS) converts user-entered domain names (e.g. "en.wikipedia.org") into IP addresses.[citation needed] Internet Protocol version 4 (IPv4) defines an IP address as a 32-bit number. IPv4 is the initial version used on the first generation of the Internet and is still in dominant use. It was designed in 1981 to address up to ≈4.3 billion (10⁹) hosts. However, the explosive growth of the Internet has led to IPv4 address exhaustion, which entered its final stage in 2011, when the global IPv4 address allocation pool was exhausted. Because of the growth of the Internet and the depletion of available IPv4 addresses, a new version of IP, IPv6, was developed in the mid-1990s; it provides vastly larger addressing capabilities and more efficient routing of Internet traffic. IPv6 uses 128 bits for the IP address and was standardized in 1998. IPv6 deployment has been ongoing since the mid-2000s and is currently in growing deployment around the world, since Internet address registries began to urge all resource managers to plan rapid adoption and conversion. By design, IPv6 is not directly interoperable with IPv4. Instead, it establishes a parallel version of the Internet not directly accessible with IPv4 software. Thus, translation facilities exist for internetworking, and some nodes have duplicate networking software for both networks. Essentially all modern computer operating systems support both versions of the Internet Protocol.[citation needed] Network infrastructure, however, has been lagging in this development.[citation needed] A subnet or subnetwork is a logical subdivision of an IP network. Computers that belong to a subnet are addressed with an identical most-significant bit-group in their IP addresses. This results in the logical division of an IP address into two fields: the network number or routing prefix, and the rest field or host identifier. The rest field is an identifier for a specific host or network interface.[citation needed] The routing prefix may be expressed in Classless Inter-Domain Routing (CIDR) notation, written as the first address of a network, followed by a slash character (/), and ending with the bit-length of the prefix. For example, 198.51.100.0/24 is the prefix of the Internet Protocol version 4 network starting at the given address, having 24 bits allocated for the network prefix and the remaining 8 bits reserved for host addressing. Addresses in the range 198.51.100.0 to 198.51.100.255 belong to this network. The IPv6 address specification 2001:db8::/32 is a large address block with 2⁹⁶ addresses, having a 32-bit routing prefix.[citation needed] For IPv4, a network may also be characterized by its subnet mask or netmask, which is the bitmask that, when applied by a bitwise AND operation to any IP address in the network, yields the routing prefix. Subnet masks are also expressed in dot-decimal notation like an address. For example, 255.255.255.0 is the subnet mask for the prefix 198.51.100.0/24.[citation needed] Computers and routers use routing tables in their operating system to forward IP packets to reach a node on a different subnetwork. Routing tables are maintained by manual configuration or automatically by routing protocols.
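The CIDR, netmask, and routing-table mechanics described above can be illustrated with Python's standard-library ipaddress module. The prefixes below reuse the documentation ranges from the text; the routing table and its interface names are hypothetical:

```python
# CIDR prefixes, subnet masks, and longest-prefix-match forwarding.
import ipaddress

net = ipaddress.ip_network("198.51.100.0/24")
print(net.netmask)        # 255.255.255.0, the mask for a /24 prefix
print(net.num_addresses)  # 256, i.e. 198.51.100.0 through 198.51.100.255
print(ipaddress.ip_address("198.51.100.7") in net)  # True: host is in the subnet

# A toy routing table mapping prefixes to next hops (names are made up).
routes = {
    ipaddress.ip_network("0.0.0.0/0"): "default gateway",         # matches anything
    ipaddress.ip_network("198.51.100.0/24"): "interface eth1",
    ipaddress.ip_network("198.51.100.128/25"): "interface eth2",  # more specific
}

def next_hop(destination: str) -> str:
    """Forward to the most specific (longest) matching prefix."""
    dest = ipaddress.ip_address(destination)
    best = max((n for n in routes if dest in n), key=lambda n: n.prefixlen)
    return routes[best]

print(next_hop("198.51.100.200"))  # interface eth2 (falls in the /25 half)
print(next_hop("203.0.113.9"))     # default gateway (no specific route)
```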
End-nodes typically use a default route that points toward an ISP providing transit, while ISP routers use the Border Gateway Protocol to establish the most efficient routing across the complex connections of the global Internet.[citation needed] The default gateway is the node that serves as the forwarding host (router) to other networks when no other route specification matches the destination IP address of a packet. Security Internet resources, hardware, and software components are the target of criminal or malicious attempts to gain unauthorized control to cause interruptions, commit fraud, engage in blackmail or access private information. Malware is malicious software used and distributed via the Internet. It includes computer viruses, which are copied with the help of humans; computer worms, which copy themselves automatically; software for denial-of-service attacks; ransomware; botnets; and spyware that reports on the activity and typing of users.[citation needed] Usually, these activities constitute cybercrime. Defense theorists have also speculated about the possibility of hackers waging cyber warfare using similar methods on a large scale. Malware poses serious problems to individuals and businesses on the Internet. According to Symantec's 2018 Internet Security Threat Report (ISTR), the number of malware variants rose to 669,947,865 in 2017, twice as many as in 2016. Cybercrime, which includes malware attacks as well as other crimes committed by computer, was predicted to cost the world economy US$6 trillion in 2021, and is increasing at a rate of 15% per year. Since 2021, malware has been designed to target computer systems that run critical infrastructure such as the electricity distribution network. Malware can be designed to evade antivirus software detection algorithms. The vast majority of computer surveillance involves the monitoring of data and traffic on the Internet. In the United States, for example, under the Communications Assistance For Law Enforcement Act, all phone calls and broadband Internet traffic (emails, web traffic, instant messaging, etc.) are required to be available for unimpeded real-time monitoring by federal law enforcement agencies. Under the Act, all U.S. telecommunications providers are required to install packet sniffing technology to allow federal law enforcement and intelligence agencies to intercept all of their customers' broadband Internet and VoIP traffic.[d] The large amount of data gathered from packet capture requires surveillance software that filters and reports relevant information, such as the use of certain words or phrases, the access to certain types of web sites, or communicating via email or chat with certain parties. Agencies such as the Information Awareness Office, NSA, GCHQ and the FBI spend billions of dollars per year to develop, purchase, implement, and operate systems for interception and analysis of data. Similar systems are operated by the Iranian secret police to identify and suppress dissidents. The required hardware and software were allegedly installed by Germany's Siemens AG and Finland's Nokia. Some governments, such as those of Myanmar, Iran, North Korea, Mainland China, Saudi Arabia and the United Arab Emirates, restrict access to content on the Internet within their territories, especially to political and religious content, with domain name and keyword filters.
In Norway, Denmark, Finland, and Sweden, major Internet service providers have voluntarily agreed to restrict access to sites listed by authorities. While this list of forbidden resources is supposed to contain only known child pornography sites, the content of the list is secret. Many countries, including the United States, have enacted laws against the possession or distribution of certain material, such as child pornography, via the Internet, but do not mandate filter software. Many free or commercially available software programs, called content-control software, are available to users to block offensive content on individual computers or networks, in order to limit access by children to pornographic material or depictions of violence.[citation needed] Performance As the Internet is a heterogeneous network, its physical characteristics, including, for example, the data transfer rates of connections, vary widely. It exhibits emergent phenomena that depend on its large-scale organization. [Figure: global Internet traffic volume in petabytes per month, 1990–2015.] The volume of Internet traffic is difficult to measure because no single point of measurement exists in the multi-tiered, non-hierarchical topology. Traffic data may be estimated from the aggregate volume through the peering points of the Tier 1 network providers, but traffic that stays local in large provider networks may not be accounted for.[citation needed] An Internet blackout or outage can be caused by local signaling interruptions. Disruptions of submarine communications cables may cause blackouts or slowdowns to large areas, such as in the 2008 submarine cable disruption. Less-developed countries are more vulnerable due to the small number of high-capacity links. Land cables are also vulnerable, as in 2011 when a woman digging for scrap metal severed most connectivity for the nation of Armenia. Internet blackouts affecting almost entire countries can be achieved by governments as a form of Internet censorship, as in the 2011 blockage of the Internet in Egypt, in which approximately 93% of networks were without access in an attempt to stop mobilization for anti-government protests. Estimates of the Internet's electricity usage have been the subject of controversy: a 2014 peer-reviewed research paper found claims differing by a factor of 20,000 published in the literature during the preceding decade, ranging from 0.0064 kilowatt-hours per gigabyte transferred (kWh/GB) to 136 kWh/GB. The researchers attributed these discrepancies mainly to the year of reference (i.e. whether efficiency gains over time had been taken into account) and to whether "end devices such as personal computers and servers are included" in the analysis. In 2011, academic researchers estimated the overall energy used by the Internet to be between 170 and 307 GW, less than two percent of the energy used by humanity. This estimate included the energy needed to build, operate, and periodically replace the estimated 750 million laptops, a billion smartphones and 100 million servers worldwide, as well as the energy that routers, cell towers, optical switches, Wi-Fi transmitters and cloud storage devices use when transmitting Internet traffic. According to a non-peer-reviewed study published in 2018 by The Shift Project (a French think tank funded by corporate sponsors), nearly 4% of global CO2 emissions could be attributed to global data transfer and the necessary infrastructure.
The study also said that online video streaming alone accounted for 60% of this data transfer and therefore contributed over 300 million tonnes of CO2 emissions per year, and argued for new "digital sobriety" regulations restricting the use and size of video files.
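To make the factor-of-20,000 disagreement noted above concrete, the following back-of-the-envelope comparison applies the two extreme intensity figures to the same assumed volume of traffic. The 100,000 PB/month volume is an illustrative assumption (roughly the 2015 level in the traffic figure above), not a figure from the studies:

```latex
% Assumed traffic: 100,000 PB/month = 10^{11} GB/month; one month ~ 730 hours.
\begin{align*}
E_{\min} &= 0.0064\ \mathrm{kWh/GB} \times 10^{11}\ \mathrm{GB}
          \approx 6.4\times 10^{8}\ \mathrm{kWh}
          \approx 0.9\ \mathrm{GW}\ \text{(average power)},\\
E_{\max} &= 136\ \mathrm{kWh/GB} \times 10^{11}\ \mathrm{GB}
          \approx 1.4\times 10^{13}\ \mathrm{kWh}
          \approx 1.9\times 10^{4}\ \mathrm{GW},\\
E_{\max}/E_{\min} &= 136/0.0064 \approx 21{,}000.
\end{align*}
```

Both endpoints sit far from the independent 2011 estimate of 170–307 GW quoted above, which illustrates why the published intensity figures attracted scrutiny.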
======================================== |
[SOURCE: https://en.wikipedia.org/wiki/Animal#cite_ref-49] | [TOKENS: 6011] |
Animal Animals are multicellular, eukaryotic organisms belonging to the biological kingdom Animalia (/ˌænɪˈmeɪliə/). With few exceptions, animals consume organic material, breathe oxygen, have myocytes and are able to move, can reproduce sexually, and grow from a hollow sphere of cells, the blastula, during embryonic development. Animals form a clade, meaning that they arose from a single common ancestor. Over 1.5 million living animal species have been described, of which around 1.05 million are insects, over 85,000 are molluscs, and around 65,000 are vertebrates. It has been estimated there are as many as 7.77 million animal species on Earth. Animal body lengths range from 8.5 μm (0.00033 in) to 33.6 m (110 ft). They have complex ecologies and interactions with each other and their environments, forming intricate food webs. The scientific study of animals is known as zoology, and the study of animal behaviour is known as ethology. The animal kingdom is divided into five major clades, namely Porifera, Ctenophora, Placozoa, Cnidaria and Bilateria. Most living animal species belong to the clade Bilateria, a highly proliferative clade whose members have a bilaterally symmetric and significantly cephalised body plan, and the vast majority of bilaterians belong to two large clades: the protostomes, which include organisms such as arthropods, molluscs, flatworms, annelids and nematodes; and the deuterostomes, which include echinoderms, hemichordates and chordates, the latter of which contains the vertebrates. The much smaller basal phylum Xenacoelomorpha has an uncertain position within Bilateria. Animals first appeared in the fossil record in the late Cryogenian period and diversified in the subsequent Ediacaran period in what is known as the Avalon explosion. Nearly all modern animal phyla first appeared in the fossil record as marine species during the Cambrian explosion, which began around 539 million years ago (Mya), and most classes during the Ordovician radiation, 485.4 Mya. Common to all living animals, 6,331 groups of genes have been identified that may have arisen from a single common ancestor that lived about 650 Mya, during the Cryogenian period. Historically, Aristotle divided animals into those with blood and those without. Carl Linnaeus created the first hierarchical biological classification for animals in 1758 with his Systema Naturae, which Jean-Baptiste Lamarck expanded into 14 phyla by 1809. In 1874, Ernst Haeckel divided the animal kingdom into the multicellular Metazoa (now synonymous with Animalia) and the Protozoa, single-celled organisms no longer considered animals. In modern times, the biological classification of animals relies on advanced techniques, such as molecular phylogenetics, which are effective at demonstrating the evolutionary relationships between taxa. Humans make use of many other animal species for food (including meat, eggs, and dairy products), for materials (such as leather, fur, and wool), as pets, and as working animals for transportation and services. Dogs, the first domesticated animal, have been used in hunting, in security and in warfare, as have horses, pigeons and birds of prey; while other terrestrial and aquatic animals are hunted for sport, trophies or profit. Non-human animals are also an important cultural element of human evolution, having appeared in cave art and totems since the earliest times, and are frequently featured in mythology, religion, arts, literature, heraldry, politics, and sports.
Etymology The word animal comes from the Latin noun animal of the same meaning, which is itself derived from Latin animalis 'having breath or soul'. The biological definition includes all members of the kingdom Animalia. In colloquial usage, the term animal is often used to refer only to nonhuman animals. The term metazoa is derived from Ancient Greek μετα meta 'after' (in biology, the prefix meta- stands for 'later') and ζῷᾰ zōia 'animals', plural of ζῷον zōion 'animal'. A metazoan is any member of the group Metazoa. Characteristics Animals have several characteristics that they share with other living things. Animals are eukaryotic, multicellular, and aerobic, as are plants and fungi. Unlike plants and algae, which produce their own food, animals cannot, a feature they share with fungi. Animals ingest organic material and digest it internally. Animals have structural characteristics that set them apart from all other living things: typically, there is an internal digestive chamber with either one opening (in Ctenophora, Cnidaria, and flatworms) or two openings (in most bilaterians). Animal development is controlled by Hox genes, which signal the times and places to develop structures such as body segments and limbs. During development, the animal extracellular matrix forms a relatively flexible framework upon which cells can move about and be reorganised into specialised tissues and organs, making the formation of complex structures possible, and allowing cells to be differentiated. The extracellular matrix may be calcified, forming structures such as shells, bones, and spicules. In contrast, the cells of other multicellular organisms (primarily algae, plants, and fungi) are held in place by cell walls, and so develop by progressive growth. Nearly all animals make use of some form of sexual reproduction. They produce haploid gametes by meiosis; the smaller, motile gametes are spermatozoa and the larger, non-motile gametes are ova. These fuse to form zygotes, which develop via mitosis into a hollow sphere, called a blastula. In sponges, blastula larvae swim to a new location, attach to the seabed, and develop into a new sponge. In most other groups, the blastula undergoes more complicated rearrangement. It first invaginates to form a gastrula with a digestive chamber and two separate germ layers, an external ectoderm and an internal endoderm. In most cases, a third germ layer, the mesoderm, also develops between them. These germ layers then differentiate to form tissues and organs. Repeated instances of mating with a close relative during sexual reproduction generally lead to inbreeding depression within a population due to the increased prevalence of harmful recessive traits. Animals have evolved numerous mechanisms for avoiding close inbreeding. Some animals are capable of asexual reproduction, which often results in a genetic clone of the parent. This may take place through fragmentation; budding, such as in Hydra and other cnidarians; or parthenogenesis, where fertile eggs are produced without mating, such as in aphids. Ecology Animals are categorised into ecological groups depending on their trophic levels and how they consume organic material. Such groupings include carnivores (further divided into subcategories such as piscivores, insectivores, ovivores, etc.), herbivores (subcategorised into folivores, graminivores, frugivores, granivores, nectarivores, algivores, etc.), omnivores, fungivores, scavengers/detritivores, and parasites.
Interactions between animals of each biome form complex food webs within that ecosystem. In carnivorous or omnivorous species, predation is a consumer–resource interaction where the predator feeds on another organism, its prey, which often evolves anti-predator adaptations to avoid being fed upon. The selective pressures they impose on one another lead to an evolutionary arms race between predator and prey, resulting in various antagonistic or competitive coevolutions. Almost all multicellular predators are animals. Some consumers use multiple methods; for example, in parasitoid wasps, the larvae feed on the hosts' living tissues, killing them in the process, but the adults primarily consume nectar from flowers. Other animals may have very specific feeding behaviours, such as hawksbill sea turtles, which mainly eat sponges. Most animals rely on the biomass and bioenergy produced by plants and phytoplankton (collectively called producers) through photosynthesis. Herbivores, as primary consumers, eat the plant material directly to digest and absorb the nutrients, while carnivores and other animals on higher trophic levels indirectly acquire the nutrients by eating the herbivores or other animals that have eaten the herbivores. Animals oxidise carbohydrates, lipids, proteins and other biomolecules in cellular respiration, which allows the animal to grow, sustain basal metabolism, and fuel other biological processes such as locomotion. Some benthic animals living close to hydrothermal vents and cold seeps on the dark sea floor consume organic matter produced through chemosynthesis (via the oxidation of inorganic compounds such as hydrogen sulfide) by archaea and bacteria. Animals originated in the ocean; all extant animal phyla, except for Micrognathozoa and Onychophora, feature at least some marine species. However, several lineages of arthropods began to colonise land around the same time as land plants, probably between 510 and 471 million years ago, during the Late Cambrian or Early Ordovician. Vertebrates such as the lobe-finned fish Tiktaalik started to move onto land in the late Devonian, about 375 million years ago. Other notable animal groups that colonised land environments are Mollusca, Platyhelminthes, Annelida, Tardigrada, Onychophora, Rotifera, and Nematoda. Animals occupy virtually all of Earth's habitats and microhabitats, with faunas adapted to salt water, hydrothermal vents, fresh water, hot springs, swamps, forests, pastures, deserts, air, and the interiors of other organisms. Animals are, however, not particularly heat-tolerant; very few of them can survive at constant temperatures above 50 °C (122 °F) or in the most extreme cold deserts of continental Antarctica. The collective global geomorphic influence of animals on the processes shaping the Earth's surface remains largely understudied, with most studies limited to individual species and well-known exemplars. Diversity The blue whale (Balaenoptera musculus) is the largest animal that has ever lived, weighing up to 190 tonnes and measuring up to 33.6 metres (110 ft) long. The largest extant terrestrial animal is the African bush elephant (Loxodonta africana), weighing up to 12.25 tonnes and measuring up to 10.67 metres (35.0 ft) long. The largest terrestrial animals that ever lived were titanosaur sauropod dinosaurs such as Argentinosaurus, which may have weighed as much as 73 tonnes, and Supersaurus, which may have reached 39 metres.
Several animals are microscopic; some Myxozoa (obligate parasites within the Cnidaria) never grow larger than 20 μm, and one of the smallest species (Myxobolus shekel) is no more than 8.5 μm when fully grown. A table in the original article lists estimated numbers of described extant species for the major animal phyla, along with their principal habitats (terrestrial, fresh water, and marine) and free-living or parasitic ways of life. Species estimates are based on numbers described scientifically; much larger estimates have been calculated based on various means of prediction, and these can vary wildly. For instance, around 25,000–27,000 species of nematodes have been described, while published estimates of the total number of nematode species include 10,000–20,000; 500,000; 10 million; and 100 million. Using patterns within the taxonomic hierarchy, the total number of animal species, including those not yet described, was calculated to be about 7.77 million in 2011.[a] Evolutionary origin Evidence of animals is found as long ago as the Cryogenian period. 24-Isopropylcholestane (24-ipc) has been found in rocks from roughly 650 million years ago; it is only produced by sponges and pelagophyte algae. Its likely origin is from sponges, based on molecular clock estimates for the origin of 24-ipc production in both groups: analyses of pelagophyte algae consistently recover a Phanerozoic origin, while analyses of sponges recover a Neoproterozoic origin, consistent with the appearance of 24-ipc in the fossil record. The first body fossils of animals appear in the Ediacaran, represented by forms such as Charnia and Spriggina. It had long been doubted whether these fossils truly represented animals, but the discovery of the animal lipid cholesterol in fossils of Dickinsonia establishes their nature. Animals are thought to have originated under low-oxygen conditions, suggesting that they were capable of living entirely by anaerobic respiration, but as they became specialised for aerobic metabolism they became fully dependent on oxygen in their environments. Many animal phyla first appear in the fossil record during the Cambrian explosion, starting about 539 million years ago, in beds such as the Burgess Shale. Extant phyla in these rocks include molluscs, brachiopods, onychophorans, tardigrades, arthropods, echinoderms and hemichordates, along with numerous now-extinct forms such as the predatory Anomalocaris. The apparent suddenness of the event may however be an artefact of the fossil record, rather than showing that all these animals appeared simultaneously. That view is supported by the discovery of Auroralumina attenboroughii, the earliest known Ediacaran crown-group cnidarian (557–562 mya, some 20 million years before the Cambrian explosion) from Charnwood Forest, England. It is thought to be one of the earliest predators, catching small prey with its nematocysts as modern cnidarians do. Some palaeontologists have suggested that animals appeared much earlier than the Cambrian explosion, possibly as early as 1 billion years ago. Early fossils that might represent animals appear for example in the 665-million-year-old rocks of the Trezona Formation of South Australia. These fossils are interpreted as most probably being early sponges. Trace fossils such as tracks and burrows found in the Tonian period (from 1 gya) may indicate the presence of triploblastic worm-like animals, roughly as large (about 5 mm wide) and complex as earthworms.
However, similar tracks are produced by the giant single-celled protist Gromia sphaerica, so the Tonian trace fossils may not indicate early animal evolution. Around the same time, the layered mats of microorganisms called stromatolites decreased in diversity, perhaps due to grazing by newly evolved animals. Objects such as sediment-filled tubes that resemble trace fossils of the burrows of wormlike animals have been found in 1.2 gya rocks in North America, in 1.5 gya rocks in Australia and North America, and in 1.7 gya rocks in Australia. Their interpretation as having an animal origin is disputed, as they might be water-escape or other structures. Phylogeny Animals are monophyletic, meaning they are derived from a common ancestor. Animals are the sister group to the choanoflagellates, with which they form the Choanozoa. Ros-Rocher and colleagues (2021) trace the origins of animals to unicellular ancestors, providing an external phylogeny whose groups include the Filasterea, Pluriformea, Ichthyosporea, and Holomycota (which includes the fungi); uncertain relationships were indicated with dashed lines in their cladogram. The animal clade had certainly originated by 650 mya, and may have come into being as much as 800 mya, based on molecular clock evidence for different phyla. The relationships at the base of the animal tree have been debated. Other than Ctenophora, the Bilateria and Cnidaria are the only groups with symmetry, and other evidence shows they are closely related. Like the sponges, Placozoa has no symmetry, and it was often considered a "missing link" between protists and multicellular animals. The presence of Hox genes in Placozoa shows that they were once more complex. The Porifera (sponges) have long been assumed to be sister to the rest of the animals, but there is evidence that the Ctenophora may be in that position. Molecular phylogenetics has supported both the sponge-sister and ctenophore-sister hypotheses. In 2017, Roberto Feuda and colleagues, using amino acid differences, presented both, supporting a sponge-sister cladogram in which Porifera branches first, followed by Ctenophora, Placozoa, and finally Cnidaria and Bilateria (their ctenophore-sister tree simply interchanges the places of ctenophores and sponges). Conversely, a 2023 study by Darrin Schultz and colleagues used ancient gene linkages to construct a ctenophore-sister phylogeny, in which Ctenophora branches first, followed by Porifera, Placozoa, and finally Cnidaria and Bilateria. Sponges are physically very distinct from other animals, and were long thought to have diverged first, representing the oldest animal phylum and forming a sister clade to all other animals. Despite their morphological dissimilarity with all other animals, genetic evidence suggests sponges may be more closely related to other animals than the comb jellies are. Sponges lack the complex organisation found in most other animal phyla; their cells are differentiated, but in most cases not organised into distinct tissues, unlike all other animals. They typically feed by drawing in water through pores, filtering out small particles of food. The Ctenophora and Cnidaria are radially symmetric and have digestive chambers with a single opening, which serves as both mouth and anus. Animals in both phyla have distinct tissues, but these are not organised into discrete organs. They are diploblastic, having only two main germ layers, ectoderm and endoderm. The tiny placozoans have no permanent digestive chamber and no symmetry; they superficially resemble amoebae. Their phylogeny is poorly defined, and under active research.
The remaining animals, the great majority, comprising some 29 phyla and over a million species, form the clade Bilateria, whose members have a bilaterally symmetric body plan. The Bilateria are triploblastic, with three well-developed germ layers, and their tissues form distinct organs. The digestive chamber has two openings, a mouth and an anus, and in the Nephrozoa there is an internal body cavity, a coelom or pseudocoelom. These animals have a head end (anterior) and a tail end (posterior), a back (dorsal) surface and a belly (ventral) surface, and a left and a right side. A modern consensus phylogenetic tree for the Bilateria places the Xenacoelomorpha as the earliest branch, with the rest divided between the deuterostomes (Ambulacraria and Chordata) and the protostomes (Ecdysozoa and Spiralia); a sketch at the end of this passage renders this tree as a nested list. Having a front end means that this part of the body encounters stimuli, such as food, favouring cephalisation, the development of a head with sense organs and a mouth. Many bilaterians have a combination of circular muscles that constrict the body, making it longer, and an opposing set of longitudinal muscles that shorten the body; these enable soft-bodied animals with a hydrostatic skeleton to move by peristalsis. They also have a gut that extends through the basically cylindrical body from mouth to anus. Many bilaterian phyla have primary larvae which swim with cilia and have an apical organ containing sensory cells. However, over evolutionary time, descendant lineages have evolved that have lost one or more of these characteristics. For example, adult echinoderms are radially symmetric (unlike their larvae), while some parasitic worms have extremely simplified body structures. Genetic studies have considerably changed zoologists' understanding of the relationships within the Bilateria. Most appear to belong to two major lineages, the protostomes and the deuterostomes. It is often suggested that the basalmost bilaterians are the Xenacoelomorpha, with all other bilaterians belonging to the subclade Nephrozoa. However, this suggestion has been contested, with other studies finding that xenacoelomorphs are more closely related to Ambulacraria than to other bilaterians. Protostomes and deuterostomes differ in several ways. Early in development, deuterostome embryos undergo radial cleavage during cell division, while many protostomes (the Spiralia) undergo spiral cleavage. Animals from both groups possess a complete digestive tract, but in protostomes the first opening of the embryonic gut develops into the mouth, and the anus forms secondarily. In deuterostomes, the anus forms first while the mouth develops secondarily. Most protostomes have schizocoelous development, where cells simply fill in the interior of the gastrula to form the mesoderm. In deuterostomes, the mesoderm forms by enterocoelic pouching, through invagination of the endoderm. The main deuterostome taxa are the Ambulacraria and the Chordata. Ambulacraria are exclusively marine and include acorn worms, starfish, sea urchins, and sea cucumbers. The chordates are dominated by the vertebrates (animals with backbones), which consist of fishes, amphibians, reptiles, birds, and mammals. The protostomes include the Ecdysozoa, named after their shared trait of ecdysis, growth by moulting. Among the largest ecdysozoan phyla are the arthropods and the nematodes. The rest of the protostomes are in the Spiralia, named for their pattern of developing by spiral cleavage in the early embryo. Major spiralian phyla include the annelids and molluscs.
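For readers who find the flattened tree hard to picture, here is a minimal sketch (an illustration of the consensus described above, not code from any cited study) that encodes the tree as nested Python tuples and prints it with indentation:

```python
# Consensus bilaterian phylogeny from the text, as (name, children) tuples.
# Nephrozoa = all bilaterians other than Xenacoelomorpha, per the passage.
TREE = ("Bilateria", [
    ("Xenacoelomorpha", []),
    ("Nephrozoa", [
        ("Deuterostomia", [("Ambulacraria", []), ("Chordata", [])]),
        ("Protostomia",  [("Ecdysozoa", []),    ("Spiralia", [])]),
    ]),
])

def show(node, depth=0):
    """Print each clade indented under its parent."""
    name, children = node
    print("  " * depth + name)
    for child in children:
        show(child, depth + 1)

show(TREE)
```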
History of classification In the classical era, Aristotle divided animals,[d] based on his own observations, into those with blood (roughly, the vertebrates) and those without. The animals were then arranged on a scale from man (with blood, two legs, rational soul) down through the live-bearing tetrapods (with blood, four legs, sensitive soul) and other groups such as crustaceans (no blood, many legs, sensitive soul) down to spontaneously generating creatures like sponges (no blood, no legs, vegetable soul). Aristotle was uncertain whether sponges were animals, which in his system ought to have sensation, appetite, and locomotion, or plants, which did not: he knew that sponges could sense touch and would contract if about to be pulled off their rocks, but that they were rooted like plants and never moved about. In 1758, Carl Linnaeus created the first hierarchical classification in his Systema Naturae. In his original scheme, the animals were one of three kingdoms, divided into the classes of Vermes, Insecta, Pisces, Amphibia, Aves, and Mammalia. Since then, the last four have all been subsumed into a single phylum, the Chordata, while his Insecta (which included the crustaceans and arachnids) and Vermes have been renamed or broken up. The process was begun in 1793 by Jean-Baptiste de Lamarck, who called the Vermes une espèce de chaos ('a chaotic mess')[e] and split the group into three new phyla: worms, echinoderms, and polyps (which contained corals and jellyfish). By 1809, in his Philosophie Zoologique, Lamarck had created nine phyla apart from vertebrates (where he still had four phyla: mammals, birds, reptiles, and fish) and molluscs, namely cirripedes, annelids, crustaceans, arachnids, insects, worms, radiates, polyps, and infusorians. In his 1817 Le Règne Animal, Georges Cuvier used comparative anatomy to group the animals into four embranchements ('branches' with different body plans, roughly corresponding to phyla), namely vertebrates, molluscs, articulated animals (arthropods and annelids), and zoophytes or radiata (echinoderms, cnidarians, and other forms). This division into four was followed by the embryologist Karl Ernst von Baer in 1828, the zoologist Louis Agassiz in 1857, and the comparative anatomist Richard Owen in 1860. In 1874, Ernst Haeckel divided the animal kingdom into two subkingdoms: Metazoa (multicellular animals, with five phyla: coelenterates, echinoderms, articulates, molluscs, and vertebrates) and Protozoa (single-celled animals), including a sixth animal phylum, sponges. The protozoa were later moved to the former kingdom Protista, leaving only the Metazoa as a synonym of Animalia. In human culture The human population exploits a large number of other animal species for food, both of domesticated livestock species in animal husbandry and, mainly at sea, by hunting wild species. Marine fish of many species are caught commercially for food. A smaller number of species are farmed commercially. Humans and their livestock make up more than 90% of the biomass of all terrestrial vertebrates, and almost as much as all insects combined. Invertebrates, including cephalopods, crustaceans, insects (principally bees and silkworms), and bivalve or gastropod molluscs, are hunted or farmed for food and fibres. Chickens, cattle, sheep, pigs, and other animals are raised as livestock for meat across the world.
Animal fibres such as wool and silk are used to make textiles, while animal sinews have been used as lashings and bindings, and leather is widely used to make shoes and other items. Animals have been hunted and farmed for their fur to make items such as coats and hats. Dyestuffs including carmine (cochineal), shellac, and kermes have been made from the bodies of insects. Working animals, including cattle and horses, have been used for work and transport from the first days of agriculture. Animals such as the fruit fly Drosophila melanogaster serve a major role in science as experimental models. Animals have been used to create vaccines since vaccination was first developed in the 18th century. Some medicines, such as the cancer drug trabectedin, are based on toxins or other molecules of animal origin. People have used hunting dogs to help chase down and retrieve animals, and birds of prey to catch birds and mammals, while tethered cormorants have been used to catch fish. Poison dart frogs have been used to poison the tips of blowpipe darts. A wide variety of animals are kept as pets, from invertebrates such as tarantulas, octopuses, and praying mantises, to reptiles such as snakes and chameleons, and birds including canaries, parakeets, and parrots. However, the most commonly kept pet species are mammals, namely dogs, cats, and rabbits. There is a tension between the role of animals as companions to humans and their existence as individuals with rights of their own. A wide variety of terrestrial and aquatic animals are hunted for sport. The signs of the Western and Chinese zodiacs are based on animals. In China and Japan, the butterfly has been seen as the personification of a person's soul, and in classical representation the butterfly is also the symbol of the soul. Animals have been the subjects of art from the earliest times, both historical, as in ancient Egypt, and prehistoric, as in the cave paintings at Lascaux. Major animal paintings include Albrecht Dürer's 1515 The Rhinoceros and George Stubbs's c. 1762 horse portrait Whistlejacket. Insects, birds and mammals play roles in literature and film, such as in giant bug movies. Animals including insects and mammals feature in mythology and religion. The scarab beetle was sacred in ancient Egypt, and the cow is sacred in Hinduism. Among other mammals, deer, horses, lions, bats, bears, and wolves are the subjects of myths and worship.
======================================== |
[SOURCE: https://en.wikipedia.org/wiki/Extraterrestrial_life#cite_note-159] | [TOKENS: 11349] |
Extraterrestrial life Extraterrestrial life, or alien life (colloquially, aliens), is life that originates from another world rather than on Earth. No extraterrestrial life has yet been scientifically or conclusively detected. Such life might range from simple forms such as prokaryotes to intelligent beings, possibly bringing forth civilizations that might be far more, or far less, advanced than humans. The Drake equation speculates about the existence of sapient life elsewhere in the universe. The science of extraterrestrial life is known as astrobiology. Speculation about inhabited worlds beyond Earth dates back to antiquity. Early Christian writers, including Augustine, discussed ideas from thinkers like Democritus and Epicurus about countless worlds in the vast universe. Pre-modern writers typically assumed extraterrestrial "worlds" were inhabited by living beings. William Vorilong, in the 15th century, acknowledged the possibility that Jesus could have visited extraterrestrial worlds to redeem their inhabitants. In 1440, Nicholas of Cusa suggested Earth is a "brilliant star"; he theorized that all celestial bodies, even the Sun, could host life. Descartes wrote that there were no means to prove the stars were not inhabited by "intelligent creatures", but that their existence was a matter of speculation. In comparison to the life-abundant Earth, the vast majority of intrasolar and extrasolar planets and moons have harsh surface conditions and disparate atmospheric chemistry, or lack an atmosphere altogether. However, there are many extreme and chemically harsh ecosystems on Earth that do support forms of life and are often hypothesized to be the origin of life on Earth. Examples include life surrounding hydrothermal vents, acidic hot springs, and volcanic lakes, as well as halophiles and the deep biosphere. Since the mid-20th century, researchers have searched for extraterrestrial life and intelligence. Solar system studies focus on Venus, Mars, Europa, and Titan, while exoplanet discoveries now total 6,022 confirmed planets in 4,490 systems as of October 2025. Depending on the category of search, methods range from the analysis of telescope and specimen data to radios used to detect and transmit interstellar communication. Interstellar travel remains largely hypothetical, with only the Voyager 1 and Voyager 2 probes confirmed to have entered the interstellar medium. The concept of extraterrestrial life, especially intelligent life, has greatly influenced culture and fiction. A key debate centers on contacting extraterrestrial intelligence: some advocate active attempts, while others warn it could be risky, given the human history of exploiting other societies. Context Initially, after the Big Bang, the universe was too hot to allow life. It is estimated that the temperature of the universe was around 10 billion kelvin at the one-second mark. Roughly 15 million years later, it had cooled to temperate levels, though the elements needed for organic life did not yet exist. The only freely available elements at that point were hydrogen and helium. Carbon and oxygen (and later, water) would not appear until about 50 million years later, created through stellar fusion. At that point, the difficulty for life to appear was not the temperature, but the scarcity of free heavy elements. Planetary systems emerged, and the first organic compounds may have formed in the protoplanetary disks of dust grains that would eventually create rocky planets like Earth.
Although Earth was in a molten state after its birth and may have burned any organics that fell on it, it would have been more receptive once it cooled down. Once the right conditions on Earth were met, life started by a chemical process known as abiogenesis. Alternatively, life may have formed less frequently, then spread between habitable planets, by meteoroids for example, in a process called panspermia. During most of their evolution, stars fuse hydrogen nuclei into helium nuclei, and the slightly lower mass of the helium produced is released as extra energy. The process continues until the star uses all of its available fuel, with the speed of consumption being related to the size of the star. During their last stages, stars start combining helium nuclei to form carbon nuclei. The larger stars can further fuse carbon into neon and magnesium, oxygen into silicon and sulfur, and so on up to iron. Ultimately, the star blows much of its content back into the interstellar medium, where it joins the clouds that will eventually become new generations of stars and planets. Many of those materials are the raw components of life on Earth. As this process takes place throughout the universe, these materials are ubiquitous in the cosmos and not a rarity of the Solar System. Earth is a planet in the Solar System, a planetary system formed by a star at the center, the Sun, and the objects that orbit it: other planets, moons, asteroids, and comets. The Sun is part of the Milky Way, a galaxy. The Milky Way is part of the Local Group, a galaxy group that is in turn part of the Laniakea Supercluster. The universe is composed of all similar structures in existence. The immense distances between celestial objects are a difficulty for studying extraterrestrial life. So far, humans have only set foot on the Moon and sent robotic probes to other planets and moons in the Solar System. Although probes can withstand conditions that would be lethal to humans, the distances cause time delays: the New Horizons probe took nine years after launch to reach Pluto. No probe has ever reached an extrasolar planetary system. Voyager 2 left the Solar System at a speed of about 50,000 kilometers per hour; if it were headed towards the Alpha Centauri system, the closest one to Earth at 4.4 light-years, it would take some 100,000 years to arrive. Under current technology, such systems can only be studied by telescopes, which have limitations. Dark matter is estimated to account for more combined mass than stars and gas clouds, but as it plays no role in the formation and evolution of stars and planets, it is usually not taken into account by astrobiology. There is an area around a star, the circumstellar habitable zone or "Goldilocks zone", wherein water may be at the right temperature to exist in liquid form at a planetary surface. This area is neither too close to the star, where water would become steam, nor too far away, where water would be frozen as ice. However, although useful as an approximation, planetary habitability is complex and defined by several factors. Being in the habitable zone is not enough for a planet to be habitable, nor even to actually have such liquid water. Venus is located in the solar system's habitable zone, but does not have liquid water because of the conditions of its atmosphere. Jovian planets or gas giants are not considered habitable even if they orbit close enough to their stars as hot Jupiters, due to crushing atmospheric pressures.
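A standard textbook approximation, not taken from this article, makes the location of this zone explicit: balancing the starlight a planet absorbs against the heat it radiates gives an equilibrium temperature, and hence a habitable-zone distance that scales with the square root of the stellar luminosity.

```latex
% Equilibrium temperature of a planet with Bond albedo A at distance d
% from a star of luminosity L_*, equating absorbed and emitted flux
% (a sketch; as the text notes, real habitability also depends on the
% atmosphere, as Venus shows):
\begin{align*}
T_{\mathrm{eq}} = \left(\frac{L_{*}\,(1-A)}{16\pi\sigma d^{2}}\right)^{1/4}
\quad\Longrightarrow\quad
d_{\mathrm{HZ}} \approx 1\ \mathrm{AU}\times\sqrt{L_{*}/L_{\odot}}.
\end{align*}
```

Under this scaling, a star with one quarter of the Sun's luminosity would have its habitable zone at roughly half the Earth–Sun distance, which is why the zone's location and lifetime track the type and evolution of the star, as described next.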
The actual distances for the habitable zones vary according to the type of star, and even the solar activity of each specific star influences the local habitability. The type of star also defines the time for which the habitable zone will exist, as its presence and limits change along with the star's stellar evolution. The Big Bang occurred 13.8 billion years ago, the Solar System was formed 4.6 billion years ago, and the first hominids appeared 6 million years ago. Life on other planets may have started, evolved, given birth to extraterrestrial intelligences, and perhaps even faced a planetary extinction event millions or billions of years ago. Considered on such a cosmic scale, the brief existence of Earth's species suggests that extraterrestrial life could be equally fleeting. During a period of about 7 million years, from about 10 to 17 million years after the Big Bang, the background temperature was between 373 and 273 K (100 and 0 °C; 212 and 32 °F), allowing the possibility of liquid water if any planets existed. Avi Loeb (2014) speculated that primitive life might in principle have appeared during this window, which he called "the Habitable Epoch of the Early Universe" (a short calculation at the end of this passage relates this temperature window to the cosmic expansion). Life on Earth is quite ubiquitous across the planet and has adapted over time to almost all the available environments in it; extremophiles and the deep biosphere thrive in even the most hostile ones. As a result, it is inferred that life on other celestial bodies may be equally adaptive. However, the origin of life is unrelated to its ease of adaptation and may have stricter requirements. A celestial body may not have any life on it, even if it were habitable. Likelihood of existence No life beyond Earth has ever been observed. The hypothesis of ubiquitous extraterrestrial life relies on three main ideas. The first is that the size of the universe allows for plenty of planets with a habitability similar to Earth's, and that the age of the universe gives enough time for a long process analogous to the history of Earth to happen there. The second is that the substances that make up life, such as carbon and water, are ubiquitous in the universe. The third is that the physical laws are universal, which means that the forces that would facilitate or prevent the existence of life would be the same ones as on Earth. According to this argument, made by scientists such as Carl Sagan and Stephen Hawking, it would be improbable for life not to exist somewhere other than Earth. This argument is embodied in the Copernican principle, which states that Earth does not occupy a unique position in the Universe, and the mediocrity principle, which states that there is nothing special about life on Earth. Other authors consider instead that life in the cosmos, or at least multicellular life, may actually be rare. The Rare Earth hypothesis maintains that life on Earth is possible because of a series of factors that range from the location in the galaxy and the configuration of the Solar System to local characteristics of the planet, and that it is unlikely that another planet simultaneously meets all such requirements. The proponents of this hypothesis consider that very little evidence suggests the existence of extraterrestrial life and that, at this point, it is just a desired result and not a reasonable scientific explanation for any gathered data.
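The temperature window behind the "Habitable Epoch" mentioned above can be checked against the standard scaling of the cosmic background temperature with redshift; the numbers below are a back-of-the-envelope verification, not figures from the article:

```latex
% The cosmic background temperature scales as T(z) = T_0 (1 + z),
% with T_0 = 2.725 K today (standard cosmology, not from the article).
% The quoted 273-373 K window then corresponds to:
\begin{align*}
1+z &= \tfrac{373}{2.725} \approx 137 \quad\text{(hot edge)},\\
1+z &= \tfrac{273}{2.725} \approx 100 \quad\text{(cold edge)},
\end{align*}
% i.e. redshifts of roughly 100-137, which in standard cosmology fall
% about 10-17 million years after the Big Bang, matching the text.
```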
In 1961, astronomer and astrophysicist Frank Drake devised the Drake equation as a way to stimulate scientific dialogue at a meeting on the search for extraterrestrial intelligence (SETI). The Drake equation is a probabilistic argument used to estimate the number of active, communicative extraterrestrial civilizations in the Milky Way galaxy. The Drake equation is

N = R* · fp · ne · fl · fi · fc · L

where N is the number of civilizations in our galaxy with which communication might be possible, R* is the average rate of star formation in the galaxy, fp is the fraction of those stars that have planets, ne is the average number of planets per such star that can potentially support life, fl is the fraction of those planets that actually develop life, fi is the fraction of life-bearing planets that develop intelligent life (civilizations), fc is the fraction of civilizations that release detectable signs of their existence into space, and L is the length of time for which such civilizations release detectable signals. Drake's proposed estimates are as follows, though the numbers on the right side of the equation are agreed to be speculative and open to substitution:

10,000 = 5 · 0.5 · 2 · 1 · 0.2 · 1 · 10,000[better source needed]

(a worked evaluation of these numbers follows at the end of this passage). The Drake equation has proved controversial because, although it is written as a mathematical equation, none of its values were known at the time. Although some values may eventually be measured, others are based on social sciences and are not knowable by their very nature. This prevents noteworthy conclusions from being drawn from the equation. Based on observations from the Hubble Space Telescope, there are nearly 2 trillion galaxies in the observable universe. It is estimated that at least ten percent of all Sun-like stars have a system of planets, i.e. there are 6.25×10¹⁸ stars with planets orbiting them in the observable universe. Even if it is assumed that only one out of a billion of these stars has planets supporting life, there would be some 6.25 billion life-supporting planetary systems in the observable universe. A 2013 study based on results from the Kepler spacecraft estimated that the Milky Way contains at least as many planets as it does stars, resulting in 100–400 billion exoplanets. The nebular hypothesis, which explains the formation of the Solar System and other planetary systems, suggests that planetary systems can have several configurations, and that not all of them will have rocky planets within the habitable zone. The apparent contradiction between high estimates of the probability of the existence of extraterrestrial civilisations and the lack of evidence for such civilisations is known as the Fermi paradox. Dennis W. Sciama claimed that life's existence in the universe depends on various fundamental constants. Zhi-Wei Wang and Samuel L. Braunstein suggest that a random universe capable of supporting life is likely to be just barely able to do so, offering a potential explanation of the Fermi paradox. Biochemical basis If extraterrestrial life exists, it could range from simple microorganisms and multicellular organisms similar to animals or plants to complex alien intelligences akin to humans. When scientists talk about extraterrestrial life, they consider all those types. Although it is possible that extraterrestrial life may have other configurations, scientists use the hierarchy of lifeforms from Earth for simplicity, as it is the only one known to exist. The first basic requirement for life is an environment with non-equilibrium thermodynamics, which means that the thermodynamic equilibrium must be broken by a source of energy. The traditional sources of energy in the cosmos are the stars, as for life on Earth, which depends on the energy of the Sun. However, there are other alternative energy sources, such as volcanoes, plate tectonics, and hydrothermal vents. There are ecosystems on Earth in deep areas of the ocean that do not receive sunlight and take energy from black smokers instead. Magnetic fields and radioactivity have also been proposed as sources of energy, although they would be less efficient ones.
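As flagged above, here is a minimal sketch (illustrative only; every input is speculative, as the text stresses) that evaluates the Drake equation with the estimates quoted earlier in this passage:

```python
# A minimal sketch of evaluating the Drake equation; every number is
# speculative and open to substitution, as the text notes.
def drake(R_star, f_p, n_e, f_l, f_i, f_c, L):
    """N: number of detectable civilizations in the galaxy."""
    return R_star * f_p * n_e * f_l * f_i * f_c * L

# Drake's illustrative estimates, from the text above:
N = drake(R_star=5,    # rate of star formation (stars/year)
          f_p=0.5,     # fraction of stars with planets
          n_e=2,       # potentially life-supporting planets per such star
          f_l=1,       # fraction of those planets that develop life
          f_i=0.2,     # fraction that develop intelligent life
          f_c=1,       # fraction releasing detectable signals
          L=10_000)    # years a civilization stays detectable
print(N)  # 10000.0
```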
Life on Earth requires water in a liquid state as a solvent in which biochemical reactions take place. It is highly unlikely that an abiogenesis process can start within a gaseous or solid medium: the atoms there move either too fast or too slow, making it difficult for specific ones to meet and start chemical reactions. A liquid medium also allows the transport of nutrients and substances required for metabolism. Sufficient quantities of carbon and other elements, along with water, might enable the formation of living organisms on terrestrial planets with a chemical make-up and temperature range similar to that of Earth. Life based on ammonia rather than water has been suggested as an alternative, though this solvent appears less suitable than water. It is also conceivable that there are forms of life whose solvent is a liquid hydrocarbon, such as methane, ethane or propane. Another unknown aspect of potential extraterrestrial life is the chemical elements that would compose it. Life on Earth is largely composed of carbon, but there could be other hypothetical types of biochemistry. A replacement for carbon would need to be able to create complex molecules, store the information required for evolution, and be freely available in the medium. To create DNA, RNA, or a close analog, such an element should be able to bind its atoms with many others, creating complex and stable molecules. It should be able to create at least three covalent bonds: two for making long strings and at least a third to add new links and allow for diverse information. Only nine elements meet this requirement: boron, nitrogen, phosphorus, arsenic, and antimony (three bonds), and carbon, silicon, germanium, and tin (four bonds). As for abundance, carbon, nitrogen, and silicon are the most abundant of these in the universe, far more so than the others. In Earth's crust, the most abundant of those elements is silicon; in the hydrosphere, it is carbon; and in the atmosphere, carbon and nitrogen. Silicon, however, has disadvantages compared with carbon. The molecules formed with silicon atoms are less stable and more vulnerable to acids, oxygen, and light. An ecosystem of silicon-based lifeforms would require very low temperatures, high atmospheric pressure, an atmosphere devoid of oxygen, and a solvent other than water. The low temperatures required would add an extra problem: the difficulty of kick-starting a process of abiogenesis to create life in the first place. Norman Horowitz, head of the Jet Propulsion Laboratory bioscience section for the Mariner and Viking missions from 1965 to 1976, considered that the great versatility of the carbon atom makes it the element most likely to provide solutions, even exotic solutions, to the problems of survival of life on other planets. However, he also considered that the conditions found on Mars were incompatible with carbon-based life. Even if extraterrestrial life is based on carbon and uses water as a solvent, like Earth life, it may still have a radically different biochemistry. Life is generally considered to be a product of natural selection. It has been proposed that to undergo natural selection, a living entity must have the capacity to replicate itself, the capacity to avoid damage and decay, and the capacity to acquire and process resources in support of the first two capacities. Life on Earth may have started with an RNA world and later evolved to its current form, where some of the RNA tasks were transferred to DNA and proteins.
Extraterrestrial life may still be using RNA, or may have evolved into other configurations. It is unclear whether our biochemistry is the most efficient one that could be generated, or which elements would follow a similar pattern. However, it is likely that, even if cells had a different composition from those of Earth, they would still have a cell membrane. Life on Earth jumped from prokaryotes to eukaryotes and from unicellular organisms to multicellular organisms through evolution. So far, no alternative process for achieving such a result has been conceived, even hypothetically. Evolution requires life to be divided into individual organisms, and no alternative organisation has been satisfactorily proposed either. At the basic level, membranes define the limit of a cell, between it and its environment, while remaining partially open to exchange energy and resources with it. The evolution from simple cells to eukaryotes, and from them to multicellular lifeforms, is not guaranteed. The Cambrian explosion took place thousands of millions of years after the origin of life, and its causes are not fully known yet. On the other hand, the jump to multicellularity took place several times, which suggests that it could be a case of convergent evolution, and so likely to take place on other planets as well. Palaeontologist Simon Conway Morris considers that convergent evolution would lead to kingdoms similar to our plants and animals, and that many features are likely to develop in alien animals as well, such as bilateral symmetry, limbs, digestive systems and heads with sensory organs. Scientists from the University of Oxford analysed the question from the perspective of evolutionary theory and wrote, in a study in the International Journal of Astrobiology, that aliens may be similar to humans. The planetary context would also have an influence: a planet with higher gravity would have smaller animals, and other types of stars can lead to non-green photosynthesizers. The amount of energy available would also affect biodiversity, as an ecosystem sustained by black smokers or hydrothermal vents would have less energy available than one sustained by a star's light and heat, and so its lifeforms would not grow beyond a certain complexity. There is also research assessing the capacity of life to develop intelligence. It has been suggested that this capacity arises with the number of potential niches a planet contains, and that the complexity of life itself is reflected in the information density of planetary environments, which in turn can be computed from its niches. The conditions on the other planets of the Solar System, and presumably on planets elsewhere in the Milky Way galaxy and beyond, are very harsh and seem too extreme to harbor life: intense UV radiation, extreme temperatures, and a lack of water, among other factors, do not seem to favor the creation or maintenance of extraterrestrial life. However, considerable evidence suggests that some of the earliest and most basic forms of life on Earth originated in extreme environments that would seem, at first glance, unlikely to harbor life. Fossil evidence, together with long-standing theories backed by years of research, has marked environments like hydrothermal vents and acidic hot springs as some of the first places where life could have originated on Earth.
These environments can be considered extreme when compared with the typical ecosystems that the majority of life on Earth now inhabits; hydrothermal vents, for example, are scorching hot because magma escaping from the Earth's mantle meets the much colder oceanic water. Even today, diverse populations of bacteria inhabit the areas surrounding hydrothermal vents, which suggests that some form of life could be supported even in the harshest of environments, such as those on other planets in the Solar System. What makes these harsh environments plausible sites for the origin of life on Earth, and for the possible creation of life on other planets, is that chemical reactions form spontaneously there. For example, the hydrothermal vents found on the ocean floor are known to support many chemosynthetic processes, which allow organisms to obtain energy from reduced chemical compounds while fixing carbon. In turn, these reactions allow organisms to live in relatively low-oxygen environments while maintaining enough energy to support themselves. The early Earth environment was chemically reducing, and these carbon-fixing reactions were therefore necessary for the survival, and possibly the origin, of life on Earth. Given the little information scientists have about the atmospheres of planets elsewhere in the Milky Way galaxy and beyond, those atmospheres are most likely reducing, or at least very low in oxygen, especially compared with Earth's atmosphere. If the necessary elements and ions were present on such planets, the same reduced, carbon-fixing chemistry occurring around hydrothermal vents could also occur on their surfaces and possibly result in the origin of extraterrestrial life. Planetary habitability in the Solar System The Solar System has a wide variety of planets, dwarf planets, and moons, and each one is studied for its potential to host life. Each has its own specific conditions that may benefit or harm life. So far, the only lifeforms found are those from Earth. No intelligence other than that of humans exists or has ever existed within the Solar System. Astrobiologist Mary Voytek points out that it would be unlikely to find large ecosystems, as they would have already been detected by now. The inner Solar System is likely devoid of life. However, Venus is still of interest to astrobiologists, as it is a terrestrial planet that was likely similar to Earth in its early stages and developed in a different way: it has an extreme greenhouse effect, the hottest surface in the Solar System, clouds of sulfuric acid, no remaining surface liquid water, and a thick carbon dioxide atmosphere with enormous pressure. Comparing the two planets helps in understanding the precise differences that lead to beneficial or harmful conditions for life. And despite the conditions working against life on Venus, there are suspicions that microbial life-forms may still survive in its high-altitude clouds. Mars is a cold and almost airless desert, inhospitable to life. However, recent studies have revealed that water on Mars used to be quite abundant, forming rivers, lakes, and perhaps even oceans. Mars may have been habitable back then, and life on Mars may have been possible. But when the planetary core ceased to generate a magnetic field, solar winds removed the atmosphere and the planet became vulnerable to solar radiation. Ancient life-forms may still have left fossilised remains, and microbes may still survive deep underground.
As mentioned, the gas giants and ice giants are unlikely to contain life. The most distant Solar System bodies, found in the Kuiper Belt and beyond, are locked in permanent deep-freeze, but cannot be ruled out completely. Although the giant planets themselves are highly unlikely to have life, there is hope of finding it on the moons orbiting these planets. Europa, in the Jovian system, has a subsurface ocean below a thick layer of ice. Ganymede and Callisto also have subsurface oceans, but life is less likely in them because their water is sandwiched between layers of solid ice. Europa's ocean, by contrast, would be in contact with the rocky seabed, which helps the chemical reactions. Digging deep enough to study those oceans, however, may prove difficult. Enceladus, a tiny moon of Saturn with another subsurface ocean, may not need to be dug into, as it releases water into space in eruption columns. The space probe Cassini flew inside one of these, but could not make a full study because NASA did not expect this phenomenon and had not equipped the probe to study ocean water. Still, Cassini detected complex organic molecules, salts, evidence of hydrothermal activity, hydrogen, and methane. Titan is the only celestial body in the Solar System besides Earth that has liquid bodies on its surface. It has rivers, lakes, and rain of hydrocarbons, methane, and ethane, and even a cycle similar to Earth's water cycle. This special context encourages speculation about lifeforms with a different biochemistry, but the cold temperatures would make such chemistry take place at a very slow pace. Water is rock-solid on the surface, but Titan does have a subsurface water ocean like several other moons; however, it lies at such a great depth that it would be very difficult to access for study. Scientific search The science that searches for and studies life in the universe, both on Earth and elsewhere, is called astrobiology. With the study of Earth's life, the only known form of life, astrobiology seeks to understand how life starts and evolves, and the requirements for its continuous existence. This helps to determine what to look for when searching for life on other celestial bodies. It is a complex area of study, and it combines the perspectives of several scientific disciplines, such as astronomy, biology, chemistry, geology, oceanography, and atmospheric sciences. The scientific search for extraterrestrial life is being carried out both directly and indirectly. As of September 2017, 3,667 exoplanets in 2,747 systems had been identified, and other planets and moons in the Solar System hold the potential for hosting primitive life such as microorganisms. As of 8 February 2021, an updated status of studies considering the possible detection of lifeforms on Venus (via phosphine) and Mars (via methane) was reported. Scientists search for biosignatures within the Solar System by studying planetary surfaces and examining meteorites. Some claim to have identified evidence that microbial life has existed on Mars. In 1996, a controversial report stated that structures resembling nanobacteria had been discovered in a meteorite, ALH84001, formed of rock ejected from Mars. Although all the unusual properties of the meteorite were eventually explained as the result of inorganic processes, the controversy over its discovery laid the groundwork for the development of astrobiology.
An experiment on the two Viking Mars landers reported gas emissions from heated Martian soil samples that some scientists argue are consistent with the presence of living microorganisms. The lack of corroborating evidence from other experiments on the same samples suggests that a non-biological reaction is the more likely hypothesis. In February 2005, NASA scientists reported that they may have found some evidence of extraterrestrial life on Mars. The two scientists, Carol Stoker and Larry Lemke of NASA's Ames Research Center, based their claim on methane signatures found in Mars's atmosphere resembling the methane production of some forms of primitive life on Earth, as well as on their own study of primitive life near the Rio Tinto river in Spain. NASA officials soon distanced the agency from the scientists' claims, and Stoker herself backed off from her initial assertions. In November 2011, NASA launched the Mars Science Laboratory mission, which landed the Curiosity rover on Mars. It is designed to assess the past and present habitability of Mars using a variety of scientific instruments. The rover landed at Gale Crater in August 2012. A group of scientists at Cornell University started a catalog of microorganisms, recording the way each one reacts to sunlight. The goal is to help with the search for similar organisms on exoplanets, as the starlight reflected by planets rich in such organisms would have a specific spectrum, unlike that of starlight reflected from lifeless planets. If Earth were studied from afar with this system, it would reveal a shade of green, a result of the abundance of photosynthesising plants. In August 2011, NASA studied meteorites found in Antarctica, finding adenine, guanine, hypoxanthine, and xanthine. Adenine and guanine are components of DNA, and the others are used in other biological processes. The studies ruled out contamination of the meteorites on Earth, as those components would not be freely available in the way they were found in the samples. This discovery suggests that several organic molecules that serve as building blocks of life may be generated within asteroids and comets. In October 2011, scientists reported that cosmic dust contains complex organic compounds ("amorphous organic solids with a mixed aromatic-aliphatic structure") that could be created naturally, and rapidly, by stars. It is still unclear whether those compounds played a role in the creation of life on Earth, but Sun Kwok, of the University of Hong Kong, thinks so: "If this is the case, life on Earth may have had an easier time getting started as these organics can serve as basic ingredients for life." In August 2012, and in a world first, astronomers at Copenhagen University reported the detection of a specific sugar molecule, glycolaldehyde, in a distant star system. The molecule was found around the protostellar binary IRAS 16293-2422, which is located 400 light-years from Earth. Glycolaldehyde is needed to form ribonucleic acid, or RNA, which is similar in function to DNA. This finding suggests that complex organic molecules may form in stellar systems prior to the formation of planets, eventually arriving on young planets early in their formation. In December 2023, astronomers reported the first discovery, in the plumes of Enceladus, a moon of Saturn, of hydrogen cyanide, a chemical possibly essential for life as we know it, as well as other organic molecules, some of which are yet to be better identified and understood.
According to the researchers, "these [newly discovered] compounds could potentially support extant microbial communities or drive complex organic synthesis leading to the origin of life." Although most searches focus on the biology of extraterrestrial life, an extraterrestrial intelligence advanced enough to develop a civilization may be detectable by other means as well. Technology may generate technosignatures: effects on the home planet that natural processes are unlikely to produce. Three main types of technosignature are considered: interstellar communications, effects on the atmosphere, and planetary-scale structures such as Dyson spheres. Organizations such as the SETI Institute search the cosmos for potential forms of communication. They started with radio waves and now search for laser pulses as well. The challenge for this search is that such signals have natural sources too, such as gamma-ray bursts and supernovae; the difference between a natural signal and an artificial one would lie in its specific patterns. Astronomers intend to use artificial intelligence for this task, as it can manage large amounts of data and is free of human biases and preconceptions. Moreover, even if there is an advanced extraterrestrial civilization, there is no guarantee that it is transmitting radio communications in the direction of Earth. The time required for a signal to travel across space means that a potential answer may arrive decades or centuries after the initial message. The atmosphere of Earth is rich in nitrogen dioxide as a result of air pollution, and this could be detectable. The natural abundance of carbon, which is also relatively reactive, makes it likely to be a basic component in the development of a potential extraterrestrial technological civilization, as it is on Earth. Fossil fuels may well be generated and used on such worlds too. An abundance of chlorofluorocarbons in an atmosphere can also be a clear technosignature, given their role in ozone depletion. Light pollution may be another technosignature, as multiple lights on the night side of a rocky planet can be a sign of advanced technological development; however, modern telescopes are not powerful enough to study exoplanets in the level of detail required to perceive it. The Kardashev scale proposes that a civilization may eventually start consuming energy directly from its local star. This would require giant structures built around it, called Dyson spheres. Such speculative structures would produce an excess of infrared radiation that telescopes might notice. Excess infrared radiation is typical of young stars, which are surrounded by dusty protoplanetary disks that will eventually form planets; an older star such as the Sun would have no natural reason to show it. The presence of heavy elements in a star's light spectrum is another potential technosignature; such elements would (in theory) be found if the star were being used as an incinerator or repository for nuclear waste products. Some astronomers search for extrasolar planets that may be conducive to life, narrowing the search to terrestrial planets within the habitable zones of their stars. Since 1992, over four thousand exoplanets have been discovered (6,128 planets in 4,584 planetary systems, including 1,017 multiple planetary systems, as of 30 October 2025). The extrasolar planets so far discovered range in size from terrestrial planets similar in size to Earth to gas giants larger than Jupiter. 
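The infrared-excess argument above can be made concrete with a little blackbody physics. The following is a minimal sketch, not drawn from the source: it treats a hypothetical Dyson sphere as a shell that absorbs its star's entire luminosity and re-radiates it as waste heat, with the shell radius (1 au) and the Sun-like luminosity being assumed illustrative values.

```python
import math

# Physical constants (SI units)
SIGMA = 5.670e-8    # Stefan-Boltzmann constant, W m^-2 K^-4
WIEN_B = 2.898e-3   # Wien's displacement constant, m K

# Assumed inputs for illustration: a Sun-like star, shell at 1 au
L_STAR = 3.828e26   # stellar luminosity, W
R_SHELL = 1.496e11  # shell radius, m (1 astronomical unit)

# The shell intercepts all of L_STAR and re-radiates it from its
# outer surface (area 4*pi*R^2), so L = 4*pi*R^2 * sigma * T^4.
T_waste = (L_STAR / (4 * math.pi * R_SHELL**2 * SIGMA)) ** 0.25

# Wien's law gives the wavelength where the waste-heat spectrum peaks.
peak_wavelength = WIEN_B / T_waste

print(f"Waste-heat temperature: {T_waste:.0f} K")             # ~394 K
print(f"Spectrum peaks near {peak_wavelength * 1e6:.1f} um")  # mid-infrared
```

Under these assumptions the shell would glow at roughly 390 K and peak near 7 μm, in the mid-infrared; an old Sun-like star showing such an excess, with no dusty protoplanetary disk to explain it, would stand out as anomalous.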
The number of observed exoplanets is expected to increase greatly in the coming years. The Kepler space telescope has also detected a few thousand candidate planets, of which about 11% may be false positives. There is at least one planet on average per star. About 1 in 5 Sun-like stars have an "Earth-sized" planet in the habitable zone, with the nearest expected to be within 12 light-years of Earth. Assuming 200 billion stars in the Milky Way, that would mean 11 billion potentially habitable Earth-sized planets, rising to 40 billion if red dwarfs are included. The rogue planets in the Milky Way possibly number in the trillions. The nearest known exoplanet is Proxima Centauri b, located 4.2 light-years (1.3 pc) from Earth in the southern constellation of Centaurus. As of March 2014, the least massive exoplanet known is PSR B1257+12 A, which is about twice the mass of the Moon. The most massive planet listed in the NASA Exoplanet Archive is DENIS-P J082303.1−491201 b, about 29 times the mass of Jupiter; according to most definitions of a planet, however, it is too massive to be one and may instead be a brown dwarf. Almost all of the planets detected so far are within the Milky Way, but there have also been a few possible detections of extragalactic planets. The study of planetary habitability also considers a wide range of other factors in determining the suitability of a planet for hosting life. One sign that a planet probably already contains life is an atmosphere with significant amounts of oxygen, since that gas is highly reactive and generally would not last long without constant replenishment; on Earth this replenishment is carried out by photosynthetic organisms. One way to analyse the atmosphere of an exoplanet is through spectroscopy when it transits its star, though this might only be feasible with dim stars like white dwarfs. History and cultural impact The modern concept of extraterrestrial life rests on assumptions that were not commonplace during the early days of astronomy. The first explanations for the celestial objects seen in the night sky were based on mythology. Scholars of ancient Greece were the first to consider that the universe is inherently understandable, rejecting explanations based on incomprehensible supernatural forces, such as the myth of the Sun being pulled across the sky in the chariot of Apollo. They had not yet developed the scientific method and based their ideas on pure thought and speculation, but they developed precursors to it, such as the principle that explanations must be discarded if they contradict observable facts. The discussions of those Greek scholars established many of the pillars that would eventually lead to the idea of extraterrestrial life, such as the recognition that Earth is round rather than flat. The cosmos was first structured in a geocentric model, which held that the Sun and all other celestial bodies revolve around Earth; those bodies, however, were not considered worlds. In the Greek understanding, the world comprised both Earth and the celestial objects with noticeable movements. Anaximander thought that the cosmos was made from apeiron, a substance from which the world was created and to which it would eventually return. 
Eventually two groups emerged: the atomists, who thought that matter on Earth and in the cosmos alike was made of small atoms of the classical elements (earth, water, fire and air), and the Aristotelians, who thought that those elements were exclusive to Earth and that the cosmos was made of a fifth one, the aether. The atomist Epicurus reasoned that the processes that created the world, its animals and its plants should have created other worlds elsewhere, along with their own animals and plants. Aristotle instead held that the element earth naturally falls towards the center of the universe, which would make it impossible for other planets to exist elsewhere: under that reasoning, Earth was not only at the center, it was also the only planet in the universe. Cosmic pluralism, the plurality of worlds, or simply pluralism, describes the philosophical belief in numerous "worlds" in addition to Earth, which might harbor extraterrestrial life. The earliest recorded assertion of extraterrestrial human life is found in the ancient scriptures of Jainism, which mention multiple "worlds" that support human life, including, among others, Bharat Kshetra, Mahavideh Kshetra, Airavat Kshetra, and Hari Kshetra. Medieval Muslim writers like Fakhr al-Din al-Razi and Muhammad al-Baqir supported cosmic pluralism on the basis of the Qur'an. Chaucer's poem The House of Fame engaged in medieval thought experiments that postulated the plurality of worlds. However, those ideas about other worlds differed from the modern understanding of the structure of the universe and did not postulate the existence of planetary systems other than the Solar System: when those authors spoke of other worlds, they meant places located at the center of their own systems, with their own stellar vaults and cosmos surrounding them. The Greek ideas and the disputes between atomists and Aristotelians outlived ancient Greece itself. The Great Library of Alexandria compiled this knowledge, part of which was translated by Islamic scholars and thus survived the end of the Library. Baghdad combined the knowledge of the Greeks, the Indians, the Chinese and its own scholars, and this learning spread through the Byzantine Empire, from where it eventually returned to Europe by the Middle Ages. However, as the Greek atomist doctrine held that the world was created by random movements of atoms, with no need for a creator deity, it became associated with atheism, and the dispute became intertwined with religious ones. Still, the Church did not react to those topics in a homogeneous way; there were stricter and more permissive views within the Church itself. The first known mention of the term 'panspermia' was in the writings of the 5th-century BC Greek philosopher Anaxagoras, who proposed the idea that life exists everywhere. By the late Middle Ages there were many known inaccuracies in the geocentric model, but it was kept in use because naked-eye observations provided limited data. Nicolaus Copernicus started the Copernican Revolution by proposing that the planets revolve around the Sun rather than Earth. His proposal found little acceptance at first because, as he kept the assumption that orbits were perfect circles, his model led to as many inaccuracies as the geocentric one. Tycho Brahe improved the available data with naked-eye observatories that employed highly complex sextants and quadrants. 
Tycho could not make sense of his observations, but Johannes Kepler did: orbits were not perfect circles but ellipses. This insight benefited the Copernican model, which now worked almost perfectly. The invention of the telescope a short time later, perfected by Galileo Galilei, resolved the final doubts, and the paradigm shift was complete. Under this new understanding, the notion of extraterrestrial life became feasible: if Earth is just a planet orbiting a star, there may be planets similar to Earth elsewhere. The astronomical study of distant bodies also showed that physical laws are the same elsewhere in the universe as on Earth, with nothing making our planet truly special. The new ideas met resistance from the Catholic Church. Galileo was tried for advocating the heliocentric model, which was considered heretical, and was forced to recant it. The best-known early-modern proponent of ideas of extraterrestrial life was the Italian philosopher Giordano Bruno, who argued in the 16th century for an infinite universe in which every star is surrounded by its own planetary system. Bruno wrote that other worlds "have no less virtue nor a nature different to that of our earth" and, like Earth, "contain animals and inhabitants". Bruno's belief in the plurality of worlds was one of the charges leveled against him by the Venetian Holy Inquisition, which tried and executed him. The heliocentric model was further strengthened by Sir Isaac Newton's theory of gravity, which provided the mathematics explaining the motions of all things in the universe, including planetary orbits. By this point, the geocentric model had been definitively discarded. By this time, too, the scientific method had become standard, and new discoveries were expected to provide evidence and rigorous mathematical explanations. Science also took a deeper interest in the mechanics of natural phenomena, trying to explain not just the way nature works but also the reasons it works that way. There had been very little actual discussion of extraterrestrial life before this point, as Aristotelian ideas remained influential while geocentrism was still accepted. When geocentrism was finally proved wrong, it meant not only that Earth was not the center of the universe, but also that the lights seen in the sky were not mere lights but physical objects. The notion that life might exist on them as well soon became an ongoing topic of discussion, although one with no practical means of investigation. The possibility of extraterrestrials remained a widespread speculation as scientific discovery accelerated. William Herschel, the discoverer of Uranus, was one of many 18th–19th-century astronomers who believed that the Solar System is populated by alien life. Other scholars of the period who championed "cosmic pluralism" included Immanuel Kant and Benjamin Franklin. At the height of the Enlightenment, even the Sun and Moon were considered candidates for extraterrestrial inhabitants. Speculation about life on Mars increased in the late 19th century, following telescopic observation of apparent Martian canals, which soon, however, turned out to be optical illusions. Despite this, in 1895, American astronomer Percival Lowell published his book Mars, followed by Mars and its Canals in 1906, proposing that the canals were the work of a long-gone civilisation. Spectroscopic analysis of Mars's atmosphere began in earnest in 1894, when U.S. 
astronomer William Wallace Campbell showed that neither water nor oxygen was present in the Martian atmosphere. By 1909, better telescopes and the best perihelic opposition of Mars since 1877 conclusively put an end to the canal hypothesis. As a consequence of the belief in spontaneous generation, little thought was given to the conditions of each celestial body: it was simply assumed that life would thrive anywhere. Spontaneous generation was disproved by Louis Pasteur in the 19th century. Popular belief in thriving alien civilisations elsewhere in the Solar System nevertheless remained strong until Mariner 4 and Mariner 9 provided close images of Mars, which put an end to the idea of Martians and lowered expectations of finding alien life in general. The end of the belief in spontaneous generation forced investigation into the origin of life. Although abiogenesis is the more widely accepted theory, a number of authors reclaimed the term "panspermia" and proposed that life was brought to Earth from elsewhere; among them were Jöns Jacob Berzelius (1834), Kelvin (1871), Hermann von Helmholtz (1879) and, somewhat later, Svante Arrhenius (1903). The science fiction genre, although not yet so named, developed during the late 19th century, and the expansion of extraterrestrials in fiction influenced popular perception of the real-life topic, making people eager to jump to conclusions about the discovery of aliens. Science marched at a slower pace: some discoveries fueled expectations, while others dashed excessive hopes. For example, with the advent of telescopes, most structures seen on the Moon or Mars were immediately attributed to Selenites or Martians, yet more powerful telescopes later revealed all such discoveries to be natural features. A famous case is the Cydonia region of Mars, first imaged by the Viking 1 orbiter: the low-resolution photos showed a rock formation that resembled a human face, but later spacecraft took photos in higher detail that showed there was nothing special about the site. The search for and study of extraterrestrial life became a science of its own, astrobiology. Also known as exobiology, this discipline is pursued by NASA, ESA, INAF, and others. Astrobiology studies life from Earth as well, but with a cosmic perspective. For example, abiogenesis is of interest to astrobiology not because of the origin of life on Earth as such, but for the chances of a similar process taking place on other celestial bodies. Many aspects of life, from its definition to its chemistry, are analyzed as either likely to be similar in all forms of life across the cosmos or native only to Earth. Astrobiology, however, remains constrained by the current lack of extraterrestrial life-forms to study: all life on Earth descends from the same ancestor, and it is hard to infer general characteristics from a group with only a single example to analyse. The 20th century brought great technological advances, speculation about future hypothetical technologies, and an increased basic knowledge of science among the general population thanks to science popularization through the mass media. Public interest in extraterrestrial life, combined with the lack of discoveries by mainstream science, led to the emergence of pseudosciences that provided affirmative, if questionable, answers to the question of the existence of aliens. 
Ufology claims that many unidentified flying objects (UFOs) are spaceships from alien species, and the ancient astronauts hypothesis claims that aliens visited Earth in antiquity and prehistoric times but that the people of the era failed to understand it. Most UFOs or UFO sightings can be readily explained as sightings of Earth-based aircraft (including top-secret aircraft), known astronomical objects, or weather phenomena, or as hoaxes. Looking beyond the pseudosciences, Lewis White Beck strove to elevate the level of public discourse on the topic of extraterrestrial life by tracing the evolution of philosophical thought over the centuries from ancient times into the modern era. His review of the contributions made by Lucretius, Plutarch, Aristotle, Copernicus, Immanuel Kant, John Wilkins, Charles Darwin and Karl Marx demonstrated that even in modern times, humanity could be profoundly influenced in its search for extraterrestrial life by subtle and comforting archetypal ideas largely derived from firmly held religious, philosophical and existential belief systems. On a positive note, however, Beck further argued that even if the search for extraterrestrial life proves unsuccessful, the endeavor itself could have beneficial consequences by assisting humanity in its attempt to actualize superior ways of living here on Earth. By the 21st century it was accepted that, within the Solar System, multicellular life exists only on Earth, but interest in extraterrestrial life increased regardless. This is a result of advances in several sciences. Knowledge of planetary habitability makes it possible to consider in scientific terms the likelihood of finding life on each specific celestial body, as it is now known which features are beneficial and which are harmful to life. Astronomy and telescopes have also improved to the point that exoplanets can be confirmed and even studied, increasing the number of places to search. Life may still exist elsewhere in the Solar System in unicellular form, and advances in spacecraft make it possible to send robots to study samples in situ, with tools of growing complexity and reliability. Although no extraterrestrial life has been found, and life may yet prove to be unique to Earth, there are scientific reasons to suspect that it can exist elsewhere, and technological advances that may detect it if it does. Many scientists are optimistic about the chances of finding alien life. In the words of SETI's Frank Drake, "All we know for sure is that the sky is not littered with powerful microwave transmitters". Drake noted that it is entirely possible that advanced technology results in communication being carried out in some way other than conventional radio transmission. At the same time, the data returned by space probes, and giant strides in detection methods, have allowed science to begin delineating habitability criteria on other worlds and to confirm that planets, at least, are plentiful, though aliens remain a question mark. The Wow! signal, detected in 1977 by a SETI project, remains a subject of speculative debate. On the other hand, other scientists are pessimistic. Jacques Monod wrote that "Man knows at last that he is alone in the indifferent immensity of the universe, whence he has emerged by chance". 
In 2000, geologist and paleontologist Peter Ward and astrobiologist Donald Brownlee published a book entitled Rare Earth: Why Complex Life is Uncommon in the Universe, in which they discussed the Rare Earth hypothesis: the claim that Earth-like life is rare in the universe, whereas microbial life is common. Ward and Brownlee are open to the idea of evolution on other planets that is not based on essential Earth-like characteristics such as DNA and carbon. As for the possible risks, theoretical physicist Stephen Hawking warned in 2010 that humans should not try to contact alien life forms, as aliens might pillage Earth for resources. "If aliens visit us, the outcome would be much as when Columbus landed in America, which didn't turn out well for the Native Americans", he said. Jared Diamond had earlier expressed similar concerns. On 20 July 2015, Hawking and Russian billionaire Yuri Milner, along with the SETI Institute, announced a well-funded effort, called the Breakthrough Initiatives, to expand the search for extraterrestrial life. The group contracted the services of the 100-meter Robert C. Byrd Green Bank Telescope in West Virginia in the United States and the 64-meter Parkes Telescope in New South Wales, Australia. On 13 February 2015, scientists (including Geoffrey Marcy, Seth Shostak, Frank Drake and David Brin) at a convention of the American Association for the Advancement of Science discussed Active SETI and whether transmitting a message to possible intelligent extraterrestrials in the cosmos was a good idea; one result was a statement, signed by many, that a "worldwide scientific, political and humanitarian discussion must occur before any message is sent". Government responses The 1967 Outer Space Treaty and the 1979 Moon Agreement define rules of planetary protection against potentially hazardous extraterrestrial life, and COSPAR also provides guidelines for planetary protection. In 1977, a committee of the United Nations Office for Outer Space Affairs spent a year discussing strategies for interacting with extraterrestrial life or intelligence, but the discussion ended without any conclusions. As of 2010, the UN lacked response mechanisms for the event of extraterrestrial contact. One of NASA's divisions is the Office of Safety and Mission Assurance (OSMA), also known as the Planetary Protection Office; part of its mission is to "rigorously preclude backward contamination of Earth by extraterrestrial life." In 2016, the Chinese government released a white paper detailing its space program. According to the document, one of the research objectives of the program is the search for extraterrestrial life. It is also one of the objectives of China's Five-hundred-meter Aperture Spherical Telescope (FAST) program. In 2020, Dmitry Rogozin, the head of the Russian space agency, said the search for extraterrestrial life is one of the main goals of deep space research, and acknowledged the possibility that primitive life exists on other planets of the Solar System. The French space agency has an office for the study of "unidentified aerospace phenomena", which maintains a publicly accessible database of such phenomena with over 1,600 detailed entries. According to the head of the office, the vast majority of entries have a mundane explanation, but for 25% of them an extraterrestrial origin can be neither confirmed nor ruled out. 
In 2020, the chairman of the Israel Space Agency, Isaac Ben-Israel, stated that the probability of detecting life in outer space is "quite large". He disagrees, however, with his former colleague Haim Eshed, who claimed that there are contacts between an advanced alien civilisation and some of Earth's governments. In fiction Although the idea of extraterrestrial peoples became feasible once astronomy developed enough to understand the nature of planets, such beings were not initially thought of as being any different from humans. With no scientific explanation for the origin of mankind and its relation to other species, there was no reason to expect them to be any other way. This changed with Charles Darwin's 1859 book On the Origin of Species, which proposed the theory of evolution. With the notion that evolution on other planets might take other directions, science fiction authors created bizarre aliens, clearly distinct from humans; a usual way to do so was to add body features from other animals, such as insects or octopuses. The practicalities of costuming and special effects, alongside budget considerations, forced films and TV series to tone down the fantasy, but these limitations have lessened since the 1990s with the advent of computer-generated imagery (CGI), and further as CGI became more effective and less expensive. Real-life events sometimes captivate people's imagination and influence works of fiction. For example, during the Barney and Betty Hill incident, the first recorded claim of an alien abduction, the couple reported that they were abducted and experimented on by aliens with oversized heads, big eyes, pale grey skin, and small noses, a description that eventually became the grey alien archetype widely used in works of fiction.
======================================== |
[SOURCE: https://en.wikipedia.org/wiki/Eric_(software)] | [TOKENS: 666] |
eric (software) eric is a free integrated development environment (IDE) used for computer programming. As a full-featured IDE, it provides by default all the tools needed for writing code and for the professional management of a software project. eric is written in the programming language Python, and its primary use is for developing software written in Python. It is usable for development of any combination of Python 3 or Python 2, Qt 5 or Qt 4 and PyQt 5 or PyQt 4 projects, on Linux, macOS and Microsoft Windows platforms. Characteristics eric is written in Python and uses the PyQt Python bindings for the Qt GUI toolkit. By design, eric acts as a front end for several programs, for example the QScintilla editor widget. Prior to the release of eric version 5.5.0, eric 4 and eric 5 coexisted and were maintained simultaneously: eric 4 was the variant for writing software in Python 2, and eric 5 the variant for writing software in Python 3. With the release of version 5.5.0, both variants were merged into one, so that all versions from 5.5.0 onward support writing software in Python 2 as well as in Python 3, making the separate development lines of eric 4 and eric 5 obsolete. Those two lines are no longer maintained; the last versions released prior to the merge into 5.5.0 were 4.5.25 and 5.4.7. Releases Until 2016, eric used a software versioning scheme with a three-sequence identifier, e.g. 5.0.1. The first sequence is the major version number, increased when there are significant jumps in functionality; the second is the minor number, incremented when some features or significant fixes have been added; and the third is the revision number, incremented when minor bugs are fixed or minor features added. From late 2016, the version numbers show the year and month of release, e.g. 16.11 for November 2016. eric follows the development philosophy of "release early, release often", loosely following a time-based release schedule: currently, a revision version is released around the first weekend of every month, and a minor version is released annually, in most cases approximately between December and February. The version history of eric starts from version 4.0.0; only major (e.g. 6.0.0) and minor (e.g. 6.1.0) releases are tracked here, with revision releases (e.g. 6.0.1) omitted. Name Several allusions are made to the British comedy group Monty Python, after which the Python programming language is named. "eric" alludes to Eric Idle, a member of the group, as does IDLE, the standard Python IDE shipped with most distributions. Criticism The eric IDE does not currently feature an integrated toolchain.
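As a small illustration of the two numbering schemes just described, the sketch below distinguishes the pre-2016 three-sequence identifiers from the later year.month identifiers. The function name and the cut-off rule are hypothetical, introduced only for this example; eric itself ships no such API.

```python
def classify_eric_version(version: str) -> str:
    """Classify an eric version string as three-sequence (pre-2016,
    e.g. '5.0.1') or calendar-based (late 2016 onward, e.g. '16.11').

    Hypothetical helper for illustration only.
    """
    parts = version.split(".")
    if len(parts) == 3:
        major, minor, revision = (int(p) for p in parts)
        return f"three-sequence: major={major}, minor={minor}, revision={revision}"
    if len(parts) == 2:
        year, month = (int(p) for p in parts)
        # Assumption: calendar versions begin at 16.11 (November 2016).
        if year >= 16 and 1 <= month <= 12:
            return f"calendar: released 20{year:02d}-{month:02d}"
    raise ValueError(f"unrecognized eric version: {version!r}")

print(classify_eric_version("5.0.1"))  # three-sequence: major=5, minor=0, revision=1
print(classify_eric_version("16.11"))  # calendar: released 2016-11
```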
======================================== |
[SOURCE: https://en.wikipedia.org/wiki/Mars#cite_note-lightman97-307] | [TOKENS: 11899] |
Mars Mars is the fourth planet from the Sun. It is also known as the "Red Planet" for its orange-red appearance. Mars is a desert-like rocky planet with a tenuous atmosphere that is primarily carbon dioxide (CO2). At the average surface level the atmospheric pressure is a few thousandths of Earth's, atmospheric temperatures range from −153 to 20 °C (−243 to 68 °F), and cosmic radiation is high. Mars retains some water, in the ground as well as thinly in the atmosphere, forming cirrus clouds, fog, frost, large polar regions of permafrost and ice caps (with seasonal CO2 snow), but no bodies of liquid surface water. Its surface gravity is roughly a third of Earth's, or double that of the Moon. Its diameter, 6,779 km (4,212 mi), is about half of Earth's, or twice the Moon's, and its surface area is close to the total area of Earth's dry land. Fine dust is prevalent across the surface and in the atmosphere; under the low Martian gravity it is picked up and spread even by the weak wind of the tenuous atmosphere. The terrain of Mars roughly follows a north-south divide, the Martian dichotomy, with the northern hemisphere mainly consisting of relatively flat, low-lying plains, and the southern hemisphere of cratered highlands. Geologically, the planet is fairly active, with marsquakes trembling underneath the ground, but it also hosts many enormous extinct volcanoes (the tallest is Olympus Mons, 21.9 km or 13.6 mi tall), as well as one of the largest canyons in the Solar System (Valles Marineris, 4,000 km or 2,500 mi long). Mars has two natural satellites that are small and irregular in shape: Phobos and Deimos. With a significant axial tilt of 25 degrees, Mars experiences seasons like Earth (which has an axial tilt of 23.5 degrees). A Martian solar year is equal to 1.88 Earth years (687 Earth days), and a Martian solar day (sol) is equal to 24.6 hours. Mars formed along with the other planets approximately 4.5 billion years ago. During the Martian Noachian period (4.5 to 3.5 billion years ago), its surface was marked by meteor impacts, valley formation, erosion, the possible presence of water oceans, and the loss of its magnetosphere. The Hesperian period (beginning 3.5 billion years ago and ending 3.3–2.9 billion years ago) was dominated by widespread volcanic activity and by flooding that carved immense outflow channels. The Amazonian period, which continues to the present, has dominated geological processes ever since. Because of Mars's geological history, the possibility of past or present life on Mars remains an area of active scientific investigation, with some possible traces awaiting further examination. Visible with the naked eye in Earth's sky as a red wandering star, Mars has been observed throughout history, acquiring diverse associations in different cultures. In 1963 the first flight to Mars took place with Mars 1, but communication was lost en route. The first successful flyby exploration of Mars was conducted in 1965 with Mariner 4. In 1971 Mariner 9 entered orbit around Mars, the first spacecraft to orbit any body other than the Moon, the Sun or Earth; following in the same year were the first uncontrolled impact (Mars 2) and the first successful landing (Mars 3) on Mars. Probes have been active on Mars continuously since 1997. At times, more than ten probes have operated simultaneously in orbit or on the surface, more than at any other planet beyond Earth. 
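The bulk figures quoted above, a diameter about half of Earth's and surface gravity roughly a third of Earth's, are consistent with Newton's law of gravitation, g = GM/r². A quick check in Python, using standard mass and radius values that are assumed here rather than taken from the text:

```python
# Surface gravity follows g = G*M / r^2, so the Mars/Earth ratio
# depends only on the mass and radius ratios.
G = 6.674e-11  # gravitational constant, m^3 kg^-1 s^-2

M_EARTH, R_EARTH = 5.972e24, 6.371e6  # kg, m (assumed standard values)
M_MARS, R_MARS = 6.417e23, 3.3895e6   # kg, m (assumed standard values)

g_earth = G * M_EARTH / R_EARTH**2    # ~9.81 m/s^2
g_mars = G * M_MARS / R_MARS**2       # ~3.73 m/s^2

print(f"Earth: {g_earth:.2f} m/s^2, Mars: {g_mars:.2f} m/s^2")
print(f"Ratio: {g_mars / g_earth:.2f}")  # ~0.38, i.e. roughly a third
```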
Mars is an often-proposed target for future crewed exploration missions, though no such mission is currently planned. Natural history Scientists have theorized that during the Solar System's formation, Mars was created as the result of a random process of runaway accretion of material from the protoplanetary disk that orbited the Sun. Mars has many distinctive chemical features caused by its position in the Solar System. Elements with comparatively low boiling points, such as chlorine, phosphorus, and sulfur, are much more common on Mars than on Earth; these elements were probably pushed outward by the young Sun's energetic solar wind. After the formation of the planets, the inner Solar System may have been subjected to the so-called Late Heavy Bombardment. About 60% of the surface of Mars shows a record of impacts from that era, and much of the remaining surface is probably underlain by immense impact basins caused by those events. However, more recent modeling has disputed the existence of the Late Heavy Bombardment. There is evidence of an enormous impact basin in the Northern Hemisphere of Mars, spanning 10,600 by 8,500 kilometres (6,600 by 5,300 mi), or roughly four times the size of the Moon's South Pole–Aitken basin, which would be the largest impact basin yet discovered if confirmed. It has been hypothesized that the basin was formed when Mars was struck by a Pluto-sized body about four billion years ago. The event, thought to be the cause of the Martian hemispheric dichotomy, created the smooth Borealis basin that covers 40% of the planet. A 2023 study presents evidence, based on the orbital inclination of Deimos (a small moon of Mars), that Mars may once have had a ring system 3.5 to 4 billion years ago. This ring system may have formed from a moon 20 times more massive than Phobos that orbited Mars billions of years ago; Phobos would be a remnant of that ring. Epochs: the geological history of Mars can be split into many periods, but the three primary ones are the Noachian, the Hesperian, and the Amazonian, described above. Geological activity is still taking place on Mars. The Athabasca Valles is home to sheet-like lava flows created about 200 million years ago. Water flows in the grabens called the Cerberus Fossae occurred less than 20 million years ago, indicating equally recent volcanic intrusions. The Mars Reconnaissance Orbiter has captured images of avalanches. Physical characteristics Mars is approximately half the diameter of Earth, or twice that of the Moon, with a surface area only slightly less than the total area of Earth's dry land. Mars is less dense than Earth, having about 15% of Earth's volume and 11% of Earth's mass, resulting in about 38% of Earth's surface gravity. Mars is the only presently known example of a desert planet, a rocky planet with a surface akin to that of Earth's deserts. The red-orange appearance of the Martian surface is caused by iron(III) oxide (nanophase Fe2O3) and the iron(III) oxide-hydroxide mineral goethite. It can look like butterscotch; other common surface colors include golden, brown, tan, and greenish, depending on the minerals present. Like Earth, Mars is differentiated into a dense metallic core overlaid by less dense rocky layers. The outermost layer is the crust, on average about 42–56 kilometres (26–35 mi) thick, with a minimum thickness of 6 kilometres (3.7 mi) in Isidis Planitia and a maximum of 117 kilometres (73 mi) in the southern Tharsis plateau. For comparison, Earth's crust averages 27.3 ± 4.8 km in thickness. 
The most abundant elements in the Martian crust are silicon, oxygen, iron, magnesium, aluminum, calcium, and potassium. Mars is confirmed to be seismically active; in 2019 it was reported that InSight had detected and recorded over 450 marsquakes and related events. Beneath the crust is a silicate mantle responsible for many of the tectonic and volcanic features on the planet's surface. The upper Martian mantle is a low-velocity zone, where the velocity of seismic waves is lower than in the surrounding depth intervals. The mantle appears to be rigid down to a depth of about 250 km, giving Mars a very thick lithosphere compared to Earth. Below this the mantle gradually becomes more ductile, and the seismic wave velocity starts to rise again. The Martian mantle does not appear to have a thermally insulating layer analogous to Earth's lower mantle; instead, below 1050 km in depth, it becomes mineralogically similar to Earth's transition zone. At the bottom of the mantle lies a basal liquid silicate layer approximately 150–180 km thick. The Martian mantle appears to be highly heterogeneous, with dense fragments up to 4 km across, likely injected deep into the planet by colossal impacts roughly 4.5 billion years ago. High-frequency waves from eight marsquakes slowed as they passed these localized regions, and modeling indicates the heterogeneities are compositionally distinct debris, preserved because Mars lacks plate tectonics and has a sluggishly convecting interior that prevents complete homogenization. Mars's iron and nickel core is at least partially molten and may have a solid inner core. It spans around half of Mars's radius, approximately 1650–1675 km, and is enriched in light elements such as sulfur, oxygen, carbon, and hydrogen. The temperature of the core is estimated to be 2000–2400 K, compared to 5400–6230 K for Earth's solid inner core. In 2025, based on data from the InSight lander, a group of researchers reported the detection of a solid inner core 613 ± 67 kilometres (381 ± 42 mi) in radius. Mars is a terrestrial planet with a surface that consists of minerals containing silicon and oxygen, metals, and other elements that typically make up rock. The Martian surface is primarily composed of tholeiitic basalt, although parts are more silica-rich than typical basalt and may be similar to andesitic rocks on Earth, or to silica glass. Regions of low albedo suggest concentrations of plagioclase feldspar, with northern low-albedo regions displaying higher than normal concentrations of sheet silicates and high-silicon glass. Parts of the southern highlands include detectable amounts of high-calcium pyroxenes. Localized concentrations of hematite and olivine have been found. Much of the surface is deeply covered by finely grained iron(III) oxide dust. The Phoenix lander returned data showing Martian soil to be slightly alkaline and containing elements such as magnesium, sodium, potassium and chlorine. These nutrients are found in soils on Earth and are necessary for plant growth. Experiments performed by the lander showed that the Martian soil has a basic pH of 7.7 and contains 0.6% perchlorate by weight, a concentration that is toxic to humans. Streaks are common across Mars, and new ones appear frequently on steep slopes of craters, troughs, and valleys. The streaks are dark at first and lighten with age. They can start in a tiny area and then spread out for hundreds of metres, and have been seen to follow the edges of boulders and other obstacles in their path. 
The commonly accepted hypotheses include that the streaks are dark underlying layers of soil revealed by avalanches of bright dust or by dust devils. Several other explanations have been put forward, including some that involve water or even the growth of organisms. Environmental radiation levels on the surface average 0.64 millisieverts per day, significantly less than the 1.84 millisieverts per day, or 22 millirads per day, experienced during the flight to and from Mars. For comparison, radiation levels in low Earth orbit, where Earth's space stations orbit, are around 0.5 millisieverts per day. Hellas Planitia has the lowest surface radiation, at about 0.342 millisieverts per day, and features lava tubes southwest of Hadriacus Mons with levels potentially as low as 0.064 millisieverts per day, comparable to radiation levels during flights on Earth. Although Mars has no evidence of a structured global magnetic field, observations show that parts of the planet's crust have been magnetized, suggesting that alternating polarity reversals of its dipole field occurred in the past. This paleomagnetism of magnetically susceptible minerals is similar to the alternating bands found on Earth's ocean floors. One hypothesis, published in 1999 and re-examined in October 2005 (with the help of the Mars Global Surveyor), is that these bands suggest plate tectonic activity on Mars four billion years ago, before the planetary dynamo ceased to function and the planet's magnetic field faded. Geography and features Although better remembered for mapping the Moon, Johann Heinrich von Mädler and Wilhelm Beer were the first areographers. They began by establishing that most of Mars's surface features were permanent and by determining the planet's rotation period more precisely. In 1840, Mädler combined ten years of observations and drew the first map of Mars. Features on Mars are named from a variety of sources. Albedo features are named for classical mythology. Craters larger than roughly 50 km are named for deceased scientists, writers, and others who have contributed to the study of Mars; smaller craters are named for towns and villages of the world with populations of less than 100,000. Large valleys are named for the word "Mars" or "star" in various languages; smaller valleys are named for rivers. Large albedo features retain many of the older names but are often updated to reflect new knowledge of the nature of the features. For example, Nix Olympica (the snows of Olympus) has become Olympus Mons (Mount Olympus). The surface of Mars as seen from Earth is divided into two kinds of areas with differing albedo. The paler plains covered with dust and sand rich in reddish iron oxides were once thought of as Martian "continents" and given names like Arabia Terra (land of Arabia) or Amazonis Planitia (Amazonian plain). The dark features were thought to be seas, hence their names Mare Erythraeum, Mare Sirenum and Aurorae Sinus. The largest dark feature seen from Earth is Syrtis Major Planum. The permanent northern polar ice cap is named Planum Boreum; the southern cap is called Planum Australe. Mars's equator is defined by its rotation, but the location of its prime meridian was specified, as was Earth's (at Greenwich), by the choice of an arbitrary point; Mädler and Beer selected a line for their first maps of Mars in 1830. 
After the spacecraft Mariner 9 provided extensive imagery of Mars in 1972, a small crater (later called Airy-0), located in the Sinus Meridiani ("Middle Bay" or "Meridian Bay"), was chosen by Merton E. Davies, Harold Masursky, and Gérard de Vaucouleurs to define 0.0° longitude, coinciding with the original selection. Because Mars has no oceans, and hence no "sea level", a zero-elevation surface had to be selected as a reference level; this is called the areoid of Mars, analogous to the terrestrial geoid. Zero altitude was defined as the height at which the atmospheric pressure is 610.5 Pa (6.105 mbar). This pressure corresponds to the triple point of water and is about 0.6% of the sea-level surface pressure on Earth (0.006 atm). For mapping purposes, the United States Geological Survey divides the surface of Mars into thirty cartographic quadrangles, each named for a classical albedo feature it contains. In April 2023, The New York Times reported an updated global map of Mars based on images from the Hope spacecraft. A related, but much more detailed, global Mars map was released by NASA on 16 April 2023. The vast upland region Tharsis contains several massive volcanoes, including the shield volcano Olympus Mons. The edifice is over 600 km (370 mi) wide. Because the mountain is so large, with complex structure at its edges, assigning it a definite height is difficult. Its local relief, from the foot of the cliffs that form its northwest margin to its peak, is over 21 km (13 mi), a little over twice the height of Mauna Kea as measured from its base on the ocean floor. The total elevation change from the plains of Amazonis Planitia, over 1,000 km (620 mi) to the northwest, to the summit approaches 26 km (16 mi), roughly three times the height of Mount Everest, which by comparison stands at just over 8.8 kilometres (5.5 mi). Consequently, Olympus Mons is either the tallest or second-tallest mountain in the Solar System; the only known mountain that might be taller is the Rheasilvia peak on the asteroid Vesta, at 20–25 km (12–16 mi). The dichotomy of Martian topography is striking: northern plains flattened by lava flows contrast with the southern highlands, pitted and cratered by ancient impacts. It is possible that, four billion years ago, the Northern Hemisphere of Mars was struck by an object one-tenth to two-thirds the size of Earth's Moon. If this is the case, the Northern Hemisphere of Mars would be the site of an impact crater 10,600 by 8,500 kilometres (6,600 by 5,300 mi) in size, or roughly the area of Europe, Asia, and Australia combined, surpassing Utopia Planitia and the Moon's South Pole–Aitken basin as the largest impact crater in the Solar System. Mars is scarred by 43,000 impact craters with a diameter of 5 kilometres (3.1 mi) or greater. The largest exposed crater is Hellas, which is 2,300 kilometres (1,400 mi) wide and 7,000 metres (23,000 ft) deep, and is a light albedo feature clearly visible from Earth. There are other notable impact features, such as Argyre, around 1,800 kilometres (1,100 mi) in diameter, and Isidis, around 1,500 kilometres (930 mi) in diameter. Owing to the smaller mass and size of Mars, the probability of an object colliding with the planet is about half that of Earth. However, Mars is located closer to the asteroid belt, giving it an increased chance of being struck by material from that source. Mars is also more likely to be struck by short-period comets, i.e., those that lie within the orbit of Jupiter. 
Martian craters can have a morphology that suggests the ground became wet after the meteor impact. The large canyon Valles Marineris (Latin for "Mariner Valleys", also known as Agathodaemon in the old canal maps) has a length of 4,000 kilometres (2,500 mi) and a depth of up to 7 kilometres (4.3 mi). The length of Valles Marineris is equivalent to the length of Europe, and the canyon extends across one-fifth of the circumference of Mars. By comparison, the Grand Canyon on Earth is only 446 kilometres (277 mi) long and nearly 2 kilometres (1.2 mi) deep. Valles Marineris was formed by the swelling of the Tharsis area, which caused the crust in the region of Valles Marineris to collapse. In 2012, it was proposed that Valles Marineris is not just a graben but a plate boundary where 150 kilometres (93 mi) of transverse motion has occurred, possibly making Mars a planet with a two-tectonic-plate arrangement. Images from the Thermal Emission Imaging System (THEMIS) aboard NASA's Mars Odyssey orbiter have revealed seven possible cave entrances on the flanks of the volcano Arsia Mons. The caves, named after loved ones of their discoverers, are collectively known as the "seven sisters". Cave entrances measure from 100 to 252 metres (328 to 827 ft) wide, and they are estimated to be at least 73 to 96 metres (240 to 315 ft) deep. Because light does not reach the floor of most of the caves, they may extend much deeper than these lower estimates and widen below the surface. "Dena" is the only exception; its floor is visible and was measured to be 130 metres (430 ft) deep. The interiors of these caverns may be protected from the micrometeoroids, UV radiation, solar flares and high-energy particles that bombard the planet's surface. Martian geysers (or CO2 jets) are putative sites of small gas and dust eruptions that occur in the south polar region of Mars during the spring thaw. "Dark dune spots" and "spiders", or araneiforms, are the two most visible types of features ascribed to these eruptions. Dust of a given size settles out of the thinner Martian atmosphere sooner than it would on Earth: the dust suspended by the 2001 global dust storms on Mars remained in the Martian atmosphere for only 0.6 years, whereas the dust from Mount Pinatubo took about two years to settle. Under current Martian conditions, however, the mass movements involved are generally much smaller than on Earth. Even the 2001 global dust storms moved only the equivalent of a very thin dust layer, about 3 μm thick if deposited with uniform thickness between 58° north and south of the equator. Dust deposition at the two rover sites has proceeded at a rate of about the thickness of a grain every 100 sols. Atmosphere Mars lost its magnetosphere 4 billion years ago, possibly because of numerous asteroid strikes, so the solar wind interacts directly with the Martian ionosphere, lowering the atmospheric density by stripping away atoms from the outer layer. Both Mars Global Surveyor and Mars Express have detected ionized atmospheric particles trailing off into space behind Mars, and this atmospheric loss is being studied by the MAVEN orbiter. Compared to Earth's, the atmosphere of Mars is quite rarefied. Atmospheric pressure on the surface today ranges from a low of 30 Pa (0.0044 psi) on Olympus Mons to over 1,155 Pa (0.1675 psi) in Hellas Planitia, with a mean surface-level pressure of 600 Pa (0.087 psi). The highest atmospheric density on Mars is equal to that found 35 kilometres (22 mi) above Earth's surface. 
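The pressure extremes just quoted can be related to elevation with the barometric formula, using the 610.5 Pa zero-elevation level defined earlier. The sketch below is a rough illustration only: it assumes an isothermal atmosphere with a scale height of about 10.8 km (the figure given in the next paragraph), an approximation that degrades over large altitude ranges.

```python
import math

P_REF = 610.5        # Pa, pressure at the Martian zero-elevation areoid
SCALE_HEIGHT = 10.8  # km, approximate scale height of the Martian atmosphere

def elevation_from_pressure(p_pa: float) -> float:
    """Estimate areoid-referenced elevation (km) from surface pressure,
    assuming an isothermal atmosphere: p = P_REF * exp(-h / H)."""
    return -SCALE_HEIGHT * math.log(p_pa / P_REF)

# Hellas Planitia, quoted above at just over 1,155 Pa:
print(f"Hellas floor: {elevation_from_pressure(1155):.1f} km")  # ~ -6.9 km

# Olympus Mons summit, quoted above at about 30 Pa:
print(f"Olympus Mons: {elevation_from_pressure(30):.1f} km")    # ~ +32 km
```

The Hellas estimate of about −6.9 km agrees well with the basin's quoted 7 km depth; the Olympus Mons estimate overshoots the summit's roughly 21 km of relief, showing the limits of the isothermal assumption.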
The resulting mean surface pressure is only 0.6% of Earth's 101.3 kPa (14.69 psi). The scale height of the atmosphere is about 10.8 kilometres (6.7 mi), higher than Earth's 6 kilometres (3.7 mi) because the surface gravity of Mars is only about 38% of Earth's. The atmosphere of Mars consists of about 96% carbon dioxide, 1.93% argon and 1.89% nitrogen, along with traces of oxygen and water. The atmosphere is quite dusty, containing particulates about 1.5 μm in diameter which give the Martian sky a tawny color when seen from the surface. It may take on a pink hue due to iron oxide particles suspended in it. Despite repeated detections of methane on Mars, there is no scientific consensus as to its origin. One suggestion is that methane exists on Mars and that its concentration fluctuates seasonally. The methane could be produced by a non-biological process such as serpentinization, involving water, carbon dioxide, and the mineral olivine, which is known to be common on Mars, or by Martian life. Compared to Earth, the higher concentration of atmospheric CO2 and the lower surface pressure may be why sound is attenuated more on Mars, where natural sources are rare apart from the wind. Using acoustic recordings collected by the Perseverance rover, researchers concluded that the speed of sound there is approximately 240 m/s for frequencies below 240 Hz, and 250 m/s for those above. Auroras have been detected on Mars. Because Mars lacks a global magnetic field, the types and distribution of auroras there differ from those on Earth; rather than being mostly restricted to polar regions, as is the case on Earth, a Martian aurora can encompass the planet. In September 2017, NASA reported that radiation levels on the surface of Mars had temporarily doubled, associated with an aurora 25 times brighter than any observed earlier, due to a massive and unexpected solar storm in the middle of the month. Mars has seasons that alternate between its northern and southern hemispheres, similar to those on Earth. Additionally, the orbit of Mars has a large eccentricity compared to Earth's: Mars approaches perihelion when it is summer in the southern hemisphere and winter in the northern, and aphelion when it is winter in the southern hemisphere and summer in the northern. As a result, the seasons in the southern hemisphere are more extreme and the seasons in the northern are milder than would otherwise be the case. Summer temperatures in the south can be warmer than the equivalent summer temperatures in the north by up to 30 °C (54 °F). Martian surface temperatures vary from lows of about −110 °C (−166 °F) to highs of up to 35 °C (95 °F) in equatorial summer. The wide range in temperatures is due to the thin atmosphere, which cannot store much solar heat, the low atmospheric pressure (about 1% of Earth's), and the low thermal inertia of Martian soil. The planet is 1.52 times as far from the Sun as Earth, resulting in just 43% of the amount of sunlight. Mars has the largest dust storms in the Solar System, with winds reaching speeds of over 160 km/h (100 mph). These can vary from a storm over a small area to gigantic storms that cover the entire planet. They tend to occur when Mars is closest to the Sun and have been shown to increase the global temperature. The seasons also produce a covering of dry ice on the polar ice caps. Hydrology While Mars contains large amounts of water, most of it is dust-covered water ice at the Martian polar ice caps. 
The volume of water ice in the south polar ice cap, if melted, would be enough to cover most of the surface of the planet to a depth of 11 metres (36 ft). Water in its liquid form cannot persist on the surface due to Mars's low atmospheric pressure, which is less than 1% of Earth's. Only at the lowest elevations are the pressure and temperature high enough for liquid water to exist for short periods. Although little water is present in the atmosphere, there is enough to produce clouds of water ice and occasional snow and frost, often mixed with snow of carbon dioxide dry ice. Landforms visible on Mars strongly suggest that liquid water has existed on the planet's surface. Huge linear swathes of scoured ground, known as outflow channels, cut across the surface in about 25 places. These are thought to be a record of erosion caused by the catastrophic release of water from subsurface aquifers, though some of these structures have been hypothesized to result from the action of glaciers or lava. One of the larger examples, Ma'adim Vallis, is 700 kilometres (430 mi) long, much longer than the Grand Canyon, with a width of 20 kilometres (12 mi) and a depth of 2 kilometres (1.2 mi) in places. It is thought to have been carved by flowing water early in Mars's history. The youngest of these channels is thought to have formed only a few million years ago. Elsewhere, particularly in the oldest areas of the Martian surface, finer-scale dendritic networks of valleys are spread across significant proportions of the landscape. Features of these valleys and their distribution strongly imply that they were carved by runoff resulting from precipitation in early Mars history. Subsurface water flow and groundwater sapping may play important subsidiary roles in some networks, but precipitation was probably the root cause of the incision in almost all cases. Along crater and canyon walls, there are thousands of features that resemble terrestrial gullies. The gullies tend to be in the highlands of the Southern Hemisphere and to face the Equator; all are poleward of 30° latitude. A number of authors have suggested that their formation involves liquid water, probably from melting ice, although others have argued for formation mechanisms involving carbon dioxide frost or the movement of dry dust. No partially degraded gullies have formed by weathering, and no superimposed impact craters have been observed, indicating that these are young features, possibly still active. Other geological features, such as deltas and alluvial fans preserved in craters, are further evidence of warmer, wetter conditions at an interval or intervals in earlier Mars history. Such conditions necessarily require the widespread presence of crater lakes across a large proportion of the surface, for which there is independent mineralogical, sedimentological and geomorphological evidence. Further evidence that liquid water once existed on the surface of Mars comes from the detection of specific minerals such as hematite and goethite, both of which sometimes form in the presence of water. The chemical signature of water vapor on Mars was first unequivocally demonstrated in 1963 by spectroscopy using an Earth-based telescope. In 2004, Opportunity detected the mineral jarosite, which forms only in the presence of acidic water, showing that water once existed on Mars. 
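The 11-metre figure at the start of this passage is a simple volume-over-area estimate. In the sketch below, the south polar cap's ice volume of roughly 1.6 million cubic kilometres is an assumed value, not given in the text; only the planet's radius and the spread-it-evenly arithmetic come into play.

```python
import math

R_MARS_KM = 3389.5      # mean radius of Mars, km (assumed standard value)
ICE_VOLUME_KM3 = 1.6e6  # south polar cap ice volume, km^3 (assumed)

# Total surface area of a sphere: 4 * pi * r^2 (~1.44e8 km^2 for Mars)
surface_area_km2 = 4 * math.pi * R_MARS_KM**2

# Spreading the melted volume uniformly over the whole planet:
layer_km = ICE_VOLUME_KM3 / surface_area_km2
print(f"Equivalent global layer: {layer_km * 1000:.0f} m")  # ~11 m
```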
The Spirit rover found concentrated deposits of silica in 2007 that indicated wet conditions in the past, and in December 2011 the mineral gypsum, which also forms in the presence of water, was found on the surface by NASA's Mars rover Opportunity. It is estimated that the amount of water in the upper mantle of Mars, present as hydroxyl ions contained within Martian minerals, is equal to or greater than that of Earth, at 50–300 parts per million of water, which is enough to cover the entire planet to a depth of 200–1,000 metres (660–3,280 ft). On 18 March 2013, NASA reported evidence from instruments on the Curiosity rover of mineral hydration, likely hydrated calcium sulfate, in several rock samples, including the broken fragments of "Tintina" rock and "Sutton Inlier" rock, as well as in veins and nodules in other rocks like "Knorr" rock and "Wernicke" rock. Analysis using the rover's DAN instrument provided evidence of subsurface water, amounting to as much as 4% water content, down to a depth of 60 centimetres (24 in), during the rover's traverse from the Bradbury Landing site to the Yellowknife Bay area in the Glenelg terrain. In September 2015, NASA announced that it had found strong evidence of hydrated brine flows in recurring slope lineae, based on spectrometer readings of the darkened areas of slopes. These streaks flow downhill in Martian summer, when the temperature is above −23 °C, and freeze at lower temperatures. These observations supported earlier hypotheses, based on the timing of formation and their rate of growth, that the dark streaks resulted from water flowing just below the surface. However, later work suggested that the lineae may instead be dry, granular flows, with at most a limited role for water in initiating the process. A definitive conclusion about the presence, extent, and role of liquid water on the Martian surface remains elusive. Researchers suspect that much of the low northern plains of the planet were once covered with an ocean hundreds of metres deep, though this theory remains controversial. In March 2015, scientists stated that such an ocean might have been the size of Earth's Arctic Ocean. This finding was derived from the ratio of deuterium to protium in the modern Martian atmosphere compared to that ratio on Earth. The amount of Martian deuterium (D/H = (9.3 ± 1.7) × 10⁻⁴) is five to seven times the amount on Earth (D/H = 1.56 × 10⁻⁴), suggesting that ancient Mars had significantly higher levels of water. Results from the Curiosity rover had previously found a high ratio of deuterium in Gale Crater, though not high enough to suggest the former presence of an ocean. Other scientists caution that these results have not been confirmed, and point out that Martian climate models have not yet shown that the planet was warm enough in the past to support bodies of liquid water. Near the northern polar cap is the 81.4-kilometre-wide (50.6 mi) Korolev Crater, which the Mars Express orbiter found to be filled with approximately 2,200 cubic kilometres (530 cu mi) of water ice. In November 2016, NASA reported finding a large amount of underground ice in the Utopia Planitia region; the volume of water detected has been estimated to be equivalent to that of Lake Superior (about 12,100 cubic kilometres). During observations from 2018 through 2021, the ExoMars Trace Gas Orbiter spotted indications of water, probably subsurface ice, in the Valles Marineris canyon system.
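A simplified mass-balance argument shows why the deuterium enrichment quoted above points to a much larger ancient water inventory (an idealized sketch added here: it assumes escaping hydrogen carries essentially no deuterium and ignores fractionation details, so the figure is only indicative):

\[
\frac{(\mathrm{D/H})_{\mathrm{now}}}{(\mathrm{D/H})_{\mathrm{initial}}} \approx \frac{W_{\mathrm{initial}}}{W_{\mathrm{now}}} \approx \frac{9.3\times10^{-4}}{1.56\times10^{-4}} \approx 6,
\]

i.e. in this limit only about one-sixth of the original water inventory would remain, consistent with the five-to-sevenfold enrichment stated above.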
Orbital motion Mars's average distance from the Sun is roughly 230 million km (143 million mi), and its orbital period is 687 (Earth) days. The solar day (or sol) on Mars is only slightly longer than an Earth day: 24 hours, 39 minutes, and 35.244 seconds. A Martian year is equal to 1.8809 Earth years, or 1 year, 320 days, and 18.2 hours. The gravitational potential difference, and thus the delta-v needed to transfer between Earth and Mars, is the second lowest of any planet for Earth, after Venus. The axial tilt of Mars is 25.19° relative to its orbital plane, which is similar to the axial tilt of Earth. As a result, Mars has seasons like Earth, though on Mars they are nearly twice as long because its orbital period is that much longer. In the present day, the orientation of the north pole of Mars is close to the star Deneb. Mars has a relatively pronounced orbital eccentricity of about 0.09; of the seven other planets in the Solar System, only Mercury has a larger orbital eccentricity. It is known that in the past, Mars has had a much more circular orbit; around 1.35 million Earth years ago, it had an eccentricity of roughly 0.002, much less than that of Earth today. Mars's cycle of eccentricity is 96,000 Earth years, compared to Earth's cycle of 100,000 years. Mars makes its closest approach to Earth near opposition, which recurs with a synodic period averaging 779.94 days, although the interval between successive oppositions can range from 764 to 812 days. Opposition should not be confused with conjunction, when Earth and Mars are on opposite sides of the Solar System and form a straight line through the Sun. The distance at close approach varies between about 54 and 103 million km (34 and 64 million mi) due to the planets' elliptical orbits, which causes comparable variation in angular size. At their farthest, Mars and Earth can be as much as 401 million km (249 million mi) apart. Mars thus comes into opposition from Earth about every 2.1 years. The planets come into opposition near Mars's perihelion in 2003, 2018 and 2035, with the 2020 and 2033 events being particularly close to perihelic opposition. The mean apparent magnitude of Mars is +0.71, with a standard deviation of 1.05. Because the orbit of Mars is eccentric, its magnitude at opposition can range from about −3.0 to −1.4. The minimum brightness is magnitude +1.86, when the planet is near aphelion and in conjunction with the Sun. At its brightest, Mars (along with Jupiter) is second only to Venus in apparent brightness. Mars usually appears distinctly yellow, orange, or red. When farthest away from Earth, it is more than seven times farther away than when it is closest. Mars is usually close enough for particularly good viewing once or twice at 15-year or 17-year intervals. Optical ground-based telescopes are typically limited to resolving features about 300 kilometres (190 mi) across when Earth and Mars are closest, because of Earth's atmosphere. As Mars approaches opposition, it begins a period of retrograde motion, meaning it appears to move backwards in a looping curve with respect to the background stars. This retrograde motion lasts for about 72 days, and Mars reaches its peak apparent brightness in the middle of this interval.
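The 779.94-day opposition cycle follows directly from the two orbital periods quoted in this section, via the standard synodic-period relation:

\[
\frac{1}{S} = \frac{1}{P_{\oplus}} - \frac{1}{P_{\mathrm{Mars}}} = \frac{1}{365.25\ \mathrm{d}} - \frac{1}{687\ \mathrm{d}} \;\;\Rightarrow\;\; S \approx 779.9\ \mathrm{d} \approx 2.1\ \mathrm{yr},
\]

matching both the synodic period and the roughly 2.1-year opposition interval stated above; the spread of 764 to 812 days between actual oppositions arises because the orbits are elliptical rather than circular.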
Moons Mars has two relatively small (compared to Earth's) natural moons, Phobos (about 22 km (14 mi) in diameter) and Deimos (about 12 km (7.5 mi) in diameter), which orbit the planet at distances of 9,376 km (5,826 mi) and 23,460 km (14,580 mi), respectively. The origin of both moons is unclear, although a popular theory states that they were asteroids captured into Martian orbit. Both satellites were discovered in 1877 by Asaph Hall and were named after the characters Phobos (the deity of panic and fear) and Deimos (the deity of terror and dread), twins from Greek mythology who accompanied their father Ares, god of war, into battle. Mars was the Roman equivalent of Ares. In modern Greek, the planet retains its ancient name Ares (Aris: Άρης). From the surface of Mars, the motions of Phobos and Deimos appear different from that of Earth's Moon. Phobos rises in the west, sets in the east, and rises again in just 11 hours. Deimos, being only just outside synchronous orbit – where the orbital period would match the planet's period of rotation – rises as expected in the east, but slowly. Because the orbit of Phobos is below the synchronous altitude, tidal forces from Mars are gradually lowering its orbit. In about 50 million years, it could either crash into Mars's surface or break up into a ring structure around the planet. The origin of the two satellites is not well understood. Their low albedo and carbonaceous chondrite composition have been regarded as similar to asteroids, supporting a capture theory. The unstable orbit of Phobos would seem to point toward a relatively recent capture. But both have circular orbits near the equator, which is unusual for captured objects, and the required capture dynamics are complex. Accretion early in the history of Mars is plausible, but would not account for a composition resembling asteroids rather than Mars itself, if that is confirmed. Mars may have yet-undiscovered moons smaller than 50 to 100 metres (160 to 330 ft) in diameter, and a dust ring is predicted to exist between Phobos and Deimos. A third possibility for their origin is the involvement of a third body or some form of impact disruption. More recent lines of evidence for Phobos having a highly porous interior, and suggesting a composition containing mainly phyllosilicates and other minerals known from Mars, point toward an origin of Phobos from material ejected by an impact on Mars that reaccreted in Martian orbit, similar to the prevailing theory for the origin of Earth's Moon. Although the visible and near-infrared (VNIR) spectra of the moons of Mars resemble those of outer-belt asteroids, the thermal infrared spectra of Phobos are reported to be inconsistent with chondrites of any class. It is also possible that Phobos and Deimos are fragments of an older moon, itself formed from debris ejected by a large impact on Mars and later destroyed by a more recent impact. More recently, a study by an international team of researchers suggested that a lost moon, at least fifteen times the size of Phobos, may have existed in the past; analysis of rocks recording tidal processes on the planet suggests that those tides may have been regulated by such a moon.
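The claim that Phobos lies below, and Deimos just above, the synchronous altitude can be checked with Kepler's third law (a back-of-the-envelope sketch; the gravitational parameter GM ≈ 4.28 × 10¹³ m³/s² for Mars is an input supplied here, and the rotation period is the roughly 24.6-hour Martian day quoted earlier, which differs only slightly from the sidereal rotation period):

\[
r_{\mathrm{sync}} = \left(\frac{GM\,T^{2}}{4\pi^{2}}\right)^{1/3} = \left(\frac{(4.28\times10^{13}\ \mathrm{m^3/s^2})(8.88\times10^{4}\ \mathrm{s})^{2}}{4\pi^{2}}\right)^{1/3} \approx 2.04\times10^{7}\ \mathrm{m} \approx 20{,}400\ \mathrm{km},
\]

which indeed falls between the orbital radii of Phobos (9,376 km) and Deimos (23,460 km), accounting for Phobos's westward rising and tidal decay and for Deimos's slow eastward drift across the Martian sky.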
Human observations and exploration The history of observations of Mars is marked by oppositions of Mars, when the planet is closest to Earth and hence most easily visible, which occur every couple of years. Even more notable are the perihelic oppositions of Mars, which are distinguished because Mars is close to perihelion, making it even closer to Earth. The ancient Sumerians named Mars Nergal, the god of war and plague. During Sumerian times, Nergal was a minor deity of little significance, but, during later times, his main cult center was the city of Nineveh. In Mesopotamian texts, Mars is referred to as the "star of judgement of the fate of the dead". The existence of Mars as a wandering object in the night sky was also recorded by the ancient Egyptian astronomers, and by 1534 BCE they were familiar with the retrograde motion of the planet. By the period of the Neo-Babylonian Empire, the Babylonian astronomers were making regular records of the positions of the planets and systematic observations of their behavior. For Mars, they knew that the planet made 37 synodic periods, or 42 circuits of the zodiac, every 79 years. They invented arithmetic methods for making minor corrections to the predicted positions of the planets. In Ancient Greece, the planet was known as Πυρόεις (Pyroeis, "the fiery one"); more commonly, the Greek name for the planet now referred to as Mars was Ares. It was the Romans who named the planet Mars, for their god of war, often represented by the sword and shield of the planet's namesake. In the fourth century BCE, Aristotle noted that Mars disappeared behind the Moon during an occultation, indicating that the planet was farther away than the Moon. Ptolemy, a Greek living in Alexandria, attempted to address the problem of the orbital motion of Mars. Ptolemy's model and his collective work on astronomy were presented in the multi-volume collection later called the Almagest (from the Arabic for "greatest"), which became the authoritative treatise on Western astronomy for the next fourteen centuries. Literature from ancient China confirms that Mars was known to Chinese astronomers by no later than the fourth century BCE. In East Asian cultures, Mars is traditionally referred to as the "fire star" (火星), based on the Wuxing system. In 1609, Johannes Kepler published a ten-year study of the Martian orbit, using the diurnal parallax of Mars, measured by Tycho Brahe, to make a preliminary calculation of the relative distance to the planet. From Brahe's observations of Mars, Kepler deduced that the planet orbited the Sun not in a circle, but in an ellipse. Moreover, Kepler showed that Mars sped up as it approached the Sun and slowed down as it moved farther away, in a manner that later physicists would explain as a consequence of the conservation of angular momentum. In 1610, the Italian astronomer Galileo Galilei became the first to use a telescope for astronomical observation, including of Mars. Telescopic measurements of the diurnal parallax of Mars were later made in an effort to determine the Sun-Earth distance, first by Giovanni Domenico Cassini in 1672. These early parallax measurements were hampered by the quality of the instruments. The only observed occultation of Mars by Venus was that of 13 October 1590, seen by Michael Maestlin at Heidelberg.
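Kepler's finding, noted above, that Mars speeds up near the Sun is exactly what conservation of angular momentum demands; at perihelion and aphelion the velocity is perpendicular to the radius, so (a worked illustration using the eccentricity of about 0.09 quoted earlier):

\[
v_{p} r_{p} = v_{a} r_{a} \;\;\Rightarrow\;\; \frac{v_{p}}{v_{a}} = \frac{r_{a}}{r_{p}} = \frac{1+e}{1-e} \approx \frac{1.09}{0.91} \approx 1.20,
\]

so Mars travels roughly 20% faster at perihelion than at aphelion.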
By the 19th century, the resolution of telescopes reached a level sufficient for surface features to be identified. On 5 September 1877, a perihelic opposition of Mars occurred. The Italian astronomer Giovanni Schiaparelli used a 22-centimetre (8.7 in) telescope in Milan to help produce the first detailed map of Mars. These maps notably contained features he called canali, which, with the possible exception of the natural canyon Valles Marineris, were later shown to be an optical illusion. These canali were supposedly long, straight lines on the surface of Mars, to which he gave the names of famous rivers on Earth. His term, which means "channels" or "grooves", was popularly mistranslated in English as "canals". Influenced by these observations, the orientalist Percival Lowell founded an observatory equipped with 30- and 45-centimetre (12- and 18-in) telescopes. The observatory was used for the exploration of Mars during the last good opportunity in 1894 and the following, less favorable, oppositions. He published several books on Mars and life on the planet, which had a great influence on the public. The canali were independently observed by other astronomers, such as Henri Joseph Perrotin and Louis Thollon in Nice, using one of the largest telescopes of that time. The seasonal changes (consisting of the diminishing of the polar caps and the dark areas formed during Martian summers), in combination with the canals, led to speculation about life on Mars, and it was a long-held belief that Mars contained vast seas and vegetation. As bigger telescopes were used, fewer long, straight canali were observed. During observations in 1909 by Antoniadi with an 84-centimetre (33 in) telescope, irregular patterns were observed, but no canali were seen. The first spacecraft sent from Earth to Mars was the Soviet Union's Mars 1, which flew past the planet in 1963, although contact had been lost en route. NASA's Mariner 4 followed and became the first spacecraft to successfully transmit data from Mars; launched on 28 November 1964, it made its closest approach to the planet on 15 July 1965. Mariner 4 detected the weak Martian radiation belt, measured at about 0.1% that of Earth, and captured the first images of another planet from deep space. Once spacecraft visited the planet during the 1960s and 1970s, many previous conceptions of Mars were overturned. After the results of the Viking life-detection experiments, the hypothesis of a dead planet was generally accepted. The data from Mariner 9 and Viking allowed better maps of Mars to be made. Between the shutdown of Viking 1 in 1982 and 1997, Mars was visited only by three unsuccessful probes: two that flew past without making contact (Phobos 1, 1988; Mars Observer, 1993) and one (Phobos 2, 1989) that malfunctioned in orbit before reaching its destination, the moon Phobos. In 1997, Mars Pathfinder became the first successful rover mission beyond the Moon and, together with Mars Global Surveyor (operated until late 2006), began an uninterrupted active robotic presence at Mars that has lasted to the present day. Mars Global Surveyor produced complete, extremely detailed maps of the Martian topography, magnetic field and surface minerals. Starting with these missions, a range of new and improved uncrewed spacecraft, including orbiters, landers, and rovers, have been sent to Mars by NASA (United States), JAXA (Japan), ESA, the United Kingdom, ISRO (India), Roscosmos (Russia), the United Arab Emirates, and CNSA (China) to study the planet's surface, climate, and geology, uncovering the history and dynamics of the Martian hydrosphere and possible traces of ancient life. As of 2023, Mars is host to a fleet of functioning spacecraft. In orbit are 2001 Mars Odyssey, Mars Express, Mars Reconnaissance Orbiter, MAVEN, the ExoMars Trace Gas Orbiter, the Hope orbiter, and the Tianwen-1 orbiter. Two more operate on the surface: the Mars Science Laboratory Curiosity rover and the Perseverance rover. Collected maps are available online at websites including Google Mars.
NASA provides two online tools: Mars Trek, which provides visualizations of the planet using data from 50 years of exploration, and Experience Curiosity, which simulates traveling on Mars in 3-D with Curiosity. Several further missions to Mars are planned. As of February 2024, debris from these types of missions has reached over seven tons; most of it consists of crashed and inactive spacecraft as well as discarded components. In April 2024, NASA selected several companies to begin studies on providing commercial services to further enable robotic science on Mars. Key areas include establishing telecommunications, payload delivery, and surface imaging. Habitability and habitation During the late 19th century, it was widely accepted in the astronomical community that Mars had life-supporting qualities, including the presence of oxygen and water. However, in 1894 W. W. Campbell at Lick Observatory observed the planet and found that "if water vapor or oxygen occur in the atmosphere of Mars it is in quantities too small to be detected by spectroscopes then available". That observation contradicted many of the measurements of the time and was not widely accepted. Campbell and V. M. Slipher repeated the study in 1909 using better instruments, but with the same results. It was not until the findings were confirmed by W. S. Adams in 1925 that the myth of the Earth-like habitability of Mars was finally broken. However, even in the 1960s, articles were still being published on Martian biology, setting aside explanations other than life for the seasonal changes on Mars. The current understanding of planetary habitability – the ability of a world to develop environmental conditions favorable to the emergence of life – favors planets that have liquid water on their surface. Most often this requires the orbit of a planet to lie within the habitable zone, which for the Sun is estimated to extend from within the orbit of Earth to about that of Mars. During perihelion, Mars dips inside this region, but its thin (low-pressure) atmosphere prevents liquid water from existing over large regions for extended periods. The past flow of liquid water demonstrates the planet's potential for habitability. Recent evidence has suggested that any water on the Martian surface may have been too salty and acidic to support regular terrestrial life. The environmental conditions on Mars are a challenge to sustaining organic life: the planet has little heat transfer across its surface, poor shielding against bombardment by the solar wind due to the absence of a magnetosphere, and insufficient atmospheric pressure to retain water in liquid form (water instead sublimes to a gaseous state). Mars is nearly, or perhaps totally, geologically dead; the end of volcanic activity has apparently stopped the recycling of chemicals and minerals between the surface and the interior of the planet. Evidence suggests that the planet was once significantly more habitable than it is today, but whether living organisms ever existed there remains unknown. The Viking probes of the mid-1970s carried experiments designed to detect microorganisms in Martian soil at their respective landing sites, and had positive results, including a temporary increase in CO2 production on exposure to water and nutrients. This sign of life was later disputed by scientists, resulting in a continuing debate, with NASA scientist Gilbert Levin asserting that Viking may have found life.
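Mars's marginal position relative to the habitable zone discussed above can be illustrated with a simple radiative-balance estimate (an added sketch, not a figure from the source: it combines the 43% insolation noted earlier with an assumed solar constant of about 1,361 W/m² at Earth and an assumed Bond albedo of roughly 0.25 for Mars):

\[
T_{\mathrm{eq}} = \left(\frac{S(1-A)}{4\sigma}\right)^{1/4} = \left(\frac{(0.43 \times 1361\ \mathrm{W/m^2})(0.75)}{4 \times 5.67\times10^{-8}\ \mathrm{W\,m^{-2}\,K^{-4}}}\right)^{1/4} \approx 210\ \mathrm{K} \approx -63\ {}^{\circ}\mathrm{C},
\]

close to the planet's observed mean surface temperature; with almost no greenhouse warming from the thin atmosphere, the average surface stays far below the freezing point of water.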
A 2014 analysis of Martian meteorite EETA79001 found chlorate, perchlorate, and nitrate ions in sufficiently high concentrations to suggest that they are widespread on Mars. UV and X-ray radiation would turn chlorate and perchlorate ions into other, highly reactive oxychlorines, indicating that any organic molecules would have to be buried under the surface to survive. Small quantities of methane and formaldehyde detected by Mars orbiters have both been claimed to be possible evidence for life, as these chemical compounds would quickly break down in the Martian atmosphere. Alternatively, these compounds may instead be replenished by volcanic or other geological means, such as serpentinization. Impact glass, which on Earth can preserve signs of life, has also been found on the surfaces of impact craters on Mars; if life existed at those sites, the glass could have preserved traces of it. The Cheyava Falls rock, discovered on Mars in June 2024, has been designated by NASA as a "potential biosignature" and was core sampled by the Perseverance rover for possible return to Earth and further examination. Although highly intriguing, the rock's biological or abiotic origin cannot be definitively determined from the data currently available. Several plans for a human mission to Mars have been proposed, but none have come to fruition. The NASA Authorization Act of 2017 directed NASA to study the feasibility of a crewed Mars mission in the early 2030s; the resulting report concluded that this would be unfeasible. In 2021, China announced plans to send a crewed mission to Mars in 2033. Privately held companies such as SpaceX have also proposed plans to send humans to Mars, with the eventual goal of settling the planet. As of 2024, SpaceX has proceeded with the development of the Starship launch vehicle with the goal of Mars colonization. In plans shared in April 2024, Elon Musk envisioned the beginning of a Mars colony within the next twenty years, enabled by the planned mass manufacturing of Starship and initially sustained by resupply from Earth and by in situ resource utilization on Mars, until the colony reaches full self-sustainability. Any future human mission to Mars will likely take place within the optimal Mars launch window, which occurs every 26 months. The moon Phobos has been proposed as an anchor point for a space elevator. Besides national space agencies and space companies, groups such as the Mars Society and The Planetary Society advocate for human missions to Mars. In culture Mars is named after the Roman god of war (Greek Ares), but was also associated with the demi-god Heracles (Roman Hercules) by ancient Greek astronomers, as detailed by Aristotle. This association between Mars and war dates back at least to Babylonian astronomy, in which the planet was named for the god Nergal, deity of war and destruction. It persisted into modern times, as exemplified by Gustav Holst's orchestral suite The Planets, whose famous first movement labels Mars "the Bringer of War". The planet's symbol, a circle with an arrow pointing out to the upper right, is also used as a symbol for the male gender. The symbol dates from at least the 11th century, though a possible predecessor has been found in the Greek Oxyrhynchus Papyri. The idea that Mars was populated by intelligent Martians became widespread in the late 19th century.
Schiaparelli's "canali" observations, combined with Percival Lowell's books on the subject, put forward the standard notion of a planet that was a drying, cooling, dying world with ancient civilizations constructing irrigation works. Many other observations and proclamations by notable personalities added to what has been termed "Mars Fever". In the present day, high-resolution mapping of the surface of Mars has revealed no artifacts of habitation, but pseudoscientific speculation about intelligent life on Mars still continues. Reminiscent of the canali observations, these speculations are based on small-scale features perceived in the spacecraft images, such as "pyramids" and the "Face on Mars". In his book Cosmos, planetary astronomer Carl Sagan wrote: "Mars has become a kind of mythic arena onto which we have projected our Earthly hopes and fears." The depiction of Mars in fiction has been stimulated by its dramatic red color and by nineteenth-century scientific speculations that its surface conditions might support not just life but intelligent life. This gave rise to many science fiction stories involving these concepts, such as H. G. Wells's The War of the Worlds, in which Martians seek to escape their dying planet by invading Earth; Ray Bradbury's The Martian Chronicles, in which human explorers accidentally destroy a Martian civilization; Edgar Rice Burroughs's Barsoom series; C. S. Lewis's novel Out of the Silent Planet (1938); and a number of Robert A. Heinlein stories before the mid-sixties. Since then, depictions of Martians have also extended to animation. A comic figure of an intelligent Martian, Marvin the Martian, appeared in Haredevil Hare (1948) as a character in the Looney Tunes animated cartoons of Warner Brothers, and has continued as part of popular culture to the present. After the Mariner and Viking spacecraft had returned pictures of Mars as a lifeless and canal-less world, these ideas about Mars were abandoned; for many science-fiction authors, the new discoveries initially seemed like a constraint, but eventually the post-Viking knowledge of Mars became itself a source of inspiration for works like Kim Stanley Robinson's Mars trilogy.
======================================== |
[SOURCE: https://en.wikipedia.org/wiki/Lewis_White_Beck] | [TOKENS: 4051] |
Contents Lewis White Beck Lewis White Beck (September 26, 1913 – June 7, 1997) was an American philosopher and scholar of German philosophy specializing in German idealism at the University of Rochester. As Chairman of the Department of Philosophy, he achieved international recognition for encouraging collaborative research by scholars within the United States and Germany into the philosophy of Immanuel Kant during the post-World War II era. Beck also translated several of Kant's works from German, including the Critique of Practical Reason, and authored Studies in the Philosophy of Kant (1965). Biography Born in Griffin, Georgia, Beck was the youngest of four children in a family raised by Erasmus W. Beck and Ann H. Beck. His siblings were Evelyn H. Beck, Edwin H. Beck, and Sarah A. Beck. His father was employed as both an engineer and a sales representative. In his youth, Beck exhibited a natural talent for philosophical discourse and repeatedly raised questions related to the famous "Scopes Monkey Trial". Much to his delight, he was formally introduced to the subject of philosophy by his sister, who provided him with a copy of Will Durant's The Story of Philosophy when he was fourteen. This subsequently inspired him to investigate the scientific writings of Thomas Henry Huxley and to acquire employment as a "lab assistant" while enrolled in high school. Beck's passion for dabbling in the synthesis of organic compounds after hours attracted the attention of his mentors, and he was excused from introductory chemistry courses upon enrolling at Emory University. Beck's performance in the quantitative chemistry lab was hindered, however, by an undiagnosed case of color blindness, which he successfully concealed. Nevertheless, his perseverance was rewarded, and by the conclusion of his junior year he was honored with an unusual admission to an honorary fraternity for chemists. Beck already suspected that his affliction might prove to be a dangerous hindrance to his aspiration of becoming a professional chemist. The fates intervened, however, as Beck soon attended a philosophical lecture by Leroy Loemker on "The Limits of Scientific Concepts", which was based upon the writings of Heinrich Rickert and Ernst Cassirer. Beck was captivated by the prospect of conducting "gedankenexperiments" (thought experiments) without toiling in a dangerous laboratory. He immediately convinced Loemker to take on the monumental task of tutoring him in philosophy during his junior year so that he could change his major before graduating. One year later, Beck entered graduate school and remained forever grateful to Loemker for his guidance and personal interest in Beck's aspiration to join the ranks of "philosophic workmen". Beck received his bachelor's degree Phi Beta Kappa from Emory University in 1934, his master's degree from Duke University in 1935, and his doctoral degree from Duke University in 1937. His dissertation was entitled "Synopsis: A Study in the Theory of Knowledge". Before moving to Rochester, Beck was an international student and a Rosenwald Fund Fellow at the University of Berlin (1937–38; an interview about his experiences there appeared in The Atlanta Constitution, September 18, 1938), an instructor at Emory University (1938–41), associate professor of philosophy at the University of Delaware (1941–48), and associate professor at Lehigh University (1946–48), eventually becoming professor there (1948–49).
Beck joined the faculty at the University of Rochester in 1949 and served as Chairman of its Department of Philosophy from 1949 to 1966. He also served as Associate Dean of the Graduate School (1952–1956) and then as Dean of the Graduate School (1956–1957), where he helped to raise international recognition for the PhD program in Philosophy. During this time he was awarded a Guggenheim Fellowship in the field of Philosophy (1957). He is credited with assisting his colleague Colin Murray Turbayne in his work The Myth of Metaphor (1962). Subsequently, he collaborated with his colleague Robert L. Holmes on a comprehensive introduction to the study of philosophy, Philosophic Inquiry: An Introduction to Philosophy (1968). In 1970 he also collaborated with the Kantian scholar Gottfried Martin at the University of Bonn to organize the first International Kant Congress to be hosted in the United States, and helped to establish an enduring close collaboration between Kantian scholars in Germany and America. In 1962 he was appointed Burbank Professor of Moral and Intellectual Philosophy, and he became Professor Emeritus in 1979. In 1962 he also became the first recipient of the University's Edward Peck Curtis Award for Excellence in Undergraduate Teaching. He was subsequently elected a Fellow of the American Academy of Arts and Sciences in 1963 and of the American Council of Learned Societies in 1964. From 1970 to 1975, Beck served on the National Endowment for the Humanities Council. During this time he also served as a member of the board of directors of the American Academy of Arts and Sciences (1970–1978). In addition, he was a President of the Eastern Division of the American Philosophical Association. During the course of his long academic career, Beck also held appointments as a visiting lecturer at several leading academic research centers, including Columbia University (1950), George Washington University, the University of Minnesota (1953), the University of California at Berkeley (1973), Yale University (1974), and the Rochester Institute of Technology (1982–1983). In addition, he received honorary degrees from Hamilton College, Emory University, and the University of Tübingen. Beyond his teaching activities, Beck served on the editorial boards of several leading philosophical research journals, including the Journal of the History of Ideas and Kant-Studien. Over the years he also served on the editorial board of the journal The Monist, which also featured his work. His original research into the philosophy of Immanuel Kant was published in the authoritative journal Kant-Studien in both German and English. In addition, in 1970 he served as editor of the Proceedings of the Third International Kant Congress. In 1985 he also contributed to the formation of the North American Kant Society. Over the years, Beck was praised by his students for his charm and wit. Even after his formal retirement in 1979 he continued to meet with informal gatherings of aspiring young scholars, sharing his unique insights into Kant's works until 1996. Always humble, Beck often joked that the Internal Revenue Service had refused to treat his teaching-excellence prize as nontaxable because the award was more appropriately categorized as "unearned" income. Beck is most noted for his research into the collective writings of the German philosopher Immanuel Kant.
Included among his publications is a translation of Kant's Critique of Practical Reason in 1949. He also achieved widespread national and international recognition within academic circles for his scholarship, commentary, and encyclopedic knowledge of Kant's philosophical works. His comprehensive work, A Commentary on Kant's Critique of Practical Reason (1960), was praised by Professor A. R. C. Duncan at Queen's University as "an unquestionably first-rate piece of Kantian scholarship which ranks along with the great German, French, and British commentaries on Kant." In addition, he has been cited in Kant-Studien as one of the first scholars in the Anglo-Saxon tradition to compile a comprehensive review of early German philosophy before Kant and to clarify Kant's work within that historical context. In the course of his exhaustive commentaries, Beck shared several noteworthy insights into Kant's philosophical thought. While revisiting Kant's distinction between "analytic" and "synthetic" truths and his concept of the "synthetic a priori", Beck attempted to clarify Kant's reasoning by exploring whether synthetic judgements should be made analytic, as well as whether Kant incorrectly identified some "contingent judgements" as "necessary judgements". He further observed that Kant's utilization of the term "synthetic" appears to convey different meanings in Kant's writings on transcendental logic as compared to his writings on the theory of general logic. Beck observed further that this divergence in meaning accounts for the unfortunate confusion in the minds of many students who explore translations of Kant's works from the original German into English. Beck also asserted that Kant's Critique of Practical Reason has been largely neglected by modern readers and sometimes supplanted in the minds of many scholars by the Foundations of the Metaphysics of Morals. He claimed that a complete understanding of Kant's moral philosophy is most easily attained by reviewing Kant's "second critique", which puts forth an analysis of the concepts of both freedom and practical reason. In his A Commentary on Kant's Critique of Practical Reason (1960), Beck asserts that Kant's "second critique" serves to weave these diverse strands into a unified pattern for his theory of moral authority in general. In addition, Beck argues that Kant revised his initial resolution of the antinomy between the two concepts of freedom and determinism, which was first presented in the Critique of Pure Reason. In Beck's view, this revision emerges in Kant's resolution of the "Antinomy of Teleological Judgment", which is presented in his "third critique", the Critique of the Power of Judgment (1790). Beck also traced the development of the "antinomy of pure reason," which Kant described as "the most singular phenomenon of human reason." Beck observed that Kant's development of the antinomy may have been influenced by its use in jurisprudence, biblical exegesis, and the antinomic mode of argument employed by the Greek philosopher Zeno. Such a "skeptical method" avoids the objective of resolving a conflict between opposing assertions by favoring one assertion over another. Instead, it emphasizes an investigation into whether the object of the controversy itself is deceptive in nature.
Beck cites the second chapter of the Transcendental Dialectic in the Critique of Pure Reason to argue that Kant's development of the antinomy played a central role in his effort "to dispel the illusion that pure reason can give knowledge of what lies beyond the limits of sensory perception" while asserting that "the world we experience is not and does not contain a thing in itself but is only phenomenal." He then traces the influence of Kant's antinomies on the works of later philosophers, including Charles Renouvier and Nicolai Hartmann. In his Six Secular Philosophers (1966, rev. 1997), Beck also endeavored to explore the general characteristics of a secular philosophy and whether such a philosophy can be formulated to accommodate religious beliefs and values. Beck observed that while an exact or precise conceptualization of a secular philosophy might be elusive, a secular philosophy is likely to require an appeal to independence of thought. In Beck's view it should also incorporate certain aspects of religious thought. With this in mind, Beck identified several "families" of secular philosophers. In his first group, Beck calls our attention to philosophers who placed limits on the scope, validity, and content of religious belief by an appeal to scientific and philosophic endeavors; he identifies Baruch Spinoza, David Hume, and Kant in this grouping. In his second grouping, Beck identified Friedrich Nietzsche, William James, and George Santayana, each of whom explored the relationship of religious values in general to other values in life. Beck asserted that Kant ultimately could not accept Spinoza's conception of substance or his appeal to monism. According to Beck, Kant agreed instead with Hume that a scientific interpretation of nature cannot serve by itself to confirm religious belief. Kant parted ways with Hume, however, by insisting that a different rational basis for religious thought can be found in mankind's moral consciousness. In his book The Actor and the Spectator (1975), Beck embarked upon an attempt to "contrast and assess" the two accounts of human nature which are sometimes put forth by spectators of mankind's behavior: scientists and humanists. In Beck's view, the former are generally inclined to regard man as little more than a "cog in the machinery of the world", as described in the philosophy of mechanism, while the latter frequently characterize him as an "autonomous and self-creating" being. While Beck hints at being sympathetic to the humanistic interpretation, he is also careful to avoid any temptation to rebuke the scientific interpretation through the use of argumentation. Rather than advancing an "argument" in support of the truth or falsehood of the scientific interpretation, Beck patiently offers a reductio ad absurdum criticism and reminds his readers that no rational argument could in theory be formulated to prove the veracity of such a "machine theory", since it is by its very nature self-stultifying. Stated otherwise, if the theory is in fact true, there can be no reason to uphold a belief in its veracity, since in a community of machines questions about reasoning could never arise in the first place. Reasoners cannot act intelligibly by regarding themselves as machines. As Beck diplomatically reminds his readers: "If you believe that you are not a machine, but that I am (then) I do not know why you are reading this book".
He further suggests that while Skinnerian Behaviorism may serve as a rich model for psychology, it could readily be improved by including a "self-exemption clause". Beck also explores several other topics in the book, including the nature of thought, human behavior, and the nature of free will. Beck's scholarly publications also reflect his interest in philosophical topics which are not, prima facie, directly related to the works of Immanuel Kant. In 1966 he published a detailed philosophical examination of the characteristics of mankind's conscious and unconscious motives entitled Conscious and Unconscious Motives. In 1968, he collaborated with his colleague Robert L. Holmes at the University of Rochester on the book Philosophic Inquiry: An Introduction to Philosophy. Years later, in 1971, he presented his insights into the topic of searching for extraterrestrial life at the sixty-eighth annual Eastern Meeting of the American Philosophical Association in New York City, in a paper entitled Extraterrestrial Intelligent Life. In the latter work, Beck traces the evolution of philosophical speculation concerning the presence of intelligent extraterrestrial life forms, starting with the ancient writings of Lucretius, Plutarch, and Aristotle, continuing through the contributions made by Copernicus, and culminating in the modern age in the works of Darwin, Immanuel Kant, William Whewell, and Marx. He argues that our ancestors in the sixteenth and seventeenth centuries were plagued by a profound pessimism over the decline of the natural world due to mankind's sinfulness, and consequently sought redemption by searching for the presence of "higher beings" within the universe. Similarly, in modern times, mankind's despair and technological shock are due in part to his pollution of the natural world and in part to repeated failures of moral belief. He argues further that deep-seated religious, philosophical, and existential beliefs serve to perpetuate the comforting archetypal idea that mankind is not alone in the universe. Beck concludes on an optimistic note, however, by suggesting that while the quest for other or superior forms of life in the universe may not prove successful, it may yield beneficial consequences by assisting mankind in the actualization of better ways of life here on Earth. Beck was also intrigued by the concept of "man as a creator". His analysis of the history of philosophy within the Western tradition traces the dynamic interaction of Kant's idea of the "land of truth", in which man's creativity evolves within the context of his search for knowledge, with the creative idea of an "unknowable beyond", which was first cultivated by philosophers of the ancient world. In Beck's view, the Platonic idea of a creative yet hidden ultimate reality now functions as a more dominant paradigm, in the form of a nervus probandi, within our modern systems of thought and ethical values. He notes that three responses to such a paradigm shift have emerged. In the first, philosophers deny the existence of such a transcendent "unknowable beyond" by asserting that it is merely a product of human imagination which can be easily dismissed; as examples, Beck cites the works of Karl Marx, Friedrich Nietzsche, and various positivistic scholars. The second possible response has been adopted by scholars who accept that such a hidden reality exists and that it can be known through philosophical reasoning, mystical insight, or a combination of both.
As examples, Beck points to the works of both Plato and Georg Hegel. Lastly, Beck observes that a third response incorporates the assertion that such an "unknowable beyond" may exist but that mankind is "indefeasibly" ignorant of it. Beck argues that Thomas Aquinas, Blaise Pascal, Søren Kierkegaard, William James, and Immanuel Kant all adopt variations on this theme. In this view, man is a creator of order only within narrow limits and cannot acquire definitive knowledge of the "unknowable beyond." Nevertheless, such a realm is clearly of paramount existential importance. Therefore, instead of professing "knowledge" of its existence, mankind is advised to knowingly acknowledge his ignorance and affirm its existence purely as an act of faith. Beck himself seems partial to this view. An additional central theme which emerges in several of Beck's philosophical writings is the importance of recognizing the distinction between a causal explanation of natural events and human behavior, on the one hand, and a rational explanation or justification of human actions, on the other. In Beck's view, these constitute two entirely different perspectives on essentially the same subject matter. Consequently, neither view can claim to be metaphysically superior to the other. Stated more simply, causal explanations of human behavior and rational assessments of actions are rendered compatible with each other only by the recognition that they represent regulative ideals in mankind's conduct of inquiry. In short, Beck's resolution of the apparent incompatibility of these two ideals illustrates the profound influence of Kant's work on his own philosophical perspective. In addition to receiving fellowships from the Rosenwald Fund in 1937, the Guggenheim Foundation in 1957, the American Academy of Arts and Sciences in 1963, and the American Council of Learned Societies in 1964, Beck was the first recipient of the University of Rochester's Edward Peck Curtis Award for Excellence in Undergraduate Teaching in 1962. In addition, Beck received honorary degrees from several leading scholarly institutions, including Hamilton College, Emory University, and the University of Tübingen. He was also an honorary member of the Kant Society in Germany. In 2001 Beck was honored by several prominent scholars and the philosopher Predrag Cicovacki with the publication of Kant's Legacy: Essays in Honor of Lewis White Beck. The leading scholar of German philosophy Walter Kaufmann also paid special tribute to Beck's scholarship in his work Goethe, Kant and Hegel in 1980. Beck retired in 1979 and died in 1997 at age 83 in Rochester, New York. He was survived by his wife, Caroline, his two sons, Brandon and Hamilton, and two grandsons. Selected publications During his long academic career, Lewis White Beck published several books and numerous scholarly articles. Professional affiliations Lewis White Beck was both an active member and a member emeritus of the American Philosophical Association. He served as President of the American Philosophical Association's Eastern Division in 1971, as well as the chairman of its board of officers (1974–1977). He also served as the president of the North East Society for 18th Century Studies in 1974.
======================================== |
[SOURCE: https://en.wikipedia.org/wiki/Middle_East#cite_ref-:5_61-3] | [TOKENS: 6152] |
Secretary of State John Foster Dulles defined the Middle East as "the area lying between and including Libya on the west and Pakistan on the east, Syria and Iraq on the North and the Arabian peninsula to the south, plus the Sudan and Ethiopia." In 1958, the State Department explained that the terms "Near East" and "Middle East" were interchangeable, and defined the region as including only Egypt, Syria, Israel, Lebanon, Jordan, Iraq, Saudi Arabia, Kuwait, Bahrain, and Qatar. Since the late 20th century, scholars and journalists from the region, such as the journalist Louay Khraish and the historian Hassan Hanafi, have criticized the use of "Middle East" as a Eurocentric and colonialist term. The Associated Press Stylebook of 2004 says that Near East formerly referred to the farther west countries while Middle East referred to the eastern ones, but that the two terms are now synonymous. It instructs: Use Middle East unless Near East is used by a source in a story. Mideast is also acceptable, but Middle East is preferred. European languages have adopted terms similar to Near East and Middle East. Since these are based on a relative description, their meanings depend on the country and are generally different from the English terms. In German the term Naher Osten (Near East) is still in common use (nowadays the term Mittlerer Osten is more and more common in press texts translated from English sources, albeit having a distinct meaning). In four Slavic languages, terms meaning Near East are the only appropriate ones for the region: Russian Ближний Восток (Blizhniy Vostok), Bulgarian Близкия Изток, Polish Bliski Wschód, and Croatian Bliski istok. However, some European languages do have "Middle East" equivalents, such as French Moyen-Orient, Swedish Mellanöstern, Spanish Oriente Medio or Medio Oriente, Greek Μέση Ανατολή (Mesi Anatoli), and Italian Medio Oriente.[c] Perhaps because of the political influence of the United States and Europe, and the prominence of the Western press, the Arabic equivalent of Middle East (Arabic: الشرق الأوسط ash-Sharq al-Awsaṭ) has become standard usage in the mainstream Arabic press, carrying the same meaning as the term "Middle East" in North American and Western European usage. The designation Mashriq, from the Arabic root for East, denotes a variously defined region around the Levant, the eastern part of the Arabic-speaking world (as opposed to the Maghreb, the western part). Even though the term originated in the West, countries of the Middle East that use languages other than Arabic also use translations of it. For instance, the Persian equivalent for Middle East is خاورمیانه (Khāvar-e miyāneh), the Hebrew is המזרח התיכון (hamizrach hatikhon), and the Turkish is Orta Doğu. Countries and territory Traditionally included within the Middle East are Arabia, Asia Minor, East Thrace, Egypt, Iran, the Levant, Mesopotamia, and the Socotra Archipelago. The region includes 17 UN-recognized countries and one British Overseas Territory. Various concepts are often paralleled to the Middle East, most notably the Near East, the Fertile Crescent, and the Levant. These are geographical concepts which refer to large sections of the modern-day Middle East, with the Near East being the closest to the Middle East in its geographical meaning. Because it is primarily Arabic-speaking, the Maghreb region of North Africa is sometimes included.
"Greater Middle East" is a political term coined by the second Bush administration in the first decade of the 21st century to denote various countries, pertaining to the Muslim world, specifically Afghanistan, Iran, Pakistan, and Turkey. Various Central Asian countries are sometimes also included. History The Middle East lies at the juncture of Africa and Eurasia and of the Indian Ocean and the Mediterranean Sea (see also: Indo-Mediterranean). It is the birthplace and spiritual center of religions such as Christianity, Islam, Judaism, Manichaeism, Yezidi, Druze, Yarsan, and Mandeanism, and in Iran, Mithraism, Zoroastrianism, Manicheanism, and the Baháʼí Faith. Throughout its history the Middle East has been a major center of world affairs; a strategically, economically, politically, culturally, and religiously sensitive area. The region is one of the regions where agriculture was independently discovered, and from the Middle East it was spread, during the Neolithic, to different regions of the world such as Europe, the Indus Valley and Eastern Africa. Prior to the formation of civilizations, advanced cultures formed all over the Middle East during the Stone Age. The search for agricultural lands by agriculturalists, and pastoral lands by herdsmen meant different migrations took place within the region and shaped its ethnic and demographic makeup. The Middle East is widely and most famously known as the cradle of civilization. The world's earliest civilizations, Mesopotamia (Sumer, Akkad, Assyria and Babylonia), ancient Egypt and Kish in the Levant, all originated in the Fertile Crescent and Nile Valley regions of the ancient Near East. These were followed by the Hittite, Greek, Hurrian and Urartian civilisations of Asia Minor; Elam, Persia and Median civilizations in Iran, as well as the civilizations of the Levant (such as Ebla, Mari, Nagar, Ugarit, Canaan, Aramea, Mitanni, Phoenicia and Israel) and the Arabian Peninsula (Magan, Sheba, Ubar). The Near East was first largely unified under the Neo Assyrian Empire, then the Achaemenid Empire followed later by the Macedonian Empire and after this to some degree by the Iranian empires (namely the Parthian and Sassanid Empires), the Roman Empire and Byzantine Empire. The region served as the intellectual and economic center of the Roman Empire and played an exceptionally important role due to its periphery on the Sassanid Empire. Thus, the Romans stationed up to five or six of their legions in the region for the sole purpose of defending it from Sassanid and Bedouin raids and invasions. From the 4th century CE onwards, the Middle East became the center of the two main powers at the time, the Byzantine Empire and the Sassanid Empire. However, it would be the later Islamic Caliphates of the Middle Ages, or Islamic Golden Age which began with the Islamic conquest of the region in the 7th century AD, that would first unify the entire Middle East as a distinct region and create the dominant Islamic Arab ethnic identity that largely (but not exclusively) persists today. The 4 caliphates that dominated the Middle East for more than 600 years were the Rashidun Caliphate, the Umayyad caliphate, the Abbasid caliphate and the Fatimid caliphate. Additionally, the Mongols would come to dominate the region, the Kingdom of Armenia would incorporate parts of the region to their domain, the Seljuks would rule the region and spread Turko-Persian culture, and the Franks would found the Crusader states that would stand for roughly two centuries. 
Josiah Russell estimates the population of what he calls "Islamic territory" as roughly 12.5 million in 1000: Anatolia 8 million, Syria 2 million, and Egypt 1.5 million. From the 16th century onward, the Middle East came to be dominated, once again, by two main powers: the Ottoman Empire and the Safavid dynasty.

The modern Middle East began after World War I, when the Ottoman Empire, which was allied with the Central Powers, was defeated by the Allies and partitioned into a number of separate nations, initially under British and French mandates. Other defining events in this transformation included the establishment of Israel in 1948 and the departure of the European powers, notably Britain and France, by the end of the 1960s. They were supplanted in part by the rising influence of the United States from the 1970s onwards.

In the 20th century, the region's significant stocks of crude oil gave it new strategic and economic importance. Mass production of oil began around 1945, with Saudi Arabia, Iran, Kuwait, Iraq, and the United Arab Emirates holding large quantities. Estimated oil reserves, especially in Saudi Arabia and Iran, are some of the highest in the world, and the international oil cartel OPEC is dominated by Middle Eastern countries.

During the Cold War, the Middle East was a theater of ideological struggle between the two superpowers and their allies: NATO and the United States on one side, and the Soviet Union and the Warsaw Pact on the other, as they competed to influence regional allies. Besides the political reasons, there was also the "ideological conflict" between the two systems. Moreover, as Louise Fawcett argues, "among many important areas of contention, or perhaps more accurately of anxiety, were, first, the desires of the superpowers to gain strategic advantage in the region, second, the fact that the region contained some two-thirds of the world's oil reserves in a context where oil was becoming increasingly vital to the economy of the Western world [...]" Within this framework, the United States sought to divert the Arab world from Soviet influence. Throughout the 20th and 21st centuries, the region has experienced both periods of relative peace and tolerance and periods of conflict, particularly between Sunnis and Shiites.

Geography

In 2018, the MENA region emitted 3.2 billion tonnes of carbon dioxide and produced 8.7% of global greenhouse gas (GHG) emissions, despite making up only 6% of the global population. These emissions come mostly from the energy sector, an integral component of many Middle Eastern and North African economies due to the extensive oil and natural gas reserves found within the region.
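Taken together, those two shares imply that per-capita emissions in MENA run well above the global average. A minimal arithmetic sketch in Python, using only the 2018 shares quoted above (the variable names are illustrative):

    # Per-capita GHG emissions implied by the 2018 shares quoted above.
    # Assumptions: MENA produced 8.7% of global GHG emissions with 6% of
    # the global population; the 3.2 Gt CO2 figure is context, not an input.
    mena_emissions_share = 0.087
    mena_population_share = 0.06

    # Dividing the emissions share by the population share gives emissions
    # per person relative to the global average.
    relative_per_capita = mena_emissions_share / mena_population_share
    print(f"MENA per-capita emissions: about {relative_per_capita:.2f}x the global average")
    # Prints: about 1.45x the global average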
The Middle East is one of the regions most vulnerable to climate change. The impacts include increases in drought, aridity, heatwaves, and sea level. Sharp global temperature and sea level changes, shifting precipitation patterns, and an increased frequency of extreme weather events are among the main impacts of climate change identified by the Intergovernmental Panel on Climate Change (IPCC). The MENA region is especially vulnerable to such impacts because of its arid and semi-arid environment and climatic challenges such as low rainfall, high temperatures, and dry soil. The climatic conditions that foster such challenges are projected by the IPCC to worsen throughout the 21st century.

If greenhouse gas emissions are not significantly reduced, part of the MENA region risks becoming uninhabitable before the year 2100. Climate change is expected to put significant strain on the already scarce water and agricultural resources of the MENA region, threatening the national security and political stability of all the countries it comprises. Over 60 percent of the region's population lives in areas of high or very high water stress, compared with a global average of 35 percent. This has prompted some MENA countries to engage with climate change at the international level through environmental accords such as the Paris Agreement. Law and policy are also being established at the national level among MENA countries, with a focus on the development of renewable energies.

Economy

Middle Eastern economies range from very poor (such as Gaza and Yemen) to extremely wealthy (such as Qatar and the UAE). According to the International Monetary Fund, the three largest Middle Eastern economies by nominal GDP in 2023 were Saudi Arabia ($1.06 trillion), Turkey ($1.03 trillion), and Israel ($0.54 trillion). In nominal GDP per capita, the highest-ranking countries are Qatar ($83,891), Israel ($55,535), the United Arab Emirates ($49,451), and Cyprus ($33,807). Turkey ($3.6 trillion), Saudi Arabia ($2.3 trillion), and Iran ($1.7 trillion) had the largest economies in terms of GDP at purchasing power parity (PPP). In GDP (PPP) per capita, the highest-ranking countries are Qatar ($124,834), the United Arab Emirates ($88,221), Saudi Arabia ($64,836), Bahrain ($60,596), and Israel ($54,997). The lowest-ranking country in the Middle East, in terms of nominal GDP per capita, is Yemen ($573).

The economic structures of Middle Eastern nations differ: some are heavily dependent on exports of oil and oil-related products (Saudi Arabia, the UAE, and Kuwait), while others have a highly diverse economic base (such as Cyprus, Israel, Turkey, and Egypt). Industries of the region include oil and oil-related products, agriculture, cotton, cattle, dairy, textiles, leather products, surgical instruments, and defence equipment (guns, ammunition, tanks, submarines, fighter jets, UAVs, and missiles). Banking is an important sector, especially for the UAE and Bahrain.

With the exception of Cyprus, Turkey, Egypt, Lebanon, and Israel, tourism has been a relatively undeveloped area of the economy, partly because of the socially conservative nature of the region and partly because of political turmoil in certain areas. Since the end of the COVID-19 pandemic, however, countries such as the UAE, Bahrain, and Jordan have begun attracting greater numbers of tourists, owing to improving tourist facilities and the relaxation of restrictive tourism-related policies. Unemployment is high in the Middle East and North Africa region, particularly among people aged 15 to 29, a demographic representing 30% of the region's population. The total regional unemployment rate in 2025 is 10.8%, and youth unemployment is as high as 28%.
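The spread between the richest and poorest economies above is easier to grasp as a ratio. A small illustrative sketch, again in Python, using only the IMF per-capita figures quoted in this section:

    # Ratio of the highest to the lowest nominal GDP per capita in the
    # region, using the IMF figures quoted above (Qatar $83,891; Yemen $573).
    qatar_nominal_per_capita = 83_891
    yemen_nominal_per_capita = 573

    ratio = qatar_nominal_per_capita / yemen_nominal_per_capita
    print(f"Qatar's nominal GDP per capita is roughly {ratio:.0f}x Yemen's")
    # Prints: roughly 146x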
Demographics

Arabs constitute the largest ethnic group in the Middle East, followed by various Iranian peoples and then by Turkic peoples (Turkish, Azeris, Syrian Turkmen, and Iraqi Turkmen). Native ethnic groups of the region include, in addition to Arabs, Arameans, Assyrians, Baloch, Berbers, Copts, Druze, Greek Cypriots, Jews, Kurds, Lurs, Mandaeans, Persians, Samaritans, Shabaks, Tats, and Zazas. European ethnic groups that form a diaspora in the region include Albanians, Bosniaks, Circassians (including Kabardians), Crimean Tatars, Greeks, Franco-Levantines, and Italo-Levantines. Among other migrant populations are Chinese, Filipinos, Indians, Indonesians, Pakistanis, Pashtuns, Romani, and Afro-Arabs.

"Migration has always provided an important vent for labor market pressures in the Middle East. For the period between the 1970s and 1990s, the Arab states of the Persian Gulf in particular provided a rich source of employment for workers from Egypt, Yemen and the countries of the Levant, while Europe had attracted young workers from North African countries due both to proximity and the legacy of colonial ties between France and the majority of North African states."

According to the International Organization for Migration, there are 13 million first-generation migrants from Arab nations in the world, of whom 5.8 million reside in other Arab countries. Expatriates from Arab countries contribute to the circulation of financial and human capital in the region and thus significantly promote regional development. In 2009, Arab countries received a total of US$35.1 billion in remittance inflows, and remittances sent to Jordan, Egypt, and Lebanon from other Arab countries were 40 to 190 per cent higher than trade revenues between these and other Arab countries.
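To unpack the phrasing "40 to 190 per cent higher": it means remittance inflows ranged from 1.4 to 2.9 times the corresponding trade revenues. A short sketch of that arithmetic; the trade-revenue figure below is a hypothetical placeholder for illustration, not a number from the text:

    # "40 to 190 per cent higher" means 1.4x to 2.9x the base amount.
    # trade_revenue is a hypothetical placeholder, not a source figure.
    trade_revenue = 10.0  # billions USD, illustrative only

    low = trade_revenue * (1 + 0.40)   # 40% higher  -> 14.0
    high = trade_revenue * (1 + 1.90)  # 190% higher -> 29.0
    print(f"Remittances would range from {low:.1f} to {high:.1f} billion USD")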
In Somalia, the Somali Civil War has greatly increased the size of the Somali diaspora, as many of the best-educated Somalis left for Middle Eastern countries as well as Europe and North America. Non-Arab Middle Eastern countries such as Turkey, Israel, and Iran are also subject to important migration dynamics. A fair proportion of those migrating from Arab nations are members of ethnic and religious minorities facing persecution and are not necessarily ethnic Arabs, Iranians, or Turks. Large numbers of Kurds, Jews, Assyrians, Greeks, and Armenians, as well as many Mandaeans, have left nations such as Iraq, Iran, Syria, and Turkey for these reasons during the last century. In Iran, many religious minorities, such as Christians, Baháʼís, Jews, and Zoroastrians, have left since the Islamic Revolution of 1979.

The Middle East is very diverse when it comes to religions, many of which originated there. Islam is the largest religion in the Middle East, but other faiths that originated there, such as Judaism and Christianity, are also well represented. Christian communities have played a vital role in the Middle East; they represent 78% of the population of Cyprus and 40.5% of that of Lebanon, where the Lebanese president, half of the cabinet, and half of the parliament follow one of the various Lebanese Christian rites. There are also important minority religions such as the Baháʼí Faith, Yarsanism, Yazidism, Zoroastrianism, Mandaeism, the Druze faith, and Shabakism, and in ancient times the region was home to Mesopotamian religions, Canaanite religions, Manichaeism, Mithraism, and various monotheist gnostic sects.

The top six languages, in terms of numbers of speakers, are Arabic, Persian, Turkish, Kurdish, Modern Hebrew, and Greek. About 20 minority languages are also spoken in the Middle East. Arabic, with all its dialects, is the most widely spoken language in the Middle East, with Literary Arabic being official in all North African and most West Asian countries; Arabic dialects are also spoken in some adjacent areas of neighbouring non-Arab Middle Eastern countries. Arabic is a member of the Semitic branch of the Afro-Asiatic languages. Several Modern South Arabian languages, such as Mehri and Soqotri, are also spoken in Yemen and Oman. Another Semitic language is Aramaic, whose dialects are spoken mainly by Assyrians and Mandaeans, with Western Aramaic still spoken in two villages near Damascus, Syria. There is also an oasis Berber-speaking community in Egypt, where the language is known as Siwa; it is a non-Semitic Afro-Asiatic sister language.

Persian is the second-most spoken language. While it is primarily spoken in Iran and some border areas of neighbouring countries, Iran is one of the region's largest and most populous countries. Persian belongs to the Indo-Iranian branch of the Indo-European language family. Other Western Iranic languages spoken in the region include Achomi, Daylami, Kurdish dialects, Semnani, and Lurish, among many others.

Turkish, a close third among the most widely spoken languages, is largely confined to Turkey, also one of the region's largest and most populous countries, though it is present in areas of neighboring countries. It is a member of the Turkic languages, which have their origins in East Asia. Another Turkic language, Azerbaijani, is spoken by Azerbaijanis in Iran.

Kurdish, the fourth-most widely spoken language, is spoken in Iran, Iraq, Syria, and Turkey; Sorani Kurdish is the second official language of Iraq (instated with the 2005 constitution), after Arabic.

Hebrew is the official language of Israel; Arabic holds a special status after the 2018 Basic Law lowered it from its prior standing as an official language. Hebrew is spoken and used by over 80% of Israel's population, with the other 20% using Arabic. Modern Hebrew only began being spoken in the 20th century, after being revived in the late 19th century by Eliezer Ben-Yehuda (Eliezer Perlman) and European Jewish settlers; the first native speaker of Modern Hebrew was born in 1882.

Greek is one of the two official languages of Cyprus, and the country's main language. Small communities of Greek speakers exist all around the Middle East; until the 20th century the language was also widely spoken in Asia Minor (where it was the second-most spoken language, after Turkish) and Egypt. In antiquity, Ancient Greek was the lingua franca of many areas of the western Middle East, and it remained widely spoken there until the Muslim expansion. Until the late 11th century, it was also the main spoken language of Asia Minor; after that it was gradually replaced by Turkish as the Anatolian Turks expanded and the local Greeks were assimilated, especially in the interior.

English is one of the official languages of Akrotiri and Dhekelia. It is also commonly taught and used as a second language in countries such as Egypt, Jordan, Iran, Iraq, Qatar, Bahrain, the United Arab Emirates, and Kuwait, and is a main language in some emirates of the United Arab Emirates. It is also spoken natively by Jewish immigrants from Anglophone countries (the UK, the US, Australia) in Israel and widely understood as a second language there.

French is taught and used in many government facilities and media in Lebanon, and is taught in some primary and secondary schools of Egypt and Syria; owing to widespread immigration of French Jews to Israel, it is also the native language of approximately 200,000 Jews in Israel. Maltese, a Semitic language mainly spoken in Europe, is used by the Franco-Maltese diaspora in Egypt. Armenian is also spoken in the region, and Georgian is spoken by the Georgian diaspora.
Russian is spoken by a large portion of the Israeli population because of immigration from the former Soviet Union in the late 1990s. Russian today is a popular unofficial language in Israel; after Hebrew and Arabic, it is the language most commonly found in news media, on radio, and on signboards around the country. Circassian is also spoken by the diaspora in the region, including by almost all Circassians in Israel, who speak Hebrew and English as well. The largest Romanian-speaking community in the Middle East is found in Israel, where as of 1995 Romanian was spoken by 5% of the population.[d]

Bengali, Hindi, and Urdu are widely spoken by migrant communities in many Middle Eastern countries, such as Saudi Arabia (where 20–25% of the population is South Asian), the United Arab Emirates (where 50–55% of the population is South Asian), and Qatar, all of which host large numbers of Pakistani, Bangladeshi, and Indian immigrants.

Culture

The Middle East has recently become more prominent in hosting global sporting events, owing to its wealth and its desire to diversify its economy. The South Asian diaspora is a major backer of cricket in the region.
======================================== |
[SOURCE: https://en.wikipedia.org/wiki/Middle_East#cite_ref-67] | [TOKENS: 6152] |
Contents Middle East The Middle East[b] is a geopolitical region encompassing the Arabian Peninsula, Egypt, Iran, Iraq, the Levant, and Turkey. The term came into widespread usage by Western European nations in the early 20th century as a replacement of the term Near East (both were in contrast to the Far East). The term "Middle East" has led to some confusion over its changing definitions. Since the late 20th century, it has been criticized as being too Eurocentric. The region includes the vast majority of the territories included in the closely associated definition of West Asia, but without the South Caucasus. It also includes all of Egypt (not just the Sinai region) and all of Turkey (including East Thrace). Most Middle Eastern countries (13 out of 18) are part of the Arab world. The three most populous countries in the region are Egypt, Iran, and Turkey, while Saudi Arabia is the largest Middle Eastern country by area. The history of the Middle East dates back to ancient times, and it was long considered the "cradle of civilization". The geopolitical importance of the region has been recognized and competed for during millennia. The Abrahamic religions (Judaism, Christianity, and Islam) have their origins in the Middle East. Arabs constitute the main ethnic group in the region, followed by Turks, Persians, Kurds, Jews, and Assyrians. The Middle East generally has a hot, arid climate, especially in the Arabian and Egyptian regions. Several major rivers provide irrigation to support agriculture in limited areas here, such as the Nile Delta in Egypt, the Tigris and Euphrates watersheds of Mesopotamia, and the basin of the Jordan River that spans most of the Levant. These regions are collectively known as the Fertile Crescent, and comprise the core of what historians had long referred to as the cradle of civilization; multiple regions of the world have since been classified as also having developed independent, original civilizations. Conversely, the Levantine coast and most of Turkey have relatively temperate climates typical of the Mediterranean, with dry summers and cool, wet winters. Most of the countries that border the Persian Gulf have vast reserves of petroleum. Monarchs of the Arabian Peninsula in particular have benefitted economically from petroleum exports. Because of the arid climate and dependence on the fossil fuel industry, the Middle East is both a major contributor to climate change and a region that is expected to be severely adversely affected by it. Other concepts of the region exist, including the broader Middle East and North Africa (MENA), which includes states of the Maghreb and the Sudan. The term the "Greater Middle East" also includes Afghanistan, Mauritania, Pakistan, as well as parts of East Africa, and sometimes Central Asia and the South Caucasus. Terminology The term "Middle East" may have originated in the 1850s in the British India Office. However, it became more widely known when United States naval strategist Alfred Thayer Mahan used the term in 1902 to "designate the area between Arabia and India". During this time the British and Russian empires were vying for influence in Central Asia, a rivalry that would become known as the Great Game. Mahan realized not only the strategic importance of the region, but also of its center, the Persian Gulf. He labeled the area surrounding the Persian Gulf as the Middle East. 
He said that, beyond Egypt's Suez Canal, the Gulf was the most important passage for Britain to control in order to keep the Russians from advancing towards British India. Mahan first used the term in his article "The Persian Gulf and International Relations", published in September 1902 in the National Review, a British journal. The Middle East, if I may adopt a term which I have not seen, will some day need its Malta, as well as its Gibraltar; it does not follow that either will be in the Persian Gulf. Naval force has the quality of mobility which carries with it the privilege of temporary absences; but it needs to find on every scene of operation established bases of refit, of supply, and in case of disaster, of security. The British Navy should have the facility to concentrate in force if occasion arise, about Aden, India, and the Persian Gulf. Mahan's article was reprinted in The Times and followed in October by a 20-article series entitled "The Middle Eastern Question", written by Sir Ignatius Valentine Chirol. During this series, Sir Ignatius expanded the definition of Middle East to include "those regions of Asia which extend to the borders of India or command the approaches to India." After the series ended in 1903, The Times removed quotation marks from subsequent uses of the term. Until World War II, it was customary to refer to areas centered on Turkey and the eastern shore of the Mediterranean as the "Near East", while the "Far East" centered on China, India and Japan. The Middle East was then defined as the area from Mesopotamia to Burma; namely, the area between the Near East and the Far East. This area broadly corresponds to South Asia. In the late 1930s, the British established the Middle East Command, which was based in Cairo, for its military forces in the region. After that time, the term "Middle East" gained broader usage in Europe and the United States. Following World War II, for example, the Middle East Institute was founded in Washington, D.C. in 1946. The corresponding adjective is Middle Eastern and the derived noun is Middle Easterner. While non-Eurocentric terms such as "Southwest Asia" or "Swasia" have been sparsely used, the classification of the African country, Egypt, among those counted in the Middle East challenges the usefulness of using such terms. The description Middle has also led to some confusion over changing definitions. Before the First World War, "Near East" was used in English to refer to the Balkans and the Ottoman Empire, while "Middle East" referred to the Caucasus, Persia, and Arabian lands, and sometimes Afghanistan, India and others. In contrast, "Far East" referred to the countries of East Asia (e.g. China, Japan, and Korea). With the collapse of the Ottoman Empire in 1918, "Near East" largely fell out of common use in English, while "Middle East" came to be applied to the emerging independent countries of the Islamic world. However, the usage "Near East" was retained by a variety of academic disciplines, including archaeology and ancient history. In their usage, the term describes an area identical to the term Middle East, which is not used by these disciplines (see ancient Near East).[citation needed] The first official use of the term "Middle East" by the United States government was in the 1957 Eisenhower Doctrine, which pertained to the Suez Crisis. 
Secretary of State John Foster Dulles defined the Middle East as "the area lying between and including Libya on the west and Pakistan on the east, Syria and Iraq on the North and the Arabian peninsula to the south, plus the Sudan and Ethiopia." In 1958, the State Department explained that the terms "Near East" and "Middle East" were interchangeable, and defined the region as including only Egypt, Syria, Israel, Lebanon, Jordan, Iraq, Saudi Arabia, Kuwait, Bahrain, and Qatar. Since the late 20th century, scholars and journalists from the region, such as journalist Louay Khraish and historian Hassan Hanafi have criticized the use of "Middle East" as a Eurocentric and colonialist term. The Associated Press Stylebook of 2004 says that Near East formerly referred to the farther west countries while Middle East referred to the eastern ones, but that now they are synonymous. It instructs: Use Middle East unless Near East is used by a source in a story. Mideast is also acceptable, but Middle East is preferred. European languages have adopted terms similar to Near East and Middle East. Since these are based on a relative description, the meanings depend on the country and are generally different from the English terms. In German the term Naher Osten (Near East) is still in common use (nowadays the term Mittlerer Osten is more and more common in press texts translated from English sources, albeit having a distinct meaning). In the four Slavic languages, Russian Ближний Восток or Blizhniy Vostok, Bulgarian Близкия Изток, Polish Bliski Wschód or Croatian Bliski istok (terms meaning Near East are the only appropriate ones for the region). However, some European languages do have "Middle East" equivalents, such as French Moyen-Orient, Swedish Mellanöstern, Spanish Oriente Medio or Medio Oriente, Greek is Μέση Ανατολή (Mesi Anatoli), and Italian Medio Oriente.[c] Perhaps because of the political influence of the United States and Europe, and the prominence of Western press, the Arabic equivalent of Middle East (Arabic: الشرق الأوسط ash-Sharq al-Awsaṭ) has become standard usage in the mainstream Arabic press. It comprises the same meaning as the term "Middle East" in North American and Western European usage. The designation, Mashriq, also from the Arabic root for East, also denotes a variously defined region around the Levant, the eastern part of the Arabic-speaking world (as opposed to the Maghreb, the western part). Even though the term originated in the West, countries of the Middle East that use languages other than Arabic also use that term in translation. For instance, the Persian equivalent for Middle East is خاورمیانه (Khāvar-e miyāneh), the Hebrew is המזרח התיכון (hamizrach hatikhon), and the Turkish is Orta Doğu. Countries and territory Traditionally included within the Middle East are Arabia, Asia Minor, East Thrace, Egypt, Iran, the Levant, Mesopotamia, and the Socotra Archipelago. The region includes 17 UN-recognized countries and one British Overseas Territory. Various concepts are often paralleled to the Middle East, most notably the Near East, Fertile Crescent, and Levant. These are geographical concepts, which refer to large sections of the modern-day Middle East, with the Near East being the closest to the Middle East in its geographical meaning. Due to it primarily being Arabic speaking, the Maghreb region of North Africa is sometimes included. 
"Greater Middle East" is a political term coined by the second Bush administration in the first decade of the 21st century to denote various countries, pertaining to the Muslim world, specifically Afghanistan, Iran, Pakistan, and Turkey. Various Central Asian countries are sometimes also included. History The Middle East lies at the juncture of Africa and Eurasia and of the Indian Ocean and the Mediterranean Sea (see also: Indo-Mediterranean). It is the birthplace and spiritual center of religions such as Christianity, Islam, Judaism, Manichaeism, Yezidi, Druze, Yarsan, and Mandeanism, and in Iran, Mithraism, Zoroastrianism, Manicheanism, and the Baháʼí Faith. Throughout its history the Middle East has been a major center of world affairs; a strategically, economically, politically, culturally, and religiously sensitive area. The region is one of the regions where agriculture was independently discovered, and from the Middle East it was spread, during the Neolithic, to different regions of the world such as Europe, the Indus Valley and Eastern Africa. Prior to the formation of civilizations, advanced cultures formed all over the Middle East during the Stone Age. The search for agricultural lands by agriculturalists, and pastoral lands by herdsmen meant different migrations took place within the region and shaped its ethnic and demographic makeup. The Middle East is widely and most famously known as the cradle of civilization. The world's earliest civilizations, Mesopotamia (Sumer, Akkad, Assyria and Babylonia), ancient Egypt and Kish in the Levant, all originated in the Fertile Crescent and Nile Valley regions of the ancient Near East. These were followed by the Hittite, Greek, Hurrian and Urartian civilisations of Asia Minor; Elam, Persia and Median civilizations in Iran, as well as the civilizations of the Levant (such as Ebla, Mari, Nagar, Ugarit, Canaan, Aramea, Mitanni, Phoenicia and Israel) and the Arabian Peninsula (Magan, Sheba, Ubar). The Near East was first largely unified under the Neo Assyrian Empire, then the Achaemenid Empire followed later by the Macedonian Empire and after this to some degree by the Iranian empires (namely the Parthian and Sassanid Empires), the Roman Empire and Byzantine Empire. The region served as the intellectual and economic center of the Roman Empire and played an exceptionally important role due to its periphery on the Sassanid Empire. Thus, the Romans stationed up to five or six of their legions in the region for the sole purpose of defending it from Sassanid and Bedouin raids and invasions. From the 4th century CE onwards, the Middle East became the center of the two main powers at the time, the Byzantine Empire and the Sassanid Empire. However, it would be the later Islamic Caliphates of the Middle Ages, or Islamic Golden Age which began with the Islamic conquest of the region in the 7th century AD, that would first unify the entire Middle East as a distinct region and create the dominant Islamic Arab ethnic identity that largely (but not exclusively) persists today. The 4 caliphates that dominated the Middle East for more than 600 years were the Rashidun Caliphate, the Umayyad caliphate, the Abbasid caliphate and the Fatimid caliphate. Additionally, the Mongols would come to dominate the region, the Kingdom of Armenia would incorporate parts of the region to their domain, the Seljuks would rule the region and spread Turko-Persian culture, and the Franks would found the Crusader states that would stand for roughly two centuries. 
Josiah Russell estimates the population of what he calls "Islamic territory" as roughly 12.5 million in 1000 – Anatolia 8 million, Syria 2 million, and Egypt 1.5 million. From the 16th century onward, the Middle East came to be dominated, once again, by two main powers: the Ottoman Empire and the Safavid dynasty. The modern Middle East began after World War I, when the Ottoman Empire, which was allied with the Central Powers, was defeated by the Allies and partitioned into a number of separate nations, initially under British and French Mandates. Other defining events in this transformation included the establishment of Israel in 1948 and the eventual departure of European powers, notably Britain and France by the end of the 1960s. They were supplanted in some part by the rising influence of the United States from the 1970s onwards. In the 20th century, the region's significant stocks of crude oil gave it new strategic and economic importance. Mass production of oil began around 1945, with Saudi Arabia, Iran, Kuwait, Iraq, and the United Arab Emirates having large quantities of oil. Estimated oil reserves, especially in Saudi Arabia and Iran, are some of the highest in the world, and the international oil cartel OPEC is dominated by Middle Eastern countries. During the Cold War, the Middle East was a theater of ideological struggle between the two superpowers and their allies: NATO and the United States on one side, and the Soviet Union and Warsaw Pact on the other, as they competed to influence regional allies. Besides the political reasons there was also the "ideological conflict" between the two systems. Moreover, as Louise Fawcett argues, among many important areas of contention, or perhaps more accurately of anxiety, were, first, the desires of the superpowers to gain strategic advantage in the region, second, the fact that the region contained some two-thirds of the world's oil reserves in a context where oil was becoming increasingly vital to the economy of the Western world [...] Within this contextual framework, the United States sought to divert the Arab world from Soviet influence. Throughout the 20th and 21st centuries, the region has experienced both periods of relative peace and tolerance and periods of conflict particularly between Sunnis and Shiites. Geography In 2018, the MENA region emitted 3.2 billion tonnes of carbon dioxide and produced 8.7% of global greenhouse gas emissions (GHG) despite making up only 6% of the global population. These emissions are mostly from the energy sector, an integral component of many Middle Eastern and North African economies due to the extensive oil and natural gas reserves that are found within the region. The Middle East region is one of the most vulnerable to climate change. The impacts include increase in drought conditions, aridity, heatwaves and sea level rise. Sharp global temperature and sea level changes, shifting precipitation patterns and increased frequency of extreme weather events are some of the main impacts of climate change as identified by the Intergovernmental Panel on Climate Change (IPCC). The MENA region is especially vulnerable to such impacts due to its arid and semi-arid environment, facing climatic challenges such as low rainfall, high temperatures and dry soil. The climatic conditions that foster such challenges for MENA are projected by the IPCC to worsen throughout the 21st century. 
If greenhouse gas emissions are not significantly reduced, part of the MENA region risks becoming uninhabitable before the year 2100. Climate change is expected to put significant strain on already scarce water and agricultural resources within the MENA region, threatening the national security and political stability of all included countries. Over 60 percent of the region's population lives in high and very high water-stressed areas compared to the global average of 35 percent. This has prompted some MENA countries to engage with the issue of climate change on an international level through environmental accords such as the Paris Agreement. Law and policy are also being established on a national level amongst MENA countries, with a focus on the development of renewable energies. Economy Middle Eastern economies range from being very poor (such as Gaza and Yemen) to extremely wealthy nations (such as Qatar and UAE). According to the International Monetary Fund, the three largest Middle Eastern economies in nominal GDP in 2023 were Saudi Arabia ($1.06 trillion), Turkey ($1.03 trillion), and Israel ($0.54 trillion). For nominal GDP per person, the highest ranking countries are Qatar ($83,891), Israel ($55,535), the United Arab Emirates ($49,451) and Cyprus ($33,807). Turkey ($3.6 trillion), Saudi Arabia ($2.3 trillion), and Iran ($1.7 trillion) had the largest economies in terms of GDP PPP. For GDP PPP per person, the highest-ranking countries are Qatar ($124,834), the United Arab Emirates ($88,221), Saudi Arabia ($64,836), Bahrain ($60,596) and Israel ($54,997). The lowest-ranking country in the Middle East, in terms of GDP nominal per capita, is Yemen ($573). The economic structure of Middle Eastern nations are different because while some are heavily dependent on export of only oil and oil-related products (Saudi Arabia, the UAE and Kuwait), others have a highly diverse economic base (such as Cyprus, Israel, Turkey and Egypt). Industries of the Middle Eastern region include oil and oil-related products, agriculture, cotton, cattle, dairy, textiles, leather products, surgical instruments, defence equipment (guns, ammunition, tanks, submarines, fighter jets, UAVs, and missiles). Banking is an important sector, especially for UAE and Bahrain. With the exception of Cyprus, Turkey, Egypt, Lebanon and Israel, tourism has been a relatively undeveloped area of the economy, in part because of the socially conservative nature of the region as well as political turmoil in certain regions. Since the end of the COVID pandemic however, countries such as the UAE, Bahrain, and Jordan have begun attracting greater numbers of tourists because of improving tourist facilities and the relaxing of tourism-related restrictive policies. Unemployment is high in the Middle East and North Africa region, particularly among people aged 15–29, a demographic representing 30% of the region's population. The total regional unemployment rate in 2025 is 10.8%, and among youth is as high as 28%. Demographics Arabs constitute the largest ethnic group in the Middle East, followed by various Iranian peoples and then by Turkic peoples (Turkish, Azeris, Syrian Turkmen, and Iraqi Turkmen). Native ethnic groups of the region include, in addition to Arabs, Arameans, Assyrians, Baloch, Berbers, Copts, Druze, Greek Cypriots, Jews, Kurds, Lurs, Mandaeans, Persians, Samaritans, Shabaks, Tats, and Zazas. 
European ethnic groups that form a diaspora in the region include Albanians, Bosniaks, Circassians (including Kabardians), Crimean Tatars, Greeks, Franco-Levantines, Italo-Levantines, and Iraqi Turkmens. Among other migrant populations are Chinese, Filipinos, Indians, Indonesians, Pakistanis, Pashtuns, Romani, and Afro-Arabs. "Migration has always provided an important vent for labor market pressures in the Middle East. For the period between the 1970s and 1990s, the Arab states of the Persian Gulf in particular provided a rich source of employment for workers from Egypt, Yemen and the countries of the Levant, while Europe had attracted young workers from North African countries due both to proximity and the legacy of colonial ties between France and the majority of North African states." According to the International Organization for Migration, there are 13 million first-generation migrants from Arab nations in the world, of which 5.8 reside in other Arab countries. Expatriates from Arab countries contribute to the circulation of financial and human capital in the region and thus significantly promote regional development. In 2009 Arab countries received a total of US$35.1 billion in remittance in-flows and remittances sent to Jordan, Egypt and Lebanon from other Arab countries are 40 to 190 per cent higher than trade revenues between these and other Arab countries. In Somalia, the Somali Civil War has greatly increased the size of the Somali diaspora, as many of the best educated Somalis left for Middle Eastern countries as well as Europe and North America. Non-Arab Middle Eastern countries such as Turkey, Israel and Iran are also subject to important migration dynamics. A fair proportion of those migrating from Arab nations are from ethnic and religious minorities facing persecution and are not necessarily ethnic Arabs, Iranians or Turks.[citation needed] Large numbers of Kurds, Jews, Assyrians, Greeks and Armenians as well as many Mandeans have left nations such as Iraq, Iran, Syria and Turkey for these reasons during the last century. In Iran, many religious minorities such as Christians, Baháʼís, Jews and Zoroastrians have left since the Islamic Revolution of 1979. The Middle East is very diverse when it comes to religions, many of which originated there. Islam is the largest religion in the Middle East, but other faiths that originated there, such as Judaism and Christianity, are also well represented. Christian communities have played a vital role in the Middle East, and they represent 78% of Cyprus population, and 40.5% of Lebanon, where the Lebanese president, half of the cabinet, and half of the parliament follow one of the various Lebanese Christian rites. There are also important minority religions like the Baháʼí Faith, Yarsanism, Yazidism, Zoroastrianism, Mandaeism, Druze, and Shabakism, and in ancient times the region was home to Mesopotamian religions, Canaanite religions, Manichaeism, Mithraism and various monotheist gnostic sects. The six top languages, in terms of numbers of speakers, are Arabic, Persian, Turkish, Kurdish, Modern Hebrew and Greek. About 20 minority languages are also spoken in the Middle East. Arabic, with all its dialects, is the most widely spoken language in the Middle East, with Literary Arabic being official in all North African and in most West Asian countries. Arabic dialects are also spoken in some adjacent areas in neighbouring Middle Eastern non-Arab countries. It is a member of the Semitic branch of the Afro-Asiatic languages. 
Several Modern South Arabian languages such as Mehri and Soqotri are also spoken in Yemen and Oman. Another Semitic language is Aramaic and its dialects are spoken mainly by Assyrians and Mandaeans, with Western Aramaic still spoken in two villages near Damascus, Syria. There is also an Oasis Berber-speaking community in Egypt where the language is also known as Siwa. It is a non-Semitic Afro-Asiatic sister language. Persian is the second most spoken language. While it is primarily spoken in Iran and some border areas in neighbouring countries, the country is one of the region's largest and most populous. It belongs to the Indo-Iranian branch of the family of Indo-European languages. Other Western Iranic languages spoken in the region include Achomi, Daylami, Kurdish dialects, Semmani, Lurish, amongst many others. The close third-most widely spoken language, Turkish, is largely confined to Turkey, which is also one of the region's largest and most populous countries, but it is present in areas in neighboring countries. It is a member of the Turkic languages, which have their origins in East Asia. Another Turkic language, Azerbaijani, is spoken by Azerbaijanis in Iran. The fourth-most widely spoken language, Kurdish, is spoken in the countries of Iran, Iraq, Syria and Turkey, Sorani Kurdish is the second official language in Iraq (instated after the 2005 constitution) after Arabic. Hebrew is the official language of Israel, with Arabic given a special status after the 2018 Basic law lowered its status from an official language prior to 2018. Hebrew is spoken and used by over 80% of Israel's population, the other 20% using Arabic. Modern Hebrew only began being spoken in the 20th century after being revived in the late 19th century by Elizer Ben-Yehuda (Elizer Perlman) and European Jewish settlers, with the first native Hebrew speaker being born in 1882. Greek is one of the two official languages of Cyprus, and the country's main language. Small communities of Greek speakers exist all around the Middle East; until the 20th century it was also widely spoken in Asia Minor (being the second most spoken language there, after Turkish) and Egypt. During the antiquity, Ancient Greek was the lingua franca for many areas of the western Middle East and until the Muslim expansion it was widely spoken there as well. Until the late 11th century, it was also the main spoken language in Asia Minor; after that it was gradually replaced by the Turkish language as the Anatolian Turks expanded and the local Greeks were assimilated, especially in the interior. English is one of the official languages of Akrotiri and Dhekelia. It is also commonly taught and used as a foreign second language, in countries such as Egypt, Jordan, Iran, Iraq, Qatar, Bahrain, United Arab Emirates and Kuwait. It is also a main language in some Emirates of the United Arab Emirates. It is also spoken as native language by Jewish immigrants from Anglophone countries (UK, US, Australia) in Israel and understood widely as second language there. French is taught and used in many government facilities and media in Lebanon, and is taught in some primary and secondary schools of Egypt and Syria. Maltese, a Semitic language mainly spoken in Europe, is used by the Franco-Maltese diaspora in Egypt. Due to widespread immigration of French Jews to Israel, it is the native language of approximately 200,000 Jews in Israel. Armenian speakers are to be found in the region. Georgian is spoken by the Georgian diaspora. 
Russian is spoken by a large portion of the Israeli population, because of emigration in the late 1990s. Russian today is a popular unofficial language in use in Israel; news, radio and sign boards can be found in Russian around the country after Hebrew and Arabic. Circassian is also spoken by the diaspora in the region and by almost all Circassians in Israel who speak Hebrew and English as well. The largest Romanian-speaking community in the Middle East is found in Israel, where as of 1995[update] Romanian is spoken by 5% of the population.[d] Bengali, Hindi and Urdu are widely spoken by migrant communities in many Middle Eastern countries, such as Saudi Arabia (where 20–25% of the population is South Asian), the United Arab Emirates (where 50–55% of the population is South Asian), and Qatar, which have large numbers of Pakistani, Bangladeshi and Indian immigrants. Culture The Middle East has recently become more prominent in hosting global sport events due to its wealth and desire to diversify its economy. The South Asian diaspora is a major backer of cricket in the region. See also Notes References Further reading External links 29°N 41°E / 29°N 41°E / 29; 41 |
======================================== |
[SOURCE: https://en.wikipedia.org/wiki/Middle_East#cite_ref-66] | [TOKENS: 6152] |
Contents Middle East The Middle East[b] is a geopolitical region encompassing the Arabian Peninsula, Egypt, Iran, Iraq, the Levant, and Turkey. The term came into widespread usage by Western European nations in the early 20th century as a replacement of the term Near East (both were in contrast to the Far East). The term "Middle East" has led to some confusion over its changing definitions. Since the late 20th century, it has been criticized as being too Eurocentric. The region includes the vast majority of the territories included in the closely associated definition of West Asia, but without the South Caucasus. It also includes all of Egypt (not just the Sinai region) and all of Turkey (including East Thrace). Most Middle Eastern countries (13 out of 18) are part of the Arab world. The three most populous countries in the region are Egypt, Iran, and Turkey, while Saudi Arabia is the largest Middle Eastern country by area. The history of the Middle East dates back to ancient times, and it was long considered the "cradle of civilization". The geopolitical importance of the region has been recognized and competed for during millennia. The Abrahamic religions (Judaism, Christianity, and Islam) have their origins in the Middle East. Arabs constitute the main ethnic group in the region, followed by Turks, Persians, Kurds, Jews, and Assyrians. The Middle East generally has a hot, arid climate, especially in the Arabian and Egyptian regions. Several major rivers provide irrigation to support agriculture in limited areas here, such as the Nile Delta in Egypt, the Tigris and Euphrates watersheds of Mesopotamia, and the basin of the Jordan River that spans most of the Levant. These regions are collectively known as the Fertile Crescent, and comprise the core of what historians had long referred to as the cradle of civilization; multiple regions of the world have since been classified as also having developed independent, original civilizations. Conversely, the Levantine coast and most of Turkey have relatively temperate climates typical of the Mediterranean, with dry summers and cool, wet winters. Most of the countries that border the Persian Gulf have vast reserves of petroleum. Monarchs of the Arabian Peninsula in particular have benefitted economically from petroleum exports. Because of the arid climate and dependence on the fossil fuel industry, the Middle East is both a major contributor to climate change and a region that is expected to be severely adversely affected by it. Other concepts of the region exist, including the broader Middle East and North Africa (MENA), which includes states of the Maghreb and the Sudan. The term the "Greater Middle East" also includes Afghanistan, Mauritania, Pakistan, as well as parts of East Africa, and sometimes Central Asia and the South Caucasus. Terminology The term "Middle East" may have originated in the 1850s in the British India Office. However, it became more widely known when United States naval strategist Alfred Thayer Mahan used the term in 1902 to "designate the area between Arabia and India". During this time the British and Russian empires were vying for influence in Central Asia, a rivalry that would become known as the Great Game. Mahan realized not only the strategic importance of the region, but also of its center, the Persian Gulf. He labeled the area surrounding the Persian Gulf as the Middle East. 
He said that, beyond Egypt's Suez Canal, the Gulf was the most important passage for Britain to control in order to keep the Russians from advancing towards British India. Mahan first used the term in his article "The Persian Gulf and International Relations", published in September 1902 in the National Review, a British journal. The Middle East, if I may adopt a term which I have not seen, will some day need its Malta, as well as its Gibraltar; it does not follow that either will be in the Persian Gulf. Naval force has the quality of mobility which carries with it the privilege of temporary absences; but it needs to find on every scene of operation established bases of refit, of supply, and in case of disaster, of security. The British Navy should have the facility to concentrate in force if occasion arise, about Aden, India, and the Persian Gulf. Mahan's article was reprinted in The Times and followed in October by a 20-article series entitled "The Middle Eastern Question", written by Sir Ignatius Valentine Chirol. During this series, Sir Ignatius expanded the definition of Middle East to include "those regions of Asia which extend to the borders of India or command the approaches to India." After the series ended in 1903, The Times removed quotation marks from subsequent uses of the term. Until World War II, it was customary to refer to areas centered on Turkey and the eastern shore of the Mediterranean as the "Near East", while the "Far East" centered on China, India and Japan. The Middle East was then defined as the area from Mesopotamia to Burma; namely, the area between the Near East and the Far East. This area broadly corresponds to South Asia. In the late 1930s, the British established the Middle East Command, which was based in Cairo, for its military forces in the region. After that time, the term "Middle East" gained broader usage in Europe and the United States. Following World War II, for example, the Middle East Institute was founded in Washington, D.C. in 1946. The corresponding adjective is Middle Eastern and the derived noun is Middle Easterner. While non-Eurocentric terms such as "Southwest Asia" or "Swasia" have been sparsely used, the classification of the African country, Egypt, among those counted in the Middle East challenges the usefulness of using such terms. The description Middle has also led to some confusion over changing definitions. Before the First World War, "Near East" was used in English to refer to the Balkans and the Ottoman Empire, while "Middle East" referred to the Caucasus, Persia, and Arabian lands, and sometimes Afghanistan, India and others. In contrast, "Far East" referred to the countries of East Asia (e.g. China, Japan, and Korea). With the collapse of the Ottoman Empire in 1918, "Near East" largely fell out of common use in English, while "Middle East" came to be applied to the emerging independent countries of the Islamic world. However, the usage "Near East" was retained by a variety of academic disciplines, including archaeology and ancient history. In their usage, the term describes an area identical to the term Middle East, which is not used by these disciplines (see ancient Near East).[citation needed] The first official use of the term "Middle East" by the United States government was in the 1957 Eisenhower Doctrine, which pertained to the Suez Crisis. 
Secretary of State John Foster Dulles defined the Middle East as "the area lying between and including Libya on the west and Pakistan on the east, Syria and Iraq on the North and the Arabian peninsula to the south, plus the Sudan and Ethiopia." In 1958, the State Department explained that the terms "Near East" and "Middle East" were interchangeable, and defined the region as including only Egypt, Syria, Israel, Lebanon, Jordan, Iraq, Saudi Arabia, Kuwait, Bahrain, and Qatar. Since the late 20th century, scholars and journalists from the region, such as journalist Louay Khraish and historian Hassan Hanafi have criticized the use of "Middle East" as a Eurocentric and colonialist term. The Associated Press Stylebook of 2004 says that Near East formerly referred to the farther west countries while Middle East referred to the eastern ones, but that now they are synonymous. It instructs: Use Middle East unless Near East is used by a source in a story. Mideast is also acceptable, but Middle East is preferred. European languages have adopted terms similar to Near East and Middle East. Since these are based on a relative description, the meanings depend on the country and are generally different from the English terms. In German the term Naher Osten (Near East) is still in common use (nowadays the term Mittlerer Osten is more and more common in press texts translated from English sources, albeit having a distinct meaning). In the four Slavic languages, Russian Ближний Восток or Blizhniy Vostok, Bulgarian Близкия Изток, Polish Bliski Wschód or Croatian Bliski istok (terms meaning Near East are the only appropriate ones for the region). However, some European languages do have "Middle East" equivalents, such as French Moyen-Orient, Swedish Mellanöstern, Spanish Oriente Medio or Medio Oriente, Greek is Μέση Ανατολή (Mesi Anatoli), and Italian Medio Oriente.[c] Perhaps because of the political influence of the United States and Europe, and the prominence of Western press, the Arabic equivalent of Middle East (Arabic: الشرق الأوسط ash-Sharq al-Awsaṭ) has become standard usage in the mainstream Arabic press. It comprises the same meaning as the term "Middle East" in North American and Western European usage. The designation, Mashriq, also from the Arabic root for East, also denotes a variously defined region around the Levant, the eastern part of the Arabic-speaking world (as opposed to the Maghreb, the western part). Even though the term originated in the West, countries of the Middle East that use languages other than Arabic also use that term in translation. For instance, the Persian equivalent for Middle East is خاورمیانه (Khāvar-e miyāneh), the Hebrew is המזרח התיכון (hamizrach hatikhon), and the Turkish is Orta Doğu. Countries and territory Traditionally included within the Middle East are Arabia, Asia Minor, East Thrace, Egypt, Iran, the Levant, Mesopotamia, and the Socotra Archipelago. The region includes 17 UN-recognized countries and one British Overseas Territory. Various concepts are often paralleled to the Middle East, most notably the Near East, Fertile Crescent, and Levant. These are geographical concepts, which refer to large sections of the modern-day Middle East, with the Near East being the closest to the Middle East in its geographical meaning. Due to it primarily being Arabic speaking, the Maghreb region of North Africa is sometimes included. 
"Greater Middle East" is a political term coined by the second Bush administration in the first decade of the 21st century to denote various countries, pertaining to the Muslim world, specifically Afghanistan, Iran, Pakistan, and Turkey. Various Central Asian countries are sometimes also included. History The Middle East lies at the juncture of Africa and Eurasia and of the Indian Ocean and the Mediterranean Sea (see also: Indo-Mediterranean). It is the birthplace and spiritual center of religions such as Christianity, Islam, Judaism, Manichaeism, Yezidi, Druze, Yarsan, and Mandeanism, and in Iran, Mithraism, Zoroastrianism, Manicheanism, and the Baháʼí Faith. Throughout its history the Middle East has been a major center of world affairs; a strategically, economically, politically, culturally, and religiously sensitive area. The region is one of the regions where agriculture was independently discovered, and from the Middle East it was spread, during the Neolithic, to different regions of the world such as Europe, the Indus Valley and Eastern Africa. Prior to the formation of civilizations, advanced cultures formed all over the Middle East during the Stone Age. The search for agricultural lands by agriculturalists, and pastoral lands by herdsmen meant different migrations took place within the region and shaped its ethnic and demographic makeup. The Middle East is widely and most famously known as the cradle of civilization. The world's earliest civilizations, Mesopotamia (Sumer, Akkad, Assyria and Babylonia), ancient Egypt and Kish in the Levant, all originated in the Fertile Crescent and Nile Valley regions of the ancient Near East. These were followed by the Hittite, Greek, Hurrian and Urartian civilisations of Asia Minor; Elam, Persia and Median civilizations in Iran, as well as the civilizations of the Levant (such as Ebla, Mari, Nagar, Ugarit, Canaan, Aramea, Mitanni, Phoenicia and Israel) and the Arabian Peninsula (Magan, Sheba, Ubar). The Near East was first largely unified under the Neo Assyrian Empire, then the Achaemenid Empire followed later by the Macedonian Empire and after this to some degree by the Iranian empires (namely the Parthian and Sassanid Empires), the Roman Empire and Byzantine Empire. The region served as the intellectual and economic center of the Roman Empire and played an exceptionally important role due to its periphery on the Sassanid Empire. Thus, the Romans stationed up to five or six of their legions in the region for the sole purpose of defending it from Sassanid and Bedouin raids and invasions. From the 4th century CE onwards, the Middle East became the center of the two main powers at the time, the Byzantine Empire and the Sassanid Empire. However, it would be the later Islamic Caliphates of the Middle Ages, or Islamic Golden Age which began with the Islamic conquest of the region in the 7th century AD, that would first unify the entire Middle East as a distinct region and create the dominant Islamic Arab ethnic identity that largely (but not exclusively) persists today. The 4 caliphates that dominated the Middle East for more than 600 years were the Rashidun Caliphate, the Umayyad caliphate, the Abbasid caliphate and the Fatimid caliphate. Additionally, the Mongols would come to dominate the region, the Kingdom of Armenia would incorporate parts of the region to their domain, the Seljuks would rule the region and spread Turko-Persian culture, and the Franks would found the Crusader states that would stand for roughly two centuries. 
Josiah Russell estimates the population of what he calls "Islamic territory" as roughly 12.5 million in 1000 – Anatolia 8 million, Syria 2 million, and Egypt 1.5 million. From the 16th century onward, the Middle East came to be dominated, once again, by two main powers: the Ottoman Empire and the Safavid dynasty.

The modern Middle East began after World War I, when the Ottoman Empire, which was allied with the Central Powers, was defeated by the Allies and partitioned into a number of separate nations, initially under British and French mandates. Other defining events in this transformation included the establishment of Israel in 1948 and the eventual departure of European powers, notably Britain and France, by the end of the 1960s. They were supplanted in some part by the rising influence of the United States from the 1970s onwards.

In the 20th century, the region's significant stocks of crude oil gave it new strategic and economic importance. Mass production of oil began around 1945, with Saudi Arabia, Iran, Kuwait, Iraq, and the United Arab Emirates having large quantities of oil. Estimated oil reserves, especially in Saudi Arabia and Iran, are some of the highest in the world, and the international oil cartel OPEC is dominated by Middle Eastern countries.

During the Cold War, the Middle East was a theater of ideological struggle between the two superpowers and their allies: NATO and the United States on one side, and the Soviet Union and Warsaw Pact on the other, as they competed to influence regional allies. Besides the political reasons, there was also the "ideological conflict" between the two systems. Moreover, as Louise Fawcett argues, among many important areas of contention, or perhaps more accurately of anxiety, were, first, the desires of the superpowers to gain strategic advantage in the region, second, the fact that the region contained some two-thirds of the world's oil reserves in a context where oil was becoming increasingly vital to the economy of the Western world [...] Within this contextual framework, the United States sought to divert the Arab world from Soviet influence. Throughout the 20th and 21st centuries, the region has experienced periods of relative peace and tolerance as well as periods of conflict, particularly between Sunnis and Shiites.

Geography

In 2018, the MENA region emitted 3.2 billion tonnes of carbon dioxide and produced 8.7% of global greenhouse gas (GHG) emissions despite making up only 6% of the global population. These emissions come mostly from the energy sector, an integral component of many Middle Eastern and North African economies due to the region's extensive oil and natural gas reserves.

The Middle East is one of the regions most vulnerable to climate change. The impacts include increases in drought, aridity, heatwaves, and sea level rise. Sharp global temperature and sea level changes, shifting precipitation patterns, and increased frequency of extreme weather events are some of the main impacts of climate change identified by the Intergovernmental Panel on Climate Change (IPCC). The MENA region is especially vulnerable to such impacts due to its arid and semi-arid environment, facing climatic challenges such as low rainfall, high temperatures, and dry soil. The climatic conditions that foster such challenges are projected by the IPCC to worsen throughout the 21st century.
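The two shares quoted above imply per-capita emissions well above the global average. As a rough illustrative calculation (a figure derived here from the numbers above, not stated in the source):

$$\frac{\text{share of global GHG emissions}}{\text{share of global population}} = \frac{8.7\%}{6\%} \approx 1.45$$

That is, on these figures, the average resident of the MENA region accounts for roughly 45% more greenhouse gas emissions than the global per-capita average.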
If greenhouse gas emissions are not significantly reduced, part of the MENA region risks becoming uninhabitable before the year 2100. Climate change is expected to put significant strain on already scarce water and agricultural resources within the MENA region, threatening the national security and political stability of all of its countries. Over 60 percent of the region's population lives in areas of high or very high water stress, compared with a global average of 35 percent. This has prompted some MENA countries to engage with the issue of climate change at the international level through environmental accords such as the Paris Agreement. Law and policy are also being established at the national level among MENA countries, with a focus on the development of renewable energies.

Economy

Middle Eastern economies range from very poor (such as Gaza and Yemen) to extremely wealthy (such as Qatar and the UAE). According to the International Monetary Fund, the three largest Middle Eastern economies by nominal GDP in 2023 were Saudi Arabia ($1.06 trillion), Turkey ($1.03 trillion), and Israel ($0.54 trillion). By nominal GDP per person, the highest-ranking countries are Qatar ($83,891), Israel ($55,535), the United Arab Emirates ($49,451), and Cyprus ($33,807). Turkey ($3.6 trillion), Saudi Arabia ($2.3 trillion), and Iran ($1.7 trillion) had the largest economies in terms of GDP at purchasing power parity (PPP). By GDP (PPP) per person, the highest-ranking countries are Qatar ($124,834), the United Arab Emirates ($88,221), Saudi Arabia ($64,836), Bahrain ($60,596), and Israel ($54,997). The lowest-ranking country in the Middle East, in terms of nominal GDP per capita, is Yemen ($573).

The economic structures of Middle Eastern nations differ: while some are heavily dependent on the export of oil and oil-related products (Saudi Arabia, the UAE, and Kuwait), others have a highly diverse economic base (such as Cyprus, Israel, Turkey, and Egypt). Industries of the region include oil and oil-related products, agriculture, cotton, cattle, dairy, textiles, leather products, surgical instruments, and defence equipment (guns, ammunition, tanks, submarines, fighter jets, UAVs, and missiles). Banking is an important sector, especially for the UAE and Bahrain.

With the exception of Cyprus, Turkey, Egypt, Lebanon, and Israel, tourism has been a relatively undeveloped area of the economy, in part because of the socially conservative nature of the region as well as political turmoil in certain areas. Since the end of the COVID-19 pandemic, however, countries such as the UAE, Bahrain, and Jordan have begun attracting greater numbers of tourists thanks to improving tourist facilities and the relaxation of restrictive tourism-related policies. Unemployment is high in the Middle East and North Africa region, particularly among people aged 15–29, a demographic representing 30% of the region's population. The total regional unemployment rate in 2025 is 10.8%, and youth unemployment is as high as 28%.

Demographics

Arabs constitute the largest ethnic group in the Middle East, followed by various Iranian peoples and then by Turkic peoples (Turkish, Azeris, Syrian Turkmen, and Iraqi Turkmen). Native ethnic groups of the region include, in addition to Arabs, Arameans, Assyrians, Baloch, Berbers, Copts, Druze, Greek Cypriots, Jews, Kurds, Lurs, Mandaeans, Persians, Samaritans, Shabaks, Tats, and Zazas.
European ethnic groups that form a diaspora in the region include Albanians, Bosniaks, Circassians (including Kabardians), Crimean Tatars, Greeks, Franco-Levantines, and Italo-Levantines. Among other migrant populations are Chinese, Filipinos, Indians, Indonesians, Pakistanis, Pashtuns, Romani, and Afro-Arabs.

"Migration has always provided an important vent for labor market pressures in the Middle East. For the period between the 1970s and 1990s, the Arab states of the Persian Gulf in particular provided a rich source of employment for workers from Egypt, Yemen and the countries of the Levant, while Europe had attracted young workers from North African countries due both to proximity and the legacy of colonial ties between France and the majority of North African states." According to the International Organization for Migration, there are 13 million first-generation migrants from Arab nations in the world, of whom 5.8 million reside in other Arab countries. Expatriates from Arab countries contribute to the circulation of financial and human capital in the region and thus significantly promote regional development. In 2009, Arab countries received a total of US$35.1 billion in remittance inflows; remittances sent to Jordan, Egypt, and Lebanon from other Arab countries were 40 to 190 percent higher than the trade revenues between these and other Arab countries. In Somalia, the Somali Civil War has greatly increased the size of the Somali diaspora, as many of the best-educated Somalis left for Middle Eastern countries as well as Europe and North America.

Non-Arab Middle Eastern countries such as Turkey, Israel, and Iran are also subject to important migration dynamics. A fair proportion of those migrating from Arab nations are from ethnic and religious minorities facing persecution and are not necessarily ethnic Arabs, Iranians, or Turks.[citation needed] Large numbers of Kurds, Jews, Assyrians, Greeks, and Armenians, as well as many Mandaeans, have left nations such as Iraq, Iran, Syria, and Turkey for these reasons during the last century. In Iran, many religious minorities such as Christians, Baháʼís, Jews, and Zoroastrians have left since the Islamic Revolution of 1979.

The Middle East is very diverse when it comes to religions, many of which originated there. Islam is the largest religion in the Middle East, but other faiths that originated there, such as Judaism and Christianity, are also well represented. Christian communities have played a vital role in the Middle East; they represent 78% of the population of Cyprus and 40.5% of that of Lebanon, where the Lebanese president, half of the cabinet, and half of the parliament follow one of the various Lebanese Christian rites. There are also important minority religions such as the Baháʼí Faith, Yarsanism, Yazidism, Zoroastrianism, Mandaeism, the Druze faith, and Shabakism, and in ancient times the region was home to Mesopotamian religions, Canaanite religions, Manichaeism, Mithraism, and various monotheist gnostic sects.

The top six languages, in terms of numbers of speakers, are Arabic, Persian, Turkish, Kurdish, Modern Hebrew, and Greek. About 20 minority languages are also spoken in the Middle East. Arabic, with all its dialects, is the most widely spoken language in the Middle East, with Literary Arabic being official in all North African and most West Asian countries of the region. Arabic dialects are also spoken in some adjacent areas of neighbouring non-Arab Middle Eastern countries. Arabic is a member of the Semitic branch of the Afro-Asiatic languages.
Several Modern South Arabian languages, such as Mehri and Soqotri, are also spoken in Yemen and Oman. Another Semitic language is Aramaic, whose dialects are spoken mainly by Assyrians and Mandaeans, with Western Aramaic still spoken in two villages near Damascus, Syria. There is also an oasis Berber-speaking community in Egypt, where the language is known as Siwa; Berber is a non-Semitic member of the Afro-Asiatic family.

Persian is the second most spoken language. It is primarily spoken in Iran, one of the region's largest and most populous countries, and in some border areas of neighbouring states. It belongs to the Indo-Iranian branch of the Indo-European language family. Other Western Iranian languages spoken in the region include Achomi, Daylami, Kurdish dialects, Semnani, and Lurish, among many others.

Turkish, a close third in number of speakers, is largely confined to Turkey, also one of the region's largest and most populous countries, but is present in areas of neighboring countries. It is a member of the Turkic languages, which have their origins in East Asia. Another Turkic language, Azerbaijani, is spoken by Azerbaijanis in Iran.

The fourth most widely spoken language, Kurdish, is spoken in Iran, Iraq, Syria, and Turkey; Sorani Kurdish is the second official language of Iraq (instated by the 2005 constitution), after Arabic.

Hebrew is the official language of Israel; Arabic, which was an official language before 2018, was assigned a "special status" by the 2018 Basic Law. Hebrew is spoken and used by over 80% of Israel's population, with the other 20% using Arabic. Modern Hebrew only began to be spoken in the 20th century, after being revived in the late 19th century by Eliezer Ben-Yehuda (born Eliezer Perlman) and European Jewish settlers, with the first native Hebrew speaker being born in 1882.

Greek is one of the two official languages of Cyprus and the country's main language. Small communities of Greek speakers exist all around the Middle East; until the 20th century Greek was also widely spoken in Asia Minor (where it was the second most spoken language, after Turkish) and in Egypt. In antiquity, Ancient Greek was the lingua franca of many areas of the western Middle East, and it remained widely spoken there until the Muslim expansion. Until the late 11th century, it was also the main spoken language of Asia Minor; after that it was gradually replaced by Turkish as the Anatolian Turks expanded and the local Greeks were assimilated, especially in the interior.

English is one of the official languages of Akrotiri and Dhekelia. It is also commonly taught and used as a second language in countries such as Egypt, Jordan, Iran, Iraq, Qatar, Bahrain, the United Arab Emirates, and Kuwait, and is a main language in some emirates of the United Arab Emirates. It is also spoken as a native language by Jewish immigrants from Anglophone countries (the UK, the US, Australia) in Israel, and is widely understood as a second language there.

French is taught and used in many government facilities and media in Lebanon, and is taught in some primary and secondary schools in Egypt and Syria; owing to the widespread immigration of French Jews to Israel, French is the native language of approximately 200,000 Jews in Israel. Maltese, a Semitic language mainly spoken in Europe, is used by the Franco-Maltese diaspora in Egypt. Armenian speakers are also found in the region, and Georgian is spoken by the Georgian diaspora.
Russian is spoken by a large portion of the Israeli population, owing to immigration from the former Soviet Union in the 1990s. Russian today is a popular unofficial language in Israel; after Hebrew and Arabic, it is the most common language of news outlets, radio, and signboards around the country. Circassian is also spoken by the diaspora in the region and by almost all Circassians in Israel, who speak Hebrew and English as well. The largest Romanian-speaking community in the Middle East is found in Israel, where as of 1995 Romanian was spoken by 5% of the population.[d]

Bengali, Hindi, and Urdu are widely spoken by migrant communities in many Middle Eastern countries, such as Saudi Arabia (where 20–25% of the population is South Asian), the United Arab Emirates (where 50–55% of the population is South Asian), and Qatar, all of which have large numbers of Pakistani, Bangladeshi, and Indian immigrants.

Culture

The Middle East has recently become more prominent in hosting global sporting events, owing to its wealth and its desire to diversify its economy. The South Asian diaspora is a major backer of cricket in the region.
======================================== |
[SOURCE: https://en.wikipedia.org/wiki/International_treaties] | [TOKENS: 6667] |
Treaty

A treaty is a recorded international agreement between sovereign states or other subjects of international law (including international organizations) that is governed by international law. A treaty may also be known as an international agreement, protocol, covenant, convention, pact, or exchange of letters, among other terms; however, only documents that are legally binding on the parties are considered treaties under international law. Treaties may be bilateral (between two countries) or multilateral (involving more than two countries).

International agreements were used in some form by most major civilizations and became increasingly common and more sophisticated during the early modern era. The early 19th century saw developments in diplomacy, foreign policy, and international law reflected in the widespread use of treaties. The 1969 Vienna Convention on the Law of Treaties (VCLT) codified these practices and established rules and guidelines for creating, amending, interpreting, and terminating treaties, and for resolving disputes and alleged breaches. Treaties vary in their obligation (the extent to which states are bound to the rules), precision (the extent to which the rules are unambiguous), and delegation (the extent to which third parties have authority to interpret, apply, and make rules).

Treaties can take many forms and govern a wide range of subject matters, such as security, trade, environment, and human rights; they may also be used to establish international institutions, such as the International Criminal Court and the United Nations, for which they often provide a governing framework. Treaties serve as primary sources of international law and have codified or established most international legal principles since the early 20th century. In contrast with other sources of international law, such as customary international law, treaties are only binding on the parties that have signed and ratified them. Notwithstanding the VCLT and customary international law, treaties are not required to follow any standard form and differ widely in substance and complexity. Nevertheless, all valid treaties must comply with the legal principle of pacta sunt servanda (Latin: "agreements must be kept"), under which parties are committed to perform their duties and honor their agreements in good faith. A treaty may also be invalidated, and thus rendered unenforceable, if it violates a peremptory norm (jus cogens), such as by permitting a war of aggression or crimes against humanity.

Modern usage and form

A treaty is an official, express written agreement that states use to legally bind themselves. It is also the objective outcome of a ceremonial occasion that acknowledges the parties and their defined relationships. No academic accreditation or cross-professional contextual knowledge is required to publish a treaty. However, since the late 19th century, most treaties have followed a fairly consistent format. A treaty typically begins with a preamble describing the "High Contracting Parties" and their shared objectives in executing the treaty, as well as summarizing any underlying events (such as the aftermath of a war in the case of a peace treaty). Modern preambles are sometimes structured as a single very long sentence formatted into multiple paragraphs for readability, in which each of the paragraphs begins with a gerund (desiring, recognizing, having, etc.).
The High Contracting Parties—referred to either by the official title of the head of state (but not including the personal name), e.g. His Majesty The King of X or His Excellency The President of Y, or alternatively in the form of "Government of Z"—are enumerated, along with the full names and titles of their plenipotentiary representatives; a boilerplate clause describes how each party's representatives have communicated (or exchanged) their "full powers" (i.e., the official documents appointing them to act on behalf of their respective high contracting party) and found them in good or proper form. However, under the Vienna Convention on the Law of Treaties, if the representative is the head of state, head of government, or minister of foreign affairs, no special document is needed, as holding such high office is sufficient. The end of the preamble and the start of the actual agreement is often signaled by the words "have agreed as follows".

After the preamble come numbered articles, which contain the substance of the parties' actual agreement. Each article heading usually encompasses a paragraph. A long treaty may further group articles under chapter headings. Modern treaties, regardless of subject matter, usually contain articles governing where the final authentic copies of the treaty will be deposited and how any subsequent disputes as to their interpretation will be peacefully resolved.

The end of a treaty, the eschatocol (or closing protocol), is often signaled by language such as "in witness whereof" or "in faith whereof", followed by the words "DONE at", then the site(s) of the treaty's execution and the date(s) of its execution. The date is typically written in its most formal, non-numerical form; for example, the Charter of the United Nations reads "DONE at the city of San Francisco the twenty-sixth day of June, one thousand nine hundred and forty-five". If applicable, a treaty will note that it is executed in multiple copies in different languages, with a stipulation that the versions in different languages are equally authentic. The signatures of the parties' representatives follow at the very end. When the text of a treaty is later reprinted, such as in a collection of treaties currently in effect, an editor will often append the dates on which the respective parties ratified the treaty and on which it came into effect for each party.

Bilateral treaties are concluded between two states or entities. It is possible for a bilateral treaty to have more than two parties; for example, each of the bilateral treaties between Switzerland and the European Union (EU) has seventeen parties: the parties are divided into two groups, the Swiss ("on the one part") and the EU and its member states ("on the other part"). The treaty establishes rights and obligations between the Swiss on the one hand and the EU and the member states severally on the other; it does not establish any rights and obligations amongst the EU and its member states.[citation needed] A multilateral treaty is concluded among several countries, establishing rights and obligations between each party and every other party. Multilateral treaties may be regional or may involve states across the world. Treaties of "mutual guarantee" are international compacts, e.g., the Treaty of Locarno, which guarantees each signatory against attack from another. The United Nations has extensive power to convene states to enact large-scale multilateral treaties and has experience doing so.
Under the United Nations Charter, which is itself a treaty, treaties must be registered with the UN to be invoked before it or enforced in its judicial organ, the International Court of Justice. This was done to prevent the practice of secret treaties, which proliferated in the 19th and 20th centuries and often precipitated or exacerbated conflict. Article 103 of the Charter also states that its members' obligations under the Charter outweigh any competing obligations under other treaties. After their adoption, treaties, as well as their amendments, must follow the official legal procedures of the United Nations, as applied by the Office of Legal Affairs, including signature, ratification, and entry into force. In function and effectiveness, the UN has been compared to the United States federal government under the Articles of Confederation.

Adding and amending treaty obligations

Reservations are essentially caveats to a state's acceptance of a treaty: unilateral statements purporting to exclude or modify a legal obligation and its effects on the reserving state. Reservations must be included at the time of signing or ratification; i.e., "a party cannot add a reservation after it has already joined a treaty" (Article 19 of the 1969 Vienna Convention on the Law of Treaties). Originally, international law did not accept treaty reservations, rejecting them unless all parties to the treaty accepted the same reservations. However, in the interest of encouraging the largest number of states to join treaties, a more permissive rule regarding reservations has emerged. While some treaties still expressly forbid any reservations, they are now generally permitted to the extent that they are not inconsistent with the goals and purposes of the treaty.

When a state limits its treaty obligations through reservations, other states party to that treaty have the option to accept those reservations, object to them, or object and oppose them. If a state accepts them (or fails to act at all), both the reserving state and the accepting state are relieved of the reserved legal obligation as concerns their legal obligations to each other (accepting the reservation does not change the accepting state's legal obligations as concerns other parties to the treaty). If a state objects, the parts of the treaty affected by the reservation drop out completely and no longer create any legal obligations on the reserving and objecting states, again only as concerns each other. Finally, if a state objects and opposes, there are no legal obligations under that treaty between those two state parties whatsoever; the objecting and opposing state essentially refuses to acknowledge that the reserving state is a party to the treaty at all.

There are three ways an existing treaty can be amended. First, a formal amendment requires the states party to the treaty to go through the ratification process all over again. The renegotiation of treaty provisions can be long and protracted, and often some parties to the original treaty will not become parties to the amended treaty. When determining the legal obligations of two states, one a party to the original treaty and the other a party to the amended treaty, the states are only bound by the terms they both agreed upon.
Second, treaties can be amended informally by the treaty executive council when the changes are only procedural. Third, a change in customary international law can also amend a treaty, where state behavior evinces a new interpretation of the legal obligations under the treaty. Minor corrections to a treaty may be adopted by a procès-verbal, but a procès-verbal is generally reserved for changes to rectify obvious errors in the text adopted, i.e., where the text adopted does not correctly reflect the intention of the parties adopting it.

In international law and international relations, a protocol is generally a treaty or international agreement that supplements a previous treaty or international agreement. A protocol can amend the previous treaty or add additional provisions. Parties to the earlier agreement are not required to adopt the protocol, and this is sometimes made explicit, especially where many parties to the first agreement do not support the protocol. A notable example is the United Nations Framework Convention on Climate Change (UNFCCC), which established a general framework for the development of binding greenhouse gas emission limits, followed by the Kyoto Protocol, which contained the specific provisions and regulations later agreed upon.

Execution and implementation

Treaties may be seen as "self-executing", in that merely becoming a party puts the treaty and all its obligations in action. Other treaties may be non-self-executing and require "implementing legislation"—a change in the domestic law of a state party that will direct or enable it to fulfill treaty obligations. An example of a treaty requiring such legislation would be one mandating local prosecution by a party for particular crimes. The division between the two is often unclear and subject to disagreements within a government, since a non-self-executing treaty cannot be acted on without the proper change in domestic law; if a treaty requires implementing legislation, a state may default on its obligations because its legislature fails to pass the necessary domestic laws.

The language of treaties, like that of any law or contract, must be interpreted when the wording does not seem clear or it is not immediately apparent how it should be applied in a perhaps unforeseen circumstance. The Vienna Convention states that treaties are to be interpreted "in good faith" according to the "ordinary meaning given to the terms of the treaty in their context and in the light of its object and purpose". International legal experts also often invoke the "principle of maximum effectiveness", which interprets treaty language as having the fullest force and effect possible to establish obligations between the parties.

No one party to a treaty can impose its particular interpretation of the treaty upon the other parties. Consent may be implied, however, if the other parties fail to explicitly disavow that initially unilateral interpretation, particularly if that state has acted upon its view of the treaty without complaint. Consent by all parties to the treaty to a particular interpretation has the legal effect of adding another clause to the treaty – this is commonly called an "authentic interpretation". International tribunals and arbiters are often called upon to resolve substantial disputes over treaty interpretations. To establish the meaning in context, these judicial bodies may review the preparatory work from the negotiation and drafting of the treaty as well as the final, signed treaty itself.
One significant aspect of treaty-making is that signing a treaty implies recognition that the other side is a sovereign state and that the agreement being considered is enforceable under international law.[citation needed] Hence, nations can be very careful about terming an agreement a treaty. For example, within the United States, agreements between states are compacts, and agreements between states and the federal government or between agencies of the government are memoranda of understanding. Another situation can occur when one party wishes to create an obligation under international law but the other party does not. This factor has been at work with respect to discussions between North Korea and the United States over security guarantees and nuclear proliferation. The definition of the English word "treaty" varies depending on the legal and political context; in some jurisdictions, such as the United States, a treaty is specifically an international agreement that has been ratified, and thus made binding, per the procedures established under domestic law.

While the Vienna Convention provides a general dispute-resolution mechanism, many treaties specify a process outside the convention for arbitrating disputes and alleged breaches. This may be by a specially convened panel, by reference to an existing court or panel established for the purpose (such as the International Court of Justice or the European Court of Justice), or by processes such as the Dispute Settlement Understanding of the World Trade Organization. Depending on the treaty, such a process may result in financial penalties or other enforcement action.

Ending treaty obligations

Treaties are not necessarily permanently binding upon the signatory parties. As obligations in international law are traditionally viewed as arising only from the consent of states, many treaties expressly allow a state to withdraw as long as it follows certain procedures of notification ("denunciation"). For example, the Single Convention on Narcotic Drugs provides that the treaty will terminate if, as a result of denunciations, the number of parties falls below 40. Many treaties expressly forbid withdrawal. Article 56 of the Vienna Convention on the Law of Treaties provides that where a treaty is silent over whether or not it can be denounced, there is a rebuttable presumption that it cannot be unilaterally denounced unless it is established that the parties intended to admit the possibility of denunciation or withdrawal, or such a right may be implied by the nature of the treaty. The possibility of withdrawal thus depends on the terms of the treaty and its travaux préparatoires. It has, for example, been held that it is not possible to withdraw from the International Covenant on Civil and Political Rights (ICCPR). When North Korea declared its intention to do this, the Secretary-General of the United Nations, acting as registrar, said that the original signatories of the ICCPR had not overlooked the possibility of explicitly providing for withdrawal but rather had deliberately intended not to provide for it; consequently, withdrawal was not possible. The Organization of American States (OAS) allows member states to withdraw from its framework by officially informing the General Secretariat of the OAS of the intended withdrawal, subject to a two-year sunset period, in accordance with Article 143 of the body's charter.

In practice, state legislatures or other officials, where so empowered, sometimes use their sovereignty or provisions of supreme law to declare their withdrawal from a treaty and stop following its terms, even if this violates the terms of the treaty.
Other parties may accept this outcome, may consider the state to be untrustworthy in future dealings, or may retaliate with sanctions or military action. Withdrawal by one party from a bilateral treaty is typically considered to terminate the treaty. Multilateral treaties typically continue even after the withdrawal of one member, unless the terms of the treaty or mutual agreement cause its termination.

If a party has materially violated or breached its treaty obligations, the other parties may invoke this breach as grounds for temporarily suspending their obligations to that party under the treaty. A material breach may also be invoked as grounds for permanently terminating the treaty itself. A treaty breach does not automatically suspend or terminate treaty relations, however; it depends on how the other parties regard the breach and how they resolve to respond to it. Sometimes treaties will provide for the seriousness of a breach to be determined by a tribunal or other independent arbiter. An advantage of such an arbiter is that it prevents a party from prematurely and perhaps wrongfully suspending or terminating its own obligations due to another party's alleged material breach.

Treaties sometimes include provisions for self-termination, meaning that the treaty is automatically terminated if certain defined conditions are met. Some treaties are intended by the parties to be only temporarily binding and are set to expire on a given date. Other treaties may self-terminate if the treaty is meant to exist only under certain conditions.

A party may claim that a treaty should be terminated, even absent an express provision, if there has been a fundamental change in circumstances. Such a change is sufficient if it was unforeseen, if it undermined the "essential basis" of consent by a party, if it radically transforms the extent of obligations between the parties, and if the obligations are still to be performed. A party cannot base this claim on change brought about by its own breach of the treaty. This claim also cannot be used to invalidate treaties that established or redrew political boundaries.

Cartels

Cartels ("Cartells", "Cartelle" or "Kartell-Konventionen" in other languages) were a special kind of treaty within the international law of the 17th to 19th centuries. Their purpose was to regulate specific activities of common interest among contracting states that otherwise remained rivals in other areas. They were typically implemented on an administrative level. Like the cartels for duels and tournaments, these intergovernmental accords represented fairness agreements or gentlemen's agreements between states. In the United States, cartels governed humanitarian actions typically carried out by cartel ships, which were dispatched on missions such as carrying communications or prisoners between belligerents. From European history, a broader range of purposes is known. These "cartels" often reflected the cohesion of authoritarian ruling classes against their own unruly citizens. Generally, while partially curbing their mutual rivalries, European governments concluded cooperation agreements that were to apply either generally or only in case of war: measures against criminals and unruly citizens were to be conducted regardless of the nationality and origin of the persons concerned. If necessary, national borders could be crossed by police forces of the respective neighboring country for capture and arrest.
In the course of the 19th century, the term "cartel" (or "Cartell") gradually disappeared from intergovernmental agreements under international law; the term "convention" was used instead.

Invalid treaties

An otherwise valid and agreed-upon treaty may be rejected as a binding international agreement on several grounds. For example, the Japan–Korea treaties of 1905, 1907, and 1910 were protested by several governments as having been essentially forced upon Korea by Japan; they were confirmed as "already null and void" in the 1965 Treaty on Basic Relations between Japan and the Republic of Korea. If an act or the lack thereof is condemned under international law, the act will not assume international legality even if approved by internal law. This means that in case of a conflict with domestic law, international law will always prevail.

A party's consent to a treaty is invalid if it was given by an agent or body without power to do so under that state's domestic laws. States are reluctant to inquire into the internal affairs and processes of other states, and so a "manifest violation" is required, such that it would be "objectively evident to any State dealing with the matter". A strong presumption exists internationally that a head of state has acted within his proper authority. It seems that no treaty has ever actually been invalidated on this provision.[citation needed] Consent is also invalid if it was given by a representative who acted outside their restricted powers during the negotiations, if the other parties to the treaty were notified of those restrictions prior to his or her signing.[citation needed]

Articles 46–53 of the Vienna Convention on the Law of Treaties set out the only ways that treaties can be invalidated—considered unenforceable and void under international law. A treaty will be invalidated due either to the circumstances by which a state party joined the treaty or to the content of the treaty itself. Invalidation is separate from withdrawal, suspension, or termination (addressed above), which all involve an alteration in the consent of the parties to a previously valid treaty rather than the invalidation of that consent in the first place.

A governmental leader's consent may be invalidated if there was an erroneous understanding of a fact or situation at the time of conclusion, which formed the "essential basis" of the state's consent. Consent will not be invalidated if the misunderstanding was due to the state's own conduct, or if the truth should have been evident. Consent will also be invalidated if it was induced by the fraudulent conduct of another party, or by the direct or indirect "corruption" of its representative by another party to the treaty. Coercion of either a representative or the state itself through the threat or use of force, if used to obtain the consent of that state to a treaty, will invalidate that consent.

A treaty is null and void if it is in violation of a peremptory norm. These norms, unlike other principles of customary law, are recognized as permitting no violations and so cannot be altered through treaty obligations. These are limited to such universally accepted prohibitions as those against the aggressive use of force, genocide and other crimes against humanity, piracy, hostilities directed at a civilian population, racial discrimination and apartheid, and slavery and torture, meaning that no state can legally assume an obligation to commit or permit such acts.
Treaties under domestic national law

The constitution of Australia allows the executive government to enter into treaties, but the practice is for treaties to be tabled in both houses of parliament at least 15 days before signing. Treaties are considered a source of Australian law but sometimes require an act of parliament to be passed, depending on their nature. Treaties are administered and maintained by the Department of Foreign Affairs and Trade, which has advised that the "general position under Australian law is that treaties which Australia has joined, apart from those terminating a state of war, are not directly and automatically incorporated into Australian law. Signature and ratification do not, of themselves, make treaties operate domestically. In the absence of legislation, treaties cannot impose obligations on individuals nor create rights in domestic law. Nevertheless, international law, including treaty law, is a legitimate and important influence on the development of the common law and may be used in the interpretation of statutes." Treaties can be implemented by executive action, and often existing laws are sufficient to ensure a treaty is honored. Australian treaties generally fall under the following categories: extradition, postal agreements and money orders, trade, and international conventions.

The federal constitution of Brazil states that the power to enter into treaties is vested in the president of Brazil and that such treaties must be approved by the Congress of Brazil (Articles 84, Clause VIII, and 49, Clause I). In practice, that has been interpreted as meaning that the executive branch is free to negotiate and sign a treaty but that its ratification by the president requires the prior approval of Congress. Additionally, the Supreme Federal Court has ruled that, after ratification and entry into force, a treaty must be incorporated into domestic law by means of a presidential decree published in the federal register for it to be valid in Brazil and applicable by the Brazilian authorities. The court has established that treaties are subject to constitutional review and enjoy the same hierarchical position as ordinary legislation (leis ordinárias, or "ordinary laws", in Portuguese). A more recent ruling by the Supreme Federal Court, in 2008, altered that somewhat by holding that treaties containing human rights provisions enjoy a status above that of ordinary legislation, subordinate only to the constitution itself. Additionally, under the 45th Amendment to the constitution, human rights treaties approved by Congress by a special procedure enjoy the same hierarchical position as a constitutional amendment. The hierarchical position of treaties in relation to domestic legislation is relevant to the discussion of whether and how one can abrogate the other. The constitution does not have an equivalent of the Supremacy Clause in the U.S. Constitution, which is of interest to the discussion on the relation between treaties and the legislation of the states of Brazil.

In India, legislative subjects are divided into three lists: union, state, and concurrent. In the normal legislative process, subjects on the union list must be legislated by the Parliament of India. For subjects on the state list, only the respective state legislature can legislate. For subjects on the concurrent list, both governments can make laws. However, to implement international treaties, Parliament can legislate on any subject and even override the general division of subject lists.
In the United States, the term "treaty" has a distinct and more restricted legal definition than in international law. U.S. law distinguishes between "treaties", as defined in the U.S. Constitution, and "executive agreements", which are either "congressional-executive agreements" or "sole executive agreements"; although all three classes are equally treaties under international law, they are subject to different political and legal requirements and implications in the U.S. The distinctions primarily concern the method of approval: treaties require the "advice and consent" of two-thirds of the senators present, whereas sole executive agreements are executed by the President acting alone, and congressional-executive agreements require majority approval by both the House and the Senate. The three classifications are not mutually exclusive: a treaty may require a simple majority in Congress before or after it is signed by the President, or may grant the President authority to fill in the gaps with executive agreements rather than additional treaties or protocols. Currently, international agreements are ten times more likely to be executed by executive agreement, owing to their relative ease. Nevertheless, the President still often chooses to pursue the formal treaty process over an executive agreement to gain congressional support on matters that require Congress to pass implementing legislation or appropriate funds, as well as for agreements that impose long-term, complex legal obligations on the U.S. For example, the agreement between the United States, Iran, and other countries over Iran's nuclear program is not a treaty under U.S. law but rather a "political commitment" that does not bind the parties by law.

The nuances and ambiguity of how international agreements are effectuated or implemented in U.S. law have been the subject of multiple legal cases. The U.S. Supreme Court ruled in the Head Money Cases (1884) that "treaties" do not have a privileged position over acts of Congress and can be repealed or modified by legislative action just like any other regular law. In a similar vein, the court's decision in Reid v. Covert (1957) held that treaty provisions that conflict with the U.S. Constitution are null and void under U.S. law. However, the U.S. Supreme Court has also recognized the "supremacy" of treaties under the U.S. Constitution, such as in Ware v. Hylton (1796) and Missouri v. Holland (1920). The relative ease with which certain international agreements can be entered into by the President has often prompted congressional pushback, most notably in the proposed Bricker Amendment to the U.S. Constitution, which explicitly sought to rein in executive treaty-making powers.

Treaties and indigenous peoples

Treaties formed an important part of European colonization; in many parts of the world, Europeans attempted to legitimize their sovereignty by signing treaties with indigenous peoples. In most cases, these treaties were on extremely disadvantageous terms for the native people, who often did not comprehend the implications of what they were signing. In some rare cases, such as with Ethiopia and Qing China, local governments were able to use the treaties to at least mitigate the impact of European colonization.
This involved learning the intricacies of European diplomatic customs and then using the treaties to prevent a power from overstepping its agreement, or playing different powers against each other.[citation needed] In other cases, such as New Zealand with the Māori and Canada with its First Nations people, treaties allowed native peoples to maintain a minimum amount of autonomy. Such treaties between colonizers and indigenous peoples are an important part of political discourse in the late 20th and early 21st centuries; the treaties being discussed have international standing, as has been stated in a treaty study by the UN.

In the case of Indigenous Australians, no treaty was ever entered into with the Indigenous peoples entitling the Europeans to land ownership, the colonists mostly adopting the doctrine of terra nullius (with the exception of South Australia). This doctrine was later overturned by Mabo v Queensland, which established native title in Australia well after colonization was already a fait accompli. On 10 December 2019, the Victorian First Peoples' Assembly met for the first time in the Upper House of the Parliament of Victoria in Melbourne. The main aim of the Assembly is to work out the rules by which individual treaties will be negotiated between the Victorian Government and individual Aboriginal Victorian peoples. It will also establish an independent Treaty Authority, which will oversee the negotiations between the Aboriginal groups and the Victorian Government and ensure fairness.

Prior to 1871, the government of the United States regularly entered into treaties with Native Americans, but the Indian Appropriations Act of 3 March 1871 had a rider attached that effectively ended the President's treaty-making by providing that no Indian nation or tribe shall be acknowledged as an independent nation, tribe, or power with whom the United States may contract by treaty. The federal government continued to maintain similar contractual relations with the Indian tribes after 1871 through agreements, statutes, and executive orders.

Colonization in Canada saw a number of treaties signed between European settlers and Indigenous First Nations peoples. Historic Canadian treaties tend to fall into three broad categories: commercial, alliance, and territorial. Commercial treaties first emerged in the 17th century and were agreements made between the European fur-trading companies and the local First Nations; the Hudson's Bay Company, a British trading company located in what is now Northern Ontario, signed numerous commercial treaties during this period. Alliance treaties, commonly referred to as "treaties of peace, friendship and alliance", emerged in the late 17th to early 18th century. Finally, territorial treaties dictating land rights were signed between 1760 and 1923. The Royal Proclamation of 1763 accelerated the treaty-making process and provided the Crown with access to large amounts of land occupied by the First Nations. The Crown and 364 First Nations signed 70 treaties that are recognized by the Government of Canada and represent over 600,000 First Nation individuals.

There is evidence that "although both Indigenous and European Nations engaged in treaty-making before contact with each other, the traditions, beliefs, and worldviews that defined concepts such as "treaties" were extremely different". The Indigenous understanding of treaties is based on traditional culture and values.
Maintaining healthy and equitable relationships with other nations, as well as with the environment, is paramount. Gdoo-naaganinaa, a historic treaty between the Nishnaabeg nation and the Haudenosaunee Confederacy, is an example of how First Nations approach treaties. Under Gdoo-naaganinaa, also referred to in English as Our Dish, the neighbouring nations acknowledged that, while they were separate nations, they shared the same ecosystem, or Dish. It was agreed that the nations would respectfully share the land, not interfering with the other nation's sovereignty while also not monopolizing environmental resources. First Nations agreements such as Gdoo-naaganinaa are considered "living treaties" that must be upheld continually and renewed over time.

European settlers in Canada had a different perception of treaties. For them, treaties were not a living, equitable agreement but rather a legal contract on which the future creation of Canadian law would come to rely. As time passed, the settlers did not think it necessary to abide by all treaty agreements. A review of historic treaties reveals that the European settler understanding is the dominant view portrayed in Canadian treaties.

Canada today recognizes 25 additional treaties, called Modern Treaties, representing relationships with 97 Indigenous groups comprising over 89,000 people. These treaties have been instrumental in strengthening the position of Indigenous peoples in Canada by providing a range of rights and benefits, as organized by the Government of Canada.
======================================== |
[SOURCE: https://en.wikipedia.org/wiki/Daniyal] | [TOKENS: 959] |
Daniyal

Daniyal (Arabic: دانيال) was a Palestinian village in the Ramle Subdistrict, located 5 km east of Ramla and southeast of Lydda. It was depopulated on July 10, 1948, during the 1948 Arab–Israeli War, by the Yiftach Brigade in the first phase of Operation Dani, as part of the broader 1948 Palestinian expulsion and flight, or Nakba.

History

In 1838, Edward Robinson stopped by the village well, west of the village, and estimated its depth at 160 feet. The villagers were Muslim, and the village was noted as being in the Lydda District. It was populated by residents from Rafat, in the Jerusalem area, who established it as a dependency, or satellite village, of their home village. In 1863, Victor Guérin noted: "a small mosque situated on a height; it contains the tomb of a saint, called Neby Danyal. Some olive trees and a palm tree surround it. Near there is a village of about forty houses, also called Danyal. I observed there, not far from the dwellings, a considerable number of silos, intended to preserve straw, barley, and wheat." An official village list of about 1870 showed that the village had 24 houses and a population of 80, though the population count included men only.

In 1882, the PEF's Survey of Western Palestine (SWP) described Neby Danial: "A small settlement round the sacred shrine of the Prophet, with a well to the west. The tomb of Dan is shown here, and is believed by the Samaritans to be the true site." They further noted: "The village of Neby Danial includes the Mukam of Neby Dan, from which it is said by the natives to take its name."

In the 1922 census of Palestine, conducted by the British Mandate authorities, Danial had a population of 277 Muslims, increasing slightly in the 1931 census to 284 Muslims in a total of 71 houses. In the 1945 statistics, it had a population of 410 Muslims and a total of 2,808 dunums of land. Of this, 37 dunums were plantations and irrigable land and 2,599 dunums were for cereals, while a total of 15 dunums were classified as built-up areas. An elementary school for boys, which is still standing today, was founded in 1945 and had an enrollment of 55 students.

During the 1948 Palestine war, the village was depopulated by Israeli forces as part of the broader 1948 Palestinian expulsion and flight. The village was attacked by the IDF on 10 July 1948. On that day, the Yiftach Brigade reported: "Our forces are clearing the Innaba – Jimzu – Daniyal area and are torching everything that can be burned." On July 11, they reported that they had conquered Jimzu and Daniyal and were "busy clearing the villages and blowing up the houses." The historian Saleh Abdel Jawad writes that "indiscriminate killings" occurred in Daniyal upon the Israeli conquest of the village, with the IDF having first shelled the village to induce civilian flight and afterwards killing any residents who remained; at least 9 residents were killed after the capture of the village. In September 1948, Daniyal was among the Palestinian villages that Ben-Gurion wanted destroyed.[clarification needed] The Israeli settlement of Kfar Daniel was established on village land in 1949.

In 1992, the remains of the village were described by the historian Walid Khalidi: "The shrine of al-Nabi Daniyal, the school, and seven well built houses are all that remain of the village. The shrine, deserted and weathered amid weeds and a few trees, is made of stone, with a second story rising on one side.
The first story has arched windows and doors and the second has a porch and a rectangular window. The school is presently used by residents of Kefar Daniyyel. The houses are built of stone and are all flat-roofed, with a mix of arched and rectangular doors and windows. One house is used as a warehouse." The village's Arab inhabitants left for neighboring countries and territories such as Jordan and the West Bank. Others later settled in the United States, in the states of Texas and Illinois. The descendants of those who were forced to leave their homes face many difficulties from the local Israeli authorities when attempting to revisit the land where the village once stood.
======================================== |
[SOURCE: https://en.wikipedia.org/wiki/Abu_al-Fadl,_Ramle] | [TOKENS: 692] |
Contents Abu al-Fadl, Ramle Abu al-Fadl (Arabic: أبو الفضل/السطرية) was a Palestinian village in the Ramle Subdistrict, about 4 km (2.5 mi) northwest of Ramla, in what was, until 1948, Mandatory Palestine. The village was also known as al-Satariyya. In 1945, the village had a population of 510.
Location The village was located just south of Sarafand al-Amar, in the Ramleh District.
History The village land was owned by the Islamic waqf of Fadl ibn Abbas, possibly a cousin of the Islamic prophet Muhammad, after whom the village was named. In the Palestine Index Gazetteer, Abu al-Fadl was classified as a hamlet. At the time of the 1931 census, Abu al-Fadl had a population of 1,565 residents, all Muslims (noted under the name Es Sautariya). In the 1945 statistics, the village had a population of 510 Muslims. A total of 818 dunums of village land was used for citrus and bananas, 1,035 dunums were used for cereals, and 822 dunums were irrigated or used for orchards. In February 1948, it was reported that ten Arabs, one of them a woman, were murdered, probably by IZL gunmen, in a grove near the village where they apparently worked. This was one of the massacres of Palestinian civilians that were said to "erode Arab morale". The villagers probably left their homes in the second week of May 1948 during Operation Barak. This campaign was undertaken by the Givati Brigade commanded by Shimon Avidan; its objective was to clear the villages south of Tel Aviv and "cause a wandering of the inhabitants of the smaller settlements in the area." Each ground assault started with a mortar bombardment, followed by the expulsion of the remaining residents and the demolition of houses. The village was probably permanently occupied during the first stage of Operation Dani, 9–12 July 1948. This offensive, commanded by Yitzhak Rabin, resulted in the expulsion of some 70,000 people from the neighbouring towns of Lod and al-Ramla. The Palestinian historian Walid Khalidi described the area of Abu al-Fadl in 1992: "Of the original village houses, no more than five still stand, deserted and nearly collapsing. One of these houses, located at the edge of a citrus grove, is made of cement blocks, with rectangular doors and windows and a tiled, sloping roof. Another house, composed of three units, is located in the middle of a citrus grove. A few cypress trees, castor oil (ricinus) plants, and cactuses grow on the site, and Israeli buildings have been constructed nearby. The surrounding lands are cultivated by Israelis." The Israeli moshav of Sitria was established on village farmlands in 1949, Talmei Menashe was established on the site of the village proper in 1953, and some of Be'er Ya'akov and the eastern reaches of Rishon LeZion are partially on the village's land.
======================================== |
[SOURCE: https://en.wikipedia.org/wiki/Lydda] | [TOKENS: 4733] |
Contents Lod Lod (Hebrew: לוד, fully vocalized: לֹד), also known as Lydda (Ancient Greek: Λύδδα) and Lidd (Arabic: اللِّدّ, romanized: al-Lidd, or اللُّدّ, al-Ludd), is a city 15 km (9.5 mi) southeast of Tel Aviv and 40 km (25 mi) northwest of Jerusalem in the Central District of Israel. It is situated between the lower Shephelah on the east and the coastal plain on the west. The city had a population of 90,814 in 2023. Lod has been inhabited since at least the Neolithic period. It is mentioned a few times in the Hebrew Bible and in the New Testament. Between the 5th century BCE and the late Roman period, it was a prominent center for Jewish scholarship and trade. Around 200 CE, the city became a Roman colony and was renamed Diospolis (Ancient Greek: Διόσπολις, lit. 'city of Zeus'). Tradition identifies Lod as the 4th-century martyrdom site of Saint George; the Church of Saint George and Mosque of Al-Khadr located in the city is believed to have housed his remains. Following the Arab conquest of the Levant, Lod served as the capital of Jund Filastin; however, a few decades later, the seat of power was transferred to Ramla, and Lod declined in importance. Under Crusader rule, the city was a Catholic diocese of the Latin Church and it remains a titular see to this day.[citation needed] Lod underwent a major change in its population in the mid-20th century. Exclusively Palestinian Arab in 1947, Lod was part of the area designated for an Arab state in the United Nations Partition Plan for Palestine; however, in July 1948, the city was occupied by the Israel Defense Forces, and most of its Arab inhabitants were expelled in the Palestinian expulsion from Lydda and Ramle. The city was largely resettled by Jewish immigrants, most of them expelled from Arab countries. Today, Lod is one of Israel's mixed cities, with an Arab population of 30%. Lod is one of Israel's major transportation hubs. The main international airport, Ben Gurion Airport, is located 8 km (5 miles) north of the city. The city is also a major railway and road junction.
Religious references The Hebrew name Lod appears in the Hebrew Bible as a town of Benjamin, founded along with Ono by Shamed or Shamer (1 Chronicles 8:12; Ezra 2:33; Nehemiah 7:37; 11:35). In Ezra 2:33, it is mentioned as one of the cities whose inhabitants returned after the Babylonian captivity. Lod is not mentioned among the towns allocated to the tribe of Benjamin in Joshua 18:11–28. The name Lod derives from a tri-consonantal root not extant in Northwest Semitic, but only in Arabic ("to quarrel; withhold, hinder"). An Arabic etymology of such an ancient name is unlikely (the earliest attestation is from the Achaemenid period). In the New Testament, the town appears in its Greek form, Lydda, as the site of Peter's healing of Aeneas in Acts 9:32–38. The city is also mentioned in an Islamic hadith as the location of the battlefield where the false messiah (al-Masih ad-Dajjal) will be slain before the Day of Judgment.
History The first occupation dates to the Neolithic in the Near East and is associated with the Lodian culture. Occupation continued through the Chalcolithic period in the Levant. Pottery finds have dated the initial settlement in the area now occupied by the town to 5600–5250 BCE. In the Early Bronze Age, it was an important settlement in the central coastal plain between the Judean Shephelah and the Mediterranean coast, along Nahal Ayalon. Other important nearby sites were Tel Dalit, Tel Bareqet, Khirbat Abu Hamid (Shoham North), Tel Afeq, Azor and Jaffa. 
Two architectural phases belong to the late EB I in Area B. The first phase had a mudbrick wall, while the later phase included a circular stone structure. Later excavations uncovered an occupation layer, Stratum IV. It consists of two phases: Stratum IVb, with a mudbrick wall on stone foundations and rounded exterior corners, and Stratum IVa, with a mudbrick wall with no stone foundations, together with imported Egyptian pottery and local imitations. Another excavation revealed nine occupation strata. Strata VI-III belonged to Early Bronze IB. The material culture showed Egyptian imports in strata V and IV. Occupation continued into Early Bronze II with four strata (V-II). There was continuity in the material culture and indications of centralized urban planning. North of the tell were scattered MB II burials. The earliest written record is in a list of Canaanite towns drawn up by the Egyptian pharaoh Thutmose III at Karnak in 1465 BCE. From the fifth century BCE until the Roman period, the city was a centre of Jewish scholarship and commerce. According to British historian Martin Gilbert, during the Hasmonean period, Jonathan Maccabee and his brother, Simon Maccabaeus, enlarged the area under Jewish control, which included conquering the city. The Jewish community in Lod during the Mishnah and Talmud era is described in a significant number of sources, including information on its institutions, demographics, and way of life. The city reached its height as a Jewish center between the First Jewish-Roman War and the Bar Kokhba revolt, and again in the days of Judah ha-Nasi and the start of the Amoraim period. The city was then the site of numerous public institutions, including schools, study houses, and synagogues. In 43 BCE, Cassius, the Roman governor of Syria, sold the inhabitants of Lod into slavery, but they were set free two years later by Mark Antony. During the First Jewish–Roman War, the Roman proconsul of Syria, Cestius Gallus, razed the town on his way to Jerusalem in Tishrei 66 CE. According to Josephus, "[he] found the city deserted, for the entire population had gone up to Jerusalem for the Feast of Tabernacles. He killed fifty people whom he found, burned the town and marched on". Lydda was occupied by Emperor Vespasian in 68 CE. In the period following the destruction of Jerusalem in 70 CE, Rabbi Tarfon, who appears in many Tannaitic and Jewish legal discussions, served as a rabbinic authority in Lod. During the Kitos War, 115–117 CE, the Roman army laid siege to Lod, where the rebel Jews had gathered under the leadership of Julian and Pappos. Torah study was outlawed by the Romans and pursued mostly underground. The distress became so great that the patriarch Rabban Gamaliel II, who was shut up there and died soon afterwards, permitted fasting on Ḥanukkah. Other rabbis disagreed with this ruling. Lydda was next taken and many of the Jews were executed; the "slain of Lydda" are often mentioned in words of reverential praise in the Talmud. In 200 CE, Emperor Septimius Severus elevated the town to the status of a city, calling it Colonia Lucia Septimia Severa Diospolis. The name Diospolis ("City of Zeus") may have been bestowed earlier, possibly by Hadrian. At that point, most of its inhabitants were Christian. The earliest known bishop is Aëtius, a friend of Arius. During the following century (200–300 CE), Joshua ben Levi is said to have founded a yeshiva in Lod. In December 415, the Council of Diospolis was held here to try Pelagius; he was acquitted. 
In the sixth century, the city was renamed Georgiopolis after St. George, who was born there between 256 and 285 CE and served as a soldier in the guard of the emperor Diocletian. The Church of Saint George and Mosque of Al-Khadr is named for him. The 6th-century Madaba map shows Lydda as an unwalled city with a cluster of buildings under a black inscription reading "Lod, also Lydea, also Diospolis". An isolated large building with a semicircular colonnaded plaza in front of it might represent the St George shrine. After the Muslim conquest of Palestine by Amr ibn al-'As in 636 CE, Lod, which was referred to as "al-Ludd" in Arabic, served as the capital of Jund Filastin ("Military District of Palaestina") before the seat of power was moved to nearby Ramla during the reign of the Umayyad Caliph Suleiman ibn Abd al-Malik in 715–716. The population of al-Ludd was relocated to Ramla as well. With the relocation of its inhabitants and the construction of the White Mosque in Ramla, al-Ludd lost its importance and fell into decay. The city was visited by the local Arab geographer al-Muqaddasi in 985, when it was under the Fatimid Caliphate, and was noted for its Great Mosque which served the residents of al-Ludd, Ramla, and the nearby villages. He also wrote of the city's "wonderful church (of St. George) at the gate of which Christ will slay the Antichrist." The Crusaders occupied the city in 1099 and named it St Jorge de Lidde. It was briefly conquered by Saladin, but retaken by the Crusaders in 1191. For the English Crusaders, it was a place of great significance as the birthplace of Saint George. The Crusaders made it the seat of a Latin Church diocese, and it remains a titular see. It owed the service of 10 knights and 20 sergeants, and it had its own burgess court during this era. In 1226, the Ayyubid Syrian geographer Yaqut al-Hamawi visited al-Ludd and stated it was part of the Jerusalem District during Ayyubid rule. Sultan Baybars brought Lydda again under Muslim control by 1267–8. According to Qalqashandi, Lydda was an administrative centre of a wilaya during the fourteenth and fifteenth centuries in the Mamluk Empire. Mujir al-Din described it as a pleasant village with an active Friday mosque. During this time, Lydda was a station on the postal route between Cairo and Damascus. In 1517, Lydda was incorporated into the Ottoman Empire as part of the Damascus Eyalet, and in the 1550s, the revenues of Lydda were designated for the new waqf of Hasseki Sultan Imaret in Jerusalem, established by Hasseki Hurrem Sultan (Roxelana), the wife of Suleiman the Magnificent. By 1596, Lydda was a part of the nahiya ("subdistrict") of Ramla, which was under the administration of the liwa ("district") of Gaza. It had a population of 241 households and 14 bachelors who were all Muslims, and 233 households who were Christians. They paid a fixed tax rate of 33.3% on agricultural products, including wheat, barley, summer crops, vineyards, fruit trees, sesame, special products ("dawalib" = spinning wheels), goats and beehives, in addition to occasional revenues and market toll, a total of 45,000 akçe. All of the revenue went to the Waqf. In 1051 AH (1641/42 CE), the Bedouin tribe of al-Sawālima from around Jaffa attacked the villages of Subṭāra, Bayt Dajan, al-Sāfiriya, Jindās, Lydda and Yāzūr belonging to Waqf Haseki Sultan. The village appeared as Lydda, though misplaced, on the map of Pierre Jacotin compiled in 1799. Missionary William M. 
Thomson visited Lydda in the mid-19th century, describing it as a "flourishing village of some 2,000 inhabitants, imbosomed in noble orchards of olive, fig, pomegranate, mulberry, sycamore, and other trees, surrounded every way by a very fertile neighbourhood. The inhabitants are evidently industrious and thriving, and the whole country between this and Ramleh is fast being filled up with their flourishing orchards. Rarely have I beheld a rural scene more delightful than this presented in early harvest ... It must be seen, heard, and enjoyed to be appreciated." In 1869, the population of Ludd was given as 55 Catholics, 1,940 "Greeks", 5 Protestants and 4,850 Muslims. In 1870, the Church of Saint George was rebuilt. In 1892, the first railway station in the entire region was established in the city. In the second half of the 19th century, Jewish merchants migrated to the city, but left after the 1921 Jaffa riots. In 1882, the Palestine Exploration Fund's Survey of Western Palestine described Lod as "A small town, standing among enclosure of prickly pear, and having fine olive groves around it, especially to the south. The minaret of the mosque is a very conspicuous object over the whole of the plain. The inhabitants are principally Moslim, though the place is the seat of a Greek bishop resident of Jerusalem. The Crusading church has lately been restored, and is used by the Greeks. Wells are found in the gardens...." From 1918, Lydda was under the administration of the British Mandate in Palestine, as per a League of Nations decree that followed the Great War. During the Second World War, the British set up supply posts in and around Lydda and its railway station, also building an airport that was later renamed Ben Gurion Airport after the death of Israel's first prime minister in 1973. At the time of the 1922 census of Palestine, Lydda had a population of 8,103 inhabitants (7,166 Muslims, 926 Christians, and 11 Jews); of the Christians, 921 were Orthodox, 4 Roman Catholic, and 1 Melkite. This had increased by the 1931 census to 11,250 (10,002 Muslims, 1,210 Christians, 28 Jews, and 10 Bahai), in a total of 2,475 residential houses. In 1938, Lydda had a population of 12,750. In 1945, Lydda had a population of 16,780 (14,910 Muslims, 1,840 Christians, 20 Jews and 10 "other"). Until 1948, Lydda was an Arab town with a population of around 20,000: 18,500 Muslims and 1,500 Christians. In 1947, the United Nations proposed dividing Mandatory Palestine into two states, one Jewish and one Arab; Lydda was to form part of the proposed Arab state. In the ensuing war, Israel captured Arab towns outside the area the UN had allotted it, including Lydda. In December 1947, thirteen Jewish passengers in a seven-car convoy to Ben Shemen Youth Village were ambushed and murdered. In a separate incident, three Jewish youths, two men and a woman, were captured, then raped and murdered in a neighbouring village. Their bodies were paraded in Lydda's principal street. The Israel Defense Forces entered Lydda on 11 July 1948. The following day, under the impression that it was under attack, the 3rd Battalion was ordered to shoot anyone "seen on the streets". According to Israel, 250 Arabs were killed. Other estimates are higher: Arab historian Aref al Aref estimated 400, and Nimr al Khatib 1,700. In 1948, the population rose to 50,000 during the Nakba, as Arab refugees fleeing other areas made their way there. 
A key event was the Palestinian expulsion from Lydda and Ramle, in which 50,000–70,000 Palestinians were expelled from the two towns by the Israel Defense Forces. All but 700 to 1,056 were expelled by order of the Israeli high command, and forced to walk 17 km (10.5 mi) to the Jordanian Arab Legion lines. Estimates of those who died from exhaustion and dehydration vary from a handful to 355. The town was subsequently sacked by the Israeli army. Some scholars, including Ilan Pappé, characterize this as ethnic cleansing. The few hundred Arabs who remained in the city were soon outnumbered by the influx of Jews who immigrated to Lod from August 1948 onward, most of them from Arab countries. As a result, Lod became a predominantly Jewish town. After the establishment of the state, the biblical name Lod was readopted. The Jewish immigrants who settled Lod came in waves, first from Morocco and Tunisia, later from Ethiopia, and then from the former Soviet Union. Since 2008, many urban development projects have been undertaken to improve the image of the city. Upscale neighbourhoods have been built, among them Ganei Ya'ar and Ahisemah, expanding the city to the east. According to a 2010 report in The Economist, a three-meter-high wall was built between Jewish and Arab neighbourhoods and construction in Jewish areas was given priority over construction in Arab neighborhoods. The newspaper says that violent crime in the Arab sector revolves mainly around family feuds over turf and honour crimes. In 2010, the Lod Community Foundation organised an event for representatives of bicultural youth movements, volunteer aid organisations, educational start-ups, businessmen, sports organizations, and conservationists working on programmes to better the city. In the 2021 Israel–Palestine crisis, a state of emergency was declared in Lod after Arab rioting led to the death of an Israeli Jew. The Mayor of Lod, Yair Revivio, urged Prime Minister of Israel Benjamin Netanyahu to deploy the Israel Border Police to restore order in the city. This was the first time since 1966 that Israel had declared this kind of emergency lockdown. International media noted that both Jewish and Palestinian mobs were active in Lod, but the "crackdown came for one side" only.
Demographics In the 19th century and until the Lydda Death March, Lod was an exclusively Muslim-Christian town, with an estimated 6,850 inhabitants, of whom approximately 2,000 (29%) were Christian. According to the Israel Central Bureau of Statistics (CBS), the population of Lod in 2010 was 69,500 people. According to the 2019 census, the population of Lod was 77,223, of which 53,581 people, comprising 69.4% of the city's population, were classified as "Jews and Others", and 23,642 people, comprising 30.6%, as "Arab".
Education According to the CBS, the city has 38 schools and 13,188 pupils: 26 elementary schools with 8,325 pupils and 13 high schools with 4,863 pupils. About 52.5% of 12th-grade pupils were entitled to a matriculation certificate in 2001.[citation needed]
Economy The airport and related industries are a major source of employment for the residents of Lod. Other important factories in the city are the communication equipment company "Talard"; "Cafe-Co", a subsidiary of the Strauss Group; and "Kashev", the computer center of Bank Leumi. A Jewish Agency Absorption Centre is also located in Lod. According to CBS figures for 2000, 23,032 people were salaried workers and 1,405 were self-employed. 
The mean monthly wage for a salaried worker was NIS 4,754, a real change of 2.9% over the course of 2000. Salaried men had a mean monthly wage of NIS 5,821 (a real change of 1.4%) versus NIS 3,547 for women (a real change of 4.6%). The mean income for the self-employed was NIS 4,991. About 1,275 people were receiving unemployment benefits and 7,145 were receiving an income supplement.
Art and culture In 2009–2010, Dor Guez held Georgeopolis, an exhibit focusing on Lod, at the Petach Tikva art museum.
Archaeology A well-preserved mosaic floor dating to the Roman period was excavated in 1996 as part of a salvage dig conducted on behalf of the Israel Antiquities Authority and the Municipality of Lod, prior to widening HeHalutz Street. According to Jacob Fisch, executive director of the Friends of the Israel Antiquities Authority, a worker at the construction site noticed the tail of a tiger in the mosaic and halted work. The mosaic was initially covered over with soil at the conclusion of the excavation for lack of funds to conserve and develop the site. The mosaic is now part of the Lod Mosaic Archaeological Center. The floor, with its colorful display of birds, fish, exotic animals and merchant ships, is believed to have been commissioned by a wealthy resident of the city for his private home. The Lod Community Archaeology Program, which operates in ten Lod schools, five Jewish and five Israeli Arab, combines archaeological studies with participation in digs in Lod.
Sports The city's major football club, Hapoel Bnei Lod, plays in Liga Leumit (the second division). Its home is at the Lod Municipal Stadium. The club was formed by a merger of Bnei Lod and Rakevet Lod in the 1980s. Two other clubs in the city play in the regional leagues: Hapoel MS Ortodoxim Lod in Liga Bet and Maccabi Lod in Liga Gimel. Hapoel Lod played in the top division during the 1960s and 1980s, and won the State Cup in 1984. The club folded in 2002. A new club, Hapoel Maxim Lod (named after former mayor Maxim Levy), was established soon after, but folded in 2007.
======================================== |
[SOURCE: https://en.wikipedia.org/wiki/Bayt_Nabala] | [TOKENS: 1431] |
Contents Bayt Nabala Bayt Nabala or Beit Nabala was a Palestinian Arab village in the Ramle Subdistrict of Mandatory Palestine that was destroyed during the 1948 Arab–Israeli War. The village was in the territory allotted to the Arab state under the 1947 UN Partition Plan, which was rejected by Arab leaders and never implemented. Its population in 1945, before the war, was 2,310. It was occupied by Israeli forces on 13 May 1948 and was completely destroyed by them on 13 September 1948. Village refugees were scattered around Deir 'Ammar, Ramallah city, Bayt Tillow, Rantis, and Jalazone refugee camps north of Ramallah. Some of the clans that lived in Bayt Nabala include the AlHeet, Nakhleh, Safi, al-Sharaqa, al-Khateeb, Saleh and Zaid families. Today the area is part of the Israeli town of Beit Nehemia.
History Bayt Nabala is identical with the ancient Beth Nabala/Beth Nablata. In 1526, Bayt Nabala was part of the Ottoman Empire, in the nahiya (subdistrict) of Ramla under the liwa (district) of al-Quds. According to Ottoman tax records, the village paid 500 akçe annually. In the 1596 tax record, Bayt Nabala was categorized under the liwa of Gaza, with a population of 54 Muslim households, an estimated 297 people. They paid a fixed tax rate of 33.3% on a number of crops, including wheat, barley, olives and fruit, as well as on goats, beehives and a press that was used for processing either olives or grapes, in addition to occasional revenues; a total of 8,688 akçe. In the 17th century, the village received an influx of refugees from neighboring Beit Qufa, who had to abandon their village due to unsettled conditions. During the 18th and 19th centuries, Beit Nabala belonged to the nahiya (subdistrict) of Lod, which stretched from the present-day city of Modi'in-Maccabim-Re'ut in the south to the present-day city of El'ad in the north, and from the foothills in the east, through the Lod Valley, to the outskirts of Jaffa in the west. This area was home to thousands of inhabitants in about 20 villages, who had at their disposal tens of thousands of hectares of prime agricultural land. According to historian Roy Marom, "Bayt Nabālā was a major hub for the Qays and Yaman conflicts in the area." Bayt Nabala's first residents were the Qaysi "al-Sharāqa" clan. Local tradition holds that a Yamani immigrant called Salām came and camped in the caves near Bayt Nabālā. When a conflict broke out between Bayt Nabālā and al-Ḥadītha, Salām took advantage of the plight of the residents of Bayt Nabālā to gain control over them, and his three "sons", Zayd, Nakhla and Ṣāfī, settled in the village. Relations between the clans were strained, and riots broke out between them. A Qaysī leader named ‘Ābid, from the old al-Sharāqa clan, led his forces and allies, from Jayyūs and Dayr Abū Mash‘al, against the supporters of the Yaman in Qibyā and Dayr Ṭarīf. With the support of the powerful and influential Yamanī families, al-Khawāja from Ni‘līn and the Abu Ghosh family, Ṣāfī succeeded in persuading the authorities to arrest ‘Ābid and eliminate him. Ṣāfī then extended his control over Dayr Ṭarīf, al-Ṭīra, Qūla, Fajja and Mulabbis. In 1838, Edward Robinson noted Bayt Nabala from the tower in Ramle. In 1870, Victor Guérin visited and found the village to have about 900 inhabitants. Socin found from an official Ottoman village list from about the same year that Bayt Nabala had 108 houses and a population of 427, though the count included only men. Hartmann found that Bet Nebala had 118 houses. 
In 1882, the PEF's Survey of Western Palestine described Bayt Nabala as being of moderate size, situated at the edge of a plain. Since the end of the 19th century, the inhabitants of Beit Nabala cultivated the lands of the deserted village of Jindas. The school was founded in 1921 and had about 230 students in 1946–47. In the 1922 census of Palestine, conducted by the British Mandate authorities, Bait Nabala had a population of 1,324 inhabitants (1,321 Muslims and 3 Christians), increasing in the 1931 census to 1,758, all Muslims, in a total of 471 houses. In the 1945 statistics, the village had a population of 2,310 Muslims, while the total land area was 15,051 dunums, according to an official land and population survey. A total of 226 dunums of village land was used for citrus and bananas, 10,197 dunums were used for cereals, 1,733 dunums were irrigated or used for orchards, while 123 dunums were classified as built-up public areas. Benny Morris writes that the village residents abandoned it on Arab orders on 13 May 1948. However, according to Walid Khalidi, this cannot be confirmed. The Palestinian historian Walid Khalidi described the village site in 1992: "The site is overgrown with grass, thorny bushes, and cypress and fig trees. It lies on the east side of the settlement of Beyt Nechemya, due east of the road from the Lod (Lydda) airport. On its fringes are the remains of quarries and crumbled houses. Sections of walls from the houses still stand. The surrounding land is cultivated by the Israeli settlements."
Culture According to the Palestinian Heritage Foundation, Beit Nabala dresses (together with those of the village of Dayr Tarif) "were usually done on cotton, velvet or kermezot silk fabric. Taffeta inserts embroidered in Bethlehem style couching-stitch in gold and silk cord were attached to the yoke, chest panel, sleeves and skirt. In the 1930s black velvet material became popular, and dresses were embroidered in couching straight on the fabric with brown or orange couching embroidery which later became famous for this area."
======================================== |
[SOURCE: https://en.wikipedia.org/wiki/United_States#cite_note-FedJud-276] | [TOKENS: 17273] |
Contents United States The United States of America (USA), also known as the United States (U.S.) or America, is a country primarily located in North America. It is a federal republic of 50 states and a federal capital district, Washington, D.C. The 48 contiguous states border Canada to the north and Mexico to the south, with the semi-exclave of Alaska in the northwest and the archipelago of Hawaii in the Pacific Ocean. The United States also asserts sovereignty over five major island territories and various uninhabited islands in Oceania and the Caribbean.[j] It is a megadiverse country, with the world's third-largest land area[c] and third-largest population, exceeding 341 million.[k] Paleo-Indians first migrated from North Asia to North America at least 15,000 years ago, and formed various civilizations. Spanish colonization established Spanish Florida in 1513, the first European colony in what is now the continental United States. British colonization followed with the 1607 settlement of Virginia, the first of the Thirteen Colonies. Enslavement of Africans was practiced in all colonies by 1770 and supplied most of the labor for the Southern Colonies' plantation economy. Clashes with the British Crown began as a civil protest over the illegality of taxation without representation in Parliament and the denial of other English rights. They evolved into the American Revolution, which led to the Declaration of Independence and a society based on universal rights. Victory in the 1775–1783 Revolutionary War brought international recognition of U.S. sovereignty and fueled westward expansion, further dispossessing native inhabitants. As more states were admitted, a North–South division over slavery led the Confederate States of America to declare secession and fight the Union in the 1861–1865 American Civil War. With the United States' victory and reunification, slavery was abolished nationally. By the late 19th century, the U.S. economy outpaced the French, German and British economies combined. As of 1900, the country had established itself as a great power, a status solidified after its involvement in World War I. Following Japan's attack on Pearl Harbor in 1941, the U.S. entered World War II. Its aftermath left the U.S. and the Soviet Union as rival superpowers, competing for ideological dominance and international influence during the Cold War. The Soviet Union's collapse in 1991 ended the Cold War, leaving the U.S. as the world's sole superpower. The U.S. federal government is a representative democracy with a president and a constitution that grants separation of powers under three branches: legislative, executive, and judicial. The United States Congress is a bicameral national legislature composed of the House of Representatives (a lower house based on population) and the Senate (an upper house based on equal representation for each state). Federalism grants substantial autonomy to the 50 states. In addition, 574 Native American tribes have sovereignty rights, and there are 326 Native American reservations. Since the 1850s, the Democratic and Republican parties have dominated American politics. American ideals and values are based on a democratic tradition inspired by the American Enlightenment movement. A developed country, the U.S. ranks high in economic competitiveness, innovation, and higher education. Accounting for over a quarter of nominal global GDP, its economy has been the world's largest since about 1890. 
It is the wealthiest country, with the highest disposable household income per capita among OECD members, though its wealth inequality is highly pronounced. Shaped by centuries of immigration, the culture of the U.S. is diverse and globally influential. Accounting for more than a third of global military spending, the country has one of the strongest armed forces and is a designated nuclear state. A member of numerous international organizations, the U.S. plays a major role in global political, cultural, economic, and military affairs.
Etymology Documented use of the phrase "United States of America" dates back to January 2, 1776. On that day, Stephen Moylan, a Continental Army aide to General George Washington, wrote a letter to Joseph Reed, Washington's aide-de-camp, seeking to go "with full and ample powers from the United States of America to Spain" to seek assistance in the Revolutionary War effort. The first known public usage is an anonymous essay published in the Williamsburg newspaper The Virginia Gazette on April 6, 1776. Sometime on or after June 11, 1776, Thomas Jefferson wrote "United States of America" in a rough draft of the Declaration of Independence, which was adopted by the Second Continental Congress on July 4, 1776. The term "United States" and its initialism "U.S.", used as nouns or as adjectives in English, are common short names for the country. The initialism "USA", a noun, is also common. "United States" and "U.S." are the established terms throughout the U.S. federal government, with prescribed rules.[l] "The States" is an established colloquial shortening of the name, used particularly from abroad; "stateside" is the corresponding adjective or adverb. "America" is the feminine form of the first word of Americus Vesputius, the Latinized name of Italian explorer Amerigo Vespucci (1454–1512);[m] it was first used as a place name by the German cartographers Martin Waldseemüller and Matthias Ringmann in 1507.[n] Vespucci first proposed that the West Indies discovered by Christopher Columbus in 1492 were part of a previously unknown landmass and not among the Indies at the eastern limit of Asia. In English, the term "America" usually does not refer to topics unrelated to the United States, despite the usage of "the Americas" to describe the totality of the continents of North and South America.
History The first inhabitants of North America migrated from Siberia approximately 15,000 years ago, either across the Bering land bridge or along the now-submerged Ice Age coastline. Small isolated groups of hunter-gatherers are said to have migrated alongside herds of large herbivores far into Alaska, with ice-free corridors developing along the Pacific coast and valleys of North America in c. 16,500 – c. 13,500 BCE (c. 18,500 – c. 15,500 BP). The Clovis culture, which appeared around 11,000 BCE, is believed to be the first widespread culture in the Americas. Over time, Indigenous North American cultures grew increasingly sophisticated, and some, such as the Mississippian culture, developed agriculture, architecture, and complex societies. In the post-archaic period, the Mississippian cultures were located in the midwestern, eastern, and southern regions, and the Algonquian in the Great Lakes region and along the Eastern Seaboard, while the Hohokam culture and Ancestral Puebloans inhabited the Southwest. Native population estimates of what is now the United States before the arrival of European colonizers range from around 500,000 to nearly 10 million. 
Christopher Columbus began exploring the Caribbean for Spain in 1492, leading to Spanish-speaking settlements and missions from what are now Puerto Rico and Florida to New Mexico and California. The first Spanish colony in the present-day continental United States was Spanish Florida, chartered in 1513. After several settlements failed there due to starvation and disease, Spain's first permanent town, Saint Augustine, was founded in 1565. France established its own settlements in French Florida in 1562, but they were either abandoned (Charlesfort, 1563) or destroyed by Spanish raids (Fort Caroline, 1565). Permanent French settlements were founded much later along the Great Lakes (Fort Detroit, 1701), the Mississippi River (Saint Louis, 1764) and especially the Gulf of Mexico (New Orleans, 1718). Early European colonies also included the thriving Dutch colony of New Netherland (settled 1626, present-day New York) and the small Swedish colony of New Sweden (settled 1638 in what became Delaware). British colonization of the East Coast began with the Virginia Colony (1607) and the Plymouth Colony (Massachusetts, 1620). The Mayflower Compact in Massachusetts and the Fundamental Orders of Connecticut established precedents for local representative self-governance and constitutionalism that would develop throughout the American colonies. While European settlers in what is now the United States experienced conflicts with Native Americans, they also engaged in trade, exchanging European tools for food and animal pelts.[o] Relations ranged from close cooperation to warfare and massacres. The colonial authorities often pursued policies that forced Native Americans to adopt European lifestyles, including conversion to Christianity. Along the eastern seaboard, settlers trafficked Africans through the Atlantic slave trade, largely to provide manual labor on plantations. The original Thirteen Colonies[p] that would later found the United States were administered as possessions of the British Empire by Crown-appointed governors, though local governments held elections open to most white male property owners. The colonial population grew rapidly from Maine to Georgia, eclipsing Native American populations; by the 1770s, the natural increase of the population was such that only a small minority of Americans had been born overseas. The colonies' distance from Britain facilitated the entrenchment of self-governance, and the First Great Awakening, a series of Christian revivals, fueled colonial interest in guaranteed religious liberty. Following its victory in the French and Indian War, Britain began to assert greater control over local affairs in the Thirteen Colonies, resulting in growing political resistance. One of the primary grievances of the colonists was the denial of their rights as Englishmen, particularly the right to representation in the British government that taxed them. To demonstrate their dissatisfaction and resolve, the First Continental Congress met in 1774 and passed the Continental Association, a colonial boycott of British goods that proved effective, enforced by local "committees of safety". The subsequent British attempt to disarm the colonists resulted in the 1775 Battles of Lexington and Concord, igniting the American Revolutionary War. At the Second Continental Congress, the colonies appointed George Washington commander-in-chief of the Continental Army, and created a committee that named Thomas Jefferson to draft the Declaration of Independence. 
Two days after the Second Continental Congress passed the Lee Resolution to create an independent, sovereign nation, the Declaration was adopted on July 4, 1776. The political values of the American Revolution evolved from an armed rebellion demanding reform within an empire to a revolution that created a new social and governing system founded on the defense of liberty and the protection of inalienable natural rights; sovereignty of the people; republicanism over monarchy, aristocracy, and other hereditary political power; civic virtue; and an intolerance of political corruption. The Founding Fathers of the United States, who included Washington, Jefferson, John Adams, Benjamin Franklin, Alexander Hamilton, John Jay, James Madison, Thomas Paine, and many others, were inspired by Classical, Renaissance, and Enlightenment philosophies and ideas. Though in practical effect since their drafting in 1777, the Articles of Confederation were ratified in 1781 and formally established a decentralized government that operated until 1789. After the British surrender at the siege of Yorktown in 1781, American sovereignty was internationally recognized by the Treaty of Paris (1783), through which the U.S. gained territory stretching west to the Mississippi River, north to present-day Canada, and south to Spanish Florida. The Northwest Ordinance (1787) established the precedent by which the country's territory would expand with the admission of new states, rather than the expansion of existing states. The U.S. Constitution was drafted at the 1787 Constitutional Convention to overcome the limitations of the Articles. It went into effect in 1789, creating a federal republic governed by three separate branches that together formed a system of checks and balances. George Washington was elected the country's first president under the Constitution, and the Bill of Rights was adopted in 1791 to allay skeptics' concerns about the power of the more centralized government. Washington's resignation as commander-in-chief after the Revolutionary War and his later refusal to run for a third term as the country's first president established a precedent for the supremacy of civil authority in the United States and the peaceful transfer of power. In the late 18th century, American settlers began to expand westward in larger numbers, many with a sense of manifest destiny. The Louisiana Purchase of 1803 from France nearly doubled the territory of the United States. Lingering issues with Britain remained, leading to the War of 1812, which was fought to a draw. Spain ceded Florida and its Gulf Coast territory in 1819. The Missouri Compromise of 1820, which admitted Missouri as a slave state and Maine as a free state, attempted to balance the desire of northern states to prevent the expansion of slavery into new territories with that of southern states to extend it there. Primarily, the compromise prohibited slavery in all other lands of the Louisiana Purchase north of the 36°30′ parallel. As Americans expanded further into territory inhabited by Native Americans, the federal government implemented policies of Indian removal or assimilation. The most significant such legislation was the Indian Removal Act of 1830, a key policy of President Andrew Jackson. It resulted in the Trail of Tears (1830–1850), in which an estimated 60,000 Native Americans living east of the Mississippi River were forcibly removed and displaced to lands far to the west, causing 13,200 to 16,700 deaths along the forced march. 
Settler expansion as well as this influx of Indigenous peoples from the East resulted in the American Indian Wars west of the Mississippi. During the colonial period, slavery had become legal in all of the Thirteen Colonies, and by 1770 it provided the main labor force in the large-scale, agriculture-dependent economies of the Southern Colonies from Maryland to Georgia. The practice began to be significantly questioned during the American Revolution, and states in the North gradually enacted laws to prohibit slavery within their boundaries; an active abolitionist movement reemerged in the 1830s. At the same time, support for slavery had strengthened in Southern states, with widespread use of inventions such as the cotton gin (1793) having made slavery immensely profitable for Southern elites. The United States annexed the Republic of Texas in 1845, and the 1846 Oregon Treaty led to U.S. control of the present-day American Northwest. A dispute with Mexico over Texas led to the Mexican–American War (1846–1848). After the victory of the U.S., Mexico recognized U.S. sovereignty over Texas, New Mexico, and California in the 1848 Mexican Cession; the cession's lands also included the future states of Nevada, Colorado and Utah. The California gold rush of 1848–1849 spurred a huge migration of white settlers to the Pacific coast, leading to even more confrontations with Native populations. One of the most violent, the California genocide of thousands of Native inhabitants, lasted into the mid-1870s. Additional western territories and states were created. Throughout the 1850s, the sectional conflict regarding slavery was further inflamed by national legislation in the U.S. Congress and decisions of the Supreme Court. In Congress, the Fugitive Slave Act of 1850 mandated that slaves taking refuge in non-slave states be forcibly returned to their owners in the South, while the Kansas–Nebraska Act of 1854 effectively gutted the anti-slavery requirements of the Missouri Compromise. In its Dred Scott decision of 1857, the Supreme Court ruled against a slave brought into non-slave territory, simultaneously declaring the entire Missouri Compromise to be unconstitutional. These and other events exacerbated tensions between North and South that would culminate in the American Civil War (1861–1865). Beginning with South Carolina, 11 slave-state governments voted to secede from the United States in 1860 and 1861, joining to create the Confederate States of America. All other state governments remained loyal to the Union.[q] War broke out in April 1861 after the Confederacy bombarded Fort Sumter. Following the Emancipation Proclamation on January 1, 1863, many freed slaves joined the Union army. The war began to turn in the Union's favor following the 1863 Siege of Vicksburg and Battle of Gettysburg, and the Confederates surrendered in 1865 after the Union's victory in the Battle of Appomattox Court House. Efforts toward reconstruction in the secessionist South had begun as early as 1862, but it was only after President Lincoln's assassination that the three Reconstruction Amendments to the Constitution were ratified to protect civil rights. The amendments codified nationally the abolition of slavery and involuntary servitude except as punishment for crimes, promised equal protection under the law for all persons, and prohibited discrimination on the basis of race or previous enslavement. As a result, African Americans took an active political role in ex-Confederate states in the decade following the Civil War. 
The former Confederate states were readmitted to the Union, beginning with Tennessee in 1866 and ending with Georgia in 1870. National infrastructure, including transcontinental telegraph and railroads, spurred growth in the American frontier. This was accelerated by the Homestead Acts, through which nearly 10 percent of the total land area of the United States was given away free to some 1.6 million homesteaders. From 1865 through 1917, an unprecedented stream of immigrants arrived in the United States, including 24.4 million from Europe. Most came through the Port of New York, as New York City and other large cities on the East Coast became home to large Jewish, Irish, and Italian populations. Many Northern Europeans as well as significant numbers of Germans and other Central Europeans moved to the Midwest. At the same time, about one million French Canadians migrated from Quebec to New England. During the Great Migration, millions of African Americans left the rural South for urban areas in the North. Alaska was purchased from Russia in 1867. The Compromise of 1877 is generally considered the end of the Reconstruction era, as it resolved the electoral crisis following the 1876 presidential election and led President Rutherford B. Hayes to reduce the role of federal troops in the South. Immediately, the Redeemers began evicting the Carpetbaggers and quickly regained local control of Southern politics in the name of white supremacy. African Americans endured a period of heightened, overt racism following Reconstruction, a time often considered the nadir of American race relations. A series of Supreme Court decisions, including Plessy v. Ferguson, emptied the Fourteenth and Fifteenth Amendments of their force, allowing Jim Crow laws in the South to remain unchecked, sundown towns in the Midwest, and segregation in communities across the country, which would be reinforced in part by the policy of redlining later adopted by the federal Home Owners' Loan Corporation. An explosion of technological advancement, accompanied by the exploitation of cheap immigrant labor, led to rapid economic expansion during the Gilded Age of the late 19th century. It continued into the early 20th century, by which time the United States already outpaced the economies of Britain, France, and Germany combined. This fostered the amassing of power by a few prominent industrialists, largely by their formation of trusts and monopolies to prevent competition. Tycoons led the nation's expansion in the railroad, petroleum, and steel industries. The United States emerged as a pioneer of the automotive industry. These changes resulted in significant increases in economic inequality, slum conditions, and social unrest, creating the environment for labor unions and socialist movements to begin to flourish. This period eventually ended with the advent of the Progressive Era, which was characterized by significant economic and social reforms. Pro-American elements in Hawaii overthrew the Hawaiian monarchy; the islands were annexed in 1898. That same year, Puerto Rico, the Philippines, and Guam were ceded to the U.S. by Spain after the latter's defeat in the Spanish–American War. (The Philippines was granted full independence from the U.S. on July 4, 1946, following World War II. Puerto Rico and Guam have remained U.S. territories.) American Samoa was acquired by the United States in 1900 after the Second Samoan Civil War. The U.S. Virgin Islands were purchased from Denmark in 1917. 
The United States entered World War I alongside the Allies in 1917, helping to turn the tide against the Central Powers. In 1920, a constitutional amendment granted nationwide women's suffrage. During the 1920s and 1930s, the rise of radio for mass communication and early experiments with television transformed communications nationwide. The Wall Street Crash of 1929 triggered the Great Depression, to which President Franklin D. Roosevelt responded with the New Deal plan of "reform, recovery and relief", a series of unprecedented and sweeping recovery programs and employment relief projects combined with financial reforms and regulations. Initially neutral during World War II, the U.S. began supplying war materiel to the Allies in March 1941 and entered the war in December after Japan's attack on Pearl Harbor. Agreeing to a "Europe first" policy, the U.S. concentrated its wartime efforts on Japan's allies Italy and Germany until their final defeat in May 1945. The U.S. developed the first nuclear weapons and used them against the Japanese cities of Hiroshima and Nagasaki in August 1945, ending the war. The United States was one of the "Four Policemen" who met to plan the post-war world, alongside the United Kingdom, the Soviet Union, and China. The U.S. emerged relatively unscathed from the war, with even greater economic power and international political influence. The end of World War II in 1945 left the U.S. and the Soviet Union as superpowers, each with its own political, military, and economic sphere of influence. Geopolitical tensions between the two superpowers soon led to the Cold War. The U.S. implemented a policy of containment intended to limit the Soviet Union's sphere of influence; engaged in regime change against governments perceived to be aligned with the Soviets; and prevailed in the Space Race, which culminated with the first crewed Moon landing in 1969. Domestically, the U.S. experienced economic growth, urbanization, and population growth following World War II. The civil rights movement emerged, with Martin Luther King Jr. becoming a prominent leader in the early 1960s. The Great Society plan of President Lyndon B. Johnson's administration resulted in groundbreaking and broad-reaching laws, policies and a constitutional amendment to counteract some of the worst effects of lingering institutional racism. The counterculture movement in the U.S. brought significant social changes, including the liberalization of attitudes toward recreational drug use and sexuality. It also encouraged open defiance of the military draft (leading to the end of conscription in 1973) and wide opposition to U.S. intervention in Vietnam, with the U.S. withdrawing completely in 1975. A societal shift in the roles of women was significantly responsible for the large increase in female paid labor participation starting in the 1970s, and by 1985 the majority of American women aged 16 and older were employed. The fall of communism and the dissolution of the Soviet Union from 1989 to 1991 marked the end of the Cold War and left the United States as the world's sole superpower. This cemented the United States' global influence, reinforcing the concept of the "American Century" as the U.S. dominated international political, cultural, economic, and military affairs. The 1990s saw the longest recorded economic expansion in American history, a dramatic decline in U.S. crime rates, and advances in technology. 
Throughout this decade, technological innovations such as the World Wide Web, the evolution of the Pentium microprocessor in accordance with Moore's law, rechargeable lithium-ion batteries, the first gene therapy trial, and cloning either emerged in the U.S. or were improved upon there. The Human Genome Project was formally launched in 1990, while Nasdaq became the first stock market in the United States to trade online in 1998. In the Gulf War of 1991, an American-led international coalition of states expelled an Iraqi invasion force that had occupied neighboring Kuwait. The September 11 attacks on the United States in 2001 by the pan-Islamist militant organization al-Qaeda led to the war on terror and subsequent military interventions in Afghanistan and in Iraq. The U.S. housing bubble culminated in 2007 with the Great Recession, the largest economic contraction since the Great Depression. In the 2010s and early 2020s, the United States has experienced increased political polarization and democratic backsliding. The country's polarization was violently reflected in the January 2021 Capitol attack, when a mob of insurrectionists entered the U.S. Capitol and sought to prevent the peaceful transfer of power in an attempted self-coup d'état.
Geography The United States is the world's third-largest country by total area behind Russia and Canada.[c] The 48 contiguous states and the District of Columbia have a combined area of 3,119,885 square miles (8,080,470 km2). In 2021, the United States had 8% of the Earth's permanent meadows and pastures and 10% of its cropland. Starting in the east, the coastal plain of the Atlantic seaboard gives way to inland forests and rolling hills in the Piedmont plateau region. The Appalachian Mountains and the Adirondack Massif separate the East Coast from the Great Lakes and the grasslands of the Midwest. The Mississippi River System, the world's fourth-longest river system, runs predominantly north–south through the center of the country. The flat and fertile prairie of the Great Plains stretches to the west, interrupted by a highland region in the southeast. The Rocky Mountains, west of the Great Plains, extend north to south across the country, peaking at over 14,000 feet (4,300 m) in Colorado. The supervolcano underlying Yellowstone National Park in the Rocky Mountains, the Yellowstone Caldera, is the continent's largest volcanic feature. Farther west are the rocky Great Basin and the Chihuahuan, Sonoran, and Mojave deserts. In the northwest corner of Arizona, carved by the Colorado River, is the Grand Canyon, a steep-sided canyon and popular tourist destination known for its overwhelming visual size and intricate, colorful landscape. The Cascade and Sierra Nevada mountain ranges run close to the Pacific coast. The lowest and highest points in the contiguous United States are in the State of California, about 84 miles (135 km) apart. At an elevation of 20,310 feet (6,190.5 m), Alaska's Denali (also called Mount McKinley) is the highest peak in the country and on the continent. Active volcanoes in the U.S. are common throughout Alaska's Alexander and Aleutian Islands. Located entirely outside North America, the archipelago of Hawaii consists of volcanic islands, physiographically and ethnologically part of the Polynesian subregion of Oceania. In addition to its total land area, the United States has one of the world's largest marine exclusive economic zones spanning approximately 4.5 million square miles (11.7 million km2) of ocean. 
With its large size and geographic variety, the United States includes most climate types. East of the 100th meridian, the climate ranges from humid continental in the north to humid subtropical in the south. The western Great Plains are semi-arid. Many mountainous areas of the American West have an alpine climate. The climate is arid in the Southwest, Mediterranean in coastal California, and oceanic in coastal Oregon, Washington, and southern Alaska. Most of Alaska is subarctic or polar. Hawaii, the southern tip of Florida and U.S. territories in the Caribbean and Pacific are tropical. The United States receives more high-impact extreme weather incidents than any other country. States bordering the Gulf of Mexico are prone to hurricanes, and most of the world's tornadoes occur in the country, mainly in Tornado Alley. Due to climate change, extreme weather has become more frequent in the U.S. in the 21st century, with three times the number of reported heat waves compared to the 1960s. Since the 1990s, droughts in the American Southwest have become more persistent and more severe. The regions considered most attractive to the population are also the most vulnerable. The U.S. is one of 17 megadiverse countries containing large numbers of endemic species: about 17,000 species of vascular plants occur in the contiguous United States and Alaska, and over 1,800 species of flowering plants are found in Hawaii, few of which occur on the mainland. The United States is home to 428 mammal species, 784 birds, 311 reptiles, 295 amphibians, and around 91,000 insect species. There are 63 national parks and hundreds of other federally managed monuments, forests, and wilderness areas, administered by the National Park Service and other agencies. About 28% of the country's land is publicly owned and federally managed, primarily in the Western States. Most of this land is protected, though some is leased for commercial use, and less than one percent is used for military purposes. Environmental issues in the United States include debates on non-renewable resources and nuclear energy, air and water pollution, biodiversity, logging and deforestation, and climate change. The U.S. Environmental Protection Agency (EPA) is the federal agency charged with addressing most environment-related issues. The idea of wilderness has shaped the management of public lands since the passage of the Wilderness Act in 1964. The Endangered Species Act of 1973 provides a way to protect threatened and endangered species and their habitats. The United States Fish and Wildlife Service implements and enforces the Act. In 2024, the U.S. ranked 35th among 180 countries in the Environmental Performance Index. Government and politics The United States is a federal republic of 50 states and a federal capital district, Washington, D.C. The U.S. asserts sovereignty over five unincorporated territories and several uninhabited island possessions. It is the world's oldest surviving federation, and its presidential system of federal government has been adopted, in whole or in part, by many newly independent states worldwide following their decolonization. The Constitution of the United States serves as the country's supreme legal document. Most scholars describe the United States as a liberal democracy.[r] Composed of three branches, all headquartered in Washington, D.C., the federal government is the national government of the United States. The U.S. 
Constitution establishes a separation of powers intended to provide a system of checks and balances to prevent any of the three branches from becoming supreme. The three-branch system is known as the presidential system, in contrast to the parliamentary system, where the executive is part of the legislative body. Many countries around the world adopted this aspect of the 1789 Constitution of the United States, especially in the postcolonial Americas. In the U.S. federal system, sovereign powers are shared among three levels of government specified in the Constitution: the federal government, the states, and Indian tribes. The U.S. also asserts sovereignty over five permanently inhabited territories: American Samoa, Guam, the Northern Mariana Islands, Puerto Rico, and the U.S. Virgin Islands. Residents of the 50 states are governed by their elected state government, under state constitutions compatible with the national constitution, and by elected local governments that are administrative divisions of a state. States are subdivided into counties or county equivalents, and (except for Hawaii) further divided into municipalities, each administered by elected representatives. The District of Columbia is a federal district containing the U.S. capital, Washington, D.C. The federal district is an administrative division of the federal government. Indian country is made up of 574 federally recognized tribes and 326 Indian reservations. These tribes hold a government-to-government relationship with the U.S. federal government in Washington and are legally defined as domestic dependent nations with inherent tribal sovereignty rights. In addition to the five major territories, the U.S. also asserts sovereignty over the United States Minor Outlying Islands in the Pacific Ocean and the Caribbean. The seven undisputed islands without permanent populations are Baker Island, Howland Island, Jarvis Island, Johnston Atoll, Kingman Reef, Midway Atoll, and Palmyra Atoll. U.S. sovereignty over the unpopulated Bajo Nuevo Bank, Navassa Island, Serranilla Bank, and Wake Island is disputed. The Constitution is silent on political parties. However, they developed independently in the late 18th century with the Federalist and Anti-Federalist parties. Since then, the United States has operated as a de facto two-party system, though the parties have changed over time. Since the mid-19th century, the two main national parties have been the Democratic Party and the Republican Party. The former is perceived as relatively liberal in its political platform, while the latter is perceived as relatively conservative. The United States has an established structure of foreign relations, with the world's second-largest diplomatic corps as of 2024. It is a permanent member of the United Nations Security Council and home to the United Nations headquarters. The United States is a member of the G7, G20, and OECD intergovernmental organizations. Almost all countries have embassies and many have consulates (official representatives) in the country. Likewise, nearly all countries host formal diplomatic missions with the United States, except Iran, North Korea, and Bhutan. Though Taiwan does not have formal diplomatic relations with the U.S., it maintains close unofficial relations. The United States regularly supplies Taiwan with military equipment to deter potential Chinese aggression. 
American geopolitical attention also turned to the Indo-Pacific when the United States joined the Quadrilateral Security Dialogue with Australia, India, and Japan. The United States has a "Special Relationship" with the United Kingdom and strong ties with Canada, Australia, New Zealand, the Philippines, Japan, South Korea, Israel, and several European Union countries such as France, Italy, Germany, Spain, and Poland. The U.S. works closely with its NATO allies on military and national security issues, and with countries in the Americas through the Organization of American States and the United States–Mexico–Canada Agreement (USMCA). The U.S. exercises full international defense authority and responsibility for Micronesia, the Marshall Islands, and Palau through the Compact of Free Association. It has increasingly conducted strategic cooperation with India, while its ties with China have steadily deteriorated. Beginning in 2014, the U.S. became a key ally of Ukraine. After Donald Trump was elected U.S. president in 2024, he sought to negotiate an end to the Russo-Ukrainian War. He paused all military aid to Ukraine in March 2025, although the aid later resumed. Trump also ended U.S. intelligence sharing with the country, but this too was eventually restored. The president is the commander-in-chief of the United States Armed Forces and appoints its leaders, the secretary of defense and the Joint Chiefs of Staff. The Department of Defense, headquartered at the Pentagon near Washington, D.C., administers five of the six service branches, which are made up of the U.S. Army, Marine Corps, Navy, Air Force, and Space Force. The Coast Guard is administered by the Department of Homeland Security in peacetime and can be transferred to the Department of the Navy in wartime. The total strength of the military is about 1.3 million active-duty personnel, with an additional 400,000 in reserve. The United States spent $997 billion on its military in 2024, which is by far the largest amount of any country, making up 37% of global military spending and accounting for 3.4% of the country's GDP. The U.S. possesses 42% of the world's nuclear weapons—the second-largest stockpile after that of Russia. The U.S. military is widely regarded as the most powerful and advanced in the world. The United States has the third-largest combined armed forces in the world, behind the Chinese People's Liberation Army and Indian Armed Forces. The U.S. military operates about 800 bases and facilities abroad, and maintains deployments of more than 100 active-duty personnel in 25 foreign countries. The United States has engaged in over 400 military interventions since its founding in 1776, with over half of these occurring between 1950 and 2019 and 25% occurring in the post-Cold War era. State defense forces (SDFs) are military units that operate under the sole authority of a state government. SDFs are authorized by state and federal law but are under the command of the state's governor. By contrast, the 54 U.S. National Guard organizations[t] fall under the dual control of state or territorial governments and the federal government; their units can also become federalized entities, but SDFs cannot be federalized. The National Guard personnel of a state or territory can be federalized by the president under the National Defense Act Amendments of 1933; this legislation created the Guard and provides for the integration of Army National Guard and Air National Guard units and personnel into the U.S. Army and (since 1947) the U.S. Air Force. 
The total number of National Guard members is about 430,000, while the estimated combined strength of SDFs is less than 10,000. There are about 18,000 police agencies, from the local to the national level, in the United States. Law in the United States is mainly enforced by local police departments and sheriff's departments in their municipal or county jurisdictions. The state police departments have authority in their respective states, and federal agencies such as the Federal Bureau of Investigation (FBI) and the U.S. Marshals Service have national jurisdiction and specialized duties, such as protecting civil rights and national security, enforcing federal laws and the rulings of U.S. federal courts, and investigating interstate criminal activity. State courts conduct almost all civil and criminal trials, while federal courts adjudicate the much smaller number of civil and criminal cases that relate to federal law. There is no unified "criminal justice system" in the United States. The American prison system is largely heterogeneous, with thousands of relatively independent systems operating across federal, state, local, and tribal levels. In 2025, "these systems hold nearly 2 million people in 1,566 state prisons, 98 federal prisons, 3,116 local jails, 1,277 juvenile correctional facilities, 133 immigration detention facilities, and 80 Indian country jails, as well as in military prisons, civil commitment centers, state psychiatric hospitals, and prisons in the U.S. territories." Despite disparate systems of confinement, four main institutions dominate: federal prisons, state prisons, local jails, and juvenile correctional facilities. Federal prisons are run by the Federal Bureau of Prisons and hold pretrial detainees as well as people who have been convicted of federal crimes. State prisons, run by the department of corrections of each state, hold people sentenced and serving prison time (usually longer than one year) for felony offenses. Local jails are county or municipal facilities that incarcerate defendants prior to trial; they also hold those serving short sentences (typically under a year). Juvenile correctional facilities are operated by local or state governments and serve as longer-term placements for any minor adjudicated as delinquent and ordered by a judge to be confined. In January 2023, the United States had the sixth-highest per capita incarceration rate in the world—531 people per 100,000 inhabitants—and the largest prison and jail population in the world, with more than 1.9 million people incarcerated. An analysis of the World Health Organization Mortality Database from 2010 showed U.S. homicide rates "were 7 times higher than in other high-income countries, driven by a gun homicide rate that was 25 times higher". Economy The U.S. has a highly developed mixed economy that has been the world's largest nominally since about 1890. Its 2024 gross domestic product (GDP)[e] of more than $29 trillion constituted over 25% of nominal global economic output, or 15% at purchasing power parity (PPP). From 1983 to 2008, U.S. real compounded annual GDP growth was 3.3%, compared to a 2.3% weighted average for the rest of the G7. The country ranks first in the world by nominal GDP, second when adjusted for purchasing power parity, and ninth by PPP-adjusted GDP per capita. In February 2024, the total U.S. federal government debt was $34.4 trillion. Of the world's 500 largest companies by revenue, 138 were headquartered in the U.S. in 2025, the highest number of any country. The U.S. 
dollar is the currency most used in international transactions and the world's foremost reserve currency, backed by the country's dominant economy, its military, the petrodollar system, its large U.S. Treasuries market, and the linked eurodollar market. Several countries use it as their official currency, and in others it is the de facto currency. The U.S. has free trade agreements with several countries and is a party to the USMCA. Although the United States has reached a post-industrial level of economic development and is often described as having a service economy, it remains a major industrial power; in 2024, the U.S. manufacturing sector was the world's second-largest by value output after China's. New York City is the world's principal financial center, and its metropolitan area is the world's largest metropolitan economy. The New York Stock Exchange and Nasdaq, both located in New York City, are the world's two largest stock exchanges by market capitalization and trade volume. The United States is at the forefront of technological advancement and innovation in many economic fields, especially in artificial intelligence; electronics and computers; pharmaceuticals; and medical, aerospace and military equipment. The country's economy is fueled by abundant natural resources, a well-developed infrastructure, and high productivity. The largest trading partners of the United States are the European Union, Mexico, Canada, China, Japan, South Korea, the United Kingdom, Vietnam, India, and Taiwan. The United States is the world's largest importer and second-largest exporter.[u] It is by far the world's largest exporter of services. Americans have the highest average household and employee income among OECD member states, and the fourth-highest median household income in 2023, up from sixth-highest in 2013. With personal consumption expenditures of over $18.5 trillion in 2023, the U.S. has a heavily consumer-driven economy and is the world's largest consumer market. The U.S. ranked first in the number of dollar billionaires and millionaires in 2023, with 735 billionaires and nearly 22 million millionaires. Wealth in the United States is highly concentrated; in 2011, the richest 10% of the adult population owned 72% of the country's household wealth, while the bottom 50% owned just 2%. U.S. wealth inequality has increased substantially since the late 1980s, and income inequality in the U.S. reached a record high in 2019. In 2024, the country had some of the highest wealth and income inequality levels among OECD countries. Since the 1970s, there has been a decoupling of U.S. wage gains from worker productivity. In 2016, the top fifth of earners took home more than half of all income, giving the U.S. one of the widest income distributions among OECD countries. There were about 771,480 homeless persons in the U.S. in 2024. In 2022, 6.4 million children experienced food insecurity. Feeding America estimates that around one in five children, or approximately 13 million, experience hunger in the U.S. and do not know where or when they will get their next meal. Also in 2022, about 37.9 million people, or 11.5% of the U.S. population, were living in poverty. The United States has a smaller welfare state and redistributes less income through government action than most other high-income countries. It is the only advanced economy that does not guarantee its workers paid vacation nationally and is one of a few countries in the world without federal paid family leave as a legal right. 
The United States has a higher percentage of low-income workers than almost any other developed country, largely because of a weak collective bargaining system and lack of government support for at-risk workers. The United States has been a leader in technological innovation since the late 19th century and scientific research since the mid-20th century. Methods for producing interchangeable parts and the establishment of a machine tool industry enabled the large-scale manufacturing of U.S. consumer products in the late 19th century. By the early 20th century, factory electrification, the introduction of the assembly line, and other labor-saving techniques created the system of mass production. In the 21st century, the United States continues to be one of the world's foremost scientific powers, though China has emerged as a major competitor in many fields. The U.S. has the highest research and development expenditures of any country and ranks ninth in R&D spending as a percentage of GDP. In 2022, the United States had the second-highest number of published scientific papers, after China. In 2021, the U.S. ranked second (also after China) by the number of patent applications, and third by trademark and industrial design applications (after China and Germany), according to World Intellectual Property Indicators. In 2025, the United States ranked third (after Switzerland and Sweden) in the Global Innovation Index. The United States is considered to be a world leader in the development of artificial intelligence technology. In 2023, the United States was ranked the second most technologically advanced country in the world (after South Korea) by Global Finance magazine. The United States has maintained a space program since the late 1950s, beginning with the establishment of the National Aeronautics and Space Administration (NASA) in 1958. NASA's Apollo program (1961–1972) achieved the first crewed Moon landing with the 1969 Apollo 11 mission; it remains one of the agency's most significant milestones. Other major endeavors by NASA include the Space Shuttle program (1981–2011), the Voyager program (1972–present), the Hubble and James Webb space telescopes (launched in 1990 and 2021, respectively), and the multi-mission Mars Exploration Program (Spirit and Opportunity, Curiosity, and Perseverance). NASA is one of five agencies collaborating on the International Space Station (ISS); U.S. contributions to the ISS include several modules, including Destiny (2001), Harmony (2007), and Tranquility (2010), as well as ongoing logistical and operational support. The United States' private sector dominates the global commercial spaceflight industry. Prominent American spaceflight contractors include Blue Origin, Boeing, Lockheed Martin, Northrop Grumman, and SpaceX. NASA programs such as the Commercial Crew Program, Commercial Resupply Services, Commercial Lunar Payload Services, and NextSTEP have facilitated growing private-sector involvement in American spaceflight. In 2023, the United States received approximately 84% of its energy from fossil fuels, and its largest source of energy was petroleum (38%), followed by natural gas (36%), renewable sources (9%), coal (9%), and nuclear power (9%). In 2022, the United States constituted about 4% of the world's population, but consumed around 16% of the world's energy. The U.S. ranks as the second-highest emitter of greenhouse gases behind China. The U.S. is the world's largest producer of nuclear power, generating around 30% of the world's nuclear electricity. 
It also has the highest number of nuclear power reactors of any country. As of 2024, the U.S. plans to triple its nuclear power capacity by 2050. The United States' road network, 4 million miles (6.4 million kilometers) long and owned almost entirely by state and local governments, is the longest in the world. The extensive Interstate Highway System that connects all major U.S. cities is funded mostly by the federal government but maintained by state departments of transportation. The system is further extended by state highways and some private toll roads. The U.S. was among the top ten countries in vehicle ownership per capita (850 vehicles per 1,000 people) in 2022. A 2022 study found that 76% of U.S. commuters drive alone and 14% ride a bicycle, including bike owners and users of bike-sharing networks. About 11% use some form of public transportation. Public transportation in the United States is well developed in the largest urban areas, notably New York City, Washington, D.C., Boston, Philadelphia, Chicago, and San Francisco; otherwise, coverage is generally less extensive than in most other developed countries. The U.S. also has many relatively car-dependent localities. Long-distance intercity travel is provided primarily by airlines, but travel by rail is more common along the Northeast Corridor, the only high-speed rail in the U.S. that meets international standards. Amtrak, the country's government-sponsored national passenger rail company, has a relatively sparse network compared to that of Western European countries. Service is concentrated in the Northeast, California, the Midwest, the Pacific Northwest, and Virginia/Southeast. The United States has an extensive air transportation network. U.S. civilian airlines are all privately owned. The three largest airlines in the world, by total number of passengers carried, are U.S.-based; American Airlines became the global leader after its 2013 merger with US Airways. Of the 50 busiest airports in the world, 16 are in the United States, including five of the top 10. The world's busiest airport by passenger volume is Hartsfield–Jackson Atlanta International in Atlanta, Georgia. In 2022, most of the 19,969 U.S. airports were owned and operated by local government authorities; some airports are privately owned. Some 5,193 are designated as "public use", including for general aviation. The Transportation Security Administration (TSA) has provided security at most major airports since 2001. The country's rail transport network, the longest in the world at 182,412.3 mi (293,564.2 km), handles mostly freight (in contrast to more passenger-centered rail in Europe). Because they are often privately owned operations, U.S. railroads lag behind those of the rest of the world in electrification. The country's inland waterways are the world's fifth-longest, totaling 25,482 mi (41,009 km). They are used extensively for freight, recreation, and a small amount of passenger traffic. Of the world's 50 busiest container ports, four are located in the United States, with the busiest in the country being the Port of Los Angeles. Demographics The U.S. Census Bureau reported 331,449,281 residents on April 1, 2020,[v] making the United States the third-most-populous country in the world, after India and China. The Census Bureau's official 2025 population estimate was 341,784,857, an increase of 3.1% since the 2020 census. According to the Bureau's U.S. Population Clock, on July 1, 2024, the U.S. 
population had a net gain of one person every 16 seconds, or about 5,400 people per day. In 2023, 51% of Americans age 15 and over were married, 6% were widowed, 10% were divorced, and 34% had never been married. In 2023, the total fertility rate for the U.S. stood at 1.6 children per woman, and, at 23%, the U.S. had the world's highest rate of children living in single-parent households in 2019. Most Americans live in the suburbs of major metropolitan areas. The United States has a diverse population; 37 ancestry groups have more than one million members. White Americans with ancestry from Europe, the Middle East, or North Africa form the largest racial and ethnic group at 57.8% of the United States population. Hispanic and Latino Americans form the second-largest group and are 18.7% of the United States population. African Americans constitute the country's third-largest ancestry group and are 12.1% of the total U.S. population. Asian Americans are the country's fourth-largest group, composing 5.9% of the United States population. The country's 3.7 million Native Americans account for about 1%, and some 574 native tribes are recognized by the federal government. In 2024, the median age of the United States population was 39.1 years. While many languages and dialects are spoken in the United States, English is by far the most commonly spoken and written. English is the de facto official language of the United States, and in 2025 Executive Order 14224 declared English official. However, the U.S. has never had a de jure official language, as Congress has never passed a law designating English as official for all three federal branches. Some laws, such as U.S. naturalization requirements, nonetheless standardize English. Twenty-eight states and the United States Virgin Islands have laws that designate English as the sole official language; 19 states and the District of Columbia have no official language. Three states and four U.S. territories have recognized local or indigenous languages in addition to English: Hawaii (Hawaiian), Alaska (twenty Native languages),[w] South Dakota (Sioux), American Samoa (Samoan), Puerto Rico (Spanish), Guam (Chamorro), and the Northern Mariana Islands (Carolinian and Chamorro). In total, 169 Native American languages are spoken in the United States. In Puerto Rico, Spanish is more widely spoken than English. According to the American Community Survey (2020), some 245.4 million people in the U.S. age five and older spoke only English at home. About 41.2 million spoke Spanish at home, making it the second most commonly used language. Other languages spoken at home by one million people or more include Chinese (3.40 million), Tagalog (1.71 million), Vietnamese (1.52 million), Arabic (1.39 million), French (1.18 million), Korean (1.07 million), and Russian (1.04 million). The number of people speaking German at home fell from 1 million in 2010 to 857,000 in 2020. America's immigrant population is by far the world's largest in absolute terms. In 2022, there were 87.7 million immigrants and U.S.-born children of immigrants in the United States, accounting for nearly 27% of the overall U.S. population. In 2017, out of the U.S. foreign-born population, some 45% (20.7 million) were naturalized citizens, 27% (12.3 million) were lawful permanent residents, 6% (2.2 million) were temporary lawful residents, and 23% (10.5 million) were unauthorized immigrants. 
In 2019, the top countries of origin for immigrants were Mexico (24% of immigrants), India (6%), China (5%), the Philippines (4.5%), and El Salvador (3%). In fiscal year 2022, over one million immigrants (most of whom entered through family reunification) were granted legal residence. The undocumented immigrant population in the U.S. reached a record high of 14 million in 2023. The First Amendment guarantees the free exercise of religion in the country and forbids Congress from passing laws respecting its establishment. Religious practice is widespread, among the most diverse in the world, and profoundly vibrant. The country has the world's largest Christian population, which includes the fourth-largest population of Catholics. Other notable faiths include Judaism, Buddhism, Hinduism, Islam, New Age, and Native American religions. Religious practice varies significantly by region. "Ceremonial deism" is common in American culture. The overwhelming majority of Americans believe in a higher power or spiritual force, engage in spiritual practices such as prayer, and consider themselves religious or spiritual. In the Southern United States' "Bible Belt", evangelical Protestantism plays a significant role culturally; New England and the Western United States tend to be more secular. Mormonism, a Restorationist movement founded in the U.S. in 1830, is the predominant religion in Utah and a major religion in Idaho. About 82% of Americans live in metropolitan areas, particularly in suburbs; about half of those reside in cities with populations over 50,000. In 2022, 333 incorporated municipalities had populations over 100,000, nine cities had more than one million residents, and four cities—New York City, Los Angeles, Chicago, and Houston—had populations exceeding two million. Many U.S. metropolitan populations are growing rapidly, particularly in the South and West. According to the Centers for Disease Control and Prevention (CDC), average U.S. life expectancy at birth reached 79.0 years in 2024, its highest recorded level. This was an increase of 0.6 years over 2023. The CDC attributed the improvement to a significant fall in the number of fatal drug overdoses in the country, noting that "heart disease continues to be the leading cause of death in the United States, followed by cancer and unintentional injuries." In 2024, life expectancy at birth for American men rose to 76.5 years (+0.7 years compared to 2023), while life expectancy for women was 81.4 years (+0.3 years). Starting in 1998, life expectancy in the U.S. fell behind that of other wealthy industrialized countries, and Americans' "health disadvantage" gap has been widening ever since. The Commonwealth Fund reported in 2020 that the U.S. had the highest suicide rate among high-income countries. Approximately one-third of the U.S. adult population is obese and another third is overweight. The U.S. healthcare system far outspends that of any other country, measured both in per capita spending and as a percentage of GDP, but attains worse healthcare outcomes when compared to peer countries for reasons that are debated. The United States is the only developed country without a system of universal healthcare, and a significant proportion of the population does not carry health insurance. Government-funded healthcare coverage for the poor (Medicaid) and for those age 65 and older (Medicare) is available to Americans who meet the programs' income or age qualifications. 
In 2010, President Barack Obama signed the Patient Protection and Affordable Care Act into law.[x] Abortion in the United States is not federally protected, and is illegal or restricted in 17 states. American primary and secondary education, known in the U.S. as K–12 ("kindergarten through 12th grade"), is decentralized. School systems are operated by state, territorial, and sometimes municipal governments and regulated by the U.S. Department of Education. In general, children are required to attend school or an approved homeschool from the age of five or six (kindergarten or first grade) until they are 18 years old. This often brings students through the 12th grade, the final year of a U.S. high school, but some states and territories allow them to leave school earlier, at age 16 or 17. The U.S. spends more on education per student than any other country, an average of $18,614 per year per public elementary and secondary school student in 2020–2021. Among Americans age 25 and older, 92.2% graduated from high school, 62.7% attended some college, 37.7% earned a bachelor's degree, and 14.2% earned a graduate degree. The U.S. literacy rate is near-universal. The U.S. has produced the most Nobel Prize winners of any country, with 411 laureates (who have won 413 awards). U.S. tertiary or higher education has earned a global reputation. Many of the world's top universities, as listed by various ranking organizations, are in the United States, including 19 of the top 25. American higher education is dominated by state university systems, although the country's many private universities and colleges enroll about 20% of all American students. Local community colleges generally offer open admissions, lower tuition, and coursework leading to a two-year associate degree or a non-degree certificate. As for public expenditures on higher education, the U.S. spends more per student than the OECD average, and more than all other nations in combined public and private spending. Colleges and universities directly funded by the federal government do not charge tuition and are limited to military personnel and government employees, including the U.S. service academies, the Naval Postgraduate School, and military staff colleges. Despite some student loan forgiveness programs, student loan debt increased by 102% between 2010 and 2020 and exceeded $1.7 trillion in 2022. Culture and society The United States is home to a wide variety of ethnic groups, traditions, and customs. The country has been described as having the values of individualism and personal autonomy, as well as a strong work ethic and competitiveness. Voluntary altruism towards others also plays a major role; according to a 2016 study by the Charities Aid Foundation, Americans donated 1.44% of total GDP to charity—the highest rate in the world by a large margin. Americans have traditionally been characterized by a unifying political belief in an "American Creed" emphasizing consent of the governed, liberty, equality under the law, democracy, social equality, property rights, and a preference for limited government. The U.S. has acquired significant hard and soft power through its diplomatic influence, economic power, military alliances, and cultural exports such as American movies, music, video games, sports, and food. The influence that the United States exerts on other countries through soft power is referred to as Americanization. 
Nearly all present Americans or their ancestors came from Europe, Africa, or Asia (the "Old World") within the past five centuries. Mainstream American culture is a Western culture largely derived from the traditions of European immigrants with influences from many other sources, such as traditions brought by slaves from Africa. More recent immigration from Asia and especially Latin America has added to a cultural mix that has been described as both a homogenizing melting pot and a heterogeneous salad bowl, with immigrants contributing to, and often assimilating into, mainstream American culture. Under the First Amendment to the Constitution, the United States is considered to have the strongest protections of free speech of any country. Flag desecration, hate speech, blasphemy, and lese majesty are all forms of protected expression. A 2016 Pew Research Center poll found that Americans were the most supportive of free expression of any polity measured. Additionally, they are the "most supportive of freedom of the press and the right to use the Internet without government censorship". The U.S. is a socially progressive country with permissive attitudes surrounding human sexuality. LGBTQ rights in the United States are among the most advanced by global standards. The American Dream, or the perception that Americans enjoy high levels of social mobility, plays a key role in attracting immigrants. Whether this perception is accurate has been a topic of debate. While mainstream culture holds that the United States is a classless society, scholars identify significant differences between the country's social classes, affecting socialization, language, and values. Americans tend to greatly value socioeconomic achievement, but being ordinary or average is promoted by some as a noble condition as well. The National Foundation on the Arts and the Humanities is an agency of the United States federal government that was established in 1965 to "develop and promote a broadly conceived national policy of support for the humanities and the arts in the United States, and for institutions which preserve the cultural heritage of the United States." It is composed of four sub-agencies: the National Endowment for the Arts, the National Endowment for the Humanities, the Institute of Museum and Library Services, and the Federal Council on the Arts and the Humanities. Colonial American authors were influenced by John Locke and other Enlightenment philosophers. The American Revolutionary Period (1765–1783) is notable for the political writings of Benjamin Franklin, Alexander Hamilton, Thomas Paine, and Thomas Jefferson. Shortly before and after the Revolutionary War, the newspaper rose to prominence, filling a demand for anti-British national literature. An early novel is William Hill Brown's The Power of Sympathy, published in 1789. Writer and critic John Neal in the early- to mid-19th century helped advance America toward a unique literature and culture by criticizing predecessors such as Washington Irving for imitating their British counterparts, and by influencing writers such as Edgar Allan Poe, who took American poetry and short fiction in new directions. Ralph Waldo Emerson and Margaret Fuller pioneered the influential Transcendentalism movement; Henry David Thoreau, author of Walden, was influenced by this movement. The conflict surrounding abolitionism inspired writers like Harriet Beecher Stowe and authors of slave narratives such as Frederick Douglass. Nathaniel Hawthorne's The Scarlet Letter (1850) explored the dark side of American history, as did Herman Melville's Moby-Dick (1851). 
Major American poets of the 19th-century American Renaissance include Walt Whitman, Herman Melville, and Emily Dickinson. Mark Twain was the first major American writer to be born in the West. Henry James achieved international recognition with novels like The Portrait of a Lady (1881). As literacy rates rose, periodicals published more stories centered on industrial workers, women, and the rural poor. Naturalism, regionalism, and realism were the major literary movements of the period. While modernism generally took on an international character, modernist authors working within the United States more often rooted their work in specific regions, peoples, and cultures. Following the Great Migration to northern cities, African-American and black West Indian authors of the Harlem Renaissance developed an independent tradition of literature that rebuked a history of inequality and celebrated black culture. An important cultural export during the Jazz Age, these writings were a key influence on Négritude, a philosophy emerging in the 1930s among francophone writers of the African diaspora. In the 1950s, an ideal of homogeneity led many authors to attempt to write the Great American Novel, while the Beat Generation rejected this conformity, using styles that elevated the impact of the spoken word over mechanics to describe drug use, sexuality, and the failings of society. Contemporary literature is more pluralistic than in previous eras, with the closest thing to a unifying feature being a trend toward self-conscious experiments with language. Twelve Americans have won the Nobel Prize in Literature. Media in the United States is broadly uncensored, with the First Amendment providing significant protections, as reiterated in New York Times Co. v. United States. The four major broadcasters in the U.S. are the National Broadcasting Company (NBC), Columbia Broadcasting System (CBS), American Broadcasting Company (ABC), and Fox Broadcasting Company (Fox). The four major broadcast television networks are all commercial entities. The U.S. cable television system offers hundreds of channels catering to a variety of niches. In 2021, about 83% of Americans over age 12 listened to broadcast radio, while about 40% listened to podcasts. In 2020, there were 15,460 licensed full-power radio stations in the U.S., according to the Federal Communications Commission (FCC). Much of the public radio broadcasting is supplied by National Public Radio (NPR), incorporated in February 1970 under the Public Broadcasting Act of 1967. U.S. newspapers with a global reach and reputation include The Wall Street Journal, The New York Times, The Washington Post, and USA Today. About 800 publications are produced in Spanish. With few exceptions, newspapers are privately owned, either by large chains such as Gannett or McClatchy, which own dozens or even hundreds of newspapers; by small chains that own a handful of papers; or, in an increasingly rare situation, by individuals or families. Major cities often have alternative newspapers to complement the mainstream daily papers, such as The Village Voice in New York City and LA Weekly in Los Angeles. The five most-visited websites in the world are Google, YouTube, Facebook, Instagram, and ChatGPT—all of them American-owned. Other popular platforms include X (formerly Twitter) and Amazon. In 2025, the U.S. was the world's second-largest video game market by revenue (after China). In 2015, the U.S. 
video game industry consisted of 2,457 companies that employed around 220,000 people and generated $30.4 billion in revenue. There are 444 game publishers, developers, and hardware companies in California alone. According to the Game Developers Conference (GDC), the U.S. is the top location for video game development, with 58% of the world's game developers based there in 2025. The United States is well known for its theater. Mainstream theater in the United States derives from the old European theatrical tradition and has been heavily influenced by the British theater. By the middle of the 19th century, America had created new, distinct dramatic forms in the Tom Shows, the showboat theater, and the minstrel show. The central hub of the American theater scene is the Theater District in Manhattan, with its divisions of Broadway, off-Broadway, and off-off-Broadway. Many movie and television celebrities have gotten their big break working in New York productions. Outside New York City, many cities have professional regional or resident theater companies that produce their own seasons. The biggest-budget theatrical productions are musicals. U.S. theater has an active community theater culture. The Tony Awards recognize excellence in live Broadway theater and are presented at an annual ceremony in Manhattan. The awards are given for Broadway productions and performances. One is also given for regional theater. Several discretionary non-competitive awards are given as well, including a Special Tony Award, the Tony Honors for Excellence in Theatre, and the Isabelle Stevenson Award. Folk art in colonial America grew out of artisanal craftsmanship in communities that allowed commonly trained people to individually express themselves. It was distinct from Europe's tradition of high art, which was less accessible and generally less relevant to early American settlers. Cultural movements in art and craftsmanship in colonial America generally lagged behind those of Western Europe. For example, the prevailing medieval style of woodworking and primitive sculpture became integral to early American folk art, despite the emergence of Renaissance styles in England in the late 16th and early 17th centuries. The new English styles would have arrived early enough to make a considerable impact on American folk art, but American styles and forms had already been firmly adopted. Not only did styles change slowly in early America, but there was a tendency for rural artisans there to continue their traditional forms longer than their urban counterparts did—and far longer than those in Western Europe. The Hudson River School was a mid-19th-century movement in the visual arts tradition of European naturalism. The 1913 Armory Show in New York City, an exhibition of European modernist art, shocked the public and transformed the U.S. art scene. American Realism and American Regionalism sought to reflect America and give it new ways of looking at itself. Georgia O'Keeffe, Marsden Hartley, and others experimented with new and individualistic styles, which would become known as American modernism. Major artistic movements such as the abstract expressionism of Jackson Pollock and Willem de Kooning and the pop art of Andy Warhol and Roy Lichtenstein developed largely in the United States. Major photographers include Alfred Stieglitz, Edward Steichen, Dorothea Lange, Edward Weston, James Van Der Zee, Ansel Adams, and Gordon Parks. 
The tide of modernism and then postmodernism has brought global fame to American architects, including Frank Lloyd Wright, Philip Johnson, and Frank Gehry. The Metropolitan Museum of Art in Manhattan is the largest art museum in the United States and the fourth-largest in the world. American folk music encompasses numerous music genres, variously known as traditional music, traditional folk music, contemporary folk music, or roots music. Many traditional songs have been sung within the same family or folk group for generations, and sometimes trace back to such origins as the British Isles, mainland Europe, or Africa. The rhythmic and lyrical styles of African-American music in particular have influenced American music. Banjos were brought to America through the slave trade. Minstrel shows incorporating the instrument into their acts led to its increased popularity and widespread production in the 19th century. The electric guitar, invented in the 1930s and mass-produced by the 1940s, had an enormous influence on popular music, in particular due to the development of rock and roll. The synthesizer, turntablism, and electronic music were also largely developed in the U.S. Elements from folk idioms such as the blues and old-time music were adopted and transformed into popular genres with global audiences. Jazz grew from blues and ragtime in the early 20th century, developing from the innovations and recordings of composers such as W.C. Handy and Jelly Roll Morton. Louis Armstrong and Duke Ellington increased its popularity early in the 20th century. Country music developed in the 1920s, bluegrass and rhythm and blues in the 1940s, and rock and roll in the 1950s. In the 1960s, Bob Dylan emerged from the folk revival to become one of the country's most celebrated songwriters. The musical forms of punk and hip hop both originated in the United States in the 1970s. The United States has the world's largest music market, with a total retail value of $15.9 billion in 2022. Most of the world's major record companies are based in the U.S.; they are represented by the Recording Industry Association of America (RIAA). Mid-20th-century American pop stars, such as Frank Sinatra and Elvis Presley, became global celebrities and best-selling music artists, as have artists of the late 20th century, such as Michael Jackson, Madonna, Whitney Houston, and Mariah Carey, and of the early 21st century, such as Eminem, Britney Spears, Lady Gaga, Katy Perry, Taylor Swift and Beyoncé. The United States has the world's largest apparel market by revenue. Apart from professional business attire, American fashion is eclectic and predominantly informal. Americans' diverse cultural roots are reflected in their clothing; however, sneakers, jeans, T-shirts, and baseball caps are emblematic of American styles. New York, with its Fashion Week, is considered to be one of the "Big Four" global fashion capitals, along with Paris, Milan, and London. A study demonstrated that general proximity to Manhattan's Garment District has been synonymous with American fashion since the district's inception in the early 20th century. A number of well-known designer labels, among them Tommy Hilfiger, Ralph Lauren, Tom Ford and Calvin Klein, are headquartered in Manhattan. Labels cater to niche markets, such as preteens. New York Fashion Week is one of the most influential fashion shows in the world, and is held twice each year in Manhattan; the annual Met Gala, also in Manhattan, has been called the fashion world's "biggest night". The U.S. 
film industry has a worldwide influence and following. Hollywood, a district in central Los Angeles, the nation's second-most populous city, is also metonymous for the American filmmaking industry. The major film studios of the United States are the primary source of the world's most commercially successful movies by ticket sales. Largely centered in the New York City region from its beginnings in the late 19th century through the first decades of the 20th century, the U.S. film industry has since been primarily based in and around Hollywood. Nonetheless, American film companies have been subject to the forces of globalization in the 21st century, and an increasing number of films are made elsewhere. The Academy Awards, popularly known as "the Oscars", have been held annually by the Academy of Motion Picture Arts and Sciences since 1929, and the Golden Globe Awards have been held annually since January 1944. The industry peaked in what is commonly referred to as the "Golden Age of Hollywood", from the early sound period until the early 1960s, with screen actors such as John Wayne and Marilyn Monroe becoming iconic figures. In the 1970s, "New Hollywood", or the "Hollywood Renaissance", was defined by grittier films influenced by French and Italian realist pictures of the post-war period. The 21st century has been marked by the rise of American streaming platforms, which came to rival traditional cinema. Early settlers were introduced by Native Americans to foods such as turkey, sweet potatoes, corn, squash, and maple syrup. Among the most enduring and pervasive examples are variations of the native dish called succotash. Early settlers and later immigrants combined these with foods they were familiar with, such as wheat flour, beef, and milk, to create a distinctive American cuisine. New World crops, especially pumpkin, corn, potatoes, and turkey as the main course, are part of a shared national menu on Thanksgiving, when many Americans prepare or purchase traditional dishes to celebrate the occasion. Characteristic American dishes such as apple pie, fried chicken, doughnuts, french fries, macaroni and cheese, ice cream, hamburgers, hot dogs, and American pizza derive from the recipes of various immigrant groups. Mexican dishes such as burritos and tacos preexisted the United States in areas later annexed from Mexico, and adaptations of Chinese cuisine as well as pasta dishes freely adapted from Italian sources are all widely consumed. American chefs have had a significant impact on society both domestically and internationally. In 1946, the Culinary Institute of America was founded by Katharine Angell and Frances Roth. This would become the United States' most prestigious culinary school, where many of the most talented American chefs would study prior to successful careers. The United States restaurant industry was projected at $899 billion in sales for 2020, and directly employed more than 15 million people, representing 10% of the nation's workforce. It is the country's second-largest private employer and the third-largest employer overall. The United States is home to over 220 Michelin star-rated restaurants, 70 of which are in New York City. Wine has been produced in what is now the United States since the 1500s, with the first widespread production beginning in what is now New Mexico in 1628. In the modern U.S., wine production is undertaken in all fifty states, with California producing 84 percent of all U.S. wine. 
With more than 1,100,000 acres (4,500 km²) under vine, the United States is the fourth-largest wine-producing country in the world, after Italy, Spain, and France. The classic American diner, a casual restaurant type originally intended for the working class, emerged during the 19th century from converted railroad dining cars made stationary. The diner soon evolved into purpose-built structures whose number expanded greatly in the 20th century. The American fast-food industry developed alongside the nation's car culture. American restaurants developed the drive-in format in the 1920s, which they began to replace with the drive-through format by the 1940s. American fast-food restaurant chains, such as McDonald's, Burger King, Chick-fil-A, Kentucky Fried Chicken, Dunkin' Donuts and many others, have numerous outlets around the world. The most popular spectator sports in the U.S. are American football, basketball, baseball, soccer, and ice hockey. Their premier leagues are, respectively, the National Football League, the National Basketball Association, Major League Baseball, Major League Soccer, and the National Hockey League. All these leagues enjoy wide-ranging domestic media coverage and, except for the MLS, all are considered the preeminent leagues in their respective sports in the world. While most major U.S. sports such as baseball and American football have evolved out of European practices, basketball, volleyball, skateboarding, and snowboarding are American inventions, many of which have become popular worldwide. Lacrosse and surfing arose from Native American and Native Hawaiian activities that predate European contact. The market for professional sports in the United States was approximately $69 billion in July 2013, roughly 50% larger than that of Europe, the Middle East, and Africa combined. American football is by several measures the most popular spectator sport in the United States. Although American football does not have a substantial following in other nations, the NFL does have the highest average attendance (67,254) of any professional sports league in the world. In 2024, the NFL generated over $23 billion, making it the most valuable professional sports league in the United States and the world. Baseball has been regarded as the U.S. "national sport" since the late 19th century. The most-watched individual sports in the U.S. are golf and auto racing, particularly NASCAR and IndyCar. On the collegiate level, earnings for the member institutions exceed $1 billion annually, and college football and basketball attract large audiences, as the NCAA March Madness tournament and the College Football Playoff are some of the most watched national sporting events. In the U.S., the intercollegiate sports level serves as the main feeder system for professional and Olympic sports, with significant exceptions such as Minor League Baseball. This differs greatly from practices in nearly all other countries, where publicly and privately funded sports organizations serve this function. Eight Olympic Games have taken place in the United States. The 1904 Summer Olympics in St. Louis, Missouri, were the first-ever Olympic Games held outside of Europe. The Olympic Games will be held in the U.S. for a ninth time when Los Angeles hosts the 2028 Summer Olympics. U.S. athletes have won a total of 2,968 medals (1,179 gold) at the Olympic Games, the most of any country. 
In other international competition, the United States is the home of a number of prestigious events, including the America's Cup, World Baseball Classic, the U.S. Open, and the Masters Tournament. The U.S. men's national soccer team has qualified for eleven World Cups, while the women's national team has won the FIFA Women's World Cup and Olympic soccer tournament four and five times, respectively. The 1999 FIFA Women's World Cup was hosted by the United States. Its final match was attended by 90,185, setting the world record for largest women's sporting event crowd at the time. The United States hosted the 1994 FIFA World Cup and will co-host, along with Canada and Mexico, the 2026 FIFA World Cup. |
======================================== |
[SOURCE: https://en.wikipedia.org/wiki/Triangulum_Minus] | [TOKENS: 198] |
Contents Triangulum Minus Triangulum Minus (Latin for the Smaller Triangle) was a constellation created by Johannes Hevelius. Its name is sometimes wrongly written as Triangulum Minor. It was formed from the southern parts of his Triangula (plural form of Triangulum), alongside Triangulum Majus, but is no longer in use. The triangle was defined by the fifth-magnitude stars ι Trianguli (6 Tri), 10 Trianguli, and 12 Trianguli. Also known as TZ Trianguli, ι (6) Trianguli is a multiple star system with a combined magnitude of 4.7, whose main component is a yellow giant of spectral type G5III. The star was named Triminus in 2025 by the IAU Working Group on Star Names, after the obsolete constellation.
======================================== |
[SOURCE: https://en.wikipedia.org/wiki/Roman_numerals] | [TOKENS: 6103] |
Contents Roman numerals Roman numerals are a numeral system that originated in ancient Rome and remained the usual way of writing numbers throughout Europe well into the Late Middle Ages. Numbers are written with combinations of letters from the Latin alphabet, each with a fixed integer value. The modern style uses only these seven: I (1), V (5), X (10), L (50), C (100), D (500), and M (1,000). The use of Roman numerals continued long after the decline of the Roman Empire. From the 14th century on, Roman numerals began to be replaced by Arabic numerals; however, this process was gradual, and the use of Roman numerals has persisted in some contexts, such as on clock faces. For instance, on the clock of Big Ben (designed in 1852), the hours from 1 to 12 are written as I, II, III, IV, V, VI, VII, VIII, IX, X, XI, and XII. The notations IV and IX can be read as "one less than five" (i.e., four) and "one less than ten" (nine). Other common uses include year numbers on monuments and buildings and copyright dates on the title screens of films and television programmes. MCM, signifying "a thousand, and a hundred less than another thousand", means 1900, so 1912 is written MCMXII. For the years of the current (21st) century, MM indicates 2000; this year is MMXXVI (2026).
Description Roman numerals use different symbols for each power of ten, and there is no zero symbol, in contrast with the place value notation of Arabic numerals (in which place-keeping zeros enable the same digit to represent different powers of ten). This allows some flexibility in notation, and there has never been an official or universally accepted standard for Roman numerals. Usage varied greatly in ancient Rome and became thoroughly chaotic in medieval times. The more recent restoration of a largely "classical" notation has gained popularity among some writers, while others use variant forms in search of more "flexibility". Roman numerals may be considered legally binding expressions of a number, as in U.S. copyright law before the Berne Convention Implementation Act of 1988 (where an "incorrect" or ambiguous numeral in a copyright notice could invalidate a copyright claim or affect the termination date of the copyright period). The numerals for 4 (IV) and 9 (IX) are written using subtractive notation, where the smaller symbol (I) is subtracted from the larger one (V or X), instead of IIII and VIIII. Subtractive notation is also used for 40 (XL), 90 (XC), 400 (CD), and 900 (CM); these are the only subtractive forms in standard use. A number containing two or more decimal digits is built by appending the Roman numeral equivalent for each, from highest to lowest, as in 39 (XXXIX), 246 (CCXLVI), 789 (DCCLXXXIX), and 2,421 (MMCDXXI). Any missing place (represented by a zero in the place-value equivalent) is simply omitted, as in Latin (and English) speech: 160 is CLX and 1,009 is MIX. The largest number that can be represented in this manner is 3,999 (MMMCMXCIX), but this is sufficient for the values for which Roman numerals are commonly used today, such as year numbers. For numbers of 4,000 and larger, both before and after the introduction of Arabic numerals in the West, users of Roman numerals have employed various means to write larger numbers (see § Large numbers below). Forms exist that vary in one way or another from the general standard represented above.
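Before turning to those variant forms, the standard composition rules above can be made concrete in code. The following is a minimal sketch in Python (not part of the article; the function name and value table are illustrative): each value is emitted from highest to lowest, with the six subtractive pairs treated as ordinary entries in the table.

    # Standard modern Roman numerals, 1..3999, using only the seven symbols
    # and the six subtractive forms (CM, CD, XC, XL, IX, IV) described above.
    VALUES = [
        (1000, "M"), (900, "CM"), (500, "D"), (400, "CD"),
        (100, "C"), (90, "XC"), (50, "L"), (40, "XL"),
        (10, "X"), (9, "IX"), (5, "V"), (4, "IV"), (1, "I"),
    ]

    def to_roman(n: int) -> str:
        if not 1 <= n <= 3999:
            raise ValueError("standard notation covers 1 to 3,999 only")
        out = []
        for value, symbol in VALUES:
            count, n = divmod(n, value)   # how many of this value fit, and the remainder
            out.append(symbol * count)    # e.g. 3 x "X" -> "XXX"; 0 -> ""
        return "".join(out)

    assert to_roman(1912) == "MCMXII"      # the example given above
    assert to_roman(3999) == "MMMCMXCIX"   # the largest standard value

Because the subtractive pairs appear in the table alongside the plain symbols, the greedy highest-first loop is all that is needed; no special-casing of 4s and 9s is required.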
While subtractive notation for 4, 40, and 400 (IV, XL, and CD) has been the usual form since Roman times,[citation needed] additive notation to represent these numbers (IIII, XXXX, and CCCC) very frequently continued to be used, including in compound numbers like 24 (XXIIII), 74 (LXXIIII), and 490 (CCCCLXXXX). The additive forms for 9, 90, and 900 (VIIII, LXXXX, and DCCCC) have also been used, although less often. The two conventions could be mixed in the same document or inscription, even in the same numeral. For example, on the numbered gates to the Colosseum, IIII is systematically used instead of IV, but subtractive notation is used for XL; consequently, gate 44 is labelled XLIIII. Especially on tombstones and other funerary inscriptions, 5 and 50 have occasionally been written IIIII and XXXXX instead of V and L, and there are instances such as IIIIII and XXXXXX rather than VI or LX. Modern clock faces that use Roman numerals still very often use IIII for four o'clock but IX for nine o'clock, a practice that goes back to very early clocks such as the Wells Cathedral clock of the late 14th century. However, this is far from universal: for example, the clock on the Palace of Westminster tower (commonly known as Big Ben) uses a subtractive IV for 4 o'clock. Several monumental inscriptions created in the early 20th century use variant forms for "1900" (usually written MCM). These vary from MDCCCCX for 1910, as seen on Admiralty Arch, London, to the more unusual, if not unique, MDCDIII for 1903, on the north entrance to the Saint Louis Art Museum. There are numerous historical examples of IIX being used for 8; for example, XIIX was used by officers of the XVIII Roman Legion to write their number. The notation appears prominently on the cenotaph of their senior centurion Marcus Caelius (c. 45 BC – 9 AD). On the publicly displayed official Roman calendars known as Fasti, XIIX is used for the 18 days to the next Kalends, and XXIIX for the 28 days in February. The latter can be seen on the sole extant pre-Julian calendar, the Fasti Antiates Maiores. There are historical examples of other subtractive forms: IIIXX for 17, IIXX for 18, IIIC for 97, IIC for 98, and IC for 99. A possible explanation is that the word for 18 in Latin is duodeviginti, literally "two from twenty", while 98 is duodecentum (two from hundred) and 99 is undecentum (one from hundred). However, this explanation does not seem to apply to IIIXX and IIIC, since the Latin words for 17 and 97 were septendecim (seven ten) and nonaginta septem (ninety-seven), respectively. The ROMAN() function in Microsoft Excel supports multiple subtraction modes depending on the "Form" setting. For example, the number 499 (usually CDXCIX) can be rendered as LDVLIV ((500-50)+(50-5)+(5-1)), XDIX ((500-10)+(10-1)), VDIV ((500-5)+(5-1)), or ID (500-1). The relevant Microsoft help page offers no explanation for this function other than to describe its output as "more concise". There are also historical examples of other additive and multiplicative forms, and of forms that seem to reflect spoken phrases. Some of these variants may have been regarded as errors even by contemporaries. As a non-positional numeral system, Roman numerals have no "place-keeping" zeros. Furthermore, the system as used by the Romans lacked a numeral for the number zero itself (that is, what remains after 1 is subtracted from 1). The word nulla (the Latin word meaning "none") was used to represent 0, although the earliest attested instances are medieval.
For instance, Dionysius Exiguus used nulla alongside Roman numerals in a manuscript from 525 AD. About 725, Bede or one of his colleagues used the letter N, the initial of nulla or of nihil (the Latin word for "nothing"), for 0 in a table of epacts, all written in Roman numerals. The use of N to indicate "none" long survived in the historic apothecaries' system of measurement: it was used well into the 20th century to designate quantities in pharmaceutical prescriptions. In later times, the Arabic numeral "0" has been used as a zero to open enumerations with Roman numbers. Examples include the 24-hour Shepherd Gate Clock from 1852 and tarot packs such as the 15th-century Sola Busca and the 20th-century Rider–Waite packs. The base "Roman fraction" is S, indicating 1⁄2. The use of S (as in VIIS to indicate 7+1⁄2) is attested in some ancient inscriptions and in the now rare apothecaries' system (usually in the form SS); but while Roman numerals for whole numbers are essentially decimal, S does not correspond to 5⁄10, as one might expect, but to 6⁄12. The Romans used a duodecimal rather than a decimal system for fractions, as the divisibility of twelve (12 = 2² × 3) makes it easier to handle the common fractions of 1⁄3 and 1⁄4 than does a system based on ten (10 = 2 × 5). Notation for fractions other than 1⁄2 is mainly found on surviving Roman coins, many of which had values that were duodecimal fractions of the unit as. Fractions less than 1⁄2 are indicated by a dot (·) for each uncia "twelfth", the source of the English words inch and ounce; dots are repeated for fractions up to five twelfths. Six twelfths (one half) is S, for semis "half". Uncia dots were added to S for fractions from seven to eleven twelfths, just as tallies were added to V for whole numbers from six to nine. The arrangement of the dots was variable and not necessarily linear. Five dots arranged like (⁙) (as on the face of a die) are known as a quincunx, from the name of the Roman fraction/coin. The Latin words sextans and quadrans are the source of the English words sextant and quadrant. Each fraction from 1⁄12 to 12⁄12 had a name in Roman times, corresponding to the name of the related coin. Other Roman fractional notations existed as well; fractions could also be indicated with a slash through the last letter in a numeral (e.g. Ɨ), which reduced the numeral's value by an amount less than one (usually 1⁄2). The modern form can only write numbers up to 3,999, and without M, in early Roman times, only numbers up to 899 could be written. Various schemes have been used over time to write larger numbers. Using the apostrophus method, 500 is written as IↃ, while 1,000 is written as CIↃ. This system of encasing numbers to denote thousands (imagine the Cs and Ↄs as parentheses) had its origins in Etruscan numeral usage. Each additional set of C and Ↄ surrounding CIↃ raises the value by a factor of ten: CCIↃↃ represents 10,000 and CCCIↃↃↃ represents 100,000. Similarly, each additional Ↄ to the right of IↃ raises the value by a factor of ten: IↃↃ represents 5,000 and IↃↃↃ represents 50,000. Numerals larger than CCCIↃↃↃ do not occur. Sometimes IↃ (500) is reduced to D, CIↃ (1,000) to ↀ, IↃↃ (5,000) to ↁ, CCIↃↃ (10,000) to ↂ, IↃↃↃ (50,000) to ↇ, and CCCIↃↃↃ (100,000) to ↈ. It is likely that CIↃ (1,000) influenced the later M. John Wallis is often credited with introducing the ⟨∞⟩ symbol for infinity, and one conjecture is that he based it on ↀ, since 1,000 was used hyperbolically to represent very large numbers.
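Returning to the duodecimal fractions described above, the S-plus-dots convention is simple enough to model. The following Python sketch is illustrative only: it assumes a linear arrangement of the uncia dots for readability, even though, as noted above, the actual arrangement on coins and inscriptions varied.

    # Roman duodecimal fractions: S = 6/12 (semis), one middle dot per uncia (1/12).
    from fractions import Fraction

    def roman_fraction(frac: Fraction) -> str:
        if not 0 < frac < 1:
            raise ValueError("expected a proper fraction")
        twelfths = frac * 12
        if twelfths.denominator != 1:
            raise ValueError("Roman fractions are limited to twelfths")
        s, dots = divmod(twelfths.numerator, 6)   # one S per six unciae, dots for the rest
        return "S" * s + "·" * dots

    assert roman_fraction(Fraction(1, 2)) == "S"        # semis, one half
    assert roman_fraction(Fraction(5, 12)) == "·····"   # five unciae: the quincunx
    assert roman_fraction(Fraction(7, 12)) == "S·"      # S plus one uncia dot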
Using the vinculum, conventional Roman numerals are multiplied by 1,000 by adding a "bar" or "overline"; thus X̄ denotes 10,000. The vinculum came into use in the late Republic, and it was a common alternative to the apostrophic ↀ during the Imperial era around the Roman world (M for 1,000 was not in use until the Medieval period). It continued in use in the Middle Ages, though it became known more commonly as titulus, and it appears in modern editions of classical and medieval Latin texts. In an extension of the vinculum, a three-sided box (now sometimes printed as two vertical lines and a vinculum) is used to multiply a numeral by 100,000. Vinculum notation is distinct from the custom of adding an overline to a numeral simply to indicate that it is a number. Both usages can be seen on Roman inscriptions of the same period and general location, such as on the Antonine Wall.
Origin The system is closely associated with the ancient city-state of Rome and the Empire that it created. However, due to the scarcity of surviving examples, the origins of the system are obscure and there are several competing theories, all largely conjectural. Rome was founded sometime between 850 and 750 BC, next to the southern edge of the Etruscan domain, which covered a large part of north-central Italy. The Roman numerals, in particular, are directly derived from the Etruscan number symbols: ⟨𐌠⟩, ⟨𐌡⟩, ⟨𐌢⟩, ⟨𐌣⟩, and ⟨𐌟⟩ for 1, 5, 10, 50, and 100 (the Etruscans had more symbols for larger numbers, but it is unknown which symbol represents which number). As in the basic Roman system, the Etruscans wrote the symbols that added to the desired number, from higher to lower value. Thus the number 87, for example, would be written 50 + 10 + 10 + 10 + 5 + 1 + 1 = 𐌣𐌢𐌢𐌢𐌡𐌠𐌠 (this would appear as 𐌠𐌠𐌡𐌢𐌢𐌢𐌣, since Etruscan was written from right to left). The symbols ⟨𐌠⟩ and ⟨𐌡⟩ resembled letters of the Etruscan alphabet, but ⟨𐌢⟩, ⟨𐌣⟩, and ⟨𐌟⟩ did not. The Etruscans used subtractive notation, too, but not like the Romans. They wrote 17, 18, and 19 as 𐌠𐌠𐌠𐌢𐌢, 𐌠𐌠𐌢𐌢, and 𐌠𐌢𐌢, mirroring the way they spoke those numbers ("three from twenty", etc.); and similarly for 27, 28, 29, 37, 38, etc. However, they did not write 𐌠𐌡 for 4 (nor 𐌢𐌣 for 40), and wrote 𐌡𐌠𐌠, 𐌡𐌠𐌠𐌠 and 𐌡𐌠𐌠𐌠𐌠 for 7, 8, and 9, respectively. The early Roman numerals for 1, 10, and 100 were the Etruscan ones: ⟨𐌠⟩, ⟨𐌢⟩, and ⟨𐌟⟩. The symbols for 5 and 50 changed from ⟨𐌡⟩ and ⟨𐌣⟩ to ⟨V⟩ and ⟨ↆ⟩ at some point. The latter had flattened to ⟨⊥⟩ (an inverted T) by the time of Augustus, and soon afterwards became identified with the graphically similar letter ⟨L⟩. The symbol for 100 was written variously as ⟨𐌟⟩ or ⟨ↃIC⟩, and was then abbreviated to ⟨Ↄ⟩ or ⟨C⟩, with ⟨C⟩ (which matched the Latin letter C) finally winning out. It might have helped that C was the initial letter of CENTUM, Latin for "hundred". The numbers 500 and 1,000 were denoted by V or X overlaid with a box or circle. Thus, 500 was like a Ɔ superimposed on a ⋌ or ⊢, making it look like Þ. It became D or Ð by the time of Augustus, under the graphic influence of the letter D, and was later identified as the letter D. An alternative symbol for "thousand" was CIↃ, and half of a thousand, or "five hundred", is the right half of the symbol, IↃ, which may have been converted into D. The notation for 1,000 was a circled or boxed X: Ⓧ, ⊗, ⊕, and by Augustan times was partially identified with the Greek letter Φ phi. Over time, the symbol changed to Ψ and ↀ.
The latter symbol further evolved into ∞, then ⋈, and eventually changed to M under the influence of the Latin word mille ("thousand"). According to Paul Kayser, the basic numerical symbols were I, X, 𐌟 and Φ (or ⊕), and the intermediate ones were derived by taking half of those (half an X is V, half a 𐌟 is ↆ, and half a Φ/⊕ is D). Then 𐌟 and ↆ developed as mentioned above. The Colosseum was constructed in Rome in 72–80 AD, and while the original perimeter wall has largely disappeared, the numbered entrances from XXIII (23) to LIIII (54) survive, demonstrating that in Imperial times Roman numerals had already assumed their classical form, largely as standardised in current use. The most obvious anomaly (a common one that persisted for centuries) is the inconsistent use of subtractive notation: while XL is used for 40, IV is avoided in favour of IIII; in fact, gate 44 is labelled XLIIII.
Use in the Middle Ages and Renaissance Lower case, or minuscule, letters were developed in the Middle Ages, well after the demise of the Western Roman Empire, and since that time lower-case versions of Roman numerals have also been commonly used: i, ii, iii, iv, and so on. Since the Middle Ages, a "j" has sometimes been substituted for the final "i" of a "lower-case" Roman numeral, such as "iij" for 3 or "vij" for 7. This "j" can be considered a swash variant of "i". Into the early 20th century, a final "j" was still sometimes used in medical prescriptions to prevent tampering with or misinterpretation of a number after it was written. Numerals in documents and inscriptions from the Middle Ages sometimes include additional symbols, which today are called "medieval Roman numerals". Some simply substitute another letter for the standard one (such as "A" for "V", or "Q" for "D"), while others serve as abbreviations for compound numerals ("O" for "XI", or "F" for "XL"). Although they are still listed today in some dictionaries, they are long out of use. A superscript "o" (sometimes written directly above the symbol) was sometimes used as an ordinal indicator. Chronograms, messages with dates encoded into them, were popular during the Renaissance era. The chronogram would be a phrase containing the letters I, V, X, L, C, D, and M; by putting these letters together, the reader would obtain a number, usually indicating a particular year.
Modern use By the 11th century, Arabic numerals had been introduced into Europe from al-Andalus, by way of Arab traders and arithmetic treatises. Roman numerals, however, proved very persistent, remaining in common use in the West well into the 14th and 15th centuries, even in accounting and other business records (where the actual calculations would have been made using an abacus). Replacement by their more convenient "Arabic" equivalents was quite gradual, and Roman numerals are still used today in certain contexts. A few examples of their current use are: In astronautics, United States rocket model variants are sometimes designated by Roman numerals, e.g. Titan I, Titan II, Titan III, Saturn I, Saturn V. In astronomy, the natural satellites or "moons" of the planets are designated by capital Roman numerals appended to the planet's name. For example, Titan's designation is Saturn VI. In chemistry, Roman numerals are sometimes used to denote the groups of the periodic table, but this has officially been deprecated in favour of Arabic numerals.
They are also used in the IUPAC nomenclature of inorganic chemistry, for the oxidation number of cations that can take on several different positive charges, and for naming phases of polymorphic crystals, such as ice. In education, school grades (in the sense of year-groups rather than test scores) are sometimes referred to by a Roman numeral; for example, "grade IX" is sometimes seen for "grade 9". In entomology, the broods of the thirteen- and seventeen-year periodical cicadas are identified by Roman numerals. In graphic design, stylised Roman numerals may represent numeric values. In law, Roman numerals are commonly used to help organize legal codes as part of an alphanumeric outline. In mathematics (including trigonometry, statistics, and calculus), when a graph includes negative numbers, its quadrants are named using I, II, III, and IV. These quadrant names signify positive numbers on both axes, negative numbers on the x-axis, negative numbers on both axes, and negative numbers on the y-axis, respectively. The use of Roman numerals to designate quadrants avoids confusion, since Arabic numerals are used for the actual data represented in the graph. In military unit designation, Roman numerals are often used to distinguish between units at different levels. This reduces possible confusion, especially when viewing operational or strategic level maps. In particular, army corps are often numbered using Roman numerals (for example, the American XVIII Airborne Corps or the Nazi III Panzerkorps), with Arabic numerals being used for divisions and armies. In music, Roman numerals are used in several contexts. In pharmacy, Roman numerals were used with the now largely obsolete apothecaries' system of measurement, including SS to denote "one half" and N to denote "zero". In photography, Roman numerals (with zero) are used to denote varying levels of brightness when using the Zone System. In seismology, Roman numerals are used to designate degrees of the Mercalli intensity scale of earthquakes. In sport, the team containing the "top" players and representing a nation or province, a club or a school at the highest level in (say) rugby union is often called the "1st XV", while a lower-ranking cricket or football team might be the "3rd XI". In tarot, Roman numerals (with zero) are often used to denote the cards of the Major Arcana. In Ireland, Roman numerals were used until the late 1980s to indicate the month on postage franking. In documents, Roman numerals are sometimes still used to indicate the month, to avoid confusion over day/month/year or month/day/year formats. In theology and biblical scholarship, the Septuagint is often referred to as LXX, as this translation of the Old Testament into Greek is named for the legendary number of its translators (septuaginta being Latin for "seventy"). Some uses that are rare or never seen in English-speaking countries may be relatively common in parts of continental Europe and in other regions (e.g. Latin America) that use a European language other than English. For instance: Capital or small capital Roman numerals are widely used in Romance languages to denote centuries, e.g. the French xviiie siècle and the Spanish siglo xviii (not xviii siglo) for "18th century". Some Slavic and Turkic languages (especially in and adjacent to Russia) similarly favour Roman numerals (e.g. Russian XVIII век, Azeri XVIII əsr, or Polish wiek XVIII).
On the other hand, in Turkish and some Central European Slavic languages, as in most Germanic languages, one writes "18." (with a period) before the local word for "century" (e.g. Turkish 18. yüzyıl, Czech 18. století). When typing on Russian typewriters, the Roman-numeral "V" was replaced with "У" because the letter "V" is absent from the Russian Cyrillic alphabet, and the Roman-numeral "I" was replaced with "1", since the letter had been removed from the Russian alphabet by the 1918 reform of orthography. For example, XVIII was typed as ХУ111. This style is sometimes maintained even when typing on a computer, either out of habit or due to the inconvenience of switching between Latin and Russian script for one or two letters. Mixed Roman and Arabic numerals are sometimes used in numeric representations of dates (especially in formal letters and official documents, but also on tombstones). The month is written in Roman numerals, while the day is in Arabic numerals: "4.VI.1789" and "VI.4.1789" both refer unambiguously to 4 June 1789. Roman numerals are sometimes used to represent the days of the week in hours-of-operation signs displayed in windows or on doors of businesses, and sometimes in railway and bus timetables. Monday, taken as the first day of the week, is represented by I; Sunday is represented by VII. The hours-of-operation signs are tables of two columns, where the left column gives the day of the week in Roman numerals and the right column gives the hours of operation, from starting time to closing time, in 24-hour notation. In one such example, a business opens from 10 AM to 7 PM on weekdays and 10 AM to 5 PM on Saturdays, and is closed on Sundays. Roman numerals may also be used for floor numbering. For instance, apartments in central Amsterdam are indicated as 138-III, with both an Arabic numeral (number of the block or house) and a Roman numeral (floor number); the apartment on the ground floor is indicated as 138-huis. In Italy, where roads outside built-up areas have kilometre signs, major roads and motorways also mark 100-metre subdivisions, using Roman numerals from I to IX for the smaller intervals. The sign IX/17 thus marks 17.9 km. Certain Romance-speaking countries use Roman numerals to designate assemblies of their national legislatures. For instance, the composition of the Italian Parliament from 2018 to 2022 (elected in the 2018 Italian general election) is called the XVIII Legislature of the Italian Republic (or, more commonly, the "XVIII Legislature"). A notable exception to the use of Roman numerals in Europe is Greece, where Greek numerals (based on the Greek alphabet) are generally used in contexts where Roman numerals would be used elsewhere.
Unicode The "Number Forms" block of the Unicode computer character set standard has a number of Roman numeral symbols in the range of code points from U+2160 to U+2188. This range includes both upper- and lowercase numerals, as well as pre-combined characters for numbers up to 12. One justification for the existence of pre-combined numbers is to facilitate the setting of multiple-letter numbers (such as VIII) on a single horizontal line in Asian vertical text. The Unicode standard, however, includes these Roman numeral code points for compatibility only, stating that "[f]or most purposes, it is preferable to compose the Roman numerals from sequences of the appropriate Latin letters".
The block also includes some apostrophus symbols for large numbers, an old variant of "L" (50) similar to the Etruscan character, the Claudian letter "reversed C", etc.
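The Number Forms range is easy to inspect programmatically. A short Python sketch (illustrative, not from the Unicode standard; it uses only the standard-library unicodedata module) lists the pre-combined numerals for 1 to 12 and shows why composing from Latin letters is usually preferable: compatibility normalization turns the single code point back into plain letters anyway.

    import unicodedata

    # Pre-combined Roman numerals: U+2160.. (uppercase) and U+2170.. (lowercase).
    for i in range(12):
        print(i + 1, chr(0x2160 + i), chr(0x2170 + i))

    assert unicodedata.name("\u216B") == "ROMAN NUMERAL TWELVE"
    # NFKC compatibility normalization decomposes the single character "Ⅻ"
    # into the ordinary letter sequence "XII", matching the recommendation
    # quoted above to prefer sequences of the appropriate Latin letters.
    assert unicodedata.normalize("NFKC", "\u216B") == "XII"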
======================================== |
[SOURCE: https://en.wikipedia.org/wiki/Al-%27Abbasiyya] | [TOKENS: 1101] |
Contents Al-'Abbasiyya Al-'Abbasiyya (Arabic: العبْاسِيّة), also known as al-Yahudiya (Arabic: اليهودية), was a Palestinian Arab village in the Jaffa Subdistrict. It was attacked under Operation Hametz during the 1948 Palestine War and finally depopulated under Operation Dani. It was located 13 km east of Jaffa. Some remains of the village can be found today in the centre of the modern Israeli city of Yehud. History In 1596, Yahudiya appeared in Ottoman tax registers as being in the Nahiya of Ramla of the Liwa of Gaza. It had a population of 126 Muslim households and paid taxes on wheat, barley, summer crops or fruit trees, sesame, and goats or beehives. In 1838 it was noted as a Muslim village called el-Yehudiyeh in the Lydda administrative region. The French explorer Victor Guérin visited the village, which he called Yehoudieh, in 1863, and found it to have a population of more than 1,000 people. The houses were made of adobe bricks, several topped by palm leaves. Near a noria he noticed an ancient sarcophagus, placed there as a trough. An Ottoman village list from about 1870 recorded that el-jehudie had a population of 835, in 246 houses, though the population count included only men. In 1882, the PEF's Survey of Western Palestine (SWP) described the place as "a large mud village, supplied by a pond, and surrounded by palm-trees." It also noted a ruined tank, or birkeh, to the south of the village. In the 1922 census of Palestine, conducted by the British Mandate authorities, Yahudiyeh had a population of 2,437 residents, all Muslims; this increased by the 1931 census, when Yahudiya had 3,258 residents (3,253 Muslims and 5 Christians) in a total of 772 houses. The earlier name, Al-Yahudiya, is thought to be taken from the name of the biblical town of Yahud, mentioned in Joshua 19:45 (as part of a list of towns comprising the territory of the Israelite tribe of Dan), and later called Iudaea by the Romans. In 1932, the town was officially renamed Al-'Abbasiyya, because the inhabitants did not want the town to be associated with Jews. The name chosen as a replacement was mostly in honour of the memory of a sheikh called al-'Abbas who was buried in the town, but also alluded to the Arab Muslim Abbasid Caliphate. In the 1945 statistics, the population had increased to 5,800 (5,630 Muslims, 150 Jews, and 20 Christians), with a total of 20,540 dunums of land. Of this, 4,099 dunums were used for citrus and bananas, 1,019 dunums were irrigated or used for orchards, and 14,465 were used for cereals, while 101 dunums were classified as built-up areas. On December 13, 1947, twenty-four armed men from the hard-right paramilitary organization Irgun attacked the village, approaching from the Jewish town of Petaḥ Tiqvah. The attackers wore khaki uniforms and drove through the village in four cars. One group fired on villagers at a cafe and another set bombs and grenades in houses. Seven Arabs were killed (including two women and two children under the age of five) and seven others were seriously wounded (among them two women and a four-year-old girl). An armored British police vehicle was fired upon by the attackers. On September 13, 1948, David Ben-Gurion requested the destruction of Al-'Abbasiyya, among other Palestinian villages whose inhabitants had fled or been expelled. Between 1948 and 1954, the Israeli sites of Yehud, Magshimim, Ganne Yehuda, Ganne Tiqwa, and Savyon were established on the land of Al-'Abbasiyya.
In 1992 the village site was described as follows: The main mosque and the shrine of al-Nabi Huda still stand. The mosque is deserted and beginning to crack in several places; the shrine is made of stone and surmounted by a dome. There is also an Israeli coffee shop, called the Tehr coffee shop, at the entrance of a main street that was called Ziqaq al-Raml ("Sand Lane"). A number of houses remain; they have been occupied by Yehud's Jewish residents or put to other uses. One residential house, made of concrete, has a slanted roof and rectangular doors and windows; its porch is covered by corrugated metal sheets. Another house, a two-storey concrete structure with rectangular doors and windows and a tiled, tent-shaped roof, has been converted into a commercial building. The land around the site (only partially covered by construction) has been left untended and is overgrown with pine and Christ's-thorn trees.
======================================== |
[SOURCE: https://en.wikipedia.org/wiki/Bourne_shell] | [TOKENS: 1065] |
Contents Bourne shell The Bourne shell (sh) is a shell command-line interpreter for computer operating systems. It first appeared in Version 7 Unix as its default shell. Unix-like systems continue to have /bin/sh—which will be the Bourne shell, or a symbolic link or hard link to a compatible shell—even when other shells are used by most users. The Bourne shell was once standard on all branded Unix systems, although historically BSD-based systems had many scripts written in csh. As the basis of POSIX sh syntax, Bourne shell scripts can typically be run with Bash or dash on Linux and other Unix-like systems; Bash itself is a free-software clone of the Bourne shell. History Work on the Bourne shell started in 1976. Developed by Stephen Bourne at Bell Labs, it was a replacement for the Thompson shell, whose executable file had the same name: sh. The Bourne shell was also preceded by the Mashey shell. It was released in 1979 in the Version 7 Unix release distributed to colleges and universities. Although it is used as an interactive command interpreter, it was also intended as a scripting language and contains most of the features that are commonly considered to produce structured programs. It gained popularity with the publication of The Unix Programming Environment by Brian Kernighan and Rob Pike—the first commercially published book that presented the shell as a programming language in a tutorial form.[citation needed] The Bourne shell was also the first to feature the convention of using file descriptor 2 for error messages (written 2> in redirections), allowing much greater programmatic control during scripting by keeping error messages separate from data. Stephen Bourne's coding style was influenced by his experience with the ALGOL 68C compiler that he had been working on at Cambridge University. In addition to the style in which the program was written, Bourne reused portions of ALGOL 68's if ~ then ~ elif ~ then ~ else ~ fi, case ~ in ~ esac and for/while ~ do ~ od (using done instead of od) clauses in the common Unix Bourne shell syntax. Moreover, although the v7 shell is written in C, Bourne took advantage of some macros to give the C source code an ALGOL 68 flavour. These macros (along with the finger command distributed in Unix version 4.2BSD) inspired the International Obfuscated C Code Contest (IOCCC). Over the years, the Bourne shell was enhanced at AT&T. The variants are thus named after the respective AT&T Unix version with which they were released (some important variants being Version 7, System III, SVR2, SVR3, and SVR4). As the shell was never versioned, the only way to identify it was to test its features. Variants Duplex Multi-Environment Real-Time (DMERT), a hybrid time-sharing/real-time operating system developed in the 1970s at Bell Labs' Indian Hill location in Naperville, Illinois, uses a 1978 snapshot of the Bourne shell ("VERSION sys137 DATE 1978 Oct 12 22:39:57").[citation needed] The DMERT shell runs on 3B21D computers still in use in the telecommunications industry.[citation needed] The Korn shell (ksh), written by David Korn and based on the original Bourne shell source code, was a middle road between the Bourne shell and the C shell. Its syntax was chiefly drawn from the Bourne shell, while its job control features resembled those of the C shell.
The functionality of the original Korn shell (known as ksh88 from the year of its introduction) was used as a basis for the POSIX shell standard. A newer version, ksh93, has been open source since 2000 and is used on some Linux distributions. A clone of ksh88 known as pdksh is the default shell in OpenBSD. Jörg Schilling's Schily-Tools includes three Bourne shell derivatives. Relationship to other shells Bill Joy, the author of the C shell, criticized the Bourne shell as being unfriendly for interactive use, a task for which Stephen Bourne himself acknowledged the C shell's superiority. Bourne stated, however, that his shell was superior for scripting and was available on any Unix system, and Tom Christiansen also criticized the C shell as being unsuitable for scripting and programming. Due to copyright issues surrounding the Bourne shell as it was used in historic CSRG BSD releases, Kenneth Almquist developed a clone of the Bourne shell, known by some as the Almquist shell and available under the BSD license, which is in use today on some BSD descendants and in low-memory situations. The Almquist shell was ported to Linux, and the port was renamed the Debian Almquist shell, or dash. This shell provides faster execution of standard sh scripts (POSIX-standard sh, in modern descendants) with a smaller memory footprint than its counterpart, Bash. Its use tends to expose bashisms: bash-centric assumptions made in scripts meant to run on sh.
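The Bourne constructs described above are easiest to see in a script. The following is a minimal illustrative sketch (not from the article; the filenames are placeholders) of the ALGOL 68-derived if/fi, case/esac, and for/done keywords together with the file-descriptor-2 convention; it should behave the same under a traditional Bourne shell, dash, or Bash.

    #!/bin/sh
    # Bourne-style control structures and the stderr convention.
    for f in /etc/passwd /no/such/file; do
        if [ -r "$f" ]; then
            case $f in
                *passwd) echo "readable passwd file: $f" ;;
                *)       echo "readable: $f" ;;
            esac
        else
            echo "cannot read: $f" >&2   # diagnostics go to file descriptor 2
        fi
    done

Because diagnostics go to file descriptor 2, a caller can capture or discard them independently of normal output, e.g. ./check.sh 2>errors.log.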
======================================== |
[SOURCE: https://en.wikipedia.org/wiki/Turdus_Solitarius] | [TOKENS: 181] |
Contents Turdus Solitarius Turdus Solitarius (Latin for solitary thrush) was a constellation created by the French astronomer Pierre Charles Le Monnier in 1776 from stars of Hydra's tail. It was named after the Rodrigues solitaire, an extinct flightless bird that was endemic to the island of Rodrigues, east of Madagascar in the Indian Ocean. It was replaced by another constellation, Noctua (the Owl), in A Celestial Atlas (1822) by the British amateur astronomer Alexander Jamieson, but neither was adopted by the International Astronomical Union among its 88 recognized constellations. The IAU Working Group on Star Names approved the name Solitaire for the star E Hydrae in 2024, after the obsolete constellation.
======================================== |