[SOURCE: https://en.wikipedia.org/wiki/Matplotlib] | [TOKENS: 376]
Matplotlib Matplotlib (portmanteau of MATLAB, plot, and library) is a plotting library for the Python programming language and its numerical mathematics extension NumPy. It provides an object-oriented API for embedding plots into applications using general-purpose GUI toolkits like Tkinter, wxPython, Qt, or GTK. There is also a procedural "pylab" interface based on a state machine (like OpenGL), designed to closely resemble that of MATLAB, though its use is discouraged. SciPy makes use of Matplotlib. Matplotlib was originally written by John D. Hunter. Since then it has had an active development community and is distributed under a BSD-style license. Michael Droettboom was nominated as matplotlib's lead developer shortly before John Hunter's death in August 2012 and was later joined by Thomas Caswell. Matplotlib is a NumFOCUS fiscally sponsored project. Usage Matplotlib is used in scientific research as a tool for data visualization. For example, the Event Horizon Telescope collaboration used Matplotlib to produce visualizations during the effort to create the first image of a black hole. Matplotlib also underpins the plotting functionality of many scientific Python libraries (for instance, pandas uses Matplotlib as its default backend for plotting). Its importance to the scientific community has been acknowledged by institutions such as NASA, which in 2024 awarded a grant to support Matplotlib's continued development as part of an initiative to fund widely used open-source scientific software. In education and data science, Matplotlib is frequently used to teach programming and data visualization. It integrates with Jupyter Notebook, allowing students and instructors to generate inline plots and interactively explore data within a notebook environment. Many educational institutions incorporate Matplotlib into their curricula for teaching STEM concepts, and it is widely featured in tutorials, workshops, and open online courses as a primary plotting library.
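As an illustration of the object-oriented API described above (in contrast to the MATLAB-style pyplot state machine), a minimal plotting script might look like the following sketch; the data is arbitrary and chosen only for the example:

```python
import numpy as np
import matplotlib.pyplot as plt

# Arbitrary sample data for the illustration
x = np.linspace(0, 2 * np.pi, 100)
y = np.sin(x)

# Object-oriented API: create explicit Figure and Axes objects,
# then call plotting methods on the Axes rather than on global state.
fig, ax = plt.subplots()
ax.plot(x, y, label="sin(x)")
ax.set_xlabel("x")
ax.set_ylabel("sin(x)")
ax.set_title("Minimal object-oriented Matplotlib example")
ax.legend()

fig.savefig("sine.png")  # or plt.show() in an interactive session
```

In a Jupyter notebook the same figure renders inline, which is the integration the Usage section refers to; the object-oriented style is preferred over the pylab interface precisely because each Figure and Axes is an explicit object rather than hidden state.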
========================================
[SOURCE: https://en.wikipedia.org/wiki/Glycoprotein] | [TOKENS: 2253]
Glycoprotein Glycoproteins are proteins which contain oligosaccharide (sugar) chains covalently attached to amino acid side-chains. The carbohydrate is attached to the protein in a cotranslational or posttranslational modification. This process is known as glycosylation. Secreted extracellular proteins are often glycosylated. In proteins that have segments extending extracellularly, the extracellular segments are also often glycosylated. Glycoproteins are also often important integral membrane proteins, where they play a role in cell–cell interactions. It is important to distinguish endoplasmic reticulum-based glycosylation of the secretory system from reversible cytosolic-nuclear glycosylation. Glycoproteins of the cytosol and nucleus can be modified through the reversible addition of a single GlcNAc residue that is considered reciprocal to phosphorylation, and the functions of these are likely to be an additional regulatory mechanism that controls phosphorylation-based signalling. In contrast, classical secretory glycosylation can be structurally essential. For example, inhibition of asparagine-linked, i.e. N-linked, glycosylation can prevent proper glycoprotein folding, and full inhibition can be toxic to an individual cell. In contrast, perturbation of glycan processing (enzymatic removal/addition of carbohydrate residues to the glycan), which occurs in both the endoplasmic reticulum and Golgi apparatus, is dispensable for isolated cells (as evidenced by survival with glycosidase inhibitors) but can lead to human disease (congenital disorders of glycosylation) and can be lethal in animal models. It is therefore likely that the fine processing of glycans is important for endogenous functionality, such as cell trafficking, but that this may have been secondary to its role in host-pathogen interactions. A famous example of this latter effect is the ABO blood group system. Though there are different types of glycoproteins, the most common are N-linked and O-linked glycoproteins. These two types of glycoproteins are distinguished by structural differences that give them their names. Glycoproteins vary greatly in composition, making many different compounds such as antibodies or hormones. Due to the wide array of functions within the body, interest in glycoprotein synthesis for medical use has increased. There are now several methods to synthesize glycoproteins, including recombination and glycosylation of proteins. Glycosylation is also known to occur on nucleocytoplasmic proteins in the form of O-GlcNAc. Types of glycosylation There are several types of glycosylation, of which N-linked and O-linked glycosylation are the most common. Monosaccharides Monosaccharides commonly found in eukaryotic glycoproteins include glucose, galactose, mannose, fucose, N-acetylglucosamine, N-acetylgalactosamine, xylose, and sialic acid. The sugar group(s) can assist in protein folding, improve proteins' stability and are involved in cell signalling. Structure The critical structural element of all glycoproteins is having oligosaccharides bonded covalently to a protein. There are 10 common monosaccharides in mammalian glycans: glucose (Glc), fucose (Fuc), xylose (Xyl), mannose (Man), galactose (Gal), N-acetylglucosamine (GlcNAc), glucuronic acid (GlcA), iduronic acid (IdoA), N-acetylgalactosamine (GalNAc), and sialic acid, e.g. 5-N-acetylneuraminic acid (Neu5Ac). These glycans link themselves to specific areas of the protein amino acid chain. The two most common linkages in glycoproteins are N-linked and O-linked.
In an N-linked glycoprotein, the glycan is bonded to the nitrogen atom of an asparagine side-chain within the protein sequence. In an O-linked glycoprotein, the sugar is bonded to an oxygen atom of a serine or threonine residue in the protein. Glycoprotein size and composition can vary widely, with carbohydrate content ranging from 1% to 70% of the total mass of the glycoprotein. Within the body, glycoproteins occur in the blood, the extracellular matrix, or on the outer surface of the plasma membrane, and make up a large portion of the proteins secreted by eukaryotic cells. They are very broad in their applications and can serve a variety of roles, from antibodies to hormones. Glycomics is the study of the carbohydrate components of cells. Though not exclusive to glycoproteins, it can reveal more information about different glycoproteins and their structure. One of the purposes of this field of study is to determine which proteins are glycosylated and where in the amino acid sequence the glycosylation occurs. Historically, mass spectrometry has been used to identify the structure of glycoproteins and characterize the carbohydrate chains attached. Examples The unique interactions of the oligosaccharide chains have different applications. First, they aid in quality control by identifying misfolded proteins. The oligosaccharide chains also change the solubility and polarity of the proteins that they are bonded to. For example, if the oligosaccharide chains are negatively charged, with enough density around the protein, they can repel proteolytic enzymes from the bonded protein. The diversity in interactions lends itself to different types of glycoproteins with different structures and functions. One example of glycoproteins found in the body is mucins, which are secreted in the mucus of the respiratory and digestive tracts. The sugars attached to mucins give them considerable water-holding capacity and also make them resistant to proteolysis by digestive enzymes. Glycoproteins are important for white blood cell recognition. Examples of glycoproteins, in the immune system and elsewhere, include the following. Soluble glycoproteins often show a high viscosity, for example, in egg white and blood plasma. Variable surface glycoproteins allow the sleeping-sickness parasite Trypanosoma to escape the immune response of the host. The viral spike of the human immunodeficiency virus is heavily glycosylated. Approximately half the mass of the spike is glycosylation, and the glycans act to limit antibody recognition as the glycans are assembled by the host cell and so are largely 'self'. Over time, some patients can evolve antibodies to recognise the HIV glycans, and almost all so-called 'broadly neutralising antibodies' (bnAbs) recognise some glycans. This is possible mainly because the unusually high density of glycans hinders normal glycan maturation, and they are therefore trapped in the premature, high-mannose state. This provides a window for immune recognition. In addition, as these glycans are much less variable than the underlying protein, they have emerged as promising targets for vaccine design. P-glycoprotein is critical to antitumor research due to its ability to block the effects of antitumor drugs. P-glycoprotein, or multidrug transporter (MDR1), is a type of ABC transporter that transports compounds out of cells. This transport of compounds out of cells includes drugs made to be delivered to the cell, causing a decrease in drug effectiveness.
Therefore, being able to inhibit this behavior would decrease P-glycoprotein interference in drug delivery, making this an important topic in drug discovery. For example, P-glycoprotein causes a decrease in anti-cancer drug accumulation within tumor cells, limiting the effectiveness of chemotherapies used to treat cancer. Hormones Hormones that are glycoproteins include follicle-stimulating hormone, luteinizing hormone, thyroid-stimulating hormone, human chorionic gonadotropin, and erythropoietin (EPO). Distinction between glycoproteins and proteoglycans Quoting from IUPAC recommendations: A glycoprotein is a compound containing carbohydrate (or glycan) covalently linked to protein. The carbohydrate may be in the form of a monosaccharide, disaccharide(s), oligosaccharide(s), polysaccharide(s), or their derivatives (e.g. sulfo- or phospho-substituted). One, a few, or many carbohydrate units may be present. Proteoglycans are a subclass of glycoproteins in which the carbohydrate units are polysaccharides that contain amino sugars. Such polysaccharides are also known as glycosaminoglycans. Functions Analysis A variety of methods are used in the detection, purification, and structural analysis of glycoproteins. Synthesis The glycosylation of proteins has an array of different applications, from influencing cell-to-cell communication to changing the thermal stability and the folding of proteins. Due to the unique abilities of glycoproteins, they can be used in many therapies. By understanding glycoproteins and their synthesis, they can be made to treat cancer, Crohn's disease, high cholesterol, and more. The process of glycosylation (binding a carbohydrate to a protein) is a post-translational modification, meaning it happens after the production of the protein. Glycosylation is a process that roughly half of all human proteins undergo, and it heavily influences the properties and functions of the protein. Within the cell, glycosylation occurs in the endoplasmic reticulum. There are several techniques for the assembly of glycoproteins. One technique utilizes recombination. The first consideration for this method is the choice of host, as there are many different factors that can influence the success of glycoprotein recombination, such as cost, the host environment, and the efficacy of the process. Some examples of host cells include E. coli, yeast, plant cells, insect cells, and mammalian cells. Of these options, mammalian cells are the most common because their use does not face the same challenges that other host cells do, such as different glycan structures, shorter half-lives, and potential unwanted immune responses in humans. Of mammalian cells, the most common cell line used for recombinant glycoprotein production is the Chinese hamster ovary line. However, as technologies develop, the most promising cell lines for recombinant glycoprotein production are human cell lines. The formation of the link between the glycan and the protein is a key element of the synthesis of glycoproteins. The most common method of glycosylation of N-linked glycoproteins is through the reaction between a protected glycan and a protected asparagine. Similarly, an O-linked glycoprotein can be formed through the addition of a glycosyl donor to a protected serine or threonine. These two methods are examples of natural linkage. However, there are also methods of unnatural linkage. Some methods include ligation and a reaction between a serine-derived sulfamidate and thiohexoses in water. Once this linkage is complete, the amino acid sequence can be expanded upon using solid-phase peptide synthesis.
========================================
[SOURCE: https://en.wikipedia.org/wiki/Middle_East#cite_note-12] | [TOKENS: 6152]
Contents Middle East The Middle East[b] is a geopolitical region encompassing the Arabian Peninsula, Egypt, Iran, Iraq, the Levant, and Turkey. The term came into widespread usage by Western European nations in the early 20th century as a replacement of the term Near East (both were in contrast to the Far East). The term "Middle East" has led to some confusion over its changing definitions. Since the late 20th century, it has been criticized as being too Eurocentric. The region includes the vast majority of the territories included in the closely associated definition of West Asia, but without the South Caucasus. It also includes all of Egypt (not just the Sinai region) and all of Turkey (including East Thrace). Most Middle Eastern countries (13 out of 18) are part of the Arab world. The three most populous countries in the region are Egypt, Iran, and Turkey, while Saudi Arabia is the largest Middle Eastern country by area. The history of the Middle East dates back to ancient times, and it was long considered the "cradle of civilization". The geopolitical importance of the region has been recognized and competed for during millennia. The Abrahamic religions (Judaism, Christianity, and Islam) have their origins in the Middle East. Arabs constitute the main ethnic group in the region, followed by Turks, Persians, Kurds, Jews, and Assyrians. The Middle East generally has a hot, arid climate, especially in the Arabian and Egyptian regions. Several major rivers provide irrigation to support agriculture in limited areas here, such as the Nile Delta in Egypt, the Tigris and Euphrates watersheds of Mesopotamia, and the basin of the Jordan River that spans most of the Levant. These regions are collectively known as the Fertile Crescent, and comprise the core of what historians had long referred to as the cradle of civilization; multiple regions of the world have since been classified as also having developed independent, original civilizations. Conversely, the Levantine coast and most of Turkey have relatively temperate climates typical of the Mediterranean, with dry summers and cool, wet winters. Most of the countries that border the Persian Gulf have vast reserves of petroleum. Monarchs of the Arabian Peninsula in particular have benefitted economically from petroleum exports. Because of the arid climate and dependence on the fossil fuel industry, the Middle East is both a major contributor to climate change and a region that is expected to be severely adversely affected by it. Other concepts of the region exist, including the broader Middle East and North Africa (MENA), which includes states of the Maghreb and the Sudan. The term the "Greater Middle East" also includes Afghanistan, Mauritania, Pakistan, as well as parts of East Africa, and sometimes Central Asia and the South Caucasus. Terminology The term "Middle East" may have originated in the 1850s in the British India Office. However, it became more widely known when United States naval strategist Alfred Thayer Mahan used the term in 1902 to "designate the area between Arabia and India". During this time the British and Russian empires were vying for influence in Central Asia, a rivalry that would become known as the Great Game. Mahan realized not only the strategic importance of the region, but also of its center, the Persian Gulf. He labeled the area surrounding the Persian Gulf as the Middle East. 
He said that, beyond Egypt's Suez Canal, the Gulf was the most important passage for Britain to control in order to keep the Russians from advancing towards British India. Mahan first used the term in his article "The Persian Gulf and International Relations", published in September 1902 in the National Review, a British journal. The Middle East, if I may adopt a term which I have not seen, will some day need its Malta, as well as its Gibraltar; it does not follow that either will be in the Persian Gulf. Naval force has the quality of mobility which carries with it the privilege of temporary absences; but it needs to find on every scene of operation established bases of refit, of supply, and in case of disaster, of security. The British Navy should have the facility to concentrate in force if occasion arise, about Aden, India, and the Persian Gulf. Mahan's article was reprinted in The Times and followed in October by a 20-article series entitled "The Middle Eastern Question", written by Sir Ignatius Valentine Chirol. During this series, Sir Ignatius expanded the definition of Middle East to include "those regions of Asia which extend to the borders of India or command the approaches to India." After the series ended in 1903, The Times removed quotation marks from subsequent uses of the term. Until World War II, it was customary to refer to areas centered on Turkey and the eastern shore of the Mediterranean as the "Near East", while the "Far East" centered on China, India and Japan. The Middle East was then defined as the area from Mesopotamia to Burma; namely, the area between the Near East and the Far East. This area broadly corresponds to South Asia. In the late 1930s, the British established the Middle East Command, which was based in Cairo, for its military forces in the region. After that time, the term "Middle East" gained broader usage in Europe and the United States. Following World War II, for example, the Middle East Institute was founded in Washington, D.C. in 1946. The corresponding adjective is Middle Eastern and the derived noun is Middle Easterner. While non-Eurocentric terms such as "Southwest Asia" or "Swasia" have been sparsely used, the classification of the African country, Egypt, among those counted in the Middle East challenges the usefulness of using such terms. The description Middle has also led to some confusion over changing definitions. Before the First World War, "Near East" was used in English to refer to the Balkans and the Ottoman Empire, while "Middle East" referred to the Caucasus, Persia, and Arabian lands, and sometimes Afghanistan, India and others. In contrast, "Far East" referred to the countries of East Asia (e.g. China, Japan, and Korea). With the collapse of the Ottoman Empire in 1918, "Near East" largely fell out of common use in English, while "Middle East" came to be applied to the emerging independent countries of the Islamic world. However, the usage "Near East" was retained by a variety of academic disciplines, including archaeology and ancient history. In their usage, the term describes an area identical to the term Middle East, which is not used by these disciplines (see ancient Near East).[citation needed] The first official use of the term "Middle East" by the United States government was in the 1957 Eisenhower Doctrine, which pertained to the Suez Crisis. 
Secretary of State John Foster Dulles defined the Middle East as "the area lying between and including Libya on the west and Pakistan on the east, Syria and Iraq on the North and the Arabian peninsula to the south, plus the Sudan and Ethiopia." In 1958, the State Department explained that the terms "Near East" and "Middle East" were interchangeable, and defined the region as including only Egypt, Syria, Israel, Lebanon, Jordan, Iraq, Saudi Arabia, Kuwait, Bahrain, and Qatar. Since the late 20th century, scholars and journalists from the region, such as the journalist Louay Khraish and the historian Hassan Hanafi, have criticized the use of "Middle East" as a Eurocentric and colonialist term. The Associated Press Stylebook of 2004 says that Near East formerly referred to the farther west countries while Middle East referred to the eastern ones, but that now they are synonymous. It instructs: Use Middle East unless Near East is used by a source in a story. Mideast is also acceptable, but Middle East is preferred. European languages have adopted terms similar to Near East and Middle East. Since these are based on a relative description, the meanings depend on the country and are generally different from the English terms. In German the term Naher Osten (Near East) is still in common use (nowadays the term Mittlerer Osten is more and more common in press texts translated from English sources, albeit having a distinct meaning). Slavic languages such as Russian (Ближний Восток, Blizhniy Vostok), Bulgarian (Близкия Изток), Polish (Bliski Wschód) and Croatian (Bliski istok) use terms meaning Near East as the only appropriate ones for the region. However, some European languages do have "Middle East" equivalents, such as French Moyen-Orient, Swedish Mellanöstern, Spanish Oriente Medio or Medio Oriente, Greek Μέση Ανατολή (Mesi Anatoli), and Italian Medio Oriente. Perhaps because of the political influence of the United States and Europe, and the prominence of the Western press, the Arabic equivalent of Middle East (Arabic: الشرق الأوسط ash-Sharq al-Awsaṭ) has become standard usage in the mainstream Arabic press. It carries the same meaning as the term "Middle East" in North American and Western European usage. The designation Mashriq, also from the Arabic root for East, denotes a variously defined region around the Levant, the eastern part of the Arabic-speaking world (as opposed to the Maghreb, the western part). Even though the term originated in the West, countries of the Middle East that use languages other than Arabic also use that term in translation. For instance, the Persian equivalent for Middle East is خاورمیانه (Khāvar-e miyāneh), the Hebrew is המזרח התיכון (hamizrach hatikhon), and the Turkish is Orta Doğu. Countries and territory Traditionally included within the Middle East are Arabia, Asia Minor, East Thrace, Egypt, Iran, the Levant, Mesopotamia, and the Socotra Archipelago. The region includes 17 UN-recognized countries and one British Overseas Territory. Various concepts are often paralleled to the Middle East, most notably the Near East, Fertile Crescent, and Levant. These are geographical concepts which refer to large sections of the modern-day Middle East, with the Near East being the closest to the Middle East in its geographical meaning. Because it is primarily Arabic-speaking, the Maghreb region of North Africa is sometimes included.
"Greater Middle East" is a political term coined by the second Bush administration in the first decade of the 21st century to denote various countries, pertaining to the Muslim world, specifically Afghanistan, Iran, Pakistan, and Turkey. Various Central Asian countries are sometimes also included. History The Middle East lies at the juncture of Africa and Eurasia and of the Indian Ocean and the Mediterranean Sea (see also: Indo-Mediterranean). It is the birthplace and spiritual center of religions such as Christianity, Islam, Judaism, Manichaeism, Yezidi, Druze, Yarsan, and Mandeanism, and in Iran, Mithraism, Zoroastrianism, Manicheanism, and the Baháʼí Faith. Throughout its history the Middle East has been a major center of world affairs; a strategically, economically, politically, culturally, and religiously sensitive area. The region is one of the regions where agriculture was independently discovered, and from the Middle East it was spread, during the Neolithic, to different regions of the world such as Europe, the Indus Valley and Eastern Africa. Prior to the formation of civilizations, advanced cultures formed all over the Middle East during the Stone Age. The search for agricultural lands by agriculturalists, and pastoral lands by herdsmen meant different migrations took place within the region and shaped its ethnic and demographic makeup. The Middle East is widely and most famously known as the cradle of civilization. The world's earliest civilizations, Mesopotamia (Sumer, Akkad, Assyria and Babylonia), ancient Egypt and Kish in the Levant, all originated in the Fertile Crescent and Nile Valley regions of the ancient Near East. These were followed by the Hittite, Greek, Hurrian and Urartian civilisations of Asia Minor; Elam, Persia and Median civilizations in Iran, as well as the civilizations of the Levant (such as Ebla, Mari, Nagar, Ugarit, Canaan, Aramea, Mitanni, Phoenicia and Israel) and the Arabian Peninsula (Magan, Sheba, Ubar). The Near East was first largely unified under the Neo Assyrian Empire, then the Achaemenid Empire followed later by the Macedonian Empire and after this to some degree by the Iranian empires (namely the Parthian and Sassanid Empires), the Roman Empire and Byzantine Empire. The region served as the intellectual and economic center of the Roman Empire and played an exceptionally important role due to its periphery on the Sassanid Empire. Thus, the Romans stationed up to five or six of their legions in the region for the sole purpose of defending it from Sassanid and Bedouin raids and invasions. From the 4th century CE onwards, the Middle East became the center of the two main powers at the time, the Byzantine Empire and the Sassanid Empire. However, it would be the later Islamic Caliphates of the Middle Ages, or Islamic Golden Age which began with the Islamic conquest of the region in the 7th century AD, that would first unify the entire Middle East as a distinct region and create the dominant Islamic Arab ethnic identity that largely (but not exclusively) persists today. The 4 caliphates that dominated the Middle East for more than 600 years were the Rashidun Caliphate, the Umayyad caliphate, the Abbasid caliphate and the Fatimid caliphate. Additionally, the Mongols would come to dominate the region, the Kingdom of Armenia would incorporate parts of the region to their domain, the Seljuks would rule the region and spread Turko-Persian culture, and the Franks would found the Crusader states that would stand for roughly two centuries. 
Josiah Russell estimates the population of what he calls "Islamic territory" as roughly 12.5 million in 1000 – Anatolia 8 million, Syria 2 million, and Egypt 1.5 million. From the 16th century onward, the Middle East came to be dominated, once again, by two main powers: the Ottoman Empire and the Safavid dynasty. The modern Middle East began after World War I, when the Ottoman Empire, which was allied with the Central Powers, was defeated by the Allies and partitioned into a number of separate nations, initially under British and French Mandates. Other defining events in this transformation included the establishment of Israel in 1948 and the eventual departure of European powers, notably Britain and France by the end of the 1960s. They were supplanted in some part by the rising influence of the United States from the 1970s onwards. In the 20th century, the region's significant stocks of crude oil gave it new strategic and economic importance. Mass production of oil began around 1945, with Saudi Arabia, Iran, Kuwait, Iraq, and the United Arab Emirates having large quantities of oil. Estimated oil reserves, especially in Saudi Arabia and Iran, are some of the highest in the world, and the international oil cartel OPEC is dominated by Middle Eastern countries. During the Cold War, the Middle East was a theater of ideological struggle between the two superpowers and their allies: NATO and the United States on one side, and the Soviet Union and Warsaw Pact on the other, as they competed to influence regional allies. Besides the political reasons there was also the "ideological conflict" between the two systems. Moreover, as Louise Fawcett argues, among many important areas of contention, or perhaps more accurately of anxiety, were, first, the desires of the superpowers to gain strategic advantage in the region, second, the fact that the region contained some two-thirds of the world's oil reserves in a context where oil was becoming increasingly vital to the economy of the Western world [...] Within this contextual framework, the United States sought to divert the Arab world from Soviet influence. Throughout the 20th and 21st centuries, the region has experienced both periods of relative peace and tolerance and periods of conflict particularly between Sunnis and Shiites. Geography In 2018, the MENA region emitted 3.2 billion tonnes of carbon dioxide and produced 8.7% of global greenhouse gas emissions (GHG) despite making up only 6% of the global population. These emissions are mostly from the energy sector, an integral component of many Middle Eastern and North African economies due to the extensive oil and natural gas reserves that are found within the region. The Middle East region is one of the most vulnerable to climate change. The impacts include increase in drought conditions, aridity, heatwaves and sea level rise. Sharp global temperature and sea level changes, shifting precipitation patterns and increased frequency of extreme weather events are some of the main impacts of climate change as identified by the Intergovernmental Panel on Climate Change (IPCC). The MENA region is especially vulnerable to such impacts due to its arid and semi-arid environment, facing climatic challenges such as low rainfall, high temperatures and dry soil. The climatic conditions that foster such challenges for MENA are projected by the IPCC to worsen throughout the 21st century. 
If greenhouse gas emissions are not significantly reduced, part of the MENA region risks becoming uninhabitable before the year 2100. Climate change is expected to put significant strain on already scarce water and agricultural resources within the MENA region, threatening the national security and political stability of all included countries. Over 60 percent of the region's population lives in high or very high water-stressed areas, compared to the global average of 35 percent. This has prompted some MENA countries to engage with the issue of climate change on an international level through environmental accords such as the Paris Agreement. Law and policy are also being established on a national level amongst MENA countries, with a focus on the development of renewable energies. Economy Middle Eastern economies range from very poor (such as Gaza and Yemen) to extremely wealthy (such as Qatar and the UAE). According to the International Monetary Fund, the three largest Middle Eastern economies by nominal GDP in 2023 were Saudi Arabia ($1.06 trillion), Turkey ($1.03 trillion), and Israel ($0.54 trillion). In nominal GDP per person, the highest-ranking countries are Qatar ($83,891), Israel ($55,535), the United Arab Emirates ($49,451) and Cyprus ($33,807). Turkey ($3.6 trillion), Saudi Arabia ($2.3 trillion), and Iran ($1.7 trillion) had the largest economies in terms of GDP at PPP. In GDP (PPP) per person, the highest-ranking countries are Qatar ($124,834), the United Arab Emirates ($88,221), Saudi Arabia ($64,836), Bahrain ($60,596) and Israel ($54,997). The lowest-ranking country in the Middle East, in terms of nominal GDP per capita, is Yemen ($573). The economic structures of Middle Eastern nations differ in that while some are heavily dependent on the export of oil and oil-related products (such as Saudi Arabia, the UAE and Kuwait), others have a highly diverse economic base (such as Cyprus, Israel, Turkey and Egypt). Industries of the Middle Eastern region include oil and oil-related products, agriculture, cotton, cattle, dairy, textiles, leather products, surgical instruments, and defence equipment (guns, ammunition, tanks, submarines, fighter jets, UAVs, and missiles). Banking is an important sector, especially for the UAE and Bahrain. With the exception of Cyprus, Turkey, Egypt, Lebanon and Israel, tourism has been a relatively undeveloped area of the economy, in part because of the socially conservative nature of the region as well as political turmoil in certain areas. Since the end of the COVID-19 pandemic, however, countries such as the UAE, Bahrain, and Jordan have begun attracting greater numbers of tourists because of improving tourist facilities and the relaxing of tourism-related restrictive policies. Unemployment is high in the Middle East and North Africa region, particularly among people aged 15–29, a demographic representing 30% of the region's population. The total regional unemployment rate in 2025 is 10.8%, and among youth it is as high as 28%. Demographics Arabs constitute the largest ethnic group in the Middle East, followed by various Iranian peoples and then by Turkic peoples (Turkish, Azeris, Syrian Turkmen, and Iraqi Turkmen). Native ethnic groups of the region include, in addition to Arabs, Arameans, Assyrians, Baloch, Berbers, Copts, Druze, Greek Cypriots, Jews, Kurds, Lurs, Mandaeans, Persians, Samaritans, Shabaks, Tats, and Zazas.
European ethnic groups that form a diaspora in the region include Albanians, Bosniaks, Circassians (including Kabardians), Crimean Tatars, Greeks, Franco-Levantines, Italo-Levantines, and Iraqi Turkmens. Among other migrant populations are Chinese, Filipinos, Indians, Indonesians, Pakistanis, Pashtuns, Romani, and Afro-Arabs. "Migration has always provided an important vent for labor market pressures in the Middle East. For the period between the 1970s and 1990s, the Arab states of the Persian Gulf in particular provided a rich source of employment for workers from Egypt, Yemen and the countries of the Levant, while Europe had attracted young workers from North African countries due both to proximity and the legacy of colonial ties between France and the majority of North African states." According to the International Organization for Migration, there are 13 million first-generation migrants from Arab nations in the world, of which 5.8 million reside in other Arab countries. Expatriates from Arab countries contribute to the circulation of financial and human capital in the region and thus significantly promote regional development. In 2009, Arab countries received a total of US$35.1 billion in remittance inflows, and remittances sent to Jordan, Egypt and Lebanon from other Arab countries were 40 to 190 per cent higher than trade revenues between these and other Arab countries. In Somalia, the Somali Civil War has greatly increased the size of the Somali diaspora, as many of the best educated Somalis left for Middle Eastern countries as well as Europe and North America. Non-Arab Middle Eastern countries such as Turkey, Israel and Iran are also subject to important migration dynamics. A fair proportion of those migrating from Arab nations are from ethnic and religious minorities facing persecution and are not necessarily ethnic Arabs, Iranians or Turks. Large numbers of Kurds, Jews, Assyrians, Greeks and Armenians, as well as many Mandaeans, have left nations such as Iraq, Iran, Syria and Turkey for these reasons during the last century. In Iran, many religious minorities such as Christians, Baháʼís, Jews and Zoroastrians have left since the Islamic Revolution of 1979. The Middle East is very diverse when it comes to religions, many of which originated there. Islam is the largest religion in the Middle East, but other faiths that originated there, such as Judaism and Christianity, are also well represented. Christian communities have played a vital role in the Middle East; they represent 78% of the population of Cyprus and 40.5% of that of Lebanon, where the Lebanese president, half of the cabinet, and half of the parliament follow one of the various Lebanese Christian rites. There are also important minority religions like the Baháʼí Faith, Yarsanism, Yazidism, Zoroastrianism, Mandaeism, Druze, and Shabakism, and in ancient times the region was home to Mesopotamian religions, Canaanite religions, Manichaeism, Mithraism and various monotheist gnostic sects. The six most widely spoken languages, in terms of numbers of speakers, are Arabic, Persian, Turkish, Kurdish, Modern Hebrew and Greek. About 20 minority languages are also spoken in the Middle East. Arabic, with all its dialects, is the most widely spoken language in the Middle East, with Literary Arabic being official in all North African and in most West Asian countries. Arabic dialects are also spoken in some adjacent areas in neighbouring Middle Eastern non-Arab countries. It is a member of the Semitic branch of the Afro-Asiatic languages.
Several Modern South Arabian languages such as Mehri and Soqotri are also spoken in Yemen and Oman. Another Semitic language is Aramaic, whose dialects are spoken mainly by Assyrians and Mandaeans, with Western Aramaic still spoken in two villages near Damascus, Syria. There is also an Oasis Berber-speaking community in Egypt, where the language is also known as Siwa; it is a non-Semitic Afro-Asiatic sister language. Persian is the second-most spoken language. While it is primarily spoken in Iran and some border areas in neighbouring countries, Iran is one of the region's largest and most populous countries. It belongs to the Indo-Iranian branch of the family of Indo-European languages. Other Western Iranic languages spoken in the region include Achomi, Daylami, Kurdish dialects, Semnani, and Luri, amongst many others. A close third, Turkish is largely confined to Turkey, which is also one of the region's largest and most populous countries, but it is present in areas in neighboring countries. It is a member of the Turkic languages, which have their origins in East Asia. Another Turkic language, Azerbaijani, is spoken by Azerbaijanis in Iran. The fourth-most widely spoken language, Kurdish, is spoken in Iran, Iraq, Syria and Turkey; Sorani Kurdish is the second official language in Iraq after Arabic (instated by the 2005 constitution). Hebrew is the official language of Israel, with Arabic given a special status since a 2018 Basic Law lowered it from its prior status as an official language. Hebrew is spoken and used by over 80% of Israel's population, the other 20% using Arabic. Modern Hebrew only began being spoken in the 20th century, after being revived in the late 19th century by Eliezer Ben-Yehuda (Eliezer Perlman) and European Jewish settlers, with the first native Hebrew speaker being born in 1882. Greek is one of the two official languages of Cyprus, and the country's main language. Small communities of Greek speakers exist all around the Middle East; until the 20th century it was also widely spoken in Asia Minor (being the second-most spoken language there, after Turkish) and Egypt. During antiquity, Ancient Greek was the lingua franca for many areas of the western Middle East, and it remained widely spoken there until the Muslim expansion. Until the late 11th century, it was also the main spoken language in Asia Minor; after that it was gradually replaced by the Turkish language as the Anatolian Turks expanded and the local Greeks were assimilated, especially in the interior. English is one of the official languages of Akrotiri and Dhekelia. It is also commonly taught and used as a second language in countries such as Egypt, Jordan, Iran, Iraq, Qatar, Bahrain, the United Arab Emirates and Kuwait, and is a main language in some emirates of the United Arab Emirates. It is also spoken natively by Jewish immigrants from Anglophone countries (the UK, the US, Australia) in Israel, and is widely understood as a second language there. French is taught and used in many government facilities and media in Lebanon, and is taught in some primary and secondary schools of Egypt and Syria; due to widespread immigration of French Jews to Israel, it is also the native language of approximately 200,000 Jews in Israel. Maltese, a Semitic language mainly spoken in Europe, is used by the Franco-Maltese diaspora in Egypt. Armenian speakers are also to be found in the region. Georgian is spoken by the Georgian diaspora.
Russian is spoken by a large portion of the Israeli population because of immigration from the former Soviet Union in the 1990s. Russian today is a popular unofficial language in Israel; after Hebrew and Arabic, news, radio and signboards in Russian can be found around the country. Circassian is also spoken by the diaspora in the region and by almost all Circassians in Israel, who speak Hebrew and English as well. The largest Romanian-speaking community in the Middle East is found in Israel, where, as of 1995, Romanian was spoken by 5% of the population. Bengali, Hindi and Urdu are widely spoken by migrant communities in many Middle Eastern countries, such as Saudi Arabia (where 20–25% of the population is South Asian), the United Arab Emirates (where 50–55% of the population is South Asian), and Qatar, which have large numbers of Pakistani, Bangladeshi and Indian immigrants. Culture The Middle East has recently become more prominent in hosting global sport events due to its wealth and desire to diversify its economy. The South Asian diaspora is a major backer of cricket in the region.
========================================
[SOURCE: https://www.reddit.com/user/DatabasePlenty9797] | [TOKENS: 1245]
IgGAY (Iggy) u/DatabasePlenty9797
Intro post, "Welcome to IGGY NATION": Hay nation this is Iggy, I'm 17 and a part of the WOKE MOB, uhhhhhh I'm not very good at making intro posts so just know I like Slime Rancher and Date Everything and No I'm Not A Human. I don't have much else to say, so here are my three principles; If you're digging through my profile after we talked in a thread, whatever that conversation was, I won, because you're weird for doing that, If you're going to tell me I sound like a teenager, it's because I am one, Meowwww meow meowww :3 paws at you meow
Posted in r/MinecraftCommands (Help | Java 1.21.11): How do you make a mannequin turn towards you/follow you ONLY WITH THEIR FACE/HEAD as you walk? What the title says :D
Posted in r/Minecraft (Help): Is it possible to fully reset your world but save a few chunks?? I've gotten a bunch of new biome and structure gen mods, but I've expanded so much of the map that they'd practically be pointless... I want to keep my house and my items, so I don't just want to make a new world with the same seed, so I'm curious!!
Replies on the r/Minecraft thread "Snuck into my house via nether portal... do I have a son now?":
To scissorsgrinder: Awe, what a shame,, :(
To Remote_Amphibian_435: Genuinely how I felt stumbling into my basement and seeing him there, I even have my portal fenced off on the other side I have NO idea how he got in there
To GodSpeedBolt3: It's Xavier Renegade Angel Jr. actually
To SpecialistBreath1261: Yeah, say hi to ya brother
Top-level comment: I'm gonna build him his own little house next to mine :D
Profile: IgGAY (Iggy), "Iggy, 17, evil fucking dog !!!", 10,795 karma (6,203 post, 4,592 comment), cake day Aug 24, 2025
========================================
[SOURCE: https://en.wikipedia.org/wiki/Dioptra] | [TOKENS: 1040]
Dioptra A dioptra (sometimes also named dioptre or diopter, from Greek: διόπτρα) is a classical astronomical and surveying instrument, dating from the 3rd century BC. The dioptra was a sighting tube or, alternatively, a rod with a sight at both ends, attached to a stand. If fitted with protractors, it could be used to measure angles. Use Greek astronomers used the dioptra to measure the positions of stars; both Euclid and Geminus refer to the dioptra in their astronomical works. It continued in use as an effective surveying tool. Adapted to surveying, the dioptra is similar to the theodolite, or surveyor's transit, which dates to the sixteenth century. It is a more accurate version of the groma. There is some speculation that it may have been used to build the Eupalinian aqueduct. Called "one of the greatest engineering achievements of ancient times," it is a tunnel 1,036 metres (3,399 ft) long, excavated through a mountain on the Greek island of Samos during the reign of Polycrates in the sixth century BC. Scholars disagree, however, whether the dioptra was available that early. An entire book about the construction and surveying usage of the dioptra is credited to Hero of Alexandria (also known as Heron). Hero was "one of history's most ingenious engineers and applied mathematicians." The dioptra was used extensively on aqueduct building projects. Screw turns on several different parts of the instrument made it easy to calibrate for very precise measurements. The dioptra was replaced as a surveying instrument by the theodolite. How it works The dioptra consists of a sighting tube or rod fitted with sights at both ends and mounted on a stable stand. The stand usually includes adjustable screw turns that allow the instrument to be precisely calibrated. For astronomical purposes, the user would align the sights with a specific star or celestial object and then measure the angle using protractors attached to the instrument. In surveying, the dioptra was used to measure angles and distances by sighting along the rod and taking readings from graduated scales. Advantages and disadvantages The dioptra offered several advantages over other contemporary instruments. Its ability to measure both vertical and horizontal angles with high precision made it a versatile tool for both astronomy and surveying. The screw turns allowed for fine adjustments, improving accuracy. The instrument's simplicity and robustness made it reliable and easy to use in the field. However, the dioptra also had its limitations. The accuracy of measurements depended on the user's skill and the quality of the instrument's construction. The sighting tube or rod could be affected by environmental factors such as wind or temperature changes, which could introduce errors. Additionally, the dioptra required careful calibration before each use, which could be time-consuming. Compared to later instruments like the theodolite, the dioptra was less advanced and lacked some of the refinements and improvements that made theodolites more accurate and easier to use. The theodolite eventually replaced the dioptra as the primary instrument for surveying due to its superior performance and reliability. History and development The dioptra's origins trace back to the Hellenistic period, when Greek scientists and engineers sought to improve observational accuracy in astronomy and surveying.
Over time, the instrument underwent several modifications, incorporating advancements in material science and geometric principles. Notably, Hero of Alexandria's detailed work on the dioptra exemplifies the pinnacle of Hellenistic engineering prowess, showcasing the instrument's versatility and precision. Applications in ancient engineering Beyond its use in astronomy, the dioptra played a crucial role in various engineering projects in ancient Greece and Rome. It was instrumental in constructing aqueducts, roads, and buildings. The instrument's ability to measure angles with high precision allowed engineers to plan and execute large-scale infrastructure projects with greater accuracy and efficiency. For example, its use in the Eupalinian aqueduct's construction demonstrated the dioptra's significance in solving complex engineering challenges of the time. Comparison with other instruments The dioptra's design and functionality can be compared to other contemporary instruments such as the groma, the alidade, and the later theodolite. While the groma was primarily used for laying out straight lines and right angles, the dioptra offered greater versatility in measuring angles in both vertical and horizontal planes. The alidade, another important surveying instrument, was used to measure angles and determine directions. It typically consisted of a straightedge with sights at either end. The alidade was often mounted on a plane table, which allowed for direct plotting of survey data. The theodolite, which emerged in the sixteenth century, eventually surpassed the dioptra in accuracy and ease of use due to technological advancements and refinements in optical and mechanical components.
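To make the surveying use concrete, the following minimal sketch (Python, purely illustrative; the function name and scenario are assumptions, not from the source) shows the kind of trigonometry a dioptra measurement enables: sighting a target's elevation angle from two stations a known distance apart on the same line, then recovering the target's height and horizontal distance.

```python
import math

def height_and_distance(baseline_m: float, angle_far_deg: float, angle_near_deg: float):
    """Recover a target's height and horizontal distance from two elevation
    angles measured at stations a known baseline apart, both on a line toward
    the target. Requires angle_near_deg > angle_far_deg (hypothetical helper).
    """
    a = math.tan(math.radians(angle_far_deg))   # tangent from the farther station
    b = math.tan(math.radians(angle_near_deg))  # tangent from the nearer station
    # From tan(near) = h/d and tan(far) = h/(d + baseline), solve for h and d.
    height = baseline_m * a * b / (b - a)
    distance = height / b                        # distance from the nearer station
    return height, distance

# Example: stations 10 m apart, elevations of 30 and 45 degrees
h, d = height_and_distance(10.0, 30.0, 45.0)
print(f"height = {h:.2f} m, distance = {d:.2f} m")  # about 13.66 m each
```

The same two-sighting idea, with angles read off the dioptra's protractors, is enough to level an aqueduct gradient or gauge the height of distant terrain without ever approaching it.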
========================================
[SOURCE: https://en.wikipedia.org/wiki/Knock-knock_jokes] | [TOKENS: 1761]
Contents Knock-knock joke The knock-knock joke is a structured word play joke that uses call and response. The joke presents a scenario in which the speaker is pretending to knock on the front door of the listener. The speaker initiates the joke by saying "knock-knock", and the listener responds by saying "who's there". The speaker then says a phrase to identify themselves, and the listener repeats the phrase and asks "who?" to request more information. The speaker then delivers a punch line using word play based on the phrase. The first modern knock-knock jokes were told in the United States in the 1930s, and they became a fad in 1936 with widespread use in the United States and the United Kingdom. Structure Knock-knock jokes are a type of word play joke, which derive their humor from the conflation of homonyms. The joke is performed cooperatively by the speaker and the listener in a call and response format as they create a scene depicting a visitor knocking at a door. The joke is dependent on the speaker and the listener having previous exposure to the joke's format and enough general and linguistic knowledge to understand what is being referenced in the punchline. A standard knock-knock joke has five lines of dialogue. In the first line, the speaker plays the role of someone knocking at the listener's front door by saying "knock-knock". The line is an example of onomatopoeia. It is often spoken with a stylized fall, a type of stylized intonation where the second syllable is said in a lower pitch than the first. In the second line, the listener's response of "who's there" has them play the role of someone inside their own home as the speaker knocks. The third line is the point that a person at the door would provide their given name or some other identifier. In the joke, the name does not provide enough information to identify a specific speaker. The joke traditionally uses a name for the third line, but any phrase can be used. If an inanimate object is referenced, it is inferred that the person knocking is a human using the object as a name, rather than the physical object they describe. The fourth line is normally when the person at home would ask for more information to clarify who is at the door, as a given name or title on its own is not always specific enough to discern a person's identity. In the case of a given name, this can be because the person at home does not know anyone by that name or knows multiple people by that name. The fifth line breaks from the imagined scenario of a person knocking on a door. Some knock-knock jokes end by repeating the third line, using its phonetic structure as a pun to be the start of a new sentence. In this case, the content of the third line is spoken quickly to blend its sound into the rest of the phrase. Knock, knock! Who's there? Lettuce. Lettuce who? Lettuce in! Other knock-knock jokes take advantage of the phonetic structure of the fourth line, and the fifth line is a response to the newly created phrase spoken by the listener. Knock, knock! Who's there? Tank. Tank who? You're welcome! In both cases, the fifth line effectively changes the meaning of the third and fourth lines. History Dialogue resembling a knock-knock joke appears in the play The Case is Altered by Ben Jonson, written c. 1597, in which the character Juniper says he is not Rachel's father but is willing to become a father with her. JUNIPER: No, I'll knock. We'll not stand upon horizons and tricks but fall roundly to the matter. [He knocks.] ONION: Well said, sweet Juniper. 
Horizons? Hang 'em! Knock, knock! RACHEL: [Within] Who's there? Father? JUNIPER: Father? No, and yet a father, if you please to be a mother. The origin of the knock-knock joke, or the first appearance of the phrase "knock knock, who's there", is sometimes attributed to William Shakespeare for his 1606 play Macbeth. In Act 2, Scene 3, the character of the porter gives a soliloquy about a porter accepting people into hell. Knock, knock! Who's there, i' the name of Beelzebub? Here's a farmer, that hanged himself on the expectation of plenty: come in time; have napkins enow about you; here you'll sweat for't. Knock, knock! Who's there, in the other devil's name? Faith, here's an equivocator, that could swear in both the scales against either scale; who committed treason enough for God's sake, yet could not equivocate to heaven: O, come in, equivocator. Writing in the Oakland Tribune, Merely McEvoy recalled a style of joke from around 1900 where a person would ask a question such as "Do you know Arthur?", the unsuspecting listener responding with "Arthur who?" and the joke teller answering "Arthurmometer!" He compared it to a joke that emerged in the flapper community around 1920 where a woman would ask "Have you ever heard of Hiawatha?", and upon being asked "Hiawatha who?", she would respond with "Hiawatha a good girl ... till I met you." A variation of the format in the form of a children's game was described in 1929. In the game of Buff, a child with a stick thumps it on the ground, and the dialogue ensues: Knock, knock! Who's there? Buff. What says Buff? Buff says Buff to all his men, And I say Buff to you again. The exact origin of knock-knock jokes is uncertain, but true knock-knock jokes had emerged in the United States by the 1930s. Knock-knock clubs formed in the Midwestern United States, and swing orchestra performer Vincent Lopez wrote the novelty song "Knock-Knock Song", which incorporated audience call-and-response. By 1936, knock-knock jokes had become a fad. A 1936 Associated Press newspaper article said that "What's This?" had given way to "Knock Knock!" as a favorite parlor game, comparing its sudden rise to that of the novelty song "The Music Goes 'Round and Around". Outlets like The Gridley Herald and The Milwaukee Journal also reported on knock-knock jokes that year as a new parlour game and derided it as uninteresting. The WKBO radio station in Harrisburg, Pennsylvania, helped popularize the format by making frequent jokes using the name of Frank Knox, the Republican vice presidential candidate in the 1936 United States presidential election. In the United Kingdom, music hall performer Wee Georgie Wood adopted the phrase "knock-knock" as a catchphrase and was recorded in 1936 saying it in a radio play. Meanwhile, a popular knock-knock joke was made at the expense of King Edward VIII. Knock, knock! Who's there? Edward Rex. Edward Rex who? Edward Rex the Coronation. The Edgmont Cash & Carry, a grocery store in Chester, Pennsylvania, used knock-knock jokes in its advertisements and held a contest for the best knock-knock jokes. Another example of a knock-knock joke appeared in The Rolfe Arrow in Rolfe, Iowa. Knock, knock! Who's there? Rufus. Rufus who? Rufus the most important part of your house. Is your roof in good shape for the winter? We have roof materials of all kinds. 
Fred Allen's 30 December 1936 radio broadcast included a humorous wrap-up of the year's least important events, which included a supposed interview with the man who "invented a negative craze" on 1 April: "Ramrod Dank... the first man to coin a Knock Knock." After peaking in 1936, knock-knock jokes received greater push-back from critics who saw them as unfunny, pseudo-intellectual, or pathological. Despite this, they remained a popularly known joke format. Knock-knock jokes have since been popularized in other countries, including Australia, Canada, France, Ireland, South Africa, and the United Kingdom. The format was well known in the UK and US in the 1950s and 1960s, and it enjoyed a renaissance after the jokes became a regular part of the badinage on Rowan & Martin's Laugh-In. References
========================================
[SOURCE: https://en.wikipedia.org/wiki/CorVision] | [TOKENS: 1513]
Contents CorVision CorVision is a fourth-generation programming tool (4GL) currently owned by Attunity, Inc. CorVision was developed by Cortex Corporation for the VAX/VMS ISAM environment. Cortex beta-tested CorVision-10, a version generated for PCs, but CorVision itself stayed anchored on VMS; CorVision-10 proved more difficult than hoped and was never released. Lifecycle CorVision can be traced back to 1972, when Lou Santoro and Mike Lowery created INFORM for the newly formed time-sharing company Standard Information Systems (SIS). INFORM contained some of CorVision's basic utility commands, such as SORT, REPORT, LIST and CONSOLIDATE. Some of the first users of INFORM were New England Telephone, Polaroid and Temple Barker & Sloan. By 1972 SIS had offices in Los Angeles, Garden Grove, Minneapolis, Chicago, Boston, New York City, District of Columbia, Charlotte, Raleigh, Atlanta and Phoenix. Between 1976 and 1977 Ken Levitt and Dick Berthold of SIS ported INFORM from the CDC-3600 to the PDP-11/70 under IAS. They called this new tool INFORM-11. Cortex was founded in 1978 by Sherm Uchill, Craig Hill, Mike Lowery, and Dick Berthold to market INFORM-11. INFORM-11 was first used to deliver a 20-user order entry system at Eddie Bauer and an insurance processing system for Consolidated Group Trust. Between 1981 and 1982 Cortex received significant investment from A. B. Dick. Using this new investment, Cortex ported INFORM to Digital Equipment Corporation's new VAX/VMS, adding compiled executables. INFORM-11 was promoted by both Cortex and Digital as a pioneering rapid application development system. In 1984 Jim Warner encapsulated INFORM in a repository-based development tool and called it Application Factory. INFORM's PROCESS procedural language became known as BUILDER within Application Factory. In 1986 the name Application Factory was dropped in favor of the name CorVision. Between 1986 and 1989 CorVision experienced its heyday. It quickly became known as a robust and capable tool for rapidly building significant multi-user applications. The addition of relational database support attracted major accounts. Cortex quickly became an international company. In 1992, CorVision Version 5 was released with Query and support for Unix. Query allowed users and developers read-only access to a system's database backend. While this seemed a desirable facility, allowing users to create "use once then throw away" reports without calling on developers, it had a nasty habit of causing performance issues. Users often did not understand the database structure and could send large queries to the processing queues, causing system-wide issues. In 1993 Cortex began supporting Digital's new 64-bit Alpha line. In 1994, International Software Group Co. Ltd. (ISG) purchased Cortex. As early as 1987, Cortex recognized the growth in the popularity of the IBM PC, supporting diagrammatic editing of menus and data relationships in CorVision. In 1993 a client-server version was released, but not widely adopted. In 1997 ISG's work on CorVision-10, which was to herald the rebirth of CorVision on the IBM PC platform, stopped. CorVision-10 was proving very difficult to port, and ISG finally refused to spend any more money on the now-dated system. 1994 saw the last innovative CorVision release: V5.11. The extra-fee Y2K release, V5.12.2, marked the end of development. 
CorVision still exists in a handful of companies that have not yet found the time or money to upgrade their existing mainframe systems. As CorVision runs on the VMS environment it is very stable, but finding CorVision developers and contractors to support these ageing systems is a problem. Since around 1999, companies have appeared offering tools to convert BUILDER code to compiled Visual Basic and Java. In 2005 CorVision guru Michael Lowery, now president of Order Processing Technologies, attempted to revive the CorVision franchise with CV2VB, a process to convert CorVision applications into .NET applications using a SQL server. CV2VB is OPT's third-generation CorVision conversion and replacement modeler/code generator. It is in commercial service at former CorVision clients. Information is available at the CV2VB website. Application development What follows is a brief explanation of application development using CorVision. The first step in developing an application with CorVision is to fill in the parameters which control miscellaneous application-wide functions. The parameters fall into five groupings. Usually the default values for these parameters are satisfactory; CorVision, however, allows these settings to be changed at any time during development. The parameters file (WP) is accessed at runtime, so the latest settings are always used. CorVision keeps the information behind the Status Screen up-to-date so that it indicates the current state of the development process. The left-hand side indicates specification tasks that need doing; the right-hand side indicates generation tasks that need doing. Changes or additions to one part of the specification ripple into others: changes to the Dictionary affect the Datasets and Keys, and changes to the Dictionary, Datasets or Keys in turn affect Screens, Reports and Menus. CorVision provides a useful feature called Field Search. Field Search allows you to investigate and analyze the use of fields in different aspects of the application, letting developers assess the impact of changes before they are made. To provide complete specification details in hardcopy form, CorVision has the Run Reports option. Over 80 different types of report can be produced. Component Specification Reports (CSRs), as they are known, can also be produced for tentative, unreferenced and unresolved items. The key to CorVision is PROCEDURES. The procedures in CorVision eventually become Executable Images (.EDO files), and there are three types of procedure. It is not strictly accurate to consider a procedure a program. A procedure is a set of instructions (BUILDER commands) which build a program; what would ordinarily be called a program is known in BUILDER as a process. A procedure, therefore, is a set of BUILDER commands which instruct BUILDER to build a process and save it in the program library as a compiled file with a .SAV extension. CorVision keeps the data structure files separate if they are to be manipulated by BUILDER. BUILDER keeps a structure file and a key structure file for each dataset used by the application. When a process is compiled, the data structures are "bound" to it; binding of data structures thus takes place at the precise moment the process is compiled. Because the structure and key structure files are kept separate, the dataset definitions can be altered at any point during development. 
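To make the compile-time binding idea concrete, here is a minimal conceptual sketch in Python. It is an analogy only, not BUILDER syntax; the dataset name, fields and helper functions are all invented for illustration.

```python
# A minimal conceptual sketch (Python, not BUILDER syntax): dataset
# structures live apart from process code and are only "bound" to it at
# compile time. All names here are invented for illustration.

from dataclasses import dataclass

# Stand-ins for CorVision's separate structure files: field definitions
# kept outside any process, editable at any point during development.
STRUCTURES = {
    "CUSTOMER": {"ID": int, "NAME": str, "BALANCE": float},
}

@dataclass
class Process:
    """A 'compiled' process: field references bound to a dataset structure."""
    dataset: str
    bound_fields: dict

def compile_process(dataset: str, referenced_fields: list) -> Process:
    """Binding happens here, at compile time: the *current* structure
    definition is loaded and checked against the code's field references."""
    structure = STRUCTURES[dataset]  # load the structure file as it is now
    missing = [f for f in referenced_fields if f not in structure]
    if missing:
        raise ValueError(f"unresolved fields in {dataset}: {missing}")
    return Process(dataset, {f: structure[f] for f in referenced_fields})

# The dataset definition can change at any point before compilation...
STRUCTURES["CUSTOMER"]["REGION"] = str
# ...and the next compile automatically binds against the new definition.
proc = compile_process("CUSTOMER", ["ID", "NAME", "REGION"])
print(proc)
```

The point mirrored here is that nothing checks or freezes the dataset definition until the moment of compilation, so structures and code can evolve independently until then.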
This separation is a major strength of CorVision, allowing for a prototyping environment where both code and data structures can be changed throughout development and then brought together at compile time. The structure and key structure files are loaded before the process is compiled; this is done by the load file. BUILDER assumes that the data structures are already loaded when it compiles a process, and it is at this point that the compilation "binds" the data structures to the code. A number of files are created at this stage, further files can be added, and more files are produced by the compilation itself. References
========================================
[SOURCE: https://techcrunch.com/category/venture] | [TOKENS: 345]
Venture Our venture capital news features interviews and analysis on all the VCs, the VC-backed startups, and the investment trends that founders, investors, students, academics – and anyone else interested in the way that tech is transforming the world – should be tracking.
========================================
[SOURCE: https://en.wikipedia.org/wiki/Saskatchewan] | [TOKENS: 8583]
Contents Saskatchewan Saskatchewan[a] is a province in Western Canada. It is bordered to the west by Alberta, to the north by the Northwest Territories, to the east by Manitoba, to the northeast by Nunavut, and to the south by the United States (Montana and North Dakota). Saskatchewan and neighbouring Alberta are the only landlocked provinces in Canada. In 2025, Saskatchewan's population was estimated at 1,266,234. Nearly 10% of Saskatchewan's total area of 651,900 square kilometres (251,700 sq mi) is fresh water, mostly rivers, reservoirs, and lakes. Saskatchewanians live primarily in the southern prairie half of the province, while the northern half is mostly forested and sparsely populated. Roughly half live in Saskatchewan's two largest cities: Regina (the province's capital city) and Saskatoon (the province's largest city). Other notable cities in Saskatchewan include Prince Albert, Moose Jaw, Yorkton, Swift Current, North Battleford, Estevan, Weyburn, Melfort, and the border city of Lloydminster. English is the primary language of the province, with 82.4% of Saskatchewanians speaking English as their first language. Saskatchewan has been inhabited for thousands of years by Indigenous peoples. Europeans first explored part of the province in 1690 and first settled in the area in 1774. It became a province in 1905, carved out from the vast North-West Territories, which had until then included most of the Canadian Prairies. In the early 20th century, the province became known as a stronghold for Canadian social democracy, with the 1944 provincial election electing North America's first socialist government to office. Saskatchewan's economy is based on agriculture, mining, and energy. In 1992, the federal and provincial governments signed a historic land claim agreement with First Nations in Saskatchewan, granting these nations compensation which they could use to buy land on the open market for the bands. Presently, Saskatchewan is governed by the Saskatchewan Party, led by Premier Scott Moe, which has been in power since 2007. Etymology The name of the province, "Saskatchewan", is derived from the Saskatchewan River. The river is known as ᑭᓯᐢᑳᒋᐘᓂ ᓰᐱᐩ kisiskāciwani-sīpiy ("swift flowing river") in the Cree language. Anthony Henday's spelling was Keiskatchewan, with the modern rendering, Saskatchewan, being officially adopted in 1882, when a portion of the present-day province was designated a provisional district of the North-West Territories. Geography Saskatchewan is the only province without a natural border. As its borders follow geographic lines of longitude and latitude, the province is roughly a quadrilateral. However, the southern border on the 49th parallel and the northern border on the 60th parallel curve to the left as one proceeds east, as do all parallels in the Northern Hemisphere. Additionally, the eastern boundary of the province follows range lines and correction lines of the Dominion Land Survey, laid out by surveyors before the Dominion Lands Act homestead program (1880–1928). Saskatchewan is bounded on the west by Alberta, on the north by the Northwest Territories, on the north-east by Nunavut, on the east by Manitoba, and on the south by the U.S. states of Montana and North Dakota. It is thus the only Canadian province for which no borders correspond to physical geographic features (i.e., they are all parallels and meridians). Along with Alberta, Saskatchewan is one of only two landlocked provinces. 
The overwhelming majority of Saskatchewan's population is in the southern third of the province, south of the 53rd parallel. Saskatchewan contains two major natural regions: the boreal forest in the north and the prairies in the south. They are separated by an aspen parkland transition zone near the North Saskatchewan River on the western side of the province, and to the south of the Saskatchewan River on the eastern side. Northern Saskatchewan is mostly covered by forest except for the Lake Athabasca Sand Dunes, the largest active sand dunes in the world north of 58°, which lie adjacent to the southern shore of Lake Athabasca. Southern Saskatchewan contains another area with sand dunes known as the "Great Sand Hills", covering over 300 km2 (120 sq mi). The Cypress Hills, in the southwestern corner of Saskatchewan, and the Killdeer Badlands (Grasslands National Park) are areas of the province that were unglaciated during the last glaciation period, the Wisconsin glaciation. The province's highest point, at 1,392 m (4,567 ft), is in the Cypress Hills less than 2 km (1.2 mi) from the provincial boundary with Alberta. The lowest point is the shore of Lake Athabasca, at 213 m (699 ft). The province has 14 major drainage basins made up of various rivers and watersheds draining into the Arctic Ocean, Hudson Bay, and the Gulf of Mexico. Saskatchewan receives more hours of sunshine than any other Canadian province. The province lies far from any significant body of water. This fact, combined with its northerly latitude, gives it a warm summer, corresponding to its humid continental climate (Köppen type Dfb) in the central and most of the eastern parts of the province, as well as the Cypress Hills, giving way to a semi-arid steppe climate (Köppen type BSk) in the southwestern part of the province. Drought can affect agricultural areas during long periods with little or no precipitation. The northern parts of Saskatchewan – from about La Ronge northward – have a subarctic climate (Köppen Dfc) with a shorter summer season. Summers can get very hot, sometimes above 38 °C (100 °F) during the day, with humidity decreasing from northeast to southwest. Warm southern winds blow from the plains and intermontane regions of the Western United States during much of July and August, and very cool or hot but changeable air masses often occur during spring and in September. Winters are usually bitterly cold, with frequent Arctic air descending from the north, and with high temperatures not breaking −17 °C (1 °F) for weeks at a time. Warm chinook winds often blow from the west, bringing periods of mild weather. Annual precipitation averages 30 to 45 centimetres (12 to 18 inches) across the province, with the bulk of rain falling in June, July, and August. Saskatchewan is one of the most tornado-active parts of Canada, averaging roughly 12 to 18 tornadoes per year, some violent. In 2012, 33 tornadoes were reported in the province. The Regina Cyclone took place in June 1912, when 28 people died in an F4 Fujita-scale tornado. Severe and non-severe thunderstorm events occur in Saskatchewan, usually from early spring to late summer. Hail, strong winds and isolated tornadoes are a common occurrence. The hottest temperature ever recorded in Saskatchewan was in July 1937, when the temperature rose to 45 °C (113 °F) in Midale and Yellow Grass. The coldest ever recorded in the province was −56.7 °C (−70.1 °F) in Prince Albert, north of Saskatoon, in February 1893. 
The effects of climate change in Saskatchewan are now being observed in parts of the province. Evidence of reduction of biomass in Saskatchewan's boreal forests (as with those of other Canadian prairie provinces) is linked by researchers to drought-related water stress, stemming from global warming, most likely caused by greenhouse gas emissions. While studies as early as 1988 (Williams, et al., 1988) have shown climate change will affect agriculture, whether the effects can be mitigated through adaptations of cultivars, or crops, is less clear. Resiliency of ecosystems may decline with large changes in temperature. The provincial government has responded to the threat of climate change by introducing a plan to reduce carbon emissions, "The Saskatchewan Energy and Climate Change Plan", in June 2007. History Saskatchewan has been populated by various indigenous peoples of North America, including members of the Sarcee, Niitsitapi, Atsina, Cree, Saulteaux, Assiniboine (Nakoda), and Sioux. The first known European to enter Saskatchewan was Henry Kelsey from England in 1690, who travelled up the Saskatchewan River in hopes of trading fur with the region's indigenous peoples. Fort La Jonquière and Fort de la Corne were first established in 1751 and 1753 by early French explorers and traders. The first permanent European settlement was a Hudson's Bay Company post at Cumberland House, founded in 1774 by Samuel Hearne. The southern part of the province was part of Spanish Louisiana from 1762 until 1802. In 1803, the Louisiana Purchase transferred from France to the United States part of what is now Alberta and Saskatchewan. In 1818, the U.S. ceded the area to Britain. Most of what is now Saskatchewan was part of Rupert's Land and controlled by the Hudson's Bay Company, which claimed rights to all watersheds flowing into Hudson Bay, including the Saskatchewan River, Churchill, Assiniboine, Souris, and Qu'Appelle River systems. In the late 1850s and early 1860s, scientific expeditions led by John Palliser and Henry Youle Hind explored the prairie region of the province. In 1870, Canada acquired the Hudson's Bay Company's territories and formed the North-West Territories to administer the vast territory between British Columbia and Manitoba. The Crown also entered into a series of numbered treaties with the indigenous peoples of the area, which serve as the basis of the relationship between First Nations, as they are called today, and the Crown. Since the late twentieth century, land losses and inequities as a result of those treaties have been subject to negotiation for settlement between the First Nations in Saskatchewan and the federal government, in collaboration with provincial governments. In 1876, following their defeat of United States Army forces at the Battle of the Little Bighorn in Montana Territory in the United States, the Lakota Chief Sitting Bull led several thousand of his people to Wood Mountain. Survivors and descendants founded Wood Mountain Reserve in 1914. The North-West Mounted Police set up several posts and forts across Saskatchewan, including Fort Walsh in the Cypress Hills, and Wood Mountain Post in south-central Saskatchewan near the United States border. Many Métis people, who had not been signatories to a treaty, had moved to the Southbranch Settlement and Prince Albert district north of present-day Saskatoon following the Red River Rebellion in Manitoba in 1870. 
In the early 1880s, the Canadian government refused to hear the Métis' grievances, which stemmed from land-use issues. Finally, in 1885, the Métis, led by Louis Riel, staged the North-West Rebellion and declared a provisional government. They were defeated by a Canadian militia brought to the Canadian prairies by the new Canadian Pacific Railway. Riel, who surrendered and was convicted of treason in a packed Regina courtroom, was hanged on November 16, 1885. Since then, the government has recognized the Métis as an aboriginal people with status rights and provided them with various benefits. The national policy set by the federal government, the Canadian Pacific Railway, the Hudson's Bay Company and associated land companies encouraged immigration. The Dominion Lands Act of 1872 permitted settlers to acquire one-quarter of a square mile of land to homestead and offered an additional quarter upon establishing a homestead. In 1874, the North-West Mounted Police began providing police services. In 1876, the North-West Territories Act provided for appointment, by Ottawa, of a Lieutenant Governor and a Council to assist him. Highly optimistic advertising campaigns promoted the benefits of prairie living. Potential immigrants read leaflets that described Canada as a favourable place to live and downplayed the need for agricultural expertise. Ads in The Nor'-West Farmer by the Commissioner of Immigration implied that western land held water, wood, gold, silver, iron, copper, and cheap coal for fuel, all of which were readily at hand. The reality was far harsher, especially for the first arrivals, who lived in sod houses. However, eastern money poured in, and by 1913 long-term mortgage loans to Saskatchewan farmers had reached $65 million. The dominant groups were British settlers from eastern Canada and Britain, who made up about half of the population during the late 19th and early 20th centuries. They played the leading role in establishing the basic institutions of plains society, economy and government. Gender roles were sharply defined. Men were primarily responsible for breaking the land; planting and harvesting; building the house; buying, operating and repairing machinery; and handling finances. At first, there were many single men on the prairie, as well as husbands whose wives were still back east; they had a hard time and came to realize the need for a wife. In 1901, there were 19,200 families, but this surged to 150,300 families only 15 years later. Wives played a central role in settlement of the prairie region. Their labour, skills, and ability to adapt to the harsh environment proved decisive in meeting the challenges. They prepared bannock, beans and bacon, mended clothes, raised children, cleaned, tended the garden, helped at harvest time and nursed everyone back to health. While prevailing patriarchal attitudes, legislation, and economic principles obscured women's contributions, the flexibility exhibited by farm women in performing productive and nonproductive labour was critical to the survival of family farms, and thus to the success of the wheat economy. On September 1, 1905, Saskatchewan became a province, with inauguration day held on September 4. Its political leaders at the time proclaimed its destiny was to become Canada's most powerful province. Saskatchewan embarked on an ambitious province-building program based on its Anglo-Canadian culture and wheat production for the export market. 
The population quintupled from 91,000 in 1901 to 492,000 in 1911, thanks to heavy immigration of farmers from Ukraine, the U.S., Germany and Scandinavia. Efforts were made to assimilate the newcomers to British Canadian culture and values. In the 1905 provincial elections, the Liberals won 16 of 25 seats in Saskatchewan. The Saskatchewan government bought out the Bell Telephone Company in 1909, with the government owning the long-distance lines and leaving local service to small companies organized at the municipal level. Premier Walter Scott preferred government assistance to outright ownership because he thought enterprises worked better if citizens had a stake in running them; he set up the Saskatchewan Cooperative Elevator Company in 1911. Despite pressure from farm groups for direct government involvement in the grain handling business, the Scott government opted to loan money to a farmer-owned elevator company. In 1909 Saskatchewan provided bond guarantees to railway companies for the construction of branch lines, alleviating the concerns of farmers who had trouble getting their wheat to market by waggon. The Saskatchewan Grain Growers Association was the dominant political force in the province until the 1920s; it had close ties with the governing Liberal party. In 1913, the Saskatchewan Stock Growers Association was established with three goals: to watch over legislation; to forward the interests of the stock growers in every honourable and legitimate way; and to suggest to parliament legislation to meet changing conditions and requirements. Immigration peaked in 1910, and in spite of the initial difficulties of frontier life – distance from towns, sod homes, and backbreaking labour – new settlers established a European-Canadian style of prosperous agrarian society. The long-term prosperity of the province depended on the world price of grain, which headed steadily upward from the 1880s to 1920, then plunged. Wheat output was increased by new strains, such as "Marquis" wheat, which matured 8 days sooner and yielded 7 more bushels per acre (0.72 m3/ha) than the previous standard, "Red Fife". The national output of wheat soared from 8 million imperial bushels (290,000 m3) in 1896 to 26 million (950,000 m3) in 1901, reaching 151 million (5,500,000 m3) by 1921. Urban reform movements in Regina were based on support from business and professional groups. City planning, reform of local government, and municipal ownership of utilities were more widely supported by these two groups, often through such organizations as the Board of Trade. Church-related and other altruistic organizations generally supported social welfare and housing reforms; these groups were generally less successful in getting their own reforms enacted. The province responded to the First World War in 1914 with patriotic enthusiasm and enjoyed the resultant economic boom for farms and cities alike. Emotional and intellectual support for the war emerged from the politics of Canadian national identity, the rural myth, and social gospel progressivism. The Church of England was especially supportive. However, there was strong hostility toward German-Canadian farmers. Recent Ukrainian immigrants were treated as enemy aliens because of their citizenship in the Austro-Hungarian Empire. A small fraction were taken to internment camps. Most of the internees were unskilled unemployed labourers who were imprisoned "because they were destitute, not because they were disloyal". The price of wheat tripled and the acreage seeded doubled. 
The wartime spirit of sacrifice intensified social reform movements that had predated the war and now came to fruition. Saskatchewan gave women the right to vote in 1916 and at the end of 1916 passed a referendum to prohibit the sale of alcohol. In the late 1920s, the Ku Klux Klan, imported from the United States and Ontario, gained brief popularity in nativist circles in Saskatchewan and Alberta. The Klan, briefly allied with the provincial Conservative party because of their mutual dislike for Premier James G. "Jimmy" Gardiner and his Liberals (who ferociously fought the Klan), enjoyed about two years of prominence. It declined and disappeared, subject to widespread political and media opposition, plus internal scandals involving the use of the organization's funds. In 1970, the first annual Canadian Western Agribition was held in Regina. This farm-industry trade show, with its strong emphasis on livestock, is rated as one of the five top livestock shows in North America, along with those in Houston, Denver, Louisville and Toronto. The province celebrated the 75th anniversary of its establishment in 1980, with Princess Margaret, Countess of Snowdon, presiding over the official ceremonies. In 2005, 25 years later, her sister, Queen Elizabeth II, attended the events held to mark Saskatchewan's centennial. Since the late 20th century, First Nations have become more politically active in seeking justice for past inequities, especially related to the taking of indigenous lands by various governments. The federal and provincial governments have negotiated on numerous land claims, and developed a program of "Treaty Land Entitlement", enabling First Nations to buy land to be taken into reserves with money from settlements of claims. "In 1992, the federal and provincial governments signed an historic land claim agreement with Saskatchewan First Nations. Under the Agreement, the First Nations received money to buy land on the open market. As a result, about 761,000 acres have been turned into reserve land and many First Nations continue to invest their settlement dollars in urban areas", including Saskatoon. The money from such settlements has enabled First Nations to invest in businesses and other economic infrastructure. In June 2021, a graveyard containing the remains of 751 unidentified people was found at the former Marieval Indian Residential School, part of the Canadian Indian residential school system. Demographics According to the 2011 Canadian census, the largest ethnic group in Saskatchewan is German (28.6%), followed by English (24.9%), Scottish (18.9%), Canadian (18.8%), Irish (15.5%), Ukrainian (13.5%), French (Fransaskois) (12.2%), First Nations (12.1%), Norwegian (6.9%), and Polish (5.8%). As of the 2021 Canadian census, the ten most spoken languages in the province included English (1,094,785 or 99.24%), French (52,065 or 4.72%), Tagalog (36,125 or 3.27%), Cree (24,850 or 2.25%), Hindi (15,745 or 1.43%), Punjabi (13,310 or 1.21%), German (11,815 or 1.07%), Mandarin (11,590 or 1.05%), Spanish (11,185 or 1.01%), and Ukrainian (10,795 or 0.98%). The question on knowledge of languages allows for multiple responses. According to the 2021 census, religious groups in Saskatchewan included: Economy Historically, Saskatchewan's economy was primarily associated with agriculture, with wheat being the precious symbol on the province's flag. Increasing diversification has resulted in agriculture, forestry, fishing, and hunting only making up 8.9% of the province's GDP in 2018. 
Saskatchewan grows a large portion of Canada's grain. In 2017, the production of canola surpassed the production of wheat, which is Saskatchewan's most familiar crop and the one most often associated with the province. The total net income from farming was $3.3 billion in 2017, which was $0.9 billion less than the income in 2016. Other grains such as flax, rye, oats, peas, lentils, canary seed, and barley are also produced in the province. Saskatchewan is the world's largest exporter of mustard seed. Among Canadian provinces, only Alberta exceeds Saskatchewan in beef cattle production. In the northern part of the province, forestry is also a significant industry. Mining is a major industry in the province, with Saskatchewan being the world's largest exporter of potash and uranium. Oil and natural gas production is also a very important part of Saskatchewan's economy, although the oil industry is the larger of the two. Among Canadian provinces, only Alberta exceeds Saskatchewan in overall oil production. Heavy crude is extracted in the Lloydminster-Kerrobert-Kindersley areas. Light crude is found in the Kindersley-Swift Current areas as well as the Weyburn-Estevan fields. Natural gas is found almost entirely in the western part of Saskatchewan, from the Primrose Lake area through Lloydminster, Unity, Kindersley, Leader, and around Maple Creek areas. Major companies based in Saskatchewan include Nutrien, Federated Cooperatives Ltd. and Cameco. Major Saskatchewan-based Crown corporations are Saskatchewan Government Insurance (SGI), SaskTel, SaskEnergy (the province's main supplier of natural gas), SaskPower, and Saskatchewan Crop Insurance Corporation (SCIC). Bombardier runs the NATO Flying Training Centre at 15 Wing, near Moose Jaw. Bombardier was awarded a long-term contract in the late 1990s for $2.8 billion from the federal government for the purchase of military aircraft and the running of the training facility. SaskPower has been the principal supplier of electricity in Saskatchewan since 1929, serving more than 451,000 customers and managing $4.5 billion in assets. SaskPower is a major employer in the province, with almost 2,500 permanent full-time staff in 71 communities. Education Publicly funded elementary and secondary schools in the province are administered by twenty-seven school divisions. Public elementary and secondary schools operate as either secular or separate schools. All school divisions except one operate English first-language schools; the Division scolaire francophone No. 310 is the only school division that operates French first-language schools. In addition to elementary and secondary schools, the province is also home to several post-secondary institutions. The first education on the prairies took place within the family groups of the First Nations and early fur trading settlers. There were only a few missionary or trading post schools established in Rupert's Land – later known as the North West Territories. The first 76 North-West Territories school districts were established, and the first Board of Education meeting held, in 1886. The pioneering boom produced ethnic bloc settlements, whose communities sought education for their children similar to the schools of their homeland. Log cabins and dwellings were constructed for the assembly of the community, school, church, dances and meetings. The prosperity of the Roaring Twenties and the success of farmers in proving up on their homesteads helped provide funding to standardize education. 
Textbooks, normal schools for educating teachers, formal school curricula and schoolhouse architectural plans provided continuity throughout the province. English as the school language helped to provide economic stability, because one community could communicate with another and goods could be traded and sold in a common language. The number of one-room schoolhouse districts across Saskatchewan totalled approximately 5,000 at the height of this system of education in the late 1940s. Following World War II, many one-room schoolhouses were consolidated into fewer, larger and more modern town and city schools as a means of ensuring technical education. School buses, highways, and family vehicles eased the population shift to larger towns and cities. Combines and tractors meant that a farmer could manage more than a quarter section of land, so there was a shift from family farms and subsistence crops to cash crops grown on many sections of land. School vouchers have been newly proposed as a means of allowing competition between rural schools and making the operation of cooperative schools practicable in rural areas. Healthcare Saskatchewan's Ministry of Health is responsible for policy direction, sets and monitors standards, and provides funding for regional health authorities and provincial health services. Saskatchewan's health system is a single-payer system. Medical practitioners in Saskatchewan are independent contractors. They submit their accounts to the publicly funded Saskatchewan Medical Care Insurance Plan, which pays them. Patients do not pay anything to their doctors or hospitals for medical care. In 1944, the Co-operative Commonwealth Federation (CCF), a left-wing agrarian and labour party, won the provincial election in Saskatchewan and formed the first socialist government in North American history. Repeatedly re-elected, the CCF campaigned in the early 1960s on the theme of universal health coverage and, after winning the election again, implemented it, the first such programme in Canada. However, it was fiercely opposed by the province's doctors' union, which went on a massive strike the day the new system came into effect. Supported by the Saskatchewan Chamber of Commerce, most newspapers and the right-wing Keep Our Doctors movement, the doctors' union ran an effective communications campaign portraying the universal health care system as a communist scheme that would spread disease. The strike, which had become very unpopular because of the outrageous rhetoric of some of its leaders (one of them had called for bloodshed), finally ended after a few weeks, and universal health coverage was adopted by the whole country five years later. Government and politics Saskatchewan has the same form of government as the other Canadian provinces, with a lieutenant-governor (who is the representative of the King in Right of Saskatchewan), a premier, and a unicameral legislature. During the 20th century, Saskatchewan was one of Canada's more left-wing provinces, reflecting the outlook of its many rural citizens, who distrusted the distant capital government and favoured a strong local government to attend to their issues. In 1944 Tommy Douglas became premier of the first avowedly socialist regional government in North America. Most of his Members of the Legislative Assembly (MLAs) represented rural and small-town ridings. Under his Cooperative Commonwealth Federation government, Saskatchewan became the first province to have Medicare. 
In 1961, Douglas left provincial politics to become the first leader of the federal New Democratic Party. In the 21st century, Saskatchewan began to drift to the right, a shift generally attributed to the province's economy moving toward oil and gas production. In the 2015 federal election, the Conservative Party of Canada won ten of the province's fourteen seats, followed by the New Democratic Party with three and the Liberal Party of Canada with one; in the 2019 election, the Conservatives won all of Saskatchewan's 14 seats and retained them all in the 2021 election; in the 2025 Canadian federal election, the Liberal Party won a single seat in northern Saskatchewan. Provincial politics in Saskatchewan is dominated by the social-democratic Saskatchewan New Democratic Party and the centre-right Saskatchewan Party, with the latter holding the majority in the Legislative Assembly of Saskatchewan since 2007. The current Premier of Saskatchewan is Scott Moe, who took over the leadership of the Saskatchewan Party in 2018 following the resignation of Brad Wall. Numerous smaller political parties also run candidates in provincial elections, including the Green Party of Saskatchewan, Buffalo Party of Saskatchewan, Saskatchewan Progress Party, and the Progressive Conservative Party of Saskatchewan, but none is currently represented in the Legislative Assembly. No Prime Minister of Canada has been born in Saskatchewan, but two (William Lyon Mackenzie King and John Diefenbaker) represented the province in the House of Commons of Canada during their tenures as head of government. Below the provincial level of government, Saskatchewan is divided into urban and rural municipalities. The Government of Saskatchewan's Ministry of Municipal Relations recognizes three general types of municipalities and seven sub-types – urban municipalities (cities, towns, villages and resort villages), rural municipalities and northern municipalities (northern towns, northern villages and northern hamlets). The vast majority of the land mass of Northern Saskatchewan is within the unorganized Northern Saskatchewan Administration District. Cities are formed under the provincial authority of The Cities Act, which was enacted in 2002. Towns, villages, resort villages and rural municipalities are formed under the authority of The Municipalities Act, enacted in 2005. The three sub-types of northern municipalities are formed under the authority of The Northern Municipalities Act, enacted in 2010. In 2016, Saskatchewan's 774 municipalities covered 52.7% of the province's land mass and were home to 94.8% of its population.[b] These 774 municipalities are local government "creatures of provincial jurisdiction" with legal personhood. One of the key purposes of Saskatchewan's municipalities is "to provide services, facilities and other things that, in the opinion of council, are necessary or desirable for all or a part of the municipality". Other purposes are to: "provide good government"; "develop and maintain a safe and viable community"; "foster economic, social and environmental well-being" and "provide wise stewardship of public assets." Transportation Transportation in Saskatchewan includes an infrastructure system of roads, highways, freeways, airports, ferries, pipelines, trails, waterways and railway systems serving a population of approximately 1,003,299 (according to 2007 estimates) inhabitants year-round. 
The Saskatchewan Department of Highways and Transportation estimates that 80% of traffic is carried on the 5,031-kilometre principal system of highways. The Ministry of Highways and Infrastructure operates over 26,000 km (16,000 mi) of highways and divided highways. There are also municipal roads with a variety of surfaces: asphalt concrete pavements make up almost 9,000 km (5,600 mi), granular pavement almost 5,000 km (3,100 mi), non-structural or thin-membrane surface (TMS) roads close to 7,000 km (4,300 mi), and gravel highways over 5,600 km (3,500 mi) across the province. In the northern sector, ice roads, which can only be navigated in the winter months, provide another approximately 150 km (93 mi) of travel. In 2024, the Government of Canada provided Saskatchewan with a $6.1-million grant for shuttle buses serving remote communities. Saskatchewan has over 250,000 km (160,000 mi) of roads and highways, the highest length of road surface of any Canadian province. The major highways in Saskatchewan are the Trans-Canada Highway, the Yellowhead Highway (the northern Trans-Canada route), the Louis Riel Trail, the CanAm Highway, the Red Coat Trail, the Northern Woods and Water route, and the Saskota travel route. The first Canadian transcontinental railway was constructed by the Canadian Pacific Railway (CPR) between 1881 and 1885. After the great east–west transcontinental railway was built, north–south connector branch lines were established. The 1920s saw the largest rise in rail line track as the CPR and Canadian National Railway (CNR) fell into competition to provide rail service within ten kilometres. In the 1960s there were applications for abandonment of branch lines. Today the only two passenger rail services in the province are The Canadian and the Winnipeg–Churchill train, both operated by Via Rail. The Canadian is a transcontinental service linking Toronto with Vancouver. The main Saskatchewan waterways are the North Saskatchewan River and South Saskatchewan River routes. In total, there are 3,050 bridges maintained by the Department of Highways in Saskatchewan. There are currently twelve ferry services operating in the province, all under the jurisdiction of the Department of Highways. The Saskatoon Airport was initially established as part of the Royal Canadian Air Force training program during World War II. It was renamed the John G. Diefenbaker Airport in 1993. Roland J. Groome Airfield is the official designation for the Regina International Airport as of 2005; the airport was established in 1930. Airlines offering service to Saskatchewan are Air Canada, WestJet, Delta Air Lines, Transwest Air, Sunwing Airlines, Norcanair Airlines, La Ronge Aviation Services Ltd, La Loche Airways, Osprey Wings Ltd, Buffalo Narrows Airways Ltd, Île-à-la-Crosse Airways Ltd, Voyage Air, Pronto Airways, Venture Air Ltd, Pelican Narrows Air Service, Jackson Air Services Ltd, and Northern Dene Airways Ltd. The Government of Canada agreed to contribute $20 million for two new interchanges in Saskatoon: one at the Highway 219/Lorne Avenue intersection with Circle Drive, the other at the Senator Sid Buckwold Bridge (Idylwyld Freeway) and Circle Drive. This is part of the Asia-Pacific Gateway and Corridor Initiative to improve access to the CNR's intermodal freight terminal, thereby increasing Asia-Pacific trade. 
Also, the Government of Canada will contribute $27 million to Regina to construct a CPR intermodal facility and improve transportation infrastructure to the facility from both national highway networks: Highway 1 (the Trans-Canada Highway) and Highway 11 (the Louis Riel Trail). This is also part of the Asia-Pacific Gateway and Corridor Initiative, intended to improve access to the CPR terminal and increase Asia-Pacific trade. Culture Saskatchewan is home to a number of museums. The Royal Saskatchewan Museum is the provincial museum of the province. Other museums include Diefenbaker House, the Evolution of Education Museum, the Museum of Antiquities, the RCMP Heritage Centre, the Rotary Museum of Police and Corrections, the Saskatchewan Science Centre, the Saskatchewan Western Development Museum, and the T.rex Discovery Centre. The province is home to several art galleries, including the MacKenzie Art Gallery and Remai Modern. The province is also home to several performing arts centres, including the Conexus Arts Centre in Regina and TCU Place in Saskatoon. PAVED Arts, a new media artist-run space, is also in Saskatoon. The province is presently home to several concert orchestras: the Regina Symphony Orchestra, the Saskatoon Symphony Orchestra, and the Saskatoon Youth Orchestra. The Regina Symphony Orchestra performs at the Conexus Arts Centre, while the Saskatoon Symphony Orchestra performs at TCU Place. A leading writer from Saskatchewan is W. O. Mitchell (1914–1998), born in Weyburn. His best-loved novel is Who Has Seen the Wind (1947), which portrays life on the Canadian Prairies and sold almost a million copies in Canada. As a broadcaster, he is known for his radio series Jake and the Kid, which aired on CBC Radio between 1950 and 1956 and was also about life on the Prairies. Sports Hockey is the most popular sport in Saskatchewan. More than 500 National Hockey League (NHL) players have been born in Saskatchewan, the highest per capita output of any Canadian province, U.S. state, or European country. This includes Gordie Howe, dubbed "Mr. Hockey" and widely regarded as one of the greatest hockey players of all time. Some other notable NHL figures born in Saskatchewan include Keith Allen, Bryan Trottier, Bernie Federko, Clark Gillies, Fernie Flaman, Fred Sasakamoose, Bert Olmstead, Harry Watson, Elmer Lach, Max Bentley, Sid Abel, Doug Bentley, Eddie Shore, Clint Smith, Bryan Hextall, Johnny Bower, Emile Francis, Glenn Hall, Chuck Rayner, Wendel Clark, Brad McCrimmon, Mike Babcock, Patrick Marleau, Theo Fleury, Terry Harper, Wade Redden, Brian Propp, Ryan Getzlaf, Chris Kunitz, Kelly Chase, and Jordan Eberle. A number of prominent women's hockey players and figures have come from the province as well, including Hayley Wickenheiser, Colleen Sostorics, Gina Kingsbury, Shannon Miller, and Emily Clark. Wickenheiser was the first female skater to play full-time professional hockey in a men's league and is regarded as one of the greatest hockey players of all time. Saskatchewan does not have a professional hockey franchise, but five teams in the junior Western Hockey League are based in the province: the Moose Jaw Warriors, Prince Albert Raiders, Regina Pats, Saskatoon Blades, and Swift Current Broncos. The Saskatchewan Roughriders are the province's professional Canadian football team, playing in the Canadian Football League; they are based in Regina but popular across Saskatchewan. The team's fans also congregate on game days throughout Canada, and collectively they are known as "Rider Nation". 
The Roughriders are one of the oldest professional sports teams and community-owned franchises in North America and have won five Grey Cup championships. The province also has successful women's football teams. The Saskatoon Valkyries and the Regina Riot are the only two teams to win championships in the Western Women's Canadian Football League since it began play in 2011. The province is home to two other professional sports franchises. The Saskatchewan Rush play in the National Lacrosse League. In 2016, their first year after relocating from Edmonton, Alberta, the Rush won both their division title and the league championship. In 2018, the province received a Canadian Elite Basketball League franchise, the Saskatchewan Rattlers, which won the league's inaugural championship in 2019. The Saskatchewan Heat are a semi-professional team in the National Ringette League. The province has five teams in the Western Canadian Baseball League. Curling is the province's official sport and, historically, Saskatchewan has been one of the strongest curling provinces. Teams from Saskatchewan have won seven Canadian men's championships, five world men's championships, eleven Canadian women's championships, and four world women's championships. Notable curlers from Saskatchewan include Ernie Richardson, Joyce McKee, Vera Pezer, Rick Folk, Sandra Schmirler, and Ben Hebert. In a 2019 poll conducted by The Sports Network (TSN), experts ranked Schmirler's Saskatchewan team, which won the gold medal at the 1998 Olympics, as the greatest women's team in Canada's history. Symbols The flag of Saskatchewan was officially adopted on September 22, 1969. The flag features the provincial shield in the upper quarter nearest the staff, with the floral emblem, the Prairie lily, in the fly. The upper half of the flag, in forest green, represents the northern Saskatchewan forest lands, while the golden lower half symbolizes the southern wheat fields and prairies. A province-wide competition was held to design the flag, drawing over 4,000 entries. The winning design was by Anthony Drake, then living in Hodgeville. In 2005, Saskatchewan Environment held a province-wide vote for an official fish to mark Saskatchewan's centennial year, receiving more than 10,000 online and mail-in votes from the public. The walleye was the overwhelming favourite of the six native fish species nominated for the designation, receiving more than half the votes cast. Other species in the running were the lake sturgeon, lake trout, lake whitefish, northern pike and yellow perch. Saskatchewan's other symbols include the tartan, the licence plate, and the provincial flower. Saskatchewan's official tartan was registered with the Court of Lord Lyon King of Arms in Scotland in 1961. It has seven colours: gold, brown, green, red, yellow, white and black. The provincial licence plates display the slogan "Land of Living Skies". The provincial flower of Saskatchewan is the western red lily. In 2005, Saskatchewan celebrated its centennial. To honour it, the Royal Canadian Mint issued a commemorative five-dollar coin depicting Canada's wheat fields as well as a circulating 25-cent coin of a similar design. Queen Elizabeth II and Prince Philip visited Regina, Saskatoon, and Lumsden, and the Saskatchewan-reared Joni Mitchell issued an album in Saskatchewan's honour. See also Notes References Further reading External links 
========================================
[SOURCE: https://en.wikipedia.org/wiki/Spherical_astronomy] | [TOKENS: 445]
Contents Spherical astronomy Spherical astronomy, or positional astronomy, is a branch of observational astronomy used to locate astronomical objects on the celestial sphere, as seen at a particular date, time, and location on Earth. It relies on the mathematical methods of spherical trigonometry and the measurements of astrometry. This is the oldest branch of astronomy and dates back to antiquity. Observations of celestial objects have been, and continue to be, important for religious and astrological purposes, as well as for timekeeping and navigation. The science of actually measuring positions of celestial objects in the sky is known as astrometry. The primary elements of spherical astronomy are celestial coordinate systems and time. The coordinates of objects on the sky are listed using the equatorial coordinate system, which is based on the projection of Earth's equator onto the celestial sphere. The position of an object in this system is given in terms of right ascension (α) and declination (δ). The latitude and local time can then be used to derive the position of the object in the horizontal coordinate system, consisting of the altitude and azimuth. The coordinates of celestial objects such as stars and galaxies are tabulated in a star catalog, which gives the position for a particular year. However, the combined effects of axial precession and nutation will cause the coordinates to change slightly over time. The effects of these changes in Earth's motion are compensated by the periodic publication of revised catalogs. To determine the position of the Sun and planets, an astronomical ephemeris (a table of values that gives the positions of astronomical objects in the sky at a given time) is used, which can then be converted into suitable real-world coordinates. The unaided human eye can perceive about 6,000 stars, of which about half are below the horizon at any one time. On modern star charts, the celestial sphere is divided into 88 constellations. Every star lies within a constellation. Constellations are useful for navigation. Polaris lies nearly due north to an observer in the Northern Hemisphere. This pole star is always at a position nearly directly above the North Pole. Positional phenomena Ancient structures associated with positional astronomy include See also References External links
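The equatorial-to-horizontal conversion described above reduces to two standard spherical-trigonometry relations. The sketch below, in Python, assumes the local sidereal time is already known (deriving it from civil time and the observer's longitude is a separate calculation); the function name and example values are illustrative only.

```python
import math

def equatorial_to_horizontal(ra_deg, dec_deg, lat_deg, lst_deg):
    """Convert right ascension/declination to altitude/azimuth.

    ra_deg  -- right ascension (alpha) in degrees
    dec_deg -- declination (delta) in degrees
    lat_deg -- observer's latitude (phi) in degrees
    lst_deg -- local sidereal time in degrees (15 degrees per sidereal hour)
    """
    # Hour angle: how far west of the observer's meridian the object lies.
    ha = math.radians((lst_deg - ra_deg) % 360.0)
    dec = math.radians(dec_deg)
    lat = math.radians(lat_deg)

    # Altitude from the standard relation:
    # sin(alt) = sin(dec)sin(lat) + cos(dec)cos(lat)cos(HA)
    alt = math.asin(math.sin(dec) * math.sin(lat)
                    + math.cos(dec) * math.cos(lat) * math.cos(ha))

    # Azimuth measured from north, increasing eastward.
    az = math.atan2(-math.cos(dec) * math.sin(ha),
                    math.sin(dec) * math.cos(lat)
                    - math.cos(dec) * math.sin(lat) * math.cos(ha))
    return math.degrees(alt), math.degrees(az) % 360.0

# Example: a star on the celestial equator crossing the meridian (hour
# angle zero), seen from latitude 52 N, culminates due south (azimuth
# 180 degrees) at an altitude of 90 - 52 = 38 degrees.
alt, az = equatorial_to_horizontal(ra_deg=120.0, dec_deg=0.0,
                                   lat_deg=52.0, lst_deg=120.0)
print(f"altitude {alt:.1f} deg, azimuth {az:.1f} deg")
```

A production implementation would also correct for precession, nutation and atmospheric refraction, exactly the refinements that the periodically revised catalogs and ephemerides mentioned above exist to support.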
========================================
[SOURCE: https://en.wikipedia.org/wiki/United_States#cite_note-26] | [TOKENS: 17273]
Contents United States The United States of America (USA), also known as the United States (U.S.) or America, is a country primarily located in North America. It is a federal republic of 50 states and a federal capital district, Washington, D.C. The 48 contiguous states border Canada to the north and Mexico to the south, with the semi-exclave of Alaska in the northwest and the archipelago of Hawaii in the Pacific Ocean. The United States also asserts sovereignty over five major island territories and various uninhabited islands in Oceania and the Caribbean.[j] It is a megadiverse country, with the world's third-largest land area[c] and third-largest population, exceeding 341 million.[k] Paleo-Indians first migrated from North Asia to North America at least 15,000 years ago, and formed various civilizations. Spanish colonization established Spanish Florida in 1513, the first European colony in what is now the continental United States. British colonization followed with the 1607 settlement of Virginia, the first of the Thirteen Colonies. Enslavement of Africans was practiced in all colonies by 1770 and supplied most of the labor for the Southern Colonies' plantation economy. Clashes with the British Crown began as a civil protest over the illegality of taxation without representation in Parliament and the denial of other English rights. They evolved into the American Revolution, which led to the Declaration of Independence and a society based on universal rights. Victory in the 1775–1783 Revolutionary War brought international recognition of U.S. sovereignty and fueled westward expansion, further dispossessing native inhabitants. As more states were admitted, a North–South division over slavery led the Confederate States of America to declare secession and fight the Union in the 1861–1865 American Civil War. With the United States' victory and reunification, slavery was abolished nationally. By the late 19th century, the U.S. economy outpaced the French, German and British economies combined. As of 1900, the country had established itself as a great power, a status solidified after its involvement in World War I. Following Japan's attack on Pearl Harbor in 1941, the U.S. entered World War II. Its aftermath left the U.S. and the Soviet Union as rival superpowers, competing for ideological dominance and international influence during the Cold War. The Soviet Union's collapse in 1991 ended the Cold War, leaving the U.S. as the world's sole superpower. The U.S. federal government is a representative democracy with a president and a constitution that grants separation of powers under three branches: legislative, executive, and judicial. The United States Congress is a bicameral national legislature composed of the House of Representatives (a lower house based on population) and the Senate (an upper house based on equal representation for each state). Federalism grants substantial autonomy to the 50 states. In addition, 574 Native American tribes have sovereignty rights, and there are 326 Native American reservations. Since the 1850s, the Democratic and Republican parties have dominated American politics. American ideals and values are based on a democratic tradition inspired by the American Enlightenment movement. A developed country, the U.S. ranks high in economic competitiveness, innovation, and higher education. Accounting for over a quarter of nominal global GDP, its economy has been the world's largest since about 1890. 
It is the wealthiest country, with the highest disposable household income per capita among OECD members, though its wealth inequality is highly pronounced. Shaped by centuries of immigration, the culture of the U.S. is diverse and globally influential. Making up more than a third of global military spending, the country has one of the strongest armed forces and is a designated nuclear-weapon state. A member of numerous international organizations, the U.S. plays a major role in global political, cultural, economic, and military affairs.

Etymology

Documented use of the phrase "United States of America" dates back to January 2, 1776. On that day, Stephen Moylan, a Continental Army aide to General George Washington, wrote a letter to Joseph Reed, Washington's aide-de-camp, seeking to go "with full and ample powers from the United States of America to Spain" to seek assistance in the Revolutionary War effort. The first known public usage is an anonymous essay published in the Williamsburg newspaper The Virginia Gazette on April 6, 1776. Sometime on or after June 11, 1776, Thomas Jefferson wrote "United States of America" in a rough draft of the Declaration of Independence, which was adopted by the Second Continental Congress on July 4, 1776. The term "United States" and its initialism "U.S.", used as nouns or as adjectives in English, are common short names for the country. The initialism "USA", a noun, is also common. "United States" and "U.S." are the established terms throughout the U.S. federal government, with prescribed rules.[l] "The States" is an established colloquial shortening of the name, used particularly from abroad; "stateside" is the corresponding adjective or adverb. "America" is the feminine form of the first word of Americus Vesputius, the Latinized name of Italian explorer Amerigo Vespucci (1454–1512);[m] it was first used as a place name by the German cartographers Martin Waldseemüller and Matthias Ringmann in 1507.[n] Vespucci first proposed that the West Indies discovered by Christopher Columbus in 1492 were part of a previously unknown landmass and not among the Indies at the eastern limit of Asia. In English, the term "America" rarely refers to topics unrelated to the United States, despite the usage of "the Americas" to describe the totality of the continents of North and South America.

History

The first inhabitants of North America migrated from Siberia approximately 15,000 years ago, either across the Bering land bridge or along the now-submerged Ice Age coastline. Small, isolated groups of hunter-gatherers are said to have migrated alongside herds of large herbivores far into Alaska, with ice-free corridors developing along the Pacific coast and valleys of North America in c. 16,500 – c. 13,500 BCE (c. 18,500 – c. 15,500 BP). The Clovis culture, which appeared around 11,000 BCE, is believed to be the first widespread culture in the Americas. Over time, Indigenous North American cultures grew increasingly sophisticated, and some, such as the Mississippian culture, developed agriculture, architecture, and complex societies. In the post-archaic period, the Mississippian cultures were located in the midwestern, eastern, and southern regions, and the Algonquian in the Great Lakes region and along the Eastern Seaboard, while the Hohokam culture and Ancestral Puebloans inhabited the Southwest. Native population estimates of what is now the United States before the arrival of European colonizers range from around 500,000 to nearly 10 million.
Christopher Columbus began exploring the Caribbean for Spain in 1492, leading to Spanish-speaking settlements and missions from what are now Puerto Rico and Florida to New Mexico and California. The first Spanish colony in the present-day continental United States was Spanish Florida, chartered in 1513. After several settlements failed there due to starvation and disease, Spain's first permanent town, Saint Augustine, was founded in 1565. France established its own settlements in French Florida in 1562, but they were either abandoned (Charlesfort, 1563) or destroyed by Spanish raids (Fort Caroline, 1565). Permanent French settlements were founded much later along the Great Lakes (Fort Detroit, 1701), the Mississippi River (Saint Louis, 1764) and especially the Gulf of Mexico (New Orleans, 1718). Early European colonies also included the thriving Dutch colony of New Netherland (settled 1626, present-day New York) and the small Swedish colony of New Sweden (settled 1638 in what became Delaware). British colonization of the East Coast began with the Virginia Colony (1607) and the Plymouth Colony (Massachusetts, 1620). The Mayflower Compact in Massachusetts and the Fundamental Orders of Connecticut established precedents for local representative self-governance and constitutionalism that would develop throughout the American colonies. While European settlers in what is now the United States experienced conflicts with Native Americans, they also engaged in trade, exchanging European tools for food and animal pelts.[o] Relations ranged from close cooperation to warfare and massacres. The colonial authorities often pursued policies that forced Native Americans to adopt European lifestyles, including conversion to Christianity. Along the eastern seaboard, settlers trafficked Africans through the Atlantic slave trade, largely to provide manual labor on plantations. The original Thirteen Colonies[p] that would later found the United States were administered as possessions of the British Empire by Crown-appointed governors, though local governments held elections open to most white male property owners. The colonial population grew rapidly from Maine to Georgia, eclipsing Native American populations; by the 1770s, the natural increase of the population was such that only a small minority of Americans had been born overseas. The colonies' distance from Britain facilitated the entrenchment of self-governance, and the First Great Awakening, a series of Christian revivals, fueled colonial interest in guaranteed religious liberty. Following its victory in the French and Indian War, Britain began to assert greater control over local affairs in the Thirteen Colonies, resulting in growing political resistance. One of the primary grievances of the colonists was the denial of their rights as Englishmen, particularly the right to representation in the British government that taxed them. To demonstrate their dissatisfaction and resolve, the First Continental Congress met in 1774 and passed the Continental Association, a colonial boycott of British goods enforced by local "committees of safety" that proved effective. Britain's subsequent attempt to disarm the colonists resulted in the 1775 Battles of Lexington and Concord, igniting the American Revolutionary War. At the Second Continental Congress, the colonies appointed George Washington commander-in-chief of the Continental Army, and created a committee that named Thomas Jefferson to draft the Declaration of Independence.
Two days after the Second Continental Congress passed the Lee Resolution to create an independent, sovereign nation, the Declaration was adopted on July 4, 1776. The political values of the American Revolution evolved from an armed rebellion demanding reform within an empire to a revolution that created a new social and governing system founded on the defense of liberty and the protection of inalienable natural rights; sovereignty of the people; republicanism over monarchy, aristocracy, and other hereditary political power; civic virtue; and an intolerance of political corruption. The Founding Fathers of the United States, who included Washington, Jefferson, John Adams, Benjamin Franklin, Alexander Hamilton, John Jay, James Madison, Thomas Paine, and many others, were inspired by Classical, Renaissance, and Enlightenment philosophies and ideas. Though in practical effect since their drafting in 1777, the Articles of Confederation were ratified in 1781 and formally established a decentralized government that operated until 1789. After the British surrender at the siege of Yorktown in 1781, American sovereignty was internationally recognized by the Treaty of Paris (1783), through which the U.S. gained territory stretching west to the Mississippi River, north to present-day Canada, and south to Spanish Florida. The Northwest Ordinance (1787) established the precedent by which the country's territory would expand with the admission of new states, rather than the expansion of existing states. The U.S. Constitution was drafted at the 1787 Constitutional Convention to overcome the limitations of the Articles. It went into effect in 1789, creating a federal republic governed by three separate branches that together formed a system of checks and balances. George Washington was elected the country's first president under the Constitution, and the Bill of Rights was adopted in 1791 to allay skeptics' concerns about the power of the more centralized government. Washington's resignation as commander-in-chief after the Revolutionary War and his later refusal to run for a third term as the country's first president established a precedent for the supremacy of civil authority in the United States and the peaceful transfer of power. In the late 18th century, American settlers began to expand westward in larger numbers, many with a sense of manifest destiny. The Louisiana Purchase of 1803 from France nearly doubled the territory of the United States. Lingering issues with Britain remained, leading to the War of 1812, which was fought to a draw. Spain ceded Florida and its Gulf Coast territory in 1819. The Missouri Compromise of 1820, which admitted Missouri as a slave state and Maine as a free state, attempted to balance the desire of northern states to prevent the expansion of slavery into new territories with that of southern states to extend it there. Primarily, the compromise prohibited slavery in all other lands of the Louisiana Purchase north of the 36°30′ parallel. As Americans expanded further into territory inhabited by Native Americans, the federal government implemented policies of Indian removal or assimilation. The most significant such legislation was the Indian Removal Act of 1830, a key policy of President Andrew Jackson. It resulted in the Trail of Tears (1830–1850), in which an estimated 60,000 Native Americans living east of the Mississippi River were forcibly removed and displaced to lands far to the west, causing 13,200 to 16,700 deaths along the forced march.
Settler expansion as well as this influx of Indigenous peoples from the East resulted in the American Indian Wars west of the Mississippi. During the colonial period, slavery became legal in all the Thirteen Colonies, but by 1770 it provided the main labor force in the large-scale, agriculture-dependent economies of the Southern Colonies from Maryland to Georgia. The practice began to be significantly questioned during the American Revolution, and states in the North enacted laws to prohibit slavery within their boundaries; an active abolitionist movement reemerged in the 1830s and spurred further opposition. At the same time, support for slavery had strengthened in Southern states, with widespread use of inventions such as the cotton gin (1793) having made slavery immensely profitable for Southern elites. The United States annexed the Republic of Texas in 1845, and the 1846 Oregon Treaty led to U.S. control of the present-day American Northwest. A dispute with Mexico over Texas led to the Mexican–American War (1846–1848). After the victory of the U.S., Mexico recognized U.S. sovereignty over Texas, New Mexico, and California in the 1848 Mexican Cession; the cession's lands also included the future states of Nevada, Colorado, and Utah. The California gold rush of 1848–1849 spurred a huge migration of white settlers to the Pacific coast, leading to even more confrontations with Native populations. Among the most violent of these was the California genocide of thousands of Native inhabitants, which lasted into the mid-1870s. Additional western territories and states were created. Throughout the 1850s, the sectional conflict regarding slavery was further inflamed by national legislation in the U.S. Congress and decisions of the Supreme Court. In Congress, the Fugitive Slave Act of 1850 mandated that slaves taking refuge in non-slave states be forcibly returned to their owners in the South, while the Kansas–Nebraska Act of 1854 effectively gutted the anti-slavery requirements of the Missouri Compromise. In its Dred Scott decision of 1857, the Supreme Court ruled against a slave brought into non-slave territory, simultaneously declaring the entire Missouri Compromise to be unconstitutional. These and other events exacerbated tensions between North and South that would culminate in the American Civil War (1861–1865). Beginning with South Carolina in December 1860, 11 slave-state governments voted to secede from the United States, joining to create the Confederate States of America. All other state governments remained loyal to the Union.[q] War broke out in April 1861 after the Confederacy bombarded Fort Sumter. Following the Emancipation Proclamation on January 1, 1863, many freed slaves joined the Union army. The war began to turn in the Union's favor following the 1863 Siege of Vicksburg and Battle of Gettysburg, and the Confederates surrendered in 1865 after the Union's victory in the Battle of Appomattox Court House. Efforts toward reconstruction in the secessionist South had begun as early as 1862, but it was only after President Lincoln's assassination that the three Reconstruction Amendments to the Constitution were ratified to protect civil rights. The amendments codified nationally the abolition of slavery and involuntary servitude except as punishment for crimes, promised equal protection under the law for all persons, and prohibited discrimination on the basis of race or previous enslavement. As a result, African Americans took an active political role in ex-Confederate states in the decade following the Civil War.
The former Confederate states were readmitted to the Union, beginning with Tennessee in 1866 and ending with Georgia in 1870. National infrastructure, including transcontinental telegraph and railroads, spurred growth in the American frontier. This was accelerated by the Homestead Acts, through which nearly 10 percent of the total land area of the United States was given away free to some 1.6 million homesteaders. From 1865 through 1917, an unprecedented stream of immigrants arrived in the United States, including 24.4 million from Europe. Most came through the Port of New York, as New York City and other large cities on the East Coast became home to large Jewish, Irish, and Italian populations. Many Northern Europeans as well as significant numbers of Germans and other Central Europeans moved to the Midwest. At the same time, about one million French Canadians migrated from Quebec to New England. During the Great Migration, millions of African Americans left the rural South for urban areas in the North. Alaska was purchased from Russia in 1867. The Compromise of 1877 is generally considered the end of the Reconstruction era, as it resolved the electoral crisis following the 1876 presidential election and led President Rutherford B. Hayes to reduce the role of federal troops in the South. Immediately, the Redeemers began evicting the Carpetbaggers and quickly regained local control of Southern politics in the name of white supremacy. African Americans endured a period of heightened, overt racism following Reconstruction, a time often considered the nadir of American race relations. A series of Supreme Court decisions, including Plessy v. Ferguson, emptied the Fourteenth and Fifteenth Amendments of their force, allowing Jim Crow laws in the South to go unchecked, sundown towns to spread in the Midwest, and segregation to take hold in communities across the country; this would be reinforced in part by the policy of redlining later adopted by the federal Home Owners' Loan Corporation.

An explosion of technological advancement, accompanied by the exploitation of cheap immigrant labor, led to rapid economic expansion during the Gilded Age of the late 19th century. It continued into the early 20th century, when the United States already outpaced the economies of Britain, France, and Germany combined. This fostered the amassing of power by a few prominent industrialists, largely by their formation of trusts and monopolies to prevent competition. Tycoons led the nation's expansion in the railroad, petroleum, and steel industries. The United States emerged as a pioneer of the automotive industry. These changes resulted in significant increases in economic inequality, slum conditions, and social unrest, creating the environment for labor unions and socialist movements to begin to flourish. This period eventually ended with the advent of the Progressive Era, which was characterized by significant economic and social reforms. Pro-American elements in Hawaii overthrew the Hawaiian monarchy; the islands were annexed in 1898. That same year, Puerto Rico, the Philippines, and Guam were ceded to the U.S. by Spain after the latter's defeat in the Spanish–American War. (The Philippines was granted full independence from the U.S. on July 4, 1946, following World War II. Puerto Rico and Guam have remained U.S. territories.) American Samoa was acquired by the United States in 1900 after the Second Samoan Civil War. The U.S. Virgin Islands were purchased from Denmark in 1917.
The United States entered World War I alongside the Allies in 1917, helping to turn the tide against the Central Powers. In 1920, a constitutional amendment granted nationwide women's suffrage. During the 1920s and 1930s, the rise of radio for mass communication and the invention of early television transformed communications nationwide. The Wall Street Crash of 1929 triggered the Great Depression, to which President Franklin D. Roosevelt responded with the New Deal plan of "reform, recovery and relief", a series of unprecedented and sweeping recovery programs and employment relief projects combined with financial reforms and regulations. Initially neutral during World War II, the U.S. began supplying war materiel to the Allies in March 1941 and entered the war in December after Japan's attack on Pearl Harbor. Agreeing to a "Europe first" policy, the U.S. concentrated its wartime efforts on Japan's allies, Italy and Germany, until their final defeat in May 1945. The U.S. developed the first nuclear weapons and used them against the Japanese cities of Hiroshima and Nagasaki in August 1945, ending the war. The United States was one of the "Four Policemen" who met to plan the post-war world, alongside the United Kingdom, the Soviet Union, and China. The U.S. emerged relatively unscathed from the war, with even greater economic power and international political influence.

The end of World War II in 1945 left the U.S. and the Soviet Union as superpowers, each with its own political, military, and economic sphere of influence. Geopolitical tensions between the two superpowers soon led to the Cold War. The U.S. implemented a policy of containment intended to limit the Soviet Union's sphere of influence; engaged in regime change against governments perceived to be aligned with the Soviets; and prevailed in the Space Race, which culminated with the first crewed Moon landing in 1969. Domestically, the U.S. experienced economic growth, urbanization, and population growth following World War II. The civil rights movement emerged, with Martin Luther King Jr. becoming a prominent leader in the early 1960s. The Great Society plan of President Lyndon B. Johnson's administration resulted in groundbreaking and broad-reaching laws, policies, and a constitutional amendment to counteract some of the worst effects of lingering institutional racism. The counterculture movement in the U.S. brought significant social changes, including the liberalization of attitudes toward recreational drug use and sexuality. It also encouraged open defiance of the military draft (leading to the end of conscription in 1973) and wide opposition to U.S. intervention in Vietnam, with the U.S. completing its withdrawal in 1975. A societal shift in the roles of women was significantly responsible for the large increase in female paid labor participation starting in the 1970s, and by 1985 the majority of American women aged 16 and older were employed. The fall of communism and the dissolution of the Soviet Union from 1989 to 1991 marked the end of the Cold War and left the United States as the world's sole superpower. This cemented the United States' global influence, reinforcing the concept of the "American Century" as the U.S. dominated international political, cultural, economic, and military affairs. The 1990s saw the longest recorded economic expansion in American history, a dramatic decline in U.S. crime rates, and advances in technology.
Throughout this decade, technological innovations such as the World Wide Web, the evolution of the Pentium microprocessor in accordance with Moore's law, rechargeable lithium-ion batteries, the first gene therapy trial, and cloning either emerged in the U.S. or were improved upon there. The Human Genome Project was formally launched in 1990, while Nasdaq became the first stock market in the United States to trade online in 1998. In the Gulf War of 1991, an American-led international coalition of states expelled an Iraqi invasion force that had occupied neighboring Kuwait. The September 11 attacks on the United States in 2001 by the pan-Islamist militant organization al-Qaeda led to the war on terror and subsequent military interventions in Afghanistan and Iraq. The U.S. housing bubble culminated in 2007 with the Great Recession, the largest economic contraction since the Great Depression. In the 2010s and early 2020s, the United States experienced increased political polarization and democratic backsliding. The country's polarization was violently reflected in the January 2021 Capitol attack, when a mob of insurrectionists entered the U.S. Capitol and sought to prevent the peaceful transfer of power in an attempted self-coup d'état.

Geography

The United States is the world's third-largest country by total area behind Russia and Canada.[c] The 48 contiguous states and the District of Columbia have a combined area of 3,119,885 square miles (8,080,470 km2). In 2021, the United States had 8% of the Earth's permanent meadows and pastures and 10% of its cropland. Starting in the east, the coastal plain of the Atlantic seaboard gives way to inland forests and rolling hills in the Piedmont plateau region. The Appalachian Mountains and the Adirondack Massif separate the East Coast from the Great Lakes and the grasslands of the Midwest. The Mississippi River System, the world's fourth-longest river system, runs predominantly north–south through the center of the country. The flat and fertile prairie of the Great Plains stretches to the west, interrupted by a highland region in the southeast. The Rocky Mountains, west of the Great Plains, extend north to south across the country, peaking at over 14,000 feet (4,300 m) in Colorado. The supervolcano underlying Yellowstone National Park in the Rocky Mountains, the Yellowstone Caldera, is the continent's largest volcanic feature. Farther west are the rocky Great Basin and the Chihuahuan, Sonoran, and Mojave deserts. In the northwest corner of Arizona, carved by the Colorado River, is the Grand Canyon, a steep-sided canyon and popular tourist destination known for its overwhelming visual size and intricate, colorful landscape. The Cascade and Sierra Nevada mountain ranges run close to the Pacific coast. The lowest and highest points in the contiguous United States are in the State of California, about 84 miles (135 km) apart. At an elevation of 20,310 feet (6,190.5 m), Alaska's Denali (also called Mount McKinley) is the highest peak in the country and on the continent. Active volcanoes are common throughout Alaska's Alexander and Aleutian Islands. Located entirely outside North America, the archipelago of Hawaii consists of volcanic islands, physiographically and ethnologically part of the Polynesian subregion of Oceania. In addition to its total land area, the United States has one of the world's largest marine exclusive economic zones, spanning approximately 4.5 million square miles (11.7 million km2) of ocean.
With its large size and geographic variety, the United States includes most climate types. East of the 100th meridian, the climate ranges from humid continental in the north to humid subtropical in the south. The western Great Plains are semi-arid. Many mountainous areas of the American West have an alpine climate. The climate is arid in the Southwest, Mediterranean in coastal California, and oceanic in coastal Oregon, Washington, and southern Alaska. Most of Alaska is subarctic or polar. Hawaii, the southern tip of Florida, and U.S. territories in the Caribbean and Pacific are tropical. The United States experiences more high-impact extreme weather events than any other country. States bordering the Gulf of Mexico are prone to hurricanes, and most of the world's tornadoes occur in the country, mainly in Tornado Alley. Due to climate change, extreme weather has become more frequent in the U.S. in the 21st century, with three times the number of reported heat waves compared to the 1960s. Since the 1990s, droughts in the American Southwest have become more persistent and more severe. The regions considered most attractive to the population are also among the most vulnerable. The U.S. is one of 17 megadiverse countries containing large numbers of endemic species: about 17,000 species of vascular plants occur in the contiguous United States and Alaska, and over 1,800 species of flowering plants are found in Hawaii, few of which occur on the mainland. The United States is home to 428 mammal species, 784 birds, 311 reptiles, 295 amphibians, and around 91,000 insect species. There are 63 national parks and hundreds of other federally managed monuments, forests, and wilderness areas, administered by the National Park Service and other agencies. About 28% of the country's land is publicly owned and federally managed, primarily in the western states. Most of this land is protected, though some is leased for commercial use, and less than one percent is used for military purposes. Environmental issues in the United States include debates on non-renewable resources and nuclear energy, air and water pollution, biodiversity, logging and deforestation, and climate change. The U.S. Environmental Protection Agency (EPA) is the federal agency charged with addressing most environmental issues. The idea of wilderness has shaped the management of public lands since the passage of the Wilderness Act in 1964. The Endangered Species Act of 1973 provides a way to protect threatened and endangered species and their habitats; the United States Fish and Wildlife Service implements and enforces the Act. In 2024, the U.S. ranked 35th among 180 countries in the Environmental Performance Index.

Government and politics

The United States is a federal republic of 50 states and a federal capital district, Washington, D.C. The U.S. asserts sovereignty over five unincorporated territories and several uninhabited island possessions. It is the world's oldest surviving federation, and its presidential system of federal government has been adopted, in whole or in part, by many newly independent states worldwide following their decolonization. The Constitution of the United States serves as the country's supreme legal document. Most scholars describe the United States as a liberal democracy.[r] Composed of three branches, all headquartered in Washington, D.C., the federal government is the national government of the United States. The U.S.
Constitution establishes a separation of powers intended to provide a system of checks and balances to prevent any of the three branches from becoming supreme. The three-branch system is known as the presidential system, in contrast to the parliamentary system, where the executive is part of the legislative body. Many countries around the world adopted this aspect of the 1789 Constitution of the United States, especially in the postcolonial Americas. In the U.S. federal system, sovereign powers are shared among three levels of government specified in the Constitution: the federal government, the states, and Indian tribes. The U.S. also asserts sovereignty over five permanently inhabited territories: American Samoa, Guam, the Northern Mariana Islands, Puerto Rico, and the U.S. Virgin Islands. Residents of the 50 states are governed by their elected state government, under state constitutions compatible with the national constitution, and by elected local governments that are administrative divisions of a state. States are subdivided into counties or county equivalents, and (except for Hawaii) further divided into municipalities, each administered by elected representatives. The District of Columbia is a federal district containing the U.S. capital, Washington, D.C. The federal district is an administrative division of the federal government. Indian country is made up of 574 federally recognized tribes and 326 Indian reservations. They hold a government-to-government relationship with the U.S. federal government in Washington and are legally defined as domestic dependent nations with inherent tribal sovereignty rights. In addition to the five major territories, the U.S. also asserts sovereignty over the United States Minor Outlying Islands in the Pacific Ocean and the Caribbean. The seven undisputed islands without permanent populations are Baker Island, Howland Island, Jarvis Island, Johnston Atoll, Kingman Reef, Midway Atoll, and Palmyra Atoll. U.S. sovereignty over the unpopulated Bajo Nuevo Bank, Navassa Island, Serranilla Bank, and Wake Island is disputed. The Constitution is silent on political parties. However, they developed independently in the 18th century with the Federalist and Anti-Federalist parties. Since then, the United States has operated as a de facto two-party system, though the parties have changed over time. Since the mid-19th century, the two main national parties have been the Democratic Party and the Republican Party. The former is perceived as relatively liberal in its political platform, while the latter is perceived as relatively conservative. The United States has an established structure of foreign relations, with the world's second-largest diplomatic corps as of 2024. It is a permanent member of the United Nations Security Council and home to the United Nations headquarters. The United States is a member of the G7, G20, and OECD intergovernmental organizations. Almost all countries have embassies and many have consulates (official representatives) in the country. Likewise, nearly all countries host formal diplomatic missions with the United States, except Iran, North Korea, and Bhutan. Though Taiwan does not have formal diplomatic relations with the U.S., it maintains close unofficial relations. The United States regularly supplies Taiwan with military equipment to deter potential Chinese aggression.
Its geopolitical attention also turned to the Indo-Pacific when the United States joined the Quadrilateral Security Dialogue with Australia, India, and Japan. The United States has a "Special Relationship" with the United Kingdom and strong ties with Canada, Australia, New Zealand, the Philippines, Japan, South Korea, Israel, and several European Union countries such as France, Italy, Germany, Spain, and Poland. The U.S. works closely with its NATO allies on military and national security issues, and with countries in the Americas through the Organization of American States and the United States–Mexico–Canada Agreement. The U.S. exercises full international defense authority and responsibility for Micronesia, the Marshall Islands, and Palau through the Compact of Free Association. It has increasingly conducted strategic cooperation with India, while its ties with China have steadily deteriorated. Beginning in 2014, the U.S. became a key ally of Ukraine. After Donald Trump was elected U.S. president in 2024, he sought to negotiate an end to the Russo-Ukrainian War. He paused all military aid to Ukraine in March 2025, although the aid later resumed. Trump also ended U.S. intelligence sharing with the country, but this too was eventually restored.

The president is the commander-in-chief of the United States Armed Forces and appoints its leaders, the secretary of defense and the Joint Chiefs of Staff. The Department of Defense, headquartered at the Pentagon near Washington, D.C., administers five of the six service branches, which are made up of the U.S. Army, Marine Corps, Navy, Air Force, and Space Force. The Coast Guard is administered by the Department of Homeland Security in peacetime and can be transferred to the Department of the Navy in wartime. The total strength of the military is about 1.3 million active-duty personnel, with an additional 400,000 in reserve. The United States spent $997 billion on its military in 2024, by far the largest amount of any country, making up 37% of global military spending and accounting for 3.4% of the country's GDP. The U.S. possesses 42% of the world's nuclear weapons—the second-largest stockpile after that of Russia. The U.S. military is widely regarded as the most powerful and advanced in the world. The United States has the third-largest combined armed forces in the world, behind the Chinese People's Liberation Army and the Indian Armed Forces. The U.S. military operates about 800 bases and facilities abroad and maintains deployments of more than 100 active-duty personnel in 25 foreign countries. The United States has engaged in over 400 military interventions since its founding in 1776, with over half of these occurring between 1950 and 2019 and 25% occurring in the post-Cold War era. State defense forces (SDFs) are military units that operate under the sole authority of a state government. SDFs are authorized by state and federal law but are under the command of the state's governor. By contrast, the 54 U.S. National Guard organizations[t] fall under the dual control of state or territorial governments and the federal government; their units can also become federalized entities, but SDFs cannot be federalized. The National Guard personnel of a state or territory can be federalized by the president under the National Defense Act Amendments of 1933; this legislation created the Guard and provided for the integration of Army National Guard and Air National Guard units and personnel into the U.S. Army and (since 1947) the U.S. Air Force.
The total number of National Guard members is about 430,000, while the estimated combined strength of SDFs is less than 10,000. There are about 18,000 police agencies in the United States, operating at every level from local to national. Law in the United States is mainly enforced by local police departments and sheriff's departments in their municipal or county jurisdictions. State police departments have authority in their respective states, and federal agencies such as the Federal Bureau of Investigation (FBI) and the U.S. Marshals Service have national jurisdiction and specialized duties, such as protecting civil rights, national security, enforcing U.S. federal courts' rulings and federal laws, and combating interstate criminal activity. State courts conduct almost all civil and criminal trials, while federal courts adjudicate the much smaller number of civil and criminal cases that relate to federal law. There is no unified "criminal justice system" in the United States. The American prison system is largely heterogeneous, with thousands of relatively independent systems operating across federal, state, local, and tribal levels. In 2025, "these systems hold nearly 2 million people in 1,566 state prisons, 98 federal prisons, 3,116 local jails, 1,277 juvenile correctional facilities, 133 immigration detention facilities, and 80 Indian country jails, as well as in military prisons, civil commitment centers, state psychiatric hospitals, and prisons in the U.S. territories." Despite disparate systems of confinement, four main institutions dominate: federal prisons, state prisons, local jails, and juvenile correctional facilities. Federal prisons are run by the Federal Bureau of Prisons and hold pretrial detainees as well as people who have been convicted of federal crimes. State prisons, run by the department of corrections of each state, hold people sentenced and serving prison time (usually longer than one year) for felony offenses. Local jails are county or municipal facilities that incarcerate defendants prior to trial; they also hold those serving short sentences (typically under a year). Juvenile correctional facilities are operated by local or state governments and serve as longer-term placements for any minor adjudicated as delinquent and ordered by a judge to be confined. In January 2023, the United States had the sixth-highest per capita incarceration rate in the world—531 people per 100,000 inhabitants—and the largest prison and jail population in the world, with more than 1.9 million people incarcerated. An analysis of the World Health Organization Mortality Database from 2010 showed U.S. homicide rates "were 7 times higher than in other high-income countries, driven by a gun homicide rate that was 25 times higher".

Economy

The U.S. has a highly developed mixed economy that has been the world's largest nominally since about 1890. Its 2024 gross domestic product (GDP)[e] of more than $29 trillion constituted over 25% of nominal global economic output, or 15% at purchasing power parity (PPP). From 1983 to 2008, U.S. real compounded annual GDP growth was 3.3%, compared to a 2.3% weighted average for the rest of the G7. The country ranks first in the world by nominal GDP, second when adjusted for purchasing power, and ninth by PPP-adjusted GDP per capita. In February 2024, the total U.S. federal government debt was $34.4 trillion. Of the world's 500 largest companies by revenue, 138 were headquartered in the U.S. in 2025, the highest number of any country. The U.S.
dollar is the currency most used in international transactions and the world's foremost reserve currency, backed by the country's dominant economy, its military, the petrodollar system, its large U.S. Treasuries market, and the linked eurodollar market. Several countries use it as their official currency, and in others it is the de facto currency. The U.S. has free trade agreements with several countries, including the USMCA with Canada and Mexico. Although the United States has reached a post-industrial level of economic development and is often described as having a service economy, it remains a major industrial power; in 2024, the U.S. manufacturing sector was the world's second-largest by value output after China's. New York City is the world's principal financial center, and its metropolitan area is the world's largest metropolitan economy. The New York Stock Exchange and Nasdaq, both located in New York City, are the world's two largest stock exchanges by market capitalization and trade volume. The United States is at the forefront of technological advancement and innovation in many economic fields, especially in artificial intelligence; electronics and computers; pharmaceuticals; and medical, aerospace, and military equipment. The country's economy is fueled by abundant natural resources, a well-developed infrastructure, and high productivity. The largest trading partners of the United States are the European Union, Mexico, Canada, China, Japan, South Korea, the United Kingdom, Vietnam, India, and Taiwan. The United States is the world's largest importer and second-largest exporter.[u] It is by far the world's largest exporter of services. Americans have the highest average household and employee income among OECD member states, and the fourth-highest median household income in 2023, up from sixth-highest in 2013. With personal consumption expenditures of over $18.5 trillion in 2023, the U.S. has a heavily consumer-driven economy and is the world's largest consumer market. The U.S. ranked first in the number of dollar billionaires and millionaires in 2023, with 735 billionaires and nearly 22 million millionaires.

Wealth in the United States is highly concentrated; in 2011, the richest 10% of the adult population owned 72% of the country's household wealth, while the bottom 50% owned just 2%. U.S. wealth inequality has increased substantially since the late 1980s, and income inequality in the U.S. reached a record high in 2019. In 2024, the country had some of the highest wealth and income inequality levels among OECD countries. Since the 1970s, there has been a decoupling of U.S. wage gains from worker productivity. In 2016, the top fifth of earners took home more than half of all income, giving the U.S. one of the widest income distributions among OECD countries. There were about 771,480 homeless persons in the U.S. in 2024. In 2022, 6.4 million children experienced food insecurity. Feeding America estimates that around one in five, or approximately 13 million, children experience hunger in the U.S. and do not know where or when they will get their next meal. Also in 2022, about 37.9 million people, or 11.5% of the U.S. population, were living in poverty. The United States has a smaller welfare state and redistributes less income through government action than most other high-income countries. It is the only advanced economy that does not guarantee its workers paid vacation nationally and one of a few countries in the world without federal paid family leave as a legal right.
The United States has a higher percentage of low-income workers than almost any other developed country, largely because of a weak collective bargaining system and lack of government support for at-risk workers. The United States has been a leader in technological innovation since the late 19th century and scientific research since the mid-20th century. Methods for producing interchangeable parts and the establishment of a machine tool industry enabled the large-scale manufacturing of U.S. consumer products in the late 19th century. By the early 20th century, factory electrification, the introduction of the assembly line, and other labor-saving techniques created the system of mass production. In the 21st century, the United States continues to be one of the world's foremost scientific powers, though China has emerged as a major competitor in many fields. The U.S. has the highest research and development expenditures of any country and ranks ninth as a percentage of GDP. In 2022, the United States had the second-highest number of published scientific papers, after China. In 2021, the U.S. ranked second (also after China) by the number of patent applications, and third by trademark and industrial design applications (after China and Germany), according to World Intellectual Property Indicators. In 2025, the United States ranked third (after Switzerland and Sweden) in the Global Innovation Index. The United States is considered to be a world leader in the development of artificial intelligence technology. In 2023, the United States was ranked the second most technologically advanced country in the world (after South Korea) by Global Finance magazine.

The United States has maintained a space program since the late 1950s, beginning with the establishment of the National Aeronautics and Space Administration (NASA) in 1958. NASA's Apollo program (1961–1972) achieved the first crewed Moon landing with the 1969 Apollo 11 mission; it remains one of the agency's most significant milestones. Other major endeavors by NASA include the Space Shuttle program (1981–2011), the Voyager program (1972–present), the Hubble and James Webb space telescopes (launched in 1990 and 2021, respectively), and the multi-mission Mars Exploration Program (Spirit and Opportunity, Curiosity, and Perseverance). NASA is one of five agencies collaborating on the International Space Station (ISS); U.S. contributions to the ISS include several modules, among them Destiny (2001), Harmony (2007), and Tranquility (2010), as well as ongoing logistical and operational support. The United States' private sector dominates the global commercial spaceflight industry. Prominent American spaceflight contractors include Blue Origin, Boeing, Lockheed Martin, Northrop Grumman, and SpaceX. NASA programs such as the Commercial Crew Program, Commercial Resupply Services, Commercial Lunar Payload Services, and NextSTEP have facilitated growing private-sector involvement in American spaceflight.

In 2023, the United States received approximately 84% of its energy from fossil fuels; its largest source of energy was petroleum (38%), followed by natural gas (36%), renewable sources (9%), coal (9%), and nuclear power (9%). In 2022, the United States constituted about 4% of the world's population but consumed around 16% of the world's energy. The U.S. ranks as the second-highest emitter of greenhouse gases, behind China. The U.S. is the world's largest producer of nuclear power, generating around 30% of the world's nuclear electricity.
It also has the highest number of nuclear power reactors of any country. As of 2024, the U.S. plans to triple its nuclear power capacity by 2050. The United States' 4 million miles (6.4 million kilometers) of road network, owned almost entirely by state and local governments, is the longest in the world. The extensive Interstate Highway System that connects all major U.S. cities is funded mostly by the federal government but maintained by state departments of transportation. The system is further extended by state highways and some private toll roads. In 2022, the U.S. was among the top ten countries in vehicle ownership per capita, with 850 vehicles per 1,000 people. A 2022 study found that 76% of U.S. commuters drive alone and 14% ride a bicycle, including bike owners and users of bike-sharing networks. About 11% use some form of public transportation. Public transportation in the United States is well developed in the largest urban areas, notably New York City, Washington, D.C., Boston, Philadelphia, Chicago, and San Francisco; otherwise, coverage is generally less extensive than in most other developed countries. The U.S. also has many relatively car-dependent localities. Long-distance intercity travel is provided primarily by airlines, but travel by rail is more common along the Northeast Corridor, the only high-speed rail line in the U.S. that meets international standards. Amtrak, the country's government-sponsored national passenger rail company, has a relatively sparse network compared to that of Western European countries. Service is concentrated in the Northeast, California, the Midwest, the Pacific Northwest, and Virginia/Southeast. The United States has an extensive air transportation network. U.S. civilian airlines are all privately owned. The three largest airlines in the world, by total number of passengers carried, are U.S.-based; American Airlines became the global leader after its 2013 merger with US Airways. Of the world's 50 busiest airports, 16 are in the United States, including five of the top ten. The world's busiest airport by passenger volume is Hartsfield–Jackson Atlanta International in Atlanta, Georgia. In 2022, most of the 19,969 U.S. airports were owned and operated by local government authorities, and some were privately owned. Some 5,193 are designated as "public use", including for general aviation. The Transportation Security Administration (TSA) has provided security at most major airports since 2001. The country's rail transport network, the longest in the world at 182,412.3 mi (293,564.2 km), handles mostly freight (in contrast to more passenger-centered rail in Europe). Because they are often privately owned operations, U.S. railroads lag behind those of the rest of the world in terms of electrification. The country's inland waterways are the world's fifth-longest, totaling 25,482 mi (41,009 km). They are used extensively for freight, recreation, and a small amount of passenger traffic. Of the world's 50 busiest container ports, four are located in the United States, with the busiest in the country being the Port of Los Angeles.

Demographics

The U.S. Census Bureau reported 331,449,281 residents on April 1, 2020,[v] making the United States the third-most-populous country in the world, after India and China. The Census Bureau's official 2025 population estimate was 341,784,857, an increase of 3.1% since the 2020 census. According to the Bureau's U.S. Population Clock, on July 1, 2024, the U.S.
population had a net gain of one person every 16 seconds, or about 5,400 people per day (a figure checked below). In 2023, 51% of Americans age 15 and over were married, 6% were widowed, 10% were divorced, and 34% had never been married. In 2023, the total fertility rate for the U.S. stood at 1.6 children per woman; in 2019, at 23%, the country had the world's highest rate of children living in single-parent households. Most Americans live in the suburbs of major metropolitan areas. The United States has a diverse population; 37 ancestry groups have more than one million members. White Americans with ancestry from Europe, the Middle East, or North Africa form the largest racial and ethnic group at 57.8% of the United States population. Hispanic and Latino Americans form the second-largest group and are 18.7% of the United States population. African Americans constitute the country's third-largest ancestry group and are 12.1% of the total U.S. population. Asian Americans are the country's fourth-largest group, composing 5.9% of the United States population. The country's 3.7 million Native Americans account for about 1%, and some 574 native tribes are recognized by the federal government. In 2024, the median age of the United States population was 39.1 years. While many languages and dialects are spoken in the United States, English is by far the most commonly spoken and written. English is the de facto official language of the United States, and in 2025 Executive Order 14224 declared English official. However, the U.S. has never had a de jure official language, as Congress has never passed a law designating English as official for all three federal branches. Some laws, such as U.S. naturalization requirements, nonetheless standardize English. Twenty-eight states and the United States Virgin Islands have laws that designate English as the sole official language; 19 states and the District of Columbia have no official language. Three states and four U.S. territories have recognized local or indigenous languages in addition to English: Hawaii (Hawaiian), Alaska (twenty Native languages),[w] South Dakota (Sioux), American Samoa (Samoan), Puerto Rico (Spanish), Guam (Chamorro), and the Northern Mariana Islands (Carolinian and Chamorro). In total, 169 Native American languages are spoken in the United States. In Puerto Rico, Spanish is more widely spoken than English. According to the American Community Survey (2020), some 245.4 million people in the U.S. age five and older spoke only English at home. About 41.2 million spoke Spanish at home, making it the second most commonly used language. Other languages spoken at home by one million people or more include Chinese (3.40 million), Tagalog (1.71 million), Vietnamese (1.52 million), Arabic (1.39 million), French (1.18 million), Korean (1.07 million), and Russian (1.04 million). German, spoken at home by 1 million people in 2010, fell to 857,000 speakers in 2020. America's immigrant population is by far the world's largest in absolute terms. In 2022, there were 87.7 million immigrants and U.S.-born children of immigrants in the United States, accounting for nearly 27% of the overall U.S. population. In 2017, out of the U.S. foreign-born population, some 45% (20.7 million) were naturalized citizens, 27% (12.3 million) were lawful permanent residents, 6% (2.2 million) were temporary lawful residents, and 23% (10.5 million) were unauthorized immigrants.
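The Census Bureau figures quoted above are mutually consistent; as a quick arithmetic check (illustrative only, not part of the cited statistics), the daily rate follows from the per-second rate, and the 2025 estimate matches the stated 3.1% growth over the 2020 count:

\[
\frac{86{,}400\ \text{seconds/day}}{16\ \text{seconds/person}} = 5{,}400\ \text{people/day},
\qquad
\frac{341{,}784{,}857}{331{,}449{,}281} - 1 \approx 0.031 = 3.1\%.
\]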
In 2019, the top countries of origin for immigrants were Mexico (24% of immigrants), India (6%), China (5%), the Philippines (4.5%), and El Salvador (3%). In fiscal year 2022, over one million immigrants (most of whom entered through family reunification) were granted legal residence. The undocumented immigrant population in the U.S. reached a record high of 14 million in 2023. The First Amendment guarantees the free exercise of religion in the country and forbids Congress from passing laws respecting its establishment. Religious practice is widespread, among the most diverse in the world, and profoundly vibrant. The country has the world's largest Christian population, which includes the fourth-largest population of Catholics. Other notable faiths include Judaism, Buddhism, Hinduism, Islam, New Age, and Native American religions. Religious practice varies significantly by region. "Ceremonial deism" is common in American culture. The overwhelming majority of Americans believe in a higher power or spiritual force, engage in spiritual practices such as prayer, and consider themselves religious or spiritual. In the Southern United States' "Bible Belt", evangelical Protestantism plays a significant role culturally; New England and the Western United States tend to be more secular. Mormonism, a Restorationist movement founded in the U.S. in 1830, is the predominant religion in Utah and a major religion in Idaho. About 82% of Americans live in metropolitan areas, particularly in suburbs; about half of those reside in cities with populations over 50,000. In 2022, 333 incorporated municipalities had populations over 100,000, nine cities had more than one million residents, and four cities—New York City, Los Angeles, Chicago, and Houston—had populations exceeding two million. Many U.S. metropolitan populations are growing rapidly, particularly in the South and West.

According to the Centers for Disease Control and Prevention (CDC), average U.S. life expectancy at birth reached 79.0 years in 2024, its highest recorded level and an increase of 0.6 years over 2023. The CDC attributed the improvement to a significant fall in the number of fatal drug overdoses in the country, noting that "heart disease continues to be the leading cause of death in the United States, followed by cancer and unintentional injuries." In 2024, life expectancy at birth for American men rose to 76.5 years (+0.7 years compared to 2023), while life expectancy for women was 81.4 years (+0.3 years). Starting in 1998, life expectancy in the U.S. fell behind that of other wealthy industrialized countries, and Americans' "health disadvantage" gap has been increasing ever since. The Commonwealth Fund reported in 2020 that the U.S. had the highest suicide rate among high-income countries. Approximately one-third of the U.S. adult population is obese and another third is overweight. The U.S. healthcare system far outspends that of any other country, measured both in per capita spending and as a percentage of GDP, but attains worse healthcare outcomes when compared to peer countries for reasons that are debated. The United States is the only developed country without a system of universal healthcare, and a significant proportion of its population does not carry health insurance. Government-funded healthcare coverage for the poor (Medicaid) and for those age 65 and older (Medicare) is available to Americans who meet the programs' income or age qualifications.
In 2010, President Barack Obama signed the Patient Protection and Affordable Care Act into law.[x] Abortion in the United States is not federally protected, and is illegal or restricted in 17 states. American primary and secondary education, known in the U.S. as K–12 ("kindergarten through 12th grade"), is decentralized. School systems are operated by state, territorial, and sometimes municipal governments and regulated by the U.S. Department of Education. In general, children are required to attend school or an approved homeschool from the age of five or six (kindergarten or first grade) until they are 18 years old. This often brings students through the 12th grade, the final year of a U.S. high school, but some states and territories allow them to leave school earlier, at age 16 or 17. The U.S. spends more on education per student than any other country, an average of $18,614 per year per public elementary and secondary school student in 2020–2021. Among Americans age 25 and older, 92.2% graduated from high school, 62.7% attended some college, 37.7% earned a bachelor's degree, and 14.2% earned a graduate degree. The U.S. literacy rate is near-universal. The U.S. has produced the most Nobel Prize winners of any country, with 411 (having won 413 awards). U.S. tertiary or higher education has earned a global reputation. Many of the world's top universities, as listed by various ranking organizations, are in the United States, including 19 of the top 25. American higher education is dominated by state university systems, although the country's many private universities and colleges enroll about 20% of all American students. Local community colleges generally offer open admissions, lower tuition, and coursework leading to a two-year associate degree or a non-degree certificate. In public expenditure on higher education, the U.S. spends more per student than the OECD average, and Americans spend more than any other nation in combined public and private spending. Colleges and universities directly funded by the federal government do not charge tuition and are limited to military personnel and government employees, including the U.S. service academies, the Naval Postgraduate School, and military staff colleges. Despite some student loan forgiveness programs in place, student loan debt increased by 102% between 2010 and 2020, and exceeded $1.7 trillion in 2022.

Culture and society

The United States is home to a wide variety of ethnic groups, traditions, and customs. The country has been described as having the values of individualism and personal autonomy, as well as a strong work ethic and competitiveness. Voluntary altruism towards others also plays a major role; according to a 2016 study by the Charities Aid Foundation, Americans donated 1.44% of total GDP to charity—the highest rate in the world by a large margin. Americans have traditionally been characterized by a unifying political belief in an "American Creed" emphasizing consent of the governed, liberty, equality under the law, democracy, social equality, property rights, and a preference for limited government. The U.S. has acquired significant hard and soft power through its diplomatic influence, economic power, military alliances, and cultural exports such as American movies, music, video games, sports, and food. The influence that the United States exerts on other countries through soft power is referred to as Americanization.
Nearly all present Americans or their ancestors came from Europe, Africa, or Asia (the "Old World") within the past five centuries. Mainstream American culture is a Western culture largely derived from the traditions of European immigrants with influences from many other sources, such as traditions brought by slaves from Africa. More recent immigration from Asia and especially Latin America has added to a cultural mix that has been described as a homogenizing melting pot, and a heterogeneous salad bowl, with immigrants contributing to, and often assimilating into, mainstream American culture. Under the First Amendment to the Constitution, the United States is considered to have the strongest protections of free speech of any country. Flag desecration, hate speech, blasphemy, and lese majesty are all forms of protected expression. A 2016 Pew Research Center poll found that Americans were the most supportive of free expression of any polity measured. Additionally, they are the "most supportive of freedom of the press and the right to use the Internet without government censorship". The U.S. is a socially progressive country with permissive attitudes surrounding human sexuality. LGBTQ rights in the United States are among the most advanced by global standards. The American Dream, or the perception that Americans enjoy high levels of social mobility, plays a key role in attracting immigrants. Whether this perception is accurate has been a topic of debate. While mainstream culture holds that the United States is a classless society, scholars identify significant differences between the country's social classes, affecting socialization, language, and values. Americans tend to greatly value socioeconomic achievement, but being ordinary or average is promoted by some as a noble condition as well. The National Foundation on the Arts and the Humanities is an agency of the United States federal government that was established in 1965 with the purpose to "develop and promote a broadly conceived national policy of support for the humanities and the arts in the United States, and for institutions which preserve the cultural heritage of the United States." It is composed of four sub-agencies: the National Endowment for the Arts, the National Endowment for the Humanities, the Federal Council on the Arts and the Humanities, and the Institute of Museum and Library Services. Colonial American authors were influenced by John Locke and other Enlightenment philosophers. The American Revolutionary Period (1765–1783) is notable for the political writings of Benjamin Franklin, Alexander Hamilton, Thomas Paine, and Thomas Jefferson. Shortly before and after the Revolutionary War, the newspaper rose to prominence, filling a demand for anti-British national literature. An early novel is William Hill Brown's The Power of Sympathy, published in 1789. Writer and critic John Neal in the early- to mid-19th century helped advance America toward a unique literature and culture by criticizing predecessors such as Washington Irving for imitating their British counterparts, and by influencing writers such as Edgar Allan Poe, who took American poetry and short fiction in new directions. Ralph Waldo Emerson and Margaret Fuller pioneered the influential Transcendentalism movement; Henry David Thoreau, author of Walden, was influenced by this movement. The conflict surrounding abolitionism inspired writers, like Harriet Beecher Stowe, and authors of slave narratives, such as Frederick Douglass. Nathaniel Hawthorne's The Scarlet Letter (1850) explored the dark side of American history, as did Herman Melville's Moby-Dick (1851). 
Major American poets of the 19th century American Renaissance include Walt Whitman, Melville, and Emily Dickinson. Mark Twain was the first major American writer to be born in the West. Henry James achieved international recognition with novels like The Portrait of a Lady (1881). As literacy rates rose, periodicals published more stories centered around industrial workers, women, and the rural poor. Naturalism, regionalism, and realism were the major literary movements of the period. While modernism generally took on an international character, modernist authors working within the United States more often rooted their work in specific regions, peoples, and cultures. Following the Great Migration to northern cities, African-American and black West Indian authors of the Harlem Renaissance developed an independent tradition of literature that rebuked a history of inequality and celebrated black culture. An important cultural export during the Jazz Age, these writings were a key influence on Négritude, a philosophy emerging in the 1930s among francophone writers of the African diaspora. In the 1950s, an ideal of homogeneity led many authors to attempt to write the Great American Novel, while the Beat Generation rejected this conformity, using styles that elevated the impact of the spoken word over mechanics to describe drug use, sexuality, and the failings of society. Contemporary literature is more pluralistic than in previous eras, with the closest thing to a unifying feature being a trend toward self-conscious experiments with language. Twelve American laureates have won the Nobel Prize in Literature. Media in the United States is broadly uncensored, with the First Amendment providing significant protections, as reiterated in New York Times Co. v. United States. The four major broadcasters in the U.S. are the National Broadcasting Company (NBC), Columbia Broadcasting System (CBS), American Broadcasting Company (ABC), and Fox Broadcasting Company (Fox). The four major broadcast television networks are all commercial entities. The U.S. cable television system offers hundreds of channels catering to a variety of niches. In 2021, about 83% of Americans over age 12 listened to broadcast radio, while about 40% listened to podcasts. In the prior year, there were 15,460 licensed full-power radio stations in the U.S. according to the Federal Communications Commission (FCC). Much of the public radio broadcasting is supplied by National Public Radio (NPR), incorporated in February 1970 under the Public Broadcasting Act of 1967. U.S. newspapers with a global reach and reputation include The Wall Street Journal, The New York Times, The Washington Post, and USA Today. About 800 publications are produced in Spanish. With few exceptions, newspapers are privately owned, either by large chains such as Gannett or McClatchy, which own dozens or even hundreds of newspapers; by small chains that own a handful of papers; or, in an increasingly rare situation, by individuals or families. Major cities often have alternative newspapers to complement the mainstream daily papers, such as The Village Voice in New York City and LA Weekly in Los Angeles. The five most-visited websites in the world are Google, YouTube, Facebook, Instagram, and ChatGPT—all of them American-owned. Other popular platforms used include X (formerly Twitter) and Amazon. In 2025, the U.S. was the world's second-largest video game market by revenue (after China). In 2015, the U.S. 
video game industry consisted of 2,457 companies that supported around 220,000 jobs and generated $30.4 billion in revenue. There are 444 game publishers, developers, and hardware companies in California alone. According to the Game Developers Conference (GDC), the U.S. is the top location for video game development, with 58% of the world's game developers based there in 2025. The United States is well known for its theater. Mainstream theater in the United States derives from the old European theatrical tradition and has been heavily influenced by the British theater. By the middle of the 19th century, America had created new distinct dramatic forms in the Tom Shows, the showboat theater and the minstrel show. The central hub of the American theater scene is the Theater District in Manhattan, with its divisions of Broadway, off-Broadway, and off-off-Broadway. Many movie and television celebrities have gotten their big break working in New York productions. Outside New York City, many cities have professional regional or resident theater companies that produce their own seasons. The biggest-budget theatrical productions are musicals. U.S. theater has an active community theater culture. The Tony Awards recognize excellence in live Broadway theater and are presented at an annual ceremony in Manhattan. The awards are given for Broadway productions and performances. One is also given for regional theater. Several discretionary non-competitive awards are given as well, including a Special Tony Award, the Tony Honors for Excellence in Theatre, and the Isabelle Stevenson Award. Folk art in colonial America grew out of artisanal craftsmanship in communities that allowed commonly trained people to individually express themselves. It was distinct from Europe's tradition of high art, which was less accessible and generally less relevant to early American settlers. Cultural movements in art and craftsmanship in colonial America generally lagged behind those of Western Europe. For example, the prevailing medieval style of woodworking and primitive sculpture became integral to early American folk art, despite the emergence of Renaissance styles in England in the late 16th and early 17th centuries. The new English styles would have been early enough to make a considerable impact on American folk art, but American styles and forms had already been firmly adopted. Not only did styles change slowly in early America, but there was a tendency for rural artisans there to continue their traditional forms longer than their urban counterparts did—and far longer than those in Western Europe. The Hudson River School was a mid-19th-century movement in the visual arts tradition of European naturalism. The 1913 Armory Show in New York City, an exhibition of European modernist art, shocked the public and transformed the U.S. art scene. American Realism and American Regionalism sought to reflect and give America new ways of looking at itself. Georgia O'Keeffe, Marsden Hartley, and others experimented with new and individualistic styles, which would become known as American modernism. Major artistic movements such as the abstract expressionism of Jackson Pollock and Willem de Kooning and the pop art of Andy Warhol and Roy Lichtenstein developed largely in the United States. Major photographers include Alfred Stieglitz, Edward Steichen, Dorothea Lange, Edward Weston, James Van Der Zee, Ansel Adams, and Gordon Parks. 
The tide of modernism and then postmodernism has brought global fame to American architects, including Frank Lloyd Wright, Philip Johnson, and Frank Gehry. The Metropolitan Museum of Art in Manhattan is the largest art museum in the United States and the fourth-largest in the world. American folk music encompasses numerous music genres, variously known as traditional music, traditional folk music, contemporary folk music, or roots music. Many traditional songs have been sung within the same family or folk group for generations, and sometimes trace back to such origins as the British Isles, mainland Europe, or Africa. The rhythmic and lyrical styles of African-American music in particular have influenced American music. Banjos were brought to America through the slave trade. Minstrel shows incorporating the instrument into their acts led to its increased popularity and widespread production in the 19th century. The electric guitar, invented in the 1930s and mass-produced by the 1940s, had an enormous influence on popular music, in particular due to the development of rock and roll. The synthesizer, turntablism, and electronic music were also largely developed in the U.S. Elements from folk idioms such as the blues and old-time music were adopted and transformed into popular genres with global audiences. Jazz grew from blues and ragtime in the early 20th century, developing from the innovations and recordings of composers such as W.C. Handy and Jelly Roll Morton. Louis Armstrong and Duke Ellington increased its popularity early in the 20th century. Country music developed in the 1920s, bluegrass and rhythm and blues in the 1940s, and rock and roll in the 1950s. In the 1960s, Bob Dylan emerged from the folk revival to become one of the country's most celebrated songwriters. The musical forms of punk and hip hop both originated in the United States in the 1970s. The United States has the world's largest music market, with a total retail value of $15.9 billion in 2022. Most of the world's major record companies are based in the U.S.; they are represented by the Recording Industry Association of America (RIAA). Mid-20th-century American pop stars, such as Frank Sinatra and Elvis Presley, became global celebrities and best-selling music artists, as have artists of the late 20th century, such as Michael Jackson, Madonna, Whitney Houston, and Mariah Carey, and of the early 21st century, such as Eminem, Britney Spears, Lady Gaga, Katy Perry, Taylor Swift and Beyoncé. The United States has the world's largest apparel market by revenue. Apart from professional business attire, American fashion is eclectic and predominantly informal. Americans' diverse cultural roots are reflected in their clothing; however, sneakers, jeans, T-shirts, and baseball caps are emblematic of American styles. New York, with its Fashion Week, is considered to be one of the "Big Four" global fashion capitals, along with Paris, Milan, and London. A study found that proximity to Manhattan's Garment District has been synonymous with American fashion since the industry's inception there in the early 20th century. A number of well-known designer labels, among them Tommy Hilfiger, Ralph Lauren, Tom Ford and Calvin Klein, are headquartered in Manhattan. Some labels cater to niche markets, such as preteens. New York Fashion Week is one of the most influential fashion shows in the world, and is held twice each year in Manhattan; the annual Met Gala, also in Manhattan, has been called the fashion world's "biggest night". The U.S. 
film industry has a worldwide influence and following. Hollywood, a district in central Los Angeles, the nation's second-most populous city, is also metonymous for the American filmmaking industry. The major film studios of the United States are the primary source of the most commercially successful movies in the world. Largely centered in the New York City region from its beginnings in the late 19th century through the first decades of the 20th century, the U.S. film industry has since been primarily based in and around Hollywood. Nonetheless, American film companies have been subject to the forces of globalization in the 21st century, and an increasing number of films are made elsewhere. The Academy Awards, popularly known as "the Oscars", have been held annually by the Academy of Motion Picture Arts and Sciences since 1929, and the Golden Globe Awards have been held annually since January 1944. The industry peaked in what is commonly referred to as the "Golden Age of Hollywood", from the early sound period until the early 1960s, with screen actors such as John Wayne and Marilyn Monroe becoming iconic figures. In the 1970s, "New Hollywood", or the "Hollywood Renaissance", was defined by grittier films influenced by French and Italian realist pictures of the post-war period. The 21st century has been marked by the rise of American streaming platforms, which came to rival traditional cinema. Early settlers were introduced by Native Americans to foods such as turkey, sweet potatoes, corn, squash, and maple syrup. Among the most enduring and pervasive examples are variations of the native dish called succotash. Early settlers and later immigrants combined these with foods they were familiar with, such as wheat flour, beef, and milk, to create a distinctive American cuisine. New World crops, especially pumpkin, corn, and potatoes, with turkey as the main course, are part of a shared national menu on Thanksgiving, when many Americans prepare or purchase traditional dishes to celebrate the occasion. Characteristic American dishes such as apple pie, fried chicken, doughnuts, french fries, macaroni and cheese, ice cream, hamburgers, hot dogs, and American pizza derive from the recipes of various immigrant groups. Mexican dishes such as burritos and tacos preexisted the United States in areas later annexed from Mexico, and adaptations of Chinese cuisine as well as pasta dishes freely adapted from Italian sources are all widely consumed. American chefs have had a significant impact on society both domestically and internationally. In 1946, the Culinary Institute of America was founded by Katharine Angell and Frances Roth. This would become the United States' most prestigious culinary school, where many of the most talented American chefs would study prior to successful careers. The United States restaurant industry was projected at $899 billion in sales for 2020, and employed more than 15 million people, representing 10% of the nation's workforce directly. It is the country's second-largest private employer and the third-largest employer overall. The United States is home to over 220 Michelin star-rated restaurants, 70 of which are in New York City. Wine has been produced in what is now the United States since the 1500s, with the first widespread production beginning in what is now New Mexico in 1628. In the modern U.S., wine production is undertaken in all fifty states, with California producing 84 percent of all U.S. wine. 
With more than 1,100,000 acres (4,500 km²) under vine, the United States is the fourth-largest wine-producing country in the world, after Italy, Spain, and France. The classic American diner, a casual restaurant type originally intended for the working class, emerged during the 19th century from converted railroad dining cars made stationary. The diner soon evolved into purpose-built structures whose number expanded greatly in the 20th century. The American fast-food industry developed alongside the nation's car culture. American restaurants developed the drive-in format in the 1920s, which they began to replace with the drive-through format by the 1940s. American fast-food restaurant chains, such as McDonald's, Burger King, Chick-fil-A, Kentucky Fried Chicken, Dunkin' Donuts and many others, have numerous outlets around the world. The most popular spectator sports in the U.S. are American football, basketball, baseball, soccer, and ice hockey. Their premier leagues are, respectively, the National Football League, the National Basketball Association, Major League Baseball, Major League Soccer, and the National Hockey League. All these leagues enjoy wide-ranging domestic media coverage and, except for the MLS, all are considered the preeminent leagues in their respective sports in the world. While most major U.S. sports such as baseball and American football have evolved out of European practices, basketball, volleyball, skateboarding, and snowboarding are American inventions, many of which have become popular worldwide. Lacrosse and surfing arose from Native American and Native Hawaiian activities that predate European contact. The market for professional sports in the United States was approximately $69 billion in July 2013, roughly 50% larger than that of Europe, the Middle East, and Africa combined. American football is by several measures the most popular spectator sport in the United States. Although American football does not have a substantial following in other nations, the NFL does have the highest average attendance (67,254) of any professional sports league in the world. In 2024, the NFL generated over $23 billion, making it the most valuable professional sports league in the United States and the world. Baseball has been regarded as the U.S. "national sport" since the late 19th century. The most-watched individual sports in the U.S. are golf and auto racing, particularly NASCAR and IndyCar. On the collegiate level, earnings for the member institutions exceed $1 billion annually, and college football and basketball attract large audiences, as the NCAA March Madness tournament and the College Football Playoff are some of the most watched national sporting events. In the U.S., the intercollegiate sports level serves as the main feeder system for professional and Olympic sports, with significant exceptions such as Minor League Baseball. This differs greatly from practices in nearly all other countries, where publicly and privately funded sports organizations serve this function. Eight Olympic Games have taken place in the United States. The 1904 Summer Olympics in St. Louis, Missouri, were the first-ever Olympic Games held outside of Europe. The Olympic Games will be held in the U.S. for a ninth time when Los Angeles hosts the 2028 Summer Olympics. U.S. athletes have won a total of 2,968 medals (1,179 gold) at the Olympic Games, the most of any country. 
In other international competition, the United States is home to a number of prestigious events, including the America's Cup, World Baseball Classic, the U.S. Open, and the Masters Tournament. The U.S. men's national soccer team has qualified for eleven World Cups, while the women's national team has won the FIFA Women's World Cup and Olympic soccer tournament four and five times, respectively. The 1999 FIFA Women's World Cup was hosted by the United States. Its final match was attended by 90,185 spectators, setting the world record for the largest women's sporting event crowd at the time. The United States hosted the 1994 FIFA World Cup and will co-host, along with Canada and Mexico, the 2026 FIFA World Cup. See also Notes References External links
========================================
[SOURCE: https://en.wikipedia.org/wiki/List_of_Solar_System_objects_by_size] | [TOKENS: 2014]
Contents List of Solar System objects by size This article includes a list of the most massive known objects of the Solar System and partial lists of smaller objects by observed mean radius. These lists can be sorted according to an object's radius and mass and, for the most massive objects, volume, density, and surface gravity, if these values are available. These lists contain the Sun, the planets, dwarf planets, many of the larger small Solar System bodies (which includes the asteroids), all named natural satellites, and a number of smaller objects of historical or scientific interest, such as comets and near-Earth objects. Many trans-Neptunian objects (TNOs) have been discovered; in many cases their positions in this list are approximate, as there is frequently a large uncertainty in their estimated diameters due to their distance from Earth. There are uncertainties in the figures for mass and radius, and irregularities in the shape and density, with accuracy often depending on how close the object is to Earth or whether it has been visited by a probe. Solar System objects more massive than 10²¹ kg are known or expected to be approximately spherical. Astronomical bodies relax into rounded shapes (spheroids), achieving hydrostatic equilibrium, when their own gravity is sufficient to overcome the structural strength of their material. It was believed that the cutoff for round objects is somewhere between 100 km and 200 km in radius if they have a large amount of ice in their makeup; however, later studies revealed that icy satellites as large as Iapetus (1,470 kilometers in diameter) are not in hydrostatic equilibrium at this time, and a 2019 assessment suggests that many TNOs in the size range of 400–1,000 kilometers may not even be fully solid bodies, much less gravitationally rounded. Objects that are ellipsoids due to their own gravity are here generally referred to as being "round", whether or not they are actually in equilibrium today, while objects that are clearly not ellipsoidal are referred to as being "irregular". Spheroidal bodies typically have some polar flattening due to the centrifugal force from their rotation, and can sometimes even have quite different equatorial diameters (scalene ellipsoids such as Haumea). Unlike bodies such as Haumea, the irregular bodies have a significantly non-ellipsoidal profile, often with sharp edges. There can be difficulty in determining the diameter (within a factor of about 2) for typical objects beyond Saturn (see 2060 Chiron § Physical characteristics for an example). For TNOs there is some confidence in the diameters, but for non-binary TNOs there is no real confidence in the masses/densities. Many TNOs are often just assumed to have Pluto's density of 2.0 g/cm³, but it is just as likely that they have a comet-like density of only 0.5 g/cm³. For example, if a TNO is incorrectly assumed to have a mass of 3.59×10²⁰ kg based on a radius of 350 km with a density of 2 g/cm³ but is later discovered to have a radius of only 175 km with a density of 0.5 g/cm³, its true mass would be only 1.12×10¹⁹ kg. The sizes and masses of many of the moons of Jupiter and Saturn are fairly well known due to numerous observations and interactions of the Galileo and Cassini orbiters; however, many of the moons with a radius less than ≈100 km, such as Jupiter's Himalia, have far more uncertain masses. Further out from Saturn, the sizes and masses of objects are less clear. 
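To make that sensitivity concrete, the following minimal Python sketch reproduces the arithmetic of the TNO example above (the function name and the numbers are illustrative only): mass follows directly from an assumed radius and bulk density via the volume of a sphere.

```python
import math

def mass_from_radius_density(radius_km: float, density_g_cm3: float) -> float:
    """Mass in kg of a uniform sphere with the given radius and bulk density."""
    radius_cm = radius_km * 1e5                          # 1 km = 1e5 cm
    volume_cm3 = (4.0 / 3.0) * math.pi * radius_cm ** 3
    return volume_cm3 * density_g_cm3 / 1000.0           # grams -> kilograms

# The example from the text: the same object under two sets of assumptions.
assumed = mass_from_radius_density(350.0, 2.0)   # ~3.59e20 kg (Pluto-like density)
revised = mass_from_radius_density(175.0, 0.5)   # ~1.12e19 kg (comet-like density)
print(f"{assumed:.3g} kg vs {revised:.3g} kg")   # a factor of 32 apart
```

Because mass scales with the cube of the radius and linearly with density, halving the radius and quartering the density together shrink the estimate by a factor of 32.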
There has not yet been an orbiter around Uranus or Neptune for long-term study of their moons. For the small outer irregular moons of Uranus, such as Sycorax, which were not discovered by the Voyager 2 flyby, even different NASA web pages, such as the National Space Science Data Center and JPL Solar System Dynamics, give somewhat contradictory size and albedo estimates depending on which research paper is being cited. Graphical overview Objects with radii over 400 km The following objects have a nominal mean radius of 400 km or greater. It was once expected that any icy body larger than approximately 200 km in radius was likely to be in hydrostatic equilibrium (HE). However, Ceres (r = 470 km) is the smallest body for which detailed measurements are consistent with hydrostatic equilibrium, whereas Iapetus (r = 735 km) is the largest icy body that has been found to not be in hydrostatic equilibrium. The known icy moons in this range are all ellipsoidal (except Proteus), but trans-Neptunian objects up to 450–500 km radius may be quite porous. For simplicity and comparative purposes, the values are manually calculated assuming that the bodies are all spheres. The size of solid bodies does not include an object's atmosphere. For example, Titan looks bigger than Ganymede, but its solid body is smaller. For the giant planets, the "radius" is defined as the distance from the center at which the atmosphere reaches 1 bar of atmospheric pressure. Because Sedna has no known moons, directly determining its mass (estimated to be from 1.7×10²¹ to 6.1×10²¹ kg) is impossible without sending a probe. Smaller objects by mean radius All imaged icy moons with radii greater than 200 km except Proteus are clearly round, although those under 400 km that have had their shapes carefully measured are not in hydrostatic equilibrium. The known densities of TNOs in this size range are remarkably low (1–1.2 g/cm³), implying that the objects retain significant internal porosity from their formation and were never gravitationally compressed into fully solid bodies. Many intrinsically bright TNOs like 2018 VG18 and 2017 OF201 do not have directly measured sizes (e.g. via stellar occultation and radiometry of thermal emission), so their sizes are estimated based on an assumed albedo. In the list below, TNOs with unmeasured sizes are only listed if they have been mentioned in press releases and the scientific literature. Legend: This list contains a selection of objects estimated to be between 100 and 199 km in radius (200 and 399 km in diameter), with 200 km having been nicknamed the "potato radius" by astronomers. The largest of these may have a hydrostatic-equilibrium shape, but most are irregular (i.e., potato-shaped). The mass column switches from units of 10²¹ kg to 10¹⁸ kg (Zg). Main-belt asteroids have orbital elements constrained by (2.0 AU < a < 3.2 AU; q > 1.666 AU) according to JPL Solar System Dynamics (JPLSSD). Many TNOs are omitted from this list as their sizes are poorly known. This list contains a selection of objects between 50 and 99 km in radius (100 km to 199 km in average diameter). The listed objects currently include most objects in the asteroid belt and moons of the giant planets in this size range, but many newly discovered objects in the outer Solar System are missing, such as those included in the following reference. Asteroid spectral types are mostly Tholen, but some might be SMASS. This list includes only a few examples, since there are about 589 asteroids in the asteroid belt with a measured radius between 20 and 49 km. 
Many thousands of objects of this size range have yet to be discovered in the trans-Neptunian region. The number of digits is not an endorsement of significant figures. The table switches from ×10¹⁸ kg to ×10¹⁵ kg (Eg). Most mass values of asteroids are assumed. This list contains some examples of Solar System objects between 1 and 19 km in radius. This is a common size for asteroids, comets and irregular moons. This list contains examples of objects below 1 km in mean radius; irregular bodies can have a longer chord in some directions, which the mean radius averages out. In the asteroid belt alone there are estimated to be between 1.1 and 1.9 million objects with a radius above 0.5 km, many of which are in the range 0.5–1.0 km. Countless more have a radius below 0.5 km. Very few objects in this size range have been explored or even imaged. The exceptions are objects that have been visited by a probe, or have passed close enough to Earth to be imaged. Radius is the mean geometric radius. The number of digits is not an endorsement of significant figures. The mass scale shifts from ×10¹⁵ kg to 10⁹ kg, which is equivalent to one billion kg or 10¹² grams (teragram, Tg). Currently most of the objects of mass between 10⁹ kg and 10¹² kg (less than 1,000 teragrams) listed here are near-Earth asteroids (NEAs). The Aten asteroid 1994 WR12 has less mass than the Great Pyramid of Giza, 5.9×10⁹ kg. For more about very small objects in the Solar System, see meteoroid, micrometeoroid, cosmic dust, and interplanetary dust cloud. (See also Visited/imaged bodies.) Gallery See also Notes References Further reading External links
========================================
[SOURCE: https://en.wikipedia.org/wiki/Video_streaming] | [TOKENS: 5318]
Contents Streaming television Streaming television is the digital distribution of television media content, such as films and series, over Internet-based streaming media platforms. In contrast to over-the-air, cable, and satellite transmissions, or IPTV service, streaming television is provided as over-the-top media (OTT). Television content includes productions made by or for OTT services, as well as content acquired by them through licensing agreements. The length of a streaming television series episode can be anywhere from thirty to sixty minutes (some episodes may be longer). By 2023, streaming television represented 38% of global TV viewing with 1.8 billion subscriptions to streaming platforms. In 2024, streaming television became "the dominant form of TV viewing" in the United States. It surpassed cable and network television viewing in 2025. Of the top streaming platforms, Netflix had over 325 million subscribers as of December 2025, making it the most popular global streaming television platform. History Up until the 1990s, it was not thought possible that a television show could be squeezed into the limited telecommunication bandwidth of a copper telephone cable to provide a streaming service of acceptable quality, as the required bandwidth of a digital television signal was perceived in the mid-1990s to be around 200 Mbit/s, 2,000 times greater than the bandwidth of a speech signal over a copper telephone wire. Streaming services started as a result of two major technological developments: MPEG (motion-compensated DCT) video compression and asymmetric digital subscriber line (ADSL) data communication. By the year 2000, a television broadcast could be compressed to 2 Mbit/s, but most consumers still had little opportunity to obtain greater than 1 Mbit/s connection speeds. The first worldwide live-streaming event was a live radio broadcast of a baseball game between the Seattle Mariners and the New York Yankees streamed by ESPN SportsZone on September 5, 1995. The mid-2000s were the beginning of television programs becoming available via the Internet. In November 2003, Angelos Diamantoulakis launched the streaming television service TVonline, making it the world's first television station to produce and broadcast content exclusively over the internet via a web page. The online video platform YouTube was launched in early 2005, allowing users to share illegally posted television programs. YouTube co-founder Jawed Karim said the inspiration for YouTube first came from Janet Jackson's role in the 2004 Super Bowl incident, when her breast was exposed during her performance, and later from the 2004 Indian Ocean tsunami. Karim could not easily find video clips of either event online, which led to the idea of a video sharing site. Apple's iTunes service also began offering select television programs and series in 2005, available for download after direct payment. During the mid-2000s, streaming media was generally based on UDP, whereas the majority of the Internet was built on HTTP and content delivery networks (CDNs). In 2007, HTTP-based adaptive streaming was introduced by Move Networks; this new technology would be a significant change for the industry. Within a year of its introduction, companies such as Microsoft and Netflix developed their own HTTP-based streaming technologies. In 2009, Apple launched HTTP Live Streaming (HLS). 
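The key idea behind HLS and the similar HTTP-based schemes mentioned above is that the server publishes a plain-text "master playlist" advertising the same content encoded at several bitrates, and the client switches between variants as its measured throughput changes. The sketch below shows the general shape of the mechanism in Python; the playlist contents, URIs, and bitrates are invented for illustration, and real parsers must also handle quoted attributes that this toy version ignores.

```python
# A toy HLS-style master playlist: each EXT-X-STREAM-INF line advertises one
# variant encoding of the same content at a different bandwidth.
MASTER_PLAYLIST = """\
#EXTM3U
#EXT-X-STREAM-INF:BANDWIDTH=800000,RESOLUTION=640x360
low/index.m3u8
#EXT-X-STREAM-INF:BANDWIDTH=2500000,RESOLUTION=1280x720
mid/index.m3u8
#EXT-X-STREAM-INF:BANDWIDTH=6000000,RESOLUTION=1920x1080
high/index.m3u8
"""

def parse_variants(playlist: str) -> list[tuple[int, str]]:
    """Extract (bandwidth in bit/s, variant URI) pairs from a master playlist."""
    variants, pending_bw = [], None
    for line in playlist.splitlines():
        if line.startswith("#EXT-X-STREAM-INF:"):
            attrs = dict(kv.split("=", 1) for kv in line.split(":", 1)[1].split(","))
            pending_bw = int(attrs["BANDWIDTH"])
        elif line and not line.startswith("#") and pending_bw is not None:
            variants.append((pending_bw, line))  # the URI follows its header line
            pending_bw = None
    return variants

def pick_variant(variants: list[tuple[int, str]], measured_bps: int) -> tuple[int, str]:
    """Adaptive step: highest-bandwidth variant that fits measured throughput."""
    fitting = [v for v in variants if v[0] <= measured_bps]
    return max(fitting) if fitting else min(variants)

ladder = parse_variants(MASTER_PLAYLIST)
print(pick_variant(ladder, 3_000_000))  # -> (2500000, 'mid/index.m3u8')
```

Running the selection once per downloaded segment is what lets the stream step down gracefully when throughput drops, rather than stalling.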
Television networks and other independent services began creating sites where shows and programs could be streamed online. Amazon Prime Video began in the United States as Amazon Unbox in 2006 (but did not launch worldwide until 2016). Netflix, a website originally created for DVD rentals and sales, began providing streaming content in 2007. The first generation Apple TV was released in 2007. In 2008, Hulu, owned by NBC and Fox, was launched, followed by tv.com in 2009, owned by CBS. Digital media players also began to become available to the public during this time. In 2008, the first generation Roku streaming device was announced. These digital media players have continued to be updated and new generations released. In 2008, the International Academy of Web Television, headquartered in Los Angeles, was formed to organize and support television actors, authors, executives, and producers in streaming television and web series. The organization also administers the selection of winners for the Streamy Awards. In 2009, the Los Angeles Web Series Festival was founded. Several other festivals and award shows have been dedicated solely to web content, including the Indie Series Awards and the Vancouver Web Series Festival. In 2010, Adobe launched HTTP Dynamic Streaming (HDS). In addition, HTTP-based adaptive streaming was chosen for important streaming events such as Roland Garros, Wimbledon, and the Vancouver and London Olympic Games, among many others, and for premium on-demand services (Netflix, Amazon Instant Video, etc.). The growth of streaming services required new standardization; therefore, in 2012, with contributions from Apple, Netflix, Microsoft, and other companies, Dynamic Adaptive Streaming over HTTP (MPEG-DASH) was published as the new HTTP-based adaptive streaming standard. Smart TVs took over the television market after 2010 and continue to partner with new providers to bring streaming video to even more users. As of 2015, smart TVs are the only type of middle- to high-end television being produced. Amazon's version of a digital media player, Amazon Fire TV, was not offered to the public until 2014. Access to television programming has evolved from computer and television access to include mobile devices such as smartphones and tablet computers. Corresponding apps for mobile devices started to become available via app stores in 2008, but they grew in popularity in the 2010s with the rapid deployment of LTE cellular networks. These apps enable users to stream television content on mobile devices that support them. In 2013, in response to the shifting of the soap opera All My Children from broadcast to streaming television, a new category for web-only series in the Daytime Emmy Awards was created. That year, Netflix made history with the first Primetime Emmy Award nominations for a streaming television series at the 65th Primetime Emmy Awards, for Arrested Development, Hemlock Grove, and House of Cards. Hulu earned the first Emmy win by a streaming service for Outstanding Drama Series, with The Handmaid's Tale at the 69th Primetime Emmy Awards in 2017. Traditional cable and satellite television providers began to offer streaming services. In 2012, British broadcaster Sky launched the Now streaming service in the United Kingdom. Sling TV was unveiled by Dish Network in January 2015. Cable company Comcast announced an HBO plus broadcast TV package at a price discounted from basic broadband plus basic cable in July 2015. DirecTV launched its streaming service, DirecTV Stream, in 2016. 
In 2017, YouTube launched YouTube TV, a streaming service that allows users to watch live television programs from popular cable or network channels, and record shows to stream anywhere, anytime. By the end of 2015, Netflix had almost 75 million worldwide subscribers. In 2017, 28% of US adults cited streaming services as their main means for watching television, and 61% of those ages 18 to 29 cited it as their main method. In 2020, the COVID-19 pandemic had a strong impact on the television streaming business, as lockdowns and staying at home changed viewing habits. By 2024, Netflix had become the world's largest streaming television platform with 260.28 million global subscribers. By the end of 2025, its active subscribers total had grown to over 325 million. As of May 2025, Nielsen reported that streaming represented 44.8% of all television viewing, compared to 44.2% for broadcast and cable combined. Technology The Hybrid Broadcast Broadband TV (HbbTV) consortium of industry companies (such as SES, Humax, Philips, and ANT Software) is currently promoting and establishing an open European standard for hybrid set-top boxes for the reception of broadcast and broadband digital television and multimedia applications with a single-user interface. BBC iPlayer, which originally incorporated peer-to-peer streaming, moved towards centralized distribution for its video streaming services. BBC executive Anthony Rose cited network performance as an important factor in the decision, as well as consumers being unhappy with their own network bandwidth being used for transmitting content to other viewers. Samsung has also announced plans to provide streaming options, including 3D video on demand through its Explore 3D service. Some streaming services incorporate digital rights management. The W3C made the controversial decision to adopt Encrypted Media Extensions largely to provide copy protection for streaming content. Sky Go has software that is provided by Microsoft to prevent content being copied. Additionally, BBC iPlayer makes use of a parental control system giving users the option to "lock" content, requiring a password to access it. The goal of these systems is to enable parents to keep children from viewing sexually themed, violent, or otherwise age-inappropriate material. Flagging systems can be used to warn a user that content may be certified or that it is intended for viewing post-watershed. Honour systems are also used where users are asked for their dates of birth or age to verify if they are able to view certain content. IPTV delivers television content using signals based on the Internet Protocol (IP), through managed private network infrastructure entirely owned by a single telecom or Internet service provider (ISP). This stands in contrast to delivering content over unmanaged public networks, a practice known as over-the-top content delivery. Both IPTV and OTT use the Internet protocol over a packet-switched network to transmit data, but IPTV operates in a closed system—a dedicated, managed network controlled by the local cable, satellite, telephone, or fiber-optic company. In its simplest form, IPTV simply replaces traditional circuit switched analog or digital television channels with digital channels which happen to use packet-switched transmission. 
In both the old and new systems, subscribers have set-top boxes or other customer-premises equipment that communicates directly over company-owned or dedicated leased lines with central-office servers. Packets never travel over the public Internet, so the television provider can guarantee enough local bandwidth for each customer's needs. The Internet protocol is a cheap, standardized way to enable two-way communication and simultaneously provide different data (e.g., TV-show files, email, Web browsing) to different customers. This supports DVR-like features for time shifting television: for example, to catch up on a TV show that was broadcast hours or days ago, or to replay the current TV show from its beginning. It also supports video on demand—browsing a catalog of videos (such as movies or television shows) which might be unrelated to the company's scheduled broadcasts. IPTV has an ongoing standardization process (for example, at the European Telecommunications Standards Institute). Streaming quality Streaming quality is the quality of image and audio transmission from the distributor's servers to the user's screen. Streaming resolution measures the pixel dimensions of the delivered video. High-definition video (720p+) and later standards require higher bandwidth and faster connection speeds than previous standards, because they carry higher spatial resolution image content. In addition, transmission packet loss and latency caused by network impairments and insufficient bandwidth degrade replay quality. Decoding errors may manifest themselves as video breakup and macroblocking. The generally accepted download rate for streaming high-definition (1080p) video encoded in AVC is 6000 kbit/s, whereas UHD requires upwards of 16,000 kbit/s (the sketch at the end of this passage translates these rates into approximate data volumes per hour). For users who do not have the bandwidth to stream HD/4K video or even SD video, most streaming platforms make use of an adaptive bitrate stream so that if the user's bandwidth suddenly drops, the platform will lower its streaming bitrate to compensate. Most modern television streaming platforms offer a wide range of both manual and automatic bitrate settings which are based on initial connection tests during the first few seconds of a video loading, and can be changed on the fly. This applies to both live and catch-up content. Additionally, platforms can also offer content in standards such as HDR or Dolby Vision or at higher framerates, which can require additional costs or subscription tiers to access. Usage Internet television is common in most US households as of the mid-2010s. In a 2013 study by eMarketer, about one in four new televisions being sold was a smart TV. Within the same decade, rapid deployment of LTE cellular networks and the general availability of smartphones increased the popularity of streaming services and the corresponding apps on mobile devices. On August 18, 2022, Nielsen reported that for the first time, streaming viewership had surpassed cable. Considering the popularity of smart TVs, smartphones, and devices such as the Roku and Chromecast, much of the US public can watch television via the Internet. Internet-only channels are now established enough to feature some Emmy-nominated shows, such as Netflix's House of Cards. Many networks also distribute their shows the next day to streaming providers such as Hulu. Some networks use a proprietary system; the BBC, for example, utilizes its own BBC iPlayer format. 
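As a back-of-the-envelope illustration of the bitrates quoted above, the data volume of an hour of video follows from simple unit conversion. This is a hedged sketch: real streams are variable-bitrate, so these figures are rough ceilings rather than exact measurements.

```python
def gigabytes_per_hour(kbit_per_s: float) -> float:
    """Approximate data volume of one hour of constant-bitrate video, in GB."""
    bytes_per_s = kbit_per_s * 1000 / 8   # kbit/s -> bytes/s
    return bytes_per_s * 3600 / 1e9       # one hour, in decimal gigabytes

print(gigabytes_per_hour(6000))    # ~2.7 GB/h for 1080p AVC
print(gigabytes_per_hour(16000))   # ~7.2 GB/h for UHD
```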
The shift to streaming has resulted in bandwidth demands increasing to the point of causing issues for some networks. It was reported in February 2014 that Verizon Fios was having issues coping with the demand placed on its network infrastructure. Until long-term bandwidth issues are worked out and regulatory questions such as net neutrality are settled, Internet television's push to HDTV may be hindered. Aereo was launched in March 2012 in New York City (and subsequently stopped from broadcasting in June 2014). It streamed network TV only to New York customers over the Internet. Broadcasters filed lawsuits against Aereo, because Aereo captured broadcast signals and streamed the content to its customers without paying broadcasters. In mid-July 2012, a federal judge sided with the Aereo start-up. Aereo planned to expand to every major metropolitan area by the end of 2013. The Supreme Court ruled against Aereo on June 24, 2014. Some have noted that, as opposed to broadcast television, whose demographics are mostly "unspokenly straight" white viewers, subscription dollars on cable and streaming services can "level the playing field," giving viewers from marginalized communities, and representation of those communities, "equal power." The viewing of television content on streaming platforms represented 19% of all television consumption in the United States in 2019, and by the end of 2023 it had become the nation's "dominant form of TV viewing". With 1.8 billion subscriptions to streaming platforms, streaming television represented 38% of global TV viewing in 2023. However, some streaming platforms have reportedly begun to experience subscriber losses, likely due to price increases. Reportedly, 53% of surveyed millennials chose to cancel subscriptions following increases in subscription costs. Market competitors Many providers of Internet television services exist—including conventional television stations that have taken advantage of the Internet as a way to continue showing television shows after they have been broadcast, often advertised as "on-demand" and "catch-up" services. Today, almost every major broadcaster around the world is operating an Internet television platform. Examples include the BBC, which introduced the BBC iPlayer on 25 June 2008 as an extension to its "RadioPlayer" and already existing streamed video-clip content, and Channel 4, which launched 4oD ("4 on Demand") (now All 4) in November 2006, allowing users to watch recently shown content. Most Internet television services allow users to view content free of charge; however, some content is for a fee. In the UK, the term catch up TV was most commonly used to refer to these sorts of services at the time. Since 2012, around 200 over-the-top (OTT) platforms providing streamed and downloadable content have emerged. Investment by Netflix in new original content for its OTT platform reached $13bn in 2018. Streaming platforms Amazon Prime Video was originally launched in 2006. Upon its initial release, the streaming service was known as Amazon Unbox. Amazon Prime Video grew out of the development of Amazon Prime, a paid service that includes free shipping on many types of goods. Amazon Prime Video is available in approximately 200 countries around the world. Each year, Amazon invests in the production of films and TV series that are streamed as Amazon originals. Apple TV+ is a subscription streaming service owned by Apple Inc. that launched on November 1, 2019. 
The service offers original content made exclusively by Apple, branded as Apple Originals. The platform releases only content that can be found solely on Apple TV+; unlike several other streaming services, it carries no third-party content. The Apple TV+ name derives from the Apple TV media player that was released in 2007. Disney+ is an American subscription streaming service owned and operated by the Disney Entertainment division of The Walt Disney Company. Released on November 12, 2019, the service primarily distributes films and television series produced by Walt Disney Studios and Disney General Entertainment Content, with dedicated content hubs for the brands Disney, Pixar, Marvel, Star Wars, and National Geographic, as well as Star in some regions. Original films and television series are also distributed on Disney+. Founded in 2007, Hulu is only available to viewers in the United States because of licensing restrictions. Hulu is one of the few streaming services that offers current on-air television shows a few days after their original broadcast on cable television, though with limited availability. Hulu originally had both a free and a paid plan. The free plan was accessible only via computer and offered a limited amount of content, whereas the paid plan could be accessed via computers, mobile devices, and connected televisions. In 2019, The Walt Disney Company became the majority owner of Hulu. The platform has bundle deals where customers can subscribe to both Hulu and Disney+. HBO Max is a streaming service owned by Warner Bros. Discovery. The platform was released on May 27, 2020, in the United States, and within the first five months of launching had amassed 8 million subscribers across the country. It offers classic Warner Bros. films and self-produced programs, and has won the right to exclusively air Studio Ghibli films in the United States. Since 2022, Warner Bros. theatrical releases have arrived on the platform 45 days after their cinema debut. The service reached 70 million subscribers in December 2021. In September 2022, 92 million households were counted as subscribers, but because this figure includes subscribers to the HBO channel, the number for Max alone is expected to be much smaller. Netflix, a media streaming and video rental company, was founded by Reed Hastings and Marc Randolph in 1997. Two years later, Netflix began offering an online subscription service. Subscribers could select movies and TV shows on Netflix's website and receive the chosen titles via DVDs in prepaid return envelopes. In 2007, Netflix's subscribers gained the ability to watch some movies and TV shows online, directly from their homes. In 2010, Netflix launched a streaming-only plan offering unlimited streaming without DVDs. Starting in the United States, the streaming-only plan expanded to several countries; by 2016 it was available in more than 190 countries. In 2011, Netflix began to negotiate the production of original programming, starting with the series House of Cards. Paramount+ is a streaming service owned by Paramount Global. The streaming service was launched on October 28, 2014, and was originally known as CBS All Access. At launch, the platform focused primarily on streaming programs from local CBS stations, as well as complete access to all CBS network content. 
In 2016, the streaming service began creating original content available only on the platform. As its content continued to expand, the service rebranded as Paramount+, taking its name from the Paramount Pictures film studio. The service has since expanded to Latin America, Europe, and Australia. Peacock is a streaming service owned and operated by Peacock TV, which is a subsidiary of NBCUniversal Television and Streaming. The streaming service gets its name from NBC's peacock logo. The platform launched on July 15, 2020. The streaming service primarily features content that can be found on NBC network channels as well as other third-party sources. Additionally, Peacock now offers original content that cannot be found on any other streaming platform. In December 2022, Peacock reached 20 million paid subscribers. In March 2023, the platform had 22 million paid subscribers. The domain name of YouTube was bought and activated by Chad Hurley, Steve Chen, and Jawed Karim at the beginning of 2005. YouTube launched later that year as an online video sharing and social media platform. The platform became popular thanks to a short Saturday Night Live video called Lazy Sunday, which appeared on YouTube in December 2005. People who had missed the broadcast looked for it on Google by typing "SNL rap video," "Lazy Sunday SNL," or "Chronicles of Narnia SNL." The first search result was a link to the video on YouTube, which marked the beginning of widespread video sharing on YouTube. Because of its popularity, YouTube had some issues caused by its bandwidth expenses. In 2006, Google bought YouTube, and within months the video platform was the second-largest search engine in the world. Binge-watching In the 1990s, the practice of watching entire seasons in a short amount of time emerged with the introduction of the DVD box set. Media-marathoning consists of watching at least one season of a TV show in a week or less, watching three or more films from the same series in a week or less, or reading three or more books from the same series in a month or less. The term "binge-watching" arrived with streaming TV in 2013, when Netflix launched its first original production, House of Cards, and began marketing the practice of watching a TV series episode after episode. COVID-19 gave another connotation to binge-watching, which came to be considered by some a negative activity. Broadcasting rights Broadcasting rights (also called streaming rights in this context) vary from country to country and even within provinces of countries. These rights govern the distribution of copyrighted content and media and allow the sole distribution of that content at any one time. An example of content only being aired in certain countries is BBC iPlayer. The BBC checks a user's IP address to make sure that only users located in the UK can stream content from the BBC. The BBC only allows free use of its product for users within the UK, as those users have paid for a television license that funds part of the BBC. This IP address check is not foolproof, as the user may be accessing the BBC website through a VPN or proxy server (a minimal sketch of this kind of check appears below). Broadcasting rights can also be restricted in time, allowing a broadcaster to distribute content for only a limited period. Channel 4's online service All 4 can only stream shows created in the US by companies such as HBO for thirty days after they are aired on one of the Channel 4 group channels. 
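A minimal sketch of the IP-based gating just described; the lookup table and addresses are hypothetical stand-ins for a real IP-geolocation database (real services layer licence checks, account state, and more on top of this):

```python
# Toy stand-in for an IP-geolocation database; a real service would use a
# full GeoIP dataset rather than a hard-coded table.
GEO_TABLE = {"81.2.69.142": "GB", "203.0.113.9": "AU"}

ALLOWED_COUNTRIES = {"GB"}  # e.g., streaming rights held only for the UK

def may_stream(client_ip: str) -> bool:
    """Allow playback only from countries where rights are held.

    As the text notes, this is not foolproof: a VPN or proxy presents
    an IP address from another country.
    """
    return GEO_TABLE.get(client_ip, "unknown") in ALLOWED_COUNTRIES

print(may_stream("81.2.69.142"))  # True  (resolves to GB in the toy table)
print(may_stream("203.0.113.9"))  # False (resolves to AU)
```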
The time limit is intended to boost DVD sales for the companies that produce that media. Some companies pay very large amounts for broadcasting rights, with sports and US sitcoms usually fetching the highest prices from UK-based broadcasters. A trend among major content producers in North America is the use of the "TV Everywhere" system. Especially for live content, the TV Everywhere system restricts viewership of a video feed to select Internet service providers, usually cable television companies that pay a retransmission consent or subscription fee to the content producer. This often has the negative effect of making the availability of content dependent upon the provider, with the consumer having little or no choice on whether they receive the product. Profits and costs With the advent of broadband Internet connections, multiple streaming providers have come onto the market in the last couple of years. The main providers are Netflix, Hulu, and Amazon Prime Video. Some of these providers, such as Hulu, carry advertising and charge a monthly fee. Others, such as Netflix and Amazon Prime Video, charge users a monthly fee and have no commercials. Netflix is the largest provider, with more than 217 million subscribers. The rise of internet TV has resulted in cable companies losing customers to a new kind of customer called "cord cutters". Cord cutters are consumers who are cancelling their cable TV or satellite TV subscriptions and choosing instead to stream TV series, films and other content via the Internet. Cord cutters are forming communities. With the increasing availability of online video platforms (e.g., YouTube) and streaming services, there are alternatives to cable and satellite television subscriptions. Cord cutters tend to be younger people. As streaming services raise prices in order to increase profit, consumers have begun to look for cheaper alternatives, some opting for Free Ad-Supported Streaming Television (FAST) instead. This has also led leading streaming services such as Disney+ and Hulu to implement ad-supported tiers. Concerns In recent years, customers have noticed shrinking content libraries and show cancellations. Examples include popular shows such as Westworld and originals such as Willow and The Mysterious Benedict Society. Often seen as a solution for cutting costs, streaming services remove assets with decreased earning power. Viewers and those involved in production have raised concerns surrounding this issue: creators lose out on "calling cards" and residual income, while viewers must pay for multiple platforms to watch select shows. Concerns about subscriber losses across services have also arisen, with Netflix being one of the many victims. Some argue that the surge in streaming is coming to an end due to the overabundance of media availability. Overview of platforms and availability See also Notes References Further reading External links
========================================
[SOURCE: https://en.wikipedia.org/wiki/Martian_regolith#Atmospheric_dust] | [TOKENS: 3186]
Contents Martian regolith Martian regolith is the fine blanket of unconsolidated, loose, heterogeneous superficial deposits covering the surface of Mars. The term Martian soil typically refers to the finer fraction of regolith. So far, no samples have been returned to Earth, the goal of a Mars sample-return mission, but the soil has been studied remotely with the use of Mars rovers and Mars orbiters. Its properties can differ significantly from those of terrestrial soil, including its toxicity due to the presence of perchlorates. Definitions On Earth, the term "soil" usually includes organic content. In contrast, planetary scientists adopt a functional definition of soil to distinguish it from rocks. "Rocks" generally refers to material at the 10 cm scale and larger (e.g., fragments, breccia, and exposed outcrops) with high thermal inertia, with areal fractions consistent with the Viking Infrared Thermal Mapper (IRTM) data, and immobile under current aeolian (wind) conditions. Consequently, rocks are classified as grains exceeding the size of cobbles on the Wentworth scale. This approach enables agreement across Martian remote sensing methods that span the electromagnetic spectrum from gamma to radio waves. Soil refers to all other, typically unconsolidated, material, including material sufficiently fine-grained to be mobilized by wind. Soil consequently encompasses a variety of regolith components identified at landing sites. Typical examples include: bedforms (features that develop at the interface of fluid and a moveable bed, such as ripples and dunes), clasts (fragments of pre-existing minerals and rock, such as sediment deposits), concretions, drift, dust, rocky fragments, and sand. The functional definition reinforces a recently proposed generic definition of soil on terrestrial bodies (including asteroids and satellites) as an unconsolidated and chemically weathered surficial layer of fine-grained mineral or organic material exceeding centimeter-scale thickness, with or without coarse elements and cemented portions. Martian dust generally connotes even finer materials than Martian soil, the fraction less than 30 micrometres in diameter. Disagreement over the significance of soil's definition arises due to the lack of an integrated concept of soil in the literature. The pragmatic definition "medium for plant growth" has been commonly adopted in the planetary science community, but a more complex definition describes soil as "(bio)geochemically/physically altered material at the surface of a planetary body that encompasses surficial extraterrestrial telluric deposits". This definition emphasizes that soil is a body that retains information about its environmental history and that does not need the presence of life to form. Toxicity Martian regolith is toxic, due to relatively high concentrations of perchlorate compounds containing chlorine. Elemental chlorine was first discovered during localised investigations by the Mars rover Sojourner, and has been confirmed by Spirit, Opportunity and Curiosity. The Mars Odyssey orbiter has also detected perchlorates across the surface of the planet. Perchlorates such as calcium perchlorate were first discovered on Mars in 2008 by the NASA Phoenix lander. The levels detected in the Martian regolith are around 0.5%, a level considered toxic to humans. These compounds are also toxic to plants.
A 2013 terrestrial study found that a 0.5 g per liter concentration caused significant adverse effects in the plants tested. The report noted that one of the types of plant studied, Eichhornia crassipes, seemed resistant to the perchlorates and could be used to help remove the toxic salts from the environment, although the plants themselves would end up containing a high concentration of perchlorates as a result. There is evidence that some bacterial lifeforms are able to overcome perchlorates by physiological adaptations to increasing perchlorate concentrations, and some even live off them. In 2022, NASA and the U.S. National Science Foundation co-funded a multi-year grant to study the use of the bacterium Dehalococcoides mccartyi to break down perchlorates into harmless chlorides and oxygen. However, the added effect of the high levels of UV reaching the surface of Mars breaks molecular bonds, creating even more dangerous chemicals which in lab tests on Earth were shown to be more lethal to bacteria than the perchlorates alone. This, along with the cold temperatures, would add to the need to grow plants indoors. The chlorine in Martian perchlorates is thought to originate from volcanoes or aqueous weathering of basalt, and the oxygen likely originates from the atmosphere, possibly with some contribution from minerals. It is hypothesized that perchlorate may be formed either by reactions of chlorine with ozone, or by oxidation at grain surfaces, or by reactions enhanced with chlorine dioxide, or through reactions with free radicals produced by electrostatic discharge in dust storms. The potential danger to human health of the fine Martian dust has long been recognized by NASA. A 2002 study warned about the potential threat, and an investigation was carried out using the most common silicates found on Mars: olivine, pyroxene and feldspar. It found that the dust reacted with small amounts of water to produce highly reactive molecules that are also produced during the mining of quartz and known to produce lung disease in miners on Earth, including cancer (the study also noted that lunar dust may be worse). Following on from this, since 2001 NASA's Mars Exploration Program Analysis Group (MEPAG) has had a goal to determine the possible toxic effects of the dust on humans. In 2010, the group noted that although the Phoenix lander and the rovers Spirit and Opportunity had contributed to answering this question, none of the instruments had been suitable for measuring the particular carcinogens that are of concern. The Mars 2020 rover is an astrobiology mission that will also make measurements to help designers of a future human expedition understand any hazards posed by Martian dust. It carries several related instruments. The Mars 2020 rover mission will cache samples that could potentially be retrieved by a future mission for their transport to Earth. Any questions about dust toxicity that have not already been answered in situ can then be investigated by labs on Earth.
The sand is believed to move only slowly in the Martian winds due to the very low density of the atmosphere in the present epoch. In the past, liquid water flowing in gullies and river valleys may have shaped the Martian regolith. Mars researchers are studying whether groundwater sapping is shaping the Martian regolith in the present epoch, and whether carbon dioxide hydrates exist on Mars and play a role. It is believed that large quantities of water and carbon dioxide ices remain frozen within the regolith in the equatorial parts of Mars and on its surface at higher latitudes. According to the High Energy Neutron Detector of the Mars Odyssey satellite, the water content of Martian regolith is up to 5% by weight. The presence of olivine, which is an easily weatherable primary mineral, has been interpreted to mean that physical rather than chemical weathering processes currently dominate on Mars. High concentrations of ice in regolith are thought to be the cause of accelerated soil creep, which forms the rounded "softened terrain" characteristic of the Martian midlatitudes. In June 2008, the Phoenix lander returned data showing Martian regolith to be slightly alkaline and containing vital nutrients such as magnesium, sodium, potassium and chloride, all of which are ingredients for living organisms to grow on Earth. Scientists compared the regolith near Mars's north pole to that of backyard gardens on Earth, and concluded that it could be suitable for growth of plants. However, in August 2008, the Phoenix lander conducted simple chemistry experiments, mixing water from Earth with Martian soil in an attempt to test its pH, and discovered traces of the salt perchlorate, while also confirming many scientists' hypotheses that the Martian surface was considerably basic, with a measured pH of 8.3. The presence of the perchlorate makes Martian regolith more exotic than previously believed (see Toxicity section). Further testing was necessary to eliminate the possibility of the perchlorate readings being caused by terrestrial sources, which at the time were thought to have possibly migrated from the spacecraft into either the samples or the instrumentation. However, each new lander has confirmed their presence in the regolith locally and the Mars Odyssey orbiter confirmed they are spread globally across the entire surface of the planet. In 1999, the Mars Pathfinder rover performed an indirect electrostatics measurement of the Martian regolith. The Wheel Abrasion Experiment (WAE) was designed with fifteen metal samples and film insulators mounted on the wheel to reflect sunlight to a photovoltaic sensor. Lander cameras showed dust accumulating on the wheels as the rover moved and the WAE detected a drop in the amount of light hitting the sensor. It is believed that the dust may have acquired an electrostatic charge as the wheels rolled across the surface, causing the dust to adhere to the film surface. On October 17, 2012 (Curiosity rover at "Rocknest"), the first X-ray diffraction analysis of Martian regolith was performed. The results revealed the presence of several minerals, including feldspar, pyroxenes and olivine, and suggested that the Martian regolith in the sample was similar to the "weathered basaltic soils" of Hawaiian volcanoes. Hawaiian volcanic ash has been used as Martian regolith simulant by researchers since 1998.
In December 2012, scientists working on the Mars Science Laboratory mission announced that an extensive analysis of Martian regolith performed by the Curiosity rover showed evidence of water molecules, sulphur and chlorine, as well as hints of organic compounds. However, terrestrial contamination, as the source of the organic compounds, could not be ruled out. On September 26, 2013, NASA scientists reported the Mars Curiosity rover detected "abundant, easily accessible" water (1.5 to 3 weight percent) in regolith samples at the Rocknest region of Aeolis Palus in Gale Crater. In addition, NASA reported that the Curiosity rover found two principal regolith types: a fine-grained mafic type and a locally derived, coarse-grained felsic type. The mafic type, similar to other Martian regolith and Martian dust, was associated with hydration of the amorphous phases of the regolith. Also, perchlorates, the presence of which may make detection of life-related organic molecules difficult, were found at the Curiosity rover landing site (and earlier at the more polar site of the Phoenix lander) suggesting a "global distribution of these salts". NASA also reported that Jake M rock, a rock encountered by Curiosity on the way to Glenelg, was a mugearite and very similar to terrestrial mugearite rocks. On April 11, 2019, NASA announced that the Curiosity rover on Mars drilled into, and closely studied, a "clay-bearing unit" which, according to the rover Project Manager, is a "major milestone" in Curiosity's journey up Mount Sharp. Humans will need in situ resources for colonising Mars. That demands an understanding of the local unconsolidated bulk sediment, but the classification of such sediment remains a work in progress. Too little of the entire Martian surface is known to draw a sufficiently representative picture. Atmospheric dust Similarly sized dust will settle from the thinner Martian atmosphere sooner than it would on Earth. For example, the dust suspended by the 2001 global dust storms on Mars only remained in the Martian atmosphere for 0.6 years, while the dust from Mount Pinatubo took about two years to settle. However, under current Martian conditions, the mass movements involved are generally much smaller than on Earth. Even the 2001 global dust storms on Mars moved only the equivalent of a very thin dust layer – about 3 μm thick if deposited with uniform thickness between 58° north and south of the equator. Dust deposition at the two rover sites has proceeded at a rate of about the thickness of a grain every 100 sols. The difference in the concentration of dust in Earth's atmosphere and that of Mars stems from a key factor. On Earth, dust that leaves atmospheric suspension usually gets aggregated into larger particles through the action of soil moisture or gets suspended in oceanic waters. It helps that most of Earth's surface is covered by liquid water. Neither process occurs on Mars, leaving deposited dust available for suspension back into the Martian atmosphere. In fact, the composition of Martian atmospheric dust – very similar to surface dust – as observed by the Mars Global Surveyor Thermal Emission Spectrometer, may be volumetrically dominated by composites of plagioclase feldspar and zeolite which can be mechanically derived from Martian basaltic rocks without chemical alteration. 
Observations of the Mars Exploration Rovers' magnetic dust traps suggest that about 45% of the elemental iron in atmospheric dust is maximally oxidized (Fe3+) and that nearly half exists in titanomagnetite, both consistent with mechanical derivation of dust with aqueous alteration limited to just thin films of water. Collectively, these observations support the absence of water-driven dust aggregation processes on Mars. Furthermore, wind activity dominates the surface of Mars at present, and the abundant dune fields of Mars can easily yield particles into atmospheric suspension through effects such as larger grains disaggregating fine particles through collisions. The Martian atmospheric dust particles are generally 3 μm in diameter. While the atmosphere of Mars is thinner, Mars also has a lower gravitational acceleration, so the size of particles that will remain in suspension cannot be estimated from atmospheric thickness alone (a naive drag estimate illustrating this is sketched below). Electrostatic and van der Waals forces acting among fine particles introduce additional complexities to calculations. Rigorous modeling of all relevant variables suggests that 3 μm diameter particles can remain in suspension indefinitely at most wind speeds, while particles as large as 20 μm diameter can enter suspension from rest at surface wind turbulence as low as 2 m/s or remain in suspension at 0.8 m/s. In July 2018, researchers reported that the largest single source of dust on the planet Mars comes from the Medusae Fossae Formation. Research on Earth Research on Earth is currently limited to using Martian regolith simulants, such as the MGS-1 simulant produced by Exolith Lab, which are based on analyses from the various Mars spacecraft. These are terrestrial materials used to simulate the chemical and mechanical properties of Martian regolith for research, experiments and prototype testing of activities related to Martian regolith such as dust mitigation of transportation equipment, advanced life support systems and in-situ resource utilization. A number of Mars sample return missions are being planned, which would allow actual Martian regolith to be returned to Earth for more advanced analysis than is possible in situ on the surface of Mars. This should allow even more accurate simulants. The first of these missions is a multi-part mission beginning with the Mars 2020 lander. This will collect samples over a long period. A second lander will then gather the samples and return them to Earth. Gallery See also References External links
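The naive drag estimate referenced above can be sketched as a back-of-the-envelope calculation. All physical values here are rough assumptions rather than mission data, and the continuum Stokes formula deliberately ignores the slip (rarefaction) corrections that dominate in Mars's thin CO2 atmosphere; taken at face value it would predict slower settling on Mars, opposite to the observed behaviour, which is exactly why the rigorous modeling mentioned above is required.

```python
# Continuum Stokes settling speed for a 3 um dust grain on Mars vs Earth.
# Density and viscosity values are rough assumptions; slip corrections,
# which dominate on Mars, are deliberately neglected to expose the formula's limits.
def stokes_speed(radius, g, rho_p, mu):
    """Terminal speed v = 2 * rho_p * g * r**2 / (9 * mu), buoyancy neglected."""
    return 2.0 * rho_p * g * radius**2 / (9.0 * mu)

r = 1.5e-6       # m, radius of a 3 um diameter grain
rho_p = 3000.0   # kg/m^3, assumed basaltic dust density
v_mars = stokes_speed(r, 3.71, rho_p, 1.1e-5)   # assumed CO2 viscosity near 210 K
v_earth = stokes_speed(r, 9.81, rho_p, 1.8e-5)  # air near 290 K
print(f"Mars: {v_mars:.1e} m/s, Earth: {v_earth:.1e} m/s")  # both under 1 mm/s
```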
========================================
[SOURCE: https://en.wikipedia.org/wiki/MindSpore] | [TOKENS: 401]
Contents MindSpore MindSpore is an open-source software framework for deep learning, machine learning and artificial intelligence developed by Huawei. Overview MindSpore provides support for Python by allowing users to define models, control flow, and custom operators using native Python syntax. Unlike graph-based frameworks that require users to learn a DSL or complex APIs, MindSpore adopts a source-to-source (S2S) automatic differentiation approach, allowing Python code to be automatically transformed into optimized computational graphs (illustrated in the sketch below). It supports the OpenHarmony-based HarmonyOS NEXT single-core framework system built for HarmonyOS, and includes an AI system stack that ships with Huawei's large language model PanGu-Σ with full MindSpore framework support. It also provides OpenHarmony-native device-side AI support, with a training interface and an ArkTS programming interface for its NNRt (Neural Network Runtime) backend configurations via the MindSpore Lite AI framework codebase, introduced in API 11 Beta 1 of OpenHarmony 4.1. The MindSpore platform runs on Ascend AI chips and Kirin chips, alongside other HiSilicon NPU chips, through CANN (Compute Architecture for Neural Networks), a heterogeneous computing architecture for AI developed by Huawei. The CANN backend in the OpenCV DNN module gives developers the ability to run their AI models on Ascend, Kirin and other HiSilicon NPU-enabled chips. It supports cross-platform development on Android, iOS, Windows, the global OpenHarmony-based distribution Eclipse Oniro, the Linux-based EulerOS and openEuler (Huawei's server OS platforms), macOS and Linux. History On April 24, 2024, Huawei's MindSpore 2.3.RC1 was released to the open-source community with foundation model training, a full-stack upgrade of foundation model inference, static graph optimization, IT features and a new MindSpore Elec MT (MindSpore-powered magnetotelluric) intelligent inversion model. See also References Bibliography External links
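A minimal sketch of the workflow described above, assuming MindSpore's 2.x functional API (ms.jit for graph compilation, ms.grad for differentiation); this is an illustrative example, not official Huawei sample code.

```python
# Sketch of MindSpore's S2S approach (assumes the MindSpore 2.x functional API).
import mindspore as ms
from mindspore import Tensor, ops

@ms.jit  # request compilation of this native-Python function into an optimized graph
def f(x):
    return x * x + ops.sin(x)

df = ms.grad(f)              # derivative function obtained by automatic differentiation
x = Tensor(2.0, ms.float32)
print(f(x), df(x))           # value and gradient at x = 2.0
```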
========================================
[SOURCE: https://en.wikipedia.org/wiki/Timeline_of_the_name_Judea] | [TOKENS: 96]
Contents Timeline of the name Judea This article presents a timeline of the name Judea through an incomplete list of notable historical references to the name through the various time periods of the region. Historical references Biblical references The name occurs multiple times as a geographic region in the Hebrew Bible, in both Hebrew and Aramaic: During the time of the New Testament, the region was a Roman province. The name Judea occurs 44 times in the New Testament. See also References Bibliography
========================================
[SOURCE: https://en.wikipedia.org/wiki/COWSEL] | [TOKENS: 203]
Contents COWSEL COWSEL (COntrolled Working SpacE Language) is a programming language designed between 1964 and 1966 by Robin Popplestone. It was based on a reverse Polish notation (RPN) form of the language Lisp, combined with some ideas from Combined Programming Language (CPL); the RPN convention is illustrated generically below. COWSEL was initially implemented on a Ferranti Pegasus computer at the University of Leeds and on a Stantec Zebra at the Bradford Institute of Technology. Later, Rod Burstall implemented it on an Elliot 4120 at the University of Edinburgh. COWSEL was renamed POP-1 in the summer of 1966, and development continued under that name from then on. Example code In the original printouts, reserved words (keywords) were underlined; Popplestone produced this syntax highlighting by using underscoring on a Friden Flexowriter. See also References External links
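As a generic illustration of the reverse Polish notation convention COWSEL inherited (this is ordinary Python, not COWSEL syntax, since no COWSEL source is reproduced in this extract), a minimal stack-based evaluator:

```python
# Generic RPN evaluator: operands are pushed, operators pop their arguments.
def eval_rpn(expr):
    """Evaluate a whitespace-separated RPN string, e.g. '3 4 + 2 *'."""
    ops = {"+": lambda a, b: a + b, "-": lambda a, b: a - b,
           "*": lambda a, b: a * b, "/": lambda a, b: a / b}
    stack = []
    for tok in expr.split():
        if tok in ops:
            b, a = stack.pop(), stack.pop()  # operands appear before the operator
            stack.append(ops[tok](a, b))
        else:
            stack.append(float(tok))
    return stack.pop()

print(eval_rpn("3 4 + 2 *"))  # (3 + 4) * 2 = 14.0
```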
========================================
[SOURCE: https://en.wikipedia.org/wiki/Grey_alien#cite_note-14] | [TOKENS: 2835]
Contents Grey alien Grey aliens, also referred to as Zeta Reticulans, Roswell Greys, or simply Greys,[a] are purported extraterrestrial beings. They are frequently featured in claims of close encounters and alien abduction. Greys are typically described as having small, humanoid bodies, smooth, grey skin, disproportionately large, hairless heads, and large, black, almond-shaped eyes. The 1961 Barney and Betty Hill abduction claim was key to the popularization of Grey aliens. Precursor figures have been described in science fiction and similar descriptions appeared in later accounts of the 1947 Roswell UFO incident and early accounts of the 1948 Aztec UFO hoax. The Grey alien is cited as an archetypal image of an intelligent non-human creature and extraterrestrial life in general, as well as an iconic trope of popular culture in the age of space exploration. Description Greys are typically depicted as grey-skinned, diminutive humanoid beings that possess reduced forms of, or completely lack, external human body parts such as noses, ears, or sex organs. Their bodies are usually depicted as being elongated, having a small chest, and lacking in muscular definition and visible skeletal structure. Their legs are depicted as shorter and jointed differently from those of humans, with limbs proportioned differently from a human's. Greys are depicted as having unusually large heads in proportion to their bodies, and as having no hair, no noticeable outer ears or noses, and small orifices for ears, nostrils, and mouths. In drawings, Greys are almost always shown with very large, opaque, black eyes, without eye whites. They are frequently described as shorter than average adult humans. The association between Grey aliens and Zeta Reticuli originated with schoolteacher Marjorie Fish's interpretation, sometime in 1969, of a map drawn by Betty Hill. Betty Hill, under hypnosis, had claimed to have been shown a map that displayed the aliens' home system and nearby stars. Upon learning of this, Fish attempted to create a model from a drawing produced by Hill, eventually determining that the stars marked as the aliens' home were Zeta Reticuli, a binary star system. History In literature, descriptions of beings similar to Grey aliens predate claims of supposed encounters with them. In 1893, H. G. Wells presented a description of humanity's future appearance in the article "The Man of the Year Million", describing humans as having no mouths, noses, or hair, and with large heads. In 1895, Wells also depicted the Eloi, a successor species to humanity, in similar terms in the novel The Time Machine. Both share many characteristics with future perceptions of Greys. As early as 1917, the occultist Aleister Crowley described a meeting with a "preternatural entity" named Lam that was similar in appearance to a modern Grey. Crowley claimed to have contacted Lam through a process called the "Amalantrah Workings," which he believed allowed humans to contact beings from outer space and across dimensions. Other occultists and ufologists, many of whom have retroactively linked Lam to later Grey encounters, have since described their own visitations from him, with one describing the being as a "cold, computer-like intelligence," and utterly beyond human comprehension. ...the creatures did not resemble any race of humans. They were short, shorter than the average Japanese, and their heads were big and bald, with strong, square foreheads, and very small noses and mouths, and weak chins.
What was most extraordinary about them were the eyes—large, dark, gleaming, with a sharp gaze. They wore clothes made of soft grey fabric, and their limbs seemed to be similar to those of humans. In 1933, the Swedish novelist Gustav Sandgren, using the pen name Gabriel Linde, published a science fiction novel called Den okända faran (The Unknown Danger), in which he describes a race of extraterrestrials who wore clothes made of soft grey fabric and were short, with big bald heads, and large, dark, gleaming eyes. The novel, aimed at young readers, included illustrations of the imagined aliens. This description would become the template upon which the popular image of grey aliens is based. The conception remained a niche one until 1965, when newspaper reports of the Betty and Barney Hill abduction made the archetype famous. The alleged abductees, Betty and Barney Hill, claimed that in 1961, humanoid alien beings with greyish skin had abducted them and taken them to a flying saucer. In his 1990 article "Entirely Unpredisposed", Martin Kottmeyer suggested that Barney's memories revealed under hypnosis might have been influenced by an episode of the science-fiction television show The Outer Limits titled "The Bellero Shield", which was broadcast 12 days before Barney's first hypnotic session. The episode featured an extraterrestrial with large eyes, who says, "In all the universes, in all the unities beyond the universes, all who have eyes have eyes that speak." The report from the regression featured a scenario that was in some respects similar to the television show. In part, Kottmeyer wrote: Wraparound eyes are an extreme rarity in science fiction films. I know of only one instance. They appeared on the alien of an episode of an old TV series The Outer Limits entitled "The Bellero Shield." A person familiar with Barney's sketch in "The Interrupted Journey" and the sketch done in collaboration with the artist David Baker will find a "frisson" of "déjà vu" creeping up his spine when seeing this episode. The resemblance is much abetted by an absence of ears, hair, and nose on both aliens. Could it be by chance? Consider this: Barney first described and drew the wraparound eyes during the hypnosis session dated 22 February 1964. "The Bellero Shield" was first broadcast on 10 February 1964. Only twelve days separate the two instances. If the identification is admitted, the commonness of wraparound eyes in the abduction literature falls to cultural forces. — Martin Kottmeyer, Entirely Unpredisposed: The Cultural Background of UFO Reports Carl Sagan echoed Kottmeyer's suspicions in his 1995 book, The Demon-Haunted World: Science as a Candle in the Dark, where Invaders from Mars was cited as another potential inspiration. After the Hills' encounter, Greys would go on to become an integral part of ufology and other extraterrestrial-related folklore. This is particularly true in the case of the United States: according to journalist C. D. B. Bryan, 73% of all reported alien encounters in the United States describe Grey aliens, a significantly higher proportion than in other countries. During the early 1980s, Greys were linked to the alleged crash-landing of a flying saucer in Roswell, New Mexico, in 1947. A number of publications contained statements from individuals who claimed to have seen the U.S. military handling a number of unusually proportioned, bald, child-sized beings.
These individuals claimed, during and after the incident, that the beings had oversized heads and slanted eyes, but scant other distinguishable facial features. In 1987, novelist Whitley Strieber published the book Communion, which, unlike his previous works, was categorized as non-fiction, and in which he describes a number of close encounters he alleges to have experienced with Greys and other extraterrestrial beings. The book became a New York Times bestseller, and New Line Cinema released a 1989 film adaptation that starred Christopher Walken as Strieber. In 1988, Christophe Dechavanne interviewed the French science-fiction writer and ufologist Jimmy Guieu on TF1's Ciel, mon mardi !. Besides mentioning Majestic 12, Guieu described the existence of what he called "the little greys", which later on became better known in French under the name les Petits-Gris. Guieu later wrote two docudramas, using as a plot the Grey aliens / Majestic-12 conspiracy theory as described by John Lear and Milton William Cooper: the series "E.B.E." (for "Extraterrestrial Biological Entity"): E.B.E.: Alerte rouge (first part) (1990) and E.B.E.: L'entité noire d'Andamooka (second part) (1991).[citation needed] Greys have since become the subject of many conspiracy theories. Many conspiracy theorists believe that Greys represent part of a government-led disinformation or plausible deniability campaign, or that they are a product of government mind-control experiments. During the 1990s, popular culture also began to increasingly link Greys to a number of military-industrial complex and New World Order conspiracy theories. In 1995, filmmaker Ray Santilli claimed to have obtained 22 reels of 16 mm film that depicted the autopsy of a "real" Grey supposedly recovered from the site of the 1947 incident in Roswell. In 2006, though, Santilli announced that the film was not original, but was instead a "reconstruction" created after the original film was found to have degraded. He maintained that a real Grey had been found and autopsied on camera in 1947, and that the footage released to the public contained a percentage of that original footage. Analysis Greys are often involved in alien abduction claims. Among reports of alien encounters, Greys make up about 50% in Australia, 73% in the United States, 48% in continental Europe, and around 12% in the United Kingdom. These reports include two distinct groups of Greys that differ in height. Abduction claims are often described as extremely traumatic, similar to an abduction by humans or even a sexual assault in the level of trauma and distress. The emotional impact of perceived abductions can be as great as that of combat, sexual abuse, and other traumatic events. The eyes are often a focus of abduction claims, which often describe a Grey staring into the eyes of an abductee when conducting mental procedures. This staring is claimed to induce hallucinogenic states or directly provoke different emotions. Neurologist Steven Novella proposes that Grey aliens are a byproduct of the human imagination, with the Greys' most distinctive features representing everything that modern humans traditionally link with intelligence. "The aliens, however, do not just appear as humans, they appear like humans with those traits we psychologically associate with intelligence." In 2005, Frederick V. Malmstrom, writing in Skeptic magazine, Volume 11, issue 4, presented his idea that Greys are actually residual memories of early childhood development.
Malmstrom reconstructs the face of a Grey through transformation of a mother's face based on our best understanding of early-childhood sensation and perception. Malmstrom's study offers an alternative explanation for the existence of Greys, for the intense instinctive response many people experience when presented with an image of a Grey, and for the role of regression hypnosis and recovered-memory therapy in "recovering" memories of alien abduction experiences, along with their common themes. According to biologist Jack Cohen, the typical image of a Grey, assuming that it would have evolved from a world with different environmental and ecological conditions from Earth, is too physiologically similar to a human to be credible as a representation of an alien. The interdimensional hypothesis, the cryptoterrestrial hypothesis, and the time-traveller hypothesis attempt to provide an alternative explanation to the humanoid anatomy and behavior of these alleged beings. In popular culture Depictions of Grey aliens have gone on to appear in a number of films and television shows, supplanting the previously popular little green men. As early as 1966, for example, the superhero character Ultraman was explicitly based on them, and in 1977 they were featured in Close Encounters of the Third Kind. Greys have also been worked into space opera and other interstellar settings: in Babylon 5, the Greys are referred to as the "Vree", and are depicted as being allies and trade partners of 23rd-century Earth, while in the Stargate franchise they are called the "Asgard" and depicted as ancient astronauts allied with modern-day Earth.[citation needed] South Park refers to them as "visitors". During the 1990s, plotlines wherein Greys were linked to conspiracy theories became common. A well-known example is the Fox television series The X-Files, which first aired in 1993. It combined the quest to find proof of the existence of Grey-like extraterrestrials with a number of UFO conspiracy theory subplots, to form its primary story arc. Other notable examples include the XCOM video game franchise (where they are called "Sectoids"); Dark Skies, first broadcast in 1996, which expanded upon the MJ-12 conspiracy;[citation needed] and American Dad!, which features a Grey-like alien named Roger, whose backstory draws from both the Roswell incident and Area 51 conspiracy theories. The 2011 film Paul tells the story of a Grey named Paul who attributes the Greys' frequent presence in science fiction pop culture to the US government deliberately inserting the stereotypical Grey alien image into mainstream media; this is done so that if humanity came into contact with Paul's species, no immediate shock would occur as to their appearance. Child abduction by Greys is a key plot point in the 2013 film Dark Skies. Greys appear in Syfy's 2021 science fiction dramedy series Resident Alien. The Greys appear as the main antagonistic faction in the 2023 independent game Greyhill Incident. See also Notes References External links
========================================
[SOURCE: https://en.wikipedia.org/wiki/Mlpy] | [TOKENS: 339]
Contents mlpy mlpy is an open-source, Python machine learning library built on top of NumPy/SciPy and the GNU Scientific Library, and it makes extensive use of the Cython language. mlpy provides a wide range of state-of-the-art machine learning methods for supervised and unsupervised problems and is aimed at finding a reasonable compromise among modularity, maintainability, reproducibility, usability and efficiency (a minimal usage sketch appears below). mlpy is multiplatform, works with Python 2 and 3, and is distributed under the GPLv3. Although suited to general-purpose machine learning tasks,[failed verification] mlpy's motivating application field is bioinformatics, i.e. the analysis of high-throughput omics data. Features Kernel-based functions are managed through a common kernel layer. In particular, the user can choose between supplying the data or a precomputed kernel in input space. Linear, polynomial, Gaussian, exponential and sigmoid kernels are available as default choices, and custom kernels can be defined as well. Many classification and regression algorithms are endowed with an internal feature-ranking procedure; as an alternative, mlpy implements the I-Relief algorithm. Recursive feature elimination (RFE) for linear classifiers and the KFDA-RFE algorithm are available for feature selection. Methods for feature list analysis (for example the Canberra stability indicator), data resampling and error evaluation are provided, together with different clustering analysis methods (hierarchical, memory-saving hierarchical, k-means). Finally, dedicated submodules are included for longitudinal data analysis through wavelet transforms (continuous, discrete and undecimated) and dynamic programming algorithms (dynamic time warping and variants). See also References External links
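The usage sketch referenced above assumes mlpy 3.x's learn/pred convention, in which classifier objects expose .learn() for fitting and .pred() for prediction, as in the project's documentation; the data is synthetic and the example is illustrative rather than canonical.

```python
# Hypothetical mlpy usage sketch: a linear discriminant analysis classifier (LDAC).
import numpy as np
import mlpy

np.random.seed(0)
x = np.random.randn(40, 2)                  # 40 samples, 2 features
y = np.where(x[:, 0] + x[:, 1] > 0, 2, 1)   # two integer class labels

ldac = mlpy.LDAC()    # linear discriminant analysis classifier
ldac.learn(x, y)      # fit on the training data
print(ldac.pred(np.array([[0.5, 0.5]])))    # predicted class of a new point
```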
========================================
[SOURCE: https://en.wikipedia.org/wiki/J%C3%BCrgen_Habermas] | [TOKENS: 8086]
Contents Jürgen Habermas Jürgen Habermas (UK: /ˈhɑːbərmæs/ HAH-bər-mass, US: /-mɑːs/ -⁠mahss; German: [ˈjʏʁɡn̩ ˈhaːbɐmaːs]; born 18 June 1929) is a German philosopher and social theorist in the tradition of critical theory and pragmatism. His work addresses communicative rationality and the public sphere. Associated with the Frankfurt School, Habermas's work focused on the foundations of epistemology and social theory, the analysis of advanced capitalism and democracy, the rule of law in a critical social-evolutionary context, albeit within the confines of the natural law tradition, and contemporary politics, particularly German politics. Habermas's theoretical system is devoted to revealing the possibility of reason, emancipation, and rational-critical communication latent in modern institutions and in the human capacity to deliberate and pursue rational interests. Habermas is known for his work on the phenomenon of modernity, particularly with respect to the discussions of rationalization originally set forth by Max Weber. He has been influenced by American pragmatism, action theory, and poststructuralism. Biography Habermas was born in Düsseldorf, Rhine Province, in 1929. He was born with a cleft palate and had corrective surgery twice during childhood. Habermas argues that his speech disability made him think differently about the importance of deep dependence and of communication. Until his graduation from grammar school, he grew up in a staunchly Protestant milieu in Gummersbach (near Cologne), where his grandfather Friedrich Habermas had been the director of the local seminary. His father, Ernst Habermas, who was executive director of the Cologne Chamber of Industry and Commerce, joined the Nazi Party in 1933 and advised it from 1939. A teenager during World War II, Habermas joined the Deutsches Jungvolk, a junior section of the Hitler Youth, at his father's instigation, and rose to the rank of Jungvolkführer (leader), which allowed him to remain in this formation beyond the age of 14. He organised first aid training as part of medical corps service. From August 1944, his detachment waged anti-aircraft warfare against the Allied advances on the Siegfried Line. He narrowly avoided being drafted into the Wehrmacht in the closing stages of the war, shortly before the arrival of US troops near his home. He studied at the universities of Göttingen (1949/50), Zurich (1950/51), and Bonn (1951–54) and earned a doctorate in philosophy from Bonn in February 1954 with a dissertation written on the tension between the absolute and history in Schelling's thought, entitled Das Absolute und die Geschichte. Von der Zwiespältigkeit in Schellings Denken ("The Absolute and History: On the Schism in Schelling's Thought"). His dissertation committee included Erich Rothacker and Oskar Becker, both of whom were former Nazis. Habermas later described the approach of his thesis as Heideggerian and noted that it led him to the work of young Marx via Karl Löwith. In the mid-1950s, Habermas worked briefly as a journalist. His 1953 article for the right-wing daily Frankfurter Allgemeine Zeitung expressed outrage at the publication of Martin Heidegger's 1935 lectures (Introduction to Metaphysics) that contained a reference to the "inner truth and greatness" of Nazism, while defending a complete separation between Heidegger's philosophy and politics.
Habermas's essay "The Dialectic of Rationalisation" of 1954 sketched the outline for his later work, including his critical engagement with the Western Marxists. In 1956, Habermas became Theodor W. Adorno's research assistant at the University of Frankfurt am Main's Institute for Social Research (IfS). From 1956 to 1959, he studied philosophy and sociology under Adorno and the fellow critical theorist Max Horkheimer at the IfS. He was involved at the time in the early anti-nuclear movement. His work and activities soon provoked strong objections from Horkheimer, who tried to block the publication of Student und Politik. Eine soziologische Untersuchung zum politischen Bewusstsein Frankfurter Studenten written by Habermas with Ludwig von Friedeburg and three others (on the grounds that it would "encourage" the East German Communists and "play into the hands of the potential fascists at home"), demanded that Adorno sack Habermas as his assistant in 1958, and made unacceptable demands for revision of his dissertation. Combined with his own belief that the Frankfurt School had become paralyzed with political skepticism and disdain for modern culture, the conflict resulted in Habermas leaving Frankfurt and finishing his habilitation in political science at the University of Marburg under the Marxist Wolfgang Abendroth. His 1961 habilitation work was entitled Strukturwandel der Öffentlichkeit. Untersuchungen zu einer Kategorie der bürgerlichen Gesellschaft (published in English translation in 1989 as The Structural Transformation of the Public Sphere: An Inquiry into a Category of Bourgeois Society). It is a detailed social history of the development of the bourgeois public sphere from its origins in the 18th century salons up to its transformation through the influence of capital-driven mass media. In 1961, Habermas became a Privatdozent in Marburg, and—in a move that was highly unusual for the German academic scene of that time—he was offered the position of "extraordinary professor" (professor without chair) of philosophy at the University of Heidelberg (at the instigation of Hans-Georg Gadamer and Karl Löwith) in 1962, which he accepted. In 1964, strongly supported by Adorno, Habermas returned to Frankfurt to take over Horkheimer's chair in philosophy and sociology, and reconciled with Horkheimer, who provided a glowing reference for him to the American Jewish Committee in 1965. The philosopher Albrecht Wellmer was Habermas's assistant in Frankfurt from 1966 to 1970. Following Adorno's death in 1969, Habermas, who had earlier declined the directorship of the Institute for Social Research, recommended Leszek Kołakowski to take up the role in the following year. When the proposal fell through due to opposition from the philosophy department, Habermas published an open letter against the institutionalisation of critical theory. He accepted the position of co-director (alongside Carl Friedrich von Weizsäcker) of the Max Planck Institute for the Study of the Scientific and Technical World in Starnberg (near Munich) in 1971, and worked there until 1983, two years after the publication of his magnum opus, The Theory of Communicative Action. He proclaimed his definitive break with the Frankfurt School of critical theory in a 1971 letter to Herbert Marcuse. He was elected a Foreign Honorary Member of the American Academy of Arts and Sciences in 1984. In 1983, Habermas returned to his chair at Frankfurt.
In 1986, he received the Gottfried Wilhelm Leibniz Prize of the Deutsche Forschungsgemeinschaft, which is the highest honour awarded in German research. Since retiring from Frankfurt in 1994, Habermas has continued to publish extensively. He holds the position of "permanent visiting" professor at Northwestern University in Evanston, Illinois, and "Theodor Heuss Professor" at The New School, New York City. Habermas was awarded the Prince of Asturias Award in Social Sciences of 2003. Habermas was also the 2004 Kyoto Laureate in the Arts and Philosophy section. He traveled to San Diego and on 5 March 2005, as part of the University of San Diego's Kyoto Symposium, gave a speech entitled The Public Role of Religion in Secular Context, regarding the evolution of separation of church and state from neutrality to intense secularism. He received the 2005 Holberg International Memorial Prize (about €520,000). In 2007, Habermas was listed as the seventh most-cited author in the humanities (including the social sciences) by The Times Higher Education Guide, ahead of Max Weber and behind Erving Goffman. Bibliometric studies demonstrate his continuing influence and increasing relevance. He declared himself a supporter of Emmanuel Macron ahead of the 2017 French presidential election. Jürgen Habermas was the father of Rebekka Habermas (1959–2023), historian of German social and cultural history and professor of modern history at the University of Göttingen. Habermas was a famed teacher and mentor. Among his most prominent students were the pragmatic philosopher Herbert Schnädelbach (theorist of discourse distinction and rationality), the political sociologist Claus Offe (professor at the Hertie School of Governance in Berlin), the social philosopher Jóhann Páll Árnason (professor at La Trobe University and chief editor of the journal Thesis Eleven), the hermeneutical theologian Hans-Herbert Kögler, the sociological theorist Hans Joas (professor at the University of Erfurt and at the University of Chicago), the theorist of societal evolution Klaus Eder, the social philosopher Axel Honneth, the political theorist David Rasmussen (professor at Boston College and chief editor of the journal Philosophy & Social Criticism), the environmental ethicist Konrad Ott, the anarcho-capitalist philosopher Hans-Hermann Hoppe (who came to reject much of Habermas's thought), the American philosopher Thomas McCarthy, the co-creator of mindful inquiry in social research Jeremy J. Shapiro, the political philosopher Cristina Lafont (Harold H. and Virginia Anderson Professor of Philosophy at Northwestern University), and the assassinated Serbian prime minister Zoran Đinđić. Philosophy and social theory Habermas has constructed a comprehensive framework of philosophy and social theory drawing on a number of intellectual traditions. Jürgen Habermas considers his major contribution to be the development of the concept and theory of communicative reason or communicative rationality, which distinguishes itself from the rationalist tradition by locating rationality in structures of interpersonal linguistic communication rather than in the structure of the cosmos. This social theory advances the goals of human emancipation, while maintaining an inclusive universalist moral framework.
This framework rests on the argument called universal pragmatics—that all speech acts have an inherent telos (the Greek word for "purpose")—the goal of mutual understanding, and that human beings possess the communicative competence to bring about such understanding. Habermas built the framework out of the speech-act philosophy of Ludwig Wittgenstein, J. L. Austin and John Searle, the sociological theory of the interactional constitution of mind and self of George Herbert Mead, the theories of moral development of Jean Piaget and Lawrence Kohlberg, and the discourse ethics of his Frankfurt colleague and fellow student Karl-Otto Apel. Habermas's works resonate within the traditions of Kant and the Enlightenment and of democratic socialism through his emphasis on the potential for transforming the world and arriving at a more humane, just, and egalitarian society through the realization of the human potential for reason, in part through discourse ethics. While Habermas has stated that the Enlightenment is an "unfinished project," he argues it should be corrected and complemented, not discarded. In this he distances himself from the Frankfurt School, criticizing it, as well as much of postmodernist thought, for excessive pessimism, radicalism, and exaggerations. It was at the Max Planck Institute in Starnberg that Habermas completed his principal work, and Gordon Finlayson has proposed to consider Habermas a member of the first-generation Starnberg school rather than second-generation Frankfurt School. Within sociology, Habermas's major contribution was the development of a comprehensive theory of societal evolution and modernization focusing on the difference between communicative rationality and rationalization on one hand and strategic/instrumental rationality and rationalization on the other. This includes a critique from a communicative standpoint of the differentiation-based theory of social systems developed by Niklas Luhmann, a student of Talcott Parsons (Habermas secured Luhmann's entry into the small editorial committee for Suhrkamp Verlag's Theorie series). His defence of modernity and civil society has been a source of inspiration to others, and is considered a major philosophical alternative to the varieties of poststructuralism. He has also offered an influential analysis of late capitalism. Habermas perceives the rationalization, humanization and democratization of society in terms of the institutionalization of the potential for rationality that is inherent in the communicative competence that is unique to the human species. Habermas contends that communicative competence has developed through the course of evolution, but in contemporary society it is often suppressed or weakened by the way in which major domains of social life, such as the market, the state, and organizations, have been given over to or taken over by strategic/instrumental rationality, so that the logic of the system supplants that of the lifeworld. According to the political anthropologist Irfan Ahmad, the influence of Max Weber on Habermas's conceptual framework, as demonstrated by the indebtedness of the 1981 Theory of Communicative Action to Weber's student Talcott Parsons, overrides that of Karl Marx. In that book, Habermas stated that the "market is the most important example of a norm-free regulation of cooperative contexts", which Ahmad regards as an unambiguous sign of his shift against left-wing politics. 
Habermas had previously distanced himself from the Hegelian Marxism of György Lukács, Adorno and Karl Korsch in his lectures delivered from 1973 onwards and published as Zur Rekonstruktion des Historischen Materialismus (French translation as Après Marx) in 1976. In a 1979 interview at Starnberg, after crediting Karl-Otto Apel with first labelling him a neo-Marxist around the time of his habilitation in 1961, Habermas commented: "Today I value being considered a Marxist". He added that he was "not a Marxist in the sense of believing in Marxism as a sure-fire explanation. Still, Marxism did give me both the impetus and the analytical means to investigate the development of the relationship between democracy and capitalism". He claimed to be "the last Marxist" as late as 1989. Habermas introduces the concept of "reconstructive science" with a double purpose: to place the "general theory of society" between philosophy and social science, and to bridge the rift between "grand theorization" and "empirical research". The model of "rational reconstructions" represents the main thread of his surveys of the "structures" of the lifeworld ("culture", "society" and "personality") and their respective "functions" (cultural reproduction, social integration and socialization). For this purpose, the dialectic between the "symbolic representation" of "the structures subordinated to all lifeworlds" ("internal relationships") and the "material reproduction" of social systems as a whole ("external relationships" between social systems and their environment) has to be considered. This model finds its chief application in the "theory of social evolution", starting from the reconstruction of the necessary conditions for a phylogeny of socio-cultural life forms ("hominization") up to an analysis of the development of "social formations", which Habermas subdivides into primitive, traditional, modern and contemporary formations. This model of the "reconstruction of the logic of development" of "social formations" is summed up by Habermas through the differentiation between lifeworld and social systems (and, within them, through the "rationalization of the lifeworld" and the "growth in complexity of social systems"). It also carries methodological implications for the "explanation of the dynamics" of "historical processes", in particular for the "theoretical meaning" of evolutionary theory's propositions. Even though Habermas considers that "ex-post rational reconstructions" and "system/environment models" cannot have a complete "historiographical application", they certainly act as a general premise in the argumentative structure of "historical explanation". In The Structural Transformation of the Public Sphere, Habermas argues that prior to the 18th century, European culture had been dominated by a "representational" culture, where one party sought to "represent" itself before its audience by overwhelming its subjects. As an example of "representational" culture, Habermas argued that Louis XIV's Palace of Versailles was meant to show the greatness of the French state and its King by overpowering the senses of visitors to the Palace.
Habermas identifies "representational" culture as corresponding to the feudal stage of development according to Marxist theory, arguing that the coming of the capitalist stage of development marked the appearance of Öffentlichkeit (the public sphere). In the culture characterized by Öffentlichkeit, there emerged a public space outside the control of the state, where individuals exchanged views and knowledge. In Habermas's view, the growth in newspapers, journals, reading clubs, Masonic lodges, and coffeehouses in 18th-century Europe, all in different ways, marked the gradual replacement of "representational" culture with Öffentlichkeit culture. Habermas argued that the essential characteristic of the Öffentlichkeit culture was its "critical" nature. Unlike "representational" culture where only one party was active and the other passive, the Öffentlichkeit culture was characterized by a dialogue as individuals either met in conversation, or exchanged views via the print media. Habermas maintains that as Britain was the most liberal country in Europe, the culture of the public sphere emerged there first around 1700, and the growth of Öffentlichkeit culture took place over most of the 18th century in Continental Europe. In his view, the French Revolution was in large part caused by the collapse of "representational" culture, and its replacement by Öffentlichkeit culture. Though Habermas's main concern in The Structural Transformation of the Public Sphere was to expose what he regarded as the deceptive nature of free institutions in the West, his book had a major effect on the historiography of the French Revolution. According to Habermas, a variety of factors resulted in the eventual decay of the public sphere, including the growth of a commercial mass media, which turned the critical public into a passive consumer public; and the welfare state, which merged the state with society so thoroughly that the public sphere was squeezed out. It also turned the "public sphere" into a site of self-interested contestation for the resources of the state rather than a space for the development of a public-minded rational consensus. His best-known work to date, the Theory of Communicative Action (1981), is based on an adaptation of Talcott Parsons' AGIL Paradigm. In this work, Habermas voiced criticism of the process of modernization, which he saw as an inflexible process forced through by economic and administrative rationalization. Habermas outlined how our everyday lives are penetrated by formal systems, paralleling the development of the welfare state, corporate capitalism and mass consumption. These reinforcing trends rationalize public life. Disenfranchisement of citizens occurs as political parties and interest groups become rationalized and representative democracy replaces participatory democracy. In consequence, boundaries between public and private, the individual and society, the system and the lifeworld are deteriorating. Democratic public life cannot develop where matters of public importance are not discussed by citizens. An "ideal speech situation" requires that participants have the same capacities of discourse and social equality, and that their words not be confused by ideology or other errors. In this version of the consensus theory of truth, Habermas maintains that truth is what would be agreed upon in an ideal speech situation. Habermas has expressed optimism about the possibility of the revival of the public sphere.
He discerns a hope for the future where the representative democracy-reliant nation-state is replaced by a deliberative democracy-reliant political organism based on the equal rights and obligations of citizens. In such a direct democracy-driven system, the activist public sphere is needed for debates on matters of public importance, as well as a mechanism for that discussion to affect the decision-making process. Habermas versus the postmodernists Habermas offered some early criticisms in an essay, "Modernity versus Postmodernity" (1981), which has achieved wide recognition. In that essay, Habermas raises the issue of whether, in light of the failures of the twentieth century, we "should try to hold on to the intentions of the Enlightenment, feeble as they may be, or should we declare the entire project of modernity a lost cause?" Habermas refuses to give up on the possibility of a rational, "scientific" understanding of the life-world. Habermas raises several main criticisms of postmodernism. Key dialogues and engagement with politics The positivism dispute was a political-philosophical dispute between the critical rationalists (Karl Popper, Hans Albert) and the Frankfurt School (Theodor Adorno, Jürgen Habermas) in 1961, about the methodology of the social sciences. It grew into a broad discussion within German sociology from 1961 to 1969. There is a controversy between Habermas and Hans-Georg Gadamer about the limits of hermeneutics. Gadamer completed his magnum opus, Truth and Method, in 1960, and engaged in his debate with Habermas over the possibility of transcending history and culture to find a truly objective position from which to critique society. During the 1960s, Gadamer supported Habermas and advocated for him to be offered a job at Heidelberg before he had completed his habilitation, despite Max Horkheimer's objections. While they both criticized positivism, a philosophical disagreement arose between them in the 1970s. This disagreement expanded the scope of Gadamer's philosophical influence. Despite fundamental agreements between them, such as starting from the hermeneutic tradition and returning to Greek practical philosophy, Habermas argued that Gadamer's emphasis on tradition and prejudice blinded him to the ideological operation of power. Habermas believed that Gadamer's approach failed to enable critical reflection on the sources of ideology in society. He accused Gadamer of endorsing a dogmatic stance toward tradition, which made it difficult to identify distortions in understanding. Gadamer countered that denying the universal nature of hermeneutics was the more dogmatic stance, because it affirmed the illusion that the subject can free itself from the past. There is a dispute concerning whether Michel Foucault's ideas of "power analytics" and "genealogy" or Jürgen Habermas's ideas of "communicative rationality" and "discourse ethics" provide a better critique of the nature of power in society. The debate compares and evaluates the central ideas of Habermas and Foucault as they pertain to questions of power, reason, ethics, modernity, democracy, civil society, and social action. Habermas and Karl-Otto Apel both support a postmetaphysical, universal moral theory, but they disagree on the nature and justification of this principle. Habermas disagrees with Apel's view that the principle is a transcendental condition of human activity, while Apel asserts that it is. They each criticize the other's position.
Habermas argues that Apel is too concerned with transcendental conditions, while Apel argues that Habermas does not value critical discourse enough. There is a debate between Habermas and John Rawls. The debate centers on the question of how to do political philosophy under conditions of cultural pluralism, if the aim of political philosophy is to uncover the normative foundation of a modern liberal democracy. Habermas believes that Rawls's view is inconsistent with the idea of popular sovereignty, while Rawls denies that political legitimacy is solely a matter of sound moral reasoning or that democratic will formation has been unduly downgraded in his theory. Habermas is famous as a public intellectual as well as a scholar; most notably, in the 1980s he used the popular press to attack the German historians Ernst Nolte, Michael Stürmer, Klaus Hildebrand and Andreas Hillgruber. Habermas first expressed his views on the above-mentioned historians in Die Zeit on 11 July 1986 in a feuilleton (a type of culture and arts opinion essay in German newspapers) entitled "A Kind of Settlement of Damages". Habermas criticized Nolte, Hildebrand, Stürmer and Hillgruber for "apologistic" history writing in regard to the Nazi era, and for seeking to "close Germany's opening to the West" that in Habermas's view had existed since 1945. Habermas argued that Nolte, Stürmer, Hildebrand and Hillgruber had tried to detach Nazi rule and the Holocaust from the mainstream of German history, explain away Nazism as a reaction to Bolshevism, and partially rehabilitate the reputation of the Wehrmacht (German Army) during World War II. Habermas wrote that Stürmer was trying to create a "vicarious religion" in German history which, together with the work of Hillgruber glorifying the last days of the German Army on the Eastern Front, was intended to serve as a "kind of NATO philosophy colored with German nationalism". About Hillgruber's statement that Adolf Hitler wanted to exterminate the Jews "because only such a 'racial revolution' could lend permanence to the world-power status of his Reich", Habermas wrote: "Since Hillgruber does not use the verb in the subjunctive, one does not know whether the historian has adopted the perspective of the participants this time too". Habermas wrote: "The unconditional opening of the Federal Republic to the political culture of the West is the greatest intellectual achievement of our postwar period; my generation should be especially proud of this. This event cannot and should not be stabilized by a kind of NATO philosophy colored with German nationalism. The opening of the Federal Republic has been achieved precisely by overcoming the ideology of Central Europe that our revisionists are trying to warm up for us with their geopolitical drumbeat about "the old geographically central position of the Germans in Europe" (Stürmer) and "the reconstruction of the destroyed European Center" (Hillgruber). The only patriotism that will not estrange us from the West is a constitutional patriotism." The debate known as the Historikerstreit ("Historians' Dispute") was not at all one-sided, as Habermas was himself attacked by scholars such as Joachim Fest, Hagen Schulze, Horst Möller, Imanuel Geiss and Klaus Hildebrand. In turn, Habermas was supported by historians such as Martin Broszat, Eberhard Jäckel, Hans Mommsen, and Hans-Ulrich Wehler. 
Habermas and Jacques Derrida engaged in a series of disputes beginning in the 1980s and culminating in a mutual understanding and friendship in the late 1990s that lasted until Derrida's death in 2004. They originally came into contact when Habermas invited Derrida to speak at the University of Frankfurt am Main in 1984. The next year, Habermas published "Beyond a Temporalized Philosophy of Origins: Derrida" in The Philosophical Discourse of Modernity, in which he described Derrida's method as being unable to provide a foundation for social critique. Derrida, citing Habermas as an example, remarked that "those who have accused me of reducing philosophy to literature or logic to rhetoric ... have visibly and carefully avoided reading me". After Derrida's final rebuttal in 1989, the two philosophers did not continue the exchange, but, as Derrida described it, groups in the academy "conducted a kind of 'war', in which we ourselves never took part, either personally or directly". At the end of the 1990s, Habermas approached Derrida at a party held at an American university where both were lecturing. They then met in Paris over dinner and afterwards participated in many joint projects. In 2000 they held a joint seminar on problems of philosophy, right, ethics, and politics at the University of Frankfurt. In December 2000, in Paris, Habermas gave a lecture entitled "How to answer the ethical question?" at the "Judeities. Questions for Jacques Derrida" conference organized by Joseph Cohen and Raphael Zagury-Orly. Following the lecture by Habermas, the two thinkers engaged in a very heated debate on Heidegger and the possibility of ethics. The conference volume was published by Éditions Galilée (Paris) in 2002, and subsequently in English by Fordham University Press (2007). In the aftermath of the 11 September attacks, Derrida and Habermas laid out their individual opinions on 9/11 and the war on terror in Giovanna Borradori's Philosophy in a Time of Terror: Dialogues with Jürgen Habermas and Jacques Derrida. In early 2003, both Habermas and Derrida were very active in opposing the coming Iraq War; in a manifesto that later became the book Old Europe, New Europe, Core Europe, the two called for a tighter unification of the states of the European Union in order to create a power capable of opposing American foreign policy. Derrida wrote a foreword expressing his unqualified subscription to Habermas's declaration of February 2003 ("February 15, or, What Binds Europeans Together: Plea for a Common Foreign Policy, Beginning in Core Europe") in the book, which was a reaction to the Bush administration's demands upon European nations for support in the coming Iraq War. Habermas's attitudes toward religion have changed over the years. The analyst Phillippe Portier identifies three phases in Habermas's attitude towards this social sphere: the first, lasting into the 1980s, when the younger Habermas, in the spirit of Marx, argued against religion, seeing it as an "alienating reality" and a "tool of control"; the second, from the mid-1980s to the beginning of the 21st century, when he stopped discussing it and, as a secular commentator, relegated it to matters of private life; and the third, from then until now, in which Habermas has seen a positive social role for religion. In a 1999 interview Habermas stated: For the normative self-understanding of modernity, Christianity has functioned as more than just a precursor or catalyst. 
Universalistic egalitarianism, from which sprang the ideals of freedom and a collective life in solidarity, the autonomous conduct of life and emancipation, the individual morality of conscience, human rights and democracy, is the direct legacy of the Judaic ethic of justice and the Christian ethic of love. This legacy, substantially unchanged, has been the object of a continual critical reappropriation and reinterpretation. Up to this very day there is no alternative to it. And in light of the current challenges of a post-national constellation, we must draw sustenance now, as in the past, from this substance. Everything else is idle postmodern talk. The original German (from the Habermas Forum website) of the disputed quotation is: Das Christentum ist für das normative Selbstverständnis der Moderne nicht nur eine Vorläufergestalt oder ein Katalysator gewesen. Der egalitäre Universalismus, aus dem die Ideen von Freiheit und solidarischem Zusammenleben, von autonomer Lebensführung und Emanzipation, von individueller Gewissensmoral, Menschenrechten und Demokratie entsprungen sind, ist unmittelbar ein Erbe der jüdischen Gerechtigkeits- und der christlichen Liebesethik. In der Substanz unverändert, ist dieses Erbe immer wieder kritisch angeeignet und neu interpretiert worden. Dazu gibt es bis heute keine Alternative. Auch angesichts der aktuellen Herausforderungen einer postnationalen Konstellation zehren wir nach wie vor von dieser Substanz. Alles andere ist postmodernes Gerede. — Jürgen Habermas, Zeit der Übergänge (2001), p. 174f. This statement has been misquoted in a number of articles and books, where Habermas is instead quoted as saying: Christianity, and nothing else, is the ultimate foundation of liberty, conscience, human rights, and democracy, the benchmarks of Western civilization. To this day, we have no other options. We continue to nourish ourselves from this source. Everything else is postmodern chatter. In his book Zwischen Naturalismus und Religion (Between Naturalism and Religion, 2005), Habermas stated that the forces of religious strength, as a result of multiculturalism and immigration, are stronger than in previous decades, and that there is therefore a need for tolerance, which must be understood as a two-way street: secular people need to tolerate the role of religious people in the public square, and vice versa. In early 2007, Ignatius Press published a dialogue between Habermas and Joseph Ratzinger, then Prefect of the Congregation for the Doctrine of the Faith (elected as Pope Benedict XVI in 2005), entitled The Dialectics of Secularization. The dialogue took place on 14 January 2004 after an invitation to both thinkers by the Catholic Academy of Bavaria in Munich. It addressed contemporary questions such as whether a liberal, secular state depends on normative presuppositions that it cannot itself guarantee. In this debate a shift in Habermas's position became evident, in particular his rethinking of the public role of religion. Habermas stated that he wrote as a "methodological atheist," meaning that when doing philosophy or social science he presumed nothing about particular religious beliefs. Yet, while writing from this perspective, his evolving position on the role of religion in society led him to some challenging questions; as a result he conceded some ground in his dialogue with the future Pope, with consequences that further complicated his position on a communicative-rational solution to the problems of modernity. 
Habermas believes that even for self-identified liberal thinkers, "to exclude religious voices from the public square is highly illiberal." In addition, Habermas has popularized the concept of the "post-secular" society, referring to current times in which the idea of modernity is perceived as unsuccessful, and at times morally failed, so that, rather than a stratification or separation, a new peaceful dialogue and coexistence between faith and reason must be sought in order to learn mutually. Habermas has sided with other 20th-century commentators on Marx, such as Hannah Arendt, who have indicated concerns with the limits of totalitarian perspectives often associated with Marx's overestimation of the emancipatory potential of the forces of production. Arendt had presented this in her book The Origins of Totalitarianism, and Habermas extends this critique in his writings on functional reductionism in the life-world in his Lifeworld and System: A Critique of Functionalist Reason. As Habermas states: ... traditional Marxist analysis ... today, when we use the means of the critique of political economy ... can no longer make clear predictions: for that, one would still have to assume the autonomy of a self-reproducing economic system. I do not believe in such an autonomy. Precisely for this reason, the laws governing the economic system are no longer identical to the ones Marx analyzed. Of course, this does not mean that it would be wrong to analyze the mechanism which drives the economic system; but in order for the orthodox version of such an analysis to be valid, the influence of the political system would have to be ignored. Habermas reiterated the position that what refuted Marx and his theory of class struggle was the "pacification of class conflict" by the welfare state, which had developed in the West "since 1945" thanks to "a reformist [politics] relying on the instruments of Keynesian economics". The Italian philosopher and historian Domenico Losurdo criticised the main point of these claims as "marked by the absence of a question that should be obvious:— Was the advent of the welfare state the inevitable result of a tendency inherent in capitalism? Or was it the result of political and social mobilization by the subaltern classes—in the final analysis, of a class struggle? Had the German philosopher posed this question, perhaps he would have avoided assuming the permanence of the welfare state, whose precariousness and progressive dismantlement are now obvious to everyone". In 1973, Habermas noted "the incompatibility of the imperatives that rule the capitalistic economic system with a democratic process for forming the public will". His critique of capitalism has focused on its technocratic tendencies. In 1999, Habermas addressed the Kosovo War, defending NATO's intervention in an article for Die Zeit that stirred controversy. In 2002, Habermas argued that the United States should not go to war in Iraq. On 13 November 2023, Habermas and co-authors issued a statement arguing that Israel's military response to the "extreme atrocity" of the Hamas-led attack on Israel was "justified in principle". Although questions of proportionality and civilian casualties can rightly be asked about the Israeli response, the statement maintained that such critiques cannot justly attribute "genocidal intentions" to Israel's actions and, furthermore, should in any case not lead to antisemitism. 
Referencing Hegel's concept of the cunning of reason, used to describe the progressive realisation of freedom in history unbeknownst to individuals, Habermas has stated that the euro represents the "cunning of economic reason". During the European debt crisis, Habermas criticized Angela Merkel's leadership in Europe. In 2013, Habermas clashed with Wolfgang Streeck, who argued that the kind of European federalism espoused by Habermas was the root of the continent's crisis. Awards Major works See also References Further reading External links
========================================
[SOURCE: https://en.wikipedia.org/wiki/Calendar] | [TOKENS: 4078]
Contents Calendar A calendar is a system of organizing days. This is done by giving names to periods of time, typically days, weeks, months and years. A date is the designation of a single and specific day within such a system. A calendar is also a physical record (often paper) of such a system. A calendar can also mean a list of planned events, such as a court calendar, or a partly or fully chronological list of documents, such as a calendar of wills. Periods in a calendar (such as years and months) are usually, though not necessarily, synchronized with the cycle of the sun or the moon. The most common type of pre-modern calendar was the lunisolar calendar, a lunar calendar that occasionally adds one intercalary month to remain synchronized with the solar year over the long term. Etymology The term calendar is taken from kalendae, the term for the first day of the month in the Roman calendar, related to the verb calare 'to call out', referring to the "calling" of the new moon when it was first seen. Latin calendarium meant 'account book, register' (as accounts were settled and debts were collected on the calends of each month). The Latin term was adopted in Old French as calendrier and from there in Middle English as calender by the 13th century (the spelling calendar is early modern). History The courses of the Sun and the Moon are the most salient regularly recurring natural events useful for timekeeping, and in pre-modern societies around the world lunation and the year were most commonly used as time units. Nevertheless, the Roman calendar contained remnants of a very ancient pre-Etruscan 10-month solar year. The first recorded physical calendars, dependent on the development of writing in the Ancient Near East, are the Bronze Age Egyptian and Sumerian calendars. During the Vedic period India developed a sophisticated timekeeping methodology and calendars for Vedic rituals. According to Yukio Ohashi, the Vedanga calendar in ancient India was based on astronomical studies during the Vedic Period and was not derived from other cultures. A large number of calendar systems in the Ancient Near East were based on the Babylonian calendar dating from the Iron Age, among them the calendar system of the Persian Empire, which in turn gave rise to the Zoroastrian calendar and the Hebrew calendar. A great number of Hellenic calendars were developed in Classical Greece, and during the Hellenistic period they gave rise to the ancient Roman calendar and to various Hindu calendars. Calendars in antiquity were lunisolar, depending on the introduction of intercalary months to align the solar and the lunar years. This was mostly based on observation, but there may have been early attempts to model the pattern of intercalation algorithmically, as evidenced in the fragmentary 2nd-century Coligny calendar. The Roman calendar was reformed by Julius Caesar in 46 BC. His "Julian" calendar was no longer dependent on the observation of the new moon, but followed an algorithm of introducing a leap day every four years. This created a dissociation of the calendar month from lunation. The Gregorian calendar, introduced in 1582, corrected most of the remaining difference between the Julian calendar and the solar year. The Islamic calendar is based on the prohibition of intercalation (nasi') by Muhammad, in Islamic tradition dated to a sermon given on 9 Dhu al-Hijjah AH 10 (Julian date: 6 March 632). This resulted in an observation-based lunar calendar that shifts relative to the seasons of the solar year. 
There have been several modern proposals for reform of the modern calendar, such as the World Calendar, the International Fixed Calendar, the Holocene calendar, and the Hanke–Henry Permanent Calendar. Such ideas are promoted from time to time, but have failed to gain traction because of the loss of continuity and the massive upheaval that implementing them would involve, as well as their effect on cycles of religious activity. Systems A full calendar system has a different calendar date for every day. Thus the week cycle is by itself not a full calendar system; neither is a system to name the days within a year without a system for identifying the years. The simplest calendar system just counts time periods from a reference date, or epoch. This applies to the Julian day and to Unix time. Virtually the only possible variation is using a different reference date, in particular, one less distant in the past to make the numbers smaller. Computations in these systems are just a matter of addition and subtraction, as the sketch below illustrates. Other calendars have one or more larger units of time: some contain a single level of cycles, others two levels, and cycles can be synchronized with periodic phenomena. Very commonly a calendar includes more than one type of cycle, or has both cyclic and non-cyclic elements. Most calendars incorporate more complex cycles. For example, the vast majority of them track years, months, weeks and days. The seven-day week is practically universal, though its use varies. It has run uninterrupted for millennia. Solar calendars assign a date to each solar day, which is based on the apparent motion of the Sun. A day may consist of the period between sunrise and sunset, with a following period of night, or it may be a period between successive events such as two sunsets. The length of the interval between two such successive events may vary slightly during the year, or it may be averaged into a mean solar day. Other types of calendar may also use a solar day. The Egyptians appear to have been the first to develop a solar calendar, using as a fixed point the annual sunrise reappearance of the Dog Star—Sirius, or Sothis—in the eastern sky, which coincided with the annual flooding of the Nile River. They built a calendar with 365 days, divided into 12 months of 30 days each, with 5 extra days at the end of the year. However, because they did not account for the remaining quarter day of the solar year, their calendar slowly drifted against the seasons, by about one day every four years. Not all calendars use the solar year as a unit. A lunar calendar is one in which days are numbered within each lunar phase cycle. Because the length of the lunar month is not an even fraction of the length of the tropical year, a purely lunar calendar quickly drifts against the seasons, which do not vary much near the equator. It does, however, stay constant with respect to other phenomena, notably tides. An example is the Islamic calendar. Alexander Marshack, in a controversial reading, believed that marks on a bone baton (c. 25,000 BC) represented a lunar calendar. Other marked bones may also represent lunar calendars. Similarly, Michael Rappenglueck believes that marks on a 15,000-year-old cave painting represent a lunar calendar. A lunisolar calendar is a lunar calendar that compensates by adding an extra month as needed to realign the months with the seasons. Prominent examples of lunisolar calendars are the Hindu and Buddhist calendars popular in South Asia and Southeast Asia. 
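As a concrete illustration of the epoch-counting systems described above (Julian day numbers and Unix time), the following Python sketch derives the day count and weekday from a Unix timestamp. It is a minimal sketch under stated assumptions: it uses the standard Unix epoch of 1 January 1970 (a Thursday), it ignores leap seconds (as Unix time itself does), and the function names are purely illustrative.

```python
SECONDS_PER_DAY = 86_400

def days_since_epoch(unix_seconds: int) -> int:
    """Whole days elapsed since the Unix epoch (1 January 1970, 00:00 UTC)."""
    return unix_seconds // SECONDS_PER_DAY

def weekday(unix_seconds: int) -> str:
    """Name of the weekday, exploiting the uninterrupted seven-day cycle."""
    # 1 January 1970 was a Thursday, so the cycle of names starts there.
    names = ["Thursday", "Friday", "Saturday", "Sunday",
             "Monday", "Tuesday", "Wednesday"]
    return names[days_since_epoch(unix_seconds) % 7]

print(days_since_epoch(1_000_000_000))  # 11574
print(weekday(1_000_000_000))           # Sunday (9 September 2001)
```

Because the week cycle has run uninterrupted for millennia, the weekday really is nothing more than the day count modulo 7; everything else in such a system is, as the text says, addition and subtraction.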
Another example is the Hebrew calendar, which uses a 19-year cycle (the Metonic cycle) of 12 common and 7 leap years; the arithmetic behind such cycles is sketched below. Subdivisions Nearly all calendar systems group consecutive days into "months" and also into "years". In a solar calendar a year approximates Earth's tropical year (that is, the time it takes for a complete cycle of seasons), traditionally used to facilitate the planning of agricultural activities. In a lunar calendar, the month approximates the cycle of the moon phase. Consecutive days may be grouped into other periods such as the week. Because the number of days in the tropical year is not a whole number, a solar calendar must have a different number of days in different years. This may be handled, for example, by adding an extra day in leap years. The same applies to months in a lunar calendar and also to the number of months in a year in a lunisolar calendar. This is generally known as intercalation. Even if a calendar is solar but not lunar, the year cannot be divided entirely into months that never vary in length. Cultures may define other units of time, such as the week, for the purpose of scheduling regular activities that do not easily coincide with months or years. Many cultures use different baselines for their calendars' starting years. Historically, several countries have based their calendars on regnal years, counting years from the accession of the current sovereign. For example, the year 2006 in Japan is year 18 Heisei, with Heisei being the era name of Emperor Akihito. Other types An astronomical calendar is based on ongoing observation; examples are the religious Islamic calendar and the old religious Jewish calendar in the time of the Second Temple. Such a calendar is also referred to as an observation-based calendar. The advantage of such a calendar is that it is perfectly and perpetually accurate. The disadvantage is that working out when a particular date would occur is difficult. An arithmetic calendar is one that is based on a strict set of rules; an example is the current Jewish calendar. Such a calendar is also referred to as a rule-based calendar. The advantage of such a calendar is the ease of calculating when a particular date occurs. The disadvantage is imperfect accuracy. Furthermore, even if the calendar is very accurate, its accuracy diminishes slowly over time, owing to changes in Earth's rotation. This limits the lifetime of an accurate arithmetic calendar to a few thousand years, after which the rules would need to be modified from observations made since the invention of the calendar. The early Roman calendar, created during the reign of Romulus, lumped the 61 days of the winter period together as simply "winter". Over time, this period became January and February; through further changes over time (including the creation of the Julian calendar) this calendar became the modern Gregorian calendar, introduced in 1582. Usage The primary practical use of a calendar is to identify days: to be informed about or to agree on a future event and to record an event that has happened. Days may be significant for agricultural, civil, religious, or social reasons. For example, a calendar provides a way to determine when to start planting or harvesting, which days are religious or civil holidays, which days mark the beginning and end of business accounting periods, and which days have legal significance, such as the day taxes are due or a contract expires. Also, a calendar may, by identifying a day, provide other useful information about the day, such as its season. 
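The 19-year cycle noted above for the Hebrew calendar, and lunisolar intercalation generally, rest on a numerical coincidence: 19 tropical years very nearly equal 235 synodic months, so inserting 7 leap months over each 19-year cycle (19 × 12 + 7 = 235) keeps the months aligned with the seasons. A quick check in Python, using the standard mean values for the tropical year and the lunation:

```python
TROPICAL_YEAR = 365.2422   # mean tropical year, in days
SYNODIC_MONTH = 29.5306    # mean lunation, in days

nineteen_years = 19 * TROPICAL_YEAR    # ~6939.60 days
lunar_months = 235 * SYNODIC_MONTH     # ~6939.69 days

print(f"19 tropical years: {nineteen_years:.2f} days")
print(f"235 lunar months:  {lunar_months:.2f} days")
# The mismatch is under a tenth of a day per 19-year cycle,
# i.e. roughly one day of drift only after about two centuries.
print(f"mismatch: {lunar_months - nineteen_years:.2f} days")
```

That residual mismatch is one reason why, as the text notes, even a very accurate arithmetic calendar eventually needs its rules adjusted against observation.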
Calendars are also used as part of a complete timekeeping system: date and time of day together specify a moment in time. In the modern world, timekeepers can show time, date, and weekday. Some may also show the lunar phase. The Gregorian calendar is the de facto international standard and is used almost everywhere in the world for civil purposes. Its solar aspect is a cycle of leap days within a 400-year cycle, designed to keep the duration of the calendar year aligned with the solar year. There is also a lunar aspect, which approximates the position of the moon during the year and is used in the calculation of the date of Easter. Each Gregorian year has either 365 or 366 days (the leap day being inserted as 29 February), amounting to an average Gregorian year of 365.2425 days (compared to a solar year of 365.2422 days). The Gregorian calendar was introduced in 1582 as a refinement to the Julian calendar, which had been in use throughout the European Middle Ages, amounting to a 0.002% correction in the length of the year. During the Early Modern period, its adoption was mostly limited to Roman Catholic nations, but by the 19th century it had become widely adopted for the sake of convenience in international trade. The last European country to adopt it was Greece, in 1923. The calendar epoch used by the Gregorian calendar is inherited from the medieval convention established by Dionysius Exiguus and associated with the Julian calendar. The year number is variously given as AD (for Anno Domini) or CE (for Common Era or Christian Era). The most important use of pre-modern calendars is keeping track of the liturgical year and the observation of religious feast days. While the Gregorian calendar is itself historically motivated by the calculation of the Easter date, it is now in worldwide secular use as the de facto standard. Alongside the use of the Gregorian calendar for secular matters, there remain several calendars in use for religious purposes. Western Christian liturgical calendars are based on the cycle of the Roman Rite of the Catholic Church, and generally include the liturgical seasons of Advent, Christmas, Ordinary Time (Time after Epiphany), Lent, Easter, and Ordinary Time (Time after Pentecost). Some Christian calendars do not include Ordinary Time, and every day falls into a denominated season. The Eastern Orthodox Church employs two liturgical calendars: the Julian calendar (often called the Old Calendar) and the Revised Julian calendar (often called the New Calendar). The Revised Julian calendar is nearly the same as the Gregorian calendar: years divisible by 100 are not leap years, except that years leaving a remainder of 200 or 600 when divided by 900 remain leap years. The two calendars thus agree that 2000 and 2400 are leap years, but diverge at 2800, which is a leap year in the Gregorian calendar only; both rules are compared in the sketch below. The Islamic calendar or Hijri calendar is a lunar calendar consisting of 12 lunar months in a year of 354 or 355 days. It is used to date events in most of the Muslim countries (concurrently with the Gregorian calendar) and used by Muslims everywhere to determine the proper day on which to celebrate Islamic holy days and festivals. Its epoch is the Hijra (corresponding to AD 622). With an annual drift of 11 or 12 days, the seasonal relation repeats approximately every 33 Islamic years. Various Hindu calendars remain in use in the Indian subcontinent, including the Nepali calendars, Bengali calendar, Malayalam calendar, Tamil calendar, Vikrama Samvat used in Northern India, and Shalivahana calendar in the Deccan states. 
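The Gregorian and Revised Julian leap rules described above are simple enough to transcribe directly into code. The following Python sketch is a plain restatement of the two rules (the function names are illustrative); printing a few century years shows where the calendars agree and where they part ways.

```python
def is_gregorian_leap(year: int) -> bool:
    # Leap if divisible by 4, except century years, unless divisible by 400.
    return year % 4 == 0 and (year % 100 != 0 or year % 400 == 0)

def is_revised_julian_leap(year: int) -> bool:
    # Same base rule, except century years are leap only when year % 900
    # leaves a remainder of 200 or 600.
    return year % 4 == 0 and (year % 100 != 0 or year % 900 in (200, 600))

# Non-century years always agree; among century years, the two calendars
# agree on 2000 and 2400 and first diverge at 2800 (Gregorian-only leap).
for y in (1900, 2000, 2100, 2400, 2800, 2900):
    print(y, is_gregorian_leap(y), is_revised_julian_leap(y))
```

Over its 900-year cycle the Revised Julian rule yields 218 leap years, for an average year of 365 + 218/900 ≈ 365.24222 days, marginally closer to the mean solar year of 365.2422 days than the Gregorian average of 365.2425.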
The Buddhist calendar and the traditional lunisolar calendars of Cambodia, Laos, Myanmar, Sri Lanka and Thailand are also based on an older version of the Hindu calendar. Most of the Hindu calendars are inherited from a system first enunciated in Vedanga Jyotisha of Lagadha, standardized in the Sūrya Siddhānta and subsequently reformed by astronomers such as Āryabhaṭa (AD 499), Varāhamihira (6th century) and Bhāskara II (12th century). The Hebrew calendar is used by Jews worldwide for religious and cultural affairs; it also influences civil matters in Israel (such as national holidays) and can be used there in business dealings (such as for the dating of cheques). Followers of the Baháʼí Faith use the Baháʼí calendar. The Baháʼí calendar, also known as the Badi calendar, was first established by the Bab in the Kitab-i-Asma. The Baháʼí calendar is a purely solar calendar and comprises 19 months, each of nineteen days. The Chinese, Hebrew, Hindu, and Julian calendars are widely used for religious and social purposes. The Iranian (Persian) calendar is used in Iran and some parts of Afghanistan. The Assyrian calendar is in use by the members of the Assyrian community in the Middle East (mainly Iraq, Syria, Turkey, and Iran) and the diaspora; the first year of the calendar is exactly 4750 years prior to the start of the Gregorian calendar. The Ethiopian calendar or Ethiopic calendar is the principal calendar used in Ethiopia and Eritrea, with the Oromo calendar also in use in some areas. In neighboring Somalia, the Somali calendar co-exists alongside the Gregorian and Islamic calendars. In Thailand, where the Thai solar calendar is used, the months and days have adopted the western standard, although the years are still based on the traditional Buddhist calendar. A fiscal calendar generally means the accounting year of a government or a business. It is used for budgeting, keeping accounts, and taxation. It is a set of 12 months that may start at any date in a year. The US government's fiscal year starts on 1 October and ends on 30 September. The government of India's fiscal year starts on 1 April and ends on 31 March. Small traditional businesses in India start the fiscal year on the Diwali festival and end it the day before the next year's Diwali festival. In accounting (and particularly accounting software), a fiscal calendar (such as a 4/4/5 calendar) fixes each month at a specific number of weeks to facilitate comparisons from month to month and year to year: January always has exactly 4 weeks (Sunday through Saturday), February has 4 weeks, March has 5 weeks, and so on. Such a calendar normally needs to add a 53rd week every 5th or 6th year, which might be added to December or might not be, depending on how the organization uses those dates. There is an international standard way to do this (the ISO week). The ISO week starts on a Monday and ends on a Sunday, and week 1 is always the week that contains 4 January in the Gregorian calendar. 
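The ISO week rule just stated (weeks run Monday to Sunday, and week 1 is the week containing 4 January) is implemented in Python's standard library, which makes its year-boundary behaviour easy to inspect:

```python
from datetime import date

# A date near the year boundary can belong to the previous ISO year:
# 1 January 2016 was a Friday, falling in the week before the one that
# contains 4 January 2016, so it counts as week 53 of ISO year 2015.
for d in (date(2016, 1, 1), date(2016, 1, 4)):
    iso_year, iso_week, iso_weekday = d.isocalendar()
    print(d, "-> ISO year", iso_year, "week", iso_week, "weekday", iso_weekday)

# 2016-01-01 -> ISO year 2015 week 53 weekday 5
# 2016-01-04 -> ISO year 2016 week 1 weekday 1
```

An ISO year therefore contains 52 or 53 whole weeks, which is the same surplus week that the 4/4/5 fiscal convention described above must periodically absorb.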
With a special pointing device, or by crossing out past days, it may indicate the current date and weekday. This is the most common usage of the word. In the US, Sunday is considered the first day of the week and so appears on the far left, with Saturday, the last day of the week, on the far right. In Britain, the weekend may appear at the end of the week, so the first day is Monday and the last day is Sunday. The US calendar display is also used in Britain. It is common to display the Gregorian calendar in separate monthly grids of seven columns (from Monday to Sunday, or Sunday to Saturday, depending on which day is considered to start the week; this varies by country) and five to six rows (or rarely, four rows, when February contains 28 days in a common year beginning on the first day of the week), with the day of the month numbered in each cell, beginning with 1. The sixth row is sometimes eliminated by marking 23/30 and 24/31 together as necessary. When working with weeks rather than months, a continuous format is sometimes more convenient, where no blank cells are inserted to ensure that the first day of a new month begins on a fresh row. Software Calendaring software provides users with an electronic version of a calendar, and may additionally provide an appointment book, address book, or contact list. Calendaring is a standard feature of many PDAs, EDAs, and smartphones. The software may be a local package designed for individual use (e.g., the Lightning extension for Mozilla Thunderbird, Microsoft Outlook without Exchange Server, or Windows Calendar) or may be a networked package that allows the sharing of information between users (e.g., Mozilla Sunbird, Windows Live Calendar, Google Calendar, or Microsoft Outlook with Exchange Server); a minimal text-based analogue of the monthly grid is sketched below. See also References Further reading External links
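The monthly grid described under Formats, including the choice of which weekday starts the week, can be reproduced with Python's standard calendar module; a small illustration (the dates chosen are arbitrary):

```python
import calendar

# US-style grid: weeks run Sunday through Saturday.
us = calendar.TextCalendar(firstweekday=calendar.SUNDAY)
print(us.formatmonth(2024, 2))   # February 2024, a leap month of 29 days

# Monday-first grid, as commonly printed in Britain and under ISO 8601.
uk = calendar.TextCalendar(firstweekday=calendar.MONDAY)
print(uk.formatmonth(2024, 2))
```

Switching the firstweekday argument simply rotates the seven columns, which is all that distinguishes the two national conventions described above.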
========================================
[SOURCE: https://en.wikipedia.org/wiki/Jean_Baudrillard] | [TOKENS: 8208]
Contents Jean Baudrillard Jean Baudrillard (UK: /ˈboʊdrɪjɑːr/, US: /ˌboʊdriˈɑːr/; French: [ʒɑ̃ bodʁijaʁ]; 27 July 1929 – 6 March 2007) was a French sociologist and philosopher with an interest in cultural studies. He is best known for his analyses of media, contemporary culture, and technological communication, as well as his formulation of concepts such as hyperreality. Baudrillard wrote about diverse subjects, including consumerism, critique of economy, social history, aesthetics, Western foreign policy, and popular culture. Among his best-known works are Forget Foucault (1977), Seduction (1978), Simulacra and Simulation (1981), America (1986), and The Gulf War Did Not Take Place (1991). His work is frequently associated with postmodernism and specifically post-structuralism. Nevertheless, Baudrillard opposed post-structuralism and distanced himself from postmodernism. Biography Baudrillard was born in Reims, northeastern France, on 27 July 1929. His grandparents were farm workers and his father a gendarme. During high school (at the Lycée at Reims), he became aware of 'pataphysics, a parody of the philosophy of science, via the philosophy professor Emmanuel Peillet (1914–1973), an encounter said to be crucial for understanding Baudrillard's later thought.: 317 He became the first of his family to attend university when he moved to Paris to attend the Sorbonne. There he studied German language and literature, which led him to teach the subject at several different lycées, both Parisian and provincial, from 1960 until 1966.: 317 While teaching, Baudrillard began to publish reviews of literature and translated the works of such authors as Peter Weiss, Bertolt Brecht, Karl Marx, Friedrich Engels, and Wilhelm Emil Mühlmann.: 317–328 While teaching German, Baudrillard began to transfer to sociology, eventually completing his doctoral thesis Le Système des Objets (The System of Objects) at the University of Paris in 1966, under a dissertation committee of Henri Lefebvre, Roland Barthes, and Pierre Bourdieu (the thesis was published as a book in 1968). Subsequently, he began teaching sociology at Paris X Nanterre, a university campus just outside Paris which would become heavily involved in the uprising of May 1968.: 2(Introduction) During this time, Baudrillard worked closely with the philosopher Humphrey De Battenburge, who described Baudrillard as a "visionary". At Nanterre he took up a position as Maître Assistant (Assistant Professor), then Maître de Conférences (Associate Professor), eventually becoming a professor after completing his habilitation, L'Autre par lui-même (The Other by Himself). In 1970, Baudrillard made the first of his many trips to the United States (Aspen, Colorado), and in 1973, the first of several trips to Kyoto, Japan. He was given his first camera in 1981 in Japan, which led to his becoming a photographer.: 317–328 In 1986, he moved to IRIS (Institut de Recherche et d'Information Socio-Économique) at the Université de Paris-IX Dauphine, where he spent the latter part of his teaching career. During this time he began to move away from sociology as a discipline (particularly in its "classical" form), and, after ceasing to teach full-time, he rarely identified himself with any particular discipline, although he remained linked to academia. During the 1980s and 1990s his books gained a wide audience, and in his last years he became, to an extent, an intellectual celebrity, being published often in the French- and English-speaking popular press. 
He nonetheless continued supporting the Institut de Recherche sur l'Innovation Sociale at the Centre National de la Recherche Scientifique and was Satrap at the Collège de 'Pataphysique. Baudrillard taught at the European Graduate School in Saas-Fee, Switzerland, and collaborated at the Canadian theory, culture, and technology review CTheory, where he was abundantly cited. He also purportedly participated in the International Journal of Baudrillard Studies (as of 2022 hosted on Bishop's University's domain) from its inception in 2004 until his death. In 1999–2000, his photographs were exhibited at the Maison européenne de la photographie in Paris.: 319 In 2004, Baudrillard attended the major conference on his work, "Baudrillard and the Arts", at the Center for Art and Media Karlsruhe in Karlsruhe, Germany.: 317–328 Baudrillard enjoyed baroque music; a favorite composer was Claudio Monteverdi. He also favored rock music such as The Velvet Underground & Nico. Baudrillard did his writing on "his old typewriter, never at the computer". He stated that a computer is not "merely a handier and more complex kind of typewriter", and that with a typewriter he had a "physical relation to writing". Baudrillard was married twice. He and his first wife Lucile Baudrillard had two children, Gilles and Anne. Not much is known about their relationship or why they separated. In 1970, while working as a professor at the University of Paris-Nanterre, the 41-year-old Baudrillard met the 25-year-old Marine Dupuis, who had just come back from a sailing trip around the world with her then-boyfriend. In 1994, more than 20 years later, the two married. Marine went on to become a journalist and media artistic director. Diagnosed with cancer in 2005, Baudrillard battled the disease for two years from his apartment on Rue Sainte-Beuve, Paris, dying at the age of 77. Marine Baudrillard curates Cool Memories, an association of Jean Baudrillard's friends. Key concepts Baudrillard's published work emerged as part of a generation of French thinkers, including Gilles Deleuze, Jean-François Lyotard, Michel Foucault, Jacques Derrida, and Jacques Lacan, who all shared an interest in semiotics, and he is often seen as a part of the post-structuralist philosophical school. Writing in 2015, James M. Russell stated that "In common with many post-structuralists, his arguments consistently draw upon the notion that signification and meaning are both only understandable in terms of how particular words or 'signs' interrelate".: 283 Baudrillard thought, as do many post-structuralists, that meaning is brought about through systems of signs working together. Following on from the structuralist linguist Ferdinand de Saussure, Baudrillard argued that meaning (value) is created through difference—through what something is not (so "dog" means "dog" because it is not-"cat", not-"goat", not-"tree", etc.). In fact, he viewed meaning as near enough self-referential: objects, images of objects, words and signs are situated in a web of meaning; one object's meaning is only understandable through its relation to the system of other objects; for instance, one thing's prestige relates to another's mundanity. From this starting point Baudrillard theorized broadly about human society based upon this kind of self-referentiality. His writing portrays societies always searching for a sense of meaning—or a "total" understanding of the world—that remains consistently elusive. 
In contrast to post-structuralists such as Michel Foucault, for whom the formations of knowledge emerge only as the result of relations of power, Baudrillard developed theories in which the excessive, fruitless search for total knowledge leads almost inevitably to a kind of delusion. In Baudrillard's view, the (human) subject may try to understand the (non-human) object, but because the object can only be understood according to what it signifies (and because the process of signification immediately involves a web of other signs from which it is distinguished) this never produces the desired results. The subject is, rather, seduced (in the original Latin sense: seducere, 'to lead away') by the object. He argued therefore that, in the final analysis, a complete understanding of the minutiae of human life is impossible, and when people are seduced into thinking otherwise they become drawn toward a "simulated" version of reality, or, to use one of his neologisms, a state of "hyperreality". This is not to say that the world becomes unreal, but rather that the faster and more comprehensively societies begin to bring reality together into one supposedly coherent picture, the more insecure and unstable it looks and the more fearful societies become. Reality, in this sense, "dies out." Russell states that Baudrillard argues that "in our present 'global' society, technological communication has created an excessive proliferation of meaning. Because of this, meaning's self-referentiality has prompted, not a 'global village,' but a world where meaning has been obliterated".: 283 Accordingly, Baudrillard argued that the excess of signs and of meaning in late 20th-century "global" society had caused (quite paradoxically) an effacement of reality. In this world neither liberal nor Marxist utopias are any longer believed in. We live, he argued, not in a "global village", to use Marshall McLuhan's phrase, but rather in a world that is ever more easily petrified by even the smallest event. Because the "global" world operates at the level of the exchange of signs and commodities, it becomes ever more blind to symbolic acts such as, for example, terrorism. In Baudrillard's work the symbolic realm (on which he develops a perspective through the anthropological work of Marcel Mauss and Georges Bataille) is seen as quite distinct from that of signs and signification. Signs can be exchanged like commodities; symbols, on the other hand, operate quite differently: they are exchanged, like gifts, sometimes violently as a form of potlatch. Baudrillard, particularly in his later work, saw the "global" society as lacking this "symbolic" element, and therefore as symbolically (if not militarily) defenseless against acts such as the Rushdie fatwa or, indeed, the September 11 terrorist attacks against the United States and its military and economic establishment. In his early books, such as The System of Objects, For a Critique of the Political Economy of the Sign, and The Consumer Society, Baudrillard's main focus is upon consumerism, and how different objects are consumed in different ways. At this time Baudrillard's political outlook was loosely associated with Marxism (and Situationism), but in these books he differed from Karl Marx in one significant way. For Baudrillard, as for the Situationists, it was consumption rather than production that was the main driver of capitalist society. Baudrillard came to this conclusion by criticising Marx's concept of "use-value". 
Baudrillard thought that both Marx's and Adam Smith's economic thought accepted the idea of genuine needs relating to genuine uses too easily and too simply. Baudrillard argued, drawing from Georges Bataille, that needs are constructed, rather than innate. He stressed that all purchases, because they always signify something socially, have their fetishistic side. Objects always, drawing from Roland Barthes, "say something" about their users. And this was, for him, why consumption was and remains more important than production: because the "ideological genesis of needs" precedes the production of goods to meet those needs.: 63 He wrote that there are four ways of an object obtaining value: its functional value (its instrumental purpose or use); its exchange value (its economic worth); its symbolic value (the value a subject assigns to the object in relation to another subject, as with a gift); and its sign value (its value within a system of objects, as when an object confers prestige or social status). Baudrillard's earlier books were attempts to argue that the first two of these values are not simply associated, but are disrupted by the third and, particularly, the fourth. Later, Baudrillard rejected Marxism totally (The Mirror of Production and Symbolic Exchange and Death). But the focus on the difference between sign value (which relates to commodity exchange) and symbolic value (which relates to Maussian gift exchange) remained in his work up until his death. Indeed, it came to play a more and more important role, particularly in his writings on world events. As Baudrillard developed his work throughout the 1980s, he moved from economic theory to mediation and mass communication. Although retaining his interest in Saussurean semiotics and the logic of symbolic exchange (as influenced by the anthropologist Marcel Mauss), Baudrillard turned his attention to the work of Marshall McLuhan, developing ideas about how the nature of social relations is determined by the forms of communication that a society employs. In so doing, Baudrillard progressed beyond both Saussure's and Roland Barthes's formal semiology to consider the implications of a historically understood version of structural semiology. According to Kornelije Kvas, "Baudrillard rejects the structuralist principle of the equivalence of different forms of linguistic organization, the binary principle that contains oppositions such as: true-false, real-unreal, center-periphery. He denies any possibility of a (mimetic) duplication of reality; reality mediated through language becomes a game of signs. In his theoretical system all distinctions between the real and the fictional, between a copy and the original, disappear". Simulation, Baudrillard claims, is the current stage of the simulacrum: all is composed of references with no referents, a hyperreality. Baudrillard argues that this is part of a historical progression. In the Renaissance, the dominant simulacrum was in the form of the counterfeit, where people or objects appear to stand for a real referent that does not exist (for instance, royalty, nobility, holiness, etc.). With the Industrial Revolution, the dominant simulacrum becomes the product, which can be propagated on an endless production line. In current times, the dominant simulacrum is the model, which by its nature already stands for endless reproducibility, and is itself already reproduced. Throughout the 1980s and 1990s, one of Baudrillard's most common themes was historicity, or, more specifically, how present-day societies use the notions of progress and modernity in their political choices. 
He argued, much like the political theorist Francis Fukuyama, that history had ended or "vanished" with the spread of globalization; but, unlike Fukuyama, Baudrillard averred that this end should not be understood as the culmination of history's progress, but as the collapse of the very idea of historical progress: "The aim of this world order [...] is, in a sense, the end of history, not on the basis of a democratic fulfillment, as Fukuyama has it, but on the basis of preventive terror, of a counter-terror that puts an end to any possible events." (Baudrillard, The Intelligence of Evil or the Lucidity Pact, New York: Berg Publishing, 2005, translated by Chris Turner.) For Baudrillard, the end of the Cold War did not represent an ideological victory; rather, it signaled the disappearance of utopian visions shared between both the political Right and Left. Giving further evidence of his opposition toward Marxist visions of global communism and liberal visions of global civil society, Baudrillard contended that the ends they hoped for had always been illusions; indeed, as The Illusion of the End argues, he thought the idea of an end itself was nothing more than a misguided dream: The end of history is, alas, also the end of the dustbins of history. There are no longer any dustbins for disposing of old ideologies, old regimes, old values. Where are we going to throw Marxism, which actually invented the dustbins of history? (Yet there is some justice here since the very people who invented them have fallen in.) Conclusion: if there are no more dustbins of history, this is because History itself has become a dustbin. It has become its own dustbin, just as the planet itself is becoming its own dustbin.: 263 Within a society subject to and ruled by fast-paced electronic communication and global information networks, the collapse of this façade was always going to be, he thought, inevitable. Employing a quasi-scientific vocabulary that attracted the ire of the physicist Alan Sokal, Baudrillard wrote that the speed society moved at had destabilized the linearity of history: "we have the particle accelerator that has smashed the referential orbit of things once and for all.": 2 Russell stated that this "approach to history demonstrates Baudrillard's affinities with the postmodern philosophy of Jean-François Lyotard", who argued that in the late 20th century there was no longer any room for "metanarratives" (the triumph of a coming communism being one such metanarrative). But, in addition to simply lamenting this collapse of history, Baudrillard also went beyond Lyotard and attempted to analyse how the idea of positive progress was being employed in spite of the notion's declining validity. Baudrillard argued that although genuine belief in a universal endpoint of history, wherein all conflicts would find their resolution, had been deemed redundant, universality was still a notion used in world politics as an excuse for actions. Universal values which, according to him, no one any longer believed to be universal were still rhetorically employed to justify otherwise unjustifiable choices. The means, he wrote, are there even though the ends are no longer believed in, and are employed to hide the present's harsh realities (or, as he would have put it, unrealities). "In the Enlightenment, universalization was viewed as unlimited growth and forward progress. Today, by contrast, universalization is expressed as a forward escape." 
This involves the notion of "escape velocity", as outlined in The Illusion of the End: the postmodern mind and critical view cannot, by definition, ever truly break free from the all-encompassing "self-referential" sphere of discourse. Political commentary Baudrillard reacted to the West's indifference to the Bosnian War in writings, mostly in essays in his column for Libération. More specifically, he expressed his view on Europe's unwillingness to respond to "aggression and genocide in Bosnia", in which "New Europe" revealed itself to be a "sham". He criticized the Western media and intellectuals for their passivity and for taking the role of bystanders, engaging in ineffective, hypocritical and self-serving action, and the public for its inability to distinguish simulacra from real-world happenings, in which real death and destruction in Bosnia seemed unreal. He was determined in his columns to openly name the perpetrators, the Serbs, and to call their actions in Bosnia aggression and genocide. Baudrillard heavily criticized Susan Sontag for directing a production of Waiting for Godot in war-torn Sarajevo during the siege. Baudrillard's provocative 1991 book, The Gulf War Did Not Take Place, raised his public profile as an academic and political commentator. He argued that the first Gulf War was the inverse of the Clausewitzian formula: not "the continuation of politics by other means", but "the continuation of the absence of politics by other means". Accordingly, Saddam Hussein was not fighting the Coalition, but using the lives of his soldiers as a form of sacrifice to preserve his power.: 72 The Coalition fighting the Iraqi military was merely dropping 10,000 tonnes of bombs daily, as if proving to themselves that there was an enemy to fight.: 61 So, too, were the Western media complicit, presenting the war in real time and recycling images of war to propagate the notion that the U.S.-led Coalition and the Iraqi government were actually fighting; but, he argued, such was not the case. Saddam Hussein did not use his military capacity (the Iraqi Air Force). His power was not weakened, as evinced by his easy suppression of the 1991 internal uprisings that followed. Overall, little had changed. Saddam remained undefeated, the "victors" were not victorious, and thus there was no war—i.e., the Gulf War did not occur. The book was originally a series of articles in the British newspaper The Guardian and the French newspaper Libération, published in three parts: "The Gulf War Will Not Take Place", published during the American military and rhetorical buildup; "The Gulf War Is Not Taking Place", published during military action; and "The Gulf War Did Not Take Place", published afterwards. Some critics, like Christopher Norris, accused Baudrillard of instant revisionism: a denial of the physical action of the conflict (which was related to his denial of reality in general). Consequently, Baudrillard was accused of lazy amoralism, cynical scepticism, and Berkeleyan subjective idealism. Sympathetic commentators such as William Merrin, in his book Baudrillard and the Media, have argued that Baudrillard was more concerned with the West's technological and political dominance and the globalization of its commercial interests, and what that means for the present possibility of war. 
Merrin argued that Baudrillard was not denying that something had happened, but merely questioning whether that something was in fact war or a bilateral "atrocity masquerading as a war". Merrin viewed the accusations of amorality as redundant and based on a misreading. In Baudrillard's own words:: 71–2 Saddam liquidates the communists, Moscow flirts even more with him; he gases the Kurds, it is not held against him; he eliminates the religious cadres, the whole of Islam makes peace with him. […] Even […] the 100,000 dead will only have been the final decoy that Saddam will have sacrificed, the blood money paid in forfeit according to a calculated equivalence [...] to preserve his power. What is worse is that these dead still serve as an alibi for those who do not want to have been excited for nothing: at least these dead will prove this war was indeed a war and not shameful and pointless. In his essay "The Spirit of Terrorism", Baudrillard characterises the terrorist attacks of 11 September 2001 on the World Trade Center in New York City as the "absolute event". Baudrillard contrasts the "absolute event" of 11 September 2001 with "global events" such as the death of Diana, Princess of Wales, and the World Cup. The essay culminates in Baudrillard regarding the U.S.-led Gulf War as a "non-event", or an "event that did not happen". Seeking to understand the attacks as a reaction to the technological and political expansion of capitalist globalization, rather than as religiously based or civilization-based warfare, he described the absolute event and its consequences as follows: This is not a clash of civilisations or religions, and it reaches far beyond Islam and America, on which efforts are being made to focus the conflict to create the delusion of a visible confrontation and a solution based upon force. There is indeed a fundamental antagonism here, but one that points past the spectre of America (which is perhaps the epicentre, but in no sense the sole embodiment, of globalisation) and the spectre of Islam (which is not the embodiment of terrorism either) to triumphant globalisation battling against itself. In accordance with his theory of society, Baudrillard portrayed the attacks as a symbolic reaction to the inexorable rise of a world based on commodity exchange. Baudrillard's stance on the 11 September 2001 attacks was criticised on two counts. First, Richard Wolin (in The Seduction of Unreason) forcefully accused Baudrillard and Slavoj Žižek of all but celebrating the terrorist attacks, essentially claiming that the United States received what it deserved. Žižek, however, countered that accusation in the journal Critical Inquiry, characterizing Wolin's analysis as a form of intellectual barbarism and saying that Wolin failed to see the difference between fantasising about an event and stating that one is deserving of that event. Merrin (in Baudrillard and the Media) argued that Baudrillard's position affords the terrorists a type of moral superiority. In the journal Economy and Society, Merrin further noted that Baudrillard gives the symbolic facets of society unfair privilege over semiotic concerns. Second, authors questioned whether the attacks were unavoidable. Bruno Latour, in Critical Inquiry, argued that Baudrillard believed that their destruction was forced by the society that created them, alluding to the notion that the Towers were "brought down by their own weight". 
In Latour's view, this was because Baudrillard conceived of society only in terms of a symbolic and semiotic dualism. On 19 February 2003, with the 2003 invasion of Iraq impending, René Major moderated a debate entitled "Pourquoi la guerre aujourd'hui?" between Baudrillard and Jacques Derrida, co-hosted by Major's Institute for Advanced Studies in Psychoanalysis and Le Monde diplomatique. The debate discussed the relation between the terrorist attacks and the invasion: Baudrillard situated 9/11 as the primary motivating force behind the Iraq War, whereas Derrida argued that the Iraq War had been planned long before 9/11 and that 9/11 played a secondary role. During 2005, Baudrillard wrote three short pieces and gave a brief magazine interview, all treating similar ideas; following his death in 2007, the four pieces were collected and published posthumously as The Agony of Power, a polemic against power itself. The first piece, "From Domination to Hegemony", contrasts two modes of power: domination stands for historical, traditional power relations, while hegemony stands for modern, more sophisticated power relations as realized by states and businesses. Baudrillard decried the "cynicism" with which contemporary businesses openly state their business models. For example, he cited Patrick Le Lay, an executive of the French television channel TF1, who stated that his business's job was "to help Coca-Cola sell its products.": 37 Baudrillard lamented that such honesty pre-empted and thus robbed the Left of its traditional role of critiquing governments and businesses: "In fact, Le Lay takes away the only power we had left. He steals our denunciation.": 38–9 Consequently, Baudrillard stated that "power itself must be abolished—and not solely in the refusal to be dominated [...] but also, just as violently, in the refusal to dominate.": 47 The latter pieces included further analysis of the 11 September terrorist attacks, using the metaphor of the Native American potlatch to describe both American and Muslim societies, specifically the American state versus the hijackers. In the pieces' context, "potlatch" referred not to the gift-giving aspect of the ritual, but rather to its wealth-destroying aspect: "The terrorists' potlatch against the West is their own death. Our potlatch is indignity, immodesty, obscenity, degradation and abjection.": 67 This criticism of the West carried notes of Baudrillard's simulacrum, the above cynicism of business, and the contrast between Muslim and Western societies:: 67–8 We [the West] throw this indifference and abjection at others like a challenge: the challenge to defile themselves in return, to deny their values, to strip naked, confess, admit—to respond to a nihilism equal to our own. Reception Jean-François Lyotard's 1974 Économie libidinale criticised Baudrillard's work. Lotringer notes that Gilles Deleuze, "otherwise known for his generosity", "made it known around Paris" that he saw Baudrillard as "the shame of the profession", in response to Baudrillard's study of Foucault's works.: 20 Sontag, responding to Baudrillard's comments on her reactions to the Bosnian war, described him as "ignorant and cynical" and "a political idiot". James M. Russell in 2015 wrote that "The most severe" of Baudrillard's "critics accuse him of being a purveyor of a form of reality-denying irrationalism".: 285–286 One of Baudrillard's editors, critical theory professor Mark Poster, remarked: Baudrillard's writing up to the mid-1980s is open to several criticisms.
He fails to define key terms, such as the code; his writing style is hyperbolic and declarative, often lacking sustained, systematic analysis when it is appropriate; he totalizes his insights, refusing to qualify or delimit his claims. He writes about particular experiences, television images, as if nothing else in society mattered, extrapolating a bleak view of the world from that limited base. He ignores contradictory evidence such as the many benefits afforded by the new media. But Poster still argued for Baudrillard's contemporary relevance, and he attempted to refute the most extreme of Baudrillard's critics: Baudrillard is not disputing the trivial issue that reason remains operative in some actions, that if I want to arrive at the next block, for example, I can assume a Newtonian universe (common sense), plan a course of action (to walk straight for X meters), carry out the action, and finally fulfill my goal by arriving at the point in question. What is in doubt is that this sort of thinking enables a historically informed grasp of the present in general. According to Baudrillard, it does not. The concurrent spread of the hyperreal through the media and the collapse of liberal and Marxist politics as the master narratives, deprives the rational subject of its privileged access to truth. In an important sense individuals are no longer citizens, eager to maximise their civil rights, nor proletarians, anticipating the onset of communism. They are rather consumers, and hence the prey of objects as defined by the code. Christopher Norris's Uncritical Theory: Postmodernism, Intellectuals and the Gulf War, to Russell, "seeks to reject his media theory and position on 'the real' out of hand".: 285 Frankfurt School critical theorist Douglas Kellner's Jean Baudrillard: From Marxism to Postmodernism and Beyond instead seeks to analyse Baudrillard's relation to postmodernism (a concept with which Baudrillard had a continued, if uneasy and rarely explicit, relationship) and to present a Marxist counter. Regarding the former, William Merrin (discussed above) published more than one denunciation of Norris's position; the latter reading Baudrillard himself characterised as reductive. Kellner stated that "it is difficult to decide whether Baudrillard is best read as science fiction and pataphysics, or as philosophy, social theory, and cultural metaphysics, and whether his post-1970s work should be read under the sign of truth or fiction." To Kellner, Baudrillard during and after the 1970s "falls prey to a technological determinism and semiological idealism which posits an autonomous technology". In 1991, writing for Science Fiction Studies, Vivian Sobchack alleged that "The man [Baudrillard] is really dangerous" for lacking a "moral gaze", while J. G. Ballard (on whose novel Baudrillard had written), in his "Response to an Invitation to Respond", excluded Baudrillard from his criticism of the journal and its endeavour at large. Sara Ahmed in 1996 remarked that what Baudrillard's De la séduction celebrates "is precisely women's status as signs and commodities circulated by and for male spectators and consumers". Kellner described De la séduction as an "affront to feminism". Art critic Adrian Searle in 1998 described Baudrillard's photography as "wistful, elegiac and oddly haunting", like "movie stills of unregarded moments".
One of the most commonly cited critiques of Baudrillard was written in 2013 by the academic writer Andrew Robinson of Ceasefire magazine, who described Baudrillard's work as both sexist and racist, with ableist undertones, stating: "Many of his [Baudrillard's] formulations are inadvertently sexist and racist. There are also times when Baudrillard comes across as ableist in his critiques of the therapeutic." Additionally, Robinson criticised Baudrillard's philosophy as exaggerated. Although Robinson provides a critique of Baudrillard's theory, he also describes its value. Specifically, Robinson states, "Baudrillard's theory also helps to explain why his appropriation by leftists has been strategically unsuccessful." Robinson also describes the value of the simulacra for media critique, especially of the US media. Mark Fisher pointed out that Baudrillard "is condemned, sometimes lionised, as the melancholic observer of a departed reality", asserting that Baudrillard "was certainly melancholic". Poster stated that "As the politics of the sixties receded so did Baudrillard's radicalism: from a position of firm leftism he gradually moved to one of bleak fatalism", a view Félix Guattari echoed. Richard G. Smith, David B. Clarke and Marcus A. Doel instead consider Baudrillard "an extreme optimist". In an exchange between critical theorist McKenzie Wark and EGS professor Geert Lovink, Wark remarked of Baudrillard that "Everything he wrote was marked by a radical sadness and yet invariably expressed in the happiest of forms." Baudrillard himself stated "we have to fight against charges of unreality, lack of responsibility, nihilism, and despair". In Chris Turner's English translation of Baudrillard's Cool Memories: 1980–1985, Baudrillard writes, "I accuse myself of [...] being profoundly carnal and melancholy [...] AMEN [sic]".: 38 David Macey saw "extraordinary arrogance" in Baudrillard's take on Foucault.: 22 Sontag found Baudrillard "condescending". Russell wrote that "Baudrillard's writing, and his uncompromising – even arrogant – stance, have led to fierce criticism which in contemporary social scholarship can only be compared to the criticism received by Jacques Lacan.": 285 Influence and legacy The Native American Anishinaabe writer Gerald Vizenor has made extensive use of Baudrillard's concepts of simulation in his critical work. American artist Joey Skaggs has been noted for creating media hoaxes that exemplify Baudrillard's concept of hyperreality. By orchestrating fictitious events, such as the Cathouse for Dogs and Portofess, which were reported as real by major news outlets, Skaggs constructs simulations that supplant actual truths, thereby exposing the media's role in manufacturing reality. The Wachowskis said that Baudrillard influenced The Matrix (1999), in which Neo hides money and disks of information inside a hollowed-out copy of Simulacra and Simulation. Adam Gopnik wondered whether Baudrillard, who had not embraced the movie, was "thinking of suing for a screen credit," but Baudrillard himself disclaimed any connection to The Matrix, calling it at best a misreading of his ideas. Some reviewers have noted that Charlie Kaufman's film Synecdoche, New York seems inspired by Baudrillard's Simulacra and Simulation. The album Why Hasn't Everything Already Disappeared? by rock band Deerhunter was influenced by Baudrillard's essay of the same name.
Cody Wilson, the developer of the first 3D-printed gun, credits the work of Baudrillard as his theoretical inspiration, and claims him as his "master." Bibliography See also Notes a. even Susan Sontag [...] came to stage Waiting for Godot in Sarajevo. [...] the worst part [...] [is] the condescending attitude and the misconception regarding where strength and weakness lie. They are the strong ones. It is we who are weak, going over there searching for something to compensate for our weakness and loss of reality. [...] In her opinion pieces, Susan Sontag confesses that the Bosnians do not really believe in the distress all around them [...] find the whole situation unreal, senseless, unintelligible. It is [...] an almost hyperreal hell [partly due to] media and humanitarian harassment [...] But Susan Sontag, who is from New York, must know better than they do what reality is because she has chosen them to embody it. [...] And Susan Sontag comes to convince them [...] of the 'reality' of their suffering, by culturalizing it, of course, by theatricalizing it so that it can serve as a point of reference in the theatre of Western values, one of which is solidarity. Yet Susan Sontag herself is not the issue. She is merely fashionably emblematic of what has now become a widespread situation, in which harmless, powerless intellectuals trade their woes with the wretched [...] Not so long ago, we saw Bourdieu and the Abbé Pierre offering themselves up in televisual sacrifice, trading off between them the pathos-laden language and the sociological metalanguage of misery. [...] b. Susan Sontag [...] came to have "Waiting for Godot" played in Sarajevo [...] [...] the worse [sic] [...] is about the condescending manner in making out what is strength & [sic] what is weakness. They are strong. It is us who are weak and who go there to make good for our loss of strength and sense of reality. [...] Susan Sontag herself confesses in her diaries that the Bosnians do not really believe in the suffering which surrounds them [...] finding the whole situation unreal, senseless, and unexplainable. It is [...] hell of [...] a hyperreal kind, made even more hyperreal by the harassment of the media and the humanitarian agencies [...] But then Susan Sontag, hailing herself from New York, must know better than them what reality is, since she has chosen them to incarnate it [...] Susan Sontag comes to convince them of the "reality" of their suffering, by making something cultural and something theatrical out of it, so that it can be useful as a referent within the theatre of western values, including "solidarity". But Susan Sontag herself is not the issue. She is merely a societal instance of [...] the general situation whereby toothless intellectuals swap their distress with the misery of the poor [...] Thus, not so long ago, one could witness Bourdieu and Abbé Pierre offering themselves as televisual slaughtering lambs trading with each other pathetic language and sociological garble about poverty. References External links
========================================
[SOURCE: https://en.wikipedia.org/wiki/XAI_(company)#cite_note-49] | [TOKENS: 1856]
Contents xAI (company) X.AI Corp., doing business as xAI, is an American artificial intelligence (AI), social media and technology company that is a wholly owned subsidiary of the American aerospace company SpaceX. Founded by Elon Musk in 2023, the company develops the generative AI chatbot Grok and operates the social media platform X (formerly Twitter), which it acquired in March 2025. History xAI was founded on March 9, 2023, by Musk. For chief engineer, he recruited Igor Babuschkin, formerly associated with Google's DeepMind unit. Musk officially announced the formation of xAI on July 12, 2023. As of July 2023, xAI was headquartered in the San Francisco Bay Area. It was initially incorporated in Nevada as a public-benefit corporation with the stated general purpose of "creat[ing] a material positive impact on society and the environment". By May 2024, it had dropped the public-benefit status. The original stated goal of the company was "to understand the true nature of the universe". In November 2023, Musk stated that "X Corp investors will own 25% of xAI". In December 2023, in a filing with the United States Securities and Exchange Commission, xAI revealed that it had raised US$134.7 million in outside funding towards a planned total of up to $1 billion. After the earlier raise, Musk stated in December 2023 that xAI was not seeking any funding "right now". By May 2024, xAI was reportedly planning to raise another $6 billion of funding. Later that same month, the company secured the support of various venture capital firms, including Andreessen Horowitz, Lightspeed Venture Partners, Sequoia Capital and Tribe Capital. As of August 2024, Musk was diverting a large number of Nvidia chips that had been ordered by Tesla, Inc. to X and xAI. On December 23, 2024, xAI raised an additional $6 billion in a private funding round supported by Fidelity, BlackRock and Sequoia Capital, among others, making its total funding to date over $12 billion. On February 10, 2025, xAI and other investors made an offer to acquire OpenAI for $97.4 billion. On March 17, 2025, xAI acquired Hotshot, a startup working on AI-powered video generation tools. On March 28, 2025, Musk announced that xAI had acquired its sister company X Corp., the developer of the social media platform X (formerly known as Twitter), which Musk had previously acquired in October 2022. The deal, an all-stock transaction, valued X at $33 billion, with a full valuation of $45 billion when factoring in $12 billion in debt. Meanwhile, xAI itself was valued at $80 billion. Both companies were combined into a single entity called X.AI Holdings Corp. On July 1, 2025, Morgan Stanley announced that it had raised $5 billion in debt for xAI and that xAI had separately raised $5 billion in equity. The debt consists of secured notes and term loans; Morgan Stanley took no stake in it. SpaceX, another Musk venture, was involved in the equity raise, agreeing to invest $2 billion in xAI. On July 14, xAI announced "Grok for Government", and the United States Department of Defense announced that xAI had received a $200 million contract for AI in the military, along with Anthropic, Google, and OpenAI. On September 12, xAI laid off 500 data annotation workers. The division, previously the company's largest, had played a central role in training Grok. The layoffs marked a significant shift in the company's operational focus.
In June 2024, the Greater Memphis Chamber announced that xAI was planning to build Colossus, the world's largest supercomputer, in Memphis, Tennessee. After 122 days of construction, the supercomputer went fully operational in December 2024. Local government in Memphis voiced concerns regarding the increased usage of electricity (150 megawatts of power at peak), and while the agreement with the city was being worked out, the company deployed 14 VoltaGrid portable methane-gas-powered generators to temporarily enhance the power supply. Environmental advocates said that the gas-burning turbines emit large quantities of air-polluting gases, and that xAI had been operating the turbines illegally without the necessary permits. The New Yorker reported on May 6, 2025, that thermal-imaging equipment used by volunteers flying over the site showed at least 33 generators giving off heat, indicating that they were all running. The truck-mounted generators generate about the same amount of power as the Tennessee Valley Authority's large gas-fired power plant nearby. The Southern Environmental Law Center has stated that the gas turbines produce about 2,000 tons of nitrogen oxide emissions annually. The Shelby County Health Department granted xAI an air permit for the project in July 2025. On November 26, 2025, Elon Musk announced plans to build a solar farm near Colossus with an estimated output of 30 megawatts of electricity, about 10% of the data center's estimated power use (implying a total demand of roughly 300 megawatts). xAI has continually expanded its infrastructure, purchasing a third building on December 30, 2025, to boost its training capacity to nearly 2 gigawatts of compute power; the expansion reflects xAI's commitment to competing with OpenAI's ChatGPT and Anthropic's Claude models. Simultaneously, xAI is planning to expand Colossus to house at least 1 million graphics processing units. On February 2, 2026, SpaceX acquired xAI in an all-stock transaction that made xAI a wholly owned subsidiary of SpaceX. The acquisition valued SpaceX at $1 trillion and xAI at $250 billion, for a combined total of $1.25 trillion. On February 11, 2026, xAI was restructured following the SpaceX acquisition, leading to some layoffs. The restructuring organised xAI into four primary development teams: one for the Grok app and others for features such as Grok Imagine, with Grokipedia, X and API features falling under smaller teams. Products In July 2023, Musk said that a politically correct AI would be "incredibly dangerous" and misleading, citing as an example the fictional HAL 9000 from the 1968 film 2001: A Space Odyssey. Musk instead said that xAI would be "maximally truth-seeking", and that he intended xAI to be better at mathematical reasoning than existing models. On November 4, 2023, xAI unveiled Grok, an AI chatbot integrated with X. xAI stated that once the bot was out of beta, it would only be available to X's Premium+ subscribers. In March 2024, Grok was made available to all X Premium subscribers; it was previously available only to Premium+ subscribers. On March 17, 2024, xAI released Grok-1 as open source. On March 29, 2024, Grok-1.5 was announced, with "improved reasoning capabilities" and a context length of 128,000 tokens. On April 12, 2024, Grok-1.5 Vision (Grok-1.5V) was announced. On August 14, 2024, Grok-2 was made available to X Premium subscribers. It is the first Grok model with image generation capabilities.
On October 21, 2024, xAI released an application programming interface (API). On December 9, 2024, xAI released a text-to-image model named Aurora. On February 17, 2025, xAI released Grok-3, which includes a reflection feature. xAI also introduced a web-search function called DeepSearch. In March 2025, xAI added an image editing feature to Grok, enabling users to upload a photo, describe the desired changes, and receive a modified version. Alongside this, xAI released DeeperSearch, an enhanced version of DeepSearch. On July 9, 2025, xAI unveiled Grok-4. A high-performance version of the model called Grok Heavy was also unveiled, with access at the time costing $300 per month. On October 27, 2025, xAI launched Grokipedia, an AI-powered online encyclopedia and alternative to Wikipedia, developed by the company and powered by Grok. Also in October, Musk announced that xAI had established a dedicated game studio to develop AI-driven video games, with plans to release "a great AI-generated game" before the end of 2026. Valuation See also Notes References External links
========================================
[SOURCE: https://en.wikipedia.org/wiki/PlayStation_(console)#cite_note-pschronicles-35] | [TOKENS: 10728]
Contents PlayStation (console) The PlayStation (codenamed PSX, abbreviated as PS, and retroactively known as the PS1 or PS one) is a home video game console developed and marketed by Sony Computer Entertainment. It was released in Japan on 3 December 1994, followed by North America on 9 September 1995 and Europe on 29 September 1995, with other regions following thereafter. As a fifth-generation console, the PlayStation primarily competed with the Nintendo 64 and the Sega Saturn. Sony began developing the PlayStation after a failed venture with Nintendo to create a CD-ROM peripheral for the Super Nintendo Entertainment System in the early 1990s. The console was primarily designed by Ken Kutaragi and Sony Computer Entertainment in Japan, while additional development was outsourced to the United Kingdom. An emphasis on 3D polygon graphics was placed at the forefront of the console's design. PlayStation game production was designed to be streamlined and inclusive, enticing the support of many third-party developers. The console proved popular for its extensive game library, popular franchises, low retail price, and aggressive youth marketing which advertised it as the preferable console for adolescents and adults. Critically acclaimed games that defined the console include Gran Turismo, Crash Bandicoot, Spyro the Dragon, Tomb Raider, Resident Evil, Metal Gear Solid, Tekken 3, and Final Fantasy VII. Sony ceased production of the PlayStation on 23 March 2006, over eleven years after it had been released and in the same year the PlayStation 3 debuted. More than 4,000 PlayStation games were released, with cumulative software sales of 962 million units. The PlayStation signaled Sony's rise to power in the video game industry. It received acclaim and sold strongly; in less than a decade, it became the first computer entertainment platform to ship over 100 million units. Its use of compact discs heralded the game industry's transition from cartridges. The PlayStation's success led to a line of successors, beginning with the PlayStation 2 in 2000. In the same year, Sony released a smaller and cheaper model, the PS one. History The PlayStation was conceived by Ken Kutaragi, a Sony executive who managed a hardware engineering division and was later dubbed "the Father of the PlayStation". Kutaragi's interest in working with video games stemmed from seeing his daughter play games on Nintendo's Famicom. Kutaragi convinced Nintendo to use his SPC-700 sound processor in the Super Nintendo Entertainment System (SNES) through a demonstration of the processor's capabilities. His willingness to work with Nintendo derived from both his admiration of the Famicom and his conviction that video game consoles would become the main home entertainment systems. Although Kutaragi was nearly fired because he had worked with Nintendo without Sony's knowledge, president Norio Ohga recognised the potential in Kutaragi's chip and decided to keep him as a protégé. The inception of the PlayStation dates back to a 1988 joint venture between Nintendo and Sony. Nintendo had produced floppy disk technology to complement cartridges in the form of the Family Computer Disk System, and wanted to continue this complementary storage strategy for the SNES. Since Sony was already contracted to produce the SPC-700 sound processor for the SNES, Nintendo contracted Sony to develop a CD-ROM add-on, tentatively titled the "Play Station" or "SNES-CD".
The PlayStation name had already been trademarked by Yamaha, but Nobuyuki Idei liked it so much that he agreed to acquire it for an undisclosed sum rather than search for an alternative. Sony was keen to obtain a foothold in the rapidly expanding video game market. Having been the primary manufacturer of the MSX home computer format, Sony wanted to use its experience in consumer electronics to produce its own video game hardware. Although the initial agreement between Nintendo and Sony was about producing a CD-ROM drive add-on, Sony had also planned to develop a SNES-compatible, Sony-branded console. This iteration was intended to be more of a home entertainment system, playing both SNES cartridges and a new CD format named the "Super Disc", which Sony would design. Under the agreement, Sony would retain sole international rights to every Super Disc game, giving it a large degree of control despite Nintendo's leading position in the video game market. Furthermore, Sony would also be the sole beneficiary of licensing related to music and film software, which it had been aggressively pursuing as a secondary application. The Play Station was to be announced at the 1991 Consumer Electronics Show (CES) in Las Vegas. However, Nintendo president Hiroshi Yamauchi was wary of Sony's increasing leverage at this point and deemed the original 1988 contract unacceptable upon realising it essentially handed Sony control over all games written on the SNES CD-ROM format. Although Nintendo was dominant in the video game market, Sony possessed a superior research and development department. Wanting to protect Nintendo's existing licensing structure, Yamauchi cancelled all plans for the joint Nintendo–Sony SNES CD attachment without telling Sony. He sent Nintendo of America president Minoru Arakawa (his son-in-law) and chairman Howard Lincoln to Amsterdam to form a more favourable contract with the Dutch conglomerate Philips, Sony's rival. This contract would give Nintendo total control over its licences on all Philips-produced machines. Kutaragi and Nobuyuki Idei, Sony's director of public relations at the time, learned of Nintendo's actions two days before the CES was due to begin. Kutaragi telephoned numerous contacts, including Philips, to no avail. On the first day of the CES, Sony announced their partnership with Nintendo and their new console, the Play Station. At 9 am the next day, in what has been called "the greatest ever betrayal" in the industry, Howard Lincoln stepped onto the stage and revealed that Nintendo was now allied with Philips and would abandon its work with Sony. Incensed by Nintendo's renouncement, Ohga and Kutaragi decided that Sony would develop its own console. Nintendo's contract-breaking was met with consternation in the Japanese business community, as it had broken an "unwritten law" against native companies turning against each other in favour of foreign ones. Sony's American branch considered allying with Sega to produce a CD-ROM-based machine called the Sega Multimedia Entertainment System, but the Sega board of directors in Tokyo vetoed the idea when Sega of America CEO Tom Kalinske presented them with the proposal. Kalinske recalled them saying: "That's a stupid idea, Sony doesn't know how to make hardware. They don't know how to make software either. Why would we want to do this?" Sony halted its research but decided to turn what it had developed with Nintendo and Sega into a console based on the SNES.
Despite the tumultuous events at the 1991 CES, negotiations between Nintendo and Sony were still ongoing. A deal was proposed: the Play Station would still have a port for SNES games, on the condition that it would still use Kutaragi's audio chip and that Nintendo would own the rights and receive the bulk of the profits. Roughly two hundred prototype machines were created, and some software entered development. Many within Sony were still opposed to involvement in the video game industry, with some resenting Kutaragi for jeopardising the company. Kutaragi remained adamant that Sony not retreat from the growing industry and that a deal with Nintendo would never work. Knowing that it had to take decisive action, Sony severed all ties with Nintendo on 4 May 1992. To determine the fate of the PlayStation project, Ohga chaired a meeting in June 1992, consisting of Kutaragi and several senior Sony board members. Kutaragi unveiled a proprietary CD-ROM-based system he had been secretly working on, which played games with immersive 3D graphics. Kutaragi was confident that his LSI chip could accommodate one million logic gates, which exceeded the capabilities of Sony's semiconductor division at the time. Despite gaining Ohga's enthusiasm, the project faced opposition from a majority of those present at the meeting, as well as from older Sony executives who saw Nintendo and Sega as "toy" manufacturers. The opposers felt the game industry was too culturally offbeat and asserted that Sony should remain a central player in the audiovisual industry, where companies were familiar with one another and could conduct "civili[s]ed" business negotiations. After Kutaragi reminded him of the humiliation he had suffered from Nintendo, Ohga retained the project and became one of Kutaragi's staunchest supporters. Ohga shifted Kutaragi and nine of his team from Sony's main headquarters to Sony Music Entertainment Japan (SMEJ), a subsidiary of the main Sony group, so as to retain the project while maintaining relationships with Philips for the MMCD development project. The involvement of SMEJ proved crucial to the PlayStation's early development, as the process of manufacturing games on CD-ROM was similar to that used for audio CDs, with which Sony's music division had considerable experience. While at SMEJ, Kutaragi worked with Epic/Sony Records founder Shigeo Maruyama and Akira Sato; both later became vice-presidents of the division that ran the PlayStation business. Sony Computer Entertainment (SCE) was jointly established by Sony and SMEJ to handle the company's ventures into the video game industry. On 27 October 1993, Sony publicly announced that it was entering the game console market with the PlayStation. According to Maruyama, there was uncertainty over whether the console should primarily focus on 2D sprite-based graphics or 3D polygon graphics. After Sony witnessed the success of Sega's Virtua Fighter (1993) in Japanese arcades, the direction of the PlayStation became "instantly clear" and 3D polygon graphics became the console's primary focus. SCE president Teruhisa Tokunaka expressed gratitude for Sega's timely release of Virtua Fighter, as it proved "just at the right time" that games with 3D imagery were possible. Maruyama claimed that Sony further wanted to emphasise the new console's ability to use Red Book audio from the CD-ROM format in its games, alongside high-quality visuals and gameplay.
Wishing to distance the project from the failed enterprise with Nintendo, Sony initially branded the PlayStation the "PlayStation X" (PSX). Sony formed its European and North American divisions, known as Sony Computer Entertainment Europe (SCEE) and Sony Computer Entertainment America (SCEA), in January and May 1995 respectively. The divisions planned to market the new console under the alternative branding "PSX" following negative feedback regarding "PlayStation" in focus group studies. Early advertising prior to the console's launch in North America referenced PSX, but the term was scrapped before launch. In contrast to Nintendo's consoles, the console was not marketed under Sony's name. According to Phil Harrison, much of Sony's upper management feared that the Sony brand would be tarnished if associated with the console, which they considered a "toy". Since Sony had no experience in game development, it had to rely on the support of third-party game developers. This was in contrast to Sega and Nintendo, which had versatile and well-equipped in-house software divisions for their arcade games and could easily port successful games to their home consoles. Recent consoles like the Atari Jaguar and 3DO had suffered low sales due to a lack of developer support, prompting Sony to redouble its efforts to gain the endorsement of arcade-savvy developers. A team from Epic Sony visited more than a hundred companies throughout Japan in May 1993 in hopes of attracting game creators with the PlayStation's technological appeal. Sony found that many disliked Nintendo's practices, such as favouring its own games over others. Through a series of negotiations, Sony acquired initial support from Namco, Konami, and Williams Entertainment, as well as 250 other development teams in Japan alone. Namco in particular was interested in developing for the PlayStation, since Namco rivalled Sega in the arcade market. Attaining these companies secured influential games such as Ridge Racer (1993) and Mortal Kombat 3 (1995). Ridge Racer was one of the most popular arcade games at the time, and by December 1993 it had already been confirmed behind closed doors as the PlayStation's first game, despite Namco being a longstanding Nintendo developer. Namco's research managing director Shigeichi Nakamura met with Kutaragi in 1993 to discuss the preliminary PlayStation specifications, with Namco subsequently basing the Namco System 11 arcade board on PlayStation hardware and developing Tekken to compete with Virtua Fighter. The System 11 launched in arcades several months before the PlayStation's release, with the arcade release of Tekken following in September 1994. Despite securing the support of various Japanese studios, Sony had no developers of its own by the time the PlayStation was in development. This changed in 1993 when Sony acquired the Liverpudlian company Psygnosis (later renamed SCE Liverpool) for US$48 million, securing its first in-house development team. The acquisition meant that Sony could have more launch games ready for the PlayStation's release in Europe and North America. Ian Hetherington, Psygnosis' co-founder, was disappointed after receiving early builds of the PlayStation and recalled that the console "was not fit for purpose" until his team got involved with it. Hetherington frequently clashed with Sony executives over broader ideas; at one point it was suggested that a television with a built-in PlayStation be produced.
In the months leading up to the PlayStation's launch, Psygnosis had around 500 full-time staff working on games and assisting with software development. The purchase of Psygnosis marked another turning point for the PlayStation, as it played a vital role in creating the console's development kits. While Sony had provided MIPS R4000-based Sony NEWS workstations for PlayStation development, Psygnosis employees disliked the thought of developing on these expensive workstations and asked the Bristol-based SN Systems to create an alternative PC-based development system. Andy Beveridge and Martin Day, owners of SN Systems, had previously supplied development hardware for other platforms such as the Mega Drive, Atari ST, and the SNES. When Psygnosis arranged an audience for SN Systems with Sony's Japanese executives at the January 1994 CES in Las Vegas, Beveridge and Day presented their prototype of the condensed development kit, which could run on an ordinary personal computer with two extension boards. Impressed, Sony decided to abandon its plans for a workstation-based development system in favour of SN Systems', thus securing a cheaper and more efficient method for designing software. An order of over 600 systems followed, and SN Systems supplied Sony with additional software such as an assembler, linker, and debugger. SN Systems produced development kits for future PlayStation systems, including the PlayStation 2, and was bought out by Sony in 2005. Sony strived to make game production as streamlined and inclusive as possible, in contrast to the relatively isolated approach of Sega and Nintendo. Phil Harrison, representative director of SCEE, believed that Sony's emphasis on developer assistance reduced the most time-consuming aspects of development. As well as providing programming libraries, SCE headquarters in London, California, and Tokyo housed technical support teams that could work closely with third-party developers if needed. Unlike Nintendo, Sony did not favour its own products over non-Sony ones; Peter Molyneux of Bullfrog Productions admired Sony's open-handed approach to software developers and lauded its decision to use PCs as a development platform, remarking that "[it was] like being released from jail in terms of the freedom you have". Another strategy that helped attract software developers was the PlayStation's use of the CD-ROM format instead of traditional cartridges. Nintendo cartridges were expensive to manufacture, and the company controlled all production, prioritising its own games, while inexpensive compact disc manufacturing was available at dozens of locations around the world. The PlayStation's architecture and interconnectivity with PCs were beneficial to many software developers. The use of the programming language C proved useful, as it safeguarded software compatibility should further hardware revisions be made. Despite this inherent flexibility, some developers found themselves restricted by the console's lack of RAM. While working on beta builds of the PlayStation, Molyneux observed that its MIPS processor was not "quite as bullish" compared to that of a fast PC, and said that it took his team two weeks to port their PC code to the PlayStation development kits and another fortnight to achieve a four-fold speed increase. An engineer from Ocean Software, one of Europe's largest game developers at the time, thought that allocating RAM was a challenging aspect given the 3.5 megabyte restriction.
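The 3.5 MB figure is consistent with summing the machine's separate memory pools: the 2 MB of main RAM and 1 MB of video RAM detailed in the Hardware section below, plus the 512 KB of dedicated sound RAM commonly cited for the console (2 + 1 + 0.5 = 3.5 MB). The sound RAM figure is not stated in this article, so this breakdown should be read as a plausible reconstruction rather than a sourced one.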
Kutaragi said that while it would have been easy to double the amount of RAM for the PlayStation, the development team refrained from doing so to keep the retail cost down. Kutaragi saw the biggest challenge in developing the system as balancing the conflicting goals of high performance, low cost, and ease of programming, and felt he and his team were successful in this regard. The console's technical specifications were finalised in 1993 and its design during 1994. The PlayStation name and final design were confirmed during a press conference on 10 May 1994, although the price and release dates had not yet been disclosed. Sony released the PlayStation in Japan on 3 December 1994, a week after the release of the Sega Saturn, at a price of ¥39,800. Sales in Japan began with "stunning" success, with long queues in shops. Ohga later recalled that he realised how important the PlayStation had become for Sony when friends and relatives begged him for consoles for their children. The PlayStation sold 100,000 units on the first day and two million units within six months, although the Saturn outsold the PlayStation in the first few weeks due to the success of Virtua Fighter. By the end of 1994, 300,000 PlayStation units had been sold in Japan compared to 500,000 Saturn units. A grey market emerged for PlayStations shipped from Japan to North America and Europe, with buyers of such consoles paying up to £700. Before the release in North America, Sega and Sony presented their consoles at the first Electronic Entertainment Expo (E3) in Los Angeles on 11 May 1995. At their keynote presentation, Sega of America CEO Tom Kalinske revealed that the Saturn would be released immediately to select retailers at a price of $399. Next came Sony's turn: Olaf Olafsson, the head of SCEA, summoned Steve Race, the head of development, to the conference stage, who simply said "$299" and left the stage to a round of applause. The attention on the Sony conference was further bolstered by the surprise appearance of Michael Jackson and the showcase of highly anticipated games, including Wipeout (1995), Ridge Racer and Tekken (1994). In addition, Sony announced that no games would be bundled with the console. Although the Saturn had been released early in the United States to gain an advantage over the PlayStation, the surprise launch upset many retailers who were not informed in time, harming sales. Some retailers, such as KB Toys, responded by dropping the Saturn entirely. The PlayStation went on sale in North America on 9 September 1995. It sold more units within two days than the Saturn had in five months, with almost all of the initial shipment of 100,000 units sold in advance and shops across the country running out of consoles and accessories. One retailer later recalled: "When September 1995 arrived and Sony's Playstation roared out of the gate, things immediately felt different than [sic] they did with the Saturn launch earlier that year. Sega dropped the Saturn $100 to match the Playstation's $299 debut price, but sales weren't even close—Playstations flew out the door as fast as we could get them in stock." The well-received Ridge Racer contributed to the PlayStation's early success, with some critics considering it superior to Sega's arcade counterpart Daytona USA (1994), as did Battle Arena Toshinden (1995). There were over 100,000 pre-orders placed and 17 games available on the market by the time of the PlayStation's American launch, in comparison to the Saturn's six launch games.
The PlayStation was released in Europe on 29 September 1995 and in Australia on 15 November 1995. By November it had already outsold the Saturn by three to one in the United Kingdom, where Sony had allocated a £20 million marketing budget for the Christmas season compared to Sega's £4 million. Sony found early success in the United Kingdom by securing listings with independent shop owners as well as prominent High Street chains such as Comet and Argos. Within its first year, the PlayStation secured over 20% of the entire American video game market. From September to the end of 1995, sales in the United States amounted to 800,000 units, giving the PlayStation a commanding lead over the other fifth-generation consoles, though the SNES and Mega Drive from the fourth generation still outsold it. Sony reported that the attach rate of games sold per console was four to one. To meet increasing demand, Sony chartered jumbo jets and ramped up production in Europe and North America. By early 1996, the PlayStation had grossed $2 billion (equivalent to $4.106 billion in 2025) from worldwide hardware and software sales. By late 1996, sales in Europe totalled 2.2 million units, including 700,000 in the UK. Approximately 400 PlayStation games were in development, compared to around 200 games for the Saturn and 60 for the Nintendo 64. In India, the PlayStation was launched as a test market during 1999–2000 through Sony showrooms, selling 100 units. Sony finally launched the console (in its PS one model) countrywide on 24 January 2002 at a price of Rs 7,990, with 26 games available from the start. The PlayStation also did well in markets where it was never officially released. In Brazil, a third company's registration of the trademark prevented an official release, so the officially distributed Sega Saturn initially took over the market; as the Sega console was withdrawn, however, PlayStation imports and widespread piracy increased. In China, the most popular 32-bit console was the Sega Saturn, but after it left the market the PlayStation's user base grew to 300,000 by January 2000, even though Sony China had no plans to release it. The PlayStation was backed by a successful marketing campaign, allowing Sony to gain an early foothold in Europe and North America. Initially, PlayStation demographics were skewed towards adults, but the audience broadened after the first price drop. While the Saturn was positioned towards 18- to 34-year-olds, the PlayStation was initially marketed exclusively towards teenagers. Executives from both Sony and Sega reasoned that because younger players typically looked up to older, more experienced players, advertising targeted at teens and adults would draw them in too. Additionally, Sony found that adults reacted best to advertising aimed at teenagers; Lee Clow surmised that people entering adulthood regressed and became "17 again" when they played video games. The console was marketed with advertising slogans in which the controller's button symbols stood in for letters: "LIVE IN YUR WRLD. PLY IN URS" (Live in Your World. Play in Ours.) and "U R NOT E" (with a red "E", read as "you are not ready"). The four geometric shapes were derived from the symbols for the four buttons on the controller. Clow thought that by invoking such provocative statements, gamers would respond to the contrary and say "'Bullshit.
Let me show you how ready I am.'" As the console's appeal broadened, Sony's marketing efforts expanded from their earlier focus on mature players to specifically target younger children as well. Shortly after the PlayStation's release in Europe, Sony tasked marketing manager Geoff Glendenning with assessing the desires of a new target audience. Sceptical of Nintendo's and Sega's reliance on television campaigns, Glendenning theorised that young adults transitioning from fourth-generation consoles would feel neglected by marketing directed at children and teenagers. Recognising the influence that early 1990s underground clubbing and rave culture had on young people, especially in the United Kingdom, Glendenning felt that the culture had become mainstream enough to help cultivate the PlayStation's emerging identity. Sony partnered with prominent nightclubs such as Ministry of Sound and with festival promoters to organise dedicated PlayStation areas where demonstrations of select games could be tested. The Sheffield-based graphic design studio The Designers Republic was contracted by Sony to produce promotional materials aimed at a fashionable, club-going audience. Psygnosis' Wipeout in particular became associated with nightclub culture, as it was widely featured in venues. By 1997, there were 52 nightclubs in the United Kingdom with dedicated PlayStation rooms. Glendenning recalled that he had discreetly used at least £100,000 a year in slush fund money to invest in impromptu marketing. In 1996, Sony expanded its CD production facilities in the United States due to the high demand for PlayStation games, increasing monthly output from 4 million discs to 6.5 million discs. This was necessary because PlayStation sales were running at twice the rate of Saturn sales, and its lead dramatically increased when both consoles dropped in price to $199 that year. The PlayStation outsold the Saturn at a similar ratio in Europe during 1996. Sales figures for PlayStation hardware and software only increased following the launch of the Nintendo 64. Tokunaka speculated that the Nintendo 64 launch had actually helped PlayStation sales by raising public awareness of the gaming market through Nintendo's added marketing efforts. Despite this, the PlayStation took longer to achieve dominance in Japan. Tokunaka said that, even after the PlayStation and Saturn had been on the market for nearly two years, the competition between them was still "very close", and neither console had led in sales for any meaningful length of time. By 1998, Sega, spurred by its declining market share and significant financial losses, launched the Dreamcast in a last-ditch attempt to stay in the industry. Although its launch was successful, the technically superior 128-bit console was unable to subdue Sony's dominance. Sony still held 60% of the overall video game market share in North America at the end of 1999. Sega's initial confidence in its new console was undermined when Japanese sales were lower than expected, with disgruntled Japanese consumers reportedly returning their Dreamcasts in exchange for PlayStation software. On 2 March 1999, Sony officially revealed details of the PlayStation 2, which Kutaragi announced would feature a graphics processor designed to push more raw polygons than any console in history, effectively rivalling most supercomputers.
The PlayStation continued to sell strongly at the turn of the millennium: in mid-2000, Sony released the PS one, a smaller, redesigned variant which went on to outsell all other consoles that year, including the PlayStation 2. In 2005, the PlayStation became the first console to ship 100 million units, a milestone the PlayStation 2 later reached even faster. The combined successes of both PlayStation consoles led Sega to retire the Dreamcast in 2001 and abandon the console business entirely. The PlayStation was eventually discontinued on 23 March 2006, over eleven years after its release and less than a year before the debut of the PlayStation 3. Hardware The main microprocessor is an R3000 CPU made by LSI Logic, operating at a clock rate of 33.8688 MHz and delivering around 30 MIPS. This 32-bit CPU relies heavily on the "cop2" 3D and matrix-maths coprocessor on the same die, the Geometry Transfer Engine (GTE), to provide the speed necessary to render complex 3D graphics. The role of the separate GPU chip is to draw 2D polygons and apply shading and textures to them: the rasterisation stage of the graphics pipeline. Sony's custom 16-bit sound chip supports ADPCM sources with up to 24 sound channels, offers a sampling rate of up to 44.1 kHz, and supports music sequencing. The console features 2 MB of main RAM, with an additional 1 MB of video RAM. The PlayStation has a maximum colour depth of 16.7 million colours (24-bit true colour) with 32 levels of transparency and unlimited colour look-up tables. The PlayStation can output composite, S-Video or RGB video signals through its AV Multi connector (with older models also having RCA connectors for composite), displaying resolutions from 256×224 to 640×480 pixels; different games can use different resolutions. Earlier models also had proprietary parallel and serial ports that could be used to connect accessories or multiple consoles together; these were later removed due to lack of use. The PlayStation uses a proprietary video decompression unit, the MDEC, which is integrated into the CPU and allows for the presentation of full-motion video at a higher quality than other consoles of its generation. Unusually for the time, the PlayStation lacks a dedicated 2D graphics processor; 2D elements are instead calculated as polygons by the GTE so that they can be processed and displayed on screen by the GPU. While running, the GPU can generate up to 360,000 flat-shaded polygons per second, around 180,000 texture-mapped and light-sourced polygons per second, and up to 4,000 sprites. The PlayStation went through a number of variants during its production run. Externally, the most notable change was the gradual reduction in the number of external connectors on the rear of the unit. This started with the original Japanese launch units: the SCPH-1000, released on 3 December 1994, was the only model that had an S-Video port, which was removed from the next model. Subsequent models saw a reduction in the number of parallel ports, with the final version only retaining one serial port. Sony also marketed a development kit for amateur developers known as the Net Yaroze (meaning "Let's do it together" in Japanese). It was launched in June 1996 in Japan and, following public interest, was released the next year in other countries. The Net Yaroze allowed hobbyists to create their own games and upload them via an online forum run by Sony. The console was only available through an ordering service, and came with the documentation and software necessary to program PlayStation games and applications in C.
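Since Net Yaroze owners wrote their games in C, the flavour of the machine's 3D maths is easy to illustrate in that language. The sketch below models, in portable C, the kind of fixed-point rotate-and-project work the GTE performs in hardware; the 4.12 fixed-point format is a commonly cited GTE convention, and all function and variable names here are illustrative assumptions rather than actual PlayStation library calls.

```c
#include <stdio.h>
#include <stdint.h>

/* Illustrative sketch only: the PlayStation's GTE performs fixed-point
 * matrix maths in hardware; this models the idea in portable C.
 * ONE is 1.0 in 4.12 fixed point (an assumed format, commonly cited
 * for the GTE but not specified in this article). */
#define ONE 4096

typedef struct { int32_t x, y, z; } Vec3;

/* Multiply two 4.12 fixed-point numbers. */
static int32_t fpmul(int32_t a, int32_t b) {
    return (int32_t)(((int64_t)a * b) >> 12);
}

/* Rotate v about the Y axis, with the angle's sine and cosine supplied
 * in 4.12 fixed point, then perspective-project to screen coordinates. */
static void rotate_and_project(Vec3 v, int32_t sin_a, int32_t cos_a,
                               int32_t dist, int *sx, int *sy) {
    Vec3 r;
    r.x = fpmul(v.x, cos_a) + fpmul(v.z, sin_a);
    r.y = v.y;
    r.z = fpmul(v.z, cos_a) - fpmul(v.x, sin_a);
    /* Simple perspective divide: screen = coord * distance / (distance + depth). */
    *sx = (int)((int64_t)r.x * dist / (dist + r.z));
    *sy = (int)((int64_t)r.y * dist / (dist + r.z));
}

int main(void) {
    Vec3 v = { ONE, ONE, ONE };  /* vertex at (1.0, 1.0, 1.0) */
    int sx, sy;
    /* ~30 degrees: sin = 0.5 (2048), cos = 0.866 (3547) in 4.12 fixed point. */
    rotate_and_project(v, 2048, 3547, 4 * ONE, &sx, &sy);
    printf("projected to (%d, %d)\n", sx, sy);
    return 0;
}
```

Working in integers this way is what let the R3000, which has no floating-point unit, drive 3D at all; it is also why PlayStation geometry visibly "wobbles" at low precision, a trade-off the GTE's designers accepted for speed.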
On 7 July 2000, Sony released the PS One (stylised as "PS one" or "PSone"), a smaller, redesigned version of the original PlayStation. In 2002, Sony released a 5-inch (130 mm) LCD screen add-on for the PS One, referred to as the "Combo pack"; it also included a car cigarette-lighter adaptor, adding an extra layer of portability. Production of the LCD "Combo pack" ceased in 2004, when the popularity of the PlayStation began to wane in markets outside Japan. A total of 28.15 million PS One units had been sold by the time it was discontinued in March 2006. Three iterations of the PlayStation's controller were released over the console's lifespan. The first, the PlayStation controller, was released alongside the console in December 1994. It features four individual directional buttons (as opposed to a conventional D-pad), a pair of shoulder buttons on each side, Start and Select buttons in the centre, and four face buttons consisting of simple geometric shapes: a green triangle, red circle, blue cross, and pink square (△, ○, ✕, □). Rather than labelling its buttons with the traditional letters or numbers, the PlayStation controller established a trademark motif which would be incorporated heavily into the PlayStation brand. Teiyu Goto, the designer of the original PlayStation controller, said that the circle and cross represent "yes" and "no", respectively (though this layout is reversed in Western versions); the triangle symbolises a point of view, and the square is equated to a sheet of paper, to be used to access menus. The European and North American models of the original PlayStation controller are roughly 10% larger than the Japanese variant, to account for the fact that the average person in those regions has larger hands than the average Japanese person. Sony's first analogue gamepad, the PlayStation Analog Joystick (often erroneously referred to as the "Sony Flightstick"), was first released in Japan in April 1996. Featuring two parallel joysticks, it uses potentiometer technology previously used on consoles such as the Vectrex: instead of relying on binary eight-way switches, the controller detects minute angular changes through the entire range of motion. The stick also features a thumb-operated digital hat switch on the right joystick, corresponding to the traditional D-pad and used when simple digital movements were necessary. The Analog Joystick sold poorly in Japan due to its high cost and cumbersome size. The increasing popularity of 3D games prompted Sony to add analogue sticks to its controller design to give users more freedom of movement in virtual 3D environments. The first official analogue controller, the Dual Analog Controller, was revealed to the public in a small glass booth at the 1996 PlayStation Expo in Japan, and released in April 1997 to coincide with the Japanese releases of the analogue-capable games Tobal 2 and Bushido Blade. In addition to the two analogue sticks (which also introduced two new buttons mapped to clicking in the analogue sticks), the Dual Analog Controller features an "Analog" button and LED beneath the "Start" and "Select" buttons which toggles analogue functionality on or off. The controller also features rumble support, though Sony decided that haptic feedback would be removed from all overseas iterations before the United States release.
A Sony spokesman stated that the feature was removed for "manufacturing reasons", although rumours circulated that Nintendo had attempted to legally block the release of the controller outside Japan due to similarities with the Nintendo 64 controller's Rumble Pak. A Nintendo spokesman, however, denied that Nintendo took legal action. Next Generation's Chris Charla theorised that Sony dropped vibration feedback to keep the price of the controller down. In November 1997, Sony introduced the DualShock controller, its name deriving from its use of two (dual) vibration motors (shock). Unlike its predecessor, its analogue sticks feature textured rubber grips, and it has longer handles, slightly different shoulder buttons, and rumble feedback included as standard on all versions. The DualShock later replaced its predecessors as the default controller. Sony released a series of peripherals to add extra layers of functionality to the PlayStation. Such peripherals include memory cards, the PlayStation Mouse, the PlayStation Link Cable, the Multiplayer Adapter (a four-player multitap), the Memory Drive (a disk drive for 3.5-inch floppy disks), the GunCon (a light gun), and the Glasstron (a monoscopic head-mounted display). Released exclusively in Japan, the PocketStation is a memory card peripheral which acts as a miniature personal digital assistant. The device features a monochrome liquid crystal display (LCD), infrared communication capability, a real-time clock, built-in flash memory, and sound capability. Sharing similarities with the Dreamcast's VMU peripheral, the PocketStation was typically distributed with certain PlayStation games, enhancing them with added features. The PocketStation proved popular in Japan, selling over five million units. Sony planned to release the peripheral outside Japan, but the release was cancelled despite promotion in Europe and North America. In addition to playing games, most PlayStation models are equipped to play CD Audio, and the Asian model SCPH-5903 can also play Video CDs. Like most CD players, the PlayStation can play songs in a programmed order, shuffle the playback order of the disc, and repeat one song or the entire disc. Later PlayStation models use a music visualisation function called SoundScope. This function, as well as a memory card manager, is accessed by starting the console without inserting a game or closing the CD tray, thereby bringing up a graphical user interface (GUI) for the PlayStation BIOS. The GUI differs between the PlayStation and PS One depending on the firmware version: the original PlayStation GUI had a dark blue background with rainbow graffiti used as buttons, while the early PAL PlayStation and PS One GUI had a grey blocked background with two icons in the middle. PlayStation emulation is versatile and can run on numerous modern devices. Bleem! was a commercial emulator released for IBM-compatible PCs and the Dreamcast in 1999. It was notable for being aggressively marketed during the PlayStation's lifetime, and was the centre of multiple controversial lawsuits filed by Sony. Bleem! was programmed in assembly language, which allowed it to emulate PlayStation games with improved visual fidelity, enhanced resolutions, and filtered textures that were not possible on original hardware. Sony sued Bleem! two days after its release, citing copyright infringement and accusing the company of engaging in unfair competition and patent infringement by allowing the use of PlayStation BIOSes on a Sega console. Bleem!
was subsequently forced to shut down in November 2001.

Sony was aware that using CDs for game distribution could leave games vulnerable to piracy, due to the growing popularity of CD-R media and optical disc drives with burning capability. To preclude illegal copying, a proprietary process for PlayStation disc manufacturing was developed that, in conjunction with an augmented optical drive, prevented burned copies of games from booting on an unmodified console. Specifically, all genuine PlayStation discs were pressed with a small section of deliberately irregular data, which the PlayStation's optical pick-up could detect and decode. Consoles would not boot game discs without a specific wobble frequency encoded in the data of the disc's pregap sector (the same system was also used to encode discs' regional lockouts). This signal was within Red Book CD tolerances, so the actual content of PlayStation discs could still be read by a conventional disc drive; however, a conventional drive could not detect the wobble frequency, and so duplicated discs omitted it: the laser pick-up system of any ordinary optical drive interprets the wobble as an oscillation of the disc surface and compensates for it during reading.
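The logic of the boot check can be modelled in miniature. The Python sketch below is an illustrative simplification, not the console's actual firmware: the region strings, the disc representation, and the decode function are stand-ins for the analogue wobble detection described above.

    # Illustrative model of the boot-time disc check. A pressed disc carries
    # a wobble-modulated signal that decodes to a region code; a burned copy
    # reproduces the data but not the wobble, so the decode yields nothing.
    # The region strings below are placeholders for illustration.

    REGIONS = {"SCEI": "Japan", "SCEA": "North America", "SCEE": "Europe"}

    def decode_wobble(disc: dict) -> str | None:
        """Return the region code recovered from the pregap wobble, or None
        if no valid wobble modulation is present (e.g. a CD-R copy)."""
        return disc.get("wobble_code")  # stand-in for the analogue decode step

    def can_boot(disc: dict, console_region: str) -> bool:
        code = decode_wobble(disc)
        return code in REGIONS and code == console_region

    pressed = {"wobble_code": "SCEA"}  # factory-pressed North American disc
    burned = {}                        # bit-for-bit copy: data, but no wobble

    print(can_boot(pressed, "SCEA"))  # True
    print(can_boot(burned, "SCEA"))   # False: the copy protection trips
    print(can_boot(pressed, "SCEE"))  # False: the regional lockout trips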
Early PlayStations, particularly early 1000 models, suffer from skipping full-motion video or physical "ticking" noises from the unit. The problems stem from poorly placed vents leading to overheating in some environments, causing the plastic mouldings inside the console to warp slightly and create knock-on effects with the laser assembly. The solution is to sit the console on a surface which dissipates heat efficiently, in a well-ventilated area, or to raise the unit slightly from its resting surface. Sony representatives also recommended unplugging the PlayStation when it is not in use, as the system draws a small amount of power (and therefore generates heat) even when turned off. The first batch of PlayStations use a KSM-440AAM laser unit, whose case and movable parts are all built out of plastic. Over time, the plastic lens sled rail wears out, usually unevenly, due to friction. The placement of the laser unit close to the power supply accelerates wear, as the additional heat makes the plastic more vulnerable to friction. Eventually, one side of the lens sled becomes so worn that the laser tilts and no longer points directly at the CD; after this, games will no longer load due to data read errors. Sony fixed the problem by making the sled out of die-cast metal and placing the laser unit further away from the power supply on later PlayStation models. Due to an engineering oversight, the PlayStation also does not produce a proper signal on several older models of televisions, causing the display to flicker or bounce around the screen. Sony decided not to change the console design, since only a small percentage of PlayStation owners used such televisions, and instead gave consumers the option of sending their PlayStation unit to a Sony service centre to have an official modchip installed, allowing play on older televisions.

Game library

The PlayStation featured a diverse game library which grew to appeal to all types of players. Critically acclaimed PlayStation games included Final Fantasy VII (1997), Crash Bandicoot (1996), Spyro the Dragon (1998), and Metal Gear Solid (1998), all of which became established franchises. Final Fantasy VII is credited with allowing role-playing games to gain mass-market appeal outside Japan, and is considered one of the greatest and most influential video games ever made. The PlayStation's bestselling game is Gran Turismo (1997), which sold 10.85 million units. By the PlayStation's discontinuation in 2006, cumulative software shipments had reached 962 million units.

Following its 1994 launch in Japan, early games included Ridge Racer, Crime Crackers, King's Field, Motor Toon Grand Prix, Toh Shin Den (i.e. Battle Arena Toshinden), and Kileak: The Blood. The first two games available at its later North American launch were Jumping Flash! (1995) and Ridge Racer, with Jumping Flash! heralded as a forerunner of 3D graphics in console gaming. Wipeout, Air Combat, Twisted Metal, Warhawk and Destruction Derby were among the popular first-year games, and the first to be reissued as part of Sony's Greatest Hits or Platinum range. At the time of the PlayStation's first Christmas season, Psygnosis had produced around 70% of its launch catalogue; their breakthrough racing game Wipeout was acclaimed for its techno soundtrack and helped raise awareness of Britain's underground music community. Eidos Interactive's action-adventure game Tomb Raider contributed substantially to the success of the console in 1996, with its main protagonist Lara Croft becoming an early gaming icon and garnering unprecedented media promotion. Licensed tie-in video games of popular films were also prevalent; Argonaut Games' 2001 adaptation of Harry Potter and the Philosopher's Stone went on to sell over eight million copies late in the console's lifespan.

Third-party developers remained largely committed to the console even after the launch of the PlayStation 2, continuing to broaden its wide-ranging game catalogue; notable exclusives from this era include Harry Potter and the Philosopher's Stone, Fear Effect 2: Retro Helix, Syphon Filter 3, C-12: Final Resistance, Dance Dance Revolution Konamix and Digimon World 3.[c] Sony assisted with game reprints as late as 2008 with Metal Gear Solid: The Essential Collection, the last PlayStation game officially released and licensed by Sony.

Initially, in the United States, PlayStation games were packaged in long cardboard boxes, similar to non-Japanese 3DO and Saturn games. Sony later switched to the jewel case format typically used for audio CDs and Japanese video games, as this format took up less retailer shelf space (which was at a premium due to the large number of PlayStation games being released), and focus testing showed that most consumers preferred this format.

Reception

The PlayStation was mostly well received upon release, with critics in the West generally welcoming the new console. The staff of Next Generation reviewed the PlayStation a few weeks after its North American launch, commenting that, while the CPU is "fairly average", the supplementary custom hardware, such as the GPU and sound processor, is stunningly powerful. They praised the PlayStation's focus on 3D, and complimented the comfort of its controller and the convenience of its memory cards. Giving the system 4½ out of 5 stars, they concluded, "To succeed in this extremely cut-throat market, you need a combination of great hardware, great games, and great marketing. Whether by skill, luck, or just deep pockets, Sony has scored three out of three in the first salvo of this war." Albert Kim from Entertainment Weekly praised the PlayStation as a technological marvel rivalling the hardware of Sega and Nintendo.
In May 1995, Famicom Tsūshin scored the console 19 out of 40, lower than the Saturn's 24 out of 40. In a 1997 year-end review, a team of five Electronic Gaming Monthly editors gave the PlayStation scores of 9.5, 8.5, 9.0, 9.0, and 9.5, in each case the highest score that editor gave to any of the five consoles reviewed in the issue. They lauded the breadth and quality of the games library, saying it had vastly improved over previous years as developers mastered the system's capabilities and Sony revised its stance on 2D and role-playing games. They also complimented the low price point of the games compared to the Nintendo 64's, and noted that it was the only console on the market that could be relied upon to deliver a solid stream of games for the coming year, primarily because third-party developers almost unanimously favoured it over its competitors.

Legacy

SCE was an upstart in the video game industry in late 1994, as the video game market in the early 1990s was dominated by Nintendo and Sega. Nintendo had been the clear leader in the industry since the introduction of the Nintendo Entertainment System in 1985, and the Nintendo 64 was initially expected to maintain this position. The PlayStation's target audience included the generation which was the first to grow up with mainstream video games, along with 18- to 29-year-olds who were not the primary focus of Nintendo. By the late 1990s, Sony had become a highly regarded console brand due to the PlayStation, with a significant lead over second-place Nintendo, while Sega was relegated to a distant third. The PlayStation became the first "computer entertainment platform" to ship over 100 million units worldwide, with many critics attributing the console's success to third-party developers. It remains the sixth best-selling console of all time as of 2025, with a total of 102.49 million units sold. Around 7,900 individual games were published for the console during its 11-year life span, the second-highest number of games ever produced for a console. Its success was a significant financial boon for Sony, with its video game division contributing 23% of the company's operating profits.

Sony's next-generation PlayStation 2, which is backward compatible with the PlayStation's DualShock controller and games, was announced in 1999 and launched in 2000. The PlayStation's lead in installed base and developer support paved the way for the success of its successor, which overcame the earlier launch of Sega's Dreamcast and then fended off competition from Microsoft's newcomer Xbox and Nintendo's GameCube. The PlayStation 2's immense success and the failure of the Dreamcast were among the main factors which led to Sega abandoning the console market. To date, five PlayStation home consoles have been released, which have continued the same numbering scheme, as well as two portable systems. The PlayStation 3 also maintained backward compatibility with original PlayStation discs. Hundreds of PlayStation games have been digitally re-released on the PlayStation Portable, PlayStation 3, PlayStation Vita, PlayStation 4, and PlayStation 5.

The PlayStation has often been ranked among the best video game consoles. In 2018, Retro Gamer named it the third-best console, crediting its sophisticated 3D capabilities as a key factor in its mass success, and lauding it as a "game-changer in every sense possible".
In 2009, IGN ranked the PlayStation the seventh-best console on its list, noting its appeal to older audiences as a crucial factor in propelling the video game industry, as well as its role in the industry's transition to the CD-ROM format. Keith Stuart from The Guardian likewise named it the seventh-best console in 2020, declaring that its success was so profound it "ruled the 1990s". In January 2025, Lorentio Brodesco announced the nsOne project, an attempt to reverse engineer the PlayStation's motherboard. Brodesco stated that "detailed documentation on the original motherboard was either incomplete or entirely unavailable". The project was successfully crowdfunded via Kickstarter. In June, Brodesco manufactured the first working motherboard, promising a fully routed version with multilayer routing, as well as documentation and design files, in the near future.

The success of the PlayStation contributed to the demise of cartridge-based home consoles. While not the first system to use an optical disc format, it was the first highly successful one, and ended up going head-to-head with the cartridge-based Nintendo 64,[d] which the industry had expected to use CDs like the PlayStation. After the demise of the Sega Saturn, Nintendo was left as Sony's main competitor in Western markets. Nintendo chose not to use CDs for the Nintendo 64, likely because the proprietary cartridge format helped enforce copy protection, given the company's substantial reliance on licensing and exclusive games for its revenue. Besides their larger capacity, CD-ROMs could be produced in bulk at a much faster rate than ROM cartridges: a week, compared to two to three months. Further, the cost of production per unit was far cheaper, allowing Sony to offer games at about 40% lower cost to the user than ROM cartridges while still making the same amount of net revenue. In Japan, Sony published smaller runs of a wide variety of games for the PlayStation as a risk-limiting step, a model that had been used by Sony Music for CD audio discs. The production flexibility of CD-ROMs meant that Sony could quickly produce larger volumes of popular games to get them onto the market, something that could not be done with cartridges due to their manufacturing lead time. The lower production costs of CD-ROMs also allowed publishers an additional source of profit: budget-priced reissues of games which had already recouped their development costs. Tokunaka remarked in 1996:

"Choosing CD-ROM is one of the most important decisions that we made. As I'm sure you understand, PlayStation could just as easily have worked with masked ROM [cartridges]. The 3D engine and everything—the whole PlayStation format—is independent of the media. But for various reasons (including the economies for the consumer, the ease of the manufacturing, inventory control for the trade, and also the software publishers) we deduced that CD-ROM would be the best media for PlayStation."

The increasing complexity of developing games pushed cartridges to their storage limits and gradually discouraged some third-party developers. Part of the CD format's appeal to publishers was that discs could be produced at a significantly lower cost and offered more production flexibility to meet demand.
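The per-unit economics can be illustrated with a toy calculation. The dollar figures below are hypothetical round numbers chosen only to demonstrate the mechanism; the only quantity taken from the text is the roughly 40% retail price difference.

    # Hypothetical comparison of per-unit economics for ROM cartridges vs.
    # CD-ROMs. All dollar figures are invented for illustration.

    cartridge = {"retail": 70.00, "unit_cost": 25.00}  # costly masked ROM
    cd = {"retail": 42.00, "unit_cost": 2.00}          # ~40% lower retail

    for name, fmt in (("cartridge", cartridge), ("cd-rom", cd)):
        margin = fmt["retail"] - fmt["unit_cost"]
        print(f"{name}: retail ${fmt['retail']:.2f}, "
              f"unit cost ${fmt['unit_cost']:.2f}, margin ${margin:.2f}")

    # Both margins land in the same $40-45 range: a far cheaper medium lets
    # the retail price drop sharply while net revenue per unit holds steady.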
As a result, some third-party developers switched to the PlayStation, including Square and Enix, whose Final Fantasy VII and Dragon Quest VII respectively had been planned for the Nintendo 64 (the two companies later merged to form Square Enix). Other developers released fewer games for the Nintendo 64; Konami, for example, released only thirteen N64 games but over fifty for the PlayStation. Nintendo 64 game releases were less frequent than the PlayStation's, with many being developed either by Nintendo itself or by second-party developers such as Rare.

The PlayStation Classic is a dedicated video game console made by Sony Interactive Entertainment that emulates PlayStation games. It was announced in September 2018 at the Tokyo Game Show, and released on 3 December 2018, the 24th anniversary of the release of the original console. As a dedicated console, the PlayStation Classic features 20 pre-installed games, which run on the open-source emulator PCSX. The console is bundled with two replica wired PlayStation controllers (those without analogue sticks), an HDMI cable, and a USB Type-A cable. Internally, the console uses a MediaTek MT8167A system on a chip with four Cortex-A35 CPU cores clocked at 1.5 GHz and a PowerVR GE8300 graphics processing unit, along with 16 GB of eMMC flash storage and 1 GB of DDR3 SDRAM. The PlayStation Classic is 45% smaller than the original console. It received negative reviews from critics and was compared unfavourably to Nintendo's rival Nintendo Entertainment System Classic Edition and Super Nintendo Entertainment System Classic Edition. Criticism was directed at its meagre game library, user interface, emulation quality, use of PAL versions for certain games, use of the original controller, and high retail price, though the console's design received praise. The console sold poorly.

See also Notes References
========================================