[SOURCE: https://en.wikipedia.org/wiki/Sexagesimal_number_system] | [TOKENS: 2885]
Sexagesimal

Sexagesimal, also known as base 60, is a numeral system with sixty as its base. It originated with the ancient Sumerians in the 3rd millennium BC, was passed down to the ancient Babylonians, and is still used—in a modified form—for measuring time, angles, and geographic coordinates. The number 60, a superior highly composite number, has twelve divisors, namely 1, 2, 3, 4, 5, 6, 10, 12, 15, 20, 30, and 60, of which 2, 3, and 5 are prime numbers. With so many factors, many fractions involving sexagesimal numbers are simplified. For example, one hour can be divided evenly into sections of 30 minutes, 20 minutes, 15 minutes, 12 minutes, 10 minutes, 6 minutes, 5 minutes, 4 minutes, 3 minutes, 2 minutes, and 1 minute. 60 is the smallest number that is divisible by every number from 1 to 6; that is, it is the lowest common multiple of 1, 2, 3, 4, 5, and 6. In this article, all sexagesimal digits are represented as decimal numbers, except where otherwise noted. For example, the largest sexagesimal digit is "59".

Origin

According to Otto Neugebauer, the origins of sexagesimal are not as simple, consistent, or singular in time as they are often portrayed. Throughout their many centuries of use, which continues today for specialized topics such as time, angles, and astronomical coordinate systems, sexagesimal notations have always contained a strong undercurrent of decimal notation, such as in how sexagesimal digits are written. Their use has also always included (and continues to include) inconsistencies in where and how various bases are used to represent numbers even within a single text. The most powerful driver for rigorous, fully self-consistent use of sexagesimal has always been its mathematical advantages for writing and calculating fractions. In ancient texts this shows up in the fact that sexagesimal is used most uniformly and consistently in mathematical tables of data. Another practical factor that helped expand the use of sexagesimal in the past, even if less consistently than in mathematical tables, was its decided advantages to merchants and buyers for making everyday financial transactions easier when they involved bargaining for and dividing up larger quantities of goods. In the late 3rd millennium BC, Sumerian/Akkadian units of weight included the kakkaru (talent, approximately 30 kg) divided into 60 manû (mina), which was further subdivided into 60 šiqlu (shekel); the descendants of these units persisted for millennia, though the Greeks later coerced this relationship into the more base-10–compatible ratio of a shekel being one 50th of a mina. Apart from mathematical tables, the inconsistencies in how numbers were represented within most texts extended all the way down to the most basic cuneiform symbols used to represent numeric quantities. For example, the cuneiform symbol for 1 was an ellipse made by applying the rounded end of the stylus at an angle to the clay, while the sexagesimal symbol for 60 was a larger oval or "big 1". But within the same texts in which these symbols were used, the number 10 was represented as a circle made by applying the round end of the stylus perpendicular to the clay, and a larger circle or "big 10" was used to represent 100. Such multi-base numeric quantity symbols could be mixed with each other and with abbreviations, even within a single number.
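The divisibility claims above are easy to check directly. The following minimal Python sketch (an illustration, not part of the article) enumerates the divisors of 60, the corresponding even divisions of an hour, and the least common multiple of 1 through 6:

```python
from math import lcm

# All divisors of 60.
divs = [d for d in range(1, 61) if 60 % d == 0]
print(divs)       # [1, 2, 3, 4, 5, 6, 10, 12, 15, 20, 30, 60]
print(len(divs))  # 12 divisors, as stated above

# Each divisor d splits an hour evenly into d sections of 60/d minutes.
for d in divs:
    print(f"{d} section(s) of {60 // d} minutes")

# 60 is the least common multiple of 1, 2, 3, 4, 5, and 6.
print(lcm(1, 2, 3, 4, 5, 6))  # 60
```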
The details and even the magnitudes implied (since zero was not used consistently) were idiomatic to the particular time periods, cultures, and quantities or concepts being represented. In modern times there is the recent innovation of adding decimal fractions to sexagesimal astronomical coordinates.

Usage

The sexagesimal system as used in ancient Mesopotamia was not a pure base-60 system, in the sense that it did not use 60 distinct symbols for its digits. Instead, the cuneiform digits used ten as a sub-base in the fashion of a sign-value notation: a sexagesimal digit was composed of a group of narrow, wedge-shaped marks representing units up to nine and a group of wide, wedge-shaped marks representing up to five tens. The value of the digit was the sum of the values of its component parts. Numbers larger than 59 were indicated by multiple symbol blocks of this form in place value notation. Because there was no symbol for zero it is not always immediately obvious how a number should be interpreted, and its true value must sometimes have been determined by its context. For example, the symbols for 1 and 60 are identical. Later Babylonian texts used a placeholder to represent zero, but only in the medial positions, and not on the right-hand side of the number, as in numbers like 13200 (3,40,0 in sexagesimal). In the Chinese calendar, a system is commonly used in which days or years are named by positions in a sequence of ten stems and in another sequence of 12 branches. The same stem and branch repeat every 60 steps through this cycle. Book VIII of Plato's Republic involves an allegory of marriage centered on the number 60⁴ = 12960000 and its divisors. This number has the particularly simple sexagesimal representation 1,0,0,0,0. Later scholars have invoked both Babylonian mathematics and music theory in an attempt to explain this passage. Ptolemy's Almagest, a treatise on mathematical astronomy written in the second century AD, uses base 60 to express the fractional parts of numbers. In particular, his table of chords, which was essentially the only extensive trigonometric table for more than a millennium, has fractional parts of a degree in base 60, and was practically equivalent to a modern-day table of values of the sine function. Medieval astronomers also used sexagesimal numbers to note time. Al-Biruni first subdivided the hour sexagesimally into minutes, seconds, thirds and fourths in 1000 while discussing Jewish months. Around 1235 John of Sacrobosco continued this tradition, although Nothaft thought Sacrobosco was the first to do so. The Parisian version of the Alfonsine tables (ca. 1320) used the day as the basic unit of time, recording multiples and fractions of a day in base-60 notation. The sexagesimal number system continued to be frequently used by European astronomers for performing calculations as late as 1671. For instance, Jost Bürgi in Fundamentum Astronomiae (presented to Emperor Rudolf II in 1592), his colleague Ursus in Fundamentum Astronomicum, and possibly also Henry Briggs, used multiplication tables based on the sexagesimal system in the late 16th century, to calculate sines. In the late 18th and early 19th centuries, Tamil astronomers were found to make astronomical calculations, reckoning with shells using a mixture of decimal and sexagesimal notations developed by Hellenistic astronomers. Base-60 number systems have also been used in some other cultures that are unrelated to the Sumerians, for example by the Ekari people of Western New Guinea.
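To make the sub-base structure concrete, here is a minimal Python sketch (an illustration, not a historical algorithm) that decomposes a single sexagesimal digit into its tens-marks and unit-marks, and converts an integer into its list of base-60 digits:

```python
# Decompose a sexagesimal digit (0-59) into Babylonian-style components:
# wide "ten" marks and narrow "unit" marks, whose values sum to the digit.
def digit_marks(d):
    assert 0 <= d <= 59
    return d // 10, d % 10

# Convert a non-negative integer into its base-60 digit list.
def to_sexagesimal(n):
    digits = []
    while True:
        n, r = divmod(n, 60)
        digits.append(r)
        if n == 0:
            return digits[::-1]

print(digit_marks(59))        # (5, 9): five tens-marks plus nine unit-marks
print(to_sexagesimal(13200))  # [3, 40, 0] -- a number ending in zero, as discussed above
print(to_sexagesimal(60**4))  # [1, 0, 0, 0, 0] -- Plato's 12960000
```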
Modern uses for the sexagesimal system include measuring angles, geographic coordinates, electronic navigation, and time. One hour of time is divided into 60 minutes, and one minute is divided into 60 seconds. Thus, a measurement of time such as 3:23:17 (3 hours, 23 minutes, and 17 seconds) can be interpreted as a whole sexagesimal number (no sexagesimal point), meaning 3 × 60² + 23 × 60¹ + 17 × 60⁰ seconds. However, each of the three sexagesimal digits in this number (3, 23, and 17) is written using the decimal system. Similarly, the practical unit of angular measure is the degree, of which there are 360 (six sixties) in a circle. There are 60 minutes of arc in a degree, and 60 arcseconds in a minute. In version 1.1 of the YAML data storage format, sexagesimals are supported for plain scalars, and formally specified both for integers and floating point numbers. This has led to confusion, as e.g. some MAC addresses would be recognised as sexagesimals and loaded as integers, where others were not and loaded as strings. In YAML 1.2 support for sexagesimals was dropped.

Notations

In Hellenistic Greek astronomical texts, such as the writings of Ptolemy, sexagesimal numbers were written using Greek alphabetic numerals, with each sexagesimal digit being treated as a distinct number. Hellenistic astronomers adopted a new symbol for zero, —°, which morphed over the centuries into other forms, including the Greek letter omicron, ο, normally meaning 70, but permissible in a sexagesimal system where the maximum value in any position is 59. The Greeks limited their use of sexagesimal numbers to the fractional part of a number. In medieval Latin texts, sexagesimal numbers were written using Arabic numerals; the different levels of fractions were denoted minuta (i.e., fraction), minuta secunda, minuta tertia, etc. By the 17th century it became common to denote the integer part of sexagesimal numbers by a superscripted zero, and the various fractional parts by one or more accent marks. John Wallis, in his Mathesis universalis, generalized this notation to include higher multiples of 60; giving as an example the number 49‵‵‵‵36‵‵‵25‵‵15‵1°15′25″36‴49⁗, where the numbers to the left are multiplied by higher powers of 60, the numbers to the right are divided by powers of 60, and the number marked with the superscripted zero is multiplied by 1. This notation leads to the modern signs for degrees, minutes, and seconds. The same minute and second nomenclature is also used for units of time, and the modern notation for time with hours, minutes, and seconds written in decimal and separated from each other by colons may be interpreted as a form of sexagesimal notation. In some usage systems, each position past the sexagesimal point was numbered, using Latin or French roots: prime or primus, seconde or secundus, tierce, quatre, quinte, etc. To this day we call the second-order part of an hour or of a degree a "second". Until at least the 18th century, 1/60 of a second was called a "tierce" or "third". In the 1930s, Otto Neugebauer introduced a modern notational system for Babylonian and Hellenistic numbers that substitutes modern decimal notation from 0 to 59 in each position, while using a semicolon (;) to separate the integer and fractional portions of the number and using a comma (,) to separate the positions within each portion. For example, the mean synodic month used by both Babylonian and Hellenistic astronomers and still used in the Hebrew calendar is 29;31,50,8,20 days. This notation is used in this article.
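The time example and Neugebauer's notation can both be checked with a short Python sketch (illustrative; parse_sexagesimal is our own name, not a standard function):

```python
from fractions import Fraction

# A clock time such as 3:23:17 read as the sexagesimal number 3,23,17 in seconds.
h, m, s = 3, 23, 17
total = h * 60**2 + m * 60**1 + s * 60**0
print(total)                    # 12197 seconds
q, sec = divmod(total, 60)      # and back again:
print(divmod(q, 60) + (sec,))   # (3, 23, 17)

# Parse Neugebauer-style notation, e.g. "29;31,50,8,20", into an exact fraction.
def parse_sexagesimal(text):
    integer, _, frac = text.partition(";")
    parts = [integer] + (frac.split(",") if frac else [])
    return sum(Fraction(int(p), 60**i) for i, p in enumerate(parts))

month = parse_sexagesimal("29;31,50,8,20")
print(month, float(month))      # 765433/25920, about 29.530594 days
```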
Fractions and irrational numbers

In the sexagesimal system, any fraction in which the denominator is a regular number (having only 2, 3, and 5 in its prime factorization) may be expressed exactly. Shown here are all fractions of this type in which the denominator is less than or equal to 60:

1/2 = 0;30   1/3 = 0;20   1/4 = 0;15   1/5 = 0;12   1/6 = 0;10
1/8 = 0;7,30   1/9 = 0;6,40   1/10 = 0;6   1/12 = 0;5   1/15 = 0;4
1/16 = 0;3,45   1/18 = 0;3,20   1/20 = 0;3   1/24 = 0;2,30   1/25 = 0;2,24
1/27 = 0;2,13,20   1/30 = 0;2   1/32 = 0;1,52,30   1/36 = 0;1,40   1/40 = 0;1,30
1/45 = 0;1,20   1/48 = 0;1,15   1/50 = 0;1,12   1/54 = 0;1,6,40   1/60 = 0;1

However, numbers that are not regular form more complicated repeating fractions. For example:

1/7 = 0;8,34,17,8,34,17,... (the digit group 8,34,17 repeats indefinitely)

The fact that the two numbers that are adjacent to sixty, 59 and 61, are both prime numbers implies that fractions that repeat with a period of one or two sexagesimal digits can only have regular number multiples of 59 or 61 as their denominators, and that other non-regular numbers have fractions that repeat with a longer period. The representations of irrational numbers in any positional number system (including decimal and sexagesimal) neither terminate nor repeat. The square root of 2, the length of the diagonal of a unit square, was approximated by the Babylonians of the Old Babylonian Period (1900 BC – 1650 BC) as 1;24,51,10. Because √2 ≈ 1.41421356... is an irrational number, it cannot be expressed exactly in sexagesimal (or indeed any integer-base system), but its sexagesimal expansion does begin 1;24,51,10,7,46,6,4,44... (OEIS: A070197). The value of π as used by the Greek mathematician and scientist Ptolemy was 3;8,30 = 3 + 8/60 + 30/60² = 377/120 ≈ 3.141666.... Jamshīd al-Kāshī, a 15th-century Persian mathematician, calculated 2π as a sexagesimal expression to its correct value when rounded to nine subdigits (thus to 1/60⁹); his value for 2π was 6;16,59,28,1,34,51,46,14,50. Like √2 above, 2π is an irrational number and cannot be expressed exactly in sexagesimal. Its sexagesimal expansion begins 6;16,59,28,1,34,51,46,14,49,55,12,35... (OEIS: A091649)
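The terminating and repeating behaviour described above, as well as the truncated digit expansions of irrational square roots, can be reproduced with a short Python sketch (our own illustration, using truncation rather than rounding):

```python
from fractions import Fraction
from math import isqrt

# Expand an exact fraction to [integer part; d1, d2, ...] in base 60.
def expand(x, places):
    whole, frac = divmod(x, 1)
    digits = [int(whole)]
    for _ in range(places):
        frac *= 60
        d, frac = divmod(frac, 1)
        digits.append(int(d))
    return digits

print(expand(Fraction(1, 8), 3))  # [0, 7, 30, 0]: 1/8 = 0;7,30 terminates
print(expand(Fraction(1, 7), 6))  # [0, 8, 34, 17, 8, 34, 17]: period 3

# Truncated base-60 digits of sqrt(n), using exact integer square roots.
def sqrt_sexagesimal(n, places):
    scaled = isqrt(n * 60 ** (2 * places))  # floor(sqrt(n) * 60**places)
    digits = []
    for _ in range(places):
        scaled, d = divmod(scaled, 60)
        digits.append(d)
    return [scaled] + digits[::-1]

print(sqrt_sexagesimal(2, 8))  # [1, 24, 51, 10, 7, 46, 6, 4, 44]
```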
========================================
[SOURCE: https://www.theverge.com/reviews] | [TOKENS: 1053]
Reviews

Looking to buy your next phone, laptop, headphones, or other tech gear? Or maybe you just want to know all of the details about the latest products from Apple, Samsung, Google, and many others. The Verge Reviews is the place for all of that and more. Whether you're looking for buying advice, how to use products you already own, or the best deals on products we've tested and used ourselves and can recommend, you needn't look any further.

Whether you want to read in the bath or scribble notes in the margins, there's an e-reader for just about everyone.

Latest In Reviews

If you've ever wondered how important that lumen measurement is on projectors… Here's the monstrous Nebula X1 Pro next to the little Nebula P1. I've got both portable projectors with detachable speakers in for testing, but only one is viewable in ambient mid-day light.

You may not like it, but big phone and tiny keyboard is what peak performance looks like.

On a frigid February evening, I went on four dates with AI companions at a pop-up dating café.

3 Verge Score: Romo flies through chores, but a recent security vulnerability makes it difficult to recommend.

8 Verge Score: The new flagship earbuds are the best at ANC, if you can get a good seal in your ear.

7 Verge Score: This pricey laptop has got sleek no-frills looks and Strix Halo strengths on lock.

As someone who can never remember where I put my keys, the louder chime and extended range are a godsend.

The Switch peripheral is a well-built piece of nostalgia, but its games are just too stuck in the past.

It won't clean your whole house, but it's the perfect companion to your robot vacuum.

8 Verge Score: Better contrast, color, detail, and sharpness.

The DynaCap system lets you combine the best qualities of Topre and MX keyboards for a semi-reasonable price.

While robot vacuums are getting fancier and more sophisticated (there's now one that can climb stairs!), there are lots of great budget bots you can get for just a couple of hundred bucks that will keep your floors swept and mopped. Check out my top picks for the best budget robot vacuum and mop in my updated buying guide: the best budget robot vacuums.

8 Verge Score: Combining real analog circuits with digital synthesis and sampling makes the TR-1000 powerful but overwhelming.

8 Verge Score: Asus made all the right tweaks, and the new Panther Lake chip delivers. The first chip of Intel's 18A process is speedy, even on battery power. And it's a solid option for 1080p gaming.

7 Verge Score: The Sony LinkBuds Clip are comfortable with good sound, but are light on features for the price.

Belkin's $100 Charging Case Pro for the Switch 2 looks similar to the $70 version; it's a thick zip-up case with a 10,000mAh battery, plus pockets for cartridges and an AirTag. But the battery here, cleverly redesigned as a folding stand that magnetically snaps into the case, makes it feel worth the higher cost.

8 Verge Score: I should want this, but don't.

I worked exclusively on a pre-production Asus Zenbook A16 with a Snapdragon X2 processor throughout CES, and I came away impressed.

8 Verge Score: An all-in-one Google TV projector with big battery life for entertainment anywhere you go.

9 Verge Score: It's not the brightest OLED, and it isn't perfect, but there's no TV I'd rather watch.
========================================
[SOURCE: https://en.wikipedia.org/wiki/Internet#cite_ref-55] | [TOKENS: 9291]
Internet

The Internet (or internet)[a] is the global system of interconnected computer networks that uses the Internet protocol suite (TCP/IP)[b] to communicate between networks and devices. It is a network of networks that comprises private, public, academic, business, and government networks of local to global scope, linked by electronic, wireless, and optical networking technologies. The Internet carries a vast range of information services and resources, such as the interlinked hypertext documents and applications of the World Wide Web (WWW), electronic mail, discussion groups, internet telephony, streaming media and file sharing. Most traditional communication media, including telephone, radio, television, paper mail, newspapers, and print publishing, have been transformed by the Internet, giving rise to new media such as email, online music, digital newspapers, news aggregators, and audio and video streaming websites. The Internet has enabled and accelerated new forms of personal interaction through instant messaging, Internet forums, and social networking services. Online shopping has also grown to occupy a significant market across industries, enabling firms to extend brick and mortar presences to serve larger markets. Business-to-business and financial services on the Internet affect supply chains across entire industries. The origins of the Internet date back to research that enabled the time-sharing of computer resources, the development of packet switching, and the design of computer networks for data communication. The set of communication protocols to enable internetworking on the Internet arose from research and development commissioned in the 1970s by the Defense Advanced Research Projects Agency (DARPA) of the United States Department of Defense in collaboration with universities and researchers across the United States and in the United Kingdom and France. The Internet has no single centralized governance in either technological implementation or policies for access and usage. Each constituent network sets its own policies. The overarching definitions of the two principal name spaces on the Internet, the Internet Protocol address (IP address) space and the Domain Name System (DNS), are directed by a maintainer organization, the Internet Corporation for Assigned Names and Numbers (ICANN). The technical underpinning and standardization of the core protocols is an activity of the non-profit Internet Engineering Task Force (IETF).

Terminology

The word internetted was used as early as 1849, meaning interconnected or interwoven. The word Internet was used in 1945 by the United States War Department in a radio operator's manual, and in 1974 as the shorthand form of Internetwork. Today, the term Internet most commonly refers to the global system of interconnected computer networks, though it may also refer to any group of smaller networks. The word Internet may be capitalized as a proper noun, although this is becoming less common. This reflects the tendency in English to capitalize new terms and move them to lowercase as they become familiar. The word is sometimes still capitalized to distinguish the global internet from smaller networks, though many publications, including the AP Stylebook since 2016, recommend the lowercase form in every case. In 2016, the Oxford English Dictionary found that, based on a study of around 2.5 billion printed and online sources, "Internet" was capitalized in 54% of cases.
The terms Internet and World Wide Web are often used interchangeably; it is common to speak of "going on the Internet" when using a web browser to view web pages. However, the World Wide Web, or the Web, is only one of a large number of Internet services. It is the global collection of web pages, documents and other web resources linked by hyperlinks and URLs.

History

In the 1960s, computer scientists began developing systems for time-sharing of computer resources. J. C. R. Licklider proposed the idea of a universal network while working at Bolt Beranek & Newman and, later, leading the Information Processing Techniques Office at the Advanced Research Projects Agency (ARPA) of the United States Department of Defense. Research into packet switching,[c] one of the fundamental Internet technologies, started in the work of Paul Baran at RAND in the early 1960s and, independently, Donald Davies at the United Kingdom's National Physical Laboratory in 1965. After the Symposium on Operating Systems Principles in 1967, packet switching from the proposed NPL network was incorporated into the design of the ARPANET, an experimental resource sharing network proposed by ARPA. ARPANET development began with two network nodes which were interconnected between the University of California, Los Angeles and the Stanford Research Institute on 29 October 1969. The third site was at the University of California, Santa Barbara, followed by the University of Utah. By the end of 1971, 15 sites were connected to the young ARPANET. Thereafter, the ARPANET gradually developed into a decentralized communications network, connecting remote centers and military bases in the United States. Other user networks and research networks, such as the Merit Network and CYCLADES, were developed in the late 1960s and early 1970s. Early international collaborations for the ARPANET were rare. Connections were made in 1973 to Norway (NORSAR and, later, NDRE) and to Peter Kirstein's research group at University College London, which provided a gateway to British academic networks, the first internetwork for resource sharing. ARPA projects, the International Network Working Group and commercial initiatives led to the development of various protocols and standards by which multiple separate networks could become a single network, or a network of networks. In 1974, Vint Cerf at Stanford University and Bob Kahn at DARPA published a proposal for "A Protocol for Packet Network Intercommunication". Cerf and his graduate students used the term internet as a shorthand for internetwork in RFC 675. The Internet Experiment Notes and later RFCs repeated this use. The work of Louis Pouzin and Robert Metcalfe had important influences on the resulting TCP/IP design. National PTTs and commercial providers developed the X.25 standard and deployed it on public data networks. The ARPANET initially served as a backbone for the interconnection of regional academic and military networks in the United States to enable resource sharing. Access to the ARPANET was expanded in 1981 when the National Science Foundation (NSF) funded the Computer Science Network (CSNET). In 1982, the Internet Protocol Suite (TCP/IP) was standardized, which facilitated worldwide proliferation of interconnected networks. TCP/IP network access expanded again in 1986 when the National Science Foundation Network (NSFNet) provided access to supercomputer sites in the United States for researchers, first at speeds of 56 kbit/s and later at 1.5 Mbit/s and 45 Mbit/s.
The NSFNet expanded into academic and research organizations in Europe, Australia, New Zealand and Japan in 1988–89. Although other network protocols such as UUCP and PTT public data networks had global reach well before this time, this marked the beginning of the Internet as an intercontinental network. Commercial Internet service providers emerged in 1989 in the United States and Australia. The ARPANET was decommissioned in 1990. The linking of commercial networks and enterprises by the early 1990s, as well as the advent of the World Wide Web, marked the beginning of the transition to the modern Internet. Steady advances in semiconductor technology and optical networking created new economic opportunities for commercial involvement in the expansion of the network in its core and for delivering services to the public. In mid-1989, MCI Mail and CompuServe established connections to the Internet, delivering email and public access products to the half million users of the Internet. Just months later, on 1 January 1990, PSInet launched an alternate Internet backbone for commercial use; one of the networks that added to the core of the commercial Internet of later years. In March 1990, the first high-speed T1 (1.5 Mbit/s) link between the NSFNET and Europe was installed between Cornell University and CERN, allowing much more robust communications than were possible with satellites. Later in 1990, Tim Berners-Lee began writing WorldWideWeb, the first web browser, after two years of lobbying CERN management. By Christmas 1990, Berners-Lee had built all the tools necessary for a working Web: the HyperText Transfer Protocol (HTTP) 0.9, the HyperText Markup Language (HTML), the first Web browser (which was also an HTML editor and could access Usenet newsgroups and FTP files), the first HTTP server software (later known as CERN httpd), the first web server, and the first Web pages that described the project itself. In 1991 the Commercial Internet eXchange was founded, allowing PSInet to communicate with the other commercial networks CERFnet and Alternet. Stanford Federal Credit Union was the first financial institution to offer online Internet banking services to all of its members in October 1994. In 1996, OP Financial Group, also a cooperative bank, became the second online bank in the world and the first in Europe. By 1995, the Internet was fully commercialized in the U.S. when the NSFNet was decommissioned, removing the last restrictions on use of the Internet to carry commercial traffic. As technology advanced and commercial opportunities fueled reciprocal growth, the volume of Internet traffic started experiencing similar characteristics as that of the scaling of MOS transistors, exemplified by Moore's law, doubling every 18 months. This growth, formalized as Edholm's law, was catalyzed by advances in MOS technology, laser light wave systems, and noise performance. Since 1995, the Internet has tremendously impacted culture and commerce, including the rise of near-instant communication by email, instant messaging, telephony (Voice over Internet Protocol or VoIP), two-way interactive video calls, and the World Wide Web. Increasing amounts of data are transmitted at higher and higher speeds over fiber optic networks operating at 1 Gbit/s, 10 Gbit/s, or more. The Internet continues to grow, driven by ever-greater amounts of online information and knowledge, commerce, entertainment and social networking services.
During the late 1990s, it was estimated that traffic on the public Internet grew by 100 percent per year, while the mean annual growth in the number of Internet users was thought to be between 20% and 50%. This growth is often attributed to the lack of central administration, which allows organic growth of the network, as well as the non-proprietary nature of the Internet protocols, which encourages vendor interoperability and prevents any one company from exerting too much control over the network. In November 2006, the Internet was included on USA Today's list of the New Seven Wonders. As of 31 March 2011, the estimated total number of Internet users was 2.095 billion (30% of world population). It is estimated that in 1993 the Internet carried only 1% of the information flowing through two-way telecommunication. By 2000 this figure had grown to 51%, and by 2007 more than 97% of all telecommunicated information was carried over the Internet. Modern smartphones can access the Internet through cellular carrier networks, and internet usage by mobile and tablet devices exceeded desktop worldwide for the first time in October 2016. As of 2018, 80% of the world's population were covered by a 4G network. The International Telecommunication Union (ITU) estimated that, by the end of 2017, 48% of individual users regularly connect to the Internet, up from 34% in 2012. Mobile Internet connectivity has played an important role in expanding access in recent years, especially in Asia and the Pacific and in Africa. The number of unique mobile cellular subscriptions increased from 3.9 billion in 2012 to 4.8 billion in 2016, two-thirds of the world's population, with more than half of subscriptions located in Asia and the Pacific. The limits that users face on accessing information via mobile applications coincide with a broader process of fragmentation of the Internet. Fragmentation restricts access to media content and tends to affect the poorest users the most. One solution, zero-rating, is the practice of Internet service providers allowing users free connectivity to access specific content or applications without cost.

Social impact

The Internet has enabled new forms of social interaction, activities, and social associations, giving rise to the scholarly study of the sociology of the Internet. Between 2000 and 2009, the number of Internet users globally rose from 390 million to 1.9 billion. By 2010, 22% of the world's population had access to computers with 1 billion Google searches every day, 300 million Internet users reading blogs, and 2 billion videos viewed daily on YouTube. In 2014 the world's Internet users surpassed 3 billion or 44 percent of world population, but two-thirds came from the richest countries, with 78 percent of Europeans using the Internet, followed by 57 percent of the Americas. However, by 2018, Asia alone accounted for 51% of all Internet users, with 2.2 billion out of the 4.3 billion Internet users in the world. China's Internet users surpassed a major milestone in 2018, when the country's Internet regulatory authority, China Internet Network Information Centre, announced that China had 802 million users. China was followed by India, with some 700 million users, with the United States third with 275 million users. However, in terms of penetration, in 2022, China had a 70% penetration rate compared to India's 60% and the United States's 90%.
In 2022, 54% of the world's Internet users were based in Asia, 14% in Europe, 7% in North America, 10% in Latin America and the Caribbean, 11% in Africa, 4% in the Middle East and 1% in Oceania. In 2019, Kuwait, Qatar, the Falkland Islands, Bermuda and Iceland had the highest Internet penetration by the number of users, with 93% or more of the population with access. As of 2022, it was estimated that 5.4 billion people use the Internet, more than two-thirds of the world's population. Early computer systems were limited to the characters in the American Standard Code for Information Interchange (ASCII), a subset of the Latin alphabet. After English (27%), the most requested languages on the World Wide Web are Chinese (25%), Spanish (8%), Japanese (5%), Portuguese and German (4% each), Arabic, French and Russian (3% each), and Korean (2%). Modern character encoding standards, such as Unicode, allow for development and communication in the world's widely used languages. However, some glitches such as mojibake (incorrect display of some languages' characters) still remain. Several neologisms exist that refer to Internet users: Netizen (as in "citizen of the net") refers to those actively involved in improving online communities, the Internet in general or surrounding political affairs and rights such as free speech; Internaut refers to operators or technically highly capable users of the Internet; digital citizen refers to a person using the Internet in order to engage in society, politics, and government participation. The Internet allows greater flexibility in working hours and location, especially with the spread of unmetered high-speed connections. The Internet can be accessed almost anywhere by numerous means, including through mobile Internet devices. Mobile phones, datacards, handheld game consoles and cellular routers allow users to connect to the Internet wirelessly.[citation needed] Educational material at all levels from pre-school (e.g. CBeebies) to post-doctoral (e.g. scholarly literature through Google Scholar) is available on websites. The internet has facilitated the development of virtual universities and distance education, enabling both formal and informal education. The Internet allows researchers to conduct research remotely via virtual laboratories, with profound changes in reach and generalizability of findings as well as in communication between scientists and in the publication of results. By the late 2010s the Internet had been described as "the main source of scientific information" for the majority of the global North population. Wikis have also been used in the academic community for sharing and dissemination of information across institutional and international boundaries. In those settings, they have been found useful for collaboration on grant writing, strategic planning, departmental documentation, and committee work. The United States Patent and Trademark Office uses a wiki to allow the public to collaborate on finding prior art relevant to examination of pending patent applications. Queens, New York has used a wiki to allow citizens to collaborate on the design and planning of a local park. The English Wikipedia has the largest user base among wikis on the World Wide Web and ranks in the top 10 among all sites in terms of traffic.
The Internet has been a major outlet for leisure activity since its inception, with entertaining social experiments such as MUDs and MOOs being conducted on university servers, and humor-related Usenet groups receiving much traffic. Many Internet forums have sections devoted to games and funny videos. Another area of leisure activity on the Internet is multiplayer gaming. This form of recreation creates communities, where people of all ages and origins enjoy the fast-paced world of multiplayer games. These range from MMORPG to first-person shooters, from role-playing video games to online gambling. While online gaming has been around since the 1970s, modern modes of online gaming began with subscription services such as GameSpy and MPlayer. Streaming media is the real-time delivery of digital media for immediate consumption or enjoyment by end users. Streaming companies (such as Netflix, Disney+, Amazon's Prime Video, Mubi, Hulu, and Apple TV+) now dominate the entertainment industry, eclipsing traditional broadcasters. Audio streamers such as Spotify and Apple Music also have significant market share in the audio entertainment market. Video sharing websites are also a major factor in the entertainment ecosystem. YouTube was founded on 15 February 2005 and is now the leading website for free streaming video with more than two billion users. It uses a web player to stream and show video files. YouTube users watch hundreds of millions, and upload hundreds of thousands, of videos daily. Other video sharing websites include Vimeo, Instagram and TikTok.[citation needed] Although many governments have attempted to restrict both Internet pornography and online gambling, this has generally failed to stop their widespread popularity. A number of advertising-funded ostensible video sharing websites known as "tube sites" have been created to host shared pornographic video content. Due to laws requiring the documentation of the origin of pornography, these websites now largely operate in conjunction with pornographic movie studios and their own independent creator networks, acting as de facto video streaming services. Major players in this field include the market leader Aylo, the operator of PornHub and numerous other branded sites, as well as other independent operators such as xHamster and Xvideos. As of 2023, Internet traffic to pornographic video sites rivalled that of mainstream video streaming and sharing services. Remote work is facilitated by tools such as groupware, virtual private networks, conference calling, videotelephony, and VoIP so that work may be performed from any location, such as the worker's home.[citation needed] The spread of low-cost Internet access in developing countries has opened up new possibilities for peer-to-peer charities, which allow individuals to contribute small amounts to charitable projects for other individuals. Websites, such as DonorsChoose and GlobalGiving, allow small-scale donors to direct funds to individual projects of their choice. A popular twist on Internet-based philanthropy is the use of peer-to-peer lending for charitable purposes. Kiva pioneered this concept in 2005, offering the first web-based service to publish individual loan profiles for funding. The low cost and nearly instantaneous sharing of ideas, knowledge, and skills have made collaborative work dramatically easier, with the help of collaborative software, which allows groups to easily form, cheaply communicate, and share ideas.
An example of such collaboration is the free software movement, which has produced, among other things, Linux, Mozilla Firefox, and OpenOffice.org (later forked into LibreOffice).[citation needed] Content management systems allow collaborating teams to work on shared sets of documents simultaneously without accidentally destroying each other's work.[citation needed] The internet also allows for cloud computing, virtual private networks, remote desktops, and remote work.[citation needed] The online disinhibition effect describes the tendency of many individuals to behave more stridently or offensively online than they would in person. A significant number of feminist women have been the target of various forms of harassment, ranging from insults and hate speech to, in extreme cases, rape and death threats, in response to posts they have made on social media. Social media companies have been criticized in the past for not doing enough to aid victims of online abuse. Children also face dangers online such as cyberbullying and approaches by sexual predators, who sometimes pose as children themselves. Due to naivety, they may also post personal information about themselves online, which could put them or their families at risk unless warned not to do so. Many parents choose to enable Internet filtering or supervise their children's online activities in an attempt to protect their children from pornography or violent content on the Internet. The most popular social networking services commonly forbid users under the age of 13. However, these policies can be circumvented by registering an account with a false birth date, and a significant number of children aged under 13 join such sites.[citation needed] Social networking services for younger children, which claim to provide better levels of protection for children, also exist. Internet usage has been correlated to users' loneliness. Lonely people tend to use the Internet as an outlet for their feelings and to share their stories with others, such as in the "I am lonely will anyone speak to me" thread.[citation needed] Cyberslacking can become a drain on corporate resources; employees spend a significant amount of time surfing the Web while at work. Internet addiction disorder is excessive computer use that interferes with daily life. Nicholas G. Carr believes that Internet use has other effects on individuals, for instance improving skills of scan-reading and interfering with the deep thinking that leads to true creativity. Electronic business encompasses business processes spanning the entire value chain: purchasing, supply chain management, marketing, sales, customer service, and business relationships. E-commerce seeks to add revenue streams using the Internet to build and enhance relationships with clients and partners. According to International Data Corporation, the size of worldwide e-commerce, when global business-to-business and -consumer transactions are combined, equated to $16 trillion in 2013. A report by Oxford Economics added those two together to estimate the total size of the digital economy at $20.4 trillion, equivalent to roughly 13.8% of global sales. While much has been written of the economic advantages of Internet-enabled commerce, there is also evidence that some aspects of the Internet such as maps and location-aware services may serve to reinforce economic inequality and the digital divide.
Electronic commerce may be responsible for consolidation and the decline of mom-and-pop, brick and mortar businesses resulting in increases in income inequality. A 2013 Institute for Local Self-Reliance report states that brick-and-mortar retailers employ 47 people for every $10 million in sales, while Amazon employs only 14. Similarly, the 700-employee room rental start-up Airbnb was valued at $10 billion in 2014, about half as much as Hilton Worldwide, which employs 152,000 people. At that time, Uber employed 1,000 full-time employees and was valued at $18.2 billion, about the same valuation as Avis Rent a Car and The Hertz Corporation combined, which together employed almost 60,000 people. Advertising on popular web pages can be lucrative, and e-commerce, the sale of products and services directly via the Web, continues to grow. Online advertising is a form of marketing and advertising which uses the Internet to deliver promotional marketing messages to consumers. It includes email marketing, search engine marketing (SEM), social media marketing, many types of display advertising (including web banner advertising), and mobile advertising. In 2011, Internet advertising revenues in the United States surpassed those of cable television and nearly exceeded those of broadcast television. Many common online advertising practices are controversial and increasingly subject to regulation. The Internet has achieved new relevance as a political tool. The presidential campaign of Howard Dean in 2004 in the United States was notable for its success in soliciting donations via the Internet. Many political groups use the Internet to achieve a new method of organizing for carrying out their mission, giving rise to Internet activism. Social media websites, such as Facebook and Twitter, helped people organize the Arab Spring, by helping activists organize protests, communicate grievances, and disseminate information. Many have understood the Internet as an extension of the Habermasian notion of the public sphere, observing how network communication technologies provide something like a global civic forum. However, incidents of politically motivated Internet censorship have now been recorded in many countries, including western democracies. E-government is the use of technological communications devices, such as the Internet, to provide public services to citizens and other persons in a country or region. E-government offers opportunities for more direct and convenient citizen access to government and for government provision of services directly to citizens. Cybersectarianism is a new organizational form that involves: highly dispersed small groups of practitioners that may remain largely anonymous within the larger social context and operate in relative secrecy, while still linked remotely to a larger network of believers who share a set of practices and texts, and often a common devotion to a particular leader. Overseas supporters provide funding and support; domestic practitioners distribute tracts, participate in acts of resistance, and share information on the internal situation with outsiders. Collectively, members and practitioners of such sects construct viable virtual communities of faith, exchanging personal testimonies and engaging in the collective study via email, online chat rooms, and web-based message boards.
In particular, the British government has raised concerns about the prospect of young British Muslims being indoctrinated into Islamic extremism by material on the Internet, being persuaded to join terrorist groups such as the so-called "Islamic State", and then potentially committing acts of terrorism on returning to Britain after fighting in Syria or Iraq.[citation needed]

Applications and services

The Internet carries many applications and services, most prominently the World Wide Web, including social media, electronic mail, mobile applications, multiplayer online games, Internet telephony, file sharing, and streaming media services. The World Wide Web is a global collection of documents, images, multimedia, applications, and other resources, logically interrelated by hyperlinks and referenced with Uniform Resource Identifiers (URIs), which provide a global system of named references. URIs symbolically identify services, web servers, databases, and the documents and resources that they can provide. HyperText Transfer Protocol (HTTP) is the main access protocol of the World Wide Web; a minimal request is sketched at the end of this section. Web services also use HTTP for communication between software systems, for information transfer, and for sharing and exchanging business data and logistics; it is one of many protocols that can be used for communication on the Internet. World Wide Web browser software, such as Microsoft Edge, Mozilla Firefox, Opera, Apple's Safari, and Google Chrome, enables users to navigate from one web page to another via the hyperlinks embedded in the documents. These documents may also contain computer data, including graphics, sounds, text, video, multimedia and interactive content. Client-side scripts can include animations, games, office applications and scientific demonstrations. Email is an important communications service available via the Internet. The concept of sending electronic text messages between parties, analogous to mailing letters or memos, predates the creation of the Internet. Internet telephony is a common communications service realized with the Internet. The name of the principal internetworking protocol, the Internet Protocol, lends its name to voice over Internet Protocol (VoIP).[citation needed] VoIP systems now dominate many markets, being as easy and convenient as a traditional telephone, while having substantial cost savings, especially over long distances. File sharing is the practice of transferring large amounts of data in the form of computer files across the Internet, for example via file servers. The load of bulk downloads to many users can be eased by the use of "mirror" servers or peer-to-peer networks. Access to the file may be controlled by user authentication, the transit of the file over the Internet may be obscured by encryption, and money may change hands for access to the file. The price can be paid by the remote charging of funds from, for example, a credit card whose details are also passed—usually fully encrypted—across the Internet. The origin and authenticity of the file received may be checked by a digital signature.
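As referenced above, here is a minimal sketch of a single HTTP GET request using Python's standard library; the URL is a generic placeholder, and real clients add redirect handling, caching, and error handling:

```python
# Minimal sketch: one HTTP GET request using only the standard library.
from urllib.request import urlopen

with urlopen("https://example.com/") as response:  # opens a connection, sends GET
    print(response.status)                          # e.g. 200 (OK)
    print(response.headers["Content-Type"])         # e.g. text/html; charset=UTF-8
    body = response.read().decode("utf-8")          # the resource itself (HTML here)

print(body[:80])  # first characters of the returned document
```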
Governance

The Internet is a global network that comprises many voluntarily interconnected autonomous networks. It operates without a central governing body. The technical underpinning and standardization of the core protocols (IPv4 and IPv6) is an activity of the Internet Engineering Task Force (IETF), a non-profit organization of loosely affiliated international participants that anyone may associate with by contributing technical expertise. While the hardware components in the Internet infrastructure can often be used to support other software systems, it is the design and the standardization process of the software that characterizes the Internet and provides the foundation for its scalability and success. The responsibility for the architectural design of the Internet software systems has been assumed by the IETF. The IETF conducts standard-setting work groups, open to any individual, about the various aspects of Internet architecture. The resulting contributions and standards are published as Request for Comments (RFC) documents on the IETF web site. The principal methods of networking that enable the Internet are contained in specially designated RFCs that constitute the Internet Standards. Other less rigorous documents are simply informative, experimental, or historical, or document the best current practices when implementing Internet technologies. To maintain interoperability, the principal name spaces of the Internet are administered by the Internet Corporation for Assigned Names and Numbers (ICANN). ICANN is governed by an international board of directors drawn from across the Internet technical, business, academic, and other non-commercial communities. The organization coordinates the assignment of unique identifiers for use on the Internet, including domain names, IP addresses, application port numbers in the transport protocols, and many other parameters. Globally unified name spaces are essential for maintaining the global reach of the Internet. This role of ICANN distinguishes it as perhaps the only central coordinating body for the global Internet. The National Telecommunications and Information Administration, an agency of the United States Department of Commerce, had final approval over changes to the DNS root zone until the IANA stewardship transition on 1 October 2016. Regional Internet registries (RIRs) were established for five regions of the world to assign IP address blocks and other Internet parameters to local registries, such as Internet service providers, from a designated pool of addresses set aside for each region: AFRINIC (Africa), APNIC (Asia-Pacific), ARIN (North America), LACNIC (Latin America and the Caribbean), and RIPE NCC (Europe, the Middle East, and Central Asia).[citation needed] The Internet Society (ISOC) was founded in 1992 with a mission to "assure the open development, evolution and use of the Internet for the benefit of all people throughout the world". Its members include individuals as well as corporations, organizations, governments, and universities. Among other activities ISOC provides an administrative home for a number of less formally organized groups that are involved in developing and managing the Internet, including: the Internet Engineering Task Force (IETF), Internet Architecture Board (IAB), Internet Engineering Steering Group (IESG), Internet Research Task Force (IRTF), and Internet Research Steering Group (IRSG). On 16 November 2005, the United Nations-sponsored World Summit on the Information Society in Tunis established the Internet Governance Forum (IGF) to discuss Internet-related issues.[citation needed]

Infrastructure

The communications infrastructure of the Internet consists of its hardware components and a system of software layers that control various aspects of the architecture. As with any computer network, the Internet physically consists of routers, media (such as cabling and radio links), repeaters, and modems. However, as an example of internetworking, many of the network nodes are not necessarily Internet equipment per se.
Internet packets are carried by other full-fledged networking protocols, with the Internet acting as a homogeneous networking standard, running across heterogeneous hardware, with the packets guided to their destinations by IP routers.[citation needed] Internet service providers (ISPs) establish worldwide connectivity between individual networks at various levels of scope. At the top of the routing hierarchy are the tier 1 networks, large telecommunication companies that exchange traffic directly with each other via very high speed fiber-optic cables and governed by peering agreements. Tier 2 and lower-level networks buy Internet transit from other providers to reach at least some parties on the global Internet, though they may also engage in peering. End-users who only access the Internet when needed to perform a function or obtain information, represent the bottom of the routing hierarchy.[citation needed] An ISP may use a single upstream provider for connectivity, or implement multihoming to achieve redundancy and load balancing. Internet exchange points are major traffic exchanges with physical connections to multiple ISPs. Large organizations, such as academic institutions, large enterprises, and governments, may perform the same function as ISPs, engaging in peering and purchasing transit on behalf of their internal networks. Research networks tend to interconnect with large subnetworks such as GEANT, GLORIAD, Internet2, and the UK's national research and education network, JANET.[citation needed] Common methods of Internet access by users include broadband over coaxial cable, fiber optics or copper wires, Wi-Fi, satellite, and cellular telephone technology.[citation needed] Grassroots efforts have led to wireless community networks. Commercial Wi-Fi services that cover large areas are available in many cities, such as New York, London, Vienna, Toronto, San Francisco, Philadelphia, Chicago and Pittsburgh. Most servers that provide internet services are today hosted in data centers, and content is often accessed through high-performance content delivery networks. Colocation centers often host private peering connections between their customers, internet transit providers, cloud providers, meet-me rooms for connecting customers together, Internet exchange points, and landing points and terminal equipment for fiber optic submarine communication cables, connecting the internet.

Internet Protocol Suite

The Internet standards describe a framework known as the Internet protocol suite (also called TCP/IP, based on its first two components). This is a suite of protocols that are ordered into a set of four conceptual layers by the scope of their operation, originally documented in RFC 1122 and RFC 1123: the link layer, the internet layer, the transport layer, and the application layer.[citation needed] The most prominent component of the Internet model is the Internet Protocol. IP enables internetworking, essentially establishing the Internet itself. Two versions of the Internet Protocol exist, IPv4 and IPv6.[citation needed] Aside from the complex array of physical connections that make up its infrastructure, the Internet is facilitated by bi- or multi-lateral commercial contracts (e.g., peering agreements), and by technical specifications or protocols that describe the exchange of data over the network.[citation needed] For locating individual computers on the network, the Internet provides IP addresses. IP addresses are used by the Internet infrastructure to direct internet packets to their destinations.
They consist of fixed-length numbers, which are found within the packet. IP addresses are generally assigned to equipment either automatically via Dynamic Host Configuration Protocol, or are configured manually.[citation needed] The Domain Name System converts user-entered domain names (e.g. "en.wikipedia.org") into IP addresses.[citation needed] Internet Protocol version 4 (IPv4) defines an IP address as a 32-bit number. IPv4 is the initial version used on the first generation of the Internet and is still in dominant use. It was designed in 1981 to address up to ≈4.3 billion (10⁹) hosts. However, the explosive growth of the Internet has led to IPv4 address exhaustion, which entered its final stage in 2011, when the global IPv4 address allocation pool was exhausted. Because of the growth of the Internet and the depletion of available IPv4 addresses, a new version of IP, IPv6, was developed in the mid-1990s, which provides vastly larger addressing capabilities and more efficient routing of Internet traffic. IPv6 uses 128 bits for the IP address and was standardized in 1998. IPv6 deployment has been ongoing since the mid-2000s and is currently in growing deployment around the world, since Internet address registries began to urge all resource managers to plan rapid adoption and conversion. By design, IPv6 is not directly interoperable with IPv4. Instead, it establishes a parallel version of the Internet not directly accessible with IPv4 software. Thus, translation facilities exist for internetworking, and some nodes have duplicate networking software for both networks. Essentially all modern computer operating systems support both versions of the Internet Protocol.[citation needed] Network infrastructure, however, has been lagging in this development.[citation needed] A subnet or subnetwork is a logical subdivision of an IP network. Computers that belong to a subnet are addressed with an identical most-significant bit-group in their IP addresses. This results in the logical division of an IP address into two fields, the network number or routing prefix and the rest field or host identifier. The rest field is an identifier for a specific host or network interface.[citation needed] The routing prefix may be expressed in Classless Inter-Domain Routing (CIDR) notation written as the first address of a network, followed by a slash character (/), and ending with the bit-length of the prefix. For example, 198.51.100.0/24 is the prefix of the Internet Protocol version 4 network starting at the given address, having 24 bits allocated for the network prefix, and the remaining 8 bits reserved for host addressing. Addresses in the range 198.51.100.0 to 198.51.100.255 belong to this network. The IPv6 address specification 2001:db8::/32 is a large address block with 2⁹⁶ addresses, having a 32-bit routing prefix.[citation needed] For IPv4, a network may also be characterized by its subnet mask or netmask, which is the bitmask that when applied by a bitwise AND operation to any IP address in the network, yields the routing prefix. Subnet masks are also expressed in dot-decimal notation like an address. For example, 255.255.255.0 is the subnet mask for the prefix 198.51.100.0/24.[citation needed]
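The prefix and netmask arithmetic in the examples above can be checked with Python's standard ipaddress module:

```python
import ipaddress

# The IPv4 example network from the text.
net = ipaddress.ip_network("198.51.100.0/24")
print(net.netmask)        # 255.255.255.0
print(net.num_addresses)  # 256: 198.51.100.0 through 198.51.100.255
print(ipaddress.ip_address("198.51.100.42") in net)  # True

# The netmask, ANDed bitwise with any address in the network, yields the prefix.
addr = int(ipaddress.ip_address("198.51.100.42"))
mask = int(net.netmask)
print(ipaddress.ip_address(addr & mask))  # 198.51.100.0

# The IPv6 example block: a /32 leaves 128 - 32 = 96 host bits.
print(ipaddress.ip_network("2001:db8::/32").num_addresses == 2**96)  # True
```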
End-nodes typically use a default route that points toward an ISP providing transit, while ISP routers use the Border Gateway Protocol to establish the most efficient routing across the complex connections of the global Internet. The default gateway is the node that serves as the forwarding host (router) to other networks when no other route specification matches the destination IP address of a packet. Security Internet resources, hardware, and software components are the target of criminal or malicious attempts to gain unauthorized control to cause interruptions, commit fraud, engage in blackmail or access private information. Malware is malicious software used and distributed via the Internet. It includes computer viruses, which are copied with the help of humans; computer worms, which copy themselves automatically; software for denial-of-service attacks; ransomware; botnets; and spyware that reports on the activity and typing of users. Usually, these activities constitute cybercrime. Defense theorists have also speculated about the possibility of hackers waging cyber warfare with similar methods on a large scale. Malware poses serious problems to individuals and businesses on the Internet. According to Symantec's 2018 Internet Security Threat Report (ISTR), the number of malware variants rose to 669,947,865 in 2017, twice as many as in 2016. Cybercrime, which includes malware attacks as well as other crimes committed by computer, was predicted to cost the world economy US$6 trillion in 2021, and is increasing at a rate of 15% per year. Since 2021, malware has been designed to target computer systems that run critical infrastructure such as the electricity distribution network. Malware can be designed to evade antivirus software detection algorithms. The vast majority of computer surveillance involves the monitoring of data and traffic on the Internet. In the United States, for example, under the Communications Assistance For Law Enforcement Act, all phone calls and broadband Internet traffic (emails, web traffic, instant messaging, etc.) are required to be available for unimpeded real-time monitoring by federal law enforcement agencies. Under the Act, all U.S. telecommunications providers are required to install packet-sniffing technology to allow federal law enforcement and intelligence agencies to intercept all of their customers' broadband Internet and VoIP traffic. The large amount of data gathered from packet capture requires surveillance software that filters and reports relevant information, such as the use of certain words or phrases, the access to certain types of web sites, or communicating via email or chat with certain parties. Agencies such as the Information Awareness Office, NSA, GCHQ and the FBI spend billions of dollars per year to develop, purchase, implement, and operate systems for interception and analysis of data. Similar systems are operated by the Iranian secret police to identify and suppress dissidents. The required hardware and software were allegedly installed by Germany's Siemens AG and Finland's Nokia. Some governments, such as those of Myanmar, Iran, North Korea, Mainland China, Saudi Arabia and the United Arab Emirates, restrict access to content on the Internet within their territories, especially to political and religious content, with domain name and keyword filters.
In Norway, Denmark, Finland, and Sweden, major Internet service providers have voluntarily agreed to restrict access to sites listed by authorities. While this list of forbidden resources is supposed to contain only known child pornography sites, the content of the list is secret. Many countries, including the United States, have enacted laws against the possession or distribution of certain material, such as child pornography, via the Internet but do not mandate filter software. Many free or commercially available software programs, called content-control software, are available to users to block offensive content on individual computers or networks, in order to limit access by children to pornographic material or depictions of violence. Performance As the Internet is a heterogeneous network, its physical characteristics, including, for example, the data transfer rates of connections, vary widely. It exhibits emergent phenomena that depend on its large-scale organization. [Figure: global Internet traffic volume in petabytes per month, 1990–2015.] The volume of Internet traffic is difficult to measure because no single point of measurement exists in the multi-tiered, non-hierarchical topology. Traffic data may be estimated from the aggregate volume through the peering points of the Tier 1 network providers, but traffic that stays local in large provider networks may not be accounted for. An Internet blackout or outage can be caused by local signaling interruptions. Disruptions of submarine communications cables may cause blackouts or slowdowns to large areas, such as in the 2008 submarine cable disruption. Less-developed countries are more vulnerable due to the small number of high-capacity links. Land cables are also vulnerable, as in 2011 when a woman digging for scrap metal severed most connectivity for the nation of Armenia. Internet blackouts affecting almost entire countries can be achieved by governments as a form of Internet censorship, as in the blockage of the Internet in Egypt, whereby approximately 93% of networks were without access in 2011 in an attempt to stop mobilization for anti-government protests. Estimates of the Internet's electricity usage have been the subject of controversy: a 2014 peer-reviewed research paper found claims differing by a factor of 20,000 published in the literature during the preceding decade, ranging from 0.0064 kilowatt-hours per gigabyte transferred (kWh/GB) to 136 kWh/GB. The researchers attributed these discrepancies mainly to the year of reference (i.e. whether efficiency gains over time had been taken into account) and to whether "end devices such as personal computers and servers are included" in the analysis. In 2011, academic researchers estimated the overall energy used by the Internet to be between 170 and 307 GW, less than two percent of the energy used by humanity. This estimate included the energy needed to build, operate, and periodically replace the estimated 750 million laptops, a billion smartphones and 100 million servers worldwide, as well as the energy that routers, cell towers, optical switches, Wi-Fi transmitters and cloud storage devices use when transmitting Internet traffic. According to a non-peer-reviewed study published in 2018 by The Shift Project (a French think tank funded by corporate sponsors), nearly 4% of global CO2 emissions could be attributed to global data transfer and the necessary infrastructure.
The study also said that online video streaming alone accounted for 60% of this data transfer and therefore contributed to over 300 million tons of CO2 emissions per year, and argued for new "digital sobriety" regulations restricting the use and size of video files.
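Those electricity estimates are easy to cross-check. In the short Python sketch below, the figure of roughly 17 TW for total human energy use is an outside reference value, not taken from the article:

low, high = 0.0064, 136.0      # kWh/GB: the extreme estimates in the literature
print(round(high / low))       # 21250, i.e. the "factor of 20,000" spread

# The 2011 estimate of 170-307 GW against roughly 17,000 GW of total human
# energy use supports the "less than two percent" statement.
print(round(307 / 17000 * 100, 1))  # 1.8 (percent)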
========================================
[SOURCE: https://en.wikipedia.org/wiki/Meta_Platforms#cite_ref-72] | [TOKENS: 8626]
Meta Platforms Meta Platforms, Inc. (doing business as Meta) is an American multinational technology company headquartered in Menlo Park, California. Meta owns and operates several prominent social media platforms and communication services, including Facebook, Instagram, WhatsApp, Messenger, Threads and Manus. The company also operates an advertising network for its own sites and third parties; as of 2023, advertising accounted for 97.8 percent of its total revenue. Meta has been described as part of Big Tech, which refers to the six largest tech companies in the United States, Alphabet (Google), Amazon, Apple, Meta (Facebook), Microsoft, and Nvidia, which are also among the largest companies in the world by market capitalization. The company was originally established in 2004 as TheFacebook, Inc., and was renamed Facebook, Inc. in 2005. In 2021, it rebranded as Meta Platforms, Inc. to reflect a strategic shift toward developing the metaverse, an interconnected digital ecosystem spanning virtual and augmented reality technologies. In 2023, Meta was ranked 31st on the Forbes Global 2000 list of the world's largest public companies. As of 2022, it was the world's third-largest spender on research and development, with R&D expenses totaling US$35.3 billion. History Facebook filed for an initial public offering (IPO) on February 1, 2012. The preliminary prospectus stated that the company sought to raise $5 billion, had 845 million monthly active users, and a website accruing 2.7 billion likes and comments daily. After the IPO, Zuckerberg would retain 22% of the total shares and 57% of the total voting power in Facebook. Underwriters valued the shares at $38 each, valuing the company at $104 billion, the largest valuation to date for a newly public company. On May 16, one day before the IPO, Facebook announced it would sell 25% more shares than originally planned due to high demand. The IPO raised $16 billion, making it the third-largest in U.S. history (slightly ahead of AT&T Wireless and behind only General Motors and Visa). The stock price left the company with a higher market capitalization than all but a few U.S. corporations, surpassing heavyweights such as Amazon, McDonald's, Disney, and Kraft Foods, and made Zuckerberg's stock worth $19 billion. The New York Times stated that the offering overcame questions about Facebook's difficulties in attracting advertisers to transform the company into a "must-own stock". Jimmy Lee of JPMorgan Chase described it as "the next great blue-chip". Writers at TechCrunch, on the other hand, expressed skepticism, stating, "That's a big multiple to live up to, and Facebook will likely need to add bold new revenue streams to justify the mammoth valuation." Trading in the stock, which began on May 18, was delayed that day due to technical problems with the Nasdaq exchange. The stock struggled to stay above the IPO price for most of the day, forcing underwriters to buy back shares to support the price. At the closing bell, shares were valued at $38.23, only $0.23 above the IPO price and down $3.82 from the opening bell value. The opening was widely described by the financial press as a disappointment. The stock set a new record for the trading volume of an IPO. On May 25, 2012, the stock ended its first full week of trading at $31.91, a 16.5% decline.
On May 22, 2012, regulators from Wall Street's Financial Industry Regulatory Authority announced that they had begun to investigate whether banks underwriting Facebook had improperly shared information only with select clients, rather than the general public. Massachusetts Secretary of State William F. Galvin subpoenaed Morgan Stanley over the same issue. The allegations sparked "fury" among some investors and led to the immediate filing of several lawsuits, one of them a class action claiming more than $2.5 billion in losses due to the IPO. Bloomberg estimated that retail investors may have lost approximately $630 million on Facebook stock since its debut. Standard & Poor's added Facebook to its S&P 500 index on December 21, 2013. On May 2, 2014, Zuckerberg announced that the company would be changing its internal motto from "Move fast and break things" to "Move fast with stable infrastructure". The earlier motto had been described as Zuckerberg's "prime directive to his developers and team" in a 2009 interview in Business Insider, in which he also said, "Unless you are breaking stuff, you are not moving fast enough." In November 2016, Facebook announced the Microsoft Windows client of its gaming service Facebook Gameroom, formerly Facebook Games Arcade, at the Unity Technologies developers conference. The client allows Facebook users to play "native" games in addition to its web games. The service was closed in June 2021. Lasso was a short-video sharing app from Facebook, similar to TikTok, that was launched on iOS and Android in 2018 and was aimed at teenagers. On July 2, 2020, Facebook announced that Lasso would be shutting down on July 10. In 2018, the Oculus lead Jason Rubin sent his 50-page vision document titled "The Metaverse" to Facebook's leadership. In the document, Rubin acknowledged that Facebook's virtual reality business had not caught on as expected, despite the hundreds of millions of dollars spent on content for early adopters. He also urged the company to execute fast and invest heavily in the vision, to shut out HTC, Apple, Google and other competitors in the VR space. Regarding other players' participation in the metaverse vision, he called for the company to build the "metaverse" to prevent their competitors from "being in the VR business in a meaningful way at all". In May 2019, Facebook founded Libra Networks, reportedly to develop its own stablecoin cryptocurrency. Later, it was reported that Libra was being supported by financial companies such as Visa, Mastercard, PayPal and Uber. The consortium of companies was expected to contribute $10 million each to fund the launch of the cryptocurrency coin named Libra. Depending on when it would receive approval from the Swiss Financial Market Supervisory Authority to operate as a payments service, the Libra Association had planned to launch a limited-format cryptocurrency in 2021. Libra was renamed Diem, before being shut down and sold in January 2022 after backlash from Swiss government regulators and the public. During the COVID-19 pandemic, the use of online services, including Facebook, grew globally. Zuckerberg predicted this would be a "permanent acceleration" that would continue after the pandemic. Facebook hired aggressively, growing from 48,268 employees in March 2020 to more than 87,000 by September 2022. Following a period of intense scrutiny and damaging whistleblower leaks, news started to emerge on October 21, 2021, about Facebook's plan to rebrand the company and change its name.
In the Q3 2021 earnings call on October 25, Mark Zuckerberg discussed the ongoing criticism of the company's social services and the way it operates, and pointed to its efforts to pivot toward building the metaverse, without mentioning the rebranding and the name change. The metaverse vision and the name change from Facebook, Inc. to Meta Platforms was introduced at Facebook Connect on October 28, 2021. According to Facebook's PR campaign, the name change reflected the company's shifting long-term focus on building the metaverse, a digital extension of the physical world by social media, virtual reality and augmented reality features. "Meta" had been registered as a trademark in the United States in 2018 (after an initial filing in 2015) for marketing, advertising, and computer services, by a Canadian company that provided big data analysis of scientific literature. This company was acquired in 2017 by the Chan Zuckerberg Initiative (CZI), a foundation established by Zuckerberg and his wife, Priscilla Chan, and became one of their projects. Following the rebranding announcement, CZI announced that it had already decided to deprioritize the earlier Meta project, and thus it would be transferring its rights to the name to Meta Platforms; the previous project would end in 2022. Soon after the rebranding, in early February 2022, Meta reported a greater-than-expected decline in profits in the fourth quarter of 2021. It reported no growth in monthly users and indicated that it expected revenue growth to stall. It also expected measures taken by Apple Inc. to protect user privacy to cost it some $10 billion in advertisement revenue, an amount equal to roughly 8% of its revenue for 2021. In a meeting with Meta staff the day after earnings were reported, Zuckerberg blamed competition for user attention, particularly from video-based apps such as TikTok. The 27% drop in the company's share price that occurred in reaction to the news eliminated some $230 billion of value from Meta's market capitalization. Bloomberg described the decline as "an epic rout that, in its sheer scale, is unlike anything Wall Street or Silicon Valley has ever seen". Zuckerberg's net worth fell by as much as $31 billion. Zuckerberg owns 13% of Meta, and the holding makes up the bulk of his wealth. According to reports published by Bloomberg on March 30, 2022, Meta turned over data such as phone numbers, physical addresses, and IP addresses to hackers posing as law enforcement officials using forged documents. The law enforcement requests sometimes included forged signatures of real or fictional officials. When asked about the allegations, a Meta representative said, "We review every data request for legal sufficiency and use advanced systems and processes to validate law enforcement requests and detect abuse." In June 2022, Sheryl Sandberg, the chief operating officer of 14 years, announced she would step down that year. Zuckerberg said that Javier Olivan would replace Sandberg, though in a "more traditional" role. In March 2022, Meta (except Meta-owned WhatsApp) and Instagram were banned in Russia and added to the Russian list of terrorist and extremist organizations for alleged Russophobia and hate speech (including calls for genocide) amid the ongoing Russian invasion of Ukraine. Meta appealed against the ban, but it was upheld by a Moscow court in June of the same year. Also in March 2022, Meta and Italian eyewear giant Luxottica released Ray-Ban Stories, a series of smartglasses that could play music and take pictures.
Meta and Luxottica parent company EssilorLuxottica declined to disclose sales of the product line as of September 2022, though Meta expressed satisfaction with its customer feedback. In July 2022, Meta saw its first year-on-year revenue decline, when its total revenue slipped by 1% to $28.8 billion. Analysts and journalists attributed the loss to its advertising business, which has been limited by Apple's App Tracking Transparency feature and the number of people who have opted not to be tracked by Meta apps. Zuckerberg also attributed the decline to increasing competition from TikTok. On October 27, 2022, Meta's market value dropped to $268 billion, a loss of around $700 billion compared to 2021, and its shares fell by 24%. It lost its spot among the top 20 US companies by market cap, despite reaching the top 5 in the previous year. In November 2022, Meta laid off 11,000 employees, 13% of its workforce. Zuckerberg said the decision to aggressively increase Meta's investments had been a mistake, as he had wrongly predicted that the surge in e-commerce would last beyond the COVID-19 pandemic. He also attributed the decline to increased competition, a global economic downturn and "ads signal loss". Plans to lay off a further 10,000 employees began in April 2023. The layoffs were part of a general downturn in the technology industry, alongside layoffs by companies including Google, Amazon, Tesla, Snap, Twitter and Lyft. Starting in 2022, Meta scrambled to catch up to other tech companies in adopting specialized artificial intelligence hardware and software. It had been using less expensive CPUs instead of GPUs for AI work, but that approach turned out to be less efficient. The company gifted the Inter-university Consortium for Political and Social Research $1.3 million to finance the Social Media Archive's aim to make its data available to social science research. In 2023, Ireland's Data Protection Commissioner imposed a record €1.2 billion fine on Meta for transferring data from Europe to the United States without adequate protections for EU citizens. In March 2023, Meta announced a new round of layoffs that would cut 10,000 employees and close 5,000 open positions to make the company more efficient. Meta's revenue surpassed analyst expectations for the first quarter of 2023 after it announced that it was increasing its focus on AI. On July 6, Meta launched a new app, Threads, a competitor to Twitter. Meta announced its artificial intelligence model Llama 2 in July 2023, available for commercial use via partnerships with major cloud providers like Microsoft. It was the first project to be unveiled by Meta's generative AI group after it was set up in February. Meta would not charge for access or usage, instead operating under an open-source model that would let it ascertain what improvements needed to be made. Prior to this announcement, Meta had said it had no plans to release Llama 2 for commercial use; an earlier version of Llama was released only to academics. In August 2023, Meta announced its permanent removal of news content from Facebook and Instagram in Canada due to the Online News Act, which requires Canadian news outlets to be compensated for content shared on its platform. The Online News Act was in effect by year-end, but Meta declined to participate in the regulatory process. In October 2023, Zuckerberg said that AI would be Meta's biggest investment area in 2024. Meta finished 2023 as one of the best-performing technology stocks of the year, with its share price up 150 percent.
Its stock reached an all-time high in January 2024, bringing Meta within 2% of a $1 trillion market capitalization. In November 2023, Meta Platforms launched an ad-free subscription service in Europe, allowing subscribers to opt out of personal data being collected for targeted advertising. A group of 28 European organizations, including Max Schrems' advocacy group NOYB, the Irish Council for Civil Liberties, Wikimedia Europe, and the Electronic Privacy Information Center, signed a 2024 letter to the European Data Protection Board (EDPB) expressing concern that this subscriber model would undermine privacy protections, specifically GDPR data protection standards. Meta removed the Facebook and Instagram accounts of Iran's Supreme Leader Ali Khamenei in February 2024, citing repeated violations of its Dangerous Organizations & Individuals policy. As of March 2024, Meta was under investigation by the FDA for the alleged use of its social media platforms to sell illegal drugs. On 16 May 2024, the European Commission began an investigation into Meta over concerns related to child safety. In May 2023, Iraqi social media influencer Esaa Ahmed-Adnan encountered a troubling issue when Instagram removed his posts, citing false copyright violations despite his content being original and free from copyrighted material. He discovered that extortionists were behind these takedowns, offering to restore his content for $3,000 or provide ongoing protection for $1,000 per month. This scam, exploiting Meta's rights management tools, became widespread in the Middle East, revealing a gap in Meta's enforcement in developing regions. Aws al-Saadi, founder of the Iraqi nonprofit Tech4Peace, helped Ahmed-Adnan and others, but the restoration process was slow, leading to significant financial losses for many victims, including prominent figures like Ammar al-Hakim. The situation highlighted Meta's challenges in balancing global growth with effective content moderation and protection. On 16 September 2024, Meta announced it had banned Russian state media outlets from its platforms worldwide due to concerns about "foreign interference activity". This decision followed allegations that RT and its employees funneled $10 million through shell companies to secretly fund influence campaigns on various social media channels. Meta's actions were part of a broader effort to counter Russian covert influence operations, which had intensified since Russia's invasion of Ukraine. At its 2024 Connect conference, Meta presented Orion, its first pair of augmented reality glasses. Though Orion was originally intended to be sold to consumers, the manufacturing process turned out to be too complex and expensive. Instead, the company pivoted to producing a small number of the glasses for internal use. On 4 October 2024, Meta announced its new AI model, Movie Gen, capable of generating realistic video and audio clips based on user prompts. Meta stated it would not release Movie Gen for open development, preferring to collaborate directly with content creators and integrate it into its products by the following year. The model was built using a combination of licensed and publicly available datasets. On October 31, 2024, ProPublica published an investigation into deceptive political advertisement scams that sometimes use hundreds of hijacked profiles and Facebook pages run by organized networks of scammers. The authors cited spotty enforcement by Meta as a major reason for the extent of the issue.
In November 2024, TechCrunch reported that Meta was considering building a $10 billion global underwater cable spanning 25,000 miles. In the same month, Meta closed down 2 million accounts on Facebook and Instagram that were linked to scam centers in Myanmar, Laos, Cambodia, the Philippines, and the United Arab Emirates running pig-butchering scams. In December 2024, Meta announced that, beginning February 2025, it would require advertisers running financial-services ads in Australia to verify information about the beneficiary and the payer, in a bid to curb scams. On December 4, 2024, Meta announced it would invest US$10 billion in its largest AI data center, in northeast Louisiana, powered by natural gas facilities. On the 11th of that month, Meta experienced a global outage, impacting accounts on all of its social media and messaging applications. Outage reports on DownDetector reached 70,000+ and 100,000+ within minutes for Instagram and Facebook, respectively. In January 2025, Meta announced plans to roll back its diversity, equity, and inclusion (DEI) initiatives, citing shifts in the "legal and policy landscape" in the United States following the 2024 presidential election. The decision followed reports that CEO Mark Zuckerberg sought to align the company more closely with the incoming Trump administration, including changes to content moderation policies and executive leadership. The new content moderation policies continued to bar insults about a person's intellect or mental illness, but made an exception to allow calling LGBTQ people mentally ill because they are gay or transgender. Later that month, Meta agreed to pay $25 million to settle a 2021 lawsuit brought by Donald Trump for suspending his social media accounts after the January 6 riots. Changes to Meta's moderation policies were controversial among its oversight board, with a significant divide in opinion between the board's US conservatives and its global members. In June 2025, Meta decided to make a multibillion-dollar investment in the artificial intelligence startup Scale AI. The financing could exceed $10 billion in value, which would make it one of the largest private-company funding events of all time. In October 2025, it was announced that Meta would lay off 600 employees in its artificial intelligence unit, which it described as "bloated", in an effort to make the department faster and simpler. The layoffs affected Meta's AI infrastructure units, the Fundamental Artificial Intelligence Research (FAIR) unit, and other product-related positions. Mergers and acquisitions Meta has acquired multiple companies (often identified as talent acquisitions). One of its first major acquisitions was in April 2012, when it acquired Instagram for approximately US$1 billion in cash and stock. In October 2013, Facebook, Inc. acquired Onavo, an Israeli mobile web analytics company. In February 2014, Facebook, Inc. announced it would buy mobile messaging company WhatsApp for US$19 billion in cash and stock. The acquisition was completed on October 6. Later that year, Facebook bought Oculus VR for $2.3 billion in cash and stock; Oculus released its first consumer virtual reality headset in 2016. In late November 2019, Facebook, Inc. announced the acquisition of the game developer Beat Games, responsible for developing one of that year's most popular VR games, Beat Saber.
In late 2022, after Facebook, Inc. rebranded to Meta Platforms, Inc., Oculus was rebranded to Meta Quest. In May 2020, Facebook, Inc. announced it had acquired Giphy for a reported cash price of $400 million, to be integrated with the Instagram team. However, in August 2021, the UK's Competition and Markets Authority (CMA) stated that Facebook, Inc. might have to sell Giphy, after an investigation found that the deal between the two companies would harm competition in the display advertising market. Facebook, Inc. was fined $70 million by the CMA for deliberately failing to report all information regarding the acquisition and the ongoing antitrust investigation. In October 2022, the CMA ruled for a second time that Meta be required to divest Giphy, stating that Meta already controlled half of the advertising in the UK. Meta agreed to the sale, though it stated that it disagreed with the decision itself. In May 2023, Giphy was divested to Shutterstock for $53 million. In November 2020, Facebook, Inc. announced that it planned to purchase the customer-service platform and chatbot specialist startup Kustomer to encourage companies to use its platform for business. Kustomer was reportedly valued at slightly over $1 billion. The deal was closed in February 2022 after regulatory approval. In September 2022, Meta acquired Lofelt, a Berlin-based haptic tech startup. In December 2025, it was announced that Meta had acquired the AI-wearables startup Limitless. In the same month, it also acquired another AI startup, Manus AI, for $2 billion. Manus announced in December that its platform had achieved $100 million in recurring revenue just eight months after its launch, and Meta said it would scale the platform to many other businesses. In January 2026, it was announced that Meta's proposed acquisition of Manus was undergoing preliminary scrutiny by Chinese regulators. The examination concerns the cross-border transfer of artificial intelligence technology developed in China. Lobbying In 2020, Facebook, Inc. spent $19.7 million on lobbying, hiring 79 lobbyists. In 2019, it had spent $16.7 million on lobbying and had a team of 71 lobbyists, up from $12.6 million and 51 lobbyists in 2018. Facebook was the largest spender of lobbying money among the Big Tech companies in 2020. The lobbying team includes top congressional aide John Branscome, who was hired in September 2021 to help the company fend off threats from Democratic lawmakers and the Biden administration. In December 2024, Meta donated $1 million to the inauguration fund for then-President-elect Donald Trump. In 2025, Meta was listed among the donors funding the construction of the White House State Ballroom. Partnerships In February 2026, Meta announced a long-term partnership with Nvidia. Censorship In August 2024, Mark Zuckerberg sent a letter to Jim Jordan indicating that during the COVID-19 pandemic the Biden administration had repeatedly asked Meta to limit certain COVID-19 content, including humor and satire, on Facebook and Instagram. In 2016, Meta hired Jordana Cutler, formerly an employee at the Israeli Embassy to the United States, as its policy chief for Israel and the Jewish Diaspora. In this role, Cutler pushed for the censorship of accounts belonging to Students for Justice in Palestine chapters in the United States. Critics have said that Cutler's position gives the Israeli government an undue influence over Meta policy, and that few countries have such high levels of contact with Meta policymakers.
Following Donald Trump's return to the presidency in 2025, various sources noted possible censorship related to the Democratic Party on Instagram and other Meta platforms. In February 2025, Meta flagged journalist Gil Duran's article and other "critiques of tech industry figures" as spam or sensitive content, limiting their reach. In March 2025, Meta attempted to block former employee Sarah Wynn-Williams from promoting or further distributing her memoir, Careless People, which includes allegations of unaddressed sexual harassment in the workplace by senior executives. The New York Times reported that the arbitration was among Meta's most forceful attempts to repudiate a former employee's account of workplace dynamics. Publisher Macmillan reacted to the ruling by the Emergency International Arbitral Tribunal by stating that it would ignore its provisions. As of 15 March 2025, hardback and digital versions of Careless People were being offered for sale by major online retailers. From October 2025, Meta began removing and restricting access to accounts related to LGBTQ, reproductive health and abortion information pages on its platforms. Martha Dimitratou, executive director of Repro Uncensored, called Meta's shadow-banning of these issues "one of the biggest waves of censorship we are seeing". Disinformation concerns Since its inception, Meta has been accused of hosting fake news and misinformation. In the wake of the 2016 United States presidential election, Zuckerberg began to take steps to reduce the prevalence of fake news, as the platform had been criticized for its potential influence on the outcome of the election. The company initially partnered with ABC News, the Associated Press, FactCheck.org, Snopes and PolitiFact for its fact-checking initiative; as of 2018, it had over 40 fact-checking partners across the world, including The Weekly Standard. A May 2017 review by The Guardian found that the platform's fact-checking initiatives, partnering with third-party fact-checkers and publicly flagging fake news, were regularly ineffective and appeared to have minimal impact in some cases. In 2018, journalists working as fact-checkers for the company criticized the partnership, stating that it had produced minimal results and that the company had ignored their concerns. In 2024, Meta's decision to continue to disseminate a falsified video of US president Joe Biden, even after it had been proven to be fake, attracted criticism and concern. In January 2025, Meta ended its use of third-party fact-checkers in favor of a user-run community notes system similar to the one used on X. While Zuckerberg supported these changes, saying that the amount of censorship on the platform had been excessive, the decision received criticism from fact-checking institutions, which stated that the changes would make it more difficult for users to identify misinformation. Meta also faced criticism for weakening its policies on hate speech that were designed to protect minorities and LGBTQ+ individuals from bullying and discrimination. While moving its content review teams from California to Texas, Meta changed its hateful conduct policy to eliminate restrictions on anti-LGBT and anti-immigrant hate speech, as well as explicitly allowing users to accuse LGBT people of being mentally ill or abnormal based on their sexual orientation or gender identity.
In January 2025, Meta faced significant criticism for its role in removing LGBTQ+ content from its platforms, amid its broader efforts to address anti-LGBTQ+ hate speech. The removal of LGBTQ+ themes was noted as part of a wider crackdown on content deemed to violate its community guidelines. Meta's content moderation policies, which were designed to combat harmful speech and protect users from discrimination, inadvertently led to the removal or restriction of LGBTQ+ content, particularly posts highlighting LGBTQ+ identities, support, or political issues. According to reports, LGBTQ+ posts, including those that simply celebrated pride or advocated for LGBTQ+ rights, were flagged and removed for reasons that some critics argued were vague or inconsistently applied. Many LGBTQ+ activists and users on Meta's platforms expressed concern that such actions stifled visibility and expression, potentially isolating LGBTQ+ individuals and communities, especially in spaces that were historically important for outreach and support. Lawsuits Numerous lawsuits have been filed against the company, both when it was known as Facebook, Inc. and as Meta Platforms. In March 2020, the Office of the Australian Information Commissioner (OAIC) sued Facebook for significant and persistent breaches of privacy rules in connection with the Cambridge Analytica scandal. Every violation of the Privacy Act is subject to a theoretical cumulative liability of $1.7 million. The OAIC estimated that a total of 311,127 Australians had been exposed. On December 8, 2020, the U.S. Federal Trade Commission and 46 states (excluding Alabama, Georgia, South Carolina, and South Dakota), the District of Columbia and the territory of Guam launched Federal Trade Commission v. Facebook as an antitrust lawsuit against Facebook. The lawsuit concerned Facebook's acquisition of two competitors, Instagram and WhatsApp, and the ensuing monopolistic situation. The FTC alleged that Facebook held monopoly power in the U.S. social networking market and sought to force the company to divest Instagram and WhatsApp to break up the conglomerate. William Kovacic, a former chairman of the Federal Trade Commission, argued the case would be difficult to win, as it would require the government to construct a counterfactual argument of an internet where the Facebook-WhatsApp-Instagram entity did not exist, and prove that this harmed competition or consumers. In November 2025, it was ruled that Meta did not violate antitrust laws and held no monopoly in the market. On December 24, 2021, a court in Russia fined Meta $27 million after the company declined to remove unspecified banned content. The fine was reportedly tied to the company's annual revenue in the country. In May 2022, a lawsuit was filed in Kenya against Meta and its local outsourcing company Sama, alleging that Meta maintained poor working conditions in Kenya for workers moderating Facebook posts. According to the lawsuit, 260 screeners were declared redundant with confusing reasoning. The lawsuit seeks financial compensation and an order that outsourced moderators be given the same health benefits and pay scale as Meta employees. In June 2022, eight lawsuits were filed across the U.S. alleging that excessive exposure to platforms including Facebook and Instagram had led to attempted or actual suicides, eating disorders and sleeplessness, among other issues. The litigation followed a former Facebook employee's testimony in Congress that the company refused to take responsibility.
The company noted that tools had been developed for parents to keep track of their children's activity on Instagram and set time limits, in addition to Meta's "Take a break" reminders. In addition, the company provides resources specific to eating disorders and is developing AI to prevent children under the age of 13 from signing up for Facebook or Instagram. In June 2022, Meta settled a lawsuit with the US Department of Justice. The lawsuit, which was filed in 2019, alleged that the company enabled housing discrimination through targeted advertising, as it allowed homeowners and landlords to run housing ads excluding people based on sex, race, religion, and other characteristics. The U.S. Department of Justice stated that this was in violation of the Fair Housing Act. Meta was handed a penalty of $115,054 and given until December 31, 2022, to stop using the discriminatory ad-targeting tool. In January 2023, Meta was fined €390 million for violations of the European Union General Data Protection Regulation. In May 2023, the European Data Protection Board fined Meta a record €1.2 billion for breaching European Union data privacy laws by transferring personal data of Facebook users to servers in the U.S. In July 2024, Meta agreed to pay the state of Texas US$1.4 billion to settle a lawsuit brought by Texas Attorney General Ken Paxton accusing the company of collecting users' biometric data without consent, setting a record for the largest privacy-related settlement ever obtained by a state attorney general. In October 2024, Meta Platforms faced lawsuits in Japan from 30 plaintiffs who claimed they were defrauded by fake investment ads on Facebook and Instagram featuring false celebrity endorsements. The plaintiffs sought approximately $2.8 million in damages. In April 2025, the Kenyan High Court ruled that a US$2.4 billion lawsuit, in which three plaintiffs claim that Facebook inflamed civil violence in Ethiopia in 2021, could proceed. In April 2025, Meta was fined €200 million ($230 million) for breaking the Digital Markets Act by imposing a "consent or pay" system that forced users to either allow their personal data to be used to target advertisements or pay a subscription fee for advertising-free versions of Facebook and Instagram. In late April 2025, a case was filed against Meta in Ghana over the alleged psychological distress experienced by content moderators employed to take down disturbing social media content, including depictions of murders, extreme violence and child sexual abuse. Meta had moved the moderation service to the Ghanaian capital of Accra after legal issues in the previous location, Kenya. The new moderation company is Teleperformance, a multinational corporation with a history of workers' rights violations. Reports suggest that conditions are worse there than at the previous Kenyan location, with many workers afraid of speaking out due to fear of being returned to conflict zones. Workers reported developing mental illnesses, attempted suicides, and low pay. On January 26, 2026, a case was filed in a New Mexico state court alleging that Mark Zuckerberg approved allowing minors to access artificial intelligence chatbot companions that safety staffers had warned were capable of sexual interactions. In 2020, the company UReputation, which had been involved in several cases concerning the management of online "digital armies", filed a lawsuit against Facebook, accusing it of unlawfully transmitting personal data to third parties.
Legal actions were initiated in Tunisia, France, and the United States. In 2025, the United States District Court for the Northern District of Georgia approved a discovery procedure, allowing UReputation to access documents and evidence held by Meta. Structure As of October 2022, Meta had 83,553 employees worldwide. Meta Platforms is mainly owned by institutional investors, who hold around 80% of all shares, while insiders control the majority of voting shares. The three largest individual investors in 2024 were Mark Zuckerberg, Sheryl Sandberg and Christopher K. Cox. Roger McNamee, an early Facebook investor and Zuckerberg's former mentor, said Facebook had "the most centralized decision-making structure I have ever encountered in a large company". Facebook co-founder Chris Hughes has stated that chief executive officer Mark Zuckerberg has too much power, that the company is now a monopoly, and that, as a result, it should be split into multiple smaller companies. In an op-ed in The New York Times, Hughes said he was concerned that Zuckerberg had surrounded himself with a team that did not challenge him, and that it was the U.S. government's job to hold him accountable and curb his "unchecked power". He also said that "Mark's power is unprecedented and un-American." Several U.S. politicians agreed with Hughes. European Union Commissioner for Competition Margrethe Vestager stated that splitting Facebook should be done only as "a remedy of the very last resort", and that it would not solve Facebook's underlying problems. Revenue Facebook ranked No. 34 in the 2020 Fortune 500 list of the largest United States corporations by revenue, with almost $86 billion in revenue, most of it coming from advertising. One analysis of 2017 data determined that the company earned US$20.21 per user from advertising. According to New York magazine, since its rebranding Meta has reportedly lost $500 billion as a result of new privacy measures put in place by companies such as Apple and Google, which prevent Meta from gathering users' data. In February 2015, Facebook announced it had reached two million active advertisers, with most of the gain coming from small businesses. An active advertiser was defined as an entity that had advertised on the Facebook platform in the last 28 days. In March 2016, Facebook announced it had reached three million active advertisers, with more than 70% from outside the United States. Prices for advertising follow a variable pricing model based on auctioning ad placements and the potential engagement levels of the advertisement itself. Similar to other online advertising platforms like Google and Twitter, targeting of advertisements is one of the chief merits of digital advertising compared to traditional media. Marketing on Meta is employed through two methods based on the viewing habits, likes and shares, and purchasing data of the audience, namely targeted audiences and "lookalike" audiences. The U.S. IRS challenged the valuation Facebook used when it transferred IP from the U.S. to Facebook Ireland (now Meta Platforms Ireland) in 2010 (which Facebook Ireland then revalued higher before charging out), as it was building its double Irish tax structure. The case is ongoing and Meta faces a potential fine of $3–5 billion. The U.S. Tax Cuts and Jobs Act of 2017 changed Facebook's global tax calculations.
Meta Platforms Ireland is subject to the U.S. GILTI tax of 10.5% on global intangible profits (i.e., Irish profits). On the basis that Meta Platforms Ireland Limited is paying some tax, the effective minimum US tax for Facebook Ireland will be circa 11%. In contrast, Meta Platforms Inc. would incur a special IP tax rate of 13.125% (the FDII rate) if its Irish business relocated to the U.S. Tax relief in the U.S. (21% vs. the Irish GILTI rate) and accelerated capital expensing would make this effective U.S. rate around 12%. The insignificance of the U.S./Irish tax difference was demonstrated when Facebook moved 1.5 billion non-EU accounts to the U.S. to limit exposure to GDPR. Facilities Users outside of the U.S. and Canada contract with Meta's Irish subsidiary, Meta Platforms Ireland Limited (formerly Facebook Ireland Limited), allowing Meta to avoid US taxes for all users in Europe, Asia, Australia, Africa and South America. Meta has made use of the Double Irish arrangement, which allowed it to pay 2–3% corporation tax on all international revenue. In 2010, Facebook opened its fourth office, in Hyderabad, India, which houses online advertising and developer support teams and provides support to users and advertisers. In India, Meta is registered as Facebook India Online Services Pvt Ltd. It also has offices or planned sites in Chittagong, Bangladesh; Dublin, Ireland; and Austin, Texas, among other cities. Facebook opened its London headquarters in 2017 in Fitzrovia in central London. Facebook opened an office in Cambridge, Massachusetts, in 2018. The offices were initially home to the "Connectivity Lab", a group focused on bringing Internet access to those who do not have it. In April 2019, Facebook opened its Taiwan headquarters in Taipei. In March 2022, Meta opened new regional headquarters in Dubai. In September 2023, it was reported that Meta had paid £149 million to British Land to break the lease on its Triton Square office in London; Meta reportedly had another 18 years left on its lease on the site. As of 2023, Facebook operated 21 data centers. It committed to purchasing 100% renewable energy and reducing its greenhouse gas emissions by 75% by 2020. Its data center technologies include Fabric Aggregator, a distributed network system that accommodates larger regions and varied traffic patterns. Reception US Representative Alexandria Ocasio-Cortez responded in a tweet to Zuckerberg's announcement about Meta, saying: "Meta as in 'we are a cancer to democracy metastasizing into a global surveillance and propaganda machine for boosting authoritarian regimes and destroying civil society ... for profit!'" Ex-Facebook employee and Facebook Papers whistleblower Frances Haugen responded to the rebranding efforts by expressing doubts about the company's ability to improve while led by Mark Zuckerberg, and urged the chief executive officer to resign. In November 2021, a video published by Inspired by Iceland went viral, in which a Zuckerberg look-alike promoted the Icelandverse, a place of "enhanced actual reality without silly looking headsets". In a December 2021 interview, SpaceX and Tesla chief executive officer Elon Musk said he could not see a compelling use case for the VR-driven metaverse, adding: "I don't see someone strapping a frigging screen to their face all day." In January 2022, Louise Eccles of The Sunday Times logged into the metaverse with the intention of making a video guide. She wrote: Initially, my experience with the Oculus went well.
I attended work meetings as an avatar and tried an exercise class set in the streets of Paris. The headset enabled me to feel the thrill of carving down mountains on a snowboard and the adrenaline rush of climbing a mountain without ropes. Yet switching to the social apps, where you mingle with strangers also using VR headsets, it was at times predatory and vile. Eccles described being sexually harassed by another user, as well as "accents from all over the world, American, Indian, English, Australian, using racist, sexist, homophobic and transphobic language". She also encountered users as young as 7 years old on the platform, despite Oculus headsets being intended for users over 13.
========================================
[SOURCE: https://en.wikipedia.org/wiki/Mars#cite_ref-36] | [TOKENS: 11899]
Mars Mars is the fourth planet from the Sun. It is also known as the "Red Planet" for its orange-red appearance. Mars is a desert-like rocky planet with a tenuous atmosphere that is primarily carbon dioxide (CO2). At the average surface level the atmospheric pressure is a few thousandths of Earth's, atmospheric temperature ranges from −153 to 20 °C (−243 to 68 °F), and cosmic radiation is high. Mars retains some water, in the ground as well as thinly in the atmosphere, forming cirrus clouds, fog, frost, larger polar regions of permafrost and ice caps (with seasonal CO2 snow), but no bodies of liquid surface water. Its surface gravity is roughly a third of Earth's, or double that of the Moon. Its diameter, 6,779 km (4,212 mi), is about half that of Earth, or twice the Moon's, and its surface area is about the size of all the dry land of Earth. Fine dust is prevalent across the surface and the atmosphere, picked up and spread, at the low Martian gravity, even by the weak wind of the tenuous atmosphere. The terrain of Mars roughly follows a north-south divide, the Martian dichotomy, with the northern hemisphere mainly consisting of relatively flat, low-lying plains, and the southern hemisphere of cratered highlands. Geologically, the planet is fairly active, with marsquakes trembling underneath the ground, but it also hosts many enormous extinct volcanoes (the tallest is Olympus Mons, 21.9 km or 13.6 mi tall), as well as one of the largest canyons in the Solar System (Valles Marineris, 4,000 km or 2,500 mi long). Mars has two natural satellites that are small and irregular in shape: Phobos and Deimos. With a significant axial tilt of 25 degrees, Mars experiences seasons, like Earth (which has an axial tilt of 23.5 degrees). A Martian solar year is equal to 1.88 Earth years (687 Earth days), and a Martian solar day (sol) is equal to 24.6 hours. Mars formed along with the other planets approximately 4.5 billion years ago. During the Martian Noachian period (4.5 to 3.5 billion years ago), its surface was marked by meteor impacts, valley formation, erosion, the possible presence of water oceans and the loss of its magnetosphere. The Hesperian period (beginning 3.5 billion years ago and ending 3.3–2.9 billion years ago) was dominated by widespread volcanic activity and flooding that carved immense outflow channels. The Amazonian period, which followed and continues to the present, encompasses the slower geological processes that still shape the planet. Because of Mars's geological history, the possibility of past or present life on Mars remains an area of active scientific investigation, with some possible traces needing further examination. Visible with the naked eye in Earth's sky as a red wandering star, Mars has been observed throughout history, acquiring diverse associations in different cultures. The first attempted flight to Mars, by the Soviet probe Mars 1, took place in 1963, but communication was lost en route. The first successful flyby exploration of Mars was conducted in 1965 with Mariner 4. In 1971, Mariner 9 entered orbit around Mars, becoming the first spacecraft to orbit any body other than the Moon, Sun or Earth; following in the same year were the first uncontrolled impact (Mars 2) and the first successful landing (Mars 3) on Mars. Probes have been active on Mars continuously since 1997. At times, more than ten probes have simultaneously operated in orbit or on the surface, more than at any other planet beyond Earth.
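As a quick sanity check of the calendar figures just quoted, the short Python sketch below uses the more precise sol length of 24.6597 hours, a standard reference value not given in the text:

sol_hours = 24.6597          # one Martian solar day (sol), in hours
year_earth_days = 687        # one Martian year, in Earth days
print(round(year_earth_days / 365.25, 2))          # 1.88 Earth years
print(round(year_earth_days * 24 / sol_hours, 1))  # ~668.6 sols per Martian year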
Mars is an often-proposed target for future crewed exploration missions, though no such mission is currently planned. Natural history Scientists have theorized that during the Solar System's formation, Mars was created as the result of a random process of runaway accretion of material from the protoplanetary disk that orbited the Sun. Mars has many distinctive chemical features caused by its position in the Solar System. Elements with comparatively low boiling points, such as chlorine, phosphorus, and sulfur, are much more common on Mars than on Earth; these elements were probably pushed outward by the young Sun's energetic solar wind. After the formation of the planets, the inner Solar System may have been subjected to the so-called Late Heavy Bombardment. About 60% of the surface of Mars shows a record of impacts from that era, whereas much of the remaining surface is probably underlain by immense impact basins caused by those events. However, more recent modeling has disputed the existence of the Late Heavy Bombardment. There is evidence of an enormous impact basin in the Northern Hemisphere of Mars, spanning 10,600 by 8,500 kilometres (6,600 by 5,300 mi), or roughly four times the size of the Moon's South Pole–Aitken basin, which would be the largest impact basin yet discovered if confirmed. It has been hypothesized that the basin was formed when Mars was struck by a Pluto-sized body about four billion years ago. The event, thought to be the cause of the Martian hemispheric dichotomy, created the smooth Borealis basin that covers 40% of the planet. A 2023 study shows evidence, based on the orbital inclination of Deimos (a small moon of Mars), that Mars may once have had a ring system, 3.5 to 4 billion years ago. This ring system may have been formed from a moon, 20 times more massive than Phobos, orbiting Mars billions of years ago; Phobos would be a remnant of that ring. The geological history of Mars can be split into many epochs, but the three primary periods are the Noachian, the Hesperian, and the Amazonian, outlined above. Geological activity is still taking place on Mars. The Athabasca Valles is home to sheet-like lava flows created about 200 million years ago. Water flows in the grabens called the Cerberus Fossae occurred less than 20 million years ago, indicating equally recent volcanic intrusions. The Mars Reconnaissance Orbiter has captured images of avalanches. Physical characteristics Mars is approximately half the diameter of Earth, or twice that of the Moon, with a surface area only slightly less than the total area of Earth's dry land. Mars is less dense than Earth, having about 15% of Earth's volume and 11% of Earth's mass, resulting in about 38% of Earth's surface gravity. Mars is the only presently known example of a desert planet, a rocky planet with a surface akin to that of Earth's deserts. The red-orange appearance of the Martian surface is caused by iron(III) oxide (nanophase Fe2O3) and the iron(III) oxide-hydroxide mineral goethite. It can look like butterscotch; other common surface colors include golden, brown, tan, and greenish, depending on the minerals present. Like Earth, Mars is differentiated into a dense metallic core overlaid by less dense rocky layers. The outermost layer is the crust, which is on average about 42–56 kilometres (26–35 mi) thick, with a minimum thickness of 6 kilometres (3.7 mi) in Isidis Planitia and a maximum thickness of 117 kilometres (73 mi) in the southern Tharsis plateau. For comparison, Earth's crust averages 27.3 ± 4.8 km in thickness.
The most abundant elements in the Martian crust are silicon, oxygen, iron, magnesium, aluminum, calcium, and potassium. Mars is confirmed to be seismically active; in 2019, it was reported that InSight had detected and recorded over 450 marsquakes and related events. Beneath the crust is a silicate mantle responsible for many of the tectonic and volcanic features on the planet's surface. The upper Martian mantle is a low-velocity zone, where the velocity of seismic waves is lower than in the surrounding depth intervals. The mantle appears to be rigid down to a depth of about 250 km, giving Mars a very thick lithosphere compared to Earth. Below this the mantle gradually becomes more ductile, and the seismic wave velocity starts to increase again. The Martian mantle does not appear to have a thermally insulating layer analogous to Earth's lower mantle; instead, below 1050 km in depth, it becomes mineralogically similar to Earth's transition zone. At the bottom of the mantle lies a basal liquid silicate layer approximately 150–180 km thick. The Martian mantle appears to be highly heterogeneous, with dense fragments up to 4 km across, likely injected deep into the planet by colossal impacts ~4.5 billion years ago; high-frequency waves from eight marsquakes slowed as they passed these localized regions, and modeling indicates the heterogeneities are compositionally distinct debris preserved because Mars lacks plate tectonics and has a sluggishly convecting interior that prevents complete homogenization. Mars's iron and nickel core is at least partially molten, and may have a solid inner core. It is around half of Mars's radius, approximately 1650–1675 km, and is enriched in light elements such as sulfur, oxygen, carbon, and hydrogen. The temperature of the core is estimated to be 2000–2400 K, compared to 5400–6230 K for Earth's solid inner core. In 2025, based on data from the InSight lander, a group of researchers reported the detection of a solid inner core about 613 ± 67 kilometres (381 ± 42 mi) in radius. Mars is a terrestrial planet with a surface that consists of minerals containing silicon and oxygen, metals, and other elements that typically make up rock. The Martian surface is primarily composed of tholeiitic basalt, although parts are more silica-rich than typical basalt and may be similar to andesitic rocks on Earth, or silica glass. Regions of low albedo suggest concentrations of plagioclase feldspar, with northern low albedo regions displaying higher than normal concentrations of sheet silicates and high-silicon glass. Parts of the southern highlands include detectable amounts of high-calcium pyroxenes. Localized concentrations of hematite and olivine have been found. Much of the surface is deeply covered by fine-grained iron(III) oxide dust. The Phoenix lander returned data showing Martian soil to be slightly alkaline and containing elements such as magnesium, sodium, potassium and chlorine. These nutrients are found in soils on Earth, and are necessary for plant growth. Experiments performed by the lander showed that the Martian soil has a basic pH of 7.7 and contains 0.6% perchlorate by weight, a concentration that is toxic to humans. Streaks are common across Mars and new ones appear frequently on steep slopes of craters, troughs, and valleys. The streaks are dark at first and get lighter with age. The streaks can start in a tiny area, then spread out for hundreds of metres. They have been seen to follow the edges of boulders and other obstacles in their path. 
The commonly accepted hypotheses include that they are dark underlying layers of soil revealed after avalanches of bright dust or the passage of dust devils. Several other explanations have been put forward, including those that involve water or even the growth of organisms. Environmental radiation levels on the surface average 0.64 millisieverts per day, significantly less than the 1.84 millisieverts per day, or 22 millirads per day, recorded during the flight to and from Mars. For comparison, the radiation levels in low Earth orbit, where Earth's space stations orbit, are around 0.5 millisieverts of radiation per day. Hellas Planitia has the lowest surface radiation at about 0.342 millisieverts per day, featuring lava tubes southwest of Hadriacus Mons with levels potentially as low as 0.064 millisieverts per day, comparable to radiation levels during flights on Earth. Although Mars has no evidence of a structured global magnetic field, observations show that parts of the planet's crust have been magnetized, suggesting that alternating polarity reversals of its dipole field have occurred in the past. This paleomagnetism of magnetically susceptible minerals is similar to the alternating bands found on Earth's ocean floors. One hypothesis, published in 1999 and re-examined in October 2005 (with the help of the Mars Global Surveyor), is that these bands suggest plate tectonic activity on Mars four billion years ago, before the planetary dynamo ceased to function and the planet's magnetic field faded. Geography and features Although better remembered for mapping the Moon, Johann Heinrich von Mädler and Wilhelm Beer were the first areographers. They began by establishing that most of Mars's surface features were permanent and by more precisely determining the planet's rotation period. In 1840, Mädler combined ten years of observations and drew the first map of Mars. Features on Mars are named from a variety of sources. Albedo features are named for classical mythology. Craters larger than roughly 50 km are named for deceased scientists and writers and others who have contributed to the study of Mars. Smaller craters are named for towns and villages of the world with populations of less than 100,000. Large valleys are named for the word "Mars" or "star" in various languages; smaller valleys are named for rivers. Large albedo features retain many of the older names but are often updated to reflect new knowledge of the nature of the features. For example, Nix Olympica (the snows of Olympus) has become Olympus Mons (Mount Olympus). The surface of Mars as seen from Earth is divided into two kinds of areas, with differing albedo. The paler plains covered with dust and sand rich in reddish iron oxides were once thought of as Martian "continents" and given names like Arabia Terra (land of Arabia) or Amazonis Planitia (Amazonian plain). The dark features were thought to be seas, hence their names Mare Erythraeum, Mare Sirenum and Aurorae Sinus. The largest dark feature seen from Earth is Syrtis Major Planum. The permanent northern polar ice cap is named Planum Boreum. The southern cap is called Planum Australe. Mars's equator is defined by its rotation, but the location of its Prime Meridian was specified, as was Earth's (at Greenwich), by the choice of an arbitrary point; Mädler and Beer selected a line for their first maps of Mars in 1830. 
After the spacecraft Mariner 9 provided extensive imagery of Mars in 1972, a small crater (later called Airy-0), located in the Sinus Meridiani ("Middle Bay" or "Meridian Bay"), was chosen by Merton E. Davies, Harold Masursky, and Gérard de Vaucouleurs for the definition of 0.0° longitude to coincide with the original selection. Because Mars has no oceans, and hence no "sea level", a zero-elevation surface had to be selected as a reference level; this is called the areoid of Mars, analogous to the terrestrial geoid. Zero altitude was defined by the height at which there is 610.5 Pa (6.105 mbar) of atmospheric pressure. This pressure corresponds to the triple point of water, and it is about 0.6% of the sea level surface pressure on Earth (0.006 atm). For mapping purposes, the United States Geological Survey divides the surface of Mars into thirty cartographic quadrangles, each named for a classical albedo feature it contains. In April 2023, The New York Times reported an updated global map of Mars based on images from the Hope spacecraft. A related, but much more detailed, global Mars map was released by NASA on 16 April 2023. The vast upland region Tharsis contains several massive volcanoes, which include the shield volcano Olympus Mons. The edifice is over 600 km (370 mi) wide. Because the mountain is so large, with complex structure at its edges, assigning it a definite height is difficult. Its local relief, from the foot of the cliffs which form its northwest margin to its peak, is over 21 km (13 mi), a little over twice the height of Mauna Kea as measured from its base on the ocean floor. The total elevation change from the plains of Amazonis Planitia, over 1,000 km (620 mi) to the northwest, to the summit approaches 26 km (16 mi), roughly three times the height of Mount Everest, which in comparison stands at just over 8.8 kilometres (5.5 mi). Consequently, Olympus Mons is either the tallest or second-tallest mountain in the Solar System; the only known mountain which might be taller is the Rheasilvia peak on the asteroid Vesta, at 20–25 km (12–16 mi). The dichotomy of Martian topography is striking: northern plains flattened by lava flows contrast with the southern highlands, pitted and cratered by ancient impacts. It is possible that, four billion years ago, the Northern Hemisphere of Mars was struck by an object one-tenth to two-thirds the size of Earth's Moon. If this is the case, the Northern Hemisphere of Mars would be the site of an impact crater 10,600 by 8,500 kilometres (6,600 by 5,300 mi) in size, or roughly the area of Europe, Asia, and Australia combined, surpassing Utopia Planitia and the Moon's South Pole–Aitken basin as the largest impact crater in the Solar System. Mars is scarred by some 43,000 impact craters with a diameter of 5 kilometres (3.1 mi) or greater. The largest exposed crater is Hellas, which is 2,300 kilometres (1,400 mi) wide and 7,000 metres (23,000 ft) deep, and is a light albedo feature clearly visible from Earth. There are other notable impact features, such as Argyre, which is around 1,800 kilometres (1,100 mi) in diameter, and Isidis, which is around 1,500 kilometres (930 mi) in diameter. Due to the smaller mass and size of Mars, the probability of an object colliding with the planet is about half that of Earth. Mars is located closer to the asteroid belt, so it has an increased chance of being struck by materials from that source. Mars is more likely to be struck by short-period comets, i.e., those that lie within the orbit of Jupiter. 
Martian craters can have a morphology that suggests the ground became wet after the meteor impact. The large canyon Valles Marineris (Latin for 'Mariner Valleys', also known as Agathodaemon in the old canal maps) has a length of 4,000 kilometres (2,500 mi) and a depth of up to 7 kilometres (4.3 mi). The length of Valles Marineris is equivalent to the length of Europe and extends across one-fifth the circumference of Mars. By comparison, the Grand Canyon on Earth is only 446 kilometres (277 mi) long and nearly 2 kilometres (1.2 mi) deep. Valles Marineris was formed due to the swelling of the Tharsis area, which caused the crust in the area of Valles Marineris to collapse. In 2012, it was proposed that Valles Marineris is not just a graben, but a plate boundary where 150 kilometres (93 mi) of transverse motion has occurred, possibly making Mars a planet with a two-plate tectonic arrangement. Images from the Thermal Emission Imaging System (THEMIS) aboard NASA's Mars Odyssey orbiter have revealed seven possible cave entrances on the flanks of the volcano Arsia Mons. The caves, named after loved ones of their discoverers, are collectively known as the "seven sisters". Cave entrances measure from 100 to 252 metres (328 to 827 ft) wide and they are estimated to be at least 73 to 96 metres (240 to 315 ft) deep. Because light does not reach the floor of most of the caves, they may extend much deeper than these lower estimates and widen below the surface. "Dena" is the only exception; its floor is visible and was measured to be 130 metres (430 ft) deep. The interiors of these caverns may be protected from micrometeoroids, UV radiation, solar flares and high energy particles that bombard the planet's surface. Martian geysers (or CO2 jets) are putative sites of small gas and dust eruptions that occur in the south polar region of Mars during the spring thaw. "Dark dune spots" and "spiders" – or araneiforms – are the two most visible types of features ascribed to these eruptions. Dust of a given size settles out of the thinner Martian atmosphere sooner than it would on Earth. For example, the dust suspended by the 2001 global dust storms on Mars only remained in the Martian atmosphere for 0.6 years, while the dust from Mount Pinatubo took about two years to settle. However, under current Martian conditions, the mass movements involved are generally much smaller than on Earth. Even the 2001 global dust storms on Mars moved only the equivalent of a very thin dust layer – about 3 μm thick if deposited with uniform thickness between 58° north and south of the equator. Dust deposition at the two rover sites has proceeded at a rate of about the thickness of a grain every 100 sols. Atmosphere Mars lost its magnetosphere 4 billion years ago, possibly because of numerous asteroid strikes, so the solar wind interacts directly with the Martian ionosphere, lowering the atmospheric density by stripping away atoms from the outer layer. Both Mars Global Surveyor and Mars Express have detected ionized atmospheric particles trailing off into space behind Mars, and this atmospheric loss is being studied by the MAVEN orbiter. Compared to Earth, the atmosphere of Mars is quite rarefied. Atmospheric pressure on the surface today ranges from a low of 30 Pa (0.0044 psi) on Olympus Mons to over 1,155 Pa (0.1675 psi) in Hellas Planitia, with a mean pressure at the surface level of 600 Pa (0.087 psi). The highest atmospheric density on Mars is equal to that found 35 kilometres (22 mi) above Earth's surface. 
The resulting mean surface pressure is only 0.6% of Earth's 101.3 kPa (14.69 psi). The scale height of the atmosphere is about 10.8 kilometres (6.7 mi), which is higher than Earth's 6 kilometres (3.7 mi), because the surface gravity of Mars is only about 38% of Earth's. The atmosphere of Mars consists of about 96% carbon dioxide, 1.93% argon and 1.89% nitrogen along with traces of oxygen and water. The atmosphere is quite dusty, containing particulates about 1.5 μm in diameter which give the Martian sky a tawny color when seen from the surface. It may take on a pink hue due to iron oxide particles suspended in it. Despite repeated detections of methane on Mars, there is no scientific consensus as to its origin. One suggestion is that methane exists on Mars and that its concentration fluctuates seasonally. The methane could be produced by non-biological processes such as serpentinization, involving water, carbon dioxide, and the mineral olivine, which is known to be common on Mars, or by Martian life. Compared to Earth, the higher concentration of atmospheric CO2 and the lower surface pressure may be why sound is attenuated more on Mars, where natural sources are rare apart from the wind. Using acoustic recordings collected by the Perseverance rover, researchers concluded that the speed of sound there is approximately 240 m/s for frequencies below 240 Hz, and 250 m/s for those above. Auroras have been detected on Mars. Because Mars lacks a global magnetic field, the types and distribution of auroras there differ from those on Earth; rather than being mostly restricted to polar regions as is the case on Earth, a Martian aurora can encompass the planet. In September 2017, NASA reported radiation levels on the surface of the planet Mars were temporarily doubled, and were associated with an aurora 25 times brighter than any observed earlier, due to a massive, and unexpected, solar storm in the middle of the month. Mars has seasons, alternating between its northern and southern hemispheres, similar to Earth's. Additionally, the orbit of Mars has a larger eccentricity than Earth's, and the planet reaches perihelion when it is summer in its southern hemisphere and winter in its northern, and aphelion when it is winter in its southern hemisphere and summer in its northern. As a result, the seasons in its southern hemisphere are more extreme and the seasons in its northern are milder than would otherwise be the case. The summer temperatures in the south can be warmer than the equivalent summer temperatures in the north by up to 30 °C (54 °F). Martian surface temperatures vary from lows of about −110 °C (−166 °F) to highs of up to 35 °C (95 °F) in equatorial summer. The wide range in temperatures is due to the thin atmosphere which cannot store much solar heat, the low atmospheric pressure (about 1% that of the atmosphere of Earth), and the low thermal inertia of Martian soil. The planet is 1.52 times as far from the Sun as Earth, resulting in just 43% as much sunlight. Mars has the largest dust storms in the Solar System, with wind speeds reaching over 160 km/h (100 mph). These can vary from a storm over a small area, to gigantic storms that cover the entire planet. They tend to occur when Mars is closest to the Sun, and have been shown to increase global temperature. Seasonal carbon dioxide frost (dry ice) also covers the polar ice caps. Hydrology While Mars holds water in substantial amounts, most of it is dust-covered water ice at the Martian polar ice caps. 
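The pressure figures in this section can be tied together with the quoted scale height. The following is a minimal Python sketch, assuming an isothermal exponential atmosphere (a rough approximation, not a mission model); the datum pressure, scale height, Hellas depth, and distance ratio are the values quoted in the article.

```python
import math

# Rough sketch of the Martian pressure-elevation relation, assuming an
# isothermal atmosphere: p(h) = p0 * exp(-h / H). Values from the text.
P_DATUM_PA = 600.0      # mean surface-level pressure, Pa
SCALE_HEIGHT_KM = 10.8  # approximate atmospheric scale height, km

def pressure_pa(elevation_km: float) -> float:
    """Estimated pressure (Pa) at an elevation (km) relative to the datum."""
    return P_DATUM_PA * math.exp(-elevation_km / SCALE_HEIGHT_KM)

# Hellas Planitia lies roughly 7 km below the datum; the simple model lands
# close to the "over 1,155 Pa" maximum quoted above.
print(f"Hellas floor (-7 km): {pressure_pa(-7.0):.0f} Pa")     # ~1147 Pa
print(f"Datum vs Earth sea level: {P_DATUM_PA / 101325:.2%}")  # ~0.59%
# Inverse-square falloff of sunlight at Mars's 1.52 AU distance:
print(f"Insolation relative to Earth: {1 / 1.52 ** 2:.0%}")    # ~43%
```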
The volume of water ice in the south polar ice cap, if melted, would be enough to cover most of the surface of the planet to a depth of 11 metres (36 ft). Water in its liquid form cannot persist on the surface due to Mars's low atmospheric pressure, which is less than 1% that of Earth. Only at the lowest of elevations are the pressure and temperature high enough for liquid water to exist for short periods. Although little water is present in the atmosphere, there is enough to produce clouds of water ice and occasional snow and frost, often mixed with carbon dioxide (dry ice) snow. Landforms visible on Mars strongly suggest that liquid water has existed on the planet's surface. Huge linear swathes of scoured ground, known as outflow channels, cut across the surface in about 25 places. These are thought to be a record of erosion caused by the catastrophic release of water from subsurface aquifers, though some of these structures have been hypothesized to result from the action of glaciers or lava. One of the larger examples, Ma'adim Vallis, is 700 kilometres (430 mi) long, much larger than the Grand Canyon, with a width of 20 kilometres (12 mi) and a depth of 2 kilometres (1.2 mi) in places. It is thought to have been carved by flowing water early in Mars's history. The youngest of these channels is thought to have formed only a few million years ago. Elsewhere, particularly on the oldest areas of the Martian surface, finer-scale, dendritic networks of valleys are spread across significant proportions of the landscape. Features of these valleys and their distribution strongly imply that they were carved by runoff resulting from precipitation in early Mars history. Subsurface water flow and groundwater sapping may play important subsidiary roles in some networks, but precipitation was probably the root cause of the incision in almost all cases. Along crater and canyon walls, there are thousands of features that appear similar to terrestrial gullies. The gullies tend to be in the highlands of the Southern Hemisphere and face the Equator; all are poleward of 30° latitude. A number of authors have suggested that their formation process involves liquid water, probably from melting ice, although others have argued for formation mechanisms involving carbon dioxide frost or the movement of dry dust. No partially degraded gullies formed by weathering, and no superimposed impact craters, have been observed, indicating that these are young features, possibly still active. Other geological features, such as deltas and alluvial fans preserved in craters, are further evidence for warmer, wetter conditions at an interval or intervals in earlier Mars history. Such conditions necessarily require the widespread presence of crater lakes across a large proportion of the surface, for which there is independent mineralogical, sedimentological and geomorphological evidence. Further evidence that liquid water once existed on the surface of Mars comes from the detection of specific minerals such as hematite and goethite, both of which sometimes form in the presence of water. The chemical signature of water vapor on Mars was first unequivocally demonstrated in 1963 by spectroscopy using an Earth-based telescope. In 2004, Opportunity detected the mineral jarosite. This forms only in the presence of acidic water, showing that water once existed on Mars. 
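As a sanity check on the 11-metre figure at the start of this section, one can divide an ice-volume estimate by the planet's surface area. A small sketch, assuming a south polar ice volume of about 1.6 million cubic kilometres (a commonly cited outside estimate, not stated in this article) and the mean radius implied by the diameter given earlier:

```python
import math

# Equivalent global water layer from the south polar ice cap, if melted.
# The ice volume is an assumed outside estimate; the radius follows from
# the 6,779 km diameter quoted earlier in the article.
ICE_VOLUME_KM3 = 1.6e6       # assumed volume of south polar ice, km^3
MARS_RADIUS_KM = 6779.0 / 2  # mean radius, km

area_km2 = 4 * math.pi * MARS_RADIUS_KM ** 2  # ~1.44e8 km^2
layer_m = ICE_VOLUME_KM3 / area_km2 * 1000    # convert km to m

print(f"Global equivalent layer: {layer_m:.1f} m")  # ~11 m, matching the text
```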
The Spirit rover found concentrated deposits of silica in 2007 that indicated wet conditions in the past, and in December 2011 the mineral gypsum, which also forms in the presence of water, was found on the surface by NASA's Mars rover Opportunity. It is estimated that the amount of water in the upper mantle of Mars, represented by hydroxyl ions contained within Martian minerals, is equal to or greater than that of Earth at 50–300 parts per million of water, which is enough to cover the entire planet to a depth of 200–1,000 metres (660–3,280 ft). On 18 March 2013, NASA reported evidence from instruments on the Curiosity rover of mineral hydration, likely hydrated calcium sulfate, in several rock samples including the broken fragments of "Tintina" rock and "Sutton Inlier" rock as well as in veins and nodules in other rocks like "Knorr" rock and "Wernicke" rock. Analysis using the rover's DAN instrument provided evidence of subsurface water, amounting to as much as 4% water content, down to a depth of 60 centimetres (24 in), during the rover's traverse from the Bradbury Landing site to the Yellowknife Bay area in the Glenelg terrain. In September 2015, NASA announced that it had found strong evidence of hydrated brine flows in recurring slope lineae, based on spectrometer readings of the darkened areas of slopes. These streaks flow downhill in Martian summer, when the temperature is above −23 °C, and freeze at lower temperatures. These observations supported earlier hypotheses, based on timing of formation and their rate of growth, that these dark streaks resulted from water flowing just below the surface. However, later work suggested that the lineae may be dry, granular flows instead, with at most a limited role for water in initiating the process. A definitive conclusion about the presence, extent, and role of liquid water on the Martian surface remains elusive. Researchers suspect much of the low northern plains of the planet were covered with an ocean hundreds of metres deep, though this theory remains controversial. In March 2015, scientists stated that such an ocean might have been the size of Earth's Arctic Ocean. This finding was derived from the ratio of protium to deuterium in the modern Martian atmosphere compared to that ratio on Earth. The amount of Martian deuterium (D/H = (9.3 ± 1.7) × 10−4) is five to seven times the amount on Earth (D/H = 1.56 × 10−4), suggesting that ancient Mars had significantly higher levels of water. Results from the Curiosity rover had previously found a high ratio of deuterium in Gale Crater, though not significantly high enough to suggest the former presence of an ocean. Other scientists caution that these results have not been confirmed, and point out that Martian climate models have not yet shown that the planet was warm enough in the past to support bodies of liquid water. Near the northern polar cap is the 81.4 kilometres (50.6 mi) wide Korolev Crater, which the Mars Express orbiter found to be filled with approximately 2,200 cubic kilometres (530 cu mi) of water ice. In November 2016, NASA reported finding a large amount of underground ice in the Utopia Planitia region. The volume of water detected has been estimated to be equivalent to the volume of water in Lake Superior (about 12,100 cubic kilometres). During observations from 2018 through 2021, the ExoMars Trace Gas Orbiter spotted indications of water, probably subsurface ice, in the Valles Marineris canyon system. 
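The D/H argument above can be made concrete with a crude mass balance. The sketch below assumes, purely for illustration, that Earth's ratio represents primordial Mars and that today's inventory is about 21 metres of global equivalent layer (an outside estimate, not stated here); real models must also account for fractionation during escape, so this is an order-of-magnitude illustration only.

```python
# Order-of-magnitude sketch of the deuterium-enrichment argument: hydrogen
# escapes to space more readily than heavier deuterium, so a D/H ratio
# enriched relative to the primordial value implies lost water.
DH_MARS_NOW = 9.3e-4        # modern Martian atmosphere, from the text
DH_PRIMORDIAL = 1.56e-4     # Earth's ratio, taken here as the starting value
CURRENT_WATER_GEL_M = 21.0  # assumed present inventory, metres global layer

enrichment = DH_MARS_NOW / DH_PRIMORDIAL
ancient_water_gel_m = CURRENT_WATER_GEL_M * enrichment

print(f"Enrichment factor: {enrichment:.1f}x")  # ~6.0, within "five to seven"
print(f"Implied ancient inventory: ~{ancient_water_gel_m:.0f} m global layer")
```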
Orbital motion Mars's average distance from the Sun is roughly 230 million km (143 million mi), and its orbital period is 687 (Earth) days. The solar day (or sol) on Mars is only slightly longer than an Earth day: 24 hours, 39 minutes, and 35.244 seconds. A Martian year is equal to 1.8809 Earth years, or 1 year, 320 days, and 18.2 hours. The gravitational potential difference, and thus the delta-v needed to transfer between Earth and Mars, is the second lowest of any planet from Earth, after Venus. The axial tilt of Mars is 25.19° relative to its orbital plane, which is similar to the axial tilt of Earth. As a result, Mars has seasons like Earth, though on Mars they are nearly twice as long because its orbital period is that much longer. In the present day, the orientation of the north pole of Mars is close to the star Deneb. Mars has a relatively pronounced orbital eccentricity of about 0.09; of the seven other planets in the Solar System, only Mercury has a larger orbital eccentricity. It is known that in the past, Mars has had a much more circular orbit. At one point, 1.35 million Earth years ago, Mars had an eccentricity of roughly 0.002, much less than that of Earth today. Mars's cycle of eccentricity is 96,000 Earth years compared to Earth's cycle of 100,000 years. Mars makes its closest approach to Earth around the time of opposition, which recurs with a synodic period of 779.94 days. Opposition should not be confused with conjunction, when Mars and Earth are on opposite sides of the Sun, forming a straight line with it. The average time between the successive oppositions of Mars, its synodic period, is 780 days; but the number of days between successive oppositions can range from 764 to 812. The distance at close approach varies between about 54 and 103 million km (34 and 64 million mi) due to the planets' elliptical orbits, which causes comparable variation in angular size. At their furthest, Mars and Earth can be as far as 401 million km (249 million mi) apart. Mars comes into opposition from Earth every 2.1 years. The planets come into opposition near Mars's perihelion in 2003, 2018 and 2035, with the 2020 and 2033 events being particularly close to perihelic opposition. The mean apparent magnitude of Mars is +0.71 with a standard deviation of 1.05. Because the orbit of Mars is eccentric, the magnitude at opposition from the Sun can range from about −3.0 to −1.4. The minimum brightness is magnitude +1.86 when the planet is near aphelion and in conjunction with the Sun. At its brightest, Mars (along with Jupiter) is second only to Venus in apparent brightness. Mars usually appears distinctly yellow, orange, or red. When farthest away from Earth, it is more than seven times farther away than when it is closest. Mars is usually close enough for particularly good viewing once or twice at 15-year or 17-year intervals. Optical ground-based telescopes are typically limited to resolving features about 300 kilometres (190 mi) across when Earth and Mars are closest, because of Earth's atmosphere. As Mars approaches opposition, it begins a period of retrograde motion, in which it appears to move backwards in a looping curve with respect to the background stars. This retrograde motion lasts for about 72 days, and Mars reaches its peak apparent brightness in the middle of this interval. Moons Mars has two relatively small (compared to Earth's) natural moons, Phobos (about 22 km (14 mi) in diameter) and Deimos (about 12 km (7.5 mi) in diameter), which orbit at distances of 9,376 km (5,826 mi) and 23,460 km (14,580 mi) from the planet. 
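Two quantities in this section follow from short calculations: the synodic period from the difference of the two planets' mean motions, and the apparent angular size from the diameter-to-distance ratio. A sketch using standard sidereal year lengths (not quoted above) and the figures from the text:

```python
import math

# Synodic period: how often Earth "laps" Mars. 1/S = 1/P_Earth - 1/P_Mars.
EARTH_YEAR_D = 365.256  # Earth's sidereal year, days (assumed standard value)
MARS_YEAR_D = 686.98    # Mars's sidereal year, days (assumed standard value)
synodic_d = 1 / (1 / EARTH_YEAR_D - 1 / MARS_YEAR_D)
print(f"Synodic period: {synodic_d:.1f} days")  # ~779.9, as quoted above

# Angular diameter at the distances quoted in the text (small-angle formula).
MARS_DIAMETER_KM = 6779.0
for dist_million_km in (54, 103, 401):
    arcsec = math.degrees(MARS_DIAMETER_KM / (dist_million_km * 1e6)) * 3600
    print(f"At {dist_million_km} million km: {arcsec:.1f} arcsec")
```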
The origin of both moons is unclear, although a popular theory states that they were asteroids captured into Martian orbit. Both satellites were discovered in 1877 by Asaph Hall and were named after the characters Phobos (the deity of panic and fear) and Deimos (the deity of terror and dread), twins from Greek mythology who accompanied their father Ares, god of war, into battle. Mars was the Roman equivalent to Ares. In modern Greek, the planet retains its ancient name Ares (Aris: Άρης). From the surface of Mars, the motions of Phobos and Deimos appear different from that of the Earth's satellite, the Moon. Phobos rises in the west, sets in the east, and rises again in just 11 hours. Deimos, being only just outside synchronous orbit – where the orbital period would match the planet's period of rotation – rises as expected in the east, but slowly. Because the orbit of Phobos is below a synchronous altitude, tidal forces from Mars are gradually lowering its orbit. In about 50 million years, it could either crash into Mars's surface or break up into a ring structure around the planet. The origin of the two satellites is not well understood. Their low albedo and carbonaceous chondrite composition have been regarded as similar to asteroids, supporting a capture theory. The unstable orbit of Phobos would seem to point toward a relatively recent capture. But both have circular orbits near the equator, which is unusual for captured objects, and the required capture dynamics are complex. Accretion early in the history of Mars is plausible, but would not account for a composition resembling asteroids rather than Mars itself, if that is confirmed. Mars may have yet-undiscovered moons, smaller than 50 to 100 metres (160 to 330 ft) in diameter, and a dust ring is predicted to exist between Phobos and Deimos. A third possibility for their origin as satellites of Mars is the involvement of a third body or a type of impact disruption. More recent lines of evidence, including Phobos's highly porous interior and indications of a composition containing mainly phyllosilicates and other minerals known from Mars, point toward an origin of Phobos from material ejected by an impact on Mars that reaccreted in Martian orbit, similar to the prevailing theory for the origin of Earth's satellite. Although the visible and near-infrared (VNIR) spectra of the moons of Mars resemble those of outer-belt asteroids, the thermal infrared spectra of Phobos are reported to be inconsistent with chondrites of any class. It is also possible that Phobos and Deimos were fragments of an older moon, formed by debris from a large impact on Mars, and then destroyed by a more recent impact upon the satellite. More recently, a study conducted by a team of researchers from multiple countries suggested that a lost moon, at least fifteen times the size of Phobos, may have existed in the past. Rocks that point to tidal processes on the planet suggest that those tides may have been regulated by such a past moon. Human observations and exploration The history of observations of Mars is marked by the oppositions of Mars, when the planet is closest to Earth and hence most easily visible, which occur every couple of years. Even more notable are the perihelic oppositions of Mars, which are distinguished because Mars is close to perihelion, making it even closer to Earth. The ancient Sumerians named Mars Nergal, the god of war and plague. 
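The odd rising behaviour of the two moons described above follows from comparing each orbital period with Mars's rotation period: the apparent motion across the sky is set by the difference of the two angular rates. A small sketch, assuming standard orbital periods of about 7.66 h for Phobos and 30.3 h for Deimos (not quoted in the text):

```python
# Apparent interval between successive moonrises seen from the Martian
# surface: the relative angular rate is the difference between the moon's
# orbital rate and the planet's rotation rate.
MARS_ROTATION_H = 24.62  # Mars's sidereal rotation period, hours
PHOBOS_ORBIT_H = 7.66    # assumed orbital period of Phobos, hours
DEIMOS_ORBIT_H = 30.31   # assumed orbital period of Deimos, hours

def rise_interval_h(orbit_h: float) -> float:
    """Hours between successive rises; Phobos orbits faster than Mars spins,
    so it rises in the west, while the slower Deimos rises in the east."""
    return 1 / abs(1 / orbit_h - 1 / MARS_ROTATION_H)

print(f"Phobos: every {rise_interval_h(PHOBOS_ORBIT_H):.1f} h")  # ~11 h
print(f"Deimos: every {rise_interval_h(DEIMOS_ORBIT_H):.0f} h")  # ~131 h
```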
During Sumerian times, Nergal was a minor deity of little significance, but in later times his main cult center was the city of Nineveh. In Mesopotamian texts, Mars is referred to as the "star of judgement of the fate of the dead". The existence of Mars as a wandering object in the night sky was also recorded by the ancient Egyptian astronomers and, by 1534 BCE, they were familiar with the retrograde motion of the planet. By the period of the Neo-Babylonian Empire, the Babylonian astronomers were making regular records of the positions of the planets and systematic observations of their behavior. For Mars, they knew that the planet made 37 synodic periods, or 42 circuits of the zodiac, every 79 years. They invented arithmetic methods for making minor corrections to the predicted positions of the planets. In Ancient Greece, the planet was known as Πυρόεις ("the fiery one"); more commonly, the Greek name for the planet was Ares. It was the Romans who named the planet Mars, for their god of war, often represented by the sword and shield of the planet's namesake. In the fourth century BCE, Aristotle noted that Mars disappeared behind the Moon during an occultation, indicating that the planet was farther away than the Moon. Ptolemy, a Greek living in Alexandria, attempted to address the problem of the orbital motion of Mars. Ptolemy's model and his collective work on astronomy were presented in the multi-volume collection later called the Almagest (from the Arabic for "greatest"), which became the authoritative treatise on Western astronomy for the next fourteen centuries. Literature from ancient China confirms that Mars was known by Chinese astronomers by no later than the fourth century BCE. In East Asian cultures, Mars is traditionally referred to as the "fire star" (火星), based on the Wuxing system. In 1609, Johannes Kepler published a ten-year study of the orbit of Mars, using the diurnal parallax of Mars measured by Tycho Brahe to make a preliminary calculation of the relative distance to the planet. From Brahe's observations of Mars, Kepler deduced that the planet orbited the Sun not in a circle, but in an ellipse. Moreover, Kepler showed that Mars sped up as it approached the Sun and slowed down as it moved farther away, in a manner that later physicists would explain as a consequence of the conservation of angular momentum. In 1610, the Italian astronomer Galileo Galilei became the first to use a telescope for astronomical observation, including of Mars. With the telescope, the diurnal parallax of Mars was again measured in an effort to determine the Sun-Earth distance; this was first done by Giovanni Domenico Cassini in 1672. The early parallax measurements were hampered by the quality of the instruments. The only observed occultation of Mars by Venus was that of 13 October 1590, seen by Michael Maestlin at Heidelberg. By the 19th century, the resolution of telescopes reached a level sufficient for surface features to be identified. On 5 September 1877, a perihelic opposition of Mars occurred. The Italian astronomer Giovanni Schiaparelli used a 22-centimetre (8.7 in) telescope in Milan to help produce the first detailed map of Mars. These maps notably contained features he called canali, which, with the possible exception of the natural canyon Valles Marineris, were later shown to be an optical illusion. These canali were supposedly long, straight lines on the surface of Mars, to which he gave the names of famous rivers on Earth. 
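The Babylonian 79-year cycle quoted above checks out against modern values: 37 synodic periods and 42 zodiac circuits both span almost exactly 79 years. A quick verification, using the synodic period and the ~687-day orbital period quoted earlier in this article:

```python
# Verifying the Babylonian relation: 37 synodic periods and 42 circuits of
# the zodiac (sidereal periods) should each span close to 79 years.
SYNODIC_D = 779.94   # days between oppositions, quoted earlier
SIDEREAL_D = 686.98  # Mars's orbital period, ~687 days as quoted earlier
JULIAN_YEAR_D = 365.25

print(f"37 synodic periods: {37 * SYNODIC_D / JULIAN_YEAR_D:.2f} years")   # ~79.0
print(f"42 zodiac circuits: {42 * SIDEREAL_D / JULIAN_YEAR_D:.2f} years")  # ~79.0
```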
His term, which means "channels" or "grooves", was popularly mistranslated in English as "canals". Influenced by the observations, the orientalist Percival Lowell founded an observatory which had 30- and 45-centimetre (12- and 18-in) telescopes. The observatory was used for the exploration of Mars during the last good opportunity in 1894, and the following less favorable oppositions. He published several books on Mars and life on the planet, which had a great influence on the public. The canali were independently observed by other astronomers, like Henri Joseph Perrotin and Louis Thollon in Nice, using one of the largest telescopes of that time. The seasonal changes (consisting of the diminishing of the polar caps and the dark areas formed during Martian summers) in combination with the canals led to speculation about life on Mars, and it was a long-held belief that Mars contained vast seas and vegetation. As bigger telescopes were used, fewer long, straight canali were observed. During observations in 1909 by Antoniadi with an 84-centimetre (33 in) telescope, irregular patterns were observed, but no canali were seen. The first spacecraft from Earth to visit Mars was Mars 1 of the Soviet Union, which flew by in 1963, but contact was lost en route. NASA's Mariner 4 followed and became the first spacecraft to successfully transmit data from Mars; launched on 28 November 1964, it made its closest approach to the planet on 15 July 1965. Mariner 4 detected the weak Martian radiation belt, measured at about 0.1% that of Earth, and captured the first images of another planet from deep space. Once spacecraft visited the planet during the 1960s and 1970s, many previous conceptions of Mars were radically overturned. After the results of the Viking life-detection experiments, the hypothesis of a dead planet was generally accepted. The data from Mariner 9 and Viking allowed better maps of Mars to be made. Between the shutdown of Viking 1 in 1982 and 1997, Mars was visited only by three unsuccessful probes: two that flew past without making contact (Phobos 1, 1988; Mars Observer, 1993), and one (Phobos 2, 1989) that malfunctioned in orbit before reaching its destination, the moon Phobos. In 1997, Mars Pathfinder became the first successful rover mission beyond the Moon and, together with Mars Global Surveyor (operated until late 2006), began an uninterrupted active robotic presence at Mars that has lasted to the present day. Mars Global Surveyor produced complete, extremely detailed maps of the Martian topography, magnetic field and surface minerals. Starting with these missions, a range of new, improved crewless spacecraft, including orbiters, landers, and rovers, have been sent to Mars, with successful missions by NASA (United States), JAXA (Japan), ESA, the United Kingdom, ISRO (India), Roscosmos (Russia), the United Arab Emirates, and CNSA (China), to study the planet's surface, climate, and geology, uncovering the different elements of the history and dynamics of the hydrosphere of Mars and possible traces of ancient life. As of 2023, Mars is host to ten functioning spacecraft. Eight are in orbit: 2001 Mars Odyssey, Mars Express, Mars Reconnaissance Orbiter, MAVEN, ExoMars Trace Gas Orbiter, the Hope orbiter, and the Tianwen-1 orbiter. Another two are on the surface: the Mars Science Laboratory Curiosity rover and the Perseverance rover. Collected maps are available online at websites including Google Mars. 
NASA provides two online tools: Mars Trek, which provides visualizations of the planet using data from 50 years of exploration, and Experience Curiosity, which simulates traveling on Mars in 3-D with Curiosity. Further missions to Mars are planned. As of February 2024, debris from Mars missions has reached over seven tons. Most of it consists of crashed and inactive spacecraft as well as discarded components. In April 2024, NASA selected several companies to begin studies on providing commercial services to further enable robotic science on Mars. Key areas include establishing telecommunications, payload delivery and surface imaging. Habitability and habitation During the late 19th century, it was widely accepted in the astronomical community that Mars had life-supporting qualities, including the presence of oxygen and water. However, in 1894 W. W. Campbell at Lick Observatory observed the planet and found that "if water vapor or oxygen occur in the atmosphere of Mars it is in quantities too small to be detected by spectroscopes then available". That observation contradicted many of the measurements of the time and was not widely accepted. Campbell and V. M. Slipher repeated the study in 1909 using better instruments, but with the same results. It was not until the findings were confirmed by W. S. Adams in 1925 that the myth of the Earth-like habitability of Mars was finally broken. However, even in the 1960s, articles were published on Martian biology, setting aside explanations other than life for the seasonal changes on Mars. The current understanding of planetary habitability – the ability of a world to develop environmental conditions favorable to the emergence of life – favors planets that have liquid water on their surface. Most often this requires the orbit of a planet to lie within the habitable zone, which for the Sun is estimated to extend from within the orbit of Earth to about that of Mars. During perihelion, Mars dips inside this region, but Mars's thin (low-pressure) atmosphere prevents liquid water from existing over large regions for extended periods. The past flow of liquid water demonstrates the planet's potential for habitability. Recent evidence has suggested that any water on the Martian surface may have been too salty and acidic to support regular terrestrial life. The environmental conditions on Mars are a challenge to sustaining organic life: the planet has little heat transfer across its surface, poor shielding against bombardment by the solar wind due to the absence of a magnetosphere, and insufficient atmospheric pressure to retain water in a liquid form (water instead sublimes to a gaseous state). Mars is nearly, or perhaps totally, geologically dead; the end of volcanic activity has apparently stopped the recycling of chemicals and minerals between the surface and interior of the planet. Evidence suggests that the planet was once significantly more habitable than it is today, but whether living organisms ever existed there remains unknown. The Viking probes of the mid-1970s carried experiments designed to detect microorganisms in Martian soil at their respective landing sites and had positive results, including a temporary increase in CO2 production on exposure to water and nutrients. This sign of life was later disputed by scientists, resulting in a continuing debate, with NASA scientist Gilbert Levin asserting that Viking may have found life. 
A 2014 analysis of Martian meteorite EETA79001 found chlorate, perchlorate, and nitrate ions in sufficiently high concentrations to suggest that they are widespread on Mars. UV and X-ray radiation would turn chlorate and perchlorate ions into other, highly reactive oxychlorines, indicating that any organic molecules would have to be buried under the surface to survive. Small quantities of methane and formaldehyde detected by Mars orbiters are both claimed to be possible evidence for life, as these chemical compounds would quickly break down in the Martian atmosphere. Alternatively, these compounds may instead be replenished by volcanic or other geological means, such as serpentinization. Impact glass, formed by the impact of meteors, which on Earth can preserve signs of life, has also been found on the surface of impact craters on Mars; this glass could likewise have preserved signs of life, if life existed at those sites. The Cheyava Falls rock discovered on Mars in June 2024 has been designated by NASA as a "potential biosignature" and was core sampled by the Perseverance rover for possible return to Earth and further examination. Although the find is highly intriguing, no definitive determination of a biological or abiotic origin of this rock can be made with the data currently available. Several plans for a human mission to Mars have been proposed, but none have come to fruition. The NASA Authorization Act of 2017 directed NASA to study the feasibility of a crewed Mars mission in the early 2030s; the resulting report concluded that this would be infeasible. In 2021, China announced plans to send a crewed mission to Mars in 2033. Privately held companies such as SpaceX have also proposed plans to send humans to Mars, with the eventual goal of settling the planet. As of 2024, SpaceX has proceeded with the development of the Starship launch vehicle with the goal of Mars colonization. In plans presented at the company in April 2024, Elon Musk envisioned the beginning of a Mars colony within the next twenty years. This would be enabled by the planned mass manufacturing of Starship, and initially sustained by resupply from Earth and in situ resource utilization on Mars, until the colony reaches full self-sustainability. Any future human mission to Mars will likely take place within the optimal Mars launch window, which occurs every 26 months. The moon Phobos has been proposed as an anchor point for a space elevator. Besides national space agencies and space companies, groups such as the Mars Society and The Planetary Society advocate for human missions to Mars. In culture Mars is named after the Roman god of war (Greek Ares), but was also associated with the demi-god Heracles (Roman Hercules) by ancient Greek astronomers, as detailed by Aristotle. This association between Mars and war dates back at least to Babylonian astronomy, in which the planet was named for the god Nergal, deity of war and destruction. It persisted into modern times, as exemplified by Gustav Holst's orchestral suite The Planets, whose famous first movement labels Mars "The Bringer of War". The planet's symbol, a circle with a spear pointing out to the upper right, is also used as a symbol for the male gender. The symbol dates from at least the 11th century, though a possible predecessor has been found in the Greek Oxyrhynchus Papyri. The idea that Mars was populated by intelligent Martians became widespread in the late 19th century. 
Schiaparelli's "canali" observations combined with Percival Lowell's books on the subject put forward the standard notion of a planet that was a drying, cooling, dying world with ancient civilizations constructing irrigation works. Many other observations and proclamations by notable personalities added to what has been termed "Mars Fever". In the present day, high-resolution mapping of the surface of Mars has revealed no artifacts of habitation, but pseudoscientific speculation about intelligent life on Mars still continues. Reminiscent of the canali observations, these speculations are based on small scale features perceived in the spacecraft images, such as "pyramids" and the "Face on Mars". In his book Cosmos, planetary astronomer Carl Sagan wrote: "Mars has become a kind of mythic arena onto which we have projected our Earthly hopes and fears." The depiction of Mars in fiction has been stimulated by its dramatic red color and by nineteenth-century scientific speculations that its surface conditions might support not just life but intelligent life. This gave way to many science fiction stories involving these concepts, such as H. G. Wells's The War of the Worlds, in which Martians seek to escape their dying planet by invading Earth; Ray Bradbury's The Martian Chronicles, in which human explorers accidentally destroy a Martian civilization; as well as Edgar Rice Burroughs's series Barsoom, C. S. Lewis's novel Out of the Silent Planet (1938), and a number of Robert A. Heinlein stories before the mid-sixties. Since then, depictions of Martians have also extended to animation. A comic figure of an intelligent Martian, Marvin the Martian, appeared in Haredevil Hare (1948) as a character in the Looney Tunes animated cartoons of Warner Brothers, and has continued as part of popular culture to the present. After the Mariner and Viking spacecraft had returned pictures of Mars as a lifeless and canal-less world, these ideas about Mars were abandoned; for many science-fiction authors, the new discoveries initially seemed like a constraint, but eventually the post-Viking knowledge of Mars became itself a source of inspiration for works like Kim Stanley Robinson's Mars trilogy. See also Notes References Further reading External links Solar System → Local Interstellar Cloud → Local Bubble → Gould Belt → Orion Arm → Milky Way → Milky Way subgroup → Local Group → Local Sheet → Local Volume → Virgo Supercluster → Laniakea Supercluster → Pisces–Cetus Supercluster Complex → Local Hole → Observable universe → UniverseEach arrow (→) may be read as "within" or "part of".
========================================
[SOURCE: https://en.wikipedia.org/wiki/Computer#cite_ref-6] | [TOKENS: 10628]
Computer A computer is a machine that can be programmed to automatically carry out sequences of arithmetic or logical operations (computation). Modern digital electronic computers can perform generic sets of operations known as programs, which enable computers to perform a wide range of tasks. The term computer system may refer to a nominally complete computer that includes the hardware, operating system, software, and peripheral equipment needed and used for full operation, or to a group of computers that are linked and function together, such as a computer network or computer cluster. A broad range of industrial and consumer products use computers as control systems, including simple special-purpose devices like microwave ovens and remote controls, and factory devices like industrial robots. Computers are at the core of general-purpose devices such as personal computers and mobile devices such as smartphones. Computers power the Internet, which links billions of computers and users. Early computers were meant to be used only for calculations. Simple manual instruments like the abacus have aided people in doing calculations since ancient times. Early in the Industrial Revolution, some mechanical devices were built to automate long, tedious tasks, such as guiding patterns for looms. More sophisticated electrical machines did specialized analog calculations in the early 20th century. The first digital electronic calculating machines were developed during World War II, both electromechanical and using thermionic valves. The first semiconductor transistors in the late 1940s were followed by the silicon-based MOSFET (MOS transistor) and monolithic integrated circuit chip technologies in the late 1950s, leading to the microprocessor and the microcomputer revolution in the 1970s. The speed, power, and versatility of computers have been increasing dramatically ever since then, with transistor counts increasing at a rapid pace (Moore's law noted that counts doubled every two years), leading to the Digital Revolution during the late 20th and early 21st centuries. Conventionally, a modern computer consists of at least one processing element, typically a central processing unit (CPU) in the form of a microprocessor, together with some type of computer memory, typically semiconductor memory chips. The processing element carries out arithmetic and logical operations, and a sequencing and control unit can change the order of operations in response to stored information. Peripheral devices include input devices (keyboards, mice, joysticks, etc.), output devices (monitors, printers, etc.), and input/output devices that perform both functions (e.g. touchscreens). Peripheral devices allow information to be retrieved from an external source, and they enable the results of operations to be saved and retrieved. Etymology It was not until the mid-20th century that the word acquired its modern definition; according to the Oxford English Dictionary, the first known use of the word computer was in a different sense, in a 1613 book called The Yong Mans Gleanings by the English writer Richard Brathwait: "I haue [sic] read the truest computer of Times, and the best Arithmetician that euer [sic] breathed, and he reduceth thy dayes into a short number." This usage of the term referred to a human computer, a person who carried out calculations or computations. The word continued to have the same meaning until the middle of the 20th century. 
During the latter part of this period, women were often hired as computers because they could be paid less than their male counterparts. By 1943, most human computers were women. The Online Etymology Dictionary gives the first attested use of computer in the 1640s, meaning 'one who calculates'; this is an "agent noun from compute (v.)". The Online Etymology Dictionary states that the use of the term to mean "'calculating machine' (of any type) is from 1897." The Online Etymology Dictionary indicates that the "modern use" of the term, to mean 'programmable digital electronic computer', dates from "1945 under this name; [in a] theoretical [sense] from 1937, as Turing machine". The name has remained, although modern computers are capable of many higher-level functions. History Devices have been used to aid computation for thousands of years, mostly using one-to-one correspondence with fingers. The earliest counting device was most likely a form of tally stick. Later record keeping aids throughout the Fertile Crescent included calculi (clay spheres, cones, etc.) which represented counts of items, likely livestock or grains, sealed in hollow unbaked clay containers. The use of counting rods is one example. The abacus was initially used for arithmetic tasks. The Roman abacus was developed from devices used in Babylonia as early as 2400 BCE. Since then, many other forms of reckoning boards or tables have been invented. In a medieval European counting house, a checkered cloth would be placed on a table, and markers moved around on it according to certain rules, as an aid to calculating sums of money. The Antikythera mechanism is believed to be the earliest known mechanical analog computer, according to Derek J. de Solla Price. It was designed to calculate astronomical positions. It was discovered in 1901 in the Antikythera wreck off the Greek island of Antikythera, between Kythera and Crete, and has been dated to c. 100 BCE. Devices of comparable complexity to the Antikythera mechanism would not reappear until the fourteenth century. Many mechanical aids to calculation and measurement were constructed for astronomical and navigation use. The planisphere was a star chart invented by Abū Rayhān al-Bīrūnī in the early 11th century. The astrolabe was invented in the Hellenistic world in either the 1st or 2nd century BCE and is often attributed to Hipparchus. A combination of the planisphere and dioptra, the astrolabe was effectively an analog computer capable of working out several different kinds of problems in spherical astronomy. An astrolabe incorporating a mechanical calendar computer and gear-wheels was invented by Abi Bakr of Isfahan, Persia in 1235. Abū Rayhān al-Bīrūnī invented the first mechanical geared lunisolar calendar astrolabe, an early fixed-wired knowledge processing machine with a gear train and gear-wheels, c. 1000 AD. The sector, a calculating instrument used for solving problems in proportion, trigonometry, multiplication and division, and for various functions, such as squares and cube roots, was developed in the late 16th century and found application in gunnery, surveying and navigation. The planimeter was a manual instrument to calculate the area of a closed figure by tracing over it with a mechanical linkage. The slide rule was invented around 1620–1630 by the English clergyman William Oughtred, shortly after the publication of the concept of the logarithm. It is a hand-operated analog computer for doing multiplication and division. 
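The slide rule's trick is that multiplication reduces to adding lengths proportional to logarithms. A tiny illustrative sketch of the principle (not a model of any particular instrument):

```python
import math

# Slide-rule principle: log(a * b) = log(a) + log(b), so sliding two
# logarithmic scales against each other adds the logs mechanically.
a, b = 2.4, 3.5
total_length = math.log10(a) + math.log10(b)  # the two slid-together lengths
print(round(10 ** total_length, 6))           # 8.4, i.e. a * b
```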
As slide rule development progressed, added scales provided reciprocals, squares and square roots, cubes and cube roots, as well as transcendental functions such as logarithms and exponentials, circular and hyperbolic trigonometry and other functions. Slide rules with special scales are still used for quick performance of routine calculations, such as the E6B circular slide rule used for time and distance calculations on light aircraft. In the 1770s, Pierre Jaquet-Droz, a Swiss watchmaker, built a mechanical doll (automaton) that could write holding a quill pen. By switching the number and order of its internal wheels, different letters, and hence different messages, could be produced. In effect, it could be mechanically "programmed" to read instructions. Along with two other complex machines, the doll is at the Musée d'Art et d'Histoire of Neuchâtel, Switzerland, and still operates. In 1831–1835, mathematician and engineer Giovanni Plana devised a Perpetual Calendar machine, which through a system of pulleys and cylinders could predict the perpetual calendar for every year from 0 CE (that is, 1 BCE) to 4000 CE, keeping track of leap years and varying day length. The tide-predicting machine invented by the Scottish scientist Sir William Thomson in 1872 was of great utility to navigation in shallow waters. It used a system of pulleys and wires to automatically calculate predicted tide levels for a set period at a particular location. The differential analyser, a mechanical analog computer designed to solve differential equations by integration, used wheel-and-disc mechanisms to perform the integration. In 1876, Sir William Thomson had already discussed the possible construction of such calculators, but he had been stymied by the limited output torque of the ball-and-disk integrators. In a differential analyzer, the output of one integrator drove the input of the next integrator, or a graphing output. The torque amplifier was the advance that allowed these machines to work. Starting in the 1920s, Vannevar Bush and others developed mechanical differential analyzers. In the 1890s, the Spanish engineer Leonardo Torres Quevedo began to develop a series of advanced analog machines that could solve real and complex roots of polynomials; his designs were published in 1901 by the Paris Academy of Sciences. Charles Babbage, an English mechanical engineer and polymath, originated the concept of a programmable computer. Considered the "father of the computer", he conceptualized and invented the first mechanical computer in the early 19th century. He announced his difference engine, which he designed to aid in navigational calculations, in 1822 in a paper to the Royal Astronomical Society titled "Note on the application of machinery to the computation of astronomical and mathematical tables". In 1833, he realized that a much more general design, an analytical engine, was possible. The input of programs and data was to be provided to the machine via punched cards, a method being used at the time to direct mechanical looms such as the Jacquard loom. For output, the machine would have a printer, a curve plotter and a bell. The machine would also be able to punch numbers onto cards to be read in later. The engine would incorporate an arithmetic logic unit, control flow in the form of conditional branching and loops, and integrated memory, making it the first design for a general-purpose computer that could be described in modern terms as Turing-complete. 
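Stepping back to the difference engine mentioned above: it tabulated polynomial values using repeated addition alone, via the method of finite differences, since an nth-degree polynomial has a constant nth difference. A minimal sketch of the idea (the polynomial and its starting differences are chosen only for illustration):

```python
# Method of finite differences, the principle behind Babbage's difference
# engine: tabulate f(x) = x**2 + x + 41 using additions only.
def difference_engine(columns: list[int], steps: int) -> list[int]:
    """columns = [f(0), first difference, second difference, ...]."""
    cols = list(columns)
    table = []
    for _ in range(steps):
        table.append(cols[0])
        for i in range(len(cols) - 1):  # cascade additions column by column
            cols[i] += cols[i + 1]
    return table

# f(0) = 41, f(1) - f(0) = 2, and the second difference of x**2 is constant 2.
print(difference_engine([41, 2, 2], 5))  # [41, 43, 47, 53, 61]
```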
The machine was about a century ahead of its time. All the parts for his machine had to be made by hand – this was a major problem for a device with thousands of parts. Eventually, the project was dissolved with the decision of the British Government to cease funding. Babbage's failure to complete the analytical engine can be chiefly attributed to political and financial difficulties as well as his desire to develop an increasingly sophisticated computer and to move ahead faster than anyone else could follow. Nevertheless, his son, Henry Babbage, completed a simplified version of the analytical engine's computing unit (the mill) in 1888. He gave a successful demonstration of its use in computing tables in 1906. In his work Essays on Automatics, published in 1914, Leonardo Torres Quevedo wrote a brief history of Babbage's efforts at constructing a mechanical Difference Engine and Analytical Engine. The paper contains a design of a machine capable of calculating formulas like a^x(y − z)^2, for a sequence of sets of values. The whole machine was to be controlled by a read-only program, which was complete with provisions for conditional branching. He also introduced the idea of floating-point arithmetic. In 1920, to celebrate the 100th anniversary of the invention of the arithmometer, Torres presented in Paris the Electromechanical Arithmometer, which allowed a user to input arithmetic problems through a keyboard, and computed and printed the results, demonstrating the feasibility of an electromechanical analytical engine. During the first half of the 20th century, many scientific computing needs were met by increasingly sophisticated analog computers, which used a direct mechanical or electrical model of the problem as a basis for computation. However, these were not programmable and generally lacked the versatility and accuracy of modern digital computers. The first modern analog computer was a tide-predicting machine, invented by Sir William Thomson (later to become Lord Kelvin) in 1872. The differential analyser, a mechanical analog computer designed to solve differential equations by integration using wheel-and-disc mechanisms, was conceptualized in 1876 by James Thomson, the elder brother of the more famous Sir William Thomson. The art of mechanical analog computing reached its zenith with the differential analyzer, completed in 1931 by Vannevar Bush at MIT. By the 1950s, the success of digital electronic computers had spelled the end for most analog computing machines, but analog computers remained in use during the 1950s in some specialized applications such as education (slide rules) and aircraft (control systems). Claude Shannon's 1937 master's thesis laid the foundations of digital computing, with his insight of applying Boolean algebra to the analysis and synthesis of switching circuits being the basic concept which underlies all electronic digital computers. By 1938, the United States Navy had developed the Torpedo Data Computer, an electromechanical analog computer for submarines that used trigonometry to solve the problem of firing a torpedo at a moving target. During World War II, similar devices were developed in other countries. Early digital computers were electromechanical; electric switches drove mechanical relays to perform the calculation. These devices had a low operating speed and were eventually superseded by much faster all-electric computers, originally using vacuum tubes.
The Z2, created by German engineer Konrad Zuse in 1939 in Berlin, was one of the earliest examples of an electromechanical relay computer. In 1941, Zuse followed up his earlier machine with the Z3, the world's first working electromechanical programmable, fully automatic digital computer. The Z3 was built with 2,000 relays, implementing a 22-bit word length, and operated at a clock frequency of about 5–10 Hz. Program code was supplied on punched film while data could be stored in 64 words of memory or supplied from the keyboard. It was quite similar to modern machines in some respects, pioneering numerous advances such as floating-point numbers. Using a binary system, rather than the harder-to-implement decimal system used in Charles Babbage's earlier design, meant that Zuse's machines were easier to build and potentially more reliable, given the technologies available at that time. The Z3 was not itself a universal computer but could be extended to be Turing complete. Zuse's next computer, the Z4, became the world's first commercial computer; after an initial delay due to the Second World War, it was completed in 1950 and delivered to the ETH Zurich. The computer was manufactured by Zuse's own company, Zuse KG, founded in Berlin in 1941 as the first company devoted solely to developing computers. The Z4 served as the inspiration for the construction of the ERMETH, the first Swiss computer and one of the first in Europe. Purely electronic circuit elements soon replaced their mechanical and electromechanical equivalents, at the same time that digital calculation replaced analog. The engineer Tommy Flowers, working at the Post Office Research Station in London in the 1930s, began to explore the possible use of electronics for the telephone exchange. Experimental equipment that he built in 1934 went into operation five years later, converting a portion of the telephone exchange network into an electronic data processing system, using thousands of vacuum tubes. In the US, John Vincent Atanasoff and Clifford E. Berry of Iowa State University developed and tested the Atanasoff–Berry Computer (ABC) in 1942, the first "automatic electronic digital computer". This design was also all-electronic and used about 300 vacuum tubes, with capacitors fixed in a mechanically rotating drum for memory. During World War II, the British code-breakers at Bletchley Park achieved a number of successes at breaking encrypted German military communications. The German encryption machine, Enigma, was first attacked with the help of the electro-mechanical bombes, which were often run by women. To crack the more sophisticated German Lorenz SZ 40/42 machine, used for high-level Army communications, Max Newman and his colleagues commissioned Flowers to build the Colossus. He spent eleven months from early February 1943 designing and building the first Colossus. After a functional test in December 1943, Colossus was shipped to Bletchley Park, where it was delivered on 18 January 1944 and attacked its first message on 5 February. Colossus was the world's first electronic digital programmable computer. It used a large number of valves (vacuum tubes). It had paper-tape input and was capable of being configured to perform a variety of boolean logical operations on its data, but it was not Turing-complete. Nine Mk II Colossi were built (the Mk I was converted to a Mk II, making ten machines in total).
Colossus Mark I contained 1,500 thermionic valves (tubes), but Mark II, with 2,400 valves, was both five times faster and simpler to operate than Mark I, greatly speeding the decoding process. The ENIAC (Electronic Numerical Integrator and Computer) was the first electronic programmable computer built in the U.S. Although the ENIAC was similar to the Colossus, it was much faster, more flexible, and Turing-complete. Like the Colossus, a "program" on the ENIAC was defined by the states of its patch cables and switches, a far cry from the stored-program electronic machines that came later. Once a program was written, it had to be mechanically set into the machine with manual resetting of plugs and switches. The programmers of the ENIAC were six women, often known collectively as the "ENIAC girls". It combined the high speed of electronics with the ability to be programmed for many complex problems. It could add or subtract 5,000 times a second, a thousand times faster than any other machine. It also had modules to multiply, divide, and take square roots. High-speed memory was limited to 20 words (about 80 bytes). Built under the direction of John Mauchly and J. Presper Eckert at the University of Pennsylvania, ENIAC's development and construction lasted from 1943 to full operation at the end of 1945. The machine was huge, weighing 30 tons, using 200 kilowatts of electric power, and contained over 18,000 vacuum tubes, 1,500 relays, and hundreds of thousands of resistors, capacitors, and inductors. The principle of the modern computer was proposed by Alan Turing in his seminal 1936 paper, On Computable Numbers. Turing proposed a simple device that he called the "Universal Computing machine" and that is now known as a universal Turing machine. He proved that such a machine is capable of computing anything that is computable by executing instructions (a program) stored on tape, allowing the machine to be programmable. The fundamental concept of Turing's design is the stored program, where all the instructions for computing are stored in memory. Von Neumann acknowledged that the central concept of the modern computer was due to this paper. Turing machines are to this day a central object of study in the theory of computation. Except for the limitations imposed by their finite memory stores, modern computers are said to be Turing-complete, which is to say, they have algorithm execution capability equivalent to a universal Turing machine. Early computing machines had fixed programs. Changing a machine's function required re-wiring and re-structuring the machine. With the proposal of the stored-program computer this changed. A stored-program computer includes by design an instruction set and can store in memory a set of instructions (a program) that details the computation. The theoretical basis for the stored-program computer was laid out by Alan Turing in his 1936 paper. In 1945, Turing joined the National Physical Laboratory and began work on developing an electronic stored-program digital computer. His 1945 report "Proposed Electronic Calculator" was the first specification for such a device. John von Neumann at the University of Pennsylvania also circulated his First Draft of a Report on the EDVAC in 1945. The Manchester Baby was the world's first stored-program computer. It was built at the University of Manchester in England by Frederic C. Williams, Tom Kilburn and Geoff Tootill, and ran its first program on 21 June 1948.
It was designed as a testbed for the Williams tube, the first random-access digital storage device. Although the computer was described as "small and primitive" by a 1998 retrospective, it was the first working machine to contain all of the elements essential to a modern electronic computer. As soon as the Baby had demonstrated the feasibility of its design, a project began at the university to develop it into a practically useful computer, the Manchester Mark 1. The Mark 1 in turn quickly became the prototype for the Ferranti Mark 1, the world's first commercially available general-purpose computer. Built by Ferranti, it was delivered to the University of Manchester in February 1951. At least seven of these later machines were delivered between 1953 and 1957, one of them to Shell labs in Amsterdam. In October 1947, the directors of British catering company J. Lyons & Company decided to take an active role in promoting the commercial development of computers. Lyons's LEO I computer, modelled closely on the Cambridge EDSAC of 1949, became operational in April 1951 and ran the world's first routine office computer job. The concept of a field-effect transistor was proposed by Julius Edgar Lilienfeld in 1925. John Bardeen and Walter Brattain, while working under William Shockley at Bell Labs, built the first working transistor, the point-contact transistor, in 1947, which was followed by Shockley's bipolar junction transistor in 1948. From 1955 onwards, transistors replaced vacuum tubes in computer designs, giving rise to the "second generation" of computers. Compared to vacuum tubes, transistors have many advantages: they are smaller and require less power, so they give off less heat. Junction transistors were much more reliable than vacuum tubes and had a longer, potentially indefinite, service life. Transistorized computers could contain tens of thousands of binary logic circuits in a relatively compact space. However, early junction transistors were relatively bulky devices that were difficult to manufacture on a mass-production basis, which limited them to a number of specialized applications. At the University of Manchester, a team under the leadership of Tom Kilburn designed and built a machine using the newly developed transistors instead of valves. Their first transistorized computer, and the first in the world, was operational by 1953, and a second version was completed there in April 1955. However, the machine did make use of valves to generate its 125 kHz clock waveforms and in the circuitry to read and write on its magnetic drum memory, so it was not the first completely transistorized computer. That distinction goes to the Harwell CADET of 1955, built by the electronics division of the Atomic Energy Research Establishment at Harwell. The metal–oxide–semiconductor field-effect transistor (MOSFET), also known as the MOS transistor, was invented at Bell Labs between 1955 and 1960 and was the first truly compact transistor that could be miniaturized and mass-produced for a wide range of uses. With its high scalability, much lower power consumption, and higher density than bipolar junction transistors, the MOSFET made it possible to build high-density integrated circuits. In addition to data processing, it also enabled the practical use of MOS transistors as memory cell storage elements, leading to the development of MOS semiconductor memory, which replaced earlier magnetic-core memory in computers.
The MOSFET led to the microcomputer revolution, and became the driving force behind the computer revolution. The MOSFET is the most widely used transistor in computers, and is the fundamental building block of digital electronics. The next great advance in computing power came with the advent of the integrated circuit (IC). The idea of the integrated circuit was first conceived by a radar scientist working for the Royal Radar Establishment of the Ministry of Defence, Geoffrey W. A. Dummer. Dummer presented the first public description of an integrated circuit at the Symposium on Progress in Quality Electronic Components in Washington, D.C., on 7 May 1952. The first working ICs were invented by Jack Kilby at Texas Instruments and Robert Noyce at Fairchild Semiconductor. Kilby recorded his initial ideas concerning the integrated circuit in July 1958, successfully demonstrating the first working integrated example on 12 September 1958. In his patent application of 6 February 1959, Kilby described his new device as "a body of semiconductor material ... wherein all the components of the electronic circuit are completely integrated". However, Kilby's invention was a hybrid integrated circuit (hybrid IC), rather than a monolithic integrated circuit (IC) chip. Kilby's IC had external wire connections, which made it difficult to mass-produce. Noyce also came up with his own idea of an integrated circuit, half a year later than Kilby. Noyce's invention was the first true monolithic IC chip. His chip solved many practical problems that Kilby's had not. Produced at Fairchild Semiconductor, it was made of silicon, whereas Kilby's chip was made of germanium. Noyce's monolithic IC was fabricated using the planar process, developed by his colleague Jean Hoerni in early 1959. In turn, the planar process was based on Carl Frosch and Lincoln Derick's work on semiconductor surface passivation by silicon dioxide. Modern monolithic ICs are predominantly MOS (metal–oxide–semiconductor) integrated circuits, built from MOSFETs (MOS transistors). The earliest experimental MOS IC to be fabricated was a 16-transistor chip built by Fred Heiman and Steven Hofstein at RCA in 1962. General Microelectronics later introduced the first commercial MOS IC in 1964, developed by Robert Norman. Following the development of the self-aligned gate (silicon-gate) MOS transistor by Robert Kerwin, Donald Klein and John Sarace at Bell Labs in 1967, the first silicon-gate MOS IC with self-aligned gates was developed by Federico Faggin at Fairchild Semiconductor in 1968. The MOSFET has since become the most critical device component in modern ICs. The development of the MOS integrated circuit led to the invention of the microprocessor, and heralded an explosion in the commercial and personal use of computers. While the subject of exactly which device was the first microprocessor is contentious, partly due to lack of agreement on the exact definition of the term "microprocessor", it is largely undisputed that the first single-chip microprocessor was the Intel 4004, designed and realized by Federico Faggin with his silicon-gate MOS IC technology, along with Ted Hoff, Masatoshi Shima and Stanley Mazor at Intel.[b] In the early 1970s, MOS IC technology enabled the integration of more than 10,000 transistors on a single chip. Systems on a chip (SoCs) are complete computers on a microchip (or chip) the size of a coin. They may or may not have integrated RAM and flash memory.
If not integrated, the RAM is usually placed directly above (known as package on package) or below (on the opposite side of the circuit board) the SoC, and the flash memory is usually placed right next to the SoC. This is done to improve data transfer speeds, as the data signals do not have to travel long distances. Since ENIAC in 1945, computers have advanced enormously, with modern SoCs (such as the Snapdragon 865) being the size of a coin while also being hundreds of thousands of times more powerful than ENIAC, integrating billions of transistors, and consuming only a few watts of power. The first mobile computers were heavy and ran from mains power. The 50 lb (23 kg) IBM 5100 was an early example. Later portables such as the Osborne 1 and Compaq Portable were considerably lighter but still needed to be plugged in. The first laptops, such as the Grid Compass, removed this requirement by incorporating batteries – and with the continued miniaturization of computing resources and advancements in portable battery life, portable computers grew in popularity in the 2000s. The same developments allowed manufacturers to integrate computing resources into cellular mobile phones by the early 2000s. These smartphones and tablets run on a variety of operating systems and recently became the dominant computing devices on the market. They are powered by systems on a chip (SoCs), complete computers on a microchip the size of a coin. Types Computers can be classified in a number of different ways. A computer does not need to be electronic, nor even have a processor, nor RAM, nor even a hard disk. While popular usage of the word "computer" is synonymous with a personal electronic computer,[c] a typical modern definition of a computer is: "A device that computes, especially a programmable [usually] electronic machine that performs high-speed mathematical or logical operations or that assembles, stores, correlates, or otherwise processes information." According to this definition, any device that processes information qualifies as a computer. Hardware The term hardware covers all of those parts of a computer that are tangible physical objects. Circuits, computer chips, graphic cards, sound cards, memory (RAM), motherboards, displays, power supplies, cables, keyboards, printers and mice are all hardware. A general-purpose computer has four main components: the arithmetic logic unit (ALU), the control unit, the memory, and the input and output devices (collectively termed I/O). These parts are interconnected by buses, often made of groups of wires. Inside each of these parts are thousands to trillions of small electrical circuits which can be turned off or on by means of an electronic switch. Each circuit represents a bit (binary digit) of information, so that when the circuit is on it represents a "1", and when off it represents a "0" (in positive logic representation). The circuits are arranged in logic gates so that one or more of the circuits may control the state of one or more of the other circuits. Input devices are the means by which the operations of a computer are controlled and it is provided with data; examples include the keyboard and mouse. Output devices are the means by which a computer provides the results of its calculations in a human-accessible form.
Examples include the display and printer. The control unit (often called a control system or central controller) manages the computer's various components; it reads and interprets (decodes) the program instructions, transforming them into control signals that activate other parts of the computer.[e] Control systems in advanced computers may change the order of execution of some instructions to improve performance. A key component common to all CPUs is the program counter, a special memory cell (a register) that keeps track of which location in memory the next instruction is to be read from.[f] In simplified terms, the control system repeatedly fetches the instruction that the program counter points to, decodes it, carries it out, and advances the program counter to the next instruction; some of these steps may be performed concurrently or in a different order depending on the type of CPU (a minimal sketch of this cycle appears below). Since the program counter is (conceptually) just another set of memory cells, it can be changed by calculations done in the ALU. Adding 100 to the program counter would cause the next instruction to be read from a place 100 locations further down the program. Instructions that modify the program counter are often known as "jumps" and allow for loops (instructions that are repeated by the computer) and often conditional instruction execution (both examples of control flow). The sequence of operations that the control unit goes through to process an instruction is in itself like a short computer program, and indeed, in some more complex CPU designs, there is yet another, smaller computer called a microsequencer, which runs a microcode program that causes all of these events to happen. The control unit, ALU, and registers are collectively known as a central processing unit (CPU). Early CPUs were composed of many separate components. Since the 1970s, CPUs have typically been constructed on a single MOS integrated circuit chip called a microprocessor. The ALU is capable of performing two classes of operations: arithmetic and logic. The set of arithmetic operations that a particular ALU supports may be limited to addition and subtraction, or might include multiplication, division, trigonometric functions such as sine and cosine, and square roots. Some can operate only on whole numbers (integers) while others use floating point to represent real numbers, albeit with limited precision. However, any computer that is capable of performing just the simplest operations can be programmed to break down the more complex operations into simple steps that it can perform. Therefore, any computer can be programmed to perform any arithmetic operation—although it will take more time to do so if its ALU does not directly support the operation. An ALU may also compare numbers and return Boolean truth values (true or false) depending on whether one is equal to, greater than or less than the other ("is 64 greater than 65?"). Logic operations involve Boolean logic: AND, OR, XOR, and NOT. These can be useful for creating complicated conditional statements and processing Boolean logic. Superscalar computers may contain multiple ALUs, allowing them to process several instructions simultaneously. Graphics processors and computers with SIMD and MIMD features often contain ALUs that can perform arithmetic on vectors and matrices. A computer's memory can be viewed as a list of cells into which numbers can be placed or read. Each cell has a numbered "address" and can store a single number.
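The fetch–decode–execute cycle and the picture of memory as a numbered list of cells can be made concrete in a few lines of code. The following C program is only an illustrative sketch (the toy instruction set, opcode names and cell layout are invented for this example, not taken from any real CPU); it interprets a tiny stored program that adds two numbers, and it shows that the program counter is just another number, a jump being nothing more than an assignment to it:

    #include <stdio.h>

    /* Invented opcodes for a toy machine (not a real instruction set). */
    enum { HALT = 0, LOAD = 1, ADD = 2, STORE = 3, JMP = 4 };

    int main(void) {
        /* Memory: a numbered list of cells. Program and data share it
           (the stored-program idea). Each instruction occupies two
           cells: an opcode followed by an address. */
        int mem[16] = {
            LOAD,  10,   /* cells 0-1: acc = mem[10]       */
            ADD,   11,   /* cells 2-3: acc = acc + mem[11] */
            STORE, 12,   /* cells 4-5: mem[12] = acc       */
            HALT,  0,    /* cells 6-7: stop                */
            0, 0,
            7, 35, 0     /* cells 10-12: data              */
        };
        int pc  = 0;  /* program counter: just another number */
        int acc = 0;  /* a single accumulator register        */

        for (;;) {
            int opcode  = mem[pc];       /* fetch */
            int operand = mem[pc + 1];
            pc += 2;                     /* advance to the next instruction */
            switch (opcode) {            /* decode and execute */
            case LOAD:  acc = mem[operand];       break;
            case ADD:   acc = acc + mem[operand]; break;
            case STORE: mem[operand] = acc;       break;
            case JMP:   pc = operand;             break; /* a "jump" */
            case HALT:  printf("cell 12 holds %d\n", mem[12]); /* 42 */
                        return 0;
            }
        }
    }

Writing to pc inside the JMP case is all a jump instruction is; making that assignment conditional on an ALU comparison is what gives loops and branching.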
The computer can be instructed to "put the number 123 into the cell numbered 1357" or to "add the number that is in cell 1357 to the number that is in cell 2468 and put the answer into cell 1595." The information stored in memory may represent practically anything. Letters, numbers, even computer instructions can be placed into memory with equal ease. Since the CPU does not differentiate between different types of information, it is the software's responsibility to give significance to what the memory sees as nothing but a series of numbers. In almost all modern computers, each memory cell is set up to store binary numbers in groups of eight bits (called a byte). Each byte is able to represent 256 different numbers (2^8 = 256); either from 0 to 255 or −128 to +127. To store larger numbers, several consecutive bytes may be used (typically, two, four or eight). When negative numbers are required, they are usually stored in two's complement notation (illustrated in the sketch below). Other arrangements are possible, but are usually not seen outside of specialized applications or historical contexts. A computer can store any kind of information in memory if it can be represented numerically. Modern computers have billions or even trillions of bytes of memory. The CPU contains a special set of memory cells called registers that can be read and written to much more rapidly than the main memory area. There are typically between two and one hundred registers depending on the type of CPU. Registers are used for the most frequently needed data items to avoid having to access main memory every time data is needed. As data is constantly being worked on, reducing the need to access main memory (which is often slow compared to the ALU and control units) greatly increases the computer's speed. Computer main memory comes in two principal varieties: random-access memory (RAM) and read-only memory (ROM). RAM can be read and written to anytime the CPU commands it, but ROM is preloaded with data and software that never changes, so the CPU can only read from it. ROM is typically used to store the computer's initial start-up instructions. In general, the contents of RAM are erased when the power to the computer is turned off, but ROM retains its data indefinitely. In a PC, the ROM contains a specialized program called the BIOS that orchestrates loading the computer's operating system from the hard disk drive into RAM whenever the computer is turned on or reset. In embedded computers, which frequently do not have disk drives, all of the required software may be stored in ROM. Software stored in ROM is often called firmware, because it is notionally more like hardware than software. Flash memory blurs the distinction between ROM and RAM, as it retains its data when turned off but is also rewritable. However, it is typically much slower than conventional ROM and RAM, so its use is restricted to applications where high speed is unnecessary.[g] In more sophisticated computers there may be one or more RAM cache memories, which are slower than registers but faster than main memory. Generally, computers with this sort of cache are designed to move frequently needed data into the cache automatically, often without the need for any intervention on the programmer's part.
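The byte and two's-complement conventions described above can be demonstrated directly. The following C fragment is a minimal sketch (it relies on the fixed-width types from <stdint.h>, which mandate two's-complement behaviour); it shows that a signed −1 and an unsigned 255 occupy the same eight bits, and how a negative 32-bit value looks when stored:

    #include <stdio.h>
    #include <stdint.h>

    int main(void) {
        /* One byte holds 2^8 = 256 distinct values: 0..255 unsigned,
           or -128..+127 in two's complement. */
        uint8_t u = 255;
        int8_t  s = -1;  /* the same bit pattern: 11111111 */
        printf("u = %u, s = %d, same bits: %s\n",
               (unsigned)u, s, (uint8_t)s == u ? "yes" : "no");

        /* Larger numbers occupy several consecutive bytes (here, four).
           Two's complement negates by inverting the bits and adding 1. */
        int32_t x = -123;
        printf("-123 is stored as 0x%08lX\n",
               (unsigned long)(uint32_t)x);  /* prints 0xFFFFFF85 */
        return 0;
    }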
I/O is the means by which a computer exchanges information with the outside world. Devices that provide input or output to the computer are called peripherals. On a typical personal computer, peripherals include input devices like the keyboard and mouse, and output devices such as the display and printer. Hard disk drives, floppy disk drives and optical disc drives serve as both input and output devices. Computer networking is another form of I/O. I/O devices are often complex computers in their own right, with their own CPU and memory. A graphics processing unit might contain fifty or more tiny computers that perform the calculations necessary to display 3D graphics. Modern desktop computers contain many smaller computers that assist the main CPU in performing I/O. A 2016-era flat screen display contains its own computer circuitry. While a computer may be viewed as running one gigantic program stored in its main memory, in some systems it is necessary to give the appearance of running several programs simultaneously. This is achieved by multitasking, i.e. having the computer switch rapidly between running each program in turn. One means by which this is done is with a special signal called an interrupt, which can periodically cause the computer to stop executing instructions where it was and do something else instead. By remembering where it was executing prior to the interrupt, the computer can return to that task later. If several programs are running "at the same time", then the interrupt generator might be causing several hundred interrupts per second, causing a program switch each time. Since modern computers typically execute instructions several orders of magnitude faster than human perception, it may appear that many programs are running at the same time, even though only one is ever executing in any given instant. This method of multitasking is sometimes termed "time-sharing" since each program is allocated a "slice" of time in turn (a simplified sketch of this scheme follows at the end of this section). Before the era of inexpensive computers, the principal use for multitasking was to allow many people to share the same computer. Seemingly, multitasking would cause a computer that is switching between several programs to run more slowly, in direct proportion to the number of programs it is running, but most programs spend much of their time waiting for slow input/output devices to complete their tasks. If a program is waiting for the user to click on the mouse or press a key on the keyboard, then it will not take a "time slice" until the event it is waiting for has occurred. This frees up time for other programs to execute, so that many programs may be run simultaneously without unacceptable speed loss. Some computers are designed to distribute their work across several CPUs in a multiprocessing configuration, a technique once employed in only large and powerful machines such as supercomputers, mainframe computers and servers. Multiprocessor and multi-core (multiple CPUs on a single integrated circuit) personal and laptop computers are now widely available, and are being increasingly used in lower-end markets as a result. Supercomputers in particular often have highly distinctive architectures that differ significantly from the basic stored-program architecture and from general-purpose computers.[h] They often feature thousands of CPUs, customized high-speed interconnects, and specialized computing hardware. Such designs tend to be useful for only specialized tasks due to the large scale of program organization required to use most of the available resources at once. Supercomputers usually see usage in large-scale simulation, graphics rendering, and cryptography applications, as well as with other so-called "embarrassingly parallel" tasks.
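The time-slicing idea can be caricatured in a few lines of code. The sketch below is illustrative only (the task structure and the fixed work budget per turn are inventions of this example; a real operating system would use hardware timer interrupts and save complete register state rather than a single counter):

    #include <stdio.h>

    /* A toy "program" whose entire saved state is one counter. */
    struct task { const char *name; int progress, goal; };

    int main(void) {
        struct task tasks[] = {
            { "editor",   0, 5 },
            { "compiler", 0, 8 },
            { "player",   0, 3 },
        };
        const int n = 3, slice = 2;  /* units of work per time slice */
        int remaining = n;

        /* Round-robin scheduling: each unfinished task runs for one
           slice, is "interrupted", and the next task is resumed. On
           real hardware the switches happen so quickly that all the
           tasks appear to run simultaneously. */
        while (remaining > 0) {
            for (int i = 0; i < n; i++) {
                struct task *t = &tasks[i];
                if (t->progress >= t->goal)
                    continue;  /* finished tasks take no time slice */
                for (int w = 0; w < slice && t->progress < t->goal; w++)
                    t->progress++;  /* do a little work */
                printf("%s: %d/%d\n", t->name, t->progress, t->goal);
                if (t->progress == t->goal)
                    remaining--;
            }
        }
        return 0;
    }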
Software Software is the part of a computer system that consists of the encoded information that determines the computer's operation, such as data or instructions on how to process the data. In contrast to the physical hardware from which the system is built, software is immaterial. Software includes computer programs, libraries and related non-executable data, such as online documentation or digital media. It is often divided into system software and application software. Computer hardware and software require each other, and neither is useful on its own. When software is stored in hardware that cannot easily be modified, such as with BIOS ROM in an IBM PC compatible computer, it is sometimes called "firmware". The defining feature of modern computers that distinguishes them from all other machines is that they can be programmed. That is to say that some type of instructions (the program) can be given to the computer, and it will process them. Modern computers based on the von Neumann architecture often have machine code in the form of an imperative programming language. In practical terms, a computer program may be just a few instructions or extend to many millions of instructions, as do the programs for word processors and web browsers, for example. A typical modern computer can execute billions of instructions per second and rarely makes a mistake over many years of operation. Large computer programs consisting of several million instructions may take teams of programmers years to write, and due to the complexity of the task almost certainly contain errors. This section applies to most common RAM machine–based computers. In most cases, computer instructions are simple: add one number to another, move some data from one location to another, send a message to some external device, etc. These instructions are read from the computer's memory and are generally carried out (executed) in the order they were given. However, there are usually specialized instructions to tell the computer to jump ahead or backwards to some other place in the program and to carry on executing from there. These are called "jump" instructions (or branches). Furthermore, jump instructions may be made to happen conditionally so that different sequences of instructions may be used depending on the result of some previous calculation or some external event. Many computers directly support subroutines by providing a type of jump that "remembers" the location it jumped from and another instruction to return to the instruction following that jump instruction. Program execution might be likened to reading a book. While a person will normally read each word and line in sequence, they may at times jump back to an earlier place in the text or skip sections that are not of interest. Similarly, a computer may sometimes go back and repeat the instructions in some section of the program over and over again until some internal condition is met. This is called the flow of control within the program and it is what allows the computer to perform tasks repeatedly without human intervention. Comparatively, a person using a pocket calculator can perform a basic arithmetic operation such as adding two numbers with just a few button presses. But to add together all of the numbers from 1 to 1,000 would take thousands of button presses and a lot of time, with a near certainty of making a mistake. On the other hand, a computer may be programmed to do this with just a few simple instructions.
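The following example is written in the MIPS assembly language. The listing shown here is a reconstruction for illustration (label names and register choices are our own, though the instructions used are standard MIPS); it adds up the integers from 1 to 1,000:

            addi $8,  $0, 0       # initialize the running sum to 0
            addi $9,  $0, 1       # set the first number to add: 1
    loop:   slti $10, $9, 1001    # $10 = 1 while the number is still <= 1000
            beq  $10, $0, finish  # once the number exceeds 1000, leave the loop
            add  $8,  $8, $9      # add the current number to the sum
            addi $9,  $9, 1       # move on to the next number
            j    loop             # jump back and repeat
    finish: add  $2,  $8, $0      # copy the sum (500500) into the output register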
Once told to run this program, the computer will perform the repetitive addition task without further human intervention. It will almost never make a mistake, and a modern PC can complete the task in a fraction of a second. In most computers, individual instructions are stored as machine code with each instruction being given a unique number (its operation code or opcode for short). The command to add two numbers together would have one opcode; the command to multiply them would have a different opcode, and so on. The simplest computers are able to perform any of a handful of different instructions; the more complex computers have several hundred to choose from, each with a unique numerical code. Since the computer's memory is able to store numbers, it can also store the instruction codes. This leads to the important fact that entire programs (which are just lists of these instructions) can be represented as lists of numbers and can themselves be manipulated inside the computer in the same way as numeric data. The fundamental concept of storing programs in the computer's memory alongside the data they operate on is the crux of the von Neumann, or stored program, architecture. In some cases, a computer might store some or all of its program in memory that is kept separate from the data it operates on. This is called the Harvard architecture after the Harvard Mark I computer. Modern von Neumann computers display some traits of the Harvard architecture in their designs, such as in CPU caches. While it is possible to write computer programs as long lists of numbers (machine language), and while this technique was used with many early computers,[i] it is extremely tedious and potentially error-prone to do so in practice, especially for complicated programs. Instead, each basic instruction can be given a short name that is indicative of its function and easy to remember – a mnemonic such as ADD, SUB, MULT or JUMP. These mnemonics are collectively known as a computer's assembly language. Converting programs written in assembly language into something the computer can actually understand (machine language) is usually done by a computer program called an assembler. A programming language is a notation system for writing the source code from which a computer program is produced. Programming languages provide various ways of specifying programs for computers to run. Unlike natural languages, programming languages are designed to permit no ambiguity and to be concise. They are purely written languages and are often difficult to read aloud. They are generally either translated into machine code by a compiler or an assembler before being run, or translated directly at run time by an interpreter. Sometimes programs are executed by a hybrid method of the two techniques. There are thousands of programming languages—some intended for general purpose programming, others useful for only highly specialized applications. Machine languages and the assembly languages that represent them (collectively termed low-level programming languages) are generally unique to the particular architecture of a computer's central processing unit (CPU).
For instance, an ARM architecture CPU (such as may be found in a smartphone or a hand-held videogame console) cannot understand the machine language of an x86 CPU that might be in a PC.[j] Historically, a significant number of other CPU architectures were created and saw extensive use, notably including the MOS Technology 6502 and 6510 in addition to the Zilog Z80. Although considerably easier than in machine language, writing long programs in assembly language is often difficult and is also error-prone. Therefore, most practical programs are written in more abstract high-level programming languages that are able to express the needs of the programmer more conveniently (and thereby help reduce programmer error). High-level languages are usually "compiled" into machine language (or sometimes into assembly language and then into machine language) using another computer program called a compiler.[k] High-level languages are less related to the workings of the target computer than assembly language, and more related to the language and structure of the problem(s) to be solved by the final program. It is therefore often possible to use different compilers to translate the same high-level language program into the machine language of many different types of computer (see the sketch below). This is part of the means by which software like video games may be made available for different computer architectures such as personal computers and various video game consoles. The design of small programs is relatively simple and involves the analysis of the problem, collection of inputs, use of the programming constructs within languages, devising or using established procedures and algorithms, and providing data for output devices and solutions to the problem as applicable. As problems become larger and more complex, features such as subprograms, modules, formal documentation, and new paradigms such as object-oriented programming are encountered. Large programs involving thousands of lines of code and more require formal software methodologies. The task of developing large software systems presents a significant intellectual challenge. Producing software with an acceptably high reliability within a predictable schedule and budget has historically been difficult; the academic and professional discipline of software engineering concentrates specifically on this challenge. Errors in computer programs are called "bugs". They may be benign and not affect the usefulness of the program, or have only subtle effects. However, in some cases they may cause the program or the entire system to "hang", becoming unresponsive to input such as mouse clicks or keystrokes, to completely fail, or to crash. Otherwise benign bugs may sometimes be harnessed for malicious intent by an unscrupulous user writing an exploit, code designed to take advantage of a bug and disrupt a computer's proper execution. Bugs are usually not the fault of the computer. Since computers merely execute the instructions they are given, bugs are nearly always the result of programmer error or an oversight made in the program's design.[l] Admiral Grace Hopper, an American computer scientist and developer of the first compiler, is credited with having first used the term "bugs" in computing after a dead moth was found shorting a relay in the Harvard Mark II computer in September 1947.
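As a concrete illustration of what a high-level language buys, here is the 1-to-1,000 summation from the earlier MIPS listing rewritten in C; this is a sketch, but any conforming C compiler for any architecture can translate this same source into its own machine language:

    #include <stdio.h>

    int main(void) {
        int sum = 0;
        /* The compiler translates this loop into the same kind of
           compare/add/jump machine instructions that were written
           by hand in the assembly listing above. */
        for (int n = 1; n <= 1000; n++)
            sum += n;
        printf("%d\n", sum);  /* prints 500500 */
        return 0;
    }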
Networking and the Internet Computers have been used to coordinate information between multiple physical locations since the 1950s. The U.S. military's SAGE system was the first large-scale example of such a system, which led to a number of special-purpose commercial systems such as Sabre. In the 1970s, computer engineers at research institutions throughout the United States began to link their computers together using telecommunications technology. The effort was funded by ARPA (now DARPA), and the computer network that resulted was called the ARPANET. Logic gates are a common abstraction that can apply to most digital or analog computing paradigms. The ability to store and execute lists of instructions called programs makes computers extremely versatile, distinguishing them from calculators. The Church–Turing thesis is a mathematical statement of this versatility: any computer with a minimum capability (being Turing-complete) is, in principle, capable of performing the same tasks that any other computer can perform. Therefore, any type of computer (netbook, supercomputer, cellular automaton, etc.) is able to perform the same computational tasks, given enough time and storage capacity. In the 20th century, artificial intelligence systems were predominantly symbolic: they executed code that was explicitly programmed by software developers. Machine learning models, however, have a set of parameters that are adjusted throughout training, so that the model learns to accomplish a task based on the provided data. The efficiency of machine learning (and in particular of neural networks) has rapidly improved with progress in hardware for parallel computing, mainly graphics processing units (GPUs). Some large language models are able to control computers or robots. AI progress may lead to the creation of artificial general intelligence (AGI), a type of AI that could accomplish virtually any intellectual task at least as well as humans. Professions and organizations As the use of computers has spread throughout society, there are an increasing number of careers involving computers. The need for computers to work well together and to be able to exchange information has spawned the need for many standards organizations, clubs and societies of both a formal and informal nature.
========================================
[SOURCE: https://en.wikipedia.org/wiki/Neo-Assyrian_Empire] | [TOKENS: 23960]
Contents Neo-Assyrian Empire The Neo-Assyrian Empire[b] was the fourth and penultimate stage of ancient Assyrian history. Beginning with the accession of Adad-nirari II in 911 BC,[c] the Neo-Assyrian Empire grew to dominate the ancient Near East and parts of the South Caucasus, North Africa and the Eastern Mediterranean throughout much of the 9th to 7th centuries BC, becoming the largest empire in history up to that point. Because of its geopolitical dominance and an ideology based on world domination, the Neo-Assyrian Empire has been described as the first world empire in history.[d] It influenced other empires of the ancient world culturally, administratively, and militarily, including the Neo-Babylonians, the Achaemenids, and the Seleucids. At its height, the empire was the strongest military power in the world and ruled over all of Mesopotamia, the Levant and Egypt, as well as parts of Anatolia, Arabia and modern-day Iran and Armenia. The early Neo-Assyrian kings were chiefly concerned with restoring Assyrian control over much of northern Mesopotamia, eastern Anatolia and the Levant, since significant portions of the preceding Middle Assyrian Empire (1365–1050 BC) had been lost during the late 11th century BC. Under Ashurnasirpal II (r. 883–859 BC), Assyria once more became the dominant power of the Near East, ruling the north undisputed. Ashurnasirpal's campaigns reached as far as the Mediterranean, and he oversaw the transfer of the imperial capital from the traditional city of Assur to the more centrally located Kalhu (later known as Calah in the Bible and Nimrud to the medieval Arabs). The empire grew even more under Ashurnasirpal's successor Shalmaneser III (r. 859–824 BC), though it entered a period of stagnation after his death, referred to as the "age of the magnates". During this time, the chief wielders of political power were prominent generals and officials, and central control was unusually weak. This age came to an end with the rule of Tiglath-Pileser III (r. 745–727 BC), who re-asserted Assyrian royal power and more than doubled the size of the empire through wide-ranging conquests. His most notable conquests were Babylonia in the south and large parts of the Levant. Under the Sargonid dynasty, which ruled from 722 BC to the fall of the empire, Assyria reached its apex. Under Sennacherib (r. 705–681 BC), the capital was transferred to Nineveh, and under Esarhaddon (r. 681–669 BC) the empire reached its largest extent through the conquest of Egypt. Despite being at the peak of its power, the empire experienced a swift and violent fall in the late 7th century BC, destroyed by a Babylonian uprising and an invasion by the Medes. How Assyria could be destroyed so quickly continues to be debated among scholars. The unprecedented success of the Neo-Assyrian Empire was due not only to its ability to expand but also, and perhaps more importantly, to its ability to efficiently incorporate conquered lands into its administrative system. As the first empire of its scale, it saw various military, civic and administrative innovations. In the military, important innovations included a large-scale use of cavalry and new siege warfare techniques. Techniques first adopted by the Neo-Assyrian army would be used in later warfare for millennia. To solve the issue of communicating over vast distances, the empire developed a sophisticated state communication system, using relay stations and well-maintained roads.
The communication speed of official messages in the empire was not surpassed in the Middle East until the 19th century. The empire also made use of a resettlement policy, wherein some portions of the populations from conquered lands were resettled in the Assyrian heartland and in underdeveloped provinces. This policy served both to disintegrate local identities and to introduce Assyrian-developed agricultural techniques to all parts of the empire. A consequence was the dilution of the cultural diversity of the Near East, forever changing the ethnolinguistic composition of the region and facilitating the rise of Aramaic as the regional lingua franca, a position the language retained until the 14th century. The Neo-Assyrian Empire left a legacy of great cultural significance. The political structures established by the empire became a model for later empires, and the ideology of universal rule promulgated by the Neo-Assyrian kings inspired—through the concept of translatio imperii—similar ideas of rights to world domination as late as the early modern period. The empire became an important part of later folklore and literary traditions in northern Mesopotamia through the subsequent post-imperial period and beyond. Judaism—and in turn Christianity and Islam—was profoundly affected by the period of Neo-Assyrian rule; numerous Biblical stories appear to draw on earlier Assyrian mythology and history, and the Assyrian impact on early Jewish theology was immense.[e] Although the empire is prominently remembered today for the supposed excessive brutality of its army, the Assyrians were not excessively brutal when compared to other civilizations throughout history. Background Imperialism and the ambition of establishing a universal, all-encompassing empire was a long-established aspect of royal ideology in the ancient Near East prior to the rise of the Neo-Assyrian Empire. In the Early Dynastic Period of Mesopotamia (c. 2900 – c. 2350 BC), the Sumerian rulers of the various city-states in the region often fought with each other in order to establish small hegemonic empires and to gain a superior position relative to the other city-states. Eventually, these small conflicts evolved into a general ambition to achieve universal rule. Reaching a position of world domination was not seen as a wholly impossible task at this time, since Mesopotamia was believed to correspond to the entire world. One of the earliest Mesopotamian "world conquerors" was Lugalzaggesi, king of Uruk, who conquered all of Lower Mesopotamia in the 24th century BC. The succeeding Akkadian Empire is generally regarded as the first known empire. Numerous imperialist states rose and fell in Mesopotamia and the rest of the Near East after the time of the Akkadian Empire. Most early empires and kingdoms were limited to some core territories, with most of their subjects only nominally recognizing the authority of the central government. Still, the general desire for universal rule dominated the royal ideologies of Mesopotamian kings for thousands of years, bolstered by the memory of the Akkadian Empire and exemplified in titles such as "king of the Universe" or "king of the Four Corners of the World". This desire was also manifested in the kings of Assyria, who ruled in what had been the northern part of the Akkadian Empire. Assyria, previously only a city-state centered on the city of Assur, experienced its first period of ascendancy with the rise of the Middle Assyrian Empire in the 14th century BC.
From the time of Adad-nirari I (r. c. 1305–1274 BC) onwards, Assyria became one of the great powers of the ancient Near East. Under Tukulti-Ninurta I (r. c. 1243–1207 BC) the empire reached its greatest extent and became the dominant force in Mesopotamia, for a time even subjugating Babylonia in the south. After Tukulti-Ninurta's assassination, the Middle Assyrian Empire went into a long period of decline, becoming increasingly restricted to just the Assyrian heartland. Though this period of decline was broken up by Tiglath-Pileser I (r. 1114–1076 BC), who once more expanded Assyrian power, his conquests overstretched Assyria and could not be maintained by his successors. The trend of decline was substantially reversed in the reign of the last Middle Assyrian king, Ashur-dan II (r. 934–912 BC), who campaigned in the northeast and northwest. History The early Neo-Assyrian kings initially set out to reverse the long decline of the Assyrian Empire, retake its former lands and re-establish the position it held at the height of its power. The two empires were not as distinct as their portrayal sometimes suggests, the Neo-Assyrian kings being part of the same continuous royal family line as the kings of the Middle Assyrian Empire. The outward re-expansion by these new kings was cast as war to liberate those Assyrians cut off from Assyrian territory and forced to live under foreign rulers. This held at least some truth, with material evidence from sites lost and then reconquered by the empire demonstrating an endurance of Assyrian culture in the interim. Early efforts at reconquest were mostly focused on the region up to the Khabur river in the west. One of the first conquests of Ashur-dan II had been Katmuḫu in this region, which he made a vassal kingdom rather than annexing outright; this suggests that the resources available to the early Neo-Assyrian kings were very limited and that the imperial reconquista project had to begin nearly from scratch. In this context, the successful expansion conducted under the early Neo-Assyrian kings was an extraordinary achievement. The initial phase of the Assyrian reconquista was slow, beginning under Ashur-dan II near the end of the Middle Assyrian period and covering the reigns of the first two Neo-Assyrian kings, Adad-nirari II (r. 911–891 BC) and Tukulti-Ninurta II (r. 890–884 BC). Ashur-dan's efforts mostly worked to pave the way for the more sustained work under Adad-nirari and Tukulti-Ninurta. Among the conquests of Adad-nirari, the most strategically important campaigns were the wars directed to the southeast, beyond the Little Zab river. These lands had previously been under Babylonian rule. One of Adad-nirari's wars brought the Assyrian army as far south as Der, close to the border of the southeastern kingdom of Elam. Though Adad-nirari did not manage to incorporate territories so far away from the Assyrian heartland into the empire, he secured Arrapha (modern-day Kirkuk), which in later times served as the launching point of numerous Assyrian campaigns toward lands in the east. Adad-nirari managed to secure a border agreement with the Babylonian king Nabu-shuma-ukin I, sealed through both kings marrying a daughter of the other. Adad-nirari continued Ashur-dan's efforts in the west; in his wars, he defeated numerous small western kingdoms. Several small states, such as Guzana, were made into vassals, and others, such as Nisibis, were placed under pro-Assyrian puppet-kings.
After his successful wars in the region, Adad-nirari was able to go on a long march along the Khabur river and the Euphrates, collecting tribute from all the local rulers with no military opposition. He also conducted important building projects; Apku, located between Nineveh and Sinjar and destroyed c. 1000 BC, was rebuilt and became an important administrative center. Though he reigned only briefly, Adad-nirari's son Tukulti-Ninurta continued the policies of his father. In 885 BC, Tukulti-Ninurta repeated his father's march along the Euphrates and Khabur, though he went in the opposite direction, beginning in the south at Dur-Kurigalzu and then collecting tribute while he travelled north. Some of the southern cities that sent tribute to Tukulti-Ninurta during this march were historically more closely aligned with Babylon. Tukulti-Ninurta also fought against small states in the east, in campaigns aimed at strengthening Assyrian control in that direction. Among the lands he defeated were Kirruri, Hubushkia and Gilzanu. In later times, Gilzanu often supplied Assyria with horses. The second phase of the reconquista was initiated in the reign of Tukulti-Ninurta's son and successor Ashurnasirpal II (r. 883–859 BC). Under his rule, Assyria rose to become the dominant political power in the Near East. In terms of personality, Ashurnasirpal was a complex figure; he was a relentless warrior and one of the most brutal kings in Assyrian history,[f] but he also cared about the people, working to increase the prosperity and comfort of his subjects and establishing extensive water reserves and food depots for times of crisis. As a result of the successful campaigns of his predecessors, Ashurnasirpal inherited an impressive amount of resources with which he could work to re-establish Assyrian dominance. Ashurnasirpal's first campaign in 883 BC was against the revolting cities of Suru and Tela along the northern portion of the Tigris river. At Tela he brutally repressed the citizens, among other punishments cutting off noses, ears, fingers and limbs, gouging out eyes and overseeing impalements and decapitations. Ashurnasirpal's later campaigns included three wars against the kingdom of Zamua in the eastern Zagros Mountains, repeated campaigns against Nairi and Urartu in the north, and, most prominently, near continuous conflict with Aramean and Neo-Hittite kingdoms in the west. The Arameans and Neo-Hittites had evolved into well-organized kingdoms, possibly in response to pressure from Assyria. One of Ashurnasirpal's most persistent enemies was the Aramean king Ahuni, who ruled Bit Adini. Ahuni's forces pushed across the Khabur and Euphrates several times, and it was only after years of war that he at last accepted Ashurnasirpal as his suzerain. Ahuni's defeat was highly important, as it marked the first time since Ashur-bel-kala two centuries earlier that Assyrian forces campaigned further west than the Euphrates. Ashurnasirpal made use of this opportunity. In his ninth campaign, he marched to Lebanon and then to the coast of the Mediterranean Sea. Though few of them became formally incorporated into the empire at this point, many kingdoms on the way paid tribute to Ashurnasirpal to avoid being attacked, including Carchemish and Patina, as well as Phoenician cities such as Sidon, Byblos, Tyre and Arwad. Ashurnasirpal's royal inscriptions proudly proclaim that he and his army symbolically cleaned their weapons in the water of the Mediterranean.
Ashurnasirpal financed several large-scale building projects at cities like Assur, Nineveh and Balawat. The most impressive and important project was the restoration of the ruined town of Nimrud, located on the eastern bank of the Tigris in the Assyrian heartland. In 879 BC Ashurnasirpal made Nimrud the capital of the empire and employed thousands of workers to construct fortifications, palaces and temples in the city. Assur became a ceremonial city, although it remained the empire's religious center. Ashurnasirpal's aggressive military politics were continued under his son Shalmaneser III (r. 859–824 BC), whose reign saw a considerable expansion of Assyrian territory. The lands along the Khabur and Euphrates rivers in the west were consolidated under Assyrian control. Ahuni of Bit Adini resisted for several years, but he surrendered to Shalmaneser in the winter of 857/856 BC. When Shalmaneser visited Ahuni's capital, Til Barsip, in the summer of the next year, he renamed it Kar-Salmanu‐ašared ("fortress of Shalmaneser"), settled a substantial number of Assyrians there, and made it the administrative center of a new province, placed under the turtanu (commander-in-chief). Shalmaneser also placed other powerful officials, so-called "magnates", in charge of other vulnerable provinces and regions of the empire. The most powerful and threatening enemy of Assyria at this point was Urartu in the north; the Urartian administration, culture, writing system and religion all closely followed those of Assyria, and the Urartian kings were autocrats similar to the Assyrian kings. The Assyrians in turn took some inspiration from Urartu. For instance, Assyrian irrigation technology and cavalry units, introduced by Shalmaneser, may have been derived from encounters with Urartu. The imperialist expansionism undertaken by the kings of both Urartu and Assyria led to frequent military clashes between the two, despite the two powers being separated by the Taurus Mountains. In 856 BC, Shalmaneser conducted an ambitious military campaign, marching through mountainous territory to the source of the Euphrates and then attacking Urartu from the west. King Arame was forced to flee as Shalmaneser's forces sacked the Urartian capital of Arzashkun, devastated the Urartian heartland, and then marched into what today is western Iran before returning to Arbela in Assyria. Although Shalmaneser's campaign against Urartu compelled many of the small states in northern Syria to pay tribute to him, he was unable to fully utilize the situation. In 853 BC, a coalition of western states assembled at Tell Qarqur in Syria against Assyrian expansion. The coalition included numerous kings of various peoples, including the earliest historically verifiable Israelite and Arab rulers, and was led by King Hadadezer of Aram-Damascus. Shalmaneser engaged the coalition in the same year that it was formed. Though Assyrian records claim that he scored a great victory at the Battle of Qarqar, it is more likely that the battle was indecisive since no substantial political or territorial gains were achieved. After Qarqar, Shalmaneser focused much on the south and in 851–850 BC aided the Babylonian king Marduk-zakir-shumi I in defeating a revolt by his brother Marduk-bel-ushati. After defeating the rebel, Shalmaneser spent some time visiting cities in Babylonia and further helping Marduk-zakir-shumi fight against the Chaldeans in the far south of Mesopotamia.
As Babylonian culture was greatly appreciated in Assyria, Shalmaneser was proud of his alliance with the Babylonian king; a surviving piece of artwork shows the two rulers shaking hands. In the 840s and 830s BC, Shalmaneser again campaigned in Syria and succeeded in receiving tribute from numerous western states after the coalition against him collapsed with Hadadezer's death in 841 BC. Assyrian forces thrice tried to capture Damascus but were not successful. Shalmaneser's failed attempts to impose Assyrian rule in Syria were a result of his energetic campaigns overextending the empire too quickly. In the 830s BC, his armies reached into Cilicia in Anatolia, and in 836 BC Shalmaneser reached Ḫubušna (near modern-day Ereğli), one of the westernmost places reached by Assyrian forces. Though Shalmaneser's conquests were wide-ranging and inspired fear among the other kings of the Near East, he lacked the means to stabilize and consolidate his new lands, and imperial control in many places remained shaky. In the latter years of Shalmaneser's reign, Urartu rose again as a powerful adversary. Though the Assyrians campaigned against them in 830 BC, they failed to fully neutralize the threat the restored kingdom posed. The 830 BC campaign against Urartu was not led by Shalmaneser but by the long-serving and prominent turtanu Dayyan-Assur, who also led other campaigns on the king's behalf. Shalmaneser's final years were preoccupied by an internal crisis when one of his sons, Ashur-danin-pal, rebelled in an attempt to seize the throne, possibly because the younger son Shamshi-Adad V had been designated as heir instead of him. When Shalmaneser died in 824 BC, Ashur-danin-pal was still in revolt, supported by a significant portion of the country, including Assur. Shamshi-Adad was perhaps initially a minor and a puppet of Dayyan-Assur. Though Dayyan-Assur died during the early stages of the civil war, Shamshi-Adad was eventually victorious, apparently with help from Marduk-zakir-shumi or his successor Marduk-balassu-iqbi. Shamshi-Adad's accession marked the beginning of a new age of Neo-Assyrian history, sometimes dubbed the "age of the magnates". This period is marked by a much smaller number of royal inscriptions than in preceding and succeeding times, and by Assyrian magnates, such as Dayyan-Assur and other prominent generals and officials, being the dominant political actors, with the kings wielding significantly less power and influence. Though the consequences of this shift in power remain debated, the age of the magnates has often been characterized as a period of decline. Assyria endured through this period largely unscathed, but there was little to no territorial expansion and central power grew unusually weak. Some developments nevertheless benefited the longevity of the empire, since many magnates took the opportunity to develop stronger military and economic structures and institutions in their own lands throughout the empire. Shamshi-Adad's earliest campaigns, directed against a series of Urartian fortresses and against western Iran, were quite limited in scope. Most of Shamshi-Adad's early reign was relatively unsuccessful; his third campaign, against the small states in the Zagros Mountains region, might have been an Assyrian defeat, and many of the small kingdoms in northern Syria ceased to pay tribute. In 817 or 816 BC, there was a rebellion against the king at Tillê, within the Assyrian heartland. From 815 BC Shamshi-Adad directed his efforts mainly against Marduk-balassu-iqbi.
In 813 BC he defeated Marduk-balassu-iqbi and brought him to Assyria as a captive. A year later he defeated Marduk-balassu-iqbi's successor Baba-aha-iddina and annexed several territories in northern Babylonia. Southern Mesopotamia was left in disarray after Shamshi-Adad's victories. Babylonia nominally came under Assyrian control, and Shamshi-Adad took the ancient Babylonian title "king of Sumer and Akkad", though not the conventional "king of Babylon". Due to Assyria's perhaps somewhat weakened state he was unable to fully exploit the victory, and the Babylonian throne remained unoccupied for several years. Shamshi-Adad's son Adad-nirari III (r. 811–783 BC) was probably very young at the time of his father's death in 811 BC, and real political power during his early reign was probably wielded by the turtanu Nergal‐ila'i and by Adad-nirari's mother Shammuramat. Shammuramat was one of the most powerful women in Assyrian history and perhaps for a time served as co-regent; she is recorded to have partaken in a military campaign against Kummuh in Syria, the only ancient Assyrian woman known to have done so, and is credited in inscriptions alongside her son for expanding Assyrian territory, an honor usually reserved for the king alone. After Shammuramat's death, Adad-nirari continued to be dominated by other figures, such as the eunuch Nergal-eresh. Despite his limited sole authority, Adad-nirari's reign saw some military successes, and Assyrian armies campaigned in western Iran at least 13 times. The western territories, now more or less autonomous, were only attacked four times, though Adad-nirari managed to defeat Aram-Damascus. In 790 BC, Adad-nirari conducted the first Assyrian campaign against the Aramaic tribes living in the Assyro-Babylonian border regions. In c. 787 BC Adad-nirari appointed Shamshi-ilu as the new turtanu. Shamshi-ilu would occupy this position for about 40 years and was for most of that time likely the most powerful political actor in Assyria. After Adad-nirari's death in 783 BC, three of his sons ruled in succession: Shalmaneser IV (r. 783–773 BC), Ashur-dan III (r. 773–755 BC) and Ashur-nirari V (r. 755–745 BC). Their reigns collectively form what appears to be the low point of Assyrian royal power, since a remarkably small number of royal inscriptions are known from them. In Shalmaneser IV's reign, Shamshi-ilu eventually grew bold enough to stop crediting the king at all in his inscriptions and instead claimed to act completely on his own, more openly flaunting his power. Probably under Shamshi-ilu's leadership, the Assyrian army began to focus mainly on Urartu. In 774 BC, Shamshi-ilu scored an important victory against Argishti I of Urartu, though Urartu was not decisively beaten. There were, however, some significant successes in the west, since Shamshi-ilu captured Damascus in 773 BC and secured tribute from the city to the king. Another official who exercised typically royal privileges in Shalmaneser's time was the palace herald Bel-harran-beli-usur, who founded a city, Dur-Bel-harran-beli-usur (named after himself), and claimed in a stele that it was he, and not the king, who had established tax exemptions for the city. Though little information survives concerning Ashur-dan III's reign, it is clear that it was particularly difficult. Much of his reign was spent putting down revolts.
These revolts were perhaps the result of the plague epidemics sweeping Assyria and the Bur-Sagale solar eclipse on 15 June 763 BC; both the epidemics and the eclipse could have been interpreted by the Assyrian populace as the gods withdrawing their divine support for Ashur-dan's rule. Though Assyria stabilized again under Ashur-nirari V, he appears to have been relatively idle. Ashur-nirari campaigned in only three of the ten years of his reign and is not recorded to have conducted any construction projects. Though the Assyrian army under Ashur-nirari was successful against Arpad in northwestern Syria in 754 BC, it was also beaten in an important battle against Sarduri II of Urartu. In 745 BC, Ashur-nirari was succeeded by Tiglath-Pileser III (r. 745–727 BC), probably another son of Adad-nirari III. His accession ushered in a new era of Neo-Assyrian history. While the conquests of earlier kings were impressive, they contributed little to Assyria's rise as a consolidated empire. Through campaigns aimed at conquest and not just extraction of seasonal tribute, as well as reforms meant to efficiently organize the army and centralize the realm, Tiglath-Pileser is regarded by some as the first true initiator of Assyria's "imperial" phase. Tiglath-Pileser is the earliest Assyrian king mentioned in the Babylonian Chronicles and the Hebrew Bible, and thus the earliest king for whom important outside perspectives on his reign exist. Early on, Tiglath-Pileser reduced the influence of the previously powerful magnates, dividing their territories into smaller provinces under the rule of royally appointed provincial governors and withdrawing their right to commission official building inscriptions in their own names. Shamshi-ilu appears to have been subjected to a damnatio memoriae, as his name and titles were erased from some of his inscriptions. During his 18-year reign, Tiglath-Pileser campaigned in all directions. In his first year as king, he warred against King Nabonassar of Babylon and conquered territories on the eastern side of the Tigris river. In 744 BC he conducted a successful campaign in the region around the Zagros Mountains, where he created two new Assyrian provinces. From 743 to 739 BC, he focused on Urartu and northern Syria. Campaigns against both targets proved to be resoundingly successful; in 743 BC Sarduri II of Urartu was defeated, and in 740 BC Arpad in Syria was conquered after a three-year-long siege. With the nearest threats dealt with, Tiglath-Pileser began to focus on lands that had never been under solid Assyrian rule. In 738 BC the Neo-Hittite states of Pattin and Hatarikka and the Phoenician city of Sumur were conquered. In 734 BC the Assyrian army marched through the Levant all the way to the Egyptian border, forcing several of the states on the way, such as Ammon, Edom, Moab and Judah, to pay tribute and become Assyrian vassals. In 732 BC the Assyrians captured Damascus and much of Transjordan and Galilee. Tiglath-Pileser's conquests are, in addition to their extent, also noteworthy because of the large scale on which he undertook resettlement policies; he settled tens to hundreds of thousands of foreigners both in the Assyrian heartland and in far-away underdeveloped provinces. Late in his reign, Tiglath-Pileser turned his eye towards Babylon. For a long time, the political situation in the south had been volatile, with conflict between the traditional urban elites of the cities, Aramean tribes in the countryside, and Chaldean warlords in the south.
In 732 BC the Chaldean warlord Nabu-mukin-zeri seized Babylon and became king, a development Tiglath-Pileser used as an excuse to invade Babylonia. In 729 BC he succeeded in capturing Babylon and defeating Nabu-mukin-zeri, and thus assumed the title "king of Babylon" alongside "king of Assyria". To increase the willingness of the Babylonian populace to accept him as ruler, Tiglath-Pileser twice partook in the traditional Babylonian Akitu (New Year's) celebrations, held in honor of the national deity Marduk. Control over Babylonia was secured through campaigns against the remaining Chaldean strongholds in the south. By the time of his death in 727 BC, Tiglath-Pileser had more than doubled the territory of the empire. His policy of direct rule rather than rule through vassal states brought important changes to the Assyrian state and its economy; rather than tribute, the empire grew more reliant on taxes collected by provincial governors, a development which increased administrative costs but also reduced the need for military intervention. Tiglath-Pileser was succeeded by his son Shalmaneser V (r. 727–722 BC). Though few royal inscriptions or other sources survive from Shalmaneser's brief reign, the empire appears to have been largely stable under his rule. Shalmaneser managed to secure some lasting achievements; he was probably the Assyrian king responsible for conquering Samaria and thus bringing an end to the ancient Kingdom of Israel, and he also appears to have annexed lands in northern Syria and Cilicia. Shalmaneser was succeeded by Sargon II (r. 722–705 BC), who in all likelihood was a usurper who deposed his predecessor in a palace coup. Like Tiglath-Pileser before him, Sargon in his inscriptions made no references to prior kings and instead ascribed his accession purely to divine selection. Sargon's rise to power marked the foundation of the Sargonid dynasty; his usurpation led to considerable internal unrest. In his own inscriptions, Sargon claims to have deported 6,300 "guilty Assyrians", probably Assyrians from the heartland who opposed his accession. Several peripheral regions of the empire also revolted and regained their independence. The most significant of the revolts was the successful uprising of the Chaldean warlord Marduk-apla-iddina II, who took control of Babylon, restoring Babylonian independence, and allied with the Elamite king Ḫuban‐nikaš I. Though Sargon tried early on to dislodge Marduk-apla-iddina, attacking Aramean tribes who supported him and marching out to fight the Elamites, his efforts were initially unsuccessful, and in 720 BC the Elamites defeated Sargon's forces at Der. Sargon's early reign was more successful in the west. There, another movement, led by Yau-bi'di of Hamath and supported by Simirra, Damascus, Samaria and Arpad, also sought to regain independence and threatened to destroy the sophisticated provincial system imposed on the region under Tiglath-Pileser. While Sargon was campaigning in the east in 720 BC, his generals defeated Yau-bi'di and the others. Sargon continued to focus on both east and west, successfully warring against Šinuḫtu in Anatolia and Mannaya in western Iran. In 717 BC Sargon took Carchemish and secured the city's substantial silver treasury. Perhaps it was the acquisition of these funds which inspired Sargon to begin the construction of a new capital of the empire from scratch, named Dur-Sharrukin ("Fort Sargon") after himself.
Another possible motivating factor was that Sargon did not feel safe at Nimrud after the early conspiracies against him. As construction work progressed, Sargon continued to go on military campaigns, which ensured that Assyria's geopolitical dominance and influence expanded significantly in his reign. Between 716 and 713 BC, Sargon fought against Urartu, the Medes, Arab tribes, and Ionian pirates in the eastern Mediterranean. A significant victory was the 714 BC campaign against Urartu, in which Rusa I was defeated and much of the Urartian heartland was plundered. In 709 BC Sargon defeated seven kings in the land of Ia', in the district of Iadnana or Atnana. The land of Ia' is assumed to be the Assyrian name for Cyprus, and some scholars suggest that the latter may mean "the islands of the Danaans", or Greece. There are other inscriptions referring to the land of Ia' in Sargon's palace at Khorsabad. Cyprus was thus absorbed into the Assyrian Empire, with the victory commemorated by a stele found near present-day Larnaca. Late in his reign, Sargon again turned his attention to Babylon. When Sargon marched south in 710 BC he encountered little resistance. After Marduk-apla-iddina fled to Dur-Yakin, the stronghold of his Chaldean tribe, the citizens of Babylon willingly opened the city's gates to Sargon. The situation remained somewhat uncertain until Sargon made peace with Marduk-apla-iddina after prolonged negotiations, which resulted in Marduk-apla-iddina and his family being given the right to escape to Elam in exchange for Sargon being allowed to dismantle the walls of Dur-Yakin. Between 710 and 707 BC, Sargon resided in Babylon, receiving foreign delegations there and participating in local traditions, such as the Akitu festival. In 707 BC Sargon returned to Nimrud, and in 706 BC Dur-Sharrukin was inaugurated as the empire's capital. Sargon did not get to enjoy his new city for long; in 705 BC he embarked on his final campaign, directed against Tabal in Anatolia. Sargon was killed in battle, and the army was unable to recover his body. Shocked and frightened by the manner of his father's death and its theological implications, Sargon's son Sennacherib distanced himself from his father. Sennacherib never mentioned Sargon in his inscriptions and abandoned Dur-Sharrukin, instead moving the capital to Nineveh, previously the residence of the crown prince. One of the first building projects he undertook was restoring a temple dedicated to the death-god Nergal, likely due to worries concerning his father's fate. Several of the vassal states in the Levant stopped paying tribute, and Marduk-apla-iddina retook Babylon with the aid of the Elamites. Sennacherib was thus faced with numerous enemies almost immediately upon his accession, and it took years to defeat them all. In 704 BC he sent the Assyrian army, led by officials, to Anatolia to avenge Sargon's death, and he then began warring against Marduk-apla-iddina. After nearly two years of fighting, Sennacherib succeeded in recapturing Babylonia, though Marduk-apla-iddina fled to Elam once again, and Bel-ibni, a Babylonian noble who had been raised at the Assyrian court, was installed as vassal king of Babylon. In 701 BC Sennacherib undertook the most famous campaign of his reign, invading the Levant to force the states there to pay tribute again. This conflict is the first Assyrian war recorded in great detail in sources other than Assyrian inscriptions, including the Hebrew Bible.
The Assyrian account diverges somewhat from the Biblical one; whereas the Assyrian inscriptions describe the campaign as a resounding success, in which tribute was regained, some states were annexed outright and Sennacherib even managed to stop Egyptian ambitions in the region, the Bible describes Sennacherib suffering a crushing defeat outside Jerusalem. Since Hezekiah, the king of Judah (who ruled Jerusalem), paid a heavy tribute to Sennacherib after the campaign, modern scholars consider it more likely that the Biblical account, motivated by theological concerns, is highly distorted and that Sennacherib succeeded in the goals of the campaign and re-imposed Assyrian authority in the region. Bel-ibni's tenure as Babylonian vassal ruler did not last long, and he was continually opposed by Marduk-apla-iddina and another Chaldean warlord, Mushezib-Marduk, who hoped to seize power for themselves. In 700 BC Sennacherib invaded Babylonia again and drove Marduk-apla-iddina and Mushezib-Marduk away. Needing a vassal ruler with stronger authority, he placed his eldest son, Ashur-nadin-shumi, on the throne of Babylon. For a few years, internal peace was restored, and Sennacherib kept the army busy with a few minor campaigns. During this time, he focused his attention mainly on building projects; between 699 and 695 BC he ambitiously renovated Nineveh, constructing among other works the Southwest Palace and a 12-kilometer-long (7.5 mi) and 25-meter-tall (82 ft) wall. It is possible that a large park constructed near the Southwest Palace served as the inspiration for the Hanging Gardens of Babylon. Sennacherib's choice of Nineveh as capital probably resulted not only from his having long lived in the city as crown prince, but also from its ideal location: it was an important point in the established road and trade systems and lay close to an important ford across the Tigris river. In 694 BC Sennacherib invaded Elam, with the explicit goal of rooting out Marduk-apla-iddina and his supporters. Sennacherib sailed across the Persian Gulf with a fleet built by Phoenician and Greek shipwrights and captured and sacked countless Elamite cities. He never got his revenge on Marduk-apla-iddina, who died of natural causes before the Assyrian army landed, and the campaign instead significantly escalated the conflict with the anti-Assyrian faction in Babylonia and with the Elamites. The Elamite king Hallushu-Inshushinak took revenge on Sennacherib by marching on Babylonia while the Assyrians were busy in his lands. During this campaign, Ashur-nadin-shumi was captured and taken to Elam, where he was probably executed. In his place, the Elamites and Babylonians crowned the Babylonian noble Nergal-ushezib as king of Babylon. Though Sennacherib just a few months later defeated and captured Nergal-ushezib in battle, the war dragged on as the Chaldean warlord Mushezib-Marduk took control of Babylon late in 693 BC and assembled a large coalition of Chaldeans, Arameans, Arabs and Elamites to resist Assyrian retribution. After a series of battles, Sennacherib finally recaptured Babylon in 689 BC. Mushezib-Marduk was captured and Babylon was destroyed in an effort to eradicate Babylonian political identity. The last years of Sennacherib's reign were relatively peaceful, but problems began to arise within the royal court. Though Sennacherib's next eldest son, Arda-Mulissu, had replaced Ashur-nadin-shumi as heir, around 684 BC the younger son Esarhaddon was proclaimed heir instead.
Perhaps Sennacherib was influenced by Esarhaddon's mother Naqi'a, who in later times became increasingly prominent and powerful. Disappointed, Arda-Mulissu and his supporters pressured Sennacherib to reinstate him as heir. Though they succeeded in forcing Esarhaddon into exile in the west for his own protection, Sennacherib did not accept Arda-Mulissu as heir. In late 681 BC Arda-Mulissu killed his father in a temple in Nineveh. Because of the regicide, Arda-Mulissu lost some of his previous support and was unable to undergo a coronation before Esarhaddon returned with an army. Two months after Sennacherib was murdered, Esarhaddon captured Nineveh and became king; Arda-Mulissu and his supporters fled the empire. Esarhaddon sought to establish a balance of power between the northern and southern parts of his empire. Thus, he rebuilt Babylon in the south, viewing Sennacherib's destruction of the city as excessively brutal, but also made sure not to neglect the temples and cults of Assyria. As a result of his tumultuous rise to the throne, he was distrustful of his officials and family members; this distrust had the side effect of increasing the prominence of women, whom he trusted more, during his reign. Esarhaddon's mother Naqi'a, his queen Esharra-hammat and his daughter Serua-eterat were all more powerful and prominent than most women in earlier Assyrian history. The king was also frequently ill and appears to have suffered from depression, which intensified after the deaths of his queen and several of his children. Despite his poor physical and mental health, Esarhaddon led many successful military campaigns, several of them farther away from the Assyrian heartland than those of any previous king. He defeated the Cimmerians who plagued the northwestern part of the empire, conquered the cities of Kundu and Sissû in Anatolia, and conquered the Phoenician city of Sidon, which was renamed Kar-Aššur‐aḫu‐iddina ("fortress of Esarhaddon"). After fighting the Medes in the Zagros Mountains, Esarhaddon campaigned further to the east than any king before him, reaching as far into modern-day Iran as Dasht-e Kavir. Esarhaddon also invaded the eastern Arabian peninsula, where he conquered a large number of cities, including Diḫranu (modern Dhahran). Esarhaddon's greatest military achievement was his 671 BC conquest of Egypt. With logistical support from various Arab tribes, the 671 BC invasion took a difficult route through central Sinai and caught the Egyptian armies by surprise. After a series of three large battles against Pharaoh Taharqa, Esarhaddon captured Memphis, the Egyptian capital. Taharqa fled south to Nubia, and Esarhaddon allowed most of the local governors to remain in place, though he left some of his representatives to oversee them. The conquest of Egypt brought the Neo-Assyrian Empire to its greatest extent. Though he was among the most successful kings in Assyrian history, Esarhaddon faced numerous conspiracies against his rule, perhaps because the king's illnesses could be seen as a sign that the gods had withdrawn their divine support for his rule.
Around the time of the Egyptian campaigns, there were at least three major insurgencies against Esarhaddon: in Nineveh, the chief eunuch Ashur-nasir was prophesied by a Babylonian hostage to replace Esarhaddon as king; a prophetess in Harran proclaimed that Esarhaddon and his lineage would be "destroyed" and that a usurper named Sasî would become king; and in Assur, the local governor instigated a plot after receiving a prophetic dream in which a child rose from a tomb and handed him a staff. Through a well-developed network of spies and informants, Esarhaddon uncovered all of these coup attempts and in 670 BC had a large number of high-ranking officials put to death. In 672 BC, Esarhaddon decreed that his younger son Ashurbanipal (r. 669–631 BC) would succeed him in Assyria and that the older son Shamash-shum-ukin would rule Babylon. To ensure that the succession to the throne would go more smoothly than his own accession, Esarhaddon forced everyone in the empire, not only the prominent officials but also far-away vassal rulers and members of the royal family, to swear oaths of allegiance to the successors and to respect the arrangement. When Esarhaddon died of an illness while on his way to campaign in Egypt in 669 BC, Naqi'a had similar oaths of allegiance sworn to Ashurbanipal, who became king without incident. One year later, Ashurbanipal oversaw Shamash-shum-ukin's inauguration as (largely ceremonial) king of Babylon. Ashurbanipal is often regarded as the last great king of Assyria. His reign was the last in which Assyrian troops marched in all directions of the Near East. In 667 and 664 BC, Ashurbanipal invaded Egypt in the wake of anti-Assyrian uprisings; both Pharaoh Taharqa and his nephew Tantamani were defeated, and Ashurbanipal captured the southern Egyptian capital of Thebes, from which enormous amounts of plunder were sent back to Assyria. In 664 BC, after a prolonged period of peace, the Elamite king Urtak launched a surprise invasion of Babylonia which renewed hostilities. After ten years of indecisive campaigning, the Elamite king Teumman was defeated, captured and executed in 653 BC at a battle by the Ulai river. Teumman's head was brought back to Nineveh and displayed for the public. Elam, however, remained undefeated and continued to work against Assyria for some time. Among the growing problems in Ashurbanipal's early reign were disagreements between him and Shamash-shum-ukin. While Esarhaddon's documents suggest that Shamash-shum-ukin was intended to inherit all of Babylonia, it appears that he only controlled the immediate vicinity of Babylon, since numerous other Babylonian cities apparently ignored him and considered Ashurbanipal to be their king. Over time, it seems that Shamash-shum-ukin grew to resent his brother's overbearing control, and in 652 BC he revolted with the aid of several Elamite kings. In 648 BC Ashurbanipal captured Babylon after a long siege and devastated the city. Shamash-shum-ukin might have died by setting himself on fire in his palace. Ashurbanipal replaced him with the puppet ruler Kandalanu and then marched on Elam. The Elamite capital of Susa was captured and devastated, and large numbers of Elamite prisoners were brought to Nineveh, tortured and humiliated. Ashurbanipal chose not to annex and integrate Elam into the Neo-Assyrian Empire, instead leaving it open and undefended. In the following decades, the Persians would migrate into the region and rebuild the ruined Elamite strongholds for their own use.
Though Ashurbanipal's inscriptions present Assyria as an uncontested and divinely supported hegemon over all the world, cracks were starting to form in the empire during his reign. At some point after 656 BC, the empire lost control of Egypt, which was then ruled by Pharaoh Psamtik I, founder of Egypt's twenty-sixth dynasty. Ashurbanipal went on numerous campaigns against various Arab tribes, campaigns which failed to consolidate Assyrian rule over their lands and wasted Assyrian resources. Perhaps most importantly, his devastation of Babylon after defeating Shamash-shum-ukin fanned anti-Assyrian sentiments in southern Mesopotamia, which soon after his death would have disastrous consequences. Ashurbanipal's reign also appears to have seen a growing disconnect between the king and the traditional elite of the empire; eunuchs grew powerful in his time, being granted large tracts of land and numerous tax exemptions. After Ashurbanipal's death in 631 BC, the throne was inherited by his son Ashur-etil-ilani. Though some historians have put forward the idea that Ashur-etil-ilani was a minor upon his accession, this is unlikely given that he is attested to have had children during his brief reign. Despite being his father's legitimate successor, he appears to have been installed against considerable opposition with the aid of the chief eunuch Sin-shumu-lishir. The Assyrian official Nabu-rihtu-usur appears to have attempted to usurp the throne, but his conspiracy was swiftly crushed by Sin-shumu-lishir. Since excavated ruins at Nineveh from around the time of Ashurbanipal's death show evidence of fire damage, the plot might have resulted in violence and unrest within the capital. Ashur-etil-ilani appears to have been a relatively idle ruler; no records of any military campaigns are known, and his palace at Nimrud was much smaller than those of previous kings. It is possible that the government was more or less run by Sin-shumu-lishir throughout his reign. After a reign of four years, Ashur-etil-ilani died in unclear circumstances in 627 BC and was succeeded by his brother Sinsharishkun. Sinsharishkun's accession did not go unchallenged. Immediately upon Sinsharishkun's rise to the throne, Sin-shumu-lishir rebelled and attempted to claim the throne for himself, despite lacking any genealogical claim; he is the only eunuch ever to have done so in Assyrian history. Sin-shumu-lishir successfully seized several prominent cities in Babylonia, including Nippur and Babylon, but was defeated by Sinsharishkun after three months. This victory did little to alleviate Sinsharishkun's problems. The Babylonian vassal king Kandalanu also died in 627 BC. The swift regime changes and internal unrest bolstered Babylonian hopes of shaking off Assyrian rule and regaining independence; the independence movement swiftly proclaimed as its leader Nabopolassar, probably a member of a prominent political family in Uruk. Some months after Sin-shumu-lishir's defeat, Nabopolassar and his allies captured both Nippur and Babylon, though the Assyrian response was swift and Nippur was recaptured in October 626 BC. Sinsharishkun's attempts to retake Babylon and Uruk were unsuccessful, however, and in the aftermath Nabopolassar was formally invested as king of Babylon in November 626 BC, restoring Babylonia as an independent kingdom. In the years that followed Nabopolassar's coronation, Babylonia became a brutal battleground between Assyrian and Babylonian armies. Though cities often repeatedly changed hands, the Babylonians slowly pushed Sinsharishkun's armies out of the south.
Under Sinsharishkun's personal leadership, the Assyrian campaigns against Nabopolassar initially looked to be successful: in 625 BC, Sippar was retaken and Nabopolassar failed to take Nippur; in 623 BC the Assyrians recaptured Nabopolassar's ancestral home city Uruk. Sinsharishkun might ultimately have been victorious had it not been for a usurper, whose name is not known, from the empire's western territories, who rebelled in 622 BC, marched on Nineveh and seized the capital. Though this usurper was defeated by Sinsharishkun after 100 days, the absence of the Assyrian army allowed Nabopolassar's forces to capture all of Babylonia in 622–620 BC. Despite this loss, there was little reason for the Assyrians to suspect that Nabopolassar's consolidation of Babylonia was a significant event and not simply a temporary inconvenience; in previous Babylonian uprisings the Babylonians had at times temporarily gained the upper hand. More alarming were Nabopolassar's first forays into the Assyrian heartland in 616 BC, which amounted to capturing some border cities and defeating local Assyrian garrisons. The Assyrian heartland had not been invaded for 500 years, and the event showed the situation to be dire enough that Pharaoh Psamtik entered the conflict on Assyria's side. Psamtik was probably primarily interested in Assyria remaining as a buffer between his own growing empire and the Babylonians and other powers in the east. In May 615 BC Nabopolassar assaulted Assur, the empire's southernmost remaining city. Sinsharishkun succeeded in repulsing the assault and, for a time, saving the old city. It is doubtful that Nabopolassar would have achieved a lasting victory without the entrance of the Median Empire into the conflict. Long fragmented into several tribes and often targets of Assyrian military campaigns, the Medes had been united under King Cyaxares. In late 615 or in 614 BC, Cyaxares and his army entered Assyria and conquered the region around Arrapha in preparation for a campaign against Sinsharishkun. The Medes mounted attacks on both Nimrud and Nineveh and captured Assur; the ancient city was brutally plundered and its inhabitants massacred. Nabopolassar arrived at Assur after the sack and upon his arrival met and allied with Cyaxares. In 612 BC, after a siege lasting two months, the Medes and Babylonians captured Nineveh, and Sinsharishkun died defending the city. The capture of the city was followed by extensive looting and destruction and effectively meant the end of the Assyrian Empire. After the fall of Nineveh, an Assyrian general and prince, possibly Sinsharishkun's son, led the remnants of the Assyrian army and established himself at Harran in the west, choosing the regnal name Ashur-uballit II. With the loss of Assur, Ashur-uballit could not undergo the traditional Assyrian coronation ritual and as such formally ruled under the title of "crown prince", though Babylonian documents considered him to be the Assyrian king. Ashur-uballit's rule at Harran lasted until late 610 or early 609 BC, when the city was captured by the Babylonians and the Medes. Three months later, an attempt by Ashur-uballit and the Egyptians to retake the city failed disastrously, and Ashur-uballit disappears from the sources, his ultimate fate unknown. The remnants of the Assyrian army continued to fight alongside the Egyptian forces against the Babylonians until a crushing defeat at the Battle of Carchemish in 605 BC.
Though Assyrian culture endured through the subsequent post-imperial period and beyond, Ashur-uballit's final defeat at Harran marked the end of the ancient line of Assyrian kings and of Assyria as a state.

Reasons for the fall of Assyria

The fall of Assyria was swift, dramatic and unexpected; scholars continue to grapple with the question of which factors caused the empire's quick and violent downfall. One commonly cited possible explanation is the unrest and the civil wars that immediately preceded Nabopolassar's rise. Such civil conflict could have caused a crisis of legitimacy, and the members of the Assyrian elite may have felt increasingly disconnected from the Assyrian king. However, there is no evidence that Ashur-etil-ilani and Sinsharishkun warred with each other, and other uprisings of Assyrian officials, namely the unrest upon Ashur-etil-ilani's accession, the rebellion of Sin-shumu-lishir, and the capture of Nineveh by a usurper in 622 BC, were dealt with relatively quickly. Protracted civil war is thus unlikely to have been the reason for the empire's fall. Another proposed explanation is that Assyrian rule suffered from serious structural vulnerabilities; most importantly, Assyria appears to have had little to offer the regions it conquered other than order and freedom from strife. Conquered lands were mostly kept in line through fear and terror, alienating local peoples. As such, people outside of the Assyrian heartland may have had little reason to remain loyal when the empire came under attack. Further explanations may lie in the actions and policies of the Assyrian kings. Under Esarhaddon's reign, many experienced and capable officials and generals had been killed as the result of the king's paranoia; under Ashurbanipal, many had lost their positions to eunuchs. Some historians have further deemed Ashurbanipal to have been an "irresponsible and self-indulgent king" since he at one point granted his chief musician the honor of having a year named after him. Though it would be easy to place the blame on Sinsharishkun, there is no evidence to suggest that he was an incompetent ruler. No defensive plan existed for the Assyrian heartland since it had not been invaded for centuries, and Sinsharishkun was a capable military leader using well-established Mesopotamian military tactics. In a normal war, Sinsharishkun could have been victorious, but he was unprepared to go on the defensive against an enemy that was numerically superior and that aimed to destroy his country rather than conquer it. Yet another possible factor was environmental issues. The massive rise in population in the Assyrian heartland during the height of the Neo-Assyrian Empire might have meant that a period of severe drought affected Assyria to a much larger extent than nearby territories such as Babylonia. It is impossible to determine the severity of such demographic and climate-related effects. A major reason for the Assyrian collapse was the failure to resolve the "Babylonian problem", which had plagued Assyrian kings since Assyria first conquered southern Mesopotamia. Despite the many attempts of the kings of the Sargonid dynasty to resolve the constant rebellions in the south in a variety of different ways, from Sennacherib's destruction of Babylon to Esarhaddon's restoration of it, rebellions and insurrections remained common. This was despite Babylonia for the most part being treated more leniently than other conquered regions.
Babylonia was for instance not annexed directly into Assyria but preserved as a full kingdom, either ruled by an appointed client king or by the Assyrian king in a personal union. Despite the privileges the Assyrians saw themselves as extending to the Babylonians, Babylon refused to be passive in political matters, likely because the Babylonians saw the Assyrian kings, who rarely visited the city, as failing to undertake the traditional religious duties of the Babylonian kings. The strong appreciation of Babylonian culture in Assyria sometimes turned to hatred, which led to Babylon suffering several brutal acts of retribution from Assyrian kings after revolts. Nabopolassar's revolt was the last in a long line of Babylonian uprisings against the Assyrians; Sinsharishkun's failure to stop it, despite trying for years, doomed his empire. Despite all of these simultaneous factors, it is possible that the empire could have survived if the unexpected alliance between the Babylonians and Medes had not been sealed.

Government

Sennacherib, the great king, the mighty king, king of the Universe, king of Assyria, king of the Four Corners of the World; favorite of the great gods; the wise and crafty one; strong hero, first among all princes; the flame that consumes the insubmissive, who strikes the wicked with the thunderbolt.
— Excerpt from the royal titles of Sennacherib (r. 705–681 BC)

In documents describing coronations of Assyrian kings from both the Middle and Neo-Assyrian periods, it is specifically recorded that the king was commanded by Ashur, the Assyrian national deity, to "broaden the land of Ashur" and "extend the land at his feet". The Assyrians saw their empire as being the part of the world overseen and administered by Ashur, through his human agents. In their ideology, the "outer realm" beyond Assyria was characterized by chaos, and its people were uncivilized, with unfamiliar cultural practices and strange languages. The existence of the outer realm was regarded as a threat to the cosmic order within Assyria, and as such it was the king's duty to expand the realm of Ashur and incorporate these strange lands, converting chaos to civilization. The position of the king above all others was regarded as natural in ancient Assyria since he, though not divine, was seen as the divinely appointed representative of the god Ashur on earth. His power thus derived from his unique position among humanity, and his obligation to extend Assyria to eventually cover the whole world was cast as a moral, humane and necessary duty rather than as exploitative imperialism. Though their power was nearly limitless, the kings were not free from tradition and their obligations. The kings were obliged to campaign once a year to bring Ashur's rule and civilization to the "four corners of the world"; if a king did not set out to campaign, his legitimacy was severely undermined. Campaigns were usually justified through an enemy having made some sort of (real or fabricated) affront against Ashur. The overwhelming force of the Assyrian army was used to instill the idea that it was invincible, thus further legitimizing the Assyrian king's rule. The king was also responsible for performing various rituals in support of the cult of Ashur and the Assyrian priesthood. Because the rule and actions of the Assyrian king were seen as divinely sanctioned, resistance to Assyrian sovereignty in times of war was regarded as resistance against divine will, deserving of punishment.
Peoples and polities who revolted against Assyria were seen as criminals against the divine world order. The legitimacy of the king hinged on acceptance among the imperial elite, and to a lesser extent the wider populace, of the idea that the king was both divinely chosen by Ashur and uniquely qualified for his position. There were various methods of legitimization employed by the Neo-Assyrian kings and their royal courts. One of the common methods, apparently an innovation of the Neo-Assyrian Empire, was the manipulation and codification of the king's own personal history in the form of annals. This genre of texts is believed to have been created to support the king's legitimacy through recording the events of his reign, particularly his military exploits. The annals were copied by scribes and then disseminated throughout the empire for propagandistic purposes, adding to the perception of the king's power. In many cases, historical information was also inscribed on temples and other buildings. Kings also made use of genealogical legitimacy. Real (and in some cases perhaps fabricated) connections to past royalty conferred both uniqueness and authenticity, establishing the monarch as a descendant of great ancestors who on Ashur's behalf had been responsible for creating and expanding civilization. Nearly all Neo-Assyrian kings highlighted their royal lineage in their inscriptions. Genealogical qualification presented a problem for usurpers who did not belong to the direct royal lineage. The two Neo-Assyrian kings generally believed to have been usurpers, Tiglath-Pileser III and Sargon II, for the most part did not mention genealogical connections in their inscriptions but instead relied on direct divine appointment. Both of these kings claimed in several of their inscriptions that Ashur had "called my name" or "placed me on the throne". Queens were titled issi ekalli, which could be abbreviated to sēgallu, both terms meaning "woman of the palace". The feminine version of the word for "king" (šarru) was šarratu, but this term was only applied to goddesses and to queens of foreign nations who ruled in their own right. Since the Assyrian consorts did not rule, the Assyrians did not refer to them as šarratu. The difference in terminology does not necessarily mean that foreign queens, who often governed significantly smaller territories, were seen as having a higher status than the Assyrian queens. A scorpion was frequently used in documents and on objects as the symbol designating the queens, apparently serving as their royal symbol. Though the queens, like all other female and male members of the royal court, ultimately derived their power and influence from their association with the king, they were not pawns without political power. The queens had their own say in financial affairs, and while they ideally were supposed to produce an heir to the throne, they also had several other duties and responsibilities, often at very high levels of the government. The queens were involved in the arrangement of religious activities, dedicated gifts to the gods, and supported temples financially. They were in charge of their own often considerable financial resources, evidenced not only by surviving texts concerning their households and activities but also by the treasures uncovered in the queens' tombs at Nimrud. Under the Sargonid dynasty, military units subservient to the queen were created.
Such units were not just an honor guard for the queen; they included commanders, cohorts of infantry and chariots, and are known to have sometimes partaken in military campaigns alongside other units. Perhaps the most powerful of the Neo-Assyrian queens was Shammuramat, queen of Shamshi-Adad V, who might have ruled as regent in the early reign of her son Adad-nirari III and participated in military campaigns. Also powerful was Esarhaddon's mother Naqi'a, though whether she held the status of queen is not certain. Naqi'a is the best-documented woman of the Neo-Assyrian period, and she can be seen influencing politics in the reigns of Sennacherib, Esarhaddon and Ashurbanipal. The unprecedented success of the Neo-Assyrian Empire was tied to its ability to efficiently incorporate conquered lands into its administrative system. It is clear that there was a strong sense of order in the Assyrian mindset, so much so that the Neo-Assyrians have sometimes been referred to as the "Prussians of the ancient Near East". This sense of order manifested in various parts of Neo-Assyrian society, including in the more square and regular shape of the characters in Neo-Assyrian writing and in the organized administration of the Neo-Assyrian Empire, which was divided into a set of provinces. The idea of imposing order by creating well-organized hierarchies of power was part of the justifications used by Neo-Assyrian kings for their expansionism: in one of his inscriptions, Sargon II explicitly pointed out that some of the Arab tribes he had defeated had previously "known no overseer or commander". At the top of the provincial administration was the provincial governor (bēl pīhāti or šaknu). Second-in-command was probably the šaniu (translated as "deputy" by modern historians; the title literally means "second"). At the bottom of the hierarchy were the village managers (rab ālāni), in charge of one or more villages or other settlements, whose primary duty was to collect taxes in the form of labor and goods. Provincial governors were directly responsible for various aspects of provincial administration, including construction, taxation and security. Security concerns were mostly relevant in the frontier provinces, whose governors were also responsible for gathering intelligence about enemies across the border. To this end, a vast network of informants or spies (daiālu) was employed to keep officials informed of events and developments in foreign lands. Provincial governors were also responsible for supplying offerings to temples, in particular to the temple of Ashur. This channeling of revenues from across the empire was not only meant as a method to collect profit but also as a way to connect the elites across the empire to the religious institutions in the Assyrian heartland. The royal administration kept close watch of institutions and individual officials across the empire through a system of officials responsible directly to the king, called qēpu (usually translated as "royal delegates"). Control was maintained locally through the regular deployment of low-ranking officials to the empire's smaller settlements, i.e. villages and towns. Corvée officers (ša bēt-kūdini) kept tallies on the labor performed by forced laborers and the remaining time owed, and village managers kept provincial administrators informed of the conditions of the settlements in their provinces.
As the empire grew and time went on, many of its foreign subject peoples became incorporated into the Assyrian administration, with more high officials in the later times of the empire being of non-Assyrian origin. The inner elite included two main groups, the "magnates" and the "scholars". The "magnates" is a term used by modern historians for the seven highest-ranking officials in the administration: the masennu (treasurer), nāgir ekalli (palace herald), rab šāqê (chief cupbearer), rab ša-rēši (chief officer/eunuch), sartinnu (chief judge), sukkallu (grand vizier) and turtanu (commander-in-chief). These offices were sometimes occupied by members of the royal family. Occupants of four of the offices (the masennu, nāgir ekalli, rab šāqê and turtanu) served as governors of important provinces and thus as controllers of local tax revenues and administration. All of the magnates were involved with the Assyrian military, each controlling significant numbers of forces, and they often owned large and tax-free estates. Such estates were scattered across the empire, likely to defuse the power of local provincial authorities and to tie the personal interest of the inner elite to the well-being of the entire empire. The "scholars", called ummânī, included people specialized in various disciplines, including scribal arts, medicine, exorcism, divination and astrology. Their role was chiefly to protect, advise and guide the king through interpreting omens, maintaining his ritual purity and protecting him from evil. To govern an empire of unprecedented size, the Neo-Assyrian Empire, probably first under Shalmaneser III, developed a sophisticated state communication system. Use of this system was restricted to messages sent by high officials; their messages were stamped with their seals, which demonstrated their authority. Messages without such seals could not be sent through the communication system. Per estimates by Karen Radner, a message sent from the western border province Quwê to the Assyrian heartland, a distance of 700 kilometers (430 miles) over a stretch of lands featuring many rivers without any bridges, could take less than five days to arrive, implying an average pace of at least roughly 140 kilometers (85 miles) per day. Such communication speed was unprecedented before the rise of the Neo-Assyrian Empire and was not surpassed in the Middle East until the telegraph was introduced by the Ottoman Empire in 1865, nearly 2,500 years later. The quick communication between the imperial court and officials in the provinces was an important contributing factor to the cohesion of the empire and an innovation which paved the way for its geopolitical dominance. The government used mules for long-distance state messengers because of their strength, hardiness and low maintenance; Assyria was the first civilization to use mules for this purpose. It was common for messengers to ride with two mules, which made it possible to alternate between them to keep them fresh and ensured that a messenger was not stranded if one mule became lame. Messages were sent either through a trusted envoy or through a series of relay riders. The relay system, called kalliu, was invented by the Assyrians and allowed for significantly faster speeds in times of need, with each rider only covering a segment of the travel route, ending at a relay station where the letter was passed to the next rider, equipped with a fresh pair of mules. To facilitate transport and long-distance travel, the empire constructed and maintained a vast road system.
Called the hūl šarri ("king's road"), the roads might originally have grown from routes used by the military during campaigns and were continually expanded. The largest phase of road expansion occurred between the reigns of Shalmaneser III and Tiglath-Pileser III.

Military

At the height of the empire, the Assyrian army was the strongest army yet assembled in world history. The number of soldiers was likely several hundred thousand. The Assyrians pioneered innovative strategies, particularly concerning cavalry and siege warfare, that would be used in warfare for millennia. Due to detailed royal records and depictions of soldiers and battle scenes on reliefs, the equipment and organization of the army are relatively well understood. Communication within the army and between units was fast and efficient; using the empire's methods of state communication, messages could be sent across vast distances very quickly. Messages could be passed within an army through the use of fire signals. While on campaign, the army was symbolically led by two gods, with standards of Nergal and Adad hoisted to the left and right of the commander. The commander was typically the king, but other officials could be assigned to lead the army into war. The army was chiefly raised through provincial governors levying troops. Provincial governors sometimes led campaigns and negotiated with foreign rulers. Under the Sargonid dynasty, some reforms appear to have been made to the leadership of armies; the office of turtanu was divided into two, and it seems that specific regiments, including their respective land-holdings, were transferred from the king's direct command to the command of the crown prince and the queen. The two most important developments in the Neo-Assyrian period were the large-scale introduction of cavalry and the adoption of iron for armor and weapons. While the Middle Assyrian army had been composed entirely of levies, a central standing army was established in the Neo-Assyrian Empire, dubbed the kiṣir šarri ("king's unit"). Closely accompanying the king were the ša qurubte, or royal bodyguards, some drawn from the infantry. The army was subdivided into kiṣru, units composed of perhaps 1,000 soldiers, most of whom would have been infantry soldiers (zūk, zukkû or raksūte). The infantry was divided into three types: light, medium and heavy. The light infantry, in addition to serving in battles, might also have carried out policing tasks and served in garrisons; it was likely mainly composed of Aramean tribesmen, often barefoot and without helmets, wielding bows or spears. Also included in that group were probably expert archers hired from Elam. The medium infantry were also primarily archers or spearmen but were equipped with characteristic pointed helmets and a shield, though no body armor was used before the time of Ashurbanipal. The heavy infantry included spearmen, archers and slingers and wore boots, pointed helmets, round shields and scale armor. In battle they fought in close formation. Foreign levy troops drafted into the army are often distinguishable in reliefs by distinct headgear. The cavalry (ša pētḫalli) used small horses bred in northern Assyria and was commanded by a general with the title rab muggi ša pētḫalli. The cavalry was at some point divided into two distinct groups: the archers (ṣāb qašti) and the lancers (ṣāb kabābi), both of whom were also equipped with swords. The army incorporated foreign cavalry from Urartu, despite Assyria and Urartu often being at war.
The role of cavalry changed through the Neo-Assyrian period; early on, cavalrymen worked in pairs, one shooting arrows and the other protecting the bowman with his shield. Later on, shock cavalry was introduced. Under Ashurbanipal, horses were equipped with leather armor and a bronze plaque on the head, and riders wore scale armor. Though chariots continued to be used ceremonially and were often used by kings while on campaign, they were largely replaced by cavalry as a prominent element of the army. While on campaign, the army made heavy use of both interpreters/translators (targumannu) and guides (rādi kibsi), both probably drawn from foreigners resettled in Assyria. Innovative techniques and siege engines in siege warfare included tunneling, diverting rivers, blockading to starve defenders out, siege towers, ladders, ramps and battering rams. Another innovation was the camps established by the army while on campaign, which were carefully designed, with collapsible furniture and tents, so that they could be swiftly built and dismantled. Society At the top of society was the king. Below the king were (in descending order of prestige and power) the crown prince, the royal family, the royal court, administrators and army officers. From the time Ashurnasirpal II designated Nimrud as the new capital of the empire onwards, eunuchs held a high position. The highest offices both in the civil administration and the army were typically occupied by eunuchs with deliberately obscure and lowly origins, since this ensured that they would be loyal to the king. The members of the royal court were often handpicked from among the urban elites by eunuchs. Below the higher classes were the Assyrian "citizens",[g] semi-free laborers (usually mostly made up of deportees) and then slaves. Slaves were not numerous; they consisted of prisoners of war and of Assyrians who, unable to pay their debts, had been reduced to debt bondage. In many cases, Assyrian family groups or "clans" formed large population groups within the empire referred to as tribes.[h] It was possible through steady service to the Assyrian state bureaucracy for a family to move up the social ladder; in some cases stellar work conducted by a single individual enhanced the status of their family for generations. Foreigners could also reach high positions; individuals with Aramean names are attested in high offices by the end of the 8th century BC. Though most of the preserved sources only give insight into the higher classes, the vast majority of the population would have been farmers who worked land owned by their families. Families and tribes lived together in settlements near their agricultural lands. It is not clear how local settlements were organized internally beyond each being headed by a mayor who acted as a local judge (more in the sense of a counselor to involved parties than someone who passed judgement) and represented the settlement within the state bureaucracy. It is possible that the mayors were responsible for forwarding local concerns to the state; no revolts by the common people (only by local governors and high officials) are known. Though all means of production were owned by the state, there was a vibrant private economic sector within the empire, with property rights of individuals ensured by the government. Monumental construction projects were undertaken by the state through levying materials and people from local governors, though sometimes also with the help of private contractors.
The wealth generated through private investments was dwarfed by the wealth of the state, which was by far the largest employer in the empire and had an obvious monopoly on agriculture, manufacturing and the exploitation of minerals. The imperial economy advantaged mainly the elite, since it was structured in a way that ensured that surplus wealth flowed to the government and was then used for the maintenance of the state throughout the empire. From the time of the reconquista onwards, the Assyrians made extensive use of an increasingly complex system of deportations and resettlements. Large-scale resettlement projects were carried out in recently defeated enemy lands and cities in an effort to destroy local identities, which would reduce the risk of local peoples rising up against Assyria, and to make the most of the empire's resources by settling people in specific underdeveloped regions to better cultivate their resources. Though likely emotionally devastating for the resettled populations and economically damaging for the regions they were drawn from, the policy did not include killing any of the resettled people and was meant to safeguard the empire and make its upkeep more efficient. The total number of relocated individuals has been estimated at 1.5–4.5 million people. The state valued deportees for their labor and abilities. One of the most important reasons for resettlement was to develop the empire's agricultural infrastructure through introducing Assyrian-developed agricultural techniques to all of the provinces. As a result, many regions of the empire experienced significant improvements in irrigation and thus prosperity. The resettlements were carefully planned out and organized. Resettled people were allowed to bring their possessions with them and to settle and live together with their families. They were no longer counted as foreigners but as Assyrians, which over time contributed to a sense of loyalty to the state. This recognition as Assyrians was not in name only, as documentary evidence attests that the new settlers were not treated any differently by the Assyrian state than the old populations who had lived in the same locations for generations. A consequence of the resettlements, and according to Karen Radner "the most lasting legacy of the Assyrian Empire", was a dilution of the cultural diversity of the Near East, changing the region's ethnolinguistic composition and facilitating the rise of Aramaic as the local lingua franca. Aramaic remained the lingua franca of the region until the suppression of Christians under the Ilkhanate and the Timurid Empire in the 14th century AD. The ancient Assyrians primarily spoke and wrote the Assyrian language, a Semitic language (i.e. related to modern Hebrew and Arabic) closely related to Babylonian, spoken in southern Mesopotamia. Assyrian and Babylonian are generally regarded by modern scholars as dialects of the Akkadian language. The empire was the last state to sponsor the writing of traditional Akkadian cuneiform at all levels of its administration. As a result, ancient Mesopotamian textual tradition and writing practices flourished to an unprecedented degree in the Neo-Assyrian period. Texts written in cuneiform were produced not just in the traditionally Akkadian-speaking Assyrian heartland and Babylonia, but by officials and scribes all over the empire.
At the height of the empire, cuneiform documents were written in lands today part of countries like Israel, Lebanon, Turkey, Syria, Jordan and Iran, which had not produced any cuneiform writings for centuries, and in some cases never before. Three distinct versions, or dialects, of Akkadian were used: Standard Babylonian, Neo-Assyrian and Neo-Babylonian. Standard Babylonian was a highly codified version of the Babylonian spoken around 1500 BC and served as a language of high culture, used for nearly all scholarly documents, literature and poetry. The culture of the elite was strongly influenced by Babylonia in the south. Though the political relationship between Babylonia and the Assyrian central government was variable and volatile, cultural appreciation of the south was constant throughout the Neo-Assyrian period. Many of the documents written in Standard Babylonian were written by scribes who originally came from southern Mesopotamia but were employed in the north. The Neo-Assyrian and Neo-Babylonian forms of Akkadian were vernacular languages, primarily spoken in northern and southern Mesopotamia, respectively. The imperialism of the empire was in some ways different from that of later empires. Perhaps the biggest difference was that the kings did not impose their religion or language on the foreign peoples they conquered outside the Assyrian heartland; the national deity Ashur had no significant temples outside of northern Mesopotamia, and the Neo-Assyrian language, though it served as an official language in the sense that it was spoken by provincial governors, was not forced upon conquered peoples. This lack of suppression of foreign languages, and the growing movement of Aramaic-speaking people into the empire during the Middle Assyrian and early Neo-Assyrian periods, facilitated the spread of the Aramaic language. Aramaic grew in importance and increasingly replaced the Neo-Assyrian language even within the Assyrian heartland. From the 9th century BC onwards, Aramaic became the de facto lingua franca of the empire, with Neo-Assyrian and other forms of Akkadian being relegated to languages of the political elite. Despite its growth, surviving examples of Aramaic from Neo-Assyrian times are significantly fewer in number than Akkadian writings, mostly because Aramaic scribes used perishable materials for their writings. The comparatively sparse record of Aramaic in inscriptions does not mean that the language held a lower status, since royal inscriptions were almost always written in a highly codified and established manner. Some Aramaic-language inscriptions in stone are known, and there are even a handful of examples of bilingual inscriptions, with the same text written in both Akkadian and Aramaic. Despite the Neo-Assyrian Empire's promotion of Akkadian, Aramaic grew to become a widespread vernacular language and began to be used in official state-related capacities as early as the reign of Shalmaneser III; some examples of Aramaic writings are known from a palace he built in Nimrud. The relationship between Akkadian and Aramaic was somewhat complex, however. Though Sargon II explicitly rejected Aramaic as being unfit for royal correspondence,[i] Aramaic was clearly an officially recognized language under his predecessor Shalmaneser V, who owned a set of lion weights inscribed with text in both Akkadian and Aramaic.
That the question of using Aramaic in royal correspondence was even raised in Sargon II's time in the first place was a significant development. In reliefs from palaces built by kings from Tiglath-Pileser III to Ashurbanipal, scribes writing in Akkadian and Aramaic are often depicted side by side, confirming that Aramaic had risen to the position of an official language used by the imperial administration. The Neo-Assyrian Empire was highly multilingual. Through its expansionism, the empire came to rule a vast stretch of land incorporating regions throughout the Near East, where various languages were spoken. These languages included various Semitic languages (including Phoenician, Hebrew, Arabic, Ugaritic, Moabite and Edomite) as well as many non-Semitic languages, such as Indo-European languages (including Luwian and Median), Hurrian languages (including Urartian and Shuprian), Afroasiatic languages (Egyptian), and language isolates (including Mannean and Elamite). Though it was no longer spoken, some scholarly texts from the Neo-Assyrian period were also written in the ancient Sumerian language. Though they must have been necessary, Neo-Assyrian texts rarely mentioned translators and interpreters (targumannu). Translators are only mentioned in cases when Assyrians communicated with speakers of non-Semitic languages. The beginnings of Assyrian scholarship are conventionally placed near the beginning of the Middle Assyrian Empire in the 14th century BC, when Assyrians began to take a lively interest in Babylonian scholarship, which they adapted and developed into their own scholarly tradition. The rising status of scholarship might be connected to the kings beginning to regard the amassing of knowledge as a way to strengthen their power. There was a marked change in royal attitude towards scholarship; while the kings had previously seen preserving knowledge as a responsibility of the temples and of private individuals, it was increasingly also seen as a responsibility of the king. This change appears to have begun already under Tukulti-Ninurta II in the 9th century BC, since he is the first Assyrian king under whom the office of chief scholar is attested. In Tukulti-Ninurta's time the office was occupied by Gabbu-ilani-eresh, an ancestor of a later influential family of advisors and scribes. Libraries were built to maintain scribal culture and scholarship and to preserve the knowledge of the past. Such libraries were not limited to the temples and royal palaces; there were also private libraries built and kept by individual scholars. Texts found in the libraries fall into a wide array of genres, including divinatory texts, divination reports, treatments for the sick (either medical or magical), ritual texts, incantations, prayers and hymns, school texts and literary texts. The largest and most important royal library in Mesopotamian history was the Library of Ashurbanipal, an ambitious project for which Ashurbanipal gathered tablets from both Assyrian and Babylonian libraries. The texts in this library were gathered both through amassing existing tablets from throughout the empire and through commissioning scribes to copy existing works in their own libraries and send them to the king. The Library of Ashurbanipal included more than 30,000 documents. The empire accomplished several complex technical projects, indicating sophisticated technical knowledge.
Various professionals who performed engineering tasks are attested in Neo-Assyrian sources, such as individuals holding positions like šitimgallu ("chief builder"), šellapajū ("architect"), etinnu ("house builder") and gugallu ("inspector of canals"). Among the most impressive engineering and construction projects were the repeated constructions and renovations of capital cities (Nimrud, Dur-Sharrukin and Nineveh). Royal inscriptions commemorating the building works at these sites often detail the building process. The level of sophistication in engineering is evident from solutions to technical problems such as providing lighting throughout large buildings and draining toilets, roofs and courtyards. A frequent challenge was constructing the roofs of large rooms, since the Assyrians had to support them using only wooden beams. As a result, large representative rooms were often much longer than they were wide. There was a general tendency for kings to want to outperform their predecessors: Sennacherib's palace at Nineveh was significantly larger than that of Sargon II, which in turn was significantly larger than that of Shalmaneser III. All the capitals contained great parks, an innovation of the Neo-Assyrian period. Parks were complex engineering works, since they not only exhibited exotic plants from far-away lands but also involved modifying the landscape through the addition of artificial hills and ponds, as well as pavilions and other small buildings. To supply cities with water, the Assyrians constructed advanced hydraulic works to divert and transport water from far-away mountain regions in the east and north. In Babylonia, water was typically simply drawn from the Tigris river, but this was difficult in Assyria due to the river's level relative to the surrounding lands and to changes in the water level. Because periods of drought often threatened Assyrian dry farming, several kings undertook irrigation projects, including canal construction. The most ambitious hydraulic engineering project was undertaken by Sennacherib during his renovation of Nineveh. As part of his building project, four large canal systems, together covering more than 150 kilometers (93.2 miles), were connected to the city from four different directions. These systems included canals, tunnels, weirs, aqueducts and natural watercourses. Other hydraulic works included sewage and drainage systems for buildings, which made it possible to dispose of wastewater and efficiently drain the yards, roofs and toilets of palaces, temples, and private homes. The transportation of goods and materials from far-away locations sometimes involved very heavy loads. Wood, for instance, was relatively scarce in the Assyrian heartland and had to be gathered from distant forests, transported to rivers and then hauled on rafts or ships for use as a building material. The most challenging type of transportation was the transport of the large blocks of stone necessary for various building projects. Several kings note in their royal inscriptions the difficulties involved in transporting the single massive blocks of stone needed to create the great lamassu (protective stone colossi with the head of a human, wings and the body of a bull) for their palaces. Because the stones had to be brought from sources several kilometers away from the capitals, typically on boats, the process was difficult, and several boats sank on the way.
Under Sennacherib a quarry was opened on the left bank of the Tigris river, which allowed the stones to be transported entirely over land, a more secure but still very labor-intensive undertaking. When transported over land, the great stones were moved by four teams of workers, overseen by supervisors, using wooden planks or rollers. Legacy The empire left a cultural legacy of great consequence. The population of northern Mesopotamia kept the memory of their ancient civilization alive and connected positively with the Assyrian Empire in local histories written as late as the Sasanian period. Figures like Sargon II, Sennacherib, Esarhaddon, Ashurbanipal and Shamash-shum-ukin long figured in local folklore and literary tradition. In large part, tales from the Sasanian period and later times were invented narratives, based on ancient Assyrian history but applied to local and current landscapes. Medieval tales written in Aramaic (or Syriac) by and large characterize Sennacherib as an archetypical pagan king assassinated as part of a family feud, whose children convert to Christianity. The legend of the Saints Behnam and Sarah, set in the 4th century but written long thereafter, casts Sennacherib, under the name Sinharib, as their royal father. Some Aramaic stories spread far beyond northern Mesopotamia. The Story of Ahikar follows Ahikar, a legendary royal advisor of Sennacherib and Esarhaddon, and is first attested on a papyrus from Elephantine in Egypt from c. 500 BC. This story proved popular and was translated into many languages. Other tales from Egypt include stories of the Egyptian hero Inarus, a fictionalized version of the rebel Inaros I, fighting against Esarhaddon's invasion of Egypt, as well as a tale recounting the civil war between Ashurbanipal and Shamash-shum-ukin. Some Egyptian tales feature a queen of the Amazons named Serpot, possibly based on Shammuramat. Several legends of Assyria are known from Greco-Roman texts, including a fictional narrative of the founding of the Assyrian Empire and Nineveh by the legendary figure Ninus, as well as tales of Ninus's powerful wife Semiramis, another fictionalized version of Shammuramat. Legendary accounts were written of the empire's fall, erroneously linked to the reign of the effeminate Sardanapalus, a fictionalized version of Ashurbanipal. Though the empire did not force religious conversions, its existence as a large imperialist state reshaped the religious views of the people around it, prominently in the Hebrew kingdoms of Israel and Judah. The Bible mentions Assyria about 150 times; it records multiple significant events involving the Hebrews, most prominently Sennacherib's war against Hezekiah, and names several Neo-Assyrian kings, including Tiglath-Pileser III, Shalmaneser V, Sargon II, Sennacherib, Esarhaddon and possibly Ashurbanipal. Though some positive associations of Assyria are included, the Bible generally paints the empire as an imperialist aggressor. Jewish theology was influenced by the empire: the Book of Deuteronomy bears a strong resemblance to the loyalty oaths in Assyrian vassal treaties, though with the absolute loyalty to the Assyrian king replaced by absolute loyalty to the Abrahamic god. Additionally, some stories in the Bible appear to be at least partly drawn from events in Assyrian history; the story of Jonah and the whale might draw on earlier stories concerning Shammuramat, and the story of Joseph was likely at least partly inspired by Esarhaddon's rise to power.
Perhaps the greatest influence of the empire on later Abrahamic religious tradition was that the emergence of a new religious and "national" identity among the Hebrews might have been a direct response to the political and intellectual challenges posed by Assyrian imperialism. The most important innovation in Hebrew theology during the period roughly corresponding to the time of the Neo-Assyrian Empire was the elevation of Yahweh as the only god and the beginning of the monotheism that would later characterize Judaism, Christianity and Islam. It has been suggested that this development only followed experiences either with the near-monotheism of the Assyrians in regard to the god Ashur, or with the monocratic and universal nature of the imperial rule of the Assyrian kings. When the Medes and Babylonians conquered the Assyrian heartland, they put the monuments, palaces, temples and cities to the torch; the Assyrian people were dispersed, and the cities were for a long time left largely abandoned. Though Assyria experienced a resurgence in the post-imperial period, chiefly under the Seleucids and Parthians, the region was later devastated once more during the rise of the Sasanian Empire in the 3rd century AD. The only ancient Assyrian city to be continuously inhabited as an urban center from the time of the Neo-Assyrian Empire to the present is Arbela, today known as Erbil. Though the local population of northern Mesopotamia never forgot the Neo-Assyrian Empire and the locations of its capital cities, knowledge of Assyria in the west survived through the centuries chiefly through the accounts of the Bible and the works describing the empire by classical authors. Unlike other ancient civilizations, Assyria and the other Mesopotamian civilizations left no magnificent ruins above ground; all that remained to see were grass-covered mounds in the plains, which travellers at times believed to be simply natural features of the landscape. In the early 19th century, European explorers and archaeologists first began to investigate the ancient mounds. One of the important early figures in Assyrian archaeology was the British business agent Claudius Rich, who visited the site of Nineveh in 1820, traded antiquities with the locals and made measurements of the mounds. Rich's collection (which ended up in the British Museum) and writings inspired Julius von Mohl, secretary of the French Société Asiatique, to persuade the French authorities to create the position of a French consul in Mosul and to start excavations at Nineveh. The first consul to be appointed was Paul-Émile Botta in 1841. Using funds secured by von Mohl, Botta conducted extensive excavations at Nineveh, particularly on the Kuyunjik mound. Because the ancient ruins of Nineveh were hidden deep under layers of later settlement and agricultural activity, Botta's excavation never reached them. Upon hearing reports from locals that Assyrian ruins had been uncovered at Khorsabad, 20 kilometers to the northeast, Botta turned his attention there and discovered the ruins of an ancient palace. Botta had uncovered the ancient city of Dur-Sharrukin. The works of art found under Botta's supervision included reliefs and stone lamassus. In 1847 the first exhibition of Assyrian sculptures was held in the Louvre. After returning to Europe in the late 1840s, Botta compiled an elaborate report on the findings, complete with numerous drawings of the reliefs made by the artist Eugène Flandin.
The report, published in 1849, showcases the majesty of Assyrian art and architecture. English archaeologist Austen Henry Layard wrote in the 19th century of "mighty ruins in the midst of deserts, defying, by their very desolation and lack of definite form, the description of the traveller". Layard began his activities in November 1845 at Nimrud (though he believed this to be the site of Nineveh), working as a private individual without any permission to excavate from the Ottoman authorities; he initially tried to fool the local pasha by claiming that he was on a hunting trip. The expedition was funded by the British Ambassador to the Ottoman Empire, Stratford Canning. Layard discovered the ruins of numerous palaces, including the Northwest Palace of Ashurnasirpal II, with numerous walls covered in reliefs. Layard's illustrated two-volume book presenting his discoveries, Nineveh and its Remains, was published in 1849. Layard conducted a second expedition in which he turned his attention to the Kuyunjik mound. There he made significant discoveries, including the palace built by Sennacherib. In 1852, the French continued excavations at Khorsabad, with Victor Place instructed to procure "the largest possible" amount of Assyrian artefacts. Rivalry between the Louvre and the British Museum played a significant role in the intensity of early exploration and excavation of Assyrian sites. Though Layard left Mesopotamia in 1851, the British Museum appointed his close assistant, the Assyrian Hormuzd Rassam, to continue excavation projects in the region. After the outbreak of the Crimean War in 1853, archaeology in Assyria came to a halt for a long time, though excavations began again in the early 20th century and have continued since. Though some point to the Akkadian Empire (c. 2334–2154 BC) or the Eighteenth Dynasty of Egypt (c. 1550–1290 BC), many researchers consider the Neo-Assyrian Empire to be the first world empire in history. Although the Neo-Assyrian Empire covered between 1.4 and 1.7 million square kilometers (0.54–0.66 million square miles; just a little over one percent of the land area of the planet), the terms "world empire" or "universal empire" should not be taken as denoting actual world domination. The Neo-Assyrian Empire was at its height the largest empire yet formed in history and was regarded by the Assyrians and many of their contemporaries as "universal", while the lands remaining outside their dominions, such as the Arabian desert and the highlands of the Zagros Mountains, were dismissed as "empty", at the fringes of the world and inhabited by uncivilized peoples. A "world empire" can also be interpreted as an imperial state without any competitors. Though there were other reasonably large kingdoms in the ancient Near East during the Neo-Assyrian period (notably Urartu in the north, Egypt in the west and Elam in the east), none was an existential threat to Assyria, and they could do little more than defend themselves in times of war; whereas Assyrian troops routinely plundered and campaigned in the heartlands of these kingdoms, the Assyrian heartland was not invaded until the fall of the empire. Nevertheless, the existence of other organized kingdoms undermined the notion of the Assyrians as universal rulers. It is partly because of this that large military campaigns were conducted with the express goal of conquering these kingdoms and fulfilling the ideological mission of ruling the world.
At the height of the empire under Esarhaddon and Ashurbanipal, only Urartu remained, since Egypt had been conquered and Elam had been left destroyed and desolate. Ideologically, the empire played an important part in the imperial ideologies of succeeding empires in the Middle East. The idea of continuity between successive empires (a phenomenon in later times dubbed translatio imperii) was a long-established tradition in Mesopotamia, going back to the Sumerian King List, which connected succeeding and sometimes rival dynasties and kingdoms together as predecessors and successors. In the past, the idea of succession between empires had resulted in claims such as that of the Dynasty of Isin being the successor of the Third Dynasty of Ur, or Babylonia being the successor of the Akkadian Empire. The idea of translatio imperii supposes that there is only one "true" empire at any given time, and that imperial power and the right to rule pass from one empire to the next, with Assyria typically seen as the first empire. Ancient Greek historians such as Herodotus and Ctesias supported a sequence of three world empires and a successive transfer of world domination from the Assyrians to the Medes to the Achaemenids. Inscriptions from several of the Achaemenid kings, most notably Cyrus the Great, allude to their empire being the successor of the Neo-Assyrian Empire. Shortly after Alexander the Great conquered Persia, his Macedonian Empire began to be regarded as the fourth empire. Texts from the Neo-Babylonian period regard the Neo-Babylonian Empire as the successor of the Neo-Assyrian Empire. Babylonian texts from the time Mesopotamia came under the rule of the Seleucid Empire centuries later supported a longer sequence, with imperial power being transferred from the Assyrians to the Babylonians, then to the Achaemenids and finally to the Macedonians, with the Seleucid Empire being viewed as the same empire as Alexander's. Later traditions were somewhat confused about the set of empires; some conflated Assyria and Babylonia into a single empire yet still counted the Macedonians/Seleucids as the fourth by counting both Babylonia and the Medes (despite them being contemporaries). The Book of Daniel describes a dream of the Neo-Babylonian King Nebuchadnezzar II which features a statue with a golden head, silver chest, bronze belly, iron legs and iron/clay feet. This statue is interpreted as an expression of translatio imperii, placing Nebuchadnezzar's empire (the Neo-Babylonian Empire; gold) as the first empire, the Median Empire (silver) as the second, the Achaemenid Empire (bronze) as the third and the Macedonian Empire of Alexander the Great (iron) as the fourth. The idea of a succession of empires did not end with the fall of the Seleucid Empire; traditions were instead adjusted to include later empires in the sequence. Shortly after the Roman Empire conquered the last remnants of the Seleucid Empire in 63 BC, literary traditions began to regard the Roman Empire as the fifth world empire. The Roman Empire spawned its own sequences of successor claimants; in the east it was followed by the Byzantine Empire, from which both the Russian and Ottoman empires claimed succession. In the west, the Frankish and eventually the Holy Roman empires considered themselves to be the heirs of Rome. Later scholars have sometimes posited a sequence of world empires more focused on the Middle East.
In British scholar George Rawlinson's 1862–67 work The Five Great Monarchies of the Ancient Eastern World, the five Oriental empires are presented as Chaldaea (erroneously, since no such empire existed), Assyria, Babylonia, Media and Persia. Rawlinson expanded the sequence in his 1876 The Seven Great Monarchies of the Ancient Eastern World to include the Parthian and Sasanian empires. Though expansive sequences of translatio imperii hold little weight in modern research, scholars today still recognize a basic sequence of imperial succession from the Neo-Assyrian Empire to the Neo-Babylonian Empire to the Achaemenid Empire. The political structures established by the Neo-Assyrian Empire became the model for the later empires that succeeded it. Key components of the Neo-Babylonian Empire were based on those of the Neo-Assyrian Empire. Though the administrative structure of the Neo-Babylonian Empire is not known due to the scant surviving sources, and it is thus unclear to what degree the old provincial divisions and administration of the Neo-Assyrian Empire continued to be in use, the organization of the central palace bureaucracy under the Neo-Babylonian kings was based on that of the Neo-Assyrian Empire, not on any established earlier Babylonian models. Additionally, Neo-Babylonian construction projects, such as Nebuchadnezzar II's massive expansion of Babylon, followed Assyrian traditions; as the Neo-Assyrian kings had done in their new capitals, Nebuchadnezzar placed his palace on a raised terrace across the city wall and followed a rectangular plan for the inner city. The sophisticated Assyrian road system, first created during the Middle Assyrian period, also continued to be in use and served as a model for the road systems of the Neo-Babylonian and Achaemenid empires. I built a pillar over against the city gate and I flayed all the chiefs who had revolted and I covered the pillar with their skins. Some I impaled upon the pillar on stakes and others I bound to stakes round the pillar. I cut the limbs off the officers who had rebelled. Many captives I burned with fire and many I took as living captives. From some I cut off their noses, their ears, and their fingers, of many I put out their eyes. I made one pillar of the living and another of heads and I bound their heads to tree trunks round about the city. Their young men and maidens I consumed with fire. The rest of their warriors I consumed with thirst in the desert of the Euphrates. — Inscription by Ashurnasirpal II (r. 883–859 BC) The empire is perhaps most prominently remembered for the ferocity and brutality of its army. Neo-Assyrian inscriptions and artwork are unusually explicit in their description and depiction of various atrocities, often describing them with "terrifying realism". It is chiefly from the Neo-Assyrian period that royal inscriptions describe atrocities in detail, though various atrocities were enacted against enemy states and peoples by certain Middle Assyrian kings as well. This may be attributable to the Neo-Assyrian kings using fear to keep their conquered territories in line after the declines in power during the Middle Assyrian Empire. Biblical and other historical references to Assyrian brutality were reinforced by the 19th-century discoveries of ancient art and inscriptions, as well as by unflattering comparisons drawn between Assyria and the Ottoman Empire by the historians and archaeologists who found them.
Today, despite the diversity of ancient Assyrian culture, military and atrocity scenes dominate museum exhibitions on Assyria because of their distinct character. Though there is no modern scholarly denial that the Neo-Assyrian government was brutal, the extent to which the inscriptions and artwork reflect actual atrocities is debated. Some believe that the Assyrians were more brutal than depicted, because inscriptions and art do not include all the gruesome details or record every instance, whereas others believe the Assyrian kings used exaggerated descriptions of brutal acts as tools of propaganda and psychological warfare. Although Neo-Assyrian art is particularly graphic, actual practice in war was likely similar in character to that of their cultural neighbors, if more effective and broader in impact because of a higher level of bureaucratic organization. Detailed analysis of palace friezes suggests that brutality was typically targeted to intimidate and to dissuade foreigners and vassals from fighting against Assyrian dominion. The vast majority of depicted brutal acts were directed against the soldiers and nobility of Assyria's enemies, with civilians only rarely being brutalized, suggesting cultural limits and restraints for most of Neo-Assyrian rule.
========================================
[SOURCE: https://www.theverge.com/science] | [TOKENS: 1482]
Science Featuring the latest in daily science news, Verge Science is all you need to keep track of what’s going on in health, the environment, and your whole world. Through our articles, we keep a close eye on the overlap between science and technology news, so you’re more informed. HBO’s medical drama has been teasing out a smart story about what makes gen AI so tempting and concerning. Skinfluencers swear topical salmon-sperm serums will make your skin glow. The reality is a bit less impressive. Latest In Science Following a successful wet dress rehearsal on Thursday, plagued only by ground communications glitches, NASA says March 6th will be the earliest launch date for the long-delayed Artemis II mission that will send four astronauts on an approximately 600,000-mile trip to circle the moon and return to Earth. That’s the message from NASA Administrator Jared Isaacman on Thursday as the agency released a 311-page redacted report (pdf) on what went wrong during the Boeing Starliner’s first crewed flight test in 2024. NASA and Boeing announced that “Investigators identified an interplay of combined hardware failures, qualification gaps, leadership missteps, and cultural breakdowns that created risk conditions inconsistent with NASA’s human spaceflight safety standard.” California regulators killed a proposal that would have imposed fees on gas-burning furnaces and water heaters that release smog-forming pollutants. More than 20,000 of the comments regulators received opposing the proposal were generated by a single AI platform, some submitted under the names of people who had no idea their names had been used. [Los Angeles Times] Musk used to call the Moon ‘a distraction.’ Now he says SpaceX is building a city there. A coalition including the American Public Health Association, American Lung Association, and Sierra Club has filed suit against the Trump administration for repealing the landmark ‘endangerment finding.’ The repeal, if successful, could strip away the Environmental Protection Agency’s ability to regulate planet-heating pollution. [NBC News] The NAACP sent a notice of intent to sue, accusing Musk’s company of illegally installing gas turbines in Mississippi to power its Colossus 2 data center. Thermal images taken by drone show more than a dozen turbines running at the site without a permit, according to a Floodlight investigation. [The Hill] Oura is lobbying for relaxed wearables regulation. It has a point, but is regulation even the problem here? The first Southwest Airlines plane with Starlink will enter service this summer, and Starlink is set to be available on “more than 300 aircraft” by the end of the year, Southwest says. Southwest joins airlines like United, WestJet, and British Airways in bringing SpaceX’s Starlink to customers. [Southwest Newsroom] Amazon’s Leo now has FCC approval for about 7,700 low Earth orbit satellites. So far it’s only launched about 150, well short of its FCC requirement to deploy 1,600 by July 2026 (it’s seeking an extension). SpaceX has launched over 11,000 Starlink satellites into LEO, with about 9,600 still active. [CNBC] According to recently released documents, the convicted sex offender had a vast network of people working to whitewash his digital presence. “SpaceX has already shifted focus to building a self-growing city on the Moon,” Musk said on Sunday, just a week after merging SpaceX and xAI. It’s a notable change in plans from a little over a year ago, when Musk insisted that “we’re going straight to Mars.
The Moon is a distraction.” [SpaceX prioritizes lunar 'self-growing city' over Mars project, Musk says] This Politico story is a fascinating deep dive into Oura cozying up to the government. What caught my eye is a tidbit that Oura is lobbying lawmakers for a “digital health screener” device classification process that would sidestep the more intensive FDA clearance process for medical devices. [Politico] The first Super Bowl ad from SpaceX apparently didn’t have enough time left in production to mention its newly joined X / xAI elements, but it is promoting the idea of global satellite internet. EV adoption was tied to a decrease in smog-forming nitrogen dioxide pollution in California, the biggest market for electric cars in the US, a recent study confirms. [Fast Company] But here’s Dave Wiskus, founder of the Nebula streaming service, on how AG1 did not pass muster as a sponsor. If you’re curious to learn more, may I point you to this week’s Optimizer? Athletic Greens is ‘clinically backed.’ What does that even mean? Bloomberg’s Mark Gurman reports that Apple is “scaling back” plans for the coach and will instead roll out some of what it had been working on into the Health app over time. Maybe not the worst idea. [Bloomberg] Maybe combining Musk’s companies is really about space AI data centers. But reports from Bloomberg and the Wall Street Journal indicate that SpaceX’s IPO pursuit includes a push to have major index providers find a way around the usual waiting periods before they’ll add newly listed companies. The partnership will allow AT&T to use Amazon Leo (the ecommerce giant’s low Earth orbit satellite network) to deliver fixed broadband services to businesses. Amazon launched its gigabit-speed Leo Ultra antenna last November, but it’s only available for commercial use for now. [Business Wire] SpaceX is profitable, while xAI is burning about $1 billion a month. Is this another case of Musk bailing himself out? The President announced a new $12 billion public-private partnership called Project Vault, meant to establish a strategic reserve of critical minerals. It’s expected to safeguard stores of rare earths and other materials used in batteries, smartphones, cars, planes, and more. [CNBC] I used to compare Elon Musk to an old boss of mine who would spin up a company division every time he found a new hobby, but this might be just as apt: ElectricOrchestra613: Elon Musk’s constant new ventures and subsequent mergers just feels like the corporate equivalent of creating a new email every time you want to sign up for a free trial. Get the day’s best comment and more in my free newsletter, The Verge Daily. NASA’s overnight wet dress rehearsal of the SLS rocket surfaced a liquid hydrogen leak. A second wet dress rehearsal is now needed, pushing the earliest possible launch of the crewed mission around the moon to March. [NASA] The Trump administration ordered five major offshore wind projects to pause construction in December, suddenly citing national security risks even though developers had previously secured approvals to start building. After the companies filed suit, federal courts have now allowed all five projects to resume construction. [the Guardian] The Trump administration is quietly weakening regulations meant to protect groundwater and limit radiation exposure for workers at new nuclear reactors, NPR reports. Trump has worked to speed up the deployment of new nuclear reactor designs to power AI data centers.
[NPR] The ‘crisp and refreshing’ protein drink is a sign of a company running out of time to turn it around.
========================================
[SOURCE: https://en.wikipedia.org/wiki/File:Rxj1242_comp.jpg] | [TOKENS: 128]
File:Rxj1242 comp.jpg Image credit: X-ray: NASA/CXC/MPE/S.Komossa et al.; http://chandra.harvard.edu
========================================
[SOURCE: https://en.wikipedia.org/wiki/Extraterrestrial_life#cite_ref-9] | [TOKENS: 11349]
Contents Extraterrestrial life Extraterrestrial life, or alien life (colloquially aliens), is life that originates from another world rather than on Earth. No extraterrestrial life has yet been scientifically or conclusively detected. Such life might range from simple forms such as prokaryotes to intelligent beings, possibly bringing forth civilizations that might be far more, or far less, advanced than humans. The Drake equation speculates about the existence of sapient life elsewhere in the universe. The science of extraterrestrial life is known as astrobiology. Speculation about inhabited worlds beyond Earth dates back to antiquity. Early Christian writers, including Augustine, discussed ideas from thinkers like Democritus and Epicurus about countless worlds in the vast universe. Pre-modern writers typically assumed extraterrestrial "worlds" were inhabited by living beings. William Vorilong, in the 15th century, acknowledged the possibility that Jesus could have visited extraterrestrial worlds to redeem their inhabitants. In 1440, Nicholas of Cusa suggested Earth is a "brilliant star"; he theorized that all celestial bodies, even the Sun, could host life. Descartes wrote that there were no means to prove the stars were not inhabited by "intelligent creatures", but that their existence was a matter of speculation. In comparison to the life-abundant Earth, the vast majority of intrasolar and extrasolar planets and moons have harsh surface conditions and disparate atmospheric chemistry, or lack an atmosphere. However, there are many extreme and chemically harsh ecosystems on Earth that do support forms of life and are often hypothesized to be the origin of life on Earth. Examples include life surrounding hydrothermal vents, acidic hot springs, and volcanic lakes, as well as halophiles and the deep biosphere. Since the mid-20th century, researchers have searched for extraterrestrial life and intelligence. Solar system studies focus on Venus, Mars, Europa, and Titan, while exoplanet discoveries now total 6,022 confirmed planets in 4,490 systems as of October 2025. Depending on the category of search, methods range from analysis of telescope and specimen data to radio equipment used to detect and transmit interstellar communications. Interstellar travel remains largely hypothetical, with only the Voyager 1 and Voyager 2 probes confirmed to have entered the interstellar medium. The concept of extraterrestrial life, especially intelligent life, has greatly influenced culture and fiction. A key debate centers on contacting extraterrestrial intelligence: some advocate active attempts, while others warn it could be risky, given the human history of exploiting other societies. Context Initially, after the Big Bang, the universe was too hot to allow life. It is estimated that the temperature of the universe was around 10 billion kelvin at the one-second mark. Roughly 15 million years later, it cooled to temperate levels, though the elements of organic life did not yet exist. The only freely available elements at that point were hydrogen and helium. Carbon and oxygen (and later, water) would not appear until 50 million years later, created through stellar fusion. At that point, the difficulty for life to appear was not the temperature, but the scarcity of free heavy elements. Planetary systems emerged, and the first organic compounds may have formed in the protoplanetary disks of dust grains that would eventually create rocky planets like Earth.
Although Earth was in a molten state after its birth and may have burned any organics that fell on it, it would have been more receptive once it cooled down. Once the right conditions on Earth were met, life started by a chemical process known as abiogenesis. Alternatively, life may have formed less frequently, then spread between habitable planets (by meteoroids, for example) in a process called panspermia. During most of their stellar evolution, stars combine hydrogen nuclei into helium nuclei by stellar fusion; a helium nucleus is slightly lighter than the hydrogen nuclei that formed it, and that mass difference is released as energy. The process continues until the star uses all of its available fuel, with the speed of consumption being related to the size of the star. During their last stages, stars start combining helium nuclei to form carbon nuclei. The larger stars can further combine carbon nuclei to create oxygen and silicon, oxygen into neon and sulfur, and so on until iron. Ultimately, the star blows much of its content back into the stellar medium, where it joins clouds that eventually become new generations of stars and planets. Many of those materials are the raw components of life on Earth. As this process takes place throughout the universe, these materials are ubiquitous in the cosmos and not unique to the Solar System. Earth is a planet in the Solar System, a planetary system formed by a star at the center, the Sun, and the objects that orbit it: other planets, moons, asteroids, and comets. The Sun is part of the Milky Way, a galaxy. The Milky Way is part of the Local Group, a galaxy group that is in turn part of the Laniakea Supercluster. The universe is composed of all similar structures in existence. The immense distances between celestial objects are a difficulty for studying extraterrestrial life. So far, humans have only set foot on the Moon and sent robotic probes to other planets and moons in the Solar System. Although probes can withstand conditions that may be lethal to humans, the distances cause long travel times: the New Horizons probe took nine years after launch to reach Pluto. No probe has ever reached an extrasolar planetary system. Voyager 2 left the Solar System at a speed of 50,000 kilometers per hour; if it headed towards the Alpha Centauri system, the closest one to Earth at 4.4 light years, it would take roughly 100,000 years to arrive. Under current technology, such systems can only be studied by telescopes, which have limitations. Dark matter is estimated to account for a larger amount of combined matter than stars and gas clouds, but as it plays no role in the stellar evolution of stars and planets, it is usually not taken into account by astrobiology. There is an area around a star, the circumstellar habitable zone or "Goldilocks zone", wherein water may be at the right temperature to exist in liquid form at a planetary surface. This area is neither too close to the star, where water would become steam, nor too far away, where water would be frozen as ice. However, although useful as an approximation, planetary habitability is complex and defined by several factors. Being in the habitable zone is not enough for a planet to be habitable, nor even a guarantee that it actually has liquid water. Venus is located in the solar system's habitable zone, but does not have liquid water because of the conditions of its atmosphere. Jovian planets or gas giants are not considered habitable even if they orbit close enough to their stars as hot Jupiters, due to crushing atmospheric pressures.
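The travel-time figure quoted above can be checked with a short calculation. This is a minimal sketch using the article's round numbers and a standard kilometers-per-light-year constant, so the result is approximate by construction:

```python
# Approximate travel time from Earth to Alpha Centauri at Voyager 2's speed.
speed_km_h = 50_000           # quoted speed of Voyager 2, in km/h
km_per_light_year = 9.46e12   # kilometers in one light year
distance_ly = 4.4             # distance to the Alpha Centauri system

hours = distance_ly * km_per_light_year / speed_km_h
years = hours / (24 * 365.25)
print(f"About {years:,.0f} years")  # ~95,000 years, i.e. roughly 100,000
```

The rounding in the inputs is why the script prints about 95,000 years rather than exactly 100,000; either way, the order of magnitude makes the point about interstellar distances.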
The actual distances of the habitable zones vary according to the type of star, and even the activity of each specific star influences local habitability. The type of star also determines how long the habitable zone will exist, as its presence and limits change along with the star's stellar evolution. The Big Bang occurred 13.8 billion years ago, the Solar System was formed 4.6 billion years ago, and the first hominids appeared 6 million years ago. Life on other planets may have started, evolved, given birth to extraterrestrial intelligences, and perhaps even faced a planetary extinction event millions or billions of years ago. When considered from a cosmic perspective, the brief existence of Earth's species may suggest that extraterrestrial life could be equally fleeting on such a scale. During a period of about 7 million years, from about 10 to 17 million years after the Big Bang, the background temperature was between 373 and 273 K (100 and 0 °C; 212 and 32 °F), allowing the possibility of liquid water if any planets existed. Avi Loeb (2014) speculated that primitive life might in principle have appeared during this window, which he called "the Habitable Epoch of the Early Universe". Life on Earth is quite ubiquitous across the planet and has adapted over time to almost all the available environments in it; extremophiles and the deep biosphere thrive in even the most hostile ones. As a result, it is inferred that life on other celestial bodies may be equally adaptive. However, the origin of life is unrelated to its ease of adaptation and may have stricter requirements. A celestial body may not have any life on it, even if it were habitable. Likelihood of existence Life in the cosmos beyond Earth has not been observed. The hypothesis of ubiquitous extraterrestrial life relies on three main ideas. The first is that the size of the universe allows for plenty of planets with a habitability similar to Earth's, and that the age of the universe gives enough time for a long process analogous to the history of Earth to happen there. The second is that the substances that make life, such as carbon and water, are ubiquitous in the universe. The third is that the physical laws are universal, which means that the forces that would facilitate or prevent the existence of life would be the same ones as on Earth. According to this argument, made by scientists such as Carl Sagan and Stephen Hawking, it would be improbable for life not to exist somewhere other than Earth. This argument is embodied in the Copernican principle, which states that Earth does not occupy a unique position in the Universe, and the mediocrity principle, which states that there is nothing special about life on Earth. Other authors consider instead that life in the cosmos, or at least multicellular life, may actually be rare. The Rare Earth hypothesis maintains that life on Earth is possible because of a series of factors that range from the location in the galaxy and the configuration of the Solar System to local characteristics of the planet, and that it is unlikely that another planet simultaneously meets all such requirements. The proponents of this hypothesis consider that very little evidence suggests the existence of extraterrestrial life and that, at this point, it is just a desired result and not a reasonable scientific explanation for any gathered data.
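To illustrate how the habitable-zone distances described above scale with the type of star, the sketch below estimates the zone's boundaries from stellar luminosity alone. The inner and outer flux bounds (roughly 1.1 and 0.53 times the flux Earth receives) and the sample luminosities are assumed, illustrative values in the spirit of Kasting-style climate models, not figures taken from this article:

```python
import math

# Estimate habitable-zone boundaries (in AU) by scaling with luminosity.
# Flux bounds are assumed illustrative values: ~1.1x Earth's flux at the
# inner edge (water boils off) and ~0.53x at the outer edge (water freezes).
def habitable_zone_au(luminosity_solar):
    inner = math.sqrt(luminosity_solar / 1.1)
    outer = math.sqrt(luminosity_solar / 0.53)
    return inner, outer

for name, lum in [("red dwarf", 0.04), ("Sun-like star", 1.0), ("F-type star", 2.5)]:
    inner, outer = habitable_zone_au(lum)
    print(f"{name}: {inner:.2f} to {outer:.2f} AU")
```

The square-root scaling follows from the inverse-square law for stellar flux: a dim red dwarf's zone hugs the star at a fraction of an AU, while a brighter star's zone lies well beyond Earth's orbit, and both shift as the star's luminosity evolves.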
In 1961, astronomer and astrophysicist Frank Drake devised the Drake equation as a way to stimulate scientific dialogue at a meeting on the search for extraterrestrial intelligence (SETI). The Drake equation is a probabilistic argument used to estimate the number of active, communicative extraterrestrial civilizations in the Milky Way galaxy. The equation is N = R* · fp · ne · fl · fi · fc · L, where N is the number of civilizations in the Milky Way whose electromagnetic emissions are detectable, R* is the rate of formation of stars suitable for the development of intelligent life, fp is the fraction of those stars with planetary systems, ne is the number of planets per such system with an environment suitable for life, fl is the fraction of suitable planets on which life actually appears, fi is the fraction of life-bearing planets on which intelligent life emerges, fc is the fraction of civilizations that develop a technology releasing detectable signs of their existence into space, and L is the length of time such civilizations release detectable signals. Drake's proposed estimates, with every number on the right side of the equation agreed to be speculative and open to substitution, yield 10,000 = 5 · 0.5 · 2 · 1 · 0.2 · 1 · 10,000. The Drake equation has proved controversial since, although it is written as a math equation, none of its values were known at the time. Although some values may eventually be measured, others are based on social sciences and are not knowable by their very nature. This does not allow one to draw noteworthy conclusions from the equation. Based on observations from the Hubble Space Telescope, there are nearly 2 trillion galaxies in the observable universe. It is estimated that at least ten percent of all Sun-like stars have a system of planets. In other words, there are 6.25×10¹⁸ stars with planets orbiting them in the observable universe. Even if it is assumed that only one out of a billion of these stars has planets supporting life, there would be some 6.25 billion life-supporting planetary systems in the observable universe. A 2013 study based on results from the Kepler spacecraft estimated that the Milky Way contains at least as many planets as it does stars, resulting in 100–400 billion exoplanets. The nebular hypothesis, which explains the formation of the Solar System and other planetary systems, suggests that planetary systems can have several configurations, and not all of them may have rocky planets within the habitable zone. The apparent contradiction between high estimates of the probability of the existence of extraterrestrial civilisations and the lack of evidence for such civilisations is known as the Fermi paradox. Dennis W. Sciama claimed that life's existence in the universe depends on various fundamental constants. Zhi-Wei Wang and Samuel L. Braunstein suggest that a random universe capable of supporting life is likely to be just barely able to do so, giving a potential explanation for the Fermi paradox.
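Because the equation is a plain product of seven factors, it is straightforward to evaluate in code. A minimal sketch, mapping the illustrative numbers quoted above onto the factors positionally (the function name and argument order are mine, and every input is speculative by design):

```python
# Drake equation: N = R* * fp * ne * fl * fi * fc * L
def drake(R_star, f_p, n_e, f_l, f_i, f_c, L):
    """Estimated number of communicative civilizations in the Milky Way."""
    return R_star * f_p * n_e * f_l * f_i * f_c * L

# Illustrative values from the product quoted above, taken positionally.
N = drake(R_star=5, f_p=0.5, n_e=2, f_l=1, f_i=0.2, f_c=1, L=10_000)
print(N)  # 10000.0
```

Swapping in different assumptions for any factor changes N proportionally, which is precisely why the equation works better as a framework for discussion than as a predictor.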
Life on Earth requires water in a liquid state as a solvent in which biochemical reactions take place. It is highly unlikely that an abiogenesis process can start within a gaseous or solid medium: atoms move too quickly in a gas and too slowly in a solid for specific ones to meet and start chemical reactions. A liquid medium also allows the transport of nutrients and substances required for metabolism. Sufficient quantities of carbon and other elements, along with water, might enable the formation of living organisms on terrestrial planets with a chemical make-up and temperature range similar to that of Earth. Life based on ammonia rather than water has been suggested as an alternative, though this solvent appears less suitable than water. It is also conceivable that there are forms of life whose solvent is a liquid hydrocarbon, such as methane, ethane or propane. Another unknown aspect of potential extraterrestrial life is the chemical elements that would compose it. Life on Earth is largely composed of carbon, but there could be other hypothetical types of biochemistry. A replacement for carbon would need to be able to create complex molecules, store the information required for evolution, and be freely available in the medium. To create DNA, RNA, or a close analog, such an element should be able to bind its atoms with many others, creating complex and stable molecules. It should be able to create at least three covalent bonds: two for making long strings and at least a third to add new links and allow for diverse information. Only nine elements meet this requirement: boron, nitrogen, phosphorus, arsenic and antimony (three bonds), and carbon, silicon, germanium and tin (four bonds). As for abundance, carbon, nitrogen, and silicon are by far the most abundant of these in the universe. In Earth's crust the most abundant of these elements is silicon; in the hydrosphere it is carbon; and in the atmosphere, carbon and nitrogen. Silicon, however, has disadvantages compared with carbon. Molecules formed with silicon atoms are less stable and more vulnerable to acids, oxygen, and light. An ecosystem of silicon-based lifeforms would require very low temperatures, high atmospheric pressure, an atmosphere devoid of oxygen, and a solvent other than water. The low temperatures required would add an extra problem: the difficulty of kickstarting a process of abiogenesis to create life in the first place. Norman Horowitz, head of the Jet Propulsion Laboratory bioscience section for the Mariner and Viking missions from 1965 to 1976, considered that the great versatility of the carbon atom makes it the element most likely to provide solutions, even exotic solutions, to the problems of survival of life on other planets. However, he also considered that the conditions found on Mars were incompatible with carbon-based life. Even if extraterrestrial life is based on carbon and uses water as a solvent, like Earth life, it may still have a radically different biochemistry. Life is generally considered to be a product of natural selection. It has been proposed that to undergo natural selection a living entity must have the capacity to replicate itself, the capacity to avoid damage and decay, and the capacity to acquire and process resources in support of the first two capacities. Life on Earth may have started with an RNA world and later evolved to its current form, in which some of the RNA tasks were transferred to DNA and proteins.
Extraterrestrial life may still be stuck using RNA, or may have evolved into other configurations. It is unclear whether our biochemistry is the most efficient one that could be generated, or which elements would follow a similar pattern. However, it is likely that, even if cells had a different composition from those of Earth, they would still have a cell membrane. Life on Earth jumped from prokaryotes to eukaryotes and from unicellular organisms to multicellular organisms through evolution. So far no alternative process for achieving such a result has been conceived, even hypothetically. Evolution requires life to be divided into individual organisms, and no alternative organisation has been satisfactorily proposed either. At the basic level, membranes define the limit of a cell, between it and its environment, while remaining partially open to exchange energy and resources with it. The evolution from simple cells to eukaryotes, and from them to multicellular lifeforms, is not guaranteed. The Cambrian explosion took place billions of years after the origin of life, and its causes are not fully known yet. On the other hand, the jump to multicellularity took place several times, which suggests that it could be a case of convergent evolution, and so likely to take place on other planets as well. Palaeontologist Simon Conway Morris considers that convergent evolution would lead to kingdoms similar to our plants and animals, and that many features are likely to develop in alien animals as well, such as bilateral symmetry, limbs, digestive systems and heads with sensory organs. Scientists from the University of Oxford analysed the question from the perspective of evolutionary theory and wrote in a study in the International Journal of Astrobiology that aliens may be similar to humans. The planetary context would also have an influence: a planet with higher gravity would have smaller animals, and other types of stars can lead to non-green photosynthesizers. The amount of energy available would also affect biodiversity, as an ecosystem sustained by black smokers or hydrothermal vents would have less energy available than one sustained by a star's light and heat, and so its lifeforms would not grow beyond a certain complexity. There is also research assessing the capacity of life for developing intelligence. It has been suggested that this capacity arises with the number of potential niches a planet contains, and that the complexity of life itself is reflected in the information density of planetary environments, which in turn can be computed from its niches. Conditions on the other planets of the Solar System, and presumably on most worlds beyond it, are harsh and seem too extreme to harbor life. These environments can combine intense UV radiation with extreme temperatures, lack of water, and other factors that do not seem to favor the creation or maintenance of extraterrestrial life. However, there is considerable evidence that some of the earliest and most basic forms of life on Earth originated in extreme environments that, at first sight, seem unlikely to have harbored life. Fossil evidence, together with theories backed by years of research and study, has marked environments such as hydrothermal vents and acidic hot springs as some of the first places where life could have originated on Earth.
These environments can be considered extreme when compared with the typical ecosystems that most life on Earth now inhabits: hydrothermal vents are scorching hot because magma escaping from the Earth's mantle meets the much colder ocean water. Even today, a diverse population of bacteria inhabits the area surrounding these vents, which suggests that some form of life can be supported even in the harshest of environments, such as those found on other planets of the Solar System. What makes these harsh environments plausible sites for the origin of life on Earth, and possibly for the creation of life on other planets, is that the relevant chemical reactions form spontaneously there. For example, the hydrothermal vents found on the ocean floor are known to support many chemosynthetic processes, which allow organisms to obtain energy from reduced chemical compounds while fixing carbon. In turn, these reactions allow organisms to live in relatively low-oxygen environments while maintaining enough energy to support themselves. The early Earth environment was reducing, and therefore these carbon-fixing, reduced compounds were necessary for the survival and possible origin of life on Earth. From the little information that scientists have gathered about the atmospheres of planets in the Milky Way and beyond, those atmospheres are most likely reducing, or at least very low in oxygen, especially when compared with Earth's atmosphere. If the necessary elements and ions were present on such planets, the same carbon-fixing reactions involving reduced chemical compounds that occur around hydrothermal vents could also occur on their surfaces and possibly result in the origin of extraterrestrial life.

Planetary habitability in the Solar System

The Solar System has a wide variety of planets, dwarf planets, and moons, and each one is studied for its potential to host life. Each one has its own specific conditions that may benefit or harm life. So far, the only lifeforms found are those from Earth. No extraterrestrial intelligence other than humans is known to exist or to have ever existed within the Solar System. Astrobiologist Mary Voytek points out that it would be unlikely to find large ecosystems, as they would have already been detected by now. The inner Solar System is likely devoid of life. However, Venus is still of interest to astrobiologists, as it is a terrestrial planet that was likely similar to Earth in its early stages and then developed in a different way: a runaway greenhouse effect makes its surface the hottest in the Solar System, it has sulfuric acid clouds and a thick carbon-dioxide atmosphere at enormous pressure, and all of its surface liquid water has been lost. Comparing the two planets helps to understand the precise differences that lead to conditions beneficial or harmful to life. And despite the conditions working against life on Venus, there are suspicions that microbial life-forms may still survive in its high-altitude clouds. Mars is a cold and almost airless desert, inhospitable to life. However, recent studies have revealed that water on Mars used to be quite abundant, forming rivers, lakes, and perhaps even oceans. Mars may have been habitable back then, and life on Mars may have been possible. But when the planetary core ceased to generate a magnetic field, the solar wind removed the atmosphere and the planet became vulnerable to solar radiation. Ancient life-forms may still have left fossilised remains, and microbes may still survive deep underground.
As mentioned, the gas giants and ice giants are unlikely to contain life. The most distant Solar System bodies, found in the Kuiper Belt and beyond, are locked in a permanent deep-freeze, but they cannot be ruled out completely. Although the giant planets themselves are highly unlikely to have life, there is much hope of finding it on moons orbiting them. Europa, in the Jovian system, has a subsurface ocean below a thick layer of ice. Ganymede and Callisto also have subsurface oceans, but life is less likely in them because their water is sandwiched between layers of solid ice. On Europa the ocean would be in contact with the rocky seafloor, which facilitates chemical reactions. It may be difficult to dig deep enough to study those oceans, though. Enceladus, a tiny moon of Saturn with another subsurface ocean, may not require digging at all, as it releases water into space in eruption columns. The space probe Cassini flew inside one of these, but could not make a full study because NASA had not anticipated the phenomenon and did not equip the probe to study ocean water. Still, Cassini detected complex organic molecules, salts, evidence of hydrothermal activity, hydrogen, and methane. Titan is the only celestial body in the Solar System besides Earth that has liquid bodies on its surface. It has rivers, lakes, and rain of hydrocarbons such as methane and ethane, and even a cycle similar to Earth's water cycle. This special context encourages speculation about lifeforms with a different biochemistry, but the cold temperatures would make such chemistry proceed at a very slow pace. Water is rock-solid on the surface, but Titan does have a subsurface water ocean like several other moons. However, it lies at such a great depth that it would be very difficult to access for study.

Scientific search

The science that searches for and studies life in the universe, both on Earth and elsewhere, is called astrobiology. Through the study of Earth's life, the only known form of life, astrobiology seeks to understand how life starts and evolves and what the requirements are for its continuous existence. This helps to determine what to look for when searching for life on other celestial bodies. It is a complex area of study that combines the perspectives of several scientific disciplines, such as astronomy, biology, chemistry, geology, oceanography, and atmospheric sciences. The scientific search for extraterrestrial life is being carried out both directly and indirectly. As of September 2017, 3,667 exoplanets in 2,747 systems had been identified, and other planets and moons in the Solar System hold the potential for hosting primitive life such as microorganisms. As of 8 February 2021, an updated status of studies considering the possible detection of lifeforms on Venus (via phosphine) and Mars (via methane) was reported. Scientists search for biosignatures within the Solar System by studying planetary surfaces and examining meteorites. Some claim to have identified evidence that microbial life has existed on Mars. In 1996, a controversial report stated that structures resembling nanobacteria had been discovered in a meteorite, ALH84001, formed of rock ejected from Mars. Although all the unusual properties of the meteorite were eventually explained as the result of inorganic processes, the controversy over its discovery laid the groundwork for the development of astrobiology.
An experiment on the two Viking Mars landers reported gas emissions from heated Martian soil samples that some scientists argue are consistent with the presence of living microorganisms. However, the lack of corroborating evidence from other experiments on the same samples suggests that a non-biological reaction is the more likely hypothesis. In February 2005, NASA scientists reported that they might have found some evidence of extraterrestrial life on Mars. The two scientists, Carol Stoker and Larry Lemke of NASA's Ames Research Center, based their claim on methane signatures found in Mars's atmosphere resembling the methane production of some forms of primitive life on Earth, as well as on their own study of primitive life near the Rio Tinto river in Spain. NASA officials soon distanced the agency from the scientists' claims, and Stoker herself backed off from her initial assertions. In November 2011, NASA launched the Mars Science Laboratory, which landed the Curiosity rover on Mars. It is designed to assess the past and present habitability of Mars using a variety of scientific instruments. The rover landed in Gale Crater in August 2012. A group of scientists at Cornell University started a catalog of microorganisms, recording the way each one reacts to sunlight. The goal is to help with the search for similar organisms on exoplanets, as the starlight reflected by planets rich in such organisms would have a specific spectrum, unlike that of starlight reflected from lifeless planets. If Earth were studied from afar with this system, it would reveal a shade of green, as a result of the abundance of photosynthesizing plants. In August 2011, NASA studied meteorites found in Antarctica and detected adenine, guanine, hypoxanthine, and xanthine. Adenine and guanine are components of DNA, and the others are used in other biological processes. The studies ruled out terrestrial contamination of the meteorites, as those compounds would not have been freely available in the form in which they were found in the samples. This discovery suggests that several organic molecules that serve as building blocks of life may be generated within asteroids and comets. In October 2011, scientists reported that cosmic dust contains complex organic compounds ("amorphous organic solids with a mixed aromatic-aliphatic structure") that could be created naturally, and rapidly, by stars. It is still unclear whether those compounds played a role in the creation of life on Earth, but Sun Kwok, of the University of Hong Kong, thinks so: "If this is the case, life on Earth may have had an easier time getting started as these organics can serve as basic ingredients for life." In August 2012, and in a world first, astronomers at Copenhagen University reported the detection of a specific sugar molecule, glycolaldehyde, in a distant star system. The molecule was found around the protostellar binary IRAS 16293-2422, which is located 400 light-years from Earth. Glycolaldehyde is needed to form ribonucleic acid, or RNA, which is similar in function to DNA. This finding suggests that complex organic molecules may form in stellar systems prior to the formation of planets, eventually arriving on young planets early in their formation. In December 2023, astronomers reported the first detection, in the plumes of Enceladus, a moon of Saturn, of hydrogen cyanide, a chemical possibly essential for life as we know it, as well as other organic molecules, some of which are yet to be better identified and understood.
According to the researchers, "these [newly discovered] compounds could potentially support extant microbial communities or drive complex organic synthesis leading to the origin of life." Although most searches are focused on the biology of extraterrestrial life, an extraterrestrial intelligence capable enough to develop a civilization may be detectable by other means as well. Technology may generate technosignatures: effects on the native planet that cannot be explained by natural causes. Three main types of technosignatures are considered: interstellar communications, effects on the atmosphere, and planetary-sized structures such as Dyson spheres. Organizations such as the SETI Institute search the cosmos for potential forms of communication. They started with radio waves and now search for laser pulses as well. The challenge for this search is that there are natural sources of such signals too, such as gamma-ray bursts and supernovae, and the difference between a natural signal and an artificial one would lie in its specific patterns. Astronomers intend to use artificial intelligence for this task, as it can manage large amounts of data and is free of the biases and preconceptions of human observers. Moreover, even if there is an advanced extraterrestrial civilization, there is no guarantee that it is transmitting radio communications in the direction of Earth. The length of time required for a signal to travel across space means that a potential answer may arrive decades or centuries after the initial message. The atmosphere of Earth contains nitrogen dioxide as a result of air pollution, something that could be detectable on other worlds. The natural abundance of carbon, which is also relatively reactive, makes it likely to be a basic component of the development of a potential extraterrestrial technological civilization, as it is on Earth. Fossil fuels might be generated and used on such worlds as well. An abundance of chlorofluorocarbons in an atmosphere would also be a clear technosignature, considering their role in ozone depletion. Light pollution may be another technosignature, as multiple lights on the night side of a rocky planet can be a sign of advanced technological development. However, modern telescopes are not powerful enough to study exoplanets at the level of detail required to perceive it. The Kardashev scale proposes that a civilization may eventually start consuming energy directly from its local star. This would require giant structures built around it, called Dyson spheres. Such speculative structures would produce an excess of infrared radiation, which telescopes could notice. Excess infrared radiation is typical of young stars, which are surrounded by dusty protoplanetary disks that will eventually form planets; an older star such as the Sun would have no natural reason to show it. The presence of heavy elements in a star's light spectrum is another potential technosignature; such elements would (in theory) be found if the star were being used as an incinerator or repository for nuclear waste products. Some astronomers search for extrasolar planets that may be conducive to life, narrowing the search to terrestrial planets within the habitable zones of their stars. Since 1992, thousands of exoplanets have been discovered (6,128 planets in 4,584 planetary systems, including 1,017 multiple planetary systems, as of 30 October 2025). The extrasolar planets so far discovered range in size from terrestrial planets similar to Earth to gas giants larger than Jupiter.
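The size range just mentioned matters for detection. When a planet transits its star, the fractional dimming of the starlight is approximately (Rp/Rs)^2, the ratio of the projected areas of planet and star. A minimal sketch in Python (the radii are standard published values; the formula ignores limb darkening and grazing geometries, and the function name is ours):

    # Transit depth: fraction of starlight blocked ~ (R_planet / R_star)^2
    R_SUN_KM = 695_700.0  # nominal solar radius in kilometres

    def transit_depth(planet_radius_km, star_radius_km=R_SUN_KM):
        return (planet_radius_km / star_radius_km) ** 2

    print(f"Earth-sized:   {transit_depth(6_371):.6f}")   # ~0.000084 (0.0084%)
    print(f"Jupiter-sized: {transit_depth(69_911):.6f}")  # ~0.010098 (about 1%)

A Jupiter-sized transit dims its star roughly a hundred times more than an Earth-sized one, which is one reason the earliest exoplanet catalogs were dominated by giant planets.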
The number of observed exoplanets is expected to increase greatly in the coming years. The Kepler space telescope has also detected a few thousand candidate planets, of which about 11% may be false positives. On average, there is at least one planet per star. About 1 in 5 Sun-like stars have an "Earth-sized" planet in the habitable zone, with the nearest expected to be within 12 light-years of Earth. Assuming 200 billion stars in the Milky Way, that would be 11 billion potentially habitable Earth-sized planets, rising to 40 billion if red dwarfs are included. The rogue planets in the Milky Way possibly number in the trillions. The nearest known exoplanet is Proxima Centauri b, located 4.2 light-years (1.3 pc) from Earth in the southern constellation of Centaurus. As of March 2014, the least massive exoplanet known is PSR B1257+12 A, which is about twice the mass of the Moon. The most massive planet listed in the NASA Exoplanet Archive is DENIS-P J082303.1−491201 b, about 29 times the mass of Jupiter; according to most definitions of a planet, however, it is too massive to be a planet and may be a brown dwarf instead. Almost all of the planets detected so far are within the Milky Way, but there have also been a few possible detections of extragalactic planets. The study of planetary habitability also considers a wide range of other factors in determining the suitability of a planet for hosting life. One sign that a planet probably already contains life is the presence of an atmosphere with significant amounts of oxygen, since that gas is highly reactive and generally would not last long without constant replenishment. On Earth, this replenishment is carried out by photosynthetic organisms. One way to analyse the atmosphere of an exoplanet is through spectrography when it transits its star, though this might only be feasible with dim stars like white dwarfs.

History and cultural impact

The modern concept of extraterrestrial life is based on assumptions that were not commonplace during the early days of astronomy. The first explanations for the celestial objects seen in the night sky were based on mythology. Scholars from Ancient Greece were the first to consider the universe as inherently understandable, rejecting explanations based on supernatural, incomprehensible forces, such as the myth of the Sun being pulled across the sky in the chariot of Apollo. They had not yet developed the scientific method and based their ideas on pure thought and speculation, but they arrived at precursor ideas to it, such as the principle that explanations must be discarded if they contradict observable facts. The discussions of those Greek scholars established many of the pillars that would eventually lead to the idea of extraterrestrial life, such as the realization that Earth is round rather than flat. The cosmos was first structured in a geocentric model, which held that the Sun and all other celestial bodies revolve around Earth. However, the Greeks did not consider those bodies to be worlds: in their understanding, the world was composed of both Earth and the celestial objects with noticeable movements. Anaximander thought that the cosmos was made from apeiron, a substance that created the world, and that the world would eventually return to the cosmos.
Eventually two groups emerged: the atomists, who thought that matter both on Earth and in the cosmos was equally made of small atoms of the classical elements (earth, water, fire and air), and the Aristotelians, who thought that those elements were exclusive to Earth and that the cosmos was made of a fifth one, the aether. The atomist Epicurus thought that the processes that created the world, its animals and plants should have created other worlds elsewhere, along with their own animals and plants. Aristotle thought instead that all of the element earth naturally fell towards the center of the universe, which would make it impossible for other planets to exist elsewhere. Under that reasoning, Earth was not only at the center of the universe, it was also its only planet. Cosmic pluralism, the plurality of worlds, or simply pluralism, describes the philosophical belief in numerous "worlds" in addition to Earth, which might harbor extraterrestrial life. The earliest recorded assertion of extraterrestrial human life is found in the ancient scriptures of Jainism, which mention multiple "worlds" that support human life, including, among others, Bharat Kshetra, Mahavideh Kshetra, Airavat Kshetra, and Hari Kshetra. Medieval Muslim writers like Fakhr al-Din al-Razi and Muhammad al-Baqir supported cosmic pluralism on the basis of the Qur'an. Chaucer's poem The House of Fame engaged in medieval thought experiments that postulated the plurality of worlds. However, those ideas about other worlds differed from current knowledge about the structure of the universe and did not postulate the existence of planetary systems other than the Solar System: when those authors spoke of other worlds, they meant places located at the center of their own systems, each with its own stellar vault and cosmos surrounding it. The Greek ideas and the disputes between atomists and Aristotelians outlived ancient Greece itself. The Great Library of Alexandria compiled information about them, part of which was translated by Islamic scholars and thus survived the end of the Library. Baghdad combined the knowledge of the Greeks, the Indians, the Chinese and its own scholars, and that knowledge spread through the Byzantine Empire, from which it eventually returned to Europe by the time of the Middle Ages. However, as the Greek atomist doctrine held that the world was created by random movements of atoms, with no need for a creator deity, it became associated with atheism, and the dispute intertwined with religious ones. Still, the Church did not react to those topics in a homogeneous way, and there were stricter and more permissive views within the Church itself. The first known mention of the term "panspermia" was in the writings of the 5th-century BC Greek philosopher Anaxagoras, who proposed the idea that life exists everywhere. By the late Middle Ages there were many known inaccuracies in the geocentric model, but it was kept in use because naked-eye observations provided limited data. Nicolaus Copernicus started the Copernican Revolution by proposing that the planets revolve around the Sun rather than Earth. His proposal found little acceptance at first because, as he kept the assumption that orbits were perfect circles, his model led to as many inaccuracies as the geocentric one. Tycho Brahe improved the available data with naked-eye observatories, which worked with highly complex sextants and quadrants.
Tycho could not make sense of his observations, but Johannes Kepler did: orbits were not perfect circles, but ellipses. This insight benefited the Copernican model, which now worked almost perfectly. The telescope, invented a short time later and perfected by Galileo Galilei, settled the remaining doubts, and the paradigm shift was complete. Under this new understanding, the notion of extraterrestrial life became feasible: if Earth is just one planet orbiting a star, there may be planets similar to Earth elsewhere. The astronomical study of distant bodies also proved that physical laws are the same elsewhere in the universe as on Earth, with nothing making our planet truly special. The new ideas were met with resistance from the Catholic Church: Galileo was tried for defending the heliocentric model, which was considered heretical, and was forced to recant it. The best-known early-modern proponent of ideas of extraterrestrial life was the Italian philosopher Giordano Bruno, who argued in the 16th century for an infinite universe in which every star is surrounded by its own planetary system. Bruno wrote that other worlds "have no less virtue nor a nature different to that of our earth" and, like Earth, "contain animals and inhabitants". Bruno's belief in the plurality of worlds was one of the charges leveled against him by the Inquisition, which tried and executed him. The heliocentric model was further strengthened by Isaac Newton's theory of gravity, which provided the mathematics explaining the motions of all things in the universe, including planetary orbits. By this point, the geocentric model had been definitively discarded, the use of the scientific method had become standard, and new discoveries were expected to provide evidence and rigorous mathematical explanations. Science also took a deeper interest in the mechanics of natural phenomena, trying to explain not just the way nature works but also why it works that way. There had been very little actual discussion of extraterrestrial life before this point, as the Aristotelian ideas remained influential while geocentrism was still accepted. When geocentrism was finally proved wrong, it meant not only that Earth was not the center of the universe, but also that the lights seen in the sky were not just lights but physical objects. The notion that life might exist on them as well soon became an ongoing topic of discussion, although one with no practical means of investigation. The possibility of extraterrestrials remained a widespread speculation as scientific discovery accelerated. William Herschel, the discoverer of Uranus, was one of many 18th–19th-century astronomers who believed that the Solar System is populated by alien life. Other scholars of the period who championed "cosmic pluralism" included Immanuel Kant and Benjamin Franklin. At the height of the Enlightenment, even the Sun and Moon were considered candidates for hosting extraterrestrial inhabitants. Speculation about life on Mars increased in the late 19th century, following telescopic observation of apparent Martian canals, which, however, eventually turned out to be optical illusions. Despite this, in 1895, American astronomer Percival Lowell published his book Mars, followed by Mars and its Canals in 1906, proposing that the canals were the work of a long-gone civilisation. Spectroscopic analysis of Mars's atmosphere began in earnest in 1894, when U.S.
astronomer William Wallace Campbell showed that neither water nor oxygen was present in the Martian atmosphere. By 1909, better telescopes and the best perihelic opposition of Mars since 1877 conclusively put an end to the canal hypothesis. As a consequence of the belief in spontaneous generation, little thought was given to the conditions of each celestial body: it was simply assumed that life would thrive anywhere. Spontaneous generation was disproved by Louis Pasteur in the 19th century. Popular belief in thriving alien civilisations elsewhere in the Solar System nevertheless remained strong until Mariner 4 and Mariner 9 provided close images of Mars, which debunked the idea of the existence of Martians and lowered expectations of finding alien life in general. The end of the belief in spontaneous generation forced investigation into the origin of life. Although abiogenesis is the more accepted theory, a number of authors reclaimed the term "panspermia" and proposed that life was brought to Earth from elsewhere; among them were Jöns Jacob Berzelius (1834), Kelvin (1871), Hermann von Helmholtz (1879) and, somewhat later, Svante Arrhenius (1903). The science fiction genre, although not yet so named, developed during the late 19th century. The expansion of the genre of extraterrestrials in fiction influenced popular perception of the real-life topic, making people eager to jump to conclusions about the discovery of aliens. Science marched at a slower pace: some discoveries fueled expectations, and others dashed excessive hopes. For example, with the advent of telescopes, most structures seen on the Moon or Mars were immediately attributed to Selenites or Martians, but later, more powerful instruments revealed that all such discoveries were natural features. A famous case is the Cydonia region of Mars, first imaged by the Viking 1 orbiter. The low-resolution photos showed a rock formation that resembled a human face, but later spacecraft took photos in higher detail that showed there was nothing special about the site. The search for and study of extraterrestrial life became a science of its own, astrobiology. Also known as exobiology, this discipline is pursued by NASA, ESA, INAF, and other agencies. Astrobiology studies life from Earth as well, but with a cosmic perspective. For example, abiogenesis is of interest to astrobiology not because of the origin of life on Earth as such, but for the chances of a similar process taking place on other celestial bodies. Many aspects of life, from its definition to its chemistry, are analyzed as either likely to be similar in all forms of life across the cosmos or native only to Earth. Astrobiology, however, remains constrained by the current lack of extraterrestrial life-forms to study: all life on Earth comes from the same ancestor, and it is hard to infer general characteristics from a group with a single example to analyse. The 20th century came with great technological advances, speculation about future hypothetical technologies, and an increased basic knowledge of science among the general population thanks to science popularization through the mass media. The public interest in extraterrestrial life and the lack of discoveries by mainstream science led to the emergence of pseudosciences that provided affirmative, if questionable, answers to the question of the existence of aliens.
Ufology claims that many unidentified flying objects (UFOs) are spaceships from alien species, and the ancient astronauts hypothesis claims that aliens visited Earth in antiquity and prehistoric times but that people failed to recognize them as such. Most UFO sightings can be readily explained as sightings of Earth-based aircraft (including top-secret aircraft), known astronomical objects, weather phenomena, or hoaxes. Looking beyond the pseudosciences, Lewis White Beck strove to elevate the level of public discourse on the topic of extraterrestrial life by tracing the evolution of philosophical thought over the centuries from ancient times into the modern era. His review of the contributions made by Lucretius, Plutarch, Aristotle, Copernicus, Immanuel Kant, John Wilkins, Charles Darwin and Karl Marx demonstrated that even in modern times, humanity could be profoundly influenced in its search for extraterrestrial life by subtle and comforting archetypal ideas largely derived from firmly held religious, philosophical and existential belief systems. On a positive note, however, Beck further argued that even if the search for extraterrestrial life proves unsuccessful, the endeavor itself could have beneficial consequences by assisting humanity in its attempt to actualize superior ways of living here on Earth. By the 21st century, it was accepted that multicellular life in the Solar System exists only on Earth, but interest in extraterrestrial life increased regardless. This is a result of advances in several sciences. The knowledge of planetary habitability makes it possible to consider in scientific terms the likelihood of finding life on each specific celestial body, as it is now known which features are beneficial and which are harmful to life. Astronomy and telescopes have also improved to the point that exoplanets can be confirmed and even studied, increasing the number of places to search. Life may still exist elsewhere in the Solar System in unicellular form, and advances in spacecraft make it possible to send robots to study samples in situ, with tools of growing complexity and reliability. Although no extraterrestrial life has been found and life may yet prove to be a rarity unique to Earth, there are scientific reasons to suspect that it can exist elsewhere, and technological advances that may detect it if it does. Many scientists are optimistic about the chances of finding alien life. In the words of SETI's Frank Drake, "All we know for sure is that the sky is not littered with powerful microwave transmitters". Drake noted that it is entirely possible that advanced technology results in communication being carried out in some way other than conventional radio transmission. At the same time, the data returned by space probes, and giant strides in detection methods, have allowed science to begin delineating habitability criteria on other worlds, and to confirm that planets, at least, are plentiful, though aliens remain a question mark. The Wow! signal, detected in 1977 by a SETI project, remains a subject of speculative debate. Other scientists, on the other hand, are pessimistic. Jacques Monod wrote that "Man knows at last that he is alone in the indifferent immensity of the universe, whence he has emerged by chance".
In 2000, geologist and paleontologist Peter Ward and astrobiologist Donald Brownlee published a book entitled Rare Earth: Why Complex Life is Uncommon in the Universe. In it, they discussed the Rare Earth hypothesis, which claims that Earth-like life is rare in the universe, whereas microbial life is common. Ward and Brownlee are open to the idea of evolution on other planets that is not based on essential Earth-like characteristics such as DNA and carbon. As for the possible risks, theoretical physicist Stephen Hawking warned in 2010 that humans should not try to contact alien life forms, as aliens might pillage Earth for its resources. "If aliens visit us, the outcome would be much as when Columbus landed in America, which didn't turn out well for the Native Americans", he said. Jared Diamond had earlier expressed similar concerns. On 20 July 2015, Hawking and Russian billionaire Yuri Milner, along with the SETI Institute, announced a well-funded effort, called the Breakthrough Initiatives, to expand the search for extraterrestrial life. The group contracted the services of the 100-meter Robert C. Byrd Green Bank Telescope in West Virginia in the United States and the 64-meter Parkes Telescope in New South Wales, Australia. On 13 February 2015, scientists (including Geoffrey Marcy, Seth Shostak, Frank Drake and David Brin) at a convention of the American Association for the Advancement of Science discussed Active SETI and whether transmitting a message to possible intelligent extraterrestrials in the cosmos was a good idea; one result was a statement, signed by many, that a "worldwide scientific, political and humanitarian discussion must occur before any message is sent".

Government responses

The 1967 Outer Space Treaty and the 1979 Moon Agreement define rules of planetary protection against potentially hazardous extraterrestrial life, and COSPAR also provides guidelines for planetary protection. In 1977, a committee of the United Nations Office for Outer Space Affairs spent a year discussing strategies for interacting with extraterrestrial life or intelligence, but the discussion ended without any conclusions. As of 2010, the UN lacked response mechanisms for the event of an extraterrestrial contact. One of the NASA divisions is the Office of Safety and Mission Assurance (OSMA), also known as the Planetary Protection Office; part of its mission is to "rigorously preclude backward contamination of Earth by extraterrestrial life." In 2016, the Chinese government released a white paper detailing its space program. According to the document, one of the research objectives of the program is the search for extraterrestrial life. It is also one of the objectives of the Chinese Five-hundred-meter Aperture Spherical Telescope (FAST) program. In 2020, Dmitry Rogozin, the head of the Russian space agency, said that the search for extraterrestrial life is one of the main goals of deep space research, and he acknowledged the possibility that primitive life exists on other planets of the Solar System. The French space agency has an office for the study of unidentified aerospace phenomena, which maintains a publicly accessible database of such phenomena with over 1,600 detailed entries. According to the head of the office, the vast majority of entries have a mundane explanation, but for 25% of them an extraterrestrial origin can neither be confirmed nor denied.
In 2020, the chairman of the Israel Space Agency, Isaac Ben-Israel, stated that the probability of detecting life in outer space is "quite large". But he disagrees with his former colleague Haim Eshed, who stated that there are contacts between an advanced alien civilisation and some of Earth's governments.

In fiction

Although the idea of extraterrestrial peoples became feasible once astronomy had developed enough to understand the nature of planets, such beings were not at first imagined as being any different from humans. With no scientific explanation for the origin of mankind and its relation to other species, there was no reason to expect them to be otherwise. This changed with the 1859 book On the Origin of Species by Charles Darwin, which proposed the theory of evolution. With the new notion that evolution on other planets might take other directions, science fiction authors created bizarre aliens, clearly distinct from humans. A usual way to do so was to add body features from other animals, such as insects or octopuses. The feasibility of costuming and special effects, alongside budget considerations, forced films and TV series to tone down the fantasy, but these limitations have lessened since the 1990s with the advent of computer-generated imagery (CGI), and further as CGI became more effective and less expensive. Real-life events sometimes captivate people's imagination and influence works of fiction. For example, during the Barney and Betty Hill incident, the first recorded claim of an alien abduction, the couple reported that they were abducted and experimented on by aliens with oversized heads, big eyes, pale grey skin, and small noses, a description that eventually became the grey alien archetype widely used in works of fiction.
========================================
[SOURCE: https://en.wikipedia.org/wiki/Philosophy_of_science] | [TOKENS: 8944]
Philosophy of science

Philosophy of science is the branch of philosophy concerned with the foundations, methods, and implications of science. Amongst its central questions are the difference between science and non-science, the reliability of scientific theories, and the ultimate purpose and meaning of science as a human endeavour. Philosophy of science focuses on the metaphysical, epistemic and semantic aspects of scientific practice, and overlaps with metaphysics, ontology, logic, and epistemology, for example when it explores the relationship between science and the concept of truth. Philosophy of science is both a theoretical and an empirical discipline, relying on philosophical theorising as well as meta-studies of scientific practice. Ethical issues such as bioethics and scientific misconduct are often considered ethics or science studies rather than philosophy of science. Many of the central problems of the philosophy of science lack contemporary consensus, including whether science can infer truth about unobservable entities and whether inductive reasoning can be justified as yielding definite scientific knowledge. Philosophers of science also consider philosophical problems within particular sciences (such as biology, physics and social sciences such as economics and psychology). Some philosophers of science also use contemporary results in science to reach conclusions about philosophy itself. While philosophical thought pertaining to science dates back at least to the time of Aristotle, the general philosophy of science emerged as a distinct discipline only in the 20th century, following the logical positivist movement, which aimed to formulate criteria for ensuring all philosophical statements' meaningfulness and objectively assessing them. Karl Popper criticized logical positivism and helped establish a modern set of standards for scientific methodology. Thomas Kuhn's 1962 book The Structure of Scientific Revolutions was also formative, challenging the view of scientific progress as the steady, cumulative acquisition of knowledge based on a fixed method of systematic experimentation, and instead arguing that any progress is relative to a "paradigm", the set of questions, concepts, and practices that define a scientific discipline in a particular historical period. Subsequently, the coherentist approach to science, in which a theory is validated if it makes sense of observations as part of a coherent whole, became prominent due to W. V. Quine and others. Some thinkers, such as Stephen Jay Gould, seek to ground science in axiomatic assumptions, such as the uniformity of nature. A vocal minority of philosophers, Paul Feyerabend in particular, argue against the existence of the "scientific method" and hold that, in consequence, all approaches to science should be allowed, including explicitly supernatural ones. Another approach to thinking about science involves studying how knowledge is created from a sociological perspective, an approach represented by scholars like David Bloor and Barry Barnes. Finally, a tradition in continental philosophy approaches science from the perspective of a rigorous analysis of human experience. Philosophies of the particular sciences range from questions about the nature of time raised by Einstein's general relativity to the implications of economics for public policy. A central theme is whether the terms of one scientific theory can be intra- or intertheoretically reduced to the terms of another.
Can chemistry be reduced to physics, or can sociology be reduced to individual psychology? The general questions of philosophy of science also arise with greater specificity in some particular sciences. For instance, the question of the validity of scientific reasoning is seen in a different guise in the foundations of statistics. The question of what counts as science and what should be excluded arises as a life-or-death matter in the philosophy of medicine. Additionally, the philosophies of biology, psychology, and the social sciences explore whether the scientific studies of human nature can achieve objectivity or are inevitably shaped by values and by social relations.

Introduction

Distinguishing between science and non-science is referred to as the demarcation problem. For example, should psychoanalysis, creation science, and historical materialism be considered pseudosciences? Karl Popper called this the central question in the philosophy of science. However, no unified account of the problem has won acceptance among philosophers, and some regard the problem as unsolvable or uninteresting. Martin Gardner has argued for the use of a Potter Stewart standard ("I know it when I see it") for recognizing pseudoscience. Early attempts by the logical positivists grounded science in observation, while non-science was non-observational and hence meaningless. Popper argued that the central property of science is falsifiability: that is, every genuinely scientific claim is capable of being proven false, at least in principle. An area of study or speculation that masquerades as science in an attempt to claim a legitimacy that it would not otherwise be able to achieve is referred to as pseudoscience, fringe science, or junk science. Physicist Richard Feynman coined the term "cargo cult science" for cases in which researchers believe they are doing science because their activities have the outward appearance of it but actually lack the "kind of utter honesty" that allows their results to be rigorously evaluated. A closely related question is what counts as a good scientific explanation. In addition to providing predictions about future events, society often takes scientific theories to provide explanations for events that occur regularly or have already occurred. Philosophers have investigated the criteria by which a scientific theory can be said to have successfully explained a phenomenon, as well as what it means to say a scientific theory has explanatory power. One early and influential account of scientific explanation is the deductive-nomological model. It says that a successful scientific explanation must deduce the occurrence of the phenomenon in question from a scientific law: for example, the expansion of a heated rail is explained by deducing it from the law of thermal expansion together with the facts that the rail is metal and was heated. This view has been subjected to substantial criticism, resulting in several widely acknowledged counterexamples to the theory. It is especially challenging to characterize what is meant by an explanation when the thing to be explained cannot be deduced from any law because it is a matter of chance, or otherwise cannot be perfectly predicted from what is known. Wesley Salmon developed a model in which a good scientific explanation must be statistically relevant to the outcome to be explained. Others have argued that the key to a good explanation is unifying disparate phenomena or providing a causal mechanism.
Although it is often taken for granted, it is not at all clear how one can infer the validity of a general statement from a number of specific instances, or infer the truth of a theory from a series of successful tests. For example, a chicken observes that each morning the farmer comes and gives it food, for hundreds of days in a row. The chicken may therefore use inductive reasoning to infer that the farmer will bring food every morning. However, one morning, the farmer comes and kills the chicken. How is scientific reasoning more trustworthy than the chicken's reasoning? One approach is to acknowledge that induction cannot achieve certainty, but that observing more instances of a general statement can at least make the general statement more probable. Under Laplace's classical rule of succession, for instance, after n consecutive mornings of feeding the probability of food on the next morning is (n+1)/(n+2): ever higher, but never certain. So the chicken would be right to conclude from all those mornings that the farmer will likely come with food again the next morning, even if it cannot be certain. However, there remain difficult questions about the process of interpreting any given evidence into a probability that the general statement is true. One way out of these particular difficulties is to declare that all beliefs about scientific theories are subjective, or personal, and that correct reasoning is merely about how evidence should change one's subjective beliefs over time. Some argue that what scientists do is not inductive reasoning at all, but rather abductive reasoning, or inference to the best explanation. In this account, science is not about generalizing specific instances but about hypothesizing explanations for what is observed. As discussed in the previous section, it is not always clear what is meant by the "best explanation". Occam's razor, which counsels choosing the simplest available explanation, thus plays an important role in some versions of this approach. To return to the example of the chicken, would it be simpler to suppose that the farmer cares about it and will continue taking care of it indefinitely, or that the farmer is fattening it up for slaughter? Philosophers have tried to make this heuristic principle more precise in terms of theoretical parsimony or other measures. Yet, although various measures of simplicity have been brought forward as potential candidates, it is generally accepted that there is no such thing as a theory-independent measure of simplicity. In other words, there appear to be as many different measures of simplicity as there are theories themselves, and the task of choosing between measures of simplicity appears to be every bit as problematic as the job of choosing between theories. Nicholas Maxwell has argued for some decades that unity rather than simplicity is the key non-empirical factor influencing the choice of theory in science, the persistent preference for unified theories in effect committing science to the acceptance of a metaphysical thesis concerning unity in nature. To render this problematic thesis defensible, he argues, it needs to be represented in the form of a hierarchy of theses, each thesis becoming more insubstantial as one goes up the hierarchy. When making observations, scientists look through telescopes, study images on electronic screens, record meter readings, and so on. Generally, on a basic level, they can agree on what they see; e.g., the thermometer shows 37.9 degrees C. But if these scientists have different ideas about the theories that have been developed to explain these basic observations, they may disagree about what they are observing.
For example, before Albert Einstein's general theory of relativity, observers would likely have interpreted an image of the Einstein cross as five different objects in space. In light of that theory, however, astronomers will say that there are actually only two objects: one in the center and four different images of a second object around the sides. Alternatively, if other scientists suspect that something is wrong with the telescope and only one object is actually being observed, they are operating under yet another theory. Observations that cannot be separated from theoretical interpretation are said to be theory-laden. All observation involves both perception and cognition. That is, one does not make an observation passively, but rather is actively engaged in distinguishing the phenomenon being observed from surrounding sensory data. Therefore, observations are affected by one's underlying understanding of the way in which the world functions, and that understanding may influence what is perceived, noticed, or deemed worthy of consideration. In this sense, it can be argued that all observation is theory-laden. Should science aim to determine ultimate truth, or are there questions that science cannot answer? Scientific realists claim that science aims at truth and that one ought to regard scientific theories as true, approximately true, or likely true. Conversely, scientific anti-realists argue that science does not aim (or at least does not succeed) at truth, especially truth about unobservables like electrons or other universes. Instrumentalists argue that scientific theories should be evaluated only on whether they are useful. In their view, whether theories are true or not is beside the point, because the purpose of science is to make predictions and enable effective technology. Realists often point to the success of recent scientific theories as evidence for the truth (or near truth) of current theories. Antirealists point to either the many false theories in the history of science, epistemic morals, the success of false modeling assumptions, or broadly postmodern criticisms of objectivity as evidence against scientific realism. Antirealists attempt to explain the success of scientific theories without reference to truth. Some antirealists claim that scientific theories aim at being accurate only about observable objects and argue that their success is primarily judged by that criterion. The notion of real patterns has been propounded, notably by the philosopher Daniel C. Dennett, as an intermediate position between strong realism and eliminative materialism. The concept concerns the patterns observed in scientific phenomena and whether they signify underlying truths or are mere constructs of human interpretation. Dennett provides a distinctive ontological account of real patterns, examining the extent to which recognized patterns have predictive utility and allow for efficient compression of information. The discourse on real patterns extends beyond philosophical circles, finding relevance in various scientific domains. For example, in biology, inquiries into real patterns seek to elucidate the nature of biological explanations, exploring how recognized patterns contribute to a comprehensive understanding of biological phenomena. Similarly, in chemistry, debates continue about whether chemical bonds are real patterns. The evaluation of real patterns also holds significance in broader scientific inquiries (a concrete illustration of the compression idea follows below).
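Dennett's compression criterion can be made concrete. A sequence containing a real pattern can be encoded in far fewer bits than a patternless one; the sketch below uses Python's standard-library zlib compressor as a rough stand-in for algorithmic compressibility (an illustration of the idea, not Dennett's own formal apparatus):

    import os
    import zlib

    patterned = b"AB" * 5_000   # 10,000 bytes with an obvious regularity
    noise = os.urandom(10_000)  # 10,000 bytes of pseudo-random data

    for name, data in [("patterned", patterned), ("noise", noise)]:
        size = len(zlib.compress(data, level=9))
        print(f"{name}: {len(data)} -> {size} bytes")

    # The patterned sequence compresses to a few dozen bytes; the noise
    # barely compresses at all. On the real-patterns view, that
    # compressibility is precisely what makes the pattern "real".

The same intuition underlies proposals such as Millhouse's, discussed next, which ask when a perceived pattern tracks something in the world rather than in the observer.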
Researchers, like Tyler Millhouse, propose criteria for evaluating the realness of a pattern, particularly in the context of universal patterns and the human propensity to perceive patterns, even where there might be none. This evaluation is pivotal in advancing research in diverse fields, from climate change to machine learning, where recognition and validation of real patterns in scientific models play a crucial role. Values intersect with science in different ways. There are epistemic values that mainly guide scientific research. The scientific enterprise is embedded in a particular culture and its values through individual practitioners. Values emerge from science, both as product and process, and can be distributed among several cultures in society. When individual practitioners are called on to justify science to the general public, science acts as a mediator between the standards and policies of society and the individuals who participate in it; in this mediating role, science can fall victim to vandalism and sabotage that adapt its means to other ends. If it is unclear what counts as science, how the process of confirming theories works, and what the purpose of science is, there is considerable scope for values and other social influences to shape science. Indeed, values can play a role ranging from determining which research gets funded to influencing which theories achieve scientific consensus. For example, in the 19th century, cultural values held by scientists about race shaped research on evolution, and values concerning social class influenced debates on phrenology (considered scientific at the time). Feminist philosophers of science, sociologists of science, and others explore how social values affect science.[citation needed] History The origins of philosophy of science trace back to Plato and Aristotle, who distinguished the forms of approximate and exact reasoning, set out the threefold scheme of abductive, deductive, and inductive inference, and also analyzed reasoning by analogy. The eleventh-century Arab polymath Ibn al-Haytham (known in Latin as Alhazen) conducted his research in optics by way of controlled experimental testing and applied geometry, especially in his investigations into the images resulting from the reflection and refraction of light. Roger Bacon (1214–1294), an English thinker and experimenter heavily influenced by al-Haytham, is recognized by many as the father of modern scientific method. His view that mathematics was essential to a correct understanding of natural philosophy is considered to have been 400 years ahead of its time. Francis Bacon (no direct relation to Roger Bacon, who lived 300 years earlier) was a seminal figure in philosophy of science at the time of the Scientific Revolution. In his work Novum Organum (1620)—an allusion to Aristotle's Organon—Bacon outlined a new system of logic to improve upon the old philosophical process of syllogism. Bacon's method relied on experimental histories to eliminate alternative theories. In 1637, René Descartes established a new framework for grounding scientific knowledge in his treatise, Discourse on Method, advocating the central role of reason as opposed to sensory experience. By contrast, in 1713, the 2nd edition of Isaac Newton's Philosophiae Naturalis Principia Mathematica argued that "... hypotheses ... have no place in experimental philosophy. In this philosophy[,] propositions are deduced from the phenomena and rendered general by induction."
This passage influenced a "later generation of philosophically-inclined readers to pronounce a ban on causal hypotheses in natural philosophy". In particular, later in the 18th century, David Hume would famously articulate skepticism about the ability of science to determine causality and give a definitive formulation of the problem of induction, though both theses would be contested by the end of the 18th century by Immanuel Kant in his Critique of Pure Reason and Metaphysical Foundations of Natural Science. In the 19th century, Auguste Comte made a major contribution to the theory of science. The 19th century writings of John Stuart Mill are also considered important in the formation of current conceptions of the scientific method, as well as anticipating later accounts of scientific explanation. Instrumentalism[jargon] became popular among physicists around the turn of the 20th century, after which logical positivism defined the field for several decades. Logical positivism accepts only testable statements as meaningful, rejects metaphysical interpretations, and embraces verificationism (a set of theories of knowledge that combines logicism, empiricism, and linguistics to ground philosophy on a basis consistent with examples from the empirical sciences). Seeking to overhaul all of philosophy and convert it to a new scientific philosophy, the Berlin Circle and the Vienna Circle propounded logical positivism in the late 1920s. Interpreting Ludwig Wittgenstein's early philosophy of language, logical positivists identified a verifiability principle or criterion of cognitive meaningfulness. From Bertrand Russell's logicism they sought reduction of mathematics to logic. They also embraced Russell's logical atomism, Ernst Mach's phenomenalism—whereby the mind knows only actual or potential sensory experience, which is the content of all sciences, whether physics or psychology—and Percy Bridgman's operationalism. Thereby, only the verifiable was scientific and cognitively meaningful, whereas the unverifiable was unscientific, cognitively meaningless "pseudostatements"—metaphysical, emotive, or such—not worthy of further review by philosophers, who were newly tasked to organize knowledge rather than develop new knowledge.[citation needed] Logical positivism is commonly portrayed as taking the extreme position that scientific language should never refer to anything unobservable—even the seemingly core notions of causality, mechanism, and principles—but that is an exaggeration. Talk of such unobservables could be allowed as metaphorical—direct observations viewed in the abstract—or at worst metaphysical or emotional. Theoretical laws would be reduced to empirical laws, while theoretical terms would garner meaning from observational terms via correspondence rules. Mathematics in physics would reduce to symbolic logic via logicism, while rational reconstruction would convert ordinary language into standardized equivalents, all networked and united by a logical syntax. A scientific theory would be stated with its method of verification, whereby a logical calculus or empirical operation could verify its falsity or truth.[citation needed] In the late 1930s, logical positivists fled Germany and Austria for Britain and America. By then, many had replaced Mach's phenomenalism with Otto Neurath's physicalism, and Rudolf Carnap had sought to replace verification with mere confirmation.
With World War II's close in 1945, logical positivism gave way to the milder logical empiricism, led largely in America by Carl Hempel, who expounded the covering law model of scientific explanation as a way of identifying the logical form of explanations without any reference to the suspect notion of "causation". The logical positivist movement became a major underpinning of analytic philosophy, and dominated Anglosphere philosophy, including philosophy of science, while influencing the sciences into the 1960s. Yet the movement failed to resolve its central problems, and its doctrines were increasingly assaulted. Nevertheless, it brought about the establishment of philosophy of science as a distinct subdiscipline of philosophy, with Carl Hempel playing a key role. In the 1962 book The Structure of Scientific Revolutions, Thomas Kuhn argued that the process of observation and evaluation takes place within a "paradigm", which he describes as "universally recognized achievements that for a time provide model problems and solutions to a community of practitioners." A paradigm implicitly identifies the objects and relations under study and suggests what experiments, observations or theoretical improvements need to be carried out to produce a useful result. He characterized normal science as the process of observation and "puzzle solving" which takes place within a paradigm, whereas revolutionary science occurs when one paradigm overtakes another in a paradigm shift. Kuhn was a historian of science and his ideas were inspired by the study of older paradigms that have been discarded, such as Aristotelian mechanics or aether theory. These had often been portrayed by historians as using "unscientific" methods or beliefs. But Kuhn's examination showed that they were no less "scientific" than modern paradigms. A paradigm shift occurred when a significant number of observational anomalies arose in the old paradigm and efforts to resolve them within the paradigm were unsuccessful. A new paradigm was available that handled the anomalies with less difficulty and yet still covered (most of) the previous results. Over a period of time, often as long as a generation, more practitioners began working within the new paradigm and eventually the old paradigm was abandoned. For Kuhn, acceptance or rejection of a paradigm is a social process as much as a logical process. Kuhn explicitly rejected a relativist interpretation of his ideas. He wrote "terms like 'subjective' and 'intuitive' cannot be applied to [paradigms]." Paradigms, as he understood them, are grounded in objective, observable evidence, but our use of them is psychological and our acceptance of them is social. Current approaches According to Robert Priddy, all scientific study inescapably builds on at least some essential assumptions that cannot be tested by scientific processes; that is, scientists must start with some assumptions as to the ultimate analysis of the facts with which science deals. These assumptions would then be justified partly by their adherence to the types of occurrence of which we are directly conscious, and partly by their success in representing the observed facts with a certain generality, devoid of ad hoc suppositions. Kuhn also claims that all science is based on assumptions about the character of the universe, rather than merely on empirical facts.
These assumptions – a paradigm – comprise a collection of beliefs, values and techniques that are held by a given scientific community, which legitimize its systems and set the limits of its investigation. For naturalists, nature is the only reality, the "correct" paradigm, and there is no such thing as the supernatural, i.e. anything above, beyond, or outside of nature. The scientific method is to be used to investigate all reality, including the human spirit. Some[who?] claim that naturalism is the implicit philosophy of working scientists and that a set of basic assumptions is needed to justify the scientific method. In contrast to the view that science rests on foundational assumptions, coherentism asserts that statements are justified by being a part of a coherent system. Or, rather, individual statements cannot be validated on their own: only coherent systems can be justified. A prediction of a transit of Venus is justified by its being coherent with broader beliefs about celestial mechanics and earlier observations. As explained above, observation is a cognitive act. That is, it relies on a pre-existing understanding, a systematic set of beliefs. An observation of a transit of Venus requires a huge range of auxiliary beliefs, such as those that describe the optics of telescopes, the mechanics of the telescope mount, and an understanding of celestial mechanics. If the prediction fails and a transit is not observed, that is likely to occasion an adjustment in the system, a change in some auxiliary assumption, rather than a rejection of the theoretical system.[citation needed] According to the Duhem–Quine thesis, after Pierre Duhem and W.V. Quine, it is impossible to test a theory in isolation. One must always add auxiliary hypotheses in order to make testable predictions. For example, to test Newton's Law of Gravitation in the solar system, one needs information about the masses and positions of the Sun and all the planets. Famously, the failure to predict the orbit of Uranus in the 19th century led not to the rejection of Newton's Law but rather to the rejection of the hypothesis that the Solar System comprises only seven planets. The investigations that followed led to the discovery of an eighth planet, Neptune. If a test fails, something is wrong. But there is a problem in figuring out what that something is: a missing planet, badly calibrated test equipment, an unsuspected curvature of space, or something else.[citation needed] One consequence of the Duhem–Quine thesis is that one can make any theory compatible with any empirical observation by the addition of a sufficient number of suitable ad hoc hypotheses. Karl Popper accepted this thesis, leading him to reject naïve falsification. Instead, he favored a "survival of the fittest" view in which the most falsifiable scientific theories are to be preferred. Paul Feyerabend (1924–1994) argued that no description of scientific method could possibly be broad enough to include all the approaches and methods used by scientists, and that there are no useful and exception-free methodological rules governing the progress of science. He argued that "the only principle that does not inhibit progress is: anything goes". Feyerabend said that science started as a liberating movement, but that over time it had become increasingly dogmatic and rigid and had some oppressive features, and thus had become increasingly an ideology.
Because of this, he said it was impossible to come up with an unambiguous way to distinguish science from religion, magic, or mythology. He saw the exclusive dominance of science as a means of directing society as authoritarian and ungrounded. Promulgation of this epistemological anarchism earned Feyerabend the title of "the worst enemy of science" from his detractors. According to Kuhn, science is an inherently communal activity which can only be done as part of a community. For him, the fundamental difference between science and other disciplines is the way in which the communities function. Others, especially Feyerabend and some post-modernist thinkers, have argued that there is insufficient difference between social practices in science and other disciplines to maintain this distinction. For them, social factors play an important and direct role in scientific method, but they do not serve to differentiate science from other disciplines. On this account, science is socially constructed, though this does not necessarily imply the more radical notion that reality itself is a social construct.[citation needed] Michel Foucault sought to analyze and uncover how disciplines within the social sciences developed and adopted the methodologies used by their practitioners. In works like The Archaeology of Knowledge, he used the term human sciences. The human sciences do not comprise mainstream academic disciplines; they are rather an interdisciplinary space for the reflection on man who is the subject of more mainstream scientific knowledge, taken now as an object, sitting between these more conventional areas, and of course associating with disciplines such as anthropology, psychology, sociology, and even history. Rejecting the realist view of scientific inquiry, Foucault argued throughout his work that scientific discourse is not simply an objective study of phenomena, as both natural and social scientists like to believe, but is rather the product of systems of power relations struggling to construct scientific disciplines and knowledge within given societies. With the advances of scientific disciplines, such as psychology and anthropology, the need to separate, categorize, normalize and institutionalize populations into constructed social identities became a staple of the sciences. Constructions of what were considered "normal" and "abnormal" stigmatized and ostracized groups of people, like the mentally ill and sexual and gender minorities. However, some (such as Quine) do maintain that scientific reality is a social construct: Physical objects are conceptually imported into the situation as convenient intermediaries not by definition in terms of experience, but simply as irreducible posits comparable, epistemologically, to the gods of Homer ... For my part I do, qua lay physicist, believe in physical objects and not in Homer's gods; and I consider it a scientific error to believe otherwise. But in point of epistemological footing, the physical objects and the gods differ only in degree and not in kind. Both sorts of entities enter our conceptions only as cultural posits. The public backlash of scientists against such views, particularly in the 1990s, became known as the science wars. A major development in recent decades has been the study of the formation, structure, and evolution of scientific communities by sociologists and anthropologists – including David Bloor, Harry Collins, Bruno Latour, Ian Hacking and Anselm Strauss. 
Concepts and methods (such as rational choice, social choice or game theory) from economics have also been applied[by whom?] for understanding the efficiency of scientific communities in the production of knowledge. This interdisciplinary field has come to be known as science and technology studies. Here the approach to the philosophy of science is to study how scientific communities actually operate.[citation needed] Philosophers in the continental philosophical tradition are not traditionally categorized[by whom?] as philosophers of science. However, they have much to say about science, some of which has anticipated themes in the analytical tradition. For example, in The Genealogy of Morals (1887) Friedrich Nietzsche advanced the thesis that the motive for the search for truth in sciences is a kind of ascetic ideal. In general, continental philosophy views science from a world-historical perspective. Philosophers such as Pierre Duhem (1861–1916) and Gaston Bachelard (1884–1962) wrote their works with this world-historical approach to science, predating Kuhn's 1962 work by a generation or more. All of these approaches involve a historical and sociological turn to science, with a priority on lived experience (a kind of Husserlian "life-world"), rather than a progress-based or anti-historical approach as emphasised in the analytic tradition. One can trace this continental strand of thought through the phenomenology of Edmund Husserl (1859–1938), the late works of Merleau-Ponty (Nature: Course Notes from the Collège de France, 1956–1960), and the hermeneutics of Martin Heidegger (1889–1976). The largest effect on the continental tradition with respect to science came from Martin Heidegger's critique of the theoretical attitude in general, which of course includes the scientific attitude. For this reason, the continental tradition has remained much more skeptical of the importance of science in human life and in philosophical inquiry. Nonetheless, there have been a number of important works: especially those of a Kuhnian precursor, Alexandre Koyré (1892–1964). Another important development was that of Michel Foucault's analysis of historical and scientific thought in The Order of Things (1966) and his study of power and corruption within the "science" of madness. Post-Heideggerian authors contributing to continental philosophy of science in the second half of the 20th century include Jürgen Habermas (e.g., Truth and Justification, 1998), Carl Friedrich von Weizsäcker (The Unity of Nature, 1980; German: Die Einheit der Natur (1971)), and Wolfgang Stegmüller (Probleme und Resultate der Wissenschaftstheorie und Analytischen Philosophie, 1973–1986).[citation needed] Other topics Analysis involves breaking an observation or theory down into simpler concepts in order to understand it. Reductionism can refer to one of several philosophical positions related to this approach. One type of reductionism suggests that phenomena are amenable to scientific explanation at lower levels of analysis and inquiry. Perhaps a historical event might be explained in sociological and psychological terms, which in turn might be described in terms of human physiology, which in turn might be described in terms of chemistry and physics. Daniel Dennett distinguishes legitimate reductionism from what he calls greedy reductionism, which denies real complexities and leaps too quickly to sweeping generalizations. 
A broad issue affecting the neutrality of science concerns the areas which science chooses to explore—that is, what part of the world and of humankind is studied by science. Philip Kitcher in his Science, Truth, and Democracy argues that scientific studies that attempt to show one segment of the population as being less intelligent, less successful, or emotionally backward compared to others have a political feedback effect which further excludes such groups from access to science. Thus such studies undermine the broad consensus required for good science by excluding certain people, and so prove themselves in the end to be unscientific.[citation needed] Philosophy of particular sciences There is no such thing as philosophy-free science; there is only science whose philosophical baggage is taken on board without examination. — Daniel Dennett, Darwin's Dangerous Idea, 1995 In addition to addressing the general questions regarding science and induction, many philosophers of science are occupied by investigating foundational problems in particular sciences. They also examine the implications of particular sciences for broader philosophical questions. The late 20th and early 21st centuries have seen a rise in the number of practitioners of philosophy of a particular science. The problem of induction discussed above is seen in another form in debates over the foundations of statistics. The standard approach to statistical hypothesis testing avoids claims about whether evidence supports a hypothesis or makes it more probable. Instead, the typical test yields a p-value, which is the probability of obtaining evidence at least as extreme as that actually observed, under the assumption that the null hypothesis is true. If the p-value is below a chosen significance threshold, the null hypothesis is rejected, in a way analogous to falsification. In contrast, Bayesian inference seeks to assign probabilities to hypotheses. Related topics in philosophy of statistics include probability interpretations, overfitting, and the difference between correlation and causation.[citation needed]
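As a purely illustrative sketch of this contrast (the coin-flip numbers are invented; scipy.stats.binomtest and scipy.stats.beta are standard SciPy functions, and nothing here is drawn from the philosophical literature under discussion):

```python
# Toy contrast between frequentist testing and Bayesian inference:
# is a coin fair, given 62 heads in 100 flips? (Invented numbers.)
from scipy import stats

heads, flips = 62, 100

# Frequentist: p-value = probability of data at least this extreme,
# computed under the null hypothesis that the coin is fair (p = 0.5).
p_value = stats.binomtest(heads, flips, p=0.5).pvalue
print(f"p-value under the null: {p_value:.4f}")  # low value -> reject the null

# Bayesian: place a prior on the bias and compute a posterior for it.
# With a uniform Beta(1, 1) prior, the posterior is Beta(1 + heads, 1 + tails).
posterior = stats.beta(1 + heads, 1 + flips - heads)
print(f"posterior P(bias > 0.5): {posterior.sf(0.5):.4f}")
```

The frequentist line reports only how surprising the data would be if the null hypothesis were true; the Bayesian lines assign a probability to the hypothesis itself, at the cost of having to choose a prior.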
Philosophy of mathematics is concerned with the philosophical foundations and implications of mathematics. The central questions are whether numbers, triangles, and other mathematical entities exist independently of the human mind and what is the nature of mathematical propositions. Is asking whether "1 + 1 = 2" is true fundamentally different from asking whether a ball is red? Was calculus invented or discovered? A related question is whether learning mathematics requires experience or reason alone. What does it mean to prove a mathematical theorem and how does one know whether a mathematical proof is correct? Philosophers of mathematics also aim to clarify the relationships between mathematics and logic, human capabilities such as intuition, and the material universe.[citation needed] Philosophy of physics is the study of the fundamental, philosophical questions underlying modern physics, the study of matter and energy and how they interact. The main questions concern the nature of space and time, atoms and atomism. Also included are the predictions of cosmology, the interpretation of quantum mechanics, the foundations of statistical mechanics, causality, determinism, and the nature of physical laws. Classically, several of these questions were studied as part of metaphysics (for example, those about causality, determinism, and space and time).[citation needed] Philosophy of chemistry is the philosophical study of the methodology and content of the science of chemistry. It is explored by philosophers, chemists, and philosopher-chemist teams. It includes research on general philosophy of science issues as applied to chemistry. For example, can all chemical phenomena be explained by quantum mechanics or is it not possible to reduce chemistry to physics? For another example, chemists have discussed the philosophy of how theories are confirmed in the context of confirming reaction mechanisms. Determining reaction mechanisms is difficult because they cannot be observed directly. Chemists can use a number of indirect measures as evidence to rule out certain mechanisms, but they are often unsure if the remaining mechanism is correct because there are many other possible mechanisms that they have not tested or even thought of. Philosophers have also sought to clarify the meaning of chemical concepts which do not refer to specific physical entities, such as chemical bonds.[citation needed] The philosophy of astronomy seeks to understand and analyze the methodologies and technologies used by experts in the discipline, focusing on how observations made about space and astrophysical phenomena can be studied. Given that astronomers rely on and use theories and formulas from other scientific disciplines, such as chemistry and physics, its main points of inquiry are how knowledge about the cosmos can be obtained, how facts about space can be scientifically analyzed and reconciled with other established knowledge, and how the Earth and the Solar System figure in views of humanity's place in the universe.[citation needed] The philosophy of Earth science is concerned with how humans obtain and verify knowledge of the workings of the Earth system, including the atmosphere, hydrosphere, and geosphere (solid earth). Earth scientists' ways of knowing and habits of mind share important commonalities with other sciences, but also have distinctive attributes that emerge from the complex, heterogeneous, unique, long-lived, and non-manipulatable nature of the Earth system.[citation needed] Philosophy of biology deals with epistemological, metaphysical, and ethical issues in the biological and biomedical sciences. Although philosophers of science and philosophers generally have long been interested in biology (e.g., Aristotle, Descartes, Leibniz and even Kant), philosophy of biology only emerged as an independent field of philosophy in the 1960s and 1970s. Philosophers of science began to pay increasing attention to developments in biology, from the rise of the modern synthesis in the 1930s and 1940s to the discovery of the structure of deoxyribonucleic acid (DNA) in 1953 to more recent advances in genetic engineering. Other key ideas, such as the reduction of all life processes to biochemical reactions and the incorporation of psychology into a broader neuroscience, are also addressed. Research in current philosophy of biology includes investigation of the foundations of evolutionary theory (such as Peter Godfrey-Smith's work) and the role of viruses as persistent symbionts in host genomes. As a consequence, the evolution of genetic content order is seen as the result of competent genome editors,[further explanation needed] in contrast to former narratives in which error replication events (mutations) dominated. Beyond medical ethics and bioethics, the philosophy of medicine is a branch of philosophy that includes the epistemology and ontology/metaphysics of medicine.
Within the epistemology of medicine, evidence-based medicine (EBM) (or evidence-based practice (EBP)) has attracted attention, most notably the roles of randomisation, blinding and placebo controls. Related to these areas of investigation, ontologies of specific interest to the philosophy of medicine include Cartesian dualism, the monogenetic conception of disease and the conceptualization of 'placebos' and 'placebo effects'. There is also a growing interest in the metaphysics of medicine, particularly the idea of causation. Philosophers of medicine might not only be interested in how medical knowledge is generated, but also in the nature of such phenomena. Causation is of interest because the purpose of much medical research is to establish causal relationships, e.g. what causes disease, or what causes people to get better. Philosophy of psychiatry explores philosophical questions relating to psychiatry and mental illness. The philosopher of science and medicine Dominic Murphy identifies three areas of exploration in the philosophy of psychiatry. The first concerns the examination of psychiatry as a science, using the tools of the philosophy of science more broadly. The second entails the examination of the concepts employed in discussion of mental illness, including the experience of mental illness, and the normative questions it raises. The third area concerns the links and discontinuities between the philosophy of mind and psychopathology. Philosophy of psychology refers to issues at the theoretical foundations of modern psychology. Some of these issues are epistemological concerns about the methodology of psychological investigation. For example, is the best method for studying psychology to focus only on the response of behavior to external stimuli or should psychologists focus on mental perception and thought processes? If the latter, an important question is how the internal experiences of others can be measured. Self-reports of feelings and beliefs may not be reliable because, even in cases in which there is no apparent incentive for subjects to intentionally deceive in their answers, self-deception or selective memory may affect their responses. Then even in the case of accurate self-reports, how can responses be compared across individuals? Even if two individuals respond with the same answer on a Likert scale, they may be experiencing very different things.[citation needed] Other issues in philosophy of psychology are philosophical questions about the nature of mind, brain, and cognition, and are perhaps more commonly thought of as part of cognitive science, or philosophy of mind. For example, are humans rational creatures? Is there any sense in which they have free will, and how does that relate to the experience of making choices? Philosophy of psychology also closely monitors contemporary work conducted in cognitive neuroscience, psycholinguistics, and artificial intelligence, questioning what they can and cannot explain in psychology.[citation needed] Philosophy of psychology is a relatively young field, because psychology only became a discipline of its own in the late 1800s. In particular, neurophilosophy has just recently become its own field with the works of Paul Churchland and Patricia Churchland. Philosophy of mind, by contrast, has been a well-established discipline since before psychology was a field of study at all. 
It is concerned with questions about the very nature of mind, the qualities of experience, and particular issues like the debate between dualism and monism.[citation needed] The philosophy of social science is the study of the logic and method of the social sciences, such as sociology and cultural anthropology. Philosophers of social science are concerned with the differences and similarities between the social and the natural sciences, causal relationships between social phenomena, the possible existence of social laws, and the ontological significance of structure and agency.[citation needed] The French philosopher Auguste Comte (1798–1857) established the epistemological perspective of positivism in The Course in Positive Philosophy, a series of texts published between 1830 and 1842. The first three volumes of the Course dealt chiefly with the natural sciences already in existence (geoscience, astronomy, physics, chemistry, biology), whereas the latter two emphasised the inevitable coming of social science: "sociologie". For Comte, the natural sciences necessarily had to arrive first, before humanity could adequately channel its efforts into the most challenging and complex "Queen science" of human society itself. Comte offers an evolutionary system proposing that society undergoes three phases in its quest for the truth according to a general 'law of three stages'. These are (1) the theological, (2) the metaphysical, and (3) the positive. Comte's positivism established the initial philosophical foundations for formal sociology and social research. Durkheim, Marx, and Weber are more typically cited as the fathers of contemporary social science. In psychology, a positivistic approach has historically been favoured in behaviourism. Positivism has also been espoused by 'technocrats' who believe in the inevitability of social progress through science and technology. The positivist perspective has been associated with 'scientism'; the view that the methods of the natural sciences may be applied to all areas of investigation, be it philosophical, social scientific, or otherwise. Among most social scientists and historians, orthodox positivism has long since lost popular support. Today, practitioners of both social and physical sciences instead take into account the distorting effect of observer bias and structural limitations. This scepticism has been facilitated by a general weakening of deductivist accounts of science by philosophers such as Thomas Kuhn, and new philosophical movements such as critical realism and neopragmatism. The philosopher-sociologist Jürgen Habermas has critiqued pure instrumental rationality as meaning that scientific thinking becomes something akin to ideology itself. The philosophy of technology is a sub-field of philosophy that studies the nature of technology. Specific research topics include study of the role of tacit and explicit knowledge in creating and using technology, the nature of functions in technological artifacts, the role of values in design, and ethics related to technology. Technology and engineering can both involve the application of scientific knowledge. The philosophy of engineering is an emerging sub-field of the broader philosophy of technology.[citation needed]
========================================
[SOURCE: https://www.theverge.com/entertainment] | [TOKENS: 1600]
Entertainment The Verge’s entertainment section collects the latest news from the worlds of pop culture, music, movies, television, and video games. Whether you want to know what to watch on Netflix or how to make the most of your streaming service budget, the entertainment section acts as a reliable source. There’s simply too much to read, watch, hear, and play. Let us be your tour guide. The Virtual Boy accessory for the Switch is only for the most dedicated Nintendo fans. Latest In Entertainment Now that Andor has come to an end, series creator Tony Gilroy is free to speak more openly about what it was like working for Disney, and in a recent interview with The Hollywood Reporter he says that the studio was very insistent on him not using the word “fascism” while talking about his show focused on fighting fascism. [The Hollywood Reporter] I Am Frankelda — co-writer / directors Arturo and Roy Ambriz’s stop motion dark fantasy film about a girl with a strange connection to another dimension — has been acquired by Netflix and is slated to debut on the streamer sometime later this year. Ahead of the debut of Gorillaz’s new album The Mountain, the animated band has dropped a teaser video that makes it seem like there might also be a short film situation on the way. According to Fox 11 Los Angeles: The lawsuit alleges that “children in Los Angeles County have been repeatedly exposed to sexually explicit content, exploitation and grooming on Roblox because the company chooses to put corporate profit over the safety of children.” Some states have sued Roblox, too. [Fox 11 Los Angeles] That’s two weeks from today! The game has new characters, new cards, and even a co-op mode. Can’t wait. If you buy something from a Verge link, Vox Media may earn a commission. See our ethics statement. Highguard’s website has been down for two days, and developer Wildlight Entertainment reportedly laid off most of its staff last week, but development on the game continues. Last night, Wildlight published notes about a new patch and detailed some changes coming to Highguard’s next patch, including a new raid-focused mode. We already knew that the threat in Toy Story 5 would be a fresh piece of technology voiced by Greta Lee, and the latest trailer shows a bit more of the adorably creepy Lilypad, which is “always listening.” I guess I know what fictional gadget I’m reviewing next. Starting on May 3rd, 2026, five Formula 1 races, including the Miami, Monaco, British, Italian, and United States Grands Prix, will be shown in “select IMAX locations” across the country. The showings will take place in at least 50 IMAX theaters as a result of a new partnership with Apple. Though HBO still hasn’t announced a firm release date for House of the Dragon’s upcoming third season, there’s a new trailer teasing out Rhaenyra’s plan to make her enemies pay using her squad of newly-tamed dragons. The new season drops some time in June. The epic sci-fi RPG, which debuted on the Wii U before being ported to the Switch, is now available in a Switch 2 edition. It includes technical upgrades like support for 4K and 60fps in TV mode, and if you already own the Switch version, it’s $4.99 to upgrade. With the new integration, SeatGeek will join the dozens of other companies Spotify partners with to sell tickets within its app.
SeatGeek will surface tickets for shows at 15 major venues around the US, which could appear as recommendations on an artist’s Spotify page or inside notifications. Starting on February 18th, Dropout fans will be able to drop into a new 24/7, ad-free livestream channel on the comedy-focused streaming service. “Dropout 24/7” will run all of its content, including the TTRPG series Dimension 20, improv comedy shows Game Changer and Make Some Noise, and more, starting with a Dimension 20 marathon. Focus Features is billing The AI Doc: Or How I Became An Apocaloptimist as an “eye-opening” exploration of “the most powerful technology humanity has ever created.” You’d think the doc might feature some critical voices, but its new trailer makes it feel like it might be one big commercial. The film premieres on March 27th. The latest trailer for The Mandalorian and Grogu really highlights how very, very young Grogu still is for one of Yoda’s species, which makes it seem that much more absurd that Din Djarin still hasn’t found his son a helmet to protect that (presumably) soft head. The legendary composer is celebrating 40 years of Music Mouse, which brought algorithmic composition to home computers. It can apparently recognize how much influence a given track or artist had on AI content, and can work with or without cooperation from AI developers. Sony thinks it could be used to create a licensing system for AI music, but “has yet to decide” when it might be put to use. [Nikkei Asia] Activision removed the game from app stores in May because it “did not meet our expectations.” If you’re still playing it, you’ll have a couple months until it’s offline for good. [Activision] It’s a new feature showing up in the app as part of the first iOS 26.4 developer beta, as reported by 9to5Mac. The beta also includes a new “Playlist Playground” feature that uses Apple Intelligence to make a playlist from a text prompt. Update: Added Playlist Playground details. [9to5Mac] GameHub on Android let Sean Hollister play the Steam version of Hollow Knight: Silksong on his phone, and GameSir says its new Mac app is “coming soon.” GameSir claims the app will let you run Windows games natively, and it’s teasing a new controller, too. It only took an on-camera F-bomb, accusations of cheating, and then video confirmation to get people to take interest. Is the Olympic sport that features frantic brooming having a Hawk-Eye moment? [ESPN.com] During the company’s Q4 earnings call, Gustav Söderström revealed that Spotify had fully embraced vibe coding. AI is coming for a lot of jobs, and software developer is high on the list of those in danger. Still, it’s shocking that the top devs at Spotify haven’t written any code in 2026. Per Business Insider: “When I speak to my most senior engineers — the best developers we have — they actually say that they haven’t written a single line of code since December… They actually only generate code and supervise it.” - Spotify CEO Gustav Söderström Flow Tuner lets users pick which genres are included in a Flow session, rather than just disliking individual songs. It’s a hamfisted way to customize your algorithm, but the specificity of Deezer’s options is impressive: Metalcore, Balkan Folk, Schlager. This is well beyond your basic buckets like rock and pop.
It’s now in development, but still in the “early stages.” The latest trailer for the 2D fighter Marvel Tōkon: Fighting Souls reveals a squad featuring X-Men like Wolverine and Storm. The game looks ridiculous in the best possible way, and is launching on August 6th on both the PS5 and PC.
========================================
[SOURCE: https://en.wikipedia.org/wiki/Zanzibar_Archipelago] | [TOKENS: 235]
Contents Zanzibar Archipelago The Zanzibar Archipelago (Swahili: Funguvisiwa la Zanzibar; Arabic: أرخبيل زنجبار) is a group of islands off the coast of mainland Tanzania in the sea of Zanj. The archipelago is also known as the Spice Islands. There are three main islands with permanent human settlements: Zanzibar island, Pemba island, and Mafia island. There is also a fourth coral island, Latham island, which serves as an essential breeding ground for seabirds, as well as a number of smaller islets that surround these islands. Most of the archipelago belongs to the Zanzibar semi-autonomous zones of Tanzania, while the neighboring Mafia Archipelago and its associated islets are part of the Pwani Region on the Tanzanian mainland. Coordinates: 6°33′S 39°34′E (−6.550, 39.567 in decimal degrees).
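The two coordinate forms quoted above differ only by a base-60 conversion: minutes are sixtieths of a degree, and southern and western coordinates are negated. A minimal sketch (the function name and layout are illustrative only):

```python
# Convert the archipelago's coordinates from degrees and minutes
# to decimal degrees, reproducing the -6.550; 39.567 form given above.
def dms_to_decimal(degrees: int, minutes: float, direction: str) -> float:
    """Minutes are sixtieths of a degree; south and west are negative."""
    value = degrees + minutes / 60.0
    return -value if direction in ("S", "W") else value

print(round(dms_to_decimal(6, 33, "S"), 3))   # -6.55
print(round(dms_to_decimal(39, 34, "E"), 3))  # 39.567
```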
========================================
[SOURCE: https://en.wikipedia.org/wiki/Resettlement_policy_of_the_Neo-Assyrian_Empire] | [TOKENS: 1348]
Contents Resettlement policy of the Neo-Assyrian Empire For a period of three centuries beginning with the reign of Ashur-dan II (934–912 BCE), the Neo-Assyrian Empire maintained a policy of enforcing population transfer within the territories that it controlled and conquered. The majority of these displacements were carried out with careful planning by governing forces in order to strengthen the empire's rule and influence. For example, a population might have been moved around to spread agricultural techniques or develop new lands. In several cases, this policy was used as a punishment for political enemies—largely as a pragmatic alternative to mass execution. In other cases, elites of a conquered territory were meticulously selected and imported to the Neo-Assyrian Empire to enrich the state's centre and increase its store of knowledge. Professor Bustenay Oded of Haifa University estimated in 1979 that about 4.4 million people (± 900,000) were transferred by the ancient Assyrians over the course of some 250 years. Perhaps the best known of these population transfers occurred after the Kingdom of Israel fell to the Neo-Assyrian Empire in 720 BCE, resulting in the Ten Lost Tribes. Objectives Forced deportation and subsequent resettlement were used as tools of political domination and subjugation to maintain control over conquered people groups. Large population groups were systematically transferred between different regions within the empire to strengthen its political unity or put down possible rebellions. Imperial administrators planned the population transfers, taking into account political, economic, and cultural considerations. For example, people might have been moved to develop new lands. In 720 BCE, Sargon II resettled 6,300 Assyrians who had been involved in a power struggle against him, moving them from the heartland of the empire to the newly conquered city of Hamat in Syria. By ordering resettlement instead of execution, the king displayed mercy, removed political threats from the empire's center, and gained deportees who were useful in the reconstruction of the war-torn city. In other cases, Assyria also relocated people from newly conquered territories to its heartland. Typically, the elite section of the population was selected in a careful process. This group included highly skilled people: craftsmen, scholars and cultural elites, whose resettlement in the empire's heartland would bring knowledge and wealth. The empire's capitals (Nineveh, Kalhu and Assur) were well-populated with people from throughout the empire, who were instrumental in the building of Assyria's lasting monuments, including the famous Royal Library of Ashurbanipal. Logistics The Assyrian state supervised and planned each move to be as efficient as possible. The deportees were meant to arrive intact, ready to work and resettle in their new environment. Some surviving Assyrian art depicts deportees traveling with their family and possessions with beasts of burden in tow, while other pieces depict the displaced peoples marching while shackled or tied up, or while being pulled along with hooks placed in their cheeks or noses. Riding animals were used, as well as boxes and vessels to carry supplies needed for resettlement. State officials were directly involved; for example, a letter from an official to Tiglath-pileser III showed that the official provided the "food supplies, clothes, a waterskin, [...] shoes and oil" and was waiting for donkeys to be available before sending a convoy of deportees.
Bustenay Oded's 1979 estimate, an extrapolation from written documents, put the figure at 4.4 million people, plus or minus 900,000, relocated over a 250-year period, 85% of whom were resettled in the Assyrian heartland. Status of deportees Surviving documents do not speak directly to the social and legal status of deportees, but historians have attempted to infer it indirectly, especially from documents mentioning people with non-Assyrian names in the Assyrian heartland, presumably many of whom were deportees. The treatment of the deportees varied from case to case and is hard to generalize about: often those who were untrained were enslaved and put to work on massive building projects, while those who worked in various professions were placed to work according to their training. Those who worked in agriculture were assigned lands to work on, with a similar status to that of others within the empire. Many worked in high-skilled jobs, including as craftsmen, scholars, and merchants. The most educated and trained deportees were placed in royal service, and those willing to adopt the Assyrian identity and gods were able to join the Assyrian military. The state encouraged the mixing of deportees and native inhabitants where they lived in order to abolish their previous ethnic and religious identity in favor of a new shared "Assyrian" identity. The resettlement of Israelites conquered by the Neo-Assyrian Empire is mentioned in the Old Testament; it came to be called the "Assyrian captivity". The first deportation occurred in 734 BCE and is related in 2 Kings 15:29. The Assyrian King Tiglath-Pileser III defeated an alliance which included King Pekah of Israel, occupied Northern Israel and then ordered a large number of Israelites to relocate to Assyria proper. The second deportation started after 722 BCE and is related in 2 Kings 18:11–12. Pekah's successor King Hoshea rebelled against Assyria in 724 BCE. King Shalmaneser V (Tiglath-Pileser's successor) besieged Samaria, which was finally captured in 722 BCE by Shalmaneser's successor Sargon II. After the fall of Samaria, 27,280 people (according to Assyrian records) were deported to various places throughout the empire, mainly to Guzana in the Assyrian heartland, as well as to the cities of the Medes in the eastern part of the empire (modern-day Iran). The cities of the Medes were only conquered by Assyria in 716 BCE, six years after the fall of Samaria, suggesting that the relocation took years to plan before it was implemented. At the same time, people from other parts of the empire were resettled in the depopulated areas of the then Assyrian province of Samerina. Legacy As the successor of Assyrian hegemony, the Neo-Babylonian Empire continued the practice of moving conquered populations to other parts of the empire, with yet another well-known case being that of the Kingdom of Judah, whose populace was subject to the Babylonian captivity until the Babylonians themselves were conquered by the Persians in 539 BCE.
========================================
[SOURCE: https://en.wikipedia.org/wiki/Elon_Musk#cite_note-283] | [TOKENS: 10515]
Contents Elon Musk Elon Reeve Musk (/ˈiːlɒn/ EE-lon; born June 28, 1971) is a businessman and entrepreneur known for his leadership of Tesla, SpaceX, Twitter, and xAI. Musk has been the wealthiest person in the world since 2025; as of February 2026, Forbes estimates his net worth to be around US$852 billion. Born into a wealthy family in Pretoria, South Africa, Musk emigrated to Canada in 1989; he held Canadian citizenship from birth, as his mother was born there. He received bachelor's degrees in 1997 from the University of Pennsylvania before moving to California to pursue business ventures. In 1995, Musk co-founded the software company Zip2. Following its sale in 1999, he co-founded X.com, an online payment company that later merged to form PayPal, which was acquired by eBay in 2002. Musk also became an American citizen in 2002. In 2002, Musk founded the space technology company SpaceX, becoming its CEO and chief engineer; the company has since led innovations in reusable rockets and commercial spaceflight. Musk joined the automaker Tesla as an early investor in 2004 and became its CEO and product architect in 2008; it has since become a leader in electric vehicles. In 2015, he co-founded OpenAI to advance artificial intelligence (AI) research, but later left; growing discontent with the organization's direction and its leadership in the AI boom of the 2020s led him to establish xAI, which became a subsidiary of SpaceX in 2026. In 2022, he acquired the social network Twitter, implementing significant changes and rebranding it as X in 2023. His other businesses include the neurotechnology company Neuralink, which he co-founded in 2016, and the tunneling company the Boring Company, which he founded in 2017. In November 2025, Tesla shareholders approved a pay package for Musk worth $1 trillion, which he is to receive over 10 years if he meets specific goals. Musk was the largest donor in the 2024 U.S. presidential election, where he supported Donald Trump. After Trump was inaugurated as president in early 2025, Musk served as Senior Advisor to the President and as the de facto head of the Department of Government Efficiency (DOGE). After a public feud with Trump, Musk left the Trump administration and returned to managing his companies. Musk is a supporter of global far-right figures, causes, and political parties. His political activities, views, and statements have made him a polarizing figure. Musk has been criticized for COVID-19 misinformation, promoting conspiracy theories, and affirming antisemitic, racist, and transphobic comments. His acquisition of Twitter was controversial due to a subsequent increase in hate speech and the spread of misinformation on the service, following his pledge to decrease censorship. His role in the second Trump administration attracted public backlash, particularly in response to DOGE. The emails he sent to Jeffrey Epstein are included in the Epstein files, which were published in 2025 and 2026 and became a topic of worldwide debate. Early life Elon Reeve Musk was born on June 28, 1971, in Pretoria, South Africa's administrative capital. He is of British and Pennsylvania Dutch ancestry. His mother, Maye (née Haldeman), is a model and dietitian born in Saskatchewan, Canada, and raised in South Africa. Musk therefore holds both South African and Canadian citizenship from birth.
His father, Errol Musk, is a South African electromechanical engineer, pilot, sailor, consultant, emerald dealer, and property developer, who partly owned a rental lodge at Timbavati Private Nature Reserve. His maternal grandfather, Joshua N. Haldeman, who died in a plane crash when Elon was a toddler, was an American-born Canadian chiropractor, aviator and political activist in the technocracy movement who moved to South Africa in 1950. Elon has a younger brother, Kimbal, a younger sister, Tosca, and four paternal half-siblings. Musk was baptized as a child in the Anglican Church of Southern Africa. Despite both Elon and Errol previously stating that Errol was a part owner of a Zambian emerald mine, in 2023, Errol recounted that the deal he made was to receive "a portion of the emeralds produced at three small mines". Errol was elected to the Pretoria City Council as a representative of the anti-apartheid Progressive Party and has said that his children shared their father's dislike of apartheid. After his parents divorced in 1979, Elon, aged around 9, chose to live with his father because Errol Musk had an Encyclopædia Britannica and a computer. Elon later regretted his decision and became estranged from his father. Elon has recounted trips to a wilderness school that he described as a "paramilitary Lord of the Flies" where "bullying was a virtue" and children were encouraged to fight over rations. In one incident, after an altercation with a fellow pupil, Elon was thrown down concrete steps and beaten severely, leading to him being hospitalized for his injuries. Elon described his father berating him after he was discharged from the hospital. Errol denied berating Elon and claimed, "The [other] boy had just lost his father to suicide, and Elon had called him stupid. Elon had a tendency to call people stupid. How could I possibly blame that child?" Elon was an enthusiastic reader of books, and had attributed his success in part to having read The Lord of the Rings, the Foundation series, and The Hitchhiker's Guide to the Galaxy. At age ten, he developed an interest in computing and video games, teaching himself how to program from the VIC-20 user manual. At age twelve, Elon sold his BASIC-based game Blastar to PC and Office Technology magazine for approximately $500 (equivalent to $1,600 in 2025). Musk attended Waterkloof House Preparatory School, Bryanston High School, and then Pretoria Boys High School, where he graduated. Musk was a decent but unexceptional student, earning a 61/100 in Afrikaans and a B on his senior math certification. Musk applied for a Canadian passport through his Canadian-born mother to avoid South Africa's mandatory military service, which would have forced him to participate in the apartheid regime, as well as to ease his path to immigration to the United States. While waiting for his application to be processed, he attended the University of Pretoria for five months. Musk arrived in Canada in June 1989, connected with a second cousin in Saskatchewan, and worked odd jobs, including at a farm and a lumber mill. In 1990, he entered Queen's University in Kingston, Ontario. Two years later, he transferred to the University of Pennsylvania, where he studied until 1995. Although Musk has said that he earned his degrees in 1995, the University of Pennsylvania did not award them until 1997 – a Bachelor of Arts in physics and a Bachelor of Science in economics from the university's Wharton School. 
He reportedly hosted large, ticketed house parties to help pay for tuition, and wrote a business plan for an electronic book-scanning service similar to Google Books. In 1994, Musk held two internships in Silicon Valley: one at energy storage startup Pinnacle Research Institute, which investigated electrolytic supercapacitors for energy storage, and another at Palo Alto–based startup Rocket Science Games. In 1995, he was accepted to a graduate program in materials science at Stanford University, but did not enroll. Musk decided to join the Internet boom of the 1990s, applying for a job at Netscape, to which he reportedly never received a response. The Washington Post reported that Musk lacked legal authorization to remain and work in the United States after failing to enroll at Stanford. In response, Musk said he was allowed to work at that time and that his student visa transitioned to an H1-B. According to numerous former business associates and shareholders, Musk said he was on a student visa at the time. Business career In 1995, Musk, his brother Kimbal, and Greg Kouri founded the web software company Zip2 with funding from a group of angel investors. They housed the venture at a small rented office in Palo Alto. Replying to Rolling Stone, Musk denounced the notion that they started their company with funds borrowed from Errol Musk, but in a tweet, he recognized that his father contributed 10% of a later funding round. The company developed and marketed an Internet city guide for the newspaper publishing industry, with maps, directions, and yellow pages. According to Musk, "The website was up during the day and I was coding it at night, seven days a week, all the time." To impress investors, Musk built a large plastic structure around a standard computer to create the impression that Zip2 was powered by a small supercomputer. The Musk brothers obtained contracts with The New York Times and the Chicago Tribune, and persuaded the board of directors to abandon plans for a merger with CitySearch. Musk's attempts to become CEO were thwarted by the board. Compaq acquired Zip2 for $307 million in cash in February 1999 (equivalent to $590,000,000 in 2025), and Musk received $22 million (equivalent to $43,000,000 in 2025) for his 7-percent share. In 1999, Musk co-founded X.com, an online financial services and e-mail payment company. The startup was one of the first federally insured online banks, and, in its initial months of operation, over 200,000 customers joined the service. The company's investors regarded Musk as inexperienced and replaced him with Intuit CEO Bill Harris by the end of the year. The following year, X.com merged with online bank Confinity to avoid competition. Founded by Max Levchin and Peter Thiel, Confinity had its own money-transfer service, PayPal, which was more popular than X.com's service. Within the merged company, Musk returned as CEO. Musk's preference for Microsoft software over Unix created a rift in the company and caused Thiel to resign. Due to resulting technological issues and lack of a cohesive business model, the board ousted Musk and replaced him with Thiel in 2000.[b] Under Thiel, the company focused on the PayPal service and was renamed PayPal in 2001. In 2002, PayPal was acquired by eBay for $1.5 billion (equivalent to $2,700,000,000 in 2025) in stock, of which Musk—the largest shareholder with 11.72% of shares—received $175.8 million (equivalent to $320,000,000 in 2025). 
In 2017, Musk purchased the domain X.com from PayPal for an undisclosed amount, stating that it had sentimental value. In 2001, Musk became involved with the nonprofit Mars Society and discussed funding plans to place a growth chamber for plants on Mars. Seeking a way to launch the greenhouse payloads into space, Musk made two unsuccessful trips to Moscow to purchase intercontinental ballistic missiles (ICBMs) from Russian companies NPO Lavochkin and Kosmotras. Musk instead decided to start a company to build affordable rockets. With $100 million of his early fortune (equivalent to $180,000,000 in 2025), Musk founded SpaceX in May 2002 and became the company's CEO and chief engineer. SpaceX attempted its first launch of the Falcon 1 rocket in 2006. Although the rocket failed to reach Earth orbit, the company was awarded a Commercial Orbital Transportation Services program contract from NASA, then led by Mike Griffin. After two more failed attempts that nearly caused Musk to go bankrupt, SpaceX succeeded in launching the Falcon 1 into orbit in 2008. Later that year, SpaceX received a $1.6 billion NASA contract (equivalent to $2,400,000,000 in 2025) for Falcon 9-launched Dragon spacecraft flights to the International Space Station (ISS), replacing the Space Shuttle after its 2011 retirement. In 2012, the Dragon vehicle docked with the ISS, a first for a commercial spacecraft. Working towards its goal of reusable rockets, in 2015 SpaceX successfully landed the first stage of a Falcon 9 on a land platform. Later landings were achieved on autonomous spaceport drone ships, ocean-based recovery platforms. In 2018, SpaceX launched the Falcon Heavy; the inaugural mission carried Musk's personal Tesla Roadster as a dummy payload. Since 2019, SpaceX has been developing Starship, a reusable, super heavy-lift launch vehicle intended to replace the Falcon 9 and Falcon Heavy. In 2020, SpaceX launched its first crewed flight, Demo-2, becoming the first private company to place astronauts into orbit and dock a crewed spacecraft with the ISS. In 2024, NASA awarded SpaceX an $843 million (equivalent to $865,000,000 in 2025) contract to build a spacecraft that NASA will use to deorbit the ISS at the end of its lifespan. In 2015, SpaceX began development of the Starlink constellation of low Earth orbit satellites to provide satellite Internet access. After the launch of prototype satellites in 2018, the first large batch of satellites was deployed in May 2019. As of May 2025, over 7,600 Starlink satellites are operational, comprising 65% of all operational Earth satellites. The total cost of the decade-long project to design, build, and deploy the constellation was estimated by SpaceX in 2020 to be $10 billion (equivalent to $12,000,000,000 in 2025).[c] During the Russian invasion of Ukraine, Musk provided free Starlink service to Ukraine, permitting Internet access and communication at a yearly cost to SpaceX of $400 million (equivalent to $440,000,000 in 2025). However, Musk refused to block Russian state media on Starlink. In 2023, Musk denied Ukraine's request to activate Starlink over Crimea to aid an attack against the Russian navy, citing fears of a nuclear response. Tesla, Inc., originally Tesla Motors, was incorporated in July 2003 by Martin Eberhard and Marc Tarpenning. Both men played active roles in the company's early development prior to Musk's involvement.
Musk led the Series A round of investment in February 2004; he invested $6.35 million (equivalent to $11,000,000 in 2025), became the majority shareholder, and joined Tesla's board of directors as chairman. Musk took an active role within the company and oversaw Roadster product design, but was not deeply involved in day-to-day business operations. Following a series of escalating conflicts in 2007 and the 2008 financial crisis, Eberhard was ousted from the firm. Musk assumed leadership of the company as CEO and product architect in 2008. A 2009 lawsuit settlement with Eberhard designated Musk as a Tesla co-founder, along with Tarpenning and two others. Tesla began delivery of the Roadster, an electric sports car, in 2008. With sales of about 2,500 vehicles, it was the first mass-produced all-electric car to use lithium-ion battery cells. Under Musk, Tesla has since launched several commercially successful electric vehicles, including the four-door sedan Model S (2012), the crossover Model X (2015), the mass-market sedan Model 3 (2017), the crossover Model Y (2020), and the pickup truck Cybertruck (2023). In November 2018, Musk resigned as chairman of the board as part of the settlement of a lawsuit from the U.S. Securities and Exchange Commission (SEC) over him tweeting that funding had been "secured" for potentially taking Tesla private. The company has also constructed multiple lithium-ion battery and electric vehicle factories, called Gigafactories. Since its initial public offering in 2010, Tesla stock has risen significantly; it became the most valuable carmaker in summer 2020, and it entered the S&P 500 later that year. In October 2021, it reached a market capitalization of $1 trillion (equivalent to $1,200,000,000,000 in 2025), the sixth company in U.S. history to do so. Musk provided the initial concept and financial capital for SolarCity, which his cousins Lyndon and Peter Rive founded in 2006. By 2013, SolarCity was the second-largest provider of solar power systems in the United States. In 2014, Musk promoted the idea of SolarCity building an advanced production facility in Buffalo, New York, triple the size of the largest solar plant in the United States. Construction of the factory started in 2014 and was completed in 2017. It operated as a joint venture with Panasonic until early 2020. Tesla acquired SolarCity for $2 billion in 2016 (equivalent to $2,700,000,000 in 2025) and merged it with its battery unit to create Tesla Energy. The deal's announcement resulted in a more than 10% drop in Tesla's stock price; at the time, SolarCity was facing liquidity issues. Multiple shareholder groups filed a lawsuit against Musk and Tesla's directors, alleging that the purchase of SolarCity was done solely to benefit Musk and came at the expense of Tesla and its shareholders. Tesla directors settled the lawsuit in January 2020, leaving Musk the sole remaining defendant. Two years later, the court ruled in Musk's favor. In 2016, Musk co-founded Neuralink, a neurotechnology startup, with an investment of $100 million. Neuralink aims to integrate the human brain with artificial intelligence (AI) by creating devices that are embedded in the brain. Such technology could enhance memory or allow the devices to communicate with software. The company also hopes to develop devices to treat neurological conditions such as spinal cord injuries. In 2022, Neuralink announced that clinical trials would begin by the end of the year. In September 2023, the Food and Drug Administration approved Neuralink to initiate six-year human trials.
Neuralink has conducted animal testing on macaques at the University of California, Davis. In 2021, the company released a video in which a macaque played the video game Pong via a Neuralink implant. The company's animal trials, which have caused the deaths of some monkeys, have led to claims of animal cruelty. The Physicians Committee for Responsible Medicine has alleged that Neuralink violated the Animal Welfare Act. Employees have complained that pressure from Musk to accelerate development has led to botched experiments and unnecessary animal deaths. In 2022, a federal probe was launched into possible animal welfare violations by Neuralink. In 2017, Musk founded the Boring Company to construct tunnels; he also revealed plans for specialized, underground, high-occupancy vehicles that could travel up to 150 miles per hour (240 km/h) and thus circumvent above-ground traffic in major cities. Early in 2017, the company began discussions with regulatory bodies and initiated construction of a 30-foot (9.1 m) wide, 50-foot (15 m) long, and 15-foot (4.6 m) deep "test trench" on the premises of SpaceX's offices, as that required no permits. The Los Angeles tunnel, less than two miles (3.2 km) in length, debuted to journalists in 2018. It used Tesla Model Xs and was reported to be a rough ride while traveling at suboptimal speeds. Two tunnel projects announced in 2018, in Chicago and West Los Angeles, have been canceled. A tunnel beneath the Las Vegas Convention Center was completed in early 2021. Local officials have approved further expansions of the tunnel system. In early 2017, Musk expressed interest in buying Twitter and had questioned the platform's commitment to freedom of speech. By 2022, Musk had acquired a 9.2% stake in the company, making him the largest shareholder.[d] Musk later agreed to a deal that would appoint him to Twitter's board of directors and prohibit him from acquiring more than 14.9% of the company. Days later, Musk made a $43 billion offer to buy Twitter. By the end of April, Musk had successfully concluded his bid for approximately $44 billion, including approximately $12.5 billion in loans and $21 billion in equity financing. After attempting to back out of the deal, Musk completed the purchase of the company on October 27, 2022. Immediately after the acquisition, Musk fired several top Twitter executives, including CEO Parag Agrawal, and became CEO himself. Under Musk, Twitter instituted monthly subscriptions for a "blue check" and laid off a significant portion of the company's staff. Musk reduced content moderation, and hate speech increased on the platform after his takeover. In late 2022, Musk released internal documents relating to Twitter's moderation of the Hunter Biden laptop controversy in the lead-up to the 2020 presidential election. Musk promised to step down as CEO after losing a Twitter poll on the question, and five months later he did so, transitioning to the roles of executive chairman and chief technology officer (CTO). Even after Musk stepped down as CEO, X has continued to struggle with challenges such as viral misinformation, hate speech, and antisemitism controversies. Musk has been accused of trying to silence critics, such as Twitch streamer Asmongold, who criticized him during one of his streams, by removing their accounts' blue checkmarks, which hinders visibility and is considered a form of shadow banning, or by suspending their accounts without justification.
Other activities In August 2013, Musk announced plans for a version of a vactrain and assigned engineers from SpaceX and Tesla to design a transport system between Greater Los Angeles and the San Francisco Bay Area, at an estimated cost of $6 billion. Later that year, Musk unveiled the concept, dubbed the Hyperloop, intended to make travel cheaper than any other mode of transport for such long distances. In December 2015, Musk co-founded OpenAI, a not-for-profit AI research company aiming to develop artificial general intelligence intended to be safe and beneficial to humanity. Musk pledged $1 billion of funding to the company, and initially gave $50 million. In 2018, Musk left the OpenAI board. Since 2018, OpenAI has made significant advances in machine learning. In July 2023, Musk launched the artificial intelligence company xAI, which aims to develop a generative AI program that competes with existing offerings such as OpenAI's ChatGPT. Musk obtained funding from investors in SpaceX and Tesla, and xAI hired engineers from Google and OpenAI. Musk uses a private jet owned by Falcon Landing LLC, a SpaceX-linked company, and acquired a second jet in August 2020. His heavy use of the jets and the consequent fossil fuel usage have received criticism. Musk's flight usage is tracked on social media through ElonJet. In December 2022, Musk banned the ElonJet account on Twitter and temporarily banned the accounts of journalists who posted stories regarding the incident, including Donie O'Sullivan, Keith Olbermann, and journalists from The New York Times, The Washington Post, CNN, and The Intercept. In October 2025, Musk's company xAI launched Grokipedia, an AI-generated online encyclopedia that he promoted as an alternative to Wikipedia. Articles on Grokipedia are generated and reviewed by xAI's Grok chatbot. Media coverage and academic analysis described Grokipedia as frequently reusing Wikipedia content but framing contested political and social topics in line with Musk's own views and right-wing narratives. A study by Cornell University researchers and NBC News stated that Grokipedia cites sources that are blacklisted or considered "generally unreliable" on Wikipedia, for example the conspiracy site Infowars and the neo-Nazi forum Stormfront. Wired, The Guardian and Time criticized Grokipedia for factual errors and for presenting Musk himself in unusually positive terms while downplaying controversies. Politics Musk is an outlier among business leaders, who typically avoid partisan political advocacy. Musk was a registered independent voter when he lived in California. Historically, he has donated to both Democrats and Republicans, many of whom represent states in which he has a vested interest. Since 2022, his political contributions have mostly supported Republicans, with his first vote for a Republican going to Mayra Flores in the 2022 Texas's 34th congressional district special election. In 2024, he started supporting international far-right political parties, activists, and causes, and has shared misinformation and numerous conspiracy theories. Since 2024, his views have been generally described as right-wing. Musk supported Barack Obama in 2008 and 2012, Hillary Clinton in 2016, and Joe Biden in 2020, before backing Donald Trump in 2024. In the 2020 Democratic Party presidential primaries, Musk endorsed candidate Andrew Yang and expressed support for Yang's proposed universal basic income; he also endorsed Kanye West's 2020 presidential campaign.
In 2021, Musk publicly expressed opposition to the Build Back Better Act, a $3.5 trillion legislative package endorsed by Joe Biden that ultimately failed to pass due to unanimous opposition from congressional Republicans and several Democrats. In 2022, he gave over $50 million to Citizens for Sanity, a conservative political action committee. In 2023, he supported Republican Ron DeSantis for the 2024 U.S. presidential election, giving $10 million to his campaign and hosting DeSantis's campaign announcement on Twitter Spaces. From June 2023 to January 2024, Musk hosted a bipartisan set of X Spaces with Republican and Democratic candidates, including Robert F. Kennedy Jr., Vivek Ramaswamy, and Dean Phillips. In October 2025, former vice president Kamala Harris commented that it had been a mistake on the Democratic side not to invite Musk to a White House electric vehicle event organized in August 2021, which featured executives from General Motors, Ford and Stellantis, despite Tesla being "the major American manufacturer of extraordinary innovation in this space." Fortune remarked that the snub was a nod to the United Auto Workers and organized labor. Harris said presidents should put aside political loyalties when it came to recognizing innovation, and suggested that the non-invitation affected Musk's perspective. Fortune noted that, at the time, Musk said, "Yeah, seems odd that Tesla wasn't invited." A month later, he criticized Biden as "not the friendliest administration." Jacob Silverman, author of the book Gilded Rage: Elon Musk and the Radicalization of Silicon Valley, said that the tech industry represented by Musk, Thiel, Andreessen and other capitalists actually flourished under Biden, but that the tech leaders chose Trump for their common ground on cultural issues. By early 2024, Musk had become a vocal and financial supporter of Donald Trump. In July 2024, minutes after the attempted assassination of Donald Trump, Musk endorsed him for president, saying: "I fully endorse President Trump and hope for his rapid recovery." During the presidential campaign, Musk joined Trump on stage at a campaign rally and promoted conspiracy theories and falsehoods about Democrats, election fraud and immigration in support of Trump. Musk was the largest individual donor of the 2024 election. In 2025, Musk contributed $19 million to the Wisconsin Supreme Court race, hoping to influence the state's future redistricting efforts and its regulations governing car manufacturers and dealers. In 2023, Musk said he shunned the World Economic Forum because it was boring; the organization commented that it had not invited him since 2015. He has, however, participated in Dialog, an event dubbed "Tech Bilderberg" and organized by Peter Thiel and Auren Hoffman. Musk's international political actions and comments have come under increasing scrutiny and criticism, especially from the governments and leaders of France, Germany, Norway, Spain and the United Kingdom, particularly due to his position in the U.S. government as well as his ownership of X. An NBC News analysis found he had boosted far-right political movements seeking to cut immigration and curtail regulation of business in at least 18 countries on six continents since 2023.
During his speech after the second inauguration of Donald Trump, Musk twice made a gesture interpreted by many as a Nazi or fascist Roman salute.[e] He thumped his right hand over his heart, fingers spread wide, and then extended his right arm out, emphatically, at an upward angle, palm down and fingers together. He then repeated the gesture to the crowd behind him. As he finished the gestures, he said to the crowd, "My heart goes out to you. It is thanks to you that the future of civilization is assured." It was widely condemned as an intentional Nazi salute in Germany, where making such gestures is illegal. The Anti-Defamation League said it was not a Nazi salute, but other Jewish organizations disagreed and condemned the salute. American public opinion was divided on partisan lines as to whether it was a fascist salute. Musk dismissed the accusations of Nazi sympathies, deriding them as "dirty tricks" and a "tired" attack. Neo-Nazi and white supremacist groups celebrated it as a Nazi salute. Multiple European political parties demanded that Musk be banned from entering their countries. The concept of the Department of Government Efficiency (DOGE) emerged in a discussion between Musk and Donald Trump, and in August 2024, Trump committed to giving Musk an advisory role, with Musk accepting the offer. In November and December 2024, Musk suggested that the organization could help to cut the U.S. federal budget, consolidate the number of federal agencies, and eliminate the Consumer Financial Protection Bureau, and that its final stage would be "deleting itself". In January 2025, the organization was created by executive order, and Musk was designated a "special government employee". Musk led the organization and was a senior advisor to the president, although his official role was not clear. In a sworn statement during a lawsuit, the director of the White House Office of Administration stated that Musk "is not an employee of the U.S. DOGE Service or U.S. DOGE Service Temporary Organization", "is not the U.S. DOGE Service administrator", and has "no actual or formal authority to make government decisions himself". Trump said two days later that he had put Musk in charge of DOGE. A federal judge has ruled that Musk acted as the de facto leader of DOGE. Musk's role in the second Trump administration, particularly his work with DOGE, attracted public backlash. He was criticized for his treatment of federal government employees, including his influence over the mass layoffs of the federal workforce. He prioritized secrecy within the organization and accused others of violating privacy laws. A Senate report alleged that Musk could avoid up to $2 billion in legal liability as a result of DOGE's actions. In May 2025, Bill Gates accused Musk of "killing the world's poorest children" through his cuts to USAID, which modeling by Boston University estimated had resulted in 300,000 deaths by that time, most of them of children. By November 2025, the estimated death toll had increased to 400,000 children and 200,000 adults. Musk announced on May 28, 2025, that he would depart from the Trump administration as planned when his 130-day term as a special government employee expired, with a White House official confirming that Musk's offboarding was already underway. His departure was officially confirmed during a joint Oval Office press conference with Trump on May 30, 2025.
After leaving office, Musk criticized the Trump administration's Big Beautiful Bill, calling it a "disgusting abomination" due to its provisions increasing the deficit. A feud began between Musk and Trump, its most notable event being Musk alleging on X (formerly Twitter) on June 5, 2025, that Trump had ties to the sex offender Jeffrey Epstein, posting: "@realDonaldTrump is in the Epstein files. That is the real reason they have not been made public." Trump responded on Truth Social, stating that Musk went "CRAZY" after the "EV Mandate" was purportedly taken away, and threatened to cut Musk's government contracts. Musk then called for a third Trump impeachment. The next day, Trump stated that he did not wish to reconcile with Musk, and added that Musk would face "very serious consequences" if he funded Democratic candidates. On June 11, Musk publicly apologized for the tweets against Trump, saying they "went too far". Views Rejecting the conservative label, Musk has described himself as a political moderate, even as his views have become more right-wing over time. His views have been characterized as libertarian and far-right, and after his involvement in European politics, they have received criticism from world leaders such as Emmanuel Macron and Olaf Scholz. Within the context of American politics, Musk supported Democratic candidates up until 2022, at which point he voted for a Republican for the first time. He has stated support for universal basic income, gun rights, freedom of speech, a tax on carbon emissions, and H-1B visas. Musk has expressed concern about issues such as artificial intelligence and climate change, and has been a critic of wealth taxes, short-selling, and government subsidies. An immigrant himself, Musk has been accused of being anti-immigration, and regularly blames immigration policies for illegal immigration. He is also a pronatalist who believes population decline is the biggest threat to civilization, and he identifies as a cultural Christian. Musk has long been an advocate for space colonization, especially the colonization of Mars, repeatedly pushing for humanity to colonize Mars in order to become an interplanetary species and lower the risk of human extinction. Musk has promoted conspiracy theories and made controversial statements that have led to accusations of racism, sexism, antisemitism, transphobia, disseminating disinformation, and support of white pride. While describing himself as a "pro-Semite", his comments regarding George Soros and Jewish communities have been condemned by the Anti-Defamation League and the Biden White House. Musk was criticized during the COVID-19 pandemic for making unfounded epidemiological claims, defying COVID-19 lockdown restrictions, and supporting the Canada convoy protest against vaccine mandates. He has amplified false claims of white genocide in South Africa. Musk has been critical of Israel's actions in the Gaza Strip during the Gaza war, has praised China's economic and climate goals, has suggested that Taiwan and China should resolve cross-strait relations, and has been described as having a close relationship with the Chinese government. In Europe, Musk expressed support for Ukraine in 2022 during the Russian invasion, recommended referendums and peace deals concerning the annexed Russian-occupied territories, and supported the far-right Alternative for Germany political party in 2024.
Regarding British politics, Musk blamed the 2024 UK riots on mass migration and open borders, criticized Prime Minister Keir Starmer for what he described as a "two-tier" policing system, and was subsequently attacked as being responsible for spreading misinformation and amplifying the far-right. He has also voiced his support for far-right activist Tommy Robinson and pledged electoral support for Reform UK. In February 2026, Musk described Spanish Prime Minister Pedro Sánchez as a "tyrant" following Sánchez's proposal to prohibit minors under the age of 16 from accessing social media platforms. Legal affairs In 2018, Musk was sued by the SEC for a tweet stating that funding had been secured for potentially taking Tesla private.[f] The securities fraud lawsuit characterized the tweet as false, misleading, and damaging to investors, and sought to bar Musk from serving as CEO of publicly traded companies. Two days later, Musk settled with the SEC, without admitting or denying the SEC's allegations. As a result, Musk and Tesla were fined $20 million each, and Musk was forced to step down for three years as Tesla chairman but was able to remain as CEO. Shareholders filed a lawsuit over the tweet, and in February 2023, a jury found Musk and Tesla not liable. Musk has stated in interviews that he does not regret posting the tweet that triggered the SEC investigation. In 2019, Musk stated in a tweet that Tesla would build half a million cars that year. The SEC reacted by asking a court to hold him in contempt for violating the terms of the 2018 settlement agreement. A joint agreement between Musk and the SEC eventually clarified the previous agreement's details, including a list of topics about which Musk needed preclearance before tweeting. In 2020, a judge blocked a lawsuit that claimed a tweet by Musk regarding Tesla's stock price ("too high imo") violated the agreement. Records released under the Freedom of Information Act (FOIA) showed that the SEC concluded Musk had subsequently violated the agreement twice by tweeting regarding "Tesla's solar roof production volumes and its stock price". In October 2023, the SEC sued Musk over his refusal to testify a third time in an investigation into whether he violated federal law by purchasing Twitter stock in 2022. In February 2024, Judge Laurel Beeler ruled that Musk must testify again. In January 2025, the SEC filed a lawsuit against Musk for securities violations related to his purchase of Twitter. In January 2024, Delaware judge Kathaleen McCormick ruled in a 2018 lawsuit that Musk's $55 billion pay package from Tesla be rescinded. McCormick called the compensation granted by the company's board "an unfathomable sum" that was unfair to shareholders. The Delaware Supreme Court overturned McCormick's decision in December 2025, restoring Musk's compensation package and awarding $1 in nominal damages. Personal life Musk became a U.S. citizen in 2002. From the early 2000s until late 2020, Musk resided in California, where both Tesla and SpaceX were founded. He then relocated to Cameron County, Texas, saying that California had become "complacent" about its economic success. While hosting Saturday Night Live in 2021, Musk stated that he has Asperger syndrome (an outdated term for autism spectrum disorder). When asked about his experience growing up with Asperger's syndrome at the TED 2022 conference in Vancouver, Musk stated that "the social cues were not intuitive ... I would just tend to take things very literally ...
but then that turned out to be wrong — [people were not] simply saying exactly what they mean, there's all sorts of other things that are meant, and [it] took me a while to figure that out." Musk suffers from back pain and has undergone several spine-related surgeries, including a disc replacement. In 2000, he contracted a severe case of malaria while on vacation in South Africa. Musk has stated he uses doctor-prescribed ketamine for occasional depression, dosing "a small amount once every other week or something like that"; since January 2024, some media outlets have reported that he takes ketamine, marijuana, LSD, ecstasy, mushrooms, cocaine and other drugs. Musk at first refused to comment on his alleged drug use, before responding that he had not tested positive for drugs and that if drugs somehow improved his productivity, "I would definitely take them!" An investigation by The New York Times reported Musk's overuse of ketamine and numerous other drugs, as well as strained family relationships and concerns from close associates troubled by his public behavior as he became more involved in political activities and government work. According to The Washington Post, President Trump described Musk as "a big-time drug addict". Through his own label, Emo G Records, Musk released a rap track, "RIP Harambe", on SoundCloud in March 2019. The following year, he released an EDM track, "Don't Doubt Ur Vibe", featuring his own lyrics and vocals. Musk plays video games, which he has said have a "restoring effect" that helps his "mental calibration". Some games he plays include Quake, Diablo IV, Elden Ring, and Polytopia. Musk once claimed to be one of the world's top video game players but has since admitted to "account boosting", or cheating by hiring outside services to achieve top player rankings. Musk has justified the boosting by claiming that all top-ranked accounts do it, so he has to as well to remain competitive. In 2024 and 2025, Musk criticized the video game Assassin's Creed Shadows and its creator Ubisoft for "woke" content. Musk posted to X that "DEI kills art" and singled out the inclusion of the historical figure Yasuke in the game as offensive; he also called the game "terrible". Ubisoft responded by saying that Musk's comments were "just feeding hatred" and that it was focused on producing a game, not pushing politics. Musk has fathered at least 14 children, one of whom died as an infant. The Wall Street Journal reported in 2025 that sources close to Musk suggest that the "true number of Musk's children is much higher than publicly known". He had six children with his first wife, Canadian author Justine Wilson, whom he met while attending Queen's University in Ontario, Canada; they married in 2000. In 2002, their first child, Nevada Musk, died of sudden infant death syndrome at the age of 10 weeks. After Nevada's death, the couple used in vitro fertilization (IVF) to continue their family; they had twins in 2004, followed by triplets in 2006. The couple divorced in 2008 and have shared custody of their children. The elder twin he had with Wilson came out as a trans woman and, in 2022, officially changed her name to Vivian Jenna Wilson, adopting her mother's surname because she no longer wished to be associated with Musk. Musk began dating English actress Talulah Riley in 2008. They married two years later at Dornoch Cathedral in Scotland. In 2012, the couple divorced, then remarried the following year.
After briefly filing for divorce in 2014, Musk finalized a second divorce from Riley in 2016. Musk then dated the American actress Amber Heard for several months in 2017; he had reportedly been "pursuing" her since 2012. In 2018, Musk and Canadian musician Grimes confirmed they were dating. Grimes and Musk have three children, born in 2020, 2021, and 2022.[g] Musk and Grimes originally gave their eldest child the name "X Æ A-12", which would have violated California regulations as it contained characters that are not in the modern English alphabet; the names registered on the birth certificate are "X" as a first name, "Æ A-Xii" as a middle name, and "Musk" as a last name. They received criticism for choosing a name perceived to be impractical and difficult to pronounce; Musk has said the intended pronunciation is "X Ash A Twelve". Their second child was born via surrogacy. Despite the surrogate pregnancy, Musk confirmed reports in September 2021 that the couple were "semi-separated"; in an interview with Time in December 2021, he said he was single. In October 2023, Grimes sued Musk over parental rights and custody of X Æ A-Xii. Musk has taken X Æ A-Xii to multiple official events in Washington, D.C. during Trump's second term in office. In July 2022, The Wall Street Journal reported that Musk allegedly had an affair in 2021 with Nicole Shanahan, the wife of Google co-founder Sergey Brin, leading to the couple's divorce the following year. Musk denied the report. Musk also had a relationship with Australian actress Natasha Bassett, who has been described as "an occasional girlfriend". In October 2024, The New York Times reported that Musk bought a Texas compound for his children and their mothers, though Musk denied having done so. Musk also has four children with Shivon Zilis, director of operations and special projects at Neuralink: twins born via IVF in 2021, a child born in 2024 via surrogacy, and a child born in 2025.[h] On February 14, 2025, Ashley St. Clair, an influencer and author, posted on X claiming to have given birth to Musk's son Romulus five months earlier, which media outlets reported as Musk's supposed thirteenth child.[i] On February 22, 2025, it was reported that St. Clair had filed for sole custody of her five-month-old son and for Musk to be recognized as the child's father. On March 31, 2025, Musk wrote that, while he was unsure whether he was the father of St. Clair's child, he had paid St. Clair $2.5 million and would continue paying her $500,000 per year.[j] The Wall Street Journal later reported that $1 million of these payments to St. Clair was structured as a loan. In 2014, Musk and Ghislaine Maxwell appeared together in a photograph taken at an Academy Awards after-party, which Musk later described as a "photobomb". The January 2026 Epstein files contain emails between Musk and Epstein from 2012 to 2013, after Epstein's first conviction. Emails released on January 30, 2026, indicated that Epstein invited Musk to visit his private island on multiple occasions. The correspondence showed that while Epstein repeatedly encouraged Musk to attend, Musk did not visit the island. In one instance, Musk discussed the possibility of attending a party with his then-wife Talulah Riley and asked which day would be the "wildest party"; according to the emails, the visit did not take place after Epstein later canceled the plans.[k] On Christmas Day 2012, Musk emailed Epstein asking, "Do you have any parties planned?
I’ve been working to the edge of sanity this year and so, once my kids head home after Christmas, I really want to hit the party scene in St Barts or elsewhere and let loose. The invitation is much appreciated, but a peaceful island experience is the opposite of what I’m looking for". Epstein replied that the "ratio on my island" might make Musk's wife uncomfortable, to which Musk responded, "Ratio is not a problem for Talulah". On September 11, 2013, Epstein sent an email asking Musk if he had any plans to come to New York for the opening of the United Nations General Assembly, where many "interesting people" would be coming to his house, to which Musk responded that "Flying to NY to see UN diplomats do nothing would be an unwise use of time". Epstein responded by stating, "Do you think i am retarded. Just kidding, there is no one over 25 and all very cute." Musk has denied any close relationship with Epstein and described him as a "creep" who attempted to ingratiate himself with influential people. When Musk was asked in 2019 if he had introduced Epstein to Mark Zuckerberg, Musk responded: "I don’t recall introducing Epstein to anyone, as I don’t know the guy well enough to do so." The released emails nonetheless showed cordial exchanges on a range of topics, including Musk's inquiry about parties on the island. The correspondence also indicated that Musk suggested hosting Epstein at SpaceX, while Epstein separately discussed plans to tour SpaceX and bring "the girls", though there is no evidence that such a visit occurred. Musk has described the release of the files as a "distraction", later accusing the second Trump administration of suppressing them to protect powerful individuals, including Trump himself.[l] Wealth Elon Musk is the wealthiest person in the world, with an estimated net worth of US$690 billion as of January 2026, according to the Bloomberg Billionaires Index, and $852 billion according to Forbes, primarily from his ownership stakes in SpaceX and Tesla. Musk was first listed on the Forbes Billionaires List in 2012; by November 2020, around 75% of his wealth was derived from Tesla stock, although he describes himself as "cash poor". According to Forbes, he became the first person in the world to achieve a net worth of $300 billion in 2021, $400 billion in December 2024, $500 billion in October 2025, $600 billion in mid-December 2025, $700 billion later that month, and $800 billion in February 2026. In November 2025, a Tesla pay package for Musk worth potentially $1 trillion was approved, which he is to receive over 10 years if he meets specific goals. Public image Although his ventures have been highly influential within their separate industries starting in the 2000s, Musk only became a public figure in the early 2010s. He has been described as an eccentric who makes spontaneous and impactful decisions, while also often making controversial statements, in contrast to other billionaires who prefer reclusiveness to protect their businesses. Musk's actions and his expressed views have made him a polarizing figure. Biographer Ashlee Vance described people's opinions of Musk as polarized due to his "part philosopher, part troll" persona on Twitter. He has drawn denunciation for using his platform to mock the self-selection of personal pronouns, while also receiving praise for bringing international attention to matters such as the survivors of British grooming gangs.
Musk has been described as an American oligarch due to his extensive influence over public discourse, social media, industry, politics, and government policy. After Trump's re-election, Musk's influence and actions during the transition period and the second presidency of Donald Trump led some to call him "President Musk", the "actual president-elect", "shadow president" or "co-president". Awards for his contributions to the development of the Falcon rockets include the American Institute of Aeronautics and Astronautics George Low Transportation Award in 2008, the Fédération Aéronautique Internationale Gold Space Medal in 2010, and the Royal Aeronautical Society Gold Medal in 2012. In 2015, he received an honorary doctorate in engineering and technology from Yale University and an Institute of Electrical and Electronics Engineers Honorary Membership. Musk was elected a Fellow of the Royal Society (FRS) in 2018.[m] In 2022, Musk was elected to the National Academy of Engineering. Time has listed Musk as one of the most influential people in the world in 2010, 2013, 2018, and 2021. Musk was selected as Time's "Person of the Year" for 2021. Then-Time editor-in-chief Edward Felsenthal wrote that "Person of the Year is a marker of influence, and few individuals have had more influence than Musk on life on Earth, and potentially life off Earth too."
========================================
[SOURCE: https://en.wikipedia.org/wiki/Special:BookSources/0-333-49130-0] | [TOKENS: 380]
Contents Book sources This page allows users to search multiple sources for a book given a 10- or 13-digit International Standard Book Number (ISBN); spaces and dashes in the ISBN do not matter. The page links to catalogs of libraries, booksellers, and other book sources where you will be able to search for the book by its ISBN. Online text Google Books and other retail sources may be helpful if you want to verify citations in Wikipedia articles, because they often let you search an online version of the book for specific words or phrases, or browse through the book (although for copyright reasons the entire book is usually not available). At the Open Library (part of the Internet Archive) you can borrow and read entire books online. Non-English book sources If the book you are looking for is in a language other than English, you might find it helpful to look at the equivalent pages on other Wikipedias: they are more likely to have sources appropriate for that language. Find other editions The WorldCat xISBN tool for finding other editions is no longer available. However, there is often a "view all editions" link on the results page from an ISBN search, and Google Books often lists other editions of a book and related books under the "about this book" link. You can convert between 10- and 13-digit ISBNs with standard tools.
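The conversion mentioned above is mechanical: an ISBN-13 is the prefix 978 plus the first nine digits of the ISBN-10, followed by a freshly computed check digit (the first twelve digits are weighted alternately 1 and 3 and summed modulo 10). A minimal sketch in Python (an illustrative helper, not one of the page's linked tools):

```python
def isbn10_to_isbn13(isbn10: str) -> str:
    """Convert an ISBN-10 (spaces and dashes allowed) to its ISBN-13 form."""
    chars = [c for c in isbn10 if c not in " -"]
    assert len(chars) == 10, "an ISBN-10 has 10 characters"
    core = "978" + "".join(chars[:9])  # drop old check digit, add EAN prefix
    # ISBN-13 check digit: weights alternate 1, 3 over the first 12 digits.
    total = sum((1 if i % 2 == 0 else 3) * int(d) for i, d in enumerate(core))
    return core + str((10 - total % 10) % 10)

# The ISBN from this page's URL:
print(isbn10_to_isbn13("0-333-49130-0"))  # -> 9780333491300
```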
========================================
[SOURCE: https://en.wikipedia.org/wiki/Animal#cite_ref-90] | [TOKENS: 6011]
Contents Animal Animals are multicellular, eukaryotic organisms belonging to the biological kingdom Animalia (/ˌænɪˈmeɪliə/). With few exceptions, animals consume organic material, breathe oxygen, have myocytes and are able to move, can reproduce sexually, and grow from a hollow sphere of cells, the blastula, during embryonic development. Animals form a clade, meaning that they arose from a single common ancestor. Over 1.5 million living animal species have been described, of which around 1.05 million are insects, over 85,000 are molluscs, and around 65,000 are vertebrates. It has been estimated there are as many as 7.77 million animal species on Earth. Animal body lengths range from 8.5 μm (0.00033 in) to 33.6 m (110 ft). They have complex ecologies and interactions with each other and their environments, forming intricate food webs. The scientific study of animals is known as zoology, and the study of animal behaviour is known as ethology. The animal kingdom is divided into five major clades, namely Porifera, Ctenophora, Placozoa, Cnidaria and Bilateria. Most living animal species belong to the clade Bilateria, a highly proliferative clade whose members have a bilaterally symmetric and significantly cephalised body plan, and the vast majority of bilaterians belong to two large clades: the protostomes, which include organisms such as arthropods, molluscs, flatworms, annelids and nematodes; and the deuterostomes, which include echinoderms, hemichordates and chordates, the latter of which contains the vertebrates. The much smaller basal phylum Xenacoelomorpha has an uncertain position within Bilateria. Animals first appeared in the fossil record in the late Cryogenian period and diversified in the subsequent Ediacaran period in what is known as the Avalon explosion. Nearly all modern animal phyla first appeared in the fossil record as marine species during the Cambrian explosion, which began around 539 million years ago (Mya), and most classes during the Ordovician radiation 485.4 Mya. Common to all living animals, 6,331 groups of genes have been identified that may have arisen from a single common ancestor that lived about 650 Mya during the Cryogenian period. Historically, Aristotle divided animals into those with blood and those without. Carl Linnaeus created the first hierarchical biological classification for animals in 1758 with his Systema Naturae, which Jean-Baptiste Lamarck expanded into 14 phyla by 1809. In 1874, Ernst Haeckel divided the animal kingdom into the multicellular Metazoa (now synonymous with Animalia) and the Protozoa, single-celled organisms no longer considered animals. In modern times, the biological classification of animals relies on advanced techniques, such as molecular phylogenetics, which are effective at demonstrating the evolutionary relationships between taxa. Humans make use of many other animal species for food (including meat, eggs, and dairy products), for materials (such as leather, fur, and wool), as pets, and as working animals for transportation and services. Dogs, the first domesticated animal, have been used in hunting, in security and in warfare, as have horses, pigeons and birds of prey, while other terrestrial and aquatic animals are hunted for sport, trophies or profit. Non-human animals are also an important cultural element of human evolution, having appeared in cave art and totems since the earliest times, and are frequently featured in mythology, religion, arts, literature, heraldry, politics, and sports.
Etymology The word animal comes from the Latin noun animal of the same meaning, which is itself derived from Latin animalis 'having breath or soul'. The biological definition includes all members of the kingdom Animalia. In colloquial usage, the term animal is often used to refer only to nonhuman animals. The term metazoa is derived from Ancient Greek μετα meta 'after' (in biology, the prefix meta- stands for 'later') and ζῷᾰ zōia 'animals', plural of ζῷον zōion 'animal'. A metazoan is any member of the group Metazoa. Characteristics Animals have several characteristics that they share with other living things. Animals are eukaryotic, multicellular, and aerobic, as are plants and fungi. Unlike plants and algae, which produce their own food, animals cannot produce their own, a feature they share with fungi. Animals ingest organic material and digest it internally. Animals have structural characteristics that set them apart from all other living things: typically, there is an internal digestive chamber with either one opening (in Ctenophora, Cnidaria, and flatworms) or two openings (in most bilaterians). Animal development is controlled by Hox genes, which signal the times and places to develop structures such as body segments and limbs. During development, the animal extracellular matrix forms a relatively flexible framework upon which cells can move about and be reorganised into specialised tissues and organs, making the formation of complex structures possible, and allowing cells to be differentiated. The extracellular matrix may be calcified, forming structures such as shells, bones, and spicules. In contrast, the cells of other multicellular organisms (primarily algae, plants, and fungi) are held in place by cell walls, and so develop by progressive growth. Nearly all animals make use of some form of sexual reproduction. They produce haploid gametes by meiosis; the smaller, motile gametes are spermatozoa and the larger, non-motile gametes are ova. These fuse to form zygotes, which develop via mitosis into a hollow sphere, called a blastula. In sponges, blastula larvae swim to a new location, attach to the seabed, and develop into a new sponge. In most other groups, the blastula undergoes more complicated rearrangement. It first invaginates to form a gastrula with a digestive chamber and two separate germ layers, an external ectoderm and an internal endoderm. In most cases, a third germ layer, the mesoderm, also develops between them. These germ layers then differentiate to form tissues and organs. Repeated instances of mating with a close relative during sexual reproduction generally lead to inbreeding depression within a population due to the increased prevalence of harmful recessive traits. Animals have evolved numerous mechanisms for avoiding close inbreeding. Some animals are capable of asexual reproduction, which often results in a genetic clone of the parent. This may take place through fragmentation; budding, such as in Hydra and other cnidarians; or parthenogenesis, where fertile eggs are produced without mating, such as in aphids. Ecology Animals are categorised into ecological groups depending on their trophic levels and how they consume organic material. Such groupings include carnivores (further divided into subcategories such as piscivores, insectivores, ovivores, etc.), herbivores (subcategorised into folivores, graminivores, frugivores, granivores, nectarivores, algivores, etc.), omnivores, fungivores, scavengers/detritivores, and parasites.
Interactions between animals of each biome form complex food webs within that ecosystem. In carnivorous or omnivorous species, predation is a consumer–resource interaction where the predator feeds on another organism, its prey, which often evolves anti-predator adaptations to avoid being fed upon. Selective pressures imposed on one another lead to an evolutionary arms race between predator and prey, resulting in various antagonistic and competitive coevolutions. Almost all multicellular predators are animals. Some consumers use multiple methods; for example, in parasitoid wasps, the larvae feed on the hosts' living tissues, killing them in the process, but the adults primarily consume nectar from flowers. Other animals may have very specific feeding behaviours, such as hawksbill sea turtles, which mainly eat sponges. Most animals rely on the biomass and bioenergy produced by plants and phytoplankton (collectively called producers) through photosynthesis. Herbivores, as primary consumers, eat the plant material directly to digest and absorb the nutrients, while carnivores and other animals on higher trophic levels indirectly acquire the nutrients by eating the herbivores or other animals that have eaten the herbivores. Animals oxidise carbohydrates, lipids, proteins and other biomolecules in cellular respiration, which allows the animal to grow and to sustain basal metabolism and fuel other biological processes such as locomotion. Some benthic animals living close to hydrothermal vents and cold seeps on the dark sea floor consume organic matter produced through chemosynthesis (via oxidising inorganic compounds such as hydrogen sulfide) by archaea and bacteria. Animals originated in the ocean; all extant animal phyla, except for Micrognathozoa and Onychophora, feature at least some marine species. However, several lineages of arthropods began to colonise land around the same time as land plants, probably between 510 and 471 million years ago, during the Late Cambrian or Early Ordovician. Vertebrates such as the lobe-finned fish Tiktaalik started to move on to land in the late Devonian, about 375 million years ago. Other notable animal groups that colonised land are the Mollusca, Platyhelminthes, Annelida, Tardigrada, Onychophora, Rotifera and Nematoda. Animals occupy virtually all of Earth's habitats and microhabitats, with faunas adapted to salt water, hydrothermal vents, fresh water, hot springs, swamps, forests, pastures, deserts, air, and the interiors of other organisms. Animals are, however, not particularly heat-tolerant; very few of them can survive at constant temperatures above 50 °C (122 °F) or in the most extreme cold deserts of continental Antarctica. The collective global geomorphic influence of animals on the processes shaping the Earth's surface remains largely understudied, with most studies limited to individual species and well-known exemplars. Diversity The blue whale (Balaenoptera musculus) is the largest animal that has ever lived, weighing up to 190 tonnes and measuring up to 33.6 metres (110 ft) long. The largest extant terrestrial animal is the African bush elephant (Loxodonta africana), weighing up to 12.25 tonnes and measuring up to 10.67 metres (35.0 ft) long. The largest terrestrial animals that ever lived were titanosaur sauropod dinosaurs such as Argentinosaurus, which may have weighed as much as 73 tonnes, and Supersaurus, which may have reached 39 metres.
Several animals are microscopic; some Myxozoa (obligate parasites within the Cnidaria) never grow larger than 20 μm, and one of the smallest species (Myxobolus shekel) is no more than 8.5 μm when fully grown. The following table lists estimated numbers of described extant species for the major animal phyla, along with their principal habitats (terrestrial, fresh water, and marine), and free-living or parasitic ways of life. Species estimates shown here are based on numbers described scientifically; much larger estimates have been calculated based on various means of prediction, and these can vary wildly. For instance, around 25,000–27,000 species of nematodes have been described, while published estimates of the total number of nematode species include 10,000–20,000; 500,000; 10 million; and 100 million. Using patterns within the taxonomic hierarchy, the total number of animal species—including those not yet described—was calculated to be about 7.77 million in 2011.[a] [Table of described species counts by phylum not preserved in this extract.] Evolutionary origin Evidence of animals is found as long ago as the Cryogenian period. 24-Isopropylcholestane (24-ipc) has been found in rocks from roughly 650 million years ago; it is only produced by sponges and pelagophyte algae. Its likely origin is from sponges, based on molecular clock estimates for the origin of 24-ipc production in both groups. Analyses of pelagophyte algae consistently recover a Phanerozoic origin, while analyses of sponges recover a Neoproterozoic origin, consistent with the appearance of 24-ipc in the fossil record. The first body fossils of animals appear in the Ediacaran, represented by forms such as Charnia and Spriggina. It had long been doubted whether these fossils truly represented animals, but the discovery of the animal lipid cholesterol in fossils of Dickinsonia establishes their nature as animals. Animals are thought to have originated under low-oxygen conditions, suggesting that they were capable of living entirely by anaerobic respiration, but as they became specialised for aerobic metabolism they became fully dependent on oxygen in their environments. Many animal phyla first appear in the fossil record during the Cambrian explosion, starting about 539 million years ago, in beds such as the Burgess Shale. Extant phyla in these rocks include molluscs, brachiopods, onychophorans, tardigrades, arthropods, echinoderms and hemichordates, along with numerous now-extinct forms such as the predatory Anomalocaris. The apparent suddenness of the event may however be an artefact of the fossil record, rather than showing that all these animals appeared simultaneously. That view is supported by the discovery of Auroralumina attenboroughii, the earliest known Ediacaran crown-group cnidarian (557–562 mya, some 20 million years before the Cambrian explosion) from Charnwood Forest, England. It is thought to be one of the earliest predators, catching small prey with its nematocysts as modern cnidarians do. Some palaeontologists have suggested that animals appeared much earlier than the Cambrian explosion, possibly as early as 1 billion years ago. Early fossils that might represent animals appear for example in the 665-million-year-old rocks of the Trezona Formation of South Australia. These fossils are interpreted as most probably being early sponges. Trace fossils such as tracks and burrows found in the Tonian period (from 1 gya) may indicate the presence of triploblastic worm-like animals, roughly as large (about 5 mm wide) and complex as earthworms.
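The "patterns within the taxonomic hierarchy" behind that 7.77 million figure exploit the fact that counts of animal taxa grow roughly geometrically as one descends the ranks from kingdom to genus. A toy illustration of the idea in Python (this is not the actual Mora et al. 2011 procedure, which models discovery curves per rank; the counts below are rough illustrative values, not the study's inputs):

```python
import numpy as np

# Illustrative (not authoritative) counts of animal taxa at successive ranks:
# kingdom, phylum, class, order, family, genus.
ranks = np.arange(1, 7)
counts = np.array([1, 35, 107, 494, 6_500, 111_000])

# Fit log10(count) as a linear function of rank, then extrapolate
# one rank further (rank 7 = species).
slope, intercept = np.polyfit(ranks, np.log10(counts), 1)
species_estimate = 10 ** (slope * 7 + intercept)
print(f"Extrapolated species count: ~{species_estimate:,.0f}")
```

A crude log-linear fit like this lands only within roughly an order of magnitude of the published ~7.77 million estimate; the full method gets closer by also modelling how the discovery of taxa at each rank saturates over time.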
However, similar tracks are produced by the giant single-celled protist Gromia sphaerica, so the Tonian trace fossils may not indicate early animal evolution. Around the same time, the layered mats of microorganisms called stromatolites decreased in diversity, perhaps due to grazing by newly evolved animals. Objects such as sediment-filled tubes that resemble trace fossils of the burrows of wormlike animals have been found in 1.2 gya rocks in North America, in 1.5 gya rocks in Australia and North America, and in 1.7 gya rocks in Australia. Their interpretation as having an animal origin is disputed, as they might be water-escape or other structures. Phylogeny Animals are monophyletic, meaning they are derived from a common ancestor. Animals are the sister group to the choanoflagellates, with which they form the Choanozoa. Ros-Rocher and colleagues (2021) trace the origins of animals to unicellular ancestors, providing an external phylogeny that can be written in Newick style as (Holomycota (inc. fungi), (Ichthyosporea, (Pluriformea, (Filasterea, Choanozoa)))); uncertain relationships are indicated with dashed lines in their cladogram. The animal clade had certainly originated by 650 mya, and may have come into being as early as 800 mya, based on molecular clock evidence for different phyla. The relationships at the base of the animal tree have been debated. Other than Ctenophora, the Bilateria and Cnidaria are the only groups with symmetry, and other evidence shows they are closely related. In addition to the sponges, the Placozoa have no symmetry and were often considered a "missing link" between protists and multicellular animals. The presence of Hox genes in Placozoa shows that they were once more complex. The Porifera (sponges) have long been assumed to be sister to the rest of the animals, but there is evidence that the Ctenophora may be in that position. Molecular phylogenetics has supported both the sponge-sister and ctenophore-sister hypotheses. In 2017, Roberto Feuda and colleagues, using amino acid differences, presented both, with the following cladogram for the sponge-sister view that they supported (their ctenophore-sister tree simply interchanging the places of ctenophores and sponges): (Porifera, (Ctenophora, (Placozoa, (Cnidaria, Bilateria)))). Conversely, a 2023 study by Darrin Schultz and colleagues uses ancient gene linkages to construct the ctenophore-sister phylogeny (Ctenophora, (Porifera, (Placozoa, (Cnidaria, Bilateria)))). Sponges are physically very distinct from other animals, and were long thought to have diverged first, representing the oldest animal phylum and forming a sister clade to all other animals. Despite their morphological dissimilarity with all other animals, genetic evidence suggests sponges may be more closely related to other animals than the comb jellies are. Sponges lack the complex organisation found in most other animal phyla; their cells are differentiated, but in most cases not organised into distinct tissues, unlike all other animals. They typically feed by drawing in water through pores, filtering out small particles of food. The Ctenophora and Cnidaria are radially symmetric and have digestive chambers with a single opening, which serves as both mouth and anus. Animals in both phyla have distinct tissues, but these are not organised into discrete organs. They are diploblastic, having only two main germ layers, ectoderm and endoderm. The tiny placozoans have no permanent digestive chamber and no symmetry; they superficially resemble amoebae. Their phylogeny is poorly defined, and under active research.
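Because the sponge-sister and ctenophore-sister hypotheses differ only in which lineage branches off first, the contrast is easy to make concrete by encoding each ladder as a nested structure. A minimal Python sketch (the tuple encoding is only an illustration, not a format used by the cited studies):

```python
# Each tree is a nested pair: (first-diverging lineage, remaining clade).
sponge_sister = ("Porifera", ("Ctenophora", ("Placozoa", ("Cnidaria", "Bilateria"))))
cteno_sister = ("Ctenophora", ("Porifera", ("Placozoa", ("Cnidaria", "Bilateria"))))

def branching_order(tree):
    """Return lineages in the order they diverge along the ladder."""
    order = []
    while isinstance(tree, tuple):
        head, tree = tree
        order.append(head)
    order.append(tree)
    return order

print(branching_order(sponge_sister))
# ['Porifera', 'Ctenophora', 'Placozoa', 'Cnidaria', 'Bilateria']
print(branching_order(cteno_sister))
# ['Ctenophora', 'Porifera', 'Placozoa', 'Cnidaria', 'Bilateria']
```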
The remaining animals, the great majority (comprising some 29 phyla and over a million species), form the Bilateria clade, whose members have a bilaterally symmetric body plan. The Bilateria are triploblastic, with three well-developed germ layers, and their tissues form distinct organs. The digestive chamber has two openings, a mouth and an anus, and in the Nephrozoa there is an internal body cavity, a coelom or pseudocoelom. These animals have a head end (anterior) and a tail end (posterior), a back (dorsal) surface and a belly (ventral) surface, and a left and a right side. A modern consensus phylogenetic tree for the Bilateria is shown in the cladogram. [Cladogram: Xenacoelomorpha, Ambulacraria, Chordata, Ecdysozoa, Spiralia.] Having a front end means that this part of the body encounters stimuli, such as food, favouring cephalisation, the development of a head with sense organs and a mouth. Many bilaterians have a combination of circular muscles that constrict the body, making it longer, and an opposing set of longitudinal muscles that shorten the body; these enable soft-bodied animals with a hydrostatic skeleton to move by peristalsis. They also have a gut that extends through the basically cylindrical body from mouth to anus. Many bilaterian phyla have primary larvae which swim with cilia and have an apical organ containing sensory cells. However, over evolutionary time, descendant lineages have evolved which have lost one or more of these characteristics. For example, adult echinoderms are radially symmetric (unlike their larvae), while some parasitic worms have extremely simplified body structures. Genetic studies have considerably changed zoologists' understanding of the relationships within the Bilateria. Most appear to belong to two major lineages, the protostomes and the deuterostomes. It is often suggested that the basalmost bilaterians are the Xenacoelomorpha, with all other bilaterians belonging to the subclade Nephrozoa. However, this suggestion has been contested, with other studies finding that xenacoelomorphs are more closely related to Ambulacraria than to other bilaterians. Protostomes and deuterostomes differ in several ways. Early in development, deuterostome embryos undergo radial cleavage during cell division, while many protostomes (the Spiralia) undergo spiral cleavage. Animals from both groups possess a complete digestive tract, but in protostomes the first opening of the embryonic gut develops into the mouth, and the anus forms secondarily. In deuterostomes, the anus forms first while the mouth develops secondarily. Most protostomes have schizocoelous development, where cells simply fill in the interior of the gastrula to form the mesoderm. In deuterostomes, the mesoderm forms by enterocoelic pouching, through invagination of the endoderm. The main deuterostome taxa are the Ambulacraria and the Chordata. Ambulacraria are exclusively marine and include acorn worms, starfish, sea urchins, and sea cucumbers. The chordates are dominated by the vertebrates (animals with backbones), which consist of fishes, amphibians, reptiles, birds, and mammals. The protostomes include the Ecdysozoa, named after their shared trait of ecdysis, growth by moulting. Among the largest ecdysozoan phyla are the arthropods and the nematodes. The rest of the protostomes are in the Spiralia, named for their pattern of developing by spiral cleavage in the early embryo. Major spiralian phyla include the annelids and molluscs.
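The consensus bilaterian tree just described can be captured the same way. The sketch below encodes the groupings named in the text as nested Python tuples; it assumes the contested Xenacoelomorpha-basal (Nephrozoa) arrangement, and the variable and function names are ours:

# Sketch: the consensus bilaterian phylogeny as nested tuples; the
# nesting mirrors branching order under the Nephrozoa hypothesis.
deuterostomia = ("Ambulacraria", "Chordata")  # radial cleavage; anus forms first
protostomia = ("Ecdysozoa", "Spiralia")       # mouth forms from the first gut opening
nephrozoa = (deuterostomia, protostomia)
bilateria = ("Xenacoelomorpha", nephrozoa)

def leaves(clade):
    """Recursively collect the leaf taxa of a nested-tuple clade."""
    if isinstance(clade, str):
        return [clade]
    return [taxon for subclade in clade for taxon in leaves(subclade)]

print(leaves(bilateria))
# -> ['Xenacoelomorpha', 'Ambulacraria', 'Chordata', 'Ecdysozoa', 'Spiralia']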
History of classification In the classical era, Aristotle divided animals,[d] based on his own observations, into those with blood (roughly, the vertebrates) and those without. The animals were then arranged on a scale from man (with blood, two legs, rational soul) down through the live-bearing tetrapods (with blood, four legs, sensitive soul) and other groups such as crustaceans (no blood, many legs, sensitive soul) down to spontaneously generating creatures like sponges (no blood, no legs, vegetable soul). Aristotle was uncertain whether sponges were animals, which in his system ought to have sensation, appetite, and locomotion, or plants, which did not: he knew that sponges could sense touch and would contract if about to be pulled off their rocks, but that they were rooted like plants and never moved about. In 1758, Carl Linnaeus created the first hierarchical classification in his Systema Naturae. In his original scheme, the animals were one of three kingdoms, divided into the classes of Vermes, Insecta, Pisces, Amphibia, Aves, and Mammalia. Since then, the last four have all been subsumed into a single phylum, the Chordata, while his Insecta (which included the crustaceans and arachnids) and Vermes have been renamed or broken up. The process was begun in 1793 by Jean-Baptiste de Lamarck, who called the Vermes une espèce de chaos ('a chaotic mess')[e] and split the group into three new phyla: worms, echinoderms, and polyps (which contained corals and jellyfish). By 1809, in his Philosophie Zoologique, Lamarck had created nine phyla apart from vertebrates (where he still had four phyla: mammals, birds, reptiles, and fish) and molluscs, namely cirripedes, annelids, crustaceans, arachnids, insects, worms, radiates, polyps, and infusorians. In his 1817 Le Règne Animal, Georges Cuvier used comparative anatomy to group the animals into four embranchements ('branches' with different body plans, roughly corresponding to phyla), namely vertebrates, molluscs, articulated animals (arthropods and annelids), and zoophytes (radiata) (echinoderms, cnidaria and other forms). This division into four was followed by the embryologist Karl Ernst von Baer in 1828, the zoologist Louis Agassiz in 1857, and the comparative anatomist Richard Owen in 1860. In 1874, Ernst Haeckel divided the animal kingdom into two subkingdoms: Metazoa (multicellular animals, with five phyla: coelenterates, echinoderms, articulates, molluscs, and vertebrates) and Protozoa (single-celled animals), including a sixth animal phylum, sponges. The protozoa were later moved to the former kingdom Protista, leaving only the Metazoa as a synonym of Animalia. In human culture The human population exploits a large number of other animal species for food, both from domesticated livestock species in animal husbandry and, mainly at sea, by hunting wild species. Marine fish of many species are caught commercially for food. A smaller number of species are farmed commercially. Humans and their livestock make up more than 90% of the biomass of all terrestrial vertebrates, and almost as much as all insects combined. Invertebrates including cephalopods, crustaceans, insects (principally bees and silkworms), and bivalve or gastropod molluscs are hunted or farmed for food and fibres. Chickens, cattle, sheep, pigs, and other animals are raised as livestock for meat across the world.
Animal fibres such as wool and silk are used to make textiles, while animal sinews have been used as lashings and bindings, and leather is widely used to make shoes and other items. Animals have been hunted and farmed for their fur to make items such as coats and hats. Dyestuffs including carmine (cochineal), shellac, and kermes have been made from the bodies of insects. Working animals including cattle and horses have been used for work and transport from the first days of agriculture. Animals such as the fruit fly Drosophila melanogaster serve a major role in science as experimental models. Animals have been used to create vaccines since vaccines were first developed in the 18th century. Some medicines such as the cancer drug trabectedin are based on toxins or other molecules of animal origin. People have used hunting dogs to help chase down and retrieve animals, and birds of prey to catch birds and mammals, while tethered cormorants have been used to catch fish. Poison dart frogs have been used to poison the tips of blowpipe darts. A wide variety of animals are kept as pets, with invertebrates such as tarantulas, octopuses, and praying mantises, reptiles such as snakes and chameleons, and birds including canaries, parakeets, and parrots all finding a place. However, the most commonly kept pet species are mammals, namely dogs, cats, and rabbits. There is a tension between the role of animals as companions to humans and their existence as individuals with rights of their own. A wide variety of terrestrial and aquatic animals are hunted for sport. The signs of the Western and Chinese zodiacs are based on animals. In China and Japan, the butterfly has been seen as the personification of a person's soul, and in classical representation the butterfly was likewise a symbol of the soul. Animals have been the subjects of art from the earliest times, both historical, as in ancient Egypt, and prehistoric, as in the cave paintings at Lascaux. Major animal paintings include Albrecht Dürer's 1515 The Rhinoceros and George Stubbs's c. 1762 horse portrait Whistlejacket. Insects, birds and mammals play roles in literature and film, such as in giant bug movies. Animals including insects and mammals feature in mythology and religion. The scarab beetle was sacred in ancient Egypt, and the cow is sacred in Hinduism. Among other mammals, deer, horses, lions, bats, bears, and wolves are the subjects of myths and worship.
========================================
[SOURCE: https://en.wikipedia.org/wiki/Philosophy_and_economics] | [TOKENS: 1284]
Contents Philosophy and economics Empirical methods Prescriptive and policy Philosophy and economics studies topics such as public economics, behavioural economics, rationality, justice, history of economic thought, rational choice, the appraisal of economic outcomes, institutions and processes, the status of highly idealized economic models, the ontology of economic phenomena and the possibilities of acquiring knowledge of them. It is useful to divide the philosophy of economics in this way into three subject matters, which can be regarded respectively as branches of action theory, ethics (or normative social and political philosophy), and philosophy of science. Economic theories of rationality, welfare, and social choice defend substantive philosophical theses often informed by relevant philosophical literature and of evident interest to those interested in action theory, philosophical psychology, and social and political philosophy. Economics is of special interest to those interested in epistemology and philosophy of science both because of its detailed peculiarities and because it has many of the overt features of the natural sciences, while its object consists of social phenomena. In any empirical setting, the epistemic assumptions of financial economics (and related applied financial disciplines) are relevant, and are further discussed under the epistemology of finance. Scope The question usually addressed in any subfield of philosophy (the philosophy of X) is "what is X?". A philosophical approach to the question "what is economics?" is less likely to produce an answer than it is to produce a survey of the definitional and territorial difficulties and controversies. Similar considerations apply as a prologue to further discussion of methodology in a subject. Definitions of economics have varied over time from the modern origins of the subject, reflecting programmatic concerns and distinctions of expositors. Ontological questions continue with further "what is..." questions addressed at fundamental economic phenomena, such as "what is (economic) value?" or "what is a market?". While it is possible to respond to such questions with real verbal definitions, the philosophical value of posing such questions actually aims at shifting entire perspectives as to the nature of the foundations of economics. In the rare cases that attempts at ontological shifts gain wide acceptance, their ripple effects can spread throughout the entire field of economics. An epistemology deals with how we know things. In the philosophy of economics this means asking questions such as: what kind of "truth claim" is made by economic theories – for example, are we claiming that the theories relate to reality or perceptions? How can or should we prove economic theories – for example, must every economic theory be empirically verifiable? How exact are economic theories, and can they lay claim to the status of an exact science – for example, are economic predictions as reliable as predictions in the natural sciences, and why or why not? Another way of expressing this issue is to ask whether economic theories can state "laws". Philosophers of science and economists have explored these issues intensively since the work of Alexander Rosenberg and Daniel M. Hausman some three decades ago. Philosophical approaches in decision theory focus on foundational concepts in decision theory – for example, on the natures of choice or preference, rationality, risk and uncertainty, and economic agents.
Game theory is shared between a number of disciplines, especially mathematics, economics and philosophy, and is still extensively discussed within the field of the philosophy of economics. It is closely related to and builds on decision theory, and is likewise very strongly interdisciplinary. The ethics of economic systems deals with issues such as how it is right (just, fair) to keep or distribute economic goods. Economic systems, as a product of collective activity, allow examination of their ethical consequences for all of their participants. Ethics and economics relates ethical studies to welfare economics. It has been argued that a closer relation between welfare economics and modern ethical studies may enrich both areas, even including predictive and descriptive economics as to rationality of behaviour, given social interdependence. Ethics and justice overlap disciplines in different ways. Approaches are regarded as more philosophical when they study the fundamentals – for example, John Rawls' A Theory of Justice (1971) and Robert Nozick's Anarchy, State and Utopia (1974). 'Justice' in economics is a subcategory of welfare economics, with models frequently representing the ethical-social requirements of a given theory. "Practical" matters include such subjects as law and cost–benefit analysis. Utilitarianism, one of the ethical methodologies, has its origins inextricably interwoven with the emergence of modern economic thought. Today utilitarianism has spread throughout applied ethics as one of a number of approaches. Non-utilitarian approaches in applied ethics are also now used when questioning the ethics of economic systems – e.g. rights-based (deontological) approaches. Many political ideologies have been an immediate outgrowth of reflection on the ethics of economic systems. Marx, for example, is generally regarded primarily as a philosopher, his most notable work being on the philosophy of economics. However, Marx's economic critique of capitalism did not depend on ethics, justice, or any form of morality, instead focusing on the inherent contradictions of capitalism through the lens of a process which is today called dialectical materialism. The philosophy of economics defines itself as including the questioning of foundations or assumptions of economics. The foundations and assumptions of economics have been questioned from the perspective of noteworthy but typically under-represented groups. These areas are therefore to be included within the philosophy of economics. Scholars cited in the literature Related disciplines The ethics of economic systems is an area of overlap between business ethics and the philosophy of economics. People who write on the ethics of economic systems are more likely to call themselves political philosophers than business ethicists or economic philosophers. There is significant overlap between theoretical issues in economics and the philosophy of economics. As economics is generally accepted to have its origins in philosophy, the history of economics overlaps with the philosophy of economics. Degrees Some universities offer joint degrees that combine philosophy, politics and economics. These degrees cover many of the problems that are discussed in philosophy and economics, but are more broadly construed.
A small number of universities, notably the London School of Economics, the University of Edinburgh, the Erasmus University Rotterdam, Copenhagen Business School, the University of Vienna, the University of Bayreuth, the University of Hamburg and the Witten/Herdecke University, offer master's degree programs specialized in philosophy, politics and economics.
========================================
[SOURCE: https://en.wikipedia.org/wiki/Elon_Musk#cite_ref-390] | [TOKENS: 10515]
Contents Elon Musk Elon Reeve Musk (/ˈiːlɒn/ EE-lon; born June 28, 1971) is a businessman and entrepreneur known for his leadership of Tesla, SpaceX, Twitter, and xAI. Musk has been the wealthiest person in the world since 2025; as of February 2026, Forbes estimates his net worth to be around US$852 billion. Born into a wealthy family in Pretoria, South Africa, Musk emigrated to Canada in 1989; he holds Canadian citizenship through his mother, who was born there. He received bachelor's degrees from the University of Pennsylvania in 1997 before moving to California to pursue business ventures. In 1995, Musk co-founded the software company Zip2. Following its sale in 1999, he co-founded X.com, an online payment company that later merged to form PayPal, which was acquired by eBay in 2002. Musk also became an American citizen in 2002. In 2002, Musk founded the space technology company SpaceX, becoming its CEO and chief engineer; the company has since led innovations in reusable rockets and commercial spaceflight. Musk joined the automaker Tesla as an early investor in 2004 and became its CEO and product architect in 2008; it has since become a leader in electric vehicles. In 2015, he co-founded OpenAI to advance artificial intelligence (AI) research, but later left; growing discontent with the organization's direction and its leadership of the AI boom in the 2020s led him to establish xAI, which became a subsidiary of SpaceX in 2026. In 2022, he acquired the social network Twitter, implementing significant changes and rebranding it as X in 2023. His other businesses include the neurotechnology company Neuralink, which he co-founded in 2016, and the tunneling company the Boring Company, which he founded in 2017. In November 2025, a Tesla pay package worth $1 trillion for Musk was approved, which he is to receive over 10 years if he meets specific goals. Musk was the largest donor in the 2024 U.S. presidential election, where he supported Donald Trump. After Trump was inaugurated as president in early 2025, Musk served as Senior Advisor to the President and as the de facto head of the Department of Government Efficiency (DOGE). After a public feud with Trump, Musk left the Trump administration and returned to managing his companies. Musk is a supporter of global far-right figures, causes, and political parties. His political activities, views, and statements have made him a polarizing figure. Musk has been criticized for COVID-19 misinformation, promoting conspiracy theories, and affirming antisemitic, racist, and transphobic comments. His acquisition of Twitter was controversial due to a subsequent increase in hate speech and the spread of misinformation on the service, following his pledge to decrease censorship. His role in the second Trump administration attracted public backlash, particularly in response to DOGE. The emails he sent to Jeffrey Epstein are included in the Epstein files, which were published in 2025 and 2026 and became a topic of worldwide debate. Early life Elon Reeve Musk was born on June 28, 1971, in Pretoria, South Africa's administrative capital. He is of British and Pennsylvania Dutch ancestry. His mother, Maye (née Haldeman), is a model and dietitian born in Saskatchewan, Canada, and raised in South Africa. Musk therefore holds both South African and Canadian citizenship from birth.
His father, Errol Musk, is a South African electromechanical engineer, pilot, sailor, consultant, emerald dealer, and property developer, who partly owned a rental lodge at Timbavati Private Nature Reserve. His maternal grandfather, Joshua N. Haldeman, who died in a plane crash when Elon was a toddler, was an American-born Canadian chiropractor, aviator and political activist in the technocracy movement who moved to South Africa in 1950. Elon has a younger brother, Kimbal, a younger sister, Tosca, and four paternal half-siblings. Musk was baptized as a child in the Anglican Church of Southern Africa. Despite both Elon and Errol previously stating that Errol was a part owner of a Zambian emerald mine, in 2023 Errol recounted that the deal he made was to receive "a portion of the emeralds produced at three small mines". Errol was elected to the Pretoria City Council as a representative of the anti-apartheid Progressive Party and has said that his children shared their father's dislike of apartheid. After his parents divorced in 1979, Elon, aged around 9, chose to live with his father because Errol had an Encyclopædia Britannica and a computer. Elon later regretted his decision and became estranged from his father. Elon has recounted trips to a wilderness school that he described as a "paramilitary Lord of the Flies" where "bullying was a virtue" and children were encouraged to fight over rations. In one incident, after an altercation with a fellow pupil, Elon was thrown down concrete steps and beaten severely, leading to him being hospitalized for his injuries. Elon described his father berating him after he was discharged from the hospital. Errol denied berating Elon and claimed, "The [other] boy had just lost his father to suicide, and Elon had called him stupid. Elon had a tendency to call people stupid. How could I possibly blame that child?" Elon was an enthusiastic reader of books and has attributed his success in part to having read The Lord of the Rings, the Foundation series, and The Hitchhiker's Guide to the Galaxy. At age ten, he developed an interest in computing and video games, teaching himself how to program from the VIC-20 user manual. At age twelve, Elon sold his BASIC-based game Blastar to PC and Office Technology magazine for approximately $500 (equivalent to $1,600 in 2025). Musk attended Waterkloof House Preparatory School, Bryanston High School, and then Pretoria Boys High School, where he graduated. Musk was a decent but unexceptional student, earning a 61/100 in Afrikaans and a B on his senior math certification. Musk applied for a Canadian passport through his Canadian-born mother to avoid South Africa's mandatory military service, which would have forced him to participate in the apartheid regime, as well as to ease his path to immigration to the United States. While waiting for his application to be processed, he attended the University of Pretoria for five months. Musk arrived in Canada in June 1989, connected with a second cousin in Saskatchewan, and worked odd jobs, including at a farm and a lumber mill. In 1990, he entered Queen's University in Kingston, Ontario. Two years later, he transferred to the University of Pennsylvania, where he studied until 1995. Although Musk has said that he earned his degrees in 1995, the University of Pennsylvania did not award them until 1997 – a Bachelor of Arts in physics and a Bachelor of Science in economics from the university's Wharton School.
He reportedly hosted large, ticketed house parties to help pay for tuition, and wrote a business plan for an electronic book-scanning service similar to Google Books. In 1994, Musk held two internships in Silicon Valley: one at energy storage startup Pinnacle Research Institute, which investigated electrolytic supercapacitors for energy storage, and another at Palo Alto–based startup Rocket Science Games. In 1995, he was accepted to a graduate program in materials science at Stanford University, but did not enroll. Musk decided to join the Internet boom of the 1990s, applying for a job at Netscape, to which he reportedly never received a response. The Washington Post reported that Musk lacked legal authorization to remain and work in the United States after failing to enroll at Stanford. In response, Musk said he was allowed to work at that time and that his student visa transitioned to an H1-B. According to numerous former business associates and shareholders, Musk said he was on a student visa at the time. Business career In 1995, Musk, his brother Kimbal, and Greg Kouri founded the web software company Zip2 with funding from a group of angel investors. They housed the venture at a small rented office in Palo Alto. Replying to Rolling Stone, Musk denounced the notion that they started their company with funds borrowed from Errol Musk, but in a tweet, he recognized that his father contributed 10% of a later funding round. The company developed and marketed an Internet city guide for the newspaper publishing industry, with maps, directions, and yellow pages. According to Musk, "The website was up during the day and I was coding it at night, seven days a week, all the time." To impress investors, Musk built a large plastic structure around a standard computer to create the impression that Zip2 was powered by a small supercomputer. The Musk brothers obtained contracts with The New York Times and the Chicago Tribune, and persuaded the board of directors to abandon plans for a merger with CitySearch. Musk's attempts to become CEO were thwarted by the board. Compaq acquired Zip2 for $307 million in cash in February 1999 (equivalent to $590,000,000 in 2025), and Musk received $22 million (equivalent to $43,000,000 in 2025) for his 7-percent share. In 1999, Musk co-founded X.com, an online financial services and e-mail payment company. The startup was one of the first federally insured online banks, and, in its initial months of operation, over 200,000 customers joined the service. The company's investors regarded Musk as inexperienced and replaced him with Intuit CEO Bill Harris by the end of the year. The following year, X.com merged with online bank Confinity to avoid competition. Founded by Max Levchin and Peter Thiel, Confinity had its own money-transfer service, PayPal, which was more popular than X.com's service. Within the merged company, Musk returned as CEO. Musk's preference for Microsoft software over Unix created a rift in the company and caused Thiel to resign. Due to resulting technological issues and lack of a cohesive business model, the board ousted Musk and replaced him with Thiel in 2000.[b] Under Thiel, the company focused on the PayPal service and was renamed PayPal in 2001. In 2002, PayPal was acquired by eBay for $1.5 billion (equivalent to $2,700,000,000 in 2025) in stock, of which Musk—the largest shareholder with 11.72% of shares—received $175.8 million (equivalent to $320,000,000 in 2025). 
In 2017, Musk purchased the domain X.com from PayPal for an undisclosed amount, stating that it had sentimental value. In 2001, Musk became involved with the nonprofit Mars Society and discussed funding plans to place a growth-chamber for plants on Mars. Seeking a way to launch the greenhouse payloads into space, Musk made two unsuccessful trips to Moscow to purchase intercontinental ballistic missiles (ICBMs) from Russian companies NPO Lavochkin and Kosmotras. Musk instead decided to start a company to build affordable rockets. With $100 million of his early fortune (equivalent to $180,000,000 in 2025), Musk founded SpaceX in May 2002 and became the company's CEO and chief engineer. SpaceX attempted its first launch of the Falcon 1 rocket in 2006. Although the rocket failed to reach Earth orbit, SpaceX was awarded a Commercial Orbital Transportation Services program contract from NASA, then led by Mike Griffin. After two more failed attempts that nearly caused Musk to go bankrupt, SpaceX succeeded in launching the Falcon 1 into orbit in 2008. Later that year, SpaceX received a $1.6 billion NASA contract (equivalent to $2,400,000,000 in 2025) for Falcon 9-launched Dragon spacecraft flights to the International Space Station (ISS), replacing the Space Shuttle after its 2011 retirement. In 2012, the Dragon vehicle docked with the ISS, a first for a commercial spacecraft. Working towards its goal of reusable rockets, in 2015 SpaceX successfully landed the first stage of a Falcon 9 on a land platform. Later landings were achieved on autonomous spaceport drone ships, an ocean-based recovery platform. In 2018, SpaceX launched the Falcon Heavy; the inaugural mission carried Musk's personal Tesla Roadster as a dummy payload. Since 2019, SpaceX has been developing Starship, a reusable, super heavy-lift launch vehicle intended to replace the Falcon 9 and Falcon Heavy. In 2020, SpaceX launched its first crewed flight, the Demo-2, becoming the first private company to place astronauts into orbit and dock a crewed spacecraft with the ISS. In 2024, NASA awarded SpaceX an $843 million (equivalent to $865,000,000 in 2025) contract to build a spacecraft that NASA will use to deorbit the ISS at the end of its lifespan. In 2015, SpaceX began development of the Starlink constellation of low Earth orbit satellites to provide satellite Internet access. After the launch of prototype satellites in 2018, the first large constellation was deployed in May 2019. As of May 2025, over 7,600 Starlink satellites are operational, comprising 65% of all operational Earth satellites. The total cost of the decade-long project to design, build, and deploy the constellation was estimated by SpaceX in 2020 to be $10 billion (equivalent to $12,000,000,000 in 2025).[c] During the Russian invasion of Ukraine, Musk provided free Starlink service to Ukraine, permitting Internet access and communication at a yearly cost to SpaceX of $400 million (equivalent to $440,000,000 in 2025). However, Musk refused to block Russian state media on Starlink. In 2023, Musk denied Ukraine's request to activate Starlink over Crimea to aid an attack against the Russian navy, citing fears of a nuclear response. Tesla, Inc., originally Tesla Motors, was incorporated in July 2003 by Martin Eberhard and Marc Tarpenning. Both men played active roles in the company's early development prior to Musk's involvement.
Musk led the Series A round of investment in February 2004; he invested $6.35 million (equivalent to $11,000,000 in 2025), became the majority shareholder, and joined Tesla's board of directors as chairman. Musk took an active role within the company and oversaw Roadster product design, but was not deeply involved in day-to-day business operations. Following a series of escalating conflicts in 2007 and the 2008 financial crisis, Eberhard was ousted from the firm. Musk assumed leadership of the company as CEO and product architect in 2008. A 2009 lawsuit settlement with Eberhard designated Musk as a Tesla co-founder, along with Tarpenning and two others. Tesla began delivery of the Roadster, an electric sports car, in 2008. With sales of about 2,500 vehicles, it was the first mass production all-electric car to use lithium-ion battery cells. Under Musk, Tesla has since launched several well-selling electric vehicles, including the four-door sedan Model S (2012), the crossover Model X (2015), the mass-market sedan Model 3 (2017), the crossover Model Y (2020), and the pickup truck Cybertruck (2023). In November 2018, Musk resigned as chairman of the board as part of the settlement of a lawsuit from the SEC over him tweeting that funding had been "secured" for potentially taking Tesla private. The company has also constructed multiple lithium-ion battery and electric vehicle factories, called Gigafactories. Since its initial public offering in 2010, Tesla stock has risen significantly; it became the most valuable carmaker in summer 2020, and it entered the S&P 500 later that year. In October 2021, it reached a market capitalization of $1 trillion (equivalent to $1,200,000,000,000 in 2025), the sixth company in U.S. history to do so. Musk provided the initial concept and financial capital for SolarCity, which his cousins Lyndon and Peter Rive founded in 2006. By 2013, SolarCity was the second largest provider of solar power systems in the United States. In 2014, Musk promoted the idea of SolarCity building an advanced production facility in Buffalo, New York, triple the size of the largest solar plant in the United States. Construction of the factory started in 2014 and was completed in 2017. It operated as a joint venture with Panasonic until early 2020. Tesla acquired SolarCity for $2 billion in 2016 (equivalent to $2,700,000,000 in 2025) and merged it with its battery unit to create Tesla Energy. The deal's announcement resulted in a more than 10% drop in Tesla's stock price; at the time, SolarCity was facing liquidity issues. Multiple shareholder groups filed a lawsuit against Musk and Tesla's directors, stating that the purchase of SolarCity was done solely to benefit Musk and came at the expense of Tesla and its shareholders. Tesla directors settled the lawsuit in January 2020, leaving Musk the sole remaining defendant. Two years later, the court ruled in Musk's favor. In 2016, Musk co-founded Neuralink, a neurotechnology startup, with an investment of $100 million. Neuralink aims to integrate the human brain with artificial intelligence (AI) by creating devices that are embedded in the brain. Such technology could enhance memory or allow the devices to communicate with software. The company also hopes to develop devices to treat neurological conditions like spinal cord injuries. In 2022, Neuralink announced that clinical trials would begin by the end of the year. In September 2023, the Food and Drug Administration approved Neuralink to initiate six-year human trials.
Neuralink has conducted animal testing on macaques at the University of California, Davis. In 2021, the company released a video in which a macaque played the video game Pong via a Neuralink implant. The company's animal trials, which have caused the deaths of some monkeys, have led to claims of animal cruelty. The Physicians Committee for Responsible Medicine has alleged that Neuralink violated the Animal Welfare Act. Employees have complained that pressure from Musk to accelerate development has led to botched experiments and unnecessary animal deaths. In 2022, a federal probe was launched into possible animal welfare violations by Neuralink. In 2017, Musk founded the Boring Company to construct tunnels; he also revealed plans for specialized, underground, high-occupancy vehicles that could travel up to 150 miles per hour (240 km/h) and thus circumvent above-ground traffic in major cities. Early in 2017, the company began discussions with regulatory bodies and initiated construction of a 30-foot (9.1 m) wide, 50-foot (15 m) long, and 15-foot (4.6 m) deep "test trench" on the premises of SpaceX's offices, as that required no permits. The Los Angeles tunnel, less than two miles (3.2 km) in length, debuted to journalists in 2018. It used Tesla Model Xs and was reported to be a rough ride while traveling at suboptimal speeds. Two tunnel projects announced in 2018, in Chicago and West Los Angeles, have been canceled. A tunnel beneath the Las Vegas Convention Center was completed in early 2021. Local officials have approved further expansions of the tunnel system. In early 2017, Musk expressed interest in buying Twitter and had questioned the platform's commitment to freedom of speech. By 2022, Musk had reached a 9.2% stake in the company, making him the largest shareholder.[d] Musk later agreed to a deal that would appoint him to Twitter's board of directors and prohibit him from acquiring more than 14.9% of the company. Days later, Musk made a $43 billion offer to buy Twitter. By the end of April, Musk had successfully concluded his bid for approximately $44 billion. This included approximately $12.5 billion in loans and $21 billion in equity financing. Having backtracked on his initial decision, Musk bought the company on October 27, 2022. Immediately after the acquisition, Musk fired several top Twitter executives including CEO Parag Agrawal; Musk became the CEO instead. Under Musk, Twitter instituted monthly subscriptions for a "blue check" and laid off a significant portion of the company's staff. Musk lessened content moderation, and hate speech increased on the platform after his takeover. In late 2022, Musk released internal documents relating to Twitter's moderation of the Hunter Biden laptop controversy in the lead-up to the 2020 presidential election. Musk also promised to step down as CEO after a Twitter poll, and five months later, he stepped down as CEO and transitioned to the roles of executive chairman and chief technology officer (CTO). Despite Musk stepping down as CEO, X continues to struggle with challenges such as viral misinformation, hate speech, and antisemitism controversies. Musk has been accused of trying to silence some of his critics, such as Twitch streamer Asmongold, who criticized him during one of his streams, by removing their accounts' blue checkmarks, which hinders visibility and is considered a form of shadow banning, or by suspending their accounts without justification.
Other activities In August 2013, Musk announced plans for a version of a vactrain, and assigned engineers from SpaceX and Tesla to design a transport system between Greater Los Angeles and the San Francisco Bay Area, at an estimated cost of $6 billion. Later that year, Musk unveiled the concept, dubbed the Hyperloop, intended to make travel cheaper than any other mode of transport for such long distances. In December 2015, Musk co-founded OpenAI, a not-for-profit artificial intelligence (AI) research company aiming to develop artificial general intelligence, intended to be safe and beneficial to humanity. Musk pledged $1 billion of funding to the company, and initially gave $50 million. In 2018, Musk left the OpenAI board. Since 2018, OpenAI has made significant advances in machine learning. In July 2023, Musk launched the artificial intelligence company xAI, which aims to develop a generative AI program that competes with existing offerings like OpenAI's ChatGPT. Musk obtained funding from investors in SpaceX and Tesla, and xAI hired engineers from Google and OpenAI. Musk uses a private jet owned by Falcon Landing LLC, a SpaceX-linked company, and acquired a second jet in August 2020. His heavy use of the jets and the consequent fossil fuel usage have received criticism. Musk's flight usage is tracked on social media through ElonJet. In December 2022, Musk banned the ElonJet account on Twitter, and temporarily banned the accounts of journalists who posted stories regarding the incident, including Donie O'Sullivan, Keith Olbermann, and journalists from The New York Times, The Washington Post, CNN, and The Intercept. In October 2025, Musk's company xAI launched Grokipedia, an AI-generated online encyclopedia that he promoted as an alternative to Wikipedia. Articles on Grokipedia are generated and reviewed by xAI's Grok chatbot. Media coverage and academic analysis described Grokipedia as frequently reusing Wikipedia content but framing contested political and social topics in line with Musk's own views and right-wing narratives. A study by Cornell University researchers and NBC News stated that Grokipedia cites sources that are blacklisted or considered "generally unreliable" on Wikipedia, for example the conspiracy site Infowars and the neo-Nazi forum Stormfront. Wired, The Guardian and Time criticized Grokipedia for factual errors and for presenting Musk himself in unusually positive terms while downplaying controversies. Politics Musk is an outlier among business leaders, who typically avoid partisan political advocacy. Musk was a registered independent voter when he lived in California. Historically, he has donated to both Democrats and Republicans, many of whom serve in states in which he has a vested interest. Since 2022, his political contributions have mostly supported Republicans, with his first vote for a Republican going to Mayra Flores in the 2022 Texas's 34th congressional district special election. In 2024, he started supporting international far-right political parties, activists, and causes, and has shared misinformation and numerous conspiracy theories. Since 2024, his views have been generally described as right-wing. Musk supported Barack Obama in 2008 and 2012, Hillary Clinton in 2016, Joe Biden in 2020, and Donald Trump in 2024. In the 2020 Democratic Party presidential primaries, Musk endorsed candidate Andrew Yang and expressed support for Yang's proposed universal basic income, and endorsed Kanye West's 2020 presidential campaign.
In 2021, Musk publicly expressed opposition to the Build Back Better Act, a $3.5 trillion legislative package endorsed by Joe Biden that ultimately failed to pass due to unanimous opposition from congressional Republicans and several Democrats. In 2022, he gave over $50 million to Citizens for Sanity, a conservative political action committee. In 2023, he supported Republican Ron DeSantis for the 2024 U.S. presidential election, giving $10 million to his campaign, and hosted DeSantis's campaign announcement on a Twitter Spaces event. From June 2023 to January 2024, Musk hosted a bipartisan set of X Spaces with Republican and Democratic candidates, including Robert F. Kennedy Jr., Vivek Ramaswamy, and Dean Phillips. In October 2025, former vice president Kamala Harris commented that it was a mistake on the Democratic side not to invite Musk to a White House electric vehicle event organized in August 2021 and featuring executives from General Motors, Ford and Stellantis, despite Tesla being "the major American manufacturer of extraordinary innovation in this space." Fortune remarked that this was a nod to the United Auto Workers and organized labor. Harris said presidents should put aside political loyalties when it came to recognizing innovation, and guessed that the non-invitation affected Musk's perspective. Fortune noted that, at the time, Musk said, "Yeah, seems odd that Tesla wasn't invited." A month later, he complained that Biden's was "not the friendliest administration." Jacob Silverman, author of the book Gilded Rage: Elon Musk and the Radicalization of Silicon Valley, said that the tech industry represented by Musk, Thiel, Andreessen and other capitalists actually flourished under Biden, but that the tech leaders chose Trump for their common ground on cultural issues. By early 2024, Musk had become a vocal and financial supporter of Donald Trump. In July 2024, minutes after the attempted assassination of Donald Trump, Musk endorsed him for president, saying: "I fully endorse President Trump and hope for his rapid recovery." During the presidential campaign, Musk joined Trump on stage at a rally and promoted conspiracy theories and falsehoods about Democrats, election fraud and immigration in support of Trump. Musk was the largest individual donor of the 2024 election. In 2025, Musk contributed $19 million to the Wisconsin Supreme Court race, hoping to influence the state's future redistricting efforts and its regulations governing car manufacturers and dealers. In 2023, Musk said he shunned the World Economic Forum because it was boring. The organization commented that they had not invited him since 2015. He has, however, participated in Dialog, an event dubbed "Tech Bilderberg" and organized by Peter Thiel and Auren Hoffman. Musk's international political actions and comments have come under increasing scrutiny and criticism, especially from the governments and leaders of France, Germany, Norway, Spain and the United Kingdom, particularly due to his position in the U.S. government as well as his ownership of X. An NBC News analysis found he had boosted far-right political movements to cut immigration and curtail regulation of business in at least 18 countries on six continents since 2023.
During his speech after the second inauguration of Donald Trump, Musk twice made a gesture interpreted by many as a Nazi or a fascist Roman salute.[e] He thumped his right hand over his heart, fingers spread wide, and then extended his right arm out, emphatically, at an upward angle, palm down and fingers together. He then repeated the gesture to the crowd behind him. As he finished the gestures, he said to the crowd, "My heart goes out to you. It is thanks to you that the future of civilization is assured." It was widely condemned as an intentional Nazi salute in Germany, where making such gestures is illegal. The Anti-Defamation League said it was not a Nazi salute, but other Jewish organizations disagreed and condemned the salute. American public opinion was divided on partisan lines as to whether it was a fascist salute. Musk dismissed the accusations of Nazi sympathies, deriding them as "dirty tricks" and a "tired" attack. Neo-Nazi and white supremacist groups celebrated it as a Nazi salute. Multiple European political parties demanded that Musk be banned from entering their countries. The concept of DOGE emerged in a discussion between Musk and Donald Trump, and in August 2024, Trump committed to giving Musk an advisory role, with Musk accepting the offer. In November and December 2024, Musk suggested that the organization could help to cut the U.S. federal budget, consolidate the number of federal agencies, and eliminate the Consumer Financial Protection Bureau, and that its final stage would be "deleting itself". In January 2025, the organization was created by executive order, and Musk was designated a "special government employee". Musk led the organization and was a senior advisor to the president, although his official role was never clearly defined. In a sworn statement during a lawsuit, the director of the White House Office of Administration stated that Musk "is not an employee of the U.S. DOGE Service or U.S. DOGE Service Temporary Organization", "is not the U.S. DOGE Service administrator", and has "no actual or formal authority to make government decisions himself". Trump said two days later that he had put Musk in charge of DOGE. A federal judge has ruled that Musk acted as the de facto leader of DOGE. Musk's role in the second Trump administration, particularly in response to DOGE, has attracted public backlash. He was criticized for his treatment of federal government employees, including his influence over the mass layoffs of the federal workforce. He has prioritized secrecy within the organization and has accused others of violating privacy laws. A Senate report alleged that Musk could avoid up to $2 billion in legal liability as a result of DOGE's actions. In May 2025, Bill Gates accused Musk of "killing the world's poorest children" through his cuts to USAID, which modeling by Boston University estimated had resulted in 300,000 deaths by this time, most of them of children. By November 2025, the estimated death toll had increased to 400,000 children and 200,000 adults. Musk announced on May 28, 2025, that he would depart from the Trump administration as planned when his 130-day term as a special government employee expired, with a White House official confirming that Musk's offboarding from the Trump administration was already underway. His departure was officially confirmed during a joint Oval Office press conference with Trump on May 30, 2025.
After leaving office, Musk criticized the Trump administration's Big Beautiful Bill, calling it a "disgusting abomination" due to its provisions increasing the deficit. A feud began between Musk and Trump, its most notable event being Musk alleging on X (formerly Twitter) on June 5, 2025, that Trump had ties to sex offender Jeffrey Epstein, posting: "@realDonaldTrump is in the Epstein files. That is the real reason they have not been made public." Trump responded on Truth Social stating that Musk went "CRAZY" after the "EV Mandate" was purportedly taken away, and threatened to cut Musk's government contracts. Musk then called for a third Trump impeachment. The next day, Trump stated that he did not wish to reconcile with Musk, and added that Musk would face "very serious consequences" if he funded Democratic candidates. On June 11, Musk publicly apologized for the tweets against Trump, saying they "went too far". Views Rejecting the conservative label, Musk has described himself as a political moderate, even as his views have become more right-wing over time. His views have been characterized as libertarian and far-right, and after his involvement in European politics, they have received criticism from world leaders such as Emmanuel Macron and Olaf Scholz. Within the context of American politics, Musk supported Democratic candidates up until 2022, at which point he voted for a Republican for the first time. He has stated support for universal basic income, gun rights, freedom of speech, a tax on carbon emissions, and H-1B visas. Musk has expressed concern about issues such as artificial intelligence (AI) and climate change, and has been a critic of wealth taxes, short-selling, and government subsidies. An immigrant himself, Musk has been accused of being anti-immigration, and regularly blames immigration policies for illegal immigration. He is also a pronatalist who believes population decline is the biggest threat to civilization, and identifies as a cultural Christian. Musk has long been an advocate for space colonization, especially the colonization of Mars. He has repeatedly pushed for humanity to colonize Mars in order to become an interplanetary species and lower the risk of human extinction. Musk has promoted conspiracy theories and made controversial statements that have led to accusations of racism, sexism, antisemitism, transphobia, disseminating disinformation, and support of white pride. While describing himself as a "pro-Semite", his comments regarding George Soros and Jewish communities have been condemned by the Anti-Defamation League and the Biden White House. Musk was criticized during the COVID-19 pandemic for making unfounded epidemiological claims, defying COVID-19 lockdown restrictions, and supporting the Canada convoy protest against vaccine mandates. He has amplified false claims of white genocide in South Africa. Musk has been critical of Israel's actions in the Gaza Strip during the Gaza war, praised China's economic and climate goals, suggested that Taiwan and China should resolve cross-strait relations, and has been described as having a close relationship with the Chinese government. In Europe, Musk expressed support for Ukraine in 2022 during the Russian invasion, recommended referendums and peace deals on the annexed Russia-occupied territories, and supported the far-right Alternative for Germany political party in 2024.
Regarding British politics, Musk blamed the 2024 UK riots on mass migration and open borders, criticized Prime Minister Keir Starmer for what he described as a "two-tier" policing system, and was subsequently attacked as being responsible for spreading misinformation and amplifying the far-right. He has also voiced his support for far-right activist Tommy Robinson and pledged electoral support for Reform UK. In February 2026, Musk described Spanish Prime Minister Pedro Sánchez as a "tyrant" following Sánchez's proposal to prohibit minors under the age of 16 from accessing social media platforms. Legal affairs In 2018, Musk was sued by the U.S. Securities and Exchange Commission (SEC) for a tweet stating that funding had been secured for potentially taking Tesla private.[f] The securities fraud lawsuit characterized the tweet as false, misleading, and damaging to investors, and sought to bar Musk from serving as CEO of publicly traded companies. Two days later, Musk settled with the SEC, without admitting or denying the SEC's allegations. As a result, Musk and Tesla were fined $20 million each, and Musk was forced to step down as Tesla chairman for three years but was able to remain as CEO. Shareholders filed a lawsuit over the tweet, and in February 2023, a jury found Musk and Tesla not liable. Musk has stated in interviews that he does not regret posting the tweet that triggered the SEC investigation. In 2019, Musk stated in a tweet that Tesla would build half a million cars that year. The SEC reacted by asking a court to hold him in contempt for violating the terms of the 2018 settlement agreement. A joint agreement between Musk and the SEC eventually clarified the previous agreement's details, including a list of topics about which Musk needed preclearance. In 2020, a judge blocked a lawsuit that claimed a tweet by Musk regarding Tesla stock price ("too high imo") violated the agreement. Freedom of Information Act (FOIA)-released records showed that the SEC concluded Musk had subsequently violated the agreement twice by tweeting regarding "Tesla's solar roof production volumes and its stock price". In October 2023, the SEC sued Musk over his refusal to testify a third time in an investigation into whether he violated federal law by purchasing Twitter stock in 2022. In February 2024, Judge Laurel Beeler ruled that Musk must testify again. In January 2025, the SEC filed a lawsuit against Musk for securities violations related to his purchase of Twitter. In January 2024, Delaware judge Kathaleen McCormick ruled in a 2018 lawsuit that Musk's $55 billion pay package from Tesla be rescinded. McCormick called the compensation granted by the company's board "an unfathomable sum" that was unfair to shareholders. The Delaware Supreme Court overturned McCormick's decision in December 2025, restoring Musk's compensation package and awarding $1 in nominal damages. Personal life Musk became a U.S. citizen in 2002. From the early 2000s until late 2020, Musk resided in California, where both Tesla and SpaceX were founded. He then relocated to Cameron County, Texas, saying that California had become "complacent" about its economic success. While hosting Saturday Night Live in 2021, Musk stated that he has Asperger syndrome (an outdated term for autism spectrum disorder). When asked at the TED 2022 conference in Vancouver about his experience growing up with Asperger's syndrome, Musk stated that "the social cues were not intuitive ... I would just tend to take things very literally ...
but then that turned out to be wrong — [people were not] simply saying exactly what they mean, there's all sorts of other things that are meant, and [it] took me a while to figure that out." Musk suffers from back pain and has undergone several spine-related surgeries, including a disc replacement. In 2000, he contracted a severe case of malaria while on vacation in South Africa. Musk has stated he uses doctor-prescribed ketamine for occasional depression, and that he doses "a small amount once every other week or something like that"; since January 2024, some media outlets have reported that he takes ketamine, marijuana, LSD, ecstasy, mushrooms, cocaine and other drugs. Musk at first refused to comment on his alleged drug use, before responding that he had not tested positive for drugs and that if drugs somehow improved his productivity, "I would definitely take them!" The New York Times' investigations revealed Musk's overuse of ketamine and numerous other drugs, as well as strained family relationships and concerns from close associates who had become troubled by his public behavior as he became more involved in political activities and government work. According to The Washington Post, President Trump described Musk as "a big-time drug addict". Through his own label Emo G Records, Musk released a rap track, "RIP Harambe", on SoundCloud in March 2019. The following year, he released an EDM track, "Don't Doubt Ur Vibe", featuring his own lyrics and vocals. Musk plays video games, which he has said have a "restoring effect" that helps his "mental calibration". Some games he plays include Quake, Diablo IV, Elden Ring, and Polytopia. Musk once claimed to be one of the world's top video game players but has since admitted to "account boosting", or cheating by hiring outside services to achieve top player rankings. Musk has justified the boosting by claiming that all top accounts do it, so he has to as well to remain competitive. In 2024 and 2025, Musk criticized the video game Assassin's Creed Shadows and its developer Ubisoft for "woke" content. Musk posted to X that "DEI kills art" and specified the inclusion of the historical figure Yasuke in the Assassin's Creed game as offensive; he also called the game "terrible". Ubisoft responded by saying that Musk's comments were "just feeding hatred" and that it was focused on producing a game, not pushing politics. Musk has fathered at least 14 children, one of whom died as an infant. The Wall Street Journal reported in 2025 that sources close to Musk suggest that the "true number of Musk's children is much higher than publicly known". He had six children with his first wife, Canadian author Justine Wilson, whom he met while attending Queen's University in Ontario, Canada; they married in 2000. In 2002, their first child, Nevada Musk, died of sudden infant death syndrome at the age of 10 weeks. After Nevada's death, the couple used in vitro fertilization (IVF) to continue their family; they had twins in 2004, followed by triplets in 2006. The couple divorced in 2008 and have shared custody of their children. The elder twin he had with Wilson came out as a trans woman and, in 2022, officially changed her name to Vivian Jenna Wilson, adopting her mother's surname because she no longer wished to be associated with Musk. Musk began dating English actress Talulah Riley in 2008. They married two years later at Dornoch Cathedral in Scotland. In 2012, the couple divorced, then remarried the following year.
After briefly filing for divorce in 2014, Musk finalized a second divorce from Riley in 2016. Musk then dated the American actress Amber Heard for several months in 2017; he had reportedly been "pursuing" her since 2012. In 2018, Musk and Canadian musician Grimes confirmed they were dating. Grimes and Musk have three children, born in 2020, 2021, and 2022.[g] Musk and Grimes originally gave their eldest child the name "X Æ A-12", which would have violated California regulations because it contained characters that are not in the modern English alphabet; the names registered on the birth certificate are "X" as a first name, "Æ A-Xii" as a middle name, and "Musk" as a last name. They received criticism for choosing a name perceived to be impractical and difficult to pronounce; Musk has said the intended pronunciation is "X Ash A Twelve". Their second child was born via surrogacy. Despite the pregnancy, Musk confirmed reports that the couple were "semi-separated" in September 2021; in an interview with Time in December 2021, he said he was single. In October 2023, Grimes sued Musk over parental rights and custody of X Æ A-Xii. Musk has taken X Æ A-Xii to multiple official events in Washington, D.C., during Trump's second term in office. In July 2022, The Wall Street Journal reported that Musk allegedly had an affair in 2021 with Nicole Shanahan, the wife of Google co-founder Sergey Brin, leading to their divorce the following year. Musk denied the report. Musk also had a relationship with Australian actress Natasha Bassett, who has been described as "an occasional girlfriend". In October 2024, The New York Times reported that Musk had bought a Texas compound for his children and their mothers, though Musk denied having done so. Musk also has four children with Shivon Zilis, director of operations and special projects at Neuralink: twins born via IVF in 2021, a child born in 2024 via surrogacy, and a child born in 2025.[h] On February 14, 2025, Ashley St. Clair, an influencer and author, posted on X claiming to have given birth to Musk's son Romulus five months earlier, whom media outlets reported as Musk's supposed thirteenth child.[i] On February 22, 2025, it was reported that St. Clair had filed for sole custody of her five-month-old son and for Musk to be recognized as the child's father. On March 31, 2025, Musk wrote that, while he was unsure whether he was the father of St. Clair's child, he had paid St. Clair $2.5 million and would continue paying her $500,000 per year.[j] Later reporting from The Wall Street Journal indicated that $1 million of these payments to St. Clair was structured as a loan. In 2014, Musk and Ghislaine Maxwell appeared together in a photograph taken at an Academy Awards after-party, which Musk later described as a "photobomb". The January 2026 Epstein files contain emails between Musk and Epstein from 2012 to 2013, after Epstein's first conviction. Emails released on January 30, 2026, indicated that Epstein invited Musk to visit his private island on multiple occasions. The correspondence showed that while Epstein repeatedly encouraged Musk to attend, Musk did not visit the island. In one instance, Musk discussed the possibility of attending a party with his then-wife Talulah Riley and asked which day would be the "wildest party"; according to the emails, the visit did not take place after Epstein later cancelled the plans.[k] On Christmas Day 2012, Musk emailed Epstein asking "Do you have any parties planned? 
I’ve been working to the edge of sanity this year and so, once my kids head home after Christmas, I really want to hit the party scene in St Barts or elsewhere and let loose. The invitation is much appreciated, but a peaceful island experience is the opposite of what I’m looking for". Epstein replied that the "ratio on my island" might make Musk's wife uncomfortable, to which Musk responded, "Ratio is not a problem for Talulah". On September 11, 2013, Epstein sent an email asking Musk if he had any plans to come to New York for the opening of the United Nations General Assembly, where many "interesting people" would be coming to his house; Musk responded that "Flying to NY to see UN diplomats do nothing would be an unwise use of time". Epstein responded by stating "Do you think i am retarded. Just kidding, there is no one over 25 and all very cute." Musk has denied any close relationship with Epstein and described him as a "creep" who attempted to ingratiate himself with influential people. When Musk was asked in 2019 if he had introduced Epstein to Mark Zuckerberg, Musk responded: "I don’t recall introducing Epstein to anyone, as I don’t know the guy well enough to do so." The released emails nonetheless showed cordial exchanges on a range of topics, including Musk's inquiry about parties on the island. The correspondence also indicated that Musk suggested hosting Epstein at SpaceX, while Epstein separately discussed plans to tour SpaceX and bring "the girls", though there is no evidence that such a visit occurred. Musk has described the release of the files as a "distraction", later accusing the second Trump administration of suppressing them to protect powerful individuals, including Trump himself.[l] Wealth Elon Musk is the wealthiest person in the world, with an estimated net worth of US$690 billion as of January 2026, according to the Bloomberg Billionaires Index, and $852 billion according to Forbes, primarily from his ownership stakes in SpaceX and Tesla. Musk was first listed on the Forbes Billionaires List in 2012. Around 75% of his wealth was derived from Tesla stock in November 2020, although he has described himself as "cash poor". According to Forbes, he became the first person in the world to achieve a net worth of $300 billion in 2021; $400 billion in December 2024; $500 billion in October 2025; $600 billion in mid-December 2025; $700 billion later that month; and $800 billion in February 2026. In November 2025, a Tesla pay package for Musk worth potentially $1 trillion was approved; he is to receive it over 10 years if he meets specific goals. Public image Although his ventures have been highly influential within their separate industries since the 2000s, Musk only became a public figure in the early 2010s. He has been described as an eccentric who makes spontaneous and impactful decisions and often makes controversial statements, in contrast to other billionaires who prefer reclusiveness to protect their businesses. Musk's actions and his expressed views have made him a polarizing figure. Biographer Ashlee Vance described people's opinions of Musk as polarized due to his "part philosopher, part troll" persona on Twitter. He has drawn condemnation for using his platform to mock the self-selection of personal pronouns, while also receiving praise for bringing international attention to matters like British survivors of grooming gangs. 
Musk has been described as an American oligarch due to his extensive influence over public discourse, social media, industry, politics, and government policy. After Trump's re-election, Musk's influence and actions during the transition period and the second presidency of Donald Trump led some to call him "President Musk", the "actual president-elect", "shadow president", or "co-president". Awards for his contributions to the development of the Falcon rockets include the American Institute of Aeronautics and Astronautics George Low Transportation Award in 2008, the Fédération Aéronautique Internationale Gold Space Medal in 2010, and the Royal Aeronautical Society Gold Medal in 2012. In 2015, he received an honorary doctorate in engineering and technology from Yale University and an Institute of Electrical and Electronics Engineers Honorary Membership. Musk was elected a Fellow of the Royal Society (FRS) in 2018.[m] In 2022, Musk was elected to the National Academy of Engineering. Time has listed Musk as one of the most influential people in the world in 2010, 2013, 2018, and 2021. Musk was selected as Time's "Person of the Year" for 2021. Time's then editor-in-chief Edward Felsenthal wrote that "Person of the Year is a marker of influence, and few individuals have had more influence than Musk on life on Earth, and potentially life off Earth too."
========================================
[SOURCE: https://www.theverge.com/ai-artificial-intelligence] | [TOKENS: 1961]
AI Artificial intelligence is more a part of our lives than ever before. While some might call it hype and compare it to NFTs or 3D TVs, generative AI is causing a sea change in nearly every part of the technology industry. OpenAI’s ChatGPT is still the best-known AI chatbot around, but with Google pushing Gemini, Microsoft building Copilot, and Apple adding its Intelligence to Siri, AI is probably going to be in the spotlight for a very long time. At The Verge, we’re exploring what might be possible with AI — and a lot of the bad stuff AI does, too. Two minor AWS outages have reportedly occurred as a result of actions by Amazon’s AI tools. The company might be developing smart glasses and a smart lamp, too. Latest In AI The “conversational” assistant was announced for phones last month, after a rollout on TVs, and is now available in the One UI 8.5 beta in the US and a few other countries. Expect to hear more about it, and its full release plans, at next week’s Unpacked. [Samsung Newsroom] After a previously announced $100 billion deal went “on ice,” as The Wall Street Journal reported, Nvidia is nearing a $30 billion equity investment as part of a larger funding round, the Financial Times reports. The investment might be tied up as soon as this weekend. [Financial Times] Adthena, an “AI search intelligence” platform, has spotted ads in ChatGPT, and they can apparently appear as soon as your first prompt. Ads from Expedia, Qualcomm, Best Buy, and Enterprise Mobility are starting to show up in ChatGPT responses, OpenAI told Adweek. HBO’s medical drama has been teasing out a smart story about what makes gen AI so tempting and concerning. Gemini 3.1 Pro is rolling out starting today in the Gemini app and NotebookLM. According to Google: 3.1 Pro is designed for tasks where a simple answer isn’t enough, taking advanced reasoning and making it useful for your hardest challenges. This improved intelligence can help in practical applications — whether you’re looking for a clear, visual explanation of a complex topic, a way to synthesize data into a single view, or bringing a creative project to life. YouTube is starting to test its conversational AI tool with a “small group of users” on smart TVs, gaming consoles, and streaming devices. The tool, first introduced in 2023, lets you ask questions about the videos you’re watching. [YouTube Help] The AI industry is rife with defections, FOMO, and radical mission statements. It’s about to get supercharged. Prediction: This is going to be a mess for the Trump right. Chatbots could bring back the golden age of search spam, as one BBC journalist found after publishing a blog post about his (fake) hot dog eating exploits, which fed right into AI answers. SEO spam is nothing new, but Gemini’s confident attribution to hot dog “reporting” makes things a whole lot worse. [BBC Future] Charles Porch, who helped land Instagram’s biggest partnerships — including the launch of Beyoncé’s self-titled album on the social network — will now serve as OpenAI’s first VP of global creative partnerships. “I’m going to be the person that’s talking to creative communities around the world to figure out how we build the best products to serve them,” Porch tells Vanity Fair. 
[Vanity Fair] The funding will go toward Meta’s pro-AI super PACs, including two new ones: Republican-focused “Forge the Future Project” and Democrat-focused “Making Our Tomorrow,” the New York Times reports. The PACs will back politicians who are friendly to AI and push back against legislation that could limit the growth of Meta’s AI business. [The New York Times] Meshcapade’s team will join Epic Games’ AI Research team and contribute to technologies for Unreal Engine and the hyperrealistic MetaHumans. With the acquisition, Epic says it’s “looking forward to working together to advance digital human technologies for use across gaming, film and entertainment.” Update: Added Epic statement. [The Max Planck Society for the Advancement of Science] A new report from 404 Media today featured a leaked email from Ring founder Jamie Siminoff, who leads the camera maker inside Amazon, saying back in October that he has grander ambitions for the company’s controversial Search Party feature beyond just finding lost dogs. We had Siminoff on Decoder a few months ago, when I asked him explicitly about using facial recognition to identify people, something the company has since claimed it has no plans to do. Otherwise, the TikTok parent will face “immediate litigation” for copyright infringement of Netflix’s Stranger Things, KPop Demon Hunters, Squid Game, and Bridgerton franchises: “Seedance acts as a high-speed piracy engine, generating mass quantities of unauthorized derivative works utilizing Netflix’s iconic characters, worlds, and scripted narratives. Netflix will not stand by and watch ByteDance treat our valued IP as free, public domain clip art.” Disney, Paramount, and Hollywood trade groups are equally concerned. Anthropic launched the latest version of Claude Sonnet on Tuesday, which it says “approaches Opus-level intelligence,” featuring improvements in coding and computer use with tasks like navigating spreadsheets or filling out web forms. Sonnet 4.6 is now replacing Sonnet 4.5 as the default model for free and pro Claude users. [Anthropic] Focus Features is billing The AI Doc: Or How I Became An Apocaloptimist as an “eye-opening” exploration of “the most powerful technology humanity has ever created.” You’d think the doc might feature some critical voices, but its new trailer makes it feel like it might be one big commercial. The film premieres on March 27th. The NAACP sent a notice of intent to sue, accusing Musk’s company of illegally installing gas turbines in Mississippi to power its Colossus 2 data center. Thermal images taken by drone show more than a dozen turbines running at the site without a permit, according to a Floodlight investigation. [The Hill] The legendary composer is celebrating 40 years of Music Mouse, which brought algorithmic composition to home computers. As Axios reports that the Department of Defense and / or War is preparing to brand Anthropic a “supply chain risk,” one commenter wonders if the Claude company might revisit its Super Bowl ad to turn that to its advantage. hodgdon: “Extrajudicial killings are coming to AI. But not to Claude.” It can apparently recognize how much influence a given track or artist had on AI content, and can work with or without cooperation from AI developers. Sony thinks it could be used to create a licensing system for AI music, but “has yet to decide” when it might be put to use. 
[Nikkei Asia] Europe’s privacy watchdog has opened yet another investigation into the millions of sexualized images, some of children, produced and shared on the platform last month. It joins the EU’s DSA effort already underway, whatever France is doing, and a few more in the UK. [FT] Lockdown Mode is “not necessary” for most people and “tightly constrains how ChatGPT can interact with external systems to reduce the risk of prompt injection–based data exfiltration,” according to OpenAI. [OpenAI] Despite developers being increasingly skeptical of generative AI, “AI-driven authoring is our second major area of focus for 2026,” Unity CEO Matthew Bromberg said in earnings remarks reported on by Game Developer. The company plans to reveal more about the new prompting tools at the GDC Festival of Gaming in March. [Game Developer] Should Anthropic get the designation, “anyone who wants to do business with the U.S. military has to cut ties with the company,” Axios says. The two sides have apparently been negotiating for months over how the military can use Anthropic’s AI tools. [Axios] St. Peter’s Basilica in the Vatican has partnered with Translated, an AI-assisted live-translation service, to make the liturgy available in 60 languages. Vatican visitors can use their smartphones to access audio and text translations via the web by scanning QR codes within the Basilica, no app or special equipment required. [Engadget] The security camera maker’s Search Party feature, advertised during the Super Bowl, has sparked a surveillance backlash. The former host of Morning Edition and current host of Left, Right & Center claims that Google illegally replicated his voice for its male podcast host in NotebookLM. Google denies this, but Greene (and many of his friends and colleagues) say the resemblance is “uncanny.” As the Washington Post reports: To Greene, the resemblance of the AI voice to his own is uncanny — and the harm is deeper and more personal than just a missed chance to capitalize on his most recognizable asset. “My voice is, like, the most important part of who I am,” Greene said.
========================================
[SOURCE: https://www.theverge.com/policy] | [TOKENS: 1438]
Policy Tech is reshaping the world — and not always for the better. Whether it’s the rules for Apple’s App Store or Facebook’s plan for fighting misinformation, tech platform policies can have enormous ripple effects on the rest of society. They’re so powerful that, increasingly, companies aren’t setting them alone but sharing the fight with government regulators, civil society groups, and internal standards bodies like Meta’s Oversight Board. The result is an ongoing political struggle over harassment, free speech, copyright, and dozens of other issues, all mediated through some of the largest and most chaotic electronic spaces the world has ever seen. The internet personality has been chasing ICE on the streets of Minneapolis, but fellow locals are divided on whether he’s helping or hurting the cause. Brian Boland testified about shifting from ‘deep blind faith’ in Meta to becoming its public critic. Latest In Policy Plaintiff attorney Rachel Lanier told Judge Carolyn Kuhl this morning that after she’d admonished against using smart glasses in the courthouse, they learned that one person was still wearing them in the hallway where jurors were present. After alerting Meta’s counsel, Lanier said they were told the glasses weren’t recording. He faces potential “misconduct in public office” charges, related to documents he allegedly passed to Jeffrey Epstein while serving as a trade envoy. Already stripped of his titles, he made a few appearances in the files, and is the first senior British royal to be arrested since 1647. [BBC News] The FCC has an “enforcement action underway,” Carr said, according to Deadline. This week, Stephen Colbert said CBS banned him from airing his own interview with Talarico, a Democratic state representative from Texas who is running for the US Senate. [Deadline] The US has been working on an online portal at “freedom.gov” that would let Europeans see content their governments have banned, Reuters reports. A planned launch last week was apparently delayed, and State Department officials have expressed concerns about the project. Freedom.gov currently links to a Cloudflare Access page with the National Design Studio logo. [Reuters] California regulators killed a proposal that would have imposed fees on gas-burning furnaces and water heaters that release smog-forming pollutants. More than 20,000 comments they received opposing the proposal were generated by a single AI platform, some addressed from people with no idea their names had been used. [Los Angeles Times] The Meta CEO walked through the public entrance of the LA Superior Court and past parent advocates and media waiting to learn if they’d get a seat to hear his testimony. A coalition including the American Public Health Association, American Lung Association, and Sierra Club has filed suit against the Trump administration for repealing the landmark ‘endangerment finding.’ The repeal — if successful — could strip away the Environmental Protection Agency’s ability to regulate planet-heating pollution. 
[NBC News] Just after we entered the courtroom, we learned that a juror has been hospitalized. The parties decided to postpone today’s testimony from former Meta employees to see if the juror can return. Regardless, Meta CEO Mark Zuckerberg is expected to testify tomorrow — either before the original juror, or an alternate. I’m in downtown Los Angeles where a state judge is hearing the first of several landmark trials about how social media allegedly harmed a teen girl going by K.G.M. We expect to hear from Meta CEO Mark Zuckerberg this week. First came Jmail, then Jikipedia. The Epstein files have yielded a lot so far, but I didn’t expect a whole new tech ecosystem to be among them. alectrem: at this rate they’re going to make a whole platform of web services and IPO before all the files are even released. In a letter sent to Congress Saturday, the Attorney General said that the DOJ had released “all ‘records, documents, communications and investigative materials in the possession of the Department’” in accordance with the Epstein Files Transparency Act. She also included a list of over 300 people mentioned in the files. Google, Reddit, Discord, and Meta have received “hundreds” of subpoenas from the DHS in recent months, according to a report from The New York Times. The agency is reportedly asking the platforms for the names, email addresses, phone numbers, and other information associated with accounts that “track or criticize” ICE. [The New York Times] Kamala Harris’ campaign account, @KamalaHQ, has rebranded as a digital rapid response operation. The less densely populated areas outside the Twin Cities make it harder for protesters and observers to organize. Senate Democrats blocked a bill funding the Department of Homeland Security on Thursday, which could trigger a temporary shutdown of the department. The vote was 52 to 47, with just one Democrat — Sen. John Fetterman — voting in favor. “We will not support an extension of the status quo,” Senate Majority Leader Chuck Schumer said before the vote. 
[Politico]
========================================
[SOURCE: https://en.wikipedia.org/wiki/Israeli_system_of_government#Separation_of_Powers] | [TOKENS: 2757]
Israeli system of government The Israeli system of government is based on parliamentary democracy. The Prime Minister of Israel is the head of government and leader of a multi-party system. Executive power is exercised by the government (also known as the cabinet). Legislative power is vested in the Knesset. The judiciary is independent of the executive and the legislature. The political system of the State of Israel and its main principles are set out in 11 Basic Laws. Israel does not have a written constitution. Presidency The President of the State is the de jure head of state of Israel. The position is largely an apolitical and ceremonial role and is not considered part of any branch of government. The President's ceremonial roles include signing every law (except those pertaining to the President's powers) and every international or bilateral treaty, ceremonially appointing the Prime Minister, confirming and endorsing the credentials of ambassadors, and receiving the credentials of foreign diplomats. The President also has several important functions in government. The President is the only government official with the power to pardon prisoners or commute their sentences. The President appoints the governor of the Bank of Israel, the president of the national emergency relief service Magen David Adom, and the members and leaders of several other institutions. The President also ceremonially appoints judges to their posts after their selection. Executive branch The Prime Minister is the most powerful political figure in the country. Under sections 7 to 14 of Basic Law: The Government, the Prime Minister is nominated by the President after consulting party leaders in the Knesset; the appointment of the Prime Minister and cabinet is in turn confirmed by a majority vote of confidence from the assembled Knesset members. As head of government, the Prime Minister makes foreign and domestic policy decisions, which are voted on by the cabinet. The cabinet is composed of ministers, most of whom are the heads of government departments, though some are deputy ministers and ministers without portfolio. Cabinet ministers are appointed by the Prime Minister. The cabinet's composition must also be approved by the Knesset. The Prime Minister may dismiss cabinet members, but any replacements must be approved by the Knesset. Most ministers are members of the Knesset, though only the Prime Minister is required to be one. The cabinet meets weekly on Sundays, and there may be additional meetings if circumstances require it. Each cabinet meeting is chaired by the Prime Minister. A select group of ministers led by the Prime Minister forms the security cabinet, responsible for outlining and implementing foreign and defense policy. This forum is designed to coordinate diplomatic negotiations and to make quick and effective decisions in times of crisis and war. The Israeli government has 28 ministries, each of them responsible for a sector of public administration. Many ministries are located in the Kiryat Ben Gurion government complex in the Givat Ram area of Jerusalem. Each ministry is led by a minister, who is also a member of the cabinet and is usually a member of the Knesset. The Office of the Prime Minister coordinates the work of all government ministries and assists the Prime Minister in their daily tasks. The State Comptroller, who supervises and reviews the policies and operations of the government, is elected by the Knesset by secret ballot. 
They can only be removed from office by a two-thirds vote in the Knesset. In addition to their fiscal and operational oversight function, the State Comptroller also serves as a national ombudsman, receiving complaints from the public about the actions of public officials and institutions. Legislative branch The Knesset is Israel's unicameral legislature and is seated in Jerusalem. Its 120 members are elected to 4-year terms through party-list proportional representation (see electoral system, below), as mandated by the 1958 Basic Law: The Knesset. Knesset seats are allocated among parties using the D'Hondt method of party-list proportional representation. Parties select candidates using a closed list. Thus, voters select the party of their choice rather than any specific candidate. Israel requires a party to meet an election threshold of 3.25% to be allocated a Knesset seat. All Israeli citizens 18 years of age and older may participate in legislative elections, which are conducted by secret ballot. As the legislative branch of the Israeli government, the Knesset has the power to enact and repeal all laws. It enjoys de jure parliamentary supremacy and can pass any law by a simple majority, even one that might arguably conflict with a Basic Law, unless the Basic Law in question sets specific conditions for its modification. The Knesset can adopt and amend Basic Laws acting in its capacity as a Constituent Assembly. The Knesset also supervises government activities through its committees, nominates the Prime Minister and approves the cabinet, and elects the President of the State and the State Comptroller. It also has the power to remove the President and State Comptroller from office, revoke the immunity of its members, and dissolve itself and call new elections. The February 2009 elections produced five prominent political parties: Kadima, Likud, Israel Beytenu, Labor, and Shas, each with more than ten seats in the Knesset. Three of these parties were ruling parties in the past. However, only once has a single party held the 61 seats needed for a majority government (the Alignment, from 1968 until the 1969 elections). Therefore, aside from that one exception, Israeli governments since 1948 have always comprised coalitions. As of 2009, there are 12 political parties represented in the Knesset, spanning both the political and religious spectra. Israel's electoral system operates within the parameters of a Basic Law (The Knesset) and of the 1969 Knesset Elections Law. The Knesset's 120 members are elected by secret ballot to 4-year terms, although the Knesset may decide to call new elections before the end of the term, and a government can change without a general election; since the 1988 election, no Knesset has finished its 4-year term. In addition, a motion of no confidence may be called. Voting in general elections uses the highest-averages method of party-list proportional representation with the D'Hondt formula. General elections use closed lists: voters vote only for party lists and cannot affect the order of candidates within the lists. Since the 1992 Parties Law, only registered parties may stand. There are no separate electoral districts; all voters vote on the same party lists. Suffrage is universal among Israeli citizens aged 18 years or older, but voting is not compulsory. Polling locations are open throughout Israel; absentee ballots are limited to diplomatic staff and the merchant marine. 
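To make the seat-allocation mechanics concrete, the following is a minimal sketch of a D'Hondt (highest-averages) calculation with a 3.25% threshold. It is not an official implementation: the language (Python), the function name, and all party names and vote totals are illustrative assumptions, not real election data.

```python
# Illustrative D'Hondt (highest-averages) allocation with an electoral
# threshold, as used for the 120-seat Knesset. All party names and vote
# totals below are invented for demonstration purposes.

def dhondt_allocate(votes: dict[str, int], seats: int, threshold: float) -> dict[str, int]:
    total = sum(votes.values())
    # Parties below the threshold share of the vote are excluded up front.
    eligible = {party: v for party, v in votes.items() if v / total >= threshold}
    allocation = {party: 0 for party in eligible}
    for _ in range(seats):
        # Each seat goes, in turn, to the party with the highest quotient
        # votes / (seats already won + 1).
        winner = max(eligible, key=lambda p: eligible[p] / (allocation[p] + 1))
        allocation[winner] += 1
    return allocation

# Hypothetical five-party example: party "E" polls about 3.0% of the vote,
# falls short of the 3.25% threshold, and so receives no seats.
votes = {"A": 1_200_000, "B": 900_000, "C": 650_000, "D": 300_000, "E": 95_000}
print(dhondt_allocate(votes, seats=120, threshold=0.0325))
```

Because each successive seat goes to the list with the highest remaining quotient, the method slightly favours larger lists; real Knesset allocations also involve surplus-vote agreements between paired party lists, which this sketch omits.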
While the 120 seats are allocated in proportion to the national vote, a party must receive a minimum of 3.25% of the vote to attain its first seat. This requirement was intended to bar smaller parties from parliament, but it spurred some parties to join together simply to overcome the threshold. The low vote threshold for entry into parliament, as well as the need for parties with small numbers of seats to form coalition governments, results in a highly fragmented political spectrum, with small parties exercising extensive power (relative to their electoral support) within coalitions. The president selects as prime minister the party leader considered most able to form a government, based on the number of parliament seats their coalition has won. After the president's selection, the prime minister has forty-five days to form a government. The Knesset collectively must approve the members of the cabinet. This electoral system, inherited from the Yishuv (the Jewish settlement organization of the British Mandate period), makes it very difficult for any party to gain a working majority in the Knesset, and thus governments generally form on the basis of coalitions. Due to the difficulties in holding coalitions together, elections often occur earlier than scheduled. The average lifespan of an Israeli government is about two years. Over the years, the peace process, the role of religion in the state, and political scandals have caused coalitions to break apart or have produced early elections. Judicial system The judicial branch is an independent branch of the government, including secular and religious courts for the various religions present in Israel. The court system involves three stages of justice. Judges for all courts are appointed by the Judicial Selection Committee. The committee is composed of nine members: two cabinet members (one being the Minister of Justice), two Knesset members, two members of the Israel Bar Association, and three Supreme Court justices (one being the President of the Supreme Court). The committee is chaired by the Minister of Justice. In November 1985, the Israeli government informed the United Nations Secretariat that it would no longer accept compulsory International Court of Justice jurisdiction. Israeli judicial courts consist of a three-tier system: magistrates' courts, district courts, and the Supreme Court. Some issues of family law (marriage and divorce in particular) fall either under the jurisdiction of religious courts or under parallel jurisdiction of those and the state's family courts. The state maintains and finances Rabbinical, Sharia and various Canonical courts for the needs of the various religious communities. All judges are civil servants and are required to uphold general law in their tribunals as well. The Supreme Court serves as the final appellate instance for all religious courts. Jewish religious courts are under the control of the Prime Minister's Office and the Chief Rabbinate of Israel. These courts have jurisdiction in only five areas: Kashrut, the Sabbath, Jewish burial, marital issues (especially divorce), and the Jewish status of immigrants. However, except for determining a person's marital status, all other marital issues may also be taken to the secular family courts. The other major religious communities in Israel, such as Muslims and Christians, have their own religious courts. These courts have similar jurisdiction over their followers as the Jewish religious courts, although the Muslim religious courts have more control over family affairs. 
There are five regional labor courts in Israel serving as tribunals of first instance, and a National Labor Court in Jerusalem to hear appeals and a few cases of national importance. The labor courts have exclusive jurisdiction over cases involving the employer-employee relationship, employment, strikes and labor union disputes, labor-related complaints against the National Insurance Institute, and health insurance claims. The Israel Defense Forces (IDF) maintains a series of district military courts and special military tribunals. The Military Court of Appeals is the IDF's supreme appellate court. It considers appeals submitted by the Military Advocate General challenging decisions rendered by the lower courts. In all matters having to do with admiralty, commercial shipping, accidents at sea, and other maritime matters, the Haifa District Court, sitting as the Court of Admiralty, has exclusive statewide jurisdiction. Separation of powers The Basic Law: The Government contains a number of checks and balances between the Knesset and the Government. The fact that the Government holds office by virtue of the confidence of the Knesset creates a significant check on the Government's power, but there are also restrictions on the Knesset's ability to vote no confidence in the Government. The Government serves with the confidence of the Knesset, but the Knesset is limited to a constructive vote of no confidence under Basic Law: The Government (2001). Members of the Knesset are also disincentivized from supporting a vote of no confidence for the purpose of obtaining a ministerial portfolio in a subsequent government: when members of Knesset (MKs) defect from their faction—defined as opposing one’s party’s position on a vote of confidence—they are ineligible to serve as ministers during that Knesset, and they cannot run on their party’s list in the subsequent election. Additionally, the Knesset can exercise oversight over the Government. Knesset committees can compel testimony of government ministers, and the Government is required to comply with such oversight requests. The Basic Laws also reserve a role for the Knesset minority, with 40 MKs empowered to compel the Prime Minister’s attendance in the Knesset on a set topic. The Basic Laws contemplate a regularized system of oversight, with any reorganization of ministerial powers requiring Knesset approval and the creation of a committee in the Knesset to oversee the ministry. This requirement supports the Knesset’s oversight of ministerial regulations. When a government minister issues regulations that involve criminal sanctions for violations, the Knesset committee that oversees that ministry has the ability to invalidate the regulation within 45 days. The Israeli Supreme Court has emphasized the importance of such oversight mechanisms, in some cases requiring that the government avoid taking action, including during a state of emergency, unless and until the Knesset can properly exercise oversight of it through its committees. Local government For governmental purposes, Israel is divided into six districts: Central District; Haifa District; Jerusalem District; Northern District; Southern District; and Tel Aviv District. The districts are further subdivided into fifteen sub-districts and fifty natural regions. Administration of the districts is coordinated by the Ministry of Interior. There are three forms of local government in Israel: city councils, local councils, and regional councils. 
City councils govern municipalities classified as cities, local councils govern small municipalities, and regional councils govern groups of communities. These bodies look after public services such as urban planning, zoning, the provision of water, emergency services, and education and culture, as per guidelines of the Ministry of Interior. Local governments consist of a governing council chaired by a mayor. The mayor and all council members are chosen in municipal elections. The Ministry of Defense has responsibility for the administration of the occupied territories.
========================================
[SOURCE: https://en.wikipedia.org/wiki/PlayStation_(console)#cite_ref-FOOTNOTELundrigan19969_125-3] | [TOKENS: 10728]
PlayStation (console) The PlayStation[a] (codenamed PSX, abbreviated as PS, and retroactively PS1 or PS one) is a home video game console developed and marketed by Sony Computer Entertainment. It was released in Japan on 3 December 1994, in North America on 9 September 1995, and in Europe on 29 September 1995, with other regions following thereafter. As a fifth-generation console, the PlayStation primarily competed with the Nintendo 64 and the Sega Saturn. Sony began developing the PlayStation after a failed venture with Nintendo to create a CD-ROM peripheral for the Super Nintendo Entertainment System in the early 1990s. The console was primarily designed by Ken Kutaragi and Sony Computer Entertainment in Japan, while additional development was outsourced in the United Kingdom. An emphasis on 3D polygon graphics was placed at the forefront of the console's design. PlayStation game production was designed to be streamlined and inclusive, enticing the support of many third-party developers. The console proved popular for its extensive game library, popular franchises, low retail price, and aggressive youth marketing which advertised it as the preferable console for adolescents and adults. Critically acclaimed games that defined the console include Gran Turismo, Crash Bandicoot, Spyro the Dragon, Tomb Raider, Resident Evil, Metal Gear Solid, Tekken 3, and Final Fantasy VII. Sony ceased production of the PlayStation on 23 March 2006—over eleven years after its release, and in the same year the PlayStation 3 debuted. More than 4,000 PlayStation games were released, with cumulative software sales of 962 million units. The PlayStation signaled Sony's rise to power in the video game industry. It received acclaim and sold strongly; in less than a decade, it became the first computer entertainment platform to ship over 100 million units. Its use of compact discs heralded the game industry's transition from cartridges. The PlayStation's success led to a line of successors, beginning with the PlayStation 2 in 2000. In the same year, Sony released a smaller and cheaper model, the PS one. History The PlayStation was conceived by Ken Kutaragi, a Sony executive who managed a hardware engineering division and was later dubbed "the Father of the PlayStation". Kutaragi's interest in working with video games stemmed from seeing his daughter play games on Nintendo's Famicom. Kutaragi convinced Nintendo to use his SPC-700 sound processor in the Super Nintendo Entertainment System (SNES) through a demonstration of the processor's capabilities. His willingness to work with Nintendo derived from both his admiration of the Famicom and his conviction that video game consoles would become the main home-use entertainment systems. Although Kutaragi was nearly fired because he had worked with Nintendo without Sony's knowledge, president Norio Ohga recognised the potential in Kutaragi's chip and decided to keep him as a protégé. The inception of the PlayStation dates back to a 1988 joint venture between Nintendo and Sony. Nintendo had produced floppy disk technology to complement cartridges in the form of the Family Computer Disk System, and wanted to continue this complementary storage strategy for the SNES. Since Sony was already contracted to produce the SPC-700 sound processor for the SNES, Nintendo contracted Sony to develop a CD-ROM add-on, tentatively titled the "Play Station" or "SNES-CD". 
The PlayStation name had already been trademarked by Yamaha, but Nobuyuki Idei liked it so much that he agreed to acquire it for an undisclosed sum rather than search for an alternative. Sony was keen to obtain a foothold in the rapidly expanding video game market. Having been the primary manufacturer of the MSX home computer format, Sony had wanted to use their experience in consumer electronics to produce their own video game hardware. Although the initial agreement between Nintendo and Sony concerned a CD-ROM drive add-on, Sony had also planned to develop a SNES-compatible Sony-branded console. This iteration was intended to be more of a home entertainment system, playing both SNES cartridges and a new CD format named the "Super Disc", which Sony would design. Under the agreement, Sony would retain sole international rights to every Super Disc game, giving them a large degree of control despite Nintendo's leading position in the video game market. Furthermore, Sony would also be the sole beneficiary of licensing related to music and film software, which it had been aggressively pursuing as a secondary application. The Play Station was to be announced at the 1991 Consumer Electronics Show (CES) in Las Vegas. However, Nintendo president Hiroshi Yamauchi was wary of Sony's increasing leverage at this point and deemed the original 1988 contract unacceptable upon realising it essentially handed Sony control over all games written for the SNES CD-ROM format. Although Nintendo was dominant in the video game market, Sony possessed a superior research and development department. Wanting to protect Nintendo's existing licensing structure, Yamauchi cancelled all plans for the joint Nintendo–Sony SNES CD attachment without telling Sony. He sent Nintendo of America president Minoru Arakawa (his son-in-law) and chairman Howard Lincoln to Amsterdam to form a more favourable contract with the Dutch conglomerate Philips, Sony's rival. This contract would give Nintendo total control over their licences on all Philips-produced machines. Kutaragi and Nobuyuki Idei, Sony's director of public relations at the time, learned of Nintendo's actions two days before the CES was due to begin. Kutaragi telephoned numerous contacts, including Philips, to no avail. On the first day of the CES, Sony announced their partnership with Nintendo and their new console, the Play Station. At 9 am the next day, in what has been called "the greatest ever betrayal" in the industry, Howard Lincoln stepped onto the stage and revealed that Nintendo was now allied with Philips and would abandon their work with Sony. Incensed by Nintendo's renouncement, Ohga and Kutaragi decided that Sony would develop their own console. Nintendo's contract-breaking was met with consternation in the Japanese business community, as they had broken an "unwritten law" of native companies not turning against each other in favour of foreign ones. Sony's American branch considered allying with Sega to produce a CD-ROM-based machine called the Sega Multimedia Entertainment System, but the Sega board of directors in Tokyo vetoed the idea when Sega of America CEO Tom Kalinske presented them with the proposal. Kalinske recalled them saying: "That's a stupid idea, Sony doesn't know how to make hardware. They don't know how to make software either. Why would we want to do this?" Sony halted this research, but decided to build on what it had developed with Nintendo and Sega to create its own console based on the SNES. 
Despite the tumultuous events at the 1991 CES, negotiations between Nintendo and Sony were still ongoing. A deal was proposed: the Play Station would still have a port for SNES games, on the condition that it would still use Kutaragi's audio chip and that Nintendo would own the rights and receive the bulk of the profits. Roughly two hundred prototype machines were created, and some software entered development. Many within Sony were still opposed to the company's involvement in the video game industry, with some resenting Kutaragi for jeopardising the company. Kutaragi remained adamant that Sony not retreat from the growing industry and that a deal with Nintendo would never work. Knowing that they had to take decisive action, Sony severed all ties with Nintendo on 4 May 1992. To determine the fate of the PlayStation project, Ohga chaired a meeting in June 1992, consisting of Kutaragi and several senior Sony board members. Kutaragi unveiled a proprietary CD-ROM-based system he had been secretly working on, which played games with immersive 3D graphics. Kutaragi was confident that his LSI chip could accommodate one million logic gates, which exceeded the capabilities of Sony's semiconductor division at the time. Despite gaining Ohga's enthusiasm, there remained opposition from a majority present at the meeting, including older Sony executives who saw Nintendo and Sega as "toy" manufacturers. The opposers felt the game industry was too culturally offbeat and asserted that Sony should remain a central player in the audiovisual industry, where companies were familiar with one another and could conduct "civili[s]ed" business negotiations. After Kutaragi reminded him of the humiliation he had suffered from Nintendo, Ohga retained the project and became one of Kutaragi's staunchest supporters. Ohga shifted Kutaragi and nine of his team from Sony's main headquarters to Sony Music Entertainment Japan (SMEJ), a subsidiary of the main Sony group, so as to retain the project and maintain relationships with Philips for the MMCD development project. The involvement of SMEJ proved crucial to the PlayStation's early development, as the process of manufacturing games on CD-ROM was similar to that used for audio CDs, with which Sony's music division had considerable experience. While at SMEJ, Kutaragi worked with Epic/Sony Records founder Shigeo Maruyama and Akira Sato; both later became vice-presidents of the division that ran the PlayStation business. Sony Computer Entertainment (SCE) was jointly established by Sony and SMEJ to handle the company's ventures into the video game industry. On 27 October 1993, Sony publicly announced that it was entering the game console market with the PlayStation. According to Maruyama, there was uncertainty over whether the console should primarily focus on 2D, sprite-based graphics or 3D polygon graphics. After Sony witnessed the success of Sega's Virtua Fighter (1993) in Japanese arcades, the direction of the PlayStation became "instantly clear" and 3D polygon graphics became the console's primary focus. SCE president Teruhisa Tokunaka expressed gratitude for Sega's timely release of Virtua Fighter, as it proved "just at the right time" that games with 3D imagery were possible. Maruyama claimed that Sony further wanted to emphasise the new console's ability to use the CD-ROM format's Red Book audio in its games, alongside high-quality visuals and gameplay. 
Wishing to distance the project from the failed enterprise with Nintendo, Sony initially branded the PlayStation the "PlayStation X" (PSX). Sony formed its European and North American divisions, known as Sony Computer Entertainment Europe (SCEE) and Sony Computer Entertainment America (SCEA), in January and May 1995 respectively. The divisions planned to market the new console under the alternative branding "PSX" following the negative feedback regarding "PlayStation" in focus group studies. Early advertising prior to the console's launch in North America referenced PSX, but the term was scrapped before launch. In contrast to Nintendo's consoles, the console was not marketed under Sony's name. According to Phil Harrison, much of Sony's upper management feared that the Sony brand would be tarnished if associated with the console, which they considered a "toy". Since Sony had no experience in game development, it had to rely on the support of third-party game developers. This was in contrast to Sega and Nintendo, which had versatile and well-equipped in-house software divisions for their arcade games and could easily port successful games to their home consoles. Recent consoles like the Atari Jaguar and 3DO had suffered low sales due to a lack of developer support, prompting Sony to redouble its efforts to gain the endorsement of arcade-savvy developers. A team from Epic Sony visited more than a hundred companies throughout Japan in May 1993 in hopes of attracting game creators with the PlayStation's technological appeal. Sony found that many disliked Nintendo's practices, such as favouring its own games over others. Through a series of negotiations, Sony acquired initial support from Namco, Konami, and Williams Entertainment, as well as 250 other development teams in Japan alone. Namco in particular was interested in developing for the PlayStation, since Namco rivalled Sega in the arcade market. Securing these companies brought influential games such as Ridge Racer (1993) and Mortal Kombat 3 (1995). Ridge Racer was one of the most popular arcade games at the time, and by December 1993 it had already been confirmed behind closed doors as the PlayStation's first game, despite Namco being a longstanding Nintendo developer. Namco's research managing director Shegeichi Nakamura met with Kutaragi in 1993 to discuss the preliminary PlayStation specifications, with Namco subsequently basing the Namco System 11 arcade board on PlayStation hardware and developing Tekken to compete with Virtua Fighter. The System 11 launched in arcades several months before the PlayStation's release, with the arcade release of Tekken in September 1994. Despite securing the support of various Japanese studios, Sony had no developers of its own while the PlayStation was in development. This changed in 1993, when Sony acquired the Liverpudlian company Psygnosis (later renamed SCE Liverpool) for US$48 million, securing its first in-house development team. The acquisition meant that Sony could have more launch games ready for the PlayStation's release in Europe and North America. Ian Hetherington, Psygnosis' co-founder, was disappointed after receiving early builds of the PlayStation and recalled that the console "was not fit for purpose" until his team got involved with it. Hetherington frequently clashed with Sony executives over broader ideas; at one point it was suggested that a television with a built-in PlayStation be produced. 
In the months leading up to the PlayStation's launch, Psygnosis had around 500 full-time staff working on games and assisting with software development. The purchase of Psygnosis marked another turning point for the PlayStation, as the company played a vital role in creating the console's development kits. While Sony had provided MIPS R4000-based Sony NEWS workstations for PlayStation development, Psygnosis employees disliked the thought of developing on these expensive workstations and asked Bristol-based SN Systems to create an alternative PC-based development system. Andy Beveridge and Martin Day, the owners of SN Systems, had previously supplied development hardware for other platforms such as the Mega Drive, Atari ST, and SNES. When Psygnosis arranged an audience for SN Systems with Sony's Japanese executives at the January 1994 CES in Las Vegas, Beveridge and Day presented their prototype of a condensed development kit, which could run on an ordinary personal computer with two extension boards. Impressed, Sony decided to abandon its plans for a workstation-based development system in favour of SN Systems's, thus securing a cheaper and more efficient method for designing software. An order of over 600 systems followed, and SN Systems supplied Sony with additional software such as an assembler, a linker, and a debugger. SN Systems went on to produce development kits for future PlayStation systems, including the PlayStation 2, and was bought by Sony in 2005. Sony strove to make game production as streamlined and inclusive as possible, in contrast to the relatively isolated approach of Sega and Nintendo. Phil Harrison, representative director of SCEE, believed that Sony's emphasis on developer assistance reduced most time-consuming aspects of development. As well as providing programming libraries, SCE headquarters in London, California, and Tokyo housed technical support teams that could work closely with third-party developers if needed. Unlike Nintendo, Sony did not favour its own products over non-Sony ones; Peter Molyneux of Bullfrog Productions admired Sony's open-handed approach to software developers and lauded its decision to use PCs as a development platform, remarking that "[it was] like being released from jail in terms of the freedom you have". Another strategy that helped attract software developers was the PlayStation's use of the CD-ROM format instead of traditional cartridges. Nintendo cartridges were expensive to manufacture, and the company controlled all production, prioritising its own games, while inexpensive compact disc manufacturing was available at dozens of locations around the world. The PlayStation's architecture and its interconnectability with PCs were beneficial to many software developers. The use of the programming language C proved useful, as it safeguarded the future compatibility of software should Sony decide to make further hardware revisions. Despite this inherent flexibility, some developers found themselves restricted by the console's lack of RAM. While working on beta builds of the PlayStation, Molyneux observed that its MIPS processor was not "quite as bullish" as that of a fast PC and said that it took his team two weeks to port their PC code to the PlayStation development kits and another fortnight to achieve a four-fold speed increase. An engineer from Ocean Software, one of Europe's largest game developers at the time, thought that allocating RAM was a challenging aspect given the 3.5 megabyte restriction. 
Kutaragi said that while it would have been easy to double the amount of RAM for the PlayStation, the development team refrained from doing so to keep the retail cost down. Kutaragi saw the biggest challenge in developing the system as balancing the conflicting goals of high performance, low cost, and ease of programming, and felt that he and his team were successful in this regard. The console's technical specifications were finalised in 1993 and its design during 1994. The PlayStation name and its final design were confirmed during a press conference on 10 May 1994, although the price and release dates had not yet been disclosed. Sony released the PlayStation in Japan on 3 December 1994, a week after the release of the Sega Saturn, at a price of ¥39,800. Sales in Japan began with a "stunning" success, with long queues in shops. Ohga later recalled that he realised how important the PlayStation had become for Sony when friends and relatives begged him for consoles for their children. The PlayStation sold 100,000 units on the first day and two million within six months, although the Saturn outsold it in the first few weeks due to the success of Virtua Fighter. By the end of 1994, 300,000 PlayStation units had been sold in Japan compared to 500,000 Saturn units. A grey market emerged for PlayStations shipped from Japan to North America and Europe, with buyers of such consoles paying up to £700. Before the release in North America, Sega and Sony presented their consoles at the first Electronic Entertainment Expo (E3) in Los Angeles on 11 May 1995. At their keynote presentation, Sega of America CEO Tom Kalinske revealed that the Saturn would be released immediately to select retailers at a price of $399. Next came Sony's turn: Olaf Olafsson, the head of SCEA, summoned Steve Race, the head of development, to the conference stage; Race simply said "$299" and left the stage to a round of applause. Attention on the Sony conference was further bolstered by the surprise appearance of Michael Jackson and the showcase of highly anticipated games, including Wipeout (1995), Ridge Racer and Tekken (1994). In addition, Sony announced that no games would be bundled with the console. Although the Saturn had been released early in the United States to gain an advantage over the PlayStation, the surprise launch upset many retailers who were not informed in time, harming sales. Some retailers, such as KB Toys, responded by dropping the Saturn entirely. The PlayStation went on sale in North America on 9 September 1995. It sold more units within two days than the Saturn had in five months, with almost all of the initial shipment of 100,000 units sold in advance and shops across the country running out of consoles and accessories. One retailer later recalled: "When September 1995 arrived and Sony's Playstation roared out of the gate, things immediately felt different than [sic] they did with the Saturn launch earlier that year. Sega dropped the Saturn $100 to match the Playstation's $299 debut price, but sales weren't even close—Playstations flew out the door as fast as we could get them in stock." The well-received Ridge Racer contributed to the PlayStation's early success, with some critics considering it superior to Sega's arcade counterpart Daytona USA (1994), as did Battle Arena Toshinden (1995). There were over 100,000 pre-orders placed and 17 games available on the market by the time of the PlayStation's American launch, compared to the Saturn's six launch games. 
The PlayStation was released in Europe on 29 September 1995 and in Australia on 15 November 1995. By November it had already outsold the Saturn by three to one in the United Kingdom, where Sony had allocated a £20 million marketing budget for the Christmas season compared to Sega's £4 million. Sony found early success in the United Kingdom by securing listings with independent shop owners as well as prominent high street chains such as Comet and Argos. Within its first year, the PlayStation secured over 20% of the entire American video game market. From September to the end of 1995, sales in the United States amounted to 800,000 units, giving the PlayStation a commanding lead over the other fifth-generation consoles,[b] though the SNES and Mega Drive from the fourth generation still outsold it. Sony reported that the attach rate of games to consoles sold was four to one. To meet increasing demand, Sony chartered jumbo jets and ramped up production in Europe and North America. By early 1996, the PlayStation had grossed $2 billion (equivalent to $4.106 billion in 2025) from worldwide hardware and software sales. By late 1996, sales in Europe totalled 2.2 million units, including 700,000 in the UK. Approximately 400 PlayStation games were in development, compared to around 200 for the Saturn and 60 for the Nintendo 64. In India, the PlayStation was test-marketed during 1999–2000 through Sony showrooms, selling 100 units. Sony finally launched the console countrywide (as the PS One model) on 24 January 2002 at a price of Rs 7,990, with 26 games available from the start. The PlayStation also did well in markets where it was never officially released. In Brazil, a third company's registration of the trademark meant the console could not be released officially, and the officially distributed Sega Saturn initially took over the market; but as the Sega console withdrew, PlayStation imports and large-scale piracy increased. In China, the most popular 32-bit console was likewise the Sega Saturn, but after it left the market the PlayStation grew to a base of 300,000 users by January 2000, even though Sony China had no plans to release it. The PlayStation was backed by a successful marketing campaign, allowing Sony to gain an early foothold in Europe and North America. Initially, PlayStation demographics were skewed towards adults, but the audience broadened after the first price drop. While the Saturn was positioned towards 18- to 34-year-olds, the PlayStation was initially marketed exclusively towards teenagers. Executives from both Sony and Sega reasoned that because younger players typically looked up to older, more experienced players, advertising targeted at teens and adults would draw them in too. Additionally, Sony found that adults reacted best to advertising aimed at teenagers; Lee Clow surmised that people who started to grow into adulthood regressed and became "17 again" when they played video games. The console was marketed with advertising slogans stylised as "LIVE IN YUR WRLD. PLY IN URS" (Live in Your World. Play in Ours.) and "U R NOT E" (with a red "E", read as "you are not ready"); the missing letters were stood in for by the four geometric shapes derived from the symbols on the controller's face buttons. Clow thought that by invoking such provocative statements, gamers would respond to the contrary and say "'Bullshit. 
Let me show you how ready I am.'" As the console's appeal grew, Sony's marketing efforts broadened from their earlier focus on mature players to specifically target younger children as well. Shortly after the PlayStation's release in Europe, Sony tasked marketing manager Geoff Glendenning with assessing the desires of a new target audience. Sceptical of Nintendo and Sega's reliance on television campaigns, Glendenning theorised that young adults transitioning from fourth-generation consoles would feel neglected by marketing directed at children and teenagers. Recognising the influence early 1990s underground clubbing and rave culture had on young people, especially in the United Kingdom, Glendenning felt that the culture had become mainstream enough to help cultivate the PlayStation's emerging identity. Sony partnered with prominent nightclubs such as Ministry of Sound and with festival promoters to organise dedicated PlayStation areas where select games could be demonstrated. The Sheffield-based graphic design studio The Designers Republic was contracted by Sony to produce promotional materials aimed at a fashionable, club-going audience. Psygnosis' Wipeout in particular became associated with nightclub culture, as it was widely featured in venues. By 1997, there were 52 nightclubs in the United Kingdom with dedicated PlayStation rooms. Glendenning recalled that he had discreetly used at least £100,000 a year in slush fund money to invest in impromptu marketing. In 1996, Sony expanded their CD production facilities in the United States due to the high demand for PlayStation games, increasing their monthly output from 4 million to 6.5 million discs. This was necessary because PlayStation sales were running at twice the rate of Saturn sales, and its lead increased dramatically when both consoles dropped in price to $199 that year. The PlayStation outsold the Saturn at a similar ratio in Europe during 1996. Sales of PlayStation hardware and software only increased following the launch of the Nintendo 64. Tokunaka speculated that the Nintendo 64 launch had actually helped PlayStation sales by raising public awareness of the gaming market through Nintendo's added marketing efforts. Despite this, the PlayStation took longer to achieve dominance in Japan. Tokunaka said that, even after the PlayStation and Saturn had been on the market for nearly two years, the competition between them was still "very close", with neither console having led in sales for any meaningful length of time. By 1998, Sega, spurred by their declining market share and significant financial losses, launched the Dreamcast in a last-ditch attempt to stay in the industry. Although its launch was successful, the technically superior 128-bit console was unable to overcome Sony's dominance; Sony still held 60% of the overall video game market share in North America at the end of 1999. Sega's initial confidence in their new console was undermined when Japanese sales were lower than expected, with disgruntled Japanese consumers reportedly returning their Dreamcasts in exchange for PlayStation software. On 2 March 1999, Sony officially revealed details of the PlayStation 2, which Kutaragi announced would feature a graphics processor designed to push more raw polygons than any console in history, effectively rivalling most supercomputers. 
The PlayStation continued to sell strongly at the turn of the millennium: in June 2000, Sony released the PSOne, a smaller, redesigned variant which went on to outsell all other consoles that year, including the PlayStation 2. In 2005, the PlayStation became the first console to ship 100 million units, with the PlayStation 2 later achieving this faster than its predecessor. The combined successes of both PlayStation consoles led to Sega retiring the Dreamcast in 2001 and abandoning the console business entirely. The PlayStation was eventually discontinued on 23 March 2006, over eleven years after its release and less than a year before the debut of the PlayStation 3. Hardware The main microprocessor is an R3000 CPU made by LSI Logic, operating at a clock rate of 33.8688 MHz and delivering 30 MIPS. This 32-bit CPU relies heavily on the "cop2" 3D and matrix-maths coprocessor on the same die to provide the speed necessary to render complex 3D graphics. The role of the separate GPU chip is to draw 2D polygons and apply shading and textures to them: the rasterisation stage of the graphics pipeline. Sony's custom 16-bit sound chip supports ADPCM sources with up to 24 sound channels, and offers a sampling rate of up to 44.1 kHz and music sequencing. The console features 2 MB of main RAM, with an additional 1 MB of video RAM. The PlayStation has a maximum colour depth of 16.7 million true colours, with 32 levels of transparency and unlimited colour look-up tables. It can output composite, S-Video or RGB video signals through its AV Multi connector (older models also having RCA connectors for composite), displaying resolutions from 256×224 to 640×480 pixels; different games can use different resolutions. Earlier models also had proprietary parallel and serial ports that could be used to connect accessories or link multiple consoles together; these were later removed due to lack of use. The PlayStation uses a proprietary video decompression unit, the MDEC, which is integrated into the CPU and allows the presentation of full-motion video at a higher quality than other consoles of its generation. Unusually for the time, the PlayStation lacks a dedicated 2D graphics processor; 2D elements are instead treated as polygons by the Geometry Transfer Engine (GTE) so that they can be processed and displayed on screen by the GPU. The GPU can draw 4,000 sprites and 180,000 textured polygons per second, in addition to 360,000 flat-shaded polygons per second. The PlayStation went through a number of variants during its production run. Externally, the most notable change was the gradual reduction in the number of external connectors on the rear of the unit. This started with the original Japanese launch units; the SCPH-1000, released on 3 December 1994, was the only model with an S-Video port, which was removed from the next model. Subsequent models saw a reduction in the number of parallel ports, with the final version retaining only one serial port. Sony marketed a development kit for amateur developers known as the Net Yaroze (meaning "Let's do it together" in Japanese). It was launched in June 1996 in Japan and, following public interest, was released the next year in other countries. The Net Yaroze allowed hobbyists to create their own games and upload them via an online forum run by Sony. The console was available only through a direct ordering service and came with the documentation, software and C compilers necessary to program PlayStation games and applications. 
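The tight memory budget implied by these specifications is easy to make concrete. The sketch below assumes a 16-bit-per-pixel framebuffer and double-buffering; actual buffer layouts varied from game to game, so the figures are purely illustrative of how much of the 1 MB of video RAM the display modes listed above would consume.

# Back-of-envelope VRAM budget for the PlayStation's 1 MB of video RAM,
# assuming a 16-bit (2-byte) pixel format and double-buffering.
# Buffer layouts varied by game; these figures are illustrative only.

VRAM_BYTES = 1 * 1024 * 1024  # 1 MB of video RAM

def framebuffer_bytes(width, height, bytes_per_pixel=2, buffers=2):
    """Bytes consumed by the display buffers for a given video mode."""
    return width * height * bytes_per_pixel * buffers

for width, height in [(256, 224), (320, 240), (640, 480)]:
    needed = framebuffer_bytes(width, height)
    if needed <= VRAM_BYTES:
        spare = (VRAM_BYTES - needed) // 1024
        print(f"{width}x{height}: {needed // 1024} KB buffered, "
              f"{spare} KB left for textures and colour look-up tables")
    else:
        print(f"{width}x{height}: {needed // 1024} KB needed, "
              f"cannot be double-buffered in 1 MB")

On this arithmetic, the highest listed mode cannot even be double-buffered in 16-bit colour, which is one reason lower resolutions were the norm for fast-moving 3D games.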
On 7 July 2000, Sony released the PS One (stylised as "PS one" or "PSone"), a smaller, redesigned version of the original PlayStation. It was the highest-selling console through the end of the year, outselling all other consoles, including the PlayStation 2. In 2002, Sony released a 5-inch (130 mm) LCD screen add-on for the PS One, referred to as the "Combo pack", which also included a car cigarette-lighter adaptor, giving the console a degree of portability. Production of the LCD "Combo pack" ceased in 2004, when the popularity of the PlayStation began to wane in markets outside Japan. A total of 28.15 million PS One units had been sold by the time it was discontinued in March 2006. Three iterations of the PlayStation's controller were released over the console's lifespan. The first, the PlayStation controller, was released alongside the console in December 1994. It features four individual directional buttons (as opposed to a conventional D-pad), two shoulder buttons on each side, Start and Select buttons in the centre, and four face buttons bearing simple geometric shapes: a green triangle, red circle, blue cross, and pink square. Rather than labelling its buttons with the traditionally used letters or numbers, the PlayStation controller established a visual trademark which would be incorporated heavily into the PlayStation brand. Teiyu Goto, the designer of the original PlayStation controller, said that the circle and cross represent "yes" and "no", respectively (though this layout is reversed in Western versions); the triangle symbolises a point of view, and the square is equated to a sheet of paper, to be used for accessing menus. The European and North American models of the original PlayStation controller are roughly 10% larger than the Japanese variant, to account for the fact that the average person in those regions has larger hands than the average Japanese person. Sony's first analogue gamepad, the PlayStation Analog Joystick (often erroneously referred to as the "Sony Flightstick"), was first released in Japan in April 1996. Featuring two parallel joysticks, it uses potentiometer technology previously used on consoles such as the Vectrex; instead of relying on binary eight-way switches, the controller detects minute angular changes through the entire range of motion. The right joystick also features a thumb-operated digital hat switch, corresponding to the traditional D-pad and used when simple digital movements were necessary. The Analog Joystick sold poorly in Japan due to its high cost and cumbersome size. The increasing popularity of 3D games prompted Sony to add analogue sticks to its controller design, to give users more freedom of movement in virtual 3D environments. The first official analogue controller, the Dual Analog Controller, was revealed to the public in a small glass booth at the 1996 PlayStation Expo in Japan, and released in April 1997 to coincide with the Japanese releases of the analogue-capable games Tobal 2 and Bushido Blade. In addition to the two analogue sticks (which also introduced two new buttons, mapped to clicking the sticks in), the Dual Analog Controller features an "Analog" button and LED beneath the Start and Select buttons which toggles analogue functionality on or off. The controller also features rumble support, though Sony decided that haptic feedback would be removed from all overseas iterations before the United States release. 
A Sony spokesman stated that the feature was removed for "manufacturing reasons", although rumours circulated that Nintendo had attempted to legally block the release of the controller outside Japan due to similarities with the Nintendo 64 controller's Rumble Pak; a Nintendo spokesman denied that the company had taken legal action. Next Generation's Chris Charla theorised that Sony dropped vibration feedback to keep the price of the controller down. In November 1997, Sony introduced the DualShock controller, its name deriving from its use of two (dual) vibration motors (shock). Unlike its predecessor, its analogue sticks feature textured rubber grips, and it has longer handles, slightly different shoulder buttons, and rumble feedback included as standard on all versions. The DualShock later replaced its predecessors as the default controller. Sony released a series of peripherals to add extra layers of functionality to the PlayStation. Such peripherals include memory cards, the PlayStation Mouse, the PlayStation Link Cable, the Multiplayer Adapter (a four-player multitap), the Memory Drive (a disk drive for 3.5-inch floppy disks), the GunCon (a light gun), and the Glasstron (a monoscopic head-mounted display). Released exclusively in Japan, the PocketStation is a memory card peripheral which acts as a miniature personal digital assistant. The device features a monochrome liquid crystal display (LCD), infrared communication capability, a real-time clock, built-in flash memory, and sound capability. Sharing similarities with the Dreamcast's VMU peripheral, the PocketStation was typically distributed with certain PlayStation games, enhancing them with added features. It proved popular in Japan, selling over five million units. Sony planned to release the peripheral outside Japan, but the release was cancelled despite its having received promotion in Europe and North America. In addition to playing games, most PlayStation models can play audio CDs; the Asian model SCPH-5903 can also play Video CDs. Like most CD players, the PlayStation can play songs in a programmed order, shuffle the playback order of the disc, and repeat one song or the entire disc. Later PlayStation models include a music visualisation function called SoundScope. This function, as well as a memory card manager, is accessed by starting the console without inserting a game or closing the CD tray, thereby bringing up a graphical user interface (GUI) for the PlayStation BIOS. The GUI differs between the PlayStation and PS One depending on the firmware version: the original PlayStation GUI had a dark blue background with rainbow graffiti used as buttons, while the early PAL PlayStation and PS One GUI had a grey blocked background with two icons in the middle. PlayStation emulation is versatile and can be run on numerous modern devices. Bleem! was a commercial emulator released for IBM-compatible PCs and the Dreamcast in 1999. It was notable for being aggressively marketed during the PlayStation's lifetime, and was the centre of multiple controversial lawsuits filed by Sony. Bleem! was programmed in assembly language, which allowed it to emulate PlayStation games with improved visual fidelity, enhanced resolutions, and filtered textures that were not possible on original hardware. Sony sued Bleem! two days after its release, citing copyright infringement and accusing the company of engaging in unfair competition and patent infringement by allowing the use of PlayStation BIOSes on a Sega console. Bleem! 
was subsequently forced to shut down in November 2001. Sony was aware that using CDs for game distribution could leave games vulnerable to piracy, given the growing popularity of CD-Rs and optical disc drives with burning capability. To preclude illegal copying, a proprietary process for PlayStation disc manufacturing was developed which, in conjunction with an augmented optical drive assembly, prevented burned copies of games from booting on an unmodified console. Specifically, all genuine PlayStation discs were pressed with a small section of deliberately irregular data which the PlayStation's optical pick-up was capable of detecting and decoding. Consoles would not boot game discs without a specific wobble frequency contained in the data of the disc's pregap sector (the same system was also used to encode discs' regional lockouts). This signal was within Red Book CD tolerances, so the actual content of PlayStation discs could still be read by a conventional disc drive; however, such a drive could not detect the wobble frequency, and therefore produced copies that omitted it, since the laser pick-up system of any optical disc drive would interpret the wobble as an oscillation of the disc surface and compensate for it in the reading process. Early PlayStations, particularly early 1000-series models, can exhibit skipping full-motion video or physical "ticking" noises from the unit. The problems stem from poorly placed vents leading to overheating in some environments, causing the plastic mouldings inside the console to warp slightly and create knock-on effects with the laser assembly. The solution is to sit the console on a surface which dissipates heat efficiently, in a well-ventilated area, or to raise the unit slightly from its resting surface. Sony representatives also recommended unplugging the PlayStation when not in use, as the system draws a small amount of power (and therefore generates heat) even when turned off. The first batch of PlayStations used a KSM-440AAM laser unit, whose case and movable parts were all built out of plastic. Over time, the plastic lens-sled rail wears out, usually unevenly, due to friction. The placement of the laser unit close to the power supply accelerates wear, as the additional heat makes the plastic more vulnerable to friction. Eventually, one side of the lens sled becomes so worn that the laser can tilt, no longer pointing directly at the CD; after this, games will no longer load due to data read errors. Sony fixed the problem by making the sled out of die-cast metal and placing the laser unit further away from the power supply on later PlayStation models. Due to an engineering oversight, the PlayStation does not produce a proper signal on several older models of televisions, causing the display to flicker or bounce around the screen. Sony decided not to change the console design, since only a small percentage of PlayStation owners used such televisions, and instead gave consumers the option of sending their PlayStation unit to a Sony service centre to have an official modchip installed, allowing play on older televisions. Game library The PlayStation featured a diverse game library which grew to appeal to all types of players. Critically acclaimed PlayStation games included Final Fantasy VII (1997), Crash Bandicoot (1996), Spyro the Dragon (1998), and Metal Gear Solid (1998), all of which became established franchises. 
Final Fantasy VII is credited with allowing role-playing games to gain mass-market appeal outside Japan, and is considered one of the most influential and greatest video games ever made. The PlayStation's bestselling game is Gran Turismo (1997), which sold 10.85 million units. After the PlayStation's discontinuation in 2006, cumulative software shipments stood at 962 million units. Following the 1994 launch in Japan, early games included Ridge Racer, Crime Crackers, King's Field, Motor Toon Grand Prix, Toh Shin Den (released in the West as Battle Arena Toshinden), and Kileak: The Blood. The first two games available at the later North American launch were Jumping Flash! (1995) and Ridge Racer, with Jumping Flash! heralded as a forerunner of 3D graphics in console gaming. Wipeout, Air Combat, Twisted Metal, Warhawk and Destruction Derby were among the popular first-year games, and the first to be reissued as part of Sony's Greatest Hits or Platinum range. By the PlayStation's first Christmas season, Psygnosis had produced around 70% of its launch catalogue; their breakthrough racing game Wipeout was acclaimed for its techno soundtrack and helped raise awareness of Britain's underground music community. Eidos Interactive's action-adventure game Tomb Raider contributed substantially to the success of the console in 1996, with its protagonist Lara Croft becoming an early gaming icon and garnering unprecedented media promotion. Licensed tie-in games of popular films were also prevalent; Argonaut Games' 2001 adaptation of Harry Potter and the Philosopher's Stone went on to sell over eight million copies late in the console's lifespan. Third-party developers continued to contribute to the console's wide-ranging game catalogue even after the launch of the PlayStation 2; notable exclusives of this era include Harry Potter and the Philosopher's Stone, Fear Effect 2: Retro Helix, Syphon Filter 3, C-12: Final Resistance, Dance Dance Revolution Konamix and Digimon World 3.[c] Sony assisted with game reprints as late as 2008 with Metal Gear Solid: The Essential Collection, the last PlayStation game officially released and licensed by Sony. Initially, in the United States, PlayStation games were packaged in long cardboard boxes, similar to non-Japanese 3DO and Saturn games. Sony later switched to the jewel case format typically used for audio CDs and Japanese video games, as this format took up less retailer shelf space (which was at a premium due to the large number of PlayStation games being released), and focus testing showed that most consumers preferred it. Reception The PlayStation was mostly well received upon release, and critics in the West generally welcomed the new console. The staff of Next Generation reviewed the PlayStation a few weeks after its North American launch, commenting that, while the CPU is "fairly average", the supplementary custom hardware, such as the GPU and sound processor, is stunningly powerful. They praised the PlayStation's focus on 3D, and complimented the comfort of its controller and the convenience of its memory cards. Giving the system 4½ out of 5 stars, they concluded, "To succeed in this extremely cut-throat market, you need a combination of great hardware, great games, and great marketing. Whether by skill, luck, or just deep pockets, Sony has scored three out of three in the first salvo of this war." Albert Kim of Entertainment Weekly praised the PlayStation as a technological marvel rivalling the hardware of Sega and Nintendo. 
Famicom Tsūshin scored the console 19 out of 40 in May 1995, lower than the Saturn's 24 out of 40. In a 1997 year-end review, a team of five Electronic Gaming Monthly editors gave the PlayStation scores of 9.5, 8.5, 9.0, 9.0, and 9.5—for all five editors, the highest score they gave to any of the five consoles reviewed in the issue. They lauded the breadth and quality of the games library, saying it had vastly improved over previous years due to developers mastering the system's capabilities, in addition to Sony revising their stance on 2D and role-playing games. They also complimented the low price point of the games compared to the Nintendo 64's, and noted that the PlayStation was the only console on the market that could be relied upon to deliver a solid stream of games in the coming year, primarily because third-party developers almost unanimously favoured it over its competitors. Legacy SCE was an upstart in the video game industry in late 1994, as the video game market in the early 1990s was dominated by Nintendo and Sega. Nintendo had been the clear leader in the industry since the introduction of the Nintendo Entertainment System in 1985, and the Nintendo 64 was initially expected to maintain this position. The PlayStation's target audience included the first generation to grow up with mainstream video games, along with 18- to 29-year-olds who were not the primary focus of Nintendo. By the late 1990s, Sony had become a highly regarded console brand due to the PlayStation, with a significant lead over second-place Nintendo, while Sega was relegated to a distant third. The PlayStation became the first "computer entertainment platform" to ship over 100 million units worldwide, with many critics attributing the console's success to third-party developers. As of 2025, it remains the sixth best-selling console of all time, with a total of 102.49 million units sold. Around 7,900 individual games were published for the console during its 11-year lifespan, the second-most games ever produced for a console. Its success was a significant financial boon for Sony, with profits from the video game division coming to account for around 23% of the company's total. Sony's next-generation PlayStation 2, which is backward compatible with the PlayStation's DualShock controller and games, was announced in 1999 and launched in 2000. The PlayStation's lead in installed base and developer support paved the way for the success of its successor, which overcame the earlier launch of Sega's Dreamcast and then fended off competition from Microsoft's newcomer Xbox and Nintendo's GameCube. The PlayStation 2's immense success and the failure of the Dreamcast were among the main factors that led to Sega abandoning the console market. To date, five PlayStation home consoles have been released, continuing the same numbering scheme, as well as two portable systems. The PlayStation 3 also maintained backward compatibility with original PlayStation discs. Hundreds of PlayStation games have been digitally re-released on the PlayStation Portable, PlayStation 3, PlayStation Vita, PlayStation 4, and PlayStation 5. The PlayStation has often ranked among the best video game consoles. In 2018, Retro Gamer named it the third best console, crediting its sophisticated 3D capabilities as a key factor in its mass success and lauding it as a "game-changer in every sense possible". 
In 2009, IGN ranked the PlayStation the seventh best console on its list, noting its appeal to older audiences as a crucial factor in propelling the video game industry, as well as its role in transitioning the industry to the CD-ROM format. Keith Stuart of The Guardian likewise named it the seventh best console in 2020, declaring that its success was so profound it "ruled the 1990s". In January 2025, Lorentio Brodesco announced the nsOne project, an attempt to reverse engineer the PlayStation's motherboard. Brodesco stated that "detailed documentation on the original motherboard was either incomplete or entirely unavailable". The project was successfully crowdfunded via Kickstarter. In June, Brodesco manufactured the first working motherboard, promising a fully routed version with multilayer routing, as well as documentation and design files, in the near future. The success of the PlayStation contributed to the demise of cartridge-based home consoles. While not the first system to use an optical disc format, it was the first highly successful one, and it ended up going head-to-head with the cartridge-based Nintendo 64,[d] which the industry had expected to use CDs like the PlayStation. After the demise of the Sega Saturn, Nintendo was left as Sony's main competitor in Western markets. Nintendo chose not to use CDs for the Nintendo 64, likely out of concern for the proprietary cartridge format's ability to help enforce copy protection, given the company's substantial reliance on licensing and exclusive games for its revenue. Besides their larger capacity, CD-ROMs could be produced in bulk at a much faster rate than ROM cartridges: a week, compared to two to three months. Further, the cost of production per unit was far cheaper, allowing Sony to offer games at about 40% lower cost to the user than ROM cartridges while still making the same net revenue per unit. In Japan, Sony published smaller runs of a wide variety of games for the PlayStation as a risk-limiting step, a model that had been used by Sony Music for audio CDs. The production flexibility of CD-ROMs meant that Sony could quickly produce larger volumes of popular games to get onto the market, something that could not be done with cartridges because of their manufacturing lead time. The lower production costs of CD-ROMs also allowed publishers an additional source of profit: budget-priced reissues of games which had already recouped their development costs. Tokunaka remarked in 1996: Choosing CD-ROM is one of the most important decisions that we made. As I'm sure you understand, PlayStation could just as easily have worked with masked ROM [cartridges]. The 3D engine and everything—the whole PlayStation format—is independent of the media. But for various reasons (including the economies for the consumer, the ease of the manufacturing, inventory control for the trade, and also the software publishers) we deduced that CD-ROM would be the best media for PlayStation. The increasing complexity of developing games pushed cartridges to their storage limits and gradually discouraged some third-party developers. Part of the CD format's appeal to publishers was that discs could be produced at a significantly lower cost and offered more flexibility to meet production demand. 
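The roughly 40% price gap can be illustrated with a deliberately simple margin model. None of the dollar figures below are documented costs; they are hypothetical placeholders chosen only to show how a far cheaper medium lets the retail price fall sharply while per-unit net revenue stays constant.

# Hypothetical illustration of the cartridge-versus-CD pricing argument.
# All dollar figures are invented placeholders, not documented costs;
# net revenue is simplified to retail price minus production cost.

cartridge_cost = 25.0   # assumed per-unit ROM cartridge cost
cd_cost = 1.0           # assumed per-unit CD pressing cost
cartridge_price = 60.0  # assumed cartridge retail price

net_revenue = cartridge_price - cartridge_cost  # $35 per unit, held fixed
cd_price = cd_cost + net_revenue                # $36 per unit

print(f"Net revenue per unit: ${net_revenue:.2f}")
print(f"Equivalent CD price:  ${cd_price:.2f}")
print(f"Price reduction:      {1 - cd_price / cartridge_price:.0%}")  # 40%

With these placeholder numbers the consumer price falls by exactly the 40% cited above while the publisher's take per unit is unchanged; real pricing also involved retailer margins, licensing fees and royalties that this sketch ignores.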
As a result, some third-party developers switched to the PlayStation, including Square and Enix, whose Final Fantasy VII and Dragon Quest VII respectively had been planned for the Nintendo 64 (the two companies later merged to form Square Enix). Other developers released fewer games for the Nintendo 64; Konami, for example, released only thirteen N64 games but over fifty for the PlayStation. Nintendo 64 game releases were less frequent than the PlayStation's, with many being developed either by Nintendo themselves or by second parties such as Rare. The PlayStation Classic is a dedicated video game console made by Sony Interactive Entertainment that emulates PlayStation games. It was announced in September 2018 at the Tokyo Game Show, and released on 3 December 2018, the 24th anniversary of the release of the original console. As a dedicated console, the PlayStation Classic features 20 pre-installed games, which run on the open-source emulator PCSX. The console is bundled with two replica wired PlayStation controllers (the version without analogue sticks), an HDMI cable, and a USB Type-A cable. Internally, the console uses a MediaTek MT8167a system on a chip with four Cortex-A35 central processing cores clocked at 1.5 GHz and a PowerVR GE8300 graphics processing unit, along with 16 GB of eMMC flash storage and 1 GB of DDR3 SDRAM. The PlayStation Classic is 45% smaller than the original console. It received negative reviews from critics and was compared unfavourably to Nintendo's rival Nintendo Entertainment System Classic Edition and Super Nintendo Entertainment System Classic Edition. Criticism was directed at its meagre game library, user interface, emulation quality, use of PAL versions for certain games, use of the original controller, and high retail price, though the console's design received praise. The console sold poorly.
========================================
[SOURCE: https://en.wikipedia.org/wiki/European_Southern_Observatory] | [TOKENS: 4181]
Contents European Southern Observatory The European Organisation for Astronomical Research in the Southern Hemisphere, commonly referred to as the European Southern Observatory (ESO), is an intergovernmental research organisation for ground-based astronomy, made up of 16 member states. Created in 1962, ESO has provided astronomers with state-of-the-art research facilities and access to the southern sky. The organisation employs over 750 staff members and receives annual member state contributions of approximately €162 million. Its observatories are located in northern Chile. ESO has built and operated some of the largest and most technologically advanced telescopes. These include the 3.6 m New Technology Telescope, an early pioneer in the use of active optics, and the Very Large Telescope (VLT), which consists of four individual 8.2 m telescopes and four smaller auxiliary telescopes which can all work together or separately. The Atacama Large Millimeter Array observes the universe in the millimetre and submillimetre wavelength ranges, and is the world's largest ground-based astronomy project to date. It was completed in March 2013 as an international collaboration between Europe (represented by ESO), North America, East Asia and Chile. Currently under construction is the Extremely Large Telescope. It will use a 39.3-metre-diameter segmented mirror and become the world's largest optical reflecting telescope when operational towards the end of this decade. Its light-gathering power will allow detailed studies of planets around other stars, the first objects in the universe, supermassive black holes, and the nature and distribution of the dark matter and dark energy which dominate the universe. ESO's observing facilities have made astronomical discoveries and produced several astronomical catalogues. Its findings include the discovery of the most distant gamma-ray burst and evidence for a black hole at the centre of the Milky Way. In 2004, the VLT allowed astronomers to obtain the first picture of an extrasolar planet (2M1207b), orbiting a brown dwarf 173 light-years away. The High Accuracy Radial Velocity Planet Searcher (HARPS) instrument installed on the older ESO 3.6 m telescope led to the discovery of further extrasolar planets, including Gliese 581c, one of the smallest planets seen outside the Solar System. Past The idea that European astronomers should establish a common large observatory was broached by Walter Baade and Jan Oort at the Leiden Observatory in the Netherlands in spring 1953. It was pursued by Oort, who gathered a group of astronomers in Leiden to consider it on 21 June that year. Immediately thereafter, the subject was further discussed at the Groningen conference in the Netherlands. On 26 January 1954, an ESO declaration was signed by astronomers from six European countries expressing the wish that a joint European observatory be established in the southern hemisphere. At the time, all reflector telescopes with an aperture of 2 metres or more were located in the northern hemisphere. The decision to build the observatory in the southern hemisphere resulted from the necessity of observing the southern sky; some research subjects (such as the central parts of the Milky Way and the Magellanic Clouds) were accessible only from the southern hemisphere. 
It was initially planned to set up telescopes in South Africa, where several European observatories (such as the Boyden Observatory) were located, but tests from 1955 to 1962 demonstrated that a site in the Andes was preferable. When Jürgen Stock enthusiastically reported his observations from Chile, Otto Heckmann decided to put the South African project on hold, and ESO, at that time about to sign the contracts with South Africa, decided to establish its observatory in Chile instead. The ESO Convention was signed on 5 October 1962 by Belgium, Germany, France, the Netherlands and Sweden, and Otto Heckmann was nominated as the organisation's first director general on 1 November 1962. On 15 November 1963, Chile was chosen as the site for ESO's observatory. A preliminary proposal for a convention between the astronomy organisations of these five countries had been drafted in 1954. Although some amendments were made to the initial document, the convention proceeded slowly until 1960, when it was discussed at that year's committee meeting. The new draft was examined in detail, and a council member of CERN (the European Organization for Nuclear Research) highlighted the need for a convention between governments, in addition to organisations. The convention and government involvement became pressing due to the rapidly rising costs of site-testing expeditions. The final 1962 version was largely adopted from the CERN convention, owing to similarities between the organisations and the dual membership of some members. In 1966, the first ESO telescope at the La Silla site in Chile began operating. Because CERN, like ESO, had sophisticated instrumentation, the astronomy organisation frequently turned to the nuclear-research body for advice, and a collaborative agreement between ESO and CERN was signed in 1970. Several months later, ESO's telescope division moved into a CERN building in Geneva and ESO's Sky Atlas Laboratory was established on CERN property. ESO's European departments moved into the new ESO headquarters in Garching (near Munich), Germany, in 1980. In 2016, a team led by Guillem Anglada-Escudé announced the discovery of Proxima Centauri b using ESO facilities. Chilean observation sites Although ESO is headquartered in Germany, its telescopes and observatories are in northern Chile, where the organisation operates advanced ground-based astronomical facilities at sites that are among the best locations for astronomical observation in the southern hemisphere. An ESO project currently under construction is the Extremely Large Telescope (ELT), a 40-metre-class telescope based on a five-mirror design, the successor to the formerly planned Overwhelmingly Large Telescope. The ELT will be the largest visible and near-infrared telescope in the world. ESO began its design in early 2006 and aimed to begin construction in 2012; construction work at the ELT site eventually started in June 2014. As decided by the ESO council on 26 April 2010, a fourth site, Cerro Armazones, is to be home to the ELT. Each year about 2,000 requests are made for the use of ESO telescopes, for four to six times more nights than are available. Observations made with these instruments appear in a number of peer-reviewed publications annually; in 2017, more than 1,000 reviewed papers based on ESO data were published. ESO telescopes generate large amounts of data at a high rate, which are stored in a permanent archive facility at ESO headquarters. The archive contains more than 1.5 million images (or spectra) with a total volume of about 65 terabytes (65,000,000,000,000 bytes) of data. 
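A quick calculation conveys the scale jump the ELT represents: a telescope's light-gathering power grows with the square of its mirror diameter. The sketch below uses the 39.3 m ELT diameter and the 8.2 m of a single VLT unit telescope as quoted in this article, and ignores segment gaps and central obstructions, so the result is approximate.

# Light-gathering power scales with mirror area, i.e. with diameter squared.
# Diameters as quoted in the article; gaps and obstructions are ignored.
import math

def collecting_area(diameter_m):
    return math.pi * (diameter_m / 2) ** 2

elt, vlt_ut = 39.3, 8.2  # metres
print(f"ELT:    {collecting_area(elt):7.0f} m^2")    # ~1213 m^2
print(f"VLT UT: {collecting_area(vlt_ut):7.0f} m^2") # ~53 m^2
print(f"ratio:  {collecting_area(elt) / collecting_area(vlt_ut):.0f}x")  # ~23x

Roughly 23 times the collecting area of a single VLT unit telescope, which underpins the faint science targets described in the lead.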
La Silla, located in the southern Atacama Desert 600 kilometres (370 mi) north of Santiago de Chile at an altitude of 2,400 metres (7,900 ft), is the home of ESO's original observation site. Like other observatories in the area, La Silla is far from sources of light pollution and has one of the darkest night skies on Earth. At La Silla, ESO operates three telescopes: a 3.6-metre telescope, the New Technology Telescope (NTT), and the 2.2-metre Max-Planck-ESO Telescope. The observatory hosts visitor instruments, which are attached to a telescope for the duration of an observational run and then removed. La Silla also hosts national telescopes, such as the 1.2-metre Swiss and the 1.5-metre Danish telescopes. About 300 reviewed publications per year are attributable to the work of the observatory. Discoveries made with La Silla telescopes include the HARPS-spectrograph detection of the planets orbiting within the Gliese 581 planetary system, which contains the first known rocky planet in a habitable zone outside the Solar System. Several telescopes at La Silla played a role in linking gamma-ray bursts, the most energetic explosions in the universe since the Big Bang, with the explosions of massive stars. The ESO La Silla Observatory also played a role in the study of supernova SN 1987A. The ESO 3.6-metre telescope began operations in 1977. It has since been upgraded, including with the installation of a new secondary mirror. The conventionally designed horseshoe-mount telescope was primarily used for infrared spectroscopy; it now hosts the HARPS spectrograph, used in the search for extrasolar planets and for asteroseismology. The instrument was designed for very high long-term radial velocity accuracy, on the order of 1 m/s. The New Technology Telescope (NTT) is an altazimuth, 3.58-metre Ritchey–Chrétien telescope inaugurated in 1989, and the first in the world with a computer-controlled main mirror. The flexible mirror's shape is adjusted during observation to preserve optimal image quality, and the secondary mirror's position is also adjustable in three directions. This technology, developed by ESO and known as active optics, is now applied to all major telescopes, including the VLT and the future ELT. The design of the octagonal enclosure housing the NTT is also innovative: the telescope dome is relatively small and ventilated by a system of flaps directing airflow smoothly across the mirror, reducing turbulence and resulting in sharper images. The 2.2-metre telescope has been in operation at La Silla since early 1984 and is on indefinite loan to ESO from the Max Planck Society (German: Max-Planck-Gesellschaft zur Förderung der Wissenschaften, or MPG). Telescope time is shared between MPG and ESO observing programmes, while operation and maintenance of the telescope are ESO's responsibility. Its instrumentation includes a 67-million-pixel wide-field imager (WFI) with a field of view as large as the full moon, which has taken many images of celestial objects. Other instruments include GROND (Gamma-Ray Burst Optical Near-Infrared Detector), which seeks the afterglow of gamma-ray bursts, the most powerful explosions in the universe, and the high-resolution spectrograph FEROS (Fiber-fed Extended Range Optical Spectrograph), used to make detailed studies of stars. La Silla also hosts several national and project telescopes not operated by ESO, among them the Swiss Euler Telescope, the Danish National Telescope, and the REM, TRAPPIST and TAROT telescopes. 
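The 1 m/s radial velocity accuracy quoted above corresponds to an extraordinarily small Doppler shift, which is what makes HARPS-class spectrographs so demanding to build. A rough calculation follows; the 500 nm reference wavelength is an arbitrary visible-band choice made for illustration.

# Doppler shift for a line-of-sight velocity v: delta_lambda = lambda * v / c.
# The 500 nm reference wavelength is an arbitrary visible-band choice.

C = 299_792_458.0   # speed of light in m/s
v = 1.0             # radial velocity in m/s
wavelength_nm = 500.0

print(f"Fractional shift v/c: {v / C:.2e}")                    # ~3.3e-09
print(f"Shift at 500 nm:      {wavelength_nm * v / C:.2e} nm") # ~1.7e-06 nm

A shift of about two millionths of a nanometre is far smaller than any single detector pixel can resolve; in practice such measurements average over thousands of spectral lines and depend on extreme instrumental stability.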
The Paranal Observatory is located atop Cerro Paranal in the Atacama Desert in northern Chile. Cerro Paranal is a 2,635-metre-high (8,645 ft) mountain about 120 kilometres (75 mi) south of Antofagasta and 12 kilometres (7.5 mi) from the Pacific coast. The observatory has seven major telescopes operating in visible and infrared light: the four 8.2-metre (27 ft) telescopes of the Very Large Telescope, the 2.6-metre (8 ft 6 in) VLT Survey Telescope (VST) and the 4.1-metre (13 ft) Visible and Infrared Survey Telescope for Astronomy (VISTA). In addition, there are four 1.8-metre (5 ft 11 in) auxiliary telescopes forming an array used for interferometric observations. In March 2008, Paranal was the location for several scenes of the 22nd James Bond film, Quantum of Solace. The main facility at Paranal is the VLT, which consists of four nearly identical 8.2-metre (27 ft) unit telescopes (UTs), each hosting two or three instruments. These large telescopes can also work together in groups of two or three as a giant interferometer. The ESO Very Large Telescope Interferometer (VLTI) allows astronomers to see details up to 25 times finer than with the individual telescopes. The light beams are combined in the VLTI using a complex system of mirrors in tunnels, where the light paths must be kept equal to within less than 1/1000 mm over 100 metres. The VLTI can achieve an angular resolution of milliarcseconds, equivalent to the ability to see the headlights of a car on the Moon. The first of the UTs had its first light in May 1998 and was offered to the astronomical community on 1 April 1999. The other telescopes followed in 1999 and 2000, making the VLT fully operational. Four 1.8-metre auxiliary telescopes (ATs) were installed between 2004 and 2007 to keep the VLTI available when the UTs are being used for other projects. Data from the VLT have led to the publication of an average of more than one peer-reviewed scientific paper per day; in 2017, over 600 reviewed scientific papers based on VLT data were published. The VLT's scientific discoveries include imaging an extrasolar planet, tracking individual stars moving around the supermassive black hole at the centre of the Milky Way, and observing the afterglow of the furthest known gamma-ray burst. At the Paranal inauguration in March 1999, names of celestial objects in the Mapuche language were chosen to replace the technical designations of the four VLT unit telescopes (UT1–UT4). An essay contest on the meaning of these names had been arranged beforehand for schoolchildren in the region, attracting many entries dealing with the cultural heritage of ESO's host country. A 17-year-old from Chuquicamata, near Calama, submitted the winning essay and was awarded an amateur telescope at the inauguration. The four unit telescopes, UT1, UT2, UT3 and UT4, have since been known as Antu (Sun), Kueyen (Moon), Melipal (Southern Cross), and Yepun (Evening Star), the last having originally been mistranslated as "Sirius" instead of "Venus". The Visible and Infrared Survey Telescope for Astronomy (VISTA) is housed on the peak adjacent to the one hosting the VLT and shares its observational conditions. VISTA's main mirror is 4.1 metres (13 ft) across and is highly curved for a mirror of its size and quality: its deviations from a perfect surface are less than a few thousandths of the thickness of a human hair, and its construction and polishing presented a challenge. 
VISTA was conceived and developed by a consortium of 18 universities in the United Kingdom led by Queen Mary, University of London, and became an in-kind contribution to ESO as part of the UK's ratification agreement. The telescope's design and construction were managed by the Science and Technology Facilities Council's UK Astronomy Technology Centre (STFC, UK ATC). Provisional acceptance of VISTA was formally granted by ESO at a ceremony in December 2009 at ESO headquarters in Garching, attended by representatives of Queen Mary, University of London and STFC. The telescope has been operated by ESO since then, and has captured high-quality images from the start of operations. The VLT Survey Telescope (VST) is a state-of-the-art 2.6-metre (8 ft 6 in) telescope equipped with OmegaCAM, a 268-megapixel CCD camera with a field of view four times the area of the full moon. It complements VISTA by surveying the sky in visible light. The VST, which became operational in 2011, is a joint venture between ESO and the Astronomical Observatory of Capodimonte in Naples, a research centre of the Italian National Institute for Astrophysics (INAF). The scientific goals of the two telescopes' surveys range from the nature of dark energy to the assessment of near-Earth objects. Teams of European astronomers conduct the surveys; some cover most of the southern sky, while others focus on smaller areas. VISTA and the VST produce large amounts of data; a single picture taken by VISTA has 67 megapixels, and images from OmegaCAM on the VST have 268 megapixels. The two survey telescopes collect more data every night than all the other instruments on the VLT combined, and together produce more than 100 terabytes of data per year. The Llano de Chajnantor is a 5,100-metre-high (16,700 ft) plateau in the Atacama Desert, about 50 kilometres (31 mi) east of San Pedro de Atacama. The site is 750 metres (2,460 ft) higher than the Mauna Kea Observatory and 2,400 metres (7,900 ft) higher than the Very Large Telescope on Cerro Paranal. It is dry and inhospitable to humans, but an excellent site for submillimetre astronomy: water vapour molecules in Earth's atmosphere absorb and attenuate submillimetre radiation, so a dry site is required for this type of radio astronomy. The telescopes at the site include ALMA and APEX. ALMA is a telescope designed for millimetre and submillimetre astronomy. This type of astronomy is a relatively unexplored frontier, revealing a universe which cannot be seen in the more familiar visible or infrared light, and is ideal for studying the "cold universe": light at these wavelengths shines from vast cold clouds in interstellar space, at temperatures only a few tens of degrees above absolute zero. Astronomers use this light to study the chemical and physical conditions in these molecular clouds, the dense regions of gas and cosmic dust where new stars are being born. Seen in visible light, these regions of the universe are often dark and obscured by dust, but they shine brightly in the millimetre and submillimetre portions of the electromagnetic spectrum. This wavelength range is also ideal for studying some of the earliest and most distant galaxies in the universe, whose light has been redshifted to longer wavelengths by the expansion of the universe. ESO hosts the Atacama Pathfinder Experiment (APEX) and operates it on behalf of the Max Planck Institute for Radio Astronomy (MPIfR). 
APEX is a 12-metre (39 ft) diameter telescope operating at millimetre and submillimetre wavelengths, between infrared light and radio waves. ALMA is an astronomical interferometer composed of 66 high-precision antennas operating at wavelengths of 0.3 to 3.6 mm. Its main array has fifty 12-metre (39 ft) antennas acting together as a single interferometer; an additional compact array of four 12-metre and twelve 7-metre (23 ft) antennas, known as the Morita Array, is also available. The antennas can be arranged across the desert plateau over distances from 150 metres to 16 kilometres (9.9 mi), giving ALMA a variable "zoom". The array is able to probe the universe at millimetre and submillimetre wavelengths with unprecedented sensitivity and resolution, with vision up to ten times sharper than the Hubble Space Telescope; these images complement those made with the VLT Interferometer. ALMA is a collaboration between East Asia (Japan and Taiwan), Europe (ESO), North America (the US and Canada) and Chile. The scientific goals of ALMA include studying the origin and formation of stars, galaxies and planets through observations of molecular gas and dust, studying distant galaxies towards the edge of the observable universe, and studying the relic radiation of the Big Bang. A call for ALMA science proposals was issued on 31 March 2011, and early observations began on 3 October. Outreach Outreach activities are carried out by the ESO education and Public Outreach Department (ePOD). ePOD also manages the ESO Supernova Planetarium & Visitor Centre, an astronomy centre at the site of the ESO headquarters in Garching bei München, which was inaugurated on 26 April 2018. 
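As a closing sanity check on the interferometer claims above, the achievable angular resolution of an interferometer is roughly theta = lambda / B for observing wavelength lambda and baseline B. The wavelengths below are illustrative choices within each instrument's range (about 2 µm for the VLTI's near-infrared work, 1 mm for ALMA); the 100 m figure quoted for the VLTI's light paths is used as a stand-in for its baseline, and ALMA's 16 km is as quoted above.

# Rough interferometric resolution: theta ~ lambda / baseline, in radians.
# Wavelengths are illustrative assumptions; baselines follow the figures
# quoted in the article (~100 m for the VLTI, up to 16 km for ALMA).
import math

RAD_TO_MAS = math.degrees(1.0) * 3600 * 1000  # radians to milliarcseconds

def resolution_mas(wavelength_m, baseline_m):
    return (wavelength_m / baseline_m) * RAD_TO_MAS

print(f"VLTI, 2 um over 100 m: {resolution_mas(2e-6, 100):.1f} mas")    # ~4 mas
print(f"ALMA, 1 mm over 16 km: {resolution_mas(1e-3, 16_000):.1f} mas") # ~13 mas

Both values sit in the milliarcsecond regime claimed above, and at its shortest wavelength of 0.3 mm ALMA's figure improves by roughly a further factor of three, consistent with the comparison to the Hubble Space Telescope.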
========================================