[SOURCE: https://www.mako.co.il/study-career-open-days] | [TOKENS: 659] |
Studies and Career. On the way to a degree: surprising facts about academic studies in Israel. What are the most widely studied fields? How is the academic year structured? And what did the celebrities study? 12 students you have to know: they are young, promising, and prove that succeeding in academia takes more than grades. Inspiration. How much did the coronavirus affect your studies? Answer the survey and find out where you stand relative to everyone else. Innovative engineering study programs at Bar-Ilan, so you gain the best of all worlds. The new streams in education you need to know. We went to examine the subject from their angle: this is what it feels like to be an elementary school teacher in 2020. These are the reasons more and more students are enrolling. How to integrate into the high-tech industry alongside your studies. Meet these students' inquisitive venture. HIT students and graduates talk about their inventions. Everything worth taking into account before starting your studies. |
======================================== |
[SOURCE: https://en.wikipedia.org/wiki/Internet#cite_ref-85] | [TOKENS: 9291] |
The Internet (or internet)[a] is the global system of interconnected computer networks that uses the Internet protocol suite (TCP/IP)[b] to communicate between networks and devices. It is a network of networks that comprises private, public, academic, business, and government networks of local to global scope, linked by electronic, wireless, and optical networking technologies. The Internet carries a vast range of information services and resources, such as the interlinked hypertext documents and applications of the World Wide Web (WWW), electronic mail, discussion groups, internet telephony, streaming media and file sharing. Most traditional communication media, including telephone, radio, television, paper mail, newspapers, and print publishing, have been transformed by the Internet, giving rise to new media such as email, online music, digital newspapers, news aggregators, and audio and video streaming websites. The Internet has enabled and accelerated new forms of personal interaction through instant messaging, Internet forums, and social networking services. Online shopping has also grown to occupy a significant market across industries, enabling firms to extend brick and mortar presences to serve larger markets. Business-to-business and financial services on the Internet affect supply chains across entire industries. The origins of the Internet date back to research that enabled the time-sharing of computer resources, the development of packet switching, and the design of computer networks for data communication. The set of communication protocols to enable internetworking on the Internet arose from research and development commissioned in the 1970s by the Defense Advanced Research Projects Agency (DARPA) of the United States Department of Defense in collaboration with universities and researchers across the United States and in the United Kingdom and France. 
The Internet has no single centralized governance in either technological implementation or policies for access and usage. Each constituent network sets its own policies. The overarching definitions of the two principal name spaces on the Internet, the Internet Protocol address (IP address) space and the Domain Name System (DNS), are directed by a maintainer organization, the Internet Corporation for Assigned Names and Numbers (ICANN). The technical underpinning and standardization of the core protocols is an activity of the non-profit Internet Engineering Task Force (IETF). Terminology The word internetted was used as early as 1849, meaning interconnected or interwoven. The word Internet was used in 1945 by the United States War Department in a radio operator's manual, and in 1974 as the shorthand form of Internetwork. Today, the term Internet most commonly refers to the global system of interconnected computer networks, though it may also refer to any group of smaller networks. The word Internet may be capitalized as a proper noun, although this is becoming less common. This reflects the tendency in English to capitalize new terms and move them to lowercase as they become familiar. The word is sometimes still capitalized to distinguish the global internet from smaller networks, though many publications, including the AP Stylebook since 2016, recommend the lowercase form in every case. In 2016, the Oxford English Dictionary found that, based on a study of around 2.5 billion printed and online sources, "Internet" was capitalized in 54% of cases. The terms Internet and World Wide Web are often used interchangeably; it is common to speak of "going on the Internet" when using a web browser to view web pages. However, the World Wide Web, or the Web, is only one of a large number of Internet services. It is the global collection of web pages, documents and other web resources linked by hyperlinks and URLs. 
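The two principal name spaces described above can be made concrete with Python's standard ipaddress module. This is a small illustration only; the prefix used is a documentation block reserved by RFC 5737, chosen purely as an example of how an allocated block "contains" the addresses delegated within it:

```python
import ipaddress

# The IPv4 address space that ICANN (via IANA and the regional registries)
# allocates is finite: 2**32 addresses in total.
ipv4_space = ipaddress.ip_network("0.0.0.0/0")
print(ipv4_space.num_addresses)  # 4294967296, i.e. 2**32

# 203.0.113.0/24 is a documentation prefix (RFC 5737). Membership tests
# mirror how an allocation contains the individual addresses inside it.
block = ipaddress.ip_network("203.0.113.0/24")
print(ipaddress.ip_address("203.0.113.7") in block)  # True
print(block.num_addresses)                           # 256
```

The DNS name space is managed analogously but hierarchically, by delegation of domains rather than numeric blocks.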
History In the 1960s, computer scientists began developing systems for time-sharing of computer resources. J. C. R. Licklider proposed the idea of a universal network while working at Bolt Beranek & Newman and, later, leading the Information Processing Techniques Office at the Advanced Research Projects Agency (ARPA) of the United States Department of Defense. Research into packet switching,[c] one of the fundamental Internet technologies, started in the work of Paul Baran at RAND in the early 1960s and, independently, Donald Davies at the United Kingdom's National Physical Laboratory in 1965. After the Symposium on Operating Systems Principles in 1967, packet switching from the proposed NPL network was incorporated into the design of the ARPANET, an experimental resource sharing network proposed by ARPA. ARPANET development began with two network nodes which were interconnected between the University of California, Los Angeles and the Stanford Research Institute on 29 October 1969. The third site was at the University of California, Santa Barbara, followed by the University of Utah. By the end of 1971, 15 sites were connected to the young ARPANET. Thereafter, the ARPANET gradually developed into a decentralized communications network, connecting remote centers and military bases in the United States. Other user networks and research networks, such as the Merit Network and CYCLADES, were developed in the late 1960s and early 1970s. Early international collaborations for the ARPANET were rare. Connections were made in 1973 to Norway (NORSAR and, later, NDRE) and to Peter Kirstein's research group at University College London, which provided a gateway to British academic networks, the first internetwork for resource sharing. ARPA projects, the International Network Working Group and commercial initiatives led to the development of various protocols and standards by which multiple separate networks could become a single network, or a network of networks. 
In 1974, Vint Cerf at Stanford University and Bob Kahn at DARPA published a proposal for "A Protocol for Packet Network Intercommunication". Cerf and his graduate students used the term internet as a shorthand for internetwork in RFC 675. The Internet Experiment Notes and later RFCs repeated this use. The work of Louis Pouzin and Robert Metcalfe had important influences on the resulting TCP/IP design. National PTTs and commercial providers developed the X.25 standard and deployed it on public data networks. The ARPANET initially served as a backbone for the interconnection of regional academic and military networks in the United States to enable resource sharing. Access to the ARPANET was expanded in 1981 when the National Science Foundation (NSF) funded the Computer Science Network (CSNET). In 1982, the Internet Protocol Suite (TCP/IP) was standardized, which facilitated worldwide proliferation of interconnected networks. TCP/IP network access expanded again in 1986 when the National Science Foundation Network (NSFNet) provided access to supercomputer sites in the United States for researchers, first at speeds of 56 kbit/s and later at 1.5 Mbit/s and 45 Mbit/s. The NSFNet expanded into academic and research organizations in Europe, Australia, New Zealand and Japan in 1988–89. Although other network protocols such as UUCP and PTT public data networks had global reach well before this time, this marked the beginning of the Internet as an intercontinental network. Commercial Internet service providers emerged in 1989 in the United States and Australia. The ARPANET was decommissioned in 1990. The linking of commercial networks and enterprises by the early 1990s, as well as the advent of the World Wide Web, marked the beginning of the transition to the modern Internet. 
Steady advances in semiconductor technology and optical networking created new economic opportunities for commercial involvement in the expansion of the network in its core and for delivering services to the public. In mid-1989, MCI Mail and Compuserve established connections to the Internet, delivering email and public access products to the half million users of the Internet. Just months later, on 1 January 1990, PSInet launched an alternate Internet backbone for commercial use; one of the networks that added to the core of the commercial Internet of later years. In March 1990, the first high-speed T1 (1.5 Mbit/s) link between the NSFNET and Europe was installed between Cornell University and CERN, allowing much more robust communications than was possible with satellites. Later in 1990, Tim Berners-Lee began writing WorldWideWeb, the first web browser, after two years of lobbying CERN management. By Christmas 1990, Berners-Lee had built all the tools necessary for a working Web: the HyperText Transfer Protocol (HTTP) 0.9, the HyperText Markup Language (HTML), the first Web browser (which was also an HTML editor and could access Usenet newsgroups and FTP files), the first HTTP server software (later known as CERN httpd), the first web server, and the first Web pages that described the project itself. In 1991 the Commercial Internet eXchange was founded, allowing PSInet to communicate with the other commercial networks CERFnet and Alternet. Stanford Federal Credit Union was the first financial institution to offer online Internet banking services to all of its members in October 1994. In 1996, OP Financial Group, also a cooperative bank, became the second online bank in the world and the first in Europe. By 1995, the Internet was fully commercialized in the U.S. when the NSFNet was decommissioned, removing the last restrictions on use of the Internet to carry commercial traffic. 
As technology advanced and commercial opportunities fueled reciprocal growth, the volume of Internet traffic began to exhibit growth characteristics similar to those of the scaling of MOS transistors, exemplified by Moore's law, doubling every 18 months. This growth, formalized as Edholm's law, was catalyzed by advances in MOS technology, laser light wave systems, and noise performance. Since 1995, the Internet has tremendously impacted culture and commerce, including the rise of near-instant communication by email, instant messaging, telephony (Voice over Internet Protocol or VoIP), two-way interactive video calls, and the World Wide Web. Increasing amounts of data are transmitted at higher and higher speeds over fiber optic networks operating at 1 Gbit/s, 10 Gbit/s, or more. The Internet continues to grow, driven by ever-greater amounts of online information and knowledge, commerce, entertainment and social networking services. During the late 1990s, it was estimated that traffic on the public Internet grew by 100 percent per year, while the mean annual growth in the number of Internet users was thought to be between 20% and 50%. This growth is often attributed to the lack of central administration, which allows organic growth of the network, as well as the non-proprietary nature of the Internet protocols, which encourages vendor interoperability and prevents any one company from exerting too much control over the network. In November 2006, the Internet was included on USA Today's list of the New Seven Wonders. As of 31 March 2011, the estimated total number of Internet users was 2.095 billion (30% of world population). It is estimated that in 1993 the Internet carried only 1% of the information flowing through two-way telecommunication. By 2000 this figure had grown to 51%, and by 2007 more than 97% of all telecommunicated information was carried over the Internet. 
Modern smartphones can access the Internet through cellular carrier networks, and internet usage by mobile and tablet devices exceeded desktop worldwide for the first time in October 2016. As of 2018, 80% of the world's population was covered by a 4G network. The International Telecommunication Union (ITU) estimated that, by the end of 2017, 48% of individual users regularly connected to the Internet, up from 34% in 2012. Mobile Internet connectivity has played an important role in expanding access in recent years, especially in Asia and the Pacific and in Africa. The number of unique mobile cellular subscriptions increased from 3.9 billion in 2012 to 4.8 billion in 2016, two-thirds of the world's population, with more than half of subscriptions located in Asia and the Pacific. The limits that users face on accessing information via mobile applications coincide with a broader process of fragmentation of the Internet. Fragmentation restricts access to media content and tends to affect the poorest users the most. One solution, zero-rating, is the practice of Internet service providers allowing users free connectivity to access specific content or applications. Social impact The Internet has enabled new forms of social interaction, activities, and social associations, giving rise to the scholarly study of the sociology of the Internet. Between 2000 and 2009, the number of Internet users globally rose from 390 million to 1.9 billion. By 2010, 22% of the world's population had access to computers with 1 billion Google searches every day, 300 million Internet users reading blogs, and 2 billion videos viewed daily on YouTube. In 2014 the world's Internet users surpassed 3 billion or 44 percent of world population, but two-thirds came from the richest countries, with 78 percent of Europeans using the Internet, followed by 57 percent of the Americas. 
However, by 2018, Asia alone accounted for 51% of all Internet users, with 2.2 billion out of the 4.3 billion Internet users in the world. China's Internet users surpassed a major milestone in 2018, when the country's Internet regulatory authority, China Internet Network Information Centre, announced that China had 802 million users. China was followed by India, with some 700 million users, with the United States third with 275 million users. However, in terms of penetration, in 2022, China had a 70% penetration rate compared to India's 60% and the United States's 90%. In 2022, 54% of the world's Internet users were based in Asia, 14% in Europe, 7% in North America, 10% in Latin America and the Caribbean, 11% in Africa, 4% in the Middle East and 1% in Oceania. In 2019, Kuwait, Qatar, the Falkland Islands, Bermuda and Iceland had the highest Internet penetration by the number of users, with 93% or more of the population with access. As of 2022, it was estimated that 5.4 billion people use the Internet, more than two-thirds of the world's population. Early computer systems were limited to the characters in the American Standard Code for Information Interchange (ASCII), a subset of the Latin alphabet. After English (27%), the most requested languages on the World Wide Web are Chinese (25%), Spanish (8%), Japanese (5%), Portuguese and German (4% each), Arabic, French and Russian (3% each), and Korean (2%). Modern character encoding standards, such as Unicode, allow for development and communication in the world's widely used languages. However, some glitches such as mojibake (incorrect display of some languages' characters) still remain. 
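The mojibake phenomenon mentioned above is easy to reproduce: it arises whenever bytes encoded in one character set are decoded in another. A minimal demonstration in Python:

```python
# Encoding "café" as UTF-8 and then misreading the bytes as Latin-1
# produces the classic "cafÃ©" garbling; re-encoding with the wrong codec
# and decoding with the right one restores the original text.
text = "café"
raw = text.encode("utf-8")          # b'caf\xc3\xa9'
garbled = raw.decode("latin-1")
print(garbled)                      # cafÃ©
restored = garbled.encode("latin-1").decode("utf-8")
print(restored == text)             # True
```

Unicode avoids this class of error only when producer and consumer agree on the encoding, which is why protocols and documents declare one explicitly.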
Several neologisms exist that refer to Internet users: Netizen (as in "citizen of the net") refers to those actively involved in improving online communities, the Internet in general or surrounding political affairs and rights such as free speech; Internaut refers to operators or technically highly capable users of the Internet; digital citizen refers to a person using the Internet in order to engage in society, politics, and government participation. The Internet allows greater flexibility in working hours and location, especially with the spread of unmetered high-speed connections. The Internet can be accessed almost anywhere by numerous means, including through mobile Internet devices. Mobile phones, datacards, handheld game consoles and cellular routers allow users to connect to the Internet wirelessly.[citation needed] Educational material at all levels from pre-school (e.g. CBeebies) to post-doctoral (e.g. scholarly literature through Google Scholar) is available on websites. The internet has facilitated the development of virtual universities and distance education, enabling both formal and informal education. The Internet allows researchers to conduct research remotely via virtual laboratories, with profound changes in reach and generalizability of findings as well as in communication between scientists and in the publication of results. By the late 2010s the Internet had been described as "the main source of scientific information" for the majority of the global North population. Wikis have also been used in the academic community for sharing and dissemination of information across institutional and international boundaries. In those settings, they have been found useful for collaboration on grant writing, strategic planning, departmental documentation, and committee work. The United States Patent and Trademark Office uses a wiki to allow the public to collaborate on finding prior art relevant to examination of pending patent applications. 
Queens, New York has used a wiki to allow citizens to collaborate on the design and planning of a local park. The English Wikipedia has the largest user base among wikis on the World Wide Web and ranks in the top 10 among all sites in terms of traffic. The Internet has been a major outlet for leisure activity since its inception, with entertaining social experiments such as MUDs and MOOs being conducted on university servers, and humor-related Usenet groups receiving much traffic. Many Internet forums have sections devoted to games and funny videos. Another area of leisure activity on the Internet is multiplayer gaming. This form of recreation creates communities, where people of all ages and origins enjoy the fast-paced world of multiplayer games. These range from MMORPG to first-person shooters, from role-playing video games to online gambling. While online gaming has been around since the 1970s, modern modes of online gaming began with subscription services such as GameSpy and MPlayer. Streaming media is the real-time delivery of digital media for immediate consumption or enjoyment by end users. Streaming companies (such as Netflix, Disney+, Amazon's Prime Video, Mubi, Hulu, and Apple TV+) now dominate the entertainment industry, eclipsing traditional broadcasters. Audio streamers such as Spotify and Apple Music also have significant market share in the audio entertainment market. Video sharing websites are also a major factor in the entertainment ecosystem. YouTube was founded on 15 February 2005 and is now the leading website for free streaming video with more than two billion users. It uses a web player to stream and show video files. YouTube users watch hundreds of millions, and upload hundreds of thousands, of videos daily. 
Other video sharing websites include Vimeo, Instagram and TikTok.[citation needed] Although many governments have attempted to restrict both Internet pornography and online gambling, this has generally failed to stop their widespread popularity. A number of advertising-funded ostensible video sharing websites known as "tube sites" have been created to host shared pornographic video content. Due to laws requiring the documentation of the origin of pornography, these websites now largely operate in conjunction with pornographic movie studios and their own independent creator networks, acting as de facto video streaming services. Major players in this field include the market leader Aylo, the operator of PornHub and numerous other branded sites, as well as other independent operators such as xHamster and Xvideos. As of 2023, Internet traffic to pornographic video sites rivalled that of mainstream video streaming and sharing services. Remote work is facilitated by tools such as groupware, virtual private networks, conference calling, videotelephony, and VoIP so that work may be performed from any location, such as the worker's home.[citation needed] The spread of low-cost Internet access in developing countries has opened up new possibilities for peer-to-peer charities, which allow individuals to contribute small amounts to charitable projects for other individuals. Websites, such as DonorsChoose and GlobalGiving, allow small-scale donors to direct funds to individual projects of their choice. A popular twist on Internet-based philanthropy is the use of peer-to-peer lending for charitable purposes. Kiva pioneered this concept in 2005, offering the first web-based service to publish individual loan profiles for funding. The low cost and nearly instantaneous sharing of ideas, knowledge, and skills have made collaborative work dramatically easier, with the help of collaborative software, which allows groups to easily form, cheaply communicate, and share ideas. 
A prominent example of such collaboration is the free software movement, which has produced, among other things, Linux, Mozilla Firefox, and OpenOffice.org (later forked into LibreOffice).[citation needed] Content management systems allow collaborating teams to work on shared sets of documents simultaneously without accidentally destroying each other's work.[citation needed] The internet also allows for cloud computing, virtual private networks, remote desktops, and remote work.[citation needed] The online disinhibition effect describes the tendency of many individuals to behave more stridently or offensively online than they would in person. A significant number of feminist women have been the target of various forms of harassment, ranging from insults and hate speech to, in extreme cases, rape and death threats, in response to posts they have made on social media. Social media companies have been criticized in the past for not doing enough to aid victims of online abuse. Children also face dangers online such as cyberbullying and approaches by sexual predators, who sometimes pose as children themselves. Due to naivety, they may also post personal information about themselves online, which could put them or their families at risk unless warned not to do so. Many parents choose to enable Internet filtering or supervise their children's online activities in an attempt to protect their children from pornography or violent content on the Internet. The most popular social networking services commonly forbid users under the age of 13. However, these policies can be circumvented by registering an account with a false birth date, and a significant number of children aged under 13 join such sites.[citation needed] Social networking services for younger children, which claim to provide better levels of protection for children, also exist. Internet usage has been correlated to users' loneliness. 
Lonely people tend to use the Internet as an outlet for their feelings and to share their stories with others, such as in the "I am lonely will anyone speak to me" thread.[citation needed] Cyberslacking can become a drain on corporate resources; employees spend a significant amount of time surfing the Web while at work. Internet addiction disorder is excessive computer use that interferes with daily life. Nicholas G. Carr believes that Internet use has other effects on individuals, for instance improving skills of scan-reading and interfering with the deep thinking that leads to true creativity. Electronic business encompasses business processes spanning the entire value chain: purchasing, supply chain management, marketing, sales, customer service, and business relationships. E-commerce seeks to add revenue streams using the Internet to build and enhance relationships with clients and partners. According to International Data Corporation, the size of worldwide e-commerce, when global business-to-business and -consumer transactions are combined, equated to $16 trillion in 2013. A report by Oxford Economics added those two together to estimate the total size of the digital economy at $20.4 trillion, equivalent to roughly 13.8% of global sales. While much has been written of the economic advantages of Internet-enabled commerce, there is also evidence that some aspects of the Internet such as maps and location-aware services may serve to reinforce economic inequality and the digital divide. Electronic commerce may be responsible for consolidation and the decline of mom-and-pop, brick and mortar businesses resulting in increases in income inequality. A 2013 Institute for Local Self-Reliance report states that brick-and-mortar retailers employ 47 people for every $10 million in sales, while Amazon employs only 14. Similarly, the 700-employee room rental start-up Airbnb was valued at $10 billion in 2014, about half as much as Hilton Worldwide, which employs 152,000 people. 
At that time, Uber employed 1,000 full-time employees and was valued at $18.2 billion, about the same valuation as Avis Rent a Car and The Hertz Corporation combined, which together employed almost 60,000 people. Advertising on popular web pages can be lucrative, as can e-commerce. Online advertising is a form of marketing and advertising which uses the Internet to deliver promotional marketing messages to consumers. It includes email marketing, search engine marketing (SEM), social media marketing, many types of display advertising (including web banner advertising), and mobile advertising. In 2011, Internet advertising revenues in the United States surpassed those of cable television and nearly exceeded those of broadcast television. Many common online advertising practices are controversial and increasingly subject to regulation. The Internet has achieved new relevance as a political tool. The presidential campaign of Howard Dean in 2004 in the United States was notable for its success in soliciting donations via the Internet. Many political groups use the Internet as a new method of organizing to carry out their missions, giving rise to Internet activism. Social media websites, such as Facebook and Twitter, helped people organize the Arab Spring, by helping activists organize protests, communicate grievances, and disseminate information. Many have understood the Internet as an extension of the Habermasian notion of the public sphere, observing how network communication technologies provide something like a global civic forum. However, incidents of politically motivated Internet censorship have now been recorded in many countries, including western democracies. E-government is the use of technological communications devices, such as the Internet, to provide public services to citizens and other persons in a country or region. 
E-government offers opportunities for more direct and convenient citizen access to government and for government provision of services directly to citizens. Cybersectarianism is a new organizational form that involves: highly dispersed small groups of practitioners that may remain largely anonymous within the larger social context and operate in relative secrecy, while still linked remotely to a larger network of believers who share a set of practices and texts, and often a common devotion to a particular leader. Overseas supporters provide funding and support; domestic practitioners distribute tracts, participate in acts of resistance, and share information on the internal situation with outsiders. Collectively, members and practitioners of such sects construct viable virtual communities of faith, exchanging personal testimonies and engaging in the collective study via email, online chat rooms, and web-based message boards. In particular, the British government has raised concerns about the prospect of young British Muslims being indoctrinated into Islamic extremism by material on the Internet, being persuaded to join terrorist groups such as the so-called "Islamic State", and then potentially committing acts of terrorism on returning to Britain after fighting in Syria or Iraq.[citation needed] Applications and services The Internet carries many applications and services, most prominently the World Wide Web, including social media, electronic mail, mobile applications, multiplayer online games, Internet telephony, file sharing, and streaming media services. The World Wide Web is a global collection of documents, images, multimedia, applications, and other resources, logically interrelated by hyperlinks and referenced with Uniform Resource Identifiers (URIs), which provide a global system of named references. URIs symbolically identify services, web servers, databases, and the documents and resources that they can provide. 
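The URI structure described above can be seen directly with Python's standard urllib.parse, which splits a URL into the components defined by RFC 3986. The URL here is just an illustrative example:

```python
from urllib.parse import urlparse

# urlparse decomposes a URL into scheme, authority (netloc), path,
# query, and fragment, the named parts of the generic URI syntax.
u = urlparse("https://en.wikipedia.org/wiki/Internet?action=view#History")
print(u.scheme)    # https
print(u.netloc)    # en.wikipedia.org
print(u.path)      # /wiki/Internet
print(u.query)     # action=view
print(u.fragment)  # History
```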
HyperText Transfer Protocol (HTTP) is the main access protocol of the World Wide Web. Web services also use HTTP for communication between software systems, for information transfer and for sharing and exchanging business data and logistics; it is one of many protocols that can be used for communication on the Internet. World Wide Web browser software, such as Microsoft Edge, Mozilla Firefox, Opera, Apple's Safari, and Google Chrome, enable users to navigate from one web page to another via the hyperlinks embedded in the documents. These documents may also contain computer data, including graphics, sounds, text, video, multimedia and interactive content. Client-side scripts can include animations, games, office applications and scientific demonstrations. Email is an important communications service available via the Internet. The concept of sending electronic text messages between parties, analogous to mailing letters or memos, predates the creation of the Internet. Internet telephony is a common communications service realized with the Internet. The name of the principal internetworking protocol, the Internet Protocol, lends its name to voice over Internet Protocol (VoIP).[citation needed] VoIP systems now dominate many markets, being as easy and convenient as a traditional telephone, while having substantial cost savings, especially over long distances. File sharing is the practice of transferring large amounts of data in the form of computer files across the Internet, for example via file servers. The load of bulk downloads to many users can be eased by the use of "mirror" servers or peer-to-peer networks. Access to the file may be controlled by user authentication, the transit of the file over the Internet may be obscured by encryption, and money may change hands for access to the file. The price can be paid by the remote charging of funds from, for example, a credit card whose details are also passed—usually fully encrypted—across the Internet. 
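HTTP's role as the Web's main access protocol is easiest to see at the wire level, since HTTP/1.1 messages are plain text. The sketch below hand-builds a request and parses a canned response; the host and body are placeholders, and a real client would use an HTTP library rather than string assembly:

```python
# An HTTP/1.1 request is a request line, headers, and a blank line,
# each terminated by CRLF.
request = (
    "GET /index.html HTTP/1.1\r\n"
    "Host: example.com\r\n"
    "Connection: close\r\n"
    "\r\n"
)

# A server's reply has the same shape: status line, headers, blank line, body.
response = "HTTP/1.1 200 OK\r\nContent-Type: text/html\r\n\r\n<html>...</html>"
status_line, _, rest = response.partition("\r\n")
headers_blob, _, body = rest.partition("\r\n\r\n")
print(status_line)   # HTTP/1.1 200 OK
print(headers_blob)  # Content-Type: text/html
print(body)          # <html>...</html>
```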
The origin and authenticity of the file received may be checked by a digital signature. Governance The Internet is a global network that comprises many voluntarily interconnected autonomous networks. It operates without a central governing body. The technical underpinning and standardization of the core protocols (IPv4 and IPv6) is an activity of the Internet Engineering Task Force (IETF), a non-profit organization of loosely affiliated international participants that anyone may associate with by contributing technical expertise. While the hardware components in the Internet infrastructure can often be used to support other software systems, it is the design and the standardization process of the software that characterizes the Internet and provides the foundation for its scalability and success. The responsibility for the architectural design of the Internet software systems has been assumed by the IETF. The IETF conducts standard-setting work groups, open to any individual, about the various aspects of Internet architecture. The resulting contributions and standards are published as Request for Comments (RFC) documents on the IETF web site. The principal methods of networking that enable the Internet are contained in specially designated RFCs that constitute the Internet Standards. Other less rigorous documents are simply informative, experimental, or historical, or document the best current practices when implementing Internet technologies. To maintain interoperability, the principal name spaces of the Internet are administered by the Internet Corporation for Assigned Names and Numbers (ICANN). ICANN is governed by an international board of directors drawn from across the Internet technical, business, academic, and other non-commercial communities. The organization coordinates the assignment of unique identifiers for use on the Internet, including domain names, IP addresses, application port numbers in the transport protocols, and many other parameters. 
Globally unified name spaces are essential for maintaining the global reach of the Internet. This role of ICANN distinguishes it as perhaps the only central coordinating body for the global Internet. The National Telecommunications and Information Administration, an agency of the United States Department of Commerce, had final approval over changes to the DNS root zone until the IANA stewardship transition on 1 October 2016. Regional Internet registries (RIRs) were established for five regions of the world to assign IP address blocks and other Internet parameters to local registries, such as Internet service providers, from a designated pool of addresses set aside for each region: AFRINIC (Africa), APNIC (Asia–Pacific), ARIN (North America), LACNIC (Latin America and parts of the Caribbean), and RIPE NCC (Europe, the Middle East, and Central Asia). The Internet Society (ISOC) was founded in 1992 with a mission to "assure the open development, evolution and use of the Internet for the benefit of all people throughout the world". Its members include individuals as well as corporations, organizations, governments, and universities. Among other activities ISOC provides an administrative home for a number of less formally organized groups that are involved in developing and managing the Internet, including: the Internet Engineering Task Force (IETF), Internet Architecture Board (IAB), Internet Engineering Steering Group (IESG), Internet Research Task Force (IRTF), and Internet Research Steering Group (IRSG). On 16 November 2005, the United Nations-sponsored World Summit on the Information Society in Tunis established the Internet Governance Forum (IGF) to discuss Internet-related issues. Infrastructure The communications infrastructure of the Internet consists of its hardware components and a system of software layers that control various aspects of the architecture. As with any computer network, the Internet physically consists of routers, media (such as cabling and radio links), repeaters, and modems.
However, as an example of internetworking, many of the network nodes are not necessarily Internet equipment per se. Internet packets are carried by other full-fledged networking protocols, with the Internet acting as a homogeneous networking standard, running across heterogeneous hardware, with the packets guided to their destinations by IP routers. Internet service providers (ISPs) establish worldwide connectivity between individual networks at various levels of scope. At the top of the routing hierarchy are the tier 1 networks, large telecommunication companies that exchange traffic directly with each other via very high speed fiber-optic cables, governed by peering agreements. Tier 2 and lower-level networks buy Internet transit from other providers to reach at least some parties on the global Internet, though they may also engage in peering. End-users who only access the Internet when needed to perform a function or obtain information represent the bottom of the routing hierarchy. An ISP may use a single upstream provider for connectivity, or implement multihoming to achieve redundancy and load balancing. Internet exchange points are major traffic exchanges with physical connections to multiple ISPs. Large organizations, such as academic institutions, large enterprises, and governments, may perform the same function as ISPs, engaging in peering and purchasing transit on behalf of their internal networks. Research networks tend to interconnect with large subnetworks such as GEANT, GLORIAD, Internet2, and the UK's national research and education network, JANET. Common methods of Internet access by users include broadband over coaxial cable, fiber optics or copper wires, Wi-Fi, satellite, and cellular telephone technology. Grassroots efforts have led to wireless community networks.
Commercial Wi-Fi services that cover large areas are available in many cities, such as New York, London, Vienna, Toronto, San Francisco, Philadelphia, Chicago and Pittsburgh. Most servers that provide internet services are today hosted in data centers, and content is often accessed through high-performance content delivery networks. Colocation centers often host private peering connections between their customers, internet transit providers, cloud providers, meet-me rooms for connecting customers together, Internet exchange points, and landing points and terminal equipment for fiber optic submarine communication cables, connecting the internet. Internet Protocol Suite The Internet standards describe a framework known as the Internet protocol suite (also called TCP/IP, based on its first two components). This is a suite of protocols that are ordered into a set of four conceptual layers by the scope of their operation, originally documented in RFC 1122 and RFC 1123: from lowest to highest, the link layer, the internet layer, the transport layer, and the application layer. The most prominent component of the Internet model is the Internet Protocol. IP enables internetworking, essentially establishing the Internet itself. Two versions of the Internet Protocol exist, IPv4 and IPv6. Aside from the complex array of physical connections that make up its infrastructure, the Internet is facilitated by bi- or multi-lateral commercial contracts (e.g., peering agreements), and by technical specifications or protocols that describe the exchange of data over the network. For locating individual computers on the network, the Internet provides IP addresses. IP addresses are used by the Internet infrastructure to direct internet packets to their destinations. They consist of fixed-length numbers, which are found within the packet. IP addresses are generally assigned to equipment either automatically via the Dynamic Host Configuration Protocol, or are configured manually. The Domain Name System (DNS) converts user-entered domain names (e.g.
"en.wikipedia.org") into IP addresses.[citation needed] Internet Protocol version 4 (IPv4) defines an IP address as a 32-bit number. IPv4 is the initial version used on the first generation of the Internet and is still in dominant use. It was designed in 1981 to address up to ≈4.3 billion (109) hosts. However, the explosive growth of the Internet has led to IPv4 address exhaustion, which entered its final stage in 2011, when the global IPv4 address allocation pool was exhausted. Because of the growth of the Internet and the depletion of available IPv4 addresses, a new version of IP IPv6, was developed in the mid-1990s, which provides vastly larger addressing capabilities and more efficient routing of Internet traffic. IPv6 uses 128 bits for the IP address and was standardized in 1998. IPv6 deployment has been ongoing since the mid-2000s and is currently in growing deployment around the world, since Internet address registries began to urge all resource managers to plan rapid adoption and conversion. By design, IPv6 is not directly interoperable with IPv4. Instead, it establishes a parallel version of the Internet not directly accessible with IPv4 software. Thus, translation facilities exist for internetworking, and some nodes have duplicate networking software for both networks. Essentially all modern computer operating systems support both versions of the Internet Protocol.[citation needed] Network infrastructure, however, has been lagging in this development.[citation needed] A subnet or subnetwork is a logical subdivision of an IP network.: 1, 16 Computers that belong to a subnet are addressed with an identical most-significant bit-group in their IP addresses. This results in the logical division of an IP address into two fields, the network number or routing prefix and the rest field or host identifier. 
The rest field is an identifier for a specific host or network interface. The routing prefix may be expressed in Classless Inter-Domain Routing (CIDR) notation written as the first address of a network, followed by a slash character (/), and ending with the bit-length of the prefix. For example, 198.51.100.0/24 is the prefix of the Internet Protocol version 4 network starting at the given address, having 24 bits allocated for the network prefix, and the remaining 8 bits reserved for host addressing. Addresses in the range 198.51.100.0 to 198.51.100.255 belong to this network. The IPv6 address specification 2001:db8::/32 is a large address block with 2⁹⁶ addresses, having a 32-bit routing prefix. For IPv4, a network may also be characterized by its subnet mask or netmask, which is the bitmask that, when applied by a bitwise AND operation to any IP address in the network, yields the routing prefix. Subnet masks are also expressed in dot-decimal notation like an address. For example, 255.255.255.0 is the subnet mask for the prefix 198.51.100.0/24. Computers and routers use routing tables in their operating system to forward IP packets to reach a node on a different subnetwork. Routing tables are maintained by manual configuration or automatically by routing protocols. End-nodes typically use a default route that points toward an ISP providing transit, while ISP routers use the Border Gateway Protocol to establish the most efficient routing across the complex connections of the global Internet. The default gateway is the node that serves as the forwarding host (router) to other networks when no other route specification matches the destination IP address of a packet. Security Internet resources, hardware, and software components are the target of criminal or malicious attempts to gain unauthorized control to cause interruptions, commit fraud, engage in blackmail or access private information.
Malware is malicious software used and distributed via the Internet. It includes computer viruses, which are copied with the help of humans; computer worms, which copy themselves automatically; software for denial-of-service attacks; ransomware; botnets; and spyware that reports on the activity and typing of users. Usually, these activities constitute cybercrime. Defense theorists have also speculated about the possibility of hackers employing similar methods on a large scale in cyber warfare. Malware poses serious problems to individuals and businesses on the Internet. According to Symantec's 2018 Internet Security Threat Report (ISTR), the number of malware variants rose to 669,947,865 in 2017, twice as many as in 2016. Cybercrime, which includes malware attacks as well as other crimes committed by computer, was predicted to cost the world economy US$6 trillion in 2021, and is increasing at a rate of 15% per year. Since 2021, malware has been designed to target computer systems that run critical infrastructure such as the electricity distribution network. Malware can be designed to evade antivirus software detection algorithms. The vast majority of computer surveillance involves the monitoring of data and traffic on the Internet. In the United States, for example, under the Communications Assistance For Law Enforcement Act, all phone calls and broadband Internet traffic (emails, web traffic, instant messaging, etc.) are required to be available for unimpeded real-time monitoring by Federal law enforcement agencies. Under the Act, all U.S.
telecommunications providers are required to install packet sniffing technology to allow Federal law enforcement and intelligence agencies to intercept all of their customers' broadband Internet and VoIP traffic.[d] The large amount of data gathered from packet capture requires surveillance software that filters and reports relevant information, such as the use of certain words or phrases, the access to certain types of web sites, or communicating via email or chat with certain parties. Agencies, such as the Information Awareness Office, NSA, GCHQ and the FBI, spend billions of dollars per year to develop, purchase, implement, and operate systems for interception and analysis of data. Similar systems are operated by Iranian secret police to identify and suppress dissidents. The required hardware and software were allegedly installed by German Siemens AG and Finnish Nokia. Some governments, such as those of Myanmar, Iran, North Korea, Mainland China, Saudi Arabia and the United Arab Emirates, restrict access to content on the Internet within their territories, especially to political and religious content, with domain name and keyword filters. In Norway, Denmark, Finland, and Sweden, major Internet service providers have voluntarily agreed to restrict access to sites listed by authorities. While this list of forbidden resources is supposed to contain only known child pornography sites, the content of the list is secret. Many countries, including the United States, have enacted laws against the possession or distribution of certain material, such as child pornography, via the Internet but do not mandate filter software. 
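A domain-and-keyword filter of the kind described above can be sketched in a few lines; the blocklist entries here are invented placeholders, not real filter data:

```python
from urllib.parse import urlparse

# Hypothetical blocklists, standing in for the secret government or
# ISP-maintained lists described above.
BLOCKED_DOMAINS = {"blocked.example"}
BLOCKED_KEYWORDS = {"forbidden-topic"}

def is_blocked(url: str) -> bool:
    """Return True if the URL matches a blocked domain or keyword."""
    host = urlparse(url).hostname or ""
    # Domain filter: match the host itself or any parent domain,
    # so sub.blocked.example is caught by a blocked.example entry.
    parts = host.split(".")
    parents = {".".join(parts[i:]) for i in range(len(parts))}
    if parents & BLOCKED_DOMAINS:
        return True
    # Keyword filter: scan the whole URL string.
    return any(kw in url for kw in BLOCKED_KEYWORDS)

print(is_blocked("http://sub.blocked.example/page"))  # True
print(is_blocked("http://example.org/news"))          # False
```

Real deployments apply the same matching at the DNS resolver or at a transparent proxy rather than in the client, but the decision logic is of this shape.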
Many free or commercially available software programs, called content-control software, are available to users to block offensive content on individual computers or networks, in order to limit access by children to pornographic material or depictions of violence. Performance As the Internet is a heterogeneous network, its physical characteristics, including, for example, the data transfer rates of connections, vary widely. It exhibits emergent phenomena that depend on its large-scale organization. [Figure: global Internet traffic volume in petabytes per month, 1990–2015.] The volume of Internet traffic is difficult to measure because no single point of measurement exists in the multi-tiered, non-hierarchical topology. Traffic data may be estimated from the aggregate volume through the peering points of the Tier 1 network providers, but traffic that stays local in large provider networks may not be accounted for. An Internet blackout or outage can be caused by local signaling interruptions. Disruptions of submarine communications cables may cause blackouts or slowdowns to large areas, such as in the 2008 submarine cable disruption. Less-developed countries are more vulnerable due to the small number of high-capacity links. Land cables are also vulnerable, as in 2011 when a woman digging for scrap metal severed most connectivity for the nation of Armenia. Internet blackouts affecting almost entire countries can be achieved by governments as a form of Internet censorship, as in the blockage of the Internet in Egypt, whereby approximately 93% of networks were without access in 2011 in an attempt to stop mobilization for anti-government protests.
Estimates of the Internet's electricity usage have been the subject of controversy, according to a 2014 peer-reviewed research paper that found claims differing by a factor of 20,000 published in the literature during the preceding decade, ranging from 0.0064 kilowatt-hours per gigabyte transferred (kWh/GB) to 136 kWh/GB. The researchers attributed these discrepancies mainly to the year of reference (i.e. whether efficiency gains over time had been taken into account) and to whether "end devices such as personal computers and servers are included" in the analysis. In 2011, academic researchers estimated the overall energy used by the Internet to be between 170 and 307 GW, less than two percent of the energy used by humanity. This estimate included the energy needed to build, operate, and periodically replace the estimated 750 million laptops, a billion smartphones and 100 million servers worldwide, as well as the energy that routers, cell towers, optical switches, Wi-Fi transmitters and cloud storage devices use when transmitting Internet traffic. According to a non-peer-reviewed study published in 2018 by The Shift Project (a French think tank funded by corporate sponsors), nearly 4% of global CO2 emissions could be attributed to global data transfer and the necessary infrastructure. The study also said that online video streaming alone accounted for 60% of this data transfer and therefore contributed over 300 million tons of CO2 emissions per year, and argued for new "digital sobriety" regulations restricting the use and size of video files.
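As a quick check of the spread quoted above, the ratio between the highest and lowest published intensity figures can be computed directly:

```python
# Endpoints of the published estimates of Internet electricity
# intensity, in kWh per gigabyte transferred (figures quoted above).
low_kwh_per_gb = 0.0064
high_kwh_per_gb = 136.0

ratio = high_kwh_per_gb / low_kwh_per_gb
print(round(ratio))  # 21250, i.e. roughly the factor of 20,000 reported
```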
======================================== |
[SOURCE: https://en.wikipedia.org/wiki/PlayStation_(console)#cite_ref-FOOTNOTEMcFerran201513_144-0] | [TOKENS: 10728] |
Contents PlayStation (console) The PlayStation[a] (codenamed PSX, abbreviated as PS, and retroactively PS1 or PS one) is a home video game console developed and marketed by Sony Computer Entertainment. It was released in Japan on 3 December 1994, followed by North America on 9 September 1995, Europe on 29 September 1995, and other regions following thereafter. As a fifth-generation console, the PlayStation primarily competed with the Nintendo 64 and the Sega Saturn. Sony began developing the PlayStation after a failed venture with Nintendo to create a CD-ROM peripheral for the Super Nintendo Entertainment System in the early 1990s. The console was primarily designed by Ken Kutaragi and Sony Computer Entertainment in Japan, while additional development was outsourced in the United Kingdom. An emphasis on 3D polygon graphics was placed at the forefront of the console's design. PlayStation game production was designed to be streamlined and inclusive, enticing the support of many third party developers. The console proved popular for its extensive game library, popular franchises, low retail price, and aggressive youth marketing which advertised it as the preferable console for adolescents and adults. Critically acclaimed games that defined the console include Gran Turismo, Crash Bandicoot, Spyro the Dragon, Tomb Raider, Resident Evil, Metal Gear Solid, Tekken 3, and Final Fantasy VII. Sony ceased production of the PlayStation on 23 March 2006—over eleven years after it had been released, and in the same year the PlayStation 3 debuted. More than 4,000 PlayStation games were released, with cumulative sales of 962 million units. The PlayStation signaled Sony's rise to power in the video game industry. It received acclaim and sold strongly; in less than a decade, it became the first computer entertainment platform to ship over 100 million units. Its use of compact discs heralded the game industry's transition from cartridges. 
The PlayStation's success led to a line of successors, beginning with the PlayStation 2 in 2000. In the same year, Sony released a smaller and cheaper model, the PS one. History The PlayStation was conceived by Ken Kutaragi, a Sony executive who managed a hardware engineering division and was later dubbed "the Father of the PlayStation". Kutaragi's interest in working with video games stemmed from seeing his daughter play games on Nintendo's Famicom. Kutaragi convinced Nintendo to use his SPC-700 sound processor in the Super Nintendo Entertainment System (SNES) through a demonstration of the processor's capabilities. His willingness to work with Nintendo was derived from both his admiration of the Famicom and conviction in video game consoles becoming the main home-use entertainment systems. Although Kutaragi was nearly fired because he worked with Nintendo without Sony's knowledge, president Norio Ohga recognised the potential in Kutaragi's chip and decided to keep him as a protégé. The inception of the PlayStation dates back to a 1988 joint venture between Nintendo and Sony. Nintendo had produced floppy disk technology to complement cartridges in the form of the Family Computer Disk System, and wanted to continue this complementary storage strategy for the SNES. Since Sony was already contracted to produce the SPC-700 sound processor for the SNES, Nintendo contracted Sony to develop a CD-ROM add-on, tentatively titled the "Play Station" or "SNES-CD". The PlayStation name had already been trademarked by Yamaha, but Nobuyuki Idei liked it so much that he agreed to acquire it for an undisclosed sum rather than search for an alternative. Sony was keen to obtain a foothold in the rapidly expanding video game market. Having been the primary manufacturer of the MSX home computer format, Sony had wanted to use their experience in consumer electronics to produce their own video game hardware. 
Although the initial agreement between Nintendo and Sony was about producing a CD-ROM drive add-on, Sony had also planned to develop a SNES-compatible Sony-branded console. This iteration was intended to be more of a home entertainment system, playing both SNES cartridges and a new CD format named the "Super Disc", which Sony would design. Under the agreement, Sony would retain sole international rights to every Super Disc game, giving them a large degree of control despite Nintendo's leading position in the video game market. Furthermore, Sony would also be the sole beneficiary of licensing related to music and film software, which it had been aggressively pursuing as a secondary application. The Play Station was to be announced at the 1991 Consumer Electronics Show (CES) in Las Vegas. However, Nintendo president Hiroshi Yamauchi was wary of Sony's increasing leverage at this point and deemed the original 1988 contract unacceptable upon realising it essentially handed Sony control over all games written on the SNES CD-ROM format. Although Nintendo was dominant in the video game market, Sony possessed a superior research and development department. Wanting to protect Nintendo's existing licensing structure, Yamauchi cancelled all plans for the joint Nintendo–Sony SNES CD attachment without telling Sony. He sent Nintendo of America president Minoru Arakawa (his son-in-law) and chairman Howard Lincoln to Amsterdam to form a more favourable contract with Dutch conglomerate Philips, Sony's rival. This contract would give Nintendo total control over their licences on all Philips-produced machines. Kutaragi and Nobuyuki Idei, Sony's director of public relations at the time, learned of Nintendo's actions two days before the CES was due to begin. Kutaragi telephoned numerous contacts, including Philips, to no avail. On the first day of the CES, Sony announced their partnership with Nintendo and their new console, the Play Station.
At 9 am on the next day, in what has been called "the greatest ever betrayal" in the industry, Howard Lincoln stepped onto the stage and revealed that Nintendo was now allied with Philips and would abandon their work with Sony. Incensed by Nintendo's renouncement, Ohga and Kutaragi decided that Sony would develop their own console. Nintendo's contract-breaking was met with consternation in the Japanese business community, as they had broken an "unwritten law" of native companies not turning against each other in favour of foreign ones. Sony's American branch considered allying with Sega to produce a CD-ROM-based machine called the Sega Multimedia Entertainment System, but the Sega board of directors in Tokyo vetoed the idea when Sega of America CEO Tom Kalinske presented them the proposal. Kalinske recalled them saying: "That's a stupid idea, Sony doesn't know how to make hardware. They don't know how to make software either. Why would we want to do this?" Sony halted their research, but decided to build on what it had developed with Nintendo and Sega and turn it into a console based on the SNES. Despite the tumultuous events at the 1991 CES, negotiations between Nintendo and Sony were still ongoing. A deal was proposed: the Play Station would still have a port for SNES games, on the condition that it would still use Kutaragi's audio chip and that Nintendo would own the rights and receive the bulk of the profits. Roughly two hundred prototype machines were created, and some software entered development. Many within Sony were still opposed to their involvement in the video game industry, with some resenting Kutaragi for jeopardising the company. Kutaragi remained adamant that Sony not retreat from the growing industry and that a deal with Nintendo would never work. Knowing that they had to take decisive action, Sony severed all ties with Nintendo on 4 May 1992.
To determine the fate of the PlayStation project, Ohga chaired a meeting in June 1992, consisting of Kutaragi and several senior Sony board members. Kutaragi unveiled a proprietary CD-ROM-based system he had been secretly working on, which played games with immersive 3D graphics. Kutaragi was confident that his LSI chip could accommodate one million logic gates, which exceeded the capabilities of Sony's semiconductor division at the time. Although the proposal won Ohga's enthusiasm, a majority of those present at the meeting remained opposed, as did older Sony executives who saw Nintendo and Sega as "toy" manufacturers. The opposers felt the game industry was too culturally offbeat and asserted that Sony should remain a central player in the audiovisual industry, where companies were familiar with one another and could conduct "civili[s]ed" business negotiations. After Kutaragi reminded him of the humiliation he suffered from Nintendo, Ohga retained the project and became one of Kutaragi's staunchest supporters. Ohga shifted Kutaragi and nine of his team from Sony's main headquarters to Sony Music Entertainment Japan (SMEJ), a subsidiary of the main Sony group, so as to retain the project and maintain relationships with Philips for the MMCD development project. The involvement of SMEJ proved crucial to the PlayStation's early development, as the process of manufacturing games on CD-ROM was similar to that used for audio CDs, with which Sony's music division had considerable experience. While at SMEJ, Kutaragi worked with Epic/Sony Records founder Shigeo Maruyama and Akira Sato; both later became vice-presidents of the division that ran the PlayStation business. Sony Computer Entertainment (SCE) was jointly established by Sony and SMEJ to handle the company's ventures into the video game industry. On 27 October 1993, Sony publicly announced that it was entering the game console market with the PlayStation.
According to Maruyama, there was uncertainty over whether the console should primarily focus on 2D, sprite-based graphics or 3D polygon graphics. After Sony witnessed the success of Sega's Virtua Fighter (1993) in Japanese arcades, the direction of the PlayStation became "instantly clear" and 3D polygon graphics became the console's primary focus. SCE president Teruhisa Tokunaka expressed gratitude for Sega's timely release of Virtua Fighter as it proved "just at the right time" that making games with 3D imagery was possible. Maruyama claimed that Sony further wanted to emphasise the new console's ability to utilise redbook audio from the CD-ROM format in its games alongside high quality visuals and gameplay. Wishing to distance the project from the failed enterprise with Nintendo, Sony initially branded the PlayStation the "PlayStation X" (PSX). Sony formed their European division and North American division, known as Sony Computer Entertainment Europe (SCEE) and Sony Computer Entertainment America (SCEA), in January and May 1995. The divisions planned to market the new console under the alternative branding "PSX" following the negative feedback regarding "PlayStation" in focus group studies. Early advertising prior to the console's launch in North America referenced PSX, but the term was scrapped before launch. The console was not marketed with Sony's name in contrast to Nintendo's consoles. According to Phil Harrison, much of Sony's upper management feared that the Sony brand would be tarnished if associated with the console, which they considered a "toy". Since Sony had no experience in game development, it had to rely on the support of third-party game developers. This was in contrast to Sega and Nintendo, which had versatile and well-equipped in-house software divisions for their arcade games and could easily port successful games to their home consoles. 
Recent consoles like the Atari Jaguar and 3DO suffered low sales due to a lack of developer support, prompting Sony to redouble their efforts in gaining the endorsement of arcade-savvy developers. A team from Epic Sony visited more than a hundred companies throughout Japan in May 1993 in hopes of attracting game creators with the PlayStation's technological appeal. Sony found that many disliked Nintendo's practices, such as favouring their own games over others. Through a series of negotiations, Sony acquired initial support from Namco, Konami, and Williams Entertainment, as well as 250 other development teams in Japan alone. Namco in particular was interested in developing for the PlayStation, since it rivalled Sega in the arcade market. Securing these companies brought influential games such as Ridge Racer (1993) and Mortal Kombat 3 (1995). Ridge Racer was one of the most popular arcade games at the time, and by December 1993 it had already been confirmed behind closed doors as the PlayStation's first game, despite Namco being a longstanding Nintendo developer. Namco's research managing director Shigeichi Nakamura met with Kutaragi in 1993 to discuss the preliminary PlayStation specifications, with Namco subsequently basing the Namco System 11 arcade board on PlayStation hardware and developing Tekken to compete with Virtua Fighter. The System 11 launched in arcades several months before the PlayStation's release, with the arcade release of Tekken in September 1994. Despite securing the support of various Japanese studios, Sony had no developers of their own by the time the PlayStation was in development. This changed in 1993 when Sony acquired the Liverpudlian company Psygnosis (later renamed SCE Liverpool) for US$48 million, securing their first in-house development team. The acquisition meant that Sony could have more launch games ready for the PlayStation's release in Europe and North America.
Ian Hetherington, Psygnosis' co-founder, was disappointed after receiving early builds of the PlayStation and recalled that the console "was not fit for purpose" until his team got involved with it. Hetherington frequently clashed with Sony executives over broader ideas; at one point it was suggested that a television with a built-in PlayStation be produced. In the months leading up to the PlayStation's launch, Psygnosis had around 500 full-time staff working on games and assisting with software development. The purchase of Psygnosis marked another turning point for the PlayStation as it played a vital role in creating the console's development kits. While Sony had provided MIPS R4000-based Sony NEWS workstations for PlayStation development, Psygnosis employees disliked the thought of developing on these expensive workstations and asked Bristol-based SN Systems to create an alternative PC-based development system. Andy Beveridge and Martin Day, owners of SN Systems, had previously supplied development hardware for other consoles such as the Mega Drive, Atari ST, and the SNES. When Psygnosis arranged an audience for SN Systems with Sony's Japanese executives at the January 1994 CES in Las Vegas, Beveridge and Day presented their prototype of the condensed development kit, which could run on an ordinary personal computer with two extension boards. Impressed, Sony decided to abandon their plans for a workstation-based development system in favour of SN Systems's, thus securing a cheaper and more efficient method for designing software. An order of over 600 systems followed, and SN Systems supplied Sony with additional software such as an assembler, linker, and a debugger. SN Systems produced development kits for future PlayStation systems, including the PlayStation 2 and was bought out by Sony in 2005. Sony strived to make game production as streamlined and inclusive as possible, in contrast to the relatively isolated approach of Sega and Nintendo. 
Phil Harrison, representative director of SCEE, believed that Sony's emphasis on developer assistance reduced most time-consuming aspects of development. As well as providing programming libraries, SCE headquarters in London, California, and Tokyo housed technical support teams that could work closely with third-party developers if needed. Sony did not favour their own over non-Sony products, unlike Nintendo; Peter Molyneux of Bullfrog Productions admired Sony's open-handed approach to software developers and lauded their decision to use PCs as a development platform, remarking that "[it was] like being released from jail in terms of the freedom you have". Another strategy that helped attract software developers was the PlayStation's use of the CD-ROM format instead of traditional cartridges. Nintendo cartridges were expensive to manufacture, and the company controlled all production, prioritising their own games, while inexpensive compact disc manufacturing occurred at dozens of locations around the world. The PlayStation's architecture and interconnectability with PCs was beneficial to many software developers. The use of the programming language C proved useful, as it safeguarded future compatibility of the machine should developers decide to make further hardware revisions. Despite the inherent flexibility, some developers found themselves restricted due to the console's lack of RAM. While working on beta builds of the PlayStation, Molyneux observed that its MIPS processor was not "quite as bullish" compared to that of a fast PC and said that it took his team two weeks to port their PC code to the PlayStation development kits and another fortnight to achieve a four-fold speed increase. An engineer from Ocean Software, one of Europe's largest game developers at the time, thought that allocating RAM was a challenging aspect given the 3.5 megabyte restriction. 
Kutaragi said that while it would have been easy to double the amount of RAM in the PlayStation, the development team refrained from doing so to keep the retail cost down. Kutaragi saw the biggest challenge in developing the system as balancing the conflicting goals of high performance, low cost, and ease of programming, and felt that he and his team were successful in this regard. The console's technical specifications were finalised in 1993 and its design during 1994. The PlayStation name and final design were confirmed during a press conference on 10 May 1994, although the price and release dates had not yet been disclosed. Sony released the PlayStation in Japan on 3 December 1994, a week after the release of the Sega Saturn, at a price of ¥39,800. Sales in Japan got off to a "stunning" start, with long queues in shops. Ohga later recalled that he realised how important the PlayStation had become for Sony when friends and relatives begged him for consoles for their children. The PlayStation sold 100,000 units on the first day and two million units within six months, although the Saturn outsold the PlayStation in the first few weeks due to the success of Virtua Fighter. By the end of 1994, 300,000 PlayStation units had been sold in Japan compared to 500,000 Saturn units. A grey market emerged for PlayStations shipped from Japan to North America and Europe, with buyers of such consoles paying up to £700. One American retail worker later recalled: "When September 1995 arrived and Sony's Playstation roared out of the gate, things immediately felt different than [sic] they did with the Saturn launch earlier that year. Sega dropped the Saturn $100 to match the Playstation's $299 debut price, but sales weren't even close—Playstations flew out the door as fast as we could get them in stock." Before the release in North America, Sega and Sony presented their consoles at the first Electronic Entertainment Expo (E3) in Los Angeles on 11 May 1995.
At their keynote presentation, Sega of America CEO Tom Kalinske revealed that the Saturn would be released immediately to select retailers at a price of $399. When Sony's turn came, Olaf Olafsson, president of Sony Electronic Publishing, summoned SCEA head Steve Race to the conference stage; Race simply said "$299" and left the stage to a round of applause. The attention drawn by the Sony conference was further bolstered by the surprise appearance of Michael Jackson and the showcase of highly anticipated games, including Wipeout (1995), Ridge Racer and Tekken (1994). In addition, Sony announced that no games would be bundled with the console. Although the Saturn had been released early in the United States to gain an advantage over the PlayStation, the surprise launch upset many retailers who were not informed in time, harming sales. Some retailers, such as KB Toys, responded by dropping the Saturn entirely. The PlayStation went on sale in North America on 9 September 1995. It sold more units within two days than the Saturn had in five months, with almost all of the initial shipment of 100,000 units sold in advance and shops across the country running out of consoles and accessories. The well-received Ridge Racer, which some critics considered superior to Sega's arcade counterpart Daytona USA (1994), contributed to the PlayStation's early success, as did Battle Arena Toshinden (1995). There were over 100,000 pre-orders placed and 17 games available on the market by the time of the PlayStation's American launch, compared to the Saturn's six launch games. The PlayStation was released in Europe on 29 September 1995 and in Australia on 15 November 1995. By November it had already outsold the Saturn by three to one in the United Kingdom, where Sony had allocated a £20 million marketing budget for the Christmas season compared to Sega's £4 million.
Sony found early success in the United Kingdom by securing listings with independent shop owners as well as prominent High Street chains such as Comet and Argos. Within its first year, the PlayStation secured over 20% of the entire American video game market. From September to the end of 1995, sales in the United States amounted to 800,000 units, giving the PlayStation a commanding lead over the other fifth-generation consoles,[b] though the SNES and Mega Drive from the fourth generation still outsold it. Sony reported that the attach rate of games to consoles sold was four to one. To meet increasing demand, Sony chartered jumbo jets and ramped up production in Europe and North America. By early 1996, the PlayStation had grossed $2 billion (equivalent to $4.1 billion in 2025) from worldwide hardware and software sales. By late 1996, sales in Europe totalled 2.2 million units, including 700,000 in the UK. Approximately 400 PlayStation games were in development, compared to around 200 for the Saturn and 60 for the Nintendo 64. In India, the PlayStation was test-marketed during 1999–2000 through Sony showrooms, selling 100 units. Sony launched the console countrywide (in its PS One form) on 24 January 2002 at a price of Rs 7,990, with 26 games available at launch. The PlayStation also did well in markets where it was never officially released. In Brazil, a third party's registration of the trademark prevented an official release, and the officially distributed Sega Saturn initially took over the market; as the Saturn was withdrawn, however, PlayStation imports and widespread piracy increased. In China, the Sega Saturn was likewise the most popular 32-bit console, but after its withdrawal the PlayStation grew to an installed base of 300,000 users by January 2000, even though Sony China had no plans to release it.
The PlayStation was backed by a successful marketing campaign, allowing Sony to gain an early foothold in Europe and North America. Initially, PlayStation demographics were skewed towards adults, but the audience broadened after the first price drop. While the Saturn was positioned towards 18- to 34-year-olds, the PlayStation was initially marketed exclusively towards teenagers. Executives from both Sony and Sega reasoned that because younger players typically looked up to older, more experienced players, advertising targeted at teens and adults would draw them in too. Additionally, Sony found that adults reacted best to advertising aimed at teenagers; Lee Clow surmised that people who started to grow into adulthood regressed and became "17 again" when they played video games. The console was marketed with advertising slogans in which the controller's geometric button symbols stood in for letters, such as "Live in Your World. Play in Ours." and "U R Not E" (with a red "E", read as "You are not ready"). The four geometric shapes were derived from the symbols on the controller's four face buttons. Clow thought that by invoking such provocative statements, gamers would respond to the contrary and say "'Bullshit. Let me show you how ready I am.'" As the console's appeal broadened, Sony's marketing efforts expanded from their earlier focus on mature players to specifically target younger children as well. Shortly after the PlayStation's release in Europe, Sony tasked marketing manager Geoff Glendenning with assessing the desires of a new target audience. Sceptical of Nintendo and Sega's reliance on television campaigns, Glendenning theorised that young adults transitioning from fourth-generation consoles would feel neglected by marketing directed at children and teenagers. Recognising the influence early 1990s underground clubbing and rave culture had on young people, especially in the United Kingdom, Glendenning felt that the culture had become mainstream enough to help cultivate PlayStation's emerging identity.
Sony partnered with prominent nightclub owners such as Ministry of Sound and with festival promoters to organise dedicated PlayStation areas where demonstrations of select games could be played. Sheffield-based graphic design studio The Designers Republic was contracted by Sony to produce promotional materials aimed at a fashionable, club-going audience. Psygnosis' Wipeout in particular became associated with nightclub culture as it was widely featured in venues. By 1997, there were 52 nightclubs in the United Kingdom with dedicated PlayStation rooms. Glendenning recalled that he had discreetly used at least £100,000 a year in slush fund money to invest in impromptu marketing. In 1996, Sony expanded their CD production facilities in the United States due to the high demand for PlayStation games, increasing their monthly output from 4 million discs to 6.5 million discs. This was necessary because PlayStation sales were running at twice the rate of Saturn sales, and its lead increased dramatically when both consoles dropped in price to $199 that year. The PlayStation outsold the Saturn at a similar ratio in Europe during 1996. Sales figures for PlayStation hardware and software only increased following the launch of the Nintendo 64. Tokunaka speculated that the Nintendo 64 launch had actually helped PlayStation sales by raising public awareness of the gaming market through Nintendo's added marketing efforts. Despite this, the PlayStation took longer to achieve dominance in Japan. Tokunaka said that, even after the PlayStation and Saturn had been on the market for nearly two years, the competition between them was still "very close", and neither console had led in sales for any meaningful length of time. By 1998, Sega, spurred by its declining market share and significant financial losses, launched the Dreamcast as a last-ditch attempt to stay in the industry.
Although its launch was successful, the technically superior 128-bit console was unable to break Sony's dominance in the industry; Sony still held 60% of the overall video game market share in North America at the end of 1999. Sega's initial confidence in their new console was undermined when Japanese sales were lower than expected, with disgruntled Japanese consumers reportedly returning their Dreamcasts in exchange for PlayStation software. On 2 March 1999, Sony officially revealed details of the PlayStation 2, which Kutaragi announced would feature a graphics processor designed to push more raw polygons than any console in history, effectively rivalling most supercomputers. The PlayStation continued to sell strongly at the turn of the new millennium: in July 2000, Sony released the PS One, a smaller, redesigned variant which went on to outsell all other consoles that year, including the PlayStation 2. In 2005, the PlayStation became the first console to ship 100 million units, with the PlayStation 2 later achieving this faster than its predecessor. The combined successes of both PlayStation consoles led to Sega retiring the Dreamcast in 2001 and abandoning the console business entirely. The PlayStation was eventually discontinued on 23 March 2006—over eleven years after its release, and less than a year before the debut of the PlayStation 3.
Hardware
The main microprocessor is a 32-bit R3000 CPU made by LSI Logic, running at a clock rate of 33.8688 MHz and delivering about 30 MIPS. The CPU relies heavily on the "cop2" 3D and matrix math coprocessor on the same die to provide the speed necessary to render complex 3D graphics. The role of the separate GPU chip is to draw 2D polygons and apply shading and textures to them: the rasterisation stage of the graphics pipeline. Sony's custom 16-bit sound chip supports ADPCM sources with up to 24 sound channels, offers sampling rates of up to 44.1 kHz, and supports music sequencing.
It features 2 MB of main RAM, with an additional 1 MB of video RAM. The PlayStation has a maximum colour depth of 16.7 million true colours with 32 levels of transparency and unlimited colour look-up tables. The PlayStation can output composite, S-Video or RGB video signals through its AV Multi connector (with older models also having RCA connectors for composite), displaying resolutions from 256×224 to 640×480 pixels; different games can use different resolutions. Earlier models also had proprietary parallel and serial ports that could be used to connect accessories or multiple consoles together; these were later removed due to a lack of use. The PlayStation uses a proprietary video compression unit, the MDEC, which is integrated into the CPU and allows for the presentation of full-motion video at a higher quality than other consoles of its generation. Unusually for the time, the PlayStation lacks a dedicated 2D graphics processor; 2D elements are instead computed as polygons by the Geometry Transfer Engine (GTE) so that they can be processed and displayed on screen by the GPU. The GPU can render up to 360,000 flat-shaded polygons per second, or 180,000 texture-mapped and light-sourced polygons per second, along with up to 4,000 sprites per frame. The PlayStation went through a number of variants during its production run. Externally, the most notable change was the gradual reduction in the number of external connectors on the rear of the unit. This started with the original Japanese launch units; the SCPH-1000, released on 3 December 1994, was the only model with an S-Video port, which was removed from the next model. Subsequent models continued to shed ports, with the final revisions retaining only a serial port. Sony marketed a development kit for amateur developers known as the Net Yaroze (meaning "Let's do it together" in Japanese). It was launched in June 1996 in Japan and, following public interest, was released the next year in other countries.
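Because the R3000 lacks a floating-point unit, 3D maths on the hardware described above is done in fixed-point arithmetic. The following is a rough, purely illustrative sketch of the idea in Python; the function names and the simplified 4.12-style format (integer bits plus 12 fractional bits) are chosen for clarity, not taken from the GTE's actual interface.

```python
# Illustrative sketch of fixed-point maths: real numbers are stored as
# scaled integers, and multiplication discards the extra fractional bits.
FRAC_BITS = 12
ONE = 1 << FRAC_BITS          # the value 1.0 in this format == 4096

def to_fixed(x: float) -> int:
    """Convert a float to the scaled-integer representation."""
    return int(round(x * ONE))

def fixed_mul(a: int, b: int) -> int:
    """Multiply two fixed-point values; shift right to renormalise."""
    return (a * b) >> FRAC_BITS

# Example: scale a coordinate by cos(45°), entirely in integer maths.
cos45 = to_fixed(0.7071)
x = to_fixed(2.0)
rotated = fixed_mul(cos45, x)
print(rotated / ONE)          # approximately 1.414
```

The trade-off this sketch illustrates is the one the console made: integer-only maths is fast on hardware without an FPU, at the cost of limited precision and range.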
The Net Yaroze allowed hobbyists to create their own games and upload them via an online forum run by Sony. The console was only available through an ordering service and shipped with the documentation and software needed to program PlayStation games and applications using a C compiler. On 7 July 2000, Sony released the PS One (stylised as "PS one" or "PSone"), a smaller, redesigned version of the original PlayStation. It was the highest-selling console through the end of the year, outselling all other consoles—including the PlayStation 2. In 2002, Sony released a 5-inch (130 mm) LCD screen add-on for the PS One, referred to as the "Combo pack", which also included a car cigarette lighter adaptor, adding an extra layer of portability. Production of the LCD "Combo pack" ceased in 2004, when the popularity of the PlayStation began to wane in markets outside Japan. A total of 28.15 million PS One units had been sold by the time it was discontinued in March 2006. Three iterations of the PlayStation's controller were released over the console's lifespan. The first, the PlayStation controller, was released alongside the PlayStation in December 1994. It features four individual directional buttons (as opposed to a conventional D-pad), a pair of shoulder buttons on each side, Start and Select buttons in the centre, and four face buttons bearing simple geometric shapes: a green triangle, red circle, blue cross, and pink square. Rather than labelling its buttons with the customary letters or numbers, the PlayStation controller established a trademark set of symbols that would be incorporated heavily into the PlayStation brand. Teiyu Goto, the designer of the original PlayStation controller, said that the circle and cross represent "yes" and "no", respectively (though this mapping is reversed in Western versions); the triangle symbolises a point of view, and the square is equated to a sheet of paper, to be used to access menus.
The European and North American models of the original PlayStation controller are roughly 10% larger than their Japanese counterpart, to account for the fact that the average person in those regions has larger hands than the average Japanese person. Sony's first analogue gamepad, the PlayStation Analog Joystick (often erroneously referred to as the "Sony Flightstick"), was first released in Japan in April 1996. Featuring two parallel joysticks, it uses potentiometer technology previously used on consoles such as the Vectrex; instead of relying on binary eight-way switches, the controller detects minute angular changes through the entire range of motion. The stick also features a thumb-operated digital hat switch on the right joystick, corresponding to the traditional D-pad and used when simple digital movements were necessary. The Analog Joystick sold poorly in Japan due to its high cost and cumbersome size. The increasing popularity of 3D games prompted Sony to add analogue sticks to its controller design to give users more freedom of movement in virtual 3D environments. The first official analogue controller, the Dual Analog Controller, was revealed to the public in a small glass booth at the 1996 PlayStation Expo in Japan, and released in April 1997 to coincide with the Japanese releases of the analogue-capable games Tobal 2 and Bushido Blade. In addition to the two analogue sticks (which introduced two new buttons mapped to clicking the sticks in), the Dual Analog Controller features an "Analog" button and LED beneath the "Start" and "Select" buttons which toggles analogue functionality on or off. The controller also featured rumble support, though Sony decided that haptic feedback would be removed from all overseas iterations before the United States release.
A Sony spokesman stated that the feature was removed for "manufacturing reasons", although rumours circulated that Nintendo had attempted to legally block the release of the controller outside Japan due to similarities with the Nintendo 64 controller's Rumble Pak; a Nintendo spokesman denied that Nintendo took legal action. Next Generation's Chris Charla theorised that Sony dropped vibration feedback to keep the price of the controller down. In November 1997, Sony introduced the DualShock controller, whose name derives from its use of two (dual) vibration motors (shock). Unlike its predecessor, it features analogue sticks with textured rubber grips, longer handles, slightly different shoulder buttons, and rumble feedback included as standard on all versions. The DualShock later replaced its predecessors as the default controller. Sony released a series of peripherals to add extra layers of functionality to the PlayStation. Such peripherals include memory cards, the PlayStation Mouse, the PlayStation Link Cable, the Multiplayer Adapter (a four-player multitap), the Memory Drive (a disk drive for 3.5-inch floppy disks), the GunCon (a light gun), and the Glasstron (a monoscopic head-mounted display). Released exclusively in Japan, the PocketStation is a memory card peripheral which acts as a miniature personal digital assistant. The device features a monochrome liquid crystal display (LCD), infrared communication capability, a real-time clock, built-in flash memory, and sound capability. Sharing similarities with the Dreamcast's VMU peripheral, the PocketStation was typically distributed with certain PlayStation games, enhancing them with added features. The PocketStation proved popular in Japan, selling over five million units. Sony planned to release the peripheral outside Japan, but the release was cancelled despite promotion in Europe and North America. In addition to playing games, most PlayStation models are equipped to play CD-Audio.
The Asian model SCPH-5903 can also play Video CDs. Like most CD players, the PlayStation can play songs in a programmed order, shuffle the playback order of the disc, and repeat one song or the entire disc. Later PlayStation models use a music visualisation function called SoundScope. This function, as well as a memory card manager, is accessed by starting the console without a game inserted or without closing the CD tray, which brings up a graphical user interface (GUI) for the PlayStation BIOS. The GUI differs between firmware versions: the original PlayStation GUI has a dark blue background with rainbow graffiti used as buttons, while the early PAL PlayStation and PS One GUI has a grey blocked background with two icons in the middle. PlayStation emulation is versatile and can be run on numerous modern devices. Bleem! was a commercial emulator released for IBM-compatible PCs and the Dreamcast in 1999. It was notable for being aggressively marketed during the PlayStation's lifetime, and was at the centre of multiple controversial lawsuits filed by Sony. Bleem! was programmed in assembly language, which allowed it to emulate PlayStation games with improved visual fidelity, enhanced resolutions, and filtered textures that were not possible on original hardware. Sony sued Bleem! two days after its release, citing copyright infringement and accusing the company of engaging in unfair competition and patent infringement by allowing use of the PlayStation BIOS on a Sega console. Bleem! was subsequently forced to shut down in November 2001. Sony was aware that using CDs for game distribution could leave games vulnerable to piracy, due to the growing popularity of CD-R discs and optical drives with burning capability.
To preclude illegal copying, a proprietary process for PlayStation disc manufacturing was developed that, in conjunction with an augmented optical drive in Tiger H/E assembly, prevented burned copies of games from booting on an unmodified console. Specifically, all genuine PlayStation discs were printed with a small section of deliberately irregular data, which the PlayStation's optical pick-up was capable of detecting and decoding. The console would not boot a game disc unless a specific wobble frequency was present in the data of the disc's pregap sector (the same system was also used to encode discs' regional lockouts). This signal was within Red Book CD tolerances, so a PlayStation disc's actual content could still be read by a conventional disc drive; however, such a drive could not detect the wobble frequency, and therefore duplicated discs without it, since the laser pick-up system of any optical disc drive interprets the wobble as an oscillation of the disc surface and compensates for it during reading. Early PlayStations, particularly early 1000-series models, exhibit skipping full-motion video or physical "ticking" noises from the unit. The problems stem from poorly placed vents leading to overheating in some environments, causing the plastic mouldings inside the console to warp slightly and create knock-on effects for the laser assembly. The solution is to sit the console on a surface which dissipates heat efficiently, in a well-ventilated area, or to raise the unit slightly from its resting surface. Sony representatives also recommended unplugging the PlayStation when it is not in use, as the system draws a small amount of power (and therefore generates heat) even when turned off. The first batch of PlayStations use a KSM-440AAM laser unit, whose case and movable parts are all built out of plastic. Over time, the plastic lens sled rail wears out—usually unevenly—due to friction.
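The boot-time authentication described above can be sketched conceptually. The real check happens in the drive controller hardware, and the names below are invented for illustration; the sketch assumes only what the text states, namely that a genuine disc carries a wobble-encoded signal in its pregap (which also encodes the regional lockout) and that a burned copy lacks it.

```python
# Conceptual sketch, not real console logic: a disc is modelled as a dict
# whose "wobble_string" key stands for the data decoded from the pregap
# wobble. A duplicated disc has no such data, because a conventional
# drive compensates the wobble away and never copies it.
from typing import Optional

CONSOLE_REGION = "SCEA"  # hypothetical: a North American console

def decode_pregap_wobble(disc: dict) -> Optional[str]:
    """Return the decoded region string, or None if no wobble survives."""
    return disc.get("wobble_string")

def will_boot(disc: dict) -> bool:
    """Boot only if the wobble decodes to this console's own region."""
    return decode_pregap_wobble(disc) == CONSOLE_REGION

genuine = {"wobble_string": "SCEA"}
import_disc = {"wobble_string": "SCEI"}  # genuine, but wrong region
burned = {}                              # wobble lost during duplication

print(will_boot(genuine), will_boot(import_disc), will_boot(burned))
# → True False False
```

Note how one mechanism serves both purposes the text mentions: a burned copy fails for lack of any signal, while a genuine import fails because its signal encodes the wrong region.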
The placement of the laser unit close to the power supply accelerates wear, as the additional heat makes the plastic more vulnerable to friction. Eventually, one side of the lens sled becomes so worn that the laser can tilt and no longer point directly at the CD; after this, games will no longer load due to data read errors. Sony fixed the problem by making the sled out of die-cast metal and placing the laser unit further away from the power supply on later PlayStation models. Due to an engineering oversight, the PlayStation does not produce a proper signal on several older models of televisions, causing the display to flicker or bounce around the screen. Sony decided not to change the console design, since only a small percentage of PlayStation owners used such televisions, and instead gave consumers the option of sending their PlayStation unit to a Sony service centre to have an official modchip installed, allowing play on older televisions.
Game library
The PlayStation featured a diverse game library which grew to appeal to all types of players. Critically acclaimed PlayStation games included Final Fantasy VII (1997), Crash Bandicoot (1996), Spyro the Dragon (1998), and Metal Gear Solid (1998), all of which became established franchises. Final Fantasy VII is credited with allowing role-playing games to gain mass-market appeal outside Japan, and is considered one of the most influential and greatest video games ever made. The PlayStation's bestselling game is Gran Turismo (1997), which sold 10.85 million units. After the PlayStation's discontinuation in 2006, the cumulative software shipment stood at 962 million units. Following its 1994 launch in Japan, early games included Ridge Racer, Crime Crackers, King's Field, Motor Toon Grand Prix, Toh Shin Den (i.e. Battle Arena Toshinden), and Kileak: The Blood. The first two games available at its later North American launch were Jumping Flash! (1995) and Ridge Racer, with Jumping Flash!
heralded as a forerunner of 3D graphics in console gaming. Wipeout, Air Combat, Twisted Metal, Warhawk and Destruction Derby were among the popular first-year games, and the first to be reissued as part of Sony's Greatest Hits or Platinum range. By the PlayStation's first Christmas season, Psygnosis had produced around 70% of its launch catalogue; their breakthrough racing game Wipeout was acclaimed for its techno soundtrack and helped raise awareness of Britain's underground music community. Eidos Interactive's action-adventure game Tomb Raider contributed substantially to the success of the console in 1996, with its protagonist Lara Croft becoming an early gaming icon and garnering unprecedented media promotion. Licensed tie-in video games of popular films were also prevalent; Argonaut Games' 2001 adaptation of Harry Potter and the Philosopher's Stone went on to sell over eight million copies late in the console's lifespan. Third-party developers remained largely committed to the console's wide-ranging game catalogue even after the launch of the PlayStation 2; notable exclusives from this era include Harry Potter and the Philosopher's Stone, Fear Effect 2: Retro Helix, Syphon Filter 3, C-12: Final Resistance, Dance Dance Revolution Konamix and Digimon World 3.[c] Sony assisted with game reprints as late as 2008 with Metal Gear Solid: The Essential Collection, the last PlayStation game officially released and licensed by Sony. Initially, in the United States, PlayStation games were packaged in long cardboard boxes, similar to non-Japanese 3DO and Saturn games. Sony later switched to the jewel case format typically used for audio CDs and Japanese video games, as this format took up less retailer shelf space (which was at a premium due to the large number of PlayStation games being released), and focus testing showed that most consumers preferred it.
Reception
The PlayStation was mostly well received upon release.
Critics in the West generally welcomed the new console. The staff of Next Generation reviewed the PlayStation a few weeks after its North American launch, commenting that, while the CPU is "fairly average", the supplementary custom hardware, such as the GPU and sound processor, is stunningly powerful. They praised the PlayStation's focus on 3D, and complimented the comfort of its controller and the convenience of its memory cards. Giving the system 4½ out of 5 stars, they concluded, "To succeed in this extremely cut-throat market, you need a combination of great hardware, great games, and great marketing. Whether by skill, luck, or just deep pockets, Sony has scored three out of three in the first salvo of this war." Albert Kim of Entertainment Weekly praised the PlayStation as a technological marvel that rivalled the offerings of Sega and Nintendo. Famicom Tsūshin scored the console 19 out of 40 in May 1995, lower than the Saturn's 24 out of 40. In a 1997 year-end review, a team of five Electronic Gaming Monthly editors gave the PlayStation scores of 9.5, 8.5, 9.0, 9.0, and 9.5—for all five editors, the highest score they gave to any of the five consoles reviewed in the issue. They lauded the breadth and quality of the games library, saying it had vastly improved over previous years as developers mastered the system's capabilities and Sony revised their stance on 2D and role-playing games. They also complimented the low price point of the games compared to the Nintendo 64's, and noted that it was the only console on the market that could be relied upon to deliver a solid stream of games for the coming year, primarily because third-party developers almost unanimously favoured it over its competitors.
Legacy
SCE was an upstart in the video game industry in late 1994, as the video game market in the early 1990s was dominated by Nintendo and Sega.
Nintendo had been the clear leader in the industry since the introduction of the Nintendo Entertainment System in 1985, and the Nintendo 64 was initially expected to maintain this position. The PlayStation's target audience included the first generation to grow up with mainstream video games, along with 18- to 29-year-olds who were not the primary focus of Nintendo. By the late 1990s, Sony had become a highly regarded console brand due to the PlayStation, with a significant lead over second-place Nintendo, while Sega was relegated to a distant third. The PlayStation became the first "computer entertainment platform" to ship over 100 million units worldwide, with many critics attributing the console's success to third-party developers. It remains the sixth best-selling console of all time as of 2025, with a total of 102.49 million units sold. Around 7,900 individual games were published for the console during its 11-year lifespan, the second-most ever produced for a console. Its success was a significant financial boon for Sony, with the video game division coming to contribute 23% of the company's profits. Sony's next-generation PlayStation 2, which is backward compatible with the PlayStation's DualShock controller and games, was announced in 1999 and launched in 2000. The PlayStation's lead in installed base and developer support paved the way for the success of its successor, which overcame the earlier launch of Sega's Dreamcast and then fended off competition from Microsoft's newcomer Xbox and Nintendo's GameCube. The PlayStation 2's immense success and the failure of the Dreamcast were among the main factors that led to Sega abandoning the console market. To date, five PlayStation home consoles have been released, continuing the same numbering scheme, as well as two portable systems. The PlayStation 3 also maintained backward compatibility with original PlayStation discs.
Hundreds of PlayStation games have been digitally re-released on the PlayStation Portable, PlayStation 3, PlayStation Vita, PlayStation 4, and PlayStation 5. The PlayStation has often ranked among the best video game consoles. In 2018, Retro Gamer named it the third best console, crediting its sophisticated 3D capabilities as a key factor in its mass success and lauding it as a "game-changer in every sense possible". In 2009, IGN ranked the PlayStation the seventh best console in their list, noting its appeal to older audiences as a crucial factor in propelling the video game industry, as well as its role in transitioning the game industry to the CD-ROM format. Keith Stuart of The Guardian likewise named it the seventh best console in 2020, declaring that its success was so profound it "ruled the 1990s". In January 2025, Lorentio Brodesco announced the nsOne project, an attempt to reverse engineer the PlayStation's motherboard; Brodesco stated that "detailed documentation on the original motherboard was either incomplete or entirely unavailable". The project was successfully crowdfunded via Kickstarter. In June, Brodesco manufactured the first working motherboard, promising a fully routed multilayer version, along with documentation and design files, in the near future. The success of the PlayStation contributed to the demise of cartridge-based home consoles. While not the first system to use an optical disc format, it was the first highly successful one, and it went head-to-head with the proprietary cartridge-based Nintendo 64,[d] which the industry had expected to use CDs like the PlayStation. After the demise of the Sega Saturn, Nintendo was left as Sony's main competitor in Western markets.
Nintendo chose not to use CDs for the Nintendo 64; it was likely concerned with the proprietary cartridge format's ability to help enforce copy protection, given its substantial reliance on licensing and exclusive games for revenue. Besides their larger capacity, CD-ROMs could be produced in bulk at a much faster rate than ROM cartridges: a week compared to two to three months. Further, the cost of production per unit was far cheaper, allowing Sony to offer games at about 40% lower cost to the user than ROM cartridges while still making the same amount of net revenue. In Japan, Sony published fewer copies of a wide variety of games for the PlayStation as a risk-limiting step, a model that had been used by Sony Music for CD audio discs. The production flexibility of CD-ROMs meant that Sony could produce larger volumes of popular games to get onto the market quickly, something that could not be done with cartridges due to their manufacturing lead time. The lower production costs of CD-ROMs also allowed publishers an additional source of profit: budget-priced reissues of games which had already recouped their development costs. Tokunaka remarked in 1996:

Choosing CD-ROM is one of the most important decisions that we made. As I'm sure you understand, PlayStation could just as easily have worked with masked ROM [cartridges]. The 3D engine and everything—the whole PlayStation format—is independent of the media. But for various reasons (including the economies for the consumer, the ease of the manufacturing, inventory control for the trade, and also the software publishers) we deduced that CD-ROM would be the best media for PlayStation.

The increasing complexity of developing games pushed cartridges to their storage limits and gradually discouraged some third-party developers. Part of the CD format's appeal to publishers was that discs could be produced at a significantly lower cost and offered more production flexibility to meet demand.
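The unit-economics argument above can be sketched numerically. In this minimal sketch, the roughly 40% retail price gap and the one-week versus two-to-three-month lead times come from the text; the $70 cartridge price is a hypothetical figure chosen only for illustration.

```python
# Back-of-envelope sketch of the CD-ROM vs. cartridge economics described
# above. The ~40% retail price gap and the ~1 week vs. ~2-3 month lead
# times come from the text; the $70 cartridge price is hypothetical.

LEAD_TIME_WEEKS = {"cdrom": 1, "cartridge": 10}   # manufacturing lead time
PRICE_FACTOR = {"cdrom": 0.6, "cartridge": 1.0}   # CD-ROM ~40% cheaper at retail

def consumer_price(cartridge_price: float, media: str) -> float:
    """Retail price of the same game on a given medium."""
    return cartridge_price * PRICE_FACTOR[media]

cart_price = 70.0  # hypothetical cartridge retail price, USD
print(f"CD-ROM retail price: ${consumer_price(cart_price, 'cdrom'):.2f}")
print(f"Restock lead time: {LEAD_TIME_WEEKS['cdrom']} wk (CD-ROM) vs "
      f"{LEAD_TIME_WEEKS['cartridge']} wk (cartridge)")
```

The short lead time is what enables the reprint strategy described above: a publisher can wait for real demand before committing to a production run, which is impossible when stock must be ordered months in advance.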
As a result, some third-party developers switched to the PlayStation, including Square and Enix, whose Final Fantasy VII and Dragon Quest VII respectively had been planned for the Nintendo 64 (the two companies later merged to form Square Enix). Other developers released fewer games for the Nintendo 64; Konami, for example, released only thirteen N64 games but over fifty on the PlayStation. Nintendo 64 game releases were less frequent than the PlayStation's, with many being developed either by Nintendo itself or by second parties such as Rare.

The PlayStation Classic is a dedicated video game console made by Sony Interactive Entertainment that emulates PlayStation games. It was announced in September 2018 at the Tokyo Game Show and released on 3 December 2018, the 24th anniversary of the release of the original console. As a dedicated console, the PlayStation Classic features 20 pre-installed games; the games run on the open-source emulator PCSX. The console is bundled with two replica wired PlayStation controllers (those without analogue sticks), an HDMI cable, and a USB Type-A cable. Internally, the console uses a MediaTek MT8167a Quad A35 system on a chip with four central processing cores clocked at 1.5 GHz and a PowerVR GE8300 graphics processing unit. It includes 16 GB of eMMC flash storage and 1 GB of DDR3 SDRAM. The PlayStation Classic is 45% smaller than the original console.

The PlayStation Classic received negative reviews from critics and was compared unfavorably to Nintendo's rival Nintendo Entertainment System Classic Edition and Super Nintendo Entertainment System Classic Edition. Criticism was directed at its meagre game library, user interface, emulation quality, use of PAL versions for certain games, use of the original controller, and high retail price, though the console's design received praise. The console sold poorly.
======================================== |
[SOURCE: https://en.wikipedia.org/wiki/Nuclear_weapons_of_the_United_States] | [TOKENS: 9283] |
Nuclear weapons of the United States

The United States holds the second largest arsenal of nuclear weapons among the nine nuclear-armed countries. Under the Manhattan Project, the United States became the first country to manufacture nuclear weapons and remains the only country to have used them in combat, with the bombings of Hiroshima and Nagasaki in World War II against Japan. In total it conducted 1,054 nuclear tests, the most of any country.[a] It is an original party to, and one of the five "nuclear-weapon states" recognized by, the 1968 Treaty on the Non-Proliferation of Nuclear Weapons. As of 2025, the US and Russia possess comparable numbers of warheads; together they hold more than 90% of the world's stockpile. The US holds a total of 5,177 warheads, of which 3,700 are stockpiled and 1,477 are awaiting dismantlement. Of the stockpile, 1,770 are deployed, while 1,930 are held in reserve. The President of the United States has the sole authority to use nuclear weapons, and US policy permits nuclear first use. The US stockpile is mostly under Strategic Command,[b] assigned to its nuclear triad: 1,920[c] warheads to 280 Trident II submarine-launched ballistic missiles aboard 14 Ohio-class submarines, 800[d] to 400 silo-based Minuteman III intercontinental ballistic missiles, and 780[e] B61 and B83 bombs and AGM-86B cruise missiles to 19 B-2 Spirit and 46 B-52 Stratofortress bombers respectively. The US plans to modernize its triad with the Columbia-class submarine, Sentinel ICBM, and B-21 Raider from 2029. Early warning is provided by various radars, including Solid State Phased Arrays, and satellites, including the Space-Based Infrared System. The Missile Defense Agency maintains a limited anti-ballistic missile capability via the Ground-Based Interceptor and Aegis systems.
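The 2025 figures above are internally consistent and can be cross-checked with a few lines of arithmetic; all numbers here are taken directly from the text.

```python
# Consistency check of the 2025 US warhead figures quoted above.
deployed, reserve = 1_770, 1_930            # stockpile components
awaiting_dismantlement = 1_477

stockpile = deployed + reserve              # 3,700 per the text
total = stockpile + awaiting_dismantlement  # 5,177 per the text

# Triad assignments: SLBM warheads + ICBM warheads + bomber weapons
triad = 1_920 + 800 + 780

assert stockpile == 3_700
assert total == 5_177
print(f"stockpile={stockpile}, total={total}, triad-assigned={triad}")
```

The triad assignments account for 3,500 of the 3,700 stockpiled warheads; the remaining roughly 200 are the tactical B61 bombs described next.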
Additionally, 200 B61 bombs are available for tactical nuclear use by fighter aircraft.[f] The US currently stations approximately 100 of these nuclear weapons in six European NATO countries: Belgium, Germany, Italy, the Netherlands, Turkey, and the United Kingdom. The US extends a nuclear umbrella to South Korea, Japan, and Australia.

Throughout the Cold War, the US and USSR competed in the nuclear arms race. Beginning in 1951, the US became the first country to develop thermonuclear weapons. From the 1950s, the US positioned nuclear weapons in at least 17 other nations, including NATO allies, South Korea, Japan, Taiwan, and the Philippines, while Strategic Air Command operated hundreds of strategic bombers under the policies of massive retaliation and containment of Eastern Bloc countries. By the 1960s, ICBMs such as the Atlas and Titan were deployed in silos and aboard submarines as Polaris. The 1962 Cuban Missile Crisis is regarded as a close call that threatened nuclear weapons use and cemented the concept of mutually assured destruction. The arsenal grew in the 1980s, alongside the proposed Peacekeeper ICBM and the space-based Strategic Defense Initiative missile defense system. When the Cold War ended, all Army and surface Navy nuclear weapons were withdrawn. The arsenal was also limited by bilateral treaties, beginning with START I. Its successor, New START, expired in 2026. Since 2025, the US has pursued the space-based Golden Dome missile defense system.

Between 1940 and 1996, the US spent over US$11.9 trillion in present-day terms on nuclear weapons infrastructure, and nuclear forces maintenance is projected to cost $60 billion per year from 2021 through 2030. The US produced over 70,000 nuclear warheads, more than all other states combined. Design takes place at the Los Alamos, Livermore, and Sandia laboratories; tests were conducted at the Nevada Test Site and the Pacific Proving Grounds.
Until the 1963 Partial Nuclear Test Ban Treaty, the vast majority of tests were atmospheric. Subsequent underground testing limited nuclear fallout. Nuclear sites radioactively contaminated civilian communities: the US government compensated Marshall Islanders over US$759 million for testing exposure, and US citizens over US$2.5 billion. The US began a testing moratorium in 1992 and signed the Comprehensive Nuclear-Test-Ban Treaty in 1996, but has not ratified it. Stockpile Stewardship is the current warhead maintenance program, using experiments including supercomputer simulation and inertial confinement fusion.

Development history

The United States first began developing nuclear weapons during World War II under the order of President Franklin Roosevelt in 1939, motivated by the fear that it was engaged in a race with Nazi Germany to develop such a weapon. After a slow start under the direction of the National Bureau of Standards, at the urging of British scientists and American administrators the program was put under the Office of Scientific Research and Development, and in 1942 it was officially transferred to the auspices of the United States Army and became known as the Manhattan Project, an American, British, and Canadian joint venture. Under the direction of General Leslie Groves, over thirty different sites were constructed for the research, production, and testing of components related to bomb-making. These included the Los Alamos National Laboratory at Los Alamos, New Mexico, under the direction of physicist Robert Oppenheimer; the Hanford plutonium production facility in Washington; and the Y-12 National Security Complex in Tennessee. By investing heavily in breeding plutonium in early nuclear reactors and in the electromagnetic and gaseous diffusion enrichment processes for the production of uranium-235, the United States was able to develop three usable weapons by mid-1945.
The Trinity test was a plutonium implosion-design weapon tested on 16 July 1945, with around a 20 kiloton yield. Faced with a planned invasion of the Japanese home islands scheduled to begin on 1 November 1945 and with Japan not surrendering, President Harry S. Truman ordered the atomic raids on Japan. On 6 August 1945, the US detonated a uranium-gun design bomb, Little Boy, over the Japanese city of Hiroshima with an energy of about 15 kilotons of TNT, killing approximately 70,000 people, among them 20,000 Japanese combatants and 20,000 Korean forced laborers, and destroying nearly 50,000 buildings (including the 2nd General Army and Fifth Division headquarters). Three days later, on 9 August, the US attacked Nagasaki using a plutonium implosion-design bomb, Fat Man, with the explosion equivalent to about 20 kilotons of TNT, destroying 60% of the city and killing approximately 35,000 people, among them 23,200–28,200 Japanese munitions workers, 2,000 Korean forced laborers, and 150 Japanese combatants. On 1 January 1947, the Atomic Energy Act of 1946 (known as the McMahon Act) took effect, and the Manhattan Project was officially turned over to the United States Atomic Energy Commission (AEC). On 15 August 1947, the Manhattan Project was abolished. The American atomic stockpile was small and grew slowly in the immediate aftermath of World War II, and the size of that stockpile was a closely guarded secret. However, there were forces that pushed the United States towards greatly increasing the size of the stockpile. Some of these were international in origin and focused on the increasing tensions of the Cold War, including the loss of China, the Soviet Union becoming an atomic power, and the onset of the Korean War. And some of the forces were domestic – both the Truman administration and the Eisenhower administration wanted to rein in military spending and avoid budget deficits and inflation. 
Nuclear weapons were perceived as giving more "bang for the buck" and thus as the most cost-efficient way to respond to the security threat the Soviet Union represented. As a result, beginning in 1950 the AEC embarked on a massive expansion of its production facilities, an effort that would eventually be one of the largest US government construction projects ever to take place outside of wartime. This production would soon include the far more powerful hydrogen bomb, which the United States had decided to move forward with after an intense debate during 1949–50, as well as much smaller tactical atomic weapons for battlefield use. By 1990, the United States had produced more than 70,000 nuclear warheads, in over 65 different varieties, ranging in yield from around 0.01 kilotons (such as the man-portable Davy Crockett shell) to the 25 megaton B41 bomb. Between 1940 and 1996, the US spent at least $11.9 trillion in present-day terms on nuclear weapons development. Over half was spent on building delivery mechanisms for the weapons. $749 billion in present-day terms was spent on nuclear waste management and environmental remediation. Richland, Washington was the first city established to support plutonium production at the nearby Hanford nuclear site; it produced plutonium for use in Cold War atomic bombs.

Throughout the Cold War, the US and USSR threatened each other with all-out nuclear attack in case of war, regardless of whether it was a conventional or a nuclear clash. US nuclear doctrine called for mutually assured destruction (MAD), which entailed a massive nuclear attack against strategic targets and major population centers of the Soviet Union and its allies. The term "mutual assured destruction" was coined in 1962 by American strategist Donald Brennan. MAD was implemented by deploying nuclear weapons simultaneously on three different types of weapons platforms.
After the 1989 end of the Cold War and the 1991 dissolution of the Soviet Union, the US nuclear program was heavily curtailed: the country halted its program of nuclear testing, ceased its production of new nuclear weapons, and reduced its stockpile by half by the mid-1990s under President Bill Clinton. Many former nuclear facilities were closed, and their sites became targets of extensive environmental remediation. Efforts were redirected from weapons production to stockpile stewardship, which attempts to predict the behavior of aging weapons without full-scale nuclear testing. Increased funding was directed to nuclear non-proliferation programs, such as helping the states of the former Soviet Union eliminate their former nuclear sites and assisting Russia in its efforts to inventory and secure its inherited nuclear stockpile. By February 2006, over $1.2 billion had been paid under the Radiation Exposure Compensation Act of 1990 to US citizens exposed to nuclear hazards as a result of the US nuclear weapons program, and by 1998 at least $759 million had been paid to the Marshall Islanders in compensation for their exposure to US nuclear testing. Over $15 million was paid to the Japanese government following the exposure of its citizens and food supply to nuclear fallout from the 1954 "Bravo" test. In 1998, the country spent an estimated $35.1 billion on its nuclear weapons and weapons-related programs.

In the 2013 book Plutopia: Nuclear Families, Atomic Cities, and the Great Soviet and American Plutonium Disasters (Oxford), Kate Brown explores the health of affected citizens in the United States and the "slow-motion disasters" that still threaten the environments where the plants are located. According to Brown, the plants at Hanford, over a period of four decades, released millions of curies of radioactive isotopes into the surrounding environment.
Brown says that most of this radioactive contamination at Hanford over the years was part of normal operations, but unforeseen accidents did occur, and plant management kept this secret as the pollution continued unabated. Even today, as pollution threats to health and the environment persist, the government keeps knowledge about the associated risks from the public.

During the presidency of George W. Bush, and especially after the 11 September terrorist attacks of 2001, rumors circulated in major news sources that the US was considering designing new nuclear weapons ("bunker-busting nukes") and resuming nuclear testing for reasons of stockpile stewardship. Republicans argued that small nuclear weapons appear more likely to be used than large ones, and thus pose a more credible threat with more of a deterrent effect against hostile behavior. Democrats counterargued that allowing the weapons could trigger an arms race. In 2003, the Senate Armed Services Committee voted to repeal the 1993 Spratt-Furse ban on the development of small nuclear weapons. This change was part of the 2004 fiscal year defense authorization. The Bush administration wanted the repeal so that it could develop weapons to address the threat from North Korea. "Low-yield weapons" (those with one-third the force of the bomb that was dropped on Hiroshima in 1945) were permitted to be developed. The Bush administration was unsuccessful in its goal of developing a guided low-yield nuclear weapon; however, in 2010 President Barack Obama began funding and development of what would become the B61-12, a guided low-yield nuclear bomb derived from the B61 "dumb bomb". Statements by the US government in 2004 indicated that it planned to decrease the arsenal to around 5,500 total warheads by 2012. Much of that reduction was already accomplished by January 2008.
According to the Pentagon's June 2019 Doctrine for Joint Nuclear Operations, "Integration of nuclear weapons employment with conventional and special operations forces is essential to the success of any mission or operation." In 2024 it was estimated that the United States possessed 1,770 deployed nuclear warheads, 1,938 in reserve, and 1,336 retired and awaiting dismantlement (a total of 5,044). 1,370 strategic warheads were deployed on ballistic missiles, 300 were at strategic bomber bases in the United States, and 100 tactical bombs were at air bases in Europe.

Nuclear weapons testing

Between 16 July 1945 and 23 September 1992, the United States maintained a program of vigorous nuclear testing, with the exception of a moratorium between November 1958 and September 1961. By official count, a total of 1,054 nuclear tests and two nuclear attacks were conducted, with over 100 of the tests taking place at sites in the Pacific Ocean, over 900 at the Nevada Test Site, and ten at miscellaneous sites in the United States (Alaska, Colorado, Mississippi, and New Mexico). Until November 1962, the vast majority of the US tests were atmospheric (that is, above-ground); after the acceptance of the Partial Test Ban Treaty all testing moved underground, in order to prevent the dispersion of nuclear fallout. In 1992 a new testing moratorium was initiated, which has been maintained through 2024.

The US program of atmospheric nuclear testing exposed part of the population to the hazards of fallout. Estimating the exact number of people exposed, and the exact consequences, has been medically very difficult, with the exception of the high exposures of Marshall Islanders and Japanese fishers in the case of the Castle Bravo incident in 1954. A number of groups of US citizens—especially farmers and inhabitants of cities downwind of the Nevada Test Site, and US military workers at various tests—have sued for compensation and recognition of their exposure, many successfully.
The passage of the Radiation Exposure Compensation Act of 1990 allowed for the systematic filing of compensation claims in relation to testing, as well as by those employed at nuclear weapons facilities. By June 2009 over $1.4 billion in total had been given in compensation, with over $660 million going to "downwinders". Prior to his meeting with CCP General Secretary Xi Jinping on October 30, 2025, President Trump, in a social media post, "instructed the Department of War [sic]" to resume testing nuclear weapons "on an equal basis." On October 31, in an interview with 60 Minutes, Trump claimed Russia, China, Pakistan, and North Korea were carrying out covert nuclear tests. On November 3, Secretary of Energy Chris Wright stated that nuclear testing would not resume and that subcritical testing would continue. A summary table of each of the American operational series may be found at United States' nuclear test series.

Delivery systems

The original Little Boy and Fat Man weapons, developed by the United States during the Manhattan Project, were relatively large (Fat Man had a diameter of 5 feet (1.5 m)) and heavy (around 5 tons each), and required specially modified bomber planes for their bombing missions against Japan. Each modified bomber could carry only one such weapon, and only within a limited range. After these initial weapons were developed, a considerable amount of money and research was directed towards standardizing nuclear warheads, so that they did not require highly specialized experts to assemble them before use (as had been the case with the idiosyncratic wartime devices), and towards miniaturizing the warheads for use in more variable delivery systems. With the aid of expertise acquired through Operation Paperclip at the tail end of the European theater of World War II, the United States was able to embark on an ambitious program in rocketry.
One of the first products of this was the development of rockets capable of holding nuclear warheads. The MGR-1 Honest John was the first such weapon, developed in 1953 as a surface-to-surface missile with a 15-mile (24 km) maximum range. Because of their limited range, the potential use of these weapons was heavily constrained (they could not, for example, threaten Moscow with an immediate strike).

Development of long-range bombers, such as the B-29 Superfortress during World War II, was continued during the Cold War period. In 1946, the Convair B-36 Peacemaker became the first purpose-built nuclear bomber; it served with the USAF until 1959. The Boeing B-52 Stratofortress was able by the mid-1950s to carry a wide arsenal of nuclear bombs, each with different capabilities and potential use situations. Starting in 1946, the US based its initial deterrence force on the Strategic Air Command, which, by the late 1950s, maintained a number of nuclear-armed bombers in the sky at all times, prepared to receive orders to attack the USSR whenever needed. This system was, however, tremendously expensive in terms of both natural and human resources, and it raised the possibility of an accidental nuclear war.

During the 1950s and 1960s, elaborate computerized early warning systems such as the Defense Support Program were developed to detect incoming Soviet attacks and to coordinate response strategies. During this same period, intercontinental ballistic missile (ICBM) systems were developed that could deliver a nuclear payload across vast distances, allowing the US to base nuclear forces in the American Midwest that were capable of hitting the Soviet Union. Shorter-range weapons, including small tactical weapons, were fielded in Europe as well, including nuclear artillery and the man-portable Special Atomic Demolition Munition.
The development of submarine-launched ballistic missile systems allowed hidden nuclear submarines to covertly launch missiles at distant targets as well, making it virtually impossible for the Soviet Union to successfully launch a first strike against the United States without receiving a deadly response. Improvements in warhead miniaturization in the 1970s and 1980s allowed for the development of MIRVs—missiles which could carry multiple warheads, each of which could be separately targeted. The question of whether these missiles should be based on constantly rotating train tracks (to avoid being easily targeted by opposing Soviet missiles) or in heavily fortified silos (to possibly withstand a Soviet attack) was a major political controversy in the 1980s; eventually the silo deployment method was chosen. MIRVed systems enabled the US to render Soviet missile defenses economically unfeasible, as each offensive missile would require between three and ten defensive missiles to counter. Additional developments in weapons delivery included cruise missile systems, which allowed a plane to fire a long-distance, low-flying nuclear-armed missile towards a target from a relatively comfortable distance.

The current delivery systems of the US put virtually any part of the Earth's surface within reach of its nuclear arsenal. Though its land-based missile systems have a maximum range of 10,000 kilometres (6,200 mi) (less than worldwide), its submarine-based forces extend its reach from a coastline 12,000 kilometres (7,500 mi) inland. Additionally, in-flight refueling of long-range bombers and the use of aircraft carriers extends the possible range virtually indefinitely.

Command and control

Command and control procedures in case of nuclear war were given by the Single Integrated Operational Plan (SIOP) until 2003, when it was superseded by Operations Plan 8044.
Since World War II, the President of the United States has had sole authority to launch US nuclear weapons, whether as a first strike or in nuclear retaliation. This arrangement was seen as necessary during the Cold War to present a credible nuclear deterrent; if an attack were detected, the United States would have only minutes to launch a counterstrike before its nuclear capability was severely damaged or national leaders killed. If the President has been killed, command authority follows the presidential line of succession. Changes to this policy have been proposed, but currently the only way to countermand such an order before the strike is launched would be for the Vice President and a majority of the Cabinet to relieve the President under Section 4 of the Twenty-fifth Amendment to the United States Constitution.

Regardless of whether the United States is actually under attack by a nuclear-capable adversary, the President alone has the authority to order nuclear strikes. The President and the Secretary of Defense form the National Command Authority, but the Secretary of Defense has no authority to refuse or disobey such an order. The President's decision must be transmitted to the National Military Command Center, which then issues the coded orders to nuclear-capable forces. The President can give a nuclear launch order using the nuclear briefcase (nicknamed the nuclear football), or can use command centers such as the White House Situation Room. The command would be carried out by a Nuclear and Missile Operations Officer (a member of a missile combat crew, also called a "missileer") at a missile launch control center. A two-man rule applies to the launch of missiles, meaning that two officers must turn keys simultaneously (far enough apart that this cannot be done by one person).[citation needed] When President Reagan was shot in 1981, there was confusion about where the "nuclear football" was and who was in charge.
In 1975, a launch crew member, Harold Hering, was dismissed from the Air Force for asking how he could know whether the order to launch his missiles came from a sane president. In response to this situation, Ron Rosenbaum wrote that no command and control system is foolproof, and that the sanity of senior nuclear decision makers would always be a weak point in any conceivable command and control protocol. Starting with President Eisenhower, authority to launch a full-scale nuclear attack was delegated to theater commanders and other specific commanders if they believed it warranted by circumstances and were out of communication with the president, or if the president had been incapacitated. For example, during the Cuban Missile Crisis, on 24 October 1962, General Thomas Power, commander of the Strategic Air Command (SAC), took the country to DEFCON 2, the very precipice of full-scale nuclear war, launching SAC bombers with nuclear weapons ready to strike. Moreover, some of these commanders subdelegated to lower commanders the authority to launch nuclear weapons under similar circumstances. In fact, the nuclear weapons were not placed under locks (i.e., permissive action links) until decades later, and so pilots or individual submarine commanders had the power to launch nuclear weapons entirely on their own, without higher authority.

Accidents

The United States nuclear program since its inception has experienced accidents of varying forms, ranging from single-casualty research experiments (such as that of Louis Slotin during the Manhattan Project), to the nuclear fallout dispersion of the Castle Bravo shot in 1954, to accidents such as crashes of aircraft carrying nuclear weapons, droppings of nuclear weapons from aircraft, losses of nuclear submarines, and explosions of nuclear-armed missiles ("broken arrows"). How close any of these accidents came to being major nuclear disasters is a matter of technical and scholarly debate and interpretation.
Weapons accidentally dropped by the United States include incidents off the coast of British Columbia (1950; see 1950 British Columbia B-36 crash); near Atlantic City, New Jersey (1957); Savannah, Georgia (1958; see Tybee Bomb); Goldsboro, North Carolina (1961; see 1961 Goldsboro B-52 crash); off the coast of Okinawa (1965); in the sea near Palomares, Spain (1966; see 1966 Palomares B-52 crash); and near Thule Air Base, Greenland (1968; see 1968 Thule Air Base B-52 crash). In some of these cases (such as Palomares), the explosive system of the fission weapon discharged without triggering a nuclear chain reaction (safety features prevent this from easily happening), but did disperse hazardous nuclear materials across wide areas, necessitating expensive cleanup endeavors. Several US nuclear weapons, partial weapons, or weapons components are thought to be lost and unrecovered, primarily in aircraft accidents. The 1980 Damascus Titan missile explosion in Damascus, Arkansas, threw a warhead from its silo but did not release any radiation.

The nuclear testing program resulted in a number of cases of fallout dispersion onto populated areas. The most significant of these was the Castle Bravo test, which spread radioactive ash over an area of over 100 square miles (260 km2), including a number of populated islands. The populations of the islands were evacuated, but not before suffering radiation burns. They would later suffer long-term effects, such as birth defects and increased cancer risk. There are ongoing concerns about deterioration of the nuclear waste site on Runit Island and a potential radioactive spill. There were also instances during the nuclear testing program in which soldiers were exposed to overly high levels of radiation, which grew into a major scandal in the 1970s and 1980s, as many soldiers later suffered from what were claimed to be diseases caused by their exposures.
Many of the former nuclear facilities produced significant environmental damage during their years of activity, and since the 1990s have been Superfund sites of cleanup and environmental remediation. Hanford is currently the most contaminated nuclear site in the United States and is the focus of the nation's largest environmental cleanup. Radioactive materials are known to be leaking from Hanford into the environment. The Radiation Exposure Compensation Act of 1990 allows US citizens exposed to radiation or other health risks through the US nuclear program to file for compensation and damages.

Deliberate attacks on weapons facilities

In 1972, three hijackers took control of a domestic passenger flight along the east coast of the US and threatened to crash the plane into a US nuclear weapons plant in Oak Ridge, Tennessee. The plane got as close as 8,000 feet above the site before the hijackers' demands were met. Various acts of civil disobedience since 1980 by the peace group Plowshares have shown how nuclear weapons facilities can be penetrated, and the group's actions represent extraordinary breaches of security at nuclear weapons plants in the United States. The National Nuclear Security Administration has acknowledged the seriousness of the 2012 Plowshares action. Non-proliferation policy experts have questioned "the use of private contractors to provide security at facilities that manufacture and store the government's most dangerous military material". Nuclear weapons materials on the black market are a global concern, and there is concern about the possible detonation of a small, crude nuclear weapon by a militant group in a major city, with significant loss of life and property. Stuxnet is a computer worm, discovered in June 2010, that is believed to have been created by the United States and Israel to attack Iran's nuclear fuel enrichment facilities.
Development agencies

The initial US nuclear program was run by the National Bureau of Standards starting in 1939 under the edict of President Franklin Delano Roosevelt. Its primary purpose was to delegate research and dispense funds. In 1940 the National Defense Research Committee (NDRC) was established, coordinating work under the Committee on Uranium among its other wartime efforts. In June 1941, the Office of Scientific Research and Development (OSRD) was established, with the NDRC as one of its subordinate agencies; it enlarged and renamed the Uranium Committee as the Section on Uranium. In 1941, NDRC research was placed under the direct control of Vannevar Bush as the OSRD S-1 Section, in an attempt to increase the pace of weapons research. In June 1942, the US Army Corps of Engineers took over the project to develop atomic weapons, while the OSRD retained responsibility for scientific research. This was the beginning of the Manhattan Project, run as the Manhattan Engineer District (MED), an agency under military control that was in charge of developing the first atomic weapons. After World War II, the MED maintained control over the US arsenal and production facilities and coordinated the Operation Crossroads tests. In 1946, after long debate, the Atomic Energy Act of 1946 was passed, creating the Atomic Energy Commission (AEC) as a civilian agency that would be in charge of the production of nuclear weapons and research facilities, funded through Congress, with oversight provided by the Joint Committee on Atomic Energy. The AEC was given vast powers of control over secrecy, research, and money, and could seize lands with suspected uranium deposits. Along with its duties towards the production and regulation of nuclear weapons, it was also in charge of stimulating development of, and regulating, civilian nuclear power. The full transfer of activities was finalized in January 1947.
In 1975, following the "energy crisis" of the early 1970s and public and congressional discontent with the AEC (in part because of the inherent conflict in being both a producer and a regulator), the agency was split into its component parts: the Energy Research and Development Administration (ERDA), which assumed most of the AEC's former production, coordination, and research roles, and the Nuclear Regulatory Commission, which assumed its civilian regulation activities. ERDA was short-lived, however, and in 1977 the US nuclear weapons activities were reorganized under the Department of Energy, which maintains such responsibilities through the semi-autonomous National Nuclear Security Administration. Some functions were taken over or shared by the Department of Homeland Security in 2002. The already-built weapons themselves are in the control of Strategic Command, which is part of the Department of Defense. In general, these agencies served to coordinate research and build sites. They generally operated their sites through contractors, both private and public: for example, Union Carbide, a private company, ran Oak Ridge National Laboratory for many decades, while the University of California, a public educational institution, has run the Los Alamos and Lawrence Livermore laboratories since their inception, and will jointly manage Los Alamos with the private company Bechtel as of its next contract. Funding was received both through these agencies directly and from additional outside agencies, such as the Department of Defense. Each branch of the military also maintained its own nuclear-related research agencies (generally related to delivery systems).

Weapons production complex

This table is not comprehensive, as numerous facilities throughout the United States have contributed to the nuclear weapons program. It includes the major sites related to the US weapons program (past and present), their basic site functions, and their current status of activity.
Not listed are the many bases and facilities at which nuclear weapons have been deployed. In addition to deploying weapons on its own soil, during the Cold War the United States also stationed nuclear weapons in 27 foreign countries and territories, including Okinawa (which was US-controlled until 1971), Japan (during the occupation immediately following World War II), Greenland, Germany, Taiwan, and French Morocco (later independent Morocco).

Proliferation

Early in the development of its nuclear weapons, the United States relied in part on information-sharing with both the United Kingdom and Canada, as codified in the Quebec Agreement of 1943. These three parties agreed not to share nuclear weapons information with other countries without the consent of the others, an early attempt at nonproliferation. After the development of the first nuclear weapons during World War II, though, there was much debate within the political circles and public sphere of the United States about whether the country should attempt to maintain a monopoly on nuclear technology, undertake a program of information sharing with other nations (especially its former ally and likely competitor, the Soviet Union), or submit control of its weapons to some sort of international organization (such as the United Nations) which would use them to attempt to maintain world peace. Though fear of a nuclear arms race spurred many politicians and scientists to advocate some degree of international control or sharing of nuclear weapons and information, many politicians and members of the military believed that it was better in the short term to maintain high standards of nuclear secrecy and to forestall a Soviet bomb as long as possible (and they did not believe the USSR would actually submit to international controls in good faith).
Since this path was chosen, the United States was, in its early days, essentially an advocate for the prevention of nuclear proliferation, though primarily out of self-preservation. A few years after the USSR detonated its first weapon in 1949, the US under President Dwight D. Eisenhower sought to encourage a program of sharing nuclear information related to civilian nuclear power and nuclear physics in general. The Atoms for Peace program, begun in 1953, was also in part political: the US was better poised to commit various scarce resources, such as enriched uranium, towards this peaceful effort, and to request a similar contribution from the Soviet Union, which had far fewer resources along these lines; thus the program had a strategic justification as well, as internal memos later revealed. This overall goal of promoting civilian use of nuclear energy in other countries while also preventing weapons dissemination has been labeled contradictory by many critics, and as having led to lax standards for a number of decades that allowed other nations, such as China and India, to profit from dual-use technology (purchased from nations other than the US). The Cooperative Threat Reduction program of the Defense Threat Reduction Agency was established after the breakup of the Soviet Union in 1991 to aid former Soviet bloc countries in the inventory and destruction of their sites for developing nuclear, chemical, and biological weapons, and their methods of delivering them (ICBM silos, long-range bombers, etc.). Over $4.4 billion has been spent on this endeavor to prevent purposeful or accidental proliferation of weapons from the former Soviet arsenal. After India and Pakistan tested nuclear weapons in 1998, President Bill Clinton imposed economic sanctions on both countries. In 1999, however, the sanctions against India were lifted; those against Pakistan were kept in place as a result of the military government that had taken over.
Shortly after the September 11 attacks in 2001, President George W. Bush lifted the sanctions against Pakistan as well, in order to secure the Pakistani government's help as a conduit for US and NATO forces in operations in Afghanistan. The US government has been vocal against the proliferation of such weapons to Iran and North Korea. The 2003 US-led invasion of Iraq was carried out under the pretext of disarming Iraq of weapons of mass destruction; however, no such weapons were discovered. In September 2018, then South Korean president Moon Jae-in travelled to Pyongyang, North Korea, to attend the September 2018 inter-Korean summit along with North Korean supreme leader Kim Jong Un. A joint declaration consisting of conditions on nuclear non-proliferation was signed, in which the DPRK agreed to dismantle its nuclear complex in the presence of international experts if the US took corresponding action.

Nuclear disarmament in international law

The United States is one of the five nuclear weapons states with a declared nuclear arsenal under the Treaty on the Non-Proliferation of Nuclear Weapons (NPT), of which it was an original drafter and signatory on 1 July 1968 (ratified 5 March 1970). All signatories of the NPT agreed to refrain from aiding nuclear weapons proliferation to other states. Further, under Article VI of the NPT, all signatories, including the US, agreed to negotiate in good faith to stop the nuclear arms race and to negotiate for the complete elimination of nuclear weapons: "Each of the Parties to the Treaty undertakes to pursue negotiations in good faith on effective measures relating to cessation of the nuclear arms race at an early date and to nuclear disarmament, and on a treaty on general and complete disarmament."
The International Court of Justice (ICJ), the preeminent judicial tribunal of international law, in its advisory opinion on the Legality of the Threat or Use of Nuclear Weapons, issued 8 July 1996, unanimously interpreted the text of Article VI as implying that "[t]here exists an obligation to pursue in good faith and bring to a conclusion negotiations leading to nuclear disarmament in all its aspects under strict and effective international control." The International Atomic Energy Agency (IAEA) in 2005 proposed a comprehensive ban on fissile material that would greatly limit the production of weapons of mass destruction; 147 countries voted for the proposal, but the United States voted against it. The US government has also resisted the Treaty on the Prohibition of Nuclear Weapons, a binding agreement for negotiations for the total elimination of nuclear weapons, supported by more than 120 nations.

International relations and nuclear weapons

In 1958, the United States Air Force considered a plan to drop nuclear bombs on China during a confrontation over Taiwan, but it was overruled, as previously secret documents showed after they were declassified under the Freedom of Information Act in April 2008. The plan included an initial proposal to drop 10–15 kiloton bombs on airfields in Amoy (now called Xiamen) in the event of a Chinese blockade of Taiwan's offshore islands.

Occupational illness

The Energy Employees Occupational Illness Compensation Program (EEOICP) began on 31 July 2001. The program provides compensation and health benefits to Department of Energy nuclear weapons workers (employees, former employees, contractors and subcontractors), as well as compensation to certain survivors if the worker is already deceased.
By 14 August 2010, the program had identified 45,799 civilians whose health was damaged (including 18,942 who developed cancer) by exposure to radiation and toxic substances while producing nuclear weapons for the United States.

Current status

The United States is one of the five nuclear powers recognized under the Treaty on the Non-Proliferation of Nuclear Weapons (NPT) and one of the four countries wielding a nuclear triad. As of 2017, the US has an estimated 4,018 nuclear weapons in either deployment or storage. This figure compares to a peak of 31,225 total warheads in 1967 and 22,217 in 1989, and does not include "several thousand" warheads that have been retired and scheduled for dismantlement. The Pantex Plant near Amarillo, Texas, is the only location in the United States where weapons from the aging nuclear arsenal can be refurbished or dismantled. In 2009 and 2010, the Obama administration declared policies that would reverse the Bush-era policy on the use of nuclear weapons and its moves to develop new ones. First, in a prominent 2009 speech, US President Barack Obama outlined the goal of "a world without nuclear weapons". Toward that goal, Obama and Russian President Dmitry Medvedev signed the New START treaty on 8 April 2010, to reduce the number of active nuclear weapons from 2,200 to 1,550. That same week, Obama also revised US policy on the use of nuclear weapons in the Nuclear Posture Review required of all presidents, declaring for the first time that the US would not use nuclear weapons against non-nuclear, NPT-compliant states. The policy also renounced development of any new nuclear weapons. However, the same Nuclear Posture Review of April 2010 stated a need to develop new "low yield" nuclear weapons, which resulted in the development of the B61 Mod 12.
Despite President Obama's goal of a nuclear-free world and his reversal of former President Bush's nuclear policies, his presidency cut fewer warheads from the stockpile than any previous post-Cold War presidency. Following a renewal of tension after the Russo-Ukrainian War started in 2014, the Obama administration announced plans to continue renovating US nuclear weapons facilities and platforms, with a budgeted spend of about a trillion dollars over 30 years. Under these new plans, the US government would fund research and development of new nuclear cruise missiles. The Trump and Biden administrations continued with these plans. As of 2021, American nuclear forces on land consist of 400 Minuteman III ICBMs spread among 450 operational launchers, staffed by Air Force Global Strike Command. Those at sea consist of 14 nuclear-capable Ohio-class Trident submarines, nine in the Pacific and five in the Atlantic. Nuclear capabilities in the air are provided by 60 nuclear-capable heavy bombers: 20 B-2 bombers and 40 B-52s. The Air Force has modernized its Minuteman III missiles to last through 2030, and a Ground Based Strategic Deterrent (GBSD) is set to begin replacing them in 2029. The Navy has undertaken efforts to extend the operational lives of its missiles and warheads past 2020; it is also producing new Columbia-class submarines to replace the Ohio fleet beginning in 2031. The Air Force is also retiring the nuclear cruise missiles of its B-52s, leaving only half of them nuclear-capable. It intends to procure a new long-range bomber, the B-21, and a new long-range standoff (LRSO) cruise missile in the 2020s.

Nuclear disarmament movement

In the early 1980s, the revival of the nuclear arms race triggered large protests against nuclear weapons. On 12 June 1982, one million people demonstrated in New York City's Central Park against nuclear weapons and for an end to the Cold War arms race.
It was the largest anti-nuclear protest and the largest political demonstration in American history. International Day of Nuclear Disarmament protests were held on 20 June 1983 at 50 sites across the United States. There were many Nevada Desert Experience protests and peace camps at the Nevada Test Site during the 1980s and 1990s. There have also been protests by anti-nuclear groups at the Y-12 Nuclear Weapons Plant, the Idaho National Laboratory, the proposed Yucca Mountain nuclear waste repository, the Hanford Site, the Nevada Test Site, and Lawrence Livermore National Laboratory, as well as against the transportation of nuclear waste from the Los Alamos National Laboratory. On 1 May 2005, 40,000 anti-nuclear/anti-war protesters marched past the United Nations in New York, 60 years after the atomic bombings of Hiroshima and Nagasaki. This was the largest anti-nuclear rally in the US in several decades. In May 2010, some 25,000 people, including members of peace organizations and 1945 atomic bomb survivors, marched from downtown New York to the United Nations headquarters, calling for the elimination of nuclear weapons. Some scientists and engineers have opposed nuclear weapons, including Paul M. Doty, Hermann Joseph Muller, Linus Pauling, Eugene Rabinowitch, M. V. Ramana and Frank N. von Hippel. In recent years, many elder statesmen have also advocated nuclear disarmament. Sam Nunn, William Perry, Henry Kissinger, and George Shultz have called upon governments to embrace the vision of a world free of nuclear weapons, and in various op-ed columns have proposed an ambitious program of urgent steps to that end. The four have created the Nuclear Security Project to advance this agenda. Organizations such as Global Zero, an international non-partisan group of 300 world leaders dedicated to achieving nuclear disarmament, have also been established.
United States nuclear weapons arsenal

Figures are drawn from the New START Treaty aggregate numbers of strategic offensive arms (14 June 2023) and from the Nuclear Notebook of the Bulletin of the Atomic Scientists (3 May 2024). While the New START counting rules attribute a warhead to each deployed bomber, American bombers normally do not carry nuclear weapons, so their number is not added to the warhead count. The Nuclear Notebook also counts as deployed all weapons that can be quickly loaded onto an aircraft, as well as nonstrategic nuclear weapons at European air bases.
======================================== |
[SOURCE: https://en.wikipedia.org/wiki/Thirty-seventh_government_of_Israel#cite_note-172] | [TOKENS: 9915] |
Thirty-seventh government of Israel

The thirty-seventh government of Israel is the current cabinet of Israel, formed on 29 December 2022, following the Knesset election the previous month. The coalition government currently consists of five parties (Likud, Shas, Otzma Yehudit, the Religious Zionist Party and New Hope) and is led by Benjamin Netanyahu, who took office as the prime minister of Israel for the sixth time. The government is widely regarded as the most right-wing government in the country's history, and includes far-right politicians. Several of the government's policy proposals have led to controversies, both within Israel and abroad, with the government's attempts at reforming the judiciary leading to a wave of demonstrations across the country. Following the outbreak of the Gaza war, opposition leader Yair Lapid initiated discussions with Netanyahu on the formation of an emergency government. On 11 October 2023, National Unity MKs Benny Gantz, Gadi Eisenkot, Gideon Sa'ar, Hili Tropper, and Yifat Shasha-Biton joined the Security Cabinet of Israel to form an emergency national unity government. Their accession to the Security Cabinet and to the government (as ministers without portfolio) was approved by the Knesset the following day. Gantz, Netanyahu, and Defense Minister Yoav Gallant became part of the newly formed Israeli war cabinet, with Eisenkot and Ron Dermer serving as observers. National Unity left the government in June 2024, and New Hope rejoined the government in September 2024. Otzma Yehudit announced on 19 January 2025 that it had withdrawn from the government, effective 21 January, following the cabinet's acceptance of the three-phase Gaza war ceasefire proposal, though it rejoined two months later. United Torah Judaism left the government in July 2025 over dissatisfaction with the government's draft conscription law. Shas left the government several days later, though it remains part of the coalition.
Background

The right-wing bloc of parties led by Benjamin Netanyahu, known in Israel as the national camp, won 64 of the 120 seats in the elections for the Knesset, while the coalition led by the incumbent prime minister Yair Lapid won 51 seats. The new majority has been variously described as the most right-wing government in Israeli history, as well as Israel's most religious government. Shortly after the elections, Lapid conceded to Netanyahu and congratulated him, wishing him luck "for the sake of the Israeli people". On 15 November, the swearing-in ceremony for the newly elected members of the 25th Knesset was held during the opening session. The vote to appoint a new Speaker of the Knesset, which is usually conducted at the opening session, as well as the swearing-in of cabinet members, were postponed, since ongoing coalition negotiations had not yet resulted in agreement on these positions.

Government formation

On 3 November 2022, Netanyahu told his aide Yariv Levin to begin informal coalition talks with allied parties, after 97% of the vote was counted. The leader of the Shas party, Aryeh Deri, met with Yitzhak Goldknopf, the leader of United Torah Judaism and its Agudat Yisrael faction, on 4 November. The two parties agreed to cooperate as members of the next government. The Degel HaTorah faction of United Torah Judaism stated on 5 November that it would maintain its ideological stance of not seeking any ministerial posts, as per the instruction of its spiritual leader Rabbi Gershon Edelstein, but would seek other senior posts such as Knesset committee chairmanships and deputy ministries. Netanyahu himself started holding talks on 6 November. He first met with Moshe Gafni, the leader of Degel HaTorah, and then with Goldknopf. Meanwhile, the Religious Zionist Party leader Bezalel Smotrich and the leader of its Otzma Yehudit faction, Itamar Ben-Gvir, pledged that neither would enter the coalition without the other.
Gafni later met with Smotrich for coalition talks. Smotrich then met with Netanyahu. On 7 November, Netanyahu met with Ben-Gvir, who demanded the Ministry of Public Security with expanded powers for himself and the Ministry of Education or of Transport and Road Safety for Yitzhak Wasserlauf. A major demand among all of Netanyahu's allies was that the Knesset be allowed to override the rulings of the Supreme Court. Netanyahu met with the Noam faction leader and its sole MK, Avi Maoz, on 8 November, after he threatened to boycott the coalition. Maoz demanded complete control of the Western Wall by the Haredi rabbinate and the removal of what he considered anti-Zionist and anti-Jewish content in schoolbooks. President Isaac Herzog began consultations with the heads of all the political parties on 9 November, after the election results were certified. During the consultations, he expressed his reservations about Ben-Gvir becoming a member of the next government. Shas met with Likud for coalition talks on 10 November. By 11 November, Netanyahu had secured recommendations from 64 MKs, which constituted a majority. He was given the mandate to form the thirty-seventh government of Israel by President Herzog on 13 November. Otzma Yehudit and Noam officially split from Religious Zionism on 20 November, as per a pre-election agreement. On 25 November, Otzma Yehudit and Likud signed a coalition agreement, under which Ben-Gvir would assume the newly created position of National Security Minister, whose powers would be more expansive than those of the Minister of Public Security, including oversight of the Israel Police and the Israel Border Police in the West Bank, as well as giving authorities the power to shoot thieves stealing from military bases. Yitzhak Wasserlauf was given the Ministry for the Development of the Negev and the Galilee, with expanded powers to regulate new West Bank settlements, while it was separated from the "Periphery" portfolio, which would be given to Shas.
The deal also included giving the Ministry of Heritage to Amihai Eliyahu, separating it from the "Jerusalem Affairs" portfolio; the chairmanship of the Knesset's Public Security Committee to Zvika Fogel and that of the Special Committee for the Israeli Citizens' Fund to Limor Son Har-Melech; the post of Deputy Economy Minister to Almog Cohen; the establishment of a national guard; and an expanded mobilization of reservists in the Border Police. Netanyahu and Maoz signed a coalition agreement on 27 November, under which the latter would become a deputy minister, would head an agency on Jewish identity in the Prime Minister's Office, and would also head Nativ, which processes aliyah from the former Soviet Union. The agency for Jewish identity would have authority over educational content taught outside the regular curriculum in schools, in addition to the department of the Ministry of Education overseeing external teaching and partnerships, which would bring the nonofficial organisations permitted to teach and lecture at schools under its purview. Likud signed a coalition agreement with the Religious Zionist Party on 1 December. Under the deal, Smotrich would serve as the Minister of Finance in rotation with Aryeh Deri, and the party would receive the post of a minister within the Ministry of Defense with control over the departments administering settlements and open lands under the Coordinator of Government Activities in the Territories, in addition to another post of a deputy minister. The deal also included giving the post of Minister of Aliyah and Integration to Ofir Sofer, the newly created National Missions Ministry to Orit Strook, and the chairmanship of the Knesset's Constitution, Law and Justice Committee to Simcha Rothman. Likud and United Torah Judaism signed a coalition agreement on 6 December, in time to allow a request for an extension of the deadline.
Under it, the party would receive the Ministry of Construction and Housing, the chairmanship of the Knesset Finance Committee (to be given to Moshe Gafni), and the Ministry of Jerusalem and Tradition (which would replace the Ministry of Jerusalem Affairs and Heritage), in addition to several posts of deputy ministers and chairmanships of Knesset committees. Likud also signed a deal with Shas by 8 December, securing interim coalition agreements with all of its allies. Under the deal, Deri would first serve as the Minister of Interior and Health, before rotating posts with Smotrich after two years. The party would also receive the Ministries of Religious Services and Welfare, as well as posts of deputy ministers in the Ministries of Education and Interior. The vote to replace the incumbent Knesset speaker Mickey Levy was scheduled for 13 December, after Likud and its allies secured the necessary number of signatures for it. Yariv Levin of Likud was elected as interim speaker with 64 votes, while his opponents Merav Ben-Ari of Yesh Atid and Ayman Odeh of Hadash received 45 and five votes respectively. Netanyahu asked Herzog for a 14-day extension after the agreement with Shas, to finalise the roles his allied parties would play. On 9 December, Herzog extended the deadline to 21 December, and on that date Netanyahu informed Herzog that he had succeeded in forming a coalition, with the new government expected to be sworn in by 2 January 2023. The government was sworn in on 29 December 2022.

Timeline

Israeli law stated that people convicted of crimes cannot serve in the government. An amendment to that law was made in late 2022, known colloquially as the Deri Law, to allow those who had been convicted without prison time to serve. This allowed Deri to be appointed to the cabinet. Shas leader Aryeh Deri was appointed Minister of Health, Minister of the Interior, and Vice Prime Minister in December 2022.
He was fired in January 2023, following a Supreme Court decision that his appointment was unreasonable, since he had been convicted of fraud and had promised in a plea deal not to seek government roles. In March 2023, Defence Minister Yoav Gallant called on the government to delay legislation related to the judicial reform. Prime Minister Netanyahu announced that Gallant had been dismissed from his position, leading to the continuation of mass protests across the country (which had started in January in Tel Aviv). Gallant continued to serve as a minister, as he had not received formal notice of dismissal, and two weeks later it was announced that Netanyahu had reversed his decision. National Security Minister Itamar Ben-Gvir (the Otzma Yehudit leader) and Minister of Justice Yariv Levin (Likud) both threatened to resign if the judicial reform was delayed. After the outbreak of the Gaza war, five members of the National Unity party joined the government as ministers without portfolio, with leader Benny Gantz being made a member of the new Israeli war cabinet (along with Netanyahu and Gallant). As the war progressed, Ben-Gvir threatened to leave the government if the war was ended. A month later, in mid-December, he again threatened to leave if the war did not continue at "full strength". Gideon Sa'ar stated on 16 March that his New Hope party would resign from the government and join the opposition if Prime Minister Benjamin Netanyahu did not appoint him to the Israeli war cabinet. Netanyahu did not do so, and Sa'ar's New Hope party left the government nine days later, reducing the size of the coalition from 76 MKs to 72. Ben-Gvir and Bezalel Smotrich, of the National Religious Party–Religious Zionism party, indicated that they would withdraw their parties from the government if the January 2025 Gaza war ceasefire was adopted, which would have brought down the government.
Ben-Gvir announced on 5 June that the members of his party would be allowed to vote as they wished, though his party resumed its support on 9 June. On 18 May, Gantz had set an 8 June deadline for withdrawal from the coalition, which was delayed by a day following the 2024 Nuseirat rescue operation. Gantz and his party left the government on 9 June, leaving the government with 64 seats in the Knesset. Sa'ar and his New Hope party rejoined the Netanyahu government on 30 September, increasing the number of seats held by the government to 68. The High Court of Justice ruled on 28 March 2024 that yeshiva funds would no longer be available for students who are "eligible for enlistment", effectively allowing ultra-Orthodox Jews to be drafted into the IDF. Attorney General Gali Baharav-Miara indicated on 31 March that the conscription process must begin on 1 April. The court ruled on 25 June that the IDF must begin to draft yeshiva students. Likud announced on 7 July that it would not put forward any legislation after Shas and United Torah Judaism said that they would boycott the plenary session over the lack of legislation dealing with the Haredi draft. The ultra-Orthodox boycott continued for a second day, with UTJ briefly ending its boycott on 9 July to vote, unsuccessfully, in favor of a bill which would have weakened the Law of Return. Yuli Edelstein, who was replaced by Boaz Bismuth on the Foreign Affairs and Defense Committee in early August, had published a draft version of the conscription law shortly before his ouster. Bismuth cancelled work on the draft law in September 2025, which Edelstein called "a shame." Bismuth released the official version of the draft law in late November 2025. It weakened penalties for draft evaders, with Edelstein saying it was "the exact opposite" of the bill which he had attempted to pass. Members of Otzma Yehudit resigned from the government on 19 January 2025 over the January 2025 Gaza war ceasefire, with the resignations taking effect on 21 January.
The members rejoined in March, following the "resumption" of the war in Gaza. Avi Maoz of the Noam party left the government in March 2025. On 4 June 2025, senior rabbis for United Torah Judaism, Dov Lando and Moshe Hillel Hirsch, instructed the party's MKs to pass a bill which would dissolve the Knesset. Yesh Atid, Yisrael Beytenu and The Democrats announced that they would submit a bill for dissolution on 11 June, with Yesh Atid tabling the bill on 4 June. There were also reports that Shas would vote in favor of Knesset dissolution amidst division within the governing coalition on Haredi conscription. This jeopardized the coalition's majority and would have triggered new elections if the bill passed. The following day, Agudat Yisrael, one of the United Torah Judaism factions, confirmed that it would submit a bill to dissolve the Knesset. Asher Medina, a Shas spokesman, indicated on 9 June that the party would vote in favor of a preliminary bill to dissolve the Knesset. The rabbis of Degel HaTorah instructed the party's MKs on 12 June 2025 to oppose the dissolution of the Knesset, after which Yuli Edelstein and the Shas and Degel HaTorah parties announced that a deal had been reached, with "rabbinical leaders" telling their parties to delay the dissolution vote by a week. Shas and Degel HaTorah voted against the dissolution bill, which failed its preliminary reading in a vote of 61 against and 53 in favor. MKs Ya'akov Tessler and Moshe Roth of Agudat Yisrael voted in favor of dissolution. Another dissolution bill cannot be brought forward for six months. Had the bill passed its preliminary reading, in addition to three more readings, an election would have been held in approximately three months; The Jerusalem Post posited it would have been held in October.
Degel HaTorah announced on 14 July 2025 that it would leave the government because members of the party were dissatisfied after viewing the proposed draft bill by Yuli Edelstein regarding Haredi exemptions from the Israeli draft. Several hours later, Agudat Yisrael announced that it would also leave the government. Deputy Transportation Minister Uri Maklev; Moshe Gafni, head of the Knesset Finance Committee; Ya'akov Asher, head of the Knesset Interior and Environment Protection Committee; and Jerusalem Affairs Minister Meir Porush all submitted their resignations, with the resignations taking effect in 48 hours. Sports Minister Ya'akov Tessler and Yitzhak Pindrus, chair of the Special Committee for Public Petitions, also submitted resignations. Yisrael Eichler submitted his resignation as head of the Knesset Labor and Welfare Committee the same day. The resignations left Netanyahu's government with 60 seats in the Knesset, as Avi Maoz, of the Noam party, had left the government in March 2025. Despite Edelstein's ouster in August, a spokesman for UTJ head Yitzhak Goldknopf remarked that it would not change the faction's withdrawal from the government. The religious council for Shas, called the Moetzet Chachmei HaTorah, instructed the party on 16 July to leave the government but stay in the coalition. The following day, various cabinet ministers submitted their resignations, including Interior Minister Moshe Arbel, Social Affairs Minister Ya'akov Margi and Religious Services Minister Michael Malchieli. Malchieli reportedly postponed his resignation so that he could attend a 20 July meeting of the panel investigating whether Attorney General Gali Baharav-Miara should be dismissed. Deputy Minister of Agriculture Moshe Abutbul, Minister of Health Uriel Buso and Haim Biton, a minister in the Education Ministry, also submitted their resignation letters, while Arbel retracted his.
The last cabinet member from the party to submit a resignation was Labor Minister Yoav Ben-Tzur. The ministers who resigned will return to the Knesset, replacing MKs Moshe Roth, Yitzhak Pindrus and Eliyahu Baruchi.

Members of government

Listed below are the current ministers in the government:

Principles and priorities

According to the agreements signed between Likud and each of its coalition partners, and the incoming government's published guideline principles, its stated priorities are to combat the cost of living, further centralize Orthodox control over the state religious services, pass judicial reforms which include legislation to reduce judicial controls on executive and legislative power, expand settlements in the West Bank, and consider an annexation of the West Bank. Before the vote of confidence in his new government in the Knesset, Netanyahu presented three top priorities for the new government: internal security and governance, halting the nuclear program of Iran, and the development of infrastructure, with a focus on further connecting the center of the country with its periphery.

Policies

The government's flagship program, centered around reforms in the judicial branch, drew widespread criticism. Critics said it would have negative effects on the separation of powers, the office of the Attorney General, the economy, public health, women and minorities, workers' rights, scientific research, the overall strength of Israel's democracy and its foreign relations. After weeks of public protests on Israel's streets, joined by a growing number of military reservists, Minister of Defense Yoav Gallant spoke against the reform on 25 March, calling for a halt of the legislative process "for the sake of Israel's security". The next day, Netanyahu announced that Gallant would be removed from his post, sparking another wave of protests across Israel and ultimately leading to Netanyahu agreeing to pause the legislation.
On 10 April, Netanyahu announced that Gallant would keep his post. On 27 March 2023, after the public protests and general strikes, Netanyahu announced a pause in the reform process to allow for dialogue with opposition parties. However, negotiations aimed at reaching a compromise collapsed in June, and the government resumed its plans to unilaterally pass parts of the legislation. On 24 July 2023, the Knesset passed a bill that curbs the power of the Supreme Court to declare government decisions unreasonable; on 1 January 2024, the Supreme Court struck the bill down. The Knesset passed a "watered-down" version of the judicial reform package in late March 2025 which "changes the composition" of the judicial selection committee. In December 2022 Minister of National Security Itamar Ben-Gvir sought to amend the law that regulates the operations of the Israel Police, such that the ministry will have more direct control of its forces and policies, including its investigative priorities. Attorney General Gali Baharav-Miara objected to the draft proposal, raising concerns that the law would enable the politicization of police work, and the draft was amended to partially address those concerns. Nevertheless, in March 2023 Deputy Attorney General Gil Limon stated that the Attorney General's fears had been realized, referring to several instances of ministerial involvement in the day-to-day work of the otherwise independent police force – statements that were repeated by the Attorney General herself two days later. Separately, Police Commissioner Kobi Shabtai instructed Deputy Commissioners to avoid direct communication with the minister, later stating that "the Israel Police will remain apolitical, and act only according to law". Following appeals by the Association for Civil Rights in Israel and the Movement for Quality Government in Israel, the High Court of Justice instructed Ben-Gvir "to refrain from giving operational directions to the police... 
[especially] as regards to protests and demonstrations against the government." As talk of halting the judicial reform gathered pace during March 2023, Minister of National Security Itamar Ben-Gvir threatened to resign if the legislation implementing the changes was suspended. To appease Ben-Gvir, Prime Minister Netanyahu announced that the government would promote the creation of a new National Guard, to be headed by Ben-Gvir. On 29 March, thousands of Israelis demonstrated in Tel Aviv, Haifa and Jerusalem against this decision. On 1 April, the New York Times quoted Gadeer Nicola, head of the Arab department at the Association for Civil Rights in Israel, as saying "If this thing passes, it will be an imminent danger to the rights of Arab citizens in this country. This will create two separate systems of applying the law. The regular police which will operate against Jewish citizens — and a militarized militia to deal only with Arab citizens." The same day, while speaking on Israel's Channel 13 about those whom he would like to see enlist in the National Guard, Ben-Gvir specifically mentioned La Familia, the far-right fan club of the Beitar Jerusalem soccer team. On 2 April, Israel's cabinet approved the establishment of a law enforcement body that would operate independently of the police, under Ben-Gvir's authority. According to the decision, the Minister was to establish a committee chaired by the Director General of the Ministry of National Security, with representatives of the ministries of defense, justice and finance, as well as the police and the IDF, to outline the operations of the new organization. The committee's recommendations were to be submitted to the government for consideration. Addressing a conference on 4 April, Police Commissioner Kobi Shabtai said that he was not opposed to the establishment of a security body which would answer to the police, but "a separate body? Absolutely not."
The police chief said he had warned Ben-Gvir that the establishment of a security body separate from the police is "unnecessary, with extremely high costs that may harm citizens' personal security." During a press conference on 10 April, Prime Minister Netanyahu said, in what was seen by some news outlets as a concession to the protesters, that "This will not be anyone's militia, it will be a security body, orderly, professional, that will be subordinate to one of the [existing] security bodies." The committee established by the government recommended that the government order the immediate establishment of the National Guard while allocating budgets for it. The National Guard, to be commanded by a police superintendent, would not be subordinate to Ben-Gvir; it would be subordinate to the police commissioner and form part of the Israel Border Police. The Ministries of Defense and Finance opposed the conclusions, and the Israeli National Security Council called for further discussion on the matter. The coalition's efforts to expand the purview of Rabbinical courts; force some organizations, such as hospitals, to enforce certain religious practices; amend the Law Prohibiting Discrimination to allow gender segregation and discrimination on the grounds of religious belief; expand funding for religious causes; and put into law the exemption of yeshiva and kolel students from conscription have drawn criticism. According to a Haaretz op-ed of 7 March 2023, "the current coalition is interested... in modifying the public space so it suits the religious lifestyle. The legal coup is meant to castrate anyone who can prevent it, most of all the HCJ." Several banks and institutional investors, including the Israel Discount Bank and AIG, have committed to avoid investing in, or providing credit to, any organization that will discriminate against others on grounds of religion, race, gender or sexual orientation.
A series of technology companies and investment firms, including Wiz, Intel Israel, Salesforce and Microsoft Israel Research and Development, have criticized the proposed changes to the Law Prohibiting Discrimination, with Wiz stating that it will require its suppliers to commit to preventing discrimination. Over sixty prominent law firms pledged that they would neither represent nor do business with discriminating individuals and organizations. Insight Partners, a major private equity fund operating in Israel, released a statement warning against intolerance and any attempt to harm personal liberties. Orit Lahav, chief executive of the women's rights organization Mavoi Satum ("Dead End"), said that "the Rabbinical courts are the most discriminatory institution in the State of Israel... Limiting the HCJ[d] while expanding the jurisdiction of the Rabbinical courts would... cause significant harm to women." Anat Thon Ashkenazy, Director of the Center for Democratic Values and Institutions at the Israel Democracy Institute, said that "almost every part of the reform could harm women... the meaning of an override clause is that even if the court says that the law on gender segregation is illegitimate, is harmful, the Knesset could say 'Okay, we say otherwise'". She added that "there is a very broad institutional framework here, after which there will come legislation that harms women's right and we will have no way of protecting or stopping it." During July 2023, 20 professional medical associations signed a position letter warning of the ramifications for public health that would result from the exclusion of women from the public sphere. They cited, among other effects, a rise in the prevalence of risk factors for cardiovascular disease, pregnancy-related ailments, psychological distress, and the risk of suicide.
On 30 July the Knesset passed an amendment to the penal law adding sexual offenses to the list of offenses whose penalty can be doubled if committed on grounds of "nationalistic terrorism, racism or hostility towards a certain community". According to MK Limor Son Har-Melech, the bill is meant to penalize any individual who "[intends to] harm a woman sexually based on her Jewishness". The law was criticized by MK Gilad Kariv as "populist, nationalistic, and dangerous towards the Arab citizens of Israel", and by MK Ahmad Tibi as a "race law", and was objected to by legal advisors at the Ministry of Justice and the Knesset Committee on National Security. Activist Orit Kamir wrote that "the amendment... is neither feminist, equal, nor progressive, but the opposite: it subordinates women's sexuality to the nationalistic, racist patriarchy. It hijacks the Law for Prevention of Sexual Harassment to serve a world view that tags women as sexual objects that personify the nation's honor." Yael Sherer, director of the Lobby to Combat Sexual Violence, criticized the law as being informed by dated ideas about sexual assault, and proposed that MKs "dedicate a session... to give victims of sexual assault an opportunity to come out of the darkness... instead of [submitting] declarative bills that change nothing and are not meant but for grabbing headlines". In Israel during 2022, 24 women "were murdered because they were women," an increase of 50% compared to 2021. A law permitting courts to order men subject to a restraining order following domestic violence offenses to wear electronic tags was drafted during the previous Knesset and had passed its first reading unanimously. On 22 March 2023, the Knesset voted to reject the bill. It had been urged to do so by National Security Minister Itamar Ben-Gvir, who said that the bill was unfair to men. Earlier in the week, Ben-Gvir had blocked the measure from advancing in the ministerial legislative committee.
The MKs voting against the bill included Prime Minister Netanyahu. The Association of Families of Murder Victims said that by rejecting the law, National Security Minister Itamar Ben-Gvir "brings joy to violent men and abandons the women threatened with murder… unsupervised restraining orders endanger women's lives even more. They give women the illusion of being protected, and then they are murdered." MK Pnina Tamano-Shata, chairwoman of the Knesset Committee on the Status of Women and Gender Equality, said that "the coalition proved today that it despises women's lives." The NGO Amutat Bat Melech, which assists Orthodox and ultra-Orthodox women who suffer from domestic violence, said that: "Rejecting the electronic bracelet bill is disconnected from the terrible reality of seven femicides since the beginning of the year. This is an effective tool of the first degree that could have saved lives and reduced the threat to women suffering from domestic violence. This is a matter of life and death, whose whole purpose is to provide a solution to defend women." The agreement signed by the coalition parties includes the setting up of a committee to draft changes to the Law of Return. Israeli religious parties have long demanded that the "grandchild clause" of the Law of Return be cancelled. This clause grants citizenship to anyone with at least one Jewish grandparent, as long as they do not practice another religion. If the grandchild clause were to be removed from the Law of Return then around 3 million people who are currently eligible for aliyah would no longer be eligible.
The heads of the Jewish Agency, the Jewish Federations of North America, the World Zionist Organization and Keren Hayesod sent a joint letter to Prime Minister Netanyahu, expressing their "deep concern" about any changes to the Law of Return, adding that "Any change in the delicate and sensitive status quo on issues such as the Law of Return or conversion could threaten to unravel the ties between us and keep us away from each other." The Executive Council of Australian Jewry and the Zionist Federation of Australia issued a joint statement saying "We… view with deep concern… proposals in relation to religious pluralism and the law of return that risk damaging Israel's… relationship with Diaspora Jewry." On 19 March 2023, Israeli Finance Minister Bezalel Smotrich spoke in Paris at a memorial service for a Likud activist. The lectern at which Smotrich spoke was covered with a flag depicting the 'Greater Land of Israel,' encompassing the whole of Mandatory Palestine, as well as Trans-Jordan. During his speech, Smotrich said that "there's no such thing as Palestinians because there's no such thing as a Palestinian people." He added that the Palestinian people are a fictitious nation invented only to fight the Zionist movement, asking "Is there a Palestinian history or culture? There isn't any." The event received widespread media coverage. On 21 March, a spokesman for the US State Department sharply criticized Smotrich's comments. "The comments, which were delivered at a podium adorned with an inaccurate and provocative map, are offensive, they are deeply concerning, and, candidly, they're dangerous. The Palestinians have a rich history and culture, and the United States greatly values our partnership with the Palestinian people," he said. 
The Jordanian Foreign Ministry also voiced disapproval: "The Israeli Minister of Finance's use, during his participation in an event held yesterday in Paris, of a map of Israel that includes the borders of the Hashemite Kingdom of Jordan and the occupied Palestinian territories represents a reckless inflammatory act, and a violation of international norms and the Jordanian-Israeli peace treaty." Additionally, a map encompassing Mandatory Palestine and Trans-Jordan with a Jordanian flag on it was placed on a central lectern in the Jordanian Parliament. Jordan's parliament voted to expel the Israeli ambassador. Israel's Ministry of Foreign Affairs released a clarification relating to the matter, stating that "Israel is committed to the 1994 peace agreement with Jordan. There has been no change in the position of the State of Israel, which recognizes the territorial integrity of the Hashemite Kingdom of Jordan". Ahead of a Europe Day event due to take place on 9 May 2023, the far-right National Security Minister Itamar Ben-Gvir was assigned as a representative of the government and a speaker at the event by the government secretariat, which assigns ministers to receptions marking the national days of foreign embassies. The European Union requested that Ben-Gvir not attend, but the government did not change the plan. On 8 May, the European delegation to Israel cancelled the reception, stating that: "The EU Delegation to Israel is looking forward to celebrating Europe Day on May 9, as it does every year. Regrettably, this year we have decided to cancel the diplomatic reception, as we do not want to offer a platform to someone whose views contradict the values the European Union stands for. However, the Europe Day cultural event for the Israeli public will be maintained to celebrate with our friends and partners in Israel the strong and constructive bilateral relationship".
Israel's Opposition Leader Yair Lapid stated: "Sending Itamar Ben-Gvir to a gathering of EU ambassadors is a serious professional mistake. The government is embarrassing a large group of friendly countries, jeopardizing future votes in international institutions, and damaging our foreign relations. Last year, after a decade of efforts, we succeeded in signing an economic-political agreement with the European Union that will contribute to the Israeli economy and our foreign relations. Why risk it, and for what? Ben-Gvir is not a legitimate person in the international community (and not really in Israel either), and sometimes you have to be both wise and just and simply send someone else". On 23 February 2023, Defense Minister Gallant signed an agreement assigning governmental powers in the West Bank to a body to be headed by Minister Bezalel Smotrich, who will effectively become the governor of the West Bank, controlling almost all areas of life in the area, including planning, building and infrastructure. Israeli governments have hitherto been careful to keep the occupation as a military government. The temporary holding of power by an occupying military force, pending a negotiated settlement, is a principle of international law – an expression of the prohibition against obtaining sovereignty through conquest that was introduced in the wake of World War II. An editorial in Haaretz noted that the assignment of governmental powers in the West Bank to a civilian governor, alongside the plan to expand the dual justice system so that Israeli law will apply fully to settlers in the West Bank, constitutes de jure annexation of the West Bank. 
On 26 February 2023, following the 2023 Huwara shooting in which two Israelis were killed by an unidentified attacker, hundreds of Israeli settlers attacked the Palestinian town of Huwara and three nearby villages, setting alight hundreds of Palestinian homes (some with people in them), businesses, a school, and numerous vehicles, killing one Palestinian man and injuring 100 others. Bezalel Smotrich subsequently called on Twitter for Huwara to be "wiped out" by the Israeli government. Zvika Fogel MK, of the ultra-nationalist Otzma Yehudit, which forms part of the governing coalition, said that he "looks very favorably upon" the results of the rampage. Members of the coalition proposed an amendment to the Disengagement Law, which would allow Israelis to resettle settlements vacated during the 2005 Israeli disengagement from Gaza and the northern West Bank. Most countries considered the evacuated settlements illegal under international law. The proposal was approved for voting by the Foreign Affairs and Defense Committee on 9 March 2023, while the committee was still waiting for briefing materials from the NSS, IDF, MFA and Shin Bet, and was passed on 21 March. The US requested clarification from Israeli ambassador Michael Herzog. A US State Department spokesman stated that "The U.S. strongly urges Israel to refrain from allowing the return of settlers to the area covered by the legislation, consistent with both former Prime Minister Sharon and the current Israeli Government's commitment to the United States," noting that the actions represented a clear violation of undertakings given by the Sharon government to the Bush administration in 2005 and by Netanyahu's far-right coalition to the Biden administration the previous week.
Minister of Communication Shlomo Karhi had initially intended to cut the funding of the Israeli Public Broadcasting Corporation (also known by its blanket branding Kan) by 400 million shekels – roughly half of its total budget – closing several departments, and privatizing content creation. In response, the Director-General of the European Broadcasting Union, Noel Curran, sent two urgent letters to Netanyahu, expressing his concerns and calling on the Israeli government to "safeguard the independence of our Member KAN and ensure it is allowed to operate in a sustainable way, with funding that is both stable, adequate, fair, and transparent." On 25 January 2023, nine journalist organizations representing some of Kan's competitors issued a statement of concern, acknowledging the "important contribution of public broadcasting in creating a worthy, unbiased and non-prejudicial journalistic platform", and noting that "the existence of the [broadcasting] corporation as a substantial public broadcast organization strengthens media as a whole, adding to the competition in the market rather than weakening it." They also expressed their concern that the "real reason" for the proposal was actually "an attempt to silence voices from which... [the Minister] doesn't always draw satisfaction". The same day, hundreds of journalists, actors and filmmakers protested in Tel Aviv. The proposal was eventually put on hold. On 22 February 2023 it was reported that Prime Minister Netanyahu was attempting to appoint his close associate Yossi Shelley as the deputy to the National Statistician — a highly sensitive position in charge of providing accurate data for decision makers. The appointment of Shelley, who did not possess the required qualifications for the role, was withdrawn following publication. 
In its daily editorial, Haaretz tied this attempt with the judicial reform: "once they take control of the judiciary, law enforcement and public media, they wish to control the state's data base, the dry numerical data it uses to plan its future". Netanyahu also proposed Avi Simhon for the role, and eventually froze all appointments at the Israel Central Bureau of Statistics. Also on 22 February 2023, it was revealed that Yoav Kish, the Minister of Education, was promoting a draft government decision change to the National Library of Israel board of directors which would grant him more power over the institution. In response, the Hebrew University — which owned the library until 2008 – announced that if the draft is accepted, it will withdraw its collections from the library. The university's collections, which according to the university constitute some 80% of the library's collection, include the Agnon archive, the original manuscript of Hatikvah, and the Rothschild Haggadah, the oldest known Haggadah. A group of 300 authors and poets signed an open letter against the move, further noting their objection against "political takeover" of public broadcasting, as well as "any legislation that will castrate the judiciary and damage the democratic foundations of the state of Israel". Several days later, it was reported that a series of donors decided to withhold their donations to the library, totaling some 80 million shekels. On 3 March a petition against the move by 1,500 academics, including Israel Prize laureates, was sent to Kish. The proposal has been seen by some as retribution against Shai Nitzan, the former State Attorney and the library's current rector. On 5 March it was reported that the Legal Advisor to the Ministry of Finance, Asi Messing, was withholding the proposal. According to Messing, the proposal – which was being promoted as part of the Economic Arrangements Law – "was not reviewed... 
by the qualified personnel in the Ministry of Finance, does not align with any of the common goals of the economic plan, was not agreed to by myself and was not approved by the Attorney General." As of February 2023, the government has been debating several proposals that would significantly weaken the Ministry of Environmental Protection, including reducing the environmental regulation of planning and development and electricity production. One of the main proposals, the transfer of a 3 billion shekel fund meant to finance waste management plants from the Ministry of Environmental Protection to the Ministry of the Interior, was eventually withdrawn. The Minister of Environmental Protection, Idit Silman, has been criticized for meeting with climate change deniers, for wasteful and personally motivated travel at the ministry's expense, for politicizing the role, and for engaging in political activity on the ministry's time. The government has been noted for an unusually high number of dismissals and resignations of senior career civil servants, and for frequent attempts to replace them with candidates with known political associations, who are often less competent. According to sources, Netanyahu and people close to him are seeking out civil servants who were appointed by the previous government, intent on replacing them with people loyal to him. Governmental nominees for various positions have been criticized for lack of expertise. In addition to the nominee to the position of Deputy National Statistician (see above), the Director General of the Ministry of Finance, Shlomi Heisler; the Director General of the Ministry of Justice, Itamar Donenfeld; and the Director General of the Ministry of Transport, Moshe Ben Zaken, have all been criticized for incompetence, lack of familiarity with their ministries' subject matter, lack of interest in the job, or lack of experience in managing large organizations.
It has been reported that in some ministries, senior officials were enacting slowdowns as a means of dealing with the new ministers and directors general. On 28 July the director general of the Ministry of Education resigned, citing the societal "rift" as his reason. Asaf Zalel, a retired Air Force Brigadier General, was appointed in January. When asked about attempts to appoint his personal friend and attorney to the board of directors of a state-owned company, Minister David Amsalem replied: "that is my job, due to my authority to appoint directors. I put forward people that I know and hold in esteem". Under Minister of Transport Miri Regev, the ministry has either dismissed or lost the heads of the National Public Transport Authority, Israel Airports Authority, National Road Safety Authority, Israel Railways, and several officials in Netivei Israel. The current chair of Netivei Israel is Likud member and Regev associate Yigal Amadi, and the legal counsel is Einav Abuhzira, daughter of a former Likud branch chair. Abuhzira was appointed instead of Elad Berdugo, nephew of Netanyahu surrogate Yaakov Bardugo, after he was disqualified for the role by the Israel Government Companies Authority. In July 2023 the Minister of Communications, Shlomo Karhi, and the minister in charge of the Israel Government Companies Authority, Dudi Amsalem, deposed the chair of the Israel Postal Company, Michael Vaknin. The chair, who was hired to lead the company's financial recovery after years of operational loss and towards privatization, had gained the support of officials at the Authority and at the Ministry of Finance; nevertheless, the ministers claimed that his performance was inadequate, and nominated in his place Yiftah Ron-Tal, who has known ties to Netanyahu and Smotrich. They also nominated four new directors, two of whom have known political associations, and a third who was a witness in Netanyahu's trial.
The coalition is allowed to spend a portion of the state's budget on a discretionary basis, meant to coax member parties to reach an agreement on the budget. As of May 2023, the government was pushing an allocation of over 13 billion shekels over two years, almost seven times the amount allocated by the previous government. Most of the funds will be allocated for uses associated with the religious, Orthodox and settler communities. The head of the Budget Department at the Ministry of Finance, Yoav Gardos, objected to the allocations, claiming they would exacerbate unemployment in the Orthodox community, which is projected to cost the economy a total of 6.7 trillion shekels in lost output by 2065. At the onset of the Gaza war and the declaration of a state of national emergency, Minister of Finance Bezalel Smotrich instructed government agencies to continue with the planned distribution of discretionary funds.

Corruption

During March 2023, the government was promoting an amendment to the Law on Public Service (Gifts) that would allow Netanyahu to receive donations to fund his legal defense. The amendment follows a decision by the High Court of Justice (HCJ) that forced Netanyahu to refund US$270,000 given to him and his wife by his late cousin, Nathan Mileikowsky, for their legal defense. This is in contrast to past statements by Minister of Justice Yariv Levin, who spoke against the possible conflict of interests that can result from such transactions. The bill was opposed by the Attorney General Gali Baharav-Miara, who stressed that it could "create a real opportunity for governmental corruption", and was eventually withdrawn at the end of March. As of March 2023, the coalition was promoting a bill that would prevent judicial review of ministerial appointments.
The bill is intended to prevent the HCJ from reviewing the appointment of the twice-convicted chairman of Shas, Aryeh Deri (convicted of bribery, fraud, and breach of trust), to a ministerial position, after his previous appointment was annulled on grounds of unreasonableness. The bill follows on the heels of another amendment that relaxed the ban on the appointment of convicted criminals, so that Deri, who was handed a suspended sentence after his second conviction, could be appointed. The bill is opposed by the Attorney General, as well as by the Knesset Legal Adviser, Sagit Afik. Israeli law allows for declaring a Prime Minister (as well as several other high-ranking public officials) to be temporarily or permanently incapacitated, but does not specify the conditions which can lead to a declaration of incapacitation. In the case of the Prime Minister, the authority to do so is given to the Attorney General. In March 2023, the coalition advanced a bill that passes this authority from the Attorney General to the government with the approval of the Knesset committee, and clarifies that incapacitation can only result from medical or mental conditions. On 3 January 2024, the Supreme Court ruled by a majority of 6 out of 11 that the validity of the law would be postponed to the next Knesset, because the bill in its immediate application is a personal law and is intended to serve a distinct personal purpose. Later, the court rejected a petition regarding the definition of Netanyahu as an incapacitated prime minister due to his ongoing trial and conflict of interests.
======================================== |
[SOURCE: https://en.wikipedia.org/wiki/Jews#cite_note-:1-165] | [TOKENS: 15852] |
Jews (Hebrew: יְהוּדִים, ISO 259-2: Yehudim, Israeli pronunciation: [jehuˈdim]), or the Jewish people, are an ethnoreligious group and nation, originating from the Israelites of ancient Israel and Judah. They traditionally adhere to Judaism. Jewish ethnicity, religion, and community are highly interrelated, as Judaism is an ethnic religion, though many ethnic Jews do not practice it. Religious Jews regard converts to Judaism as members of the Jewish nation, pursuant to the long-standing conversion process. The Israelites emerged from the pre-existing Canaanite peoples to establish Israel and Judah in the Southern Levant during the Iron Age. Originally, Jews referred to the inhabitants of the kingdom of Judah and were distinguished from the gentiles and the Samaritans. According to the Hebrew Bible, these inhabitants predominantly originate from the tribe of Judah, who were descendants of Judah, the fourth son of Jacob. The tribe of Benjamin were another significant demographic in Judah and were considered Jews too. By the late 6th century BCE, Judaism had evolved from the Israelite religion, dubbed Yahwism (for Yahweh) by modern scholars, having a theology that religious Jews believe to be the expression of the Mosaic covenant between God and the Jewish people. After the Babylonian exile, Jews referred to followers of Judaism, descendants of the Israelites, citizens of Judea, or allies of the Judean state. Jewish migration within the Mediterranean region during the Hellenistic period, followed by population transfers, caused by events like the Jewish–Roman wars, gave rise to the Jewish diaspora, consisting of diverse Jewish communities that maintained their sense of Jewish history, identity, and culture. 
In the following millennia, Jewish diaspora communities coalesced into three major ethnic subdivisions according to where their ancestors settled: the Ashkenazim (Central and Eastern Europe), the Sephardim (Iberian Peninsula), and the Mizrahim (Middle East and North Africa). While these three major divisions account for most of the world's Jews, there are other smaller Jewish groups outside of the three. Prior to World War II, the global Jewish population reached a peak of 16.7 million, representing around 0.7% of the world's population at that time. During World War II, approximately six million Jews throughout Europe were systematically murdered by Nazi Germany in a genocide known as the Holocaust. Since then, the population has slowly risen again, and as of 2021 was estimated at 15.2 million by the demographer Sergio Della Pergola, or less than 0.2% of the total world population in 2012.[b] Today, over 85% of Jews live in Israel or the United States. Israel, whose population is 73.9% Jewish, is the only country where Jews comprise more than 2.5% of the population. Jews have significantly influenced and contributed to the development and growth of human progress in many fields, both historically and in modern times, including in science and technology, philosophy, ethics, literature, governance, business, art, music, comedy, theatre, cinema, architecture, food, medicine, and religion. Jews founded Christianity and had an indirect but profound influence on Islam. In these ways and others, Jews have played a significant role in the development of Western culture. Name and etymology The term "Jew" is derived from the Hebrew word יְהוּדִי Yehudi, with the plural יְהוּדִים Yehudim. Endonyms in other Jewish languages include the Ladino ג׳ודיו Djudio (plural ג׳ודיוס, Djudios) and the Yiddish ייִד Yid (plural ייִדן Yidn). 
Though Genesis 29:35 and 49:8 connect "Judah" with the verb yada, meaning "praise", scholars generally agree that "Judah" most likely derives from the name of a Levantine geographic region dominated by gorges and ravines. The gradual ethnonymic shift from "Israelites" to "Jews", regardless of their descent from Judah, although not contained in the Torah, is made explicit in the Book of Esther (4th century BCE) of the Tanakh. Some modern scholars disagree with the conflation, based on the works of Josephus, Philo and Apostle Paul. The English word "Jew" is a derivation of Middle English Gyw, Iewe. The latter was loaned from the Old French giu, which itself evolved from the earlier juieu, which in turn derived from judieu/iudieu which through elision had dropped the letter "d" from the Medieval Latin Iudaeus, which, like the New Testament Greek term Ioudaios, meant both "Jew" and "Judean" / "of Judea". The Greek term was a loan from Aramaic *yahūdāy, corresponding to Hebrew יְהוּדִי Yehudi. Some scholars prefer translating Ioudaios as "Judean" in the Bible since it is more precise, denotes the community's origins and prevents readers from engaging in antisemitic eisegesis. Others disagree, believing that it erases the Jewish identity of Biblical characters such as Jesus. Daniel R. Schwartz distinguishes "Judean" and "Jew". Here, "Judean" refers to the inhabitants of Judea, which encompassed southern Palestine. Meanwhile, "Jew" refers to the descendants of Israelites that adhere to Judaism. Converts are included in the definition. But Shaye J.D. Cohen argues that "Judean" is inclusive of believers of the Judean God and allies of the Judean state. Another scholar, Jodi Magness, wrote the term Ioudaioi refers to a "people of Judahite/Judean ancestry who worshipped the God of Israel as their national deity and (at least nominally) lived according to his laws." 
The etymological equivalent is in use in other languages, e.g., يَهُودِيّ yahūdī (sg.), al-yahūd (pl.), in Arabic, "Jude" in German, "judeu" in Portuguese, "Juif" (m.)/"Juive" (f.) in French, "jøde" in Danish and Norwegian, "judío/a" in Spanish, "jood" in Dutch, "żyd" in Polish etc., but derivations of the word "Hebrew" are also in use to describe a Jew, e.g., in Italian (Ebreo), in Persian ("Ebri/Ebrani" (Persian: عبری/عبرانی)) and Russian (Еврей, Yevrey). The German word "Jude" is pronounced [ˈjuːdə], the corresponding adjective "jüdisch" [ˈjyːdɪʃ] (Jewish) is the origin of the word "Yiddish". According to The American Heritage Dictionary of the English Language, fourth edition (2000), It is widely recognized that the attributive use of the noun Jew, in phrases such as Jew lawyer or Jew ethics, is both vulgar and highly offensive. In such contexts Jewish is the only acceptable possibility. Some people, however, have become so wary of this construction that they have extended the stigma to any use of Jew as a noun, a practice that carries risks of its own. In a sentence such as There are now several Jews on the council, which is unobjectionable, the substitution of a circumlocution like Jewish people or persons of Jewish background may in itself cause offense for seeming to imply that Jew has a negative connotation when used as a noun. 
Identity Judaism shares some of the characteristics of a nation, an ethnicity, a religion, and a culture, making the definition of who is a Jew vary slightly depending on whether a religious or national approach to identity is used. Generally, in modern secular usage, Jews include three groups: people who were born to a Jewish family regardless of whether or not they follow the religion, those who have some Jewish ancestral background or lineage (sometimes including those who do not have strictly matrilineal descent), and people without any Jewish ancestral background or lineage who have formally converted to Judaism and therefore are followers of the religion. In the context of biblical and classical literature, Jews could refer to inhabitants of the Kingdom of Judah, or the broader Judean region, allies of the Judean state, or anyone that followed Judaism. Historical definitions of Jewish identity have traditionally been based on halakhic definitions of matrilineal descent, and halakhic conversions. These definitions of who is a Jew date back to the codification of the Oral Torah into the Babylonian Talmud, around 200 CE. Interpretations by Jewish sages of sections of the Tanakh – such as Deuteronomy 7:1–5, which forbade intermarriage between their Israelite ancestors and seven non-Israelite nations: "for that [i.e. giving your daughters to their sons or taking their daughters for your sons,] would turn away your children from following me, to serve other gods" – are used as a warning against intermarriage between Jews and gentiles. Leviticus 24:10 says that the son in a marriage between a Hebrew woman and an Egyptian man is "of the community of Israel." This is complemented by Ezra 10:2–3, where Israelites returning from Babylon vow to put aside their gentile wives and their children. 
A popular theory is that the rape of Jewish women in captivity brought about the law of Jewish identity being inherited through the maternal line, although scholars challenge this theory citing the Talmudic establishment of the law from the pre-exile period. Another argument is that the rabbis changed the law of patrilineal descent to matrilineal descent due to the widespread rape of Jewish women by Roman soldiers. Since the anti-religious Haskalah movement of the late 18th and 19th centuries, halakhic interpretations of Jewish identity have been challenged. According to historian Shaye J. D. Cohen, the status of the offspring of mixed marriages was determined patrilineally in the Bible. He brings two likely explanations for the change in Mishnaic times: first, the Mishnah may have been applying the same logic to mixed marriages as it had applied to other mixtures (Kil'ayim). Thus, a mixed marriage is forbidden as is the union of a horse and a donkey, and in both unions the offspring are judged matrilineally. Second, the Tannaim may have been influenced by Roman law, which dictated that when a parent could not contract a legal marriage, offspring would follow the mother. Rabbi Rivon Krygier follows a similar reasoning, arguing that Jewish descent had formerly passed through the patrilineal descent and the law of matrilineal descent had its roots in the Roman legal system. Origins The prehistory and ethnogenesis of the Jews are closely intertwined with archaeology, biology, historical textual records, mythology, and religious literature. The ethnic origin of the Jews lie in the Israelites, a confederation of Iron Age Semitic-speaking tribes that inhabited a part of Canaan during the tribal and monarchic periods. Modern Jews are named after and also descended from the southern Israelite Kingdom of Judah. Gary A. Rendsburg links the early Canaanite nomadic pastoralists confederation to the Shasu known to the Egyptians around the 15th century BCE. 
According to the Hebrew Bible narrative, Jewish history begins with the Biblical patriarchs such as Abraham, his son Isaac, Isaac's son Jacob, and the Biblical matriarchs Sarah, Rebecca, Leah, and Rachel, who lived in Canaan. The twelve sons of Jacob subsequently gave birth to the Twelve Tribes. Jacob and his family migrated to Ancient Egypt after being invited to live with Jacob's son Joseph by the Pharaoh himself. Jacob's descendants were later enslaved until the Exodus, led by Moses. Afterwards, the Israelites conquered Canaan under Moses' successor Joshua, and went through the period of the Biblical judges after the death of Joshua. Through the mediation of Samuel, the Israelites were subject to a king, Saul, who was succeeded by David and then Solomon, after whom the United Monarchy ended and was split into a separate Kingdom of Israel and a Kingdom of Judah. The Kingdom of Judah is described as comprising the tribes of Judah, Benjamin and, partially, Levi. They later assimilated remnants of other tribes who migrated there from the northern Kingdom of Israel. In the extra-biblical record, the Israelites become visible as a people between 1200 and 1000 BCE. There is well-accepted archaeological evidence referring to "Israel" in the Merneptah Stele, which dates to about 1200 BCE, and in the Mesha stele from 840 BCE. It is debated whether a period like that of the Biblical judges occurred and if there ever was a United Monarchy. There is further disagreement about the earliest existence of the Kingdoms of Israel and Judah and their extent and power. Historians agree that a Kingdom of Israel existed by c. 900 BCE, there is a consensus that a Kingdom of Judah existed by c. 700 BCE at least, and recent excavations in Khirbet Qeiyafa have provided strong evidence for dating the Kingdom of Judah to the 10th century BCE. 
In 587 BCE, Nebuchadnezzar II, King of the Neo-Babylonian Empire, besieged Jerusalem, destroyed the First Temple and deported parts of the Judahite population. Scholars disagree regarding the extent to which the Bible should be accepted as a historical source for early Israelite history. Rendsburg states that there are two approximately equal groups of scholars who debate the historicity of the biblical narrative, the minimalists who largely reject it, and the maximalists who largely accept it, with the minimalists being the more vocal of the two. Some of the leading minimalists reframe the biblical account as constituting the Israelites' inspiring national myth narrative, suggesting that according to the modern archaeological and historical account, the Israelites and their culture did not overtake the region by force, but instead branched out of the Canaanite peoples and culture through the development of a distinct monolatristic—and later monotheistic—religion of Yahwism centered on Yahweh, one of the gods of the Canaanite pantheon. The growth of Yahweh-centric belief, along with a number of cultic practices, gradually gave rise to a distinct Israelite ethnic group, setting them apart from other Canaanites. According to Dever, modern archaeologists have largely discarded the search for evidence of the biblical narrative surrounding the patriarchs and the exodus. According to the maximalist position, the modern archaeological record independently points to a narrative which largely agrees with the biblical account. This narrative provides a testimony of the Israelites as a nomadic people known to the Egyptians as belonging to the Shasu. Over time these nomads left the desert and settled on the central mountain range of the land of Canaan, in simple semi-nomadic settlements in which pig bones are notably absent. This population gradually shifted from a tribal lifestyle to a monarchy. 
The archaeological record of the ninth century BCE provides evidence for two monarchies: one in the south under a dynasty founded by a figure named David with its capital in Jerusalem, and one in the north under a dynasty founded by a figure named Omri with its capital in Samaria. It also points to an early monarchic period in which these regions shared material culture and religion, suggesting a common origin. Archaeological finds also provide evidence for the later cooperation of these two kingdoms in their coalition against Aram, and for their destruction by the Assyrians and later by the Babylonians. Genetic studies on Jews show that most Jews worldwide bear a common genetic heritage which originates in the Middle East, and that they share certain genetic traits with other Gentile peoples of the Fertile Crescent. The genetic composition of different Jewish groups shows that Jews share a common gene pool dating back four millennia, as a marker of their common ancestral origin. Despite their long-term separation, Jewish communities maintained their unique commonalities, propensities, and sensibilities in culture, tradition, and language. History The earliest recorded evidence of a people by the name of Israel appears in the Merneptah Stele, which dates to around 1200 BCE. The majority of scholars agree that this text refers to the Israelites, a group that inhabited the central highlands of Canaan, where archaeological evidence shows that hundreds of small settlements were constructed between the 12th and 10th centuries BCE. The Israelites differentiated themselves from neighboring peoples through various distinct characteristics including religious practices, prohibition on intermarriage, and an emphasis on genealogy and family history. In the 10th century BCE, two neighboring Israelite kingdoms—the northern Kingdom of Israel and the southern Kingdom of Judah—emerged. 
Since their inception, they shared ethnic, cultural, linguistic and religious characteristics despite a complicated relationship. Israel, with its capital mostly in Samaria, was larger and wealthier, and soon developed into a regional power. In contrast, Judah, with its capital in Jerusalem, was less prosperous and covered a smaller, mostly mountainous territory. However, while in Israel the royal succession was often decided by a military coup d'état, resulting in several dynasty changes, political stability in Judah was much greater, as it was ruled by the House of David for the whole four centuries of its existence. Scholars also describe Biblical Jews as a 'proto-nation', in the modern nationalist sense, comparable to the classical Greeks, the Gauls and the British Celts. Around 720 BCE, the Kingdom of Israel was destroyed when it was conquered by the Neo-Assyrian Empire, which came to dominate the ancient Near East. Under the Assyrian resettlement policy, a significant portion of the northern Israelite population was exiled to Mesopotamia and replaced by immigrants from the same region. During the same period, and throughout the 7th century BCE, the Kingdom of Judah, now under Assyrian vassalage, experienced a period of prosperity and witnessed significant population growth. This prosperity continued until the Neo-Assyrian king Sennacherib devastated the region of Judah in response to a rebellion in the area, ultimately halting at Jerusalem. Later in the same century, the Assyrians were defeated by the rising Neo-Babylonian Empire, and Judah became its vassal. In 587 BCE, following a revolt in Judah, the Babylonian king Nebuchadnezzar II besieged and destroyed Jerusalem and the First Temple, putting an end to the kingdom. The majority of Jerusalem's residents, including the kingdom's elite, were exiled to Babylon. According to the Book of Ezra, the Persian Cyrus the Great ended the Babylonian exile in 538 BCE, the year after he captured Babylon. 
The exile ended with the return under Zerubbabel the Prince (so called because he was a descendant of the royal line of David) and Joshua the Priest (a descendant of the line of the former High Priests of the Temple) and their construction of the Second Temple circa 521–516 BCE. As part of the Persian Empire, the former Kingdom of Judah became the province of Judah (Yehud Medinata), with a smaller territory and a reduced population. Judea was under control of the Achaemenids until the fall of their empire in c. 333 BCE to Alexander the Great. After several centuries under foreign imperial rule, the Maccabean Revolt against the Seleucid Empire resulted in an independent Hasmonean kingdom, under which the Jews once again enjoyed political independence for a period spanning from 110 to 63 BCE. Under Hasmonean rule the boundaries of their kingdom were expanded to include not only the land of the historical kingdom of Judah, but also the Galilee and Transjordan. In the beginning of this process the Idumeans, who had infiltrated southern Judea after the destruction of the First Temple, were converted en masse. In 63 BCE, Judea was conquered by the Romans. From 37 BCE to 6 CE, the Romans allowed the Jews to maintain some degree of independence by installing the Herodian dynasty as vassal kings. However, Judea eventually came directly under Roman control and was incorporated into the Roman Empire as the province of Judaea. The Jewish–Roman wars, a series of failed uprisings against Roman rule during the first and second centuries CE, had profound and devastating consequences for the Jewish population of Judaea. The First Jewish–Roman War (66–73/74 CE) culminated in the destruction of Jerusalem and the Second Temple, after which the significantly diminished Jewish population was stripped of political autonomy. 
A few generations later, the Bar Kokhba revolt (132–136 CE) erupted in response to Roman plans to rebuild Jerusalem as a Roman colony, and, possibly, to restrictions on circumcision. Its violent suppression by the Romans led to the near-total depopulation of Judea, and the demographic and cultural center of Jewish life shifted to Galilee. Jews were subsequently banned from residing in Jerusalem and the surrounding area, and the province of Judaea was renamed Syria Palaestina. These developments effectively ended Jewish efforts to restore political sovereignty in the region for nearly two millennia. Similar upheavals impacted the Jewish communities in the empire's eastern provinces during the Diaspora Revolt (115–117 CE), leading to the near-total destruction of Jewish diaspora communities in Libya, Cyprus and Egypt, including the highly influential community in Alexandria. The destruction of the Second Temple in 70 CE brought profound changes to Judaism. With the Temple's central place in Jewish worship gone, religious practices shifted towards prayer, Torah study (including Oral Torah), and communal gatherings in synagogues. Judaism also lost much of its sectarian nature. Two of the three main sects that flourished during the late Second Temple period, namely the Sadducees and Essenes, eventually disappeared, while Pharisaic beliefs became the foundational, liturgical, and ritualistic basis of Rabbinic Judaism, which emerged as the prevailing form of Judaism since late antiquity. The Jewish diaspora existed well before the destruction of the Second Temple in 70 CE and had been ongoing for centuries, with the dispersal driven by both forced expulsions and voluntary migrations. 
In Mesopotamia, a testimony to the beginnings of the Jewish community can be found in Jehoiachin's ration tablets, listing provisions allotted to the exiled Judean king and his family by Nebuchadnezzar II, and further evidence comes from the Al-Yahudu tablets, dated to the 6th–5th centuries BCE and related to the exiles from Judea arriving after the destruction of the First Temple, though there is ample evidence for the presence of Jews in Babylonia even from 626 BCE. In Egypt, the documents from Elephantine reveal the trials of a community founded by a Persian Jewish garrison at two fortresses on the frontier during the 5th–4th centuries BCE, and according to Josephus the Jewish community in Alexandria existed since the founding of the city in the 4th century BCE by Alexander the Great. By 200 BCE, there were well-established Jewish communities both in Egypt and Mesopotamia ("Babylonia" in Jewish sources), and in the two centuries that followed, Jewish populations were also present in Asia Minor, Greece, Macedonia, Cyrene, and, beginning in the middle of the first century BCE, in the city of Rome. Later, in the first centuries CE, as a result of the Jewish–Roman wars, a large number of Jews were taken as captives, sold into slavery, or compelled to flee from the regions affected by the wars, contributing to the formation and expansion of Jewish communities across the Roman Empire as well as in Arabia and Mesopotamia. After the Bar Kokhba revolt, the Jewish population in Judaea—now significantly reduced—made efforts to recover from the revolt's devastating effects, but never fully regained its former strength. Between the second and fourth centuries CE, the region of Galilee emerged as the primary center of Jewish life in Syria Palaestina, experiencing both demographic growth and cultural development. It was during this period that two central rabbinic texts, the Mishnah and the Jerusalem Talmud, were composed. 
The Romans recognized the patriarchs—rabbinic sages such as Judah ha-Nasi—as representatives of the Jewish people, granting them a certain degree of autonomy. However, as the Roman Empire gave way to the Christianized Byzantine Empire under Constantine, Jews began to face persecution by both the Church and imperial authorities, and many emigrated to communities in the diaspora. By the fourth century CE, Jews are believed to have lost their demographic majority in Syria Palaestina. The long-established Jewish community of Mesopotamia, which had been living under Parthian and later Sasanian rule, beyond the confines of the Roman Empire, became an important center of Jewish study as Judea's Jewish population declined. Estimates often place the Babylonian Jewish community of the 3rd to 7th centuries at around one million, making it the largest Jewish diaspora community of that period. Under the political leadership of the exilarch, who was regarded as a royal heir of the House of David, this community had an autonomous status and served as a place of refuge for the Jews of Syria Palaestina. A number of significant Talmudic academies, such as the Nehardea, Pumbedita, and Sura academies, were established in Mesopotamia, and many important Amoraim were active there. The Babylonian Talmud, a centerpiece of Jewish religious law, was compiled in Babylonia in the 3rd to 6th centuries. Jewish diaspora communities are generally described to have coalesced into three major ethnic subdivisions according to where their ancestors settled: the Ashkenazim (initially in the Rhineland and France), the Sephardim (initially in the Iberian Peninsula), and the Mizrahim (Middle East and North Africa). Romaniote Jews, Tunisian Jews, Yemenite Jews, Egyptian Jews, Ethiopian Jews, Bukharan Jews, Mountain Jews, and other groups also predated the arrival of the Sephardic diaspora. 
During the same period, Jewish communities in the Middle East thrived under Islamic rule, especially in cities like Baghdad, Cairo, and Damascus. In Babylonia, from the 7th to 11th centuries the Pumbedita and Sura academies led the Arab and, to an extent, the entire Jewish world. The deans and students of these academies defined the Geonic period in Jewish history. Following this period were the Rishonim, who lived from the 11th to 15th centuries. Like their European counterparts, Jews in the Middle East and North Africa also faced periods of persecution and discriminatory policies, with the Almohad Caliphate in North Africa and Iberia issuing forced conversion decrees, causing Jews such as Maimonides to seek safety in other regions. Despite experiencing repeated waves of persecution, Ashkenazi Jews in Western Europe worked in a variety of fields, making an impact on their communities' economies and societies. In Francia, for example, figures like Isaac Judaeus and Armentarius occupied prominent social and economic positions. Francia also witnessed the development of a sophisticated tradition of biblical commentary, as exemplified by Rashi and the tosafists. In 1144, the first documented blood libel occurred in Norwich, England, marking an escalation in the pattern of discrimination and violence that Jews had already been subjected to throughout medieval Europe. During the 12th and 13th centuries, Jews faced frequent antisemitic legislation - including laws prescribing distinctive dress - alongside segregation, repeated blood libels, pogroms, and massacres such as the Rhineland massacres (1096). The Jews of the Holy Roman Empire were designated Servi camerae regis (“servants of the imperial chamber”) by Frederick II, a status that afforded limited protection while simultaneously entangling them in the political struggles between the emperor and the German principalities and cities. 
Persecution intensified during the Black Death in the mid-14th century, when Jews were accused of poisoning wells and many communities were destroyed. These pressures, combined with major expulsions such as that from England in 1290, gradually pushed Ashkenazi Jewish populations eastward into Poland, Lithuania, and Russia. One of the largest Jewish communities of the Middle Ages was in the Iberian Peninsula, which for a time contained the largest Jewish population in Europe. Iberian Jewry endured discrimination under the Visigoths but saw its fortunes improve under Umayyad rule and later the Taifa kingdoms. During this period, the Jews of Muslim Spain entered a "Golden Age" marked by achievements in Hebrew poetry and literature, religious scholarship, grammar, medicine and science, with leading figures including Hasdai ibn Shaprut, Judah Halevi, Moses ibn Ezra and Solomon ibn Gabirol. Jews also rose to high office, most notably Samuel ibn Naghrillah, a scholar and poet who served as grand vizier and military commander of Granada. The Golden Age ended with the rise of the radical Almoravid and Almohad dynasties, whose persecutions, together with the advancing Reconquista, drove many Jews (including Maimonides) from Iberia. In 1391, widespread pogroms swept across Spain, leaving thousands dead and forcing mass conversions. The Spanish Inquisition was later established to pursue, torture and execute conversos who continued to practice Judaism in secret, while public disputations were staged to discredit Judaism. In 1492, after the Reconquista, Isabella I of Castile and Ferdinand II of Aragon decreed the expulsion of all Jews who refused conversion, sending an estimated 200,000 into exile in Portugal, Italy, North Africa, and the Ottoman Empire. In 1497, Portugal's Jews, about 30,000, were formally ordered expelled but instead were forcibly converted to retain their economic role. In 1498, some 3,500 Jews were expelled from Navarre. 
Many converts outwardly adopted Christianity while secretly preserving Jewish practices, becoming crypto-Jews (also known as marranos or anusim), who remained targets of the various Inquisitions for centuries. Following the expulsions from Spain and Portugal in the 1490s, Jewish exiles dispersed across the Mediterranean, Europe, and North Africa. Many settled in the Ottoman Empire—which, replacing the Iberian Peninsula, became home to the world's largest Jewish population—where new communities developed in Anatolia, the Balkans, and the Land of Israel. Cities such as Istanbul and Thessaloniki grew into major Jewish centers, while in 16th-century Safed a flourishing spiritual life took shape. There, Solomon Alkabetz, Moses Cordovero, and Isaac Luria developed influential new schools of Kabbalah, giving powerful impetus to Jewish mysticism, and Joseph Karo composed the Shulchan Aruch, which became a cornerstone of Jewish law. In the 17th century, Portuguese conversos who returned to Judaism and engaged in trade and banking helped establish Amsterdam as a prosperous Jewish center, while also forming communities in cities such as Antwerp and London. This period also witnessed waves of messianic fervor, most notably the rise of the Sabbatean movement in the 1660s, led by Sabbatai Zvi of İzmir, which reverberated throughout the Jewish world. In Eastern Europe, Poland–Lithuania became the principal center of Ashkenazi Jewry, eventually becoming home to the largest Jewish population in the world. Jewish life flourished there in the early modern era, supported by relative stability, economic opportunity, and strong communal institutions. The mid-17th century brought devastation with the Cossack uprisings in Ukraine, which reversed migration flows and sent refugees westward, yet Poland–Lithuania remained the demographic and cultural heartland of Ashkenazic Jewry. 
Following the partitions of Poland, most of its Jews came under Russian rule and were confined to the "Pale of Settlement." The 18th century also witnessed new religious and intellectual currents. Hasidism, founded by Baal Shem Tov, emphasized mysticism and piety, while its opponents, the Misnagdim ("opponents") led by the Vilna Gaon, defended rabbinic scholarship and tradition. In Western Europe, during the 1760s and 1770s, the Haskalah (Jewish Enlightenment) emerged in German-speaking lands, where figures such as Moses Mendelssohn promoted secular learning, vernacular literacy, and integration into European society. Elsewhere, Jews began to be re-admitted to Western Europe, including England, where Menasseh ben Israel petitioned Oliver Cromwell for their return. In the Americas, Jews of Sephardic descent first arrived as conversos in Spanish and Portuguese colonies, where many faced trial by Inquisition tribunals for "judaizing." A more durable presence began in Dutch Brazil, where Jews openly practiced their religion and established the first synagogues in the New World, before the Portuguese reconquest forced their dispersal to Amsterdam, the Caribbean, and North America. Sephardic communities took root in Curaçao, Suriname, Jamaica, and Barbados, later joined by Ashkenazi migrants. In North America, Jews were present from the mid-17th century, with New Amsterdam hosting the first organized congregation in 1654. By the time of the American Revolution, small communities in New York, Newport, Philadelphia, Savannah, and Charleston played an active role in the struggle for independence. In the late 19th century, Jews in Western Europe gradually achieved legal emancipation, though social acceptance remained limited by persistent antisemitism and rising nationalism. In Eastern Europe, particularly within the Russian Empire's Pale of Settlement, Jews faced mounting legal restrictions and recurring pogroms. 
From this environment emerged Zionism, a national revival movement originating in Central and Eastern Europe that sought to re-establish a Jewish polity in the Land of Israel as a means of returning the Jewish people to their ancestral homeland and ending centuries of exile and persecution. This led to waves of Jewish migration to Ottoman-controlled Palestine. Theodor Herzl, who is considered the father of political Zionism, offered his vision of a future Jewish state in his 1896 book Der Judenstaat (The Jewish State); a year later, he presided over the First Zionist Congress. The antisemitism that afflicted Jewish communities in Europe also triggered a mass exodus of 2.8 million Jews to the United States between 1881 and 1924. Despite this, some Jews of Europe and the United States were able to make great achievements in various fields of science and culture. Among the most influential from this period are Albert Einstein in physics, Sigmund Freud in psychology, Franz Kafka in literature, and Irving Berlin in music. Many Nobel Prize winners in this period were Jewish, as remains the case today. When Adolf Hitler and the Nazi Party came to power in Germany in 1933, the situation for Jews deteriorated rapidly as a direct result of Nazi policies. Many Jews fled from Europe to Mandatory Palestine, the United States, and the Soviet Union as a result of racial anti-Semitic laws, economic difficulties, and the fear of an impending war. World War II started in 1939, and by 1941, Nazi Germany had occupied almost all of continental Europe. Following the German invasion of the Soviet Union in 1941, the Final Solution—an extensive, organized effort with an unprecedented scope intended to annihilate the Jewish people—began, and resulted in the persecution and murder of Jews in Europe and North Africa. In Poland, three million Jews were murdered in gas chambers across the extermination camps, one million of them at the Auschwitz camp complex alone.
The Holocaust is the name given to this genocide, in which six million Jews in total were systematically murdered. Before and during the Holocaust, enormous numbers of Jews immigrated to Mandatory Palestine. In 1944, the Jewish insurgency in Mandatory Palestine began with the aim of gaining full independence from the United Kingdom. On 14 May 1948, upon the termination of the mandate, David Ben-Gurion declared the creation of the State of Israel, a Jewish and democratic state. Immediately afterwards, all neighboring Arab states invaded, and were resisted by the newly formed Israel Defense Forces. In 1949, the war ended and Israel started building its state and absorbing waves of Aliyah, granting citizenship to Jews all over the world via the Law of Return passed in 1950. However, both the Israeli–Palestinian conflict and the wider Arab–Israeli conflict continue to this day.
Culture
The Jewish people and the religion of Judaism are strongly interrelated. Converts to Judaism have a status within the Jewish people equal to those born into it. However, converts who do not go on to practice Judaism may be viewed with skepticism. Mainstream Judaism does not proselytize, and conversion is considered a difficult task. A significant portion of conversions are undertaken by children of mixed marriages, or by would-be or current spouses of Jews. The Hebrew Bible, a religious interpretation of the traditions and early history of the Jews, established the first of the Abrahamic religions, which are now practiced by 54 percent of the world's population. Judaism guides its adherents in both practice and belief, and has been called not only a religion, but also a "way of life," which has made drawing a clear distinction between Judaism, Jewish culture, and Jewish identity rather difficult.
Throughout history, in eras and places as diverse as the ancient Hellenic world, in Europe before and after the Age of Enlightenment (see Haskalah), in Islamic Spain and Portugal, in North Africa and the Middle East, India, China, or the contemporary United States and Israel, cultural phenomena have developed that are in some sense characteristically Jewish without being at all specifically religious. Some factors in this come from within Judaism, others from the interaction of Jews or specific communities of Jews with their surroundings, and still others from the inner social and cultural dynamics of the community, as opposed to from the religion itself. This phenomenon has led to considerably different Jewish cultures unique to their own communities. Hebrew is the liturgical language of Judaism (termed lashon ha-kodesh, "the holy tongue"), the language in which most of the Hebrew scriptures (Tanakh) were composed, and the daily speech of the Jewish people for centuries. By the 5th century BCE, Aramaic, a closely related tongue, joined Hebrew as the spoken language in Judea. By the 3rd century BCE, some Jews of the diaspora were speaking Greek. Others, such as the Jewish communities of Asoristan, known to Jews as Babylonia, were speaking Hebrew and Aramaic, the languages of the Babylonian Talmud. Dialects of these same languages were also used by the Jews of Syria Palaestina at that time. For centuries, Jews worldwide have spoken the local or dominant languages of the regions they migrated to, often developing distinctive dialectal forms or branches that became independent languages. Yiddish is the Judaeo-German language developed by Ashkenazi Jews who migrated to Central Europe. Ladino is the Judaeo-Spanish language developed by Sephardic Jews who migrated to the Iberian Peninsula.
Due to many factors, including the impact of the Holocaust on European Jewry, the Jewish exodus from Arab and Muslim countries, and widespread emigration from other Jewish communities around the world, ancient and distinct Jewish languages of several communities, including Judaeo-Georgian, Judaeo-Arabic, Judaeo-Berber, Krymchak, Judaeo-Malayalam and many others, have largely fallen out of use. For over sixteen centuries Hebrew was used almost exclusively as a liturgical language and as the language in which most books on Judaism were written, with a few speaking only Hebrew on the Sabbath. Hebrew was revived as a spoken language by Eliezer ben Yehuda, who arrived in Palestine in 1881. It had not been used as a mother tongue since Tannaitic times. Modern Hebrew is designated as the "State language" of Israel. Despite efforts to revive Hebrew as the national language of the Jewish people, knowledge of the language is not commonly possessed by Jews worldwide, and English has emerged as the lingua franca of the Jewish diaspora. Although many Jews once had sufficient knowledge of Hebrew to study the classic literature, and Jewish languages like Yiddish and Ladino were commonly used as recently as the early 20th century, most Jews lack such knowledge today and English has by and large superseded most Jewish vernaculars. The three most commonly spoken languages among Jews today are Hebrew, English, and Russian. Some Romance languages, particularly French and Spanish, are also widely used. Yiddish has been spoken by more Jews in history than any other language, but it is far less used today following the Holocaust and the adoption of Modern Hebrew by the Zionist movement and the State of Israel. In some places, the mother language of the Jewish community differs from that of the general population or the dominant group. For example, in Quebec, the Ashkenazic majority has adopted English, while the Sephardic minority uses French as its primary language.
Similarly, South African Jews adopted English rather than Afrikaans. Due to both Tsarist and Soviet policies, Russian has superseded Yiddish as the language of Russian Jews, but these policies have also affected neighboring communities. Today, Russian is the first language for many Jewish communities in a number of post-Soviet states, such as Ukraine and Uzbekistan, as well as for Ashkenazic Jews in Azerbaijan, Georgia, and Tajikistan. Although communities in North Africa today are small and dwindling, Jews there had shifted from a multilingual group to a monolingual one (or nearly so), speaking French in Algeria, Morocco, and the city of Tunis, while most North Africans continue to use Arabic or Berber as their mother tongue. There is no single governing body for the Jewish community, nor a single authority with responsibility for religious doctrine. Instead, a variety of secular and religious institutions at the local, national, and international levels lead various parts of the Jewish community on a variety of issues. Today, many countries have a Chief Rabbi who serves as a representative of that country's Jewry. Although many Hasidic Jews follow a certain hereditary Hasidic dynasty, there is no one commonly accepted leader of all Hasidic Jews. Many Jews believe that the Messiah will act as a unifying leader for Jews and the entire world. A number of modern scholars of nationalism support the existence of Jewish national identity in antiquity. One of them is David Goodblatt, who generally believes in the existence of nationalism before the modern period. In his view, the Bible, the parabiblical literature and the Jewish national history provide the basis for a Jewish collective identity. Although many of the ancient Jews were illiterate (as were their neighbors), their national narrative was reinforced through public readings. The Hebrew language also constructed and preserved national identity.
Although it was not widely spoken after the 5th century BCE, Goodblatt states: "the mere presence of the language in spoken or written form could invoke the concept of a Jewish national identity. Even if one knew no Hebrew or was illiterate, one could recognize that a group of signs was in Hebrew script. ... It was the language of the Israelite ancestors, the national literature, and the national religion. As such it was inseparable from the national identity. Indeed its mere presence in visual or aural medium could invoke that identity." Anthony D. Smith, an historical sociologist considered one of the founders of the field of nationalism studies, wrote that the Jews of the late Second Temple period provide "a closer approximation to the ideal type of the nation [...] than perhaps anywhere else in the ancient world." He adds that this observation "must make us wary of pronouncing too readily against the possibility of the nation, and even a form of religious nationalism, before the onset of modernity." Agreeing with Smith, Goodblatt suggests omitting the qualifier "religious" from Smith's definition of ancient Jewish nationalism, noting that, according to Smith, a religious component in national memories and culture is common even in the modern era. This view is echoed by political scientist Tom Garvin, who writes that "something strangely like modern nationalism is documented for many peoples in medieval times and in classical times as well," citing the ancient Jews as one of several "obvious examples", alongside the classical Greeks and the Gaulish and British Celts. Fergus Millar suggests that the sources of Jewish national identity and their early nationalist movements in the first and second centuries CE included several key elements: the Bible as both a national history and legal source, the Hebrew language as a national language, a system of law, and social institutions such as schools, synagogues, and Sabbath worship.
Adrian Hastings argued that Jews are the "true proto-nation" who, through the model of ancient Israel found in the Hebrew Bible, provided the world with the original concept of nationhood, which later influenced Christian nations. However, following Jerusalem's destruction in the first century CE, Jews ceased to be a political entity and did not resemble a traditional nation-state for almost two millennia. Despite this, they maintained their national identity through collective memory, religion and sacred texts, even without land or political power, and remained a nation rather than just an ethnic group, eventually leading to the rise of Zionism and the establishment of Israel. Steven Weitzman suggests that Jewish nationalist sentiment in antiquity was encouraged because under foreign rule (Persians, Greeks, Romans) Jews were able to claim that they were an ancient nation. This claim was based on the preservation and reverence of their scriptures, the Hebrew language, the Temple and priesthood, and other traditions of their ancestors. Doron Mendels further observes that the Hasmonean kingdom, one of the few examples of indigenous statehood at its time, significantly reinforced Jewish national consciousness. The memory of this period of independence contributed to the persistent efforts to revive Jewish sovereignty in Judea, leading to the major revolts against Roman rule in the 1st and 2nd centuries CE.
Demographics
Within the world's Jewish population there are distinct ethnic divisions, most of which are primarily the result of geographic branching from an originating Israelite population and subsequent independent evolutions. An array of Jewish communities was established by Jewish settlers in various places around the Old World, often at great distances from one another, resulting in effective and often long-term isolation.
During the millennia of the Jewish diaspora the communities would develop under the influence of their local environments: political, cultural, natural, and populational. Today, manifestations of these differences among the Jews can be observed in Jewish cultural expressions of each community, including Jewish linguistic diversity, culinary preferences, liturgical practices, religious interpretations, as well as degrees and sources of genetic admixture. Jews are often identified as belonging to one of two major groups: the Ashkenazim and the Sephardim. Ashkenazim are so named in reference to their geographical origins (their ancestors' culture coalesced in the Rhineland, an area historically referred to by Jews as Ashkenaz). Similarly, Sephardim (Sefarad meaning "Spain" in Hebrew) are named in reference to their origins in Iberia. The diverse groups of Jews of the Middle East and North Africa are often collectively referred to as Sephardim together with Sephardim proper for liturgical reasons having to do with their prayer rites. A common term for many of these non-Spanish Jews who are sometimes still broadly grouped as Sephardim is Mizrahim (lit. 'easterners' in Hebrew). Nevertheless, Mizrahim and Sephardim are usually ethnically distinct. Smaller groups include, but are not restricted to, Indian Jews such as the Bene Israel, Bnei Menashe, Cochin Jews, and Bene Ephraim; the Romaniotes of Greece; the Italian Jews ("Italkim" or "Bené Roma"); the Teimanim from Yemen; various African Jews, including most numerously the Beta Israel of Ethiopia; and Chinese Jews, most notably the Kaifeng Jews, as well as various other distinct but now almost extinct communities. The divisions between all these groups are approximate and their boundaries are not always clear.
The Mizrahim, for example, are a heterogeneous collection of North African, Central Asian, Caucasian, and Middle Eastern Jewish communities that are no more closely related to each other than they are to any of the previously mentioned Jewish groups. In modern usage, however, the Mizrahim are sometimes termed Sephardi due to similar styles of liturgy, despite independent development from Sephardim proper. Thus, among Mizrahim there are Egyptian Jews, Iraqi Jews, Lebanese Jews, Kurdish Jews, Moroccan Jews, Libyan Jews, Syrian Jews, Bukharian Jews, Mountain Jews, Georgian Jews, Iranian Jews, Afghan Jews, and various others. The Teimanim from Yemen are sometimes included, although their style of liturgy is unique and the admixture found among them differs from that found among Mizrahim. In addition, there is a differentiation made between Sephardi migrants who established themselves in the Middle East and North Africa after the expulsion of the Jews from Spain and Portugal in the 1490s and the pre-existing Jewish communities in those regions. Ashkenazi Jews represent the bulk of modern Jewry, with at least 70 percent of Jews worldwide (and up to 90 percent prior to World War II and the Holocaust). As a result of their emigration from Europe, Ashkenazim also represent the overwhelming majority of Jews in the New World continents, in countries such as the United States, Canada, Argentina, Australia, and Brazil. In France, the immigration of Jews from Algeria (Sephardim) has led them to outnumber the Ashkenazim. Only in Israel is the Jewish population representative of all groups, a melting pot independent of each group's proportion within the overall world Jewish population. Y-DNA studies tend to imply a small number of founders in an old population whose members parted and followed different migration paths. In most Jewish populations, these male-line ancestors appear to have been mainly Middle Eastern.
For example, Ashkenazi Jews share more common paternal lineages with other Jewish and Middle Eastern groups than with non-Jewish populations in areas where Jews lived in Eastern Europe, Germany, and the French Rhine Valley. This is consistent with Jewish traditions that place most Jewish paternal origins in the region of the Middle East. Conversely, the maternal lineages of Jewish populations, studied by looking at mitochondrial DNA, are generally more heterogeneous. Scholars such as Harry Ostrer and Raphael Falk believe this indicates that many Jewish males found new mates from European and other communities in the places where they migrated in the diaspora after fleeing ancient Israel. In contrast, Behar has found evidence that about 40 percent of Ashkenazi Jews originate maternally from just four female founders, who were of Middle Eastern origin. The populations of Sephardi and Mizrahi Jewish communities "showed no evidence for a narrow founder effect." Subsequent studies carried out by Feder et al. confirmed the large portion of non-local maternal origin among Ashkenazi Jews. Reflecting on their findings related to the maternal origin of Ashkenazi Jews, the authors conclude: "Clearly, the differences between Jews and non-Jews are far larger than those observed among the Jewish communities. Hence, differences between the Jewish communities can be overlooked when non-Jews are included in the comparisons." However, a 2025 genetic study on the Ashkenazi Jewish founder population supports the presence of a substantial Near Eastern component in the maternal lineages. Analyses of mitochondrial DNA (mtDNA) indicate that the core founder lineages, estimated at around 54, likely originated from the Near East, with these founder signatures appearing in multiple copies across the population. While later admixture introduced additional mtDNA lineages, these absorbed lineages are distinguishable from the original founders.
The findings are consistent with genome-wide Identity-by-Descent and Lineage Extinction analyses, reinforcing the Near Eastern origin of the Ashkenazi maternal founders. A study showed that 7% of Ashkenazi Jews have the haplogroup G2c, which is mainly found in Pashtuns and, at lower frequencies, in all major Jewish groups, as well as in Palestinians, Syrians, and Lebanese. Studies of autosomal DNA, which look at the entire DNA mixture, have become increasingly important as the technology develops. They show that Jewish populations have tended to form relatively closely related groups in independent communities, with most in a community sharing significant ancestry in common. For Jewish populations of the diaspora, the genetic composition of Ashkenazi, Sephardic, and Mizrahi Jewish populations shows a predominant amount of shared Middle Eastern ancestry. According to Behar, the most parsimonious explanation for this shared Middle Eastern ancestry is that it is "consistent with the historical formulation of the Jewish people as descending from ancient Hebrew and Israelite residents of the Levant" and "the dispersion of the people of ancient Israel throughout the Old World". Jewish populations of North African, Italian, and Iberian origin show variable frequencies of admixture with non-Jewish historical host populations among the maternal lines. In the case of Ashkenazi and Sephardi Jews (in particular Moroccan Jews), who are closely related, the source of non-Jewish admixture is mainly Southern European, while Mizrahi Jews show evidence of admixture with other Middle Eastern populations. Behar et al. have remarked on a close relationship between Ashkenazi Jews and modern Italians. A 2001 study found that Jews were more closely related to groups of the Fertile Crescent (Kurds, Turks, and Armenians) than to their Arab neighbors, whose genetic signature was found in geographic patterns reflective of Islamic conquests.
The studies also show that Sephardic Bnei Anusim (descendants of the "anusim" who were forced to convert to Catholicism), who comprise up to 19.8 percent of the population of today's Iberia (Spain and Portugal) and at least 10 percent of the population of Ibero-America (Hispanic America and Brazil), have Sephardic Jewish ancestry within the last few centuries. The Bene Israel and Cochin Jews of India, the Beta Israel of Ethiopia, and a portion of the Lemba people of Southern Africa, despite more closely resembling the local populations of their native countries, have also been thought to have some more remote ancient Jewish ancestry. Views on the Lemba have changed, and genetic Y-DNA analyses in the 2000s have established a partially Middle Eastern origin for a portion of the male Lemba population but have been unable to narrow this down further. Although historically Jews have been found all over the world, in the decades since World War II and the establishment of Israel, they have increasingly concentrated in a small number of countries. In 2021, Israel and the United States together accounted for about 85 percent of the global Jewish population, with approximately 45.3% and 39.6% of the world's Jews, respectively. More than half (51.2%) of world Jewry resides in just ten metropolitan areas. As of 2021, these ten areas were Tel Aviv, New York, Jerusalem, Haifa, Los Angeles, Miami, Philadelphia, Paris, Washington, and Chicago. The Tel Aviv metro area has the highest percentage of Jews among the total population (94.8%), followed by Haifa (73.1%), Jerusalem (72.3%), and Beersheba (60.4%), the balance mostly being Israeli Arabs. Outside Israel, the highest percentage of Jews in a metropolitan area was in New York (10.8%), followed by Miami (8.7%), Philadelphia (6.8%), San Francisco (5.1%), Washington (4.7%), Los Angeles (4.7%), Toronto (4.5%), and Baltimore (4.1%).
As of 2010, there were nearly 14 million Jews around the world, roughly 0.2% of the world's population at the time. According to the 2007 estimates of The Jewish People Policy Planning Institute, the world's Jewish population is 13.2 million. This statistic incorporates both practicing Jews affiliated with synagogues and the Jewish community, and approximately 4.5 million unaffiliated and secular Jews. According to Sergio Della Pergola, a demographer of the Jewish population, in 2021 there were about 6.8 million Jews in Israel, 6 million in the United States, and 2.3 million in the rest of the world. Israel, the Jewish nation-state, is the only country in which Jews make up a majority of the citizens. Israel was established as an independent democratic and Jewish state on 14 May 1948. Of the 120 members of its parliament, the Knesset, as of 2016, 14 were Arab citizens of Israel (not including the Druze), most representing Arab political parties. One of Israel's Supreme Court judges is also an Arab citizen of Israel. Between 1948 and 1958, the Jewish population rose from 800,000 to two million. Currently, Jews account for 75.4 percent of the Israeli population, or 6 million people. The early years of the State of Israel were marked by the mass immigration of Holocaust survivors and of Jews fleeing Arab lands. Israel also has a large population of Ethiopian Jews, many of whom were airlifted to Israel in the late 1980s and early 1990s. Between 1974 and 1979, some 227,000 immigrants arrived in Israel, about half being from the Soviet Union. This period also saw an increase in immigration to Israel from Western Europe, Latin America, and North America.
A trickle of immigrants from other communities has also arrived, including Indian Jews and others, as well as some descendants of Ashkenazi Holocaust survivors who had settled in countries such as the United States, Argentina, Australia, Chile, and South Africa. Some Jews have emigrated from Israel elsewhere, because of economic problems or disillusionment with political conditions and the continuing Arab–Israeli conflict. Jewish Israeli emigrants are known as yordim. The waves of immigration to the United States and elsewhere at the turn of the 20th century, the founding of Zionism, and later events, including pogroms in Imperial Russia (mostly within the Pale of Settlement in present-day Ukraine, Moldova, Belarus and eastern Poland), the massacre of European Jewry during the Holocaust, and the founding of the state of Israel, with the subsequent Jewish exodus from Arab lands, all resulted in substantial shifts in the population centers of world Jewry by the end of the 20th century. More than half of the Jews live in the Diaspora (see Population table). Currently, the largest Jewish community outside Israel, and either the largest or second-largest Jewish community in the world, is located in the United States, with 6 million to 7.5 million Jews by various estimates. Elsewhere in the Americas, there are also large Jewish populations in Canada (315,000), Argentina (180,000–300,000), and Brazil (196,000–600,000), and smaller populations in Mexico, Uruguay, Venezuela, Chile, Colombia and several other countries (see History of the Jews in Latin America). According to a 2010 Pew Research Center study, about 470,000 people of Jewish heritage live in Latin America and the Caribbean.
Demographers disagree on whether the United States has a larger Jewish population than Israel, with many maintaining that Israel surpassed the United States in Jewish population during the 2000s, while others maintain that the United States still has the largest Jewish population in the world. Currently, a major national Jewish population survey is planned to ascertain whether or not Israel has overtaken the United States in Jewish population. Western Europe's largest Jewish community, and the third-largest Jewish community in the world, can be found in France, home to between 483,000 and 500,000 Jews, the majority of whom are immigrants or refugees from North African countries such as Algeria, Morocco, and Tunisia (or their descendants). The United Kingdom has a Jewish community of 292,000. In Eastern Europe, exact figures are difficult to establish. The number of Jews in Russia varies widely according to whether a source uses census data (which requires a person to choose a single nationality among choices that include "Russian" and "Jewish") or eligibility for immigration to Israel (which requires that a person have one or more Jewish grandparents). According to the latter criterion, the heads of the Russian Jewish community assert that up to 1.5 million Russians are eligible for aliyah. In Germany, the 102,000 Jews registered with the Jewish community are a slowly declining population, despite the immigration of tens of thousands of Jews from the former Soviet Union since the fall of the Berlin Wall. Thousands of Israelis also live in Germany, either permanently or temporarily, for economic reasons. Prior to 1948, approximately 800,000 Jews were living in lands which now make up the Arab world (excluding Israel). Of these, just under two-thirds lived in the French-controlled Maghreb region, 15 to 20 percent in the Kingdom of Iraq, approximately 10 percent in the Kingdom of Egypt and approximately 7 percent in the Kingdom of Yemen.
A further 200,000 lived in Pahlavi Iran and the Republic of Turkey. Today, around 26,000 Jews live in Muslim-majority countries, mainly in Turkey (14,200) and Iran (9,100), while Morocco (2,000), Tunisia (1,000), and the United Arab Emirates (500) host the largest communities in the Arab world. A small-scale exodus had begun in many countries in the early decades of the 20th century, although the only substantial aliyah came from Yemen and Syria. The exodus from Arab and Muslim countries took place primarily from 1948 onward. The first large-scale exoduses took place in the late 1940s and early 1950s, primarily from Iraq, Yemen and Libya, with up to 90 percent of these communities leaving within a few years. The peak of the exodus from Egypt occurred in 1956. The exodus in the Maghreb countries peaked in the 1960s. Lebanon was the only Arab country to see a temporary increase in its Jewish population during this period, due to an influx of refugees from other Arab countries, although by the mid-1970s the Jewish community of Lebanon had also dwindled. In the aftermath of the exodus wave from Arab states, an additional migration of Iranian Jews peaked in the 1980s, when around 80 percent of Iranian Jews left the country. Outside Europe, the Americas, the Middle East, and the rest of Asia, there are significant Jewish populations in Australia (112,500) and South Africa (70,000). There is also a 6,800-strong community in New Zealand. Since at least the time of the Ancient Greeks, a proportion of Jews have assimilated into the wider non-Jewish society around them, by either choice or force, ceasing to practice Judaism and losing their Jewish identity. Assimilation took place in all areas and during all time periods, with some Jewish communities, for example the Kaifeng Jews of China, disappearing entirely.
The advent of the Jewish Enlightenment of the 18th century (see Haskalah) and the subsequent emancipation of the Jewish populations of Europe and America in the 19th century accelerated this process, encouraging Jews to increasingly participate in, and become part of, secular society. The result has been a growing trend of assimilation, as Jews marry non-Jewish spouses and stop participating in the Jewish community. Rates of interreligious marriage vary widely: in the United States, the rate is just under 50 percent; in the United Kingdom, around 53 percent; in France, around 30 percent; and in Australia and Mexico, as low as 10 percent. In the United States, only about a third of children from intermarriages affiliate with Jewish religious practice. The result is that most countries in the Diaspora have steady or slightly declining religiously Jewish populations as Jews continue to assimilate into the countries in which they live. The Jewish people and Judaism have experienced various persecutions throughout their history. During Late Antiquity and the Early Middle Ages, the Roman Empire (in its later phases known as the Byzantine Empire) repeatedly repressed the Jewish population, first by ejecting Jews from their homelands during the pagan Roman era and later by officially establishing them as second-class citizens during the Christian Roman era. According to James Carroll, "Jews accounted for 10% of the total population of the Roman Empire. By that ratio, if other factors had not intervened, there would be 200 million Jews in the world today, instead of something like 13 million." Later, in medieval Western Europe, further persecutions of Jews by Christians occurred, notably during the Crusades—when Jews all over Germany were massacred—and in a series of expulsions from the Kingdom of England, Germany, and France.
Then there occurred the largest expulsion of all, when Spain and Portugal, after the Reconquista (the Catholic reconquest of the Iberian Peninsula), expelled both unbaptized Sephardic Jews and the ruling Muslim Moors. In the Papal States, which existed until 1870, Jews were required to live only in specified neighborhoods called ghettos. Islam and Judaism have a complex relationship. Traditionally, Jews and Christians living in Muslim lands, known as dhimmis, were allowed to practice their religions and administer their internal affairs, but they were subject to certain conditions. They had to pay the jizya (a per capita tax imposed on free adult non-Muslim males) to the Islamic state. Dhimmis had an inferior status under Islamic rule. They had several social and legal disabilities, such as prohibitions against bearing arms or giving testimony in courts in cases involving Muslims. Many of the disabilities were highly symbolic. The one described by Bernard Lewis as "most degrading" was the requirement of distinctive clothing, not found in the Quran or hadith but invented in early medieval Baghdad; its enforcement was highly erratic. On the other hand, Jews rarely faced martyrdom or exile, or compulsion to change their religion, and they were mostly free in their choice of residence and profession. Notable exceptions include the massacre of Jews and forcible conversion of some Jews by the rulers of the Almohad dynasty in Al-Andalus in the 12th century, as well as in Islamic Persia, and the forced confinement of Moroccan Jews to walled quarters known as mellahs beginning in the 15th century and especially in the early 19th century.
In modern times, it has become commonplace for standard antisemitic themes to be conflated with anti-Zionism in the publications and pronouncements of Islamic movements such as Hezbollah and Hamas, in the pronouncements of various agencies of the Islamic Republic of Iran, and even in the newspapers and other publications of the Turkish Refah Partisi.[better source needed] Throughout history, many rulers, empires and nations have oppressed their Jewish populations or sought to eliminate them entirely. Methods employed ranged from expulsion to outright genocide; within nations, often the threat of these extreme methods was sufficient to silence dissent. The history of antisemitism includes the First Crusade, which resulted in the massacre of Jews; the Spanish Inquisition (led by Tomás de Torquemada) and the Portuguese Inquisition, with their persecution and autos-da-fé against the New Christians and Marrano Jews; the Bohdan Chmielnicki Cossack massacres in Ukraine; the pogroms backed by the Russian tsars; as well as expulsions from Spain, Portugal, England, France, Germany, and other countries in which the Jews had settled. According to a 2008 study published in the American Journal of Human Genetics, 19.8 percent of the modern Iberian population has Sephardic Jewish ancestry, indicating that the number of conversos may have been much higher than originally thought. The persecution reached a peak in Nazi Germany's Final Solution, which led to the Holocaust and the slaughter of approximately 6 million Jews. Of the world's 16 million Jews in 1939, almost 40% were murdered in the Holocaust. The Holocaust—the state-led systematic persecution and genocide of European Jews (and certain communities of North African Jews in European-controlled North Africa) and other minority groups of Europe during World War II by Germany and its collaborators—remains the most notable modern-day persecution of Jews. The persecution and genocide were accomplished in stages.
Legislation to remove the Jews from civil society was enacted years before the outbreak of World War II. Concentration camps were established in which inmates were used as slave labour until they died of exhaustion or disease. Where the Third Reich conquered new territory in Eastern Europe, specialized units called Einsatzgruppen murdered Jews and political opponents in mass shootings. Jews and Roma were crammed into ghettos before being transported hundreds of kilometres by freight train to extermination camps where, if they survived the journey, the majority of them were murdered in gas chambers. Virtually every arm of Germany's bureaucracy was involved in the logistics of the mass murder, turning the country into what one Holocaust scholar has called "a genocidal nation." Throughout Jewish history, Jews have repeatedly been directly or indirectly expelled from both their original homeland, the Land of Israel, and many of the areas in which they have settled. This experience as refugees has shaped Jewish identity and religious practice in many ways, and is thus a major element of Jewish history. In summary, the pogroms in Eastern Europe, the rise of modern antisemitism, the Holocaust, as well as the rise of Arab nationalism, all served to fuel the movements and migrations of huge segments of Jewry from land to land and continent to continent until they arrived back in large numbers at their original historical homeland in Israel. In the Bible, the patriarch Abraham is described as a migrant to the land of Canaan from Ur of the Chaldees. His descendants, the Children of Israel, undertook the Exodus (meaning "departure" or "exit" in Greek) from ancient Egypt, as described in the Book of Exodus. 
The first movement documented in the historical record occurred with the resettlement policy of the Neo-Assyrian Empire, which mandated the deportation of conquered peoples; it is estimated that some 4,500,000 among its captive populations suffered this dislocation over three centuries of Assyrian rule. With regard to Israel, Tiglath-Pileser III claims he deported 80% of the population of Lower Galilee, some 13,520 people. Some 27,000 Israelites, 20 to 25% of the population of the Kingdom of Israel, were described as being deported by Sargon II and sent into permanent exile by Assyria, initially to the Upper Mesopotamian provinces of the Assyrian Empire, and were replaced by other deported populations. Between 10,000 and 80,000 people from the Kingdom of Judah were similarly exiled by Babylonia, but these people were then returned to Judea by Cyrus the Great of the Persian Achaemenid Empire. The 2,000-year dispersion of the Jewish diaspora began under the Roman Empire, as Jews were exiled and spread throughout the Roman world and, driven from land to land, settled wherever they could live freely enough to practice their religion. Over the course of the diaspora the center of Jewish life moved from Babylonia to the Iberian Peninsula to Poland to the United States and, as a result of Zionism, back to Israel. There were also many expulsions of Jews during the Middle Ages and Enlightenment in Europe: in 1290, 16,000 Jews were expelled from England (see the Statute of Jewry); in 1396, 100,000 from France; in 1421, thousands from Austria. Many of these Jews settled in East-Central Europe, especially Poland. Following the Alhambra Decree of 1492, around 200,000 Sephardic Jews were expelled from Spain by the Spanish crown and Catholic Church, followed by expulsions in 1493 in Sicily (37,000 Jews) and Portugal in 1496.
The expelled Jews fled mainly to the Ottoman Empire, the Netherlands, and North Africa, others migrating to Southern Europe and the Middle East. During the 19th century, France's policies of equal citizenship regardless of religion led to the immigration of Jews (especially from Eastern and Central Europe). This contributed to the arrival of millions of Jews in the New World. Over two million Eastern European Jews arrived in the United States from 1880 to 1925. In the latest phase of migrations, the Islamic Revolution of Iran caused many Iranian Jews to flee Iran. Most found refuge in the US (particularly Los Angeles, California, and Long Island, New York) and Israel. Smaller communities of Persian Jews exist in Canada and Western Europe. Similarly, when the Soviet Union collapsed, many of the Jews in the affected territory (who had been refuseniks) were suddenly allowed to leave. This produced a wave of migration to Israel in the early 1990s. Israel is the only country with a Jewish population that is consistently growing through natural population growth, although the Jewish populations of other countries, in Europe and North America, have recently increased through immigration. In the Diaspora, in almost every country the Jewish population in general is either declining or steady, but Orthodox and Haredi Jewish communities, whose members often shun birth control for religious reasons, have experienced rapid population growth. Orthodox and Conservative Judaism discourage proselytism to non-Jews, but many Jewish groups have tried to reach out to the assimilated Jewish communities of the Diaspora in order for them to reconnect to their Jewish roots. Additionally, while in principle Reform Judaism favours seeking new members for the faith, this position has not translated into active proselytism, instead taking the form of an effort to reach out to non-Jewish spouses of intermarried couples. 
There is also a trend of Orthodox movements reaching out to secular Jews in order to give them a stronger Jewish identity so there is less chance of intermarriage. As a result of the efforts by these and other Jewish groups over the past 25 years, there has been a trend (known as the Baal teshuva movement) for secular Jews to become more religiously observant, though the demographic implications of the trend are unknown. Additionally, there is a growing rate of conversion by "Jews by choice", gentiles who decide to become Jews.

Contributions

Jewish individuals have played a significant role in the development and growth of Western culture, advancing many fields of thought, science and technology, both historically and in modern times, including through discrete trends in Jewish philosophy, Jewish ethics and Jewish literature, as well as specific trends in Jewish culture, including in Jewish art, Jewish music, Jewish humor, Jewish theatre, Jewish cuisine and Jewish medicine. Jews have established various Jewish political movements and religious movements and, through the authorship of the Hebrew Bible and parts of the New Testament, provided the foundation for Christianity and Islam. More than 20 percent of awarded Nobel Prizes have gone to individuals of Jewish descent. Philanthropy is a core function of many Jewish organizations. |
======================================== |
[SOURCE: https://www.wired.com/story/branch-deals-february-2026/] | [TOKENS: 1458] |
Louryn Strampe, Gear, Feb 20, 2026 1:18 PM

Our Favorite Office Chairs Are on Sale
Branch makes excellent office essentials, and our top picks are discounted.

We've been rigorously testing work-from-home gear for years—even prior to the Covid-19 remote work boom—and that includes dozens of office chairs and desks. Branch furniture has made standouts that are highlighted in our guides over and over again. Its Presidents' Day deals have been extended, bringing some of the better discounts we've seen on essentials we've tested like chairs and desks. Check out our other deals coverage for additional discounts on gear we've tried and would recommend to a friend.

Branch Ergonomic Chair Pro for $449 ($50 off)
This price matches the best we usually see for our very favorite office chair. Out of the dozens we've tried, this chair strikes the best balance of features for the price. It's comfortable, adjustable, and easy to dial in so you can get your perfect ergonomic fit. It also has a solid warranty and isn't too terribly expensive compared to similar chairs. There are different fabric finishes and colors to choose from, all of which are on sale right now.

Branch Ergonomic Chair for $323 ($36 off)
The best budget office chair is even more affordable right now thanks to this deal. It's easy to assemble, it has some adjustable elements, it's comfortable and breathable, and it looks nice with or without the optional headrest. The upholstery is available in several colors, though the fabric does pill and attract pet hair. We still think this is a chair worth checking out if you're on a tight budget.

Branch Four Leg Standing Desk for $854 ($95 off)
This is editor Julian Chokkattu's favorite desk he's tried. At first glance, it looks like a standard desk, but it's actually a standing desk that can be raised or lowered with the little control panel. Assembly was easy, the controls are simple, and the shape is elegant. If you want a desk that looks great no matter how tall it is, this is worth checking out, especially at this price.

Branch Duo Standing Desk for $494 ($55 off)
We like this compact, affordable standing desk, which gets you a lot of value for how little you'll pay. It's compatible with a lot of add-ons and the paddle controls are easy to use. There's even a preset mode so you can press the paddle twice to raise it to your pre-set height. This desk is compact, but if you don't need a ton of room for your working setup, it's a good option even at full price. (Luckily, right now, you can snag it for less.) |
======================================== |
[SOURCE: https://en.wikipedia.org/wiki/Cabinet_(government)] | [TOKENS: 2997] |
Cabinet (government)

In government, a cabinet is a group of people with the constitutional or legal task to rule a country or state, or to advise a head of state, usually from the executive branch. Its members are known as ministers and secretaries, and they are often appointed by either heads of state or government. Cabinets are typically the body responsible for the day-to-day management of the government and response to sudden events, whereas the legislative and judicial branches work at a measured pace, in sessions according to lengthy procedures. The function of a cabinet varies: in some countries, it is a collegiate decision-making body with collective responsibility, while in others it may function either as a purely advisory body or an assisting institution to a decision-making head of state or head of government. In some countries, particularly those that use a parliamentary system (e.g., the United Kingdom), the cabinet collectively decides the government's direction, especially in regard to legislation passed by the parliament. In countries with a presidential system, such as the United States, the cabinet does not function as a collective legislative influence; rather, its primary role is that of an official advisory council to the head of government. In this way, the president obtains opinions and advice relating to forthcoming decisions. Legally, under both types of system, the Westminster variant of a parliamentary system and the presidential system, the cabinet "advises" the head of state: the difference is that, in a parliamentary system, the monarch, viceroy, or ceremonial president will almost always follow this advice, whereas, in a presidential system, a president who is also head of government and political leader may depart from the cabinet's advice if they do not agree with it.
In practice, in nearly all parliamentary democracies that do not follow the Westminster system, and in three countries that do (Japan, Ireland, and Israel), very often the cabinet does not "advise" the head of state, as the head of state plays only a ceremonial role. Instead, it is usually the head of government (usually called "prime minister") who holds all means of power in their hands (e.g. in Germany, Sweden, etc.) and to whom the cabinet reports. In both presidential and parliamentary systems, cabinet officials administer executive branches, government agencies, or departments. Cabinets are also important originators of legislation. Cabinets and ministers are usually in charge of the preparation of proposed legislation in the ministries before it is passed to the parliament. Thus, often the majority of new legislation actually originates from the cabinet and its ministries.

Terminology

In most governments, members of the cabinet are given the title of "minister", and each holds a different portfolio of government duties ("Minister of Foreign Affairs", "Minister of Health", etc.). In a few governments, as in the case of Mexico, the Philippines, the UK, and the U.S., the title of "secretary" is also used for some cabinet members ("Secretary of Education", or "Secretary of State for X" in the UK or the Netherlands). In many countries (e.g. Germany, Luxembourg, France, Spain, etc.), a secretary (of state) is a cabinet member with an inferior rank to a minister. In Finland, a secretary of state is a career official who serves the minister. While almost all countries have an institution that is recognisably a cabinet, the name of this institution varies. In many countries (such as Ireland, Sweden, and Vietnam), the term "government" refers to the body of executive ministers, the broader organs of state having another name. Others, such as Spain, Poland, and Cuba, refer to their cabinet as a council of ministers, or the similar council of state.
Some German-speaking areas use the term "senate" (such as the Senate of Berlin) for their cabinet, rather than the more common meaning of a legislative upper house. However, a great many countries simply call their top executive body the cabinet, including Israel, the United States, Venezuela, and Singapore, among others. The supranational European Union uses a different convention: the European Commission refers to its executive cabinet as a "college", with its top public officials referred to as "commissioners", whereas a "European Commission cabinet" is the personal office of a European Commissioner. The term comes from the Italian gabinetto, which originated from the Latin capanna and was used in the sixteenth century to denote a closet or small room. From it originated in the 1600s the English word cabinet or cabinett, which was used to denote a small room, particularly in the houses of nobility or royalty. Around this time the use of cabinet in association with small councils arose both in England and in other locations such as France and Italy. For example, Francis Bacon used the term Cabanet Counselles in 1607.

Selection of members

In presidential systems such as the United States, members of the cabinet are chosen by the president and may also have to be confirmed by one or both of the houses of the legislature (in the case of the U.S., it is the Senate that confirms members with a simple majority vote). Depending on the country, cabinet members must, must not, or may be members of parliament. Some countries that adopt a presidential system also place restrictions on who is eligible for nomination to cabinet based on electoral outcomes. For instance, in the Philippines, candidates who have lost in any election in the country may not be appointed to cabinet positions within one year of that election.
The candidate prime minister and/or the president selects the individual ministers to be proposed to the parliament, which may accept or reject the proposed cabinet composition. Unlike in a presidential system, the cabinet in a parliamentary system must not only be confirmed but must enjoy the continuing confidence of the parliament: a parliament can pass a motion of no confidence to remove a government or individual ministers. Often, but not necessarily, these votes are taken along party lines. In some countries (e.g. the U.S.) attorneys general also sit in the cabinet, while in many others this is strictly prohibited, as the attorneys general are considered to be part of the judicial branch of government. Instead, there is a Minister of Justice, separate from the attorney general. Furthermore, in Sweden, Finland, and Estonia, the cabinet includes a Chancellor of Justice, a civil servant who acts as the legal counsel to the cabinet. In multi-party systems, the formation of a government may require the support of multiple parties, and so a coalition government is formed. Continued cooperation between the participating political parties is necessary for the cabinet to retain the confidence of the parliament. For this, a government platform is negotiated in order for the participating parties to toe the line and support their cabinet. However, this is not always successful: constituent parties of the coalition or members of parliament can still vote against the government, and the cabinet can break up from internal disagreement or be dismissed by a motion of no confidence. The size of cabinets varies, although most contain around ten to twenty ministers. Researchers have found an inverse correlation between a country's level of development and cabinet size: on average, the more developed a country is, the smaller its cabinet.

Origins of cabinets

A council of advisers of a head of state has been a common feature of government throughout history and around the world.
In Ancient Egypt, priests assisted the pharaohs in administrative duties. In Sparta, the Gerousia, or council of elders, normally sat with the two kings to deliberate on law or to judge cases. The Maurya Empire under the emperor Ashoka was ruled by a royal council. In Kievan Rus', the prince was obliged to accept the advice and receive the approval of the duma, or council, which was composed of boyars, or nobility. An inner circle of a few members of the duma formed a cabinet to attend and advise the prince constantly. The ruins of Chichen Itza and Mayapan in the Maya civilisation suggest that political authority was held by a supreme council of elite lords. In the Songhai Empire, the central government was composed of the top office holders of the imperial council. In the Oyo Empire, the Oyo Mesi, or royal council, were members of the aristocracy who constrained the power of the Alaafin, or king. During the Qing dynasty, the highest decision-making body was the Deliberative Council. In the United Kingdom and its colonies, cabinets began as smaller sub-groups of the English Privy Council. The term comes from the name for a relatively small and private room used as a study or retreat. Phrases such as "cabinet counsel", meaning advice given in private to the monarch, occur from the late 16th century, and, given the non-standardised spelling of the day, it is often hard to distinguish whether "council" or "counsel" is meant. The Oxford English Dictionary credits Francis Bacon in his Essays (1605) with the first use of "Cabinet council", where it is described as a foreign habit, of which he disapproves: "For which inconveniences, the doctrine of Italy, and practice of France, in some kings' times, hath introduced cabinet counsels; a remedy worse than the disease". 
Charles I began a formal "Cabinet Council" from his accession in 1625, as his Privy Council, or "private council", was evidently not private enough,[citation needed] and the first recorded use of "cabinet" by itself for such a body comes from 1644, again hostile and associating the term with dubious foreign practices. The process has repeated itself in recent times, as leaders have felt the need to have a Kitchen Cabinet or "sofa government".

Parliamentary cabinets

Under the Westminster system, members of the cabinet are Ministers of the Crown who are collectively responsible for all government policy. All ministers, whether senior and in the cabinet or junior ministers, must publicly support the policy of the government, regardless of any private reservations. Although, in theory, all cabinet decisions are taken collectively by the cabinet, in practice many decisions are delegated to the various sub-committees of the cabinet, which report to the full cabinet on their findings and recommendations. As these recommendations have already been agreed upon by those in the cabinet who hold the affected ministerial portfolios, the recommendations are usually agreed to by the full cabinet with little further discussion. The cabinet may also shape whether new laws are proposed and what they contain. Cabinet deliberations are secret and documents dealt with in cabinet are confidential. Most of the documentation associated with cabinet deliberations will only be publicly released a considerable period after the particular cabinet disbands, depending on the provisions of a nation's freedom of information legislation. In theory the prime minister or premier is first among equals. However, the prime minister is ultimately the person from whom the head of state will take advice (by constitutional convention) on the exercise of executive power, which may include the powers to declare war, use nuclear weapons, and appoint cabinet members.
This results in the situation where the cabinet is de facto appointed by, and serves at the pleasure of, the prime minister. Thus, the cabinet is often strongly subordinate to the prime minister, as its members can be replaced at any time or can be moved ("demoted") to a different portfolio in a cabinet reshuffle for "underperforming". This position in relation to the executive power means that, in practice, any spreading of responsibility for the overall direction of the government has usually been done as a matter of preference by the prime minister, either because they are unpopular with their backbenchers or because they believe that the cabinet should collectively decide things. A shadow cabinet consists of the leading members, or frontbenchers, of an opposition party, who generally hold critic portfolios "shadowing" cabinet ministers, questioning their decisions and proposing policy alternatives. In some countries, the shadow ministers are referred to as spokespersons. The Westminster cabinet system is the foundation of cabinets as they are known at the federal and provincial (or state) jurisdictions of Australia, Canada, India, Pakistan, South Africa, and other Commonwealth countries whose parliamentary model is closely based on that of the United Kingdom.

Cabinet of the United States

Under the doctrine of separation of powers in the United States, a cabinet under a presidential system of government is part of the executive branch. In addition to administering their respective segments of the executive branch, cabinet members are responsible for advising the head of government on areas within their purview. They are appointed by, and serve at the pleasure of, the head of government and are therefore strongly subordinate to the president, as they can be replaced at any time.
Normally, since they are appointed by the president, cabinet members are from the same political party, but the executive is free to select anyone, including opposition party members, subject to the advice and consent of the Senate. Normally the legislature, or a segment thereof, must confirm the appointment of a cabinet member; this is but one of the many checks and balances built into a presidential system. The legislature may also remove a cabinet member through a usually difficult impeachment process. In the cabinet, members do not serve to influence legislative policy to the degree found in a Westminster system; however, each member wields significant influence in matters relating to their executive department. Since the administration of Franklin D. Roosevelt, the President of the United States has acted most often through his own executive office or the National Security Council rather than through the cabinet, as was the case in earlier administrations. Although the term "secretary" is usually used to name the most senior official of a government department, some departments use different titles for such officials. For instance, the Department of Justice uses the term "Attorney General" instead of "Justice Secretary", but the Attorney General is nonetheless a cabinet-level position. Following the federal government's model, state executive branches are also organised into executive departments headed by cabinet secretaries. The government of California calls these departments "agencies" or informally "superagencies", while the government of Kentucky styles them as "cabinets".

Communist system

Communist states can be ruled de facto by the politburo, such as the Politburo of the Communist Party of the Soviet Union. This is an organ of the communist party, not a state organ, but due to one-party rule, the state and its cabinet (e.g. the Government of the Soviet Union) are in practice subordinate to the politburo.
Technically, a politburo is overseen and its members selected by the central committee, but in practice it was often the other way around: powerful members of the politburo would ensure their support in the central committee through patronage. In China, political power has been further centralised into the Politburo Standing Committee of the Chinese Communist Party. |
======================================== |
[SOURCE: https://www.ynet.co.il/judaism] | [TOKENS: 205] |
Judaism

Uri Wizel got divorced, and the next day his brother fell in the war
When a woman is enlisted against other women: the horror of Yael Poliakov
When we look at the other as if he were an enemy, it leads to disintegration
Which of the Tabernacle's vessels is connected to Raz Hershko's medal from the Paris Olympics? | Quiz |
======================================== |
[SOURCE: https://www.mako.co.il/special-turkish_keep_going] | [TOKENS: 337] |
Let it sink in: Tuna is searching for a musical direction
A short journey of words, beats, and plenty of coffee
You don't want to miss this golden take
Tuna searches for lyrics while Niro and Yakir play with the beat
The unforgettable performance at Heichal Menora
Tuna takes a break from the song
Between songs and recording sessions: the moments behind the lens
A peek behind the scenes with Tuna and Doreen
Do we have a song? We weren't ready for this reveal
Tuna reads out the song's lyrics for the first time |
======================================== |
[SOURCE: https://en.wikipedia.org/wiki/Internet#cite_ref-NIUBL-IWS_86-0] | [TOKENS: 9291] |
The Internet has no single centralized governance in either technological implementation or policies for access and usage. Each constituent network sets its own policies. The overarching definitions of the two principal name spaces on the Internet, the Internet Protocol address (IP address) space and the Domain Name System (DNS), are directed by a maintainer organization, the Internet Corporation for Assigned Names and Numbers (ICANN). The technical underpinning and standardization of the core protocols is an activity of the non-profit Internet Engineering Task Force (IETF). Terminology The word internetted was used as early as 1849, meaning interconnected or interwoven. The word Internet was used in 1945 by the United States War Department in a radio operator's manual, and in 1974 as the shorthand form of Internetwork. Today, the term Internet most commonly refers to the global system of interconnected computer networks, though it may also refer to any group of smaller networks. The word Internet may be capitalized as a proper noun, although this is becoming less common. This reflects the tendency in English to capitalize new terms and move them to lowercase as they become familiar. The word is sometimes still capitalized to distinguish the global internet from smaller networks, though many publications, including the AP Stylebook since 2016, recommend the lowercase form in every case. In 2016, the Oxford English Dictionary found that, based on a study of around 2.5 billion printed and online sources, "Internet" was capitalized in 54% of cases. The terms Internet and World Wide Web are often used interchangeably; it is common to speak of "going on the Internet" when using a web browser to view web pages. However, the World Wide Web, or the Web, is only one of a large number of Internet services. It is the global collection of web pages, documents and other web resources linked by hyperlinks and URLs. 
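The two principal name spaces mentioned above can be illustrated with nothing but Python's standard library; this is a minimal sketch, not part of the original article, and the documentation-range addresses are arbitrary examples:

```python
import ipaddress
import socket

# The IP address space: the ipaddress module parses and classifies
# addresses from both families (IPv4 and IPv6).
v4 = ipaddress.ip_address("192.0.2.1")    # IPv4 documentation range (TEST-NET-1)
v6 = ipaddress.ip_address("2001:db8::1")  # IPv6 documentation prefix
print(v4.version, v6.version)             # 4 6

# The DNS maps human-readable names onto that address space. "localhost"
# resolves locally, so this lookup needs no network access.
addrs = {info[4][0] for info in socket.getaddrinfo("localhost", 80)}
print(addrs)  # e.g. {'127.0.0.1', '::1'}
```

Resolving a public hostname works the same way via `socket.getaddrinfo`, but then a real DNS query leaves the machine, which is why the sketch sticks to the loopback name.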
History In the 1960s, computer scientists began developing systems for time-sharing of computer resources. J. C. R. Licklider proposed the idea of a universal network while working at Bolt Beranek & Newman and, later, leading the Information Processing Techniques Office at the Advanced Research Projects Agency (ARPA) of the United States Department of Defense. Research into packet switching,[c] one of the fundamental Internet technologies, started in the work of Paul Baran at RAND in the early 1960s and, independently, Donald Davies at the United Kingdom's National Physical Laboratory in 1965. After the Symposium on Operating Systems Principles in 1967, packet switching from the proposed NPL network was incorporated into the design of the ARPANET, an experimental resource sharing network proposed by ARPA. ARPANET development began with two network nodes which were interconnected between the University of California, Los Angeles and the Stanford Research Institute on 29 October 1969. The third site was at the University of California, Santa Barbara, followed by the University of Utah. By the end of 1971, 15 sites were connected to the young ARPANET. Thereafter, the ARPANET gradually developed into a decentralized communications network, connecting remote centers and military bases in the United States. Other user networks and research networks, such as the Merit Network and CYCLADES, were developed in the late 1960s and early 1970s. Early international collaborations for the ARPANET were rare. Connections were made in 1973 to Norway (NORSAR and, later, NDRE) and to Peter Kirstein's research group at University College London, which provided a gateway to British academic networks, the first internetwork for resource sharing. ARPA projects, the International Network Working Group and commercial initiatives led to the development of various protocols and standards by which multiple separate networks could become a single network, or a network of networks. 
In 1974, Vint Cerf at Stanford University and Bob Kahn at DARPA published a proposal for "A Protocol for Packet Network Intercommunication". Cerf and his graduate students used the term internet as a shorthand for internetwork in RFC 675. The Internet Experiment Notes and later RFCs repeated this use. The work of Louis Pouzin and Robert Metcalfe had important influences on the resulting TCP/IP design. National PTTs and commercial providers developed the X.25 standard and deployed it on public data networks. The ARPANET initially served as a backbone for the interconnection of regional academic and military networks in the United States to enable resource sharing. Access to the ARPANET was expanded in 1981 when the National Science Foundation (NSF) funded the Computer Science Network (CSNET). In 1982, the Internet Protocol Suite (TCP/IP) was standardized, which facilitated worldwide proliferation of interconnected networks. TCP/IP network access expanded again in 1986 when the National Science Foundation Network (NSFNet) provided access to supercomputer sites in the United States for researchers, first at speeds of 56 kbit/s and later at 1.5 Mbit/s and 45 Mbit/s. The NSFNet expanded into academic and research organizations in Europe, Australia, New Zealand and Japan in 1988–89. Although other network protocols such as UUCP and PTT public data networks had global reach well before this time, this marked the beginning of the Internet as an intercontinental network. Commercial Internet service providers emerged in 1989 in the United States and Australia. The ARPANET was decommissioned in 1990. The linking of commercial networks and enterprises by the early 1990s, as well as the advent of the World Wide Web, marked the beginning of the transition to the modern Internet. 
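The end-to-end TCP/IP communication standardized in 1982 is exposed directly by the Berkeley socket API that nearly every operating system still ships. A minimal sketch, using the loopback interface so no real network is involved:

```python
import socket
import threading

# One thread plays the server: accept a single TCP connection and echo
# back whatever bytes arrive.
def echo_once(server_sock):
    conn, _ = server_sock.accept()
    with conn:
        conn.sendall(conn.recv(1024))

server = socket.socket(socket.AF_INET, socket.SOCK_STREAM)
server.bind(("127.0.0.1", 0))   # port 0: let the OS pick a free port
server.listen(1)
threading.Thread(target=echo_once, args=(server,), daemon=True).start()

# The client side: connect, send a payload over TCP, read the echo.
with socket.create_connection(server.getsockname()) as client:
    client.sendall(b"hello, internet")
    reply = client.recv(1024)
server.close()
print(reply)  # b'hello, internet'
```

The same client code would reach any host on the global Internet given its address, which is the point of a uniform protocol suite: the application does not care what hardware carries the packets.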
Steady advances in semiconductor technology and optical networking created new economic opportunities for commercial involvement in the expansion of the network in its core and for delivering services to the public. In mid-1989, MCI Mail and Compuserve established connections to the Internet, delivering email and public access products to the half million users of the Internet. Just months later, on 1 January 1990, PSInet launched an alternate Internet backbone for commercial use; one of the networks that added to the core of the commercial Internet of later years. In March 1990, the first high-speed T1 (1.5 Mbit/s) link between the NSFNET and Europe was installed between Cornell University and CERN, allowing much more robust communications than satellites could provide. Later in 1990, Tim Berners-Lee began writing WorldWideWeb, the first web browser, after two years of lobbying CERN management. By Christmas 1990, Berners-Lee had built all the tools necessary for a working Web: the HyperText Transfer Protocol (HTTP) 0.9, the HyperText Markup Language (HTML), the first Web browser (which was also an HTML editor and could access Usenet newsgroups and FTP files), the first HTTP server software (later known as CERN httpd), the first web server, and the first Web pages that described the project itself. In 1991 the Commercial Internet eXchange was founded, allowing PSInet to communicate with the other commercial networks CERFnet and Alternet. Stanford Federal Credit Union was the first financial institution to offer online Internet banking services to all of its members in October 1994. In 1996, OP Financial Group, also a cooperative bank, became the second online bank in the world and the first in Europe. By 1995, the Internet was fully commercialized in the U.S. when the NSFNet was decommissioned, removing the last restrictions on use of the Internet to carry commercial traffic. 
As technology advanced and commercial opportunities fueled reciprocal growth, the volume of Internet traffic began to exhibit scaling characteristics similar to those of MOS transistors, exemplified by Moore's law, doubling every 18 months. This growth, formalized as Edholm's law, was catalyzed by advances in MOS technology, laser light wave systems, and noise performance. Since 1995, the Internet has tremendously impacted culture and commerce, including the rise of near-instant communication by email, instant messaging, telephony (Voice over Internet Protocol or VoIP), two-way interactive video calls, and the World Wide Web. Increasing amounts of data are transmitted at higher and higher speeds over fiber optic networks operating at 1 Gbit/s, 10 Gbit/s, or more. The Internet continues to grow, driven by ever-greater amounts of online information and knowledge, commerce, entertainment and social networking services. During the late 1990s, it was estimated that traffic on the public Internet grew by 100 percent per year, while the mean annual growth in the number of Internet users was thought to be between 20% and 50%. This growth is often attributed to the lack of central administration, which allows organic growth of the network, as well as the non-proprietary nature of the Internet protocols, which encourages vendor interoperability and prevents any one company from exerting too much control over the network. In November 2006, the Internet was included on USA Today's list of the New Seven Wonders. As of 31 March 2011, the estimated total number of Internet users was 2.095 billion (30% of world population). It is estimated that in 1993 the Internet carried only 1% of the information flowing through two-way telecommunication. By 2000 this figure had grown to 51%, and by 2007 more than 97% of all telecommunicated information was carried over the Internet. 
Modern smartphones can access the Internet through cellular carrier networks, and internet usage by mobile and tablet devices exceeded desktop worldwide for the first time in October 2016. As of 2018, 80% of the world's population were covered by a 4G network. The International Telecommunication Union (ITU) estimated that, by the end of 2017, 48% of individual users regularly connect to the Internet, up from 34% in 2012. Mobile Internet connectivity has played an important role in expanding access in recent years, especially in Asia and the Pacific and in Africa. The number of unique mobile cellular subscriptions increased from 3.9 billion in 2012 to 4.8 billion in 2016, two-thirds of the world's population, with more than half of subscriptions located in Asia and the Pacific. The limits that users face on accessing information via mobile applications coincide with a broader process of fragmentation of the Internet. Fragmentation restricts access to media content and tends to affect the poorest users the most. One solution, zero-rating, is the practice of Internet service providers allowing users free connectivity to access specific content or applications without cost. Social impact The Internet has enabled new forms of social interaction, activities, and social associations, giving rise to the scholarly study of the sociology of the Internet. Between 2000 and 2009, the number of Internet users globally rose from 390 million to 1.9 billion. By 2010, 22% of the world's population had access to computers with 1 billion Google searches every day, 300 million Internet users reading blogs, and 2 billion videos viewed daily on YouTube. In 2014 the world's Internet users surpassed 3 billion or 44 percent of world population, but two-thirds came from the richest countries, with 78 percent of Europeans using the Internet, followed by 57 percent of the Americas. 
However, by 2018, Asia alone accounted for 51% of all Internet users, with 2.2 billion out of the 4.3 billion Internet users in the world. China's Internet users surpassed a major milestone in 2018, when the country's Internet regulatory authority, China Internet Network Information Centre, announced that China had 802 million users. China was followed by India, with some 700 million users, with the United States third with 275 million users. However, in terms of penetration, in 2022, China had a 70% penetration rate compared to India's 60% and the United States's 90%. In 2022, 54% of the world's Internet users were based in Asia, 14% in Europe, 7% in North America, 10% in Latin America and the Caribbean, 11% in Africa, 4% in the Middle East and 1% in Oceania. In 2019, Kuwait, Qatar, the Falkland Islands, Bermuda and Iceland had the highest Internet penetration by the number of users, with 93% or more of the population with access. As of 2022, it was estimated that 5.4 billion people use the Internet, more than two-thirds of the world's population. Early computer systems were limited to the characters in the American Standard Code for Information Interchange (ASCII), a subset of the Latin alphabet. After English (27%), the most requested languages on the World Wide Web are Chinese (25%), Spanish (8%), Japanese (5%), Portuguese and German (4% each), Arabic, French and Russian (3% each), and Korean (2%). Modern character encoding standards, such as Unicode, allow for development and communication in the world's widely used languages. However, some glitches such as mojibake (incorrect display of some languages' characters) still remain. 
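The mojibake glitch mentioned above is easy to reproduce: bytes encoded under one character encoding and decoded under another come out garbled, and reversing the mismatch recovers the text. A small illustration (the word "café" is an arbitrary example):

```python
# UTF-8 encodes "é" as two bytes; decoding those bytes as Latin-1
# interprets each byte as its own character, producing mojibake.
text = "café"
raw = text.encode("utf-8")          # b'caf\xc3\xa9'
garbled = raw.decode("latin-1")     # 'cafÃ©'  <- mojibake
restored = garbled.encode("latin-1").decode("utf-8")
print(garbled, restored)            # cafÃ© café
```

The round trip only works because Latin-1 maps every byte value to a character; with lossier mismatches the original text may be unrecoverable, which is why Unicode-aware encodings are the norm today.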
Several neologisms exist that refer to Internet users: Netizen (as in "citizen of the net") refers to those actively involved in improving online communities, the Internet in general or surrounding political affairs and rights such as free speech; Internaut refers to operators or technically highly capable users of the Internet; digital citizen refers to a person using the Internet in order to engage in society, politics, and government participation. The Internet allows greater flexibility in working hours and location, especially with the spread of unmetered high-speed connections. The Internet can be accessed almost anywhere by numerous means, including through mobile Internet devices. Mobile phones, datacards, handheld game consoles and cellular routers allow users to connect to the Internet wirelessly.[citation needed] Educational material at all levels from pre-school (e.g. CBeebies) to post-doctoral (e.g. scholarly literature through Google Scholar) is available on websites. The internet has facilitated the development of virtual universities and distance education, enabling both formal and informal education. The Internet allows researchers to conduct research remotely via virtual laboratories, with profound changes in reach and generalizability of findings as well as in communication between scientists and in the publication of results. By the late 2010s the Internet had been described as "the main source of scientific information" for the majority of the global North population. Wikis have also been used in the academic community for sharing and dissemination of information across institutional and international boundaries. In those settings, they have been found useful for collaboration on grant writing, strategic planning, departmental documentation, and committee work. The United States Patent and Trademark Office uses a wiki to allow the public to collaborate on finding prior art relevant to examination of pending patent applications. 
Queens, New York has used a wiki to allow citizens to collaborate on the design and planning of a local park. The English Wikipedia has the largest user base among wikis on the World Wide Web and ranks in the top 10 among all sites in terms of traffic. The Internet has been a major outlet for leisure activity since its inception, with entertaining social experiments such as MUDs and MOOs being conducted on university servers, and humor-related Usenet groups receiving much traffic. Many Internet forums have sections devoted to games and funny videos. Another area of leisure activity on the Internet is multiplayer gaming. This form of recreation creates communities, where people of all ages and origins enjoy the fast-paced world of multiplayer games. These range from MMORPG to first-person shooters, from role-playing video games to online gambling. While online gaming has been around since the 1970s, modern modes of online gaming began with subscription services such as GameSpy and MPlayer. Streaming media is the real-time delivery of digital media for immediate consumption or enjoyment by end users. Streaming companies (such as Netflix, Disney+, Amazon's Prime Video, Mubi, Hulu, and Apple TV+) now dominate the entertainment industry, eclipsing traditional broadcasters. Audio streamers such as Spotify and Apple Music also have significant market share in the audio entertainment market. Video sharing websites are also a major factor in the entertainment ecosystem. YouTube was founded on 15 February 2005 and is now the leading website for free streaming video with more than two billion users. It uses a web player to stream and show video files. YouTube users watch hundreds of millions, and upload hundreds of thousands, of videos daily. 
Other video sharing websites include Vimeo, Instagram and TikTok.[citation needed] Although many governments have attempted to restrict both Internet pornography and online gambling, this has generally failed to stop their widespread popularity. A number of advertising-funded ostensible video sharing websites known as "tube sites" have been created to host shared pornographic video content. Due to laws requiring the documentation of the origin of pornography, these websites now largely operate in conjunction with pornographic movie studios and their own independent creator networks, acting as de facto video streaming services. Major players in this field include the market leader Aylo, the operator of PornHub and numerous other branded sites, as well as other independent operators such as xHamster and Xvideos. As of 2023, Internet traffic to pornographic video sites rivalled that of mainstream video streaming and sharing services. Remote work is facilitated by tools such as groupware, virtual private networks, conference calling, videotelephony, and VoIP so that work may be performed from any location, such as the worker's home.[citation needed] The spread of low-cost Internet access in developing countries has opened up new possibilities for peer-to-peer charities, which allow individuals to contribute small amounts to charitable projects for other individuals. Websites, such as DonorsChoose and GlobalGiving, allow small-scale donors to direct funds to individual projects of their choice. A popular twist on Internet-based philanthropy is the use of peer-to-peer lending for charitable purposes. Kiva pioneered this concept in 2005, offering the first web-based service to publish individual loan profiles for funding. The low cost and nearly instantaneous sharing of ideas, knowledge, and skills have made collaborative work dramatically easier, with the help of collaborative software, which allows groups to easily form, cheaply communicate, and share ideas. 
A notable outcome of such collaboration is the free software movement, which has produced, among other things, Linux, Mozilla Firefox, and OpenOffice.org (later forked into LibreOffice).[citation needed] Content management systems allow collaborating teams to work on shared sets of documents simultaneously without accidentally destroying each other's work.[citation needed] The internet also allows for cloud computing, virtual private networks, remote desktops, and remote work.[citation needed] The online disinhibition effect describes the tendency of many individuals to behave more stridently or offensively online than they would in person. A significant number of feminist women have been the target of various forms of harassment, ranging from insults and hate speech to, in extreme cases, rape and death threats, in response to posts they have made on social media. Social media companies have been criticized in the past for not doing enough to aid victims of online abuse. Children also face dangers online such as cyberbullying and approaches by sexual predators, who sometimes pose as children themselves. Due to naivety, they may also post personal information about themselves online, which could put them or their families at risk unless warned not to do so. Many parents choose to enable Internet filtering or supervise their children's online activities in an attempt to protect their children from pornography or violent content on the Internet. The most popular social networking services commonly forbid users under the age of 13. However, these policies can be circumvented by registering an account with a false birth date, and a significant number of children aged under 13 join such sites.[citation needed] Social networking services for younger children, which claim to provide better levels of protection for children, also exist. Internet usage has been correlated to users' loneliness. 
Lonely people tend to use the Internet as an outlet for their feelings and to share their stories with others, such as in the "I am lonely will anyone speak to me" thread.[citation needed] Cyberslacking can become a drain on corporate resources; employees spend a significant amount of time surfing the Web while at work. Internet addiction disorder is excessive computer use that interferes with daily life. Nicholas G. Carr believes that Internet use has other effects on individuals, for instance improving skills of scan-reading and interfering with the deep thinking that leads to true creativity. Electronic business encompasses business processes spanning the entire value chain: purchasing, supply chain management, marketing, sales, customer service, and business relationships. E-commerce seeks to add revenue streams using the Internet to build and enhance relationships with clients and partners. According to International Data Corporation, the size of worldwide e-commerce, when global business-to-business and -consumer transactions are combined, equated to $16 trillion in 2013. A report by Oxford Economics added those two together to estimate the total size of the digital economy at $20.4 trillion, equivalent to roughly 13.8% of global sales. While much has been written of the economic advantages of Internet-enabled commerce, there is also evidence that some aspects of the Internet such as maps and location-aware services may serve to reinforce economic inequality and the digital divide. Electronic commerce may be responsible for consolidation and the decline of mom-and-pop, brick and mortar businesses resulting in increases in income inequality. A 2013 Institute for Local Self-Reliance report states that brick-and-mortar retailers employ 47 people for every $10 million in sales, while Amazon employs only 14. Similarly, the 700-employee room rental start-up Airbnb was valued at $10 billion in 2014, about half as much as Hilton Worldwide, which employs 152,000 people. 
At that time, Uber employed 1,000 full-time employees and was valued at $18.2 billion, about the same valuation as Avis Rent a Car and The Hertz Corporation combined, which together employed almost 60,000 people. Advertising on popular web pages can be lucrative, and e-commerce, the sale of products and services directly via the Web, continues to grow. Online advertising is a form of marketing and advertising which uses the Internet to deliver promotional marketing messages to consumers. It includes email marketing, search engine marketing (SEM), social media marketing, many types of display advertising (including web banner advertising), and mobile advertising. In 2011, Internet advertising revenues in the United States surpassed those of cable television and nearly exceeded those of broadcast television. Many common online advertising practices are controversial and increasingly subject to regulation. The Internet has achieved new relevance as a political tool. The presidential campaign of Howard Dean in 2004 in the United States was notable for its success in soliciting donations via the Internet. Many political groups use the Internet as a new method of organizing to carry out their mission, giving rise to Internet activism. Social media websites, such as Facebook and Twitter, helped people organize the Arab Spring, by helping activists organize protests, communicate grievances, and disseminate information. Many have understood the Internet as an extension of the Habermasian notion of the public sphere, observing how network communication technologies provide something like a global civic forum. However, incidents of politically motivated Internet censorship have now been recorded in many countries, including western democracies. E-government is the use of technological communications devices, such as the Internet, to provide public services to citizens and other persons in a country or region. 
E-government offers opportunities for more direct and convenient citizen access to government and for government provision of services directly to citizens. Cybersectarianism is a new organizational form that involves: highly dispersed small groups of practitioners that may remain largely anonymous within the larger social context and operate in relative secrecy, while still linked remotely to a larger network of believers who share a set of practices and texts, and often a common devotion to a particular leader. Overseas supporters provide funding and support; domestic practitioners distribute tracts, participate in acts of resistance, and share information on the internal situation with outsiders. Collectively, members and practitioners of such sects construct viable virtual communities of faith, exchanging personal testimonies and engaging in the collective study via email, online chat rooms, and web-based message boards. In particular, the British government has raised concerns about the prospect of young British Muslims being indoctrinated into Islamic extremism by material on the Internet, being persuaded to join terrorist groups such as the so-called "Islamic State", and then potentially committing acts of terrorism on returning to Britain after fighting in Syria or Iraq.[citation needed] Applications and services The Internet carries many applications and services, most prominently the World Wide Web, including social media, electronic mail, mobile applications, multiplayer online games, Internet telephony, file sharing, and streaming media services. The World Wide Web is a global collection of documents, images, multimedia, applications, and other resources, logically interrelated by hyperlinks and referenced with Uniform Resource Identifiers (URIs), which provide a global system of named references. URIs symbolically identify services, web servers, databases, and the documents and resources that they can provide. 
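The URI structure described above (scheme, authority, path, query, fragment) can be inspected with Python's `urllib.parse`; the URL here is just an illustrative example:

```python
from urllib.parse import urlsplit

# urlsplit breaks a URI into the five components defined by its syntax.
parts = urlsplit("https://en.wikipedia.org/wiki/Internet?action=view#History")
print(parts.scheme)    # https
print(parts.netloc)    # en.wikipedia.org
print(parts.path)      # /wiki/Internet
print(parts.query)     # action=view
print(parts.fragment)  # History
```

Because every component has a fixed role, any client on the Internet can interpret the same reference the same way, which is what makes URIs a global system of named references.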
HyperText Transfer Protocol (HTTP) is the main access protocol of the World Wide Web. Web services also use HTTP for communication between software systems, for transferring, sharing, and exchanging business data and logistics; it is one of many protocols that can be used for communication on the Internet. World Wide Web browser software, such as Microsoft Edge, Mozilla Firefox, Opera, Apple's Safari, and Google Chrome, enable users to navigate from one web page to another via the hyperlinks embedded in the documents. These documents may also contain computer data, including graphics, sounds, text, video, multimedia and interactive content. Client-side scripts can include animations, games, office applications and scientific demonstrations. Email is an important communications service available via the Internet. The concept of sending electronic text messages between parties, analogous to mailing letters or memos, predates the creation of the Internet. Internet telephony is a common communications service realized with the Internet. The name of the principal internetworking protocol, the Internet Protocol, lends its name to voice over Internet Protocol (VoIP).[citation needed] VoIP systems now dominate many markets, being as easy and convenient as a traditional telephone, while having substantial cost savings, especially over long distances. File sharing is the practice of transferring large amounts of data in the form of computer files across the Internet, for example via file servers. The load of bulk downloads to many users can be eased by the use of "mirror" servers or peer-to-peer networks. Access to the file may be controlled by user authentication, the transit of the file over the Internet may be obscured by encryption, and money may change hands for access to the file. The price can be paid by the remote charging of funds from, for example, a credit card whose details are also passed, usually fully encrypted, across the Internet. 
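A complete HTTP request/response round trip can be sketched with the standard library alone; here a throwaway local server stands in for a real web server, so no Internet access is assumed:

```python
import http.client
import http.server
import threading

# Start a local HTTP server on a free loopback port in a background thread.
# SimpleHTTPRequestHandler serves the current directory.
server = http.server.ThreadingHTTPServer(
    ("127.0.0.1", 0), http.server.SimpleHTTPRequestHandler)
threading.Thread(target=server.serve_forever, daemon=True).start()

# The client side of HTTP: send a GET request and read the response.
host, port = server.server_address
conn = http.client.HTTPConnection(host, port)
conn.request("GET", "/")          # request line: GET / HTTP/1.1
resp = conn.getresponse()
body = resp.read()
print(resp.status, resp.reason)   # 200 OK
conn.close()
server.shutdown()
```

Pointing `HTTPConnection` at a public host works identically; browsers layer navigation, rendering, and hyperlink handling on top of exactly this exchange.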
The origin and authenticity of the file received may be checked by a digital signature. Governance The Internet is a global network that comprises many voluntarily interconnected autonomous networks. It operates without a central governing body. The technical underpinning and standardization of the core protocols (IPv4 and IPv6) is an activity of the Internet Engineering Task Force (IETF), a non-profit organization of loosely affiliated international participants that anyone may associate with by contributing technical expertise. While the hardware components in the Internet infrastructure can often be used to support other software systems, it is the design and the standardization process of the software that characterizes the Internet and provides the foundation for its scalability and success. The responsibility for the architectural design of the Internet software systems has been assumed by the IETF. The IETF conducts standard-setting work groups, open to any individual, about the various aspects of Internet architecture. The resulting contributions and standards are published as Request for Comments (RFC) documents on the IETF web site. The principal methods of networking that enable the Internet are contained in specially designated RFCs that constitute the Internet Standards. Other less rigorous documents are simply informative, experimental, or historical, or document the best current practices when implementing Internet technologies. To maintain interoperability, the principal name spaces of the Internet are administered by the Internet Corporation for Assigned Names and Numbers (ICANN). ICANN is governed by an international board of directors drawn from across the Internet technical, business, academic, and other non-commercial communities. The organization coordinates the assignment of unique identifiers for use on the Internet, including domain names, IP addresses, application port numbers in the transport protocols, and many other parameters. 
Globally unified name spaces are essential for maintaining the global reach of the Internet. This role of ICANN distinguishes it as perhaps the only central coordinating body for the global Internet. The National Telecommunications and Information Administration, an agency of the United States Department of Commerce, had final approval over changes to the DNS root zone until the IANA stewardship transition on 1 October 2016. Regional Internet registries (RIRs) were established for five regions of the world to assign IP address blocks and other Internet parameters to local registries, such as Internet service providers, from a designated pool of addresses set aside for each region: AFRINIC (Africa), APNIC (Asia-Pacific), ARIN (North America), LACNIC (Latin America and the Caribbean), and RIPE NCC (Europe, the Middle East, and Central Asia).[citation needed] The Internet Society (ISOC) was founded in 1992 with a mission to "assure the open development, evolution and use of the Internet for the benefit of all people throughout the world". Its members include individuals as well as corporations, organizations, governments, and universities. Among other activities, ISOC provides an administrative home for a number of less formally organized groups that are involved in developing and managing the Internet, including: the Internet Engineering Task Force (IETF), Internet Architecture Board (IAB), Internet Engineering Steering Group (IESG), Internet Research Task Force (IRTF), and Internet Research Steering Group (IRSG). On 16 November 2005, the United Nations-sponsored World Summit on the Information Society in Tunis established the Internet Governance Forum (IGF) to discuss Internet-related issues.[citation needed] Infrastructure The communications infrastructure of the Internet consists of its hardware components and a system of software layers that control various aspects of the architecture. As with any computer network, the Internet physically consists of routers, media (such as cabling and radio links), repeaters, and modems.
However, as an example of internetworking, many of the network nodes are not necessarily Internet equipment per se. Internet packets are carried by other full-fledged networking protocols, with the Internet acting as a homogeneous networking standard, running across heterogeneous hardware, with the packets guided to their destinations by IP routers.[citation needed] Internet service providers (ISPs) establish worldwide connectivity between individual networks at various levels of scope. At the top of the routing hierarchy are the tier 1 networks, large telecommunication companies that exchange traffic directly with each other via very high-speed fiber-optic cables, under peering agreements. Tier 2 and lower-level networks buy Internet transit from other providers to reach at least some parties on the global Internet, though they may also engage in peering. End-users who only access the Internet when needed to perform a function or obtain information represent the bottom of the routing hierarchy.[citation needed] An ISP may use a single upstream provider for connectivity, or implement multihoming to achieve redundancy and load balancing. Internet exchange points are major traffic exchanges with physical connections to multiple ISPs. Large organizations, such as academic institutions, large enterprises, and governments, may perform the same function as ISPs, engaging in peering and purchasing transit on behalf of their internal networks. Research networks tend to interconnect with large subnetworks such as GEANT, GLORIAD, Internet2, and the UK's national research and education network, JANET.[citation needed] Common methods of Internet access by users include broadband over coaxial cable, fiber optics or copper wires, Wi-Fi, satellite, and cellular telephone technology.[citation needed] Grassroots efforts have led to wireless community networks.
Commercial Wi-Fi services that cover large areas are available in many cities, such as New York, London, Vienna, Toronto, San Francisco, Philadelphia, Chicago and Pittsburgh. Most servers that provide internet services are today hosted in data centers, and content is often accessed through high-performance content delivery networks. Colocation centers often host private peering connections between their customers, internet transit providers, cloud providers, meet-me rooms for connecting customers together, Internet exchange points, and landing points and terminal equipment for fiber optic submarine communication cables, connecting the internet. Internet Protocol Suite The Internet standards describe a framework known as the Internet protocol suite (also called TCP/IP, after its first two components). This is a suite of protocols that are ordered into a set of four conceptual layers by the scope of their operation, originally documented in RFC 1122 and RFC 1123: the application layer, the transport layer, the internet layer, and the link layer.[citation needed] The most prominent component of the Internet model is the Internet Protocol. IP enables internetworking, essentially establishing the Internet itself. Two versions of the Internet Protocol exist, IPv4 and IPv6.[citation needed] Aside from the complex array of physical connections that make up its infrastructure, the Internet is facilitated by bi- or multi-lateral commercial contracts (e.g., peering agreements), and by technical specifications or protocols that describe the exchange of data over the network.[citation needed] For locating individual computers on the network, the Internet provides IP addresses. IP addresses are used by the Internet infrastructure to direct internet packets to their destinations. They consist of fixed-length numbers, which are found within the packet. IP addresses are generally assigned to equipment either automatically via the Dynamic Host Configuration Protocol (DHCP), or configured manually.[citation needed] The Domain Name System (DNS) converts user-entered domain names (e.g.
"en.wikipedia.org") into IP addresses.[citation needed] Internet Protocol version 4 (IPv4) defines an IP address as a 32-bit number. IPv4 is the initial version used on the first generation of the Internet and is still in dominant use. It was designed in 1981 to address up to ≈4.3 billion (4.3×10^9) hosts. However, the explosive growth of the Internet has led to IPv4 address exhaustion, which entered its final stage in 2011, when the global IPv4 address allocation pool was exhausted. Because of the growth of the Internet and the depletion of available IPv4 addresses, a new version of IP, IPv6, was developed in the mid-1990s, which provides vastly larger addressing capabilities and more efficient routing of Internet traffic. IPv6 uses 128 bits for the IP address and was standardized in 1998. IPv6 deployment has been ongoing since the mid-2000s and is currently in growing deployment around the world, since Internet address registries began to urge all resource managers to plan rapid adoption and conversion. By design, IPv6 is not directly interoperable with IPv4. Instead, it establishes a parallel version of the Internet not directly accessible with IPv4 software. Thus, translation facilities exist for internetworking, and some nodes have duplicate networking software for both networks. Essentially all modern computer operating systems support both versions of the Internet Protocol.[citation needed] Network infrastructure, however, has been lagging in this development.[citation needed] A subnet or subnetwork is a logical subdivision of an IP network. Computers that belong to a subnet are addressed with an identical most-significant bit-group in their IP addresses. This results in the logical division of an IP address into two fields, the network number or routing prefix and the rest field or host identifier.
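The size difference between the two address spaces is easy to make concrete with Python's standard `ipaddress` module (the addresses below come from the blocks reserved for documentation, not from real hosts):

```python
import ipaddress

# Documentation-example addresses (198.51.100.0/24 and 2001:db8::/32 are
# reserved for documentation), not real hosts.
v4 = ipaddress.ip_address("198.51.100.7")   # IPv4: a 32-bit number
v6 = ipaddress.ip_address("2001:db8::1")    # IPv6: a 128-bit number

# IPv4 can name 2**32 (about 4.3 billion) hosts; IPv6 can name 2**128.
ipv4_space = 2 ** v4.max_prefixlen
ipv6_space = 2 ** v6.max_prefixlen
```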
The rest field is an identifier for a specific host or network interface.[citation needed] The routing prefix may be expressed in Classless Inter-Domain Routing (CIDR) notation written as the first address of a network, followed by a slash character (/), and ending with the bit-length of the prefix. For example, 198.51.100.0/24 is the prefix of the Internet Protocol version 4 network starting at the given address, having 24 bits allocated for the network prefix, and the remaining 8 bits reserved for host addressing. Addresses in the range 198.51.100.0 to 198.51.100.255 belong to this network. The IPv6 address specification 2001:db8::/32 is a large address block with 2^96 addresses, having a 32-bit routing prefix.[citation needed] For IPv4, a network may also be characterized by its subnet mask or netmask, which is the bitmask that when applied by a bitwise AND operation to any IP address in the network, yields the routing prefix. Subnet masks are also expressed in dot-decimal notation like an address. For example, 255.255.255.0 is the subnet mask for the prefix 198.51.100.0/24.[citation needed] Computers and routers use routing tables in their operating system to forward IP packets to reach a node on a different subnetwork. Routing tables are maintained by manual configuration or automatically by routing protocols. End-nodes typically use a default route that points toward an ISP providing transit, while ISP routers use the Border Gateway Protocol to establish the most efficient routing across the complex connections of the global Internet.[citation needed] The default gateway is the node that serves as the forwarding host (router) to other networks when no other route specification matches the destination IP address of a packet. Security Internet resources, hardware, and software components are the target of criminal or malicious attempts to gain unauthorized control to cause interruptions, commit fraud, engage in blackmail or access private information.
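The 198.51.100.0/24 arithmetic above maps directly onto Python's `ipaddress` module. The sketch below reproduces the netmask and bitwise-AND relationships and adds a toy longest-prefix-match lookup of the kind routing tables perform; the table entries and next-hop labels are invented for illustration:

```python
import ipaddress

net = ipaddress.ip_network("198.51.100.0/24")
# /24 leaves 32 - 24 = 8 host bits, i.e. 2**8 = 256 addresses.
assert str(net.netmask) == "255.255.255.0"
assert net.num_addresses == 256
assert ipaddress.ip_address("198.51.100.255") in net

# Bitwise AND of any member address with the mask yields the routing prefix.
addr = int(ipaddress.ip_address("198.51.100.7"))
mask = int(net.netmask)
assert ipaddress.ip_address(addr & mask) == net.network_address

# Toy routing table: the most specific (longest) matching prefix wins,
# with 0.0.0.0/0 acting as the default route. Entries are illustrative.
table = {
    ipaddress.ip_network("0.0.0.0/0"): "default-gateway",
    ipaddress.ip_network("198.51.100.0/24"): "local-subnet",
}

def next_hop(destination: str) -> str:
    dest = ipaddress.ip_address(destination)
    best = max((n for n in table if dest in n), key=lambda n: n.prefixlen)
    return table[best]
```

Real routers apply the same longest-prefix-match rule, only over tables with hundreds of thousands of prefixes learned via protocols such as BGP.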
Malware is malicious software used and distributed via the Internet. It includes computer viruses, which are copied with the help of humans; computer worms, which copy themselves automatically; software for denial-of-service attacks; ransomware; botnets; and spyware that reports on the activity and typing of users.[citation needed] Usually, these activities constitute cybercrime. Defense theorists have also speculated about the possibility of hackers conducting cyber warfare on a large scale using similar methods. Malware poses serious problems to individuals and businesses on the Internet. According to Symantec's 2018 Internet Security Threat Report (ISTR), the number of malware variants rose to 669,947,865 in 2017, twice as many as in 2016. Cybercrime, which includes malware attacks as well as other crimes committed by computer, was predicted to cost the world economy US$6 trillion in 2021, and is increasing at a rate of 15% per year. Since 2021, malware has been designed to target computer systems that run critical infrastructure such as the electricity distribution network. Malware can be designed to evade antivirus software detection algorithms. The vast majority of computer surveillance involves the monitoring of data and traffic on the Internet. In the United States, for example, under the Communications Assistance For Law Enforcement Act, all phone calls and broadband Internet traffic (emails, web traffic, instant messaging, etc.) are required to be available for unimpeded real-time monitoring by Federal law enforcement agencies. Under the Act, all U.S.
telecommunications providers are required to install packet sniffing technology to allow Federal law enforcement and intelligence agencies to intercept all of their customers' broadband Internet and VoIP traffic.[d] The large amount of data gathered from packet capture requires surveillance software that filters and reports relevant information, such as the use of certain words or phrases, the access to certain types of web sites, or communicating via email or chat with certain parties. Agencies, such as the Information Awareness Office, NSA, GCHQ and the FBI, spend billions of dollars per year to develop, purchase, implement, and operate systems for interception and analysis of data. Similar systems are operated by Iranian secret police to identify and suppress dissidents. The required hardware and software were allegedly installed by German Siemens AG and Finnish Nokia. Some governments, such as those of Myanmar, Iran, North Korea, Mainland China, Saudi Arabia and the United Arab Emirates, restrict access to content on the Internet within their territories, especially to political and religious content, with domain name and keyword filters. In Norway, Denmark, Finland, and Sweden, major Internet service providers have voluntarily agreed to restrict access to sites listed by authorities. While this list of forbidden resources is supposed to contain only known child pornography sites, the content of the list is secret. Many countries, including the United States, have enacted laws against the possession or distribution of certain material, such as child pornography, via the Internet but do not mandate filter software. 
Many free or commercially available software programs, called content-control software, are available to users to block offensive content on individual computers or networks, in order to limit access by children to pornographic material or depictions of violence.[citation needed] Performance As the Internet is a heterogeneous network, its physical characteristics, including, for example, the data transfer rates of connections, vary widely. It exhibits emergent phenomena that depend on its large-scale organization. [Figure: global Internet traffic volume in petabytes per month, 1990–2015] The volume of Internet traffic is difficult to measure because no single point of measurement exists in the multi-tiered, non-hierarchical topology. Traffic data may be estimated from the aggregate volume through the peering points of the Tier 1 network providers, but traffic that stays local in large provider networks may not be accounted for.[citation needed] An Internet blackout or outage can be caused by local signaling interruptions. Disruptions of submarine communications cables may cause blackouts or slowdowns to large areas, such as in the 2008 submarine cable disruption. Less-developed countries are more vulnerable due to the small number of high-capacity links. Land cables are also vulnerable, as in 2011 when a woman digging for scrap metal severed most connectivity for the nation of Armenia. Internet blackouts affecting almost entire countries can be achieved by governments as a form of Internet censorship, as in the blockage of the Internet in Egypt, whereby approximately 93% of networks were without access in 2011 in an attempt to stop mobilization for anti-government protests.
Estimates of the Internet's electricity usage have been the subject of controversy, according to a 2014 peer-reviewed research paper that found claims differing by a factor of 20,000 published in the literature during the preceding decade, ranging from 0.0064 kilowatt hours per gigabyte transferred (kWh/GB) to 136 kWh/GB. The researchers attributed these discrepancies mainly to the year of reference (i.e. whether efficiency gains over time had been taken into account) and to whether "end devices such as personal computers and servers are included" in the analysis. In 2011, academic researchers estimated the overall energy used by the Internet to be between 170 and 307 GW, less than two percent of the energy used by humanity. This estimate included the energy needed to build, operate, and periodically replace the estimated 750 million laptops, a billion smart phones and 100 million servers worldwide as well as the energy that routers, cell towers, optical switches, Wi-Fi transmitters and cloud storage devices use when transmitting Internet traffic. According to a non-peer-reviewed study published in 2018 by The Shift Project (a French think tank funded by corporate sponsors), nearly 4% of global CO2 emissions could be attributed to global data transfer and the necessary infrastructure. The study also said that online video streaming alone accounted for 60% of this data transfer and therefore contributed to over 300 million tons of CO2 emission per year, and argued for new "digital sobriety" regulations restricting the use and size of video files. |
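The factor-of-20,000 spread between the published intensity estimates can be checked directly from the two endpoints quoted above (a back-of-the-envelope verification, not part of the cited study):

```python
# Endpoints of published Internet energy-intensity estimates (kWh per GB).
low_estimate = 0.0064
high_estimate = 136.0

# 136 / 0.0064 = 21,250: consistent with the stated factor of ~20,000.
spread = high_estimate / low_estimate
```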
======================================== |
[SOURCE: https://en.wikipedia.org/wiki/DAP_(software)] | [TOKENS: 380] |
Contents DAP (software) DAP is a statistics and graphics program based on the C programming language that performs data management, analysis, and C-style graphical visualization tasks without requiring complex syntax. Its name is an acronym for Data Analysis and Presentation. DAP was written to be a free replacement for SAS, but users are assumed to have a basic familiarity with the C programming language in order to permit greater flexibility. It has been designed to be used on large data sets and is primarily used in statistical consulting practices. However, even with its clear benefits, DAP has not been updated since 2014 and has not seen widespread use compared to other statistical analysis programs. Features DAP is a command-line-driven program. Below are various features that DAP can perform. DAP can compute means and percentiles, correlation, and ANOVA from data sets, including unbalanced as well as crossed and nested ANOVA. It can also be used to create scatterplots, line graphs and histograms of data, including for split plots, treatment combinations, and Latin squares. DAP can perform linear regression and can use regressions to build linear models. In addition to linear regression, DAP can also perform logistic regression analysis. DAP can carry out a variety of other analyses as well, including building loglinear models and logit models for linear-by-linear association. In terms of models, DAP can create mixed balanced and unbalanced models as well as random unbalanced models. It has been designed to cope with very large data sets, even when the size of the data exceeds the size of the computer's memory, because the program processes files one line at a time rather than reading entire files into memory. |
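The one-line-at-a-time processing described above is what lets a program summarize files larger than memory. The sketch below shows the general technique in Python using Welford's online algorithm; it is not DAP syntax, and the in-memory buffer stands in for a real data file:

```python
# Streaming, one-line-at-a-time summary statistics: O(1) memory regardless
# of file size. Plain Python illustrating the technique, not DAP syntax.
import io

def streaming_stats(lines):
    """Welford's online algorithm for count, mean, and sample variance."""
    count, mean, m2 = 0, 0.0, 0.0
    for line in lines:
        x = float(line)
        count += 1
        delta = x - mean
        mean += delta / count
        m2 += delta * (x - mean)
    variance = m2 / (count - 1) if count > 1 else 0.0
    return count, mean, variance

# In practice `lines` would be an open file object such as open("data.txt");
# an in-memory buffer provides the same line-oriented interface here.
count, mean, variance = streaming_stats(io.StringIO("1\n2\n3\n4\n5\n"))
```

Because only the running count, mean, and squared-deviation accumulator are kept, peak memory is constant no matter how many lines the input contains.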
======================================== |
[SOURCE: https://www.theverge.com/autonomous-cars] | [TOKENS: 1226] |
Autonomous Cars Self-driving cars are finally here, and how they are deployed will change how we get around forever. From Tesla to Google to Uber to all the major automakers, we bring you complete coverage of the race to develop fully autonomous vehicles. This includes helpful explanations about the technology and policies that underpin the movement to build driverless cars. The EV company says the staff cuts are intended to “improve operational effectiveness and optimize our resources,” TechCrunch reports. An internal memo added that the company is still focused on “further expansion into the robotaxi market,” following the launch of a robotaxi collaboration with Nuro and Uber last year. [TechCrunch] Turns out, if you leave a Waymo door open, someone gets paid to close it, opening up some novel opportunities for improving the economy. tsmuse: So you’re saying we can create jobs if we call a bunch of waymos, open their doors, and then walk away? Get the day’s best comment and more in my free newsletter, The Verge Daily. Since Waymo doesn’t have a vehicle with automatic doors, it has to pay gig workers for help. (The Washington Post covered this phenomenon recently.) Just another example of the invisible human labor that’s required to keep these autonomous systems afloat. Co-CEO Tekedra Mawakana told Bloomberg the robotaxi company was on track to reach the 1 million weekly rides milestone by the end of 2026. The company currently provides about 400,000 rides per week across six US cities. Waymo just announced that its sixth-generation vehicle is going to start accepting passengers in San Francisco and Los Angeles. [Bloomberg] Chinese automotive publication Gasgoo says the two companies are in talks to dramatically increase Waymo’s fleet of Hyundai EVs. The deal could be worth around $2.5 billion, assuming $50,000 per vehicle.
But even if the report is true, don’t expect Waymo’s robotaxi fleet to suddenly grow by 50,000: the company has said it plans on adding only 2,000 more vehicles in 2026, for a total fleet size of 3,500. Waymo is currently testing and validating the Ioniq 5 and the Zeekr RT as its next two robotaxis. Say that five times really fast! Uber has said it would use Baidu’s Apollo Go robotaxis in London, and now the company is adding Dubai as well, starting in March 2026. As Waymo uses AI-generated 3D worlds to simulate driverless cars’ encounters with tornadoes, floods, and even elephants, one commenter wonders if they could try AI school buses next. cowboyfromspace: They got elephants down but forgot about school buses? I’ve been seeing a lot of posts and articles claiming that Waymo’s robotaxis are being secretly controlled by teleoperators in the Philippines. The claims stem from a Senate Commerce Committee hearing this week, during which a top Waymo executive told Sen. Ed Markey (D-MA) that the company employs some remote operators overseas. But he was also clear that those operators aren’t actually controlling the vehicles. I watched all two hours of the hearing, and here’s what Mauricio Peña, Waymo’s chief safety officer, had to say: They do not remotely drive the vehicles. As you stated, Waymo asks for guidance in certain situations. And it’s an input, but the Waymo vehicle is always in charge of the dynamic driving tasks. Now, as to some other robotaxi operators… Geely may build cars in the US, but their software still has to follow cybersecurity restrictions. By trying to drive more assertively, Waymo appears to be adopting some dangerous human habits. A Waymo made the unfortunate decision to drive on light rail tracks in Phoenix with a passenger inside while a train was approaching.
The passenger made the right call to abandon the robotaxi, even if it meant getting out in the middle of traffic. Valley Metro, which oversees light rail service, says there were no significant delays as a result of the incident. This comes a few weeks after a blackout caused a massive Waymo traffic jam in San Francisco. Waymo has a new name for its Zeekr-produced autonomous minivans that are set to roll out this year. Ojai, named for the city northwest of Los Angeles, was chosen because most American consumers aren’t familiar with the Geely-owned Zeekr brand, according to InsideEVs. That may be true, but try saying “Waymo Ojai” five times really fast. The chipmaker is making a big bet on self-driving cars. And it’s making quick progress too. In addition to an update for its power outage problem, Waymo is also working on an AI Ride Assistant. That’s according to security researcher Jane Manchun Wong, who found details on the bot’s system prompt from Waymo’s mobile app code. When a substation fire cut off electricity across the city, Waymo SUVs stuck at malfunctioning stoplights quickly became another headache, and now the company is explaining it as an issue of too many remote operator assistance requests: While the Waymo Driver is designed to handle dark traffic signals as four-way stops, it may occasionally request a confirmation check to ensure it makes the safest choice. While we successfully traversed more than 7,000 dark signals on Saturday, the outage created a concentrated spike in these requests. This created a backlog that, in some cases, led to response delays contributing to congestion on already-overwhelmed streets. The EV maker was known for its outdoor-themed off-roaders. Why is it now chasing Elon Musk down an AI rabbit hole? |
======================================== |
[SOURCE: https://en.wikipedia.org/wiki/Elon_Musk#cite_ref-9] | [TOKENS: 10515] |
Contents Elon Musk Elon Reeve Musk (/ˈiːlɒn/ EE-lon; born June 28, 1971) is a businessman and entrepreneur known for his leadership of Tesla, SpaceX, Twitter, and xAI. Musk has been the wealthiest person in the world since 2025; as of February 2026,[update] Forbes estimates his net worth to be around US$852 billion. Born into a wealthy family in Pretoria, South Africa, Musk emigrated in 1989 to Canada; he has Canadian citizenship since his mother was born there. He received bachelor's degrees in 1997 from the University of Pennsylvania before moving to California to pursue business ventures. In 1995, Musk co-founded the software company Zip2. Following its sale in 1999, he co-founded X.com, an online payment company that later merged to form PayPal, which was acquired by eBay in 2002. Musk also became an American citizen in 2002. In 2002, Musk founded the space technology company SpaceX, becoming its CEO and chief engineer; the company has since led innovations in reusable rockets and commercial spaceflight. Musk joined the automaker Tesla as an early investor in 2004 and became its CEO and product architect in 2008; it has since become a leader in electric vehicles. In 2015, he co-founded OpenAI to advance artificial intelligence (AI) research, but later left; growing discontent with the organization's direction and their leadership in the AI boom in the 2020s led him to establish xAI, which became a subsidiary of SpaceX in 2026. In 2022, he acquired the social network Twitter, implementing significant changes, and rebranding it as X in 2023. His other businesses include the neurotechnology company Neuralink, which he co-founded in 2016, and the tunneling company the Boring Company, which he founded in 2017. In November 2025, a Tesla pay package worth $1 trillion for Musk was approved, which he is to receive over 10 years if he meets specific goals. Musk was the largest donor in the 2024 U.S. presidential election, where he supported Donald Trump. 
After Trump was inaugurated as president in early 2025, Musk served as Senior Advisor to the President and as the de facto head of the Department of Government Efficiency (DOGE). After a public feud with Trump, Musk left the Trump administration and returned to managing his companies. Musk is a supporter of global far-right figures, causes, and political parties. His political activities, views, and statements have made him a polarizing figure. Musk has been criticized for COVID-19 misinformation, promoting conspiracy theories, and affirming antisemitic, racist, and transphobic comments. His acquisition of Twitter was controversial due to a subsequent increase in hate speech and the spread of misinformation on the service, following his pledge to decrease censorship. His role in the second Trump administration attracted public backlash, particularly in response to DOGE. The emails he sent to Jeffrey Epstein are included in the Epstein files, which were published in 2025 and 2026 and became a topic of worldwide debate. Early life Elon Reeve Musk was born on June 28, 1971, in Pretoria, South Africa's administrative capital. He is of British and Pennsylvania Dutch ancestry. His mother, Maye (née Haldeman), is a model and dietitian born in Saskatchewan, Canada, and raised in South Africa. Musk therefore holds both South African and Canadian citizenship from birth. His father, Errol Musk, is a South African electromechanical engineer, pilot, sailor, consultant, emerald dealer, and property developer, who partly owned a rental lodge at Timbavati Private Nature Reserve. His maternal grandfather, Joshua N. Haldeman, who died in a plane crash when Elon was a toddler, was an American-born Canadian chiropractor, aviator and political activist in the technocracy movement who moved to South Africa in 1950. Elon has a younger brother, Kimbal, a younger sister, Tosca, and four paternal half-siblings. Musk was baptized as a child in the Anglican Church of Southern Africa.
Despite both Elon and Errol previously stating that Errol was a part owner of a Zambian emerald mine, in 2023, Errol recounted that the deal he made was to receive "a portion of the emeralds produced at three small mines". Errol was elected to the Pretoria City Council as a representative of the anti-apartheid Progressive Party and has said that his children shared their father's dislike of apartheid. After his parents divorced in 1979, Elon, aged around 9, chose to live with his father because Errol Musk had an Encyclopædia Britannica and a computer. Elon later regretted his decision and became estranged from his father. Elon has recounted trips to a wilderness school that he described as a "paramilitary Lord of the Flies" where "bullying was a virtue" and children were encouraged to fight over rations. In one incident, after an altercation with a fellow pupil, Elon was thrown down concrete steps and beaten severely, leading to him being hospitalized for his injuries. Elon described his father berating him after he was discharged from the hospital. Errol denied berating Elon and claimed, "The [other] boy had just lost his father to suicide, and Elon had called him stupid. Elon had a tendency to call people stupid. How could I possibly blame that child?" Elon was an enthusiastic reader of books, and had attributed his success in part to having read The Lord of the Rings, the Foundation series, and The Hitchhiker's Guide to the Galaxy. At age ten, he developed an interest in computing and video games, teaching himself how to program from the VIC-20 user manual. At age twelve, Elon sold his BASIC-based game Blastar to PC and Office Technology magazine for approximately $500 (equivalent to $1,600 in 2025). Musk attended Waterkloof House Preparatory School, Bryanston High School, and then Pretoria Boys High School, where he graduated. Musk was a decent but unexceptional student, earning a 61/100 in Afrikaans and a B on his senior math certification. 
Musk applied for a Canadian passport through his Canadian-born mother to avoid South Africa's mandatory military service, which would have forced him to participate in the apartheid regime, as well as to ease his path to immigration to the United States. While waiting for his application to be processed, he attended the University of Pretoria for five months. Musk arrived in Canada in June 1989, connected with a second cousin in Saskatchewan, and worked odd jobs, including at a farm and a lumber mill. In 1990, he entered Queen's University in Kingston, Ontario. Two years later, he transferred to the University of Pennsylvania, where he studied until 1995. Although Musk has said that he earned his degrees in 1995, the University of Pennsylvania did not award them until 1997 – a Bachelor of Arts in physics and a Bachelor of Science in economics from the university's Wharton School. He reportedly hosted large, ticketed house parties to help pay for tuition, and wrote a business plan for an electronic book-scanning service similar to Google Books. In 1994, Musk held two internships in Silicon Valley: one at energy storage startup Pinnacle Research Institute, which investigated electrolytic supercapacitors for energy storage, and another at Palo Alto–based startup Rocket Science Games. In 1995, he was accepted to a graduate program in materials science at Stanford University, but did not enroll. Musk decided to join the Internet boom of the 1990s, applying for a job at Netscape, to which he reportedly never received a response. The Washington Post reported that Musk lacked legal authorization to remain and work in the United States after failing to enroll at Stanford. In response, Musk said he was allowed to work at that time and that his student visa transitioned to an H1-B. According to numerous former business associates and shareholders, Musk said he was on a student visa at the time. 
Business career In 1995, Musk, his brother Kimbal, and Greg Kouri founded the web software company Zip2 with funding from a group of angel investors. They housed the venture at a small rented office in Palo Alto. Replying to Rolling Stone, Musk denounced the notion that they started their company with funds borrowed from Errol Musk, but in a tweet, he recognized that his father contributed 10% of a later funding round. The company developed and marketed an Internet city guide for the newspaper publishing industry, with maps, directions, and yellow pages. According to Musk, "The website was up during the day and I was coding it at night, seven days a week, all the time." To impress investors, Musk built a large plastic structure around a standard computer to create the impression that Zip2 was powered by a small supercomputer. The Musk brothers obtained contracts with The New York Times and the Chicago Tribune, and persuaded the board of directors to abandon plans for a merger with CitySearch. Musk's attempts to become CEO were thwarted by the board. Compaq acquired Zip2 for $307 million in cash in February 1999 (equivalent to $590,000,000 in 2025), and Musk received $22 million (equivalent to $43,000,000 in 2025) for his 7-percent share. In 1999, Musk co-founded X.com, an online financial services and e-mail payment company. The startup was one of the first federally insured online banks, and, in its initial months of operation, over 200,000 customers joined the service. The company's investors regarded Musk as inexperienced and replaced him with Intuit CEO Bill Harris by the end of the year. The following year, X.com merged with online bank Confinity to avoid competition. Founded by Max Levchin and Peter Thiel, Confinity had its own money-transfer service, PayPal, which was more popular than X.com's service. Within the merged company, Musk returned as CEO. Musk's preference for Microsoft software over Unix created a rift in the company and caused Thiel to resign. 
Due to resulting technological issues and lack of a cohesive business model, the board ousted Musk and replaced him with Thiel in 2000.[b] Under Thiel, the company focused on the PayPal service and was renamed PayPal in 2001. In 2002, PayPal was acquired by eBay for $1.5 billion (equivalent to $2,700,000,000 in 2025) in stock, of which Musk—the largest shareholder with 11.72% of shares—received $175.8 million (equivalent to $320,000,000 in 2025). In 2017, Musk purchased the domain X.com from PayPal for an undisclosed amount, stating that it had sentimental value. In 2001, Musk became involved with the nonprofit Mars Society and discussed funding plans to place a growth-chamber for plants on Mars. Seeking a way to launch the greenhouse payloads into space, Musk made two unsuccessful trips to Moscow to purchase intercontinental ballistic missiles (ICBMs) from Russian companies NPO Lavochkin and Kosmotras. Musk instead decided to start a company to build affordable rockets. With $100 million of his early fortune (equivalent to $180,000,000 in 2025), Musk founded SpaceX in May 2002 and became the company's CEO and Chief Engineer. SpaceX attempted its first launch of the Falcon 1 rocket in 2006. Although the rocket failed to reach Earth orbit, it was awarded a Commercial Orbital Transportation Services program contract from NASA, then led by Mike Griffin. After two more failed attempts that nearly caused Musk to go bankrupt, SpaceX succeeded in launching the Falcon 1 into orbit in 2008. Later that year, SpaceX received a $1.6 billion NASA contract (equivalent to $2,400,000,000 in 2025) for Falcon 9-launched Dragon spacecraft flights to the International Space Station (ISS), replacing the Space Shuttle after its 2011 retirement. In 2012, the Dragon vehicle docked with the ISS, a first for a commercial spacecraft. Working towards its goal of reusable rockets, in 2015 SpaceX successfully landed the first stage of a Falcon 9 on a land platform.
Later landings were achieved on autonomous spaceport drone ships, an ocean-based recovery platform. In 2018, SpaceX launched the Falcon Heavy; the inaugural mission carried Musk's personal Tesla Roadster as a dummy payload. Since 2019, SpaceX has been developing Starship, a reusable, super heavy-lift launch vehicle intended to replace the Falcon 9 and Falcon Heavy. In 2020, SpaceX launched its first crewed flight, the Demo-2, becoming the first private company to place astronauts into orbit and dock a crewed spacecraft with the ISS. In 2024, NASA awarded SpaceX an $843 million (equivalent to $865,000,000 in 2025) contract to build a spacecraft that NASA will use to deorbit the ISS at the end of its lifespan. In 2015, SpaceX began development of the Starlink constellation of low Earth orbit satellites to provide satellite Internet access. After the launch of prototype satellites in 2018, the first large constellation was deployed in May 2019. As of May 2025, over 7,600 Starlink satellites are operational, comprising 65% of all operational Earth satellites. The total cost of the decade-long project to design, build, and deploy the constellation was estimated by SpaceX in 2020 to be $10 billion (equivalent to $12,000,000,000 in 2025).[c] During the Russian invasion of Ukraine, Musk provided free Starlink service to Ukraine, permitting Internet access and communication at a yearly cost to SpaceX of $400 million (equivalent to $440,000,000 in 2025). However, Musk refused to block Russian state media on Starlink. In 2023, Musk denied Ukraine's request to activate Starlink over Crimea to aid an attack against the Russian navy, citing fears of a nuclear response. Tesla, Inc., originally Tesla Motors, was incorporated in July 2003 by Martin Eberhard and Marc Tarpenning. Both men played active roles in the company's early development prior to Musk's involvement.
Musk led the Series A round of investment in February 2004; he invested $6.35 million (equivalent to $11,000,000 in 2025), became the majority shareholder, and joined Tesla's board of directors as chairman. Musk took an active role within the company and oversaw Roadster product design, but was not deeply involved in day-to-day business operations. Following a series of escalating conflicts in 2007 and the 2008 financial crisis, Eberhard was ousted from the firm. Musk assumed leadership of the company as CEO and product architect in 2008. A 2009 lawsuit settlement with Eberhard designated Musk as a Tesla co-founder, along with Tarpenning and two others. Tesla began delivery of the Roadster, an electric sports car, in 2008. With sales of about 2,500 vehicles, it was the first mass production all-electric car to use lithium-ion battery cells. Under Musk, Tesla has since launched several well-selling electric vehicles, including the four-door sedan Model S (2012), the crossover Model X (2015), the mass-market sedan Model 3 (2017), the crossover Model Y (2020), and the pickup truck Cybertruck (2023). In 2018, Musk resigned as chairman of the board as part of the settlement of a lawsuit from the SEC over him tweeting that funding had been "secured" for potentially taking Tesla private. The company has also constructed multiple lithium-ion battery and electric vehicle factories, called Gigafactories. Since its initial public offering in 2010, Tesla stock has risen significantly; it became the most valuable carmaker in summer 2020, and it entered the S&P 500 later that year. In October 2021, it reached a market capitalization of $1 trillion (equivalent to $1,200,000,000,000 in 2025), the sixth company in U.S. history to do so. Musk provided the initial concept and financial capital for SolarCity, which his cousins Lyndon and Peter Rive founded in 2006. By 2013, SolarCity was the second largest provider of solar power systems in the United States.
In 2014, Musk promoted the idea of SolarCity building an advanced production facility in Buffalo, New York, triple the size of the largest solar plant in the United States. Construction of the factory started in 2014 and was completed in 2017. It operated as a joint venture with Panasonic until early 2020. Tesla acquired SolarCity for $2 billion in 2016 (equivalent to $2,700,000,000 in 2025) and merged it with its battery unit to create Tesla Energy. The deal's announcement resulted in a more than 10% drop in Tesla's stock price; at the time, SolarCity was facing liquidity issues. Multiple shareholder groups filed a lawsuit against Musk and Tesla's directors, stating that the purchase of SolarCity was done solely to benefit Musk and came at the expense of Tesla and its shareholders. Tesla directors settled the lawsuit in January 2020, leaving Musk the sole remaining defendant. Two years later, the court ruled in Musk's favor. In 2016, Musk co-founded Neuralink, a neurotechnology startup, with an investment of $100 million. Neuralink aims to integrate the human brain with artificial intelligence (AI) by creating devices that are embedded in the brain. Such technology could enhance memory or allow the devices to communicate with software. The company also hopes to develop devices to treat neurological conditions like spinal cord injuries. In 2022, Neuralink announced that clinical trials would begin by the end of the year. In September 2023, the Food and Drug Administration approved Neuralink to initiate six-year human trials. Neuralink has conducted animal testing on macaques at the University of California, Davis. In 2021, the company released a video in which a macaque played the video game Pong via a Neuralink implant. The company's animal trials—which have caused the deaths of some monkeys—have led to claims of animal cruelty. The Physicians Committee for Responsible Medicine has alleged that Neuralink violated the Animal Welfare Act. 
Employees have complained that pressure from Musk to accelerate development has led to botched experiments and unnecessary animal deaths. In 2022, a federal probe was launched into possible animal welfare violations by Neuralink. In 2017, Musk founded the Boring Company to construct tunnels; he also revealed plans for specialized, underground, high-occupancy vehicles that could travel up to 150 miles per hour (240 km/h) and thus circumvent above-ground traffic in major cities. Early in 2017, the company began discussions with regulatory bodies and initiated construction of a 30-foot (9.1 m) wide, 50-foot (15 m) long, and 15-foot (4.6 m) deep "test trench" on the premises of SpaceX's offices, as that required no permits. The Los Angeles tunnel, less than two miles (3.2 km) in length, debuted to journalists in 2018. It used Tesla Model Xs and was reported to be a rough ride while traveling at suboptimal speeds. Two tunnel projects announced in 2018, in Chicago and West Los Angeles, have been canceled. A tunnel beneath the Las Vegas Convention Center was completed in early 2021. Local officials have approved further expansions of the tunnel system. In early 2017, Musk expressed interest in buying Twitter and had questioned the platform's commitment to freedom of speech. By 2022, Musk had reached a 9.2% stake in the company, making him the largest shareholder.[d] Musk later agreed to a deal that would appoint him to Twitter's board of directors and prohibit him from acquiring more than 14.9% of the company. Days later, Musk made a $43 billion offer to buy Twitter. By the end of April, Musk had successfully concluded his bid for approximately $44 billion. This included approximately $12.5 billion in loans and $21 billion in equity financing. After attempting to back out of the deal, Musk completed the purchase on October 27, 2022.
Immediately after the acquisition, Musk fired several top Twitter executives, including CEO Parag Agrawal, and became CEO himself. Under Musk, Twitter instituted monthly subscriptions for a "blue check" and laid off a significant portion of the company's staff. Musk loosened content moderation, and hate speech increased on the platform after his takeover. In late 2022, Musk released internal documents relating to Twitter's moderation of the Hunter Biden laptop controversy in the lead-up to the 2020 presidential election. After a Twitter poll, Musk promised to step down as CEO; five months later, he did so, transitioning to the roles of executive chairman and chief technology officer (CTO). Since Musk stepped down as CEO, X has continued to struggle with challenges such as viral misinformation, hate speech, and antisemitism controversies. Musk has been accused of trying to silence critics such as Twitch streamer Asmongold, who criticized him during one of his streams, by removing their accounts' blue checkmarks (which hinders visibility and is considered a form of shadow banning) or by suspending their accounts without justification. Other activities In August 2013, Musk announced plans for a version of a vactrain and assigned engineers from SpaceX and Tesla to design a transport system between Greater Los Angeles and the San Francisco Bay Area, at an estimated cost of $6 billion. Later that year, Musk unveiled the concept, dubbed the Hyperloop, intended to make travel cheaper than any other mode of transport for such long distances. In December 2015, Musk co-founded OpenAI, a not-for-profit artificial intelligence (AI) research company aiming to develop artificial general intelligence intended to be safe and beneficial to humanity. Musk pledged $1 billion of funding to the company, and initially gave $50 million. In 2018, Musk left the OpenAI board.
Since 2018, OpenAI has made significant advances in machine learning. In July 2023, Musk launched the artificial intelligence company xAI, which aims to develop a generative AI program that competes with existing offerings like OpenAI's ChatGPT. Musk obtained funding from investors in SpaceX and Tesla, and xAI hired engineers from Google and OpenAI. Musk uses a private jet owned by Falcon Landing LLC, a SpaceX-linked company, and acquired a second jet in August 2020. His heavy use of the jets and the consequent fossil fuel usage have received criticism. Musk's flight usage is tracked on social media through ElonJet. In December 2022, Musk banned the ElonJet account on Twitter and temporarily banned the accounts of journalists who posted stories about the incident, including Donie O'Sullivan, Keith Olbermann, and journalists from The New York Times, The Washington Post, CNN, and The Intercept. In October 2025, Musk's company xAI launched Grokipedia, an AI-generated online encyclopedia that he promoted as an alternative to Wikipedia. Articles on Grokipedia are generated and reviewed by xAI's Grok chatbot. Media coverage and academic analysis described Grokipedia as frequently reusing Wikipedia content but framing contested political and social topics in line with Musk's own views and right-wing narratives. A study by Cornell University researchers and NBC News stated that Grokipedia cites sources that are blacklisted or considered "generally unreliable" on Wikipedia, for example, the conspiracy site Infowars and the neo-Nazi forum Stormfront. Wired, The Guardian and Time criticized Grokipedia for factual errors and for presenting Musk himself in unusually positive terms while downplaying controversies. Politics Musk is an outlier among business leaders, who typically avoid partisan political advocacy. Musk was a registered independent voter when he lived in California.
Historically, he has donated to both Democrats and Republicans, many of whom serve in states in which he has a vested interest. Since 2022, his political contributions have mostly supported Republicans, with his first vote for a Republican going to Mayra Flores in the 2022 special election for Texas's 34th congressional district. In 2024, he started supporting international far-right political parties, activists, and causes, and has shared misinformation and numerous conspiracy theories. Since 2024, his views have been generally described as right-wing. Musk supported Barack Obama in 2008 and 2012, Hillary Clinton in 2016, Joe Biden in 2020, and Donald Trump in 2024. In the 2020 Democratic Party presidential primaries, Musk endorsed candidate Andrew Yang and expressed support for Yang's proposed universal basic income; he also endorsed Kanye West's 2020 presidential campaign. In 2021, Musk publicly expressed opposition to the Build Back Better Act, a $3.5 trillion legislative package endorsed by Joe Biden that ultimately failed to pass due to unanimous opposition from congressional Republicans and several Democrats. In 2022, he gave over $50 million to Citizens for Sanity, a conservative political action committee. In 2023, he supported Republican Ron DeSantis for the 2024 U.S. presidential election, giving $10 million to his campaign, and hosted DeSantis's campaign announcement on a Twitter Spaces event. From June 2023 to January 2024, Musk hosted a bipartisan set of X Spaces with Republican and Democratic candidates, including Robert F. Kennedy Jr., Vivek Ramaswamy, and Dean Phillips. In October 2025, former vice president Kamala Harris commented that it was a mistake on the Democratic side not to invite Musk to a White House electric vehicle event organized in August 2021 and featuring executives from General Motors, Ford and Stellantis, despite Tesla being "the major American manufacturer of extraordinary innovation in this space."
Fortune remarked that this was a nod to United Auto Workers and organized labor. Harris said presidents should put aside political loyalties when it came to recognizing innovation, and guessed that the non-invitation affected Musk's perspective. Fortune noted that, at the time, Musk said, "Yeah, seems odd that Tesla wasn't invited." A month later, he criticized Biden as "not the friendliest administration." Jacob Silverman, author of the book Gilded Rage: Elon Musk and the Radicalization of Silicon Valley, said that the tech industry represented by Musk, Thiel, Andreessen and other capitalists actually flourished under Biden, but that the tech leaders chose Trump for their common ground on cultural issues. By early 2024, Musk had become a vocal and financial supporter of Donald Trump. In July 2024, minutes after the attempted assassination of Donald Trump, Musk endorsed him for president, saying: "I fully endorse President Trump and hope for his rapid recovery." During the presidential campaign, Musk joined Trump on stage at a campaign rally, and promoted conspiracy theories and falsehoods about Democrats, election fraud and immigration in support of Trump. Musk was the largest individual donor of the 2024 election. In 2025, Musk contributed $19 million to the Wisconsin Supreme Court race, hoping to influence the state's future redistricting efforts and its regulations governing car manufacturers and dealers. In 2023, Musk said he shunned the World Economic Forum because it was boring; the organization commented that it had not invited him since 2015. He has, however, participated in Dialog, an event dubbed "Tech Bilderberg" and organized by Peter Thiel and Auren Hoffman. Musk's international political actions and comments have come under increasing scrutiny and criticism, especially from the governments and leaders of France, Germany, Norway, Spain and the United Kingdom, particularly due to his position in the U.S. government as well as his ownership of X.
An NBC News analysis found he had boosted far-right political movements to cut immigration and curtail regulation of business in at least 18 countries on six continents since 2023. During his speech after the second inauguration of Donald Trump, Musk twice made a gesture interpreted by many as a Nazi or a fascist Roman salute.[e] He thumped his right hand over his heart, fingers spread wide, and then extended his right arm out, emphatically, at an upward angle, palm down and fingers together. He then repeated the gesture to the crowd behind him. As he finished the gestures, he said to the crowd, "My heart goes out to you. It is thanks to you that the future of civilization is assured." It was widely condemned as an intentional Nazi salute in Germany, where making such gestures is illegal. The Anti-Defamation League said it was not a Nazi salute, but other Jewish organizations disagreed and condemned the salute. American public opinion was divided on partisan lines as to whether it was a fascist salute. Musk dismissed the accusations of Nazi sympathies, deriding them as "dirty tricks" and a "tired" attack. Neo-Nazi and white supremacist groups celebrated it as a Nazi salute. Multiple European political parties demanded that Musk be banned from entering their countries. The concept of DOGE emerged in a discussion between Musk and Donald Trump, and in August 2024, Trump committed to giving Musk an advisory role, with Musk accepting the offer. In November and December 2024, Musk suggested that the organization could help to cut the U.S. federal budget, consolidate the number of federal agencies, and eliminate the Consumer Financial Protection Bureau, and that its final stage would be "deleting itself". In January 2025, the organization was created by executive order, and Musk was designated a "special government employee". Musk led the organization and was a senior advisor to the president, although his official role is not clear. 
In a sworn statement during a lawsuit, the director of the White House Office of Administration stated that Musk "is not an employee of the U.S. DOGE Service or U.S. DOGE Service Temporary Organization", "is not the U.S. DOGE Service administrator", and has "no actual or formal authority to make government decisions himself". Trump said two days later that he had put Musk in charge of DOGE. A federal judge has ruled that Musk acted as the de facto leader of DOGE. Musk's role in the second Trump administration, particularly his work with DOGE, has attracted public backlash. He was criticized for his treatment of federal government employees, including his influence over the mass layoffs of the federal workforce. He prioritized secrecy within the organization and accused others of violating privacy laws. A Senate report alleged that Musk could avoid up to $2 billion in legal liability as a result of DOGE's actions. In May 2025, Bill Gates accused Musk of "killing the world's poorest children" through his cuts to USAID, which modeling by Boston University estimated had resulted in 300,000 deaths by that time, most of them of children. By November 2025, the estimated death toll had increased to 400,000 children and 200,000 adults. Musk announced on May 28, 2025, that he would depart from the Trump administration as planned when his 130-day term as a special government employee expired, with a White House official confirming that Musk's offboarding from the administration was already underway. His departure was officially confirmed during a joint Oval Office press conference with Trump on May 30, 2025. In a June 5, 2025 post on X, Musk wrote: "@realDonaldTrump is in the Epstein files. That is the real reason they have not been made public." After leaving office, Musk criticized the Trump administration's Big Beautiful Bill, calling it a "disgusting abomination" due to its provisions increasing the deficit.
A feud began between Musk and Trump, with its most notable event being Musk alleging on X (formerly Twitter), on June 5, 2025, that Trump had ties to sex offender Jeffrey Epstein. Trump responded on Truth Social, stating that Musk went "CRAZY" after the "EV Mandate" was purportedly taken away, and threatened to cut Musk's government contracts. Musk then called for a third Trump impeachment. The next day, Trump stated that he did not wish to reconcile with Musk, and added that Musk would face "very serious consequences" if he funded Democratic candidates. On June 11, Musk publicly apologized for the tweets against Trump, saying they "went too far". Views Rejecting the conservative label, Musk has described himself as a political moderate, even as his views have become more right-wing over time. His views have been characterized as libertarian and far-right, and after his involvement in European politics, they have received criticism from world leaders such as Emmanuel Macron and Olaf Scholz. Within the context of American politics, Musk supported Democratic candidates up until 2022, at which point he voted for a Republican for the first time. He has stated support for universal basic income, gun rights, freedom of speech, a tax on carbon emissions, and H-1B visas. Musk has expressed concern about issues such as artificial intelligence (AI) and climate change, and has been a critic of wealth taxes, short-selling, and government subsidies. An immigrant himself, Musk has been accused of being anti-immigration, and regularly blames immigration policies for illegal immigration. He is also a pronatalist who believes population decline is the biggest threat to civilization, and identifies as a cultural Christian. Musk has long been an advocate for space colonization, especially the colonization of Mars, which he argues would make humanity an interplanetary species and lower the risk of human extinction.
Musk has promoted conspiracy theories and made controversial statements that have led to accusations of racism, sexism, antisemitism, transphobia, disseminating disinformation, and support of white pride. While he describes himself as a "pro-Semite", his comments regarding George Soros and Jewish communities have been condemned by the Anti-Defamation League and the Biden White House. Musk was criticized during the COVID-19 pandemic for making unfounded epidemiological claims, defying COVID-19 lockdown restrictions, and supporting the Canada convoy protest against vaccine mandates. He has amplified false claims of white genocide in South Africa. Musk has been critical of Israel's actions in the Gaza Strip during the Gaza war, praised China's economic and climate goals, suggested that Taiwan and China should resolve cross-strait relations, and has been described as having a close relationship with the Chinese government. In Europe, Musk expressed support for Ukraine in 2022 during the Russian invasion, recommended referendums and peace deals on the annexed Russia-occupied territories, and supported the far-right Alternative for Germany political party in 2024. Regarding British politics, Musk blamed the 2024 UK riots on mass migration and open borders, criticized Prime Minister Keir Starmer for what he described as a "two-tier" policing system, and was subsequently attacked as being responsible for spreading misinformation and amplifying the far-right. He has also voiced his support for far-right activist Tommy Robinson and pledged electoral support for Reform UK. In February 2026, Musk described Spanish Prime Minister Pedro Sánchez as a "tyrant" following Sánchez's proposal to prohibit minors under the age of 16 from accessing social media platforms. Legal affairs In 2018, Musk was sued by the U.S.
Securities and Exchange Commission (SEC) for a tweet stating that funding had been secured for potentially taking Tesla private.[f] The securities fraud lawsuit characterized the tweet as false, misleading, and damaging to investors, and sought to bar Musk from serving as CEO of publicly traded companies. Two days later, Musk settled with the SEC, without admitting or denying the SEC's allegations. As a result, Musk and Tesla were fined $20 million each, and Musk was forced to step down for three years as Tesla chairman but was able to remain as CEO. Shareholders filed a lawsuit over the tweet, and in February 2023, a jury found Musk and Tesla not liable. Musk has stated in interviews that he does not regret posting the tweet that triggered the SEC investigation. In 2019, Musk stated in a tweet that Tesla would build half a million cars that year. The SEC reacted by asking a court to hold him in contempt for violating the terms of the 2018 settlement agreement. A joint agreement between Musk and the SEC eventually clarified the previous agreement details, including a list of topics about which Musk needed preclearance. In 2020, a judge blocked a lawsuit that claimed a tweet by Musk regarding Tesla stock price ("too high imo") violated the agreement. Freedom of Information Act (FOIA)-released records showed that the SEC concluded Musk had subsequently violated the agreement twice by tweeting regarding "Tesla's solar roof production volumes and its stock price". In October 2023, the SEC sued Musk over his refusal to testify a third time in an investigation into whether he violated federal law by purchasing Twitter stock in 2022. In February 2024, Judge Laurel Beeler ruled that Musk must testify again. In January 2025, the SEC filed a lawsuit against Musk for securities violations related to his purchase of Twitter. In January 2024, Delaware judge Kathaleen McCormick ruled in a 2018 lawsuit that Musk's $55 billion pay package from Tesla be rescinded. 
McCormick called the compensation granted by the company's board "an unfathomable sum" that was unfair to shareholders. The Delaware Supreme Court overturned McCormick's decision in December 2025, restoring Musk's compensation package and awarding $1 in nominal damages. Personal life Musk became a U.S. citizen in 2002. From the early 2000s until late 2020, Musk resided in California, where both Tesla and SpaceX were founded. He then relocated to Cameron County, Texas, saying that California had become "complacent" about its economic success. While hosting Saturday Night Live in 2021, Musk stated that he has Asperger syndrome (an outdated term for autism spectrum disorder). When asked about his experience growing up with Asperger's syndrome in a TED2022 conference in Vancouver, Musk stated that "the social cues were not intuitive ... I would just tend to take things very literally ... but then that turned out to be wrong — [people were not] simply saying exactly what they mean, there's all sorts of other things that are meant, and [it] took me a while to figure that out." Musk suffers from back pain and has undergone several spine-related surgeries, including a disc replacement. In 2000, he contracted a severe case of malaria while on vacation in South Africa. Musk has stated he uses doctor-prescribed ketamine for occasional depression and that he doses "a small amount once every other week or something like that"; since January 2024, some media outlets have reported that he takes ketamine, marijuana, LSD, ecstasy, mushrooms, cocaine and other drugs. Musk at first refused to comment on his alleged drug use, before responding that he had not tested positive for drugs, and that if drugs somehow improved his productivity, "I would definitely take them!". 
The New York Times' investigations revealed Musk's overuse of ketamine and numerous other drugs, as well as strained family relationships and concerns from close associates who have become troubled by his public behavior as he became more involved in political activities and government work. According to The Washington Post, President Trump described Musk as "a big-time drug addict". Through his own label, Emo G Records, Musk released a rap track, "RIP Harambe", on SoundCloud in March 2019. The following year, he released an EDM track, "Don't Doubt Ur Vibe", featuring his own lyrics and vocals. Musk plays video games, which he has said have a "restoring effect" that helps his "mental calibration". Some games he plays include Quake, Diablo IV, Elden Ring, and Polytopia. Musk once claimed to be one of the world's top video game players but has since admitted to "account boosting", or cheating by hiring outside services to achieve top player rankings. Musk has justified the boosting by claiming that all top accounts do it, so he has to as well to remain competitive. In 2024 and 2025, Musk criticized the video game Assassin's Creed Shadows and its creator Ubisoft for "woke" content. Musk posted to X that "DEI kills art" and singled out the inclusion of the historical figure Yasuke in the game as offensive; he also called the game "terrible". Ubisoft responded by saying that Musk's comments were "just feeding hatred" and that it was focused on producing a game, not pushing politics. Musk has fathered at least 14 children, one of whom died as an infant. The Wall Street Journal reported in 2025 that sources close to Musk suggest that the "true number of Musk's children is much higher than publicly known". He had six children with his first wife, Canadian author Justine Wilson, whom he met while attending Queen's University in Ontario, Canada; they married in 2000.
In 2002, their first child Nevada Musk died of sudden infant death syndrome at the age of 10 weeks. After his death, the couple used in vitro fertilization (IVF) to continue their family; they had twins in 2004, followed by triplets in 2006. The couple divorced in 2008 and have shared custody of their children. The elder twin he had with Wilson came out as a trans woman and, in 2022, officially changed her name to Vivian Jenna Wilson, adopting her mother's surname because she no longer wished to be associated with Musk. Musk began dating English actress Talulah Riley in 2008. They married two years later at Dornoch Cathedral in Scotland. In 2012, the couple divorced, then remarried the following year. After briefly filing for divorce in 2014, Musk finalized a second divorce from Riley in 2016. Musk then dated the American actress Amber Heard for several months in 2017; he had reportedly been "pursuing" her since 2012. In 2018, Musk and Canadian musician Grimes confirmed they were dating. Grimes and Musk have three children, born in 2020, 2021, and 2022.[g] Musk and Grimes originally gave their eldest child the name "X Æ A-12", which would have violated California regulations as it contained characters that are not in the modern English alphabet; the names registered on the birth certificate are "X" as a first name, "Æ A-Xii" as a middle name, and "Musk" as a last name. They received criticism for choosing a name perceived to be impractical and difficult to pronounce; Musk has said the intended pronunciation is "X Ash A Twelve". Their second child was born via surrogacy. Despite the pregnancy, Musk confirmed reports that the couple were "semi-separated" in September 2021; in an interview with Time in December 2021, he said he was single. In October 2023, Grimes sued Musk over parental rights and custody of X Æ A-Xii. Elon Musk has taken X Æ A-Xii to multiple official events in Washington, D.C. during Trump's second term in office. 
Also in July 2022, The Wall Street Journal reported that Musk allegedly had an affair with Nicole Shanahan, the wife of Google co-founder Sergey Brin, in 2021, leading to their divorce the following year. Musk denied the report. Musk also had a relationship with Australian actress Natasha Bassett, who has been described as "an occasional girlfriend". In October 2024, The New York Times reported Musk bought a Texas compound for his children and their mothers, though Musk denied having done so. Musk also has four children with Shivon Zilis, director of operations and special projects at Neuralink: twins born via IVF in 2021, a child born in 2024 via surrogacy, and a child born in 2025.[h] On February 14, 2025, Ashley St. Clair, an influencer and author, posted on X claiming to have given birth to Musk's son Romulus five months earlier, which media outlets reported as Musk's supposed thirteenth child.[i] On February 22, 2025, it was reported that St. Clair had filed for sole custody of her five-month-old son and for Musk to be recognized as the child's father. On March 31, 2025, Musk wrote that, while he was unsure if he was the father of St. Clair's child, he had paid St. Clair $2.5 million and would continue paying her $500,000 per year.[j] Later reporting from The Wall Street Journal indicated that $1 million of these payments to St. Clair were structured as a loan. In 2014, Musk and Ghislaine Maxwell appeared together in a photograph taken at an Academy Awards after-party, which Musk later described as a "photobomb". The January 2026 Epstein files contain emails between Musk and Epstein from 2012 to 2013, after Epstein's first conviction. Emails released on January 30, 2026, indicated that Epstein invited Musk to visit his private island on multiple occasions. The correspondence showed that while Epstein repeatedly encouraged Musk to attend, Musk did not visit the island.
In one instance, Musk discussed the possibility of attending a party with his then-wife Talulah Riley and asked which day would be the "wildest party"; according to the emails, the visit did not take place after Epstein later cancelled the plans.[k] On Christmas Day in 2012, Musk emailed Epstein asking "Do you have any parties planned? I've been working to the edge of sanity this year and so, once my kids head home after Christmas, I really want to hit the party scene in St Barts or elsewhere and let loose. The invitation is much appreciated, but a peaceful island experience is the opposite of what I'm looking for". Epstein replied that the "ratio on my island" might make Musk's wife uncomfortable, to which Musk responded, "Ratio is not a problem for Talulah". On September 11, 2013, Epstein sent an email asking Musk if he had any plans for coming to New York for the opening of the United Nations General Assembly, where many "interesting people" would be coming to his house, to which Musk responded that "Flying to NY to see UN diplomats do nothing would be an unwise use of time". Epstein responded by stating "Do you think i am retarded. Just kidding, there is no one over 25 and all very cute." Musk has denied any close relationship with Epstein and described him as a "creep" who attempted to ingratiate himself with influential people. When Musk was asked in 2019 if he introduced Epstein to Mark Zuckerberg, Musk responded: "I don't recall introducing Epstein to anyone, as I don't know the guy well enough to do so." The released emails nonetheless showed cordial exchanges on a range of topics, including Musk's inquiry about parties on the island. The correspondence also indicated that Musk suggested hosting Epstein at SpaceX, while Epstein separately discussed plans to tour SpaceX and bring "the girls", though there is no evidence that such a visit occurred.
Musk has described the release of the files as a "distraction", later accusing the second Trump administration of suppressing them to protect powerful individuals, including Trump himself.[l]

Wealth

Elon Musk is the wealthiest person in the world, with an estimated net worth of US$690 billion as of January 2026, according to the Bloomberg Billionaires Index, and $852 billion according to Forbes, primarily from his ownership stakes in SpaceX and Tesla. Musk was first listed on the Forbes Billionaires List in 2012; around 75% of his wealth was derived from Tesla stock in November 2020, although he describes himself as "cash poor". According to Forbes, he became the first person in the world to achieve a net worth of $300 billion in 2021; $400 billion in December 2024; $500 billion in October 2025; $600 billion in mid-December 2025; $700 billion later that month; and $800 billion in February 2026. In November 2025, a Tesla pay package worth potentially $1 trillion for Musk was approved, which he is to receive over 10 years if he meets specific goals.

Public image

Although his ventures have been highly influential within their separate industries starting in the 2000s, Musk only became a public figure in the early 2010s. He has been described as an eccentric who makes spontaneous and impactful decisions, while also often making controversial statements, contrary to other billionaires who prefer reclusiveness to protect their businesses. Musk's actions and his expressed views have made him a polarizing figure. Biographer Ashlee Vance described people's opinions of Musk as polarized due to his "part philosopher, part troll" persona on Twitter. He has drawn criticism for using his platform to mock the self-selection of personal pronouns, while also receiving praise for bringing international attention to matters like British survivors of grooming gangs.
Musk has been described as an American oligarch due to his extensive influence over public discourse, social media, industry, politics, and government policy. After Trump's re-election, Musk's influence and actions during the transition period and the second presidency of Donald Trump led some to call him "President Musk", the "actual president-elect", "shadow president", or "co-president". Awards for his contributions to the development of the Falcon rockets include the American Institute of Aeronautics and Astronautics George Low Transportation Award in 2008, the Fédération Aéronautique Internationale Gold Space Medal in 2010, and the Royal Aeronautical Society Gold Medal in 2012. In 2015, he received an honorary doctorate in engineering and technology from Yale University and an Institute of Electrical and Electronics Engineers Honorary Membership. Musk was elected a Fellow of the Royal Society (FRS) in 2018.[m] In 2022, Musk was elected to the National Academy of Engineering. Time has listed Musk as one of the most influential people in the world in 2010, 2013, 2018, and 2021. Musk was selected as Time's "Person of the Year" for 2021. Then-Time editor-in-chief Edward Felsenthal wrote that "Person of the Year is a marker of influence, and few individuals have had more influence than Musk on life on Earth, and potentially life off Earth too."
======================================== |
[SOURCE: https://www.wired.com/gallery/best-coffee-grinders/] | [TOKENS: 19700] |
Matthew Korfhage, Gear, Feb 20, 2026, 8:42 AM

The Best Coffee Grinders for Espresso or Pour-Over

We used particle size analysis and our own taste buds to find the 5 best coffee grinders for each brew style—and the best affordable grinders for people on a budget.

Featured in this article:
- Best Coffee Grinder for Most People: Baratza Encore ESP, $200 (Amazon)
- Best Coffee Grinder for Pour-Over and Drip Coffee: Fellow Ode Gen 2, $400 (Amazon)
- Best $100 Grinder for Drip Coffee: Oxo Conical Burr Grinder, $110 $103 (6% off) (Amazon)
- Best $100 Coffee Grinder for Espresso and Pour-Over: Kingrinder K6 Manual Coffee Grinder, $99 (Amazon)

Coffee is only as good as the beans you use to make it, and the beans you use are only as good as the grinder you use to render them extractable. Beans fed into a coffee grinder right before you brew them might as well be a whole different substance from the bag of ground beans you get at the supermarket, full of aroma and flavor compounds that quickly dissipate as they oxidize in the air. (It also helps to use fresh beans from excellent roasters: Check out our Best Coffee Subscriptions guide for some of our favorites.)

Grinders have become the most active tech frontier in coffee, and over the past decade coffee geeks have become devoted to the idea that getting the right grind on your beans is every bit as important as the machine you brew with. Uneven grinds can mean uneven extraction and uneven flavor—which is why we use particle size analysis to test the character of each grind from every grinder we review. But, of course, particle size alone does not tell the whole story, which is why I use the grounds from every model for cups of drip coffee, pour-over, Aeropress, and, when applicable, espresso.

Our Reviews team has tested dozens of grinders over the past five years, and I've retested all the top burr-grinder picks this past year. And while super-high-end grinders like the wonderful Mazzer Philos ($1,500) can run into the thousands, our top pick, the Baratza Encore ESP ($200), will keep most people happy for much less. This said, the Fellow Ode Gen 2 ($400), our favorite flat-burr grinder for drip coffee, is able to wring a special sort of poetry from each cup. The five machines below are the coffee grinders we currently recommend to anyone who cares a lot about home coffee. Be sure to check out WIRED's other coffee coverage, like the Best Drip Coffee Makers, Best Espresso Machines, Best Latte and Cappuccino Machines, and Best Cold Brew Devices.

Updated February 2026: We tested and added the Mazzer Philos to our top grinder picks, and we tested and added the Wirsh Geimori T38 Plus and Geimori GU38 to the guide. We also added a budget grinder section for lower-cost picks, streamlined our top picks, checked and updated links, and added new explanatory content, including an expanded section on particle size analysis.

Table of Contents:
- Compare the Top 5 Grinders
- Results of Particle Size Analysis of Top Coffee Grinders
- Budget Coffee Grinder Picks
- Honorable Mentions and Runners-Up
- Other Grinders Tested
- How We Test Grinders
- What's a Conical Burr, Flat Burr, or Blade Grinder?
- What's the Difference Between Cheap and Expensive Grinders?
- Which Grinders Are Best for Espresso?

Best Coffee Grinder for Most People: Baratza Encore ESP, $200 (Amazon, Williams Sonoma)

It's hard to fathom how good this Baratza Encore ESP coffee grinder is for the money—and how many types of coffee drinkers it can serve. Like other Baratzas, it's tank-durable and compact. It's easy to clean and maintain, and it's surprisingly precise for a conical burr grinder at this price point, especially with finer grinds.
I verified this precision using particle size analysis—but also the evidence of my own senses after drinking the coffee that results. The Encore makes excellent espresso, quite good drip coffee, and good cold brew coffee—one of the few devices on the market to handle such range while still being priced accessibly for beginners to specialty coffee. And thus it can serve pretty much every kind of coffee drinker with aplomb. That it does so for $200 is a gift.

The secret behind this versatility is an ingenious bit of engineering. The previous-generation Baratza Encore had been a top grinder pick for years, but it didn't allow enough fine-tuning to make great espresso. With the ESP, Baratza upgraded the burr set and redesigned the grind wheel to allow for micro-adjustment when dialing in espresso-fine grinds. You also get broader adjustments at the medium-coarse end of the scale for pour-over, drip, and cold brew.

The ESP makes round, full-bodied drip coffee with a pleasing mouthfeel. But if what you want is crystalline clarity of flavor from light-roast beans, your grinder of choice is probably instead the Fellow Ode Gen 2 ($400).

Other coffee grinders are prettier, or more likely to be mistaken for midcentury sculpture. And some may be a little quieter. But at $200, this Encore ESP is the best grinder most coffee drinkers will likely need, for whatever style of coffee. And if it has the longevity of the previous-generation Encore, the ESP is likely to last a decade or more.

Specs:
- Dimensions: 5.9 x 5.1 x 13.4 inches (L x W x H)
- Weight: 5.6 pounds
- Type: Conical burr
- Grind settings: 40 (20 espresso settings, 20 for filter, pour-over, French press, and cold brew)
- Capable of espresso? Yes
- Hopper capacity: 4 ounces/120 grams
- Warranty: 1 year

WIRED: Innovative fine-tuning for espresso, drip, and French press alike. Precise grinds, with clarity of flavor. Built like a tank. Best value proposition overall.
TIRED: Not a looker, really. No auto-shutoff.

Best Coffee Grinder for Pour-Over and Drip Coffee: Fellow Ode Gen 2, $400 (Amazon, Fellow, Williams-Sonoma)

Like Liam Neeson, the Ode Gen 2 has a highly particular set of skills. As with the majority of flat-burr grinders priced under $500, there's no espresso involved. But the Ode is a beauteous precision machine, made to elicit tuning-fork clarity of flavor from pour-over and drip and Aeropress, without sacrificing richness. On drip coffee settings, the particle size distribution on my coffee grounds looks like the same tight bell curve you learn in a first-year statistics class. Tick the Ode one setting finer, or coarser, and you'll often discover a new character, a different flavor note. It's a fun game for a mid-morning cup, when I'm down to experiment. (The Ode can also grind coarser for cold brew and French press, of course, but for cold brew I'd really only need the firepower of this flat burr for a Kyoto-style slow-drip, or the sui generis cold brew on a Fellow Aiden.)

While the previous-generation Ode struggled with some of the lightest-roast pour-overs, this second-generation burr set sails through and remains beauteously static-free—you probably won't even have to water-spritz your beans like a sampler at a perfume counter. The Ode grinds quietly and diffusely, generally below 80 decibels, and preternaturally swiftly. When I first started using it, I kept double-checking to make sure it had ground all the beans so soon. It's also pretty, an elegant companion to whatever drip coffee machine I happen to be trying out, and offers better clarity of flavor than many grinders twice its price.
And so I forgive it for the somewhat … squishy … feel of its power button, and a strange grind cup with interior fins whose proposed utility is still a mystery.

Specs:
- Dimensions: 9.4 x 4.1 x 9.8 inches (L x W x H)
- Weight: 9.9 pounds
- Type: Flat burr
- Grind settings: 30
- Capable of espresso? Grinds fine enough for dark roasts, but not designed for espresso
- Hopper capacity: 3.5 ounces (100 grams)
- Warranty: 2 years

WIRED: Tuning-fork precision for drip and pour-over. Minimalist-pretty design. Low mess. Auto-shutoff when the hopper's empty.
TIRED: No espresso. Odd haptics on power switch. Grind cup fins are weird.

Best $100 Grinder for Drip Coffee: Oxo Conical Burr Grinder, $110 $103 (6% off) (Amazon), $110 (Target)

The Oxo Brew Conical Burr Grinder has a good balance of features, usefulness, and a relatively low price among the conical-burr grinders we've used. Quite simply, it's the cheapest grinder I recommend for making actually good coffee. There are 30 settings that range from espresso to a coarse grind for French press—but the lack of fine adjustments at the low end of the scale won't suffice for non-pressurized espresso baskets. This Oxo is also not as quiet as our top pick, and kicks out more boulders and fines than either of our top two picks.

That said, this machine is industrial-handsome and intuitive to use, with a grind-by-time function you can dial in for your brewer if you hate scales. (You should still use a scale, like this $28 one from Maestri House.) This Oxo also grinds consistently enough for good-tasting drip. It's a great grinder for beginners—an entry-level choice at half the price of our top pick, with a solid warranty and a sturdy build.

Specs:
- Dimensions: 7.5 x 5.3 x 12.9 inches (L x W x H)
- Weight: 4.5 pounds
- Type: Conical burr
- Grind settings: 30 settings (15 half-adjustments)
- Capable of espresso? Technically yes, but fine adjustments are few
- Hopper capacity: 12 ounces
- Warranty: 2-year limited

WIRED: Lowest-cost burr grinder we recommend. 30 grind settings, easy use. Sturdy build, 2-year warranty.
TIRED: Few adjustments for espresso. A little loud. Not as precise as top picks.

Best $100 Coffee Grinder for Espresso and Pour-Over: Kingrinder K6 Manual Coffee Grinder, $99 (Amazon)

"We live in the 21st century," you're probably saying. "Electricity has been a pretty successful addition to modern life. Why would I want to grind coffee by hand?" The answer, at home, is precision and cost. The highest-end, most consistent, most adjustable coffee grinders can run into the thousands if you let them, especially where espresso is concerned. But a slower-grinding, precisely machined manual coffee grinder can attain similar precision at a far lower cost. This little Taiwanese-made Kingrinder K6 is a beast, with better consistency of grind than any flat burr I've tested south of four figures, beautiful fine adjustments, and a grind size range from the finest espresso to the coarsest of French press or cold brew. And yet, it's often $100 on sale. It's also a great travel and camping grinder. It's even better for home baristas.

Sure, it requires effort and a little grip strength. And there's a bit of a learning curve: Each click on the K6's adjustment dial accounts for 16 microns of burr movement. Start with the dial at zero, and click counterclockwise. One full rotation is 60 clicks.
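The click arithmetic above can be sketched in a few lines of Python. This is a rough illustration only: the 16-microns-per-click and 60-clicks-per-rotation figures come from the guide above, but the helper names and the burrs-closed zero point are assumptions, not anything published by Kingrinder.

```python
# Rough helper for translating Kingrinder K6 dial clicks into an approximate
# burr gap. Assumes 16 microns of burr travel per click and 60 clicks per
# full rotation (per the guide above), with 0 clicks meaning burrs closed.

MICRONS_PER_CLICK = 16
CLICKS_PER_ROTATION = 60

def burr_gap_microns(clicks: int) -> int:
    """Approximate burr gap after `clicks` counterclockwise clicks from zero."""
    return clicks * MICRONS_PER_CLICK

def dial_position(clicks: int) -> tuple:
    """Express a click count as (full rotations, remaining clicks)."""
    return divmod(clicks, CLICKS_PER_ROTATION)

# An espresso starting point (~35 clicks) and a light-roast drip grind (~110):
print(burr_gap_microns(35))   # 560 microns of burr travel
print(dial_position(110))     # (1, 50): one full rotation plus 50 clicks
```

Note the output is burr travel, not the particle size of the resulting grounds; the two are related but far from identical, which is exactly why particle size analysis is worth doing.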
Great espresso might begin at around 35 clicks, while great light-roast drip coffee is somewhere closer to 110. There's a helpful guide linked here. Then weigh your beans, fill them from the top of the device, close the lid, insert the handle, and crank it for 20 seconds or so for single-dose espresso or pour-over, a fast grind compared to most manuals. One of the best cups of pour-over I've made in my life has been with this little thing, a sweet spot of precise aromatics I couldn't replicate with any electric grinder I own. But fair warning: you'll get a little bit of a workout on fine grinds.

If you want an electric coffee grinder that'll make good espresso at a similar price, check out the Wirsh Geimori T38 Plus ($130), described below in our picks for budget coffee grinders.

Specs:
- Dimensions: 2.1 x 2.1 x 6.7 inches (L x W x H)
- Weight: 1.3 pounds
- Type: Manual, conical burr
- Grind settings: 180 (about 20-30 espresso adjustments)
- Capable of espresso? Yes. Capable of Turkish coffee, even.
- Hopper capacity: 35 grams (about an ounce)
- Warranty: 1-year limited

WIRED: Precision at a low price, with 180 adjustments. Sturdy, analog machining and engineering. Portable.
TIRED: Grinding by hand is a process. Instructions and grind guides are hard to come by. Good for single-serve, not batch.

Best Buy-It-for-Life Espresso Grinder: Mazzer Philos Single-Dose Grinder, $1,495 (Mazzer)

If ever you need proof that an excellent grinder is every bit as important as a good espresso machine, here's a little experiment. Try using this Mazzer Philos coffee grinder (9/10, WIRED Recommends) with a cheap espresso machine. Your espresso shot will still come out full-bodied, syrupy, and delicate in all the right ways. Indeed, I've made some of my most prized espresso shots in recent memory using the Philos and a mid-tier machine.

Italian brand Mazzer is best known for devices used in specialty coffee shops. The Philos is the first one I know of that's just as suitable for home use. This said, it costs $1,500, as much as our favorite top-line espresso machines. That makes it cause for a bit of a thought experiment. What leads to even extraction and fewer off flavors: the excellence and evenness of the grind, or consistency and tight control over temperature and pressure? For my money, I might opt for this $1,500 grinder and an excellent but bare-bones $300 Breville Bambino, rather than spending the extra money on a more expensive espresso machine and a cheaper grinder.

The Philos is a precise and thoughtfully designed device, from one of the most trusted names in Italian coffee grinders. It hums powerfully, but grinds quietly. It retains precious few coffee grounds. The burrs are easy to replace and clean, and you can choose between an i189D burr set meant for medium and dark roasts and an i200D set optimized for better clarity on lighter roasts. Stepless adjustment is also possible. You might get better drip coffee out of the Fellow Ode, but not by much. The espresso, meanwhile, is among the best you can make from a home grinder. The build quality is unmatched, with replaceable parts mostly made of metal.
In a world full of plastic, the Philos is mostly devoid of it.

Specs:
- Dimensions: 13.8 x 6 x 14.2 inches (L x W x H)
- Weight: 28 pounds
- Type: 64-mm vertical flat burr
- Grind settings: 145 settings, stepless possible
- Capable of espresso? Yes
- Hopper capacity: 60 grams
- Warranty: 1 year parts and labor

WIRED: Best-in-class precision on espresso grinds makes for a syrupy, delicious brew. Very low coffee bean retention. Durable, modular, mostly metal build.
TIRED: Still good, but not as precise for drip coffee. Large countertop footprint, small hopper.

Compare Our Top 5 Grinders

- Baratza Encore ESP ($200) | Conical burr | 40 settings | Espresso 4/5 | Drip 4/5
  Wired: Innovative dial offers fine adjustments for espresso. Precise grinds, with clarity of flavor. Built like a tank. Admirable versatility for all coffee types. Best value proposition overall.
  Tired: Not a looker, really. Neither loud nor quiet.
- Oxo Brew Conical Burr ($100) | Conical burr | 30 settings | Espresso 1/5 | Drip 3.5/5
  Wired: Lowest-cost burr grinder that still offers a good grind. Slim, sturdy build. Good for drip, Aeropress, French press.
  Tired: Few espresso adjustments. Not as precise as top picks. A little loud.
- Fellow Ode Gen 2 ($400) | Flat burr | 31 settings | Espresso 2/5 | Drip 5/5
  Wired: Tuning-fork precision on drip and pour-over. Minimalist-pretty design. Low static. Auto-shutoff when hopper is empty. Helpful grind size guide.
  Tired: Not a good pick for espresso. Odd haptics on power switch. Grind cup also weird.
- Kingrinder K6 Hand Grinder ($100) | Manual | 180 settings | Espresso 4/5 | Drip 4.5/5
  Wired: Wild precision, at a low price. Sturdy, wholly analog machining and engineering. 180 fine adjustments. Compact.
  Tired: Hand grinding is a process. Instructions are few. Best for single-serve, not batch.
- Mazzer Philos Coffee Grinder ($1,500) | Flat burr | 145 settings | Espresso 5/5 | Drip 3/5
  Wired: Wonderful clarity, depth, and body for espresso. Fine adjustments, easy cleaning, capability for all coffee brew styles.
  Tired: Quite large. Drip coffee is excellent but not as good as with the Ode.

Best Budget Coffee Grinders

As mentioned above, the best bang for your buck will always be a hand grinder like my favorite, the Kingrinder K6 manual coffee grinder ($100). A precisely machined manual coffee grinder can rival coffee grinders many hundreds of dollars more expensive, both in precision and durability. And so the best manual coffee grinder will also be the budget option that'll lead to the best coffee. I've personally come to love the routine and the control.

But I get it. You'll happily grind your pepper with the best pepper grinder, but you draw the line at grinding coffee. Mornings are hard. Electricity helps. These are the budgetiest of budget electric coffee grinder options for each style of brew, all blessedly hands-off. None of these will lead to the clarity of flavors or sweetness or delicacy of our top picks. But they're the absolute lowest-cost devices we recommend for each category of brew.

Best Budget Coffee Grinder for Drip Coffee: Oxo Compact Conical Burr Grinder, $80 (Amazon, Macy's, Crate & Barrel)

WIRED: Compact and low-cost. Decent to good coffee grinds for drip, cold brew, and French press.
TIRED: Not suitable for espresso. A bit fussier to use than Oxo's full-size model. Grind clarity is also better on the full-size.

Just when you thought Oxo had already cornered the market on affordable conical burr coffee grinders, the company comes in at an even lower price with this compact model. The Oxo Brew is stacked like a wee layer cake. And so the grind cup is housed within the column of the device and can be pulled out when you're done grinding. But while this is quite clever, neither consistency of grind nor ease of use is quite on par with Oxo's $110 basic conical burr, which remains my pick for an entry-level drip coffee grinder.
But it's also very easy to move from the cabinet to the counter and back, and $30 less is $30 less. This is the lowest-price electric grinder I could actually recommend for Aeropress, drip, pour-over, French press, or cold brew. I wouldn't attempt espresso, though.

Best Budget Coffee Grinder for Espresso: Wirsh Geimori T38 Plus, $160 $130 (19% off) (Amazon)

WIRED: Portable and affordable. Grind range encompasses everything from espresso to cold brew. Stepless adjustment for precise fine-tuning.
TIRED: Grinds slow, at low rpms. Still not as tasty as top espresso or drip picks.

I'm continuing to test this, but for the moment, the lowest-cost electric espresso-capable grinder I can recommend with a clean conscience is the Wirsh Geimori T38 Plus for $130. This portable conical-burr grinder is about the size of a Christmas nutcracker and looks alarmingly like Pinocchio's left leg. But it offers surprisingly low coffee retention, stepless grind adjustments, and far better precision than expected for a grinder of its price. It achieves this by grinding at low rpms—meaning it grinds quite slowly and carefully for an electric grinder. This also means the T38 Plus takes more than 30 seconds to grind enough beans for a double shot of espresso. That's disqualifyingly slow for batches large enough for drip or French press, and the T38 doesn't really have the clarity you want for pour-over. Still, it might be the only electric grinder I've tested south of $150 that can make decent espresso on non-pressurized baskets. It's also wee—great for small kitchens or as a travel coffee grinder. It's the grinder I'd definitely take with me to a hotel room if I didn't feel like grinding coffee by hand.

Best Coffee Grinder for $50 or Less: KitchenAid Blade Coffee Grinder, $50 $45 (10% off) (Amazon), $35 (Walmart)

WIRED: Very cheap. Very small. Simple and durable.
TIRED: Choppy grind, with too many boulders. Only marginal improvement over pre-ground coffee. Bad for light roasts.

Look, blade grinders like this KitchenAid won't offer the powdery fineness and full-bodied coffee pleasures of a great conical burr, nor the precision of WIRED's top flat-burr pick. Blade grinders chop the heck out of beans, offering an uneven grind. But this is a very affordable coffee grinder, it's simple as pancakes to use, and blade-ground fresh beans are still a little better than the stuff in the supermarket. That said, they're probably still worse than getting beans fresh-ground at a café and using them within a week. When non-bean-geek friends ask for a grinder that costs less than dinner for two at Arby's, this is the one I offer up—especially if they're using darker roasts, and favor French press or a less expressive drip coffee maker. At the very least, it's enough so you're not stuck when you get whole-bean coffee as a gift. But let's be clear: The Oxo Compact conical burr grinder above is five times as good for about $30 more.

Results of Particle Size Analysis of Coffee Grinders

I of course assessed coffee grinders by tasting the resulting coffee, across a number of brew styles and beans. But I also backed up my taste buds with scientific instruments. I analyze the flavor profile and the grind consistency of each of WIRED's top burr coffee grinder picks using particle size analysis from a device called the DiFluid Omni.
You can read a broader discussion of that particle analysis here.Specifically, I tested each of WIRED's top burr grinders on multiple grind settings using the same medium-grind coffee beans—at both espresso-fine grinds and medium grinds more suitable for drip or pourover coffee. Since September 2025, I also test every grinder I review or consider as a top pick, including the Moccamaster KM5, whose results are included here. I tested at least 5 times for each grind sample, collating the results into a characteristic curve for that bean and grinder.Expand the discussion below for detailed discussion, and bar graphs and such.Particle Size Analysis of Top Coffee Grinder PicksAccordionItemContainerButtonLargeChevronParticle size analysis of coffee grinds is not a cut-and-dried test: It's more a clue as the the probable character of a brew. Patterns begin to emerge that correlate to the experiences I've had tasting coffee from each grinder. Taste is the ultimate test, alongside consistency of finicky espresso pulls. But quantitative analysis helps me (and you) actually trust and maybe understand those sensory test results.When looking at these bar graph curves below, there are also a few rules of thumb. Big boulders north of a thousand microns will often lead to muddier character. Too many fines below 100 microns might lead to bitterness. A tight particle size distribution is associated with greater clarity of flavor. Look at the standard deviation (SD) for a clue as to overall precision: Smaller numbers indicate likely greater clairty. This said,a broad distribution of coffee ground sizes can also lead to better body, and more perceived sweetness.Our top pick for most people, the Baratza Encore ESP, proved itself to have quite precise results at very fine grinds—with standard deviation below 200 microns on espresso grinds, and 30 percent of particles concentrated within a single range. 
At its price range, this is admirable precision matched by very few grinders.

Courtesy of Matthew Korfhage

The same wasn't as true at pour-over settings for the Baratza Encore ESP (seen here at setting 22), which showed a broader and more heterogeneous particle size distribution—with both small and large particle sizes. In practice, this led to a full-bodied and rounder cup, but with a little bit less of the precise aromatics one can get from our favorite grinder for drip and pour-over, the Fellow Ode Gen 2.

Courtesy of Matthew Korfhage

The Ode showed a characteristic bell-curve shape surrounding a single high peak, which corresponded with the precise aromatics I taste when brewing drip or pour-over coffee using the Ode. Even greater precision was on display with the Technivorm Moccamaster KM5, a flat-burr grinder that showed precise results across the board—rivaling the Encore ESP at fine grinds and the Ode Gen 2 at grinding for drip. It's not as user-friendly as some of the top-pick devices, and the resulting brews can sometimes feel clinically clean, with a thinner body. But my lord, it does offer clarity.

Courtesy of Matthew Korfhage

The Kingrinder K6 hand grinder, our top manual grinder pick, also showed strong peaks at grind sizes appropriate for pour-over coffee. Shown here are analyses of two medium-fine grinds, at 60 and 70 clicks from zero, respectively. Hand grinders have a secret weapon, which is that they cause you to grind slowly—which works very well at coaxing out more clarity from conical burrs. For pour-over grinds, the Kingrinder showed a higher peak than basically any grinder I tested, meaning grinds are very concentrated in a tight range of sizes: As many as 40 percent of coffee grounds were functionally the same size, and about 70 percent were grouped tightly around this.
This leads to quite pronounced, intense flavor notes.

Courtesy of Matthew Korfhage

At grind sizes suitable for espresso, the Mazzer Philos bests this precision, with more than 90 percent of coffee grounds huddled in a tight grouping while using the i200D burr set. Nonetheless, while boulders are all but nonexistent, enough coffee fines exist to give each shot an almost syrupy consistency. The result is both body and perceived sweetness, with a surprisingly delicate clarity. While I haven't tested the i189D burrs also available as an option, reports from other users suggest that the 189Ds lean even harder into clarity of flavors. But note that at bigger grind sizes more suitable for drip coffee, you'll get a quite broad distribution. This will lead to a well-rounded cup, but may not offer the clarity of flavor of the Fellow Ode Gen 2 or the Kingrinder K6 for drip and pour-over brews.

Omni via Matthew Korfhage

Frequently Asked Questions

How We Test Coffee Grinders

WIRED tests coffee grinders by grinding a lot of beans and making a lot of coffee—testing each grinder to see if it can serve well for espresso, Aeropress, drip or pour-over coffee, and coarse-ground cold brew and French press. I tend to always grind a drip Stumptown Homestead or Single-Origin Colombia as a baseline, because each is readily available at my local supermarket with stamped roast dates, and because I know the flavor well enough that I can detect variations.
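The rules of thumb used in the particle analysis above (standard deviation, share of fines under 100 microns, share of boulders over 1,000 microns) can all be computed from a list of raw particle sizes. Here's a minimal sketch with a hypothetical helper function and toy numbers rather than measured data:

```python
import statistics

FINES_CUTOFF_UM = 100      # particles below this are "fines" (bitterness risk)
BOULDER_CUTOFF_UM = 1000   # particles above this are "boulders" (muddy-cup risk)

def grind_summary(particle_sizes_um: list[float]) -> dict:
    """Summarize a grind sample: mean, SD, and share of fines and boulders.

    Per the rules of thumb above, a smaller SD suggests a tighter
    distribution and likely greater clarity of flavor in the cup.
    """
    n = len(particle_sizes_um)
    return {
        "mean_um": statistics.mean(particle_sizes_um),
        "sd_um": statistics.stdev(particle_sizes_um),
        "fines_pct": 100 * sum(s < FINES_CUTOFF_UM for s in particle_sizes_um) / n,
        "boulders_pct": 100 * sum(s > BOULDER_CUTOFF_UM for s in particle_sizes_um) / n,
    }

# Toy sample in microns (illustrative only, not measured data):
sample = [80, 250, 300, 320, 350, 400, 420, 450, 500, 1200]
summary = grind_summary(sample)
# mean_um = 427.0, fines_pct = 10.0, boulders_pct = 10.0
```

A real analyzer like the DiFluid Omni works from thousands of measured particles per sample, but the summary statistics it reports are of this same kind.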
But I'll also try out a number of flavors and roasts on each grinder, for different brewing methods.

Photograph: Matthew Korfhage

We assess each grinder for decibel level while grinding, ease of cleaning and operation, hopper design, the presence or absence of “popcorning” (where the beans pop around inside the hopper, often leading to more uneven results), messiness and static electrical buildup, grind retention, ease of use, value, and simple aesthetics. Previous WIRED reviewers assessed grind uniformity visually with the aid of macro lenses, or filtered coffee grounds with sieves. In more recent rounds of testing, I re-assessed each top coffee grinder pick using particle grind size analysis, with the help of the DiFluid Omni roast color and particle size analyzer, as well as a data analysis app. I tested both fine and medium grinds on each grinder, using the same beans for each grinder, roasted within a month of testing. I repeated the particle analysis at least five times for each grinder and setting, and collated the results. I assessed the uniformity of the grind and the overall distribution of particle sizes—paying particular attention to the share of coffee fines (the tiniest particles, smaller than 100 microns) and boulders (big coffee bits larger than 1,000 microns).

Why Grind Whole Beans Instead of Buying Pre-Ground?

The reasons are simple: Flavor. Freshness. Aroma. Whenever you open a vacuum-sealed bag of beans, a little invisible clock starts. Oxidation begins to erode the character of your beans, breaking down organic compounds and degrading them, turning your lovely beans to cardboard. Aromatic flavor compounds also escape from the bean, gassing out into the air where they do no particular good. When you grind your beans, these processes go into overdrive. Freshness for whole beans can be measured in weeks. For ground beans, freshness in the open air is a matter of hours or even minutes.
That bag of pre-ground beans you got from the supermarket? It's still coffee, of course, and it'll taste like coffee. But the vibrancy is gone. As far as true freshness is concerned, that coffee's been dead for weeks. (Pre-ground beans can be kept airtight for a week or so and maintain their flavor, if you get them ground fresh at a coffee roaster.)

Photograph: Matthew Korfhage

The only reliable way to get truly excellent flavor from your coffee beans, the way you experience it at a café, is to use fresh, whole beans. This is also how you can exercise some control over extraction, and dial in your brewer or espresso maker to get the perfect results for each bean. Espresso requires a fine grind, pour-over a little coarser, electric drip coffee a little coarser than that. Each grinder should have a guide to the best adjustments for each brewing method. Lighter-roast beans will want a finer grind than dark-roast, to aid in extraction: Porous dark-roast beans give up their secrets a lot easier. It's all kinda fun to figure out, if you let it be fun. But certainly, when you strike paydirt, you'll know it: Finding the right marriage of grind and bean, on a good grinder, can turn into the best cup you've ever had. It's like the magical first time you seared a perfect steak, or baked a perfect layer cake. Effort meets reward. It's marvelous. The grinders in this guide will help you find that moment more often.

What Is a Conical, Flat, or Blade Grinder?

Photograph: Iryna Veklich/Getty Images

Most coffee grinders fall into three main types: conical-burr, flat-burr, and blade grinders. Burr grinders are generally higher quality, and higher cost. Conical-burr grinders are the category occupied by our top pick, the Baratza Encore ESP, and pretty much all of the most affordable grinders that still make good coffee. And there's a reason for this: Conical tends to offer the sweet spot at the intersection of high performance, cost, and flexibility.
In a conical grinder, coffee beans are crushed and ground between a cone-shaped burr that rotates inside a ring-shaped burr. They deliver a finer, much more consistent grind than you'd get with a traditional blade grinder, even the nicest blade grinder you ever met. Conicals do tend to throw off more fines than a flat burr, but many feel this leads to more body and a more rounded flavor character.

Flat-burr grinders are thought to be more precise than conical grinders (though this is by no means universal). They're also typically more finicky and more expensive. The burrs are laid on top of each other, and the beans pass through them as they grind. The grinder action pushes the grounds out of one end, instead of relying on gravity like a conical-burr grinder, which means the beans spend more time in contact with the burrs. This often results in a more consistent grind, and therefore more precise flavors. For this reason, flat-burr grinders are often preferred as a way to elicit clarity of tasting notes in single-origin beans for pour-over, drip, and Aeropress.

Blade grinders have a chopping blade that spins around like a food processor. But blades don't produce even results. Some of your coffee will be fine powder at the bottom, and at the top you'll have bits too large for even French press. The result is an inconsistent, unpredictable brew. These grinders are generally quite cheap. But in case you're wondering, using fresh beans in a blade grinder is still probably better than buying stale ground coffee. (You can learn how to shake the beans to even out your grind just a little. Pulsing the machine often also works. See world barista champion James Hoffmann's video for some more blade grinder hacks.)
Still, if you can afford it, the conical or flat-burr grinders on this list will lead to far better coffee than any blade.

What's the Difference Between a Cheap and Expensive Burr Grinder?

The machinery in a high-quality burr grinder is a bit more complicated, and it's built to withstand greater wear and tear. In cheap burr grinders, the burrs can get blunt from regular use, and flimsier motors may burn out in a matter of months. But also, coffee grinders have undergone a revolution in technology and consideration in the past decade. Manufacturers have been experimenting with different shapes of burr even on conical burr grinders—pentagonal, hexagonal, heptagonal. And grinders with more precision cuts will cost more money. Flat burrs also cost more money to manufacture, and are seen as having more precision. The true geeks are swapping out to new generations of flat burr that offer greater precision in machining, and multistage grinds. Grinder makers are experimenting with larger and smaller burrs, and different materials. It's a hive of invention out there. And these precision parts cost money: Some burr sets might cost hundreds of dollars all by themselves. The end result of all this attention is a greater range or finer adjustment of grind sizes, better and more reliable calibration, and often more precision in the resulting coffee grinds—and thus more precision in the flavor of your coffee or the brew of your espresso.

Can I Run Pre-Ground Beans Through My Burr Grinder to Get Better Coffee?

No, please don't do this. First off, if you're trying to improve the flavor of store-bought beans, the game's already lost. One of the main reasons to use fresh-ground whole beans is to avoid oxidation, and pre-ground beans have already been cardboarded up by evil, stale air. But also, you'll mostly just muck up your machine. Logically, it might make some sense.
Your grind is too coarse, so let's just run it through again at a finer setting, and perfect coffee results! Alas, on burr grinders, pre-ground coffee will get stuck inside the burrs, gum them up, and cause you to have to take the whole thing apart, clean it with your little brush, and put it back together.

What Are the Best Coffee Grinders for Espresso?

Quite simply, the best coffee grinders for espresso are the ones that offer the finest calibration at the “fine” end of the spectrum. If you want to get super specific, look for coffee grinders that offer a number of fine calibrations at the fine end of the spectrum. Dialing in individual espresso beans can require quite fine adjustments—and so even if a grinder is technically able to grind fine enough for espresso, it should also be able to make precise enough adjustments within that range to account for different beans, roasts, and machines. (For a real-world counterexample, witness the wall WIRED reviewer Joe Ray ran into when trying to get the excellent Wilfa Uniform Coffee Grinder to work for espresso. Without fine adjustments, chances are you'll fail.) Otherwise, what you're looking for is excellent build quality, a motor that can withstand the higher torque you'll need to grind finely even on lighter roasts, and a machine that deals well with static electricity: Finer espresso grinds can turn static into a terrible enemy, sending coffee grounds spraying wildly. The most vaunted espresso grinders can travel upwards into the high hundreds of dollars (see the Timemore Sculptor 064S flat-burr) or the thousands of dollars (see the Zerno Z1). The Mazzer Philos Coffee Grinder ($1,500) offered maybe the best shots of espresso I've pulled at home in the past year. It delivers delicate flavor and syrupy shots like the ones you'll get from a café, even on lower-cost espresso machines.
(See my full review of the Mazzer Philos.) But in this guide, we focused mostly on the best espresso grinders for the 90-some percent of people who are trying to gain access to good coffee without spending four figures. For most people and most budgets, our top pick, the Baratza Encore ESP ($200), will be the best choice, with sturdy construction and 20 grind adjustments for espresso alone. If you don't mind a little elbow grease, you can tune your grinds even finer by using a manual coffee grinder like the Kingrinder K6 ($99). And then there's the true budget electric option. The tiny, slow-grinding Wirsh Geimori T38 Plus ($130) is the lowest-cost electric espresso grinder I've tried that can actually make good espresso with non-pressurized baskets, though I'd probably limit it to medium roasts or darker, lest you strain the machine. Torque is not its strong suit.

Honorable Mentions and Runners-Up

More Excellent Conical-Burr All-Rounders:

Fellow Opus for $200: The Fellow Opus is our previous top grinder pick. And it's forever bound to be compared with our current top pick, the Baratza Encore ESP—a yin and yang among excellent $200 grinders that has caused oddly intense arguments on the WIRED Reviews team about which one's better. The Opus comes out ahead in simple beauty, a midcentury stylishness that keeps it welcome on your counter. The Opus is among the quietest grinders I've tested, about half as loud as most picks on our list. But it's not as easy to adjust and tune for espresso as our top all-rounder pick, the Encore ESP, and it retains more coffee grounds. And for truly excellent drip, I'd upgrade to the flat-burr Fellow Ode Gen 2 or the Moccamaster KM5 (below).

Baratza Encore for $150: Baratza's original Encore is the Honda of the conical-burr grinder world: easy to maintain, runs great, easy to use, lasts forever, with replacement parts easy to find. It's been on the market largely unchanged for more than a decade.
For not much more money, though, our top-pick Encore ESP offers beautiful adjustment on espresso settings, so I tend to recommend paying the extra $50 for the added versatility. But the original Encore remains a solid entry-level choice.

Baratza Virtuoso+ for $250: The Virtuoso+ uses the same burr set as the ESP, but is not quite as optimized for espresso. The biggest upgrade over the Encore ESP is a timer. Both have similar rock-solid but compact builds (although the Virtuoso is a little more stylish with its fitted grounds bin), 40 grind settings, and burr grinders for consistent grounds. The Virtuoso's digital timer, however, is great for those wanting consistent coffee ground dosings each morning. You'll have to dial in your grind time versus coffee grounds output, but once you figure that out, you can walk away from the grinder and multitask if you please. —Tyler Shane

Oxo Brew Conical Burr Grinder With Scale for $299: Making great coffee consistently is all about measuring your variables, and this Oxo model comes with a built-in scale. Set your grind size, select the weight you want, hit Start, and walk away; it shuts itself off when it's done. This is a great way to streamline your morning ritual, but the device does spray off a few grounds—and at its price range, we tend to prefer the Fellow Opus or Baratza Encore ESP as an all-rounder, or the bare-bones Oxo as a budget pick.

KitchenAid Burr Grinder for $200: This KitchenAid is stylish and easy to clean, and former WIRED reviewer Jaina Grey likes that the burrs are accessible thanks to their placement directly beneath the hopper. It also features precise dose control, with grind size controlled by a dial.
For espresso lovers, one excellent feature is that you can swap the little container that catches the grounds for a holder for a portafilter.

Excellent Flat-Burr Coffee Grinders for Drip and Pour-Over:

Photograph: Matthew Korfhage

Technivorm Moccamaster KM5 Flat Burr Grinder for $329: The performance on this stepless (read: infinite adjustment) grinder is somewhere between good and damn good. The razor-thin grind size distribution in early testing makes the KM5 a credible rival to the similarly priced Fellow Ode, in fact. And like the Ode, this Moccamaster is made especially for bringing out precise flavors in drip and pour-over. Particle analysis shows this Moccamaster to potentially offer even more precise grinds, leading to an almost clinically clean brew with light body. The KM5 is not overly user-friendly, mind you: It cranks at 90 decibels, you have to hold down its analog switch to grind, and its aesthetics are the same sturdy industrial chic as all Moccamasters. Indeed, it's designed to sit alongside the classic drip coffee maker that's been on our buy-it-for-life guide since we've had one. If you prefer clarity to ease of use, this gives the Ode a run for its money, for less money.

Eureka Mignon Filtro for $269: The precision on flat burrs is terrific. But usually, so is the price. This no-frills Filtro from beloved Italian coffee brand Eureka clocks in at more than $100 less than our top-pick flat burr, and it's an absolute metal-clad tank of a machine, says former WIRED reviewer Jaina Grey. It's as robust as the higher-end models and offers excellent consistency of grind size. Sure, it's a little loud, and you have to hold the button down while you grind. But life is full of trade-offs.

Wilfa Uniform for $349: This Wilfa has long been on our list as a great flat-burr grinder for pour-over and drip. It remains such, though the Ode leapfrogged it as the top pick with its Gen 2 burr update, at about the same price.
Like its name suggests, the Wilfa offers a beautifully consistent grind size and will make you a lovely pour-over. That said, it's fussier to adjust and louder than the Ode.

Courtesy of Breville

Breville Smart Grinder Pro for $200: WIRED has recommended this Breville in the past for its accessible burrs that make it easy to clean. But it's not really optimized for lighter-roast espresso, and ever since Breville bought Baratza, it has slowly been swapping the grinders in its top-line semi-automatic espresso machines for those excellent Baratza burrs. For a stand-alone grinder at the same price, we give the same advice to you.

Baratza Vario W+ for $600: The Encore has a bigger, beefier, flat-burr cousin, the Baratza Vario-W+ (7/10, WIRED Recommends), with a built-in scale and ridiculously granular adjustment (230 settings!). But like a lot of flat burrs, it struggles on finer grinds, according to WIRED contributor Joe Ray. And static is an issue. With price in play, the Ode Gen 2 comes out on top, but Ray was still a big fan of the Vario.

Best Coffee Grinder for Travel and Camping:

Courtesy of VSSL

VSSL Java manual grinder for $170: VSSL specializes in ultra-durable camping tools, and it applied this same durable construction to this hardy, campsite-ready hand grinder that WIRED reviewer Scott Gilbertson attests to be rugged enough to survive the zombie apocalypse. The handle folds out to provide a lot of leverage while you grind, and you can use it as a hook to hang the device up when you're done.

Also Tested

Photograph: Matthew Korfhage

Wirsh Geimori GU38 for $200: The GU38 grinder from Wirsh/Geimori uses an identical burr set to the T38 Plus model I recommend as a budget espresso grinder. It's also bulkier and built a little sturdier. But the angled hopper causes more coffee retention, including some coffee beans that just refuse to feed into the grinder. Performance also seems slightly less reliable than the T38 Plus, perhaps because the GU38 grinds faster.
Either way, I'd opt for the lower-cost T38 Plus over this quite similar model.

Photograph: Matthew Korfhage

Aarke Flat-Burr Grinder for $400: This pretty, shiny, stainless steel Aarke grinder offers a unique feature when paired with Aarke's coffee brewer: It detects the water in the brewer's tank and grinds the appropriate amount of beans. But this feature wasn't as well calibrated as we'd like, and there have been a lot of online reports of grinder jams. I didn't have the same problem, but at more than $300 for a grinder that hasn't been long on the market, prudence is often rewarded.

Hario Skerton Pro for $55: The Hario Skerton was the gateway hand grinder for many a coffee nerd, but it has since given ground to newer entrants. It's fast and cheap, but it'll give you a heck of a workout and isn't as consistent for coarse grinds, plus the silicone handle has a habit of falling off.

Courtesy of Amazon

Hario Mini-Slim Plus for $39: This smaller Hario manual grinder is slower than the Skerton, but its plastic construction makes it good to throw in a travel bag. The low price is its main advertisement.

Cuisinart Burr Grinder for $75: At first, it seems like a good deal. It's Cuisinart, a known brand, and a conical burr grinder for less than $100! But former WIRED reviewer Jaina Grey found that the low price came with a cost: These things apparently burn out faster than a rock star in the late '60s.

Bodum Bistro Electric Blade Grinder for $20: This little blade grinder is quite cheap, and the model has served WIRED contributing reviewer Tyler Shane for years. That said, after some inconsistent reports on reliability, we favor the KitchenAid as our ultra-budget pick.

DmofwHi Cordless Grinder for $40: We used to recommend this cordless blade grinder for camping, largely because it can make 15 pots of French press without needing a recharge.
It's out of stock as of February 2026, and we're monitoring to see whether it returns.

Matthew Korfhage is a staff writer and reviewer on WIRED's Gear team, where he focuses on home and kitchen devices that range from air fryers and coffee machines to space heaters, water filters, and beard trimmers. Before joining WIRED in 2024, he covered food, drink, business, culture, and technology.

The Best Coffee Grinders for Espresso or Pour-Over

Coffee is only as good as the beans you use to make it, and the beans you use are only as good as the grinder you use to render them extractable. Beans fed into a coffee grinder right before you brew them might as well be a whole different substance from the bag of ground beans you get at the supermarket, full of aroma and flavor compounds that quickly dissipate as they oxidize in the air. (It also helps to use fresh beans from excellent roasters: Check out our Best Coffee Subscriptions guide for some of our favorites.)
Grinders have become the most active tech frontier in coffee, and over the past decade coffee geeks have become devoted to the idea that getting the right grind on your beans is every bit as important as the machine you brew with. Uneven grinds can mean uneven extraction and uneven flavor—which is why we use particle size analysis to test the character of each grind from every grinder we review. But, of course, particle size alone does not tell the whole story, which is why I use the grounds from every model for cups of drip coffee, pour-over, Aeropress, and, when applicable, espresso. Our Reviews team has tested dozens of grinders over the past five years, and I've retested all the top burr-grinder picks this past year. And while super-high-end grinders like the wonderful Mazzer Philos ($1,500) can run into the thousands, our top pick, the Baratza Encore ESP ($200), will keep most people happy for much less. This said, the Fellow Ode Gen 2 ($400), our favorite flat-burr grinder for drip coffee, is able to wring a special sort of poetry from each cup. The five machines below are the coffee grinders we currently recommend to anyone who cares a lot about home coffee. Be sure to check out WIRED's other coffee coverage, like the Best Drip Coffee Makers, Best Espresso Machines, Best Latte and Cappuccino Machines, and Best Cold Brew Devices.

Updated February 2026: We tested and added the Mazzer Philos to our top grinder picks, and we tested and added the Wirsh Geimori T38 Plus and Geimori GU38 to the guide. We also added a budget grinder section for lower-cost picks, streamlined our top picks, checked and updated links, and added new explanatory content, including an expanded section on particle size analysis.

Best Coffee Grinder for Most People

It's hard to fathom how good this Baratza Encore ESP coffee grinder is for the money—and how many types of coffee drinkers it can serve. Like other Baratzas, it's tank-durable and compact.
It's easy to clean and maintain, and it's surprisingly precise for a conical burr grinder at this price point, especially with finer grinds. I verified this precision using particle size analysis—but also with the evidence of my own senses, after drinking the coffee that results. The Encore makes excellent espresso, quite good drip coffee, and good cold brew coffee—one of the few devices on the market to handle such range while still being priced accessibly for beginners to specialty coffee. And thus, it can serve pretty much every kind of coffee drinker with aplomb. That it does so for $200 is a gift.

The secret behind this versatility is an ingenious bit of engineering. The previous-generation Baratza Encore had been a top grinder pick for years, but it didn't allow enough fine-tuning to make great espresso. With the ESP, Baratza upgraded the burr set and redesigned the grind wheel to allow for micro-adjustment when dialing in espresso-fine grinds. You also get broader adjustments at the medium-coarse end of the scale for pour-over, drip, and cold brew. The ESP makes round, full-bodied drip coffee with a pleasing mouthfeel. But if what you want is crystalline clarity of flavor from light-roast beans, your grinder of choice is probably instead the Fellow Ode Gen 2 ($400). Other coffee grinders are prettier, or more likely to be mistaken for midcentury sculpture. And some may also be a little quieter. But at $200, this Encore ESP is the best grinder most coffee drinkers will likely need, for whatever style of coffee. And if it has the longevity of the previous-generation Encore, the ESP is likely to last a decade or more.

Best Coffee Grinder for Pour-Over and Drip Coffee

Like Liam Neeson, the Ode Gen 2 has a highly particular set of skills. As with the majority of flat-burr grinders priced under $500, there's no espresso involved.
But the Ode is a beauteous precision machine, made to elicit tuning-fork clarity of flavor from pour-over and drip and Aeropress, without sacrificing richness. On drip coffee settings, the particle size distribution of my coffee grounds looks like the same tight bell curve you learn about in a first-year statistics class. Tick the Ode one setting finer, or coarser, and you'll often discover a new character, a different flavor note. It's a fun game for a mid-morning cup, when I'm down to experiment. (The Ode can also grind coarser for cold brew and French press, of course, but for cold brew I'd really only need the firepower of this flat burr for a Kyoto-style slow drip, or the sui generis cold brew of a Fellow Aiden.) While the previous-generation Ode struggled with some of the lightest-roast pour-overs, this second-generation burr set sails through and remains beauteously static-free—you probably won't even have to water-spritz your beans like a sampler at a perfume counter. The Ode grinds quietly and diffusely, generally below 80 decibels, and preternaturally swiftly. When I first started using it, I kept double-checking to make sure it had really ground all the beans so soon. It's also pretty, an elegant companion to whatever drip coffee machine I happen to be trying out, and offers better clarity of flavor than many grinders twice its price. And so I forgive it for the somewhat … squishy … feel of its power button, and a strange grind cup with interior fins whose proposed utility is still a mystery.

Best $100 Grinder for Drip Coffee

The Oxo Brew Conical Burr Grinder has a good balance of features, usefulness, and a relatively low price among the conical-burr grinders we've used. Quite simply, it's the cheapest grinder I recommend that makes actually good coffee. There are 30 settings that range from espresso to a coarse grind for French press—but the lack of fine adjustments at the low end of the scale won't suffice for non-pressurized espresso baskets.
This Oxo is also not as quiet as our top pick, and it kicks out more boulders and fines than either of our top two picks. That said, this machine is industrial-handsome and intuitive to use, with a grind-by-time function you can dial in for your brewer if you hate scales. (You should still use a scale, like this $28 one from Maestri House.) This Oxo also grinds consistently enough for good-tasting drip. It's a great grinder for beginners—an entry-level choice at half the price of our top pick, with a solid warranty and a sturdy build.

Best $100 Coffee Grinder for Espresso and Pour-Over

“We live in the 21st century,” you're probably saying. “Electricity has been a pretty successful addition to modern life. Why would I want to grind coffee by hand?” The answer, at home, is precision and cost. The highest-end, most consistent, most adjustable coffee grinders can run into the thousands if you let them, especially where espresso is concerned. But a slower-grinding, precisely machined manual coffee grinder can attain similar precision at a far lower cost. This little Taiwanese-made Kingrinder K6 is a beast, with better consistency of grind than any flat burr I've tested south of four figures, beautiful fine adjustments, and a grind size range from the finest espresso to the coarsest of French press or cold brew. And yet, it's often $100 on sale. It's also a great travel and camping grinder. It's even better for home baristas. Sure, it requires effort and a little grip strength. And there's a bit of a learning curve: Each click on the K6's adjustment dial accounts for 16 microns of burr movement. Start with the dial at zero, and click counterclockwise. One full rotation is 60 clicks. Great espresso might begin at around 35 clicks, while great light-roast drip coffee is somewhere closer to 110. There's a helpful guide linked here.
Then weigh your beans, fill them from the top of the device, close the lid, insert the handle, and crank it for 20 seconds or so for single-dose espresso or pour-over, a fast grind compared to most manuals. One of the best cups of pour-over I've made in my life has been with this little thing, a sweet spot of precise aromatics I couldn't replicate with any electric grinder I own. But fair warning, you'll get a little bit of a workout on fine grinds. If you want an electric coffee grinder that'll make good espresso at a similar price, check out the Wirsh Geimori T38 Plus ($130), described below in our picks for budget coffee grinders. Best Buy-It-For-Life Espresso Grinder Mazzer Mazzer If ever you need proof that an excellent grinder is every bit as important as a good espresso machine, here's a little experiment. Try using this Mazzer Philos coffee grinder (9/10, WIRED Recommends) with a cheap espresso machine. Your espresso shot will still come out full-bodied, syrupy, and delicate in all the right ways. Indeed, I've made some of my most prized espresso shots in recent memory using the Philos and a mid-tier machine. Italian brand Mazzer is best known for devices used in specialty coffee shops. The Philos is the first one I know that's just as suitable for home use. This said, it costs $1,500, as much as our favorite top-line espresso machines. This makes it cause for a bit of a thought experiment. What leads to even extraction, and fewer off flavors? The excellence and evenness of the grind, or consistency and tight control over temperature and pressure? For my money, I might opt for this $1,500 grinder and an excellent but bare-bones $300 Breville Bambino, rather than spending the extra money on a more expensive espresso machine and a cheaper grinder. The Philos is a precise and thoughtfully designed device, from one of the most trusted names in Italian coffee grinders. It hums powerfully, but grinds quietly. It retains precious few coffee grounds. 
The burrs are easy to replace and clean, and you can choose between an i189D burr set meant for medium and dark roasts and an i200D set optimized for better clarity on lighter roasts. Stepless adjustment is also possible. You might get better drip coffee out of the Fellow Ode, but not by much. The espresso, meanwhile, is among the best you can make from a home grinder. The build quality is unmatched, with replaceable parts mostly made of metal. In a world full of plastic, the Philos is mostly devoid of it. Compare Our Top 5 Grinders Best Budget Coffee Grinders As mentioned above, the best bang for your buck will always be a hand grinder like my favorite, the Kingrinder K6 manual coffee grinder ($100). A precisely machined manual coffee grinder can rival coffee grinders many hundreds of dollars more expensive, both in precision and durability. And so the best manual coffee grinder will also be the budget option that'll lead to the best coffee. I've personally come to love the routine and the control. But I get it. You'll happily grind your pepper with the best pepper grinder, but you draw the line at grinding coffee. Mornings are hard. Electricity helps. These are the budgetiest of budget electric coffee grinder options for each style of brew, all blessedly hands-off. None of these will lead to the clarity of flavors or sweetness or delicacy of our top picks. But they're the absolute lowest-cost devices we recommend for each category of brew. Oxo Amazon Macy's Crate & Barrel Just when you thought Oxo had already cornered the market on affordable conical burr coffee grinders, the company comes in at an even lower price with this compact model. The Oxo Brew is stacked like a wee layer cake. And so the grind cup is housed within the column of the device and can be pulled out when you're done grinding. 
But while this is quite clever, neither consistency of grind nor ease of use is quite on par with Oxo's $110 basic conical burr, which remains my pick for an entry-level drip coffee grinder. But it's also very easy to move from the cabinet to the counter and back, and $30 less is $30 less. This is the lowest-price electric grinder I could actually recommend for Aeropress, drip, pour-over, French press, or cold brew. I wouldn't attempt espresso, though. Amazon I'm continuing to test this, but for the moment, the lowest-cost electric espresso-capable grinder I can recommend with a clean conscience is the Wirsh Geimori T38 Plus for $130. This portable conical-burr grinder is about the size of a Christmas nutcracker and looks alarmingly like Pinocchio's left leg. But it offers surprisingly low coffee retention, stepless grind adjustments, and far better precision than expected for a grinder of its price. It achieves this by grinding at low rpms—meaning it grinds quite slowly and carefully for an electric grinder. This also means the T38 Plus takes more than 30 seconds to grind enough beans for a double shot of espresso. This is disqualifyingly slow for batches large enough for drip or French press, and the T38 doesn't really have the clarity you want for pour-over. Still, it might be the only electric grinder I've tested south of $150 that can make decent espresso on non-pressurized baskets. It's also wee—great for small kitchens or as a travel coffee grinder. It's the grinder I'd definitely take with me to a hotel room if I didn't feel like grinding coffee by hand. KitchenAid Amazon Walmart Look, blade grinders like this KitchenAid won't offer the powdery fineness and full-bodied coffee pleasures of a great conical burr, nor the precision of WIRED's top flat-burr pick. Blade grinders chop the heck out of beans, offering an uneven grind. 
But this is a very affordable coffee grinder, it's simple as pancakes to use, and blade-ground fresh beans are still a little better than the stuff in the supermarket. That said, they're probably still worse than getting beans fresh-ground at a café and using them within a week. When non-bean-geek friends ask for a grinder that costs less than dinner for two at Arby's, this is the one I offer up—especially if they're using darker roasts, and favor French press or a less expressive drip coffee maker. At the very least, it's enough so you're not crippled when you get whole-bean coffee as a gift. But let's be clear. The Oxo Compact conical burr grinder above is five times as good for about $30 more. Results of Particle Size Analysis of Coffee Grinders I of course assessed coffee grinders by tasting the resulting coffee, across a number of brew styles and beans. But I also backed up my taste buds with scientific instruments. I analyze the flavor profile and the grind consistency of each of WIRED's top burr coffee grinder picks using particle analysis by a device called the DiFluid Omni. You can read a broader discussion of that particle analysis here. Specifically, I tested each of WIRED's top burr grinders on multiple grind settings using the same medium-roast coffee beans—at both espresso-fine grinds and medium grinds more suitable for drip or pour-over coffee. Since September 2025, I also test every grinder I review or consider as a top pick, including the Moccamaster KM5, whose results are included here. I tested at least five times for each grind sample, collating the results into a characteristic curve for that bean and grinder. Expand the section below for detailed discussion, bar graphs, and such. Particle Size Analysis of Top Coffee Grinder Picks Particle size analysis of coffee grinds is not a cut-and-dried test: It's more a clue as to the probable character of a brew.
Patterns begin to emerge that correlate to the experiences I've had tasting coffee from each grinder. Taste is the ultimate test, alongside consistency of finicky espresso pulls. But quantitative analysis helps me (and you) actually trust and maybe understand those sensory test results. When looking at these bar graph curves below, there are also a few rules of thumb. Big boulders north of a thousand microns will often lead to muddier character. Too many fines below 100 microns might lead to bitterness. A tight particle size distribution is associated with greater clarity of flavor. Look at the standard deviation (SD) for a clue as to overall precision: Smaller numbers indicate likely greater clarity. This said, a broad distribution of coffee ground sizes can also lead to better body, and more perceived sweetness. Our top pick for most people, the Baratza Encore ESP, proved itself to have quite precise results at very fine grinds—with standard deviation below 200 microns on espresso grinds, and 30 percent of particles concentrated within a single range. At its price range, this is admirable precision matched by very few grinders. The same wasn't as true at pour-over coffee settings for the Baratza Encore ESP (seen here at setting 22), which showed a broader and more heterogeneous particle size distribution—with both small and large particle sizes. In practice, this led to a full-bodied and rounder cup, but with a little bit less of the precise aromatics one can get from our favorite grinder for drip and pour-over, the Fellow Ode Gen 2. The Ode showed a characteristic bell-curve shape, surrounding a single high peak, which corresponded with the precise aromatics I taste when brewing drip or pour-over coffee using the Ode. Even greater precision was on display with the Technivorm Moccamaster KM5, a flat burr grinder that showed precise results across the board—rivaling the Encore ESP at fine grinds and the Ode Gen 2 at grinding for drip.
It's not as user-friendly as some of the top-pick devices, and the resulting brews can sometimes feel clinically clean, with a thinner body. But my lord it does offer clarity. The Kingrinder K6 hand grinder, our top manual grinder pick, also showed strong peaks at grind sizes appropriate for pour-over coffee. Shown here are analyses of two medium-fine grinds, at 60 and 70 clicks from zero, respectively. Hand grinders have a secret weapon, which is that they cause you to grind slowly—which works very well at coaxing out more clarity from conical burrs. For pour-over grinds, the Kingrinder showed a higher peak than basically any grinder I tested, meaning grinds are very concentrated in a tight range of sizes: as many as 40 percent of coffee grounds were functionally the same size, and about 70 percent were grouped tightly around this. This leads to quite pronounced, intense flavor notes. At grind sizes suitable for espresso, the Mazzer Philos bests this precision, with more than 90 percent of coffee grounds huddled in a tight grouping while using the i200D burr set. Nonetheless, while boulders are all but nonexistent, enough coffee fines exist to give each shot an almost syrupy consistency. The result is both body and perceived sweetness, with a surprisingly delicate clarity. While I haven't tested the i189D burrs also available as an option, reports from the world say that the 189Ds lean even harder into clarity of flavors. But note that at bigger grind sizes more suitable for drip coffee, you'll get a quite broad distribution. This will lead to a well-rounded cup, but may not offer the clarity of flavor of the Fellow Ode Gen 2 or the Kingrinder K6 for drip and pour-over brews. Frequently Asked Questions How We Test Coffee Grinders WIRED tests coffee grinders by grinding a lot of beans, and making a lot of coffee—testing each grinder to see if it can serve well for espresso, Aeropress, drip or pour-over coffee, and coarse-ground cold brew and French press.
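The bookkeeping behind those particle-analysis numbers (standard deviation, the share of fines under 100 microns, the share of boulders over 1,000) is straightforward to sketch. The sample values below are invented for illustration; real measurements come from an analyzer like the DiFluid Omni.

```python
# A minimal sketch of the particle-size summary stats discussed above.
# The cutoffs follow the article's rules of thumb; the sample data is made up.
from statistics import mean, stdev

FINES_CUTOFF_UM = 100      # particles smaller than this tend toward bitterness
BOULDER_CUTOFF_UM = 1000   # particles larger than this tend toward muddiness

def summarize(sizes_um):
    """Return mean, standard deviation, and fines/boulders share of a grind sample."""
    n = len(sizes_um)
    return {
        "mean_um": mean(sizes_um),
        "sd_um": stdev(sizes_um),
        "fines_pct": 100 * sum(s < FINES_CUTOFF_UM for s in sizes_um) / n,
        "boulders_pct": 100 * sum(s > BOULDER_CUTOFF_UM for s in sizes_um) / n,
    }

# Illustrative drip-grind sample, in microns (one fine, one boulder):
sample = [650, 700, 720, 680, 90, 1100, 710, 690, 740, 660]
stats = summarize(sample)
```

A tight cluster around the mean with low SD and few particles beyond either cutoff is the signature of the "clarity" grinders described in these results.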
I tend to always grind a drip Stumptown Homestead or Single-Origin Colombia as a baseline, because each is readily available at my local supermarket with stamped roast dates, and because I know the flavor well enough I can detect variations. But I'll also try out a number of flavors and roasts on each grinder, for different brewing methods. We assess each grinder for decibel level while grinding, ease of cleaning and operation, hopper design, the presence or absence of “popcorning” (where the beans pop around inside the hopper, often leading to more uneven results), messiness and static electrical buildup, grind retention, ease of use, value, and simple aesthetics. Previous WIRED reviewers assessed grind uniformity visually with the aid of macro lenses, or filtered coffee grounds with sieves. In more recent rounds of testing, I re-assessed each top coffee grinder pick using particle grind size analysis, with the help of the DiFluid Omni roast color and particle size analyzer, as well as a data analysis app. I tested both fine and medium grinds on each grinder, using the same beans for each grinder, roasted within a month of testing. I repeated the particle analysis at least five times for each grinder and setting, and collated the results. I assessed the uniformity of the grind and the overall distribution of particle sizes—paying particular attention to the share of coffee fines (the tiniest particles smaller than 100 microns) and boulders (big coffee bits larger than 1000 microns). Why Grind Whole Beans Instead of Buying Pre-Ground? The reasons are simple: Flavor. Freshness. Aroma. Whenever you open a vacuum-sealed bag of beans, a little invisible clock starts. Oxidation begins to erode the character of your beans, breaking down organic compounds and degrading them, turning your lovely beans to cardboard. Aromatic flavor compounds also escape from the bean, gassing out into the air where they do no particular good. 
When you grind your beans, these processes go into overdrive. Freshness for whole beans can be measured in weeks. For ground beans, freshness in the open air is a matter of hours or even minutes. That bag of pre-ground beans you got from the supermarket? It's still coffee, of course, and it'll taste like coffee. But the vibrancy is gone. As far as true freshness is concerned, that coffee's been dead for weeks. (Pre-ground beans can be kept airtight for a week or so and maintain their flavor, if you get them ground fresh at a coffee roaster.) The only reliable way to get truly excellent flavor from your coffee beans, the way you experience it at a café, is to use fresh, whole beans. This is also how you can exercise some control over extraction, and dial in your brewer or espresso maker to get the perfect results for each bean. Espresso requires a fine grind, pour-over a little coarser, electric drip coffee a little coarser than this. Each grinder should have a guide to the best adjustments for each brewing method. Lighter-roast beans will want a finer grind than dark-roast, to aid in extraction: porous dark-roast beans give up their secrets a lot easier. It's all kinda fun to figure out, if you let it be fun. But certainly, when you strike paydirt, you'll know it: Finding the right marriage of grind and bean, on a good grinder, can turn into the best cup you've ever had. It's like the magical first time you seared a perfect steak, or baked a perfect layer cake. Effort meets reward. It's marvelous. The grinders in this guide will help you find that moment more often. What Is a Conical, Flat, or Blade Grinder? Most coffee grinders fall into three main types: Conical-burr, flat-burr, and blade grinders. Burr grinders are generally higher quality, and higher cost. Conical-burr grinders are the category occupied by our top pick, the Baratza Encore ESP, and pretty much all of the most affordable grinders that still make good coffee.
And there's a reason for this: Conical tends to offer the sweet spot at the intersection of high-performance, cost, and flexibility. In a conical grinder, coffee beans are crushed and ground between two rings of burrs. They deliver a finer, much more consistent grind than you’d get with a traditional blade grinder, even the nicest blade grinder you ever met. Conicals do tend to throw off more fines than a flat burr, but many feel this leads to more body and a more rounded flavor character. Flat-burr grinders are thought to be more precise than conical grinders (though this is by no means universal). They’re also typically more finicky and also more expensive. Burrs are laid on top of each other, and the beans pass through them as they grind. The grinder action pushes the grounds out of one end, instead of relying on gravity like a conical-burr grinder, which means the beans spend more time in contact with the burrs. This often results in a more consistent grind, and therefore more precise flavors. For this reason, flat-burr grinders are often preferred as a way to elicit clarity of tasting notes in single-origin beans for pour-over, drip, and Aeropress. Blade grinders have a chopping blade that spins around like a food processor. But blades don't produce even results. Some of your coffee will be fine powder at the bottom, and at the top you'll have bits too large for even French press. The result is an inconsistent, unpredictable brew. These grinders are generally quite cheap. But in case you're wondering, using fresh beans in a blade grinder is still probably better than buying stale ground coffee. (You can learn how to shake the beans to even your grind just a little. Pulsing the machine often also works. See world barista champion James Hoffmann's video for some more blade grinder hacks.) Still, if you can afford it, the conical or flat-burr grinders on this list will lead to far better coffee than any blade. 
What's the Difference Between a Cheap and Expensive Burr Grinder? The machinery in a high-quality burr grinder is a bit more complicated, and it's built to withstand greater wear and tear. In cheap burr grinders, the burrs can get blunt from regular use, and flimsier motors may burn out in a matter of months. But also, coffee grinders have undergone a revolution in technology and consideration in the past decade. Manufacturers have been experimenting with different shapes of burr even on conical burr grinders—pentagonal, hexagonal, heptagonal. And grinders with more precision cuts will cost more money. Flat burrs also cost more money to manufacture, and are seen as having more precision. The true geeks are swapping out to new generations of flat burr that offer greater precision in machining, and multistage grinds. Grinder makers are experimenting with larger and smaller burrs, and different materials. It's a hive of invention out there. And these precision parts cost money: Some burr sets might cost hundreds all by themselves. The end result of all this attention is a greater range or finer adjustment of grind sizes, better and more reliable calibration, and often more precision in the resulting coffee grinds—and thus more precision in the flavor of your coffee or the brew of your espresso. Can I Run Pre-Ground Beans Through My Burr Grinder to Get Better Coffee? No, please don't do this. First off, if you're trying to improve the flavor of store-bought beans, the game's already lost. One of the main reasons to use fresh-ground whole beans is to avoid oxidation, and pre-ground beans have already been cardboarded up by evil, stale air. But also, you'll mostly just muck up your machine. Logically, it might make some sense. Your grind is too coarse, so let's just run them through again at a finer setting, and perfect coffee results! 
Alas, on burr grinders, pre-ground coffee will get stuck inside the burrs, gum them up, and cause you to have to take the whole thing apart and clean it with your little brush and put it back together. What Are the Best Coffee Grinders for Espresso? Quite simply, the best coffee grinders for espresso are the ones that offer the finest calibration at the “fine” end of the spectrum. If you want to get super specific, look for coffee grinders that offer a number of fine calibrations at the fine end of the spectrum. Dialing in individual espresso beans can require quite fine adjustments—and so even if a grinder is technically able to grind fine enough for espresso, it should also be able to make precise enough adjustments within that range to account for different beans, roasts, and machines. (For a real-world counterexample, witness the wall WIRED reviewer Joe Ray ran into when trying to get the (excellent) Wilfa Uniform Coffee Grinder to work for espresso. Without fine adjustments, chances are you'll fail.) Otherwise, what you're looking for is excellent build, a motor that can withstand the higher torque you'll need to grind finely even on lighter roasts, and a machine that deals well with static electricity: Finer espresso grinds can turn static into a terrible enemy, sending coffee grounds spraying wildly. The most vaunted espresso grinders can travel upwards into the high hundreds of dollars (see the Timemore Sculptor 064S flat-burr) or the thousands of dollars (see the Zerno Z1). The Mazzer Philos Coffee Grinder ($1,500) offered maybe the best shots of espresso I've pulled at home in the past year. This offers delicate flavor and syrupy shots like the ones you'll get from a café, even on lower-cost espresso machines. (See my full review of the Mazzer Philos.) But in this guide, we focused mostly on the best espresso grinders for the 90-some percent of people who are trying to gain access to good coffee without spending four figures. 
For most people and most budgets, our top pick, the Baratza Encore ESP ($200), will be the best choice, with sturdy construction and 20 grind adjustments for espresso alone. If you don't mind a little elbow grease, you can tune your grinds even finer by using a manual coffee grinder like the Kingrinder K6 ($99). And then there's the true budget electric option. The tiny, slow-grinding Wirsh Geimori T38 Plus ($130) is the lowest-cost electric espresso grinder I've tried that can actually make good espresso on non-pressurized baskets, though I'd probably limit it to medium roasts or darker, lest you strain the machine. Torque is not a strong suit. Honorable Mentions and Runners-Up Fellow Opus for $200: The Fellow Opus is our previous top grinder pick. And it's forever bound to be compared with our current top pick, the Baratza Encore ESP—a yang and yin among excellent $200 grinders that has caused oddly intense arguments on the WIRED Reviews team about which one's better. The Opus comes out ahead in simple beauty, a mid-century stylishness that keeps it welcome on your counter. The Opus is among the quietest grinders I've tested, about half as loud as most picks on our list. But it's not as easy to adjust and tune for espresso as our top pick all-rounder, the Encore ESP, and it retains more coffee grounds. And for truly excellent drip, I'd upgrade to the flat-burr Fellow Ode Gen 2 or the Moccamaster KM5 (below). Baratza Encore for $150: Baratza's original Encore is the Honda of the conical burr grinder world: easy to maintain, runs great, easy to use, lasts forever, replacement parts easy to find. It's been on the market largely unchanged for more than a decade. For not much more money, though, our top-pick Encore ESP offers beautiful adjustment on espresso settings, so I tend to recommend paying an extra $50 for the added versatility. But the original Encore remains a solid entry-level choice.
Baratza Virtuoso+ for $250: The Virtuoso+ uses the same burr set as the ESP, but is not quite as optimized for espresso. The biggest upgrade against the Encore ESP is a timer. Both have similar rock-solid but compact builds (although the Virtuoso is a little more stylish with its fitted grounds bin), 40 grind settings, and conical burrs for consistent grounds. The Virtuoso’s digital timer, however, is great for those wanting consistent coffee ground dosings each morning. You’ll have to dial in your grind time versus coffee grounds output, but once you figure that out, you can walk away from the grinder and multitask if you please. —Tyler Shane Oxo Brew Conical Burr Grinder With Scale for $299: Making great coffee consistently is all about measuring your variables, and this Oxo model comes with a built-in scale. Set your grind size, select the weight you want, hit Start, and walk away; it shuts itself off when it's done. This is a great way to streamline your morning ritual, but the device does spray off a few grounds—and at its price range, we tend to prefer the Fellow Opus or Baratza Encore ESP as an all-rounder, or the bare-bones Oxo as a budget pick. KitchenAid Burr Grinder for $200: This KitchenAid is stylish and easy to clean, and former WIRED reviewer Jaina Grey likes that the burrs are accessible thanks to their placement directly beneath the hopper. It also features precise dose control, with grind size controlled by a dial. For espresso lovers, one excellent feature is that you can swap out the little container that catches the grounds for a portafilter holder. Technivorm Moccamaster KM5 Flat Burr Grinder for $329: The performance on this stepless (read: infinite adjustment) grinder is somewhere between good and damn good. The razor-thin grind size distribution in early testing makes the KM5 a credible rival to the similarly priced Fellow Ode, in fact.
And like the Ode, this Moccamaster is made especially for bringing out precise flavors on drip and pour-over. Particle analysis shows this Moccamaster to potentially offer even more precise grinds, leading to an almost clinically clean brew with light body. The KM5 is not overly user-friendly, mind you: It cranks at 90 decibels, you have to hold down its analog switch to grind, and its aesthetics are the same sturdy industrial chic as all Moccamasters. Indeed, it's designed to sit alongside the classic drip coffee maker that's been on our buy-it-for-life guide since we've had one. If you prefer clarity to ease of use, this gives the Ode a run for the money, for less money. Eureka Mignon Filtro for $269: The precision on flat burrs is terrific. But usually, so is the price. This no-frills Filtro from beloved Italian coffee brand Eureka clocks in at more than $100 less than our top-pick flat-burr, and it's an absolute metal-clad tank of a machine, says former WIRED reviewer Jaina Grey. It's as robust as the higher-end models and offers excellent consistency of grind size. Sure, it's a little loud, and you have to hold the button down when you grind. But life is full of trade-offs. Wilfa Uniform for $349: This Wilfa has long been on our list as a great flat-burr grinder for pour-overs and drip. It remains such, though the Ode supplanted it as the top pick with its Gen 2 burr update, at about the same price. As its name suggests, the Wilfa offers a beautifully consistent grind size and will make you a lovely pour-over. That said, it's fussier to adjust and louder than the Ode. Breville Smart Grinder Pro for $200: WIRED has recommended this Breville in the past for its accessible burrs that make it easy to clean. But it's not really optimized for lighter-roast espresso, and ever since Breville bought Baratza, they've slowly been swapping out the grinders in their top-line semi-automatic espresso machines with those excellent Baratza burrs.
For a stand-alone grinder at the same price, our advice is the same: go with the Baratza. Baratza Vario W+ for $600: The Encore has a bigger, beefier, flat-burr cousin, the Baratza Vario-W+ (7/10, WIRED Recommends) with a built-in scale and ridiculously granular adjustment (230 settings!). But like a lot of flat burrs, it struggles on finer grinds, according to WIRED contributor Joe Ray. And static is an issue. With price in play, the Ode Gen 2 comes out on top, but Ray was still a big fan of the Vario. VSSL Java manual grinder for $170: VSSL specializes in ultra-durable camping tools, and it applied this same durable construction to this hardy campsite-ready hand grinder that WIRED reviewer Scott Gilbertson attests to be rugged enough to survive the zombie apocalypse. The handle folds out to provide a lot of leverage while you grind, and you can use it as a hook to hang the device up when you're done. Also Tested Wirsh Geimori GU38 for $200: The GU38 grinder from Wirsh/Geimori uses an identical burr set to the T38 Plus model I recommend as a budget espresso grinder. It's also bulkier and built a little sturdier. But the angled hopper causes more coffee retention, including some coffee beans that just refuse to feed into the grinder. Performance also seems slightly less reliable than the T38 Plus, perhaps because the GU38 grinds faster. Either way, I'd opt for the lower-cost T38 Plus over this quite similar model. Aarke Flat-Burr Grinder for $400: This pretty, shiny, stainless steel Aarke grinder offers a unique feature when paired with Aarke's coffee brewer: it detects the water in the brewer's tank and grinds the appropriate amount of beans. But this feature wasn't as well calibrated as we'd like, and there have been a lot of online reports of grinder jams. I didn't have the same problem, but at this price, for a grinder that hasn't been long on the market, prudence is often rewarded.
Hario Skerton Pro for $55: The Hario Skerton was the gateway hand grinder for many a coffee nerd, but it has since given ground to newer entrants. It's fast and cheap, but it'll give you a heck of a workout and isn't as consistent for coarse grinds, plus the silicone handle has a habit of falling off. Hario Mini-Slim Plus for $39: This smaller Hario manual grinder is slower than the Skerton, but its plastic construction makes it good to throw in a travel bag. The low price is its main advertisement. Cuisinart Burr Grinder for $75: At first, it seems like a good deal. It's Cuisinart, a known brand, and a conical burr grinder for less than $100! But former WIRED reviewer Jaina Grey found that the low price came with a cost: These things apparently burn out faster than a rock star in the late '60s. Bodum Bistro Electric Blade Grinder for $20: This little blade grinder is quite cheap, and the model has served WIRED contributing reviewer Tyler Shane for years. That said, after some inconsistent reports on reliability, we favor the KitchenAid as our ultra-budget pick. DmofwHi Cordless Grinder for $40: We used to recommend this cordless blade grinder for camping, largely because it can make 15 pots of French press without need of a recharge. It's out of stock as of February 2026, and we're monitoring to see whether it returns.
======================================== |
[SOURCE: https://en.wikipedia.org/wiki/Mars#cite_ref-2023GeoRL..5003482K_51-0] | [TOKENS: 11899] |
Mars
Mars is the fourth planet from the Sun. It is also known as the "Red Planet", for its orange-red appearance. Mars is a desert-like rocky planet with a tenuous atmosphere that is primarily carbon dioxide (CO2). At the average surface level the atmospheric pressure is a few thousandths of Earth's, atmospheric temperature ranges from −153 to 20 °C (−243 to 68 °F), and cosmic radiation is high. Mars retains some water, in the ground as well as thinly in the atmosphere, forming cirrus clouds, fog, frost, larger polar regions of permafrost and ice caps (with seasonal CO2 snow), but no bodies of liquid surface water. Its surface gravity is roughly a third of Earth's or double that of the Moon. Its diameter, 6,779 km (4,212 mi), is about half the Earth's, or twice the Moon's, and its surface area is the size of all the dry land of Earth. Fine dust is prevalent across the surface and the atmosphere, being picked up and spread at the low Martian gravity even by the weak wind of the tenuous atmosphere. The terrain of Mars roughly follows a north-south divide, the Martian dichotomy, with the northern hemisphere mainly consisting of relatively flat, low-lying plains, and the southern hemisphere of cratered highlands. Geologically, the planet is fairly active with marsquakes trembling underneath the ground, but it also hosts many enormous volcanoes that are extinct (the tallest is Olympus Mons, 21.9 km or 13.6 mi tall), as well as one of the largest canyons in the Solar System (Valles Marineris, 4,000 km or 2,500 mi long). Mars has two natural satellites that are small and irregular in shape: Phobos and Deimos. With a significant axial tilt of 25 degrees, Mars experiences seasons, like Earth (which has an axial tilt of 23.5 degrees). A Martian solar year is equal to 1.88 Earth years (687 Earth days), and a Martian solar day (sol) is equal to 24.6 hours. Mars formed along with the other planets approximately 4.5 billion years ago.
During the Martian Noachian period (4.5 to 3.5 billion years ago), its surface was marked by meteor impacts, valley formation, erosion, the possible presence of water oceans, and the loss of its magnetosphere. The Hesperian period (beginning 3.5 billion years ago and ending 3.3–2.9 billion years ago) was dominated by widespread volcanic activity and flooding that carved immense outflow channels. The Amazonian period, which continues to the present, dominates the geological processes seen today. Because of Mars's geological history, the possibility of past or present life on Mars remains an area of active scientific investigation, with some possible traces needing further examination. Being visible with the naked eye in Earth's sky as a red wandering star, Mars has been observed throughout history, acquiring diverse associations in different cultures. In 1963, the first flight to Mars took place with Mars 1, but communication was lost en route. The first successful flyby exploration of Mars was conducted in 1965 with Mariner 4. In 1971, Mariner 9 entered orbit around Mars, becoming the first spacecraft to orbit any body other than the Moon, Sun, or Earth; in the same year came the first uncontrolled impact (Mars 2) and the first successful landing (Mars 3) on Mars. Probes have been active on Mars continuously since 1997. At times, more than ten probes have simultaneously operated in orbit or on the surface, more than at any other planet beyond Earth. Mars is an often proposed target for future crewed exploration missions, though no such mission is currently planned.

Natural history

Scientists have theorized that during the Solar System's formation, Mars was created as the result of a random process of runaway accretion of material from the protoplanetary disk that orbited the Sun. Mars has many distinctive chemical features caused by its position in the Solar System.
Elements with comparatively low boiling points, such as chlorine, phosphorus, and sulfur, are much more common on Mars than on Earth; these elements were probably pushed outward by the young Sun's energetic solar wind. After the formation of the planets, the inner Solar System may have been subjected to the so-called Late Heavy Bombardment. About 60% of the surface of Mars shows a record of impacts from that era, whereas much of the remaining surface is probably underlain by immense impact basins caused by those events. However, more recent modeling has disputed the existence of the Late Heavy Bombardment. There is evidence of an enormous impact basin in the Northern Hemisphere of Mars, spanning 10,600 by 8,500 kilometres (6,600 by 5,300 mi), or roughly four times the size of the Moon's South Pole–Aitken basin, which would be the largest impact basin yet discovered if confirmed. It has been hypothesized that the basin was formed when Mars was struck by a Pluto-sized body about four billion years ago. The event, thought to be the cause of the Martian hemispheric dichotomy, created the smooth Borealis basin that covers 40% of the planet. A 2023 study shows evidence, based on the orbital inclination of Deimos (a small moon of Mars), that Mars may once have had a ring system 3.5 to 4 billion years ago. This ring system may have been formed from a moon, 20 times more massive than Phobos, orbiting Mars billions of years ago; Phobos would be a remnant of that ring. Epochs: the geological history of Mars can be split into many periods, but the three primary ones are the Noachian, Hesperian, and Amazonian, described above. Geological activity is still taking place on Mars. The Athabasca Valles is home to sheet-like lava flows created about 200 million years ago. Water flows in the grabens called the Cerberus Fossae occurred less than 20 million years ago, indicating equally recent volcanic intrusions. The Mars Reconnaissance Orbiter has captured images of avalanches.
Physical characteristics

Mars is approximately half the diameter of Earth, or twice that of the Moon, with a surface area only slightly less than the total area of Earth's dry land. Mars is less dense than Earth, having about 15% of Earth's volume and 11% of Earth's mass, resulting in about 38% of Earth's surface gravity. Mars is the only presently known example of a desert planet, a rocky planet with a surface akin to that of Earth's deserts. The red-orange appearance of the Martian surface is caused by iron(III) oxide (nanophase Fe2O3) and the iron(III) oxide-hydroxide mineral goethite. It can look like butterscotch; other common surface colors include golden, brown, tan, and greenish, depending on the minerals present. Like Earth, Mars is differentiated into a dense metallic core overlaid by less dense rocky layers. The outermost layer is the crust, which is on average about 42–56 kilometres (26–35 mi) thick, with a minimum thickness of 6 kilometres (3.7 mi) in Isidis Planitia and a maximum thickness of 117 kilometres (73 mi) in the southern Tharsis plateau. For comparison, Earth's crust averages 27.3 ± 4.8 km in thickness. The most abundant elements in the Martian crust are silicon, oxygen, iron, magnesium, aluminum, calcium, and potassium. Mars is confirmed to be seismically active; in 2019, it was reported that InSight had detected and recorded over 450 marsquakes and related events. Beneath the crust is a silicate mantle responsible for many of the tectonic and volcanic features on the planet's surface. The upper Martian mantle is a low-velocity zone, where the velocity of seismic waves is lower than in the surrounding depth intervals. The mantle appears to be rigid down to a depth of about 250 km, giving Mars a very thick lithosphere compared to Earth. Below this the mantle gradually becomes more ductile, and the seismic wave velocity begins to increase again.
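The figures above (about 11% of Earth's mass, roughly half Earth's diameter, 38% of Earth's surface gravity) are mutually consistent under Newtonian gravity, since g ∝ M/R². A quick sanity check; Earth's 12,742 km mean diameter is an assumed outside value, not from this article:

```python
# Surface gravity scales as g ∝ M / R^2.
# Mars/Earth mass ratio ~0.107 (the "11%" quoted is rounded),
# Mars diameter 6,779 km (from the article), Earth mean diameter
# 12,742 km (assumed value, not from the article).
mass_ratio = 0.107
radius_ratio = 6779 / 12742          # diameters cancel to a radius ratio
g_ratio = mass_ratio / radius_ratio ** 2
print(f"Mars surface gravity ≈ {g_ratio:.0%} of Earth's")  # ≈ 38%
```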
The Martian mantle does not appear to have a thermally insulating layer analogous to Earth's lower mantle; instead, below 1050 km in depth, it becomes mineralogically similar to Earth's transition zone. At the bottom of the mantle lies a basal liquid silicate layer approximately 150–180 km thick. The Martian mantle appears to be highly heterogeneous, with dense fragments up to 4 km across, likely injected deep into the planet by colossal impacts ~4.5 billion years ago; high-frequency waves from eight marsquakes slowed as they passed these localized regions, and modeling indicates the heterogeneities are compositionally distinct debris preserved because Mars lacks plate tectonics and has a sluggishly convecting interior that prevents complete homogenization. Mars's iron and nickel core is at least partially molten, and may have a solid inner core. It is around half of Mars's radius, approximately 1650–1675 km, and is enriched in light elements such as sulfur, oxygen, carbon, and hydrogen. The temperature of the core is estimated to be 2000–2400 K, compared to 5400–6230 K for Earth's solid inner core. In 2025, based on data from the InSight lander, a group of researchers reported the detection of a solid inner core 613 ± 67 kilometres (381 ± 42 mi) in radius. Mars is a terrestrial planet with a surface that consists of minerals containing silicon and oxygen, metals, and other elements that typically make up rock. The Martian surface is primarily composed of tholeiitic basalt, although parts are more silica-rich than typical basalt and may be similar to andesitic rocks on Earth, or silica glass. Regions of low albedo suggest concentrations of plagioclase feldspar, with northern low albedo regions displaying higher than normal concentrations of sheet silicates and high-silicon glass. Parts of the southern highlands include detectable amounts of high-calcium pyroxenes. Localized concentrations of hematite and olivine have been found.
Much of the surface is deeply covered by finely grained iron(III) oxide dust. The Phoenix lander returned data showing Martian soil to be slightly alkaline and containing elements such as magnesium, sodium, potassium, and chlorine. These nutrients are found in soils on Earth and are necessary for plant growth. Experiments performed by the lander showed that the Martian soil has a basic pH of 7.7 and contains 0.6% perchlorate by weight, concentrations that are toxic to humans. Streaks are common across Mars, and new ones appear frequently on steep slopes of craters, troughs, and valleys. The streaks are dark at first and get lighter with age. They can start in a tiny area, then spread out for hundreds of metres, and have been seen to follow the edges of boulders and other obstacles in their path. The commonly accepted hypotheses include that they are dark underlying layers of soil revealed after avalanches of bright dust or dust devils. Several other explanations have been put forward, including those that involve water or even the growth of organisms. Environmental radiation levels on the surface average 0.64 millisieverts per day, significantly less than the 1.84 millisieverts per day, or 22 millirads per day, experienced during the flight to and from Mars. For comparison, the radiation levels in low Earth orbit, where Earth's space stations orbit, are around 0.5 millisieverts per day. Hellas Planitia has the lowest surface radiation at about 0.342 millisieverts per day, featuring lava tubes southwest of Hadriacus Mons with levels potentially as low as 0.064 millisieverts per day, comparable to radiation levels during flights on Earth. Although Mars has no evidence of a structured global magnetic field, observations show that parts of the planet's crust have been magnetized, suggesting that alternating polarity reversals of its dipole field have occurred in the past.
This paleomagnetism of magnetically susceptible minerals is similar to the alternating bands found on Earth's ocean floors. One hypothesis, published in 1999 and re-examined in October 2005 (with the help of the Mars Global Surveyor), is that these bands suggest plate tectonic activity on Mars four billion years ago, before the planetary dynamo ceased to function and the planet's magnetic field faded.

Geography and features

Although better remembered for mapping the Moon, Johann Heinrich von Mädler and Wilhelm Beer were the first areographers. They began by establishing that most of Mars's surface features were permanent and by more precisely determining the planet's rotation period. In 1840, Mädler combined ten years of observations and drew the first map of Mars. Features on Mars are named from a variety of sources. Albedo features are named for classical mythology. Craters larger than roughly 50 km are named for deceased scientists, writers, and others who have contributed to the study of Mars. Smaller craters are named for towns and villages of the world with populations of less than 100,000. Large valleys are named for the word "Mars" or "star" in various languages; smaller valleys are named for rivers. Large albedo features retain many of the older names but are often updated to reflect new knowledge of the nature of the features. For example, Nix Olympica (the snows of Olympus) has become Olympus Mons (Mount Olympus). The surface of Mars as seen from Earth is divided into two kinds of areas with differing albedo. The paler plains covered with dust and sand rich in reddish iron oxides were once thought of as Martian "continents" and given names like Arabia Terra (land of Arabia) or Amazonis Planitia (Amazonian plain). The dark features were thought to be seas, hence their names Mare Erythraeum, Mare Sirenum, and Aurorae Sinus. The largest dark feature seen from Earth is Syrtis Major Planum. The permanent northern polar ice cap is named Planum Boreum.
The southern cap is called Planum Australe. Mars's equator is defined by its rotation, but the location of its Prime Meridian was specified, as was Earth's (at Greenwich), by choice of an arbitrary point; Mädler and Beer selected a line for their first maps of Mars in 1830. After the spacecraft Mariner 9 provided extensive imagery of Mars in 1972, a small crater (later called Airy-0), located in the Sinus Meridiani ("Middle Bay" or "Meridian Bay"), was chosen by Merton E. Davies, Harold Masursky, and Gérard de Vaucouleurs for the definition of 0.0° longitude to coincide with the original selection. Because Mars has no oceans, and hence no "sea level", a zero-elevation surface had to be selected as a reference level; this is called the areoid of Mars, analogous to the terrestrial geoid. Zero altitude was defined by the height at which there is 610.5 Pa (6.105 mbar) of atmospheric pressure. This pressure corresponds to the triple point of water, and it is about 0.6% of the sea level surface pressure on Earth (0.006 atm). For mapping purposes, the United States Geological Survey divides the surface of Mars into thirty cartographic quadrangles, each named for a classical albedo feature it contains. In April 2023, The New York Times reported an updated global map of Mars based on images from the Hope spacecraft. A related, but much more detailed, global Mars map was released by NASA on 16 April 2023. The vast upland region Tharsis contains several massive volcanoes, which include the shield volcano Olympus Mons. The edifice is over 600 km (370 mi) wide. Because the mountain is so large, with complex structure at its edges, giving a definite height to it is difficult. Its local relief, from the foot of the cliffs which form its northwest margin to its peak, is over 21 km (13 mi), a little over twice the height of Mauna Kea as measured from its base on the ocean floor. 
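The 0.6% figure for the zero-elevation reference pressure quoted above follows directly from the two pressures given (610.5 Pa on Mars against Earth's standard 1 atm):

```python
# Mars's zero-altitude ("areoid") reference pressure vs Earth sea level.
p_mars_ref = 610.5      # Pa, pressure defining zero elevation on Mars
p_earth_sl = 101325     # Pa, standard Earth sea-level pressure (1 atm)
ratio = p_mars_ref / p_earth_sl
print(f"{ratio:.3%} of Earth sea-level pressure")  # ≈ 0.6%
```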
The total elevation change from the plains of Amazonis Planitia, over 1,000 km (620 mi) to the northwest, to the summit approaches 26 km (16 mi), roughly three times the height of Mount Everest, which in comparison stands at just over 8.8 kilometres (5.5 mi). Consequently, Olympus Mons is either the tallest or second-tallest mountain in the Solar System; the only known mountain which might be taller is the Rheasilvia peak on the asteroid Vesta, at 20–25 km (12–16 mi). The dichotomy of Martian topography is striking: northern plains flattened by lava flows contrast with the southern highlands, pitted and cratered by ancient impacts. It is possible that, four billion years ago, the Northern Hemisphere of Mars was struck by an object one-tenth to two-thirds the size of Earth's Moon. If this is the case, the Northern Hemisphere of Mars would be the site of an impact crater 10,600 by 8,500 kilometres (6,600 by 5,300 mi) in size, or roughly the area of Europe, Asia, and Australia combined, surpassing Utopia Planitia and the Moon's South Pole–Aitken basin as the largest impact crater in the Solar System. Mars is scarred by 43,000 impact craters with a diameter of 5 kilometres (3.1 mi) or greater. The largest exposed crater is Hellas, which is 2,300 kilometres (1,400 mi) wide and 7,000 metres (23,000 ft) deep, and is a light albedo feature clearly visible from Earth. There are other notable impact features, such as Argyre, which is around 1,800 kilometres (1,100 mi) in diameter, and Isidis, which is around 1,500 kilometres (930 mi) in diameter. Due to the smaller mass and size of Mars, the probability of an object colliding with the planet is about half that of Earth. Mars is located closer to the asteroid belt, so it has an increased chance of being struck by materials from that source. Mars is more likely to be struck by short-period comets, i.e., those that lie within the orbit of Jupiter. 
Martian craters can have a morphology that suggests the ground became wet after the meteor impact. The large canyon Valles Marineris (Latin for 'Mariner Valleys', also known as Agathodaemon in the old canal maps) has a length of 4,000 kilometres (2,500 mi) and a depth of up to 7 kilometres (4.3 mi). The length of Valles Marineris is equivalent to the length of Europe, and it extends across one-fifth the circumference of Mars. By comparison, the Grand Canyon on Earth is only 446 kilometres (277 mi) long and nearly 2 kilometres (1.2 mi) deep. Valles Marineris was formed due to the swelling of the Tharsis area, which caused the crust in the area of Valles Marineris to collapse. In 2012, it was proposed that Valles Marineris is not just a graben, but a plate boundary where 150 kilometres (93 mi) of transverse motion has occurred, possibly making Mars a planet with a two-plate tectonic arrangement. Images from the Thermal Emission Imaging System (THEMIS) aboard NASA's Mars Odyssey orbiter have revealed seven possible cave entrances on the flanks of the volcano Arsia Mons. The caves, named after loved ones of their discoverers, are collectively known as the "seven sisters". Cave entrances measure from 100 to 252 metres (328 to 827 ft) wide, and they are estimated to be at least 73 to 96 metres (240 to 315 ft) deep. Because light does not reach the floor of most of the caves, they may extend much deeper than these lower estimates and widen below the surface. "Dena" is the only exception; its floor is visible and was measured to be 130 metres (430 ft) deep. The interiors of these caverns may be protected from the micrometeoroids, UV radiation, solar flares, and high-energy particles that bombard the planet's surface. Martian geysers (or CO2 jets) are putative sites of small gas and dust eruptions that occur in the south polar region of Mars during the spring thaw.
"Dark dune spots" and "spiders" – or araneiforms – are the two most visible types of features ascribed to these eruptions. Dust of a given size settles from the thinner Martian atmosphere sooner than it would on Earth. For example, the dust suspended by the 2001 global dust storms on Mars remained in the Martian atmosphere for only 0.6 years, while the dust from Mount Pinatubo took about two years to settle. However, under current Martian conditions, the mass movements involved are generally much smaller than on Earth. Even the 2001 global dust storms moved only the equivalent of a very thin dust layer – about 3 μm thick if deposited with uniform thickness between 58° north and south of the equator. Dust deposition at the two rover sites has proceeded at a rate of about the thickness of a grain every 100 sols.

Atmosphere

Mars lost its magnetosphere 4 billion years ago, possibly because of numerous asteroid strikes, so the solar wind interacts directly with the Martian ionosphere, lowering the atmospheric density by stripping away atoms from the outer layer. Both Mars Global Surveyor and Mars Express have detected ionized atmospheric particles trailing off into space behind Mars, and this atmospheric loss is being studied by the MAVEN orbiter. Compared to Earth, the atmosphere of Mars is quite rarefied. Atmospheric pressure on the surface today ranges from a low of 30 Pa (0.0044 psi) on Olympus Mons to over 1,155 Pa (0.1675 psi) in Hellas Planitia, with a mean pressure at the surface level of 600 Pa (0.087 psi). The highest atmospheric density on Mars is equal to that found 35 kilometres (22 mi) above Earth's surface. The resulting mean surface pressure is only 0.6% of Earth's 101.3 kPa (14.69 psi). The scale height of the atmosphere is about 10.8 kilometres (6.7 mi), which is higher than Earth's 6 kilometres (3.7 mi), because the surface gravity of Mars is only about 38% of Earth's.
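The quoted scale height of about 10.8 km can be recovered from the isothermal scale-height formula H = RT/(Mg); this is only a sketch, and the ~210 K mean atmospheric temperature used below is an assumed typical Martian value, not from this article:

```python
# Isothermal atmospheric scale height: H = R*T / (M*g)
R = 8.314        # J/(mol*K), universal gas constant
T = 210.0        # K, assumed mean Martian atmospheric temperature
M = 0.04401      # kg/mol, molar mass of CO2 (the dominant gas, ~96%)
g = 3.71         # m/s^2, Martian surface gravity (~38% of Earth's)
H = R * T / (M * g)
print(f"Scale height ≈ {H / 1000:.1f} km")  # close to the quoted 10.8 km
```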
The atmosphere of Mars consists of about 96% carbon dioxide, 1.93% argon, and 1.89% nitrogen, along with traces of oxygen and water. The atmosphere is quite dusty, containing particulates about 1.5 μm in diameter which give the Martian sky a tawny color when seen from the surface. It may take on a pink hue due to iron oxide particles suspended in it. Despite repeated detections of methane on Mars, there is no scientific consensus as to its origin. One suggestion is that methane exists on Mars and that its concentration fluctuates seasonally. The methane could be produced by a non-biological process such as serpentinization, involving water, carbon dioxide, and the mineral olivine, which is known to be common on Mars, or by Martian life. Compared to Earth, the higher concentration of atmospheric CO2 and lower surface pressure may be why sound is attenuated more on Mars, where natural sources are rare apart from the wind. Using acoustic recordings collected by the Perseverance rover, researchers concluded that the speed of sound there is approximately 240 m/s for frequencies below 240 Hz, and 250 m/s for those above. Auroras have been detected on Mars. Because Mars lacks a global magnetic field, the types and distribution of auroras there differ from those on Earth; rather than being mostly restricted to polar regions as is the case on Earth, a Martian aurora can encompass the planet. In September 2017, NASA reported that radiation levels on the surface of Mars had temporarily doubled, associated with an aurora 25 times brighter than any observed earlier, due to a massive and unexpected solar storm in the middle of the month. Mars has seasons, alternating between its northern and southern hemispheres, similar to Earth's.
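The ~240 m/s speed of sound measured by Perseverance is in the ballpark of a simple ideal-gas estimate, c = √(γRT/M), for a CO2 atmosphere. The heat-capacity ratio and temperature below are assumed textbook values, not figures from this article:

```python
import math

# Ideal-gas speed of sound: c = sqrt(gamma * R * T / M)
gamma = 1.3      # heat-capacity ratio of CO2 (assumed, roughly 1.28-1.3)
R = 8.314        # J/(mol*K), universal gas constant
T = 240.0        # K, assumed near-surface temperature
M = 0.04401     # kg/mol, molar mass of CO2
c = math.sqrt(gamma * R * T / M)
print(f"Speed of sound ≈ {c:.0f} m/s")  # same ballpark as the measured ~240 m/s
```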
Additionally, the orbit of Mars has, compared to Earth's, a large eccentricity: Mars approaches perihelion when it is summer in its southern hemisphere and winter in its northern, and aphelion when it is winter in its southern hemisphere and summer in its northern. As a result, the seasons in its southern hemisphere are more extreme and the seasons in its northern are milder than would otherwise be the case. The summer temperatures in the south can be warmer than the equivalent summer temperatures in the north by up to 30 °C (54 °F). Martian surface temperatures vary from lows of about −110 °C (−166 °F) to highs of up to 35 °C (95 °F) in equatorial summer. The wide range in temperatures is due to the thin atmosphere, which cannot store much solar heat, the low atmospheric pressure (about 1% that of Earth's atmosphere), and the low thermal inertia of Martian soil. The planet is 1.52 times as far from the Sun as Earth, resulting in just 43% of the amount of sunlight. Mars has the largest dust storms in the Solar System, reaching speeds of over 160 km/h (100 mph). These can vary from a storm over a small area to gigantic storms that cover the entire planet. They tend to occur when Mars is closest to the Sun, and have been shown to increase the global temperature. The seasons also deposit a covering of dry ice on the polar ice caps.

Hydrology

Mars contains substantial amounts of water, but most of it is dust-covered water ice at the Martian polar ice caps. The volume of water ice in the south polar ice cap, if melted, would be enough to cover most of the surface of the planet to a depth of 11 metres (36 ft). Water in its liquid form cannot persist on the surface due to Mars's low atmospheric pressure, which is less than 1% of Earth's. Only at the lowest elevations are the pressure and temperature high enough for liquid water to exist for short periods.
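The statement earlier in this section that 1.52 times the Sun-Earth distance yields just 43% of the sunlight is the inverse-square law at work:

```python
# Solar irradiance falls off with the square of distance from the Sun.
distance_au = 1.52                      # Mars's mean distance in Earth-distances
sunlight_fraction = 1 / distance_au ** 2
print(f"Sunlight at Mars ≈ {sunlight_fraction:.0%} of Earth's")  # ≈ 43%
```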
Although little water is present in the atmosphere, there is enough to produce clouds of water ice, as well as occasional snow and frost, often mixed with snow of carbon dioxide dry ice. Landforms visible on Mars strongly suggest that liquid water has existed on the planet's surface. Huge linear swathes of scoured ground, known as outflow channels, cut across the surface in about 25 places. These are thought to be a record of erosion caused by the catastrophic release of water from subsurface aquifers, though some of these structures have been hypothesized to result from the action of glaciers or lava. One of the larger examples, Ma'adim Vallis, is 700 kilometres (430 mi) long, much greater than the Grand Canyon, with a width of 20 kilometres (12 mi) and a depth of 2 kilometres (1.2 mi) in places. It is thought to have been carved by flowing water early in Mars's history. The youngest of these channels is thought to have formed only a few million years ago. Elsewhere, particularly on the oldest areas of the Martian surface, finer-scale, dendritic networks of valleys are spread across significant proportions of the landscape. Features of these valleys and their distribution strongly imply that they were carved by runoff resulting from precipitation in early Mars history. Subsurface water flow and groundwater sapping may play important subsidiary roles in some networks, but precipitation was probably the root cause of the incision in almost all cases. Along crater and canyon walls, there are thousands of features that appear similar to terrestrial gullies. The gullies tend to be in the highlands of the Southern Hemisphere and to face the Equator; all are poleward of 30° latitude. A number of authors have suggested that their formation process involves liquid water, probably from melting ice, although others have argued for formation mechanisms involving carbon dioxide frost or the movement of dry dust.
No partially degraded gullies have formed by weathering and no superimposed impact craters have been observed, indicating that these are young features, possibly still active. Other geological features, such as deltas and alluvial fans preserved in craters, are further evidence for warmer, wetter conditions at an interval or intervals in earlier Mars history. Such conditions necessarily require the widespread presence of crater lakes across a large proportion of the surface, for which there is independent mineralogical, sedimentological and geomorphological evidence. Further evidence that liquid water once existed on the surface of Mars comes from the detection of specific minerals such as hematite and goethite, both of which sometimes form in the presence of water. The chemical signature of water vapor on Mars was first unequivocally demonstrated in 1963 by spectroscopy using an Earth-based telescope. In 2004, Opportunity detected the mineral jarosite. This forms only in the presence of acidic water, showing that water once existed on Mars. The Spirit rover found concentrated deposits of silica in 2007 that indicated wet conditions in the past, and in December 2011, the mineral gypsum, which also forms in the presence of water, was found on the surface by NASA's Mars rover Opportunity. It is estimated that the amount of water in the upper mantle of Mars, represented by hydroxyl ions contained within Martian minerals, is equal to or greater than that of Earth at 50–300 parts per million of water, which is enough to cover the entire planet to a depth of 200–1,000 metres (660–3,280 ft). On 18 March 2013, NASA reported evidence from instruments on the Curiosity rover of mineral hydration, likely hydrated calcium sulfate, in several rock samples including the broken fragments of "Tintina" rock and "Sutton Inlier" rock as well as in veins and nodules in other rocks like "Knorr" rock and "Wernicke" rock. 
Analysis using the rover's DAN instrument provided evidence of subsurface water, amounting to as much as 4% water content, down to a depth of 60 centimetres (24 in), during the rover's traverse from the Bradbury Landing site to the Yellowknife Bay area in the Glenelg terrain. In September 2015, NASA announced that it had found strong evidence of hydrated brine flows in recurring slope lineae, based on spectrometer readings of the darkened areas of slopes. These streaks flow downhill in Martian summer, when the temperature is above −23 °C, and freeze at lower temperatures. The observations supported earlier hypotheses, based on timing of formation and rate of growth, that these dark streaks resulted from water flowing just below the surface. However, later work suggested that the lineae may instead be dry, granular flows, with at most a limited role for water in initiating the process. A definitive conclusion about the presence, extent, and role of liquid water on the Martian surface remains elusive. Researchers suspect much of the low northern plains of the planet were covered with an ocean hundreds of meters deep, though this theory remains controversial. In March 2015, scientists stated that such an ocean might have been the size of Earth's Arctic Ocean. This finding was derived from the ratio of deuterium to protium (D/H) in the modern Martian atmosphere compared to that ratio on Earth. The Martian deuterium abundance (D/H = (9.3 ± 1.7) × 10⁻⁴) is five to seven times that of Earth (D/H = 1.56 × 10⁻⁴), suggesting that ancient Mars had significantly higher levels of water. Results from the Curiosity rover had previously found a high ratio of deuterium in Gale Crater, though not significantly high enough to suggest the former presence of an ocean. Other scientists caution that these results have not been confirmed, and point out that Martian climate models have not yet shown that the planet was warm enough in the past to support bodies of liquid water.
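The "five to seven times" enrichment quoted above is simply the ratio of the two D/H values, with the ±1.7 uncertainty on the Martian measurement setting the range:

```python
# Deuterium enrichment of the Martian atmosphere relative to Earth.
dh_mars = 9.3e-4        # D/H on Mars, quoted uncertainty ±1.7e-4
dh_mars_err = 1.7e-4
dh_earth = 1.56e-4      # D/H on Earth
low = (dh_mars - dh_mars_err) / dh_earth
high = (dh_mars + dh_mars_err) / dh_earth
print(f"Enrichment: {low:.1f}x to {high:.1f}x Earth's D/H")  # ≈ 4.9x to 7.1x
```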
Near the northern polar cap is the 81.4-kilometre (50.6 mi) wide Korolev Crater, which the Mars Express orbiter found to be filled with approximately 2,200 cubic kilometres (530 cu mi) of water ice. In November 2016, NASA reported finding a large amount of underground ice in the Utopia Planitia region. The volume of water detected has been estimated to be equivalent to the volume of water in Lake Superior (which is 12,100 cubic kilometres). During observations from 2018 through 2021, the ExoMars Trace Gas Orbiter spotted indications of water, probably subsurface ice, in the Valles Marineris canyon system.

Orbital motion

Mars's average distance from the Sun is roughly 230 million km (143 million mi), and its orbital period is 687 (Earth) days. The solar day (or sol) on Mars is only slightly longer than an Earth day: 24 hours, 39 minutes, and 35.244 seconds. A Martian year is equal to 1.8809 Earth years, or 1 year, 320 days, and 18.2 hours. The gravitational potential difference, and thus the delta-v needed to transfer between Mars and Earth, is the second lowest of any planet relative to Earth. The axial tilt of Mars is 25.19° relative to its orbital plane, which is similar to the axial tilt of Earth. As a result, Mars has seasons like Earth, though on Mars they are nearly twice as long because its orbital period is that much longer. In the present day, the orientation of the north pole of Mars is close to the star Deneb. Mars has a relatively pronounced orbital eccentricity of about 0.09; of the seven other planets in the Solar System, only Mercury has a larger orbital eccentricity. It is known that in the past, Mars has had a much more circular orbit. At one point, 1.35 million Earth years ago, Mars had an eccentricity of roughly 0.002, much less than that of Earth today. Mars's cycle of eccentricity is 96,000 Earth years, compared to Earth's cycle of 100,000 years. Mars makes its closest approach to Earth (around opposition) once per synodic period of 779.94 days.
It should not be confused with Mars conjunction, where the Earth and Mars are at opposite sides of the Solar System and form a straight line crossing the Sun. The average time between the successive oppositions of Mars, its synodic period, is 780 days; but the number of days between successive oppositions can range from 764 to 812. The distance at close approach varies between about 54 and 103 million km (34 and 64 million mi) due to the planets' elliptical orbits, which causes comparable variation in angular size. At their furthest Mars and Earth can be as far as 401 million km (249 million mi) apart. Mars comes into opposition from Earth every 2.1 years. The planets come into opposition near Mars's perihelion in 2003, 2018 and 2035, with the 2020 and 2033 events being particularly close to perihelic opposition. The mean apparent magnitude of Mars is +0.71 with a standard deviation of 1.05. Because the orbit of Mars is eccentric, the magnitude at opposition from the Sun can range from about −3.0 to −1.4. The minimum brightness is magnitude +1.86 when the planet is near aphelion and in conjunction with the Sun. At its brightest, Mars (along with Jupiter) is second only to Venus in apparent brightness. Mars usually appears distinctly yellow, orange, or red. When farthest away from Earth, it is more than seven times farther away than when it is closest. Mars is usually close enough for particularly good viewing once or twice at 15-year or 17-year intervals. Optical ground-based telescopes are typically limited to resolving features about 300 kilometres (190 mi) across when Earth and Mars are closest because of Earth's atmosphere. As Mars approaches opposition, it begins a period of retrograde motion, which means it will appear to move backwards in a looping curve with respect to the background stars. This retrograde motion lasts for about 72 days, and Mars reaches its peak apparent brightness in the middle of this interval. 
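The roughly 780-day synodic period discussed above follows from the two orbital periods via 1/S = 1/T_Earth − 1/T_Mars. Earth's sidereal year of 365.256 days is an assumed outside value, not from this article:

```python
# Synodic period of an outer planet: 1/S = 1/T_inner - 1/T_outer
t_earth = 365.256    # days, Earth's sidereal year (assumed value)
t_mars = 686.98      # days, Mars's orbital period (~687 days per the article)
synodic = 1 / (1 / t_earth - 1 / t_mars)
print(f"Synodic period ≈ {synodic:.1f} days")  # ≈ 780 days
```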
Moons Mars has two relatively small (compared to Earth's) natural moons, Phobos (about 22 km (14 mi) in diameter) and Deimos (about 12 km (7.5 mi) in diameter), which orbit at 9,376 km (5,826 mi) and 23,460 km (14,580 mi) around the planet. The origin of both moons is unclear, although a popular theory states that they were asteroids captured into Martian orbit. Both satellites were discovered in 1877 by Asaph Hall and were named after the characters Phobos (the deity of panic and fear) and Deimos (the deity of terror and dread), twins from Greek mythology who accompanied their father Ares, god of war, into battle. Mars was the Roman equivalent to Ares. In modern Greek, the planet retains its ancient name Ares (Aris: Άρης). From the surface of Mars, the motions of Phobos and Deimos appear different from that of the Earth's satellite, the Moon. Phobos rises in the west, sets in the east, and rises again in just 11 hours. Deimos, being only just outside synchronous orbit – where the orbital period would match the planet's period of rotation – rises as expected in the east, but slowly. Because the orbit of Phobos is below a synchronous altitude, tidal forces from Mars are gradually lowering its orbit. In about 50 million years, it could either crash into Mars's surface or break up into a ring structure around the planet. The origin of the two satellites is not well understood. Their low albedo and carbonaceous chondrite composition have been regarded as similar to asteroids, supporting a capture theory. The unstable orbit of Phobos would seem to point toward a relatively recent capture. But both have circular orbits near the equator, which is unusual for captured objects, and the required capture dynamics are complex. Accretion early in the history of Mars is plausible, but would not account for a composition resembling asteroids rather than Mars itself, if that is confirmed. 
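The claim that Phobos orbits below, and Deimos just above, the synchronous altitude can be checked with Kepler's third law. A minimal sketch, assuming standard values for Mars's gravitational parameter and sidereal rotation period (neither is given in the text):

```python
import math

# Areosynchronous orbit radius from Kepler's third law:
#   a = (GM * T^2 / (4 * pi^2))^(1/3)
GM_MARS = 4.2828e13   # m^3/s^2, gravitational parameter of Mars (assumed standard value)
T_ROT = 88_642.7      # s, Mars's sidereal rotation period, ~24 h 37 min (assumed)

a_sync = (GM_MARS * T_ROT**2 / (4 * math.pi**2)) ** (1 / 3)
print(f"synchronous orbit radius ~ {a_sync / 1e3:,.0f} km")  # ~20,400 km

# Phobos (9,376 km) is well below this radius, so tidal drag lowers its orbit;
# Deimos (23,460 km) sits just above it, consistent with the text.
```

Because Phobos orbits faster than Mars rotates, the tidal bulge it raises lags behind it and drains orbital energy, which is why its orbit decays; Deimos, just outside the synchronous radius, very slowly recedes instead.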
Mars may have yet-undiscovered moons, smaller than 50 to 100 metres (160 to 330 ft) in diameter, and a dust ring is predicted to exist between Phobos and Deimos. A third possibility for their origin as satellites of Mars is the involvement of a third body or a type of impact disruption. More-recent lines of evidence for Phobos having a highly porous interior, and suggesting a composition containing mainly phyllosilicates and other minerals known from Mars, point toward an origin of Phobos from material ejected by an impact on Mars that reaccreted in Martian orbit, similar to the prevailing theory for the origin of Earth's satellite. Although the visible and near-infrared (VNIR) spectra of the moons of Mars resemble those of outer-belt asteroids, the thermal infrared spectra of Phobos are reported to be inconsistent with chondrites of any class. It is also possible that Phobos and Deimos were fragments of an older moon, formed by debris from a large impact on Mars, and then destroyed by a more recent impact upon the satellite. More recently, a study conducted by a team of researchers from multiple countries suggests that a lost moon, at least fifteen times the size of Phobos, may have existed in the past. Analysis of rocks that record tidal processes on the planet suggests that those tides may have been driven by such a moon. Human observations and exploration The history of observations of Mars is marked by oppositions of Mars, when the planet is closest to Earth and hence is most easily visible, which occur every couple of years. Even more notable are the perihelic oppositions of Mars, which are distinguished because Mars is close to perihelion, making it even closer to Earth. The ancient Sumerians named Mars Nergal, the god of war and plague. During Sumerian times, Nergal was a minor deity of little significance, but, during later times, his main cult center was the city of Nineveh.
In Mesopotamian texts, Mars is referred to as the "star of judgement of the fate of the dead". The existence of Mars as a wandering object in the night sky was also recorded by the ancient Egyptian astronomers and, by 1534 BCE, they were familiar with the retrograde motion of the planet. By the period of the Neo-Babylonian Empire, the Babylonian astronomers were making regular records of the positions of the planets and systematic observations of their behavior. For Mars, they knew that the planet made 37 synodic periods, or 42 circuits of the zodiac, every 79 years. They invented arithmetic methods for making minor corrections to the predicted positions of the planets. In Ancient Greece, the planet was known as Πυρόεις (Pyroeis, "fiery"), though the Greeks more commonly called it Ares, after their god of war. It was the Romans who named the planet Mars, for their own god of war, often represented by the sword and shield of the planet's namesake. In the fourth century BCE, Aristotle noted that Mars disappeared behind the Moon during an occultation, indicating that the planet was farther away. Ptolemy, a Greek living in Alexandria, attempted to address the problem of the orbital motion of Mars. Ptolemy's model and his collective work on astronomy were presented in the multi-volume collection later called the Almagest (from the Arabic for "greatest"), which became the authoritative treatise on Western astronomy for the next fourteen centuries. Literature from ancient China confirms that Mars was known by Chinese astronomers by no later than the fourth century BCE. In the East Asian cultures, Mars is traditionally referred to as the "fire star" (火星) based on the Wuxing system. In 1609, Johannes Kepler published a ten-year study of the orbit of Mars, using the diurnal parallax of Mars, measured by Tycho Brahe, to make a preliminary calculation of the relative distance to the planet.
From Brahe's observations of Mars, Kepler deduced that the planet orbited the Sun not in a circle, but in an ellipse. Moreover, Kepler showed that Mars sped up as it approached the Sun and slowed down as it moved farther away, in a manner that later physicists would explain as a consequence of the conservation of angular momentum. In 1610, Italian astronomer Galileo Galilei became the first to observe Mars through a telescope. The diurnal parallax of Mars was later measured telescopically in an effort to determine the Sun-Earth distance; this was first performed by Giovanni Domenico Cassini in 1672. The early parallax measurements were hampered by the quality of the instruments. The only occultation of Mars by Venus observed was that of 13 October 1590, seen by Michael Maestlin at Heidelberg. By the 19th century, the resolution of telescopes reached a level sufficient for surface features to be identified. On 5 September 1877, a perihelic opposition of Mars occurred. The Italian astronomer Giovanni Schiaparelli used a 22-centimetre (8.7 in) telescope in Milan to help produce the first detailed map of Mars. These maps notably contained features he called canali, which, with the possible exception of the natural canyon Valles Marineris, were later shown to be an optical illusion. These canali were supposedly long, straight lines on the surface of Mars, to which he gave the names of famous rivers on Earth. His term, which means "channels" or "grooves", was popularly mistranslated in English as "canals". Influenced by the observations, the orientalist Percival Lowell founded an observatory which had 30- and 45-centimetre (12- and 18-in) telescopes. The observatory was used for the exploration of Mars during the last good opportunity in 1894, and the following less favorable oppositions. He published several books on Mars and life on the planet, which had a great influence on the public.
The canali were independently observed by other astronomers, like Henri Joseph Perrotin and Louis Thollon in Nice, using one of the largest telescopes of that time. The seasonal changes (consisting of the diminishing of the polar caps and the dark areas formed during Martian summers) in combination with the canals led to speculation about life on Mars, and it was a long-held belief that Mars contained vast seas and vegetation. As bigger telescopes were used, fewer long, straight canali were observed. During observations in 1909 by Antoniadi with an 84-centimetre (33 in) telescope, irregular patterns were observed, but no canali were seen. The first spacecraft sent from Earth to Mars was the Soviet Union's Mars 1, intended to fly by in 1963, but contact was lost en route. NASA's Mariner 4 followed and became the first spacecraft to successfully transmit from Mars; launched on 28 November 1964, it made its closest approach to the planet on 15 July 1965. Mariner 4 detected the weak Martian radiation belt, measured at about 0.1% that of Earth, and captured the first images of another planet from deep space. Once spacecraft visited the planet during the 1960s and 1970s, many previous conceptions of Mars were radically overturned. After the results of the Viking life-detection experiments, the hypothesis of a dead planet was generally accepted. The data from Mariner 9 and Viking allowed better maps of Mars to be made. Between Viking 1's shutdown in 1982 and 1997, Mars was visited only by three unsuccessful probes: two flew past without making contact (Phobos 1, 1988; Mars Observer, 1993), and one (Phobos 2, 1989) malfunctioned in orbit before reaching its destination, the moon Phobos. In 1997, Mars Pathfinder became the first successful rover mission beyond the Moon and, together with Mars Global Surveyor (operated until late 2006), began an uninterrupted active robotic presence at Mars that has lasted until today.
It produced complete, extremely detailed maps of the Martian topography, magnetic field and surface minerals. Starting with these missions, a range of new, improved crewless spacecraft, including orbiters, landers, and rovers, have been sent to Mars, with successful missions by NASA (United States), JAXA (Japan), ESA (Europe), the United Kingdom, ISRO (India), Roscosmos (Russia), the United Arab Emirates, and CNSA (China) to study the planet's surface, climate, and geology, uncovering the history and dynamics of the hydrosphere of Mars and possible traces of ancient life. As of 2023, Mars is host to nine functioning spacecraft. Seven are in orbit: 2001 Mars Odyssey, Mars Express, Mars Reconnaissance Orbiter, MAVEN, ExoMars Trace Gas Orbiter, the Hope orbiter, and the Tianwen-1 orbiter. Another two are on the surface: the Mars Science Laboratory Curiosity rover and the Perseverance rover. Collected maps are available online at websites including Google Mars. NASA provides two online tools: Mars Trek, which provides visualizations of the planet using data from 50 years of exploration, and Experience Curiosity, which simulates traveling on Mars in 3-D with Curiosity. Several further missions to Mars are planned. As of February 2024, debris from Mars missions amounted to over seven tons, most of it consisting of crashed and inactive spacecraft as well as discarded components. In April 2024, NASA selected several companies to begin studies on providing commercial services to further enable robotic science on Mars. Key areas include establishing telecommunications, payload delivery and surface imaging. Habitability and habitation During the late 19th century, it was widely accepted in the astronomical community that Mars had life-supporting qualities, including the presence of oxygen and water. However, in 1894 W. W.
Campbell at Lick Observatory observed the planet and found that "if water vapor or oxygen occur in the atmosphere of Mars it is in quantities too small to be detected by spectroscopes then available". That observation contradicted many of the measurements of the time and was not widely accepted. Campbell and V. M. Slipher repeated the study in 1909 using better instruments, but with the same results. It was not until the findings were confirmed by W. S. Adams in 1925 that the myth of the Earth-like habitability of Mars was finally broken. However, even in the 1960s, articles were published on Martian biology, putting aside explanations other than life for the seasonal changes on Mars. The current understanding of planetary habitability – the ability of a world to develop environmental conditions favorable to the emergence of life – favors planets that have liquid water on their surface. Most often this requires the orbit of a planet to lie within the habitable zone, which for the Sun is estimated to extend from within the orbit of Earth to about that of Mars. During perihelion, Mars dips inside this region, but Mars's thin (low-pressure) atmosphere prevents liquid water from existing over large regions for extended periods. The past flow of liquid water demonstrates the planet's potential for habitability. Recent evidence has suggested that any water on the Martian surface may have been too salty and acidic to support regular terrestrial life. The environmental conditions on Mars are a challenge to sustaining organic life: the planet has little heat transfer across its surface, it has poor insulation against bombardment by the solar wind due to the absence of a magnetosphere and has insufficient atmospheric pressure to retain water in a liquid form (water instead sublimes to a gaseous state). 
Mars is nearly, or perhaps totally, geologically dead; the end of volcanic activity has apparently stopped the recycling of chemicals and minerals between the surface and interior of the planet. Evidence suggests that the planet was once significantly more habitable than it is today, but whether living organisms ever existed there remains unknown. The Viking probes of the mid-1970s carried experiments designed to detect microorganisms in Martian soil at their respective landing sites and had positive results, including a temporary increase in CO2 production on exposure to water and nutrients. This sign of life was later disputed by scientists, resulting in a continuing debate, with NASA scientist Gilbert Levin asserting that Viking may have found life. A 2014 analysis of Martian meteorite EETA79001 found chlorate, perchlorate, and nitrate ions in sufficiently high concentrations to suggest that they are widespread on Mars. UV and X-ray radiation would turn chlorate and perchlorate ions into other, highly reactive oxychlorines, indicating that any organic molecules would have to be buried under the surface to survive. Small quantities of methane and formaldehyde detected by Mars orbiters are both claimed to be possible evidence for life, as these chemical compounds would quickly break down in the Martian atmosphere. Alternatively, these compounds may instead be replenished by volcanic or other geological means, such as serpentinite. Impact glass, formed by meteor impacts and known on Earth to preserve signs of life, has also been found in impact craters on Mars, where it could likewise have preserved signs of life, if life existed at the site. The Cheyava Falls rock discovered on Mars in June 2024 has been designated by NASA as a "potential biosignature" and was core sampled by the Perseverance rover for possible return to Earth and further examination.
Although highly intriguing, the data currently available do not permit a definitive determination of whether this rock's features have a biological or abiotic origin. Several plans for a human mission to Mars have been proposed, but none have come to fruition. The NASA Authorization Act of 2017 directed NASA to study the feasibility of a crewed Mars mission in the early 2030s; the resulting report concluded that this would be unfeasible. In 2021, China announced plans to send a crewed mission to Mars in 2033. Privately held companies such as SpaceX have also proposed plans to send humans to Mars, with the eventual goal to settle on the planet. As of 2024, SpaceX has proceeded with the development of the Starship launch vehicle with the goal of Mars colonization. In plans shared with the company in April 2024, Elon Musk envisions the beginning of a Mars colony within the next twenty years. This would be enabled by the planned mass manufacturing of Starship and initially sustained by resupply from Earth, and in situ resource utilization on Mars, until the Mars colony reaches full self-sustainability. Any future human mission to Mars will likely take place within the optimal Mars launch window, which occurs every 26 months. The moon Phobos has been proposed as an anchor point for a space elevator. Besides national space agencies and space companies, groups such as the Mars Society and The Planetary Society advocate for human missions to Mars. In culture Mars is named after the Roman god of war (Greek Ares), but was also associated with the demi-god Heracles (Roman Hercules) by ancient Greek astronomers, as detailed by Aristotle. This association between Mars and war dates back at least to Babylonian astronomy, in which the planet was named for the god Nergal, deity of war and destruction. It persisted into modern times, as exemplified by Gustav Holst's orchestral suite The Planets, whose famous first movement labels Mars "The Bringer of War".
The planet's symbol, a circle with a spear pointing out to the upper right, is also used as a symbol for the male gender. The symbol dates from at least the 11th century, though a possible predecessor has been found in the Greek Oxyrhynchus Papyri. The idea that Mars was populated by intelligent Martians became widespread in the late 19th century. Schiaparelli's "canali" observations combined with Percival Lowell's books on the subject put forward the standard notion of a planet that was a drying, cooling, dying world with ancient civilizations constructing irrigation works. Many other observations and proclamations by notable personalities added to what has been termed "Mars Fever". In the present day, high-resolution mapping of the surface of Mars has revealed no artifacts of habitation, but pseudoscientific speculation about intelligent life on Mars still continues. Reminiscent of the canali observations, these speculations are based on small scale features perceived in the spacecraft images, such as "pyramids" and the "Face on Mars". In his book Cosmos, planetary astronomer Carl Sagan wrote: "Mars has become a kind of mythic arena onto which we have projected our Earthly hopes and fears." The depiction of Mars in fiction has been stimulated by its dramatic red color and by nineteenth-century scientific speculations that its surface conditions might support not just life but intelligent life. This gave way to many science fiction stories involving these concepts, such as H. G. Wells's The War of the Worlds, in which Martians seek to escape their dying planet by invading Earth; Ray Bradbury's The Martian Chronicles, in which human explorers accidentally destroy a Martian civilization; as well as Edgar Rice Burroughs's series Barsoom, C. S. Lewis's novel Out of the Silent Planet (1938), and a number of Robert A. Heinlein stories before the mid-sixties. Since then, depictions of Martians have also extended to animation. 
A comic figure of an intelligent Martian, Marvin the Martian, appeared in Haredevil Hare (1948) as a character in the Looney Tunes animated cartoons of Warner Brothers, and has continued as part of popular culture to the present. After the Mariner and Viking spacecraft had returned pictures of Mars as a lifeless and canal-less world, these ideas about Mars were abandoned; for many science-fiction authors, the new discoveries initially seemed like a constraint, but eventually the post-Viking knowledge of Mars became itself a source of inspiration for works like Kim Stanley Robinson's Mars trilogy.
======================================== |
[SOURCE: https://en.wikipedia.org/wiki/The_New_York_Times#Content_management_system] | [TOKENS: 13653] |
Contents The New York Times The New York Times (NYT) is a newspaper based in Manhattan, New York City. The New York Times covers domestic, national, and international news, and publishes opinion pieces and reviews. As one of the longest-running newspapers in the United States, the Times serves as one of the country's newspapers of record. As of August 2025, The New York Times had 11.88 million total and 11.3 million online subscribers, both by significant margins the highest numbers for any newspaper in the United States; the total also included 580,000 print subscribers. The New York Times is published by the New York Times Company; since 1896, the company has been chaired by the Ochs-Sulzberger family, whose current chairman and the paper's publisher is A. G. Sulzberger. The Times is headquartered at The New York Times Building in Midtown Manhattan. The Times was founded as the conservative New-York Daily Times in 1851, and came to national recognition in the 1870s with its aggressive coverage of corrupt politician Boss Tweed. Following the Panic of 1893, Chattanooga Times publisher Adolph Ochs gained a controlling interest in the company. In 1935, Ochs was succeeded by his son-in-law, Arthur Hays Sulzberger, who began a push into European news. Sulzberger's son Arthur Ochs Sulzberger became publisher in 1963, adapting to a changing newspaper industry and introducing radical changes. The New York Times was involved in the landmark 1964 U.S. Supreme Court case New York Times Co. v. Sullivan, which restricted the ability of public officials to sue the media for defamation. In 1971, The New York Times published the Pentagon Papers, an internal Department of Defense document detailing the United States's historical involvement in the Vietnam War, despite pushback from then-president Richard Nixon. In the landmark decision New York Times Co. v.
United States (1971), the Supreme Court ruled that the First Amendment guaranteed the right to publish the Pentagon Papers. In the 1980s, the Times began a two-decade progression to digital technology and launched nytimes.com in 1996. In the 21st century, it shifted its publication online amid the global decline of newspapers. Currently, the Times maintains several regional bureaus staffed with journalists across six continents. It has expanded to several other publications, including The New York Times Magazine, The New York Times International Edition, and The New York Times Book Review. In addition, the paper has produced several television series, podcasts—including The Daily—and games through The New York Times Games. The New York Times has been involved in a number of controversies in its history. Among other accolades, it has been awarded the Pulitzer Prize 135 times since 1918, the most of any publication. According to a 2025 Pew Research Center study on educational differences among audiences of 30 major U.S. news outlets, The New York Times had the highest proportion of college-educated readers among the daily newspapers surveyed, with 56% of its audience holding at least a bachelor's degree. History The New York Times was established in 1851 as the New-York Daily Times by New-York Tribune journalists Henry Jarvis Raymond and George Jones. The Times achieved significant circulation, particularly among conservatives; New-York Tribune publisher Horace Greeley praised the Times. During the American Civil War, Times correspondents gathered information directly from Confederate states. In 1869, Jones inherited the paper from Raymond, who had changed its name to The New-York Times. Under Jones, the Times began to publish a series of articles criticizing Tammany Hall political boss William M. Tweed, despite vehement opposition from other New York newspapers.
In 1871, The New-York Times published Tammany Hall's accounting books; Tweed was tried in 1873 and sentenced to twelve years in prison. The Times earned national recognition for its coverage of Tweed. In 1891, Jones died, creating a management imbroglio in which his children had insufficient business acumen to inherit the company and his will prevented an acquisition of the Times. Editor-in-chief Charles Ransom Miller, editorial editor Edward Cary, and correspondent George F. Spinney established a company to manage The New-York Times, but faced financial difficulties during the Panic of 1893. In August 1896, Chattanooga Times publisher Adolph Ochs acquired The New-York Times, implementing significant alterations to the newspaper's structure. Ochs established the Times as a merchant's newspaper and removed the hyphen from the newspaper's name. In 1905, The New York Times opened Times Tower, marking expansion. The Times experienced a political realignment in the 1910s amid several disagreements within the Republican Party. The New York Times reported on the sinking of the Titanic, as other newspapers were cautious about bulletins circulated by the Associated Press. Through managing editor Carr Van Anda, the Times paid considerable attention to advances in science, reporting on Albert Einstein's then-obscure theory of general relativity and becoming involved in the discovery of the tomb of Tutankhamun. In April 1935, Ochs died, leaving his son-in-law Arthur Hays Sulzberger as publisher. The Great Depression forced Sulzberger to reduce The New York Times's operations, and developments in the New York newspaper landscape resulted in the formation of larger newspapers, such as the New York Herald Tribune and the New York World-Telegram. In contrast to Ochs, Sulzberger encouraged wirephotography. The New York Times extensively covered World War II through large headlines, reporting on exclusive stories such as the Yugoslav coup d'état. 
Amid the war, Sulzberger began expanding the Times's operations further, acquiring WQXR-FM in 1944—the first non-Times investment since the Jones era—and established a fashion show in Times Hall. Despite reductions as a result of conscription, The New York Times retained the largest journalism staff of any newspaper. The Times's print edition became available internationally during the war through the Army & Air Force Exchange Service; The New York Times Overseas Weekly later became available in Japan through The Asahi Shimbun and in Germany through the Frankfurter Zeitung. The international edition would develop into a separate newspaper. Journalist William L. Laurence publicized the atomic bomb race between the United States and Germany, resulting in the Federal Bureau of Investigation seizing copies of the Times. The United States government recruited Laurence to document the Manhattan Project in April 1945. Laurence became the only witness of the Manhattan Project, a detail realized by employees of The New York Times following the atomic bombing of Hiroshima. Following World War II, The New York Times continued to expand. The Times was subject to investigations from the Senate Internal Security Subcommittee, a McCarthyist subcommittee that investigated purported communism from within press institutions. Arthur Hays Sulzberger's decision to dismiss a copyreader who had pleaded the Fifth Amendment drew ire from within the Times and from external organizations. In April 1961, Sulzberger resigned, appointing his son-in-law, The New York Times Company president Orvil Dryfoos. Under Dryfoos, The New York Times established a newspaper based in Los Angeles. In 1962, the implementation of automated printing presses in response to increasing costs mounted fears over technological unemployment. The New York Typographical Union staged a strike in December, altering the media consumption of New Yorkers. 
The strike left New York with three remaining newspapers—the Times, the Daily News, and the New York Post—by its conclusion in March 1963. In May, Dryfoos died of a heart ailment. Following weeks of ambiguity, Arthur Ochs Sulzberger became The New York Times's publisher. Technological advancements leveraged by newspapers such as the Los Angeles Times and improvements in coverage from The Washington Post and The Wall Street Journal necessitated adaptations to nascent computing. The New York Times published "Heed Their Rising Voices" in 1960, a full-page advertisement purchased by supporters of Martin Luther King Jr. criticizing law enforcement in Montgomery, Alabama for their response to the civil rights movement. Montgomery Public Safety commissioner L. B. Sullivan sued the Times for defamation. In New York Times Co. v. Sullivan (1964), the U.S. Supreme Court ruled that the verdict in Alabama county court and the Supreme Court of Alabama violated the First Amendment. The decision is considered a landmark in First Amendment law. After financial losses, The New York Times ended its international edition, acquiring a stake in the Paris Herald Tribune, forming the International Herald Tribune. The Times was the first to publish the Pentagon Papers, facing opposition from then-president Richard Nixon. The Supreme Court ruled in The New York Times's favor in New York Times Co. v. United States (1971), allowing the Times and The Washington Post to publish the papers. The New York Times remained cautious in its initial coverage of the Watergate scandal. As Congress began investigating the scandal, the Times furthered its coverage, publishing details on the Huston Plan, alleged wiretapping of reporters and officials, and testimony from James W. McCord Jr. that the Committee for the Re-Election of the President paid the conspirators off. The exodus of readers to suburban New York newspapers, such as Newsday and Gannett papers, adversely affected The New York Times's circulation.
Contemporary newspapers balked at additional sections; Time devoted a cover to its criticism and New York wrote that the Times was engaging in "middle-class self-absorption". The New York Times, the Daily News, and the New York Post were the subject of a strike in 1978, allowing emerging newspapers to leverage halted coverage. The Times deliberately avoided coverage of the AIDS epidemic, running its first front-page article in May 1983. Max Frankel's editorial coverage of the epidemic, with mentions of anal intercourse, contrasted with then-executive editor A. M. Rosenthal's puritan approach, which intentionally avoided descriptions of the luridity of gay venues. Following years of waning interest in The New York Times, Sulzberger resigned in January 1992, appointing his son, Arthur Ochs Sulzberger Jr., as publisher. The Internet represented a generational shift within the Times; Sulzberger, who negotiated The New York Times Company's acquisition of The Boston Globe in 1993, derided the Internet, while his son expressed antithetical views. @times appeared on America Online in May 1994 as an extension of The New York Times, featuring news articles, film reviews, sports news, and business articles. Despite opposition, several employees of the Times had begun to access the Internet. The online success of publications that traditionally co-existed with the Times—such as America Online, Yahoo, and CNN—and the expansion of websites such as Monster.com and Craigslist that threatened The New York Times's classified advertisement model increased efforts to develop a website. nytimes.com debuted on January 19, 1996, and was formally announced three days later. The Times published domestic terrorist Ted Kaczynski's essay Industrial Society and Its Future in 1995, contributing to his arrest after his brother David recognized the essay's writing style.
Following the establishment of nytimes.com, The New York Times retained its journalistic hesitancy under executive editor Joseph Lelyveld, refusing to publish an article reporting on the Clinton–Lewinsky scandal from Drudge Report. nytimes.com editors conflicted with print editors on several occasions, including wrongfully naming security guard Richard Jewell as the suspect in the Centennial Olympic Park bombing and covering the death of Diana, Princess of Wales in greater detail than the print edition. The New York Times Electronic Media Company was adversely affected by the dot-com crash. The Times extensively covered the September 11 attacks. The following day's print issue contained sixty-six articles, the work of over three hundred dispatched reporters. Journalist Judith Miller was the recipient of a package containing a white powder during the 2001 anthrax attacks, furthering anxiety within The New York Times. In September 2002, Miller and military correspondent Michael R. Gordon wrote an article for the Times claiming that Iraq had purchased aluminum tubes purportedly suited to uranium enrichment. The article was cited by then-president George W. Bush to claim that Iraq was constructing weapons of mass destruction; the theoretical use of aluminum tubes to produce nuclear material was speculation. In March 2003, the United States invaded Iraq, beginning the Iraq War. The New York Times attracted controversy after thirty-six articles from journalist Jayson Blair were discovered to contain plagiarism or fabrication. Criticism over then-executive editor Howell Raines and then-managing editor Gerald M. Boyd mounted following the scandal, culminating in a town hall in which a deputy editor criticized Raines for failing to question Blair's sources in an article he wrote on the D.C. sniper attacks. In June 2003, Raines and Boyd resigned. Arthur Ochs Sulzberger Jr. appointed Bill Keller as executive editor.
Miller continued to report on the Iraq War as a journalistic embed covering the country's weapons of mass destruction program. Keller and then-Washington bureau chief Jill Abramson unsuccessfully attempted to quell criticism. Conservative media criticized the Times over its coverage of missing explosives from the Al Qa'qaa weapons facility. An article in December 2005 disclosing warrantless surveillance by the National Security Agency contributed to further criticism from the George W. Bush administration and the Senate's refusal to renew the Patriot Act. In the Plame affair, a Central Intelligence Agency inquiry found that Miller had become aware of Valerie Plame's identity through then-vice president Dick Cheney's chief of staff Scooter Libby, resulting in Miller's resignation. During the Great Recession, The New York Times suffered significant fiscal difficulties as a consequence of the subprime mortgage crisis and a decline in classified advertising. Exacerbated by Rupert Murdoch's revitalization of The Wall Street Journal through his acquisition of Dow Jones & Company, The New York Times Company began enacting measures to reduce the newsroom budget. The company was forced to borrow $250 million (equivalent to $373.84 million in 2025) from Mexican billionaire Carlos Slim and fired over one hundred employees by 2010. nytimes.com's coverage of the Eliot Spitzer prostitution scandal, resulting in the resignation of then-New York governor Eliot Spitzer, furthered the legitimacy of the website as a journalistic medium. The Times's economic downturn renewed discussions of an online paywall; The New York Times implemented a paywall in March 2011. Abramson succeeded Keller, bringing her characteristic investigations of corporate and government malfeasance to the Times's coverage. Following conflicts over newly appointed chief executive Mark Thompson's ambitions, Abramson was dismissed by Sulzberger Jr., who named Dean Baquet as her replacement.
Leading up to the 2016 presidential election, The New York Times elevated the Hillary Clinton email controversy into a national issue. Donald Trump's upset victory contributed to an increase in subscriptions to the Times. The New York Times experienced unprecedented indignation from Trump, who referred to publications such as the Times as "enemies of the people" at the Conservative Political Action Conference and tweeted his disdain for the newspaper and CNN. In October 2017, The New York Times published an article by journalists Jodi Kantor and Megan Twohey alleging that dozens of women had accused film producer and The Weinstein Company co-chairman Harvey Weinstein of sexual misconduct. The investigation resulted in Weinstein's resignation and conviction, precipitated the Weinstein effect, and served as a catalyst for the #MeToo movement. The New York Times Company vacated the public editor position and eliminated the copy desk in November. Sulzberger Jr. announced his resignation in December 2017, appointing his son, A. G. Sulzberger, as publisher. Sulzberger's tenure was marked by his relationship with Trump, by turns diplomatic and antagonistic. In September 2018, The New York Times published "I Am Part of the Resistance Inside the Trump Administration", an anonymous essay by a self-described Trump administration official later revealed to be Department of Homeland Security chief of staff Miles Taylor. The animosity, which extended to nearly three hundred instances of Trump disparaging the Times by May 2019, culminated in Trump ordering federal agencies to cancel their subscriptions to The New York Times and The Washington Post in October 2019. Trump's tax returns have been the subject of three separate investigations.[c] During the COVID-19 pandemic, the Times began implementing data services and graphs. On May 23, 2020, The New York Times's front page solely featured U.S.
Deaths Near 100,000, An Incalculable Loss, listing a subset of the nearly 100,000 people in the United States who had died of COVID-19; it was the first time that the Times's front page had lacked images since they were introduced. Since 2020, The New York Times has focused on broader diversification, developing online games and producing television series. The New York Times Company acquired The Athletic in January 2022.
Organization
Since 1896, The New York Times has been published by the Ochs-Sulzberger family, having previously been published by Henry Jarvis Raymond until 1869 and by George Jones until 1896. Adolph Ochs published the Times until his death in 1935, when he was succeeded by his son-in-law, Arthur Hays Sulzberger. Sulzberger was publisher until 1961 and was succeeded by Orvil Dryfoos, his son-in-law, who served in the position until his death in 1963. Arthur Ochs Sulzberger succeeded Dryfoos, serving until his resignation in 1992. His son, Arthur Ochs Sulzberger Jr., served as publisher until 2018. The New York Times's current publisher is A. G. Sulzberger, Sulzberger Jr.'s son. As of 2023, the Times's executive editor is Joseph Kahn and the paper's managing editors are Marc Lacey and Carolyn Ryan, having been appointed in June 2022. The New York Times's deputy managing editors are Sam Dolnick, Monica Drake, and Steve Duenes, and the paper's assistant managing editors are Matthew Ericson, Jonathan Galinsky, Hannah Poferl, Sam Sifton, Karron Skog, and Michael Slackman. The New York Times is owned by The New York Times Company, a publicly traded company. The New York Times Company, in addition to the Times, owns Wirecutter, The Athletic, The New York Times Cooking, and The New York Times Games, and acquired Serial Productions and Audm. The New York Times Company holds undisclosed minority investments in multiple other businesses, and formerly owned The Boston Globe and several radio and television stations.
The New York Times Company is majority-owned by the Ochs-Sulzberger family through elevated shares in the company's dual-class stock structure, held largely in a trust in effect since the 1950s; as of 2022, the family holds ninety-five percent of The New York Times Company's Class B shares, allowing it to elect seventy percent of the company's board of directors. Class A shareholders have restricted voting rights. As of 2023, The New York Times Company's chief executive is Meredith Kopit Levien, the company's former chief operating officer, who was appointed in September 2020. As of March 2023, The New York Times Company employs 5,800 individuals, including 1,700 journalists, according to deputy managing editor Sam Dolnick. Journalists for The New York Times may not run for public office, provide financial support to political candidates or causes, endorse candidates, or demonstrate public support for causes or movements. Journalists are subject to the guidelines established in "Ethical Journalism" and "Guidelines on Integrity". According to the former, Times journalists must abstain from using sources with a personal relationship to them and must not accept reimbursements or inducements from individuals who may be written about in The New York Times, with exceptions for gifts of nominal value. The latter requires attribution and exact quotations, though exceptions are made for linguistic anomalies. Staff writers are expected to ensure the veracity of all written claims, but may delegate researching obscure facts to the research desk. In March 2021, the Times established a committee to avoid journalistic conflicts of interest with work written for The New York Times, following columnist David Brooks's resignation from the Aspen Institute over his undisclosed work on its initiative Weave. The New York Times editorial board was established in 1896 by Adolph Ochs. With the opinion department, the editorial board is independent of the newsroom.
Then-editor-in-chief Charles Ransom Miller served as opinion editor from 1883 until his death in 1922. Rollo Ogden succeeded Miller until his death in 1937. From 1937 to 1938, John Huston Finley served as opinion editor; in a prearranged plan, Charles Merz succeeded Finley. Merz served in the position until his retirement in 1961. John Bertram Oakes served as opinion editor from 1961 to 1976, when then-publisher Arthur Ochs Sulzberger appointed Max Frankel. Frankel served in the position until 1986, when he was appointed as executive editor. Jack Rosenthal was the opinion editor from 1986 to 1993. Howell Raines succeeded Rosenthal until 2001, when he was made executive editor. Gail Collins succeeded Raines until her resignation in 2006. From 2007 to 2016, Andrew Rosenthal was the opinion editor. James Bennet succeeded Rosenthal until his resignation in 2020. As of July 2024[update], the editorial board comprises thirteen opinion writers. The New York Times's opinion editor is Kathleen Kingsbury and the deputy opinion editor is Patrick Healy. The New York Times's editorial board was initially opposed to liberal beliefs, opposing women's suffrage in 1900 and 1914. The editorial board began to espouse progressive beliefs during Oakes's tenure, conflicting with the Ochs-Sulzberger family, of which Oakes was a member as Adolph Ochs's nephew; in 1976, Oakes publicly disagreed with Sulzberger's endorsement of Daniel Patrick Moynihan over Bella Abzug in the 1976 Senate Democratic primaries in a letter sent from Martha's Vineyard. Under Rosenthal, the editorial board took positions supporting assault weapons legislation and the legalization of marijuana, but publicly criticized the Obama administration over its portrayal of terrorism. 
In presidential elections, The New York Times has endorsed a total of twelve Republican candidates and thirty-two Democratic candidates, and has endorsed the Democrat in every election since 1960.[j] With the exception of Wendell Willkie, Republicans endorsed by the Times have won the presidency. In 2016, the editorial board issued an anti-endorsement against Donald Trump for the first time in its history. In February 2020, the editorial board reduced its presence from several editorials each day to occasional editorials for events deemed particularly significant. Since August 2024, the board no longer endorses candidates in local or congressional races in New York. Since 1940, editorial, media, and technology workers of The New York Times have been represented by the New York Times Guild. The Times Guild and the Times Tech Guild are represented by the NewsGuild-CWA. In 1940, Arthur Hays Sulzberger was called before the National Labor Relations Board amid accusations that he had discouraged Guild membership at the Times. Over the next few years, the Guild ratified several contracts, expanding to editorial and news staff in 1942 and maintenance workers in 1943. The New York Times Guild has walked out several times in its history, including for six and a half hours in 1981 and in 2017, when copy editors and reporters walked out at lunchtime in response to the elimination of the copy desk. On December 7, 2022, the union held a one-day strike, the first work stoppage at The New York Times since 1978. The New York Times Guild reached an agreement in May 2023 to increase minimum salaries and provide a retroactive bonus. The Times Tech Guild is the largest technology union with collective bargaining rights in the United States. The guild held a second strike beginning on November 4, 2024, threatening the Times's coverage of the 2024 United States presidential election.
Content
As of August 2025, The New York Times has 11.8 million subscribers, with 11.3 million online-only subscribers and 580,000 print subscribers. The New York Times Company intends to have 15 million subscribers by 2027. The Times's shift towards subscription-based revenue with the debut of an online paywall in 2011 contributed to subscription revenue exceeding advertising revenue the following year, furthered by the 2016 presidential election and Donald Trump. In 2022, Vox wrote that The New York Times's subscribers skew "older, richer, whiter, and more liberal"; to reflect the general population of the United States, the Times has attempted to alter its audience by acquiring The Athletic, investing in verticals such as The New York Times Games, and beginning a marketing campaign showing diverse subscribers to the Times. The New York Times Company chief executive Meredith Kopit Levien stated that the average age of subscribers has remained constant. In October 2001, The New York Times began publishing DealBook, a financial newsletter edited by Andrew Ross Sorkin. The Times had intended to publish the newsletter in September, but delayed its debut following the September 11 attacks. A website for DealBook was established in March 2006. The New York Times began shifting towards DealBook as part of the newspaper's financial coverage in November 2010 with a renewed website and a presence in the Times's print edition. In 2011, the Times began hosting the DealBook Summit, an annual conference hosted by Sorkin. During the COVID-19 pandemic, The New York Times hosted the DealBook Online Summit in 2020 and 2021. The 2022 DealBook Summit featured—among other speakers—former vice president Mike Pence and Israeli prime minister Benjamin Netanyahu, culminating in an interview with former FTX chief executive Sam Bankman-Fried; FTX had filed for bankruptcy several weeks prior.
The 2023 DealBook Summit's speakers included vice president Kamala Harris, Israeli president Isaac Herzog, and businessman Elon Musk. In June 2010, The New York Times licensed the political blog FiveThirtyEight in a three-year agreement. The blog, written by Nate Silver, had garnered attention during the 2008 presidential election for correctly predicting the presidential result in forty-nine of fifty states. FiveThirtyEight appeared on nytimes.com in August. According to Silver, several offers were made for the blog; Silver wrote that a merger of unequals must allow for editorial sovereignty and resources from the acquirer, comparing himself to Groucho Marx. According to The New Republic, FiveThirtyEight drew as much as a fifth of the traffic to nytimes.com during the 2012 presidential election. In July 2013, FiveThirtyEight was sold to ESPN. In an article following Silver's exit, public editor Margaret Sullivan wrote that his perspective on probability-based predictions and his scorn for punditry, which he had called "fundamentally useless", were disruptive to the Times's culture, comparing him to Billy Beane, who implemented sabermetrics in baseball. According to Sullivan, his work was criticized by several notable political journalists. The New Republic obtained a memo in November 2013 revealing then-Washington bureau chief David Leonhardt's ambitions to establish a data-driven newsletter with presidential historian Michael Beschloss, graphic designer Amanda Cox, economist Justin Wolfers, and The New Republic journalist Nate Cohn. By March, Leonhardt had amassed fifteen employees from within The New York Times; the newsletter's staff included individuals who had created the Times's dialect quiz, fourth down analyzer, and a calculator for deciding between buying and renting a home. The Upshot debuted in April 2014.
Fast Company reviewed an article about Illinois Secure Choice—a state-funded retirement saving system—as "neither a terse news item, nor a formal financial advice column, nor a politically charged response to economic policy", citing its informal and neutral tone. The Upshot developed "the needle" for the 2016 and 2020 presidential elections, a thermometer-style dial displaying the probability of a candidate winning. In January 2016, Cox was named editor of The Upshot. Kevin Quealy was named editor in June 2022. The New York Times has said it is perceived as a liberal newspaper. An analysis by Pew Research Center in October 2014 placed the Times readership as ideologically liberal based on a scale of 10 political values questions. According to an internal readership poll conducted by The New York Times in 2019, eighty-four percent of readers identified as liberal. The New York Times has struggled internally with how to balance its coverage, dismissing criticism from the left that it "sanewashes" right-wing viewpoints in its coverage of Donald Trump. In covering Israel's war on the Gaza Strip that began in 2023, The New York Times instructed its reporters to restrict use of the terms 'Palestine', 'genocide', and 'refugee camps' to specific usages, with data analysis showing a pattern of articles emphasizing Israeli civilians killed by Palestinians over a much larger number of Palestinian civilians killed by Israelis. The group Writers Against the War on Gaza wrote in the blog Mondoweiss that this contrasted with The New York Times's coverage of Russia's invasion of Ukraine, in which Russia is considered a threat to U.S. foreign policy interests, while Israel is considered an ally. In February 1942, The New York Times crossword debuted in The New York Times Magazine; according to Richard Shepard, the attack on Pearl Harbor in December 1941 convinced then-publisher Arthur Hays Sulzberger of the necessity of a crossword.
The New York Times has published recipes since the 1850s and has had a separate food section since the 1940s. In 1961, restaurant critic Craig Claiborne published The New York Times Cookbook, an unauthorized cookbook that drew from the Times's recipes. In 2010, former food editor Amanda Hesser published The Essential New York Times Cookbook, a compendium of recipes from The New York Times. The Innovation Report in 2014 revealed that the Times had attempted to establish a cooking website since 1998, but faced difficulties with the absence of a defined data structure. In September 2014, The New York Times introduced NYT Cooking, an application and website. Edited by food editor Sam Sifton, the Times's cooking website features 21,000 recipes as of 2022. NYT Cooking features videos, an effort for which Sifton hired two former Tasty employees from BuzzFeed. In August 2023, NYT Cooking added personalized recommendations based on the cosine similarity of text embeddings of recipe titles. The website also features no-recipe recipes, a concept proposed by Sifton. In May 2016, The New York Times Company announced a partnership with the startup Chef'd to form a meal delivery service that would deliver ingredients from The New York Times Cooking recipes to subscribers; Chef'd shut down in July 2018 after failing to raise capital and secure financing. The Hollywood Reporter reported in September 2022 that the Times would expand its delivery options to US$95 cooking kits curated by chefs such as Nina Compton, Chintan Pandya, and Naoko Takei Moore. That month, the staff of NYT Cooking went on tour with Compton, Pandya, and Moore in Los Angeles, New Orleans, and New York City, culminating in a food festival. In addition, The New York Times offered its own wine club, originally operated by the Global Wine Company. The New York Times Wine Club was established in August 2009, during a dramatic decrease in advertising revenue.
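The cosine-similarity approach behind the NYT Cooking recommendations mentioned above can be sketched in a few lines. This is a toy illustration, not the Times's pipeline: the recipe titles are invented, and character-bigram counts stand in for the learned text embeddings the service actually uses.

```python
import math
from collections import Counter

def embed(title: str) -> Counter:
    """Toy stand-in for a text embedding: character-bigram counts."""
    t = title.lower()
    return Counter(t[i:i + 2] for i in range(len(t) - 1))

def cosine(a: Counter, b: Counter) -> float:
    """Cosine similarity between two sparse count vectors."""
    dot = sum(a[k] * b[k] for k in a.keys() & b.keys())
    norm = math.sqrt(sum(v * v for v in a.values())) * math.sqrt(sum(v * v for v in b.values()))
    return dot / norm if norm else 0.0

def recommend(seed: str, catalog: list, k: int = 2) -> list:
    """Rank the other recipe titles by similarity to the one just viewed."""
    scored = [(cosine(embed(seed), embed(t)), t) for t in catalog if t != seed]
    return [t for _, t in sorted(scored, reverse=True)[:k]]

titles = ["Roast Chicken", "Chicken Soup", "Chocolate Cake", "Roast Potatoes"]
print(recommend("Roast Chicken", titles))  # most similar titles first
```

With real embeddings the geometry is identical; only the vectors come from a learned model instead of bigram counts.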
By 2021, the wine club was managed by Lot18, a company that provides proprietary labels. Lot18 managed the Williams Sonoma Wine Club and its own wine club, Tasting Room. The New York Times archives its articles in a basement annex beneath its building known as "the morgue", a venture started by managing editor Carr Van Anda in 1907. The morgue comprises news clippings, a pictures library, and the Times's book and periodicals library. As of 2014, it is the largest library of any media company, dating back to 1851. In November 2018, The New York Times partnered with Google to digitize the Archival Library. Additionally, The New York Times has maintained a virtual microfilm reader known as TimesMachine since 2014. The service launched with archives from 1851 to 1980; in 2016, TimesMachine expanded to include archives from 1981 to 2002. The Times built a pipeline that takes in TIFF images, article metadata in XML, and an INI file of Cartesian geometry describing the boundaries of each page, and converts them into PNG image tiles and JSON containing the information from the XML and INI files. The image tiles are generated using GDAL and displayed using Leaflet, with data served from a content delivery network. The Times ran optical character recognition on the articles using Tesseract, then shingled and fuzzy-matched the resulting text. The New York Times uses a proprietary content management system known as Scoop for its online content and the Microsoft Word-based content management system CCI for its print content. Scoop was developed in 2008 to serve as a secondary content management system allowing editors working in CCI to publish their content on the Times's website; as part of The New York Times's online endeavors, editors now write their content in Scoop and send their work to CCI for print publication.
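The shingling and fuzzy matching used in the TimesMachine pipeline above can be sketched as follows. The article snippets, function names, and similarity threshold are illustrative assumptions, and Python's difflib stands in for whatever matcher the Times used; the idea is only that noisy OCR text is compared against overlapping word n-grams ("shingles") of known article bodies.

```python
from difflib import SequenceMatcher

def shingles(text: str, k: int = 4) -> list:
    """Break text into overlapping k-word shingles."""
    words = text.split()
    return [" ".join(words[i:i + k]) for i in range(max(1, len(words) - k + 1))]

def best_match(ocr_fragment: str, article_texts: dict, threshold: float = 0.6):
    """Fuzzy-match an OCR'd fragment against candidate article bodies,
    returning the id of the closest article, or None if nothing clears
    the threshold."""
    best_id, best_score = None, threshold
    for article_id, body in article_texts.items():
        score = max(
            SequenceMatcher(None, ocr_fragment.lower(), s.lower()).ratio()
            for s in shingles(body, k=len(ocr_fragment.split()))
        )
        if score > best_score:
            best_id, best_score = article_id, score
    return best_id

articles = {
    "a1": "The mayor announced a new transit plan for the city today",
    "a2": "Scientists report progress on a vaccine for the disease",
}
# OCR output with typical recognition errors ("rn" for "m", "l" for "i")
print(best_match("rnayor announced a new translt plan", articles))
```

Shingling keeps the comparison local, so a fragment only needs to resemble some window of an article, not the whole body.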
Since its introduction, Scoop has superseded several processes within the Times, including print edition planning and collaboration, and features tools such as multimedia integration, notifications, content tagging, and drafts. The New York Times uses private articles for high-profile opinion pieces, such as those written by Russian president Vladimir Putin and actress Angelina Jolie, and for high-level investigations. In January 2012, the Times released Integrated Content Editor (ICE), a revision tracking tool for WordPress and TinyMCE. ICE is integrated within the Times's workflow by providing a unified text editor for print and online editors, reducing the divide between print and online operations. By 2017, The New York Times had begun developing Oak, a new authoring tool for its content management system, in an attempt to further the Times's visual efforts in articles and reduce the discrepancy between print and online articles. The system reduces the manual work required of editors and supports additional visual mediums in an editor that resembles the appearance of the published article. Oak is based on ProseMirror, a JavaScript rich-text editor toolkit, and retains the revision tracking and commenting functionalities of The New York Times's previous systems. Additionally, Oak supports predefined article headers. In 2019, Oak was updated to support collaborative editing, using Firebase to track each editor's cursor status. Several Google Cloud Functions and Google Cloud Tasks allow articles to be previewed as they will appear in print, and the Times's primary MySQL database is updated regularly so that editors can track an article's status.
Style and design
Since 1895, The New York Times has maintained a manual of style in several forms. The New York Times Manual of Style and Usage was published on the Times's intranet in 1999. The New York Times uses honorifics when referring to individuals.
With the AP Stylebook's removal of honorifics in 2000 and The Wall Street Journal's omission of courtesy titles in May 2023, the Times is the only national newspaper that continues to use honorifics. According to former copy editor Merrill Perlman, The New York Times continues to use honorifics as a "sign of civility". The Times's use of courtesy titles led to an apocryphal rumor that the paper had referred to singer Meat Loaf as "Mr. Loaf". Several exceptions have been made; the former sports section and The New York Times Book Review do not use honorifics. A leaked memo following the killing of Osama bin Laden in May 2011 revealed that editors were given a last-minute instruction to omit the honorific from Osama bin Laden's name, consistent with deceased figures of historic significance, such as Adolf Hitler, Napoleon, and Vladimir Lenin. The New York Times uses academic and military titles for individuals prominently serving in that position. In 1986, the Times began to use Ms., and introduced the gender-neutral title Mx. in 2015. The New York Times uses middle initials when a subject has expressed a preference, such as Donald J. Trump. The New York Times maintains a strict but not absolute obscenity policy covering both words and phrases. In a review of the Canadian hardcore punk band Fucked Up, music critic Kelefa Sanneh wrote that the band's name—entirely rendered in asterisks—would not be printed in the Times "unless an American president, or someone similar, says it by mistake"; The New York Times did not repeat then-vice president Dick Cheney's use of "fuck" against then-senator Patrick Leahy in 2004 or then-vice president Joe Biden's remark that the passage of the Affordable Care Act in 2010 was a "big fucking deal". The Times's profanity policy has been tested by former president Donald Trump.
The New York Times published Trump's Access Hollywood tape in October 2016, containing the words "fuck", "pussy", "bitch", and "tits", the first time the publication had published an expletive on its front page, and repeated an explicit phrase for fellatio stated by then-White House communications director Anthony Scaramucci in July 2017. The New York Times omitted Trump's use of the phrase "shithole countries" from its headline in favor of "vulgar language" in January 2018. The Times banned certain words, such as "bitch", "whore", and "sluts", from Wordle in 2022. Journalists for The New York Times do not write their own headlines; headlines are written by copy editors who specialize in them. The Times's guidelines insist that headline editors get to the main point of an article but avoid giving away endings, if present. Other guidelines include using slang "sparingly", avoiding tabloid headlines, not ending a line on a preposition, article, or adjective, and, chiefly, not punning. The New York Times Manual of Style and Usage states that wordplay, such as "Rubber Industry Bounces Back", is to be tested on a colleague as a canary is tested in a coal mine; "when no song bursts forth, start rewriting". The New York Times has amended headlines due to controversy. In 2019, following two back-to-back mass shootings in El Paso and Dayton, the Times used the headline "Trump Urges Unity vs. Racism" to describe then-president Donald Trump's words after the shootings. After criticism from FiveThirtyEight founder Nate Silver, the headline was changed to "Assailing Hate But Not Guns". Online, The New York Times's headlines do not face the same length restrictions as headlines that appear in print; print headlines must fit within a column, often no more than six words. Additionally, headlines must "break" properly, containing a complete thought on each line without splitting prepositions and adverbs. Writers may edit a headline to fit an article more aptly if further developments occur.
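The column-width and "breaking" rules described above can be sketched as a simple line-wrapping check. The width limit, word list, and function name are illustrative assumptions rather than the Times's actual tooling; the point is only that a break which strands a preposition or article at the end of a line is rejected.

```python
# Words the guidelines say should not end a line (illustrative subset).
BAD_LINE_ENDERS = {"a", "an", "the", "of", "in", "on", "to", "for", "and"}

def break_headline(headline, width=18):
    """Greedily wrap a headline into lines of at most `width` characters,
    refusing any break that strands an article, preposition, or
    conjunction at the end of a line. Returns the lines, or None when
    a human editor would have to rewrite instead."""
    lines, line = [], []
    for word in headline.split():
        candidate = " ".join(line + [word])
        if line and len(candidate) > width:
            if line[-1].lower() in BAD_LINE_ENDERS:
                return None  # bad break: rewrite the headline
            lines.append(" ".join(line))
            line = [word]
        else:
            line.append(word)
    lines.append(" ".join(line))
    return lines

print(break_headline("Rubber Industry Bounces Back"))
```

A real composition system also weighs hyphenation and justification, but the reject-the-bad-break rule is the same.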
The Times uses A/B testing for articles on the front page, placing two headlines against each other. At the end of the test, the headline that receives more traffic is chosen. The alteration of a headline regarding intercepted Russian data used in the Mueller special counsel investigation was noted by Trump in a March 2017 interview with Time, in which he claimed that the headline used the word "wiretapped" in the print version of the paper on January 20, while the digital article on January 19 omitted the word. The headline was intentionally changed in the print version to use "wiretapped" in order to fit within the print guidelines. The nameplate of The New York Times has been unaltered since 1967. In creating the initial nameplate, Henry Jarvis Raymond took as his model the British newspaper The Times, which used Textura, a blackletter style that developed from regional variations of Alcuin's script in the centuries following the fall of the Western Roman Empire, and ended the nameplate with a period. When the paper became The New-York Times on September 14, 1857, the nameplate changed accordingly. Under George Jones, the terminals of the "N", "r", and "s" were intentionally exaggerated into swashes. The nameplate in the January 15, 1894, issue trimmed the terminals once more, smoothed the edges, and turned the stem supporting the "T" into an ornament. The hyphen was dropped on December 1, 1896, after Adolph Ochs purchased the paper. The descender of the "h" was shortened on December 30, 1914. The largest change to the nameplate was introduced on February 21, 1967, when type designer Ed Benguiat redesigned the logo, most prominently turning the arrow ornament into a diamond. Notoriously, the new logo dropped the period that had followed the word Times up until that point; one reader compared the omission of the period to "performing plastic surgery on Helen of Troy."
Picture editor John Radosta worked with a New York University professor to determine that dropping the period saved the paper US$41.28 (equivalent to $398.59 in 2025).
Print edition
As of December 2023, The New York Times has printed sixty thousand issues, a statistic represented in the paper's masthead to the right of the volume number, which records the Times's years in publication in Roman numerals. The volume and issue numbers are separated by four dots representing the edition number of that issue; on the day of the 2000 presidential election, the Times was revised four separate times, necessitating the use of an em dash in place of an ellipsis. The em dash issue was printed hundreds of times over before being replaced by the one-dot issue. Despite efforts by newsroom employees to recycle copies sent to The New York Times's office, several copies were kept, including one put on display at the Museum at The Times. From February 7, 1898, to December 31, 1999, the Times's issue number was incorrect by five hundred issues, an error suspected by The Atlantic to be the result of a careless front-page type editor. The misreporting was noticed by news editor Aaron Donovan, who was calculating the number of issues in a spreadsheet and noticed the discrepancy. The New York Times celebrated fifty thousand issues on March 14, 1995, an observance that should have occurred on July 26, 1996. The New York Times has reduced the physical size of its print edition while retaining its broadsheet format. The New-York Daily Times debuted at 18 inches (460 mm) across. By the 1950s, the Times was being printed at 16 inches (410 mm) across. In 1953, an increase in paper costs of US$10 (equivalent to $120.34 in 2025) a ton raised newsprint costs to US$21.7 million (equivalent to $326,110,074.63 in 2025). On December 28, 1953, the pages were reduced to 15.5 inches (390 mm). On February 14, 1955, a further reduction to 15 inches (380 mm) occurred, followed by 14.5 and 13.5 inches (370 and 340 mm).
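The 500-issue discrepancy described above is easy to verify with date arithmetic: a daily paper adds one issue per day, so an overcount of 500 issues shifts the 50,000th-issue milestone by roughly 500 days. The dates below are the ones the article gives.

```python
from datetime import date, timedelta

# The Times celebrated issue 50,000 on March 14, 1995; with the running
# count inflated by 500, the true milestone fell about 500 days later.
celebrated = date(1995, 3, 14)
corrected = celebrated + timedelta(days=500)
print(corrected)  # 1996-07-26, matching the corrected date given above
```

Five hundred days after the celebration lands exactly on July 26, 1996, the date the count said the observance should have occurred.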
On August 6, 2007, the largest cut occurred when the pages were reduced to 12 inches (300 mm),[k] a decision that other broadsheets had previously considered. Then-executive editor Bill Keller stated that a narrower paper would be more beneficial to the reader but acknowledged a net loss in article space of five percent. In 1985, The New York Times Company acquired a minority stake in a US$21.7 million (equivalent to $326,110,074.63 in 2025) newsprint plant in Clermont, Quebec through Donahue Malbaie. The company sold its equity interest in Donahue Malbaie in 2017. The New York Times often uses large, bolded headlines for major events. For the print version of the Times, these headlines are written by one copy editor, reviewed by two other copy editors, approved by the masthead editors, and polished by other print editors. The process is completed before 8 p.m., but it may be repeated if further developments occur, as happened during the 2020 presidential election. On the day Joe Biden was declared the winner, The New York Times used a "hammer headline" reading "Biden Beats Trump", in bolded capitals. A dozen journalists discussed several potential headlines, such as "It's Biden" or "Biden's Moment", and prepared for a Donald Trump victory, in which case they would have used "Trump Prevails". During Trump's first impeachment, the Times drafted the hammer headline "Trump Impeached". The New York Times tightened the kerning between the E and the A, as not doing so would leave a noticeable gap due to the stem of the A sloping away from the E. The Times reused the tight kerning for "Biden Beats Trump" and Trump's second impeachment, which simply read, "Impeached". In cases where two major events occur on the same day or immediately after each other, The New York Times has used a "paddle wheel" headline, where both headlines are used but split by a line.
The term dates back to August 8, 1959, when it was revealed that the United States was monitoring Soviet missile firings and when Explorer 6—shaped like a paddle wheel—launched. Since then, the paddle wheel has been used several times, including on January 21, 1981, when Ronald Reagan was sworn in minutes before Iran released fifty-two American hostages, ending the Iran hostage crisis. At the time, most newspapers gave top billing to the end of the hostage crisis, but the Times placed the inauguration above it. Other occasions on which the paddle wheel has been used include July 26, 2000, when the 2000 Camp David Summit ended without an agreement and George W. Bush announced that Dick Cheney would be his running mate, and June 24, 2016, when the United Kingdom's European Union membership referendum passed, beginning Brexit, and the Supreme Court deadlocked in United States v. Texas. The New York Times has run editorials from its editorial board on the front page twice. On June 13, 1920, the Times ran an editorial opposing Warren G. Harding, who had been nominated during that year's Republican Party presidential primaries. Amid growing acceptance of front-page editorials at publications such as the Detroit Free Press, The Patriot-News, The Arizona Republic, and The Indianapolis Star, The New York Times ran an editorial on its front page on December 5, 2015, following a terrorist attack in San Bernardino, California, in which fourteen people were killed. The editorial advocated the prohibition of "slightly modified combat rifles" of the kind used in the San Bernardino shooting and of "certain kinds of ammunition". Conservative figures, including Texas senator Ted Cruz, The Weekly Standard editor Bill Kristol, Fox & Friends co-anchor Steve Doocy, and then-New Jersey governor Chris Christie, criticized the Times. Talk radio host Erick Erickson fired several rounds into a copy of The New York Times and posted a picture online.
Since 1997, The New York Times's primary distribution center has been located in College Point, Queens. The facility covers 300,000 square feet (28,000 m2) and employed 170 people as of 2017. The College Point distribution center prints 300,000 to 800,000 newspapers daily. On most nights, the presses start before 11 p.m. and finish before 3 a.m. A robotic crane grabs each roll of newsprint, and a series of rollers transfers ink onto the paper. The finished newspapers are wrapped in plastic and shipped out. As of 2018, the College Point facility accounted for 41 percent of production. Other copies are printed at the plants of 26 other publications, such as The Atlanta Journal-Constitution, The Dallas Morning News, The Santa Fe New Mexican, and the Courier Journal. With the decline of newspapers, particularly regional publications, copies of the Times must travel farther; newspapers for Hawaii, for example, are flown from San Francisco on United Airlines, and Sunday papers are flown from Los Angeles on Hawaiian Airlines. Computer glitches, mechanical issues, and weather phenomena affect circulation but do not stop the paper from reaching customers. The College Point facility prints over two dozen other papers, including The Wall Street Journal and USA Today. The New York Times has halted its printing process several times to account for major developments. The first printing stoppage occurred on March 31, 1968, when then-president Lyndon B. Johnson announced that he would not seek reelection. Other press stoppages include May 19, 1994, for the death of former first lady Jacqueline Kennedy Onassis, and July 17, 1996, for the crash of Trans World Airlines Flight 800. The 2000 presidential election necessitated two press stoppages. Al Gore appeared to concede on November 8, prompting then-executive editor Joseph Lelyveld to stop the Times's presses and print a new headline, "Bush Appears to Defeat Gore", with a story stating that George W. Bush had been elected president.
However, Gore withheld his concession amid doubts about the result in Florida. Lelyveld stopped the presses again and replaced the headline with "Bush and Gore Vie for an Edge". Since 2000, the Times has stopped its presses three times: for the death of William Rehnquist on September 3, 2005; for the killing of Osama bin Laden on May 1, 2011; and for the passage of the Marriage Equality Act in the New York State Assembly and its signing by then-governor Andrew Cuomo on June 24, 2011. Online platforms The New York Times website is hosted at nytimes.com. It has undergone several major redesigns and infrastructure developments since its debut. In April 2006, The New York Times redesigned its website with an emphasis on multimedia. In preparation for Super Tuesday in February 2008, the Times developed a live election system using the Associated Press's File Transfer Protocol (FTP) service and a Ruby on Rails application; nytimes.com experienced its highest-ever traffic on Super Tuesday and the day after. The NYTimes application debuted with the introduction of the App Store on July 10, 2008. Engadget's Scott McNulty wrote critically of the app, comparing it unfavorably to The New York Times's mobile website. An iPad version with select articles was released on April 3, 2010, alongside the first-generation iPad. In October, The New York Times expanded NYT Editors' Choice to include the paper's full articles. NYT for iPad was free until 2011. The Times applications on iPhone and iPad began offering in-app subscriptions in July 2011. The Times released a web application for iPad—featuring a format summarizing trending headlines on Twitter—and a Windows 8 application in October 2012. Plans to pursue profitability through an online magazine and a "Need to Know" subscription were reported by Adweek in July 2013.
In March 2014, The New York Times announced three applications—NYT Now, an application that offers pertinent news in a blog format, and two then-unnamed applications, later known as NYT Opinion and NYT Cooking—to diversify its product offerings. The New York Times manages several podcasts, including multiple podcasts with Serial Productions. The Times's longest-running podcast is The Book Review Podcast, which debuted as Inside The New York Times Book Review in April 2006. The New York Times's defining podcast is The Daily, a daily news podcast hosted by Michael Barbaro that debuted on February 1, 2017. Between March 2022 and March 2025, the approximately 30-minute program was co-hosted with Sabrina Tavernise. Beginning in April 2025, Barbaro was joined by two new regular co-hosts, Natalie Kitroeff and Rachel Abrams. The Interview, launched in 2024, is hosted weekly by David Marchese and Lulu Garcia-Navarro. Episodes typically last 40 to 50 minutes, and condensed versions of the interviews are published simultaneously in The New York Times Magazine. Guests have included politicians, actors, influential experts, media figures, and high-profile writers. In October 2021, The New York Times began testing "New York Times Audio", an application featuring podcasts from the Times, audio versions of articles (including those from other publications through Audm), and archives from This American Life. The application debuted in May 2023, exclusively on iOS for Times subscribers. New York Times Audio includes exclusive podcasts such as The Headlines, a daily news recap, and Shorts, short audio stories under ten minutes. In addition, a "Reporter Reads" section features Times journalists reading their articles and providing commentary.
The New York Times has used video games as part of its journalistic efforts, among the first publications to do so, contributing to an increase in Internet traffic; the publication has also developed its own video games. In 2014, The New York Times Magazine introduced Spelling Bee, a word game in which players form words from a set of letters arranged in a honeycomb and are awarded points based on the length of the word, with extra points if the word is a pangram. The game was proposed by Will Shortz, created by Frank Longo, and has been maintained by Sam Ezersky. In May 2018, Spelling Bee was published on nytimes.com, furthering its popularity. In February 2019, the Times introduced Letter Boxed, in which players form words from letters placed on the edges of a square box, followed in June 2019 by Tiles, a matching game in which players form sequences of tile pairings, and Vertex, in which players connect vertices to assemble an image. In July 2023, The New York Times introduced Connections, in which players identify groups of words that share a common property. In April 2023, the Times introduced Digits, a game that required using operations on different values to reach a target number; Digits was shut down that August. In March 2024, The New York Times released Strands, a themed word search. In January 2022, The New York Times Company acquired Wordle, a word game developed by Josh Wardle in 2021, at a valuation in the "low-seven figures". The acquisition was championed by David Perpich, a member of the Sulzberger family, who proposed the purchase to Knight over Slack after reading about the game. The Washington Post purportedly considered acquiring Wordle, according to Vanity Fair. At the 2022 Game Developers Conference, Wardle stated that he was overwhelmed by the volume of Wordle facsimiles and by overzealous monetization practices in other games.
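The Spelling Bee scoring described above is simple to formalize. The sketch below assumes the game's commonly cited rules (a four-letter minimum, a mandatory center letter, one point for four-letter words, word-length points otherwise, and a seven-point pangram bonus); the exact point values are not given in the text, so treat them as illustrative.

```python
def score_word(word: str, letters: set[str], center: str) -> int:
    """Score a Spelling Bee guess, or return 0 if it is invalid.

    Assumed rules: words must be at least four letters, must contain
    the center letter, and may only use honeycomb letters (reuse is
    allowed). Four-letter words score 1; longer words score their
    length; a pangram (all seven letters used) earns a +7 bonus.
    """
    word = word.lower()
    if len(word) < 4:            # below the minimum word length
        return 0
    if center not in word:       # center letter is mandatory
        return 0
    if not set(word) <= letters:  # only honeycomb letters allowed
        return 0
    score = 1 if len(word) == 4 else len(word)
    if set(word) == letters:     # pangram bonus
        score += 7
    return score

letters = set("acelnot")
print(score_word("lace", letters, "a"))     # 4-letter word: 1 point
print(score_word("octane", letters, "a"))   # 6-letter word: 6 points
print(score_word("lactone", letters, "a"))  # pangram: 7 + 7 = 14 points
```

Because letters may repeat, validity is a subset check on the word's letter set rather than a multiset comparison, which is what makes the pangram test a simple set equality.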
Concerns mounted that The New York Times would monetize Wordle by placing it behind a paywall; Wordle is a client-side browser game and can be played offline by downloading its webpage. Wordle moved to the Times's servers and website in February. The game was added to the NYT Games application in August, which necessitated rewriting it in the JavaScript library React. In November, The New York Times announced that Tracy Bennett would be Wordle's editor. Other publications The New York Times Magazine and The Boston Globe Magazine are the only remaining weekly Sunday newspaper magazines following The Washington Post Magazine's cancellation in December 2022. In February 2016, The New York Times introduced a Spanish-language website, The New York Times en Español. The website, intended to be read on mobile devices, would contain translated articles from the Times and reporting from journalists based in Mexico City. The Times en Español's style editor is Paulina Chavira, who has advocated for a pluralistic Spanish to accommodate the variety of nationalities among the newsroom's journalists and who wrote a stylebook for The New York Times en Español. Articles the Times intends to publish in Spanish are sent to a translation agency and adapted to Spanish writing conventions; the present progressive tense may be used for forthcoming events in English, but other tenses are preferable in Spanish. The Times en Español consults the Real Academia Española and Fundéu, frequently adjusts the use of diacritics—such as using an acute accent for the Cártel de Sinaloa but not the Cartel de Medellín—and has used the gender-neutral pronoun elle. Headlines in The New York Times en Español are not capitalized. The Times en Español publishes El Times, a newsletter led by Elda Cantú and intended for all Spanish speakers. In September 2019, The New York Times ended The New York Times en Español's separate operations. A study published in The Translator in 2023 found that the Times en Español engaged in tabloidization.
In June 2012, The New York Times introduced a Chinese-language website, 纽约时报中文, in response to Chinese editions created by The Wall Street Journal and the Financial Times. Conscious of censorship, the Times established servers outside of China and affirmed that the website would uphold the paper's journalistic standards; the government of China had previously blocked articles from nytimes.com through the Great Firewall, and the website had been blocked in China until August 2001, when then-general secretary Jiang Zemin met with journalists from The New York Times. Then-foreign editor Joseph Kahn assisted in the establishment of cn.nytimes.com, an effort that contributed to his appointment as executive editor in April 2022. In October 2012, 纽约时报中文 published an article detailing the wealth of then-premier Wen Jiabao's family. In response, the government of China blocked access to nytimes.com and cn.nytimes.com, and references to the Times and Wen were censored on the microblogging service Sina Weibo. In March 2015, a mirror of 纽约时报中文 and the website of GreatFire were the targets of a government-sanctioned distributed denial-of-service attack on GitHub, disabling access to the service for several days. Chinese authorities requested the removal of The New York Times's news applications from the App Store in December 2016. Awards and recognition As of 2023, The New York Times has received 137 Pulitzer Prizes, the most of any publication. The New York Times is considered a newspaper of record in the United States.[l] The Times is the largest metropolitan newspaper in the United States; as of 2022, The New York Times is the second-largest newspaper by print circulation in the United States, behind The Wall Street Journal. A study published in Science, Technology, & Human Values in 2013 found that The New York Times received more citations in academic journals than the American Sociological Review, Research Policy, or the Harvard Law Review.
With sixteen million unique records, the Times is the third-most-referenced source in Common Crawl, a collection of online material used in datasets such as GPT-3, behind Wikipedia and a United States patent database. The New Yorker's Max Norman wrote in March 2023 that the Times has shaped mainstream English usage. In a January 2018 article for The Washington Post, Margaret Sullivan stated that The New York Times affects the "whole media and political ecosystem". The New York Times's recent success has led to concerns over media consolidation, particularly amid the decline of newspapers. In 2006, economists Lisa George and Joel Waldfogel examined the consequences of the Times's national distribution strategy for the circulation of local newspapers, finding that local circulation decreased among college-educated readers. The effect of The New York Times in this manner was observed at The Forum of Fargo-Moorhead, the newspaper of record for Fargo, North Dakota. Axios founder Jim VandeHei opined that the Times is "going to basically be a monopoly" in a column by Ben Smith, then the Times's media columnist and formerly BuzzFeed News's editor-in-chief; in the article, Smith cites the strength of The New York Times's journalistic workforce, its broadening content, and its poaching of Gawker editor-in-chief Choire Sicha, Recode editor-in-chief Kara Swisher, and Quartz editor-in-chief Kevin Delaney. Smith compared the Times to the New York Yankees during their 1927 "Murderers' Row" season. Controversies Since 2003, studies analyzing coverage of the Israeli–Palestinian conflict in The New York Times have demonstrated a bias against Palestinians and in favor of Israel.[m] The New York Times has received criticism for its coverage of the Gaza war and genocide.
In April 2024, The Intercept reported that a November 2023 internal memorandum by Susan Wessling and Philip Pan instructed journalists to limit use of the terms "genocide" and "ethnic cleansing" and to avoid the phrase "occupied territory" in the context of Palestinian land, the word "Palestine" except in rare circumstances, and the term "refugee camps" to describe areas of Gaza, despite recognition from the United Nations. A spokesperson for the Times stated that issuing such guidance was standard practice. An analysis by The Intercept noted that The New York Times had described Israeli deaths as a massacre nearly sixty times but had described Palestinian deaths as a massacre only once. Writers and editors, including Jazmine Hughes and Jamie Lauren Keiles, have left the newspaper over its coverage of events in Gaza. In December 2023, The New York Times published an investigation titled "'Screams Without Words': How Hamas Weaponized Sexual Violence on Oct. 7", alleging that Hamas weaponized sexual and gender-based violence during its armed incursion into Israel. The investigation was the subject of an article from The Intercept that questioned the journalistic acumen of Anat Schwartz, a filmmaker involved in the inquiry who had no prior reporting experience and who had agreed with a post stating that Israel should "violate any norm, on the way to victory"; doubted the veracity of the opening claim that Gal Abdush was raped, which her family disputed; and alleged that the Times was pressured by the Committee for Accuracy in Middle East Reporting in America. The New York Times initiated an inquiry into the leaking of confidential information about the report to other outlets, which drew criticism from NewsGuild of New York president Susan DeCarava for purported racial targeting; the Times's investigation was inconclusive but found gaps in the way proprietary journalistic material is handled.
The New York Times Building has been a site of protest during the Gaza war and genocide. These actions include a November 2023 sit-in demanding that the Times's editorial board publicly call for a ceasefire and accusing the media company of "complicity in laundering genocide"; a February 29, 2024, protest and press conference following the release of The Intercept's critical investigation into the "Screams Without Words" exposé; and an action on July 30, 2025, in which protesters spray-painted "NYT Lies, Gaza dies" on the building's glass facade. In addition, protesters blocked The New York Times's distribution center on March 14, 2024, and executive editor Joseph Kahn's residence was splattered with red paint on August 25, 2025. The collective Writers Against the War on Gaza, which publishes the mock publication The New York War Crimes, has been associated with protests against The New York Times. On October 27, 2025, 300 writers—including scholars, journalists, and public intellectuals—pledged to boycott The New York Times and withhold contributions to the paper in protest of what they describe as its complicity in the Gaza genocide, demanding (1) a review of anti-Palestinian bias in the newsroom, (2) a retraction of "Screams Without Words", and (3) a call from the editorial board for a US arms embargo on Israel. Among the initial signatories, about 150 had previously contributed to the Times. The New York Times has also received criticism regarding its coverage of transgender people. After it published an opinion piece by Weill Cornell Medicine professor Richard A. Friedman titled "How Changeable Is Gender?" in August 2015, Vox's German Lopez criticized Friedman for suggesting that parents and doctors might be right to let children suffer from severe dysphoria in case something changes down the line, and for implying that conversion therapy may work for transgender children.
In February 2023, nearly one thousand current and former Times writers and contributors wrote an open letter addressed to standards editor Philip B. Corbett criticizing the paper's coverage of transgender, non-binary, and gender-nonconforming people; some of the Times's articles have been cited in state legislatures attempting to justify criminalizing gender-affirming care. The contributors wrote in the open letter that "the Times has in recent years treated gender diversity with an eerily familiar mix of pseudoscience and euphemistic, charged language, while publishing reporting on trans children that omits relevant information about its sources."[n] According to former Times journalist Billie Jean Sweeney, a push for writers to challenge "every aspect of being trans", ranging from gender-inclusive language to access to medical care, came from the top in 2022 after leadership was handed over to A. G. Sulzberger, Joe Kahn, and Carolyn Ryan, as part of an effort to win goodwill with the Trump campaign without incurring backlash from the general public. The Times has continually denied any bias in its reporting, insisting that its coverage of "fiercely contested medical and legal debates" is fair and balanced, and that it would not tolerate journalists protesting its transgender coverage.
======================================== |
[SOURCE: https://www.ynet.co.il/wellness] | [TOKENS: 189] |
Is this the plant that could beat baldness? Still have a cold? This is what you should eat. "I'll sleep a lot today and pull an all-nighter tomorrow": can you bank sleep hours? Oats: two days of concentrated consumption could change your life.
======================================== |
[SOURCE: https://en.wikipedia.org/wiki/Gapo] | [TOKENS: 593] |
Gapo Gapo is a Vietnamese social networking service based in Hanoi, Vietnam. Users are able to create a personal profile and share text, photos, and videos with others on the platform. Users can also use Gapo for live streaming, instant messaging, blogging, and online payments. Gapo was launched in July 2019 by Hà Trung Kiên and Duong Vi Khoa. History Gapo was founded in response to calls for Vietnam's Communist-led government to produce a domestic alternative to social media giants like Facebook and Google. Gapo officially launched on July 23, 2019, at an event in Hanoi. The company received 500 billion đồng (US$22 million) in funding from technology corporation G-Group to be used in the first phase of development. It also partnered with Sony Music Entertainment to provide music content for its services. Features Gapo features a news feed for posting content, livestreaming, instant messaging, and blogging. It also allows users to pay online and access public services. Reception Within two days of launch, Gapo received about 200,000 registrations. By September 2019, the user base had increased to one million. Upon launch, Gapo experienced significant technical difficulties. Users complained that they were unable to sign up for a new account and that certain functions were not available at launch. These issues caused Gapo to temporarily suspend its services in order to perform upgrades and bug fixes. Gapo relaunched the next day, though many users reported that access speeds had decreased. The mobile app also received mixed reviews from users in the App Store and the Google Play Store, with average ratings of 3.1 and 3.5, respectively. Most users found the app to be a knockoff of Facebook, although some praised it for being locally developed.
Le Hong Hiep of the ISEAS – Yusof Ishak Institute was doubtful that a Vietnamese-owned social networking service could be as powerful as a foreign-based one, stating that Vietnam might not be able to develop a viable social media network to compete with the likes of Facebook or Google. Others, like blogger Ann Chi, said that because local players comply with local censorship policy, there is a chance that locals might not trust Gapo and other local services in light of possible surveillance. Experts also cautioned that the company might need an additional trillion đồng of funding to reach its targeted user-base figures for the end of 2019 and for 2021. In response, the company stated that Gapo was never meant to compete with Facebook, noting instead that the main difference between Gapo and Facebook is that Gapo provides a personalized user experience through customization. Censorship Gapo reserves the right to censor posts and news that are deemed offensive or inaccurate by users or that are not approved by its censorship curators.
======================================== |
[SOURCE: https://www.wired.com/review/epilogue-gb-operator/] | [TOKENS: 4733] |
Matt Kamen, Gear, Feb 20, 2026 8:00 AM
Review: Epilogue GB Operator
Epilogue’s adorable plug-in gizmo turns your laptop into an all-powerful Nintendo Game Boy. Courtesy of Epilogue. $50 at Epilogue.
Rating: 9/10
WIRED: Wide compatibility with any Game Boy, Game Boy Color, or Game Boy Advance cartridge. Supports accessories like the Game Boy Camera. Save data backup and transfer tool. Exhaustive list of filters, tweaks, cheats, and emulation tools to experiment with. Bargain price.
TIRED: Not as portable or convenient as an actual Game Boy. Doesn’t remember some preferences universally. No save states (yet).
The Game Boy family of handheld consoles was groundbreaking, making gaming more accessible to millions worldwide. Nintendo’s portables beat off technologically superior competition from the likes of Sega’s Game Gear and Atari’s Lynx. They became home to foundational moments for the medium, from what is still arguably the definitive version of Tetris to the birth of Pokémon. Yet with the iconic gray monolith launching in 1989, it’s now pushing 40—and playing those important classics gets tougher every year. If you have a collection of original, physical Game Boy cartridges in 2026, you essentially have two options. One is to hope your original console still works—a Game Boy Advance is best here, being a comparatively fresh-faced 25 years old with backward compatibility for original Game Boy and Game Boy Color cartridges. The other is to pick up a third-party field-programmable gate array (FPGA) console, like the Analogue Pocket, which also offers broad compatibility with all original carts.
However, the former is victim to the ravages of time, with fewer functioning units available as the years march on, while the latter is a pricier investment tailored to hardcore collectors. Enter option three: Epilogue’s GB Operator, a way to play original Game Boy, Game Boy Color, and Game Boy Advance cartridges directly on your computer, for a penny less than $50.
Emulation Nation
The GB Operator is billed as “a cartridge slot for your computer,” and rarely has a tech product been so accurately described. Unpack it, and you’ll find an unassuming translucent cuboid, a circuit board in a perspex box, the slot itself the only indication that something interesting is going on here. It measures a positively pocketable 1.3 × 1.2 × 3.5 inches and weighs a negligible 1.5 ounces. Setup is a breeze: Just plug it into your computer with the included USB-C cable, which also provides power, install the Playback software (available for PC, Mac, and Linux), and … that’s it. Slot a cart in and Playback should automatically detect it, recognize the game and region it’s from, and pull up its cover art and a description. The software does an authenticity check when you insert a cartridge, which Epilogue says is 97.8 percent accurate. Anyone around for the original Game Boy will remember the plague of knockoff carts, so for collectors, it’s nice to have that peace of mind. This really is a gadget focused on celebrating the legitimate physical media from back in the day. Games run through emulation. Epilogue Playback defaults to the popular mGBA emulator, though you can select from several other cores—the engine that mimics the original console's hardware—or you can use your own if you prefer. It’s a great touch for anyone deeply immersed in that side of the retro gaming scene.
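Epilogue doesn’t say how its 97.8-percent-accurate authenticity check works, but one standard signal any cartridge reader can draw on is the header checksum documented for original Game Boy carts. The sketch below is mine, not Epilogue’s; the byte layout follows the publicly documented cartridge header (the checksum byte at offset 0x14D covers offsets 0x134 through 0x14C).

```python
def gb_header_checksum_ok(rom: bytes) -> bool:
    """Check the Game Boy cartridge header checksum at byte 0x14D.

    Per the documented header layout, start at 0 and, for each byte b
    in rom[0x134:0x14D], compute x = x - b - 1 (mod 256). A genuine
    dump must match the stored byte, so a mismatch is a strong hint
    of a corrupt read or a counterfeit cart.
    """
    if len(rom) < 0x150:  # too short to contain a full header
        return False
    x = 0
    for b in rom[0x134:0x14D]:
        x = (x - b - 1) & 0xFF
    return x == rom[0x14D]

# Demo on a synthetic header: 25 zero bytes checksum to (-25) mod 256 = 231.
rom = bytearray(0x150)
rom[0x14D] = 231
print(gb_header_checksum_ok(bytes(rom)))  # True
```

Game Boy Advance cartridges use a different scheme (a complement check over the header), and Epilogue’s actual heuristic is presumably more involved; this only illustrates the kind of on-cart metadata a reader device can validate.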
Whichever core you choose, Playback allows users to get into the nuts and bolts of game settings, fine-tuning performance down to precise details such as frame skips and audio offsets. For anyone dipping their toes into emulation, everything is explained in refreshingly clear, jargon-free language, too.
Game Boy Imax
If you just want to play your old games, you don’t need to deal with any of that—just hit Start and travel back in time for some vintage gaming bliss. The visuals are the most striking difference when playing through the GB Operator. Whether in windowed or full-screen view, you’re getting a massive step up from the 160 × 144-pixel resolution of the Game Boy and Game Boy Color screens, and even the Advance’s luxurious 240 × 160-pixel resolution pales compared to modern monitors. Here, you see every last pixel programmed into the originals, blown up to colossal size. Seeing what were once tiny games at such a scale can take some getting used to, but it's a delight to appreciate all the detail and artistry packed in. While Game Boy Advance games run essentially as-is, showcasing the richness of that full-color 32-bit era, the emulator provides a treasure trove of visual filters to play around with for original Game Boy games. By default, Game Boy cartridges load in a more modern gray scale, but there are options to play them in the classic green-and-black look, replicate the monochrome screens of the Game Boy Pocket, or mimic the brightness of the Japan-only Game Boy Light. The various color highlights that the Game Boy Color would apply to original Game Boy games are there too, as are the filters from the Super Game Boy (the adapter that ran Game Boy carts on the Super Nintendo). Optionally, you can display the frame rate too—I got a reliable 60 frames per second on every cart I tested. Running the carts through an emulator allows for some modern tweaks.
Epilogue integrates support for Retro Achievements (note that a separate account login is required), allowing you to track accomplishments in these older games, and Playback can automatically pull up a list of cheats for each cart. Go ahead: 30-odd years on, infinite lives aren’t hurting anyone. Playback also allows performance and settings to be tailored on a per-game basis, including completely remapping controls or accelerating game speed. The only downside is that it seems to lack some helpful universal settings. For instance, I couldn’t find a way to set my Xbox controller as the default input—Playback randomly switched back to keyboard controls after I changed carts.
Physical Ephemera
When Epilogue says the GB Operator is universally compatible, it means that. I’ve thrown over a dozen carts at the device, testing US, European, and Australian releases from all three main Game Boy console generations, and it’s recognized and played every single one. Some took a little longer to launch as Playback loaded them into its emulator. Sonic Advance 3 took the longest, roughly a minute—ironic, given its speedy protagonist—but I was able to load and play every title I tried. Any save data still on the carts was read smoothly too, allowing me to jump right back into an endgame boss battle in Treasure’s brilliant-but-tough Astro Boy: Omega Factor. Sadly, my memory of the controls years after I last played wasn’t nearly as well preserved. The Operator even handles some of the curiosities that graced the various Game Boys over the decades. Case in point: my copy of Boktai 2: Solar Boy Django, an experimental title from Death Stranding auteur Hideo Kojima that used a solar sensor baked into the game cartridge to power up the protagonist’s weapons. Released for the GBA in 2005, it dared gamers to do the unthinkable: leave the house and get some sun.
In 2026, you can plug it into the GB Operator and tell the Playback software to act as though it’s a bright day. Sure, it defeats the point, but it’s cool that even these niche titles are catered to. Epilogue says it even supports accessories like the Game Boy Camera, allowing you to use it as a webcam, although I don’t have one to test. The only downside to relying on original cartridges is the carts themselves. If they’re not well maintained, they can be prone to read errors. I picked up a secondhand Game Boy Color copy of Disney’s Tarzan, and it loaded but kept cutting out, which I suspect was due to the cartridge's physical contacts eroding over time, meaning the GB Operator couldn’t reliably read the data. A few others also weren’t immediately recognized in the Playback software, but the time-honored ritual of blowing into them sorted the issue.
Preservation Society
While the GB Operator comfortably allows you to play your existing physical library through your computer, it’s also useful for preservation and new game creation. The Playback software lets you back up a game’s save data to your computer, which has some real utility. Carts that stored progress locally (as opposed to using older password systems) typically relied on a battery to do so, and once that battery runs out, so do your save files. I found this out first-hand when loading up my copy of Pokémon Gold—whether I’d caught ’em all back in 2001, I’ll never be able to confirm, as the battery died at some point in the last quarter-century(!). Booting it up through the GB Operator was like switching it on for the first time.
However, if it had still contained my surely completed Pokédex, I’d have been able to copy that data to my laptop’s hard drive, replace the battery in the cartridge (a fiddly process, but doable), and then load the save back onto it—magic.However, it’s worth noting that at present, everything relies on the cartridge's actual save processes. While virtual “snapshot” saves—capturing a game at any given moment—are on Epilogue’s road map, the feature is not yet available. It will first be tested through the experimental “Nightly Builds” version of Playback (found at the bottom of the downloads page) before being fully implemented.You can also use the GB Operator to dump the entire main game data from a cart you personally own, allowing you to make a legal copy for your own archival purposes (don’t share; that’s when it becomes piracy). The process itself is quick, depending on the game’s size, but even the Game Boy Advance’s biggest games were 32 MB at most. Even if you do back up a game in this manner, GB Operator still requires the original cart to run anything, though—you can’t just load the dumped ROM through the Playback software.Finally, if you’re an aspiring developer or into retro-style indie games, it allows you to transfer homebrew games created through the likes of GB Studio onto a flash cart and play them on an actual Game Boy. It’s another niche feature, but one that’s great to have, allowing present-day creators to build on the legacy of the beloved handhelds.It’s honestly hard to find much fault with the GB Operator. Sure, needing to be hooked up to a computer robs it of the pick-up-and-play appeal of the pocket-sized consoles it pays homage to. But even that feels like splitting hairs. 
It ultimately does everything it promises, all for less than $50.This is a marvelous bit of kit, and the overall performance and utility bode extremely well for Epilogue’s upcoming SN Operator, which aims to do the same for the Super Nintendo as this does for the Game Boy family (and a mysterious “?? Operator” to follow). If you’re looking for an easy, low-budget way to revisit or revive your Game Boy collection, this is your best option.Epilogue GB OperatorRating: 9/10$50 at Epilogue$50 at Epilogue Review: Epilogue GB Operator 9/10 The Game Boy family of handheld consoles was groundbreaking, making gaming more accessible to millions worldwide. Nintendo’s portables beat off technologically superior competition from the likes of Sega’s Game Gear and Atari’s Lynx. They became home to foundational moments for the medium, from what is still arguably the definitive version of Tetris to the birth of Pokémon. Yet with the iconic gray monolith launching in 1989, it’s now pushing 40—and playing those important classics gets tougher every year. If you have a collection of original, physical Game Boy cartridges in 2026, you essentially have two options. One is to hope your original console still works—a Game Boy Advance is best here, being a comparatively fresh-faced 25 years old with backward compatibility for original Game Boy and Game Boy Color cartridges. The other is to pick up a third-party field-programmable gate array (FPGA) console, like the Analogue Pocket, which also offers broad compatibility with all original carts. However, the former is victim to the ravages of time, with fewer functioning units available as the years march on, while the latter is a pricier investment tailored to hardcore collectors. Enter option three: Epilogue’s GB Operator, a way to play original Game Boy, Game Boy Color, and Game Boy Advance cartridges, directly on your computer, for a penny less than $50. 
Emulation Nation

The GB Operator is billed as “a cartridge slot for your computer,” and rarely has a tech product been so accurately described. Unpack it, and you’ll find an unassuming translucent cuboid, a circuit board in a perspex box, with the slot itself the only indication that something interesting is going on here. It measures a positively pocketable 1.3 × 1.2 × 3.5 inches and weighs a negligible 1.5 ounces. Setup is a breeze: Just plug it into your computer with the included USB-C cable, which also provides power, install the Playback software (available for PC, Mac, and Linux), and … that’s it.

Slot a cart in and Playback should automatically detect it, recognize the game and region it’s from, and pull up its cover art and a description. The software does an authenticity check when you insert a cartridge, which Epilogue says is 97.8 percent accurate. Anyone around for the original Game Boy will remember the plague of knockoff carts, so for collectors, it’s nice to have that peace of mind. This really is a gadget focused on celebrating the legitimate physical media from back in the day.

Games run through emulation. Epilogue’s Playback defaults to the popular mGBA emulator, though you can select from several other cores—the engine that mimics the original console’s hardware—or you can use your own if you prefer. It’s a great touch for anyone deeply immersed in that side of the retro gaming scene. Whichever core you choose, Playback allows users to get into the nuts and bolts of game settings, fine-tuning performance down to precise details such as frame skips and audio offsets. For anyone dipping their toes into emulation, everything is explained in refreshingly clear, jargon-free language, too.

Game Boy Imax

If you just want to play your old games, you don’t need to deal with any of that—just hit Start and travel back in time for some vintage gaming bliss.
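Epilogue's authenticity check is proprietary, but the kind of validation it builds on can be illustrated with the standard Game Boy cartridge header checksum described in the community Pan Docs reference: bytes 0x0134 through 0x014C of the ROM must sum (with a decrement per byte) to the value stored at 0x014D. This sketch is illustrative only, not Epilogue's method, and the tiny fake ROM image is invented for the demo:

```python
# The Game Boy header checksum (per the community "Pan Docs") covers
# the title/licensee/version bytes 0x0134-0x014C; the running value is
# decremented by each byte plus one, and the low byte must match 0x014D.

def gb_header_checksum(rom: bytes) -> int:
    checksum = 0
    for byte in rom[0x0134:0x014D]:
        checksum = (checksum - byte - 1) & 0xFF
    return checksum

def header_looks_valid(rom: bytes) -> bool:
    return len(rom) > 0x014D and gb_header_checksum(rom) == rom[0x014D]

# Build a minimal fake ROM image for demonstration: 0x150 zeroed bytes,
# a title field, and the checksum byte patched to the correct value.
rom = bytearray(0x150)
rom[0x0134:0x0144] = b"TETRIS".ljust(16, b"\x00")  # 16-byte title field
rom[0x014D] = gb_header_checksum(bytes(rom))
print(header_looks_valid(bytes(rom)))  # → True
```

A valid checksum alone proves very little, since bootleg carts copy the header wholesale; real knockoff detection presumably compares far more than this.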
The visuals are the most striking difference when playing through the GB Operator. Whether in windowed or full-screen view, you’re getting a massive step up from the 160 × 144-pixel resolution of the Game Boy and Game Boy Color screens, and even the Advance’s luxurious 240 × 160-pixel resolution pales compared to modern monitors. Here, you see every last pixel programmed into the originals, blown up to colossal size. Seeing what were once tiny games at such a scale can take some getting used to, but it's a delight to appreciate all the detail and artistry packed in.

While Game Boy Advance games run essentially as-is, showcasing the richness of that full-color 32-bit era, the emulator provides a treasure trove of visual filters to play around with for original Game Boy games. By default, Game Boy cartridges load in a more modern gray scale, but there are options to play them in the classic green-and-black look, replicate the monochrome screens of the Game Boy Pocket, or mimic the brightness of the Japan-only Game Boy Light. The various color highlights that the Game Boy Color would apply to original Game Boy games are there too, as are the filters from the Super Game Boy (the adapter that ran Game Boy carts on the Super Nintendo). Optionally, you can display the frame rate too—I got a reliable 60 frames per second on every cart I tested.

Running the carts through an emulator allows for some modern tweaks. Epilogue integrates support for Retro Achievements (note that a separate account login is required), allowing you to track accomplishments in these older games, and Playback can automatically pull up a list of cheats for each cart. Go ahead: 30-odd years on, infinite lives aren’t hurting anyone. Playback also allows performance and settings to be tailored on a per-game basis, including completely remapping controls or accelerating game speed. The only downside is that it seems to lack some helpful universal settings.
For instance, I couldn’t find a way to set my Xbox controller as the default input—Playback randomly switched back to keyboard controls after I changed carts.

Physical Ephemera

When Epilogue says the GB Operator is universally compatible, it means that. I’ve thrown over a dozen carts at the device, testing out US, European, and Australian releases from all three main Game Boy console generations, and it’s recognized and played every single one.

Some took a little longer to launch as Playback loads each cart into its emulator. Sonic Advance 3 took the longest, roughly a minute—ironic, given its speedy protagonist—but I was able to load and play every title I tried. Any save data still on the carts was read smoothly too, allowing me to jump right back into an endgame boss battle in Treasure’s brilliant-but-tough Astro Boy: Omega Factor. Sadly, my memory of the controls years after I last played wasn’t nearly as well preserved.

The Operator even handles some of the curiosities that graced the various Game Boys over the decades. Case in point: my copy of Boktai 2: Solar Boy Django, an experimental title from Death Stranding auteur Hideo Kojima that used a solar sensor baked into the game cartridge to power up the protagonist’s weapons. Released for the GBA in 2005, it dared gamers to do the unthinkable: leave the house and get some sun. In 2026, you can plug it into the GB Operator and tell the Playback software to act as though it’s a bright day. Sure, it defeats the point, but it’s cool that even these niche titles are catered to. Epilogue says it even supports accessories like the Game Boy Camera, allowing you to use it as a webcam, although I don’t have one to test.

The only downside to relying on the original cartridges is the carts themselves. If they’re not well maintained, they can be prone to read errors.
I picked up a secondhand Game Boy Color copy of Disney’s Tarzan, and it loaded but kept cutting out, which I suspect was due to the cartridge's physical contacts eroding over time, meaning the GB Operator couldn’t reliably read the data. A few others also weren’t immediately recognized in the Playback software, but the time-honored ritual of blowing into them sorted the issue.

Preservation Society

While the GB Operator comfortably allows you to play your existing physical library through your computer, it’s also useful for preservation and new game creation purposes. The Playback software lets you back up a game’s save data to your computer, which has some real utility. Carts that stored progress locally (as opposed to older password systems) typically relied on a battery to do so, and once that runs out, so do your save files.

I found this out first-hand when loading up my copy of Pokémon Gold—whether I’d caught ’em all back in 2001, I’ll never be able to confirm, as the battery died at some point in the last quarter-century(!). Booting it up through the GB Operator was like switching it on for the first time. However, if it had still contained my surely completed Pokédex, I’d have been able to copy that data to my laptop’s hard drive, replace the battery in the cartridge (a fiddly process, but doable), and then load the save back onto it—magic.

It’s worth noting that, at present, everything relies on the cartridge's actual save processes. While virtual “snapshot” saves—capturing a game at any given moment—are on Epilogue’s road map, the feature is not yet available. It will first be tested through the experimental “Nightly Builds” version of Playback (found at the bottom of the downloads page) before being fully implemented.
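Assuming the exported save lands as an ordinary file on disk (the file and directory names below are hypothetical, not Playback's), archiving a copy before attempting a battery swap is a one-step job, and a hash comparison confirms the copy is intact before you open the cartridge:

```python
# Minimal sketch of archiving a cart's exported save data, with a
# SHA-256 comparison to verify the copy matches byte-for-byte.
import hashlib
import shutil
from pathlib import Path

def backup_save(save_path: Path, backup_dir: Path) -> Path:
    """Copy the save file into backup_dir and verify the copy."""
    backup_dir.mkdir(parents=True, exist_ok=True)
    dest = backup_dir / save_path.name
    shutil.copy2(save_path, dest)  # preserves timestamps too
    original = hashlib.sha256(save_path.read_bytes()).hexdigest()
    copied = hashlib.sha256(dest.read_bytes()).hexdigest()
    if original != copied:
        raise IOError("backup does not match the original save")
    return dest

# Demo with a dummy save file (names are invented for illustration):
save = Path("pokemon_gold.sav")
save.write_bytes(bytes(32 * 1024))  # 32 KB of placeholder SRAM data
print(backup_save(save, Path("backups")).name)  # → pokemon_gold.sav
```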
You can also use the GB Operator to dump the entire main game data from a cart you personally own, allowing you to make a legal copy for your own archival purposes (don’t share; that’s when it becomes piracy). The process itself is quick, depending on the game’s size; even the Game Boy Advance’s biggest games were 32 MB at most. Even if you do back up a game in this manner, the GB Operator still requires the original cart to run anything, though—you can’t just load the dumped ROM through the Playback software.

Finally, if you’re an aspiring developer or into retro-style indie games, it allows you to transfer homebrew games created through the likes of GB Studio onto a flash cart and play them on an actual Game Boy. It’s another niche feature, but one that’s great to have, allowing present-day creators to build on the legacy of the beloved handhelds.

It’s honestly hard to find much fault with the GB Operator. Sure, needing to be hooked up to a computer robs it of the pick-up-and-play appeal of the pocket-sized consoles it pays homage to. But even that feels like splitting hairs. It ultimately does everything it promises, all for less than $50. This is a marvelous bit of kit, and the overall performance and utility bode extremely well for Epilogue’s upcoming SN Operator, which aims to do the same for the Super Nintendo as this does for the Game Boy family (and a mysterious “?? Operator” to follow). If you’re looking for an easy, low-budget way to revisit or revive your Game Boy collection, this is your best option.
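Once you have a dumped GBA ROM, one quick sanity check on the file is the header checksum documented in the GBATEK reference: the byte at offset 0xBD must equal the negated sum of bytes 0xA0 through 0xBC, minus 0x19. This is not part of Playback's workflow, just a generic check any dumping tool could run, and the tiny fake "dump" below is invented for the demo:

```python
# GBA header checksum per GBATEK: subtract bytes 0xA0-0xBC from zero,
# subtract 0x19, keep the low byte; the result must match offset 0xBD.

def gba_header_checksum(rom: bytes) -> int:
    chk = 0
    for b in rom[0xA0:0xBD]:
        chk = (chk - b) & 0xFF
    return (chk - 0x19) & 0xFF

def dump_looks_sane(rom: bytes) -> bool:
    return len(rom) > 0xBD and rom[0xBD] == gba_header_checksum(rom)

# Fake 1 KB "dump" with a title field and a correctly patched checksum:
rom = bytearray(1024)
rom[0xA0:0xAC] = b"SONICADV3".ljust(12, b"\x00")  # 12-byte title field
rom[0xBD] = gba_header_checksum(bytes(rom))
print(dump_looks_sane(bytes(rom)))  # → True
```

It only catches gross read errors; verifying a dump properly means comparing its full hash against a known-good reference.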
======================================== |
[SOURCE: https://www.theverge.com/ride-sharing] | [TOKENS: 1225] |
Ride-sharing

The emergence of app-based ride-sharing platforms like Uber and Lyft transformed the way people in cities get around — and not always for the better. It nearly decimated the taxi industry while offering riders a more seamless way to travel. But it also choked many cities with car traffic and disrupted labor with the popularization of gig work. The Verge covers all the news and analysis related to ride-sharing as well as what the future holds for this mode of transportation.

Say that five times really fast! Uber has said it would use Baidu’s Apollo Go robotaxis in London, and now the company is adding Dubai as well, starting in March 2026.

The new accounts for riders aged 13 to 17 launch today in over 200 major markets, including New York, Chicago, Atlanta, Dallas, Boston, and Washington, DC. Parents get a link to track trips in real time, receive updates at pickup and drop-off, and can communicate directly with their teen’s Lyft driver if needed, the company says. The announcement comes almost three years after Uber first launched its teen accounts.

Lyft made a big splash when it bought Citi Bike’s parent company in 2018. It promised huge investments and improved service. But it’s also raised prices at a stunning rate, far outpacing inflation and fares for other transportation in NYC. And yet, it hasn’t bothered digging out most of its bike docks, according to Streetsblog.

The case involves a woman passenger who sued Uber after being sexually assaulted by a driver, accusing the company of failing to take basic precautions to protect customers. Uber has long been dogged by similar allegations — Reuters says the company is now facing approximately 3,000 lawsuits over similar claims — but this case could be a bellwether for future enforcement. A recent investigation found that Uber receives a report of sexual assault or misconduct somewhere in the world every eight minutes. [Reuters]

Uber customers can now be matched with a robotaxi operated by Avride in a small, 9-square-mile section of Dallas. The vehicles, Hyundai Ioniq 5s, still have safety drivers for now as part of a phased introduction, with fully driverless operations coming later. The fleet will also be small at first, but will grow to “hundreds” over time, the company says. This is Uber’s latest robotaxi deployment in the US, following the partnership with Waymo in Austin and Atlanta.

The workers, part of Project Sandbox, were one month into an expected three-month stint, Business Insider reports. Around a dozen people were involved, though it’s not clear how many were cut. “The client has recently communicated a change in their internal priorities, which directly affects ongoing work on this program,” Uber emailed the affected contractors on Monday. [Business Insider]

The company has announced a UK trial with autonomous delivery company Starship, starting in Sheffield and Leeds. It’s Uber’s first delivery bot trial in Europe, after tests in various US cities. Starship’s robots aren’t new to the region, though — one even delivered dinner to my colleague Tom way back in 2017.

Risher sees Lyft as a service company above all, but AI makes everything weird.

The California governor, who is already angling for a presidential run, has a stack of AI regulation bills he can veto before October 12th. Newsom has a slew of tech donors — and may want more tech money for a presidential run. OpenAI is also staffed up with Newsom-affiliated operators. So will Newsom sign the bills? [Blood in the Machine]

The US Department of Justice filed a lawsuit alleging that the ridehail company “routinely refuse to serve individuals with disabilities,” including people with service animals and stowable wheelchairs. Uber settled a previous lawsuit with the Biden administration over a similar issue, but clearly this is an ongoing problem with ridesharing. [TechCrunch]

Takeout on demand first expanded to groceries, and then to other retail, but the branding hasn’t always kept up. With Uber Eats now delivering from Best Buy, that opens the door to some pretty strange dinner orders. jackcousteau: Not confusing at all. I’m going to grubhub my next network appliance.

That’s how frequently Uber received a report of sexual assault or sexual misconduct in the US between 2017 and 2022, on average, according to a new investigation by The New York Times. That amounts to a staggering total of 400,181 Uber trips that involved reports of assault or misconduct. Uber’s official number of “serious sexual assault and misconduct” reports over that period is only 12,522; the company estimates that 75 percent of those 400,000 cases involve “less serious” incidents of harassing comments or flirting. Still, Uber says it’s working on the problem, but anonymous employees say the company is ignoring promising solutions. [nytimes.com]

After a ride, you’ll be able to mark a driver as a favorite, and Lyft will prioritize matching you with them when possible.

Uber and one of the ridehail company’s many robotaxi partners, Wayve, announced today that they will begin testing Level 4 autonomous vehicles in London on public roads as soon as 2026. The timing coincides with the UK Secretary of State for Transport’s announcement of “an accelerated framework for self-driving commercial pilots,” following the Automated Vehicles Act becoming law last month. Trials have been underway for a while, but always with a safety driver in the front seat. Now the companies can remove the driver from the vehicle, but in doing so they will accept full liability if the vehicle crashes.
======================================== |
[SOURCE: https://en.wikipedia.org/wiki/PlayStation_(console)#cite_ref-FOOTNOTEKent2001588–589_145-0] | [TOKENS: 10728] |
PlayStation (console)

The PlayStation[a] (codenamed PSX, abbreviated as PS, and retroactively PS1 or PS one) is a home video game console developed and marketed by Sony Computer Entertainment. It was released in Japan on 3 December 1994, followed by North America on 9 September 1995, Europe on 29 September 1995, and other regions thereafter. As a fifth-generation console, the PlayStation primarily competed with the Nintendo 64 and the Sega Saturn.

Sony began developing the PlayStation after a failed venture with Nintendo to create a CD-ROM peripheral for the Super Nintendo Entertainment System in the early 1990s. The console was primarily designed by Ken Kutaragi and Sony Computer Entertainment in Japan, while additional development was outsourced in the United Kingdom. An emphasis on 3D polygon graphics was placed at the forefront of the console's design. PlayStation game production was designed to be streamlined and inclusive, enticing the support of many third-party developers. The console proved popular for its extensive game library, popular franchises, low retail price, and aggressive youth marketing which advertised it as the preferable console for adolescents and adults. Critically acclaimed games that defined the console include Gran Turismo, Crash Bandicoot, Spyro the Dragon, Tomb Raider, Resident Evil, Metal Gear Solid, Tekken 3, and Final Fantasy VII.

Sony ceased production of the PlayStation on 23 March 2006—over eleven years after it had been released, and in the same year that the PlayStation 3 debuted. More than 4,000 PlayStation games were released, with cumulative software sales of 962 million units. The PlayStation signaled Sony's rise to power in the video game industry. It received acclaim and sold strongly; in less than a decade, it became the first computer entertainment platform to ship over 100 million units. Its use of compact discs heralded the game industry's transition from cartridges.
The PlayStation's success led to a line of successors, beginning with the PlayStation 2 in 2000. In the same year, Sony released a smaller and cheaper model, the PS one.

History

The PlayStation was conceived by Ken Kutaragi, a Sony executive who managed a hardware engineering division and was later dubbed "the Father of the PlayStation". Kutaragi's interest in working with video games stemmed from seeing his daughter play games on Nintendo's Famicom. Kutaragi convinced Nintendo to use his SPC-700 sound processor in the Super Nintendo Entertainment System (SNES) through a demonstration of the processor's capabilities. His willingness to work with Nintendo was derived from both his admiration of the Famicom and conviction in video game consoles becoming the main home-use entertainment systems. Although Kutaragi was nearly fired because he worked with Nintendo without Sony's knowledge, president Norio Ohga recognised the potential in Kutaragi's chip and decided to keep him as a protégé.

The inception of the PlayStation dates back to a 1988 joint venture between Nintendo and Sony. Nintendo had produced floppy disk technology to complement cartridges in the form of the Family Computer Disk System, and wanted to continue this complementary storage strategy for the SNES. Since Sony was already contracted to produce the SPC-700 sound processor for the SNES, Nintendo contracted Sony to develop a CD-ROM add-on, tentatively titled the "Play Station" or "SNES-CD". The PlayStation name had already been trademarked by Yamaha, but Nobuyuki Idei liked it so much that he agreed to acquire it for an undisclosed sum rather than search for an alternative. Sony was keen to obtain a foothold in the rapidly expanding video game market. Having been the primary manufacturer of the MSX home computer format, Sony had wanted to use their experience in consumer electronics to produce their own video game hardware.
Although the initial agreement between Nintendo and Sony was about producing a CD-ROM drive add-on, Sony had also planned to develop a SNES-compatible Sony-branded console. This iteration was intended to be more of a home entertainment system, playing both SNES cartridges and a new CD format named the "Super Disc", which Sony would design. Under the agreement, Sony would retain sole international rights to every Super Disc game, giving them a large degree of control despite Nintendo's leading position in the video game market. Furthermore, Sony would also be the sole beneficiary of licensing related to music and film software, which it had been aggressively pursuing as a secondary application. The Play Station was to be announced at the 1991 Consumer Electronics Show (CES) in Las Vegas. However, Nintendo president Hiroshi Yamauchi was wary of Sony's increasing leverage at this point and deemed the original 1988 contract unacceptable upon realising it essentially handed Sony control over all games written on the SNES CD-ROM format. Although Nintendo was dominant in the video game market, Sony possessed a superior research and development department. Wanting to protect Nintendo's existing licensing structure, Yamauchi cancelled all plans for the joint Nintendo–Sony SNES CD attachment without telling Sony. He sent Nintendo of America president Minoru Arakawa (his son-in-law) and chairman Howard Lincoln to Amsterdam to form a more favourable contract with Dutch conglomerate Philips, Sony's rival. This contract would give Nintendo total control over their licences on all Philips-produced machines. Kutaragi and Nobuyuki Idei, Sony's director of public relations at the time, learned of Nintendo's actions two days before the CES was due to begin. Kutaragi telephoned numerous contacts, including Philips, to no avail. On the first day of the CES, Sony announced their partnership with Nintendo and their new console, the Play Station.
At 9 am on the next day, in what has been called "the greatest ever betrayal" in the industry, Howard Lincoln stepped onto the stage and revealed that Nintendo was now allied with Philips and would abandon their work with Sony. Incensed by Nintendo's renouncement, Ohga and Kutaragi decided that Sony would develop their own console. Nintendo's contract-breaking was met with consternation in the Japanese business community, as they had broken an "unwritten law" of native companies not turning against each other in favour of foreign ones. Sony's American branch considered allying with Sega to produce a CD-ROM-based machine called the Sega Multimedia Entertainment System, but the Sega board of directors in Tokyo vetoed the idea when Sega of America CEO Tom Kalinske presented them the proposal. Kalinske recalled them saying: "That's a stupid idea, Sony doesn't know how to make hardware. They don't know how to make software either. Why would we want to do this?" Sony halted that research, but decided to turn the work it had done with Nintendo and Sega into a console of its own, based on the SNES. Despite the tumultuous events at the 1991 CES, negotiations between Nintendo and Sony were still ongoing. A deal was proposed: the Play Station would still have a port for SNES games, on the condition that it would still use Kutaragi's audio chip and that Nintendo would own the rights and receive the bulk of the profits. Roughly two hundred prototype machines were created, and some software entered development. Many within Sony were still opposed to their involvement in the video game industry, with some resenting Kutaragi for jeopardising the company. Kutaragi remained adamant that Sony not retreat from the growing industry and that a deal with Nintendo would never work. Knowing that they had to take decisive action, Sony severed all ties with Nintendo on 4 May 1992.
To determine the fate of the PlayStation project, Ohga chaired a meeting in June 1992, consisting of Kutaragi and several senior Sony board members. Kutaragi unveiled a proprietary CD-ROM-based system he had been secretly working on which played games with immersive 3D graphics. Kutaragi was confident that his LSI chip could accommodate one million logic gates, which exceeded the capabilities of Sony's semiconductor division at the time. Although the proposal won Ohga's enthusiasm, a majority of those present at the meeting remained opposed, as did older Sony executives, who saw Nintendo and Sega as "toy" manufacturers. The opponents felt the game industry was too culturally offbeat and asserted that Sony should remain a central player in the audiovisual industry, where companies were familiar with one another and could conduct "civili[s]ed" business negotiations. After Kutaragi reminded him of the humiliation he suffered from Nintendo, Ohga retained the project and became one of Kutaragi's most staunch supporters. Ohga shifted Kutaragi and nine of his team from Sony's main headquarters to Sony Music Entertainment Japan (SMEJ), a subsidiary of the main Sony group, so as to retain the project and maintain relationships with Philips for the MMCD development project. The involvement of SMEJ proved crucial to the PlayStation's early development as the process of manufacturing games on CD-ROM format was similar to that used for audio CDs, with which Sony's music division had considerable experience. While at SMEJ, Kutaragi worked with Epic/Sony Records founder Shigeo Maruyama and Akira Sato; both later became vice-presidents of the division that ran the PlayStation business. Sony Computer Entertainment (SCE) was jointly established by Sony and SMEJ to handle the company's ventures into the video game industry. On 27 October 1993, Sony publicly announced that it was entering the game console market with the PlayStation.
According to Maruyama, there was uncertainty over whether the console should primarily focus on 2D, sprite-based graphics or 3D polygon graphics. After Sony witnessed the success of Sega's Virtua Fighter (1993) in Japanese arcades, the direction of the PlayStation became "instantly clear" and 3D polygon graphics became the console's primary focus. SCE president Teruhisa Tokunaka expressed gratitude for Sega's timely release of Virtua Fighter as it proved "just at the right time" that making games with 3D imagery was possible. Maruyama claimed that Sony further wanted to emphasise the new console's ability to utilise redbook audio from the CD-ROM format in its games alongside high quality visuals and gameplay. Wishing to distance the project from the failed enterprise with Nintendo, Sony initially branded the PlayStation the "PlayStation X" (PSX). Sony formed their European division and North American division, known as Sony Computer Entertainment Europe (SCEE) and Sony Computer Entertainment America (SCEA), in January and May 1995. The divisions planned to market the new console under the alternative branding "PSX" following the negative feedback regarding "PlayStation" in focus group studies. Early advertising prior to the console's launch in North America referenced PSX, but the term was scrapped before launch. The console was not marketed with Sony's name in contrast to Nintendo's consoles. According to Phil Harrison, much of Sony's upper management feared that the Sony brand would be tarnished if associated with the console, which they considered a "toy". Since Sony had no experience in game development, it had to rely on the support of third-party game developers. This was in contrast to Sega and Nintendo, which had versatile and well-equipped in-house software divisions for their arcade games and could easily port successful games to their home consoles. 
Recent consoles like the Atari Jaguar and 3DO suffered low sales due to a lack of developer support, prompting Sony to redouble their efforts in gaining the endorsement of arcade-savvy developers. A team from Epic Sony visited more than a hundred companies throughout Japan in May 1993 in hopes of attracting game creators with the PlayStation's technological appeal. Sony found that many disliked Nintendo's practices, such as favouring their own games over others. Through a series of negotiations, Sony acquired initial support from Namco, Konami, and Williams Entertainment, as well as 250 other development teams in Japan alone. Namco in particular was interested in developing for PlayStation since it rivalled Sega in the arcade market. Securing these companies brought influential games such as Ridge Racer (1993) and Mortal Kombat 3 (1995). Ridge Racer was one of the most popular arcade games at the time, and by December 1993 it had already been confirmed behind closed doors as the PlayStation's first game, despite Namco being a longstanding Nintendo developer. Namco's research managing director Shigeichi Nakamura met with Kutaragi in 1993 to discuss the preliminary PlayStation specifications, with Namco subsequently basing the Namco System 11 arcade board on PlayStation hardware and developing Tekken to compete with Virtua Fighter. The System 11 launched in arcades several months before the PlayStation's release, with the arcade release of Tekken in September 1994. Despite securing the support of various Japanese studios, Sony had no developers of their own by the time the PlayStation was in development. This changed in 1993 when Sony acquired the Liverpudlian company Psygnosis (later renamed SCE Liverpool) for US$48 million, securing their first in-house development team. The acquisition meant that Sony could have more launch games ready for the PlayStation's release in Europe and North America.
Ian Hetherington, Psygnosis' co-founder, was disappointed after receiving early builds of the PlayStation and recalled that the console "was not fit for purpose" until his team got involved with it. Hetherington frequently clashed with Sony executives over broader ideas; at one point it was suggested that a television with a built-in PlayStation be produced. In the months leading up to the PlayStation's launch, Psygnosis had around 500 full-time staff working on games and assisting with software development. The purchase of Psygnosis marked another turning point for the PlayStation, as the studio played a vital role in creating the console's development kits. While Sony had provided MIPS R4000-based Sony NEWS workstations for PlayStation development, Psygnosis employees disliked the thought of developing on these expensive workstations and asked Bristol-based SN Systems to create an alternative PC-based development system. Andy Beveridge and Martin Day, owners of SN Systems, had previously supplied development hardware for other platforms such as the Mega Drive, Atari ST, and the SNES. When Psygnosis arranged an audience for SN Systems with Sony's Japanese executives at the January 1994 CES in Las Vegas, Beveridge and Day presented their prototype of the condensed development kit, which could run on an ordinary personal computer with two extension boards. Impressed, Sony decided to abandon their plans for a workstation-based development system in favour of SN Systems', securing a cheaper and more efficient method of designing software. An order of over 600 systems followed, and SN Systems supplied Sony with additional software such as an assembler, linker, and debugger. SN Systems produced development kits for future PlayStation systems, including the PlayStation 2, and was acquired by Sony in 2005. Sony strove to make game production as streamlined and inclusive as possible, in contrast to the relatively isolated approach of Sega and Nintendo.
Phil Harrison, representative director of SCEE, believed that Sony's emphasis on developer assistance reduced the most time-consuming aspects of development. As well as providing programming libraries, SCE headquarters in London, California, and Tokyo housed technical support teams that could work closely with third-party developers if needed. Unlike Nintendo, Sony did not favour their own products over those of other developers; Peter Molyneux of Bullfrog Productions admired Sony's open-handed approach to software developers and lauded their decision to use PCs as a development platform, remarking that "[it was] like being released from jail in terms of the freedom you have". Another strategy that helped attract software developers was the PlayStation's use of the CD-ROM format instead of traditional cartridges. Nintendo cartridges were expensive to manufacture, and the company controlled all production, prioritising their own games, while inexpensive compact disc manufacturing occurred at dozens of locations around the world. The PlayStation's architecture and interoperability with PCs were beneficial to many software developers. The use of the programming language C proved useful, as it safeguarded the future compatibility of the machine should Sony decide to make further hardware revisions. Despite this inherent flexibility, some developers found themselves restricted by the console's lack of RAM. While working on beta builds of the PlayStation, Molyneux observed that its MIPS processor was not "quite as bullish" as that of a fast PC, and said that it took his team two weeks to port their PC code to the PlayStation development kits and another fortnight to achieve a four-fold speed increase. An engineer from Ocean Software, one of Europe's largest game developers at the time, thought that allocating RAM was a challenging aspect given the 3.5 megabyte restriction.
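The 3.5 megabyte figure mentioned above is simply the sum of the console's separate memory pools. As a quick illustrative calculation: the 2 MB main RAM and 1 MB video RAM figures appear later in the Hardware section, while the 512 KB of dedicated sound RAM is a commonly cited figure assumed here, as this article does not state it.

```python
# Illustrative arithmetic only: where the "3.5 megabyte" figure comes from.
MAIN_RAM_KB = 2 * 1024    # 2 MB main RAM (stated in the Hardware section)
VIDEO_RAM_KB = 1 * 1024   # 1 MB video RAM (stated in the Hardware section)
SOUND_RAM_KB = 512        # 512 KB sound RAM (assumed; not stated in this article)

total_mb = (MAIN_RAM_KB + VIDEO_RAM_KB + SOUND_RAM_KB) / 1024
print(total_mb)  # 3.5
```

Developers had to budget code, game data, framebuffers, and audio samples across these fixed pools, which is why the Ocean engineer singled out RAM allocation as a challenge.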
Kutaragi said that while it would have been easy to double the amount of RAM in the PlayStation, the development team refrained from doing so to keep the retail cost down. Kutaragi saw the biggest challenge in developing the system as balancing the conflicting goals of high performance, low cost, and ease of programming, and felt he and his team were successful in this regard. The console's technical specifications were finalised in 1993 and its design during 1994. The PlayStation name and its final design were confirmed at a press conference on 10 May 1994, although the price and release dates had not yet been disclosed. Sony released the PlayStation in Japan on 3 December 1994, a week after the release of the Sega Saturn, at a price of ¥39,800. The Japanese launch was a "stunning" success, with long queues forming outside shops. Ohga later recalled that he realised how important the PlayStation had become for Sony when friends and relatives begged him for consoles for their children. The PlayStation sold 100,000 units on its first day and two million units within six months, although the Saturn outsold the PlayStation in the first few weeks due to the success of Virtua Fighter. By the end of 1994, 300,000 PlayStation units had been sold in Japan compared to 500,000 Saturn units. A grey market emerged for PlayStations shipped from Japan to North America and Europe, with buyers paying up to £700 per console. One retail account recalled: "When September 1995 arrived and Sony's Playstation roared out of the gate, things immediately felt different than [sic] they did with the Saturn launch earlier that year. Sega dropped the Saturn $100 to match the Playstation's $299 debut price, but sales weren't even close—Playstations flew out the door as fast as we could get them in stock." Before the North American release, Sega and Sony presented their consoles at the first Electronic Entertainment Expo (E3) in Los Angeles on 11 May 1995.
At their keynote presentation, Sega of America CEO Tom Kalinske revealed that the Saturn would be released immediately to select retailers at a price of $399. Next came Sony's turn: Olaf Olafsson, the head of SCEA, summoned Steve Race, the head of development, to the conference stage, who simply said "$299" and left the stage to a round of applause. Attention on the Sony conference was further bolstered by the surprise appearance of Michael Jackson and the showcase of highly anticipated games, including Wipeout (1995), Ridge Racer and Tekken (1994). In addition, Sony announced that no games would be bundled with the console. Although the Saturn had been released early in the United States to gain an advantage over the PlayStation, the surprise launch upset many retailers who were not informed in time, harming sales. Some retailers, such as KB Toys, responded by dropping the Saturn entirely. The PlayStation went on sale in North America on 9 September 1995. It sold more units within two days than the Saturn had in five months, with almost all of the initial shipment of 100,000 units sold in advance and shops across the country running out of consoles and accessories. The well-received Ridge Racer, which some critics considered superior to Sega's arcade counterpart Daytona USA (1994), contributed to the PlayStation's early success, as did Battle Arena Toshinden (1995). There were over 100,000 pre-orders placed and 17 games available on the market by the time of the PlayStation's American launch, compared with the Saturn's six launch games. The PlayStation was released in Europe on 29 September 1995 and in Australia on 15 November 1995. By November it had already outsold the Saturn by three to one in the United Kingdom, where Sony had allocated a £20 million marketing budget for the Christmas season compared to Sega's £4 million.
Sony found early success in the United Kingdom by securing listings with independent shop owners as well as prominent High Street chains such as Comet and Argos. Within its first year, the PlayStation secured over 20% of the entire American video game market. From September to the end of 1995, sales in the United States amounted to 800,000 units, giving the PlayStation a commanding lead over the other fifth-generation consoles,[b] though the SNES and Mega Drive from the fourth generation still outsold it. Sony reported that the attach rate of games to consoles sold was four to one. To meet increasing demand, Sony chartered jumbo jets and ramped up production in Europe and North America. By early 1996, the PlayStation had grossed $2 billion (equivalent to $4.106 billion in 2025) from worldwide hardware and software sales. By late 1996, sales in Europe totalled 2.2 million units, including 700,000 in the UK. Approximately 400 PlayStation games were in development, compared to around 200 for the Saturn and 60 for the Nintendo 64. In India, the PlayStation was launched in a test market during 1999–2000 through Sony showrooms, selling 100 units; Sony launched the console countrywide (in its PS One form) on 24 January 2002 at a price of Rs 7,990, with 26 games available at launch. The PlayStation also performed well in markets where it was never officially released. In Brazil, a trademark registration held by a third company prevented an official release, so the officially distributed Sega Saturn initially dominated the market; as Sega withdrew the Saturn, however, PlayStation imports and widespread piracy increased. In China, the most popular 32-bit console had likewise been the Sega Saturn, but after it left the market the PlayStation grew to a base of 300,000 users by January 2000, even though Sony China had no plans to release it.
The PlayStation was backed by a successful marketing campaign, allowing Sony to gain an early foothold in Europe and North America. Initially, PlayStation demographics skewed towards adults, but the audience broadened after the first price drop. While the Saturn was positioned towards 18- to 34-year-olds, the PlayStation was initially marketed exclusively towards teenagers. Executives from both Sony and Sega reasoned that because younger players typically looked up to older, more experienced players, advertising targeted at teens and adults would draw them in too. Additionally, Sony found that adults reacted best to advertising aimed at teenagers; Lee Clow surmised that people entering adulthood regressed and became "17 again" when they played video games. The console was marketed with advertising slogans in which the geometric symbols of the controller's four buttons stood in for letters, rendered in plain text as "LIVE IN YUR WRLD. PLY IN URS" ("Live in Your World. Play in Ours.") and "U R NOT E" (with a red "E", read as "you are not ready"). Clow thought that by invoking such provocative statements, gamers would respond to the contrary and say "'Bullshit. Let me show you how ready I am.'" As the console's appeal broadened, Sony's marketing efforts expanded from their earlier focus on mature players to specifically target younger children as well. Shortly after the PlayStation's release in Europe, Sony tasked marketing manager Geoff Glendenning with assessing the desires of a new target audience. Sceptical of Nintendo's and Sega's reliance on television campaigns, Glendenning theorised that young adults transitioning from fourth-generation consoles would feel neglected by marketing directed at children and teenagers. Recognising the influence early-1990s underground clubbing and rave culture had on young people, especially in the United Kingdom, Glendenning felt that the culture had become mainstream enough to help cultivate the PlayStation's emerging identity.
Sony partnered with prominent nightclub owners such as Ministry of Sound, and with festival promoters, to organise dedicated PlayStation areas where select games could be demonstrated. The Sheffield-based graphic design studio The Designers Republic was contracted by Sony to produce promotional materials aimed at a fashionable, club-going audience. Psygnosis' Wipeout in particular became associated with nightclub culture, as it was widely featured in venues. By 1997, there were 52 nightclubs in the United Kingdom with dedicated PlayStation rooms. Glendenning recalled that he had discreetly used at least £100,000 a year in slush-fund money to invest in impromptu marketing. In 1996, Sony expanded their CD production facilities in the United States due to the high demand for PlayStation games, increasing monthly output from 4 million to 6.5 million discs. This was necessary because PlayStation sales were running at twice the rate of Saturn sales, and its lead increased dramatically when both consoles dropped in price to $199 that year. The PlayStation also outsold the Saturn at a similar ratio in Europe during 1996, with 2.2 million consoles sold in the region by the end of the year. Sales of PlayStation hardware and software only increased following the launch of the Nintendo 64. Tokunaka speculated that the Nintendo 64 launch had actually helped PlayStation sales by raising public awareness of the gaming market through Nintendo's added marketing efforts. Despite this, the PlayStation took longer to achieve dominance in Japan. Tokunaka said that, even after the PlayStation and Saturn had been on the market for nearly two years, the competition between them remained "very close", with neither console leading in sales for any meaningful length of time. By 1998, Sega, spurred by their declining market share and significant financial losses, launched the Dreamcast as a last-ditch attempt to stay in the industry.
Although its launch was successful, the technically superior 128-bit console was unable to overcome Sony's dominance of the industry; Sony still held 60% of the overall video game market share in North America at the end of 1999. Sega's initial confidence in their new console was undermined when Japanese sales were lower than expected, with disgruntled Japanese consumers reportedly returning their Dreamcasts in exchange for PlayStation software. On 2 March 1999, Sony officially revealed details of the PlayStation 2, which Kutaragi announced would feature a graphics processor designed to push more raw polygons than any console in history, rivalling most supercomputers. The PlayStation continued to sell strongly at the turn of the millennium: in 2000, Sony released the PS One, a smaller, redesigned variant which went on to outsell all other consoles that year, including the PlayStation 2. In 2005, the PlayStation became the first console to ship 100 million units, with the PlayStation 2 later achieving this milestone faster than its predecessor. The combined successes of both PlayStation consoles led to Sega retiring the Dreamcast in 2001 and abandoning the console business entirely. The PlayStation was eventually discontinued on 23 March 2006, over eleven years after its release and less than a year before the debut of the PlayStation 3.

Hardware

The main microprocessor is a 32-bit R3000 CPU made by LSI Logic, operating at a clock rate of 33.8688 MHz and delivering 30 MIPS. The CPU relies heavily on the "cop2" 3D and matrix-maths coprocessor on the same die to provide the speed needed to render complex 3D graphics. The role of the separate GPU chip is to draw 2D polygons and apply shading and textures to them: the rasterisation stage of the graphics pipeline. Sony's custom 16-bit sound chip supports ADPCM sources with up to 24 sound channels, a sampling rate of up to 44.1 kHz, and music sequencing.
It features 2 MB of main RAM, with an additional 1 MB of video RAM. The PlayStation has a maximum colour depth of 16.7 million true colours, with 32 levels of transparency and unlimited colour look-up tables. The PlayStation can output composite, S-Video or RGB video signals through its AV Multi connector (older models also have RCA connectors for composite), displaying resolutions from 256×224 to 640×480 pixels; different games can use different resolutions. Earlier models also had proprietary parallel and serial ports that could be used to connect accessories or multiple consoles together; these were later removed due to lack of use. The PlayStation uses a proprietary video compression unit, the MDEC, which is integrated into the CPU and allows the presentation of full-motion video at a higher quality than other consoles of its generation. Unusually for the time, the PlayStation lacks a dedicated 2D graphics processor; 2D elements are instead calculated as polygons by the Geometry Transfer Engine (GTE) so that they can be processed and displayed on screen by the GPU. The GPU can generate a total of 4,000 sprites and 180,000 textured polygons per second, in addition to 360,000 flat-shaded polygons per second. The PlayStation went through a number of variants during its production run. Externally, the most notable change was the gradual reduction in the number of external connectors on the rear of the unit. This started with the original Japanese launch units; the SCPH-1000, released on 3 December 1994, was the only model with an S-Video port, which was removed from the next model. Subsequent models saw a reduction in the number of parallel ports, with the final version retaining only one serial port. Sony marketed a development kit for amateur developers known as the Net Yaroze (meaning "Let's do it together" in Japanese). It was launched in June 1996 in Japan and, following public interest, was released the next year in other countries.
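The 1 MB of video RAM quoted above explains why the lower end of the resolution range was the practical norm. A rough framebuffer-budget sketch (the 16-bit colour depth and double buffering are simplifying assumptions for illustration; real games mixed modes and also needed VRAM for textures):

```python
# Rough VRAM budget sketch (assumptions: 16-bit colour, double buffering).
VRAM_BYTES = 1024 * 1024          # 1 MB of video RAM
BYTES_PER_PIXEL = 2               # 16 bits per pixel (assumed mode)

def framebuffer_bytes(width, height, buffers=2):
    """Total bytes consumed by `buffers` frame buffers at this resolution."""
    return width * height * BYTES_PER_PIXEL * buffers

high = framebuffer_bytes(640, 480)   # 1,228,800 bytes: exceeds VRAM outright
low = framebuffer_bytes(320, 240)    # 307,200 bytes: under a third of VRAM
print(high > VRAM_BYTES, low < VRAM_BYTES)  # True True
```

At 320×240, more than two-thirds of video RAM remains free for textures and look-up tables, whereas a double-buffered 640×480 display alone would not fit.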
The Net Yaroze allowed hobbyists to create their own games and upload them via an online forum run by Sony. The console was only available through an ordering service, and came with the documentation and software needed to program PlayStation games and applications, including a C compiler. On 7 July 2000, Sony released the PS One (stylised as "PS one" or "PSone"), a smaller, redesigned version of the original PlayStation. It was the highest-selling console through the end of the year, outselling all other consoles, including the PlayStation 2. In 2002, Sony released a 5-inch (130 mm) LCD screen add-on for the PS One, referred to as the "Combo pack", which also included a car cigarette-lighter adaptor for an extra layer of portability. Production of the LCD "Combo pack" ceased in 2004, as the popularity of the PlayStation began to wane in markets outside Japan. A total of 28.15 million PS One units had been sold by the time it was discontinued in March 2006. Three iterations of the PlayStation's controller were released over the console's lifespan. The first, the PlayStation controller, was released alongside the console in December 1994. It features four individual directional buttons (as opposed to a conventional D-pad), a pair of shoulder buttons on each side, Start and Select buttons in the centre, and four face buttons consisting of simple geometric shapes: a green triangle, red circle, blue cross, and pink square. Rather than depicting the traditionally used letters or numbers on its buttons, the PlayStation controller established a trademark set of symbols that would be incorporated heavily into the PlayStation brand. Teiyu Goto, the designer of the original PlayStation controller, said that the circle and cross represent "yes" and "no", respectively (though this layout is reversed in Western versions); the triangle symbolises a point of view, and the square is equated to a sheet of paper, used to access menus.
The European and North American models of the original PlayStation controller are roughly 10% larger than the Japanese variant, to account for the fact that the average person in those regions has larger hands than the average Japanese person. Sony's first analogue gamepad, the PlayStation Analog Joystick (often erroneously referred to as the "Sony Flightstick"), was first released in Japan in April 1996. Featuring two parallel joysticks, it uses potentiometer technology previously employed on consoles such as the Vectrex; instead of relying on binary eight-way switches, the controller detects minute angular changes through the entire range of motion. The right joystick also features a thumb-operated digital hat switch, corresponding to the traditional D-pad and used when simple digital movements were necessary. The Analog Joystick sold poorly in Japan due to its high cost and cumbersome size. The increasing popularity of 3D games prompted Sony to add analogue sticks to its controller design, giving users more freedom of movement in virtual 3D environments. The first official analogue controller, the Dual Analog Controller, was revealed to the public in a small glass booth at the 1996 PlayStation Expo in Japan, and released in April 1997 to coincide with the Japanese releases of the analogue-capable games Tobal 2 and Bushido Blade. In addition to the two analogue sticks (which introduced two new buttons mapped to clicking the sticks in), the Dual Analog Controller features an "Analog" button and LED beneath the "Start" and "Select" buttons that toggles analogue functionality on or off. The controller also features rumble support, though Sony decided to remove haptic feedback from all overseas iterations before the United States release.
A Sony spokesman stated that the feature was removed for "manufacturing reasons", although rumours circulated that Nintendo had attempted to legally block the release of the controller outside Japan due to similarities with the Nintendo 64 controller's Rumble Pak; a Nintendo spokesman denied that Nintendo took legal action. Next Generation's Chris Charla theorised that Sony dropped vibration feedback to keep the price of the controller down. In November 1997, Sony introduced the DualShock controller, whose name derives from its use of two (dual) vibration motors (shock). Unlike its predecessor, it features textured rubber grips on the analogue sticks, longer handles, slightly different shoulder buttons, and rumble feedback as standard on all versions. The DualShock later replaced its predecessors as the default controller. Sony released a series of peripherals to add extra layers of functionality to the PlayStation. These include memory cards, the PlayStation Mouse, the PlayStation Link Cable, the Multiplayer Adapter (a four-player multitap), the Memory Drive (a disk drive for 3.5-inch floppy disks), the GunCon (a light gun), and the Glasstron (a monoscopic head-mounted display). Released exclusively in Japan, the PocketStation is a memory-card peripheral which acts as a miniature personal digital assistant. The device features a monochrome liquid crystal display (LCD), infrared communication, a real-time clock, built-in flash memory, and sound capability. Sharing similarities with the Dreamcast's VMU peripheral, the PocketStation was typically distributed with certain PlayStation games, enhancing them with added features. It proved popular in Japan, selling over five million units. Sony planned to release the peripheral outside Japan, but the release was cancelled despite promotion in Europe and North America. In addition to playing games, most PlayStation models can play audio CDs.
The Asian model SCPH-5903 can also play Video CDs. Like most CD players, the PlayStation can play songs in a programmed order, shuffle the playback order of the disc, and repeat one song or the entire disc. Later PlayStation models include a music visualisation function called SoundScope. This function, as well as a memory-card manager, is accessed by starting the console without inserting a game or closing the CD tray, which brings up a graphical user interface (GUI) for the PlayStation BIOS. The GUI differs between the PlayStation and PS One depending on the firmware version: the original PlayStation GUI had a dark blue background with rainbow graffiti used as buttons, while the early PAL PlayStation and PS One GUI had a grey blocked background with two icons in the middle. PlayStation emulation is versatile and runs on numerous modern devices. Bleem! was a commercial emulator released for IBM-compatible PCs and the Dreamcast in 1999. It was notable for being aggressively marketed during the PlayStation's lifetime, and was the centre of multiple controversial lawsuits filed by Sony. Bleem! was programmed in assembly language, which allowed it to emulate PlayStation games with improved visual fidelity, enhanced resolutions, and filtered textures that were not possible on original hardware. Sony sued Bleem! two days after its release, citing copyright infringement and accusing the company of engaging in unfair competition and patent infringement by allowing use of PlayStation BIOSes on a Sega console. Bleem! was subsequently forced to shut down in November 2001. Sony was aware that using CDs for game distribution could leave games vulnerable to piracy, due to the growing popularity of CD-R discs and optical drives with burning capability.
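The class of boot-time disc check that grew out of this concern can be pictured with a hypothetical sketch. The function names and structure here are invented for illustration, not Sony's actual boot code; the four-letter region strings SCEI, SCEA and SCEE are those commonly documented by the hobbyist community. The key idea is that a pressed disc carries a signature the drive can decode, a burned copy does not, and the same signature doubles as a regional lockout.

```python
# Hypothetical illustration of wobble-based boot authentication.
# Region strings SCEI/SCEA/SCEE are commonly documented by the modding
# community; everything else (names, structure) is invented for this sketch.
REGIONS = {"SCEI": "Japan", "SCEA": "North America", "SCEE": "Europe"}

def can_boot(decoded_wobble, console_region):
    """Return True only for a pressed disc whose region matches the console."""
    if decoded_wobble is None:   # burned copy: the wobble was not reproduced
        return False
    return REGIONS.get(decoded_wobble) == console_region

print(can_boot("SCEE", "Europe"))   # True: genuine PAL disc in a PAL console
print(can_boot(None, "Europe"))     # False: burned copy fails authentication
print(can_boot("SCEA", "Europe"))   # False: genuine disc, wrong region
```

Modchips defeated schemes of this kind by injecting a valid signature regardless of what the drive actually read, which is why the check alone could not stop piracy indefinitely.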
To preclude illegal copying, a proprietary process for PlayStation disc manufacturing was developed that, in conjunction with an augmented optical drive in Tiger H/E assembly, prevented burned copies of games from booting on an unmodified console. Specifically, all genuine PlayStation discs were pressed with a small section of deliberately irregular data, which the PlayStation's optical pick-up was capable of detecting and decoding. The console would not boot a game disc without a specific wobble frequency contained in the data of the disc's pregap sector (the same system was also used to encode regional lockouts). This signal was within Red Book CD tolerances, so the actual content of PlayStation discs could still be read by a conventional disc drive; however, a conventional drive could not detect the wobble frequency, and so duplicated discs omitted it, because the laser pick-up system of any optical disc drive interprets the wobble as an oscillation of the disc surface and compensates for it in the reading process. Early PlayStations, particularly early 1000-series models, can exhibit skipping full-motion video or physical "ticking" noises from the unit. The problems stem from poorly placed vents that lead to overheating in some environments, causing the plastic mouldings inside the console to warp slightly, with knock-on effects on the laser assembly. The solution is to sit the console on a surface that dissipates heat efficiently, in a well-ventilated area, or to raise the unit slightly from its resting surface. Sony representatives also recommended unplugging the PlayStation when not in use, as the system draws a small amount of power (and therefore generates heat) even when turned off. The first batch of PlayStations used a KSM-440AAM laser unit, whose case and movable parts are all built out of plastic. Over time, the plastic lens-sled rail wears out, usually unevenly, due to friction.
The placement of the laser unit close to the power supply accelerates wear, as the additional heat makes the plastic more vulnerable to friction. Eventually, one side of the lens sled becomes so worn that the laser tilts and no longer points directly at the CD; after this, games will no longer load due to data read errors. Sony fixed the problem by making the sled out of die-cast metal and placing the laser unit further from the power supply on later PlayStation models. Due to an engineering oversight, the PlayStation does not produce a proper signal on several older models of televisions, causing the display to flicker or bounce around the screen. Sony decided not to change the console design, since only a small percentage of PlayStation owners used such televisions, and instead gave consumers the option of sending their PlayStation unit to a Sony service centre to have an official modchip installed, allowing play on older televisions.

Game library

The PlayStation featured a diverse game library which grew to appeal to all types of players. Critically acclaimed PlayStation games included Final Fantasy VII (1997), Crash Bandicoot (1996), Spyro the Dragon (1998), and Metal Gear Solid (1998), all of which became established franchises. Final Fantasy VII is credited with allowing role-playing games to gain mass-market appeal outside Japan, and is considered one of the most influential and greatest video games ever made. The PlayStation's best-selling game is Gran Turismo (1997), which sold 10.85 million units. After the PlayStation's discontinuation in 2006, cumulative software shipments stood at 962 million units. Following the 1994 launch in Japan, early games included Ridge Racer, Crime Crackers, King's Field, Motor Toon Grand Prix, Toh Shin Den (i.e. Battle Arena Toshinden), and Kileak: The Blood. The first two games available at the later North American launch were Jumping Flash! (1995) and Ridge Racer, with Jumping Flash!
heralded as a forerunner of 3D graphics in console gaming. Wipeout, Air Combat, Twisted Metal, Warhawk and Destruction Derby were among the popular first-year games, and the first to be reissued as part of Sony's Greatest Hits or Platinum range. By the PlayStation's first Christmas season, Psygnosis had produced around 70% of its launch catalogue; their breakthrough racing game Wipeout was acclaimed for its techno soundtrack and helped raise awareness of Britain's underground music community. Eidos Interactive's action-adventure game Tomb Raider contributed substantially to the success of the console in 1996, with its protagonist Lara Croft becoming an early gaming icon and garnering unprecedented media promotion. Licensed tie-in games of popular films were also prevalent; Argonaut Games' 2001 adaptation of Harry Potter and the Philosopher's Stone went on to sell over eight million copies late in the console's lifespan. Third-party developers continued to commit to the console's wide-ranging game catalogue even after the launch of the PlayStation 2; notable exclusives of this era include Harry Potter and the Philosopher's Stone, Fear Effect 2: Retro Helix, Syphon Filter 3, C-12: Final Resistance, Dance Dance Revolution Konamix and Digimon World 3.[c] Sony assisted with game reprints as late as 2008 with Metal Gear Solid: The Essential Collection, the last PlayStation game officially released and licensed by Sony. Initially, in the United States, PlayStation games were packaged in long cardboard boxes, similar to non-Japanese 3DO and Saturn games. Sony later switched to the jewel-case format typically used for audio CDs and Japanese video games, as this format took up less retailer shelf space (which was at a premium due to the large number of PlayStation games being released), and focus testing showed that most consumers preferred it.

Reception

The PlayStation was mostly well received upon release.
Critics in the West generally welcomed the new console. The staff of Next Generation reviewed the PlayStation a few weeks after its North American launch, commenting that, while the CPU is "fairly average", the supplementary custom hardware, such as the GPU and sound processor, is stunningly powerful. They praised the PlayStation's focus on 3D, and complimented the comfort of its controller and the convenience of its memory cards. Giving the system 4½ out of 5 stars, they concluded, "To succeed in this extremely cut-throat market, you need a combination of great hardware, great games, and great marketing. Whether by skill, luck, or just deep pockets, Sony has scored three out of three in the first salvo of this war." Albert Kim of Entertainment Weekly praised the PlayStation as a technological marvel rivalling the offerings of Sega and Nintendo. Famicom Tsūshin scored the console 19 out of 40 in May 1995, lower than the Saturn's 24 out of 40. In a 1997 year-end review, a team of five Electronic Gaming Monthly editors gave the PlayStation scores of 9.5, 8.5, 9.0, 9.0, and 9.5, the highest score each of the five editors gave to any of the five consoles reviewed in the issue. They lauded the breadth and quality of the games library, saying it had vastly improved over previous years as developers mastered the system's capabilities and Sony revised their stance on 2D and role-playing games. They also complimented the low price of the games compared to the Nintendo 64's, and noted that it was the only console on the market that could be relied upon to deliver a solid stream of games in the coming year, primarily because third-party developers almost unanimously favoured it over its competitors.

Legacy

SCE was an upstart in the video game industry in late 1994, as the market in the early 1990s was dominated by Nintendo and Sega.
Nintendo had been the clear leader in the industry since the introduction of the Nintendo Entertainment System in 1985, and the Nintendo 64 was initially expected to maintain this position. The PlayStation's target audience included the generation which was the first to grow up with mainstream video games, along with 18- to 29-year-olds who were not the primary focus of Nintendo. By the late 1990s, Sony had become a highly regarded console brand due to the PlayStation, with a significant lead over second-place Nintendo, while Sega was relegated to a distant third. The PlayStation became the first "computer entertainment platform" to ship over 100 million units worldwide, with many critics attributing the console's success to third-party developers. It remains the sixth best-selling console of all time as of 2025, with a total of 102.49 million units sold. Around 7,900 individual games were published for the console during its 11-year life span, the second-most games ever produced for a console. Its success was a significant financial boon for Sony, as profits from its video game division contributed 23% of the company's operating income. Sony's next-generation PlayStation 2, which is backward compatible with the PlayStation's DualShock controller and games, was announced in 1999 and launched in 2000. The PlayStation's lead in installed base and developer support paved the way for the success of its successor, which overcame the earlier launch of Sega's Dreamcast and then fended off competition from Microsoft's newcomer Xbox and Nintendo's GameCube. The PlayStation 2's immense success and the failure of the Dreamcast were among the main factors which led Sega to abandon the console market. To date, five PlayStation home consoles have been released, which have continued the same numbering scheme, as well as two portable systems. The PlayStation 3 also maintained backward compatibility with original PlayStation discs.
Hundreds of PlayStation games have been digitally re-released on the PlayStation Portable, PlayStation 3, PlayStation Vita, PlayStation 4, and PlayStation 5. The PlayStation has often ranked among the best video game consoles. In 2018, Retro Gamer named it the third best console, crediting its sophisticated 3D capabilities as one of the key factors in its mass success, and lauding it as a "game-changer in every sense possible". In 2009, IGN ranked the PlayStation the seventh best console on their list, noting that its appeal to older audiences was a crucial factor in propelling the video game industry, as was its role in transitioning the game industry to the CD-ROM format. Keith Stuart from The Guardian likewise named it the seventh best console in 2020, declaring that its success was so profound it "ruled the 1990s". In January 2025, Lorentio Brodesco announced the nsOne project, an attempt to reverse engineer the PlayStation's motherboard. Brodesco stated that "detailed documentation on the original motherboard was either incomplete or entirely unavailable". The project was successfully crowdfunded via Kickstarter. In June, Brodesco manufactured the first working motherboard, promising a fully routed version with multilayer routing, as well as documentation and design files, in the near future. The success of the PlayStation contributed to the demise of cartridge-based home consoles. While not the first system to use an optical disc format, it was the first highly successful one, and it ended up going head-to-head with the proprietary cartridge-based Nintendo 64,[d] which the industry had expected to use CDs like the PlayStation. After the demise of the Sega Saturn, Nintendo was left as Sony's main competitor in Western markets.
Nintendo chose not to use CDs for the Nintendo 64; they were likely concerned with the proprietary cartridge format's ability to help enforce copy protection, given their substantial reliance on licensing and exclusive games for their revenue. Besides their larger capacity, CD-ROMs could be produced in bulk at a much faster rate than ROM cartridges: a week, compared to two to three months. Further, the cost of production per unit was far lower, allowing Sony to offer games at about 40% lower cost to the user than ROM cartridges while still making the same amount of net revenue. In Japan, Sony published fewer copies of a wide variety of games for the PlayStation as a risk-limiting step, a model that had been used by Sony Music for CD audio discs. The production flexibility of CD-ROMs meant that Sony could quickly produce larger volumes of popular games to get onto the market, something that could not be done with cartridges due to their manufacturing lead time. The lower production costs of CD-ROMs also allowed publishers an additional source of profit: budget-priced reissues of games which had already recouped their development costs. Tokunaka remarked in 1996: "Choosing CD-ROM is one of the most important decisions that we made. As I'm sure you understand, PlayStation could just as easily have worked with masked ROM [cartridges]. The 3D engine and everything—the whole PlayStation format—is independent of the media. But for various reasons (including the economies for the consumer, the ease of the manufacturing, inventory control for the trade, and also the software publishers) we deduced that CD-ROM would be the best media for PlayStation." The increasing complexity of developing games pushed cartridges to their storage limits and gradually discouraged some third-party developers. Part of the CD format's appeal to publishers was that discs could be produced at a significantly lower cost and offered more flexibility to meet production demand.
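The arithmetic behind that pricing claim can be illustrated with a small sketch. All dollar figures below are invented for illustration; the article only states that CD-based games sold for roughly 40% less than cartridge games while yielding the same net revenue per unit, which follows whenever the media-cost gap between the two formats is large relative to the rest of the retail price.

```python
# Hypothetical unit economics of a ROM cartridge vs. a pressed CD-ROM.
# Every figure here is an assumption chosen to illustrate the mechanism;
# none comes from the article.
CART_MEDIA_COST = 25.0   # assumed per-unit cost of manufacturing a cartridge
CD_MEDIA_COST = 1.0      # assumed per-unit cost of pressing a CD-ROM
OTHER_COSTS = 19.0       # assumed distribution/licensing/retail overhead
NET_REVENUE = 15.0       # assumed per-unit net revenue, held identical

# Retail price = media cost + overhead + the publisher's fixed net revenue.
cart_price = CART_MEDIA_COST + OTHER_COSTS + NET_REVENUE
cd_price = CD_MEDIA_COST + OTHER_COSTS + NET_REVENUE

# Because net revenue is identical, the entire media-cost saving is
# passed on to the buyer as a lower sticker price.
discount = 1 - cd_price / cart_price
print(f"Cartridge: ${cart_price:.2f}, CD: ${cd_price:.2f}, "
      f"discount: {discount:.0%} at identical net revenue")
```

With these assumed numbers the CD version retails about 40% cheaper even though the publisher earns the same amount per copy, matching the ratio the article describes.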
As a result, some third-party developers switched to the PlayStation, including Square and Enix, whose Final Fantasy VII and Dragon Quest VII respectively had been planned for the Nintendo 64 (the two companies later merged to form Square Enix). Other developers released fewer games for the Nintendo 64; Konami, for example, released only thirteen N64 games but over fifty for the PlayStation. Nintendo 64 game releases were less frequent than the PlayStation's, with many being developed by either Nintendo themselves or second parties such as Rare.

PlayStation Classic

The PlayStation Classic is a dedicated video game console made by Sony Interactive Entertainment that emulates PlayStation games. It was announced in September 2018 at the Tokyo Game Show and released on 3 December 2018, the 24th anniversary of the release of the original console. As a dedicated console, the PlayStation Classic features 20 pre-installed games, which run on the open-source emulator PCSX. The console is bundled with two replica wired PlayStation controllers (those without analogue sticks), an HDMI cable, and a USB Type-A cable. Internally, the console uses a MediaTek MT8167a Quad A35 system on a chip with four central processing cores clocked at 1.5 GHz and a PowerVR GE8300 graphics processing unit. It includes 16 GB of eMMC flash storage and 1 GB of DDR3 SDRAM. The PlayStation Classic is 45% smaller than the original console. It received negative reviews from critics and was compared unfavourably to Nintendo's rival Nintendo Entertainment System Classic Edition and Super Nintendo Entertainment System Classic Edition. Criticism was directed at its meagre game library, user interface, emulation quality, use of PAL versions for certain games, use of the original controller, and high retail price, though the console's design received praise. The console sold poorly.
======================================== |
[SOURCE: https://en.wikipedia.org/wiki/Kirad_al-Ghannama] | [TOKENS: 572] |
Kirad al-Ghannama

Kirad al-Ghannama was a Palestinian Arab village in the Safad Subdistrict. It was depopulated during the 1947–1948 Civil War in Mandatory Palestine on April 22, 1948, by the Palmach's First Battalion of Operation Yiftach. It was located 11 km northeast of Safad. Wadi Mushayrifa ran between the two Kirad villages (al-Ghannama and al-Baqqara), and Wadi Waqqas supplied the village with its water requirements. The village contained the following khirbas: Khirbat Nijmat al-Subh, Tall al-Qadah, and Tall al-Safa.

History

By the 1931 census, Arab Ghannameh had 265 Muslim inhabitants in a total of 54 houses. In the 1945 statistics, its population was 350 Muslims and the total land area was 3,975 dunams. Of this, 77 dunams were used for citrus and bananas, 20 dunams were irrigated or used for plantations, 3,451 for cereals, while 64 dunams were classified as urban land. After the 1948 Palestine war, according to the 1949 armistice agreements between Israel and Syria, it was determined that a string of villages, including Al-Nuqayb, Al-Hamma and Al-Samra in the Tiberias Subdistrict and Kirad al-Baqqara and Kirad al-Ghannama further north in the Safad Subdistrict, would be included in the demilitarized zone (DMZ) between Israel and Syria. The villagers and their property were formally protected by Article V of the Israeli-Syrian agreement of 20 July that year. However, Israel regarded the villagers as a potential security threat, and Israeli settlers and settlement agencies coveted the land. Israel therefore wanted the Palestinian inhabitants, a total of 2,200 villagers, moved to Syria. In the spring of 1951, Israel decided to assert its sovereignty over the DMZ, including "the transfer of Arab civilians from the area." On the night of 30 March they forcibly transferred all 800 inhabitants of Kirad al-Ghannama and Kirad al-Baqqara to Sha'ab. A United Nations decision allowed the villagers to return; however, Israel pressured them to remain in Sha'ab.
In spite of this, many of the villagers returned to their homes in the DMZ. In 1956 Israel expelled the inhabitants of the two Kirad villages again, and this time the sites were physically destroyed and ploughed over. Most of the villagers went to Syria; a few went back to Sha'ab.
======================================== |
[SOURCE: https://forum.webflow.com/guidelines] | [TOKENS: 1008] |
Community Guidelines

Welcome to the Webflow community forum. Connect with 75,000+ members who are contributing to Webflow's mission and building the future of visual development. This forum is a place for designers and creators to share knowledge and support one another as they create the most innovative and efficient web experiences out there. To help you navigate this large community and make the most of it, we've put together some guidelines to facilitate constructive conversations for all. Please make sure you take the time to read them!

The basics

We have a straightforward community code of conduct and ask you to respect these few, simple rules.

Community resources

Are you keen on meeting other designers? Do you want to continue interacting with the community outside of threads? Then make sure you bookmark this list: it contains all of the wonderful things that are happening in the Webflow world. Subscribe to become a Webflow Insider: want to hear more about what's going on in our community? Wait no longer: subscribe to the community newsletter and become a Webflow Insider for a chance to connect with members, projects, programs, and events; experience life at Webflow through our core behaviors; and, most importantly, get early access to announcements. Check out our community homepage now! Take your community participation further: join a Webflow Chapter. With groups all over the world, be sure to check whether there's a group meeting in your timezone (in-person or remotely). Check out the Webflow Community Facebook Group. If you want to connect with the community in a more social way, that's the place to be. Celebrate your wins in the Show and Tell category. If no one's told you today, you're doing amazing work. Brag about your wins and give props to others for their successes in our forum!
PS: We also keep a close eye on this category to pick our site of the month, which gets featured on our social media channels. Showcase your projects and discover other designers: you can add your favorite projects to our public Showcase page and get discovered, as well as find other designers you like. Inspire and get inspired! Learn about Webflow and web design: our University has all the courses and tutorials you need to learn web design in Webflow. Follow us: Twitter, Facebook, Instagram, LinkedIn, YouTube, Dribbble.

How to post and comment

New to this forum? No problem! Read how to post and comment below (replying to posts, liking and flagging, and navigating the forum), and we'll have you contributing to conversations in no time. Last but not least, our friendly robot @Discobot is here to help you get started. Simply type @discobot start new user anywhere and let it show you what you can do!

Moderation and moderators

Our forum provides tools that enable the community to collectively identify the best (and worst) contributions: favorites, bookmarks, likes, flags, replies, edits, and so forth. Use these tools to improve your own experience, and everyone else's, too. In addition to these features, we rely on a group of Webflow community forum moderators to keep this forum the best that it can be. Most of our moderators are community members, just like you, who are committed to growing and improving this community and making the design world a better place. They have moderator privileges in the forum and can be recognized by the "Community Leader" title and badge on their profile. They are responsible for this forum, but so are you. For members to have the best experience in this forum, everyone has a role to play. With your help, our community can grow and be a source of goodness and positivity for all. If you are interested in becoming a moderator in this forum, we would love to have you.
Once you reach trust level 3 (Community Specialist), you become eligible to be considered for the moderator role in this forum. In order to maintain the health and happiness of our community, moderators have the ability to remove any content and any user account for any reason at any time. Webflow does not preview every new post in the forum and does not take responsibility for any unsuitable content posted by community members (we will, however, do our best to remove said content in a timely manner).

What will get you flagged or banned

This is a public forum, and search engines index these discussions. Please keep the language, links, and images safe and appropriate for the global community. To report a code of conduct violation, you can direct message @WebflowCommunityTeam on this forum, send an email to events@webflow.com, or report an incident anonymously if you prefer.

Terms of Service

By using the Webflow Community Forum, you agree to abide by the Webflow Terms of Service and privacy policy. You may unsubscribe from forum or community emails at any time by altering your subscription settings.
======================================== |